
UHQBot

Forum Bot
  • Posts: 39,329
  • Joined
  • Last visited
  • Days Won: 25

Everything posted by UHQBot

  1. As regulators approach their deadline to approve or reject Microsoft’s $69 billion acquisition of Activision Blizzard, those for and against the mega merger are filing every last argument they can think of to try and sway the outcome. My favorite one yet involves Microsoft telling Sony to quit whining already and just… Read more... View the full article
  2. Hello Daniel, Welcome to UnityHQ Nolfseries Community. Please feel free to browse around and get to know the others. If you have any questions please don't hesitate to ask. Daniel joined on 03/21/2023. View Member
  3. Although Diablo IV’s early access beta wrapped up on March 20, another one will run from March 24 through 26. This window gives you access to all five classes, two of which—the Druid and Necromancer—were initially unavailable. While none have anything in common, other than their propensity to kill evil 'frack', one thing… Read more... View the full article
  4. This weekend, Twitch streamer and political commentator Hasan Piker was trying to adopt a new dog. While visiting three Los Angeles-based shelters, he decided to donate a total of $25,000 to help cover the adoption fees for other prospective dog owners. Read more... View the full article
  5. The just-released Bayonetta prequel side-story, Bayonetta Origins: Cereza and the Lost Demon, is actually really good, a return to form after the disappointment of Bayonetta 3, as I noted in my impressions yesterday. But apparently I was off base in positing that it was intended to placate lore-starved Bayonetta… Read more... View the full article
  6. Today, Electronic Arts and developer Dice announced plans to sunset a selection of older Battlefield games and Mirror’s Edge later this month. Sadly, the list includes one of the best entries in the Battlefield franchise: Bad Company 2. And while you’ll still be able to play it online until the end of this year, you… Read more... View the full article
  7. Modding, and by extension, modders, know no bounds. Whether it’s fixing unattended-to glitches in hit games, adding absurd new character models, or just offering a new take on beloved gameplay, mods are a great way to add new life to a game you already know very well. And if that game is Elden Ring, well today’s… Read more... View the full article
  9. When Overwatch 2 launched in October of last year, it was only natural that players stumbled into its new Push mode like ignorant babies in the dark. But y’all, we’ve had five months of pushing barricades across Toronto and Rome, and I need the rest of the Overwatch community to get off the goddamn robot when we’re… Read more... View the full article
  10. Fortnite Twitch streamer and hugely popular gaming TikToker Chica seemed to get some surprising news yesterday—Twitch removed one of her emotes, a cartoon chick with big, wet eyes and its tongue sticking out, for “inciting abuse.” Read more... View the full article
  11. ChatGPT is just the start. With computing now advancing at what he called “lightspeed,” NVIDIA founder and CEO Jensen Huang today announced a broad set of partnerships with Google, Microsoft, Oracle and a range of leading businesses that bring new AI, simulation and collaboration capabilities to every industry. “The warp drive engine is accelerated computing, and the energy source is AI,” Huang said in his keynote at the company’s GTC conference. “The impressive capabilities of generative AI have created a sense of urgency for companies to reimagine their products and business models.”

In a sweeping 78-minute presentation anchoring the four-day event, Huang outlined how NVIDIA and its partners are offering everything from training to deployment for cutting-edge AI services. He announced new semiconductors and software libraries to enable fresh breakthroughs. And Huang revealed a complete set of systems and services for startups and enterprises racing to put these innovations to work on a global scale.

Huang punctuated his talk with vivid examples of this ecosystem at work. He announced NVIDIA and Microsoft will connect hundreds of millions of Microsoft 365 and Azure users to a platform for building and operating hyperrealistic virtual worlds. He offered a peek at how Amazon is using sophisticated simulation capabilities to train new autonomous warehouse robots. He touched on the rise of a new generation of wildly popular generative AI services such as ChatGPT. And underscoring the foundational nature of NVIDIA’s innovations, Huang detailed how, together with ASML, TSMC and Synopsys, NVIDIA computational lithography breakthroughs will help make a new generation of efficient, powerful 2-nm semiconductors possible.

The arrival of accelerated computing and AI comes just in time, with Moore’s Law slowing and industries tackling powerful dynamics — sustainability, generative AI and digitalization, Huang said.
“Industrial companies are racing to digitalize and reinvent into software-driven tech companies — to be the disruptor and not the disrupted,” Huang said. Acceleration lets companies meet these challenges. “Acceleration is the best way to reclaim power and achieve sustainability and Net Zero,” Huang said.

GTC: The Premier AI Conference

GTC, now in its 14th year, has become one of the world’s most important AI gatherings. This week’s conference features 650 talks from leaders such as Demis Hassabis of DeepMind, Valerie Taylor of Argonne Labs, Scott Belsky of Adobe, Paul Debevec of Netflix, Thomas Schulthess of ETH Zurich and a special fireside chat between Huang and Ilya Sutskever, co-founder of OpenAI, the creator of ChatGPT. More than 250,000 registered attendees will dig into sessions on everything from restoring the lost Roman mosaics of 2,000 years ago to building the factories of the future, from exploring the universe with a new generation of massive telescopes to rearranging molecules to accelerate drug discovery, to more than 70 talks on generative AI.

The iPhone Moment of AI

NVIDIA’s technologies are fundamental to AI, with Huang recounting how NVIDIA was there at the very beginning of the generative AI revolution. Back in 2016 he hand-delivered to OpenAI the first NVIDIA DGX AI supercomputer — the engine behind the large language model breakthrough powering ChatGPT. Launched late last year, ChatGPT went mainstream almost instantaneously, attracting over 100 million users, making it the fastest-growing application in history. “We are at the iPhone moment of AI,” Huang said. NVIDIA DGX supercomputers, originally used as an AI research instrument, are now running 24/7 at businesses across the world to refine data and process AI, Huang reported. Half of all Fortune 100 companies have installed DGX AI supercomputers. “DGX supercomputers are modern AI factories,” Huang said.
NVIDIA H100, Grace Hopper and Grace for Data Centers

Deploying LLMs like ChatGPT is a significant new inference workload, Huang said. For large-language-model inference, like ChatGPT, Huang announced a new GPU — the H100 NVL with dual-GPU NVLink. Based on NVIDIA’s Hopper architecture, H100 features a Transformer Engine designed to process models such as the GPT model that powers ChatGPT. Compared to HGX A100 for GPT-3 processing, a standard server with four pairs of H100 with dual-GPU NVLink is up to 10x faster. “H100 can reduce large language model processing costs by an order of magnitude,” Huang said.

Meanwhile, over the past decade, cloud computing has grown 20% annually into a $1 trillion industry, Huang said. NVIDIA designed the Grace CPU for an AI- and cloud-first world, where AI workloads are GPU accelerated. Grace is sampling now, Huang said. NVIDIA’s new superchip, Grace Hopper, connects the Grace CPU and Hopper GPU over a high-speed 900GB/sec coherent chip-to-chip interface. Grace Hopper is ideal for processing giant datasets like AI databases for recommender systems and large language models, Huang explained. “Customers want to build AI databases several orders of magnitude larger,” Huang said. “Grace Hopper is the ideal engine.”

DGX: The Blueprint for AI Infrastructure

The latest version of DGX features eight NVIDIA H100 GPUs linked together to work as one giant GPU. “NVIDIA DGX H100 is the blueprint for customers building AI infrastructure worldwide,” Huang said, sharing that NVIDIA DGX H100 is now in full production. H100 AI supercomputers are already coming online. Oracle Cloud Infrastructure announced the limited availability of new OCI Compute bare-metal GPU instances featuring H100 GPUs. Additionally, Amazon Web Services announced its forthcoming EC2 UltraClusters of P5 instances, which can scale in size up to 20,000 interconnected H100 GPUs.
This follows Microsoft Azure’s private preview announcement last week for its H100 virtual machine, ND H100 v5. Meta has now deployed its H100-powered Grand Teton AI supercomputer internally for its AI production and research teams. And OpenAI will be using H100s on its Azure supercomputer to power its continuing AI research. Other partners making H100 available include Cirrascale and CoreWeave, both of which announced general availability today. Additionally, Google Cloud, Lambda, Paperspace and Vultr are planning to offer H100. And servers and systems featuring NVIDIA H100 GPUs are available from leading server makers including Atos, Cisco, Dell Technologies, GIGABYTE, Hewlett Packard Enterprise, Lenovo and Supermicro.

DGX Cloud: Bringing AI to Every Company, Instantly

And to speed DGX capabilities to startups and enterprises racing to build new products and develop AI strategies, Huang announced NVIDIA DGX Cloud, through partnerships with Microsoft Azure, Google Cloud and Oracle Cloud Infrastructure to bring NVIDIA DGX AI supercomputers “to every company, from a browser.” DGX Cloud is optimized to run NVIDIA AI Enterprise, the world’s leading acceleration software suite for end-to-end development and deployment of AI. “DGX Cloud offers customers the best of NVIDIA AI and the best of the world’s leading cloud service providers,” Huang said. NVIDIA is partnering with leading cloud service providers to host DGX Cloud infrastructure, starting with Oracle Cloud Infrastructure. Microsoft Azure is expected to begin hosting DGX Cloud next quarter, and the service will soon expand to Google Cloud and more. This partnership brings NVIDIA’s ecosystem to cloud service providers while amplifying NVIDIA’s scale and reach, Huang said. Enterprises will be able to rent DGX Cloud clusters on a monthly basis, ensuring they can quickly and easily scale the development of large, multi-node training workloads.
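To put the 900GB/sec Grace Hopper chip-to-chip figure quoted above in perspective, here is a back-of-envelope sketch; the model size and FP16 precision below are illustrative assumptions, not figures from the keynote:

```python
# Back-of-envelope: time to move a large model's weights across the
# Grace Hopper 900 GB/s coherent chip-to-chip link (figure from the keynote).
# The parameter count and precision are illustrative assumptions.

LINK_BANDWIDTH_GB_S = 900      # NVLink-C2C coherent interface, per the keynote
params_billion = 175           # hypothetical GPT-3-scale model
bytes_per_param = 2            # FP16

model_size_gb = params_billion * bytes_per_param       # 350 GB of weights
transfer_time_s = model_size_gb / LINK_BANDWIDTH_GB_S  # well under half a second

print(f"{model_size_gb} GB of weights cross the link in ~{transfer_time_s:.2f} s")
```

At that rate, even a dataset far larger than GPU memory can stream between CPU and GPU quickly, which is the point Huang makes about recommender databases and large language models.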
Supercharging Generative AI

To accelerate the work of those seeking to harness generative AI, Huang announced NVIDIA AI Foundations, a family of cloud services for customers needing to build, refine and operate custom LLMs and generative AI trained with their proprietary data and for domain-specific tasks. AI Foundations services include NVIDIA NeMo for building custom language text-to-text generative models; Picasso, a visual language model-making service for customers who want to build custom models trained with licensed or proprietary content; and BioNeMo, to help researchers in the $2 trillion drug discovery industry. Adobe is partnering with NVIDIA to build a set of next-generation AI capabilities for the future of creativity. Getty Images is collaborating with NVIDIA to train responsible generative text-to-image and text-to-video foundation models. Shutterstock is working with NVIDIA to train a generative text-to-3D foundation model to simplify the creation of detailed 3D assets.

Accelerating Medical Advances

And NVIDIA announced Amgen is accelerating drug discovery services with BioNeMo. In addition, Alchemab Therapeutics, AstraZeneca, Evozyne, Innophore and Insilico are all early access users of BioNeMo. BioNeMo helps researchers create, fine-tune and serve custom models with their proprietary data, Huang explained. Huang also announced that NVIDIA and Medtronic, the world’s largest healthcare technology provider, are partnering to build an AI platform for software-defined medical devices. The partnership will create a common platform for Medtronic systems, ranging from surgical navigation to robotic-assisted surgery. And today Medtronic announced that its GI Genius system, with AI for early detection of colon cancer, is built on NVIDIA Holoscan, a software library for real-time sensor processing systems, and will ship around the end of this year. “The world’s $250 billion medical instruments market is being transformed,” Huang said.
Speeding Deployment of Generative AI Applications

To help companies deploy rapidly emerging generative AI models, Huang announced inference platforms for AI video, image generation, LLM deployment and recommender inference. They combine NVIDIA’s full stack of inference software with the latest NVIDIA Ada, Hopper and Grace Hopper processors — including the NVIDIA L4 Tensor Core GPU and the NVIDIA H100 NVL GPU, both launched today.

• NVIDIA L4 for AI Video can deliver 120x more AI-powered video performance than CPUs, combined with 99% better energy efficiency.
• NVIDIA L40 for Image Generation is optimized for graphics and AI-enabled 2D, video and 3D image generation.
• NVIDIA H100 NVL for Large Language Model Deployment is ideal for deploying massive LLMs like ChatGPT at scale.
• NVIDIA Grace Hopper for Recommendation Models is ideal for graph recommendation models, vector databases and graph neural networks.

Google Cloud is the first cloud service provider to offer L4 to customers with the launch of its new G2 virtual machines, available in private preview today. Google is also integrating L4 into its Vertex AI model store.

Microsoft, NVIDIA to Bring Omniverse to ‘Hundreds of Millions’

Unveiling a second cloud service to speed unprecedented simulation and collaboration capabilities to enterprises, Huang announced NVIDIA is partnering with Microsoft to bring NVIDIA Omniverse Cloud, a fully managed cloud service, to the world’s industries. “Microsoft and NVIDIA are bringing Omniverse to hundreds of millions of Microsoft 365 and Azure users,” Huang said, also unveiling new NVIDIA OVX servers and a new generation of workstations powered by NVIDIA RTX Ada Generation GPUs and Intel’s newest CPUs optimized for NVIDIA Omniverse.
To show the extraordinary capabilities of Omniverse, NVIDIA’s open platform built for 3D design collaboration and digital twin simulation, Huang shared a video showing how NVIDIA Isaac Sim, NVIDIA’s robotics simulation and synthetic data generation platform, built on Omniverse, is helping Amazon save time and money with full-fidelity digital twins. It shows how Amazon is working to choreograph the movements of Proteus, Amazon’s first fully autonomous warehouse robot, as it moves bins of products from one place to another in Amazon’s cavernous warehouses alongside humans and other robots.

Digitizing the $3 Trillion Auto Industry

Illustrating the scale of Omniverse’s reach and capabilities, Huang dug into Omniverse’s role in digitalizing the $3 trillion auto industry. By 2030, auto manufacturers will build 300 factories to make 200 million electric vehicles, Huang said, and battery makers are building 100 more megafactories. “Digitalization will enhance the industry’s efficiency, productivity and speed,” Huang said. Touching on Omniverse’s adoption across the industry, Huang said Lotus is using Omniverse to virtually assemble welding stations. Mercedes-Benz uses Omniverse to build, optimize and plan assembly lines for new models. Rimac and Lucid Motors use Omniverse to build digital stores from actual design data that faithfully represent their cars. Working with Idealworks, BMW uses Isaac Sim in Omniverse to generate synthetic data and scenarios to train factory robots. And BMW is using Omniverse to plan operations across factories worldwide and is building a new electric-vehicle factory, completely in Omniverse, two years before the plant opens, Huang said.

Separately, NVIDIA today announced that BYD, the world’s leading manufacturer of new energy vehicles (NEVs), will extend its use of the NVIDIA DRIVE Orin centralized compute platform in a broader range of its NEVs.
Accelerating Semiconductor Breakthroughs

Enabling semiconductor leaders such as ASML, TSMC and Synopsys to accelerate the design and manufacture of a new generation of chips as current production processes near the limits of what physics makes possible, Huang announced NVIDIA cuLitho, a breakthrough that brings accelerated computing to the field of computational lithography. The new NVIDIA cuLitho software library for computational lithography is being integrated by TSMC, the world’s leading foundry, as well as electronic design automation leader Synopsys into their software, manufacturing processes and systems for the latest-generation NVIDIA Hopper architecture GPUs. Chip-making equipment provider ASML is working closely with NVIDIA on GPUs and cuLitho, and plans to integrate support for GPUs into all of its computational lithography software products. With lithography at the limits of physics, NVIDIA’s introduction of cuLitho enables the industry to go to 2nm and beyond, Huang said. “The chip industry is the foundation of nearly every industry,” Huang said.

Accelerating the World’s Largest Companies

Companies around the world are on board with Huang’s vision. Telecom giant AT&T uses NVIDIA AI to more efficiently process data and is testing Omniverse ACE and the Tokkio AI avatar workflow to build, customize and deploy virtual assistants for customer service and its employee help desk. American Express, the U.S. Postal Service, Microsoft Office and Teams, and Amazon are among the 40,000 customers using the high-performance NVIDIA TensorRT inference optimizer and runtime, and NVIDIA Triton, a multi-framework data center inference serving software. Uber uses Triton to serve hundreds of thousands of ETA predictions per second. And with over 60 million daily users, Roblox uses Triton to serve models for game recommendations, build avatars, and moderate content and marketplace ads. Microsoft, Tencent and Baidu are all adopting NVIDIA CV-CUDA for AI computer vision.
The technology, in open beta, optimizes pre- and post-processing, delivering 4x savings in cost and energy.

Helping Do the Impossible

Wrapping up his talk, Huang thanked NVIDIA’s systems, cloud and software partners, as well as researchers, scientists and employees. NVIDIA has updated 100 acceleration libraries, including cuQuantum and the newly open-sourced CUDA Quantum for quantum computing, cuOpt for combinatorial optimization, and cuLitho for computational lithography, Huang announced. The global NVIDIA ecosystem, Huang reported, now spans 4 million developers, 40,000 companies and 14,000 startups in NVIDIA Inception. “Together,” Huang said, “we are helping the world do the impossible.” View the full article
  12. Companies across industries are looking to use interactive avatars to enhance digital experiences. But creating them is a complex, time-consuming process requiring state-of-the-art AI models that can see, hear, understand and communicate with end users. To ease this process, NVIDIA is providing creators and developers with real-time AI solutions through Omniverse Avatar Cloud Engine (ACE), a suite of cloud-native microservices for end-to-end development of interactive avatars. In collaboration with early-access partners, NVIDIA is delivering improvements that will provide users with the tools they need to easily design and deploy various kinds of avatars, from interactive chatbots to intelligent digital humans. AT&T and Quantiphi are among the first to experience how Omniverse ACE can help increase employee productivity and enhance customer service experiences.

Omniverse ACE users can now seamlessly integrate NVIDIA AI into their applications, including Riva for speech AI, NeMo service for natural language understanding, and Omniverse Audio2Face or Live Portrait for AI-powered 2D and 3D character animation. With the latest improvements to Omniverse ACE, teams can also deploy advanced avatars across web conferencing and customer service use cases by integrating domain-specific NVIDIA AI workflows like Tokkio and Maxine.

Early Partners and Customers Develop AI-Driven Digital Humans

AT&T is planning to use Omniverse ACE and the Tokkio AI avatar workflow to build, customize and deploy virtual assistants for customer service and its employee help desk. Working with Quantiphi, one of NVIDIA’s service delivery partners, AT&T is developing interactive avatars that can provide 24/7 support in local languages across regions. This is helping the company reduce costs while providing a better experience for its employees worldwide. In addition to customer service, AT&T is planning to build and develop digital humans for various use cases across the company.
“Quantiphi and NVIDIA have been collaborating to make customer experience more immersive by combining the power of large language models, graphics and recommender systems,” said Siddharth Kotwal, global head of NVIDIA Practice at Quantiphi. “NVIDIA’s Tokkio framework has made it easier to build, deploy and personalize AI-powered digital assistants or avatars for our enterprise customers. The process of seamlessly integrating automatic speech recognition, conversational agents and information retrieval systems with real-time animation has been simplified.”

Leading professional-services company Deloitte is also working with NVIDIA to help enterprises deploy transformative applications. Deloitte’s latest hybrid-cloud offerings — which consist of NVIDIA AI and Omniverse services and platforms, including Omniverse ACE — will be added to the Deloitte Center for AI Computing.

An Advanced, Streamlined Solution for Deploying Avatars

Omniverse ACE provides all the necessary tools so users can streamline the development process for realistic, intelligent avatars. Teams can also customize pre-built AI avatar workflows to suit their needs with applications like NVIDIA Tokkio. Additionally, Omniverse ACE is bringing new improvements to existing microservices. Learn more about NVIDIA Omniverse ACE and register to join the early-access program, available now for developers. Dive into the art of AI avatars at GTC, a global conference for the era of AI and the metaverse. Join sessions with NVIDIA and industry experts, and watch the GTC keynote below: View the full article
  13. With AI at its tipping point, AI-enabled computer vision is being used to address the world’s most challenging problems in nearly every industry. At GTC, a global conference for the era of AI and the metaverse running through Thursday, March 23, NVIDIA announced technology updates poised to drive the next wave of vision AI adoption. These include NVIDIA TAO Toolkit 5.0 for creating customized, production-ready AI models; expansions to the NVIDIA DeepStream software development kit for developing vision AI applications and services; and early access to Metropolis Microservices for powerful, cloud-native building blocks that accelerate vision AI.

Exploding Adoption and Ecosystem

More than 1,000 companies are using NVIDIA Metropolis developer tools to solve Internet of Things (IoT), sensor processing and operational challenges with vision AI — and the rate of adoption is quickening. The tools have now been downloaded over 1 million times by those looking to build vision AI applications. PepsiCo is optimizing its operations with NVIDIA Metropolis to improve throughput, reduce downtime and minimize energy consumption. The convenience-food and beverages giant is developing AI-powered digital twins of its distribution centers using the NVIDIA Omniverse platform to visualize how different setups in its facilities will impact operational efficiency before implementing them in the real world. PepsiCo is also using advanced machine vision technology, powered by the NVIDIA AI platform and GPUs, to improve efficiency and accuracy in its distribution process. Siemens, a technology leader in industrial automation and digitalization, is adding next-level perception into its edge-based applications through NVIDIA Metropolis. With millions of sensors across factories, Siemens uses NVIDIA Metropolis — a key application framework for edge AI — to connect entire fleets of robots and IoT devices and bring AI into its industrial environments.
Automaker BMW Group is using computer vision technologies based on lidar and cameras — built by Seoul Robotics and powered by the NVIDIA Jetson edge AI platform — at its manufacturing facility in Munich to automate the movement of cars. This automation has resulted in significant time and cost savings, as well as employee safety improvements.

Making World-Class Vision AI Accessible to Any Developer on Any Device

The next phase of AI adoption will arrive as AI is made accessible to developers of any skill level. GTC is showcasing major expansions of Metropolis workflows, which put some of the latest AI capabilities and research into the hands of developers through NVIDIA TAO Toolkit, Metropolis Microservices and the DeepStream SDK, as well as the NVIDIA Isaac Sim synthetic data generation tool and robotics simulation applications. NVIDIA TAO Toolkit is a low-code AI framework that supercharges vision AI model development for practically any developer, in any service, on any device. TAO 5.0 is filled with new features, including vision transformer pretrained AI models, the ability to deploy models on any platform with standard ONNX export, automatic hyperparameter tuning with AutoML, and AI-assisted data annotation. STMicroelectronics, a global leader in embedded microcontrollers, integrates TAO into its STM32Cube AI developer workflow. TAO has enabled the company to run sophisticated AI in widespread IoT and edge use cases that STM32 microcontrollers power within their compute and memory budget.

The NVIDIA DeepStream SDK has emerged as a powerful tool for developers looking to create vision AI applications across a wide range of industries. With its latest update, a new graph execution runtime (GXF) allows developers to expand beyond the open-source GStreamer multimedia framework. DeepStream’s addition of GXF is a game-changer for users seeking to build applications that require tight execution control, advanced scheduling and critical thread management.
This feature unlocks a host of new applications, including those in industrial quality control, robotics and autonomous machines. Adding perception to physical spaces often requires applying vision AI to numerous cameras covering multiple regions. Challenges in computer vision include monitoring the flow of packaged goods across a warehouse or analyzing individual customer flow across a large retail space. Metropolis Microservices make these sophisticated vision AI tasks easy to integrate and deploy into users’ applications. Leading IT services company Infosys is using NVIDIA Metropolis to supercharge its vision AI application development and deployment. The NVIDIA TAO low-code training framework and pretrained models help Infosys reduce AI training efforts. Metropolis Microservices, along with the DeepStream SDK, optimize the company’s vision processing pipeline throughput and cut overall solution costs. Infosys can also generate troves of synthetic data with the NVIDIA Omniverse Replicator SDK to easily train AI models with new stock keeping units and packaging.

Latest Metropolis Features

Tap into the latest in NVIDIA vision AI technologies:
• Read the TAO 5.0 blog. Try TAO Toolkit on NVIDIA LaunchPad.
• GXF runtime, now part of NVIDIA DeepStream, unlocks new use cases that require tight scheduling control. Try it on NVIDIA LaunchPad.
• Sign up for early access to Metropolis Microservices, a suite of cloud-native microservices and reference applications that accelerate efforts to create API-driven solutions for the edge and the cloud.
• Learn more about NVIDIA Metropolis — through corporate blogs, technical blogs and case studies — to see how vision AI is transforming the world.
• Register free to attend GTC, and watch these sessions to learn how to accelerate vision AI application development and understand its many use cases.

Watch NVIDIA founder and CEO Jensen Huang’s GTC keynote in replay: View the full article
  14. Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Powerful AI technologies are revolutionizing 3D content creation — whether by enlivening realistic characters that show emotion or turning simple texts into imagery. The brightest minds, artists and creators are gathering at NVIDIA GTC, a free, global conference on AI and the metaverse, taking place online through Thursday, March 23. NVIDIA founder and CEO Jensen Huang’s GTC keynote announced a slew of advancements set to ease creators’ workflows, including using generative AI with the Omniverse Audio2Face app. NVIDIA Omniverse, a platform for creating and operating metaverse applications, further expands with an updated Unreal Engine Connector, open-beta Unity Connector and new SimReady 3D assets. New NVIDIA RTX GPUs, powered by the Ada Lovelace architecture, are fueling next-generation laptop and desktop workstations to meet the demands of AI, design and the industrial metaverse. The March NVIDIA Studio Driver adds support for the popular RTX Video Super Resolution feature, now available for GeForce RTX 40 and 30 Series GPUs. And this week In the NVIDIA Studio, the Adobe Substance 3D art and development team explores the process of collaborating to create the animated short End of Summer using Omniverse USD Composer (formerly known as Create).

Omniverse Overdrive

Specialized generative AI tools can boost creator productivity, even for users who don’t have extensive technical skills. Generative AI brings creative ideas to life, producing high-quality, highly iterative experiences — all in a fraction of the time and cost of traditional asset development.
The Omniverse Audio2Face AI-powered app allows 3D artists to efficiently animate secondary characters, generating realistic facial animations with just an audio file — replacing what is often a tedious, manual process. The latest release delivers significant upgrades in quality, usability and performance, including a new headless mode and a REST API — enabling game developers and other creators to run the app and process numerous audio files from multiple users in the data center.

A new Omniverse Connector developed by NVIDIA for Unity workflows is available in open beta. Unity scenes can be added directly onto Omniverse Nucleus servers with access to platform features: the DeepSearch tool, thumbnails, bookmarks and more. Unidirectional live-sync workflows enable real-time updates. With the Unreal Engine Connector’s latest release, Omniverse users can now use Unreal Engine’s USD import utilities to add skeletal mesh blend shape importing, and Python USD bindings to access stages on Omniverse Nucleus. This release also delivers improvements in import, export and live workflows, as well as updated software development kits. In addition, over 1,000 new SimReady assets are available for creators. SimReady assets are built to real-world scale with accurate mass, physical materials and center of gravity for use within Omniverse PhysX for the most photorealistic visuals and accurate movements.

March Studio Driver Brings Superfly Super Resolution

Over 90% of online videos consumed by NVIDIA RTX GPU owners are 1080p resolution or lower, often resulting in upscaling that further degrades the picture despite the hardware being able to handle more. The solution: RTX Video Super Resolution. The new feature, available on GeForce RTX 30 and 40 Series GPUs, uses AI to improve the quality of any video streamed through Google Chrome and Microsoft Edge browsers. Click the image to see the differences between bicubic upscaling (left) and RTX Video Super Resolution.
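For a concrete sense of what simple, non-AI upscaling does to low-resolution video, here is a toy nearest-neighbor upscaler in pure Python. It is the crude baseline that AI approaches like RTX Video Super Resolution improve on by inferring detail instead of merely duplicating pixels; the 2x2 "frame" is made up for illustration:

```python
def upscale_nearest(frame, factor):
    """Naively enlarge a 2D frame (a list of pixel rows) by repeating pixels.

    Each source pixel becomes a factor x factor block, which is why naive
    upscaling looks blocky: no new detail is created, only duplicated.
    """
    out = []
    for row in frame:
        # Stretch the row horizontally, then repeat it vertically.
        stretched = [px for px in row for _ in range(factor)]
        out.extend([stretched[:] for _ in range(factor)])
    return out

# A made-up 2x2 grayscale "frame" for illustration.
frame = [[10, 20],
         [30, 40]]

big = upscale_nearest(frame, 2)
for row in big:
    print(row)  # each source pixel is now a 2x2 block
```

Bicubic filtering smooths these blocks instead of leaving hard edges, and a learned super-resolution model goes further by predicting plausible detail, which is the gap the driver feature aims to close.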
This improves video sharpness and clarity. Users can watch online content in its native resolution, even on high-resolution displays. RTX Video Super Resolution is now available in the March Studio Driver, which can be downloaded today. New NVIDIA RTX GPUs Power Professional Creators Six new professional-grade NVIDIA RTX GPUs — based on the Ada Lovelace architecture — enable creators to meet the demands of their most complex workloads using laptops and desktops. The NVIDIA RTX 5000, RTX 4000, RTX 3500, RTX 3000 and RTX 2000 Ada Generation laptop GPUs deliver up to 2x the performance compared with the previous generation. The NVIDIA RTX 4000 Small Form Factor (SFF) Ada Generation desktop GPU features new RT Cores, Tensor Cores and CUDA cores with up to 20GB of graphics memory. These include the latest NVIDIA Max-Q and RTX technologies and are backed by the NVIDIA Studio platform with RTX optimizations in over 110 creative apps, NVIDIA RTX Enterprise Drivers for the highest levels of stability and performance, and exclusive AI-powered NVIDIA tools: Omniverse, Canvas and Broadcast. Professionals using these laptop GPUs can run advanced technologies like DLSS 3 to increase frame rates by up to 4x compared to the previous generation, and Omniverse Enterprise for real-time collaboration and simulation. Next-generation mobile workstations featuring NVIDIA RTX GPUs will be available starting this month. Creative Boosts at GTC Experience GTC for more inspiring content, expert-led sessions and a must-see keynote to accelerate your life’s creative work. 
Catch these sessions on Omniverse, AI and 3D workflows — live or on demand:

- Fireside Chat With OpenAI Founder Ilya Sutskever and Jensen Huang: AI Today and Vision of the Future [S52092]
- How Generative AI Is Transforming the Creative Process: Fireside Chat With Adobe’s Scott Belsky and NVIDIA’s Bryan Catanzaro [S52090]
- Generative AI Demystified [S52089]
- 3D by AI: How Generative AI Will Make Building Virtual Worlds Easier [S52163]
- Custom World Building With AI Avatars: The Little Martians Sci-Fi Project [S51360]
- AI-Powered, Real-Time, Markerless: The New Era of Motion Capture [S51845]
- 3D and Beyond: How 3D Artists Can Build a Side Hustle in the Metaverse [SE52117]
- NVIDIA Omniverse User Group [SE52047]
- Accelerate the Virtual Production Pipeline to Produce an Award-Winning Sci-Fi Short Film [S51496]

As part of the Watch ‘n Learn Giveaway with valued partner 80LV, GTC attendees who register for any Omniverse for creators session — or watch on-demand before March 30 — have a chance to win a powerful GeForce RTX 4080 GPU. Simply fill out this form and tag #GTC23 and @NVIDIAOmniverse with the name of the session.

Search the GTC session catalog and check out the “Media and Entertainment” and “Omniverse” topics for additional creator-focused sessions.

A Father-Daughter Journey Back Home

The short animation End of Summer, created by the Substance 3D art and development team at Adobe, may evoke a surprising amount of heart. That was the team’s intent. “We loved the idea of allowing the artwork to invoke an emotion in the viewer, letting them develop their own version of a story they felt was unfolding before their eyes,” said team member Wes McDermott.

“End of Summer” design boards.

End of Summer, a nod to stop-motion animation studios such as Laika, began as an internal Adobe Substance 3D project aimed at accomplishing two goals. First, to encourage a relatively new group of artists to work together as a team by leaning into a creative endeavor.
And second, to test their pipeline feature set for the potential of the Universal Scene Description (USD) framework.

Early concept work for “End of Summer.”

The group divided the task of creating assets across the most popular 3D apps, including Adobe Substance 3D Modeler, Autodesk 3ds Max, Autodesk Maya, Blender and Maxon’s Cinema 4D. Their GeForce RTX GPUs unlocked AI denoising in the viewport for fast, interactive rendering and GPU-accelerated filters to speed up and simplify material creation.

“NVIDIA Omniverse is a great tool for laying out and set dressing scenes, as well as learning about USD workflows and collaboration. We used painting and NVIDIA PhysX collision tools to place assets.” — Wes McDermott

“We quickly started to see the power of using USD as not only an export format but also a way to build assets,” McDermott said. “USD enables artists on the team to use whatever 3D app they felt most comfortable with.”

The Adobe team relied heavily on the Substance 3D asset library of materials, models and lights to create their studio environment. All textures were applied in Substance 3D Painter, where RTX-accelerated light and ambient occlusion baking optimized assets in mere moments. Then, they imported all models into Omniverse USD Composer, where the team simultaneously refined and assembled assets.

“This was also during the pandemic, and we were all quarantined in our homes,” McDermott said. “Having a project we could work on together as a team helped us to communicate and be creative.”

Accelerate scene composition, and assemble, simulate and render 3D scenes in real time in Omniverse USD Composer.

Lastly, the artists imported the scene into Unreal Engine as a stage for lighting and rendering.

Final scene edits in Unreal Engine.

McDermott stressed the importance of hardware in his team’s workflows. “The bakers in Substance Painter are GPU accelerated and benefit greatly from NVIDIA RTX GPUs,” he said.
“We were also heavily working on Unreal Engine and reliant on real-time rendering.” For more on this workflow, check out the GTC session, 3D Art Goes Multiplayer: Behind the Scenes of Adobe Substance’s ‘End of Summer’ Project With Omniverse. Registration is free. Adobe Substance 3D team lead and artist Wes McDermott. Check out McDermott’s portfolio on Instagram. Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. Learn more about Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community. View the full article
  15. The automotive industry is undergoing a digital revolution, driven by breakthroughs in accelerated computing, AI and the industrial metaverse. Automakers are digitalizing every phase of the product lifecycle — including concept and styling, design and engineering, software and electronics, smart factories, autonomous driving and retail — using the NVIDIA Omniverse platform and AI. Based on the Universal Scene Description (USD) framework, Omniverse transforms complex 3D workflows, allowing teams to connect and customize 3D pipelines and simulate large-scale, physically accurate virtual worlds. By taking the automotive product workflow into the virtual world, automakers can bypass traditional bottlenecks to save critical time and reduce cost. Bringing Ideas to Life Designing new vehicle models — and refreshing current ones — is a collaborative process that requires review and alignment of even the tiniest details. By refining concepts in Omniverse, designers can visualize every facet of a car’s interior and exterior in the full context of the broader vehicle. Global teams can iterate quickly with real-time, physically based, photorealistic rendering. For example, they can collaborate to design the cockpit’s critical components, such as digital instrument clusters and infotainment systems, which must strike a balance of communicating information while minimizing distraction. Omniverse enables designers to flexibly lay out the cabin and cockpit onscreen user experience along with the vehicle’s physical interior to ensure a harmonious look and feel. With this next-generation design process, automakers can catch flaws early and make real-time improvements, reducing the number of physical prototypes to test and validate. Virtual Validation Once the design is complete, developers can use Omniverse to kick the tires on their new concepts. Perfecting the interior is necessary for customer experience as well as safety. 
Developers can take these in-cabin designs for a spin in the virtual world, collaborating and sharing designs for efficient refinement and validation.

Digitalization is also transforming the way automakers approach vehicle engineering. Teams can test different materials and components in a virtual environment to further reduce physical prototyping. For example, engineers can use computational fluid dynamics to refine aerodynamics and perform virtual crash simulations for safer vehicle designs.

Continuous Improvement

Vehicles of the coming generation are highly advanced computers on wheels, packed with complex, centralized electronic systems and software for enhanced safety, intelligence and security. Typically, vehicle functions are controlled by dozens of electronic control units distributed throughout a vehicle. By centralizing computing into core domains, automakers can replace many components and simplify what has been an incredibly complex supply chain.

With a digital representation of this entire architecture, automakers can simulate and test the vehicle software, and then provide over-the-air updates for continuous improvement throughout the car’s lifespan — from remote diagnostics to autonomous-driving capabilities to subscriptions for entertainment and other services.

Digital-First Production

Vehicle production is a colossal undertaking that requires thousands of parts and workers moving in sync. Any supply chain or production issues can lead to costly delays. With Omniverse, automakers can develop and operate complex, AI-enabled virtual environments for factory and warehouse design. These physically based, precision-timed digital twins are the key to unlocking operational efficiencies with predictive analysis and process automation. Factory planners can access the digital twin of the factory to review and improve the plant as needed.
Every change can be quickly evaluated and validated in the virtual world, then implemented in the real world to ensure maximum efficiency and optimal ergonomics for factory workers. Additionally, automakers can synchronize plant locations anywhere in the world for scalable design and iteration. Autonomous Vehicle Proving Grounds On top of enhancing traditional product development and manufacturing, Omniverse offers a complete toolchain for developing and validating automated and autonomous-driving systems. NVIDIA DRIVE Sim is a physically based simulation platform, built on NVIDIA Omniverse, designed for fast and efficient autonomous-vehicle testing and validation at scale. It is time-accurate and supports the complete development toolchain, so developers can run simulation at the component level or for the entire system. With DRIVE Sim, developers can repeatedly simulate routine driving scenarios, as well as rare and hazardous conditions that are too risky to test in the real world. Additionally, real-world driving recordings can be turned into reactive simulation scenarios using the platform’s Neural Reconstruction Engine. Automakers can also fine-tune their advanced driver-assistance and autonomous-vehicle systems for New Car Assessment Program (NCAP) regulations, which evaluate the safety performance of new cars based on several crash tests and safety features. The DRIVE Sim NCAP tool provides high-fidelity NCAP test protocols in simulation, so automakers can efficiently perform dedicated development and validation at scale. The ability to drive in physically based virtual environments can significantly accelerate the autonomous-vehicle development process, overcoming data collection and scenario diversity hurdles that occur in real-world testing. Omniverse’s generative AI reconstructs previously driven routes into 3D so past experiences can be reenacted or modified. Try Before You Buy The end customer benefits from digitalization, too. 
Immersive technologies in Omniverse — including 3D visualization, augmented reality (AR) and virtual reality (VR) streamed using NVIDIA CloudXR — deliver consumers a more engaging experience, allowing them to explore features before making a purchase. Prospective buyers can customize their vehicle in a car configurator — choosing colors, interior materials, trim levels and more — without being limited by the physical inventory of a dealership. They can then check out the car from every angle using 3D visualization. And with AR and VR, they can view and virtually test drive a car from anywhere. The benefits of digitalization extend beyond the automotive industry. With Omniverse, any enterprise can reimagine their workflows to increase efficiency, productivity and speed, revolutionizing the way they do business. Omniverse is the digital-to-physical operating system to realize industrial digitalization. Learn more about the latest in AI and the metaverse by watching NVIDIA founder and CEO Jensen Huang’s GTC keynote address: View the full article
  16. Transportation industry trailblazers are propelling their next-generation vehicles by building on NVIDIA DRIVE end-to-end solutions, which span the cloud to the car. The world’s best-selling new energy vehicle (NEV) brand BYD announced at NVIDIA GTC that it’s using the NVIDIA DRIVE Orin centralized compute platform to power an even wider range of vehicles within its mainstream Dynasty and Ocean series of NEVs. This comes hot on the heels of BYD’s recent announcement that it’s working to bring the NVIDIA GeForce NOW cloud gaming service to its vehicles to further enhance the in-car experience. DeepRoute.ai, a developer of production-ready autonomous driving solutions, has launched its Driver 3.0 HD Map-Free solution. Built on NVIDIA DRIVE Orin, this product is designed to offer a non-geo-fenced solution for mass-produced advanced driver-assistance system (ADAS) vehicles, and will be available at the end of the year. By using the computational power of the automotive-grade DRIVE Orin system-on-a-chip, which delivers 254 trillion operations per second (TOPS) of compute performance, DeepRoute’s HD Map-Free solution promises to accelerate deployment of driver-assistance functions to consumer cars and robotaxis. Plus, Pony.ai announced that its autonomous-driving domain controller (ADC), powered by NVIDIA DRIVE, will be deployed for large-scale commercial use in autonomous-delivery vehicles for Beijing-based companies Meituan and Neolix. With NVIDIA DRIVE Orin as the AI brain of their driverless vehicles, Meituan and Neolix are well-positioned to fulfill growing consumer demand for safe, scalable autonomous delivery of goods. Lenovo announced it is a tier-one manufacturer of a new ADC based on the next-generation NVIDIA DRIVE Thor centralized computer. Packed with up to 2,000 TOPS of performance, DRIVE Thor will power Lenovo’s ADC, which is set to become the company’s top-tier vehicle computing product line, with mass production expected in 2025. 
Rimac Technology, the engineering arm of Croatia-based Rimac Group, is working on a new central vehicle computer, or R-CVC, that will power ADAS, in-vehicle cockpit systems, the vehicle dynamics logic and the body and comfort software stack. NVIDIA DRIVE hardware and software will be used in this platform to accelerate Rimac Technology’s development efforts and enable its manufacturer customers to speed time to market, reduce development costs, streamline maintenance, and boost vehicle performance.

Rimac Technology’s central vehicle computer.

New premium intelligent all-electric auto brand smart is now developing next-generation intelligent mobility solutions with NVIDIA. The startup will build its future all-electric portfolio using the NVIDIA DRIVE Orin platform to create a “smarter” urban mobility experience for its global customers. The start of vehicle production is expected by the end of 2024. In addition, smart will collaborate with NVIDIA to build a dedicated data center for the development of highly advanced assisted-driving and AI systems to explore cutting-edge mobility solutions.

Changing the Rules of the Road

The transportation industry is undergoing a revolution, and NVIDIA is leading the charge with its game-changing DRIVE end-to-end platform, which is transforming the way mobility leaders are building advanced driving systems. NVIDIA’s dedication to safer, smarter and more enjoyable in-vehicle experiences is core to all aspects of its DRIVE platform, from the ability to train AI in the data center to delivering high-performance centralized compute in the car. The NVIDIA DRIVE AV and DRIVE IX software stacks enable custom applications, and the DRIVE Sim platform powered by Omniverse provides a comprehensive testing and validation platform for autonomous vehicles.

Learn more about the latest technology breakthroughs in automotive and other industries by watching NVIDIA founder and CEO Jensen Huang’s GTC keynote: View the full article
  17. Mitsui & Co., Ltd., one of Japan’s largest business conglomerates, is collaborating with NVIDIA on Tokyo-1 — an initiative to supercharge the nation’s pharmaceutical leaders with technology, including high-resolution molecular dynamics simulations and generative AI models for drug discovery. Announced today at the NVIDIA GTC global AI conference, the Tokyo-1 project features an NVIDIA DGX AI supercomputer that will be accessible to Japan’s pharma companies and startups. The effort is poised to accelerate Japan’s $100 billion pharma industry, the world’s third largest following the U.S. and China. “Japanese pharma companies are experts in wet lab research, but they have not yet taken advantage of high performance computing and AI on a large scale,” said Yuhi Abe, general manager of the digital healthcare business department at Mitsui. “With Tokyo-1, we are creating an innovation hub that will enable the pharma industry to transform the landscape with state-of-the-art tools for AI-accelerated drug discovery.” The project will provide customers with access to NVIDIA DGX H100 nodes supporting molecular dynamics simulations, large language model training, quantum chemistry, generative AI models that create novel molecular structures for potential drugs, and more. Tokyo-1 users can also harness large language models for chemistry, protein, DNA and RNA data formats through the NVIDIA BioNeMo drug discovery software and service. Xeureka, a Mitsui subsidiary focused on AI-powered drug discovery, will be operating Tokyo-1, which is expected to go online later this year. The initiative will also include workshops and technical training on accelerated computing and AI for drug discovery. Invigorating Drug Discovery Research With AI, HPC According to Abe, Japan’s pharmaceutical environment has long faced drug lag: delays in both drug development and the approval of treatments that are already available elsewhere. 
The problem received renewed attention during the race to develop vaccines during the COVID-19 pandemic. The nation’s pharmaceutical companies see AI adoption as part of the solution — a key tool to strengthen and accelerate the industry’s drug development pipeline.

Training and fine-tuning AI models for drug discovery require enormous compute resources, such as the Tokyo-1 supercomputer, which in its first iteration will include 16 NVIDIA DGX H100 systems, each with eight NVIDIA H100 Tensor Core GPUs. The DGX H100 is based on the powerful NVIDIA Hopper GPU architecture, which features a Transformer Engine designed to accelerate the training of transformer models, including generative AI models for biology and chemistry. Xeureka plans to add more nodes to the system as the project grows.

“Tokyo-1 is designed to address some of the barriers to implementing data-driven, AI-accelerated drug discovery in Japan,” said Hiroki Makiguchi, product engineering manager in the science and technology division at Xeureka. “This initiative will uplevel the Japanese pharmaceutical industry with high performance computing and unlock the potential of generative AI to discover new therapies.”

Customers will be able to access a dedicated server on the supercomputer, receive technical support from Xeureka and NVIDIA, and participate in workshops from the two companies. For larger training runs that require more computational resources, customers can request access to a server with more nodes. Users can also purchase Xeureka’s software solutions for molecular dynamics, docking, quantum chemistry and free-energy perturbation calculations.

By using NVIDIA BioNeMo software on the Tokyo-1 supercomputer, researchers will be able to scale state-of-the-art AI models to millions and billions of parameters for applications including protein structure prediction, small molecule generation and pose estimation.
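To put the first iteration's scale in rough perspective, here is a hypothetical back-of-envelope sketch (not from the announcement). The 16-system and eight-GPUs-per-DGX figures come from the article; the 80 GB of memory per H100 and FP16 weights at two bytes per parameter are general hardware and mixed-precision assumptions used purely for illustration:

```python
# Back-of-envelope sizing for the Tokyo-1 system described above.
# From the article: 16 DGX H100 systems, 8 H100 GPUs each.
# Assumptions (not from the article): 80 GB of memory per H100,
# FP16 weights at 2 bytes per parameter.

DGX_SYSTEMS = 16
GPUS_PER_DGX = 8
GPU_MEMORY_GB = 80          # assumed per-GPU memory capacity
BYTES_PER_PARAM_FP16 = 2    # half-precision weights

total_gpus = DGX_SYSTEMS * GPUS_PER_DGX
print(f"GPUs in the first iteration: {total_gpus}")  # 128

# Largest model whose FP16 weights alone fit in one GPU's memory.
# (Training needs several times more, for gradients and optimizer state.)
max_params_one_gpu = GPU_MEMORY_GB * 1024**3 // BYTES_PER_PARAM_FP16
print(f"FP16 parameters fitting on one GPU: ~{max_params_one_gpu / 1e9:.0f}B")
```

Under these assumptions, the weights of a multi-billion-parameter model fit comfortably on a single GPU, which is why the "millions and billions of parameters" scaling mentioned below leans on multi-node training rather than raw memory alone.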
Tokyo-1 Accelerates Japanese Companies in Pharma and Beyond

Major Japanese pharma companies including Astellas Pharma, Daiichi Sankyo and Ono Pharmaceutical are already making plans to advance their drug discovery projects with Tokyo-1.

Tokyo-based Astellas Pharma is pursuing innovative digital solutions across its business — including in sales, manufacturing, and research and development — to maximize outcomes for patients and reduce the costs of healthcare. With Tokyo-1, the company will accelerate its research with molecular simulations and large language models for generative AI through NVIDIA BioNeMo software.

“AI and large-scale simulations can be used for applications including small molecule compounds, antibodies, gene therapy, cell therapy, targeted protein degradation, engineered phage therapy and mRNA medicine,” said Kazuhisa Tsunoyama, head of digital research solutions, advanced informatics and analytics at Astellas. “By enabling us to take full advantage of recent advances in AI and simulation technology, Tokyo-1 will be one of the foundations on which Astellas can achieve its VISION for the future of pharmaceutical research.”

Tokyo-based Daiichi Sankyo will use Tokyo-1 to establish a drug discovery process that fully integrates AI and machine learning. “By adopting AI and the cutting-edge GPU resources of Tokyo-1, we will be able to perform large-scale computations to accelerate our drug discovery efforts,” said Takayuki Serizawa, senior researcher at Daiichi Sankyo. “These advancements will provide new value to patients by improving drug delivery and potentially enabling personalized medicine.”

Ono Pharmaceutical, based in Osaka, focuses on drug discovery in the fields of oncology, immunology and neurology.
“Training AI models requires significant computational power, and we believe that the massive GPU resources of Tokyo-1 will solve this problem,” said Hiromu Egashira, director of the Drug Discovery DX Office in the drug discovery technology department at Ono. “We envision our use of the DGX supercomputer to be very broad, including high-quality simulations, image analysis, video analysis and language models.” Beyond the pharmaceutical industry, Mitsui plans to make the Tokyo-1 supercomputer accessible to Japanese medical-device companies and startups — and to connect Tokyo-1 customers to AI solutions developed by global healthcare startups in the NVIDIA Inception program. NVIDIA will also connect Tokyo-1 users with the hundreds of global life science customers in its developer network. Discover the latest in AI and healthcare at GTC, running online through Thursday, March 23. Registration is free. Watch the GTC keynote address by NVIDIA founder and CEO Jensen Huang below: View the full article
  18. Digitalization that combines AI and simulation is redefining how industrial products are created and transforming how people interact with the digital world. To help enterprises tackle complex new workloads, NVIDIA has unveiled the third generation of its NVIDIA OVX computing system. OVX is designed to power large-scale digital twins built on NVIDIA Omniverse Enterprise, a platform for creating and operating metaverse applications. The latest OVX system provides the breakthrough graphics and AI required to accelerate massive digital twin simulations and other demanding applications by combining NVIDIA BlueField-3 DPUs with NVIDIA L40 GPUs, ConnectX-7 SmartNICs and the NVIDIA Spectrum Ethernet platform. Some of the world’s largest systems makers will be bringing the latest OVX systems to customers worldwide later this year, providing enterprises with the technology to handle complex manufacturing, design and Omniverse-based workloads. Businesses can take advantage of the real-time, true-to-reality capabilities of OVX to collaborate on the most challenging visualization, virtual workstation and data center processing workflows. Reimagining Digital Twin Simulation Customers using third-generation OVX systems can speed their workflows and optimize simulations through immersive digital twins used to model factories, cities, autonomous vehicles and more before deployment in the real world. This helps maximize operational efficiency and predictive planning capabilities. For example, DB Netze’s Digitale Schiene Deutschland is leveraging the capabilities of OVX to power large-scale digital twins of dynamic physical systems, including rail networks. Others, like Jaguar Land Rover, are leveraging the graphics and simulation capabilities of OVX systems in conjunction with the NVIDIA DRIVE Sim platform to accelerate the testing and development of next-generation autonomous vehicles. 
Next-Generation Platform Features The third generation of OVX features a new architecture, with a server design based on a dual-CPU platform with four NVIDIA L40 GPUs. Based on the Ada Lovelace architecture, the L40 GPU delivers revolutionary neural graphics, AI compute and the performance needed for the most demanding Omniverse workloads. Each OVX server also includes two high-performance ConnectX-7 SmartNICs to enable multi-node scalability and precise time synchronization. The Ethernet adapters enable the multi-node scalability of OVX systems and provide networking capabilities for the low-latency, high-bandwidth communication that globally dispersed teams need. New with this generation, the BlueField-3 data processing unit offloads, accelerates and isolates CPU-intensive infrastructure tasks. For deploying Omniverse at data center scale, BlueField-3 DPUs provide a secure foundation for running the data center control-plane, enabling higher performance, limitless scaling, zero-trust security and better economics. Helping users keep up with networking performance, the accelerated NVIDIA Spectrum Ethernet platform provides high bandwidth and network synchronization to enhance real-time simulation capabilities. Availability In addition to original NVIDIA OVX partners Lenovo and Supermicro, third-generation OVX systems will be available later this year through Dell Technologies, GIGABYTE and QCT. NVIDIA is also working on Digital Twin as a Service offerings based on OVX with HPE Greenlake. To learn more about OVX, watch NVIDIA founder and CEO Jensen Huang’s GTC keynote. 
Register free for NVIDIA GTC, a global AI conference, to attend sessions with NVIDIA and industry leaders:

- Building a Digital Twin of the German Rail Network to Deliver Next-Generation Railway Systems by Digitale Schiene Deutschland
- Optimizing Distribution and Fulfillment Center Operations with Computer Vision and Digital Twins by PepsiCo
- Connect With the Experts: How to Build a Digital Twin in Omniverse
- Hit the Ground Running With Data Center Digital Twin Automation

View the full article
  19. Healthcare enterprises globally are working with NVIDIA to drive AI-accelerated solutions that are detecting diseases earlier from medical images, delivering critical insights to care teams and revolutionizing drug discovery workflows. NVIDIA Clara, a suite of software and services that powers AI healthcare solutions, is enabling this transformation industry-wide. The Clara ecosystem includes BioNeMo for drug discovery, Holoscan for medical devices, Parabricks for genomics and MONAI for medical imaging.

Using NVIDIA Clara, healthcare researchers and companies have recently achieved milestones including generating blueprints for two novel proteins with BioNeMo, conducting a first-of-its-kind surgery with Holoscan, and deploying MONAI-powered solutions in radiology departments.

BioNeMo Enables Generative AI for Drug Discovery

Traditional drug discovery is a time- and resource-intensive process. Many drugs take more than a decade to go to market, with an average drug candidate success rate of just 10%. Generative AI, which makes use of large language models, can help increase the chances of success in less time and at lower cost.

Just as the large language models behind services like ChatGPT can generate text, generative AI models trained on biomolecular data can generate blueprints for new molecules and proteins, a critical step in drug discovery. NVIDIA BioNeMo is a cloud service for generative AI in biology, offering a variety of AI models for small molecules and proteins. With BioNeMo, pharmaceutical research and industry professionals can use generative AI to accelerate the identification and optimization of new drug candidates.

Startup Evozyne used NVIDIA BioNeMo for AI protein identification to engineer new proteins with enhanced functionality. A joint paper describes the engineered proteins — one to potentially be used for treating disease and another designed for carbon consumption.
Deloitte is using AI models ESM and OpenFold in BioNeMo for its AI drug discovery platform for 3D protein structure prediction, model rank classification and druggable region prediction. NVIDIA Inception member Innophore uses BioNeMo with its product Cavitomix, a tool that allows users to analyze protein cavities from any input structure. PyTorch-based AI model OpenFold is accelerated up to 6x in BioNeMo, resulting in lightning-fast 3D protein structure prediction from linear amino acid sequences.

Holoscan Powers Real-Time AI in Medical Devices

Millions of medical devices are used every day across hospitals to enable robot-assisted surgery, radiation therapy, CT scans and more. NVIDIA Holoscan — a scalable, software-defined AI computing platform for processing real-time data at the edge — accelerates these devices to deliver the low-latency inference required for AI in a clinical setting.

In a landmark step, doctors at Belgium-based surgical training center ORSI Academy brought NVIDIA Holoscan into the operating room to support real-world, robot-assisted surgery for the first time. At Onze-Lieve-Vrouw Hospital, urologists trained at ORSI successfully removed the patient’s kidney using Intuitive’s da Vinci robotic-assisted surgical system, with the help of an augmented reality overlay of the patient’s anatomy from a CT scan, rendered in real time and AI-augmented with Holoscan. The video feed overlay allowed the surgeon to clearly view the patient’s vascular and tissue structures that may have been obstructed from view by the surgical instruments used during the procedure.

ORSI Academy surgeons interact with NVIDIA Holoscan in the operating room. Image courtesy of ORSI Academy.

Parabricks Accelerates Genomics for Precision Medicine

Accelerating genomic sequencing, the process of determining the genetic makeup of a specific organism or cell type, is critical to unlocking the full potential of precision medicine.
NVIDIA Parabricks is a suite of AI-accelerated genomic analysis applications that enhances the speed and accuracy of the entire sequencing process, from gathering genetic data to analyzing and reporting it. A whole genome can be analyzed in 16 minutes vs. about 24 hours on CPU, meaning that around 32,000 genomes can be analyzed in a year on a single server. Accessible from either the genomics instrument itself or through cloud services, Parabricks allows for flexible, scalable and efficient genomics analysis that can lead to more accurate diagnoses and tailored treatments. Form Bio has recently integrated NVIDIA Parabricks into its computational life sciences platform, resulting in a 52% reduction in overall costs and an over 80x speedup, enabling life sciences professionals to accelerate whole genome sequence analysis. PacBio began shipping its Revio system, a long-read sequencer designed to deliver accurate, complete genomes at high throughput. With on-board NVIDIA GPUs, Revio has 20x more computing power than prior PacBio systems. The compute is used to handle the increased scale and to utilize advanced AI models for basecalling and methylation analysis. For spatial biology workflows, Nanostring is using NVIDIA technology in its CosMx instrument to power 5-20x faster cell segmentation. MONAI Helps to Build and Deploy Medical AI Accurate, detailed processing of medical images is crucial for precise diagnosis. MONAI, a medical imaging AI framework accelerated by NVIDIA, simplifies the creation of healthcare AI applications that can label and analyze medical images. MONAI recently surpassed 1 million downloads, solidifying its position as an industry-standard tool for healthcare AI developers. MONAI MAPs streamline the deployment of AI models created with the framework as applications that integrate within healthcare workflows and medical software ecosystems. Biomedical research data platform Flywheel is incorporating MONAI in its offerings. 
In collaboration with the University of Wisconsin Radiology Department, Flywheel has used MONAI to develop a model-based image classifier that predicts and labels the body regions present in medical images. The AI application speeds up data preparation from up to eight months to just one day.

MLOps platform Weights & Biases is bringing MONAI to Cincinnati Children’s Hospital, providing AI researchers there with a full suite of tools to train and tune computer vision algorithms for AI-assisted object detection to aid diagnosis.

AI Available Anytime, Anywhere

With the vast applications and impact of AI in healthcare, strategic implementation of the technology is essential. NVIDIA Clara is reaching developers wherever they are, however it’s needed, through global systems integrators, original design manufacturers, cloud platforms and more.

Bringing AI to a global network: Global systems integrator Deloitte is helping solution providers around the world bring NVIDIA Clara to the healthcare ecosystem. With access to Clara, Deloitte’s professionals are leveraging MONAI for medical imaging, NVIDIA FLARE for federated learning and BioNeMo for drug discovery to develop innovative solutions for customers across the industry.

AI solutions expertise: Service delivery partner Quantiphi consults with clients on AI solutions using its expertise in NVIDIA healthcare software, including Clara Discovery, MONAI, BioMegatron and BioNeMo.

Managing data in the cloud: MONAI has been integrated with all major cloud hyperscalers, allowing for optimized processing and data sharing in a single environment. NVIDIA Parabricks is available in every public cloud and on genomics-specific cloud platforms, including the Terra cloud platform, which is co-developed by the Broad Institute of MIT and Harvard, Microsoft and Verily and has more than 25,000 users.
Software-defined devices: System builder Advantech is adopting NVIDIA IGX, an industrial-grade edge AI platform, for low-latency, real-time healthcare applications in its all-in-one, medical-grade computers.

Discover the latest in AI and healthcare at GTC, running online through Thursday, March 23. Registration is free.

Watch the GTC keynote address by NVIDIA founder and CEO Jensen Huang below:

View the full article
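The Parabricks throughput figures quoted in the article above are easy to sanity-check: at 16 minutes per whole genome, a single server lands right at the "around 32,000 genomes per year" claim. A minimal sketch, using only the article's 16-minute and 24-hour figures plus my own assumption of continuous utilization:

```python
# Sanity-check the Parabricks throughput claim: a whole genome in
# 16 minutes vs. ~24 hours on CPU. Assumes the server runs nonstop.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def genomes_per_year(minutes_per_genome: float) -> int:
    """Whole genomes analyzable per year at full utilization."""
    return int(MINUTES_PER_YEAR // minutes_per_genome)

gpu_rate = genomes_per_year(16)       # accelerated with Parabricks
cpu_rate = genomes_per_year(24 * 60)  # CPU-only baseline

print(gpu_rate)  # 32850 -> consistent with "around 32,000"
print(cpu_rate)  # 365
```

The implied per-genome speedup is (24 × 60) / 16 = 90x, which is why a single accelerated server can replace a rack of CPU-only machines for this workload.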
  20. Powerful AI technologies are making a massive impact in 3D content creation and game development. Whether creating realistic characters that show emotion or turning simple text prompts into imagery, AI tools are becoming fundamental to developer workflows — and this is just the start.

At NVIDIA GTC and the Game Developers Conference (GDC), learn how the NVIDIA Omniverse platform for creating and operating metaverse applications is expanding with new Connectors and generative AI services for game developers.

Much of the excitement around generative AI stems from its ability to capture the creator’s intent. The technology learns the underlying patterns and structures of data, then uses them to generate new content, such as images, audio, code, text, 3D models and more.

Announced today, the NVIDIA AI Foundations cloud services enable users to build, refine and operate custom large language models (LLMs) and generative AI models trained with their proprietary data for domain-specific tasks. And through NVIDIA Omniverse, developers can get their first taste of using generative AI technology to enhance game creation and accelerate development pipelines with the Omniverse Audio2Face app.

Accelerating 3D Content With Generative AI

Specialized generative AI tools can boost creator productivity, even for users who don’t have extensive technical skills. Anyone can use generative AI to bring their creative ideas to life, producing high-quality, highly iterative experiences — all in a fraction of the time and cost of traditional game development.

For example, NVIDIA Omniverse Avatar Cloud Engine (ACE) offers the fastest, most versatile solution for bringing interactive avatars to life at scale. Game developers can leverage ACE to seamlessly integrate NVIDIA AI into their applications, including NVIDIA Riva for creating expressive character voices using speech and translation AI, and Omniverse Audio2Face and Live Portrait for AI-powered 3D and 2D character animation.
Today, game developers are already taking advantage of Audio2Face, which lets artists animate secondary characters more efficiently, without a tedious manual process. The app’s latest release brings major quality, usability and performance updates, including headless mode and a REST API — enabling developers to run the app and process numerous audio files from multiple users in the data center.

Mandarin Chinese language support can now be previewed in Audio2Face, along with improved lip-sync quality, more robust multi-language support and a new pretrained female model. The world’s first fully real-time, ray-traced subsurface scattering shader is also demonstrated with Diana, a new digital human model.

GSC Game World, one of Europe’s leading game developers, is adopting Omniverse Audio2Face in its upcoming game, S.T.A.L.K.E.R. 2: Heart of Chornobyl. Join the NVIDIA and GSC session at GDC to learn how developers are implementing generative AI technology in Omniverse.

A scene from “S.T.A.L.K.E.R. 2: Heart of Chornobyl.”

Fallen Leaf, an indie game developer, is also using Omniverse Audio2Face for character facial animation in Fort Solis, a third-person sci-fi thriller game that takes place on Mars.

New generative AI services such as NVIDIA Picasso, announced at GTC, preview the future of building and deploying assets for game production pipelines. Omniverse is opening portals to enrich workflows with generative AI tools powered by NVIDIA and its partners, and the momentum around unifying the game asset pipeline is growing.

Unifying Game Asset Pipelines With Universal Scene Description

Based on the Universal Scene Description (USD) framework, NVIDIA Omniverse is the connecting fabric that helps creators and developers build interoperability between their favorite tools — like Autodesk Maya, Autodesk 3ds Max and Adobe Substance 3D Painter — or make their own custom applications.
And with USD — an open, extensible framework and ecosystem for composing, simulating and collaborating within 3D worlds — developers can achieve non-destructive, collaborative workflows when creating scenes, as well as simplify asset aggregation so content creation teams can iterate faster.

Image courtesy of Tencent Games.

Tencent Games is adopting USD workflows based on Omniverse to better streamline its content creation pipelines. To create vast worlds for every level of a game, the artists at Tencent use design tools such as Autodesk Maya, SideFX Houdini and Unreal Engine to produce up to millions of trees, buildings and other properties to enrich their scenes.

The technical artists often look to optimize their content creation pipelines to speed up this process, so they developed a proprietary Unreal Engine workflow powered by OmniObjects. With USD, Tencent Games’ teams saw the opportunity to easily streamline and seamlessly connect their workflows.

Building on Omniverse as the platform for developing USD workflows, the artists at Tencent no longer need to install a plug-in for each piece of software they use. A single USD plug-in enables interoperability across all their favorite software tools. Learn more about Tencent Games by joining this session at GDC.

New and updated Omniverse Connectors for game engines are also now available. The open-beta Omniverse Connector for Unity workflows helps users of Omniverse and Unity collaborate on projects. Developed by NVIDIA, the Connector delivers USD support alongside Unity workflows, enabling Unity users to take advantage of interoperable workflows. It offers Omniverse Nucleus connection and browsing, USD geometry export, lights, cameras, Material Definition Language and a preview for USD materials. Early features also include physics export, USD import and unidirectional live sync.
And with the Unreal Engine Connector’s latest release, Omniverse users can now use Unreal Engine’s USD import utilities to add skeletal mesh blend shape importing, and Python USD bindings to access stages on Omniverse Nucleus. The latest release also delivers improvements in import, export and live workflows, as well as updated software development kits.

Learn more about these latest technologies by joining NVIDIA at GDC. And catch up on all the groundbreaking announcements in generative AI and the metaverse by watching the NVIDIA GTC keynote.

Follow NVIDIA Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.

View the full article
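The non-destructive layering the article describes can be seen in a small USD text-format example. This is a hedged illustration (the file and prim names are hypothetical, not from the article): one artist owns an asset layer, ball.usda, while the scene layer below references it and overrides a single attribute, without ever editing the original file. This is the mechanism that lets tools connected through Omniverse work on the same scene without clobbering each other.

```usda
#usda 1.0
(
    doc = """scene.usda (hypothetical): references the asset layer
             ball.usda and overrides one attribute non-destructively."""
)

def Xform "World"
{
    def "RedBall" (
        prepend references = @./ball.usda@</Ball>
    )
    {
        over "Geom"
        {
            double radius = 2
        }
    }
}
```

Opened in any USD-aware tool, /World/RedBall resolves to the referenced sphere with the stronger local opinion radius = 2, while ball.usda itself remains unchanged for everyone else using it.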
  21. BMW Group is at the forefront of a key new manufacturing trend — going digital-first by using the virtual world to optimize layouts, robotics and logistics systems years before production actually starts.

The automaker announced with NVIDIA today at GTC that it’s expanding its use of the NVIDIA Omniverse platform for building and operating industrial metaverse applications across its production network around the world, including at the planned electric vehicle plant in Debrecen, Hungary, which won’t begin operations until 2025.

In his GTC keynote, NVIDIA founder and CEO Jensen Huang shared a demo in which he was joined by Milan Nedeljković, member of BMW Group’s board of management, to officially open the automaker’s first entirely virtual factory, powered by NVIDIA Omniverse.

“We are excited and incredibly proud of the progress BMW has made with Omniverse. The partnership will continue to push the frontiers of virtual integration and virtual tooling for the next generation of smart-connected factories around the world,” Huang said during the GTC keynote.

Omniverse — the culmination of over 25 years of NVIDIA graphics, accelerated computing, simulation and AI technologies — enables manufacturing companies to plan and optimize multibillion-dollar factory projects entirely virtually. This means they can get to production faster and operate more efficiently, improving time to market, digitalization and sustainability.

The keynote demo highlights a virtual planning session for BMW’s Debrecen EV plant. With Omniverse, the BMW team can aggregate data into massive, high-performance models, connect their domain-specific software tools and enable multi-user live collaboration across locations — all from any location, on any device. Starting work in the virtual factory two years before it opens enables the BMW Group to ensure smooth operation and optimal efficiency.
Virtual Integration for Real-World Efficiencies

BMW Group’s virtual Debrecen plant illustrates the power and agility of planning AI-driven industrial manufacturing plants with the Omniverse platform. In the EV factory demo, Nedeljković invites Huang into a planning session in which the BMW team needs to fit a robot into a constrained floor space. The team solves the problem on the fly, with logistics and production planners able to visualize options and decide on the ideal placement.

“This is transformative — we can design, build and test completely in a virtual world,” said Nedeljković.

It’s a lens into the future of BMW Group’s digital transformation journey, and a blueprint for reducing risks and ensuring success before committing to massive construction projects and capital expenditures. This kind of digital transformation pays off: change orders and flow reoptimizations on existing facilities are extremely costly and cause production downtime, so the ability to pre-optimize virtually eliminates such costs.

BMW Group Transforming Production Worldwide

BMW Group’s production network is poised to benefit from the digital transformation opportunities brought by Omniverse. With factories and factory planners all over the world, BMW has a complex planning process. The automaker uses many software tools and processes to connect people across geographies and time zones, which comes with limitations.

With Omniverse — a development platform based on Universal Scene Description (USD), a 3D framework that creates interoperability between software suites — BMW is able to bridge existing software and data repositories from leading industrial computer-aided design and engineering tools such as Siemens Process Simulate, Autodesk Revit and Bentley Systems MicroStation. With this unified view, BMW is empowering its internal teams and external partners to collaborate and to share knowledge and data from existing factories to help plan new ones.
Additionally, the BMW team is developing a suite of custom applications with Omniverse, including Factory Explorer, which is based on Omniverse USD Composer, a customizable foundation application of the Omniverse platform. BMW used core components of USD Composer and added custom-built extensions tailored to its factory-planning teams’ needs, including finding, constructing, navigating and analyzing factory data.

Omniverse Platform Accelerates Digital Twin Collaboration

The Omniverse platform enables BMW teams to collaborate across virtual factories from anywhere. A unified approach to data, allowing global changes in real time, lets BMW share updates across its teams. With these new capabilities, BMW can now validate and test entirely in a virtual world, accelerating its time to production and improving efficiency across all of its plants.

To learn more about the latest in digitalization, watch NVIDIA founder and CEO Jensen Huang’s GTC keynote and these sessions featuring speakers from BMW:

Are We There Yet? A Status Check on the Industrial Metaverse
Scalable Quantum Simulators for Industry Problems
Data-Driven AV Development: Data Management and MLOps

Learn more about NVIDIA Omniverse

View the full article
  22. Developers and creators can better realize the massive potential of generative AI, simulation and the industrial metaverse with new Omniverse Connectors and other updates to NVIDIA Omniverse, a platform for creating and operating metaverse applications.

Omniverse Cloud, a platform-as-a-service unveiled today at NVIDIA GTC, equips users with a range of simulation and generative AI capabilities to easily build and deploy industrial metaverse applications. New Omniverse Connectors and applications developed by third parties enable enterprises across the globe to push the limits of industrial digitalization.

Omniverse Ecosystem Expansion

Omniverse enhances how developers and professionals create, design and deploy massive virtual worlds, AI-powered digital humans and 3D assets. Its newest additions include:

New Omniverse Connectors: Elevating connected workflows, new Omniverse Connectors for the Siemens Xcelerator portfolio — including Siemens Teamcenter, Siemens NX and Siemens Process Simulate — as well as Blender, Cesium, Emulate3D by Rockwell Automation, Unity and Vectorworks are now available, linking more of the world’s most advanced applications through the Universal Scene Description (USD) framework. Azure Digital Twins, Blackshark.ai, FlexSim and NavVis Omniverse Connectors are coming soon.

SimReady 3D assets: Over 1,000 new SimReady assets enable easier AI and industrial 3D workflows. KUKA, a leading supplier of intelligent automation solutions, is working with NVIDIA and evaluating adoption of the new SimReady specifications to make customer simulation easier than ever.

Synthetic data generation: Lexset and Siemens SynthAI are both using the Omniverse Replicator software development kit to enable computer-vision-aided industrial inspection. Datagen and Synthesis AI are using the SDK to create synthetic digital humans for AI training.
And Deloitte is providing synthetic data generation services using Omniverse Replicator for customers across domains ranging from manufacturing to telecom.

Available now is LumenRT for NVIDIA Omniverse, developed by Bentley Systems, which enables automatic synchronized changes to visualization workflows for infrastructure digital twins, along with applications developed by SyncTwin.

Also available now is Aireal’s OmniStream, a web-embeddable, cloud-based extended reality digital twin platform that allows builders to give photorealistic 3D virtual tours to their buyers. Aireal’s Spaces, a visualization tool that enables automatic generation of home interior designs, is coming soon.

And the disguise platform will now integrate with NVIDIA Omniverse, connecting the virtual production pipeline to allow for easier, quicker changes, enhanced content creation and improved media and entertainment workflows.

Run Omniverse Everywhere

NVIDIA also introduced systems and services making Omniverse more powerful and easier to access. Next-generation NVIDIA RTX workstations are powered by NVIDIA Ada Lovelace GPUs, NVIDIA ConnectX-6 Dx SmartNICs and Intel Xeon processors. The newly announced RTX 5000 Ada generation laptop GPU enables professionals to access Omniverse and industrial metaverse workloads in the office, at home or on the go.

Plus, NVIDIA introduced the third generation of OVX, a computing system for large-scale digital twins running within NVIDIA Omniverse Enterprise, powered by NVIDIA L40 GPUs and BlueField-3 DPUs.

Omniverse Cloud will be available to global automotive companies, enabling them to realize digitalization across their industrial lifecycles from start to finish. Microsoft Azure is the first global cloud service provider to deploy the platform-as-a-service. Learn more about Omniverse Cloud in the demo and our press release.
Customers Driving Innovation in Omniverse

Hundreds of enterprises are using Omniverse to transform their industrial lifecycles through digitalization, improving how their teams design, develop and deploy operations. In his GTC keynote, NVIDIA founder and CEO Jensen Huang showcased how Lucid Motors is tapping Omniverse and USD workflows for its automotive digitalization projects. He also highlighted BMW Group’s use of Omniverse to build and deploy its upcoming electric vehicle factory in Debrecen, Hungary.

Core Updates Coming to Omniverse

Huang also gave a preview of the next Omniverse release, coming this spring, which includes updates to Omniverse apps that enable developers and enterprise customers to build on foundation applications to suit their specific workflows:

NVIDIA USD Composer (formerly Omniverse Create) — a customizable foundation application for designers and creators to assemble large-scale, USD-based datasets and compose industrial virtual worlds.

NVIDIA USD Presenter (formerly Omniverse View) — a customizable reference application for showcasing and reviewing USD projects interactively and collaboratively.

NVIDIA USD-GDN Publisher — a suite of cloud services that enables developers and service providers to easily build, publish and stream advanced, interactive, USD-based 3D experiences to nearly any device in any location.

Improved developer experience — The new public extension registry enables users to receive automated updates to extensions. New configurator templates and workflows, as well as an NVIDIA Warp kernel node for OmniGraph, will enable zero-friction developer workflows for GPU-based coding.

Next-level rendering and materials — Omniverse is offering for the first time a real-time, ray-traced subsurface-scattering shader, enabling unprecedented realism in skin for digital humans.
The latest update to Universal Material Mapper lets users seamlessly bring material libraries in from third-party applications, preserving material structure and full editing capability.

Groundbreaking performance — In a major development for massive large-scene performance, USD’s runtime data transfer technology provides an efficient method to store and move runtime data between modules. The scene optimizer lets users run optimizations at the USD level to convert large scenes into more lightweight representations for improved interaction.

AI training capabilities — Automatic domain randomization and population-based training make complex robotic training significantly easier for autonomous robotics development.

Generative AI — A new text-to-materials extension allows users to automatically generate high-quality materials solely from a text prompt. To accelerate the use of generative AI, updates within Omniverse also include text-to-materials and text-to-code generation tools. Additionally, updates to the Audio2Face app include headless mode, a REST application programming interface, improved lip-sync quality and more robust multi-language support, including for Mandarin. Developers can also use AI-generated inputs from technology such as ChatGPT to provide data to Omniverse extensions like Camera Studio, which generates and customizes cameras in Omniverse using data created in ChatGPT.

Register free for GTC, running through Thursday, March 23, to attend the GTC keynote and Omniverse sessions.

Get started with NVIDIA Omniverse by downloading the standard license free, or learn how Omniverse Enterprise can connect your team. Stay up to date on the platform by subscribing to the newsletter, and following NVIDIA Omniverse on Instagram, Medium and Twitter. For resources, check out our forums, Discord server, Twitch and YouTube channels.

View the full article
  23. NVIDIA announced today at GTC that Omniverse Cloud will be hosted on Microsoft Azure, increasing access to Isaac Sim, the company’s platform for developing and managing AI-based robots. The company also said that a full lineup of Jetson Orin modules is now available, offering a performance leap for edge AI and robotics applications.

“The world’s largest industries make physical things, but they want to build them digitally,” said NVIDIA founder and CEO Jensen Huang during the GTC keynote. “Omniverse is a platform for industrial digitalization that bridges digital and physical.”

Isaac Sim on Omniverse Enterprise for Virtual Simulations

Building robots in the real world requires creating datasets from scratch, which is time consuming, expensive and slows deployments. That’s why developers are turning to synthetic data generation (SDG), pretrained AI models, transfer learning and robotics simulation to drive down costs and accelerate deployment timelines.

The Omniverse Cloud platform-as-a-service, which runs on NVIDIA OVX servers, puts advanced capabilities into the hands of Azure developers everywhere. It enables enterprises to scale robotics simulation workloads, such as SDG, and provides continuous integration and continuous delivery so that DevOps teams can work in a shared repository on code changes while working with Isaac Sim.

Isaac Sim is a robotics simulation application and SDG tool that drives photorealistic, physically accurate virtual environments. Powered by the NVIDIA Omniverse platform, Isaac Sim enables global teams to collaborate remotely to build, train, simulate, validate and deploy robots. Making Isaac Sim accessible in the cloud allows teams to work together more effectively with access to the latest robotics tools and software development kits.
Omniverse Cloud gives enterprises another option in the cloud with Azure, alongside the existing methods of using Isaac Sim in self-managed containers, on virtual workstations or through fully managed services such as AWS RoboMaker. And with access to Omniverse Replicator, an SDG engine in Isaac Sim, engineers can build production-quality synthetic datasets to train robust deep learning perception models.

Amazon uses Omniverse to automate, optimize and plan its autonomous warehouses with digital twin simulations before deployment into the real world. With Isaac Sim, Amazon Robotics is also improving the capabilities of Proteus, its latest autonomous mobile robot (AMR). This helps the online retail giant fulfill thousands of orders in a cost- and time-efficient manner.

Working with automation company idealworks, BMW Group uses Isaac Sim in Omniverse to generate synthetic data and run scenarios for testing and training AMRs and factory robots.

NVIDIA is developing across the full spectrum of AI tools — from simulation in the cloud with Isaac Sim to computing at the edge with the Jetson platform — accelerating robotics adoption across industries.

Jetson Orin for Efficient, High-Performance Edge AI and Robotics

NVIDIA Jetson Orin-based modules are now available in production to support a complete range of edge AI and robotics applications. The lineup spans the Jetson Orin Nano — which provides up to 40 trillion operations per second (TOPS) of AI performance in the smallest Jetson module — up to the Jetson AGX Orin, delivering 275 TOPS for advanced autonomous machines.

The new Jetson Orin Nano Developer Kit delivers 80x the performance of the previous-generation Jetson Nano, enabling developers to run advanced transformer and robotics models.
And with 50x the performance per watt, developers getting started with Jetson Orin Nano modules can build and deploy power-efficient, entry-level AI-powered robots, smart drones, intelligent vision systems and more.

Application-specific frameworks like NVIDIA Isaac ROS and DeepStream, which run on the Jetson platform, are closely integrated with cloud-based frameworks like Isaac Sim on Omniverse and NVIDIA Metropolis. And using the latest NVIDIA TAO Toolkit to fine-tune pretrained AI models from the NVIDIA NGC catalog reduces time to deployment for developers.

More than 1 million developers and over 6,000 customers have chosen the NVIDIA Jetson platform, including Amazon Web Services, Canon, Cisco, Hyundai Robotics, JD.com, John Deere, Komatsu, Medtronic, Meituan, Microsoft Azure, Teradyne and TK Elevator.

Companies adopting the new Orin-based modules include Hyundai Doosan Infracore, Robotis, Seyeon Tech, Skydio, Trimble, Verdant and Zipline. More than 70 Jetson ecosystem partners are offering Orin-based solutions, with a wide range of support spanning hardware, AI software and application design services to sensors, connectivity and developer tools.

The full lineup of Jetson Orin-based production modules is now available, and the Jetson Orin Nano Developer Kit will start shipping in April.

Learn more about NVIDIA Isaac Sim, Jetson Orin, Omniverse Enterprise and Metropolis.

View the full article
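The synthetic data generation and domain randomization described in the article can be sketched in a few lines. The block below is a library-free, conceptual illustration under my own assumptions — the parameter names and ranges are hypothetical, and this is not the Omniverse Replicator or Isaac Sim API. Each sample draws randomized scene parameters (lighting, pose) that a renderer would consume to produce labeled training images with automatic variation.

```python
import random

# Hypothetical randomization ranges -- illustrative only.
RANGES = {
    "light_intensity": (500.0, 2000.0),  # lux
    "object_yaw_deg": (0.0, 360.0),
    "camera_height_m": (0.5, 2.0),
}

def randomize_scene(rng: random.Random) -> dict:
    """Draw one domain-randomized scene configuration."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in RANGES.items()}

def generate_dataset(n: int, seed: int = 0) -> list[dict]:
    """Produce n scene configs; a renderer would turn each into a
    labeled synthetic image for perception-model training."""
    rng = random.Random(seed)  # seeded so datasets are reproducible
    return [randomize_scene(rng) for _ in range(n)]

dataset = generate_dataset(1000)
print(len(dataset))
```

Seeding the generator is the key design choice: it makes a synthetic dataset a reproducible artifact that can be regenerated or extended on demand, rather than a one-off collection of images.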
  24. CCC Intelligent Solutions (CCC) has become the first company in the auto insurance industry to deliver an AI-powered repair estimating solution, called CCC Estimate – STP, short for straight-through processing. The Chicago-based auto-claims technology powerhouse uses AI, insurer-driven rules and CCC’s vast ecosystem to deliver repair estimates in seconds instead of days — a technological feat considering there are thousands of vehicle makes and models on the road, and countless repair permutations.

The company’s commitment to AI spans many years, with its first AI solutions hitting the market more than five years ago. Today, it’s working to bring AI and intelligent experiences to key facets of claims and mobility for its 30,000 customers, who process more than 16 million claims annually using CCC solutions.

“Our data scientists play a crucial role in creating new solutions, and the ability to build models, experiment and easily integrate the model into our AI workflows is key,” said Reza Rooholamini, chief scientific officer at CCC.

CCC has four decades of expertise in automotive claims and collects millions of unstructured and structured automotive-claim data points every year. The combination of industry experience and raw data, however, is just the starting point for CCC’s efforts. The company runs a 100% cloud production environment, providing customers with a flexible platform for continuous innovation.

As a market leader, CCC regularly reports on AI adoption among its customers to track progress. According to its 2023 AI Adoption report, more than 14 million unique claims were processed using CCC’s computer vision AI through 2022, and the company saw a 60% year-over-year increase in the application of advanced AI for claims processing. And AI isn’t just being used to process more claims — it’s informing more decisions across the entire claims management experience.
In fact, the number of claims processed with four or more of CCC’s AI applications has more than doubled year over year.

CCC has built an end-to-end hybrid-cloud AI development and training pipeline to support its continuous innovation. This infrastructure uses over 150 NVIDIA A100 Tensor Core GPUs, including NVIDIA DGX systems on premises and additional resources within NVIDIA DGX Cloud. The CCC development teams are using DGX Cloud to supplement on-prem capacity, support supercomputing demand spikes and accelerate AI model development overall.

“The AI pipeline we’ve built enables us to unleash all kinds of innovations,” said Neda Hantehzadeh, director of data science at CCC.

With 25-30% of its data scientists’ and engineering teams’ time dedicated to experimentation, coupled with massive datasets that grow each day, CCC needed a more scalable, hybrid, multi-cloud training environment.

Using its AI pipeline, CCC launched CCC Estimate – STP, which can deliver a detailed, line-level estimate of a collision repair cost, based on insurer rules, in seconds — using AI and just a few pictures of vehicle damage taken from a smartphone. Traditional methods can take several days. This saves time for adjusters, freeing them up for more complex work. The digitalized estimation process helps elevate the customer experience as well as lower processing costs, and it is currently being used by leading insurance companies across the U.S.

But the results are broader. Using the NVIDIA Base Command Platform, integrated with its development pipeline for training-job orchestration and data management, the CCC team has realized improved productivity: data scientists can run experiments 2x faster, which can mean more learnings, more innovation and faster solution development.

“We run some experiments on premises on NVIDIA DGX systems, but we may have spikes where we want to add, for example, 10 million more data points and do another run,” Hantehzadeh said.
“If we need additional capacity, we can switch to DGX Cloud. Base Command Platform makes this process seamless.”

CCC plans to continue taking its investment to the leading edge of AI development, injecting AI and STP into different channels and products across the property and casualty insurance economy.

Learn more about NVIDIA DGX Cloud and NVIDIA Base Command Platform.

View the full article
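The on-prem-plus-DGX-Cloud bursting pattern CCC describes boils down to a simple scheduling decision: fill fixed on-prem GPU capacity first, then send the overflow to the cloud. The sketch below is a generic illustration under my own assumptions — the capacity number, function name and job sizes are hypothetical, and in practice Base Command Platform handles this orchestration:

```python
# Hypothetical hybrid scheduler: prefer on-prem GPUs, burst to cloud on spikes.
ON_PREM_GPUS = 150  # e.g. A100s in on-prem DGX systems (figure from the article)

def place_jobs(job_gpu_demands: list[int]) -> dict[str, list[int]]:
    """Assign each job's GPU demand to on-prem capacity until it is
    exhausted, then burst the remainder to the cloud."""
    placement = {"on_prem": [], "cloud": []}
    free = ON_PREM_GPUS
    for demand in job_gpu_demands:
        if demand <= free:
            free -= demand
            placement["on_prem"].append(demand)
        else:
            placement["cloud"].append(demand)  # demand spike: burst out
    return placement

# A spike day: steady experiments plus one large rerun that no longer fits.
print(place_jobs([40, 64, 64, 32]))
```

With 150 on-prem GPUs, the first two jobs (40 + 64) fit locally, the third 64-GPU job bursts to the cloud, and the final 32-GPU job fits back on prem — the "seamless switch" Hantehzadeh describes, made automatic.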
  25. As a sports commentator for a professional lacrosse team, Grant Farhall knows the value of having the right teammates. As the chief product officer of Getty Images, a global visual-content creator and marketplace, he believes the collaboration between his company and NVIDIA is an excellent pairing for taking generative AI to the next level.

The companies aim to develop two generative AI models using NVIDIA Picasso, part of the new NVIDIA AI Foundations cloud services. Users could employ the models to create a custom image or video in seconds, simply by typing in a concept.

“With our high quality and often unique imagery and videos, this collaboration will give our customers the ability to create a greater variety of visuals than ever before, helping creatives and non-creatives alike fuel visual storytelling,” Farhall said.

Getty Images is a unique partner, not only for its stunning images and video, but also for its rich metadata with appropriate rights. Its creative team and research bring a wealth of expertise that can deliver impactful outputs.

For artists, generative AI adds a new tool that expands their canvas. For content creators, it’s an opportunity to create a custom visual tailored to a brand or business they’re building. “More often than not, it’s a visual that cuts through the noise of a busy world to capture your attention, and being able to stand out from the crowd is crucial for businesses of all shapes and sizes,” Farhall said.

Building Responsible AI

But, as in lacrosse, you need to play by the rules. The models will be trained on Getty Images’ fully licensed content, and revenue generated from the models will provide royalties to content creators. “Both companies want to develop these tools in a responsible way that returns benefits to creators and doesn’t pass risks on to customers, and this collaboration is testament to the fact that’s possible,” he said.

A Time-Tested Relationship

It’s not the first inning for this collaboration.
“We’ve been fostering and growing a relationship for some time — NVIDIA brings the tech expertise and talent, and we bring the high quality and unique content and marketplace,” said Farhall.

The technology, values and connections are catalysts for experiences that wow creators and users. It’s a feeling Farhall shares, sitting in front of his mic on a Saturday night. “There’s an adrenaline rush when the live action of a game becomes your singular focus and you’re just in the moment,” he said.

And by training a custom model with NVIDIA Picasso, Getty Images and NVIDIA aim to help storytellers everywhere create more moments that perfectly capture their audiences’ attention.

To learn more about what NVIDIA is doing in generative AI and beyond, watch company founder and CEO Jensen Huang’s GTC keynote below.

Image at top courtesy Roberto Moiola/Sysaworld/Getty Images.

View the full article