
All Activity


  1. Yesterday
  2. Garry’s Mod, a popular 2006 sandbox game that emerged from the modding scene around Valve’s Source engine, has recently been issued takedown notices by Nintendo. As a result, Facepunch Studios, the developers of Garry’s Mod, are in the process of removing about 20 years’ worth of Nintendo-related content from the… Read more... View the full article
  3. Fallout 76 isn’t like Bethesda’s previous open-world RPGs. While those single-player adventures were somewhat forgiving campaigns that let you build your character however you wanted, the multiplayer spin-off is full of bullet sponge-y enemies that require you to focus on a particular skill set in order to survive the… Read more... View the full article
  4. The past year or so has been a slow-moving trainwreck for Embracer Group, following the large company’s unprecedented studio acquisition spree. And, after announcing on April 22 that the company was planning to split itself up into three different companies, the CEO of Embracer is only filled with excuses as to why… Read more... View the full article
  5. Overwatch 2 outlined several new rules and penalties it would be implementing in season 10 to help curate a less hostile player environment on April 10. But since the season began last week, players have been reporting bans that would typically be applied to someone leaving a game early being implemented for seemingly… Read more... View the full article
  6. Stellar Blade, the new PS5-exclusive character action game, has tons of skills to unlock like its genre siblings Bayonetta and Devil May Cry. Across protagonist Eve’s five skill trees, some abilities increase your damage, stun your enemies, dodge attacks entirely, let you double jump, and more. It can be daunting,… Read more... View the full article
  7. Hello nolfified, Welcome to UnityHQ Nolfseries Community. Please feel free to browse around and get to know the others. If you have any questions, please don't hesitate to ask. nolfified joined on 04/24/2024. View Member
  8. So, you watched the Fallout TV show on Amazon, got excited about the franchise, maybe saw some folks playing Fallout 76, and decided that you were finally going to install Bethesda’s post-apocalyptic MMO and give it a real shot. But then you tried playing it on PC and discovered that it runs like garbage. Wait! Before… Read more... View the full article
  9. Final Fantasy XIV fans are having a debate over what the healer role should look like going forward. The MMO’s next expansion, Dawntrail, releases this summer and will bring with it a myriad of new features and changes to how the game will play. Recently a benchmark for the expansion was released which, on top of… Read more... View the full article
  11. Overwatch 2 has had a lot of bad news since it launched, such as the gutting of its PvE suite, and now the reported cancellation of its story missions. Every seemingly good thing that’s introduced, like a store to buy rare skins, comes with a caveat: in that case, egregiously high prices. The limited-time… Read more... View the full article
  12. Sand Land caught my attention as soon as I saw the game’s booth at New York City Comic Con last year, with its enormous recreation of the Royal Army tank. The game’s setting, character designs, and of course the tank, stuck with me, even though I wouldn’t consider myself familiar with much, if any, of Akira Toriyama’s… Read more... View the full article
  13. Fallout is bigger than ever right now, thanks in large part to the recent live-action Amazon show. The show’s success has led to millions of players hopping into past Fallout games. And yes, that includes the franchise’s oddball MMO, Fallout 76. Thanks to the show and 76’s inclusion in Game Pass, a lot of people are… Read more... View the full article
  14. In many ways, Stellar Blade is awesome. The action is awesome, the soundtrack is awesome, the world is awesome. However, when you peel back the curtain just a bit and peer behind all the flash and style, you find that Shift Up’s character action game can also be quite tedious. It’s a game of dichotomies, one that’s… Read more... View the full article
  15. A juxtaposition of environmental elements that, taken together, can be interpreted as an allusion to the n-word will be removed from sci-fi action game Stellar Blade in a day one patch, Sony confirmed today. The company told IGN the offensive reference was “unintentional.” Read more... View the full article
  16. The latest trailer for Deadpool & Wolverine gave audiences their first good look at the movie’s big bad, Cassandra Nova. She’s an X-Men character with a weird history in the comics and no obvious connection to Deadpool. So how does she fit into the merc with a mouth’s introduction into the MCU? The trailer doesn’t… Read more... View the full article
  17. To address the shift to electric vehicles, increased semiconductor demand, manufacturing onshoring, and ambitions for greater sustainability, manufacturers are investing in new factory developments and re-engineering their existing facilities. These projects often run over budget and schedule, due to complex and manual planning processes, legacy technology infrastructure, and disconnected tools, data and teams. To address these challenges, manufacturers are embracing digitalization and virtual factories, powered by technologies like digital twins, the Universal Scene Description (OpenUSD) ecosystem and generative AI, that enable new possibilities from planning to operations.

What Is a Virtual Factory?

A virtual factory is a physically accurate representation of a real factory. These digital twins of factories allow manufacturers to model, simulate, analyze and optimize their production processes, resources and operations without the need for a physical prototype or pilot plant.

Benefits of Virtual Factories

Virtual factories unlock many benefits and possibilities for manufacturers, including:

Streamlined Communication: Instead of teams relying on in-person meetings and static planning documents for project alignment, virtual factories streamline communication and ensure that critical design and operations decisions are informed by the most current data.

Contextualized Planning: During facility design, construction and commissioning, virtual factories allow project stakeholders to visualize designs in the context of the entire facility and production process. Planning and operations teams can compare and verify built structures with the virtual designs in real time and decrease costs by identifying errors and incorporating feedback early in the review process.

Optimized Facility Designs: Connecting virtual factories to simulations of processes and discrete events enables teams to optimize facility designs for production and material flow, ergonomic work design, safety and overall utilization.

Intelligent and Optimized Operations: Operations teams can integrate their virtual factories with valuable production data from Internet of Things technology at the edge, and tap AI to drive further optimizations.

Virtual Factories: A Testing Ground for AI and Robotics

Robotics developers are increasingly using virtual factories to train and test AI and autonomous systems that run in physical factories. For example, virtual factories can enable developers and manufacturing teams to simulate digital workers and autonomous mobile robots (AMRs), vision AI agents and sensors to create a centralized map of worker activity throughout a facility. By fusing data from simulated camera streams with multi-camera tracking, developers can generate occupancy maps that inform optimal AMR routes. Developers can also use these physically accurate virtual factories to train and test AI agents capable of managing their robot fleets, to ensure AI-enabled robots can adapt to real-world unpredictability and to identify streamlined configurations for human-robot collaboration.

What Are the Foundations of a Virtual Factory?

Building large-scale, physically accurate virtual factories that unlock these transformational possibilities requires bringing together many tools, data formats and technologies to harmonize the representation of real-world aspects in the digital world.
Originally invented by Pixar Animation Studios, OpenUSD encompasses a collection of tools and capabilities that enable the data interoperability developers and manufacturers require to achieve their digitalization goals. OpenUSD’s core superpower is flexible data modeling. 3D input can be accepted from source applications and combined with a variety of data, including from computer-aided design software, live sensors, documentation and maintenance records, through a unified data pipeline. OpenUSD enables developers to share these data types across different simulation tools and AI models, providing insights for all stakeholders. Data can be synced from the factory floor to the digital twin, surfacing real-time insights for factory managers and teams. By developing virtual factory solutions on OpenUSD, developers can enhance collaboration for factory teams, allowing them to review plans, discuss optimization opportunities and make decisions in real time. (A minimal code sketch of this data-modeling idea follows at the end of this item.)

To support and accelerate the development of the OpenUSD ecosystem, Pixar, Adobe, Apple, Autodesk and NVIDIA formed the Alliance for OpenUSD, which is building open standards for USD in core specification, materials, geometry and more.

Industrial Use Cases for Virtual Factories

To unlock the potential of virtual factories, industry leaders including Autodesk, Continental, Pegatron, Rockwell Automation, Siemens and Wistron are developing virtual-factory solutions on OpenUSD and NVIDIA Omniverse, a platform of application programming interfaces (APIs) and software development kits that enable developers to build applications for complex 3D and industrial digitalization workflows based on OpenUSD.

FlexSim, an Autodesk company, uses OpenUSD to enable factory teams to analyze, visualize and optimize real-world processes with its simulation modeling for complex systems and operations. The discrete-event simulation software provides an intuitive drag-and-drop interface to create 3D simulation models, account for real-world variability, run “what-if” scenarios and perform in-depth analyses.

Developers at Continental, a leading German automotive technology company, developed ContiVerse, a factory planning and manufacturing operations application on OpenUSD and NVIDIA Omniverse. The application helps Continental optimize factory layouts and plan production processes collaboratively, leading to an expected 13% reduction in time to market. Partnering with software company SoftServe, Continental also developed Industrial Co-Pilot, which combines AI-driven insights with immersive visualization to deliver real-time guidance and predictive analytics to engineers. This is expected to reduce maintenance effort and downtime by 10%.

Pegatron, one of the world’s largest manufacturers of smartphones and consumer electronics, is developing virtual-factory solutions on OpenUSD to accelerate the development of new factories — as well as to minimize change orders, optimize operations and maximize production-line throughput in existing facilities.

Rockwell Automation is integrating NVIDIA Omniverse Cloud APIs and OpenUSD with its Emulate3D digital twin software to bring manufacturing teams data interoperability, live collaboration and physically based visualization for designing, building and operating industrial-scale digital twins of production systems.
Siemens, a leading technology company for automation, digitalization and sustainability and a member of the Alliance for OpenUSD, is adopting Omniverse Cloud APIs within its Siemens Xcelerator Platform, starting with Teamcenter X, the industry-leading cloud-based product lifecycle management software. This will help teams design, build and test next-generation products, manufacturing processes and factories virtually, before they’re built in the physical world.

Wistron, a leading global technology service provider and electronics manufacturer, is digitalizing new and existing factories with OpenUSD. By developing virtual-factory solutions on NVIDIA Omniverse, Wistron enables its factory teams to collaborate remotely to refine layout configurations, optimize surface mount technology and in-circuit testing lines, and transform product-on-dock testing. With these solutions, Wistron has achieved a 51% boost in worker efficiency and 50% reduction in production process times. Layout optimization and real-time monitoring have decreased defect rates by 40%. And construction time on Wistron’s new NVIDIA DGX factory was cut in half, from about five months to just two and a half months.

Learn more at the Virtual Factory Use Case page, where a reference architecture provides an overview of components and capabilities developers should consider when developing virtual-factory solutions. Get started with NVIDIA Omniverse by downloading the standard license free, access OpenUSD resources, and learn how Omniverse Enterprise can connect your team. Stay up to date on Instagram, Medium and X. For more, join the Omniverse community on the forums, Discord server, Twitch and YouTube channels.

View the full article
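For readers curious what the flexible data modeling described in this item looks like in practice, below is a minimal sketch using the open-source OpenUSD Python modules (the pxr package, installable as usd-core). The file names, prim paths and the custom sensor attribute are hypothetical placeholders, and this is a generic OpenUSD illustration rather than an NVIDIA Omniverse or vendor-specific API.

```python
# Minimal sketch: composing CAD geometry and live sensor data on one USD stage.
# Assumes the open-source usd-core package (pip install usd-core); file names,
# prim paths and the custom "sensor:temperature" attribute are hypothetical.
from pxr import Usd, UsdGeom, Sdf

stage = Usd.Stage.CreateNew("factory_line.usda")   # the shared "source of truth"
UsdGeom.Xform.Define(stage, "/Factory")            # root transform for the site

# Reference geometry exported from a CAD tool as its own USD layer.
cell = stage.DefinePrim("/Factory/RobotCell", "Xform")
cell.GetReferences().AddReference("robot_cell_from_cad.usd")

# Attach non-geometric data (e.g. a live sensor reading) as a custom attribute,
# so simulation tools and dashboards can read it from the same stage.
temp = cell.CreateAttribute("sensor:temperature", Sdf.ValueTypeNames.Float)
temp.Set(22.5)

stage.GetRootLayer().Save()
```

Because every tool reads and writes the same stage, a CAD export, a simulation layer and a live sensor value can sit side by side, which is the kind of interoperability the article attributes to OpenUSD.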
  18. Sand Land, the video game adaptation of Dragon Ball creator Akira Toriyama’s one-volume manga of the same name, arrives on April 26, 2024 for PS5, PS4, Xbox Series X/S, and Windows. Read more... View the full article
  19. League of Legends pro Lu ‘Leyan’ Jue said he was just rocking out with his favorite oversized plushie. But Riot Games felt differently and fined the Invictus Gaming jungler nearly $7,000 and suspended him for two matches over footage of him humping a giant stuffed animal of Lots-o’-Huggin’ Bear from Toy Story 3 during… Read more... View the full article
  20. If you’ve been on social media the past few days, you’re probably wondering why Foghorn Leghorn, the boisterous rooster from Looney Tunes, is giving pep talks to your favorite anime and video game characters. It turns out the joke has roots going as far back as 2021, but has gained new life in the past few days. Read more... View the full article
  21. Steam has an oft-exploited refund policy, whereby any player is able to request their money back for a game that they bought within the last 14 days, and played for less than two hours. (Selling a game that’s shorter than two hours? Sucks to be you.) But it seems that people buying games with so-called “Advanced… Read more... View the full article
  22. To help customers make more efficient use of their AI computing resources, NVIDIA today announced it has entered into a definitive agreement to acquire Run:ai, a Kubernetes-based workload management and orchestration software provider.

Customer AI deployments are becoming increasingly complex, with workloads distributed across cloud, edge and on-premises data center infrastructure. Managing and orchestrating generative AI, recommender systems, search engines and other workloads requires sophisticated scheduling to optimize performance at the system level and on the underlying infrastructure.

Run:ai enables enterprise customers to manage and optimize their compute infrastructure, whether on premises, in the cloud or in hybrid environments. The company has built an open platform on Kubernetes, the orchestration layer for modern AI and cloud infrastructure. It supports all popular Kubernetes variants and integrates with third-party AI tools and frameworks. Run:ai customers include some of the world’s largest enterprises across multiple industries, which use the Run:ai platform to manage data-center-scale GPU clusters.

“Run:ai has been a close collaborator with NVIDIA since 2020 and we share a passion for helping our customers make the most of their infrastructure,” said Omri Geller, Run:ai cofounder and CEO. “We’re thrilled to join NVIDIA and look forward to continuing our journey together.”

The Run:ai platform provides AI developers and their teams:

A centralized interface to manage shared compute infrastructure, enabling easier and faster access for complex AI workloads.

Functionality to add users, curate them under teams, provide access to cluster resources, control over quotas, priorities and pools, and monitor and report on resource use.

The ability to pool GPUs and share computing power — from fractions of GPUs to multiple GPUs or multiple nodes of GPUs running on different clusters — for separate tasks.

Efficient GPU cluster resource utilization, enabling customers to gain more from their compute investments.

NVIDIA will continue to offer Run:ai’s products under the same business model for the immediate future. And NVIDIA will continue to invest in the Run:ai product roadmap as part of NVIDIA DGX Cloud, an AI platform co-engineered with leading clouds for enterprise developers, offering an integrated, full-stack service optimized for generative AI. NVIDIA DGX and DGX Cloud customers will gain access to Run:ai’s capabilities for their AI workloads, particularly for large language model deployments. Run:ai’s solutions are already integrated with NVIDIA DGX, NVIDIA DGX SuperPOD, NVIDIA Base Command, NGC containers, and NVIDIA AI Enterprise software, among other products.

NVIDIA’s accelerated computing platform and Run:ai’s platform will continue to support a broad ecosystem of third-party solutions, giving customers choice and flexibility. Together with Run:ai, NVIDIA will enable customers to have a single fabric that accesses GPU solutions anywhere. Customers can expect to benefit from better GPU utilization, improved management of GPU infrastructure and greater flexibility from the open architecture.

View the full article
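As a rough picture of what Kubernetes-based GPU orchestration involves, the sketch below uses the official Kubernetes Python client to submit a pod that requests one NVIDIA GPU. This is plain Kubernetes, shown only for context: Run:ai's scheduler layers pooling, quotas, priorities and fractional-GPU sharing on top of requests like this, and its own APIs are not shown here. The namespace, image and names are hypothetical placeholders.

```python
# Minimal sketch: submitting a GPU-requesting pod with the official Kubernetes
# Python client (pip install kubernetes). Generic Kubernetes only; an orchestrator
# such as Run:ai decides where and when requests like this actually run.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig credentials

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="train-job", namespace="ml-team"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/pytorch:24.03-py3",  # placeholder image
                command=["python", "train.py"],
                # The scheduler places this pod only on a node with a free GPU.
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="ml-team", body=pod)
```

Deciding how many of these requests share a cluster, and in what order, is the scheduling problem the announcement describes; better packing of such requests is where the promised gains in GPU utilization come from.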
  23. Can machine learning help predict extreme weather events and climate change? Christopher Bretherton, senior director of climate modeling at the Allen Institute for Artificial Intelligence, or AI2, explores the technology’s potential to enhance climate modeling with AI Podcast host Noah Kravitz in an episode recorded live at the NVIDIA GTC global AI conference.

Bretherton explains how machine learning helps overcome the limitations of traditional climate models and underscores the role of localized predictions in empowering communities to prepare for climate-related risks. Through ongoing research and collaboration, Bretherton and his team aim to improve climate modeling and enable society to better mitigate and adapt to the impacts of climate change.

Stay tuned for more episodes recorded live from GTC, and watch the replay of Bretherton’s GTC session on using machine learning for climate modeling.

The AI Podcast · AI2’s Christopher Bretherton Discusses Using Machine Learning for Climate Modeling – Ep. XXX

Time Stamps
2:03: What is climate modeling and how can it prepare us for climate change?
5:28: How can machine learning help enhance climate modeling?
7:21: What were the limitations of traditional climate models?
10:24: How does a climate model work?
12:11: What information can you get from a climate model?
13:26: What are the current climate models telling us about the future?
15:56: How does machine learning help enable localized climate modeling?
18:39: What, if anything, can individuals or small communities do to prepare for what climate change has in store for us?
25:59: How do you measure the accuracy or performance of an emulator that’s doing something like climate modeling out into the future?

You Might Also Like…

ITIF’s Daniel Castro on Energy-Efficient AI and Climate Change – Ep. 215
AI-driven change is in the air, as are concerns about the technology’s environmental impact. In this episode of NVIDIA’s AI Podcast, Daniel Castro, vice president of the Information Technology and Innovation Foundation and director of its Center for Data Innovation, speaks with host Noah Kravitz about the motivation behind his AI energy use report, which addresses misconceptions about the technology’s energy consumption.

DigitalPath’s Ethan Higgins on Using AI to Fight Wildfires – Ep. 211
DigitalPath is igniting change in the golden state — using computer vision, generative adversarial networks and a network of thousands of cameras to detect signs of fire in real time. In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with DigitalPath system architect Ethan Higgins about the company’s role in the ALERTCalifornia initiative, a collaboration between California’s wildfire fighting agency CAL FIRE and the University of California, San Diego.

Anima Anandkumar on Using Generative AI to Tackle Global Challenges – Ep. 203
Generative AI-based models can not only learn and understand natural languages — they can learn the very language of nature itself, presenting new possibilities for scientific research. On the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Anandkumar on generative AI’s potential to make splashes in the scientific community.

How Alex Fielding and Privateer Space Are Taking on Space Debris – Ep. 196
In this episode of the NVIDIA AI Podcast, host Noah Kravitz dives into an illuminating conversation with Alex Fielding, co-founder and CEO of Privateer Space. Privateer Space, Fielding’s latest venture, aims to address one of the most daunting challenges facing our world today: space debris.

Subscribe to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Amazon Music, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.

View the full article
  24. Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and which showcases new hardware, software, tools and accelerations for RTX PC users.

AI continues to raise the bar for PC gaming. DLSS 3.5 with Ray Reconstruction creates higher quality ray-traced images for intensive ray-traced games and apps. This advanced AI-powered neural renderer is a groundbreaking feature that elevates ray-traced image quality for all GeForce RTX GPUs, outclassing traditional hand-tuned denoisers by using an AI network trained by an NVIDIA supercomputer. The result improves lighting effects like reflections, global illumination, and shadows to create a more immersive, realistic gaming experience.

A Ray of Light

Ray tracing is a rendering technique that can realistically simulate the lighting of a scene and its objects by rendering physically accurate reflections, refractions, shadows and indirect lighting. Ray tracing generates computer graphics images by tracing the path of light from the view camera — which determines the view into the scene — through the 2D viewing plane, out into the 3D scene, and back to the light sources. For instance, if rays strike a mirror, reflections are generated.

[Image: a visualization of how ray tracing works.]

It’s the digital equivalent of real-world objects illuminated by beams of light, with the path of the light followed from the eye of the viewer to the objects that light interacts with. That’s ray tracing.

Simulating light in this manner — shooting rays for every pixel on the screen — is computationally intensive, even for offline renderers that calculate scenes over the course of several minutes or hours. Instead, ray samples fire a handful of rays at various points across the scene for a representative sample of the scene’s lighting, reflectivity and shadowing.

However, there are limitations. The output is a noisy, speckled image with gaps, good enough to ascertain how the scene should look when ray traced. To fill in the missing pixels that weren’t ray traced, hand-tuned denoisers use two different methods: temporally accumulating pixels across multiple frames, and spatially interpolating them to blend neighboring pixels together. Through this process, the noisy raw output is converted into a ray-traced image. This adds complexity and cost to the development process, and reduces the frame rate in highly ray-traced games where multiple denoisers operate simultaneously for different lighting effects.

DLSS 3.5 Ray Reconstruction introduces an NVIDIA supercomputer-trained, AI-powered neural network that generates higher-quality pixels in between the sampled rays. It recognizes different ray-traced effects to make smarter decisions about using temporal and spatial data, and retains high-frequency information for superior-quality upscaling. It also recognizes lighting patterns from its training data, such as global illumination or ambient occlusion, and recreates them in-game.

Portal with RTX is a great example of Ray Reconstruction in action. With DLSS OFF, the denoiser struggles to reconstruct the dynamic shadowing alongside the moving fan. With DLSS 3.5 and Ray Reconstruction enabled, the AI-trained denoiser recognizes certain patterns associated with shadows and keeps the image stable, accumulating accurate pixels while blending neighboring pixels to generate high-quality reflections.
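For a concrete, if heavily simplified, picture of the two hand-tuned denoising steps described above (temporal accumulation and spatial interpolation), here is a toy NumPy sketch. It assumes a sparse ray-traced frame with NaN marking pixels where no ray was fired; it is illustrative only and is not NVIDIA's denoiser or the DLSS 3.5 network.

```python
# Toy sketch of the two classic denoising steps described above:
# spatial interpolation (fill gaps from neighboring pixels) and
# temporal accumulation (blend with the previous frame for stability).
# Illustrative only; real game denoisers and DLSS are far more sophisticated.
import numpy as np

def denoise(noisy_frame: np.ndarray, history: np.ndarray, alpha: float = 0.2) -> np.ndarray:
    """noisy_frame: HxW image, NaN where no ray was sampled. history: last output."""
    frame = noisy_frame.copy()

    # Spatial step: replace unsampled pixels with the mean of their sampled 3x3 neighbors.
    for y, x in zip(*np.where(np.isnan(frame))):
        patch = frame[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
        valid = patch[~np.isnan(patch)]
        frame[y, x] = valid.mean() if valid.size else history[y, x]

    # Temporal step: exponentially blend with the accumulated history across frames.
    return alpha * frame + (1.0 - alpha) * history
```

In a real engine, heuristics like these are tuned per effect and run separately for reflections, shadows and global illumination, which is exactly the complexity and frame-rate cost the article says Ray Reconstruction replaces with a single AI network.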
Deep Learning, Deep Gaming

Ray Reconstruction is just one of the AI graphics breakthroughs that multiply performance in DLSS. Super Resolution, the cornerstone of DLSS, samples multiple lower resolution images and uses motion data and feedback from prior frames to reconstruct native-quality images. The result is high image quality without sacrificing game performance.

DLSS 3 introduced Frame Generation, which boosts performance by using AI to analyze data from surrounding frames to predict what the next generated frame should look like. These generated frames are then inserted in between rendered frames. Combining the DLSS-generated frames with DLSS Super Resolution enables DLSS 3 to reconstruct seven-eighths of the displayed pixels with AI, boosting frame rates by up to 4x compared to without DLSS. Because DLSS Frame Generation is post-processed (applied after the main render) on the GPU, it can boost frame rates even when the game is bottlenecked by the CPU.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

View the full article
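A quick back-of-the-envelope reading of that seven-eighths figure, assuming DLSS Super Resolution runs in Performance mode (rendering at one quarter of the output resolution): each rendered frame supplies 1/4 of the displayed pixels, and with Frame Generation only every other displayed frame is rendered at all, so the GPU natively renders roughly 1/4 × 1/2 = 1/8 of the pixels you see; the remaining 7/8 are reconstructed by AI.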
  25. Last week
  26. When I started playing Fallout 76 in 2018 there were no backpacks. So I never thought about it. But when I learned from a random comment that backpacks had been added after that point and had been in the game for years, I felt stupid for never crafting one. And then I went to collect the recipe for a pack and felt… Read more... View the full article
  27. Helldivers 2 players have strong feelings about the planets they’re fighting to defend in the game’s ongoing galactic war. First there was Malevelon Creek, which became a notorious hotspot for players, after it was dubbed “Robot Vietnam” due to its flame-spewing robots, jungle setting, and significant difficulty.… Read more... View the full article