UHQBot

Forum Bot
  • Posts: 43,073
  • Joined
  • Last visited
  • Days Won: 25

Everything posted by UHQBot

  1. Amazon-owned livestreaming platform Twitch has disciplined content creators again. Just as a new meta began dominating the site earlier this month, in which streamers started playing games like Fortnite on green-screened boobs and butts, Twitch has updated its Terms of Service to strictly prohibit this kind of risque… Read more... View the full article
  2. About halfway through the first mission in South Park: Snow Day, I found myself pausing the game and checking my phone as I desperately looked for entertainment. That’s what we professional and very serious critics call: Not a good sign. Read more... View the full article
  3. One of the first quests you undertake in Dragon’s Dogma 2, Capcom’s new action RPG, has you traveling from the town of Melve to the capital city of Vernworth. It’s a slow journey that starts and stops several times along the winding roads leading to your final destination. You need to dispatch goblins making the road… Read more... View the full article
  4. The next Overwatch 2 hero is getting a brief trial period for those who want to take them for a spin before launch, just like Blizzard did with tank hero Mauga before his launch. Venture, a non-binary, drill-using damage hero, is set to join the roster in season 10 (which kicks off on April 16), but Blizzard is giving… Read more... View the full article
  5. The upcoming survival MMO Dune Awakening seems neat and all, but if you know the history of its developers, Funcom, you might be wondering: Hey, does this open-world online survival game with custom characters also include a penis-size slider? The answer is no, because apparently, Dune ain’t “sexy” and “savage.” Read more... View the full article
  6. Video game and nerd culture retailer GameStop appears to be floundering once more. According to a Reuters report, the company recently laid off an unspecified number of employees after reporting a decline in earnings in the fourth quarter as physical sales fall and digital purchases rise. Read more... View the full article
  7. Final Fantasy VII has been incorporating exercise mini-games ever since the original 1997 release challenged Cloud to pull off some competitive squats. Those returned in 2020’s Final Fantasy VII Remake, with pull-ups tossed in for good measure. Rebirth keeps things going by also introducing a new exercise competition… Read more... View the full article
  8. Nintendo of America is restructuring the small army of contractors that helps test its games and hardware in its Washington state headquarters, the company confirmed to Kotaku. According to four current and former contractors, the result is a massive downsizing that comes amid layoffs across the rest of the video game… Read more... View the full article
  9. The actor who provided the English voice of Yuffie in Final Fantasy 7 Rebirth says that while recording her lines, she was asked to pretend that she was puking from motion sickness. Apparently, her fake vomiting was too accurate and gross because Square Enix had to tell her to pull back. Read more... View the full article
  10. After some teases and leaks, Marvel and NetEase Games have officially revealed Marvel Rivals. The game is a free-to-play, 6v6 hero shooter in the style of Overwatch, but it’s in third-person and stars superheroes from across Marvel’s various comics. While we’ve only seen one trailer, it looks like it could give Overwat… Read more... View the full article
  11. A free demo for Stellar Blade, developer Shift Up’s character-action PS5 exclusive, will go live on March 29. Ahead of its imminent release, some gaming publications got early hands-on time with roughly two hours of the game, which contains the first level and its respective boss. Based on everything that’s being said… Read more... View the full article
  12. It’s official: NVIDIA delivered the world’s fastest platform in industry-standard tests for inference on generative AI. In the latest MLPerf benchmarks, NVIDIA TensorRT-LLM — software that speeds and simplifies the complex job of inference on large language models — boosted the performance of NVIDIA Hopper architecture GPUs on the GPT-J LLM nearly 3x over their results just six months ago. The dramatic speedup demonstrates the power of NVIDIA’s full-stack platform of chips, systems and software to handle the demanding requirements of running generative AI. Leading companies are using TensorRT-LLM to optimize their models. And NVIDIA NIM — a set of inference microservices that includes inferencing engines like TensorRT-LLM — makes it easier than ever for businesses to deploy NVIDIA’s inference platform.

      Raising the Bar in Generative AI

      TensorRT-LLM running on NVIDIA H200 Tensor Core GPUs — the latest, memory-enhanced Hopper GPUs — delivered the fastest performance running inference in MLPerf’s biggest test of generative AI to date. The new benchmark uses the largest version of Llama 2, a state-of-the-art large language model packing 70 billion parameters. The model is more than 10x larger than the GPT-J LLM first used in the September benchmarks. The memory-enhanced H200 GPUs, in their MLPerf debut, used TensorRT-LLM to produce up to 31,000 tokens/second, a record on MLPerf’s Llama 2 benchmark. The H200 GPU results include up to 14% gains from a custom thermal solution — one example of innovations beyond standard air cooling that system builders are applying to their NVIDIA MGX designs to take the performance of Hopper GPUs to new heights.

      Memory Boost for NVIDIA Hopper GPUs

      NVIDIA is shipping H200 GPUs today. They’ll be available soon from nearly 20 leading system builders and cloud service providers. H200 GPUs pack 141GB of HBM3e running at 4.8TB/s. That’s 76% more memory flying 43% faster compared to H100 GPUs. These accelerators plug into the same boards and systems and use the same software as H100 GPUs. With HBM3e memory, a single H200 GPU can run an entire Llama 2 70B model with the highest throughput, simplifying and speeding inference.

      GH200 Packs Even More Memory

      Even more memory — up to 624GB of fast memory, including 144GB of HBM3e — is packed in NVIDIA GH200 Superchips, which combine a Hopper architecture GPU and a power-efficient NVIDIA Grace CPU on one module. NVIDIA accelerators are the first to use HBM3e memory technology. With nearly 5TB/second memory bandwidth, GH200 Superchips delivered standout performance, including on memory-intensive MLPerf tests such as recommender systems.

      Sweeping Every MLPerf Test

      On a per-accelerator basis, Hopper GPUs swept every test of AI inference in the latest round of the MLPerf industry benchmarks. The benchmarks cover today’s most popular AI workloads and scenarios, including generative AI, recommendation systems, natural language processing, speech and computer vision. NVIDIA was the only company to submit results on every workload in the latest round and in every round since MLPerf’s data center inference benchmarks began in October 2020. Continued performance gains translate into lower costs for inference, a large and growing part of the daily work for the millions of NVIDIA GPUs deployed worldwide.

      Advancing What’s Possible

      Pushing the boundaries of what’s possible, NVIDIA demonstrated three innovative techniques in a special section of the benchmarks called the open division, created for testing advanced AI methods. NVIDIA engineers used a technique called structured sparsity — a way of reducing calculations, first introduced with NVIDIA A100 Tensor Core GPUs — to deliver up to 33% speedups on inference with Llama 2. A second open division test found inference speedups of up to 40% using pruning, a way of simplifying an AI model — in this case, an LLM — to increase inference throughput. Finally, an optimization called DeepCache reduced the math required for inference with the Stable Diffusion XL model, accelerating performance by a whopping 74%. All these results were run on NVIDIA H100 Tensor Core GPUs.

      A Trusted Source for Users

      MLPerf’s tests are transparent and objective, so users can rely on the results to make informed buying decisions. NVIDIA’s partners participate in MLPerf because they know it’s a valuable tool for customers evaluating AI systems and services. Partners submitting results on the NVIDIA AI platform in this round included ASUS, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Google, Hewlett Packard Enterprise, Lenovo, Microsoft Azure, Oracle, QCT, Supermicro, VMware (recently acquired by Broadcom) and Wiwynn. All the software NVIDIA used in the tests is available in the MLPerf repository. These optimizations are continuously folded into containers available on NGC, NVIDIA’s software hub for GPU applications, as well as NVIDIA AI Enterprise — a secure, supported platform that includes NIM inference microservices.

      The Next Big Thing

      The use cases, model sizes and datasets for generative AI continue to expand. That’s why MLPerf continues to evolve, adding real-world tests with popular models like Llama 2 70B and Stable Diffusion XL. Keeping pace with the explosion in LLM model sizes, NVIDIA founder and CEO Jensen Huang announced last week at GTC that the NVIDIA Blackwell architecture GPUs will deliver new levels of performance required for multitrillion-parameter AI models. Inference for large language models is difficult, requiring both expertise and the full-stack architecture NVIDIA demonstrated on MLPerf with Hopper architecture GPUs and TensorRT-LLM. There’s much more to come. Learn more about MLPerf benchmarks and the technical details of this inference round. View the full article
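      The structured sparsity mentioned in the open-division results refers to the 2:4 pattern introduced with A100 Tensor Cores: in every group of four consecutive weights, at most two are nonzero, so the hardware can skip half the multiplications. A minimal NumPy sketch of that pruning pattern (illustrative only — the function name is our own, and real deployments use NVIDIA’s pruning and retraining tooling rather than one-shot magnitude pruning):

      ```python
      import numpy as np

      def prune_2_4(weights):
          """Apply a 2:4 structured-sparsity pattern: in every group of 4
          consecutive weights, zero out the 2 with smallest magnitude."""
          w = weights.reshape(-1, 4).astype(float).copy()
          # indices of the 2 smallest-magnitude entries in each group of 4
          smallest = np.argsort(np.abs(w), axis=1)[:, :2]
          np.put_along_axis(w, smallest, 0.0, axis=1)
          return w.reshape(weights.shape)

      w = np.array([0.9, -0.1, 0.05, -0.7, 0.2, 0.8, -0.3, 0.01])
      pruned = prune_2_4(w)
      # exactly half the weights are now zero, in a hardware-friendly pattern
      ```

      Because the zeros fall in a fixed, predictable pattern (rather than arbitrary positions), sparse Tensor Cores can exploit them without the bookkeeping overhead of unstructured sparsity.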
  13. Cabel Sasser, co-founder of the company behind whimsical yellow gaming handheld the Playdate, was giving a talk on the crank-sporting device at the Game Developers Conference when he provided a noteworthy factoid and anecdote. Apparently, $400,000 worth of Playdates curiously went missing at a shipping factory in Las… Read more... View the full article
  14. Final Fantasy VII Rebirth pits you against countless enemies, many of which have abilities your characters can deploy against them and other enemies via the Enemy Skill materia. Found in the original 1997 FF7 release, Enemy Skill works a little differently in 2024. You’ll need to complete a series of battles in… Read more... View the full article
  15. Nintendo Switch Online, the subscription service that gives players access to a library of games for consoles and handhelds from the company’s past, will receive its latest Game Boy Advance title this week. F-Zero: Maximum Velocity is dropping onto the service this Friday, March 29. Read more... View the full article
  16. Overwatch 2’s ninth season is nearly over, and when the tenth begins on April 16, the hero shooter is getting a few significant changes. Its heroes will no longer be locked behind the battle pass, and the game will get a new shop dedicated exclusively to Mythic Skins. But Overwatch 2 is also getting a new Damage hero in … Read more... View the full article
  17. In a video game, you’re only as good as your weapon. Try getting through Elden Ring’s hardest boss battle without a powerful sword, or a tough multiplayer match in Halo 3 without nabbing the Needler and unloading on an incoming Ghost. Even the most elite-level gamers among us owe some of their talent to the quality of… Read more... View the full article
  18. Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and which showcases new hardware, software, tools and accelerations for RTX PC users. As generative AI advances and becomes widespread across industries, the importance of running generative AI applications on local PCs and workstations grows. Local inference gives consumers reduced latency, eliminates their dependency on the network and enables more control over their data. NVIDIA GeForce and NVIDIA RTX GPUs feature Tensor Cores, dedicated AI hardware accelerators that provide the horsepower to run generative AI locally. Stable Video Diffusion is now optimized for the NVIDIA TensorRT software development kit, which unlocks the highest-performance generative AI on the more than 100 million Windows PCs and workstations powered by RTX GPUs. Now, the TensorRT extension for the popular Stable Diffusion WebUI by Automatic1111 is adding support for ControlNets, tools that give users more control to refine generative outputs by adding other images as guidance. TensorRT acceleration can be put to the test in the new UL Procyon AI Image Generation benchmark, which internal tests have shown accurately replicates real-world performance. It delivered speedups of 50% on a GeForce RTX 4080 SUPER GPU compared with the fastest non-TensorRT implementation.

      More Efficient and Precise AI

      TensorRT enables developers to access the hardware that provides fully optimized AI experiences. AI performance typically doubles compared with running the application on other frameworks. It also accelerates the most popular generative AI models, like Stable Diffusion and SDXL. Stable Video Diffusion, Stability AI’s image-to-video generative AI model, experiences a 40% speedup with TensorRT. The optimized Stable Video Diffusion 1.1 Image-to-Video model can be downloaded from Hugging Face. Plus, the TensorRT extension for Stable Diffusion WebUI boosts performance by up to 2x — significantly streamlining Stable Diffusion workflows. With the extension’s latest update, TensorRT optimizations extend to ControlNets — a set of AI models that help guide a diffusion model’s output by adding extra conditions. With TensorRT, ControlNets are 40% faster. Users can guide aspects of the output to match an input image, which gives them more control over the final image. They can also use multiple ControlNets together for even greater control. A ControlNet can be a depth map, edge map, normal map or keypoint detection model, among others. Download the TensorRT extension for Stable Diffusion WebUI on GitHub today.

      Other Popular Apps Accelerated by TensorRT

      Blackmagic Design adopted NVIDIA TensorRT acceleration in update 18.6 of DaVinci Resolve. Its AI tools, like Magic Mask, Speed Warp and Super Scale, run more than 50% faster and up to 2.3x faster on RTX GPUs compared with Macs. In addition, with TensorRT integration, Topaz Labs saw an up to 60% performance increase in its Photo AI and Video AI apps — covering photo denoising, sharpening, photo super resolution, video slow motion, video super resolution, video stabilization and more — all running on RTX. Combining Tensor Cores with TensorRT software brings unmatched generative AI performance to local PCs and workstations. Running locally unlocks several advantages:
        • Performance: Users experience lower latency, since latency becomes independent of network quality when the entire model runs locally. This can be important for real-time use cases such as gaming or video conferencing. NVIDIA RTX offers the fastest AI accelerators, scaling to more than 1,300 AI trillion operations per second, or TOPS.
        • Cost: Users don’t have to pay for cloud services, cloud-hosted application programming interfaces or infrastructure costs for large language model inference.
        • Always on: Users can access LLM capabilities anywhere they go, without relying on high-bandwidth network connectivity.
        • Data privacy: Private and proprietary data can always stay on the user’s device.

      Optimized for LLMs

      What TensorRT brings to deep learning, NVIDIA TensorRT-LLM brings to the latest LLMs. TensorRT-LLM, an open-source library that accelerates and optimizes LLM inference, includes out-of-the-box support for popular community models, including Phi-2, Llama 2, Gemma, Mistral and Code Llama. Anyone — from developers and creators to enterprise employees and casual users — can experiment with TensorRT-LLM-optimized models in the NVIDIA AI Foundation models. Plus, with the NVIDIA ChatRTX tech demo, users can see the performance of various models running locally on a Windows PC. ChatRTX is built on TensorRT-LLM for optimized performance on RTX GPUs. NVIDIA is collaborating with the open-source community to develop native TensorRT-LLM connectors to popular application frameworks, including LlamaIndex and LangChain. These innovations make it easy for developers to use TensorRT-LLM with their applications and experience the best LLM performance with RTX. Get weekly updates directly in your inbox by subscribing to the AI Decoded newsletter. View the full article
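      The latency advantage of local inference described above comes down to simple arithmetic: a cloud request pays a network round trip on top of model compute, while a local request does not. A toy model of time-to-first-token (the millisecond figures are hypothetical placeholders for illustration, not measured numbers):

      ```python
      def time_to_first_token_ms(compute_ms, network_rtt_ms=0.0):
          """Rough time-to-first-token: model compute plus any network
          round trip. Local inference sets the network term to zero."""
          return compute_ms + network_rtt_ms

      # hypothetical figures for illustration only
      local = time_to_first_token_ms(compute_ms=40)                     # on-device
      cloud = time_to_first_token_ms(compute_ms=40, network_rtt_ms=90)  # remote API
      # the local response starts 90 ms sooner, regardless of network quality
      ```

      The point of the sketch is that the network term varies with connection quality while the compute term does not, which is why locally run models give more predictable responsiveness for real-time use cases.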
  19. In the latest episode of NVIDIA’s AI Podcast, Viome Chief Technology Officer Guru Banavar spoke with host Noah Kravitz about how AI and RNA sequencing are revolutionizing personalized healthcare. The startup aims to tackle the root causes of chronic diseases by delving deep into microbiomes and gene expression. With a comprehensive testing kit, Viome translates biological data into practical dietary recommendations. Viome is forging ahead with professional healthcare solutions, such as early detection tests for diseases, and integrating state-of-the-art technology with traditional medical practices for a holistic approach to wellness. The AI Podcast · Personalized Health: Viome’s Guru Banavar Discusses Startup’s AI-Driven Approach – Ep. 352

      Time stamps:
        • 2:00 – Introduction to Viome and the science of nutrigenomics
        • 4:25 – The significance of RNA over DNA in health analysis
        • 7:40 – The crucial role of the microbiome in understanding chronic diseases
        • 12:50 – From sample collection to personalized nutrition recommendations
        • 17:35 – Viome’s expansion into professional healthcare solutions and early disease detection

      View the full article
  20. Hello BruhMcBro, welcome to the UnityHQ Nolfseries Community. Please feel free to browse around and get to know the others. If you have any questions, please don't hesitate to ask. BruhMcBro joined on 03/27/2024. View Member
  21. Hello shevek, welcome to the UnityHQ Nolfseries Community. Please feel free to browse around and get to know the others. If you have any questions, please don't hesitate to ask. shevek joined on 03/27/2024. View Member
  22. It’s been just over a decade since BioShock series creator Ken Levine closed down Irrational Games to form “a smaller, more entrepreneurial endeavor” that would become Ghost Story Games. The studio’s first project, Judas, has been in the works for ten years as well, and after a few trailers teasing the BioShock spiritu… Read more... View the full article
  23. If you played Red Dead Redemption or a recent Grand Theft Auto and wondered “Hey, why hasn’t Hollywood made either of these open-world blockbuster games into movies?” then you should know that you aren’t alone. In fact, you have some famous company, as Jack Black is wondering the same thing. Read more... View the full article
  24. I was out with friends recently and the age-old topic of games that pushed us to the point where controllers were thrown came up during a conversation about Smash pro Riddles doing that very thing at a big tournament last year. The usual suspects were discussed over drinks: FromSoft games like Dark Souls and Sekiro:… Read more... View the full article
