UHQBot

Forum Bot

  • Posts: 53,411
  • Joined
  • Last visited
  • Days Won: 27

UHQBot last won the day on May 30

UHQBot had the most liked content!

Reputation: 24 Excellent

About UHQBot

  • Rank: Cyber Cate aka Ms. Cyber Archer (Grand Master)

Contact Methods

  • Website URL: https://unityhq.net

Profile Information

  • Location: The UHQ Forum (resident AI)
  • Interests: Greeting new community members. Anything cyber. Hey, I'm a bot, what did you expect?

Previous Fields

  • NOLF games played: Plays all NOLF series games

Recent Profile Visitors

20,148 profile views
 
  1. As the scale and complexity of AI infrastructure grow, data center operators need continuous visibility into factors including performance, temperature and power usage. These insights enable data center operators to actively monitor and adjust data center configurations across large-scale, distributed systems — validating that these systems are operating at their highest efficiency and reliability.

NVIDIA is developing a software solution for visualizing and monitoring fleets of NVIDIA GPUs — giving cloud partners and enterprises an insights dashboard that can help them boost GPU uptime across computing infrastructures. The offering is an opt-in, customer-installed service that monitors GPU usage, configuration and errors. It will include an open-source client software agent — part of NVIDIA's ongoing support of open, transparent software that helps customers get the most from their GPU-powered systems.

With the service, data center operators will be able to:

  • Track spikes in power usage to keep within energy budgets while maximizing performance per watt.
  • Monitor utilization, memory bandwidth and interconnect health across the fleet.
  • Detect hotspots and airflow issues early to avoid thermal throttling and premature component aging.
  • Confirm consistent software configurations and settings to ensure reproducible results and reliable operation.
  • Spot errors and anomalies to identify failing parts early.

These capabilities can help enterprises and cloud providers visualize their GPU fleet, address system bottlenecks and optimize productivity for higher return on investment. This optional service provides real-time monitoring, with each GPU system communicating and sharing GPU metrics with the external cloud service. NVIDIA GPUs do not have hardware tracking technology, kill switches or backdoors.

Open-Source Agent Offers Insights for Data Center Owners

The service will feature a client software agent that the customer can install to stream node-level GPU telemetry data to a portal hosted on NVIDIA NGC. Customers will be able to visualize their GPU fleet utilization in a dashboard, globally or by compute zones — groups of nodes enrolled in the same physical or cloud locations. The dashboard provides insight into GPU status across a customer's global fleet.

The client tooling agent is also slated to be open sourced, providing transparency and auditability. It'll offer a working example of how customers can incorporate NVIDIA tools into their own solutions for monitoring GPU infrastructure — whether for critical compute clusters or entire fleets. The software provides insight into a company's GPU inventory but cannot modify GPU configurations or underlying operations. It provides read-only telemetry data that's customer managed and customizable. The service will also enable customers to generate reports that detail GPU fleet information. (A minimal sketch of this kind of node-level telemetry sampling follows at the end of this post.)

As AI applications grow in number and complexity, modern AI infrastructure management is evolving to keep pace. Making sure that AI data centers are running at peak health is vital as AI revolutionizes every industry and application. This software service is here to help.

Register for NVIDIA GTC, taking place March 16-19 in San Jose, California, to learn more. See notice regarding software product information.

View the full article
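NVIDIA hasn't published the agent itself, so as a rough illustration only, here is a minimal Python sketch of node-level telemetry sampling using the NVML bindings (the pynvml module from the nvidia-ml-py package). The payload shape, field names and the idea of printing JSON for a downstream collector are assumptions for illustration, not the service's actual API.

```python
# Minimal sketch of read-only, node-level GPU telemetry via NVML.
# Requires the nvidia-ml-py package (imported as pynvml). The JSON
# payload shape is hypothetical; a real agent would stream samples
# to a dashboard backend rather than print them.
import json
import time

import pynvml


def sample_gpu_metrics() -> list[dict]:
    """Read power, temperature and utilization for every local GPU."""
    pynvml.nvmlInit()
    try:
        samples = []
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)
            samples.append({
                "gpu_index": i,
                "name": str(pynvml.nvmlDeviceGetName(handle)),
                # NVML reports power in milliwatts.
                "power_watts": pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0,
                "temperature_c": pynvml.nvmlDeviceGetTemperature(
                    handle, pynvml.NVML_TEMPERATURE_GPU),
                "gpu_util_pct": util.gpu,
                "mem_util_pct": util.memory,
                "timestamp": time.time(),
            })
        return samples
    finally:
        pynvml.nvmlShutdown()


if __name__ == "__main__":
    # Read-only sampling loop: nothing here can modify GPU state,
    # matching the article's description of the agent.
    for _ in range(3):
        print(json.dumps(sample_gpu_metrics(), indent=2))
        time.sleep(5)
```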
  2. I think there is a very compelling case for Twitter not being the place where information about, well, honestly anything, should be casually shared in a matter-of-fact manner. It is a site for, if we must use it, posting things like "just downloaded some MP3s to my iPod Touch," not sharing that Arrowhead are currently testing a roguelite mode in Helldivers 2, which is exactly what the game's creative director Johan Pilestedt did today. Read more View the full article
  3. With trends and trendy games passing by these days, I'd understand if you'd already forgotten about Suika Game. I'm not here to remind you of its existence just to talk about it, but rather to use it as genre context for Dogpile, a new game that is essentially the question "what if Suika Game actually had a bunch of dogs and was also a roguelike deckbuilder?" I know, I know, there are too many of those already, but this one's just so charming! Read more View the full article
  4. The world's top-performing system for graph processing at scale was built on a commercially available cluster. NVIDIA last month announced a record-breaking benchmark result of 410 trillion traversed edges per second (TEPS), ranking No. 1 on the 31st Graph500 breadth-first search (BFS) list. Performed on an accelerated computing cluster hosted in a CoreWeave data center in Dallas, the winning run used 8,192 NVIDIA H100 GPUs to process a graph with 2.2 trillion vertices and 35 trillion edges. This result is more than double the performance of comparable solutions on the list, including those hosted in national labs.

To put this performance in perspective, say every person on Earth has 150 friends. This would represent 1.2 trillion edges in a graph of social relationships. The level of performance recently achieved by NVIDIA and CoreWeave enables searching through every friend relationship on Earth in about three milliseconds.

Speed at that scale is half the story — the real breakthrough is efficiency. A comparable entry in the top 10 runs of the Graph500 list used about 9,000 nodes, while the winning run from NVIDIA used just over 1,000 nodes, delivering 3x better performance per dollar. NVIDIA tapped into the combined power of its full-stack compute, networking and software technologies — including the NVIDIA CUDA platform, Spectrum-X networking, H100 GPUs and a new active messaging library — to push the boundaries of performance while minimizing hardware footprint.

By saving significant time and costs at this scale in a commercially available system, the win demonstrates how the NVIDIA computing platform is ready to democratize acceleration of the world's largest sparse, irregular workloads — those involving data and work items that come in varying and unpredictable sizes — in addition to dense workloads like AI training.

How Graphs at Scale Work

Graphs are the underlying information structure for modern technology. People interact with them on social networks and banking apps, among other use cases, every day. Graphs capture relationships between pieces of information in massive webs of information.

For example, consider LinkedIn. A user's profile is a vertex. Connections or relationships to other users are edges — with other users represented as vertices. Some users have five connections, others have 50,000. This creates variable density across the graph, making it sparse and irregular. Unlike an image or language model, which is structured and dense, a graph is unpredictable.

Graph500 BFS has a long history as the industry-standard benchmark because it measures a system's ability to navigate this irregularity at scale. BFS measures the speed of traversing the graph through every vertex and edge. A high TEPS score for BFS — measuring how fast the system can process these edges — proves the system has superior interconnects, such as cables or switches between compute nodes, as well as more memory bandwidth and software able to take advantage of the system's capabilities. It validates the engineering of the entire system, not just the speed of the CPU or GPU. Effectively, it's a measure of how fast a system can "think" and associate disparate pieces of information. (A toy sketch of BFS and the TEPS calculation appears at the end of this post.)

Current Techniques for Processing Graphs

GPUs are known for accelerating dense workloads like AI training. Until recently, the largest sparse linear algebra and graph workloads have remained the domain of traditional CPU architectures. To process graphs, CPUs move graph data across compute nodes. As the graph scales to trillions of edges, this constant movement creates bottlenecks and jams communications.

Developers use a variety of software techniques to circumvent this issue. A common approach is active messaging: rather than moving graph data across the network, developers send messages that process the data in place. The messages are smaller and can be grouped together to maximize network efficiency. While this technique significantly accelerates processing, active messaging was designed to run on CPUs and is inherently limited by the throughput rate and compute capabilities of CPU systems.

Reengineering Graph Processing for the GPU

To speed up the BFS run, NVIDIA engineered a full-stack, GPU-only solution that reimagines how data moves across the network. A custom software framework developed using InfiniBand GPUDirect Async (IBGDA) and the NVSHMEM parallel programming interface enables GPU-to-GPU active messages. With IBGDA, the GPU can directly communicate with the InfiniBand network interface card. Message aggregation has been engineered from the ground up to support hundreds of thousands of GPU threads sending active messages simultaneously, compared with just hundreds of threads on a CPU.

As such, in this redesigned system, active messaging runs completely on GPUs, bypassing the CPU. This enables taking full advantage of the massive parallelism and memory bandwidth of NVIDIA H100 GPUs to send messages, move them across the network and process them on the receiver. Running on the stable, high-performance infrastructure of NVIDIA partner CoreWeave, this orchestration enabled doubling the performance of comparable runs while using a fraction of the hardware — at a fraction of the cost.

[Image caption] NVIDIA submission run on a CoreWeave cluster with 8,192 H100 GPUs tops the leaderboard on the 31st Graph500 breadth-first search list.

Accelerating New Workloads

This breakthrough has massive implications for high-performance computing. HPC fields like fluid dynamics and weather forecasting rely on sparse data structures and communication patterns similar to those powering the graphs that underpin social networks and cybersecurity. For decades, these fields have been tethered to CPUs at the largest scales, even as data grows from billions to trillions of edges.

NVIDIA's winning result on Graph500, alongside two other top 10 entries, validates a new approach for high-performance computing at scale. With the full-stack orchestration of NVIDIA computing, networking and software, developers can now use technologies like NVSHMEM and IBGDA to efficiently scale their largest HPC applications, bringing supercomputing performance to commercially available infrastructure.

Stay up to date on the latest Graph500 benchmarks and learn more about NVIDIA networking technologies.

View the full article
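For readers unfamiliar with the benchmark mechanics, here is a toy, single-process Python sketch of level-synchronous BFS with a TEPS calculation. It is purely illustrative: the real Graph500 runs distribute the graph across thousands of GPUs with active messaging, and the official TEPS counting rules differ slightly from this simplified edge count.

```python
# Toy level-synchronous BFS with a simplified TEPS (traversed edges
# per second) figure. Illustrative only: the record run processed
# 35 trillion edges across 8,192 GPUs; this runs on one CPU core.
import time
from collections import defaultdict


def bfs_teps(edges: list[tuple[int, int]], source: int) -> tuple[dict, float]:
    """Return the BFS parent map and a simplified TEPS score."""
    adj = defaultdict(list)
    for u, v in edges:              # build an undirected adjacency list
        adj[u].append(v)
        adj[v].append(u)

    parent = {source: source}
    frontier = [source]
    traversed = 0
    start = time.perf_counter()
    while frontier:                 # expand one BFS level per iteration
        next_frontier = []
        for u in frontier:
            for v in adj[u]:
                traversed += 1      # simplified: count every edge examined
                if v not in parent:
                    parent[v] = u
                    next_frontier.append(v)
        frontier = next_frontier
    elapsed = time.perf_counter() - start
    return parent, traversed / elapsed


if __name__ == "__main__":
    # Tiny example graph with the same sparse, irregular character
    # the article describes, just five edges instead of trillions.
    edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]
    parent, teps = bfs_teps(edges, source=0)
    print(f"reached {len(parent)} vertices, {teps:,.0f} TEPS")
```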
  5. I am most certainly the kind of sucker that has few complaints about Final Fantasy 7 Remake being split into multiple parts. Those are my guys! I love my grumpy idiot, all-too-realistic-looking Cloud, I think Midgar being so well realised helps to justify the splitting of the game, and as maximalist as Rebirth is, there's a quality to it I can't help but admire, even with its many flaws. Keeping things fresh is still necessary in dividing the game up, though, and in a recent interview, director Naoki Hamaguchi spoke of how he's trying to do that with the as-yet-untitled third part. Read more View the full article
  6. The NVIDIA accelerated computing platform is leading supercomputing benchmarks once dominated by CPUs, enabling AI, science, business and computing efficiency worldwide. Moore's Law has run its course, and parallel processing is the way forward. With this evolution, NVIDIA GPU platforms are now uniquely positioned to deliver on the three scaling laws — pretraining, post-training and test-time compute — for everything from next-generation recommender systems and large language models (LLMs) to AI agents and beyond.

In this article:
  • How NVIDIA has transformed the foundation of computing
  • AI pretraining, post-training and inference are driving the frontier
  • How hyperscalers are using AI to transform search and recommender systems

The CPU-to-GPU Transition: A Historic Shift in Computing

At SC25, NVIDIA founder and CEO Jensen Huang highlighted the shifting landscape. Within the TOP100, a subset of the TOP500 list of supercomputers, over 85% of systems use GPUs. This flip represents a historic transition from the serial-processing paradigm of CPUs to massively parallel accelerated architectures.

Before 2012, machine learning was based on programmed logic: statistical models ran efficiently on CPUs as a corpus of hard-coded rules. But this all changed when AlexNet, running on gaming GPUs, demonstrated that image classification could be learned from examples. Its implications were enormous for the future of AI, with parallel processing of ever-increasing amounts of data on GPUs driving a new wave of computing.

This flip isn't just about hardware. It's about platforms unlocking new science. GPUs deliver far more operations per watt, making exascale practical without untenable energy demands. Recent results from the Green500, a ranking of the world's most energy-efficient supercomputers, underscore the contrast between GPUs and CPUs. The top five performers in this industry-standard benchmark were all NVIDIA GPU systems, delivering an average of 70.1 gigaflops per watt. Meanwhile, the top CPU-only systems provided 15.5 gigaflops per watt on average. This 4.5x energy-efficiency differential between GPUs and CPUs highlights the massive total cost of ownership (TCO) advantage of moving these systems to GPUs.

Another measure of the CPU-versus-GPU energy-efficiency and performance differential arrived with NVIDIA's results on the Graph500. NVIDIA delivered a record-breaking result of 410 trillion traversed edges per second, placing first on the Graph500 breadth-first search list. The winning run more than doubled the next highest score and utilized 8,192 NVIDIA H100 GPUs to process a graph with 2.2 trillion vertices and 35 trillion edges. That compares with the next best result on the list, which required roughly 150,000 CPUs for this workload. Hardware footprint reductions of this scale save time, money and energy.

Yet NVIDIA showcased at SC25 that its AI supercomputing platform is far more than GPUs. Networking, CUDA libraries, memory, storage and orchestration are co-designed to deliver a full-stack platform. Enabled by CUDA, NVIDIA is a full-stack platform. Open-source libraries and frameworks such as those in the CUDA-X ecosystem are where big speedups occur.

Snowflake recently announced an integration of NVIDIA A10 GPUs to supercharge data science workflows. Snowflake ML now comes preinstalled with NVIDIA cuML and cuDF libraries to accelerate popular ML algorithms with these GPUs. With this native integration, Snowflake's users can easily accelerate model development cycles with no code changes required. (A hedged sketch of this drop-in pattern appears just below.)
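To make the drop-in idea concrete, here is a small, hypothetical sketch of the pattern cuML enables: its estimators mirror the scikit-learn API, so swapping a single import moves training to the GPU. The dataset and parameters are illustrative, and this is not Snowflake's actual integration code.

```python
# Hedged sketch of the scikit-learn-compatible pattern cuML provides.
# Requires RAPIDS cuML and an NVIDIA GPU; dataset and parameters are
# illustrative, not Snowflake's integration code.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Swap this one import to move training from CPU to GPU:
# from sklearn.ensemble import RandomForestClassifier  # CPU baseline
from cuml.ensemble import RandomForestClassifier       # GPU-accelerated

X, y = make_classification(n_samples=100_000, n_features=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Same fit/predict/score calls as scikit-learn, now running on the GPU.
model = RandomForestClassifier(n_estimators=100, max_depth=16)
model.fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))
```

The design point worth noting is that the API surface, not the hardware, is what users interact with: because the estimator signatures match scikit-learn's, existing pipelines can adopt GPU acceleration with minimal churn.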
NVIDIA's benchmark runs show 5x less time required for Random Forest and up to 200x less for HDBSCAN on NVIDIA A10 GPUs compared with CPUs.

The flip was the turning point. The scaling laws are the trajectory forward. And at every stage, GPUs are the engine driving AI into its next chapter. But CUDA-X and many open-source software libraries and frameworks are where much of the magic happens. CUDA-X libraries accelerate workloads across every industry and application — engineering, finance, data analytics, genomics, biology, chemistry, telecommunications, robotics and much more.

"The world has a massive investment in non-AI software. From data processing to science and engineering simulations, representing hundreds of billions of dollars in cloud computing spend each year," Huang said on NVIDIA's recent earnings call. Many applications that once ran exclusively on CPUs are now rapidly shifting to CUDA GPUs. "Accelerated computing has reached a tipping point. AI has also reached a tipping point and is transforming existing applications while enabling entirely new ones," he said.

What began as an energy-efficiency imperative has matured into a scientific platform: simulation and AI fused at scale. The leadership of NVIDIA GPUs in the TOP100 is both proof of this trajectory and a signal of what comes next — breakthroughs across every discipline. As a result, researchers can now train trillion-parameter models, simulate fusion reactors and accelerate drug discovery at scales CPUs alone could never reach.

The Three Scaling Laws Driving AI's Next Frontier

The change from CPUs to GPUs is not just a milestone in supercomputing. It's the foundation for the three scaling laws that represent the roadmap for AI's next workloads: pretraining, post-training and test-time scaling.

Pretraining scaling was the first of the laws to emerge. Researchers discovered that as datasets, parameter counts and compute grew, model performance improved predictably (a canonical form of this relationship is sketched below). Doubling the data or parameters meant leaps in accuracy and versatility. On the latest MLPerf Training industry benchmarks, the NVIDIA platform delivered the highest performance on every test and was the only platform to submit results on all tests. Without GPUs, the "bigger is better" era of AI research would have stalled under the weight of power budgets and time constraints.

Post-training scaling extends the story. Once a foundation model is built, it must be refined — tuned for industries, languages or safety constraints. Techniques like reinforcement learning from human feedback, pruning and distillation require enormous additional compute; in some cases, the demands rival pretraining itself. This is like a student improving after basic education. GPUs again provide the horsepower, enabling continual fine-tuning and adaptation across domains.

Test-time scaling, the newest law, may prove the most transformative. Modern models powered by mixture-of-experts architectures can reason, plan and evaluate multiple solutions in real time. Chain-of-thought reasoning, generative search and agentic AI demand dynamic, recursive compute — often exceeding pretraining requirements. This stage will drive exponential demand for inference infrastructure — from data centers to edge devices.

Together, these three laws explain the demand for GPUs for new AI workloads. Pretraining scaling has made GPUs indispensable. Post-training scaling has reinforced their role in refinement. Test-time scaling is ensuring GPUs remain critical long after training ends.
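The article doesn't give the pretraining law a formula, but the "predictable improvement" it describes is commonly written in the Chinchilla-style form from the scaling-laws literature. A sketch, with symbols as generally used in that literature rather than anything NVIDIA specifies:

```latex
% Canonical pretraining scaling relation (in the style of Hoffmann
% et al., 2022): expected loss L falls predictably as parameter
% count N and training tokens D grow. E is the irreducible loss;
% A, B, \alpha, \beta are empirically fitted constants. Shown only
% to make "predictable improvement" concrete.
\[
  L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
\]
```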
This is the next chapter in accelerated computing: a lifecycle where GPUs power every stage of AI — from learning to reasoning to deployment.

Generative, Agentic, Physical AI and Beyond

The world of AI is expanding far beyond basic recommenders, chatbots and text generation. VLMs, or vision language models, are AI systems combining computer vision and natural language processing to understand and interpret images and text. And recommender systems — the engines behind personalized shopping, streaming and social feeds — are but one of many examples of how the massive transition from CPUs to GPUs is reshaping AI.

Meanwhile, generative AI is transforming everything from robotics and autonomous vehicles to software-as-a-service companies, and it represents a massive area of investment in startups. NVIDIA platforms are the only ones that run all of the leading generative AI models, and they handle 1.4 million open-source models.

Once constrained by CPU architectures, recommender systems struggled to capture the complexity of user behavior at scale. With CUDA GPUs, pretraining scaling enables models to learn from massive datasets of clicks, purchases and preferences, uncovering richer patterns. Post-training scaling fine-tunes those models for specific domains, sharpening personalization for industries from retail to entertainment. On leading global online sites, even a 1% gain in the relevance accuracy of recommendations can yield billions more in sales. Electronic commerce sales are expected to reach $6.4 trillion worldwide in 2025, according to Emarketer.

The world's hyperscalers, a trillion-dollar industry, are transforming search, recommendations and content understanding from classical machine learning to generative AI. NVIDIA CUDA excels at both and is the ideal platform for this transition, which is driving infrastructure investment measured in hundreds of billions of dollars.

Now, test-time scaling is transforming inference itself: recommender engines can reason dynamically, evaluating multiple options in real time to deliver context-aware suggestions. The result is a leap in precision and relevance — recommendations that feel less like static lists and more like intelligent guidance. GPUs and scaling laws are turning recommendation from a background feature into a frontline capability of agentic AI, enabling billions of people to sort through trillions of things on the internet with an ease that would otherwise be unfeasible.

What began as conversational interfaces powered by LLMs is now evolving into intelligent, autonomous systems poised to reshape nearly every sector of the global economy. We are experiencing a foundational shift — from AI as a virtual technology to AI entering the physical world. This transformation demands nothing less than explosive growth in computing infrastructure and new forms of collaboration between humans and machines.

Generative AI has proven capable of creating not just new text and images, but code, designs and even scientific hypotheses. Now, agentic AI is arriving — systems that perceive, reason, plan and act autonomously. These agents behave less like tools and more like digital colleagues, carrying out complex, multistep tasks across industries. From legal research to logistics, agentic AI promises to accelerate productivity by serving as autonomous digital workers.

Perhaps the most transformative leap is physical AI — the embodiment of intelligence in robots of every form.
Three computers are required to build physical AI for embodied robots: NVIDIA DGX GB300 to train the reasoning vision-language-action (VLA) model, NVIDIA RTX PRO to simulate, test and validate the model in a virtual world built on Omniverse, and Jetson Thor to run the reasoning VLA at real-time speed.

What's expected next is a breakthrough moment for robotics within years, with autonomous mobile robots, collaborative robots and humanoids disrupting manufacturing, logistics and healthcare. Morgan Stanley estimates there will be 1 billion humanoid robots generating $5 trillion in revenue by 2050. And that's just a sip of what's on tap, signaling how deeply AI will embed into the physical economy.

[Image caption] NVIDIA CEO Jensen Huang stands on stage with a lineup of nine advanced humanoid robots during his keynote address at the GTC DC 2025 conference. The robots, including models from Boston Dynamics, Figure, Agility Robotics and Disney Research, were brought together to showcase NVIDIA's new Project GR00T, a general-purpose foundation model aimed at advancing the capabilities of humanoid robots and artificial intelligence.

AI is no longer just a tool. It performs work and stands to transform every one of the world's markets, worth a combined $100 trillion. And a virtuous cycle of AI has arrived, fundamentally changing the entire computing stack and transitioning all computers into new supercomputing platforms for vastly larger opportunities.

View the full article
  7. I am not someone that thinks you can be therapied out of any kind of mental anguish. Life just doesn't work that way! Sure, it can be a helpful tool, but sometimes you need to pick up a sledgehammer, go back to your hometown that is filled with robots, and smash it all down. Or, at least that's the argument that Virtue and a Sledgehammer makes, the latest game from The Red Strings Club and The Cosmic Wheel Sisterhood developer Deconstructeam. Read more View the full article
  8. All these massive, multi-billion dollar acquisitions are getting a bit scary, aren't they? The one on everyone's minds at the moment is of course Netflix's proposed takeover of movie studio giant Warner Bros, offering up a cool $82.7 billion in exchange. This, of course, has an indescribably massive potential to ruin mainstream cinema, but we won't get into that right this second, because there's another concern: how much the streamer does not seem to care about the games side of Warner Bros. Read more View the full article
  9. Rockstar Games' firing of more than 30 workers just over a month ago has once again been brought up by UK politicians, with prime minister Keir Starmer calling it a "deeply concerning case" which will be looked into by government ministers. Rockstar have been accused of union busting by the Independent Workers' Union of Great Britain over the firings, with the union having filed legal claims against the company. The dismissals reportedly followed a discussion on a union-focused Discord server in which staff cited emails from Rockstar about changes to the company's internal Slack policies. Read more View the full article
  10. Romero Games, the studio founded by Doom co-creator John Romero and Brenda Romero, are marching on despite losing staff after Microsoft suddenly pulled funding for their next shooter amid mass layoffs this summer. Romero says the project isn't living on in its previous guise, but is instead having elements pulled from it as part of a near-total redesign into a smaller game. Read more View the full article
  11. Well, there you go. Arc Raiders' winter update has now got a proper release date. Snow will be coming to the shooter on December 16th, and judging by the way Embark have revealed that, it looks like there could be a temperature-based twist coming to the usual murder-deathing and trying to sneak into places you shouldn't. Read more View the full article
  12. With Call of Duty Black Ops 7 now out in the wild and having earned a largely mixed reception, Activision have said right, that's it, time to do a thing. Said thing is committing to no longer releasing entries in the same sub-series, be that Modern Warfare or Black Ops, in back-to-back years going forwards. Read more View the full article
  13. Three new trademarks which very much look to be related to Larian's Divinity RPG series have been unearthed, with one closely mirroring the design of a desert monolith Geoff Keighley's posted a picture of in advance of The Geoff Awards. That monolith teaser, which Keoff captioned "regal.inspiring.thickness" has been the subject of much chatter for the past week or so, as the big showcase hype machine churns. Read more View the full article
  14. As someone staring down the barrel of untangling a box of Christmas lights, I won't lie, a not tiny part of me wants to throw them in the bin and replace the whole lot with a fresh set from the shop. (I must stress, I won't be doing that. But the urge still stands.) However, Restory is a cosy game that appeals to the better angel of my nature, the part that will patiently untangle the lights so they can be enjoyed for another year. Restory sets you up as the manager of a small Tokyo electronics maintenance shop at the turn of the millennium. Customers bring you broken devices to painstakingly disassemble, clean, and replace their broken parts, restoring them to working order. Though you can also order broken devices and spare parts online using your delightfully dated PC. Read more View the full article
  15. Hello HairryTurttleneck, Welcome to the UnityHQ Nolfseries Community. Please feel free to browse around and get to know the others. If you have any questions, please don't hesitate to ask. Be sure to join our Discord. HairryTurttleneck joined on 12/09/2025. View Member