UHQBot
Posts: 39,331 · Days Won: 25
Everything posted by UHQBot
-
It’s a nightmare scenario for Yu-Gi-Oh fans who have to look up cards with convoluted descriptions. Over the weekend, the administrators of Yugipedia—which, as its name implies, is a wiki on all things Yu-Gi-Oh—announced that they had accidentally deleted half of the wiki’s primary database. “Yugipedia has suffered a… Read more... View the full article
-
Hello nolfrevival, welcome to the UnityHQ Nolfseries Community. Please feel free to browse around and get to know the others. If you have any questions, please don't hesitate to ask. nolfrevival joined on 03/07/2023. View Member
-
Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows.

An adrenaline-fueled virtual ride in the sky is sure to satisfy all thrill seekers — courtesy of 3D artist Kosei Wano’s sensational animation, Moon Hawk. Wano outlines his creative workflow this week In the NVIDIA Studio.

Plus, join the #GameArtChallenge — running through Sunday, April 30 — by using the hashtag to share video game fan art, character creations and more for a chance to be featured across NVIDIA social media channels.

“Welcome to the #GameArtChallenge! Ever created a video game or some game inspired art like this one from @beastochahin? Share it with us using #GameArtChallenge from today to the end of April for a chance to be featured on the Studio or @nvidiaomniverse channels! pic.twitter.com/7tqLtWk9pV” — NVIDIA Studio (@NVIDIAStudio) March 6, 2023

Original game content can be made with NVIDIA Omniverse — a platform for creating and operating metaverse applications — using the Omniverse Machinima app. This enables users to collaborate in real time when animating characters and environments in virtual worlds.

Who Dares, Wins

Wano often finds inspiration exploring the diversity of flora and fauna. He has a penchant for examining birds — and even knows the difference in wing shapes between hawks and martins, he said. This interest in flying entities extends to his fascination with aircraft.

For Moon Hawk, Wano took on the challenge of visually evolving a traditional, fuel-based fighter jet into an electric one. With reference material in hand, Wano opened the 3D app Blender to scale the fighter jet to accurate, real-life sizing, then roughly sketched within the 3D design space, his preferred method to formulate models.

“Moon Hawk” in its traditional form.

The artist then deployed several tips and tricks to model more efficiently: adding Blender’s automatic detailing modifier, applying neuro-reflex modeling to change the aircraft’s proportions, then dividing the model’s major 3D shapes into sections to edit individually — a step Wano calls “dividing each difficulty.”

Neuro-reflex modeling enables Wano to change proportions while maintaining model integrity.

Blender Cycles RTX-accelerated OptiX ray tracing, unlocked by the artist’s GeForce RTX 3080 Ti GPU, enabled interactive, photorealistic modeling in the viewport. “OptiX’s AI-powered denoiser renders lightly, allowing for comfortable trial and error,” said Wano, who then applied sculpting and other details.

Next, Wano used geo nodes to add organic style and customization to his Blender scenes and animate his fighter jet.

Applying geo nodes.

Blender geo nodes make modeling an almost completely procedural process — allowing for non-linear, non-destructive workflows and the instancing of objects — to create incredibly detailed scenes using small amounts of data.

The “Moon Hawk” model is nearly complete.

For Moon Hawk, Wano applied geo nodes to mix materials not found in nature, creating unique textures for the fighter jet. Being able to make real-time base mesh edits without the concern of destructive workflows gave Wano the freedom to alter his model on the fly with an assist from his GPU. “With the GeForce RTX 3080 Ti, there’s no problem, even with a model as complicated as this,” he said.

Animations accelerated at the speed of light with Wano’s GeForce RTX GPU.
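The post does not share Wano's actual project settings, but as a rough illustration, here is a minimal Blender Python sketch of the kind of configuration that enables the OptiX-accelerated Cycles rendering and AI viewport denoising described above. It assumes Blender 3.x with the bundled Cycles add-on and an NVIDIA RTX GPU; nothing in it comes from Wano's files.

```python
import bpy

# Minimal sketch: switch the scene to Cycles, render on the GPU through
# OptiX, and turn on the AI denoiser for the interactive viewport.
scene = bpy.context.scene
scene.render.engine = 'CYCLES'
scene.cycles.device = 'GPU'

# Point Cycles at the OptiX backend and enable every detected device.
cycles_prefs = bpy.context.preferences.addons['cycles'].preferences
cycles_prefs.compute_device_type = 'OPTIX'
cycles_prefs.get_devices()       # refresh the device list
for device in cycles_prefs.devices:
    device.use = True

# AI-powered denoising while working in the viewport, as described above.
scene.cycles.use_preview_denoising = True
scene.cycles.preview_denoiser = 'OPTIX'
```

The same preferences can also be set by hand under Edit > Preferences > System > Cycles Render Devices; the script just makes the steps explicit.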
Wano kicked off the animation phase by selecting the speed of the fighter jet and roughly designing its flight pattern.

Mapping the flight path in advance.

The artist referenced popular fighter jet scenes in cinema and video games, as well as studied basic rules of physics, such as inertia, to ensure the flight patterns in his animation were realistic. Then, Wano returned to using geo nodes to add 3D lighting effects without the need to simulate or bake. Such lighting modifications helped to make rendering the project simpler in its final stage. Parameters were edited with ease, in addition to applying particle simulations and manually shaking the camera to add more layers of immersion to the scenes.

Final color edits in Blender.

With the animation complete, Wano added short motion blur. Accelerated motion blur rendering enabled by his RTX GPU and the NanoVDB toolset for easy rendering of volumes let him apply this effect quickly. And RTX-accelerated OptiX ray tracing in Blender Cycles delivered the fastest final frame renders.

Wano imported final files into Blackmagic Design’s DaVinci Resolve application, where GPU-accelerated color grading, video editing and color scopes helped the artist complete the animation in record time.

3D artist Kosei Wano.

GeForce RTX was a simple choice for Wano, who said, “NVIDIA products have been trusted by many people for a long time.”

For a deep dive into Wano’s workflow, visit the NVIDIA Studio YouTube channel to browse the playlist Designing and Modeling a Sci-Fi Ship in Blender With Wanoco4D and view each stage: Modeling, Materials, Geometry Nodes and Lightning Effect, Setting Animation and Lights and Rendering. View more of Wano’s impressive portfolio on ArtStation.

Who Dares With Photogrammetry, Wins Again

Wano, like most artists, is always growing his craft, refining essential skills and learning new techniques, including photogrammetry — the art and science of extracting 3D information from photographs.

In the NVIDIA Studio artist Anna Natter recently highlighted her passion for photogrammetry, noting that virtually anything can be preserved in 3D and showcasing features that have the potential to save 3D artists countless hours. Wano saw this same potential when experimenting with the technology in Adobe Substance 3D Sampler.

“Photogrammetry can accurately reproduce the complex real world,” said Wano, who would encourage other artists to think big in terms of both individual objects and environments. “You can design an entire realistic space by placing it in a 3D virtual world.”

Try out photogrammetry and post your creations with the #StudioShare hashtag for a chance to be featured across NVIDIA Studio’s social media channels.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. View the full article
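As a companion to the sketch earlier in this post, here is a similarly hedged example, again assuming Blender 3.x and using placeholder values rather than Wano's settings, of the last rendering steps the article mentions: enabling motion blur and OptiX denoising for the final-frame Cycles renders.

```python
import bpy

scene = bpy.context.scene

# Motion blur for the final animation (the shutter value is a placeholder,
# not Wano's actual setting).
scene.render.use_motion_blur = True
scene.render.motion_blur_shutter = 0.5

# OptiX denoising for the final-frame Cycles renders.
scene.cycles.use_denoising = True
scene.cycles.denoiser = 'OPTIX'

# Render the animation to the scene's configured output path.
bpy.ops.render.render(animation=True)
```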
-
Preparing a retailer’s online catalog once required expensive physical photoshoots to capture products from every angle. A Tel Aviv startup is saving brands time and money by transforming these camera clicks into mouse clicks.

Hexa uses GPU-accelerated computing to help companies turn their online inventory into 3D renders that shoppers can view in 360 degrees, animate or even try on virtually to help their buying decisions. The company, which recently announced a $20.5 million funding round, is working with brands in fashion, furniture, consumer electronics and more.

“The world is going 3D,” said Yehiel Atias, CEO of Hexa. “Just a few years ago, the digital infrastructure to do this was still so expensive that it was more affordable to arrange a photographer, models and lighting. But with the advancements of AI and NVIDIA GPUs, it’s now feasible for retailers to use synthetic data to replace physical photoshoots.”

Hexa’s 3D renders are used on major retail websites such as Amazon, Crate & Barrel and Macy’s. The company creates thousands of renders each month, reducing the need for physical photoshoots of every product in a retailer’s catalog. Hexa estimates that it can save customers up to 300 pounds of carbon emissions for each product imaged digitally instead of physically.

From Physical Photoshoots to AI-Accelerated Renders

Hexa can reconstruct a single 2D image, or a set of low-quality 2D images, into a high-fidelity 3D asset. The company uses differing levels of automation for its renders depending on the complexity of the shape, the amount of visual data that needs to be reconstructed, and the similarity of the object to Hexa’s existing dataset.

To automate elements of its workflow, the team uses dozens of AI algorithms that were developed using the PyTorch deep learning framework and run on NVIDIA Tensor Core GPUs in the cloud. If one of Hexa’s artists is reconstructing a 3D toaster, for example, one algorithm can identify similar geometries the team has created in the past to give the creator a head start.

https://blogs.nvidia.com/wp-content/uploads/2023/03/Toaster-Item-Promo-video.mp4

Another neural network can scan a retailer’s website to identify how many of its products Hexa can support with 3D renders. The company’s entire rendering pipeline, too, runs on NVIDIA GPUs available through Amazon Web Services.

“Accessing compute resources through AWS gives us the option to use thousands of NVIDIA GPUs at a moment’s notice,” said Segev Nahari, lead technical artist at Hexa. “If I need 10,000 frames to be ready by a certain time, I can request the hardware I need to meet the deadline.” Nahari estimates that rendering on NVIDIA GPUs is up to 3x faster than relying on CPUs.

Broadening Beyond Retail, Venturing Into Omniverse

Hexa developers are continually experimenting with new methods for 3D rendering — looking for workflow improvements in preprocessing, object reconstruction and post-processing. The team recently began working with NVIDIA GET3D, a generative AI model by NVIDIA Research that generates high-fidelity, three-dimensional shapes based on a training dataset of 2D images. By training GET3D on Hexa’s dataset of shoes, the team was able to generate 3D models of novel shoes not part of the training data.

In addition to its work in ecommerce, Hexa’s research and development team is investigating new applications for the company’s AI software. “It doesn’t stop at retail,” Atias said.
“Industries from gaming to fashion and healthcare are finding out that synthetic data and 3D technology is a more efficient way to do things like digitize inventory, create digital twins and train robots.”

The team credits its membership in NVIDIA Inception, a global program that supports cutting-edge startups, as a “huge advantage” in leveling up the technology Hexa uses. “Being part of Inception opens doors that outsiders don’t have,” Atias said. “For a small company trying to navigate the massive range of NVIDIA hardware and software offerings, it’s a door-opener to all the cool tools we wanted to experiment with and understand the potential they could bring to Hexa.”

Hexa is testing the NVIDIA Omniverse Enterprise platform — an end-to-end platform for building and operating metaverse applications — as a tool to unify its annotating and rendering workflows, which are used by dozens of 3D artists around the globe. Omniverse Enterprise enables geographically dispersed teams of creators to customize their rendering pipelines and collaborate to build 3D assets.

“Each of our 3D artists has a different software workflow that they’re used to — so it can be tough to get a unified output while still being flexible about the tools each artist uses,” said Jonathan Clark, Hexa’s CTO. “Omniverse is an ideal candidate in that respect, with huge potential for Hexa. The platform will allow our artists to use the rendering software they’re comfortable with, while also allowing our team to visualize the final product in one place.”

To learn more about NVIDIA Omniverse and next-generation content creation, register free for NVIDIA GTC, a global conference for the era of AI and the metaverse, taking place online March 20-23.

Images and videos courtesy of Hexa

View the full article
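Hexa has not published its models or code, so the following is only a generic sketch of the kind of similarity lookup described in the article, where a new product photo is matched against geometries the team has already built. It assumes PyTorch with a recent torchvision; the pretrained ResNet backbone, the `catalog_embeddings` tensor and the file path are placeholders, not Hexa's actual pipeline.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Generic sketch of an image-similarity lookup (not Hexa's actual pipeline):
# embed a new product photo with a pretrained CNN, then find the closest
# previously built assets by cosine similarity.
device = "cuda" if torch.cuda.is_available() else "cpu"

backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()          # keep the 2048-d feature vector
backbone = backbone.to(device).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(image_path: str) -> torch.Tensor:
    image = preprocess(Image.open(image_path).convert("RGB"))
    return F.normalize(backbone(image.unsqueeze(0).to(device)), dim=-1)

# Hypothetical catalog: one precomputed, normalized embedding per 3D asset.
catalog_embeddings = torch.randn(10_000, 2048, device=device)
catalog_embeddings = F.normalize(catalog_embeddings, dim=-1)

query = embed("new_toaster_photo.jpg")       # placeholder image path
scores = query @ catalog_embeddings.T        # cosine similarities
top_scores, top_ids = scores.topk(5)         # the 5 closest existing assets
print(top_ids.tolist(), top_scores.tolist())
```

In practice the catalog embeddings would be precomputed from photos or renders of existing assets and stored alongside asset IDs; the random tensor here only stands in for that index.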
-
Hello Tommilive, welcome to the UnityHQ Nolfseries Community. Please feel free to browse around and get to know the others. If you have any questions, please don't hesitate to ask. Tommilive joined on 03/07/2023. View Member
-
Hello Miauracle, welcome to the UnityHQ Nolfseries Community. Please feel free to browse around and get to know the others. If you have any questions, please don't hesitate to ask. Miauracle joined on 03/06/2023. View Member
-
The Nintendo GameCube was a weird little console, one that had all kinds of wacky add-ons and peripherals released for it over the years, from LAN adapters to Game Boy Links to bongo drums. One thing I never knew about until today, however, was the company’s plans for an official LCD screen. Read more... View the full article
-
Hello parham, welcome to the UnityHQ Nolfseries Community. Please feel free to browse around and get to know the others. If you have any questions, please don't hesitate to ask. parham joined on 03/06/2023. View Member
-
Paradox Announces Cities: Skylines 2, Plus Loads Of Other Stuff
UHQBot posted a topic in Gaming News
Strategy specialists Paradox had a big day today, taking the lid off a bunch of new projects due for release over the next couple of years, including both new games and expansions to existing ones. Read more... View the full article -
Nerdy conventions and concerts can be a lot of fun, a great way to meet like-minded people and share your love of something with others. They also have a reputation for smelling bad, as certain people attending may not be taking care of themselves as much as you would hope…or even bathing at all, in some cases. So it… Read more... View the full article
-
North of the Border is a YouTube channel that creates the most astonishing, often horrendous clay models, depicting scenes or characters from some of your favorite video games. Whether it’s Elden Ring, Breath of the Wild, or Pokémon, the site’s creator, Adam, aims to produce outstandingly realistic models. Sometimes,… Read more... View the full article
-
Wo Long: Fallen Dynasty’s Most Frustrating Enemy Can Go To Hell
UHQBot posted a topic in Gaming News
After about 40 hours of hellish action and sitting around level 95, I’m stuck on Wo Long: Fallen Dynasty’s last boss. I guess that makes sense. This is the end of the game, after all, so of course the finale would be difficult. But reaching this point was perilous for one specific reason: Despite being one of the most… Read more... View the full article -
There are different kinds of “scary” things. There’s what I’d call “haunted house scary,” anything that immediately elicits a sharp scream, like a plate falling on the kitchen floor, that you forget about as soon as it’s over—you gather up the shards and move on. Read more... View the full article
-
Welcome to the first edition of Exp. Share, Kotaku’s new Pokémon column in which we dive deep to explore notable characters, urban legends, communities, and just plain weird quirks from throughout the Pokémon franchise. This week we’re taking a look at one of the breakout characters from Pokémon Scarlet and Violet.… Read more... View the full article
-
Pokémon Scarlet and Violet launched in an incredibly rough state last November, and it seems that the latest patch has introduced an even bigger problem for its players. Fans on Reddit have reported that connecting to the Pokémon Go app or downloading the DLC has caused them to lose their entire save files. Read more... View the full article
-
In Lincoln County, Tennessee a Last Of Us-like situation is happening as a strong, heat-resistant fungus is spreading across trees, homes, cars, signs, and decks. And you can blame all of it on Jack Daniels and its warehouses full of whiskey, which this particular fungus feeds off of. Residents are so mad that at… Read more... View the full article
-
Jimmy “MrBeast” Donaldson, the biggest YouTuber in the world, recently asked fans to do him a favor: patrol the shelves of their local big box retailer to make sure his Feastables chocolate bars weren’t a huge mess. Some appear to have happily enlisted in his effort, while others lampooned the brand activation psy-op… Read more... View the full article
-
Last week, Final Fantasy producer Yoshida Naoki objected to the label “JRPG,” stating that the term felt “discriminatory.” His comments kicked off a round of discussion about whether or not the term “JRPG” was racially problematic. Some couldn’t understand how JRPG could be pejorative, especially given the genre’s… Read more... View the full article
-
There are a lot of tough jerks in Wo Long: Fallen Dynasty. I’m talking warriors like the infamous Lu Bu (Dynasty Warriors fans know), demons such as the four-horned cow monstrosity Aoye, electric dragons, fiendish porcupines, and so many more you’ll probably want to dig your own grave instead of engaging in the game’s… Read more... View the full article
-
Today, Paramount Pictures and Nickelodeon released the first trailer for the upcoming animated film Teenage Mutant Ninja Turtles: Mutant Mayhem. And it’s not only a cute trailer that promises a fun, teen-focused TMNT story, but it also looks amazing. Just ignore the weird Seth Rogen stuff. Read more... View the full article
-
You think Elden Ring is hard? Try painstakingly recreating one of the game’s iconic locations out of paper, that’s hard. But that’s also what artist Sky Burkson did when he made a miniature model of the Leyndell Royal Capital out of paper, toothpicks, wool, and other components. Read more... View the full article
-
Accelerated computing — a capability once confined to high-performance computers in government research labs — has gone mainstream. Banks, car makers, factories, hospitals, retailers and others are adopting AI supercomputers to tackle the growing mountains of data they need to process and understand.

These powerful, efficient systems are superhighways of computing. They carry data and calculations over parallel paths on a lightning journey to actionable results. GPU and CPU processors are the resources along the way, and their onramps are fast interconnects. The gold standard in interconnects for accelerated computing is NVLink.

So, What Is NVLink?

NVLink is a high-speed connection for GPUs and CPUs formed by a robust software protocol, typically riding on multiple pairs of wires printed on a computer board. It lets processors send and receive data from shared pools of memory at lightning speed.

Now in its fourth generation, NVLink connects host and accelerated processors at rates up to 900 gigabytes per second (GB/s). That’s more than 7x the bandwidth of PCIe Gen 5, the interconnect used in conventional x86 servers. And NVLink sports 5x the energy efficiency of PCIe Gen 5, thanks to data transfers that consume just 1.3 picojoules per bit.

The History of NVLink

First introduced as a GPU interconnect with the NVIDIA P100 GPU, NVLink has advanced in lockstep with each new NVIDIA GPU architecture. In 2018, NVLink hit the spotlight in high performance computing when it debuted connecting GPUs and CPUs in two of the world’s most powerful supercomputers, Summit and Sierra. The systems, installed at Oak Ridge and Lawrence Livermore National Laboratories, are pushing the boundaries of science in fields such as drug discovery, natural disaster prediction and more.

Bandwidth Doubles, Then Grows Again

In 2020, the third-generation NVLink doubled its max bandwidth per GPU to 600 GB/s, packing a dozen interconnects in every NVIDIA A100 Tensor Core GPU. The A100 powers AI supercomputers in enterprise data centers, cloud computing services and HPC labs across the globe.

Today, 18 fourth-generation NVLink interconnects are embedded in a single NVIDIA H100 Tensor Core GPU. And the technology has taken on a new, strategic role that will enable the most advanced CPUs and accelerators on the planet.

A Chip-to-Chip Link

NVIDIA NVLink-C2C is a version of the board-level interconnect to join two processors inside a single package, creating a superchip. For example, it connects two CPU chips to deliver 144 Arm Neoverse V2 cores in the NVIDIA Grace CPU Superchip, a processor built to deliver energy-efficient performance for cloud, enterprise and HPC users.

NVIDIA NVLink-C2C also joins a Grace CPU and a Hopper GPU to create the Grace Hopper Superchip. It packs accelerated computing for the world’s toughest HPC and AI jobs into a single chip. Alps, an AI supercomputer planned for the Swiss National Computing Center, will be among the first to use Grace Hopper. When it comes online later this year, the high-performance system will work on big science problems in fields from astrophysics to quantum chemistry.

The Grace CPU packs 144 Arm Neoverse V2 cores across two die connected by NVLink-C2C.

Grace and Grace Hopper are also great for bringing energy efficiency to demanding cloud computing workloads. For example, Grace Hopper is an ideal processor for recommender systems. These economic engines of the internet need fast, efficient access to lots of data to serve trillions of results to billions of users daily.
Recommenders get up to 4x more performance and greater efficiency using Grace Hopper than using Hopper with traditional CPUs.

In addition, NVLink is used in a powerful system-on-chip for automakers that includes NVIDIA Hopper, Grace and Ada Lovelace processors. NVIDIA DRIVE Thor is a car computer that unifies intelligent functions such as digital instrument cluster, infotainment, automated driving, parking and more into a single architecture.

LEGO Links of Computing

NVLink also acts like the socket stamped into a LEGO piece. It’s the basis for building supersystems to tackle the biggest HPC and AI jobs. For example, NVLinks on all eight GPUs in an NVIDIA DGX system share fast, direct connections via NVSwitch chips. Together, they enable an NVLink network where every GPU in the server is part of a single system.

To get even more performance, DGX systems can themselves be stacked into modular units of 32 servers, creating a powerful, efficient computing cluster. NVLink is one of the key technologies that let users easily scale modular NVIDIA DGX systems to a SuperPOD with up to an exaflop of AI performance. Users can connect a modular block of 32 DGX systems into a single AI supercomputer using a combination of an NVLink network inside the DGX and NVIDIA Quantum-2 switched InfiniBand fabric between them. For example, an NVIDIA DGX H100 SuperPOD packs 256 H100 GPUs to deliver up to an exaflop of peak AI performance.

To get even more performance, users can tap into the AI supercomputers in the cloud such as the one Microsoft Azure is building with tens of thousands of A100 and H100 GPUs. It’s a service used by groups like OpenAI to train some of the world’s largest generative AI models. And it’s one more example of the power of accelerated computing.

View the full article
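To ground the figures quoted in this article, here is a short, hedged Python sketch that does two things: a back-of-the-envelope check of the bandwidth and energy numbers (the roughly 128 GB/s PCIe Gen 5 x16 baseline is an assumption; the other values come from the article), and a peer-access query that shows the direct GPU-to-GPU paths an NVLink or NVSwitch fabric exposes inside a multi-GPU server. The second part needs PyTorch on a machine with more than one NVIDIA GPU, and peer access on its own does not prove the link is NVLink rather than PCIe.

```python
import torch

# 1) Back-of-the-envelope check of the quoted NVLink figures.
nvlink_bw_gbs = 900          # fourth-generation NVLink, GB/s per H100 GPU
pcie5_x16_bw_gbs = 128       # assumed PCIe Gen 5 x16 baseline, GB/s
energy_pj_per_bit = 1.3      # quoted transfer energy, picojoules per bit

print(f"NVLink vs PCIe Gen 5: ~{nvlink_bw_gbs / pcie5_x16_bw_gbs:.1f}x bandwidth")  # ~7x
watts = nvlink_bw_gbs * 1e9 * 8 * energy_pj_per_bit * 1e-12
print(f"Power to move data at the full 900 GB/s: ~{watts:.1f} W")                   # ~9.4 W

# 2) Which GPU pairs on this machine can reach each other's memory directly?
# Inside a DGX-class server these paths run over NVLink/NVSwitch.
for src in range(torch.cuda.device_count()):
    for dst in range(torch.cuda.device_count()):
        if src != dst and torch.cuda.can_device_access_peer(src, dst):
            print(f"GPU {src} -> GPU {dst}: direct peer access available")
```

On NVIDIA systems, `nvidia-smi topo -m` prints the link matrix and labels NVLink connections (NV1, NV2, and so on), which is the quickest way to confirm the topology the article describes.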
