News Posts matching #technical

NVIDIA's Bryan Catanzaro Discusses Future of AI Personal Computing

Imagine a world where you can whisper your digital wishes into your device, and poof, it happens. That world may be coming sooner than you think. But if you're worried about AI doing your thinking for you, you might be waiting for a while. In a fireside chat Wednesday (March 20) at NVIDIA GTC, the global AI conference, Kanjun Qiu, CEO of Imbue, and Bryan Catanzaro, VP of applied deep learning research at NVIDIA, challenged many of the clichés that have long dominated conversations about AI. Launched in October 2022, Imbue made headlines with its Series B funding round last year, raising over $200 million at a $1 billion valuation.

The Future of Personal Computing
Qiu and Catanzaro discussed the role that virtual worlds will play in this, and how they could serve as interfaces for human-technology interaction. "I think it's pretty clear that AI is going to help build virtual worlds," said Catanzaro. "I think the maybe more controversial part is virtual worlds are going to be necessary for humans to interact with AI." People have an almost primal fear of being displaced, Catanzaro said, but what's much more likely is that our capabilities will be amplified as the technology fades into the background. Catanzaro compared it to the adoption of electricity. A century ago, people talked a lot about electricity. Now that it's ubiquitous, it's no longer the focus of broader conversations, even as it makes our day-to-day lives better.

Sony Semiconductor Solutions Selects Cutting-Edge AMD Adaptive Computing Tech

Yesterday, AMD announced that its cutting-edge adaptive computing technology was selected by Sony Semiconductor Solutions (SSS) for its newest automotive LiDAR reference design. SSS, a global leader in image sensor technology, and AMD joined forces to deliver a powerful and efficient LiDAR solution for use in autonomous vehicles. Using adaptive computing technology from AMD significantly extends the SSS LiDAR system's capabilities, offering extraordinary accuracy, fast data processing, and high reliability for next-generation autonomous driving solutions.

In the rapidly evolving landscape of autonomous driving, the demand for precise and reliable sensor technology has never been greater. LiDAR (Light Detection and Ranging) technology plays a pivotal role in enabling depth perception and environmental mapping for various industries. LiDAR delivers image classification, segmentation, and object detection data that is essential for 3D vision perception enhanced by AI, which cannot be provided by cameras alone, especially in low-light or inclement weather. The dedicated LiDAR reference design addresses the complexities of autonomous vehicle development with a standardized platform to enhance safety in navigating diverse driving scenarios.
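
For context on what that perception pipeline consumes: a LiDAR sensor reports ranges and beam angles, which must first be converted into a 3D point cloud before any AI-based segmentation or object detection can run. Below is a minimal, illustrative Python sketch of that spherical-to-Cartesian step; the function and sample values are hypothetical and not part of the SSS/AMD reference design.

```python
import numpy as np

def lidar_to_point_cloud(ranges, azimuths, elevations):
    """Convert raw LiDAR returns (range in meters, angles in radians)
    into Cartesian XYZ points - the representation that downstream
    3D segmentation and object detection models consume."""
    x = ranges * np.cos(elevations) * np.cos(azimuths)
    y = ranges * np.cos(elevations) * np.sin(azimuths)
    z = ranges * np.sin(elevations)
    return np.stack([x, y, z], axis=-1)  # shape: (N, 3)

# Three example returns from a single sweep (values are made up)
ranges = np.array([12.4, 30.1, 7.8])
azimuths = np.radians([0.0, 45.0, 90.0])
elevations = np.radians([-2.0, 0.0, 1.5])
print(lidar_to_point_cloud(ranges, azimuths, elevations))
```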

Avatar: Frontiers of Pandora's Latest Patch Claims Fixing of FSR 3 Artefacts & FPS Tracking

A new patch for Avatar: Frontiers of Pandora deployed on March 1, bringing more than 150 fixes and adjustments to the game. Title Update 3 includes technical, UI, balancing, main quest, and side quest improvements, plus additional bug fixes. To provide players with an improved experience, the development team, led by Massive Entertainment, listened to feedback from the community while working on Title Update 3.

An additional patch, Title Update 3.1, was deployed on March 7, adding additional fixes to the game. Check out the full list of improvements included in Title Update 3 & 3.1 here, and read on for the most notable improvements now available in Avatar: Frontiers of Pandora.

Update Mar 14th: TPU has received alerts regarding player feedback: Massive Entertainment's "Title Update 3" has reportedly broken the studio's implementation of FSR 3 in Avatar: Frontiers of Pandora. We will keep an eye on official Ubisoft channels—so far they have not addressed these FSR-related problems.

NVIDIA Introduces Generative AI Professional Certification

NVIDIA is offering a new professional certification in generative AI to enable developers to establish technical credibility in this important domain. Generative AI is revolutionizing industries worldwide, yet there is a critical skills gap and a need to upskill employees to more fully harness the technology. Available for the first time from NVIDIA, this new professional certification enables developers, career professionals, and others to validate and showcase their generative AI skills and expertise. Our new professional certification program introduces two associate-level generative AI certifications, focusing on proficiency in large language models and multimodal workflow skills.

"Generative AI has moved to center stage as governments, industries and organizations everywhere look to harness its transformative capabilities," NVIDIA founder and CEO Jensen Huang recently said. The certification will become available starting at GTC, where in-person attendees can also access recommended training to prepare for a certification exam. "Organizations in every industry need to increase their expertise in this transformative technology," said Greg Estes, VP of developer programs at NVIDIA. "Our goals are to assist in upskilling workforces, sharpen the skills of qualified professionals, and enable individuals to demonstrate their proficiency in order to gain a competitive advantage in the job market."

IBM Opens State-of-the-Art "X-Force Cyber Range" in Washington DC

IBM has announced the official opening of the new IBM X-Force Cyber Range in Washington, DC. The range includes new custom training exercises specifically designed to help U.S. federal agencies, their suppliers, and critical infrastructure organizations more effectively respond to persistent and disruptive cyberattacks, and to threats posed by AI. The state-of-the-art facility is designed to help everyone, from legal and mission-critical leaders to the C-suite and technical security leaders, prepare for a real-world cyber incident. According to IBM's 2023 Cost of a Data Breach report, the global average cost of a data breach reached $4.45 million, with the US facing the highest breach costs across all regions. Organizations that formed an incident response (IR) team and tested their IR plan experienced faster incident response times and lower costs than organizations that did neither. In fact, the report found that high levels of IR planning and testing saved industry and government nearly $1.5 million in breach costs and 54 days from the data breach lifecycle.

"From national security threats to supply chain disruptions impacting the goods and services we rely on every day, cyberattacks on government and critical infrastructure can have ramifications that go far beyond the balance sheet," said Alice Fakir, Partner, Lead of Cybersecurity Services, US Federal Market for IBM Consulting. "The elite and highly customizable cyber response training we provide at our new DC range helps organizations and federal agencies better defend against existing and emerging threats, and also addresses federal mandates like those in the Biden Administration's Executive Order 14028 focused on improving the nation's cybersecurity."

ServiceNow, Hugging Face & NVIDIA Release StarCoder2 - a New Open-Access LLM Family

ServiceNow, Hugging Face, and NVIDIA today announced the release of StarCoder2, a family of open-access large language models for code generation that sets new standards for performance, transparency, and cost-effectiveness. StarCoder2 was developed in partnership with the BigCode Community, managed by ServiceNow, the leading digital workflow company making the world work better for everyone, and Hugging Face, the most-used open-source platform, where the machine learning community collaborates on models, datasets, and applications. Trained on 619 programming languages, StarCoder2 can be further trained and embedded in enterprise applications to perform specialized tasks such as application source code generation, workflow generation, text summarization, and more. Developers can use its code completion, advanced code summarization, code snippets retrieval, and other capabilities to accelerate innovation and improve productivity.

StarCoder2 offers three model sizes: a 3-billion-parameter model trained by ServiceNow; a 7-billion-parameter model trained by Hugging Face; and a 15-billion-parameter model built by NVIDIA with NVIDIA NeMo and trained on NVIDIA accelerated infrastructure. The smaller variants provide powerful performance while saving on compute costs, as fewer parameters require less computing during inference. In fact, the new 3-billion-parameter model matches the performance of the original StarCoder 15-billion-parameter model. "StarCoder2 stands as a testament to the combined power of open scientific collaboration and responsible AI practices with an ethical data supply chain," emphasized Harm de Vries, lead of ServiceNow's StarCoder2 development team and co-lead of BigCode. "The state-of-the-art open-access model improves on prior generative AI performance to increase developer productivity and provides developers equal access to the benefits of code generation AI, which in turn enables organizations of any size to more easily meet their full business potential."
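
For developers who want to try the models, here is a minimal code-completion sketch using the Hugging Face transformers library. The model ID "bigcode/starcoder2-3b" is an assumption based on BigCode's usual naming; check the BigCode organization on Hugging Face for the officially published checkpoints.

```python
# Minimal code-completion sketch with Hugging Face transformers.
# The model ID below is assumed; see the BigCode org on Hugging Face
# for the officially published StarCoder2 checkpoints.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bigcode/starcoder2-3b"  # smallest of the three sizes
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```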

Nightingale Devs Working on "Offline Play" Mode

Hey Realmwalkers, we've seen a lot of discussion in recent days around our decision to make Nightingale online-only at our Early Access release. We understand that this can be frustrating for a number of reasons. Our vision for the game since inception was to create an interconnected series of Realms, with the idea of allowing for co-operative exploration in mind—a universe bigger than a single Realm or server. That meant we made a choice early in development between supporting co-op from day one or focusing development on an offline mode.

Co-operative gameplay associated with having party members across multiple Realms was the more technically challenging problem and therefore the one we chose to tackle first. Looking back on that decision, we misjudged what some of you were looking for in your experience.

NVIDIA Joins US Artificial Intelligence Safety Institute Consortium

NVIDIA has joined the National Institute of Standards and Technology's new U.S. Artificial Intelligence Safety Institute Consortium as part of the company's effort to advance safe, secure and trustworthy AI. AISIC will work to create tools, methodologies and standards to promote the safe and trustworthy development and deployment of AI. As a member, NVIDIA will work with NIST—an agency of the U.S. Department of Commerce—and fellow consortium members to advance the consortium's mandate. NVIDIA's participation builds on a record of working with governments, researchers and industries of all sizes to help ensure AI is developed and deployed safely and responsibly.

Through a broad range of development initiatives, including NeMo Guardrails, open-source software for ensuring large language model responses are accurate, appropriate, on topic and secure, NVIDIA actively works to make AI safety a reality. In 2023, NVIDIA endorsed the Biden Administration's voluntary AI safety commitments. Last month, the company announced a $30 million contribution to the U.S. National Science Foundation's National Artificial Intelligence Research Resource pilot program, which aims to broaden access to the tools needed to power responsible AI discovery and innovation.
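
As a concrete picture of what NeMo Guardrails looks like in an application, here is a minimal sketch of its documented Python entry point; it assumes a local "config" directory containing a config.yml and Colang files that define the rails.

```python
# Minimal sketch of the NeMo Guardrails Python API.
# Assumes ./config holds a config.yml (model settings) plus Colang
# files defining the topical/safety rails for the application.
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")
rails = LLMRails(config)

# The rails intercept the exchange and steer or refuse off-policy output.
response = rails.generate(messages=[
    {"role": "user", "content": "Summarize our refund policy."}
])
print(response["content"])
```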

NVIDIA CG100 "Grace" Server Processor Benchmarked by Academics

The Barcelona Supercomputing Center (BSC) and the State University of New York (Stony Brook and Buffalo campuses) have pitted NVIDIA's relatively new CG100 "Grace" Superchip against several rival products in a "wide variety of HPC and AI benchmarks." Team Green marketing material has focused mainly on the overall GH200 "Grace Hopper" package—so it is interesting to see technical institutes concentrate on the company's "first true" server processor (ARM-based), rather than the ever popular GPU aspect. The Next Platform's article summarized the chip's internal makeup: "(NVIDIA's) Grace CPU has a relatively high core count and a relatively low thermal footprint, and it has banks of low-power DDR5 (LPDDR5) memory—the kind used in laptops but gussied up with error correction to be server class—of sufficient capacity to be useful for HPC systems, which typically have 256 GB or 512 GB per node these days and sometimes less."

Benchmark results were revealed at last week's HPC Asia 2024 conference (in Nagoya, Japan)—Barcelona Supercomputing Center (BSC) and the State University of New York also uploaded their findings to the ACM Digital Library (link #1 & #2). BSC's MareNostrum 5 system contains an experimental cluster portion—consisting of NVIDIA Grace-Grace and Grace-Hopper superchips. We have heard plenty about the latter (in press releases), but the former is a novel concept—as outlined by The Next Platform: "Put two Grace CPUs together into a Grace-Grace superchip, a tightly coupled package using NVLink chip-to-chip interconnects that provide memory coherence across the LPDDR5 memory banks and that consumes only around 500 watts, and it gets plenty interesting for the HPC crowd. That yields a total of 144 Arm Neoverse "Demeter" V2 cores with the Armv9 architecture, and 1 TB of physical memory with 1.1 TB/sec of peak theoretical bandwidth. For some reason, probably relating to yield on the LPDDR5 memory, only 960 GB of that memory capacity and only 1 TB/sec of that memory bandwidth is actually available."
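
The bandwidth figures quoted above are STREAM-style numbers. As a rough illustration of how such a measurement works, here is a numpy sketch of the STREAM "triad" kernel; published HPC results come from the compiled STREAM benchmark, so treat this Python version as an approximate lower bound rather than a comparable methodology.

```python
import time
import numpy as np

# STREAM-style "triad" (a = b + s*c): a rough way to estimate
# sustained memory bandwidth. numpy adds interpreter overhead, so
# this underestimates what compiled STREAM would report.
N = 100_000_000                      # ~0.8 GB per float64 array
b = np.random.rand(N)
c = np.random.rand(N)
s = 3.0

start = time.perf_counter()
a = b + s * c
elapsed = time.perf_counter() - start

bytes_moved = 3 * N * 8              # read b, read c, write a
print(f"Triad bandwidth: {bytes_moved / elapsed / 1e9:.1f} GB/s")
```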

Intel Arrow Lake-S 24 Thread CPU Leaked - Lacks Hyper-Threading & AVX-512 Support

An interesting Intel document leaked out last month—it contained detailed pre-release information covering the upcoming 15th Gen Core Arrow Lake-S desktop CPU platform, including a possible best-case 8+16+1 core configuration. Thorough analysis of the spec sheet revealed that the next-generation Core processor family could "lack Hyper-Threading (HT) support." The rumor mill had produced similar claims in the past, but the internal technical memo indicated that Arrow Lake's expected eight performance cores would ship "without any threads enabled via SMT." These specifications could be subject to change, but tipster InstLatX64 has unearthed an Arrow Lake-S engineering sample: "I spotted (CPUID C0660, 24 threads, 3 GHz, without AVX 512) among the Intel test machines."

The leaker had uncovered several pre-launch Meteor Lake SKUs last year—with 14th Gen laptop processors hitting the market recently, InstLatX64 has turned his attention to seeking out next-generation parts. Yesterday's Arrow Lake-S find has chins wagging about the 24-thread count (two more than the fanciest Meteor Lake Core Ultra 9 processor)—this could be an actual 24-core configuration, considering the evident lack of Hyper-Threading on the leaked engineering sample. Tom's Hardware reckons that the AVX-512 instruction set could be disabled via firmware or motherboard UEFI—if InstLatX64's claim of "without AVX-512" support rings true, PC users demanding such workloads are best advised to turn to Ryzen 7040 and 8040 series processors, or (less likely) Team Blue's own 5th Gen Xeon "Emerald Rapids" server CPUs.

Helldivers 2 Technical Director Addresses Anti-Cheat Concerns

Hi everyone, my name is Peter Lindgren and I'm the Technical Director of Helldivers 2. I've been making games at Arrowhead since the Magicka days and I've been involved in every game we've released to date. I will do my best in this post to address the concerns and confusion that have come up recently regarding the choice of anti-cheat software in Helldivers 2. So, let's start off with the more urgent questions:

Is GameGuard a kernel-level / administrator-privilege anti-cheat?
Yes, GameGuard is a "kernel-level", aka rootkit, anti-cheat. Most anti-cheat solutions run at "kernel-level," especially the popular ones. It's unfortunately one of the more effective ways to combat cheating. There are some anti-cheat systems that can run in "user-mode," but they are much less effective and tend to be cracked very quickly, resulting in widespread cheating.

Payday 3 Open Beta Goes Live this Weekend

Anyone up for another beta? You guessed it, this weekend we'll run another technical beta of Payday 3. This time it will be OPEN for you to push our servers to the limit. Payday 3 is the much-anticipated sequel to one of the most popular co-op shooters ever. Since its release, Payday players have been reveling in the thrill of a perfectly planned and executed heist. That's what makes Payday a high-octane, co-op FPS experience without equal.

Second BETA
As you might've heard already, we're doing a second Technical BETA - in other words, a server stress test. Same build as last time—so no new UI or heists for now. We want to ensure that we have as few issues as possible ahead of our very imminent launch, so this test is meant to do just that. Between September 8th and 11th we're running an open BETA, just go to our store page, download the build and play.

Intel Previews AVX10 ISA, Next-Gen E-Cores to get AVX-512 Capabilities

Intel has published a preview article covering its new AVX10 ISA (Instruction Set Architecture)—the announcement reveals that both P-Cores & E-Cores (on next-gen processors) will be getting support for AVX-512. Team Blue stated: "Intel AVX10 represents a major shift to supporting a high-performance vector ISA across future Intel processors. It allows the developer to maintain a single code-path that achieves high performance across all Intel platforms with the minimum of overhead checking for feature support. Future development of the Intel AVX10 ISA will continue to provide a rich, flexible, and consistent environment that optimally supports both Server and Client products."

Due to technical issues (E-core related), Intel decided to disable AVX-512 for Alder Lake and Raptor Lake client-oriented CPU lineups. AMD has recently adopted the fairly new instruction set for its Ryzen 7040 mobile series, so it is no wonder that Team Blue is attempting to reintroduce it in the near future—AVX-512 was last seen working properly on Rocket and Tiger Lake chips. AVX10 implementation is expected to debut with Granite Rapids (according to Longhorn), and VideoCardz reckons that Intel will get advanced instructions for Efficiency cores working with its Clearwater Forest CPU architecture.
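
The "single code-path" pitch boils down to runtime feature detection with dispatch to the widest vector unit available. Below is a hedged Python sketch using the py-cpuinfo package; the "avx512f" flag is real on current systems, while the "avx10" flag name is purely an assumption, since Intel had only previewed the ISA at this point.

```python
# Runtime ISA feature detection and dispatch sketch (pip install py-cpuinfo).
# "avx512f" is a real CPU flag today; "avx10" is a placeholder guess
# for how future kernels might report AVX10 support.
from cpuinfo import get_cpu_info

flags = set(get_cpu_info().get("flags", []))

if "avx10" in flags:                  # hypothetical future flag
    print("Dispatch: AVX10 code path")
elif "avx512f" in flags:
    print("Dispatch: AVX-512 code path")
elif "avx2" in flags:
    print("Dispatch: AVX2 code path")
else:
    print("Dispatch: scalar fallback")
```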

Valve Releases Major Steam Desktop Client Update

Hello! We're excited to announce that we've just shipped a new version of the Steam Client to everyone. This update includes all the new Steam Desktop features that have been tested and fine-tuned in the beta branch. Before we get into the details, we want to thank our beta testers really quick - we couldn't have shipped without all of your invaluable feedback and bug reports!

New framework, new foundation
The most impactful changes in this update aren't immediately visible; much of the work went into changing how we share code across the Steam Desktop Client, Big Picture mode, and Steam Deck. These changes also mean quicker implementation and iteration of new features. For example, many of the features in this update (like Notes in the overlay) are shipping simultaneously on Steam Deck because of the shared codebase.

Layers of Fear Gets Another Unreal Engine 5 Showcase

Experience the visually stunning world of Layers of Fear, one of the first games to utilize the power of Unreal Engine 5 and its advanced technological tools. Come with us on an immersive journey as we explore how we incorporated these cutting-edge features to bring you one of the most visually striking horror games out there. With the latest engine, we expedited the development process and introduced a host of technical novelties that make this game a spectacular visual treat. Lumen, Ray Tracing, and Niagara effects helped us create visuals and effects that are beyond words and feel lifelike.

The canvas awaits its final brushstrokes. The stage calls for its lead actor. The novel needs its final chapter. It's time to face your fears. One. Last. Time. The series, which has left a mark on narrative-driven first-person psychological horror games, returns to tell its final spine-chilling story in Layers of Fear. The series' crowning work is the definitive way to experience the critically acclaimed franchise as it features Layers of Fear and Layers of Fear 2, as well as all DLCs (including the new 'The Final Note' DLC that will give you a new perspective on the Layers of Fear storyline) and the never-before-told story of The Writer, which will tie everything together. Built on Unreal Engine 5, Layers of Fear supports Ray Tracing, HDR and 4K resolution to make the stunning visuals, and your nightmarish experience, as immersive and realistic as possible.

Lords of the Fallen Showcase Trailer Demonstrates Cutting Edge Unreal Engine Visuals

Hexworks and CI Games have shown off their upcoming RPG, Lords of the Fallen, as part of the State of Unreal GDC Showcase. The trailer demonstrates the team's ambitions, and the use of Unreal Engine 5 has allowed them "to create the most immersive game experience possible." The game's character customization system combines UE5-generated geometry with 3D scans taken from real-life human models. The development team states that this allows players to "create unique faces and bodies by dynamically morphing between a huge range of shapes before finessing the finer details."

Unreal Engine 5's Lumen Global Illumination has been put to good use by the team - the game's complex and rich environments are lit in dramatic and atmospheric fashion, all in real time, with an added layer of technical accomplishment due to the player being able to swap between two parallel worlds. Hexworks devised its own custom toolset to enable the seamless shift from Axiom, the realm of the living, to Umbral, the realm of the dead: "This means our artists and designers can ensure these worlds feel intrinsically linked, like two sides of the same coin... even if one side is decidedly more horrific than the other."