News Posts matching #TPU

Lenovo Releases Two New Duet Chromebooks

Lenovo elevates productivity and enjoyment on the go with the Chromebook Duet (11", 9) and Chromebook Duet EDU G2, two 10.95-inch convertible laptops with 2K displays that balance design and performance. Encased in a full metal chassis with Corning Gorilla Glass for ultimate durability and complemented by a soft protective folio, the Chromebook Duet (11", 9) is a portable companion that entertains, adapts, and encourages you to create. A learning resource for students and teachers, the Chromebook Duet EDU G2 takes durability to the next level with a rugged case and detachable keyboard.

"Modern society is constantly on the move, a challenge to traditional notions of productivity and leisure based on being stationary, while underscoring the necessity of having a portable and efficient laptop," said Benny Zhang, Executive Director and General Manager of Chromebooks in Lenovo's Intelligent Devices Group. "The Lenovo Chromebook Duet and Lenovo Chromebook Duet EDU G2 powered by MediaTek Kompanio 838 processor with AI-enabled NPU are versatile companions for work, play, learning and everything in between."

Apple Trained its Apple Intelligence Models on Google TPUs, Not NVIDIA GPUs

Apple has disclosed that its newly announced Apple Intelligence features were developed using Google's Tensor Processing Units (TPUs) rather than NVIDIA's widely adopted hardware accelerators such as the H100. This unexpected choice was detailed in an official Apple research paper, shedding light on the company's approach to AI development. The paper outlines how systems equipped with Google's TPUv4 and TPUv5 chips played a crucial role in creating the Apple Foundation Models (AFMs). These models, including AFM-server and AFM-on-device, are designed to power both online and offline Apple Intelligence features introduced at WWDC 2024. For the training of the 6.4 billion parameter AFM-server, Apple's largest language model, the company utilized an impressive array of 8,192 TPUv4 chips, provisioned as 8×1024 chip slices. The training process involved a three-stage approach, processing a total of 7.4 trillion tokens. Meanwhile, the more compact 3 billion parameter AFM-on-device model, optimized for on-device processing, was trained using 2,048 TPUv5p chips.
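
For a sense of the scale involved, here is a minimal back-of-the-envelope sketch in Python using only the figures quoted above; the 6 × parameters × tokens FLOPs estimate is a common rule of thumb for dense transformer training and is an outside approximation, not a number from Apple's paper.

```python
# Rough arithmetic for the AFM-server run described above. The slice topology,
# token count, and parameter count come from Apple's paper; the FLOPs rule of
# thumb (~6 * params * tokens for dense transformer training) is an assumption.

SLICES = 8                              # provisioned as 8 x 1024-chip TPUv4 slices
CHIPS_PER_SLICE = 1024
TOTAL_CHIPS = SLICES * CHIPS_PER_SLICE  # 8,192 TPUv4 chips
TOTAL_TOKENS = 7.4e12                   # tokens across the three-stage run
PARAMS = 6.4e9                          # AFM-server parameter count

tokens_per_chip = TOTAL_TOKENS / TOTAL_CHIPS
train_flops = 6 * PARAMS * TOTAL_TOKENS

print(f"{TOTAL_CHIPS} chips, ~{tokens_per_chip:,.0f} tokens handled per chip")
print(f"~{train_flops:.2e} total training FLOPs under the 6*N*D approximation")
```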

Apple's training data came from various sources, including the Applebot web crawler and licensed high-quality datasets. The company also incorporated carefully selected code, math, and public datasets to enhance the models' capabilities. Benchmark results shared in the paper suggest that both AFM-server and AFM-on-device excel in areas such as Instruction Following, Tool Use, and Writing, positioning Apple as a strong contender in the AI race despite its relatively late entry. However, Apple's route into the AI market is more complex than that of any other AI competitor. Given Apple's massive user base and the millions of devices compatible with Apple Intelligence, the AFMs have the potential to change how users interact with their devices for good, especially for everyday tasks, which makes refining the models for those tasks critical before mass deployment. Another surprise is the degree of transparency from Apple, a company typically known for its secrecy; the AI boom is changing some of Apple's ways, and this look at the inner workings is always interesting.

OpenAI in Talks with Broadcom About Developing Custom AI Chips to Power Next Generation Models

According to The Information, OpenAI is reportedly in talks with Broadcom about developing a custom AI accelerator to power OpenAI's growing demand for high-performance solutions. Broadcom is a fabless chip designer known for a wide range of silicon solutions, spanning networking, PCIe, SSD controllers, and PHYs all the way up to custom ASICs. The latter is what OpenAI wants to focus on, but all of the aforementioned IP developed by Broadcom is of use in a data center. Should OpenAI decide to use Broadcom's solutions, the fabless silicon designer offers a complete vertical stack of products: inter-system communication over protocols such as PCIe, system-to-system communication using Ethernet networking with Broadcom Tomahawk 6 and future revisions, alongside storage solutions and many other complementary elements of a data center.

As a company skilled in making various IPs, Broadcom also builds ASIC solutions for other companies and has assisted Google in creating its Tensor Processing Unit (TPU), now in its sixth generation. Google's TPUs are massively successful: Google deploys millions of them and uses them to provide AI services to billions of users across the globe. Now OpenAI wants to be part of the AI chip game, and Broadcom could come to the rescue with its already-established AI success and its broad catalog of other data center componentry, helping build a custom AI accelerator to power the infrastructure OpenAI needs for its next generation of AI models. With each new model OpenAI releases, compute demand spikes by orders of magnitude, and having an AI accelerator that exactly matches its needs would help the company move faster and run even bigger AI models.

Logitech G Pro X Superlight 2 Gets 4 kHz Response Rate Upgrade

Logitech's G PRO X SUPERLIGHT 2 wireless gaming mouse arrived last September with an MSRP of $159—at the time, cutting-edge peripheral enthusiasts were expecting a mighty upgrade over the ultra-popular 2020 iteration. TPU's resident mouse guru—pzogel—had a "hard time making a case for the Pro X Superlight 2" when comparing it to the preceding model. Additionally, the Superlight 2's closest rival—Razer's Viper V2 Pro combined with the 4K HyperPolling Wireless Dongle—was deemed technologically superior. A November 2023 leak revealed that Logitech was readying an intriguing firmware update—the Superlight 2 was set to be upgraded with a 4 kHz report rate mode.

The launch specification of 2 kHz (maximum) was a disappointment, considering that many cheaper alternatives had already rolled out with higher polling rates (via external dongles). Logitech's latest G HUB software update brings good news for PRO X Superlight 2 users: "we are excited to share a new firmware update for your mouse that will increase your report rate to 4 kHz for free. Once you update the firmware, you'll be able to access the higher report rate in G HUB using your existing wireless dongle. For more information, visit our YouTube channel for an update from Chris Pate." In TPU's late October review, pzogel noted that the 2000 Hz rate "comes at a price, which is reduced battery life." We hope to see a revisit with the newly implemented 4000 Hz mode tested out.

AMD to Address "Bugged" Limited Overclocking on Radeon RX 7900 GRE GPU

TechPowerUp's resident GPU reviewer extraordinaire—W1zzard—has grappled with a handful of custom-design AMD Radeon RX 7900 GRE 16 GB models. Team Red and its board partners are pushing a proper, widespread Western release of the formerly China-exclusive "Golden Rabbit Edition" GPU. TPU's initial review selection of three Sapphire cards and a lone ASRock Steel Legend OC variant garnered two Editor's Choice awards and two Highly Recommended badges. Sapphire's Radeon RX 7900 GRE Nitro+ was also handed a "...But Expensive" tag due to its $600 MSRP—the premium-tier design was one of last year's launch-day models in China. Western reviewers have latched onto a notable GRE overclocking limitation—all of TPU's review samples were found to have "overclocking artificially limited by AMD." Steve Walton of Hardware Unboxed has investigated whether the GRE's inherently limited power specification is less of an issue on Sapphire's Nitro+ variant—check out his "re-re-review" video below.

The higher board power design—a 305 W OC TGP limit and 351 W total board power—is expected to deliver "up to 10% higher performance than Radeon RX 7800 XT" according to VideoCardz, but falls short. TPU's W1zzard found the GRE Nitro+ card's maximum configurable clock to be 2803 MHz: "Overclocking worked quite well on our card, we gained over 8% in real-life performance, which is well above what we usually see, but less than other GRE cards tested today. Sapphire's factory OC eats into OC potential, and maximizes performance out of the box instead. Unfortunately AMD restricted overclocking on their card quite a lot, probably to protect sales of the RX 7900 XT. While NVIDIA doesn't have any artificial limitations for overclockers, AMD keeps limiting the slider lengths for many models, this is not a gamer-friendly approach. For the GRE, both GPU and memory overclocking could definitely go higher based on the results that we've seen in our reviews today." An AMD representative has since contacted Hardware Unboxed in reaction to yesterday's updated review—the GRE's overclocking limitation is a "bug," and a fix is in the works. This situation is a bit odd, given that the Golden Rabbit Edition is not a brand-new product.

Google Faces Potential Billion-Dollar Damages in TPU Patent Dispute

Tech giant Google is embroiled in a high-stakes legal battle over the alleged infringement of patents related to its Tensor Processing Units (TPUs), custom AI accelerator chips used to power machine learning applications. Massachusetts-based startup Singular Computing has accused Google of incorporating architectures described in several of its patents into the design of the TPU without permission. The disputed patents, first filed in 2009, outline computer architectures optimized for executing a high volume of low-precision calculations per cycle - an approach well-suited for neural network-based AI. In a 2019 lawsuit, Singular argues that Google knowingly infringed on these patents in developing its TPU v2 and TPU v3 chips introduced in 2017 and 2018. Singular Computing is seeking between $1.6 billion and $5.19 billion in damages from Google.

Google denies these claims, stating that its TPUs were independently developed over many years. The company is currently appealing to have Singular's patents invalidated, which would undermine the infringement allegations. The high-profile case highlights mounting legal tensions as tech giants race to dominate the burgeoning field of AI hardware. With billions in potential damages at stake, the outcome could have major implications for the competitive landscape in cloud-based machine learning services. As both sides prepare for court, the dispute underscores the massive investments tech leaders like Google make to integrate specialized AI accelerators into their cloud infrastructures. Dominance in this sphere is a crucial strategic advantage as more industries embrace data-hungry neural network applications.

Update 17:25 UTC: According to Reuters, Google and Singular Computing have settled the case with details remaining private for the time being.

NVIDIA GeForce RTX 4080 SUPER GPUs Pop Up in Geekbench Browser

We are well aware that NVIDIA GeForce RTX 4080 SUPER graphics cards are next up on the review table (January 31)—TPU's W1zzard has so far toiled away at getting his evaluations published on time for options further down the Ada Lovelace SUPER food chain. This process was interrupted briefly by the appearance of custom Radeon RX 7600 XT models, but attention soon returned to another batch of GeForce RTX 4070 Ti SUPER cards. Reviewers are already toying around with driver-enabled GeForce RTX 4080 SUPER sample units—under strict confidentiality conditions—but the occasional leak is expected to happen. The appropriately named Benchleaks social media account has kept track of emerging test results.

The Geekbench Browser database was updated earlier today with premature GeForce RTX 4080 SUPER GPU test results—one entry highlighted by Benchleaks provides a quick look at the card's prowess in three of Geekbench 5.1's graphics API trials: Vulkan, CUDA, and OpenCL. VideoCardz points out that all of the scores could be fundamentally flawed; in particular the Vulkan result of 100378 points—the regular (non-SUPER) GeForce RTX 4080 can achieve almost double that figure in Geekbench 6. The SUPER's other results included a Geekbench 5 CUDA score of 309554 and 264806 points in OpenCL. A late-morning entry looks to be hitting the right mark—an ASUS testbed (PRIME Z790-A WIFI + Intel Core i9-13900KF) managed to score 210551 points in Geekbench 6.2.2 Vulkan.

AMD Germany Confirms Ryzen 5 7500F's Western Release Strategy

AMD is preparing its Ryzen 5 7500F processor for a global launch according to reports from earlier today—Team Red's German operation has since informed local media outlets about its updated international release strategy for the iGPU-less Zen 4 desktop SKU. Markus Lindner, a regional company spokesperson, stated: "This processor model will be available starting July 23, 2023 at 9PM ET. It will be available in greater China as a processor-in-box, and in the rest of world as an option for select system builders."

Chinese reviewers have been getting hands-on experience with the Ryzen 5 7500F, with early reports pointing to impressive performance for its price point ($180) when lined up against the competing Intel Core i5-13400 and i5-13400F CPUs. International buyers may be concerned by AMD Germany's mention that availability outside China will be limited to system integrators. Hopefully these "select system builders" will have good distribution links to retail outlets—70% of TPU quick-poll participants expressed interest in seeing a Western launch (prior to AMD's "global" announcement).

No Official Review Program for NVIDIA GeForce RTX 4060 Ti 16 GB Cards

NVIDIA is reported to be taking a hands-off approach to the launch of its GeForce RTX 4060 Ti 16 GB GPU next week—rumored to take place on July 18. Murmurs from last week posited that add-in card (AIC) partners were not all that confident in the variant's prospects, with very little promotional activity lined up. NVIDIA itself is not releasing a Founders Edition GeForce RTX 4060 Ti 16 GB model, so it will be relying on board partners to get custom-design units sent out to press outlets and reviewers. According to Hardware Unboxed, as posted on Twitter earlier today, no hardware will be distributed to the media: "Now there's no official review program for this model, there will be no FE version and it seems that NVIDIA and their partners really don't want to know about it. Every NVIDIA partner I've spoken to so far has said they won't be providing review samples, and they're not even sure when their model will be available."

Their announcement continued: "So I don't know when you'll be able to view our review, but I will be buying one as soon as I can. I expect coverage will be pretty thin and that's probably the plan, the release strategy here is similar to that of the RTX 3080 12 GB." TPU can confirm that test samples have not been sent out by NVIDIA's board partners, so a retail unit will be purchased (out of pocket) for reviewing purposes. Previous reports have theorized that not many custom models will be available at launch, with the series MSRP of $499 not doing it many favors in terms of buyer interest. MSI has prepared a new white GAMING X design for the 16 GB variant, so it is good to see at least one example of an AIB putting the effort in...but it would be nice to get a press sample.

AMD Radeon RX 7600 GPU-Z Screenshot Leaked

AMD Radeon RX 7600 GPU-related leaks continue to trickle out as we get closer to the late May launch window, when AMD's entry-level model is expected to take on the upcoming NVIDIA GeForce RTX 4060 graphics card. Details of Sapphire and MSI RX 7600 (non-XT) custom cards have appeared online over the past week and a half, with basic GPU specifications spotted on retail packaging. VideoCardz has received yet another tip-off and has today released a screenshot from a GPU-Z session. We cannot confirm the accuracy of the specs due to our own NDA with AMD, but we can confirm that GPU-Z has had early support for the Navi 33 GPU since version 2.53.0.

The listed details of a "Navi 33 XL GPU" correlate with leaked information from the past week or two, confirming the presence of 32 Compute Units and 2048 Stream Processors. The standard allocation of 8 GB of GDDR6 memory is verified once more, complete with a 128-bit wide memory interface. VideoCardz notes that the GeForce RTX 4060 Ti is quite similar in terms of memory technicalities, with a maximum bandwidth of 288 GB/s (and a Gen 4 PCIe interface restricted to 8 lanes). GPU-Z states that the Navi 33 XL/RX 7600 GPU has the following core clock speeds: 1720 MHz (base), 2250 MHz (game) and 2655 MHz (boost) - tipsters have suggested that the chip is capable of hitting a maximum threshold of 2850 MHz.
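
As a quick sanity check on that 288 GB/s figure, the sketch below runs the standard GDDR bandwidth arithmetic in Python; the 18 Gbps per-pin data rate is an assumption consistent with the quoted number rather than a spec pulled from the leak itself.

```python
# Peak GDDR bandwidth = bus width (bits) x per-pin data rate (Gbps) / 8 bits-per-byte.
# The 18 Gbps rate below is an assumed value that reproduces the 288 GB/s figure
# quoted for the 128-bit RTX 4060 Ti; it is not taken from the GPU-Z screenshot.

def gddr_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s for a GDDR memory interface."""
    return bus_width_bits * data_rate_gbps / 8

print(gddr_bandwidth_gb_s(128, 18.0))  # 288.0 GB/s on a 128-bit bus
print(gddr_bandwidth_gb_s(256, 18.0))  # 576.0 GB/s if the same chips sat on a 256-bit bus
```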

Colorful Custom RTX 4060 Ti GPU Clocks Outed, 8 GB VRAM Confirmed

Resident TechPowerUp hardware database overseer T4C Fantasy has divulged some early information about a custom version of the NVIDIA GeForce RTX 4060 Ti graphics card - Colorful's catchily named iGame RTX 4060 Ti Ultra White OC model has been added to the TPU GPU database, and T4C Fantasy has revealed a couple of tidbits on Twitter. The GPU has been tuned to a maximum boost clock of 2580 MHz, up from a base clock of 2310 MHz. According to past leaks, the reference version of the GeForce RTX 4060 Ti has a default boost clock of 2535 MHz, so Colorful's engineers have managed to add another 45 MHz on top of that with their custom iteration - roughly 2% more than the reference default.

T4C Fantasy also confirmed that the Colorful iGame RTX 4060 Ti Ultra W OC will be equipped with 8 GB of VRAM, which matches the reference model's rumored memory spec. T4C Fantasy points out that brands have the option to produce RTX 4060 Ti cards with a larger pool of attached video memory, but launch models will likely stick with the standard allotment of 8 GB of VRAM. The RTX 4060 Ti is listed as being based on the Ada Lovelace GPU architecture (GPU variant AD106-350-A1), and T4C Fantasy expects that Team Green will stick with a 5 nm process size - contrary to reports of a transition to manufacturing on 4 nm at TSMC's foundries.

Tachyum Submits Bid for 20-Exaflop Supercomputer to U.S. Department of Energy Advanced Computing Ecosystems

Tachyum today announced that it has responded to a U.S. Department of Energy Request for Information soliciting Advanced Computing Ecosystems for DOE national laboratories engaged in scientific and national security research. Tachyum has submitted a proposal to create a 20-exaflop supercomputer based on Tachyum's Prodigy, the world's first universal processor.

The DOE's request calls for computing systems that are five to 10 times faster than those currently available and/or that can perform more complex applications in "data science, artificial intelligence, edge deployments at facilities, and science ecosystem problems, in addition to the traditional modeling and simulation applications."

Tachyum Successfully Runs FreeBSD in Prodigy Ecosystem; Expands Open-Source OS Support

Tachyum today announced it has completed validation of its Prodigy Universal Processor and software ecosystem with the operating system FreeBSD, and completed the Prodigy instruction set architecture (ISA) for FreeBSD porting. FreeBSD powers modern servers, desktops, and embedded platforms in environments that value performance, stability, and security. It is the platform of choice for many of the busiest websites and the most pervasive embedded networking and storage devices.

The validation of FreeBSD extends Tachyum's support for open-source operating systems and tools, including Linux, Yocto Project, PHP, MariaDB, PostgreSQL, Apache, QEMU, Git, RabbitMQ, and more.

Google Uses Artificial Intelligence to Develop Faster and Smaller Hardware Accelerators

Designing Artificial Intelligence / Machine Learning hardware accelerators takes effort from hardware engineers working in conjunction with scientists in the AI/ML field itself. A few years ago, we started seeing AI incorporated into parts of electronic design automation (EDA) software tools, helping chip designers speed up the process of creating hardware. Until now, AI in chip design has mostly been applied to a couple of tasks such as placement and routing - and having those automated is already a huge deal. However, the power of AI for chip design is not going to stop there. Researchers at Google and UC Berkeley have created a research project in which AI designs AI-tailored accelerators that are smaller and faster than anything humans have made.

In the published paper, the researchers present PRIME - a framework that creates AI accelerator designs from a database of existing blueprints. The PRIME framework feeds off an offline database containing accelerator designs and their corresponding performance metrics (e.g., latency and power) to design next-generation hardware accelerators. According to Google, PRIME can do so without further hardware simulation and produces processors ready for use. As per the paper, PRIME improves performance over state-of-the-art simulation-driven methods by as much as 1.2x-1.5x, while reducing the required total simulation time by 93% and 99%, respectively. The framework is also capable of architecting accelerators for unseen applications.
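
To make the idea of designing hardware from a logged database concrete, below is a minimal sketch assuming a toy design space and a stand-in nearest-neighbour surrogate; the names, fields, and tiny database are illustrative and not drawn from the actual PRIME codebase, which trains a conservative neural surrogate instead.

```python
# Offline, data-driven design search in the spirit of the approach described above:
# fit a surrogate of accelerator performance on a fixed database of (design, latency)
# pairs, then score new candidates against the surrogate with no simulator in the loop.
# The design space and database below are toy examples, not PRIME's real data.

import random

# Hypothetical logged designs: (PE-array width, PE-array height, buffer KiB) -> latency (ms)
DATABASE = [
    ((16, 16, 512),  4.1),
    ((32, 16, 1024), 3.4),
    ((32, 32, 2048), 2.9),
    ((64, 32, 2048), 3.3),
    ((64, 64, 4096), 3.8),
]

def make_surrogate(data):
    """Nearest-neighbour latency predictor standing in for a learned neural surrogate."""
    def predict(design):
        nearest = min(data, key=lambda row: sum((a - b) ** 2 for a, b in zip(row[0], design)))
        return nearest[1]
    return predict

def random_design(rng):
    """Sample a candidate from a small discrete design space."""
    return (rng.choice([16, 32, 64, 128]),
            rng.choice([16, 32, 64]),
            rng.choice([512, 1024, 2048, 4096]))

def offline_search(predict, n_candidates=500, seed=0):
    """Pick the candidate the surrogate scores best -- no hardware simulation needed."""
    rng = random.Random(seed)
    candidates = [random_design(rng) for _ in range(n_candidates)]
    return min(candidates, key=predict)

surrogate = make_surrogate(DATABASE)
best = offline_search(surrogate)
print("surrogate-preferred design:", best, "predicted latency:", surrogate(best), "ms")
```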

Tachyum Selected for Pan-European Project Enabling 1 AI Zettaflop in 2024

Tachyum today announced that it was selected by the Slovak Republic to participate in the latest submission for the Important Projects of Common European Interest (IPCEI), to develop Prodigy 2 for HPC/AI. Prodigy 2 for HPC/AI will enable computers delivering 1 AI Zettaflop and more than 10 DP Exaflops, supporting superhuman brain-scale computing by 2024 for under €1B. As part of this selection, Tachyum could receive a 49 million Euro grant to accelerate a second generation of its Prodigy processor for HPC/AI on a 3-nanometer process.

The IPCEI program can make a very important contribution to sustainable economic growth, jobs, competitiveness and resilience for industry and the economy in the European Union. IPCEI will strengthen the EU's open strategic autonomy by enabling breakthrough innovation and infrastructure projects through cross-border cooperation and with positive spill-over effects on the internal market and society as a whole.

QNAP Introduces TS-364 3-bay RAID 5 2.5GbE NAS with M.2 SSD Caching

QNAP Systems, Inc., a leading computing, networking, and storage solution innovator, today launched the 3-bay RAID 5 2.5 GbE TS-364 NAS with M.2 PCIe Gen3 NVMe SSD slots and 2.5 GbE connectivity, providing optimized performance in home/office environments. Supporting light virtualization/containers and HDMI output, the TS-364 provides a centralized storage, backup, file sharing, and multimedia solution that features rich NAS applications for greater productivity and boundless entertainment.

"The new TS-364 NAS allows building a secure RAID 5 array with three disks for optimized storage capacity, cache performance and protection against one disk failing. With M.2 PCIe Gen3 slots, the TS-364 enables cache acceleration or SSD storage pools for improved performance, or Edge TPU for AI image recognition. Its 2.5 GbE port accelerates network transmission while its USB 3.2 Gen2 (10 Gbps) ports allow quickly transferring large media files," said Joseph Chiang, Product Manager of QNAP.

Tachyum Prodigy Software Emulation Systems Now Available for Pre-Order

Tachyum Inc. today announced that it is signing early adopter customers for the software emulation system for its Prodigy Universal Processor; customers may begin the process of native software development (i.e., using the Prodigy Instruction Set Architecture) and porting applications to run on Prodigy. Prodigy software emulation systems will be available at the end of January 2021.

Customers and partners can use Prodigy's software emulation for evaluation, development, and debugging, and with it they can begin to transition existing applications that demand high performance and low power to run optimally on Prodigy processors. Pre-built systems include a Prodigy emulator, native Linux, toolchains, compilers, user-mode applications, and x86, ARM, and RISC-V emulators. Software updates will be issued as needed.

A Merry Christmas and a Happy New Year from TPU!

Another year is drawing to a close, all around the world and here on TechPowerUp as well. A special year, to say the least; it's not the worst in recorded human history, but it's been one of the hardest ever, with more limitations imposed in our lives than in any other we can remember. Most of us have adapted to a new, hopefully temporary reality, one where social distancing and face masks have become commonplace. A reality where countless human lives were lost to an invisible, undiscerning enemy; where economic recession has thrown families and friends out of business or even from their homes. Globalization is indisputable, and while it has many positives, nothing could ever be without fault; the increased interconnectedness of our world means that the system becomes all the more fragile even as it becomes stronger. And while the argument for technology being a potentiator for human loneliness and creating isles of silence between us individuals of the human species presents valid points, one can certainly say that it was technology that helped bridge sentimental and physical gaps for all of us.

EVGA Teases the GeForce RTX 3090 KINGPIN: 360mm AIO Hybrid Cooler, RGB OLED Screen

NVIDIA's announcement of the new RTX 3000-series cards was accompanied by simultaneous announcements from add-in card partners, showing off their solutions and custom takes on the same. These typically come in the form of a press release, with common specifications shared and emphasis placed on the cooling systems as well as other brand-specific features. EVGA was no different, and its PR shows off the company's new iCX3 cooling technology. More interesting to the enthusiasts and overclockers among us is newer information on its halo card, the RTX 3090 K|NGP|N (Kingpin), with product management director Jacob revealing it will continue to use a hybrid AIO cooler as with the RTX 2080 Ti variant, but with a massive 360 mm radiator and three 120 mm fans for the behemoth GA102 die and accompanying power delivery solution. No word on pricing or retail availability yet, but look forward to your friendly neighborhood TPU reviews in due course to help make a purchase decision on your next GPU.

Hot Chips 2020 Program Announced

Today the Hot Chips program committee officially announced the August conference line-up, posted to hotchips.org. For this first-ever live-streamed Hot Chips Symposium, the program is better than ever!

In a session on deep learning training for data centers, we have a mix of talks from the internet giant Google showcasing their TPUv2 and TPUv3, and a talk from startup Cerebras on their 2nd gen wafer-scale AI solution, as well as ETH Zurich's 4096-core RISC-V based AI chip. And in deep learning inference, we have talks from several of China's biggest AI infrastructure companies: Baidu, Alibaba, and SenseTime. We also have some new startups that will showcase their interesting solutions—LightMatter talking about its optical computing solution, and TensTorrent giving a first-look at its new architecture for AI.

Folding@Home Now More Powerful Than World's Seven Top Supercomputers Combined - Join TPU!

This one here is another shot in the arm when it comes to faith in humanity. Folding@Home, the distributed computing project where users can donate their spare CPU and GPU cycles for a given cause, has hit an absolutely bonkers milestone. According to Greg Bowman, Director of Folding@home, the network has reached a peak compute power of some 470 petaFLOPS - more than double that of the world's current supercomputing record holder, the Summit supercomputer, which dishes out 200 peak petaFLOPS. Folding@Home's 470 petaFLOPS means users donating their spare cycles are delivering more computing power than is available in the world's top seven supercomputers combined.

After some slight service outages where users weren't getting any work units due to the increased number of donors over the last few days, the computing service now seems to be running full steam ahead. Remember that you can select the causes for which you are donating your computing power: whether cancer, Alzheimer's, Huntington's, or Parkinson's disease, as well as some other non-selectable projects.

NVIDIA: Image Quality for DLSS in Metro Exodus to Be Improved in Further Updates, and the Nature of the Beast

NVIDIA, in a blog post/Q&A on its DLSS technology, promised implementation and image quality improvements for its Metro Exodus rendition of the technology. If you'll remember, AMD recently vouched for other, non-proprietary ways of achieving the desired quality of AA technology across resolutions, such as TAA and SMAA, saying that DLSS introduces "(...) image artefacts caused by the upscaling and harsh sharpening." NVIDIA in its blog post has dissected DLSS and its implementation, also clarifying some lingering questions on the technology and its resolution limitations that some of us here at TPU had already wondered about.

The blog post describes some of the limitations in DLSS technology, and why exactly image quality issues might be popping up here and there in titles. As we knew from NVIDIA's initial RTX press briefing, DLSS basically works on top of an NVIDIA neural network. Titled NGX, it processes millions of frames from a single game at varying resolutions, with DLSS, and compares them to a given "ground truth image" - the highest-quality possible output sans any shenanigans, generated from pure raw processing power. The objective is to train the network to generate this image without the performance cost. This DLSS model is then made available for NVIDIA's client to download and run locally on your RTX graphics card, which is why DLSS image quality can improve over time. It also helps explain why closed implementations of the technology, such as 3DMark's Port Royal benchmark, show such incredible image quality compared to, say, Metro Exodus - there is a very, very limited number of frames that the neural network needs to process to achieve the best image quality.
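
To illustrate that training idea in miniature, here is a sketch of a toy learned 2x upscaler fitted against "ground truth" high-resolution frames with plain NumPy gradient descent; it demonstrates the supervised super-resolution concept only and makes no claim to reflect NVIDIA's actual NGX pipeline.

```python
# Toy version of "train an upscaler against ground-truth frames": a single 2x2
# patch filter is fitted by gradient descent so that upscale(downsample(frame))
# approximates the original frame. Purely illustrative; not NVIDIA's NGX pipeline.

import numpy as np

rng = np.random.default_rng(0)

def downsample(img):
    """Average 2x2 blocks to simulate rendering at half resolution."""
    return img.reshape(img.shape[0] // 2, 2, img.shape[1] // 2, 2).mean(axis=(1, 3))

def upsample(img, weights):
    """Expand each low-res pixel into a learned 2x2 patch (the 'model' being trained)."""
    out = np.zeros((img.shape[0] * 2, img.shape[1] * 2))
    for dy in range(2):
        for dx in range(2):
            out[dy::2, dx::2] = img * weights[dy, dx]
    return out

frames = [rng.random((32, 32)) for _ in range(64)]  # stand-ins for ground-truth renders
weights = np.ones((2, 2))                           # start from nearest-neighbour upscaling
lr = 0.1

for _ in range(200):
    for truth in frames:
        low = downsample(truth)
        err = upsample(low, weights) - truth
        # Squared-error gradient w.r.t. each patch weight (up to a constant factor).
        grad = np.array([[(err[dy::2, dx::2] * low).mean() for dx in range(2)]
                         for dy in range(2)])
        weights -= lr * grad

print("learned 2x2 patch weights:\n", weights)
```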

AMD Expands EPYC Availability, Introduces ROCm 1.7 With Tensor Flow Support

AMD has been steadily increasing output and availability of its latest take on the server market, the EPYC CPUs. These are 32-core, 64-thread monsters that excel in delivering a better feature set in a 1P configuration than even some of Intel's 2P setups, and reception for these AMD processors has been pretty warm as a result. The use of an MCM design to create a 4-way cluster of small 8-core processor packages has allowed AMD to improve yields with minimal retooling and changes to its manufacturing lines, which in turn has increased profits for a company that sorely needed a breakout product.

Corsair Share to be Sold to Private Investor (Again)

News agency Reuters recently reported that hardware giant Corsair was in talks with middle-market private equity firm EagleTree Capital for an acquisition of the company in its entirety. The deal was reported by Reuters as being valued at close to $500 million. Part of Corsair is, and continues to be, owned by Chief Executive Andy Paul, who founded the company in 1994. Another part, however, is currently owned by Francisco Partners, an American private equity firm focused exclusively on investments in technology and technology-enabled services. In 2013, Francisco Partners made a $75 million investment in the company, after Corsair scrapped its IPO plans.

However, an industry source has shed some more light on the matter. The source (singular, since we couldn't verify this through multiple channels) told TPU that it isn't the entirety of Corsair that's for sale, only the share previously acquired by Francisco Partners. It's this particular stake that's being eyed by EagleTree Capital - probably at a much higher valuation than the initial $75 million investment. As always, you should take this information with a grain of salt.

VR is Dead? UK Firm "Improbable" Raises $500m

Improbable, which was founded five years ago by Herman Narula and Rob Whitehead, has secured backing from SoftBank (the company that purchased ARM) in a funding round that values the business at more than $1bn. Despite this cash injection, it looks like Improbable will stay independent to work on its purported aim: to build large-scale virtual worlds and simulations. These could be leveraged by games developers, or for other, non-gaming investments and applications, such as transport-system modelling, virtual couch-travelling, and military applications.

Founder Herman Narula said that Improbable's vision "is to create completely new realities, massive virtual worlds that can change the way we live and work and can impact the way we understand some of the hardest problems." The company believes it has developed revolutionary technology with its SpatialOS operating system, which it has recently opened up to other developers. A Google partnership to put its system on the search giant's cloud, thus allowing small developers to create massive simulations without much infrastructure of their own, means real business for this company, and the backing of one of tech's giants lends credence to its ambitions. Let's see where this leads, but it seems that tales of VR's death were greatly exaggerated. You can discuss these findings here, on our very own TPU Virtual Reality Club.