News Posts matching #next-gen


Intel Xeon "Granite Rapids-SP" 80-core Engineering Sample Leaked

A CPU-Z screenshot has been shared by YuuKi_AnS—the image contains details about an alleged next-gen Intel Xeon Scalable processor engineering sample (ES). The hardware tipster noted in yesterday's post that an error had occurred in the application's identification of this chunk of prototype silicon. CPU-Z v2.09 has recognized the basics—an Intel Granite Rapids-SP processor specced with 80 cores, a 2.5 GHz max frequency, a whopping 672 MB of L3 cache, and a maximum TDP rating of 350 W. The count of 320 threads appears to be CPU-Z's big mistake here—with Intel's two-way Hyper-Threading, 80 cores should yield 160 threads, and previous Granite Rapids-related leaks have revealed nothing to support such an impressive figure.
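
A quick sanity check bears out the thread-count suspicion; this simply restates the leak's numbers under the assumption of standard two-way Hyper-Threading:

```python
# Sanity check on the CPU-Z reading: Intel Hyper-Threading exposes two
# threads per physical core, so an 80-core part should report 160 threads.
cores = 80
threads_per_core = 2  # standard 2-way Hyper-Threading (SMT)

expected_threads = cores * threads_per_core
reported_threads = 320  # as shown in the CPU-Z screenshot

print(expected_threads)          # 160
print(reported_threads / cores)  # 4.0 -> would imply 4-way SMT
```

The reported figure would require four threads per core—a configuration not seen on mainline Xeon parts—which supports the tipster's read that CPU-Z misidentified the sample.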

The alleged prototype status of this Xeon chip is very apparent in CPU-Z's tracking of single and multi-core performance—the benchmark results are well off the mark when compared to finalized current-gen scores (produced by rival silicon). Team Blue's next-gen Xeon series is likely positioned to catch up with AMD EPYC's large core counts—"Granite Rapids" has been linked to the Intel 3 foundry node, and reports from last month suggest that XCC-type processors could be configured with "counts going up to 56-core/112-threads." Micron is prepping next-gen "Tall Form Factor" memory modules, designed with future enterprise processor platforms in mind—including Intel's Xeon Scalable "Granite Rapids" family. Industry watchdogs posit that Team Blue will launch this series in the coming months.

Square Enix Artist Discusses Rebirth's Modernization of Final Fantasy VII 3D Assets

It'd be fair to say Final Fantasy VII Rebirth's next-gen makeover of characters, monsters, and more from the 1997 original has been a spectacular glow-up. The modern console era has returned an iconic cast and world to us with a level of realism in gameplay that even pre-rendered cutscenes over 25 years ago couldn't match. We asked Square Enix if they could crunch some numbers and share some insight into the changes nearly three decades of technological advancement have wrought. Here, main character modeler and lead character artist Dai Suzuki walks us through a selection of characters, creatures, weapons, and more.

Dai Suzuki: When people think of Cloud, most think of his gigantic sword and his unique hairstyle. Because it is so iconic, we needed to put special effort into creating Cloud's hair for Final Fantasy VII Remake, to properly express his personality. The hair was an extremely high-priority element and in fact accounted for half of the total polygon count for the whole model. In Final Fantasy VII Rebirth, the hardware has been changed to PS5, allowing for a higher polygon count to be used than in Final Fantasy VII Remake.

Taiwan Dominates Global AI Server Supply - Government Reportedly Estimates 90% Share

The Taiwanese Ministry of Economic Affairs (MOEA) managed to herd government representatives and leading Information and Communication Technology (ICT) industry figures together for an important meeting, according to DigiTimes Asia. The report suggests that the main topic of discussion focused on an anticipated growth of Taiwan's ICT industry—current market trends were analyzed, revealing that the nation absolutely dominates in the AI server segment. The MOEA has (allegedly) determined that Taiwan has shipped 90% of global AI server equipment—DigiTimes claims (based on insider info) that: "American brand vendors are expected to source their AI servers from Taiwanese partners." North American customers could be (presently) 100% reliant on supplies of Taiwanese-produced equipment—a scenario that potentially complicates ongoing international tensions.

The report posits that involved parties have formed plans to seize opportunities within an ever-growing global demand for AI hardware—a 90% market dominance is clearly not enough for some very ambitious industry bosses—although manufacturers will need to clear several (rising) cost hurdles. Key components for AI servers are reported to be priced much higher than vanilla server parts—DigiTimes believes that AI processor/accelerator chips cost close to ten times more than general-purpose server CPUs. Similar price hikes have reportedly affected AI-adjacent component supply chains—notably cooling, power supplies, and passive parts. Taiwanese manufacturers have spread operations around the world, but industry watchdogs (largely) believe that the best stuff gets produced on home ground—global expansions are underway, perhaps inching closer to better-balanced supply conditions.

AMD EPYC "Turin" 9000-series Motherboard Specs Suggest Support for DDR5 6000 MT/s

AMD's next-gen EPYC Zen 5 processor family seems to be nearing launch status—late last week, momomo_us uncovered an unnamed motherboard's datasheet; this particular model will accommodate a single 9000-series CPU—with a maximum 400 W TDP—via an SP5 socket. 500 W and 600 W limits have been divulged (via leaks) in the past, so the 400 W spec could be an error or a "legitimate compatibility issue with the motherboard, though 400 Watts would be in character with high-end Zen 4 SP5 motherboards," according to Tom's Hardware analysis.

AMD's current-gen "Zen 4" based EPYC "Genoa" processor family—sporting up to 96 cores/192 threads—is somewhat limited by its DDR5 support, which tops out at transfer rates of 4800 MT/s. The latest leak suggests that "Turin" is upgraded quite nicely in this area when compared to predecessors—the SP5 board specs indicate DDR5 speeds of up to 6000 MT/s with 4 TB of RAM. December 2023 reports pointed to "Zen 5c" variants featuring (max.) 192-core/384-thread configurations, while larger "Zen 5" models are believed to be "modestly" specced with up to 128 cores and 256 threads. AMD has not settled on an official release date for its EPYC "Turin" 9000-series processors, but a loose launch window is expected "later in 2024" based on timeframes presented within product roadmaps.
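
The practical impact of that transfer-rate bump can be sketched with some back-of-envelope math; this assumes "Turin" retains the 12-channel, 64-bit-per-channel DDR5 layout of current "Genoa" platforms, which the leaks do not confirm:

```python
# Rough peak-bandwidth estimate, assuming "Turin" keeps the 12-channel
# (64-bit per channel) DDR5 layout of current EPYC "Genoa" platforms.
channels = 12
bytes_per_transfer = 8  # one 64-bit channel moves 8 bytes per transfer

def peak_gbs(mt_per_s):
    # MT/s x 8 bytes per channel x channel count -> GB/s
    return mt_per_s * bytes_per_transfer * channels / 1000

print(peak_gbs(4800))  # Genoa at DDR5-4800 -> 460.8 GB/s
print(peak_gbs(6000))  # Turin at DDR5-6000 -> 576.0 GB/s
```

Under those assumptions, the jump from 4800 to 6000 MT/s is worth roughly 25% more peak memory bandwidth per socket.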

Micron Shows Off "Tall Form Factor" 256 GB DDR5-8800 MCRDIMM

Micron representatives showcased new products at last week's NVIDIA GTC event—one eye-catching DIMM is all set for deployment within next-generation servers. Tom's Hardware spent some time at Micron's booth—they found out that the "Tall Form Factor" 256 GB DDR5-8800 Multiplexer Combined Ranks (MCR) DIMM is being prepared for future enterprise processor platforms, including Intel's Xeon Scalable "Granite Rapids" family. A lone "tall" prototype module was exhibited, but company representatives indicated that standard height MCRDIMMs are in development. Tom's Hardware found out that these will be compact enough to fit in 1U-sized server systems. According to their in-person experience: "(Micron's) 256 GB MCRDIMMs are based on monolithic 32 Gb DDR5 ICs, but the tall one places 80 DRAM chips on both sides of the module, whereas the standard one uses 2Hi stacked packages, which means that they run slightly hotter due to less space for thermal dissipation. In any case, the tall module consumes around 20 W, which isn't bad as Micron's 128 GB DDR5-8000 RDIMM consumes 10 W at DDR5-4800."
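
The quoted chip count lines up with the module's capacity, assuming the standard server-DIMM arrangement in which every 64 data bits carry 16 extra ECC bits (an 80:64 ratio); this is an illustrative breakdown, not Micron's published layout:

```python
# Capacity arithmetic for the 256 GB module built from monolithic 32 Gb ICs.
gb_per_chip = 32 / 8          # 32 Gbit = 4 GB per DDR5 IC

total_chips = 80              # as counted in the Tom's Hardware quote
data_chips = 64               # assumption: 64 data bits per 80-bit channel pair
ecc_chips = total_chips - data_chips

print(data_chips * gb_per_chip)   # 256.0 -> GB of addressable capacity
print(ecc_chips * gb_per_chip)    # 64.0  -> GB holding ECC information
```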

In a recent earnings call, Micron CEO Sanjay Mehrotra commented on his company's latest technology: "we (have) started sampling our 256 GB MCRDIMM module, which further enhances performance and increases DRAM content per server." Next-gen Intel Xeon platforms are expected to support 12 or 24 memory slots per processor socket. Enabled datacenter machines could be specced with total 3 TB or 6 TB (DDR5-8000) memory capacities. AnandTech has summarized the benefits of Micron's new part: "Multiplexer Combined Ranks (MCR) DIMMs are dual-rank memory modules featuring a specialized buffer that allows both ranks to operate simultaneously. This buffer enables the two physical ranks to operate as though they were separate modules working in parallel, which allows for concurrent retrieval of 128 bytes of data from both ranks per clock cycle—compared to 64 bytes per cycle when it comes to regular memory modules—effectively doubling performance of a single module." The added complexity is offset by significant performance boons—ideal for advanced server-side AI-crunching in the future.
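
Two of the paragraph's claims can be restated as simple arithmetic: the per-socket capacity figures, and the MCR trick of fetching from both ranks at once:

```python
# Per-socket capacity with 256 GB modules, and the MCR per-cycle fetch size.
module_gb = 256
print(12 * module_gb / 1024)  # 3.0 -> TB with 12 memory slots per socket
print(24 * module_gb / 1024)  # 6.0 -> TB with 24 memory slots per socket

bytes_per_rank_fetch = 64                        # one 64-byte cache line per rank
mcr_bytes_per_cycle = bytes_per_rank_fetch * 2   # both ranks answer in parallel
print(mcr_bytes_per_cycle)                       # 128 -> vs. 64 on a regular DIMM
```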

Existence of Intel Meteor Lake-PS CPU Series Revealed in iBase MI1002 Datasheet

An intriguing offshoot of Intel's Meteor Lake generation of processors has been discovered by hardware sleuth momomo_us—an iBase MI1002 motherboard specification sheet contains references to a 14th Gen Core Ultra (Meteor Lake-PS) family, with a next-gen LGA1851 socket listed as the desktop platform. The industrial iBase Mini-ITX workstation board is "coming soon" according to a promotional image—this could signal a revival of Meteor Lake outside of laptop platforms. 2023 was a bit of a rollercoaster year for MTL-S SKUs (on socket LGA1851)—one moment Team Blue confirmed that it was happening, then a couple of days later it was scrapped. The upcoming Arrow Lake processor generation seems to be the logical heir to this mantle, but the (leaked) existence of Meteor Lake-PS throws a proverbial spanner into the works.

iBase's MTL-PS-ready boards will be niche "industrial/embedded" items—according to Tom's Hardware: "Intel hasn't officially revealed Meteor Lake PS, but given the "PS" designation, these upcoming processors target the IoT market, similar to Alder Lake PS. Therefore, it's safe to assume that Intel is bringing the mobile Meteor Lake processors to the LGA1851 socket...Although the motherboard has (this) socket, no chipset is present because Meteor Lake PS is the spitting image of the Meteor Lake chip and doesn't need a PCH." Team Blue is hyping up Arrow Lake (ARL-S) as its next-gen mainstream desktop platform, with a launch window set for later in 2024—by sharp contrast, Meteor Lake PS parts are highly unlikely to receive much fanfare upon release.

MediaTek Launches Next-gen ASIC Design Platform with Co-packaged Optics Solutions

Ahead of the 2024 Optical Fiber Communication Conference (OFC), MediaTek (last week) announced it is launching a next-generation custom ASIC design platform that includes the heterogeneous integration of both high-speed electrical and optical I/Os in the same ASIC implementation. MediaTek will be demonstrating a serviceable socketed implementation that combines 8x800G electrical links and 8x800G optical links for a more flexible deployment. It integrates both MediaTek's in-house SerDes for electrical I/O as well as co-packaged Odin optical engines from Ranovus for optical I/O. Leveraging the heterogeneous solution that includes both 112G LR SerDes and optical modules, this CPO demonstration delivers reduced board space and device costs, boosts bandwidth density, and lowers system power by up to 50% compared to existing solutions.

Additionally, Ranovus' Odin optical engine has the option to provide either internal or external laser optical modules to better align with practical usage scenarios. MediaTek's ASIC experience and capabilities in the 3 nm advanced process, 2.5D and 3D advanced packaging, thermal management, and reliability, combined with optical experience, make it possible for customers to access the latest technology for high-performance computing (HPC), AI/ML, and data center networking.

Samsung Roadmaps UFS 5.0 Storage Standard, Predicts Commercialization by 2027

Mobile tech tipster Revegnus has highlighted an interesting Samsung presentation slide—according to machine translation, the company's electronics division is already responding to an anticipated growth of "client-side large language model" service development. This market trend will demand improved Universal Flash Storage (UFS) interface speeds—Samsung engineers are currently engaged in "developing a new product that uses UFS 4.0 technology, but increases the number of channels from the current 2 to 4." The upcoming "more advanced" UFS 4.0 storage chips could be beefy enough to be utilized alongside next-gen mobile processors in 2025. For example, Arm is gearing up "Blackhawk," the Cortex-X4's successor—industry watchdogs reckon that the semiconductor firm's new core is designed to deliver "great Large Language Model (LLM) performance" on future smartphones. Samsung's roadmap outlines another major R&D goal, but this prospect is far from finalization—their chart reveals an anticipated 2027 rollout. The slide's body of text included a brief teaser: "at the same time, we are also actively participating in discussions on the UFS 5.0 standard."

SK hynix Unveils Highest-Performing SSD for AI PCs at NVIDIA GTC 2024

At GPU Technology Conference (GTC) 2024, SK hynix unveiled a new consumer product based on its latest solid-state drive (SSD), PCB01, which boasts industry-leading performance levels. Hosted by NVIDIA in San Jose, California from March 18-21, GTC is one of the world's leading conferences for AI developers. Aimed at on-device AI PCs, PCB01 is a PCIe fifth-generation SSD whose performance and reliability were recently verified by a major global customer. After completing product development in the first half of 2024, SK hynix plans to launch two versions of PCB01 by the end of the year, targeting both major technology companies and general consumers.

Optimized for AI PCs, Capable of Loading LLMs Within One Second
Offering the industry's highest sequential read speed of 14 gigabytes per second (GB/s) and a sequential write speed of 12 GB/s, PCB01 doubles the speed specifications of its previous generation. This enables the loading of LLMs required for AI learning and inference in less than one second. To make on-device AIs operational, PC manufacturers create a structure that stores an LLM in the PC's internal storage and quickly transfers the data to DRAMs for AI tasks. In this process, the PCB01 inside the PC efficiently supports the loading of LLMs. SK hynix expects these characteristics of its latest SSD to greatly increase the speed and quality of on-device AIs.
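
SK hynix's sub-second claim checks out for mid-sized models. As a rough illustration (the 7B-parameter FP16 model below is an assumed workload, not one SK hynix names):

```python
# Back-of-envelope check on the "load an LLM in under a second" claim,
# assuming a 7B-parameter model stored at FP16 (2 bytes per weight).
params = 7e9
bytes_per_param = 2                  # FP16 weights
model_gb = params * bytes_per_param / 1e9

read_speed_gbs = 14                  # PCB01 sequential read, per SK hynix

print(model_gb)                      # 14.0 -> GB on disk
print(model_gb / read_speed_gbs)     # 1.0  -> seconds to stream into DRAM
```

Larger or higher-precision models would take proportionally longer, so the "less than one second" figure plausibly refers to models of this class or smaller.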

Dell Expands Generative AI Solutions Portfolio, Selects NVIDIA Blackwell GPUs

Dell Technologies is strengthening its collaboration with NVIDIA to help enterprises adopt AI technologies. By expanding the Dell Generative AI Solutions portfolio, including with the new Dell AI Factory with NVIDIA, organizations can accelerate integration of their data, AI tools and on-premises infrastructure to maximize their generative AI (GenAI) investments. "Our enterprise customers are looking for an easy way to implement AI solutions—that is exactly what Dell Technologies and NVIDIA are delivering," said Michael Dell, founder and CEO, Dell Technologies. "Through our combined efforts, organizations can seamlessly integrate data with their own use cases and streamline the development of customized GenAI models."

"AI factories are central to creating intelligence on an industrial scale," said Jensen Huang, founder and CEO, NVIDIA. "Together, NVIDIA and Dell are helping enterprises create AI factories to turn their proprietary data into powerful insights."

AMD Zen 5 "Znver5" CPU Enablement Spotted in Change Notes

Close monitoring of AMD engineering activities—around mid-February time—revealed the existence of a new set of patches for GNU Compiler Collection (GCC). At the time, news reports put spotlights on Team Red's "znver5" enablement—this target indicated that staffers were prepping Zen 5 processor microarchitecture with an expanded AVX instruction set (building on top of Zen 4's current capabilities). Phoronix's Michael Larabel has fretted over AMD's relative silence over the past month—regarding a possible merging of support prior to the stable release of GCC 14.

He was relieved to discover renewed activity earlier today: "AMD Zen 5 processor enablement has been merged to GCC Git in time for the GCC 14.1 stable release that will be out in the coming weeks. It was great seeing AMD getting their Zen 5 processor enablement upstreamed ahead of any Ryzen or EPYC product launches and being able to do so in time for the annual major GNU Compiler Collection feature release." Team Red is inching ever closer to the much anticipated 2024 rollout of next-gen Ryzen 9000 processors; please refer to a VideoCardz-authored timeline diagram (below)—"Granite Ridge" is an incoming AM5 desktop CPU family (reportedly utilizing Zen 5 and RDNA 2 tech), while "Strix Point" is scheduled to become a mobile APU series (Zen 5 + RDNA 3.5).

ScaleFlux To Integrate Arm Cortex-R82 Processors in Its Next-Generation Enterprise SSD Controllers

ScaleFlux, a leader in deploying computational storage at scale, today announced its commitment to integrating the Arm Cortex-R82 processor in its forthcoming line of enterprise Solid State Drive (SSD) controllers. The Cortex-R82 is the highest-performance real-time processor from Arm and the first to implement the 64-bit Armv8-R AArch64 architecture, representing a significant advancement in processing power and efficiency for enterprise storage solutions.

ScaleFlux's adoption of the Cortex-R82 is a strategic move to leverage the processor's high performance and energy efficiency. This collaboration underscores ScaleFlux's dedication to delivering cutting-edge technology in its SSD controllers, enhancing data processing capabilities and efficiency for data center and AI infrastructure worldwide.

Samsung Accelerates R&D of Glass Substrate Chip Packaging

The Samsung Group has formed a new cross-department alliance—according to South Korea's Sedaily—this joint operation will concentrate on the research and development of a "dream substrate." The company's Electronics, Electrical Engineering, and Display divisions are collaborating in order to accelerate commercialization of "glass substrate" chip packaging. Last September, Intel revealed its intention to become an industry leader in "glass substrate production for next-generation advanced packaging." Team Blue's shiny new Arizona fabrication site will be taking on this challenge, following ten years of internal R&D work. Industry watchdogs reckon that mass production—in North America—is not expected to kick off anytime soon. Sensible guesstimates suggest a start date somewhere in 2030.

The Sedaily article states that Samsung's triple department alliance will target "commercialization faster than Intel." Company representatives—in attendance at CES 2024—set a 2026 window as their commencement goal for advanced glass substrate chip package mass production. An unnamed South Korean industry watcher has welcomed a new entrant on the field: "as each company possesses the world's best technology, synergies will be maximized in glass substrate research, which is a promising field...it is also important to watch how the glass substrate ecosystem of Samsung's joint venture will be established." Glass substrate packaging is ideal for "large-area and high-performance chip combinations" due to inherent heat-resistant properties and material strength. So far, the semiconductor industry has struggled with its development—hence the continued reliance on plastic boards and organic materials.

AMD Pushes Performance Monitoring Patches for Upcoming Zen 5 CPUs

Thanks to Phoronix, we have discovered that AMD has recently released initial patches for performance monitoring and events related to their upcoming Zen 5 processors in the Linux kernel. These patches, sent out for review on the kernel mailing list, provide the necessary JSON files for PMU (Performance Monitoring Unit) events and metrics that will be exposed through the Linux perf tooling. As the patches consist of JSON additions and do not risk regressing existing hardware support, there is a possibility that they could be included in the upcoming Linux v6.9 kernel cycle. This would allow developers and enthusiasts to access detailed performance data for Zen 5 CPUs once they become available, helping with optimization and analysis of the next-generation processors.
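
For context, the perf tooling consumes these additions as plain JSON event lists under the kernel's `tools/perf/pmu-events` tree. The snippet below parses a single illustrative entry; the field names follow the pmu-events schema, but the event shown is a generic example rather than one taken from AMD's Zen 5 patches:

```python
import json

# One hypothetical PMU event in the tools/perf/pmu-events JSON shape.
# Field names (EventName, EventCode, BriefDescription) are real schema
# fields; the values here are illustrative, not AMD's actual Zen 5 data.
event_json = """
[
  {
    "EventName": "ex_ret_instr",
    "EventCode": "0xc0",
    "BriefDescription": "Retired instructions."
  }
]
"""

events = json.loads(event_json)
for ev in events:
    # perf builds these into named events usable via e.g. `perf stat -e <name>`
    print(ev["EventName"], "->", ev["BriefDescription"])
```

Because additions like this only describe new hardware, they carry no risk of regressing existing event tables, which is why a late merge into a kernel cycle is plausible.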

The release of these patches follows AMD's publication of performance monitor counter documentation for AMD Family 1Ah Model 00h to 0Fh processors last week, confirming that these models represent the upcoming Zen 5 lineup. While Linux kernel 6.8 already includes some elements of Zen 5 CPU support, the upstream Linux enablement for these next-generation AMD processors is an ongoing process. Phoronix's examination of the Zen 5 core and uncore events, as well as the metrics and mappings, shows that they are largely similar to those found in current Zen 4 processors. This suggests that AMD has focused on refining and optimizing the performance monitoring capabilities of its new architecture rather than introducing significant changes. With performance monitoring now getting a push as well, the Zen 5 launch appears to be nearing.

JEDEC Reportedly Finalizing LPDDR6 Standard for Mobile Platforms

JEDEC is expected to announce a next-gen low-power RAM memory (LPDDR) standard specification by the third quarter of this year. Earlier today, smartphone technology watcher Revegnus highlighted insider information disclosed within an ETnews article. The international semiconductor standards body has recently concluded negotiations regarding "next-generation mobile RAM standards"—the report posits that "more than 60 people from memory, system semiconductor, and design asset (IP) companies participated" in a meeting held in Lisbon, Portugal. A quoted participant stated (to ETnews): "We have held various discussions to confirm the LPDDR6 standard specification...(Details) will be released in the third quarter of this year."

The current generation LPDDR5 standard was secured back in February 2019—noted improvements included 50% performance and 30% power efficiency jumps over LPDDR4. Samsung Electronics and SK Hynix are in the process of mass-producing incremental improvements—in the form of LPDDR5X and LPDDR5T. A second source stated: "Technology development and standard discussions are taking place in a way to minimize power consumption, which increases along with the increase in data processing." A full-fledged successor is tasked with further enhancing data processing performance. Industry figures anticipate that LPDDR6 will greatly assist in an industry-wide push for "on-device AI" processing. They reckon that "large-scale AI calculations" will become the norm on smartphones, laptops, and tablet PCs. Revegnus has heard (fanciful) whispers about a potential 2024 rollout: "support may be available starting with Qualcomm's Snapdragon 8 Gen 4, expected to be released as early as the second half of this year." Sensible predictions point to possible commercialization in late 2025, or early 2026.

NVIDIA Calls for Global Investment into Sovereign AI

Nations have long invested in domestic infrastructure to advance their economies, control their own data and take advantage of technology opportunities in areas such as transportation, communications, commerce, entertainment and healthcare. AI, the most important technology of our time, is turbocharging innovation across every facet of society. It's expected to generate trillions of dollars in economic dividends and productivity gains. Countries are investing in sovereign AI to develop and harness such benefits on their own. Sovereign AI refers to a nation's capabilities to produce artificial intelligence using its own infrastructure, data, workforce and business networks.

Why Sovereign AI Is Important
The global imperative for nations to invest in sovereign AI capabilities has grown since the rise of generative AI, which is reshaping markets, challenging governance models, inspiring new industries and transforming others—from gaming to biopharma. It's also rewriting the nature of work, as people in many fields start using AI-powered "copilots." Sovereign AI encompasses both physical and data infrastructures. The latter includes sovereign foundation models, such as large language models, developed by local teams and trained on local datasets to promote inclusiveness with specific dialects, cultures and practices. For example, speech AI models can help preserve, promote and revitalize indigenous languages. And LLMs aren't just for teaching AIs human languages, but for writing software code, protecting consumers from financial fraud, teaching robots physical skills and much more.

NVIDIA RTX 50-series "GB20X" GPU Memory Interface Details Leak Out

Earlier in the week it was revealed that NVIDIA had distributed next-gen AI GPUs to its most important ecosystem partners and customers—Dell's CEO expressed enthusiasm with his discussion of "Blackwell" B100 and B200 evaluation samples. Team Green's next-gen family of gaming GPUs has received less media attention in early 2024—a mid-February TPU report pointed to a rumored PCIe 6.0 CEM specification for upcoming RTX 50-series cards, but leaks have become uncommon since late last year. Top technology tipster, kopite7kimi, has broken the relative silence on Blackwell's gaming configurations—an early-hours tweet posits a slightly underwhelming scenario: "although I still have fantasies about 512 bit, the memory interface configuration of GB20x is not much different from that of AD10x."

Past disclosures have hinted that next-gen NVIDIA gaming GPUs will sport memory interface configurations comparable to the current crop of "Ada Lovelace" models. The latest batch of insider information suggests that Team Green's next flagship GeForce RTX GPU—GB202—will stick with a 384-bit memory bus. The beefiest current-gen GPU, AD102—as featured in GeForce RTX 4090 graphics cards—is specced with a 384-bit interface. A significant upgrade for GeForce RTX 50xx cards could arrive with a step up to next-gen GDDR7 memory—kopite7kimi reckons that top GPU designers will stick with 16 Gbit memory chip densities (2 GB). JEDEC officially announced its "GDDR7 Graphics Memory Standard" a couple of days ago. VideoCardz has kindly assembled the latest batch of insider info into a cross-generation comparison table (see below).
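
Why the bus width matters: peak memory bandwidth is simply bus width times per-pin data rate, so a same-width bus only gains from faster memory. The GDDR7 speed below is an assumed early-grade 28 Gb/s, not a confirmed GB202 specification:

```python
# Peak bandwidth = (bus bits / 8 bytes) x per-pin data rate in Gb/s -> GB/s.
def bandwidth_gbs(bus_bits, gbps_per_pin):
    return bus_bits / 8 * gbps_per_pin

rtx_4090 = bandwidth_gbs(384, 21)   # GDDR6X at 21 Gb/s on a 384-bit bus
gb202_est = bandwidth_gbs(384, 28)  # same bus width, assumed 28 Gb/s GDDR7

print(rtx_4090)   # 1008.0 -> GB/s, matching the RTX 4090's rated figure
print(gb202_est)  # 1344.0 -> GB/s, a ~33% uplift without widening the bus
```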

SK Hynix To Invest $1 Billion into Advanced Chip Packaging Facilities

Lee Kang-Wook, Vice President of Research and Development at SK Hynix, has discussed the increased importance of advanced chip packaging with Bloomberg News. In an interview with the media company's business section, Lee referred to a tradition of prioritizing the design and fabrication of chips: "the first 50 years of the semiconductor industry has been about the front-end." He believes that the latter half of production processes will take precedence in the future: "...but the next 50 years is going to be all about the back-end." He outlined a "more than $1 billion" investment into South Korean facilities—his department is hoping to "improve the final steps" of chip manufacturing.

SK Hynix's Head of Packaging Development pioneered a novel method of packaging the third generation of high bandwidth memory technology (HBM2E)—that innovation secured NVIDIA as a high-profile and long-term customer. Demand for Team Green's AI GPUs has boosted the significance of HBM technologies—Micron and Samsung are attempting to play catch-up with new designs. South Korea's leading memory supplier is hoping to stay ahead in the next-gen HBM contest—12-layer fifth-generation samples have supposedly been submitted to NVIDIA for approval. SK Hynix's Vice President recently revealed that HBM production volumes for 2024 have sold out—currently, company leadership is considering the next steps for market dominance in 2025. The majority of the firm's newly announced $1 billion budget will be spent on the advancement of MR-MUF and TSV technologies, according to their R&D chief.

Intel Sets 100 Million CPU Supply Goal for AI PCs by 2025

Intel has been hyping up their artificial intelligence-augmented processor products since late last year—their "AI Everywhere" marketing push started with the official launch of Intel Core Ultra mobile CPUs, AKA the much-delayed Meteor Lake processor family. CEO Pat Gelsinger stated (mid-December 2023): "AI innovation is poised to raise the digital economy's impact up to as much as one-third of global gross domestic product...Intel is developing the technologies and solutions that empower customers to seamlessly integrate and effectively run AI in all their applications—in the cloud and, increasingly, locally at the PC and edge, where data is generated and used." Team Blue's presence at this week's MWC Barcelona 2024 event introduced "AI Everywhere Across Network, Edge, Enterprise."

Nikkei Asia sat down with Intel's David Feng—Vice President of Client Computing Group and General Manager of Client Segments. The impressively job-titled executive discussed the "future of AI PCs," and set some lofty sales goals for his firm. According to the Nikkei report, Intel leadership expects to "deliver 40 million AI PCs" this year and a further 60 million units next year—representing "more than 20% of the projected total global PC market in 2025." Feng and his colleagues predict that mainstream customers will prefer to use local "on-device" AI solutions (equipped with NPUs), rather than rely on remote cloud services. Significant Edge AI improvements are expected to arrive with next generation Lunar Lake and Arrow Lake processor families, the latter will be bringing Team Blue NPU technologies to desktop platforms—AMD's Ryzen 8000G series of AM5 APUs launched with XDNA engines last month.

Samsung Reportedly Working on Backside Power Supply Tech with 2 Nanometer Process

Samsung and ARM announced a collaborative project last week—the partners are aiming to deliver an "optimized next generation Arm Cortex-X CPU" developed on the latest Gate-All-Around (GAA) process technology. Semiconductor industry watchdogs believe that Samsung Foundry's 3 nm GAA process did not meet sales expectations—reports suggest that many clients decided to pursue advanced three nanometer service options chez TSMC. The South Korean multinational manufacturing conglomerate is setting its sights forward—with an in-progress SF2 GAAFET process in the pipeline—industry insiders reckon that Samsung leadership is hoping to score a major victory within this next-gen market segment.

Lately, important industry figures have been hyping up Backside Power Supply Delivery Network (BSPDN) technology—recent Intel Foundry Services (IFS) press material lays claim to several technological innovations. A prime example being an ambitious five-nodes-in-four-years (5N4Y) process roadmap that: "remains on track and will deliver the industry's first backside power solution." A Chosun Business report proposes that Samsung is working on Backside Power Supply designs—a possible "game changer" when combined with in-house 2 nm SF2 GAAFET. Early experiments, allegedly, involving two unidentified ARM cores have exceeded expectations—according to Chosun's sources, engineers were able to: "reduce the chip area by 10% and 19%, respectively, and succeeded in improving chip performance and frequency efficiency to a single-digit level." Samsung Foundry could be adjusting its mass production timetables, based on freshly reported technological breakthroughs—SF2 GAAFET + BSPDN designs could arrive before the original targeted year of 2027. Prior to the latest developments, Samsung's BSPDN tech was linked to a futuristic 1.7 nm line.

Huawei Launches OptiXtrans DC908 Pro, a Next-gen DCI Platform for the AI Era

At MWC Barcelona 2024, Huawei launched the Huawei OptiXtrans DC908 Pro, a new platform for Data Center Interconnect (DCI) designed for the intelligent era. This innovative platform ensures the efficient, secure, and stable transmission of data between data centers (DCs), setting a new standard for DCI networks. As AI continues to proliferate across various service scenarios, the demand for foundation models has intensified, leading to an explosion in data volume. DCs are now operating at the petabyte level, and DCI networks have evolved from single-wavelength 100 Gbit/s to single-wavelength Tbit/s.

In response to the challenges posed by massive data transmission in the intelligent era, Huawei introduces the next-generation DCI platform, the Huawei OptiXtrans DC908 Pro. Compared to its predecessor, the DC908 Pro offers higher bandwidth, reliability, and intelligence.

Microsoft DirectX Team to Introduce "DirectSR" at GDC 2024

According to a Game Developers Conference (GDC) 2024 schedule page, Microsoft is planning to present next-gen technologies in its upcoming "DirectX State of the Union Ft. Work Graphs and Introducing DirectSR" presentation. Shawn Hargreaves, Direct3D's Development Manager, and Austin Kinross, Microsoft's PIX Developer Lead, are scheduled to discuss matters with representatives from NVIDIA and AMD. Wessam Bahnassi, a "20-year veteran in 3D engine design and optimization," is Team Green's Principal Engineer of Developer Technology. Rob Martin, a Fellow Software Engineer, will represent Team Red, where he leads development of GPU Work Graphs implementations. According to GDC, the intended audience will be "graphics developers or technical directors from game studios or engine companies."

Earlier this month, an "Automatic super resolution" feature was discovered in a Windows 11 Insider Preview build (24H2)—its caption stated: "use AI to make supported games play more smoothly with enhanced details," although further interface options extended its use to desktop applications as well. Initial analysis and user impressions indicated that Microsoft engineers had created a proprietary model, separate from the familiar technologies: NVIDIA DLSS, AMD FSR, and Intel XeSS. It is interesting to note that Team Blue is not participating in the upcoming March 21 "DirectX State of the Union" panel discussion (a sponsored session). GDC's event description states (in full): "The DirectX team will showcase the latest updates, demos, and best practices for game development with key partners from AMD and NVIDIA. Work graphs are the newest way to take full advantage of GPU hardware and parallelize workloads. Microsoft will provide a preview into DirectSR, making it easier than ever for game devs to scale super resolution support across Windows devices. Finally, dive into the latest tooling updates for PIX."

Alleged ARM Cortex-X5 Underperformance Linked to Power Consumption Concerns

ARM's in-progress fifth generation "Blackhawk" Cortex design is allegedly going through a troubled phase of development, according to Chinese insider sources. A Revegnus (@Tech_Reve) social media post highlights ongoing issues: "It's reported that the Cortex X5 architecture is underperforming compared to expectations. It's speculated that the high-frequency power consumption has surged explosively. Therefore, if performance is reduced for lower power consumption, the Geekbench 6 multi-core score of Dimensity 9400 may not achieve a score of 9,400 points." A recent Moor Insights & Strategy analysis piece proposed that "Blackhawk" would become "the most powerful option available at launch" later this year—mobile chipsets leveraging ARM's Cortex-X5 design are touted to face tough next-gen competition from Qualcomm and Apple corners.

Revegnus also commented on a rival SoC: "While Snapdragon 8 Gen 4 is seen to have minor issues, there is no evidence to support this claim. There might be a problem with low-frequency power consumption not showing clear superiority over ARM's middle cores." Qualcomm's next flagship model is performing admirably according to insiders—an engineering sample managed to score 10,628 points in alleged Geekbench 6 multi-core gauntlets. Late last month, prototype clocks were leaked—Digital Chat Station claimed that a Snapdragon 8 Gen 4 High-Performance "Big" core was capable of reaching 4.0 GHz. Prior to the latest news, MediaTek's Dimensity 9400 SoC was observed achieving ~10,000-point multi-core Geekbench 6 scores—leaked CPU cluster details present a single "Big" Cortex-X5 unit operating alongside three Cortex-X4 cores.

Games Consultant Predicts H2 2024 Launch for PlayStation 5 Pro

Serkan Toto, CEO of Tokyo-based games consultancy Kantan Games, was interviewed by CNBC earlier this week—he was invited on-air to provide expert commentary on Sony's freshly revised sales and revenue forecast for PlayStation 5 products. He believes that great forward momentum is best achieved with refreshed hardware and a well-timed launch coinciding with the release of AAA/blockbuster game titles. Last autumn's rollout of slimmer PlayStation 5 consoles was not particularly exciting—with no major bump up in specs or attractive pricing. Rumors of an inevitable "Pro" variant have circulated for more than a year.

Sony Computer Entertainment (SCE) and AMD are believed to be co-operating on a very potent hardware redesign—reports from late last year posited that a semi-custom "Viola" SoC is in the pipeline. A more expensive RDNA 3-upgraded refresh could attract an additional segment of hardcore gamers, but another industry analyst reckons that Sony is unlikely to implement a standard model price cut later this year (based on past trends). George Jijiashvili, senior principal analyst at Omdia, stated: "A scenario where Sony launches a PS5 Pro, but still experiences declining year-on-year hardware sales is very much within the realms of possibility." Serkan Toto (of Kantan Games consultancy) expressed a more optimistic view: "There seems to be a broad consensus in the game industry that Sony is indeed preparing a launch of a PS5 Pro in the second half of 2024...And Sony will want to make sure to have a great piece of hardware ready when GTA VI hits in 2025, a launch that will be a shot in the arm for the entire gaming industry."

NVIDIA Introduces NVIDIA RTX 2000 Ada Generation GPU

Generative AI is driving change across industries—and to take advantage of its benefits, businesses must select the right hardware to power their workflows. The new NVIDIA RTX 2000 Ada Generation GPU delivers the latest AI, graphics and compute technology to compact workstations, offering up to 1.5x the performance of the previous-generation RTX A2000 12 GB in professional workflows. From crafting stunning 3D environments to streamlining complex design reviews to refining industrial designs, the card's capabilities pave the way for an AI-accelerated future, empowering professionals to achieve more without compromising on performance or capabilities. Modern multi-application workflows, such as AI-powered tools, multi-display setups and high-resolution content, put significant demands on GPU memory. With 16 GB of memory in the RTX 2000 Ada, professionals can tap the latest technologies and tools to work faster and better with their data.

Powered by NVIDIA RTX technology, the new GPU delivers impressive realism in graphics with NVIDIA DLSS, delivering ultra-high-quality, photorealistic ray-traced images more than 3x faster than before. In addition, the RTX 2000 Ada enables an immersive experience for enterprise virtual-reality workflows, such as for product design and engineering design reviews. With its blend of performance, versatility and AI capabilities, the RTX 2000 Ada helps professionals across industries achieve efficiencies. Architects and urban planners can use it to accelerate visualization workflows and structural analysis, enhancing design precision. Product designers and engineers using industrial PCs can iterate rapidly on product designs with fast, photorealistic rendering and AI-powered generative design. Content creators can edit high-resolution videos and images seamlessly, and use AI for realistic visual effects and content creation assistance. And in vital embedded applications and edge computing, the RTX 2000 Ada can power real-time data processing for medical devices, optimize manufacturing processes with predictive maintenance and enable AI-driven intelligence in retail environments.