News Posts matching #AI


DFI Unveils Embedded System Module Equipped with Intel's Latest AI Processor

DFI, the world's leading brand in embedded motherboards and industrial computers, is targeting the AI application market by launching the MTH968, a system-on-module (SOM) equipped with the latest Intel Core Ultra processor. It is DFI's first product with an integrated NPU (Neural Processing Unit), marking the official integration of AI with industrial PCs (IPCs). With the expansion into AI IPC, DFI expects to inject new momentum into the AI edge computing market.

According to the STL Partners report, the potential market value of global edge computing will increase from US$9 billion in 2020 to US$462 billion in 2030, representing a compound annual growth rate (CAGR) of 49%. As a result, products that use on-chip capabilities to execute AI edge computing directly on devices have become a key focus for many major technology companies.
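As a quick sanity check on the figures above, the implied CAGR can be recomputed from the two endpoints (an illustrative calculation, not taken from the STL Partners report itself):

```python
# Recompute the implied CAGR from the report's endpoints:
# US$9 billion in 2020 growing to US$462 billion in 2030.
start_value = 9.0     # US$ billion, 2020
end_value = 462.0     # US$ billion, 2030
years = 10

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 48%, in line with the reported ~49%
```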

Intel Xeon W9-3595X Spotted with 60 Cores, 112 MB of L3 Cache, and 4.60 GHz Frequency

Intel's upcoming high-end desktop (HEDT) processor lineup for enthusiasts and prosumers is just around the corner, and today we managed to see the flagship SKU - the Xeon W9-3595X. Spotted recently in Geekbench benchmarks, this new chip packs a whopping 60 cores and 120 threads, making it Intel's highest-core-count HEDT offering yet. The Xeon W9-3595X is based on Intel's advanced Sapphire Rapids architecture, built using the Intel 7 process node. It succeeds the previous flagship 56-core W9-3495X, with four additional cores crammed into the same 350 W TDP envelope. Clock speeds have taken a slight hit to accommodate the extra cores, with the maximum turbo frequency lowered from 4.8 GHz on the 3495X to 4.6 GHz on the new 3595X.

However, with more cores, the 3595X should still offer a significant multi-threaded performance uplift in heavily parallel workloads. The Xeon W9-3595X will drop into existing LGA-4677 motherboards, like the ASUS Pro WS W790-ACE, after a BIOS update. It features 112 MB of L3 cache and 120 MB of L2 cache (2 MB per core), and continues Intel's push towards higher core counts for enthusiasts, content creators, and workstation users who need maximum multi-threaded horsepower. Pricing and availability details remain unannounced, but with an appearance in public databases, an official launch of the 60-core HEDT juggernaut seems imminent. These new Sapphire Rapids SKUs will likely carry extra AI features, like dedicated AI acceleration engines, in the same manner as server-class SKUs.
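The headline numbers are internally consistent; a quick check of the per-core figures (illustrative arithmetic only):

```python
# Verify the reported totals from the per-core specs of the Xeon W9-3595X.
cores = 60
threads_per_core = 2     # Hyper-Threading
l2_per_core_mb = 2       # 2 MB of L2 per core, per the report

threads = cores * threads_per_core
total_l2_mb = cores * l2_per_core_mb
print(threads, total_l2_mb)   # 120 threads, 120 MB of L2
```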

NVIDIA Faces AI Chip Shortages, Turns to Intel for Advanced Packaging Services

NVIDIA's supply of AI chips remains tight due to insufficient advanced packaging production capacity from key partner TSMC. As per the UDN report, NVIDIA will add Intel as a provider of advanced packaging services to help ease the constraints. Intel is expected to start supplying NVIDIA with a monthly advanced packaging capacity of about 5,000 units in Q2 at the earliest. While TSMC will remain NVIDIA's primary packaging partner, Intel's participation significantly boosts NVIDIA's total production capacity by nearly 10%. Even after Intel comes online, TSMC will still account for the lion's share—about 90% of NVIDIA's advanced packaging needs. TSMC is also aggressively expanding capacity, with monthly production expected to reach nearly 50,000 units in Q1, a 25% increase over December 2023. Intel has advanced packaging facilities in the U.S. and is expanding its capacity in Penang. The company has an open model, allowing customers to leverage its packaging solutions separately.
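The capacity figures quoted in the report can be cross-checked with simple arithmetic (an illustrative sketch using the article's round numbers):

```python
# Cross-check the advanced-packaging capacity figures quoted in the report.
intel_monthly = 5_000    # units/month Intel is expected to supply
tsmc_monthly = 50_000    # units/month TSMC is expected to reach in Q1

# Intel's addition relative to TSMC's base capacity (~10% boost)
boost = intel_monthly / tsmc_monthly
# TSMC's share of the combined capacity (roughly 90%)
tsmc_share = tsmc_monthly / (tsmc_monthly + intel_monthly)
# TSMC's implied December 2023 capacity, given a 25% increase by Q1
dec_2023 = tsmc_monthly / 1.25

print(f"{boost:.0%} boost, {tsmc_share:.0%} TSMC share, {dec_2023:,.0f} units in Dec 2023")
```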

The AI chip shortages stemmed from insufficient advanced packaging capacity, tight HBM3 memory supply, and overordering by some cloud providers. These constraints are now easing faster than anticipated. The additional supply will benefit AI server providers like Quanta, Inventec and GIGABYTE. Quanta stated that the demand for AI servers remains robust, with the main limitation being chip supply. Both Inventec and GIGABYTE expect strong AI server shipment growth this year as supply issues resolve. The ramping capacity from TSMC and Intel in advanced packaging and improvements upstream suggest the AI supply crunch may be loosening. This would allow cloud service providers to continue the rapid deployment of AI workloads.

Kioxia Joins HPE Servers on Space Launch Destined for the International Space Station

Today, KIOXIA SSDs took flight with the launch of the NG-20 mission rocket, which is delivering an updated HPE Spaceborne Computer-2, based on HPE EdgeLine and ProLiant servers from Hewlett Packard Enterprise (HPE), to the International Space Station (ISS). KIOXIA SSDs provide robust flash storage in HPE Spaceborne Computer-2 to conduct scientific experiments aboard the space station.

HPE Spaceborne Computer-2, based on commercial off-the-shelf technology, provides edge computing and AI capabilities on board the research outpost as part of a greater mission to significantly advance computing power in space and reduce dependency on communications as space exploration continues to expand. Designed to perform various high-performance computing (HPC) workloads in space, including real-time image processing, deep learning, and scientific simulations, HPE Spaceborne Computer-2 can be used to compute a number of experiment types including healthcare, natural disaster recovery, 3D printing, 5G, AI, and more.

MSI Unveils AI-Driven Gaming Desktops with NVIDIA GeForce RTX 40 SUPER Series

MSI, a forefront leader in True Gaming hardware, proudly announces the incorporation of NVIDIA GeForce RTX 40 SUPER Series graphics cards into its latest 14th generation AI gaming desktops. The MEG Trident X2 14th, MPG Infinite X2 14th, MPG Trident AS 14th, MAG Infinite S3 14th, and MAG Codex 6 14th, initially featuring the NVIDIA GeForce RTX 40 Series graphics cards, now boast the cutting-edge RTX 40 SUPER Series, ushering in a new era of gaming excellence.

At the heart of these 14th gen AI gaming desktops lies the revolutionary RTX 40 SUPER Series, which are GeForce RTX 4080 SUPER, GeForce RTX 4070 Ti SUPER, and GeForce RTX 4070 SUPER. This series reshapes the gaming experience with cutting-edge AI capabilities, surpassing the speed of their predecessors. Equipped with RTX platform superpowers, these GPUs elevate the performance of games, applications, and AI tasks, marking a significant advancement in the gaming landscape.

Windows 11 Preview Build Removes WordPad

Microsoft quietly added WordPad to its "Deprecated features for Windows client" list last September—a short message stated that the popular bundled word processing application will "no longer be updated and will be removed in a future release of Windows. We recommend Microsoft Word for rich text documents like .doc and .rtf and Windows Notepad for plain text documents like .txt." The aforementioned "future" version of the mainstream operating system appears to be the recently issued Windows 11 Insider Preview Build 26040 (through the Canary Channel). Microsoft is pushing an AI-feature-enriched user experience—last week's Insider blog reveals that Voice Clarity is escaping its Surface family confines; the application no longer relies on NPU hardware.

According to Preview 26040's accompanying notes: "Starting with this build, the WordPad and People apps will no longer be installed after doing a clean install of the OS. In a future flight, WordPad will be removed in an upgrade. WordPad will not be reinstallable. WordPad is a deprecated Windows feature." Many journalists have pointed out that protest efforts could save WordPad from its deprecated fate—enough fuss was generated over Microsoft's proposed axing of MS Paint to warrant a reversal and eventual AI enrichment. A segment of the Windows userbase will welcome the upcoming dismissal of Cortana (already effective in the latest W11 preview)—the older personal productivity assistant is being pushed aside in favor of Windows Copilot.

Samsung Showcases B2B Displays with Advanced Connectivity at ISE 2024

Samsung Electronics today at Integrated Systems Europe (ISE) 2024 in Barcelona is showcasing how SmartThings will bolster its B2B displays to shape the future of business connectivity. Samsung's "SmartThings for Business" exhibition emphasizes the new advancements that the cutting-edge internet-of-things (IoT) platform will offer, as well as Samsung's commitment to providing more connected, easy-to-control digital signage across industries. "In a commercial display sector where operational efficiency is key, Samsung digital signage is leveraging SmartThings to deliver next-gen connectivity and features to organizations of all sizes," said SW Yong, President and Head of Visual Display Business at Samsung Electronics. "This further expansion of the SmartThings ecosystem will serve to elevate experiences for customers and partners from a wide variety of industries."

How Businesses Can Leverage Connected Tech Through SmartThings—From the Smart Store to Smart Office
At the event, Samsung is showcasing how SmartThings enables business owners to leverage their digital signage to connect and gain more control of their smart devices across various landscapes. By offering the SmartThings connectivity feature on commercial display products such as Smart Signage and Hotel TVs, users can experience the convenience of hyper-connectivity in their business environments. These changes will cover Samsung smart devices, as well as other devices that support the industry's latest IoT standards, such as Matter and those of the Home Connectivity Alliance (HCA). Through the application of SmartThings to various business environments, Samsung contributes to more efficient management of space and energy by transforming places of business into interconnected smart spaces. These connectivity improvements have been designed to benefit all types of business customers, from small and mid-sized business owners to enterprises. Examples of the smart spaces—including a smart store, smart office and smart hotel—are on display at Samsung's booth at ISE 2024.

Windows 11 Preview Build Enlarges Reach of AI Feature, Formerly Surface Only

The Windows 11 Insider Preview Build 26040 was made available via the Canary Channel last Friday—Microsoft's introductory blog explores a wide range of improvements and fixes, including a new Task Manager icon, support for USB 80 Gbps (AKA USB4 Version 2.0), and Colemak and Hebrew keyboard layouts, among others. The most significant change arrives with a wider deployment of Voice Clarity—a change that has received plenty of press attention over the past weekend. The AI-powered feature was first released publicly back in 2022, but its rollout was limited to compatible Surface devices. Microsoft's January 26 Insider Blog entry details the escape plan—support is extending to "x64 and Arm64 CPUs" without any need for "extra hardware."

The relevant section stated: "We are expanding Voice Clarity which has been available on Surface devices and making it available to more people. Voice Clarity is a feature that enhances your audio experience on Windows with state-of-the-art AI technology. Powered by low complexity AI models, Voice Clarity cancels echo, suppresses background noise, and reduces reverberation in real-time. This feature is enabled by default and can be utilized by applications using Communications Signal Processing Mode, such as Phone Link and WhatsApp." Previously, this feature was only available on Surface devices that sported a Neural Processing Unit (NPU), so it is interesting to see Voice Clarity becoming less reliant on AI-tailored hardware.

Microsoft Announces Participation in National AI Research Resource Pilot

We are delighted to announce our support for the National AI Research Resource (NAIRR) pilot, a vital initiative highlighted in the President's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. This initiative aligns with our commitment to broaden AI research and spur innovation by providing greater computing resources to AI researchers and engineers in academia and non-profit sectors. We look forward to contributing to the pilot and sharing insights that can help inform the envisioned full-scale NAIRR.

The NAIRR's objective is to democratize access to the computational tools essential for advancing AI in critical areas such as safety, reliability, security, privacy, environmental challenges, infrastructure, health care, and education. Advocating for such a resource has been a longstanding goal of ours, one that promises to equalize the field of AI research and stimulate innovation across diverse sectors. As a commissioner on the National Security Commission on AI (NSCAI), I worked with colleagues on the committee to propose an early conception of the NAIRR, underlining our nation's need for this resource as detailed in the NSCAI Final Report. Concurrently, we enthusiastically supported a university-led initiative pursuing a national computing resource. It's rewarding to see these early ideas and endeavors now materialize into a tangible entity.

Intel Lunar Lake-MX to Embed Samsung LPDDR5X Memory on SoC Package

According to sources close to the Seoul Economic Daily, as reported by DigiTimes, Intel has reportedly chosen Samsung as a supplier for its next-generation Lunar Lake processors, set to debut later this year. The report notes that Samsung will provide LPDDR5X memory devices for integration into Intel's processors. This collaboration could be a substantial win for Samsung, given Intel's projection to ship millions of Lunar Lake CPUs in the coming years. However, it's important to note that this information is based on a leak and has not been officially confirmed. Designed for ultra-portable laptops, the Lunar Lake-MX platform is expected to feature 16 GB or 32 GB of LPDDR5X-8533 memory directly on the processor package. This on-package memory approach aims to minimize the platform's physical footprint while enhancing performance over traditional memory configurations. With Lunar Lake's exclusive support for on-package memory, Samsung's LPDDR5X-8533 products could see a significant boost in sales.

While Samsung is currently in the spotlight, it remains unclear if it will be the sole LPDDR5X memory provider for Lunar Lake. Intel's strategy involves selling processors with pre-validated memory, leaving the door open for potential validation of similar memory products from competitors like Micron and SK Hynix. Thanks to a new microarchitecture, Intel has promoted its Lunar Lake processors as a revolutionary leap in performance-per-watt efficiency. The processors are expected to utilize a multi-chiplet design built with Foveros packaging technology, combining CPU and GPU tiles, a system-on-chip tile, and dual memory packages. The CPU component is anticipated to include up to eight cores, a mix of four high-performance Lion Cove and four energy-efficient Skymont cores, alongside advanced graphics, cache, and AI acceleration capabilities. Apple's use of on-package memory in its M-series chips has set a precedent in the industry, and with Intel's Lunar Lake-MX, this trend could extend across the thin-and-light laptop market. However, systems requiring more flexibility in configuration, repair, and upgrades will likely continue to employ standard memory solutions like SODIMMs and/or the new CAMM2 modules, which offer a balance of high performance and energy efficiency.

AMD Instinct MI300X Released at Opportune Moment. NVIDIA AI GPUs in Short Supply

LaminiAI appeared to be one of the first customers to receive an initial shipment of AMD's Instinct MI300X accelerators, as disclosed by their CEO posting about functioning hardware on social media late last week. A recent Taiwan Economic Daily article states that the "MI300X is rumored to have begun supply"—we are not sure why they have adopted a semi-secretive tone in their news piece, but a couple of anonymous sources are cited. A person familiar with supply chains in Taiwan divulged: "(they have) been receiving AMD MI300X chips one after another...due to the huge shortage of NVIDIA AI chips, the arrival of new AMD products is really a timely rainfall." Favorable industry analysis (from earlier this month) has placed Team Red in a position of strength, due to growing interest in their very performant flagship AI accelerator.

The secrecy seems to lie in Team Red's negotiation strategies in Taiwan—the news piece alleges that big manufacturers in the region have been courted. AMD has been aggressive in a push to: "cooperate and seize AI business opportunities, with GIGABYTE taking the lead and attracting the most attention. Not only was GIGABYTE the first to obtain a partnership with AMD's MI300A chip, which had previously been mass-produced, but GIGABYTE was also one of the few Taiwanese manufacturers included in AMD's first batch of MI300X partners." GIGABYTE is expected to release two new "G593" product lines of server hardware later this year, based on combinations of AMD's Instinct MI300X accelerator and EPYC 9004 series processors.

NVIDIA Contributes $30 Million of Tech to NAIRR Pilot Program

In a major stride toward building a shared national research infrastructure, the U.S. National Science Foundation has launched the National Artificial Intelligence Research Resource pilot program with significant support from NVIDIA. The initiative aims to broaden access to the tools needed to power responsible AI discovery and innovation. It was announced Wednesday in partnership with 10 other federal agencies as well as private-sector, nonprofit and philanthropic organizations. "The breadth of partners that have come together for this pilot underscores the urgency of developing a National AI Research Resource for the future of AI in America," said NSF Director Sethuraman Panchanathan. "By investing in AI research through the NAIRR pilot, the United States unleashes discovery and impact and bolsters its global competitiveness."

NVIDIA's commitment of $30 million in technology contributions over two years is a key factor in enlarging the scale of the pilot, fueling the potential for broader achievements and accelerating the momentum toward full-scale implementation. "The NAIRR is a vision of a national research infrastructure that will provide access to computing, data, models and software to empower researchers and communities," said Katie Antypas, director of the Office of Advanced Cyberinfrastructure at the NSF. "Our primary goals for the NAIRR pilot are to support fundamental AI research and domain-specific research applying AI, reach broader communities, particularly those currently unable to participate in the AI innovation ecosystem, and refine the design for the future full NAIRR," Antypas added.

Intel Reports Fourth-Quarter and Full-Year 2023 Financial Results

Intel Corporation today reported fourth-quarter and full-year 2023 financial results. "We delivered strong Q4 results, surpassing expectations for the fourth consecutive quarter with revenue at the higher end of our guidance," said Pat Gelsinger, Intel CEO. "The quarter capped a year of tremendous progress on Intel's transformation, where we consistently drove execution and accelerated innovation, resulting in strong customer momentum for our products. In 2024, we remain relentlessly focused on achieving process and product leadership, continuing to build our external foundry business and at-scale global manufacturing, and executing our mission to bring AI everywhere as we drive long-term value for stakeholders."

David Zinsner, Intel CFO, said, "We continued to drive operational efficiencies in the fourth quarter, and comfortably achieved our commitment to deliver $3 billion in cost savings in 2023. We expect to unlock further efficiencies in 2024 and beyond as we implement our new internal foundry model, which is designed to drive greater transparency and accountability and higher returns on our owners' capital." For the full year, the company generated $11.5 billion in cash from operations and paid dividends of $3.1 billion.

OpenAI Reportedly Talking to TSMC About Custom Chip Venture

OpenAI is reported to be initiating R&D on a proprietary AI processing solution—the research organization's CEO, Sam Altman, has commented on the inefficient operation of data centers running NVIDIA H100 and A100 GPUs. He foresees a future scenario where his company becomes less reliant on Team Green's off-the-shelf AI crunchers, with a deployment of bespoke AI processors. A short Reuters interview also underlined Altman's desire to find alternative sources of power: "It motivates us to go invest more in (nuclear) fusion." The growth of artificial intelligence industries has put an unprecedented strain on energy providers, so tech firms could be semi-forced into seeking out frugal enterprise hardware.

The Financial Times has followed up on last week's Bloomberg report of OpenAI courting investment partners in the Middle East. FT's news piece alleges that Altman is in talks with billionaire businessman Sheikh Tahnoon bin Zayed al-Nahyan, a very well connected member of the United Arab Emirates Royal Family. OpenAI's leadership is reportedly negotiating with TSMC—The Financial Times alleges that Taiwan's top chip foundry is an ideal manufacturing partner. This revelation contradicts Bloomberg's recent reports of a potential custom OpenAI AI chip venture involving purpose-built manufacturing facilities. The whole project is said to be at an early stage of development, so Altman and his colleagues are most likely exploring a variety of options.

FTC Launches Inquiry into Generative AI Investments and Partnerships

The Federal Trade Commission announced today that it issued orders to five companies requiring them to provide information regarding recent investments and partnerships involving generative AI companies and major cloud service providers. The agency's 6(b) inquiry will scrutinize corporate partnerships and investments with AI providers to build a better internal understanding of these relationships and their impact on the competitive landscape. The compulsory orders were sent to Alphabet, Inc., Amazon.com, Inc., Anthropic PBC, Microsoft Corp., and OpenAI, Inc.

"History shows that new technologies can create new markets and healthy competition. As companies race to develop and monetize AI, we must guard against tactics that foreclose this opportunity," said FTC Chair Lina M. Khan. "Our study will shed light on whether investments and partnerships pursued by dominant companies risk distorting innovation and undermining fair competition."

AI Power Consumption Surge Strains US Electricity Grid, Coal-Powered Plants Make a Comeback

The artificial intelligence boom is driving a sharp rise in electricity use across the United States, catching utilities and regulators off guard. In northern Virginia's "data center alley," demand is so high that the local utility temporarily halted new data center connections in 2022. Nationwide, electricity consumption at data centers alone could triple by 2030 to 390 terawatt-hours (TWh). Add in new electric vehicle battery factories, chip plants, and other clean-tech manufacturing spurred by federal incentives, and demand over the next five years is forecast to rise at 1.5% per year—the fastest rate since the 1990s. Unable to keep pace, some utilities are scrambling to revise projections and reconsider previous plans to close fossil fuel plants, even as the Biden administration pushes for more renewable energy. Some older coal power plants will stay online until the grid adds more power production capacity. The result could be increased emissions in the near term, and a risk of rolling blackouts if infrastructure continues to lag behind demand.
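A quick back-of-the-envelope check on the tripling claim (illustrative only, using the article's figures):

```python
# If data-center consumption triples by 2030 to 390 TWh,
# the implied current baseline is about a third of that figure.
projected_2030_twh = 390
growth_multiple = 3

baseline_twh = projected_2030_twh / growth_multiple
print(f"Implied current consumption: ~{baseline_twh:.0f} TWh")  # ~130 TWh
```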

The situation is especially dire in Virginia, the world's largest data center hub. The state's largest utility, Dominion Energy, was forced to pause new data center connections for three months last year due to surging demand in Loudoun County. Though connections have resumed, Dominion expects load growth to almost double over the next 15 years. With data centers, EV factories, and other power-hungry tech continuing rapid expansion, experts warn the US national electricity grid is poorly equipped to handle the spike. Substantial investments in new transmission lines and generation are urgently needed to avoid businesses being turned away or blackouts in some regions. Though many tech companies aim to power operations with clean energy, factories are increasingly open to any available power source.

Google Faces Potential Billion-Dollar Damages in TPU Patent Dispute

Tech giant Google is embroiled in a high-stakes legal battle over the alleged infringement of patents related to its Tensor Processing Units (TPUs), custom AI accelerator chips used to power machine learning applications. Massachusetts-based startup Singular Computing has accused Google of incorporating architectures described in several of its patents into the design of the TPU without permission. The disputed patents, first filed in 2009, outline computer architectures optimized for executing a high volume of low-precision calculations per cycle - an approach well-suited for neural network-based AI. In a 2019 lawsuit, Singular argues that Google knowingly infringed on these patents in developing its TPU v2 and TPU v3 chips introduced in 2017 and 2018. Singular Computing is seeking between $1.6 billion and $5.19 billion in damages from Google.

Google denies these claims, stating that its TPUs were independently developed over many years. The company is currently appealing to have Singular's patents invalidated, which would undermine the infringement allegations. The high-profile case highlights mounting legal tensions as tech giants race to dominate the burgeoning field of AI hardware. With billions in potential damages at stake, the outcome could have major implications for the competitive landscape in cloud-based machine learning services. As both sides prepare for court, the dispute underscores the massive investments tech leaders like Google make to integrate specialized AI accelerators into their cloud infrastructures. Dominance in this sphere is a crucial strategic advantage as more industries embrace data-hungry neural network applications.

Update 17:25 UTC: According to Reuters, Google and Singular Computing have settled the case with details remaining private for the time being.

ASUS and Quividi Forge Strategic Partnership

ASUS and ASUS IoT today announced that they have entered into a strategic partnership with Quividi, the pioneer and leader in AI and computer vision (CV)-based real-time audience measurement platforms for the digital signage and retail industries. The collaboration allows for seamless integration of Quividi's advanced audience measurement solutions into the range of ASUS NUCs and ASUS IoT Tinker Board-based digital signage solutions.

ASUS, having recently licensed Intel's NUC product lines, has embarked on the development, manufacturing, and sale of current and future NUC systems. This move positions ASUS as a key player in producing customizable small-form-factor (SFF) devices, ideal for edge computing environments and widely utilized in the digital signage and retail industries.

HBM Industry Revenue Could Double by 2025 - Growth Driven by Next-gen AI GPUs Cited

Samsung, SK Hynix, and Micron are considered to be the top manufacturers of High Bandwidth Memory (HBM)—the HBM3 and HBM3E standards are increasingly in demand, due to widespread deployment of GPUs and accelerators by generative AI companies. Taiwan's Commercial Times proposes that there is an ongoing shortage of HBM components—but this presents a growth opportunity for smaller manufacturers in the region. Naturally, the big-name producers are expected to dive in head first with the development of next-generation models. The aforementioned financial news article cites research conducted by Gartner—analysts predict that the HBM market will hit an all-time high of $4.976 billion (USD) by 2025.

This estimate is almost double the projected revenue (just over $2 billion) generated by the HBM market in 2023—the explosive growth of generative AI applications has "boosted" demand for the most performant memory standards. The Commercial Times report states that SK Hynix is the current HBM3E leader, with Micron and Samsung trailing behind—industry experts believe that the stragglers will need to "expand HBM production capacity" in order to stay competitive. SK Hynix has partnered with NVIDIA—the GH200 Grace Hopper platform was unveiled last summer, outfitted with the South Korean firm's HBM3E parts. In a similar timeframe, Samsung was named as AMD's preferred supplier of HBM3 packages—as featured within the recently launched Instinct MI300X accelerator. NVIDIA's HBM3E deal with SK Hynix is believed to extend to the internal makeup of Blackwell GB100 data-center GPUs. The HBM4 memory standard is expected to be the next major battleground for the industry's hardest hitters.
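Taking the report's round figures (just over $2 billion in 2023, $4.976 billion in 2025), the implied growth works out as follows (an illustrative calculation; the 2023 baseline is approximate):

```python
# Implied growth of the HBM market using the figures cited in the report.
revenue_2023 = 2.0      # US$ billion, approximate ("just over $2 billion")
revenue_2025 = 4.976    # US$ billion, Gartner projection for 2025

multiple = revenue_2025 / revenue_2023
cagr = (revenue_2025 / revenue_2023) ** (1 / 2) - 1   # implied two-year CAGR
print(f"{multiple:.2f}x over two years, ~{cagr:.0%} per year")
```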

AMD Radeon RX 7600 XT Launches with a Large 16 GB Memory

AMD Radeon RX 7600 XT went on sale today at a starting price of $330. Designed for maxed-out AAA gameplay at 1080p, this card can also try its hand at 1440p gaming at mid-to-high settings. You can use features such as FSR 2, FSR 3 Frame Generation, AMD Fluid Motion Frames (which extends frame generation to any DirectX 11/12 game), and HYPR-RX, the one-click performance enhancement that's part of the AMD Software control panel app. AMD had already maxed out all available shaders on the 6 nm "Navi 33" monolithic silicon, but has opted not to rope in the larger "Navi 32" chiplet GPU for the RX 7600 XT. Instead, it attempted to squeeze the most performance possible out of "Navi 33" by dialing up clock speeds and power limits, and doubling the memory size.

You still get 32 compute units on the RX 7600 XT, which work out to 2,048 stream processors, 64 AI accelerators, 32 Ray accelerators, 128 TMUs, and 64 ROPs; but the 128-bit GDDR6 memory bus now drives 16 GB of memory running at the same 18 Gbps speed, yielding 288 GB/s of bandwidth. The GPU game clock has been increased to 2.47 GHz, up from 2.25 GHz on the RX 7600. The power limit has been raised from 165 W to 190 W; and DisplayPort 2.1 support is now mandatory for board partners (they can't opt for DisplayPort 1.4a like they could on the RX 7600). AMD claims that the 16 GB of video memory should come in handy for content creators and those dabbling with generative AI.
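The quoted bandwidth follows directly from the bus width and per-pin data rate; a minimal sketch of the arithmetic:

```python
# Memory bandwidth of the RX 7600 XT from its bus width and data rate.
bus_width_bits = 128
data_rate_gbps = 18          # GDDR6 at 18 Gbps per pin

# bits -> bytes, then multiply by per-pin transfer rate
bandwidth_gbs = bus_width_bits / 8 * data_rate_gbps
print(f"{bandwidth_gbs:.0f} GB/s")   # 288 GB/s, as reported
```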

We have three reviews of the Radeon RX 7600 XT for you today, so be sure to check them all out.

Sapphire Radeon RX 7600 XT Pulse | XFX Radeon RX 7600 XT Speedster QICK 309 | ASRock Radeon RX 7600 XT Steel Legend

EdgeCortix Foresees Barrier Breaking Efficient Next-gen Edge AI Chips

EdgeCortix, the Japan-based fabless semiconductor company focused on energy-efficient AI processing, predicts that 2024 is set to be a watershed moment for Edge AI. In its predictions for the year, EdgeCortix argues that the Edge AI landscape will be transformed during this exciting year for the industry: next-gen AI chips, hybrid edge-cloud architectures, software supremacy, and the rise of new generative AI applications "at the edge" will revolutionize the world of business as we know it.

1. Next-Gen efficient Edge AI Chips will break barriers:
Prepare for a hardware uprising! EdgeCortix foresees next-gen energy-efficient AI chips that not only break the barriers of processing power but redefine them. These chips are not just powerful; they are customized for multi-modal generative AI and efficient language models, enabling cutting-edge AI capabilities at low power for a whole new spectrum of applications.

OpenAI CEO Reportedly Seeking Funds for Purpose-built Chip Foundries

OpenAI CEO Sam Altman had a turbulent winter 2023 career moment, but appears to be going all in on his company's future interests. A Bloomberg report suggests that the tech visionary has initiated a major fundraising initiative for the construction of OpenAI-specific semiconductor production plants. The AI evangelist reckons that his industry will become prevalent enough to demand a dedicated network of manufacturing facilities—the U.S.-based artificial intelligence (AI) research organization is (reportedly) exploring custom artificial intelligence chip designs. Proprietary AI-focused GPUs and accelerators are no novelty at this point—many top tech companies rely on NVIDIA solutions, but are keen to deploy custom-built hardware in the near future.

OpenAI's popular ChatGPT system is reliant on NVIDIA H100 and A100 GPUs, but tailor-made alternatives seem to be the desired route for Altman & Co. The "on their own terms" pathway seemingly skips the expected/traditional chip manufacturing process—the big foundries could struggle to keep up with demand for AI-oriented silicon. G42 (an Abu Dhabi-based AI development holding company) and SoftBank Group are mentioned as prime investment partners in OpenAI's fledgling scheme—Bloomberg proposes that Altman's team is negotiating an $8 to $10 billion deal with top brass at G42. OpenAI's planned creation of its own foundry network is certainly a lofty and costly goal—the report does not specify whether existing facilities will be purchased and overhauled, or new plants constructed entirely from scratch.

Jensen Huang Heads to Taiwan, B100 "Blackwell" GPUs Reportedly in Focus

NVIDIA's intrepid CEO, Jensen Huang, has spent a fair chunk of January travelling around China—news outlets believe that Team Green's leader has conducted business meetings with very important clients in the region. Insiders proposed that his low-profile business trip included visits to NVIDIA operations in Shenzhen, Shanghai and Beijing. The latest updates allege that a stopover in Taiwan was also planned, following the conclusion of Mainland activities. Photos from an NVIDIA Chinese new year celebratory event have been spreading across the internet lately—many were surprised to see Huang appear on-stage in Shanghai and quickly dispense with his trademark black leather jacket. He swapped into a colorful "Year of the Wood Dragon" sleeveless shirt for a traditional dance routine.

It was not all fun and games during Huang's first trip to China in four years—inside sources have informed the Wall Street Journal about growing unrest within the nation's top-ranked cloud AI tech firms. Anonymous informants allege that leadership at Alibaba Group and Tencent is not happy with NVIDIA's selection of compromised enterprise GPUs—it is posited that NVIDIA's president has spent time convincing key clients not to adopt natively developed solutions (unaffected by US sanctions). The short hop over to Taiwan is reported not to be for R&R purposes—insiders had Huang visiting key supply partners: TSMC and Wistron. Industry experts think that these meetings are linked to NVIDIA's upcoming "Blackwell" B100 AI GPU and "supercharged" H200 "Hopper" accelerator. It is too early for the rumor mill to start speculating about nerfed versions of NVIDIA's 2024 enterprise products reaching Chinese shores, but Jensen Huang is seemingly ready to hold diplomatic talks with all sides.

AMD Instinct MI300X GPUs Featured in LaminiAI LLM Pods

LaminiAI appears to be one of AMD's first customers to receive a bulk order of Instinct MI300X GPUs—late last week, Sharon Zhou (CEO and co-founder) posted about the "next batch of LaminiAI LLM Pods" up and running with Team Red's cutting-edge CDNA 3 series accelerators inside. Her short post on social media stated: "rocm-smi...like freshly baked bread, 8x MI300X is online—if you're building on open LLMs and you're blocked on compute, lmk. Everyone should have access to this wizard technology called LLMs."

An attached screenshot of a ROCm System Management Interface (ROCm SMI) session showcases an individual Pod configuration sporting eight Instinct MI300X GPUs. According to official blog entries, LaminiAI has utilized bog-standard MI300 accelerators since 2023, so it is not surprising to see its partnership with AMD continue to grow. Industry predictions have the Instinct MI300X and MI300A models placed as great alternatives to NVIDIA's dominant H100 "Hopper" series—AMD stock is climbing due to encouraging financial analyst estimations.

Qualcomm Announces Snapdragon 8 Gen 3 Mobile Platform for Galaxy S24 Series

Qualcomm Technologies, Inc. announced that the premium Snapdragon 8 Gen 3 Mobile Platform for Galaxy is powering Samsung Electronics Co. Ltd.'s latest flagship Galaxy S24 Ultra globally and Galaxy S24 Plus and S24 in select regions. The Samsung Galaxy S24 series introduces Galaxy AI, taking advantage of the new and enhanced Snapdragon 8 Gen 3 for Galaxy, the titan of intelligence and leader in performance and power efficiency.

"We're proud to continue working alongside Samsung to lead this new era for the mobile industry and provide consumers with extraordinary premium experiences," said Chris Patrick, senior vice president and general manager of mobile handsets, Qualcomm Technologies, Inc. "Snapdragon 8 Gen 3 for Galaxy instills its advanced AI capabilities in the Galaxy S24 series, to enable new experiences with AI features to empower users' everyday lives. It also fuels advanced professional-quality camera capabilities, gaming experiences, and ultra-fast connectivity including Wi-Fi 7, plus offers one of the most reliable authentication solutions available with our Qualcomm 3D Sonic Gen 2 technology."