News Posts matching #trillion


Cerebras & G42 Break Ground on Condor Galaxy 3 - an 8 exaFLOPs AI Supercomputer

Cerebras Systems, the pioneer in accelerating generative AI, and G42, the Abu Dhabi-based leading technology holding group, today announced the build of Condor Galaxy 3 (CG-3), the third cluster of their constellation of AI supercomputers, the Condor Galaxy. Featuring 64 of Cerebras' newly announced CS-3 systems - all powered by the industry's fastest AI chip, the Wafer-Scale Engine 3 (WSE-3) - Condor Galaxy 3 will deliver 8 exaFLOPs of AI with 58 million AI-optimized cores. The Cerebras and G42 strategic partnership already delivered 8 exaFLOPs of AI supercomputing performance via Condor Galaxy 1 and Condor Galaxy 2, each amongst the largest AI supercomputers in the world. Located in Dallas, Texas, Condor Galaxy 3 brings the current total of the Condor Galaxy network to 16 exaFLOPs.

"With Condor Galaxy 3, we continue to achieve our joint vision of transforming the worldwide inventory of AI compute through the development of the world's largest and fastest AI supercomputers," said Kiril Evtimov, Group CTO of G42. "The existing Condor Galaxy network has trained some of the leading open-source models in the industry, with tens of thousands of downloads. By doubling the capacity to 16exaFLOPs, we look forward to seeing the next wave of innovation Condor Galaxy supercomputers can enable." At the heart of Condor Galaxy 3 are 64 Cerebras CS-3 Systems. Each CS-3 is powered by the new 4 trillion transistor, 900,000 AI core WSE-3. Manufactured at TSMC at the 5-nanometer node, the WSE-3 delivers twice the performance at the same power and for the same price as the previous generation part. Purpose built for training the industry's largest AI models, WSE-3 delivers an astounding 125 petaflops of peak AI performance per chip.

Cerebras Systems Unveils World's Fastest AI Chip with 4 Trillion Transistors and 900,000 AI cores

Cerebras Systems, the pioneer in accelerating generative AI, has doubled its own world record for the fastest AI chip with the introduction of the Wafer Scale Engine 3. The WSE-3 delivers twice the performance of the previous record-holder, the Cerebras WSE-2, at the same power draw and for the same price. Purpose-built for training the industry's largest AI models, the 5 nm-based, 4-trillion-transistor WSE-3 powers the Cerebras CS-3 AI supercomputer, delivering 125 petaflops of peak AI performance through 900,000 AI-optimized compute cores.

Google: CPUs are Leading AI Inference Workloads, Not GPUs

Today's AI infrastructure is mostly built on GPU-accelerated servers. Yet Google, one of the world's largest hyperscalers, has noted that CPUs still serve a leading share of AI/ML workloads, according to internal analysis of its Google Cloud services. At a Tech Field Day event, Brandon Royal, product manager at Google Cloud, explained the position of CPUs in today's AI landscape. The AI lifecycle is divided into two parts: training and inference. Training demands massive compute capacity, along with enormous memory capacity to fit ever-expanding AI models into memory. The latest models, like GPT-4 and Gemini, contain billions of parameters and require thousands of GPUs or other accelerators working in parallel to train efficiently.

Inference, on the other hand, requires less compute intensity but still benefits from acceleration. During inference, the pre-trained model is optimized and deployed to make predictions on new data. While less compute is needed than for training, latency and throughput are essential for real-time inference. Google found that, while GPUs are ideal for the training phase, models are often optimized for and run inference on CPUs; many customers choose CPUs for AI inference for a wide variety of reasons.
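The latency/throughput trade-off mentioned above can be sketched with simple serving arithmetic. The per-request timings below are hypothetical, chosen purely for illustration, and are not figures from Google's analysis:

```python
# Illustrative only: how batching trades latency for throughput in inference
# serving. Fixed overhead and per-item cost are assumed values.
def serve_metrics(batch_size, fixed_overhead_ms=5.0, per_item_ms=2.0):
    """Return (latency_ms, throughput_req_per_s) for one batched forward pass."""
    latency_ms = fixed_overhead_ms + batch_size * per_item_ms
    throughput = batch_size / (latency_ms / 1000.0)
    return latency_ms, throughput

for bs in (1, 8, 32):
    lat, thr = serve_metrics(bs)
    # Larger batches amortize the fixed overhead (higher throughput)
    # at the cost of each request waiting longer (higher latency).
    print(f"batch={bs:3d}  latency={lat:6.1f} ms  throughput={thr:7.1f} req/s")
```

The same trade-off holds on either CPU or GPU; real-time serving often caps the batch size to keep latency bounded.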

TSMC Plans to Put a Trillion Transistors on a Single Package by 2030

During the recent IEDM conference, TSMC previewed its process roadmap for delivering next-generation chip packages packing over one trillion transistors by 2030, aligning with similar long-term visions from Intel. Such enormous transistor counts will come through advanced 3D packaging of multiple chiplets. But TSMC also aims to push monolithic chip complexity higher, ultimately enabling 200-billion-transistor designs on a single die. This requires steady enhancement of TSMC's planned N2, N2P, N1.4, and N1 nodes, which are slated to arrive between now and the end of the decade. While multi-chiplet architectures are currently gaining favor, TSMC asserts that both packaging density and raw transistor density must scale in tandem. For perspective on the magnitude of TSMC's goals, NVIDIA's GH100 GPU packs 80 billion transistors and is among today's largest chips, excluding wafer-scale designs from Cerebras.

Yet TSMC's roadmap calls for more than doubling that, first with over-100-billion-transistor monolithic designs, then eventually 200 billion. Of course, yields become more challenging as die sizes grow, which is where advanced packaging of smaller chiplets becomes crucial. Multi-chip module offerings like AMD's MI300X and Intel's Ponte Vecchio already integrate dozens of tiles; Ponte Vecchio alone has 47. TSMC envisions expanding to chip packages housing more than a trillion transistors via its CoWoS, InFO, 3D stacking, and other technologies. While the scaling cadence has recently slowed, TSMC remains confident in achieving both the packaging and process breakthroughs needed to meet future density demands. The foundry's continuous investment ensures progress in unlocking next-generation semiconductor capabilities. But physics ultimately dictates timelines, no matter how aggressive the roadmap.
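Some back-of-the-envelope arithmetic shows how the monolithic and packaging targets combine: at the eventual 200-billion-transistor die target, only a handful of chiplets per package reaches the trillion mark:

```python
import math

# TSMC's stated targets, from the roadmap described above.
transistors_per_die = 200e9     # eventual monolithic target
package_target = 1e12           # one trillion transistors per package

dies_needed = math.ceil(package_target / transistors_per_die)
print(dies_needed)  # 5 dies of the largest class would suffice
```

In practice, packages like Ponte Vecchio mix many smaller specialized tiles rather than a few maximal dies, so real designs will land somewhere between these extremes.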

NVIDIA Predicted to Pull in $300 Billion AI Revenues by 2027

NVIDIA has been raking in lots of cash this year and hit a major milestone back in late May, with a trillion-dollar valuation—its stock price doubled thanks to upward trends in the artificial intelligence market and growing global demand for AI hardware. Business Insider believes that Team Green will continue to do very well for itself over the next couple of years: "Mizuho analyst Vijay Rakesh has given NVIDIA's stock price another 20% upside to run—and even this new target of $530 is 'conservative,' according to a Sunday client note seen by Insider. Rakesh's previous price target for NVIDIA was $400. NVIDIA shares closed 0.7% higher at $446.12 apiece on Monday. The stock has surged 205% so far this year."

Despite the emergence of competing hardware from the likes of AMD and Intel, Rakesh predicts that NVIDIA will maintain a dominant position in the AI chip market until at least 2027: "With demand for generative AI accelerating, we see significant opportunities for hardware suppliers powering the higher compute needs for large-language models, particularly AI powerhouse NVIDIA." Insider reports that the company "could generate around $300 billion in AI-specific revenue by 2027 with a 75% market share of AI server units...That's 10 times his projection of $25 billion to $30 billion in AI revenues this year." Rakesh has reportedly stuck with a buy rating and a $140 price target for AMD shares.

Samsung Will Reportedly Cut DRAM and NAND Production Further After US$7 Billion Loss

Earlier today, Samsung released its Q2 results, which were a mixed bag with many business units underperforming. However, none did as poorly as Samsung's memory business, which posted a loss of 4.36 trillion Won, or around US$3.4 billion. Although not mentioned in the financial report, Reuters is reporting that Samsung is looking to cut production of NAND and DRAM further, and the publication points out that Samsung's chip business as a whole lost a staggering 8.9 trillion Won, or US$7 billion. That said, the loss wasn't quite as bad as in the first quarter of the year, and Samsung is expected to cut the loss in half for Q3.

In a statement to Reuters, Samsung said "Production cuts across the industry are likely to continue in the second half, and demand is expected to gradually recover as clients continue to destock their (chip) inventory." This suggests that the expected recovery isn't going to happen as soon as the DRAM and NAND manufacturers had hoped, as there simply isn't enough demand for either product. SK Hynix has already announced that it's cutting NAND production by a further five to 10 percent, though its losses don't appear to have been as big as Samsung's. For the time being, this is good news for consumers, as RAM and SSD pricing is about as low as it has ever been. However, with production cuts from all three major manufacturers of DRAM and NAND, prices will most likely start going up soon, especially as Samsung is said to have depleted most of its inventories.

Report: Worldwide IT Spending in 2023 Continues to Slowly Trend Downward

For the fifth consecutive month, International Data Corporation (IDC) has lowered its 2023 forecast for worldwide IT spending as technology investments continue to show the impact of a weakening economy. In its new monthly forecast for worldwide IT spending growth, IDC projects overall growth this year in constant currency of 4.4% to $3.25 trillion. This is slightly down from 4.5% in the previous month's forecast and represents a swing from a 6.0% growth forecast in October 2022.
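The forecast arithmetic can be checked directly: 4.4% growth to $3.25 trillion implies a 2022 spending base of roughly $3.11 trillion:

```python
# Implied 2022 base behind IDC's 2023 forecast (constant currency).
forecast_2023 = 3.25e12   # projected 2023 worldwide IT spending, USD
growth = 0.044            # projected year-over-year growth

base_2022 = forecast_2023 / (1 + growth)
print(round(base_2022 / 1e12, 2))   # ~3.11 (trillion USD)
```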

"Since the fourth quarter of last year, we have seen clear and measurable signs of a moderate pullback in some areas of IT spending," said Stephen Minton, vice president in IDC's Data & Analytics research group. "Tech spending remains resilient compared to historical economic downturns and other types of business spending, but rising interest rates are now impacting capital spending."

Intel Research Fuels Moore's Law and Paves the Way to a Trillion Transistors by 2030

Today, Intel unveiled research breakthroughs fueling its innovation pipeline for keeping Moore's Law on track to a trillion transistors on a package in the next decade. At IEEE International Electron Devices Meeting (IEDM) 2022, Intel researchers showcased advancements in 3D packaging technology with a new 10x improvement in density; novel materials for 2D transistor scaling beyond RibbonFET, including super-thin material just 3 atoms thick; new possibilities in energy efficiency and memory for higher-performing computing; and advancements for quantum computing.

"Seventy-five years since the invention of the transistor, innovation driving Moore's Law continues to address the world's exponentially increasing demand for computing. At IEDM 2022, Intel is showcasing both the forward-thinking and concrete research advancements needed to break through current and future barriers, deliver to this insatiable demand, and keep Moore's Law alive and well for years to come." -Gary Patton, Intel vice president and general manager of Components Research and Design Enablement

Cerebras Systems Sets Record for Largest AI Models Ever Trained on A Single Device

Cerebras Systems, the pioneer in high-performance artificial intelligence (AI) computing, today announced, for the first time ever, the ability to train models with up to 20 billion parameters on a single CS-2 system - a feat not possible on any other single device. By enabling a single CS-2 to train these models, Cerebras reduces the system engineering time necessary to run large natural language processing (NLP) models from months to minutes. It also eliminates one of the most painful aspects of NLP: partitioning the model across hundreds or thousands of small graphics processing units (GPUs).

"In NLP, bigger models are shown to be more accurate. But traditionally, only a very select few companies had the resources and expertise necessary to do the painstaking work of breaking up these large models and spreading them across hundreds or thousands of graphics processing units," said Andrew Feldman, CEO and Co-Founder of Cerebras Systems. "As a result, only very few companies could train large NLP models - it was too expensive, time-consuming and inaccessible for the rest of the industry. Today we are proud to democratize access to GPT-3 1.3B, GPT-J 6B, GPT-3 13B and GPT-NeoX 20B, enabling the entire AI ecosystem to set up large models in minutes and train them on a single CS-2."

Intel Updates Technology Roadmap with Data Center Processors and Game Streaming Service

At Intel's 2022 Investor Meeting, Chief Executive Officer Pat Gelsinger and Intel's business leaders outlined key elements of the company's strategy and path for long-term growth. Intel's long-term plans will capitalize on transformative growth during an era of unprecedented demand for semiconductors. Among the presentations, Intel announced product roadmaps and key execution milestones across its major business units: Accelerated Computing Systems and Graphics; Intel Foundry Services; Software and Advanced Technology; Network and Edge; and Technology Development. For more from Intel's Investor Meeting 2022, including the presentations and news, please visit the Intel Newsroom and Intel.com's Investor Meeting site.

Foxconn Reports Record High Third Quarter Sales

Despite the current shortage of just about everything, Foxconn doesn't appear to be suffering, as the company has reported record sales for the third quarter of this year. The company recorded quarterly sales of close to NT$1.4 trillion, or about US$50 billion, with consolidated revenue for September of just over NT$585.7 billion, or about US$20.9 billion.

Sales revenue for Foxconn increased 8.8 percent compared to the same period last year, and 3.44 percent quarter on quarter, with September sales being the company's best so far this year. The top-grossing product groups included consumer smart devices, computers, and cloud networking products. This is likely due in no small part to Apple launching the iPhone 13, as well as its new tablet products, last month. Foxconn is expecting a gross profit margin of 12 to 13 percent for the second half of 2021, a significant increase from around 8 to 9 percent in the first half of the year.

Tachyum Boots Linux on Prodigy FPGA

Tachyum Inc. today announced that it has successfully executed the Linux boot process on the field-programmable gate array (FPGA) prototype of its Prodigy Universal Processor, two months after taking delivery of the IO motherboard from manufacturing. This achievement proves the stability of the Prodigy emulation system and allows the company to move forward with additional testing before advancing to tape-out.

Tachyum engineers were able to perform the Linux boot, execute a short user-mode program, and shut down the system on the fully functional FPGA emulation system. This successful test proves that not only is the basic processor stable, but that interrupts, exceptions, timing, and system-mode transitions are as well. It is a key milestone that dramatically reduces risk: booting and reliably running a large, complex piece of software like Linux on the FPGA prototype shows that verification and hardware stability are past the most difficult point, and suggests that verification and testing should complete successfully in the coming months. Designers are now shifting their attention to debug and verification, running hundreds of trillions of test cycles over the next few months, along with large-scale user-mode application and compatibility testing to bring the processor to production quality.
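To get a feel for the scale of that verification effort, consider how long hundreds of trillions of cycles take at typical FPGA-emulation clock rates. The 50 MHz clock below is an assumption for illustration, not a number from Tachyum:

```python
# Hypothetical wall-clock time for the verification campaign.
cycles = 200e12    # "hundreds of trillions" of test cycles, taken as 200 trillion
fpga_hz = 50e6     # assumed 50 MHz FPGA emulation clock

seconds = cycles / fpga_hz
days = seconds / 86400
print(round(days, 1))  # ~46.3 days of continuous emulation at this rate
```

In practice such campaigns run many FPGA instances in parallel, which is why "the next few months" is a plausible schedule.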

Intel Xeon Processors E5 Achieve Fastest Adoption, Announcing Xeon Phi Co-Processors

The Intel Xeon processor E5-2600 product family reached a new supercomputing milestone as the fastest-adopted new processing technology, powering 44 systems, including three Petascale-class supercomputers, on the 39th edition of the Top500 list announced today.

The "SuperMUC" supercomputer at LRZ in Germany, which ranked fourth on the list, delivers 2.9 PetaFLOPs of performance, making it the most powerful in Europe, as well as the largest installation based on the new Intel Xeon processors E5 family.

EIZO DuraVision FDH3601 On Display at NAB 2012

EIZO's 4K posterboy, the DuraVision FDH3601, will be on display on the 14th of April at NAB Show 2012 in Las Vegas, USA, a convention of creative professionals and production houses. What makes this 36.4" PC monitor an eyeball magnet (literally) is that it sports a 4K resolution (4096 x 2160 pixels) and a bleeding-edge feature set for content creators. So high is its resolution that it requires two dual-link DVI connections (or two DisplayPort connections, provided they transport two TMDS links each). It can deliver 100% coverage of the RGB palette, with the ability to reproduce 278 trillion colors. Other vital stats include 700 cd/m² brightness, a 1000:1 contrast ratio, 176°/176° (H/V) viewing angles, and an 8 ms response time. The DuraVision FDH3601 was first announced in 2011 and costs a little over $35,000.
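Pixel-rate arithmetic explains why two connections are needed. A 60 Hz refresh rate is assumed here for illustration; a dual-link DVI connection tops out at two 165 MHz TMDS links:

```python
# Why one dual-link DVI connection cannot drive the FDH3601's panel.
width, height, hz = 4096, 2160, 60        # 60 Hz refresh is an assumption

pixel_rate = width * height * hz          # pixels per second, ignoring blanking
dual_link_dvi_limit = 2 * 165e6           # two 165 MHz TMDS links per connection

print(pixel_rate / 1e6)                   # ~530.8 Mpx/s required
print(pixel_rate > dual_link_dvi_limit)   # True: exceeds one dual-link DVI (~330 Mpx/s)
```

Splitting the panel across two connections halves the per-link pixel rate, bringing it within the dual-link DVI budget.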

Apple Could Become History's First Trillion-Dollar Company

Wall Street punters predict Apple could become the first trillion-dollar company in a year or two. The company's share price rose last week to $633.38, edging past that of Google. Analyst Brian White of Topeka Capital Markets claimed that the share price could enter four figures in under a year, making the company worth US$1 trillion. In his report, White stated "Apple fever is spreading like a wildfire around the world." By that, he meant that the craze for Apple products and services is spreading worldwide.

Apple On Track for Trillion Dollar Valuation

Today Apple's market value approached $500 billion after a massive stock surge Monday. The tech giant's valuation is now nearly halfway to the trillion-dollar mark, with speculation that Apple will launch iTV later this year driving shares to new record highs. Yet Apple still has a way to go to become the most valuable company of all time. If Apple shares continue to hit new record levels, its market cap will reach $500 billion when the price reaches $537.

Still, shares will need to rise another $100 above that level to put Apple in contention for the most valuable company ever. According to Standard and Poor's, ExxonMobil was the most recent company to see a valuation north of $500 billion, back in 2007 when oil prices were at record highs. While Microsoft may not excite investors like it did in Y2K, the software behemoth still holds the record for the most valuable company: its market cap closed out 1999 at just over $600 billion, according to Standard and Poor's, before peaking north of $650 billion during the tech bubble in 2000. The highest analyst price target on the Street for Apple right now is $700. At that price, its market cap would handily surpass Microsoft's Y2K record.
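The figures above can be cross-checked with simple market-cap arithmetic: a $500 billion valuation at a $537 share price implies a share count, from which the valuation at the $700 target follows:

```python
# Market cap = share price x shares outstanding.
# The share count below is implied by the article's own numbers,
# not an official Apple figure.
shares = 500e9 / 537                # ~931 million shares implied
cap_at_700 = shares * 700           # valuation at the street-high target

print(round(shares / 1e6))          # ~931 (million shares)
print(round(cap_at_700 / 1e9))      # ~652, i.e. ~$652 billion
```

That result is comfortably above Microsoft's $600-billion-plus 1999 close, consistent with the article's claim.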

Time will tell if Apple will reach the trillion dollar mark.