News Posts matching #Science

NVIDIA Modulus & Omniverse Drive Physics-informed Models and Simulations

A manufacturing plant near Hsinchu, Taiwan's Silicon Valley, is among facilities worldwide boosting energy efficiency with AI-enabled digital twins. A virtual model can help streamline operations, maximizing throughput for its physical counterpart, say engineers at Wistron, a global designer and manufacturer of computers and electronics systems. In the first of several use cases, the company built a digital copy of a room where NVIDIA DGX systems undergo thermal stress tests (pictured above). Early results were impressive.

Making Smart Simulations
Using NVIDIA Modulus, a framework for building AI models that understand the laws of physics, Wistron created digital twins that let the company accurately predict airflow and temperature in test facilities that must remain between 27 and 32 degrees C. A simulation that would have taken nearly 15 hours with traditional methods on a CPU took just 3.3 seconds on an NVIDIA GPU running inference with an AI model developed using Modulus, a whopping 15,000x speedup. The results were fed into tools and applications built by Wistron developers with NVIDIA Omniverse, a platform for creating 3D workflows and applications based on OpenUSD.
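
Wistron's actual model is not public, but the underlying technique, a physics-informed neural surrogate that is trained once and then queried near-instantly, can be sketched in a few lines. The PyTorch snippet below is a minimal, hypothetical illustration rather than Modulus code: it trains a small network to satisfy the steady-state heat equation on a unit square, with assumed wall temperatures of 27 and 32 degrees C standing in for the facility's operating range.

```python
# Minimal physics-informed neural network (PINN) sketch in PyTorch.
# A network T(x, y) is trained so that Laplace's equation (steady-state heat)
# and two simple boundary conditions are satisfied. This is NOT NVIDIA Modulus
# code; the network size, PDE choice and boundary values are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Small fully connected network mapping (x, y) -> temperature
model = nn.Sequential(
    nn.Linear(2, 64), nn.Tanh(),
    nn.Linear(64, 64), nn.Tanh(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def pde_residual(xy):
    """Residual of Laplace's equation: T_xx + T_yy should be ~0."""
    xy = xy.requires_grad_(True)
    T = model(xy)
    grads = torch.autograd.grad(T.sum(), xy, create_graph=True)[0]
    T_x, T_y = grads[:, 0:1], grads[:, 1:2]
    T_xx = torch.autograd.grad(T_x.sum(), xy, create_graph=True)[0][:, 0:1]
    T_yy = torch.autograd.grad(T_y.sum(), xy, create_graph=True)[0][:, 1:2]
    return T_xx + T_yy

for step in range(2000):
    opt.zero_grad()
    interior = torch.rand(256, 2)                                        # collocation points in the unit square
    left = torch.cat([torch.zeros(64, 1), torch.rand(64, 1)], dim=1)     # x = 0 wall
    right = torch.cat([torch.ones(64, 1), torch.rand(64, 1)], dim=1)     # x = 1 wall
    # Physics loss (PDE residual) plus boundary loss (assumed 32 C and 27 C walls)
    loss = (pde_residual(interior) ** 2).mean() \
         + ((model(left) - 32.0) ** 2).mean() \
         + ((model(right) - 27.0) ** 2).mean()
    loss.backward()
    opt.step()

# Once trained, querying the surrogate is effectively instantaneous compared
# with a full CFD solve, which is where the reported speedup comes from.
print(model(torch.tensor([[0.5, 0.5]])))
```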

Extropic Intends to Accelerate AI through Thermodynamic Computing

Extropic, a pioneer in physics-based computing, this week emerged from stealth mode and announced the release of its Litepaper, which outlines the company's revolutionary approach to AI acceleration through thermodynamic computing. Founded in 2022 by Guillaume Verdon, Extropic has been developing novel chips and algorithms that leverage the natural properties of out-of-equilibrium thermodynamic systems to perform probabilistic computations for generative AI applications in a highly efficient manner. The Litepaper delves into Extropic's groundbreaking computational paradigm, which aims to address the limitations of current digital hardware in handling the complex probability distributions required for generative AI.

Today's algorithms spend around 25% of their time moving numbers around in memory, limiting the speedup achievable by accelerating specific operations. In contrast, Extropic's chips natively accelerate a broad class of probabilistic algorithms by running them physically as a rapid and energy-efficient, physics-based process in their entirety, unlocking a new regime of AI acceleration well beyond what was previously thought achievable. In coming out of stealth, the company has announced the fabrication of a superconducting prototype processor and developments surrounding room-temperature semiconductor-based devices for the broader market, with the goal of revolutionizing the field of AI acceleration and enabling new possibilities in generative AI.
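
Extropic has not published a programming model, but the "broad class of probabilistic algorithms" it refers to includes sampling from energy-based models, which digital hardware must emulate step by step. The sketch below performs Gibbs sampling over a small Ising-style energy model in plain NumPy; it illustrates the workload class only, and the couplings, sizes and temperature are made-up assumptions, not anything published by Extropic.

```python
# Gibbs sampling from a small Ising-style energy-based model in NumPy.
# Thermodynamic hardware aims to produce such samples as a physical process;
# here the same Boltzmann distribution is sampled step by step in software.
# All couplings and sizes are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 16                                   # number of binary spins
J = rng.normal(scale=0.5, size=(n, n))   # random symmetric couplings
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)
h = rng.normal(scale=0.1, size=n)        # local biases
beta = 1.0                               # inverse temperature

s = rng.choice([-1, 1], size=n)          # random initial spin state
samples = []
for sweep in range(5000):
    for i in range(n):
        # Local field felt by spin i, given all other spins
        field = J[i] @ s + h[i]
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * field))
        s[i] = 1 if rng.random() < p_up else -1
    if sweep >= 1000:                    # discard burn-in sweeps
        samples.append(s.copy())

# Empirical mean magnetization under the Boltzmann distribution exp(-beta * E(s))
print(np.mean(samples, axis=0))
```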

Samsung Anticipates 2027-2028 Entry into Micro OLED AR/VR Market

Choi Joo-sun, CEO of Samsung Display, spoke to journalists following a March 6 lecture at the Korea Advanced Institute of Science and Technology (KAIST). A Chosun Daily Business reporter pulled some quotes regarding Samsung's outlook for next-generation micro OLED technologies. Choi and his colleagues are likely taking their time on this development front; Sony Semiconductor Solutions (SSS) has already mass-produced OLED Microdisplay products. The Japanese technology giant is the main supplier of display panels for Apple's Vision Pro mixed reality headset, and a recent iFixit teardown revealed a possible custom-designed unit. Leaked "Bill of Materials" figures indicate an eye-watering total cost of $456 for a pair of SSS 4K panels; Apple is reportedly negotiating with SeeYa and BOE regarding the supply of cheaper alternatives.

The Samsung Display boss is monitoring current industry trends, but his team is not rushing out competing solutions: "The market potential of micro OLED, which is used in augmented reality (AR) and virtual reality (VR), is significant, but I believe the market will begin in earnest around 2027-2028...there are many technical aspects to overcome and cost considerations." Choi believes that Samsung is better off with plenty of preparation time before an anticipated bloom in the micro OLED market; in his opinion, dominance can be achieved with careful investment in research and development (R&D) efforts. He stated: "During the remaining 2 to 3 years, we will deploy manpower to ensure that Samsung Display does not fall behind in the micro OLED market and introduce solutions that are competitive compared to competitors...The acquisition of eMagin, an American display company, is also part of this effort."

3D Nanoscale Petabit Capacity Optical Disk Format Proposed by Chinese R&D Teams

The University of Shanghai for Science and Technology (USST), Peking University and the Shanghai Institute of Optics and Fine Mechanics (SIOM) are collaborating on new Optical Data Storage (ODS) technologies; a recently published paper reveals that scientists are attempting to create 3D nanoscale optical disk memory that reaches petabit capacities. Society's ever-growing demand for data requires the development of improved high-capacity storage technologies, and the R&D teams believe that ODS presents a viable alternative to today's mainstream solutions: "data centers based on major storage technologies such as semiconductor flash devices and hard disk drives have high energy burdens, high operation costs and short lifespans."

The proposed ODS format could be a "promising solution for cost-effective long-term archival data storage." The researchers note that current (e.g. Blu-ray) and previous generation ODS technologies have been "limited by low capacities and the challenge of increasing areal density." In order to get ODS up to petabit capacity levels, several innovations are required. The Nature.com abstract stated: "extending the planar recording architecture to three dimensions with hundreds of layers, meanwhile breaking the optical diffraction limit barrier of the recorded spots. We develop an optical recording medium based on a photoresist film doped with aggregation-induced emission dye, which can be optically stimulated by femtosecond laser beams. This film is highly transparent and uniform, and the aggregation-induced emission phenomenon provides the storage mechanism. It can also be inhibited by another deactivating beam, resulting in a recording spot with a super-resolution scale." The novel optical storage medium relies on dye-doped photoresist (DDPR) with aggregation-induced emission luminogens (AIE-DDPR). A 515 nm femtosecond Gaussian laser beam takes care of optical writing tasks, while a doughnut-shaped 639 nm continuous wave laser beam acts as the deactivating beam, inhibiting the recorded spot down to a super-resolution scale. A 480 nm pulsed laser and a 592 nm continuous wave laser work in tandem to read data.

NVIDIA Joins US Artificial Intelligence Safety Institute Consortium

NVIDIA has joined the National Institute of Standards and Technology's new U.S. Artificial Intelligence Safety Institute Consortium (AISIC) as part of the company's effort to advance safe, secure and trustworthy AI. AISIC will work to create tools, methodologies and standards to promote the safe and trustworthy development and deployment of AI. As a member, NVIDIA will work with NIST—an agency of the U.S. Department of Commerce—and fellow consortium members to advance the consortium's mandate. NVIDIA's participation builds on a record of working with governments, researchers and industries of all sizes to help ensure AI is developed and deployed safely and responsibly.

Through a broad range of development initiatives, including NeMo Guardrails, open-source software for ensuring large language model responses are accurate, appropriate, on topic and secure, NVIDIA actively works to make AI safety a reality. In 2023, NVIDIA endorsed the Biden Administration's voluntary AI safety commitments. Last month, the company announced a $30 million contribution to the U.S. National Science Foundation's National Artificial Intelligence Research Resource pilot program, which aims to broaden access to the tools needed to power responsible AI discovery and innovation.

NUDT MT-3000 Hybrid CPU Reportedly Utilized by Tianhe-3 Supercomputer

China's National Supercomputer Center introduced its Tianhe-3 system as a prototype back in early 2019; at the time it had been tested by thirty local organizations. Notable assessors included the Chinese Academy of Sciences and the China Aerodynamics Research and Development Center. The previous-generation Tianhe-2 system currently sits at number seven in the world supercomputer rankings, offering a measured performance of 33.86 petaflops. The internal makeup of its fully formed successor has remained a mystery...until now. The Next Platform believes that the "Xingyi" monikered third-generation supercomputer houses the Guangzhou-based lab's MT-3000 processor design. Author Timothy Prickett Morgan boasted about acquiring exclusive inside knowledge ahead of international intelligence agencies; many will be keeping an eye on the center, since it is administered by the National University of Defence Technology (NUDT), itself owned by the Chinese government.

The Next Platform has a track record of outing intimate details relating to Chinese-developed scientific breakthroughs; the semi-related "Oceanlight" system installed at China's National Supercomputing Center in Wuxi was "figured out" two years ago. Tianhe-3 and Oceanlight face significant competition in the form of "El Capitan," the USA's flagship "supercomputer being built right now at Lawrence Livermore National Laboratory by Hewlett Packard Enterprise in conjunction with compute engine supplier AMD. We need to know because we want to understand the very different—and yet, in some ways similar—architectural path that China seems to have taken with the Xingyi architecture to break through the exascale barrier."

Hafnia Material Breakthrough Paves Way for Ferroelectric Computer Memory

Scientists and engineers have been experimenting with hafnium oxide over the past decade; many believe that this "elusive ferroelectric material" is best leveraged in next-generation computing memory (due to its non-volatile properties), although a major scientific breakthrough is required to make it work in a practical manner. Hafnia's natural state is inherently non-ferroelectric, so it takes some effort to get it into a suitable state. A SciTechDaily article explores past efforts: "Scientists could only get hafnia to its metastable ferroelectric state when straining it as a thin, two-dimensional film of nanometer thickness." Research teams at the University of Rochester, New York, and the University of Tennessee, Knoxville, have presented evidence of an exciting landmark development. Sobhit Singh, assistant professor in UoR's Department of Mechanical Engineering, believes that the joint effort has opened a path to the creation of bulk ferroelectric and antiferroelectric hafnia.

His "Proceedings of the National Academy of Sciences" study proposes an alternative material path: "Hafnia is a very exciting material because of its practical applications in computer technology, especially for data storage. Currently, to store data we use magnetic forms of memory that are slow, require a lot of energy to operate, and are not very efficient. Ferroelectric forms of memory are robust, ultra-fast, cheaper to produce, and more energy-efficient." Professor Janice Musfeldt's team at the University of Tennessee has managed to produce a ferroelectric form of hafnia through an experimental high-pressure process, based on Singh's exact calculations. The material remained in a metastable phase post-experiment, even in a pressure-relieved state. Musfeldt commented on the pleasing results: "This is an excellent example of experimental-theoretical collaboration." Memory manufacturers are likely keeping an eye on hafnia's breakthrough potential, but material costs are dampening expectations; Tom's Hardware cites shortages (going back to early 2023): "Hafnium (the key component in Hafnia) has seen a nearly fivefold price increase due to increased demand since 2021, raising its cost from about $1,000 per kilogram to about $5,000. Even at $1,000 a kilogram, though, hafnium is far more expensive than silicon, which measures in the tens of dollars per kilogram."

NVIDIA Contributes $30 Million of Tech to NAIRR Pilot Program

In a major stride toward building a shared national research infrastructure, the U.S. National Science Foundation has launched the National Artificial Intelligence Research Resource pilot program with significant support from NVIDIA. The initiative aims to broaden access to the tools needed to power responsible AI discovery and innovation. It was announced Wednesday in partnership with 10 other federal agencies as well as private-sector, nonprofit and philanthropic organizations. "The breadth of partners that have come together for this pilot underscores the urgency of developing a National AI Research Resource for the future of AI in America," said NSF Director Sethuraman Panchanathan. "By investing in AI research through the NAIRR pilot, the United States unleashes discovery and impact and bolsters its global competitiveness."

NVIDIA's commitment of $30 million in technology contributions over two years is a key factor in enlarging the scale of the pilot, fueling the potential for broader achievements and accelerating the momentum toward full-scale implementation. "The NAIRR is a vision of a national research infrastructure that will provide access to computing, data, models and software to empower researchers and communities," said Katie Antypas, director of the Office of Advanced Cyberinfrastructure at the NSF. "Our primary goals for the NAIRR pilot are to support fundamental AI research and domain-specific research applying AI, reach broader communities, particularly those currently unable to participate in the AI innovation ecosystem, and refine the design for the future full NAIRR," Antypas added.

Quantum Breakthrough: Stable Qubits Generated at Room Temperature

Quantum coherence at room temperature has been achieved, thanks to the efforts of Associate Professor Nobuhiro Yanai and his research team at Kyushu University's Faculty of Engineering. Additional credit goes to Associate Professor Kiyoshi Miyata (also of Kyushu University) and Professor Yasuhiro Kobori of Kobe University, all in Japan. Their scientific experiments have led to an ideal set of conditions where it is "crucial to generate quantum spin coherence in the quintet sublevels by microwave manipulation at room temperature." A quantum system requires operation in a stable state over a certain period of time, free of environmental interference.

Kobori-san has disclosed the multi-department research results in a very elaborate document: "This is the first room-temperature quantum coherence of entangled quintets." The period of time mentioned above was measured only in nanoseconds, so more experimental work and further refinement will be carried out to prolong harmonious conditions. Head honcho Professor Yanai outlined some goals: "It will be possible to generate quintet multiexciton state qubits more efficiently in the future by searching for guest molecules that can induce more such suppressed motions and by developing suitable MOF structures...This can open doors to room-temperature molecular quantum computing based on multiple quantum gate control and quantum sensing of various target compounds."

You Can Now Create a Digital Clone of Yourself with Eternity.AC, an AI Startup Paving a Path to Immortality

Science fiction is coming to life with eternity.ac, a new startup offering personal digital cloning where anyone can challenge the boundaries of physical limitations with an affordable artificial intelligence that looks, talks, and converses just like you. The new venture empowers individuals to preserve their unique appearance, thoughts, experiences, and memories with a simple 3-step clone creation process.

The innovation opens up a new spectrum of meaningful AI uses, such as allowing future generations to interact with loved ones, enabling fans and followers to engage with their favorite public figures, and helping people understand the viewpoints and experiences of others. Once created, people can interact with the clone via written chat or through vocal conversations.

NVIDIA CEO Meets with India Prime Minister Narendra Modi

Underscoring NVIDIA's growing relationship with the global technology superpower, Indian Prime Minister Narendra Modi met with NVIDIA founder and CEO Jensen Huang Monday evening. The meeting at 7 Lok Kalyan Marg—as the Prime Minister's official residence in New Delhi is known—comes as Modi prepares to host a gathering of leaders from the G20 group of the world's largest economies, including U.S. President Joe Biden, later this week.

"Had an excellent meeting with Mr. Jensen Huang, the CEO of NVIDIA," Modi said in a social media post. "We talked at length about the rich potential India offers in the world of AI." The event marks the second meeting between Modi and Huang, highlighting NVIDIA's role in the country's fast-growing technology industry.

TSMC Inaugurates Global R&D Center, Celebrating Its Newest Hub for Technology Innovation

TSMC today held an inauguration ceremony for its global Research and Development Center in Hsinchu, Taiwan, celebrating the Company's newest hub for bringing the next generations of semiconductor technology into reality with customers, R&D partners in industry and academia, design ecosystem partners, and senior government leaders.

The R&D Center will serve as the new home for TSMC's R&D Organization, including the researchers who will develop TSMC's leading-edge process technology at the 2-nanometer generation and beyond, as well as scientists and scholars blazing the trail with exploratory research into fields such as novel materials and transistor structures. With R&D employees already relocating to their workplaces in the new building, it will be ready for its full complement of more than 7,000 staff by September 2023.

NVIDIA Espouses Generative AI for Improved Productivity Across Industries

A watershed moment on Nov. 30, 2022, was mostly virtual, yet it shook the foundations of nearly every industry on the planet. On that day, OpenAI released ChatGPT, the most advanced artificial intelligence chatbot ever developed. This set off demand for generative AI applications that help businesses become more efficient, from providing consumers with answers to their questions to accelerating the work of researchers as they seek scientific breakthroughs, and much, much more.

Businesses that previously dabbled in AI are now rushing to adopt and deploy the latest applications. Generative AI—the ability of algorithms to create new text, images, sounds, animations, 3D models and even computer code—is moving at warp speed, transforming the way people work and play. By employing large language models (LLMs) to handle queries, the technology can dramatically reduce the time people devote to manual tasks like searching for and compiling information.

Assassin's Creed Mirage Showcases the History of Baghdad

When Assassin's Creed Mirage launches on October 12, it will continue the series' tradition of bringing players closer to history with History of Baghdad, a feature that adds historical context to the game's simulation of the past. Part of an in-game Codex that also includes tutorials and a Database with lore, History of Baghdad will deliver expertly curated information on the history, art, and culture of Baghdad and the Abbasid Caliphate circa the ninth century, accompanied by images provided by museum partners.

In keeping with Assassin's Creed Mirage being a tribute to early Assassin's Creed games, History of Baghdad will be integrated into the main game, similarly to the Database of earlier games, and is tied to player progression, with an in-game reward for Basim once completed. As Basim visits 66 historical sites throughout Baghdad, players will unlock research-driven articles that dig into information across five topics: Economy; Belief & Daily Life; Government; Art & Science; and Court Life.

IBM and UC Berkeley Collaborate on Practical Quantum Computing

For weeks, researchers at IBM Quantum and UC Berkeley were taking turns running increasingly complex physical simulations. Youngseok Kim and Andrew Eddins, scientists with IBM Quantum, would test them on the 127-qubit IBM Quantum Eagle processor. UC Berkeley's Sajant Anand would attempt the same calculation using state-of-the-art classical approximation methods on supercomputers located at Lawrence Berkeley National Lab and Purdue University. They'd check each method against an exact brute-force classical calculation.

Eagle returned accurate answers every time. And watching how both computational paradigms performed as the simulations grew increasingly complex made both teams feel confident the quantum computer was still returning answers more accurate than the classical approximation methods, even in the regime beyond the capabilities of the brute force methods. "The level of agreement between the quantum and classical computations on such large problems was pretty surprising to me personally," said Eddins. "Hopefully it's impressive to everyone."

ITRI Set to Strengthen Taiwan-UK Collaboration on Semiconductors

The newly established Department for Science, Innovation and Technology (DSIT) in the UK has recently released the UK's National Semiconductor Strategy. Dr. Shih-Chieh Chang, General Director of Electronic and Optoelectronic System Research Laboratories at the Industrial Technology Research Institute (ITRI) of Taiwan, had an initial exchange with DSIT. During the exchange, Dr. Chang suggested that Taiwan can become a trusted partner for the UK and that the partnership can leverage collective strengths to create mutually beneficial developments. According to the Strategy, the British government plans to invest 1 billion pounds over the next decade to support the semiconductor industry. This funding will improve access to infrastructure, power more research and development, and facilitate greater international cooperation.

Dr. Chang stressed that ITRI looks forward to more collaboration with the UK on semiconductors to enhance the resilience of the supply chain. While the UK possesses cutting-edge capabilities in semiconductor IP design and compound semiconductor technology, ITRI has extensive expertise in semiconductor technology R&D and trial production. As a result, ITRI is well-positioned to offer consultation services for advanced packaging pilot lines, facilitate pre-production evaluation, and link British semiconductor IP design companies with Taiwan's semiconductor industry chain. "The expansion of British manufacturers' service capacity in Taiwan would create a mutually beneficial outcome for both Taiwan and the UK," said Dr. Chang.

U.S. Government to Allow Chipmakers to Expand Facilities in China

The United States government has imposed sanctions on companies exporting their goods to China with the aim of limiting the country's technological advancements. This forced many companies to reduce their shipments of the latest technologies; however, according to the latest information from The Wall Street Journal, the Biden administration will allow companies to keep expanding their production capacities in China. As the source notes, quoting statements from government officials, the top semiconductor makers such as Samsung, SK Hynix, and TSMC, all of which have a chip production facility in China, will be allowed to expand the production capacity without any US backlash.

Of course, this does not contradict US export-control policy, which the administration plans to continue. Alan Estevez, undersecretary of commerce for industry and security, noted last week at an industry gathering that the US plans to continue these restrictions for another year. Reportedly, all US manufacturers of wafer fab equipment (WFE) must acquire an export license from the Department of Commerce before exporting any tools for making either logic or memory chips intended for customers in China. Chipmakers Samsung, SK Hynix, and TSMC all received licenses to export from October 2022 to October 2023. However, the US government now allows these companies to continue upgrading their Chinese plants beyond the renewed license expiry date of October 2024.

NVIDIA Touts A100 GPU Energy Efficiency, Tensor Cores Drive "Perlmutter" Supercomputer

People agree: accelerated computing is energy-efficient computing. The National Energy Research Scientific Computing Center (NERSC), the U.S. Department of Energy's lead facility for open science, measured results across four of its key high performance computing and AI applications.

They clocked how fast the applications ran and how much energy they consumed on CPU-only and GPU-accelerated nodes on Perlmutter, one of the world's largest supercomputers using NVIDIA GPUs. The results were clear. Accelerated with NVIDIA A100 Tensor Core GPUs, energy efficiency rose 5x on average. An application for weather forecasting logged gains of 9.8x.
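
The underlying comparison reduces to simple energy-to-solution arithmetic: multiply each node type's average power draw by its measured runtime and node count, then take the ratio. The figures in the sketch below are hypothetical placeholders chosen only to illustrate the calculation, not NERSC's measured data.

```python
# Energy-to-solution comparison, in the spirit of the NERSC methodology described above.
# All numbers below are hypothetical placeholders, not measured Perlmutter data.
def energy_kwh(node_power_kw: float, runtime_hours: float, nodes: int) -> float:
    """Total energy consumed by a job: power per node x runtime x node count."""
    return node_power_kw * runtime_hours * nodes

cpu_job = energy_kwh(node_power_kw=0.5, runtime_hours=10.0, nodes=64)   # CPU-only nodes
gpu_job = energy_kwh(node_power_kw=2.5, runtime_hours=1.6, nodes=16)    # A100-accelerated nodes

print(f"CPU-only energy:  {cpu_job:.0f} kWh")
print(f"GPU-accel energy: {gpu_job:.0f} kWh")
print(f"Energy-efficiency gain: {cpu_job / gpu_job:.1f}x")               # 5.0x with these placeholders
```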

Frontier Remains As Sole Exaflop Machine on TOP500 List

Increasing its HPL score from 1.02 Eflop/s in November 2022 to an impressive 1.194 Eflop/s on this list, Frontier was able to improve upon its score after a stagnation between June 2022 and November 2022. Considering exascale was only a goal to aspire to just a few years ago, a roughly 17% increase here is an enormous success. Additionally, Frontier earned a score of 9.95 Eflop/s on the HPL-MxP benchmark, which measures performance for mixed-precision calculation. This is also an increase over the 7.94 Eflop/s that the system achieved on the previous list and more than eight times the machine's HPL score. Frontier is based on the HPE Cray EX235a architecture and utilizes AMD EPYC 64C 2 GHz processors. It has 8,699,904 cores, an incredible energy efficiency rating of 52.59 Gflops/watt, and relies on HPE's Slingshot-11 interconnect for data transfer.
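
The reported figures can be cross-checked with straightforward arithmetic: dividing the HPL score by the energy-efficiency rating gives the approximate power drawn during the benchmark run, roughly 22.7 MW. A minimal sanity check using the numbers quoted above:

```python
# Cross-check of the Frontier figures quoted from the TOP500 entry above.
hpl_flops = 1.194e18          # HPL score: 1.194 Eflop/s
efficiency = 52.59e9          # 52.59 Gflops per watt
cores = 8_699_904

power_watts = hpl_flops / efficiency
print(f"Implied power draw: {power_watts / 1e6:.1f} MW")        # ~22.7 MW
print(f"HPL per core: {hpl_flops / cores / 1e9:.1f} Gflop/s")   # ~137 Gflop/s per listed core
```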

Applied Materials Launches Multibillion-Dollar R&D Platform in Silicon Valley to Accelerate Semiconductor Innovation

Applied Materials, Inc. today announced a landmark investment to build the world's largest and most advanced facility for collaborative semiconductor process technology and manufacturing equipment research and development (R&D). The new Equipment and Process Innovation and Commercialization (EPIC) Center is planned as the heart of a high-velocity innovation platform designed to accelerate development and commercialization of the foundational technologies needed by the global semiconductor and computing industries.

To be located at an Applied campus in Silicon Valley, the multibillion-dollar facility is designed to provide a breadth and scale of capabilities that is unique in the industry, including more than 180,000 square feet - more than three American football fields - of state-of-the-art cleanroom for collaborative innovation with chipmakers, universities and ecosystem partners. Designed from the ground up to accelerate the pace of introducing new manufacturing innovations, the new EPIC Center is expected to reduce the time it takes the industry to bring a technology from concept to commercialization by several years, while simultaneously increasing the commercial success rate of new innovations and the return on R&D investments for the entire semiconductor ecosystem.

NVIDIA Grace Drives Wave of New Energy-Efficient Arm Supercomputers

NVIDIA today announced a supercomputer built on the NVIDIA Grace CPU Superchip, adding to a wave of new energy-efficient supercomputers based on the Arm Neoverse platform. The Isambard 3 supercomputer to be based at the Bristol & Bath Science Park, in the U.K., will feature 384 Arm-based NVIDIA Grace CPU Superchips to power medical and scientific research, and is expected to deliver 6x the performance and energy efficiency of Isambard 2, placing it among Europe's most energy-efficient systems.

It will achieve about 2.7 petaflops of FP64 peak performance and consume less than 270 kilowatts of power, ranking it among the world's three greenest non-accelerated supercomputers. The project is being led by the University of Bristol, as part of the research consortium the GW4 Alliance, together with the universities of Bath, Cardiff and Exeter.

Quantinuum Solves Quantum Computing Error Correction With System H2 Model

Quantinuum is proud and excited to announce this significant step towards fault tolerant quantum computing. This achievement has been uniquely enabled by the release of Quantinuum's System Model H2 - the highest performing quantum computer ever built. The official launch of Quantinuum's H2 quantum processor, Powered by Honeywell, follows extensive pre-launch work with a variety of global partners and was essential to the controlled creation and manipulation of non-Abelian anyons. The precise control of non-Abelian anyons has long been held as the path to using topological qubits for a fault tolerant quantum computer.

Tony Uttley, President and COO of Quantinuum, stated "With our second-generation system, we are entering a new phase of quantum computing. H2 highlights the opportunity to achieve valuable outcomes that are only possible with a quantum computer. The development of the H2 processor is also a critical step in moving towards universal fault tolerant quantum computing." He added "This demonstration is a beautiful proof point in the power of our H-Series hardware roadmap and reinforces our primary purpose which is to enable our customers to tackle problems that were previously beyond the reach of classical computers. The implications for society are significant and we are excited to see how this technology truly changes the world."

Biden-Harris Administration Launches First CHIPS for America Funding Opportunity

The Biden-Harris Administration through the U.S. Department of Commerce's National Institute of Standards and Technology today launched the first CHIPS for America funding opportunity for manufacturing incentives to restore U.S. leadership in semiconductor manufacturing, support good-paying jobs across the semiconductor supply chain, and advance U.S. economic and national security.

As part of the bipartisan CHIPS and Science Act, the Department of Commerce is overseeing $50 billion to revitalize the U.S. semiconductor industry, including $39 billion in semiconductor incentives. The first funding opportunity seeks applications for projects to construct, expand, or modernize commercial facilities for the production of leading-edge, current-generation, and mature-node semiconductors. This includes both front-end wafer fabrication and back-end packaging. The Department will also be releasing a funding opportunity for semiconductor materials and equipment facilities in the late spring, and one for research and development facilities in the fall.

IonQ to Open First Quantum Computing Manufacturing Facility in the US

IonQ, Inc. (NYSE: IONQ), an industry leader in quantum computing, today announced plans to open the first known dedicated quantum computing manufacturing facility in the U.S., located in the suburbs of Seattle, Washington. The new facility will house IonQ's growing R&D and manufacturing teams, as they develop systems to meet continued customer demand. With public support from U.S. Senator Patty Murray (D-WA) - an early proponent of the CHIPS and Science Act - and Congresswoman Suzan DelBene, US representative from Washington's 1st congressional district, today's announcement is part of IonQ's broader intent to invest $1 billion through expansion in the Pacific Northwest over the next 10 years.

"IonQ making the decision to open the first ever quantum computing manufacturing facility in the country right here in Bothell is a very big deal—and it's great news for Washington state," said Senator Murray. "Opening this facility will absolutely help ensure Washington state continues to be a leader in innovation and cutting-edge technologies—but it also means jobs that will be an investment in our families and their futures. These are the kinds of investments that happen when we pass legislation like the CHIPS and Science Act to invest in American manufacturing and build the economy of the future right here at home."

ASUS Servers Announce AI Developments at NVIDIA GTC

ASUS, the leading IT company in server systems, server motherboards and workstations, today announced its presence at NVIDIA GTC - a developer conference for the era of AI and the metaverse. ASUS will focus on three demonstrations outlining its strategic developments in AI, including: the methodology behind ASUS MLPerf Training v2.0 results that achieved multiple breakthrough records; a success story exploring the building of an academic AI data center at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia; and a research AI data center created in conjunction with the National Health Research Institute in Taiwan.

MLPerf benchmark results help advance machine-learning performance and efficiency, allowing researchers to evaluate the efficacy of AI training and inference based on specific server configurations. Since joining MLCommons in 2021, ASUS has gained multiple breakthrough records in the data center closed division across six AI benchmark tasks in AI training and inferencing, including MLPerf Training v2.0. At the ASUS GTC session, senior ASUS software engineers will share the methodology for achieving these world-class results, as well as the company's efforts to deliver more efficient AI workflows through machine learning.