News Posts matching #IBM


NVIDIA Announces Mellanox InfiniBand for Exascale AI Supercomputing

NVIDIA today introduced the next generation of NVIDIA Mellanox 400G InfiniBand, giving AI developers and scientific researchers the fastest networking performance available to take on the world's most challenging problems.

As computing requirements continue to grow exponentially in areas such as drug discovery, climate research and genomics, NVIDIA Mellanox 400G InfiniBand is accelerating this work through a dramatic leap in performance offered on the world's only fully offloadable, in-network computing platform. The seventh generation of Mellanox InfiniBand provides ultra-low latency and doubles data throughput with NDR 400 Gb/s and adds new NVIDIA In-Network Computing engines to provide additional acceleration.

IBM Delivers Its Highest Quantum Volume to Date

Today, IBM has unveiled a new milestone on its quantum computing road map, achieving the company's highest Quantum Volume to date. Combining a series of new software and hardware techniques to improve overall performance, IBM has upgraded one of its newest 27-qubit client-deployed systems to achieve a Quantum Volume 64. The company has made a total of 28 quantum computers available over the last four years through IBM Quantum Experience.

Achieving Quantum Advantage - the point where certain information processing tasks can be performed more efficiently or cost-effectively on a quantum computer than on a classical one - will require improved quantum circuits, the building blocks of quantum applications. Quantum Volume measures the length and complexity of circuits - the higher the Quantum Volume, the higher the potential for exploring solutions to real-world problems across industry, government, and research.
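The mapping from circuit size to Quantum Volume is a power of two: a device that reliably runs square model circuits of width and depth n scores QV = 2^n. A minimal sketch of that convention (IBM's full benchmarking protocol, which runs randomized circuits and checks heavy-output frequencies, is omitted here):

```python
def quantum_volume(n: int) -> int:
    """Quantum Volume for the largest square circuit (width = depth = n)
    a device runs successfully, per the QV = 2**n convention."""
    return 2 ** n

# A 27-qubit system passing n = 6 square circuits reaches Quantum Volume 64:
print(quantum_volume(6))  # -> 64
```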

To achieve this milestone, the company focused on a new set of techniques and improvements that used knowledge of the hardware to optimally run the Quantum Volume circuits. These hardware-aware methods are extensible and will improve any quantum circuit run on any IBM Quantum system, resulting in improvements to the experiments and applications which users can explore. These techniques will be available in upcoming releases and improvements to the IBM Cloud software services and the cross-platform open source software development kit (SDK) Qiskit.

IBM Reveals Next-Generation IBM POWER10 Processor

IBM today revealed the next generation of its IBM POWER central processing unit (CPU) family: IBM POWER10. Designed to offer a platform to meet the unique needs of enterprise hybrid cloud computing, the IBM POWER10 processor uses a design focused on energy efficiency and performance in a 7 nm form factor with an expected improvement of up to 3x greater processor energy efficiency, workload capacity, and container density than the IBM POWER9 processor.

Designed over five years with hundreds of new and pending patents, the IBM POWER10 processor is an important evolution in IBM's roadmap for POWER. Systems taking advantage of IBM POWER10 are expected to be available in the second half of 2021.

Fujifilm Points to 400 TB Tape Drives in the Future

Fujifilm, a Japanese company focused on photography, imaging, printing, and biotechnology, predicts that it can build a 400-terabyte cartridge using Strontium Ferrite (SrFe) technology. Thanks to a report by Blocks&Files, which was press-briefed by Fujifilm, we have some information on the future of tape storage. Tape storage uses Linear Tape-Open (LTO) technology, an open standard co-developed by IBM to ensure all tape-based storage devices use the same format instead of proprietary magnetic formats. We are currently at the LTO-8 generation of this technology, released in 2017, which can hold 12 TB in a single cartridge.

Fujifilm, one of the remaining makers of tape storage, predicts that it can pack 400 TB into a cartridge in the LTO-13 era. Starting from LTO-12, Fujifilm plans to deploy Strontium Ferrite (SrFe) technology, which differs from the current Barium Ferrite (BaFe). The problem with BaFe is that each new LTO generation requires smaller and smaller particles; if the particles get too small, bit values can no longer be read reliably and magnetic polarities become unstable. 400 TB tape drives using LTO-13 should be in circulation around 2032/33, according to a Blocks&Files prediction. Below you can check out the table provided by Blocks&Files that shows LTO generations and their capabilities.
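The ~400 TB LTO-13 figure is consistent with the roadmap's rough doubling of native capacity each generation. A back-of-envelope projection under that assumed cadence (actual shipped capacities may differ):

```python
def project_lto_capacity(base_tb: float, base_gen: int, target_gen: int) -> float:
    """Project native cartridge capacity, assuming capacity doubles per generation."""
    return base_tb * 2 ** (target_gen - base_gen)

# Starting from LTO-8's 12 TB native capacity:
for gen in range(9, 14):
    print(f"LTO-{gen}: ~{project_lto_capacity(12, 8, gen):.0f} TB")
# LTO-13 projects to ~384 TB, in line with Fujifilm's ~400 TB target.
```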

Hot Chips 2020 Program Announced

Today the Hot Chips program committee officially announced the August conference line-up, posted to hotchips.org. For this first-ever live-streamed Hot Chips Symposium, the program is better than ever!

In a session on deep learning training for data centers, we have a mix of talks from the internet giant Google showcasing their TPUv2 and TPUv3, and a talk from startup Cerebras on their 2nd gen wafer-scale AI solution, as well as ETH Zurich's 4096-core RISC-V based AI chip. And in deep learning inference, we have talks from several of China's biggest AI infrastructure companies: Baidu, Alibaba, and SenseTime. We also have some new startups that will showcase their interesting solutions—LightMatter talking about its optical computing solution, and TensTorrent giving a first-look at its new architecture for AI.

Cisco Appoints AMD President and CEO Dr. Lisa T. Su to Board of Directors

Cisco today announced the appointment of Dr. Lisa T. Su, AMD president and CEO, to its board of directors effective today. "Lisa is an accomplished business leader with deep expertise in the semiconductor industry," said Chuck Robbins, chairman and CEO, Cisco. "We look forward to her contributions to Cisco's board and our business as we continue to develop groundbreaking technologies, and a new internet for the 5G era that will help our customers innovate faster than ever before."

Dr. Su, 50, joined AMD in 2012 and has held the position of President and Chief Executive Officer since October 2014. She also serves on AMD's Board of Directors. Previously, Dr. Su served as Senior Vice President and General Manager, Networking and Multimedia at Freescale Semiconductor, Inc., and was responsible for global strategy, marketing and engineering for the company's embedded communications and applications processor business. Dr. Su joined Freescale in 2007 as Chief Technology Officer, where she led the company's technology roadmap and research and development efforts.

Lenovo ThinkPad X1 Fold is a Force Multiplier for Road Warriors at CES 2020

The ThinkPad brand from Lenovo has always represented serious business on the move, right from its IBM origins. At CES 2020, the company unveiled what is possibly the best foldable PC design we've seen to date, the ThinkPad X1 Fold. The X1 Fold is a 13.3-inch tablet that folds along the middle into either a book-like orientation or a laptop mode, in which the top half becomes the display and the bottom half a touchscreen keyboard in whichever layout you choose. If a touchscreen keyboard doesn't appeal to you, you can dock an accessory with a physical keyboard and trackpad.

Under the hood of the X1 Fold is an Intel "Lakefield" Hybrid x86 SoC that combines high-performance and high-efficiency x86 cores and dynamically allots workload to them while power-gating on the fly (a la ARM big.LITTLE). When it comes out mid-2020 (likely a Computex 2020 launch), the ThinkPad X1 Fold will be driven by Windows 10X, a new operating system Microsoft is designing specifically for dual-screen mobile computing devices. The Flex 5G is Lenovo's first business notebook with an integrated 5G modem (in addition to Wi-Fi 6), so you can enjoy high-speed mobile Internet on the move. Lastly, we spotted the Lenovo Ducati notebook, a co-branded product of the company's MotoGP team sponsorship.
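The dynamic workload allotment described above can be pictured as routing tasks by demand: heavy work goes to the fast core, light work to the efficiency cores, and idle cores are power-gated. A toy model of that idea (the threshold and task names are invented for illustration; real OS schedulers are far more involved):

```python
def dispatch(tasks, heavy_threshold=0.5):
    """Toy hybrid-CPU dispatcher: tasks are (name, load) pairs with load in [0, 1].
    Loads at or above the threshold go to the big core; the rest to little cores."""
    big, little = [], []
    for name, load in tasks:
        (big if load >= heavy_threshold else little).append(name)
    return big, little

big, little = dispatch([("video_encode", 0.9), ("mail_sync", 0.1), ("render", 0.7)])
print(big)     # -> ['video_encode', 'render']
print(little)  # -> ['mail_sync']
```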

AMD Reports Third Quarter 2019 Financial Results

AMD (NASDAQ:AMD) today announced revenue for the third quarter of 2019 of $1.80 billion, operating income of $186 million, net income of $120 million and diluted earnings per share of $0.11. On a non-GAAP(*) basis, operating income was $240 million, net income was $219 million and diluted earnings per share was $0.18.

"Our first full quarter of 7 nm Ryzen, Radeon and EPYC processor sales drove our highest quarterly revenue since 2005, our highest quarterly gross margin since 2012 and a significant increase in net income year-over-year," said Dr. Lisa Su, AMD president and CEO. "I am extremely pleased with our progress as we have the strongest product portfolio in our history, significant customer momentum and a leadership product roadmap for 2020 and beyond."

GlobalFoundries to Go Public in 2022

GlobalFoundries is planning to sell a minority stake in the company through an IPO (initial public offering) in 2022, company CEO Tom Caulfield told the Wall Street Journal. In February, it was reported that with the discontinuation of 7 nm development and the sale of certain facilities, the perception arose that GloFo was looking to be acquired by another semiconductor company. The same course of action could also have served as a prelude to taking the company public, and as it turns out, GloFo is heading in the latter direction.

TimesUnion comments that the decision to discontinue 7 nm development and shed some assets slowed the development of future technologies but returned the company to profitability, making an IPO possible. Caulfield didn't comment on the size of the stake sale, but the source suggests it could be aimed at alleviating the strain on GloFo's original investor, the Abu Dhabi government, which has invested over $21 billion in the company over the past 10 years. GlobalFoundries was formed when AMD spun off its semiconductor manufacturing business in 2009, with seed capital from the Abu Dhabi government. Over the decade, the company built fabs in New York state and acquired fabs in Vermont and Singapore, along with technology acquired from IBM.

AMD Could Release Next Generation EPYC CPUs with Four-Way SMT

AMD has completed the design phase of its "Zen 3" architecture, and rumors are already appearing about its details. This time, Hardwareluxx reports that AMD could bake four-way simultaneous multithreading (SMT) into its Zen 3 core to enable more performance and boost the parallel processing power of its data center CPUs. Expected to arrive sometime in 2020, Zen 3 server CPUs, codenamed "Milan", should bring many architectural improvements and make use of TSMC's 7 nm+ extreme ultraviolet (EUV) lithography, which delivers as much as a 20% increase in transistor density.

Perhaps the biggest change we could see is the addition of four-way SMT that should allow a CPU to have four virtual threads per core that will improve parallel processing power and enable data center users to run more virtual machines than ever before. Four-way SMT will theoretically boost performance by dividing micro-ops into four smaller groups so that each thread could execute part of the operation, thus making the execution time much shorter. This being only one application of four-way SMT, we can expect AMD to leverage this feature in a way that is most practical and brings the best performance possible.
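The headline arithmetic of SMT is simple: logical threads exposed to the OS equal physical cores times SMT ways. A sketch with hypothetical core counts (AMD has not confirmed any configuration):

```python
def logical_threads(cores: int, smt_ways: int, sockets: int = 1) -> int:
    """Logical (hardware) threads exposed to the operating system."""
    return cores * smt_ways * sockets

# A hypothetical 64-core "Milan" part:
print(logical_threads(64, 2))             # SMT2 -> 128 threads
print(logical_threads(64, 4))             # SMT4 -> 256 threads
print(logical_threads(64, 4, sockets=2))  # dual socket, SMT4 -> 512 threads
```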

Compute Express Link Consortium (CXL) Officially Incorporates

Today, Alibaba, Cisco, Dell EMC, Facebook, Google, Hewlett Packard Enterprise, Huawei, Intel Corporation and Microsoft announced the incorporation of the Compute Express Link (CXL) Consortium, and unveiled the names of the newly elected members of its Board of Directors. The core group of key industry partners announced their intent to incorporate in March 2019, and remain dedicated to advancing the CXL standard, a new high-speed CPU-to-Device and CPU-to-Memory interconnect that accelerates next-generation data center performance.

The five new CXL board members are as follows: Steve Fields, Fellow and Chief Engineer of Power Systems, IBM; Gaurav Singh, Corporate Vice President, Xilinx; Dong Wei, Standards Architect and Fellow at ARM Holdings; Nathan Kalyanasundharam, Senior Fellow at AMD Semiconductor; and Larrie Carr, Fellow, Technical Strategy and Architecture, Data Center Solutions, Microchip Technology Inc.

IBM Unveils z15 With Industry-First Data Privacy Capabilities

IBM (NYSE: IBM) today announced IBM z15, a new enterprise platform delivering the ability to manage the privacy of customer data across hybrid multicloud environments. With z15, clients can manage who gets access to data via policy-based controls, with an industry-first capability to revoke access to data across the hybrid cloud.

The movement of data between partners and third parties is often the root cause of data breaches. In fact, 60 percent of businesses reported they suffered a data breach caused by a vendor or third party in 2018. With the growing adoption of hybrid multicloud environments, the importance of maintaining data security and privacy only grows more acute and challenging.

NVIDIA to Implement 7nm EUV Node for its 2020 GPUs

NVIDIA will implement 7-nanometer EUV (extreme ultraviolet) lithography to build its future generation of GPUs slated for 2020, according to Japanese publication MyNavi.jp. The GPU giant could be among the first customers, besides IBM, to contract Samsung for 7 nm EUV mass production. IBM will use the Korean semiconductor giant to manufacture Z-series processors and FPGAs. Samsung announced in October 2018 that it would begin risk production on its 7 nm EUV node in early 2019.

An earlier report from 2018 also forecast NVIDIA implementing 7 nm DUV (deep ultraviolet) node of TSMC for its 2019 GPU lineup. With news of the company now working with Samsung on 7 nm EUV for 2020, this seems less likely. It's possible that NVIDIA could somehow split its next generation GPU lineup between TSMC 7 nm DUV and Samsung 7 nm EUV, with the latter being used for chips with higher transistor-counts, taking advantage of the node's higher deliverable transistor densities.

IBM Expands Strategic Partnership with Samsung to Include 7nm Chip Manufacturing

IBM today announced an agreement with Samsung to manufacture 7-nanometer (nm) microprocessors for IBM Power Systems, IBM Z and LinuxONE, high-performance computing (HPC) systems, and cloud offerings. The agreement combines Samsung's industry-leading semiconductor manufacturing with IBM's high-performance CPU designs. This combination is being designed to drive unmatched systems performance, including acceleration, memory and I/O bandwidth, encryption and compression speed, as well as system scaling. It positions IBM and Samsung as strategic partners leading the new era of high-performance computing specifically designed for AI.

"At IBM, our first priority is our clients," said John Acocella, Vice President of Enterprise Systems and Technology Development for IBM Systems. "IBM selected Samsung to build our next generation of microprocessors because they share our level of commitment to the performance, reliability, security, and innovation that will position our clients for continued success on the next generation of IBM hardware."

IBM To Acquire Red Hat, Completely Changing The Cloud Landscape

IBM and Red Hat, the world's leading provider of open source cloud software, announced today that the companies have reached a definitive agreement under which IBM will acquire all of the issued and outstanding common shares of Red Hat for $190.00 per share in cash, representing a total enterprise value of approximately $34 billion.

"The acquisition of Red Hat is a game-changer. It changes everything about the cloud market," said Ginni Rometty, IBM Chairman, President and Chief Executive Officer. "IBM will become the world's #1 hybrid cloud provider, offering companies the only open cloud solution that will unlock the full value of the cloud for their businesses."

With Summit, US Regains Leadership from China in TOP500 Supercomputers Listing

We previously covered in more depth the fact that the US was gearing up to overtake China's Sunway TaihuLight, then the world's fastest supercomputer, with its Summit machine, built in collaboration between IBM (with its water-cooled Power Systems AC922 nodes with 24-core processors and 96 processing threads) and NVIDIA (GV100 GPUs).

Now, this US dream has finally come to pass, and in a big way - Summit delivers more than double the performance of China's poster child, coming in at 200 PetaFLOPS of computing power. Summit boasts 27,648 Volta Tensor Core GPUs and 9,216 CPUs within its 5,600 square feet. The Summit supercomputer consumes 15 MW of power (the site where it's deployed can deliver up to 20 MW), which is on par with China's Sunway - but remember, it more than doubles the peak PetaFLOPS, from 93 to 200. A good step in the battle for supercomputer supremacy, but China still has an increasing foothold in the number of systems it has employed and registered with the TOP500.
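As a rough sanity check, dividing the quoted system figure by the GPU count attributes about 7.2 TFLOPS to each GPU - in the neighborhood of GV100's double-precision peak. The per-GPU split is our own arithmetic and ignores the CPUs' contribution:

```python
total_flops = 200e15            # 200 PetaFLOPS, the quoted system figure
gpus = 27_648                   # Volta Tensor Core GPUs in Summit
per_gpu_tflops = total_flops / gpus / 1e12
print(f"~{per_gpu_tflops:.1f} TFLOPS per GPU")  # -> ~7.2 TFLOPS
```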

Globalfoundries: 7 nm to Enable up to 2.7x Smaller Dies, 5 GHz CPUs

Globalfoundries' Chief Technical Officer, Gary Patton, talked about what he believes future manufacturing processes can deliver, with particular attention to the next step on the ladder: 7 nm. Apparently, the 7 nm process at Globalfoundries has received a shot in the arm from the integration of ex-IBM engineering specialists (recall that IBM effectively paid Globalfoundries to take its manufacturing division off its hands), and the company now expects better-than-foreseen technical specs and achievements from its 7 nm process.

While a move from 14 nm to 7 nm was expected to provide, at the very best, a halving of die area, Gary Patton is now saying that the area should be reduced by up to 2.7 times. To put that into perspective, AMD's 1000-series processors on the 14 nm Zeppelin die, which comes in at 213 mm² for the full 8-core design, could be brought down to just 80 mm². AMD could use the freed die space either to build in some overprovisioning, should the process still be in its infancy and yields need a small boost, or to cram in double the number of cores and other architectural improvements while still keeping chips smaller than the original Zen dies.
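The 80 mm² figure follows from straightforward arithmetic on the numbers above:

```python
zeppelin_mm2 = 213   # 14 nm Zeppelin die, full 8-core design
scaling = 2.7        # claimed 14 nm -> 7 nm area reduction factor
shrunk = zeppelin_mm2 / scaling
print(f"~{shrunk:.0f} mm^2")  # -> ~79 mm^2, i.e. "just 80 mm²"
```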

IBM Unveils Industry's Most Advanced Server Designed for Artificial Intelligence

IBM today unveiled its next-generation Power Systems Servers incorporating its newly designed POWER9 processor. Built specifically for compute-intensive AI workloads, the new POWER9 systems are capable of improving the training times of deep learning frameworks by nearly 4x, allowing enterprises to build more accurate AI applications, faster.

The new POWER9-based AC922 Power Systems are the first to embed PCI-Express 4.0, next-generation NVIDIA NVLink and OpenCAPI, which combined can accelerate data movement, calculated at 9.5x faster than PCI-E 3.0 based x86 systems. The system was designed to drive demonstrable performance improvements across popular AI frameworks such as Chainer, TensorFlow and Caffe, as well as accelerated databases such as Kinetica. As a result, data scientists can build applications faster, ranging from deep learning insights in scientific research to real-time fraud detection and credit risk analysis.
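The 9.5x figure is consistent with comparing NVLink 2.0's per-direction bandwidth against a PCIe 3.0 x16 slot. The reconstruction below uses commonly cited per-link numbers and is our own arithmetic, not IBM's published methodology:

```python
pcie3_x16_gbps = 15.75   # PCIe 3.0 x16, one direction, GB/s (8 GT/s * 16 lanes * 128/130)
nvlink2_gbps = 6 * 25.0  # six NVLink 2.0 links at ~25 GB/s each, one direction
print(f"~{nvlink2_gbps / pcie3_x16_gbps:.1f}x")  # -> ~9.5x
```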

"Summit" Supercomputer to Propel US Back to Number 1 in Top 500 by 2018

China has been increasingly - and steadily - gaining relevance in the supercomputing world, with more and more of the TOP500 entries controlled by that country. In fact, China boasts the number one supercomputer in the world, the Sunway TaihuLight, which delivers 93 PetaFLOPS of computing power - almost 3x the computational power of the second most powerful machine, China's own Tianhe-2. Supercomputing, and the money earned by selling processing slices of these supercomputers to private or state contractors, is a very attractive pull - especially considering the increasingly expensive computational needs of the modern world.

Summit is to be the United States' claim to fame in that regard, bringing the country back to number one in raw, top-of-the-line single-machine supercomputing power. Summit promises to more than double the PetaFLOPS of China's TaihuLight, to over 200 PetaFLOPS. That amounts to around 11x more processing grunt than its predecessor, Titan, in a much smaller footprint - Titan's 18,688 processing nodes will be condensed to just ~4,600 nodes on Summit, with each node achieving around 40 TeraFLOPS of computing power. The hardware? IBM and NVIDIA, married in water-cooled nodes with the powerful GV100 accelerator that's still eluding us enthusiasts - but that's a question for another day.
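Those figures cross-check reasonably well: the quoted per-node and node-count numbers multiply out to the same ballpark as the 200 PetaFLOPS target, while shrinking Titan's node count roughly fourfold:

```python
titan_nodes = 18_688
summit_nodes = 4_600
node_tflops = 40     # quoted per-node figure

print(f"Node consolidation: ~{titan_nodes / summit_nodes:.1f}x fewer nodes")  # -> ~4.1x
aggregate_pflops = summit_nodes * node_tflops / 1000
print(f"Aggregate: ~{aggregate_pflops:.0f} PFLOPS")  # -> ~184 PF, near the 200 PF target
```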

NVIDIA Announces SaturnV AI Supercomputer Powered by "Volta"

NVIDIA at the Supercomputing 2017 conference announced a major upgrade of its SaturnV AI supercomputer which, when complete, the company claims will be not just one of the world's top-10 AI supercomputers in terms of raw compute power, but also the world's most energy-efficient. The SaturnV will be a cluster supercomputer with 660 NVIDIA DGX-1 nodes. Each such node packs eight NVIDIA GV100 GPUs, which takes the machine's total GPU count to a staggering 5,280 (that's GPUs, not CUDA cores). They add up to an FP16 performance that's scraping the ExaFLOP (1,000-petaFLOP, or 10^18 FLOP/s) barrier, while its FP64 (double-precision) compute performance nears 40 petaFLOP/s (40,000 TFLOP/s).
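The quoted GPU count and FP64 figure are mutually consistent, as a quick back-of-envelope shows (the per-GPU split is our own arithmetic):

```python
nodes, gpus_per_node = 660, 8
total_gpus = nodes * gpus_per_node
fp64_pflops = 40     # quoted aggregate double-precision figure
per_gpu_tflops = fp64_pflops * 1000 / total_gpus
print(total_gpus)                       # -> 5280
print(f"~{per_gpu_tflops:.1f} TFLOPS")  # -> ~7.6 TFLOPS per GPU (FP64)
```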

SaturnV should beat Summit, a supercomputer being co-developed by NVIDIA and IBM, which in turn should unseat Sunway TaihuLight, that's currently the world's fastest supercomputer. This feat gains prominence as NVIDIA SaturnV and NVIDIA+IBM Summit are both machines built by the American private-sector, which are trying to beat a supercomputing leader backed by the mighty Chinese exchequer. The other claim to fame of SaturnV is its energy-efficiency. Before its upgrade, SaturnV achieved an energy-efficiency of a staggering 15.1 GFLOP/s per Watt, which was already the fourth "greenest." NVIDIA expects the upgraded SaturnV to take the number-one spot.

China Pulls Ahead of U.S. in Latest TOP500 List

The fiftieth TOP500 list of the fastest supercomputers in the world has China overtaking the US in the total number of ranked systems by a margin of 202 to 143. It is the largest number of supercomputers China has ever claimed on the TOP500 ranking, with the US presence shrinking to its lowest level since the list's inception 25 years ago.

Just six months ago, the US led with 169 systems, with China coming in at 160. Despite the reversal of fortunes, the 143 systems claimed by the US gives them a solid second place finish, with Japan in third place with 35, followed by Germany with 20, France with 18, and the UK with 15.

Intel Introduces Neuromorphic Self-Learning Chip Codenamed "Loihi"

Intel has been steadily increasing its portfolio of products in the AI space through the acquisition of multiple AI-focused companies such as Nervana, Mobileye, and others. Through its increased portfolio of AI-related IP, the company is looking to carve itself a slice of the AI computing market, and this sometimes means thinking inside the box more than outside of it. No matter how many cores and threads you can put in your HEDT system, the human brain's wetware is still one of the most impressive computation machines known to man.

That idea is what's behind neuromorphic computing, where chips are designed to mimic the overall architecture of the human brain - neurons, synapses and all. It marries the fields of biology, physics, mathematics, computer science, and electronic engineering to design artificial neural systems, mimicking the morphology of individual neurons, circuits, applications, and overall architectures. This, in turn, affects how information is represented, improves robustness to damage thanks to the distribution of workload across a many-core design, incorporates learning and development, adapts to local change (plasticity), and facilitates evolutionary change.
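Neuromorphic designs build on spiking-neuron models. A toy leaky integrate-and-fire neuron in Python shows the basic behavior such chips implement in silicon (the parameters are arbitrary illustration, not Loihi's actual neuron model):

```python
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron: the membrane potential leaks
    each step, integrates the incoming current, and emits a spike (1)
    on crossing the threshold, then resets."""
    v, spikes = 0.0, []
    for current in inputs:
        v = v * leak + current   # leak, then integrate input current
        if v >= threshold:
            spikes.append(1)
            v = 0.0              # reset after firing
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.5, 0.5, 0.5, 0.0, 0.9]))  # -> [0, 0, 1, 0, 0]
```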

IBM Research Achieves Breakthrough in Magnetic Storage: Enabling a 330TB Tape

IBM Research scientists have achieved a new world record in tape storage - their fifth since 2006. The new record of 201 Gb/in² (gigabits per square inch) in areal density was achieved on a prototype sputtered magnetic tape developed by Sony Storage Media Solutions. The scientists presented the achievement today at the 28th Magnetic Recording Conference (TMRC 2017).

Tape storage is currently the most secure, energy-efficient, and cost-effective solution for storing enormous amounts of back-up and archival data, as well as for new applications such as Big Data and cloud computing. This new record areal recording density is more than 20 times that of current state-of-the-art commercial tape drives such as the IBM TS1155 enterprise tape drive, and it opens the potential to record up to about 330 terabytes (TB) of uncompressed data on a single tape cartridge that would fit in the palm of your hand. 330 terabytes of data is comparable to the text of 330 million books, which would fill a bookshelf stretching slightly beyond the distance from the northeasternmost to the southwesternmost tips of Japan.

U.S.A. Loses 3rd Place in TOP500 Supercomputer Standings... To Switzerland?

The United States has been pushed down the TOP500 standings for some time courtesy of China, which has taken the 1st and 2nd place seats from the US with its Sunway TaihuLight and Tianhe-2 supercomputers (at Linpack performances of 93 and 33.9 PetaFLOPS, respectively). It seemed that, though the crown was stolen from America, 3rd place was relatively safe for the former champs. Not so. America has been pushed right off the podium in the latest TOP500 refresh... not by China, though, but Switzerland?

Exascale Supercomputer Technology Buoyed by $258M Grant by US Dept. of Energy

Developing supercomputers isn't for the faint of heart. Much less is it for those looking for fast development and deployment time-frames. As the world's supercomputers get ever faster and more exorbitantly expensive to develop and deploy, players who want to stay ahead have to think ahead as well. To this end, the US Department of Energy has awarded a total of $258M in research contracts to six of the US's foremost tech companies (AMD, Cray, Hewlett Packard Enterprise, IBM, Intel, and NVIDIA) to accelerate the development of exascale supercomputer technologies. These companies will work over a three-year contract period and will have to support at least 40% of the project cost, helping develop the technologies needed to build an exascale computer by 2021. It isn't strange that the companies accepted the grant and jumped at the opportunity: 60% savings on research and development they'd have to do for themselves is nothing to scoff at.

Supercomputers birthed from the project are expected to reach the exaFLOPS scale of computing performance, around 50 times more processing power than the generation of supercomputers being installed now. Since traditional supercomputing knowledge and materials are known to falter at exaFLOPS-level performance, the PathForward program - which looks to ensure such systems arrive in a timely fashion and that the US keeps its leadership in the field of supercomputing - will need spurred research and development, which the $258M grant aims to provide.
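Taking the "60% savings" remark at face value, the grant would cover 60% of the program cost, with the vendors funding the rest. The totals below are an inference from that stated split, not figures published by the DoE:

```python
grant_m = 258        # DoE grant, in $M
grant_share = 0.60   # grant covers 60%; vendors cover at least 40%
total_m = grant_m / grant_share
vendor_m = total_m - grant_m
print(f"Implied program total:  ~${total_m:.0f}M")   # -> ~$430M
print(f"Vendor contribution:    ~${vendor_m:.0f}M")  # -> ~$172M
```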