News Posts matching #AI


TSMC Starts Shipping its 7nm+ Node Based on EUV Technology

TSMC today announced that its seven-nanometer plus (N7+) node, the industry's first commercially available Extreme Ultraviolet (EUV) lithography technology, is delivering customer products to market in high volume. The N7+ process with EUV technology is built on TSMC's successful 7 nm node and paves the way for 6 nm and more advanced technologies.

The N7+ ramp to volume production is one of the fastest on record. N7+, which began volume production in the second quarter of 2019, is achieving yields similar to those of the original N7 process, which has been in volume production for more than one year.

GLOBALFOUNDRIES Introduces 12LP+ FinFET Solution for Cloud and Edge AI Applications

GLOBALFOUNDRIES (GF), the world's leading specialty foundry, announced today at its Global Technology Conference the availability of 12LP+, an innovative new solution for AI training and inference applications. 12LP+ offers chip designers a best-in-class combination of performance, power and area, along with a set of key new features, a mature design and production ecosystem, cost-efficient development and fast time-to-market for high-growth cloud and edge AI applications.

Derived from GF's existing 12nm Leading Performance (12LP) platform, GF's new 12LP+ provides either a 20% increase in performance or a 40% reduction in power requirements over the base 12LP platform, plus a 15% improvement in logic area scaling. A key feature is a high-speed, low-power 0.5 V SRAM bit cell that supports the fast, power-efficient shuttling of data between processors and memory, an important requirement for AI applications in the computing and wired infrastructure markets.

Intel "Cascade Lake-X" HEDT CPU Lineup Starts at 10-core, Core i9-10900X Geekbenched

With its 10th generation Core X "Cascade Lake-X" HEDT processor series, Intel will not bother designing models with single-digit core-counts. The series is likely to start at 10 cores with the Core i9-10900X. This 10-core/20-thread processor features a quad-channel DDR4 memory interface, and comes with clock speeds of 3.70 GHz base, a 200 MHz speed-bump over the Core i9-9900X. The chip retains the mesh interconnect design and cache hierarchy of Intel's HEDT processors since "Skylake-X," with 1 MB of dedicated L2 cache per core, and 19.3 MB of shared L3 cache.

Geekbench tests run on the chip show it performing roughly on par with the i9-9900X, with the 200 MHz speed-bump expected to marginally improve multi-threaded performance. Where the "Cascade Lake-X" silicon is expected to one-up "Skylake-X" is in two areas: support for DLBoost, on-die fixed-function hardware that accelerates the matrix multiplication at the heart of AI deep neural network (DNN) building and training; and pricing. Intel is expected to price its next-generation HEDT processors aggressively, nearly doubling cores-per-dollar.
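For context on what that fixed-function hardware does: the instruction family behind DLBoost (AVX-512 VNNI) fuses 8-bit multiplies with 32-bit accumulation in a single step, the core primitive of DNN matrix math. A rough behavioral model in Python (the function name and shapes are ours, purely for illustration, not Intel's API):

```python
import numpy as np

def vnni_dot(a_int8: np.ndarray, b_int8: np.ndarray) -> np.int32:
    """Illustrative model of a VNNI-style fused dot product: 8-bit
    operands are multiplied and accumulated into a 32-bit sum in one
    operation, rather than widening, multiplying and summing in
    separate instructions."""
    assert a_int8.dtype == np.int8 and b_int8.dtype == np.int8
    # Widen to int32 before multiplying so the products cannot overflow,
    # mirroring the int32 accumulator the hardware uses.
    return np.int32(np.dot(a_int8.astype(np.int32), b_int8.astype(np.int32)))

a = np.array([1, -2, 3, 4], dtype=np.int8)
b = np.array([5, 6, -7, 8], dtype=np.int8)
print(vnni_dot(a, b))  # 1*5 + (-2)*6 + 3*(-7) + 4*8 = 4
```

A full matrix multiply is just this dot product repeated per row/column pair, which is why a dedicated multiply-accumulate path speeds up DNN workloads so much.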

OtherSide Entertainment Releases Pre-Alpha Gameplay Footage of System Shock 3

System Shock is one of the most revered, iconic franchises in PC gaming. One of the first games to thoroughly explore the trope of the rampant AI in a fully realized world (be it lore- or mechanics-wise), it left a taste of grandeur with those who actually got to play it before some of us gamers became too spoiled by pretty, current-day graphics.

Developer OtherSide Entertainment, who have the rights to develop further worlds in the franchise, have just released a pre-alpha gameplay teaser that at least serves to give us a taste of their art and mood direction. I got some flashbacks to the first Dead Space game whilst watching this, but that might just be me. This really does look like one of the darkest renditions of AI-powered reckonings we've seen to date. Here's hoping for a good, engaging storyline. Check out the teaser after the break.

India First Country to Deploy AI Machine Learning to Fight Income Tax Evasion

India is building a large AI machine-learning data-center that can crunch through trillions of financial transactions per hour to process income tax returns for India's billion-strong income tax assessee base. India's Income Tax Department has relied on human tax assessment officers, randomly selected by a computer, to assess tax returns filed by individuals, in an increasingly inefficient system that's prone to both evasion and corruption. India has already been using machine learning since 2017 to fish out cases of tax evasion for further human scrutiny. The AI now replaces human assessment officers, moving them up an escalation matrix.

The AI/ML assessment system is a logical next step to two big policy decisions the Indian government has taken in recent years: one of 100% data-localization by foreign entities conducting commerce in India; and getting India's vast population to use electronic payment instruments, away from paper-money, by de-monetizing high-value currency, and replacing it with a scarce supply of newer bank-notes that effectively force people to use electronic instruments. Contributing to these efforts are some of the lowest 4G mobile data prices in the world (as low as $1.50 for 40 GB of 4G LTE data), and low-cost smartphone handsets. It's also free to open a basic bank account with no minimum balance requirements.

Intel Ships First 10nm Agilex FPGAs

Intel today announced that it has begun shipments of the first Intel Agilex field programmable gate arrays (FPGAs) to early access program customers. Participants in the early access program include Colorado Engineering Inc., Mantaro Networks, Microsoft and Silicom. These customers are using Agilex FPGAs to develop advanced solutions for networking, 5G and accelerated data analytics.

"The Intel Agilex FPGA product family leverages the breadth of Intel innovation and technology leadership, including architecture, packaging, process technology, developer tools and a fast path to power reduction with eASIC technology. These unmatched assets enable new levels of heterogeneous computing, system integration and processor connectivity and will be the first 10nm FPGA to provide cache-coherent and low latency connectivity to Intel Xeon processors with the upcoming Compute Express Link," said Dan McNamara, Intel senior vice president and general manager of the Networking and Custom Logic Group.

AORUS Announces the All-New AORUS 17 Flagship Laptop

Top-of-the-line gaming brand AORUS revealed its all-new 17-inch flagship gaming laptop, the AORUS 17, at the 2019 Gamescom trade show in Cologne, Germany. AORUS takes the lead yet again through a partnership with world-renowned switch manufacturer OMRON to develop a set of unique mechanical switches tailored for the AORUS 17. Not only do the keys offer exceptional durability, they also offer one of the best-feeling key sets gamers can find on a laptop. The AORUS greatness continues with the combination of a brand-new 8-core Intel CPU, an NVIDIA RTX graphics chip with ray tracing technology, and an exclusive WINDFORCE INFINITY cooling system; with these specs, the AORUS 17 sits firmly on the high-end gaming throne.

AORUS leads the industry again by working with world-renowned mechanical switch manufacturer OMRON to create a unique set of mechanical keys for the AORUS laptop, with gamer-oriented design details including an optimal 2.5 mm key travel and an actuation point of 1.6 mm, giving gamers both the tactile feel and the sound of a crisp blue switch. Gamers can now enjoy the qualities of a full mechanical keyboard right on their AORUS laptop. AORUS goes further by redesigning the key caps: unique "concentric" keycaps let the LED underneath each keycap shine through evenly, increasing overall lighting intensity by 27%. Combined with the AORUS-exclusive FUSION 2.0 keyboard customization software, gamers can truly create a unique personal style.

Cerebras Systems' Wafer Scale Engine is a Trillion Transistor Processor in a 12" Wafer

This news isn't properly today's, but it's relevant and interesting enough that I think it warrants a news piece on our page. My reasoning is this: in an era where Multi-Chip Modules (MCM) and a chiplet approach to processor fabrication have become the de facto standard for improving performance and yields, a trillion-transistor processor that eschews those modular design philosophies is interesting enough to give pause.

The Wafer Scale Engine has been developed by Cerebras Systems to face the ongoing increase in demand for AI-training engines. In workloads where latency has a very real impact on training times and a system's capability, Cerebras wanted to design a processor that avoided the need for external communication lanes between its cores - the system is limited, basically, only by transistors' switching times. Its 400,000 cores communicate seamlessly via interconnects etched on 46,225 square millimeters of silicon (by comparison, NVIDIA's largest GPU is 56.7 times smaller, at "just" 815 square millimeters).

Intel's CEO Blames 10 nm Delay on being "Too Aggressive"

During Fortune's Brainstorm Tech conference in Aspen, Colorado, Intel CEO Bob Swan took the stage and talked about where Intel is now, where it is headed, and how the company plans to evolve. Particular focus was put on how Intel went from "PC centric" to "data centric," and the struggles it encountered along the way.

However, when asked about the demise of Moore's Law, Swan detailed how aggressively the company approached the challenge. Instead of the regular doubling of transistor density every two years, Swan said, Intel has always targeted even greater density improvements so that it would stay the leader in the business.

SHERPA Consortium: If AI Could Feel, it Would Fear Cyber-attacks from People

Artificial intelligence (AI) is rapidly finding applications in nearly every walk of life. Self-driving cars, social media networks, cyber security companies, and everything in between use it. But a new report published by the SHERPA consortium - an EU project studying the impact of AI on ethics and human rights that F-Secure joined in 2018 - finds that while human attackers have access to machine learning techniques, they currently focus most of their efforts on manipulating existing AI systems for malicious purposes instead of creating new attacks that would use machine learning.

The study's primary focus is on how malicious actors can abuse AI, machine learning, and smart information systems. The researchers identify a variety of potentially malicious uses for AI that are well within reach of today's attackers, including the creation of sophisticated disinformation and social engineering campaigns.

Intel Sets Up New Network and Custom-logic Group

In recent conversations with Intel customers, two words kept coming up: disruption and opportunity. Disruption because almost every single executive I talk with has seen business disrupted in one way or another or is worried about keeping up with new technology trends and keeping a competitive edge. And opportunity because when these customers discuss their needs -- be it how to better leverage data, how to modernize their infrastructure for 5G or how to accelerate artificial intelligence (AI) and analytics workloads -- they realize the massive prospects in front of them.

To help our customers capitalize on the opportunities ahead, Intel has created a new organization that combines our network infrastructure organization with our programmable solutions organization under my leadership. This new organization is called the Network and Custom Logic Group.
Both original organizations executed on record design wins and revenues in 2018. Their merger allows Intel to bring maximum value to our customers by delivering unprecedented, seamless access to Intel's broad portfolio of products - from Intel Xeon processors and SoCs to FPGAs, eASICs, full-custom ASICs, software, IP, and systems and solutions - across the cloud, enterprise, network, embedded and IoT markets. To that end, FPGA and custom silicon will continue to be important horizontal technologies. And this is just the beginning of a continuum of custom logic offerings - FPGA, eASIC and ASIC - to support our customers' unique needs throughout their life cycles. No other company in the world can offer that.

The EPI Announces Successful First Steps Towards a Made-in-Europe High-performance Microprocessor

The European Processor Initiative (EPI), a crucial element of the European exascale strategy, has delivered its first architectural designs to the European Commission and welcomed new partners. Almost six months in, the project that kicked off last December has thus marked its initial milestones as successfully executed. The project, which will be the cornerstone of the EU's strategic plans in HPC, initially brought together 23 partners from 10 European countries, but has now welcomed three strong additions to its EPI family. The EPI consortium aims to bring a low-power microprocessor to market and ensure that the key competences for high-end chip design remain in Europe. The European Union's Horizon 2020 program funds this project through a special Framework Partnership Agreement. The initial stage is a three-year Specific Grant Agreement, which lasts until November 2021.

FSP Leads the Way to AIoT and 5G at Computex 2019

FSP, one of the world's leading manufacturers of power supplies, is pleased to announce an extensive range of products for the Computex 2019 show in Taipei, Taiwan, from May 28 to June 1, 2019 - with a special focus on the fast-growing AIoT (Artificial Intelligence of Things) sector and 5G-ready products. FSP will be showing power supplies, chargers and other power-related products designed specifically for these markets. In addition, they will be demonstrating a variety of specialized power supplies and PC cases for gamers.

The AIoT has the potential to drive revolutionary changes in numerous sectors, particularly in B2B, but relies on 24/7/365 uptime backed by a reliable power supply. FSP is ready to meet this demand. The key AIoT segments impact a huge range of applications including smart cities, transportation, logistics, environment, industrials, agriculture, utilities, smart buildings and consumers. For those key segments, FSP offers 1U, UPS and redundant PSUs, including CRPS, for cloud and data center services. For edge computing (the so-called 'fog' segment), FSP has Flex ATX, ATX and UPS offerings. Finally, to fully cover the vertical, FSP offers adapters, open-frame designs, Flex ATX, chargers and UPS units for edge computing and devices at the client end.

NVIDIA Announces Financial Results for First Quarter Fiscal 2020

NVIDIA today reported revenue for the first quarter ended April 28, 2019, of $2.22 billion compared with $3.21 billion a year earlier and $2.21 billion in the previous quarter. GAAP earnings per diluted share for the quarter were $0.64, compared with $1.98 a year ago and $0.92 in the previous quarter. Non-GAAP earnings per diluted share were $0.88 compared with $2.05 a year earlier and $0.80 in the previous quarter.

"NVIDIA is back on an upward trajectory," said Jensen Huang, founder and CEO of NVIDIA. "We've returned to growth in gaming, with nearly 100 new GeForce Max-Q laptops shipping. And NVIDIA RTX has gained broad industry support, making ray tracing the standard for next-generation gaming.

Microsoft Partners with Sony on Gaming and AI

Sony Corporation (Sony) and Microsoft Corp. (Microsoft) announced on Thursday that the two companies will partner on new innovations to enhance customer experiences in their direct-to-consumer entertainment platforms and AI solutions.

Under the memorandum of understanding signed by the parties, the two companies will explore joint development of future cloud solutions in Microsoft Azure to support their respective game and content-streaming services. In addition, the two companies will explore the use of current Microsoft Azure datacenter-based solutions for Sony's game and content-streaming services. By working together, the companies aim to deliver more enhanced entertainment experiences for their worldwide customers. These efforts will also include building better development platforms for the content creator community.

Western Digital Announces Automotive-grade iNAND EM132 eMMC Storage

Western Digital Corp. is addressing the automotive industry's increasing need for storage by equipping vehicle manufacturers and system solution providers with the technology and capacity to support both current and future applications including e-cockpits, Artificial Intelligence (AI) databases, ADAS, advanced infotainment systems, and autonomous computers. As the first 256GB e.MMC using 64-Layer 3D NAND TLC flash technology in the automotive market, the new Western Digital iNAND AT EM132 EFD extends the life of e.MMC beyond 2D NAND to meet evolving application needs and growing capacity requirements.

According to Neil Shah, partner and research director, Counterpoint Research, "Storage is one of the fastest growing semiconductor applications in a connected autonomous car. The advanced in-vehicle infotainment (IVI), AI and sensor-driven autonomous driving systems generate large amounts of data that needs to be processed and stored locally at the edge. The average capacity of storage required per vehicle is expected to balloon beyond 2TB by 2022."

Intel Xe GPUs to Support Raytracing Hardware Acceleration

Intel's upcoming Xe discrete GPUs will feature hardware-acceleration for real-time raytracing, similar to NVIDIA's "Turing" RTX chips, according to a company blog detailing how the company's Rendering Framework will work with the upcoming Xe architecture. The blog only mentions that the company's data-center GPUs support the feature, and not whether its client-segment ones do. The data-center Xe GPUs are targeted at cloud-based gaming service and cloud-computing providers, as well as those building large rendering farms.

"I'm pleased to share today that the Intel Xe architecture roadmap for data center optimized rendering includes ray tracing hardware acceleration support for the Intel Rendering Framework family of API's and libraries," said Jim Jeffers, Sr. Principal Engineer and Sr. Director of Intel's Advanced Rendering and Visualization team. Intel did not go into technical details of the hardware itself. NVIDIA demonstrated that you need two major components on a modern GPU to achieve real-time raytracing: 1. fixed-function hardware that computes intersections of rays with triangles or surfaces (which in NVIDIA's case are the RT cores), and 2. an "inexpensive" de-noiser. NVIDIA took the AI route to achieve the latter, deploying tensor cores (matrix-multiplication units) that accelerate AI DNN building and training. Both these tasks are achievable without fixed-function hardware, using programmable unified shaders, but at great performance cost. Intel developed a CPU-based de-noiser that can leverage AVX-512.
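Neither Intel nor NVIDIA has disclosed how their intersection units work internally, but the operation they accelerate is well known. A common software formulation is the Möller-Trumbore ray/triangle test - a few dozen arithmetic operations per ray that fixed-function hardware can run for millions of rays per frame. A sketch of that reference algorithm (not either vendor's actual implementation):

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore ray/triangle intersection.
    Returns the distance t along the ray to the hit point, or None on a miss."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:                  # ray is parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:              # outside the first barycentric bound
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:          # outside the second barycentric bound
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None       # hits behind the origin count as misses

# A ray cast straight down the z-axis into a triangle lying in the z=5 plane.
hit = ray_triangle_intersect(
    np.array([0.25, 0.25, 0.0]), np.array([0.0, 0.0, 1.0]),
    np.array([0.0, 0.0, 5.0]), np.array([1.0, 0.0, 5.0]), np.array([0.0, 1.0, 5.0]))
print(hit)  # 5.0
```

Running this per ray, per triangle, in shaders is exactly the "great performance cost" mentioned above; dedicating silicon to it frees the shaders for shading work.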

Intel Reports First-Quarter 2019 Financial Results

Intel Corporation today reported first-quarter 2019 financial results. "Results for the first quarter were slightly higher than our January expectations. We shipped a strong mix of high performance products and continued spending discipline while ramping 10nm and managing a challenging NAND pricing environment. Looking ahead, we're taking a more cautious view of the year, although we expect market conditions to improve in the second half," said Bob Swan, Intel CEO. "Our team is focused on expanding our market opportunity, accelerating our innovation and improving execution while evolving our culture. We aim to capitalize on key technology inflections that set us up to play a larger role in our customers' success, while improving returns for our owners."

In the first quarter, the company generated approximately $5.0 billion in cash from operations, paid dividends of $1.4 billion and used $2.5 billion to repurchase 49 million shares of stock. In the first quarter, Intel achieved 4 percent growth in the PC-centric business while data-centric revenue declined 5 percent.

NVIDIA Responds to Tesla's In-house Full Self-driving Hardware Development

Tesla held an investor panel in the USA yesterday (April 22), with the entire event, which focused on autonomous vehicles, also streamed on YouTube (replay here). Many things were promised in the course of the event, most of them outside the scope of this website, but the announcement of Tesla's first full self-driving hardware module made the news in more ways than one, as reported right here on TechPowerUp. We had noted how Tesla had traditionally relied on NVIDIA (and then Intel) microcontroller units, as well as NVIDIA self-driving modules, but the new in-house module steps away from the green camp in favor of more control over the feature set.

NVIDIA was quick to respond, saying Tesla's comparisons were incorrect: the NVIDIA Drive Xavier at 21 TOPS was not the right point of comparison; it should instead have been NVIDIA's own full self-driving hardware, the Drive AGX Pegasus, capable of 320 TOPS. Oh, and NVIDIA also claimed Tesla erroneously reported Drive Xavier's performance as 21 TOPS instead of 30 TOPS. It is interesting how quickly one company recognized itself as the unnamed competition, especially at a time when Intel, via its Mobileye division, has also given NVIDIA a hard time. Perhaps this is a sign of things to come: self-driving cars, and AI computing in general, are becoming too big a market to be left to third-party manufacturers, with larger companies opting to build hardware in-house. This move does hurt NVIDIA's position in this field, as market speculation is ongoing that it may end up losing other customers following Tesla's departure.

Western Digital Introduces Surveillance-Class Storage with Extreme Endurance For AI-Enabled Security

Western Digital Corp. today unveiled the new Western Digital WD Purple SC QD312 Extreme Endurance microSD card for designers and manufacturers of AI-enabled security cameras, smart video surveillance and advanced edge devices that capture and store video at higher bit rates than mainstream cameras. According to IHS Markit, global shipments of professional video surveillance cameras are expected to grow from 127 million to over 200 million between 2017 and 2022, and those with on-board storage are expected to grow by an average of approximately 19 percent per year.

With the migration to 4K and higher video resolutions, and the introduction of more smart cameras with built-in AI and improved local processing capabilities, surveillance cameras need to be able to store both video and raw data to facilitate these AI capabilities. As a result, storage with higher capacity, more intelligence and greater durability is increasingly required.

AMD President and CEO Dr. Lisa Su to Deliver COMPUTEX 2019 CEO Keynote

Taiwan External Trade Development Council (TAITRA) announced today that the 2019 COMPUTEX International Press Conference will be held with a Keynote by AMD President and CEO Dr. Lisa Su. The 2019 COMPUTEX International Press Conference & CEO Keynote is scheduled for Monday, May 27 at 10:00 AM in Room 201 of the Taipei International Convention Center (TICC) in Taipei, Taiwan with the keynote topic "The Next Generation of High-Performance Computing".

"COMPUTEX, as one of the global leading technology tradeshows, has continued to advance with the times for more than 30 years. This year, for the first time, a keynote speech will be held at the pre-show international press conference," said Mr. Walter Yeh, President & CEO, TAITRA, "Dr. Lisa Su received a special invitation to share insights about the next generation of high-performance computing. We look forward to her participation attracting more companies to participate in COMPUTEX, bringing the latest industry insights, and jointly sharing the infinite possibilities of the technology ecosystem on this global stage."

Mellanox Not Quite Intel's Yet, NVIDIA Joins Competitive Bidding

Late January it was reported that Intel was looking to buy out Israeli networking hardware maker Mellanox Technologies, in what looked like a cakewalk $6 billion deal at the time - a 35 percent premium over Mellanox's valuation. Turns out, Intel hasn't closed the deal, and other big tech players are in the fray for Mellanox, the most notable being NVIDIA. The GPU giant has reportedly offered Mellanox a competitive bid of $7 billion.

NVIDIA eyes a slice of the data-center networking hardware pie: the company has invested heavily in GPU-based AI accelerators and its own high-bandwidth interconnect, dubbed NVLink, and now needs to complete its hardware ecosystem with NICs and switches under its own brand. Founded in 1999 in Yoqneam, Israel, Mellanox designs high-performance network processors and fully-built NICs for a wide range of data-center-relevant interconnects. Intel is by far the biggest tech company operating in Israel, with not just R&D centers but also manufacturing sites - in stark contrast to NVIDIA, which opened its first R&D office there in 2017, with a few hundred employees.

Update: NVIDIA's bid for Mellanox stands at $7 billion.

Microsoft Unveils HoloLens 2 Mixed Reality Headset

Since the release of HoloLens in 2016 we have seen mixed reality transform the way work gets done. We have unlocked super-powers for hundreds of thousands of people who go to work every day. From construction sites to factory floors, from operating rooms to classrooms, HoloLens is changing how we work, learn, communicate and get things done.

We are entering a new era of computing, one in which the digital world goes beyond two-dimensional screens and enters the three-dimensional world. This new collaborative computing era will empower us all to achieve more, break boundaries and work together with greater ease and immediacy in 3D. Today, we are proud to introduce the world to Microsoft HoloLens 2. Our customers asked us to focus on three key areas to make HoloLens even better. They wanted HoloLens 2 to be even more immersive and more comfortable, and to accelerate the time-to-value.

AMD 7nm EPYC "Rome" CPUs in Upcoming Finnish Supercomputer, 200,000 Cores Total

During the next year and a half, the Finnish IT Center for Science (CSC) will be purchasing a new supercomputer in two phases. The first phase consists of Atos' air-cooled BullSequana X400 cluster which makes use of Intel's Cascade Lake Xeon processors along with Mellanox HDR InfiniBand for a theoretical performance of 2 petaflops. Meanwhile, system memory per node will range from 96 GB up to 1.5 TB with the entire system receiving a 4.9 PB Lustre parallel file system as well from DDN. Furthermore, a separate partition of phase one will be used for AI research and will feature 320 NVIDIA V100 NVLinked GPUs configured in 4-GPU nodes. It is expected that peak performance will reach 2.5 petaflops. Phase one will be brought online at some point in the summer of 2019.

Where things get interesting is in phase two, which is set for completion during the spring of 2020. Atos will be building CSC a liquid-cooled, HDR-connected BullSequana XH2000 supercomputer configured with 200,000 AMD EPYC "Rome" CPU cores, which for the mathematicians out there works out to 3,125 64-core AMD EPYC processors. Of course, all that x86 muscle will require a great deal of system memory; as such, each node will be equipped with 256 GB for good measure. Storage will consist of an 8 PB Lustre parallel file system, again provided by DDN. Overall, phase two will increase computing capacity by 6.4 petaflops (peak). With deals like this already being signed, it would appear AMD's next-generation EPYC processors are shaping up nicely, considering Intel had this market cornered for nearly a decade.

Intel Unveils a Clean-slate CPU Core Architecture Codenamed "Sunny Cove"

Intel today unveiled its first clean-slate CPU core micro-architecture since "Nehalem," codenamed "Sunny Cove." Over the past decade, the 9-odd generations of Core processors were based on incrementally refined descendants of "Nehalem," running all the way down to "Coffee Lake." Intel now wants a clean-slate core design, much like AMD "Zen" is a clean-slate compared to "Stars" or to a large extent even "Bulldozer." This allows Intel to introduce significant gains in IPC (single-thread performance) over the current generation. Intel's IPC growth curve over the past three micro-architectures has remained flat, and only grew single-digit percentages over the generations prior.

It's important to note here that "Sunny Cove" is the codename for the core design. Intel's earlier codenames were all-encompassing, covering not just cores but also the uncore, and entire dies. It's up to Intel's future chip-designers to design dies with many of these cores, a future-generation iGPU such as Gen11, and a next-generation uncore that probably integrates PCIe gen 4.0 and DDR5 memory. Intel details "Sunny Cove" as far as mentioning IPC gains, a new ISA (new instruction sets and hardware capabilities, including AVX-512), and improved scalability (the ability to increase core-counts without running into latency problems).