News Posts matching #AI


Samsung Electronics Begins Mass Production at New EUV Manufacturing Line

Samsung Electronics, a world leader in advanced semiconductor technology, today announced that its new cutting-edge semiconductor fabrication line in Hwaseong, Korea, has begun mass production.

The facility, V1, is Samsung's first semiconductor production line dedicated to extreme ultraviolet (EUV) lithography technology and produces chips on process nodes of 7 nanometers (nm) and below. The V1 line broke ground in February 2018 and began test wafer production in the second half of 2019. Its first products will be delivered to customers in the first quarter.

VR as a Coping Mechanism for Loss: Meet Nayeon

VR has been hailed as the next coming of truly ingenious, engrossing, enveloping experiences, and to some extent it already offers those. There are still limitations to the technology and the level of realism it can impart (there is a whole slew of senses we need to trigger for truly enveloping experiences, of course), but I feel we sometimes take too narrow a view of what VR can be. Of course, we can all imagine video games built in VR - and when we do, we likely imagine them as they were presented to us in Steven Spielberg's Ready Player One.

Then there are other use cases, such as real-estate experiences that place you right inside your future home and let you see the changes you'd make. Architecture, engineering, game-world design, even strolls through museums, your mind like a subatomic particle able to instantly travel to foreign countries and explore their marvels. All of this, mind you, without ever leaving the comfort of our homes, without the expense, and without time wasted on travel or passport checks - all of it, however, simulated. But what if VR could go even further? What if VR could be used as a coping mechanism? What if you could meet your dead parents, siblings... or children? This is the story I bring to you today: of how VR was used to reunite a mother with her deceased seven-year-old girl. This is the story of Ji-sung and her daughter Nayeon.

AMD Radeon Instinct MI100 "Arcturus" Hits the Radar, We Have its BIOS

AMD's upcoming large post-Navi graphics chip, codenamed "Arcturus," will debut as the "Radeon Instinct MI100," an AI-ML accelerator under the Radeon Instinct brand, which AMD calls "Server Accelerators." TechPowerUp has accessed its BIOS, which is now up on our VGA BIOS database. The card carries the device ID "0x1002 0x738C," which confirms "AMD" and "Arcturus." The BIOS also confirms a massive 32 GB of HBM2 memory, clocked at 1000 MHz real (possibly 1 TB/s of bandwidth, if the memory bus is 4096 bits wide).

Both Samsung (KHA884901X) and Hynix (H5VR64ESA8H) memory are supported, which is an important capability for AMD's supply chain. From the ID string "MI100 D34303 A1 XL 200W 32GB 1000m" we can derive that the TDP limit is set to a surprisingly low 200 W, especially considering this is a 128 CU / 8,192-shader design. For comparison, Vega 64 and Radeon Instinct MI60 have roughly 300 W power budgets with 4,096 shaders, and the 5700 XT has 225 W with 2,560 shaders. So either AMD achieved some monumental efficiency improvements with Arcturus, or the design is intentionally running constrained so that AMD doesn't reveal its hand to the partners doing early testing of the card.
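For what it's worth, the 1 TB/s figure checks out if the bus really is 4096 bits wide (four HBM2 stacks) - our assumption here rather than something the BIOS states outright. A quick back-of-the-envelope sketch:

```python
# Back-of-the-envelope HBM2 bandwidth check for the rumored MI100 configuration.
clock_mhz = 1000                        # "1000m" real memory clock from the ID string
data_rate_gbps = clock_mhz * 2 / 1000   # HBM2 is double data rate: 2 Gbps per pin
bus_width_bits = 4096                   # assumed: four 1024-bit stacks, as on Vega-class cards

bandwidth_gb_s = data_rate_gbps * bus_width_bits / 8
print(f"{bandwidth_gb_s:.0f} GB/s")     # 1024 GB/s, i.e. roughly 1 TB/s
```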

Samsung Launches 3rd-Generation "Flashbolt" HBM2E Memory

Samsung Electronics, the world leader in advanced memory technology, today announced the market launch of 'Flashbolt', its third-generation High Bandwidth Memory 2E (HBM2E). The new 16-gigabyte (GB) HBM2E is uniquely suited to maximize high performance computing (HPC) systems and help system manufacturers to advance their supercomputers, AI-driven data analytics and state-of-the-art graphics systems in a timely manner.

"With the introduction of the highest performing DRAM available today, we are taking a critical step to enhance our role as the leading innovator in the fast-growing premium memory market," said Cheol Choi, executive vice president of Memory Sales & Marketing at Samsung Electronics. "Samsung will continue to deliver on its commitment to bring truly differentiated solutions as we reinforce our edge in the global memory marketplace."

Europe Readies its First Prototype of Custom HPC Processor

The European Processor Initiative (EPI) is Europe's project to kickstart homegrown development of custom processors tailored to the different usage models the European Union might need. The first task of EPI is to create a custom processor for high-performance computing applications like machine learning, and chip prototypes are already on their way. EPI chairman of the board Jean-Marc Denis recently spoke to The Next Platform and confirmed some information regarding the processor's design goals and launch timeframe.

Expected to be manufactured on TSMC's 6 nm EUV (N6) technology, the EPI processor will tape out at the end of 2020 or the beginning of 2021, and it is going to be heterogeneous, meaning many different IPs will be present on its 2.5D package. The processor will use a custom Arm CPU based on the "Zeus" iteration of the Neoverse server core, meant for general-purpose computation tasks like running the OS. For special-purpose compute, EPI will incorporate a chip named Titan - a RISC-V based processor that uses vector and tensor processing units to accelerate AI tasks. Titan will support every relevant number format for AI processing, including FP32, FP64, INT8, and bfloat16. The system will use HBM memory allocated to the Titan processor, DDR5 links for the CPU, and PCIe 5.0 for internal connectivity.
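As an aside on the bfloat16 format named above: it is simply an IEEE-754 float32 with the lower 16 mantissa bits dropped (same sign and 8-bit exponent, 7 mantissa bits left), which is why it has become a favorite for AI workloads. A minimal Python illustration of the format itself - not of EPI's or Titan's hardware, which isn't public:

```python
import struct

def fp32_to_bfloat16_bits(x: float) -> int:
    """Truncate an IEEE-754 float32 to bfloat16 (keep sign, 8-bit exponent, top 7 mantissa bits)."""
    fp32_bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return fp32_bits >> 16  # drop the low 16 mantissa bits

def bfloat16_bits_to_fp32(bits: int) -> float:
    """Re-expand bfloat16 bits to float32 by zero-filling the dropped mantissa bits."""
    return struct.unpack(">f", struct.pack(">I", bits << 16))[0]

x = 3.14159265
bits = fp32_to_bfloat16_bits(x)
print(f"{x} -> bfloat16 0x{bits:04X} -> {bfloat16_bits_to_fp32(bits)}")  # ~3.140625
```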

Samsung Starts Production of AI Chips for Baidu

Baidu, a leading Chinese-language Internet search provider, and Samsung Electronics, a world leader in advanced semiconductor technology, today announced that Baidu's first cloud-to-edge AI accelerator, Baidu KUNLUN, has completed its development and will be mass-produced early next year. Baidu KUNLUN chip is built on the company's advanced XPU, a home-grown neural processor architecture for cloud, edge, and AI, as well as Samsung's 14-nanometer (nm) process technology with its I-Cube (Interposer-Cube) package solution.

The chip offers 512 gigabytes per second (GBps) memory bandwidth and supplies up to 260 Tera operations per second (TOPS) at 150 watts. In addition, the new chip allows Ernie, a pre-training model for natural language processing, to infer three times faster than the conventional GPU/FPGA-accelerating model. Leveraging the chip's limit-pushing computing power and power efficiency, Baidu can effectively support a wide variety of functions including large-scale AI workloads, such as search ranking, speech recognition, image processing, natural language processing, autonomous driving, and deep learning platforms like PaddlePaddle.

NVIDIA Introduces DRIVE AGX Orin Platform

NVIDIA today introduced NVIDIA DRIVE AGX Orin, a highly advanced software-defined platform for autonomous vehicles and robots. The platform is powered by a new system-on-a-chip (SoC) called Orin, which consists of 17 billion transistors and is the result of four years of R&D investment. The Orin SoC integrates NVIDIA's next-generation GPU architecture and Arm Hercules CPU cores, as well as new deep learning and computer vision accelerators that, in aggregate, deliver 200 trillion operations per second—nearly 7x the performance of NVIDIA's previous generation Xavier SoC.
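The "nearly 7x" claim lines up with the roughly 30 TOPS NVIDIA has previously quoted for Xavier - a figure taken from earlier Xavier material, not from this announcement - as a quick check shows:

```python
# Sanity check of the generational claim: Orin's 200 TOPS vs. Xavier's ~30 TOPS
# (the Xavier figure is NVIDIA's previously published number, assumed here).
orin_tops = 200
xavier_tops = 30
print(f"Orin / Xavier = {orin_tops / xavier_tops:.1f}x")  # ~6.7x, i.e. "nearly 7x"
```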

Orin is designed to handle the large number of applications and deep neural networks that run simultaneously in autonomous vehicles and robots, while achieving systematic safety standards such as ISO 26262 ASIL-D. Built as a software-defined platform, DRIVE AGX Orin is developed to enable architecturally compatible platforms that scale from a Level 2 to full self-driving Level 5 vehicle, enabling OEMs to develop large-scale and complex families of software products. Since both Orin and Xavier are programmable through open CUDA and TensorRT APIs and libraries, developers can leverage their investments across multiple product generations.

Intel Acquires Artificial Intelligence Chipmaker Habana Labs

Intel Corporation today announced that it has acquired Habana Labs, an Israel-based developer of programmable deep learning accelerators for the data center for approximately $2 billion. The combination strengthens Intel's artificial intelligence (AI) portfolio and accelerates its efforts in the nascent, fast-growing AI silicon market, which Intel expects to be greater than $25 billion by 2024 (1).

"This acquisition advances our AI strategy, which is to provide customers with solutions to fit every performance need - from the intelligent edge to the data center," said Navin Shenoy, executive vice president and general manager of the Data Platforms Group at Intel. "More specifically, Habana turbo-charges our AI offerings for the data center with a high-performance training processor family and a standards-based programming environment to address evolving AI workloads."

Intel's AI strategy is grounded in the belief that harnessing the power of AI to improve business outcomes requires a broad mix of technology - hardware and software - and full ecosystem support. Today, Intel AI solutions are helping customers turn data into business value and driving meaningful revenue for the company. In 2019, Intel expects to generate over $3.5 billion in AI-driven revenue, up more than 20 percent year-over-year. Together, Intel and Habana can accelerate the delivery of best-in-class AI products for the data center, addressing customers' evolving needs.

Intel in Negotiations for Habana Labs Acquisition

Intel is currently in negotiations to acquire Israeli AI chip startup Habana Labs, according to a person who spoke to Calcalist anonymously. If the deal materializes, Intel will pay between one and two billion USD, making it Intel's second-largest acquisition of an Israeli company. When asked about the potential deal, an Intel spokesperson stated that the company does not respond to rumors, while Habana Labs has yet to respond to Calcalist's request for comment.

Founded in 2016 by Israeli entrepreneur Avigdor Willenz, who previously founded Galileo Technologies and Annapurna Labs, Habana Labs develops processors for training and inference of machine learning models. The acquisition would allow Intel to compete better in the AI processor market and win customers that were previously exclusive to Habana Labs.

MediaTek Announces Dimensity & Dimensity 1000 5G SoC

MediaTek today unveiled Dimensity, MediaTek's family of powerful 5G system-on-chips (SoCs) offering an unrivaled combination of connectivity, multimedia, AI and imaging innovations for premium and flagship smartphones.

The MediaTek Dimensity 5G chipset family brings smart and fast together to power the world's most capable 5G devices. Dimensity represents a step toward a new era of mobility - the fifth dimension - to spur industry innovation and let consumers unlock the possibilities of 5G connectivity.

ASUS Deploying AI for Noise Cancellation Technology in Upcoming ROG Strix Go 2.4 Headset

ASUS' upcoming ROG Strix Go 2.4 headset may well grab more headlines than this one here at TechPowerUp, since the company has decided to integrate the latest buzzword into the upcoming product (missing, though, is a "Gaming" addition). According to ASUS, the ROG Strix Go 2.4's bi-directional boom mic uses an AI noise-cancelling algorithm to block out up to 95% of unwanted noise, from rage-induced keyboard mashing to nagging significant others (just kidding, they never nag). ASUS says the noise-cancellation algorithm was trained on a database of over 50 million records spanning tens of thousands of hours. But noise cancellation without decent output sound quality wouldn't a driving product make, so ASUS has included its 40 mm ASUS Essence drivers in these headphones, which the company says have a penchant for bass and have been conditioned in an airtight chamber to reduce any parasitic sounds.

NVIDIA Announces Jetson Xavier NX, Smallest Supercomputer for AI at the Edge

NVIDIA today introduced Jetson Xavier NX, the world's smallest, most powerful AI supercomputer for robotic and embedded computing devices at the edge. With a compact form factor smaller than the size of a credit card, the energy-efficient Jetson Xavier NX module delivers server-class performance up to 21 TOPS for running modern AI workloads, and consumes as little as 10 watts of power.

Jetson Xavier NX opens the door for embedded edge computing devices that demand increased performance but are constrained by size, weight, power budgets or cost. These include small commercial robots, drones, intelligent high-resolution sensors for factory logistics and production lines, optical inspection, network video recorders, portable medical devices and other industrial IoT systems.

Rambus Achieves Industry-Leading GDDR6 Performance at 18 Gbps

Rambus Inc., a premier silicon IP and chip provider making data faster and safer, today announced it has achieved industry-leading 18 Gbps performance with the Rambus GDDR6 Memory PHY. Running at the industry's fastest data rate of 18 Gbps, the Rambus GDDR6 PHY IP delivers peak performance four-to-five times faster than current DDR4 solutions and continues the company's longstanding tradition of developing leading-edge products. The Rambus GDDR6 PHY pairs with the companion GDDR6 memory controller from the recent acquisition of Northwest Logic to provide a complete and optimized memory subsystem solution.
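The "four-to-five times faster than current DDR4" comparison appears to be a per-pin data-rate comparison; assuming "current DDR4" means roughly DDR4-3200 through DDR4-4266 (our assumption, not a Rambus figure), the math works out as follows:

```python
# Per-pin data-rate comparison behind the "four-to-five times faster than DDR4" claim.
# Assumption: "current DDR4 solutions" means roughly DDR4-3200 to DDR4-4266.
gddr6_gbps = 18.0
for name, ddr4_gbps in [("DDR4-3200", 3.2), ("DDR4-4266", 4.266)]:
    print(f"18 Gbps GDDR6 vs {name}: {gddr6_gbps / ddr4_gbps:.1f}x per pin")
# -> ~5.6x and ~4.2x, i.e. "four-to-five times faster"
```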

Increased data usage in applications such as AI, ML, data center, networking and automotive systems is driving the need for higher-bandwidth memory. The coming introduction of high-bandwidth 5G networks will exacerbate this challenge. Developed in close collaboration with Rambus' memory partners, the GDDR6 solution gives system designers more options in selecting the memory system that meets both their bandwidth and cost requirements.

NVIDIA Launches the New Shield TV

NVIDIA today raised the bar higher still for streaming media players — unveiling the next generation of SHIELD TV, which delivers unmatched levels of home entertainment, gaming and AI capabilities right into the living room, starting at $149.

The two new SHIELD models — SHIELD TV and SHIELD TV Pro — provide exceptional visual and sound experiences. Their new Tegra X1+ processor, delivering up to 25 percent more performance than its predecessor, helps bring to life Dolby Vision for ultra-vivid imagery and Dolby Atmos for extraordinary audio. Its computational prowess dramatically improves picture quality by using AI to upscale HD video streams into 4K resolution.

New NVIDIA EGX Edge Supercomputing Platform Accelerates AI, IoT, 5G at the Edge

NVIDIA today announced the NVIDIA EGX Edge Supercomputing Platform - a high-performance, cloud-native platform that lets organizations harness rapidly streaming data from factory floors, manufacturing inspection lines and city streets to securely deliver next-generation AI, IoT and 5G-based services at scale, with low latency.

Early adopters of the platform - which combines NVIDIA CUDA-X software with NVIDIA-certified GPU servers and devices - include Walmart, BMW, Procter & Gamble, Samsung Electronics and NTT East, as well as the cities of San Francisco and Las Vegas.

Intel Could Unveil First Discrete 10 nm GPUs in mid-2020

According to sources close to DigiTimes, Intel will unveil its first discrete 10 nm graphics cards, named "Xe," very soon, with the first wave of Xe GPUs expected to arrive sometime in 2020. Said to launch mid-year, around July or August, the initial Xe GPU models of the long-awaited product will go on sale to consumers, as Intel hopes to gain a share of the massive market that uses GPUs to accelerate all kinds of tasks.

Perhaps one of the most interesting notes DigiTimes reported is that "... Intel's GPUs have already received support from the upstream supply chain and has already been integrated into Intel's CPUs to be used in the datacenter and AI fields," meaning that AIB partners already have access to the first 10 nm graphics chips, which are ready for system integration. The first generation of Xe graphics cards will cover almost the whole GPU market, including PC, datacenter, and AI applications, where NVIDIA currently holds the top spot.

TSMC Starts Shipping its 7nm+ Node Based on EUV Technology

TSMC today announced that its seven-nanometer plus (N7+), the industry's first commercially available Extreme Ultraviolet (EUV) lithography technology, is delivering customer products to market in high volume. The N7+ process with EUV technology is built on TSMC's successful 7 nm node and paves the way for 6 nm and more advanced technologies.

The N7+ ramp to volume production is one of the fastest on record. N7+, which began volume production in the second quarter of 2019, is delivering yields similar to the original N7 process, which has been in volume production for more than one year.

GLOBALFOUNDRIES Introduces 12LP+ FinFET Solution for Cloud and Edge AI Applications

GLOBALFOUNDRIES (GF), the world's leading specialty foundry, announced today at its Global Technology Conference the availability of 12LP+, an innovative new solution for AI training and inference applications. 12LP+ offers chip designers a best-in-class combination of performance, power and area, along with a set of key new features, a mature design and production ecosystem, cost-efficient development and fast time-to-market for high-growth cloud and edge AI applications.

Derived from GF's existing 12nm Leading Performance (12LP) platform, GF's new 12LP+ provides either a 20% increase in performance or a 40% reduction in power requirements over the base 12LP platform, plus a 15% improvement in logic area scaling. A key feature is a high-speed, low-power 0.5 V SRAM bit cell that supports the fast, power-efficient shuttling of data between processors and memory, an important requirement for AI applications in the computing and wired infrastructure markets.

Intel "Cascade Lake-X" HEDT CPU Lineup Starts at 10-core, Core i9-10900X Geekbenched

With its 10th generation Core X "Cascade Lake-X" HEDT processor series, Intel will not bother designing models with single-digit core-counts. The series is likely to start at 10 cores with the Core i9-10900X. This 10-core/20-thread processor features a quad-channel DDR4 memory interface, and comes with clock speeds of 3.70 GHz base, a 200 MHz speed-bump over the Core i9-9900X. The chip retains the mesh interconnect design and cache hierarchy of Intel's HEDT processors since "Skylake-X," with 1 MB of dedicated L2 cache per core, and 19.3 MB of shared L3 cache.

Geekbench tests run on the chip show it performing roughly on par with the i9-9900X, with the 200 MHz speed-bump expected to marginally improve multi-threaded performance. Where the "Cascade Lake-X" silicon is expected to one-up "Skylake-X" is in its support for DL Boost, on-die fixed-function hardware for low-precision matrix math that accelerates AI deep neural network inferencing; and in pricing, as Intel is expected to price its next-generation HEDT processors aggressively, nearly doubling the cores per dollar.
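On Cascade Lake, DL Boost boils down to the AVX-512 VNNI instructions (VPDPBUSD and friends), which fuse the unsigned-int8 by signed-int8 multiplies and 32-bit accumulation that dominate quantized neural-network inference into a single instruction. Below is a NumPy sketch of what one such instruction computes per 32-bit accumulator lane - a model of the arithmetic, not of Intel's silicon:

```python
import numpy as np

# What a VNNI-style fused instruction computes per 32-bit accumulator lane:
# four u8 x s8 products summed into an int32 accumulator in one step
# (a model of the operation, not of Intel's hardware).
def vpdpbusd_lane(acc: np.int32, a_u8: np.ndarray, b_s8: np.ndarray) -> np.int32:
    assert a_u8.shape == b_s8.shape == (4,)
    return acc + np.int32(np.sum(a_u8.astype(np.int32) * b_s8.astype(np.int32)))

a = np.array([10, 200, 3, 45], dtype=np.uint8)   # e.g. quantized activations
b = np.array([-5, 7, 120, -1], dtype=np.int8)    # e.g. quantized weights
print(vpdpbusd_lane(np.int32(0), a, b))          # 10*-5 + 200*7 + 3*120 + 45*-1 = 1665
```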

OtherSide Entertainment Releases Pre-Alpha Gameplay Footage of System Shock 3

System Shock is one of the most revered, iconic franchises in PC gaming. One of the first games to thoroughly explore the trope of the rampant AI in a fully realized world (be it in lore or mechanics), it left a taste of grandeur on those who actually got to play it before some of us gamers became too blindsided by pretty, current-day graphics.

Developer OtherSide Entertainment, which holds the rights to develop further worlds in the franchise, has just released a pre-alpha gameplay teaser that at least serves to give us a taste of its art and mood direction. I got some flashbacks to the first Dead Space game while watching it, but that might just be me. This really does look like one of the darkest renditions of an AI-powered reckoning we've seen to date. Here's hoping for a good, engaging storyline. Check out the teaser after the break.

India First Country to Deploy AI Machine Learning to Fight Income Tax Evasion

India is building a large AI machine-learning data center that can crunch through trillions of financial transactions per hour to process the income tax returns of India's billion-strong income tax assessee base. India's Income Tax Department has relied on human tax assessment officers, randomly assigned by a computer, to assess tax returns filed by individuals - an increasingly inefficient system that's prone to both evasion and corruption. India has already been using machine learning since 2017 to fish out cases of tax evasion for further human scrutiny. The AI now replaces human assessment officers, moving them up an escalation matrix.

The AI/ML assessment system is a logical next step to two big policy decisions the Indian government has taken in recent years: one of 100% data-localization by foreign entities conducting commerce in India; and getting India's vast population to use electronic payment instruments, away from paper-money, by de-monetizing high-value currency, and replacing it with a scarce supply of newer bank-notes that effectively force people to use electronic instruments. Contributing to these efforts are some of the lowest 4G mobile data prices in the world (as low as $1.50 for 40 GB of 4G LTE data), and low-cost smartphone handsets. It's also free to open a basic bank account with no minimum balance requirements.

Intel Ships First 10nm Agilex FPGAs

Intel today announced that it has begun shipments of the first Intel Agilex field programmable gate arrays (FPGAs) to early access program customers. Participants in the early access program include Colorado Engineering Inc., Mantaro Networks, Microsoft and Silicom. These customers are using Agilex FPGAs to develop advanced solutions for networking, 5G and accelerated data analytics.

"The Intel Agilex FPGA product family leverages the breadth of Intel innovation and technology leadership, including architecture, packaging, process technology, developer tools and a fast path to power reduction with eASIC technology. These unmatched assets enable new levels of heterogeneous computing, system integration and processor connectivity and will be the first 10nm FPGA to provide cache-coherent and low latency connectivity to Intel Xeon processors with the upcoming Compute Express Link," said Dan McNamara, Intel senior vice president and general manager of the Networking and Custom Logic Group.

AORUS Announces the All-New AORUS 17 Flagship Laptop

Top-of-the-line gaming brand AORUS reveals itself at the 2019 GAMESCOM trade show in Cologne, Germany, where the spotlight shines on the all-new 17-inch flagship gaming laptop, the AORUS 17. AORUS takes the lead yet again through a co-op with world-renowned switch manufacturer OMRON to develop a set of unique mechanical switches tailored for the AORUS 17; not only do the keys offer exceptional durability, they are also among the best-feeling keys gamers can find on a laptop. The AORUS greatness continues with the combination of a brand-new Intel 8-core CPU, an NVIDIA RTX graphics chip with ray tracing technology, and an exclusive WINDFORCE INFINITY cooling system; with these specs, the AORUS 17 sits firmly on the high-end gaming throne.

AORUS leads the industry again by working with world-renowned mechanical switch manufacturer OMRON to create a unique set of mechanical keys for the AORUS laptop, with gamer-oriented design details including an optimal 2.5 mm key travel and an actuation point of 1.6 mm, giving gamers both the tactile feel and the sound of a crisp blue switch; gamers can now enjoy the qualities of a full mechanical keyboard right on their AORUS laptop. AORUS goes further by redesigning the key caps: stunning backlit keys with unique "concentric" keycaps let the LED underneath each keycap shine through evenly, increasing overall lighting intensity by 27%. Together with the AORUS-exclusive FUSION 2.0 keyboard customization software, gamers can truly create a unique personal style.

Cerebras Systems' Wafer Scale Engine is a Trillion Transistor Processor in a 12" Wafer

This news isn't exactly today's, but it's relevant and interesting enough that I think it warrants a news piece on our page. My reasoning is this: in an era where Multi-Chip Modules (MCM) and a chiplet approach to processor fabrication have become the de-facto standard for improving performance and yields, a trillion-transistor processor that eschews those modular design philosophies is interesting enough to give pause.

The Wafer Scale Engine has been developed by Cerebras Systems to meet the ongoing increase in demand for AI-training compute. In workloads where latency has a very real impact on training times and a system's capability, Cerebras wanted to design a processor that avoids the need for external communication lanes between chips for its cores to talk to each other - the system is basically limited only by transistors' switching times. Its 400,000 cores communicate seamlessly via on-die interconnects, etched into 46,225 square millimeters of silicon (by comparison, NVIDIA's largest GPU is 56.7 times smaller at "just" 815 square millimeters).
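The area comparison is easy to verify from the die sizes: the 815 mm² figure is quoted above for NVIDIA's largest GPU (GV100), while 46,225 mm² corresponds to the 215 mm x 215 mm square Cerebras publishes for the Wafer Scale Engine. A quick check:

```python
# Quick check of the die-area comparison above.
wse_mm2 = 215 * 215   # Wafer Scale Engine: 46,225 mm^2 (215 mm x 215 mm, per Cerebras)
gpu_mm2 = 815         # largest NVIDIA GPU die (GV100), as quoted above
print(f"WSE: {wse_mm2} mm^2, ratio: {wse_mm2 / gpu_mm2:.1f}x")   # ~56.7x
```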

Intel's CEO Blames 10 nm Delay on being "Too Aggressive"

During Fortune's Brainstorm Tech conference in Aspen, Colorado, Intel CEO Bob Swan took the stage and talked about the company - where Intel is now, where it is headed, and how it plans to evolve. Particular focus was put on how Intel became "data centric" rather than "PC centric," and the struggles it encountered along the way.

However, when asked about the demise of Moore's Law, Swan detailed the aggressiveness with which Intel approached the challenge. Instead of the regular doubling of transistor density every two years, Swan said, Intel has always targeted even greater densities so that it would stay the leader in the business.