News Posts matching #Deep Learning


Intel Internal Memo Reveals That Even Intel Is Impressed by AMD's Progress

Today an article was posted on Intel's internal, employee-only portal "Circuit News". The post, titled "AMD competitive profile: Where we go toe-to-toe, why they are resurgent, which chips of ours beat theirs", details AMD's recent history and how the company achieved its tremendous growth. Intel also discusses where it sees the biggest challenges from AMD's new products, and what its "secret sauce" is for countering these improvements.

Intel Reports First-Quarter 2019 Financial Results

Intel Corporation today reported first-quarter 2019 financial results. "Results for the first quarter were slightly higher than our January expectations. We shipped a strong mix of high performance products and continued spending discipline while ramping 10nm and managing a challenging NAND pricing environment. Looking ahead, we're taking a more cautious view of the year, although we expect market conditions to improve in the second half," said Bob Swan, Intel CEO. "Our team is focused on expanding our market opportunity, accelerating our innovation and improving execution while evolving our culture. We aim to capitalize on key technology inflections that set us up to play a larger role in our customers' success, while improving returns for our owners."

In the first quarter, the company generated approximately $5.0 billion in cash from operations, paid dividends of $1.4 billion, and used $2.5 billion to repurchase 49 million shares of stock. Intel also achieved 4 percent growth in its PC-centric business, while data-centric revenue declined 5 percent.

Intel Announces Broadest Product Portfolio for Moving, Storing, and Processing Data

Intel Tuesday unveiled a new portfolio of data-centric solutions consisting of 2nd-Generation Intel Xeon Scalable processors, Intel Optane DC memory and storage solutions, and software and platform technologies optimized to help its customers extract more value from their data. Intel's latest data center solutions target a wide range of use cases within cloud computing, network infrastructure and intelligent edge applications, and support high-growth workloads, including AI and 5G.

Building on more than 20 years of world-class data center platforms and deep customer collaboration, Intel's data center solutions target server, network, storage, internet of things (IoT) applications and workstations. The portfolio of products advances Intel's data-centric strategy to pursue a massive $300 billion data-driven market opportunity.

Anthem Gets NVIDIA DLSS and Highlights Support in Latest Update

Saying Anthem has had a rough start would be an understatement, but things can only get better with time (hopefully, anyway). This week saw an update to the PC version that brought with it support for NVIDIA's new DLSS (Deep Learning Super Sampling) technology, for use with the company's Turing-microarchitecture GeForce RTX cards. NVIDIA's internal testing shows as much as a 40% improvement in average FPS with DLSS on relative to off, and the company has also published a video to help show the graphical changes, or lack thereof in this case. DLSS in Anthem is available on all RTX cards at 3840x2160, and on the RTX 2060, 2070, and 2080 at 2560x1440. There is no word on equivalent resolutions at non-16:9 aspect ratios, and 1080p is presumably a no-go, as we first discussed last month.
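
For readers keeping track, the card-and-resolution gating described above boils down to a simple lookup. Here is a minimal sketch of that rule; the table and function names are ours for illustration, not from the game's code, and the "all RTX cards" set is assumed to be the consumer Turing lineup of the time.

```python
# Hedged sketch of Anthem's DLSS gating as described above; names are
# illustrative, and the card sets are assumptions, not game code.

DLSS_ALLOWED = {
    (3840, 2160): {"RTX 2060", "RTX 2070", "RTX 2080", "RTX 2080 Ti"},  # all RTX cards
    (2560, 1440): {"RTX 2060", "RTX 2070", "RTX 2080"},  # 2080 Ti sits this one out
}

def dlss_available(card: str, width: int, height: int) -> bool:
    """True if Anthem exposes the DLSS toggle for this card at this resolution."""
    return card in DLSS_ALLOWED.get((width, height), set())

print(dlss_available("RTX 2060", 3840, 2160))     # True
print(dlss_available("RTX 2080 Ti", 2560, 1440))  # False
```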

Note that we will NOT be able to test DLSS in Anthem, a result of the game's five-activation limit on hardware configurations. This prevented us from doing a full graphics card performance test, but our article on the VIP demo is still worth checking out if you are curious. In addition to DLSS, Anthem also gets NVIDIA Highlights support, which lets GeForce Experience users automatically capture and save "best gameplay moments," with a toggle in the driver to enable the feature. A highlight is generated for an apex kill, boss kill, legendary kill, multi kill, overlook interaction, or tomb discovery. More on this in the source linked below.

3DMark Adds NVIDIA DLSS Feature Performance Test to Port Royal

Did you see the NVIDIA keynote presentation at CES this year? For us, one of the highlights was the DLSS demo based on our 3DMark Port Royal ray tracing benchmark. Today, we're thrilled to announce that we've added this exciting new graphics technology to 3DMark in the form of a new NVIDIA DLSS feature test. This new test is available now in 3DMark Advanced and Professional Editions.

3DMark feature tests are specialized tests for specific technologies. The NVIDIA DLSS feature test helps you compare performance and image quality with and without DLSS processing. The test is based on the 3DMark Port Royal ray tracing benchmark. Like many games, Port Royal uses Temporal Anti-Aliasing. TAA is a popular, state-of-the-art technique, but it can result in blurring and the loss of fine detail. DLSS (Deep Learning Super Sampling) is an NVIDIA RTX technology that uses deep learning and AI to improve game performance while maintaining visual quality.
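
To make the comparison concrete: the test reports an average frame rate for a pass rendered with TAA (DLSS off) and a pass rendered with DLSS on, and the interesting number is the relative gain. A minimal sketch, with purely hypothetical frame rates rather than measured 3DMark results:

```python
# Minimal sketch of reading the feature test's result: compare the average
# FPS of the DLSS-off (TAA) pass against the DLSS-on pass. The example
# numbers below are hypothetical, not measured results.

def dlss_gain_pct(fps_taa: float, fps_dlss: float) -> float:
    """Percentage FPS improvement of the DLSS pass over the TAA baseline."""
    return (fps_dlss / fps_taa - 1.0) * 100.0

print(f"{dlss_gain_pct(30.0, 42.0):.0f}% faster with DLSS on")  # 40% faster
```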

NVIDIA Presents the TITAN RTX 24GB Graphics Card at $2,499

NVIDIA today introduced NVIDIA TITAN RTX, the world's most powerful desktop GPU, providing massive performance for AI research, data science and creative applications. Driven by the new NVIDIA Turing architecture, TITAN RTX - dubbed T-Rex - delivers 130 teraflops of deep learning performance and 11 GigaRays of ray-tracing performance.

"Turing is NVIDIA's biggest advance in a decade - fusing shaders, ray tracing, and deep learning to reinvent the GPU," said Jensen Huang, founder and CEO of NVIDIA. "The introduction of T-Rex puts Turing within reach of millions of the most demanding PC users - developers, scientists and content creators."

Intel Puts Out Additional "Cascade Lake" Performance Numbers

Intel late last week put out additional real-world HPC and AI compute performance numbers for its upcoming "Cascade Lake" 2x 48-core (96 cores in total) machine, compared to AMD's EPYC 7601 2x 32-core (64 cores in total) machine. You'll recall that on November 5th, the company put out Linpack, STREAM Triad, and Deep Learning Inference numbers, which are all synthetic benchmarks. In a new set of slides, the company revealed a few real-world HPC/AI application performance numbers, including MIMD Lattice Computation (MILC), Weather Research and Forecasting (WRF), OpenFOAM, NAMD scalable molecular dynamics, and YASK.

The Intel 96-core setup with its 12-channel memory interface belts out up to 1.5X performance in MILC, up to 1.6X in WRF and OpenFOAM, up to 2.1X in NAMD, and up to 3.1X in YASK, compared to the AMD EPYC 7601 2P machine. The company also put out system configuration and disclaimer slides with the usual forward-looking CYA. "Cascade Lake" will be Intel's main competitor to AMD's EPYC "Rome" 64-core 2P-capable processor, which AMD unveiled in late 2018 ahead of a 2019 launch. Intel's product is a multi-chip module of two 24~28 core dies, with a 2x 6-channel DDR4 memory interface.
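
Since the Intel setup carries 50% more cores (96 vs. 64), it is worth normalizing those ratios per core to see how much of the gain is core count alone. A back-of-the-envelope sketch using only the figures quoted above:

```python
# Per-core normalization of the speedups quoted above (Intel 96 cores vs.
# AMD 64 cores). A per-core ratio of 1.0 means the overall gain is fully
# explained by the extra cores.

INTEL_CORES, AMD_CORES = 96, 64

def per_core(raw_speedup: float) -> float:
    return raw_speedup * AMD_CORES / INTEL_CORES

for workload, speedup in [("MILC", 1.5), ("WRF", 1.6), ("OpenFOAM", 1.6),
                          ("NAMD", 2.1), ("YASK", 3.1)]:
    print(f"{workload}: {speedup:.1f}x overall, {per_core(speedup):.2f}x per core")
# MILC lands at roughly per-core parity; YASK keeps a ~2x per-core lead.
```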

Intel Announces Cascade Lake Advanced Performance and Xeon E-2100

Intel today announced two new members of its Intel Xeon processor portfolio: Cascade Lake advanced performance (expected to be released in the first half of 2019) and the Intel Xeon E-2100 processor for entry-level servers (general availability today). These two new product families build upon Intel's foundation of 20 years of Xeon platform leadership and give customers even more flexibility to pick the right solution for their needs.

"We remain highly focused on delivering a wide range of workload-optimized solutions that best meet our customers' system requirements. The addition of Cascade Lake advanced performance CPUs and Xeon E-2100 processors to our Intel Xeon processor lineup once again demonstrates our commitment to delivering performance-optimized solutions to a wide range of customers," said Lisa Spelman, Intel vice president and general manager of Intel Xeon products and data center marketing.

Intel and Philips Accelerate Deep Learning Inference on CPUs in Medical Imaging

Using Intel Xeon Scalable processors and the OpenVINO toolkit, Intel and Philips tested two healthcare use cases for deep learning inference models: one on X-rays of bones for bone-age-prediction modeling, the other on CT scans of lungs for lung segmentation. In these tests, Intel and Philips achieved a speed improvement of 188 times for the bone-age-prediction model, and a 38 times speed improvement for the lung-segmentation model over the baseline measurements.
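
For a feel of the toolkit named above, below is a minimal OpenVINO inference sketch. The model filename and input shape are hypothetical placeholders (Philips' actual pipeline is not public); the flow shown, reading an IR model, compiling it for the CPU, and running a batch, is the toolkit's standard Python usage.

```python
# Minimal OpenVINO CPU-inference sketch; the filename and input shape are
# hypothetical placeholders, not Philips' actual models.
import numpy as np
from openvino.runtime import Core

core = Core()
# Read a model previously converted to OpenVINO IR format (.xml + .bin)
model = core.read_model("lung_segmentation.xml")
compiled = core.compile_model(model, device_name="CPU")  # run on the Xeon CPU

# Dummy NCHW float32 batch standing in for a preprocessed CT slice
batch = np.random.rand(1, 1, 512, 512).astype(np.float32)
result = compiled([batch])[compiled.output(0)]
print(result.shape)
```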

"Intel Xeon Scalable processors appear to be the right solution for this type of AI workload. Our customers can use their existing hardware to its maximum potential, while still aiming to achieve quality output resolution at exceptional speeds," said Vijayananda J., chief architect and fellow, Data Science and AI at Philips HealthSuite Insights.

Intel "Cooper Lake" Latest 14nm Stopgap Between "Cascade Lake" and "Ice Lake"

With no end to its 10 nm transition woes in sight (at least not until late 2019), Intel is left refining its existing CPU micro-architectures on the 14 nanometer node. The client-desktop segment sees the introduction of "Whiskey Lake" (aka Coffee Lake Refresh) later this year, while the enterprise segment gets the 14 nm "Cascade Lake." To its credit, "Cascade Lake" introduces a few major platform innovations, such as support for Optane Persistent Memory, silicon-level hardening against recent security vulnerabilities, and Deep Learning Boost, hardware acceleration of neural-net workloads through the new VNNI (Vector Neural Network Instructions). "Cascade Lake" makes its debut towards the end of 2018. It will be succeeded in 2019 not by "Ice Lake," but by the new "Cooper Lake" architecture.

"Cooper Lake" is a refresh of "Cascade Lake," and a stopgap in Intel's saga of getting 10 nm right, so it could build "Ice Lake" on it. It will be built on the final (hopefully) iteration of the 14 nm node. It will share its platform with "Cascade Lake," and so Optane Persistent Memory support carriers over. What's changed is the Deep Learning Boost feature-set, which will be augmented with a few new instructions, including BFLOAT16 (a possible half-precision floating point instruction). Intel could also be presented with the opportunity to crank up clock speeds across the board.

GIGABYTE Announces Two New Powerful Deep Learning Engines

GIGABYTE, an industry leader in server hardware for high performance computing, has released two new powerful 4U GPU servers to bring massive parallel computing capabilities into your datacenter: the 8 x SXM2 GPU G481-S80, and the 10 x GPU G481-HA0. Both products offer some of the highest GPU density available in this form factor.

As artificial intelligence becomes more widespread in our daily lives, in areas such as image recognition, autonomous vehicles, and medical research, more organizations need deep learning capabilities in their datacenters. Deep learning requires a powerful engine that can handle the massive volumes of data processing involved. GIGABYTE is proud to provide our customers with two new solutions for such an engine.