News Posts matching #Machine Learning


AMD also Announces Radeon Instinct MI8 and MI6 Machine Learning Accelerators

AMD also announced the Radeon Instinct MI8 and MI6 Machine Learning GPUs, based on Fiji and Polaris cores respectively. These parts make up the more "budget" end of what is still a decidedly non-consumer, high-end machine learning lineup. Still, with all parts built on fairly modern cores, they aim to make an impact in their respective segments.

Starting with the Radeon Instinct MI8, we have a Fiji-based core with the familiar 4 GB of HBM1 memory and 512 GB/s of total memory bandwidth. It offers 8.2 TFLOPS of either Single Precision or Half Precision floating point performance (so performance does not double at half precision, unlike its bigger Vega-based sibling, the MI25). It features 64 Compute Units.
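As a rough sanity check on those numbers (a back-of-the-envelope sketch, not an official AMD breakdown), the quoted throughput lines up with the usual GCN layout of 64 stream processors per Compute Unit and two FLOPs per clock per stream processor, assuming a clock around 1.0 GHz:

```python
# Hypothetical back-of-the-envelope check of the MI8's quoted 8.2 TFLOPS.
# Assumes the standard GCN layout of 64 stream processors per CU and 2 FLOPs
# per clock (fused multiply-add); the ~1.0 GHz clock is inferred, not official.
compute_units = 64
stream_processors = compute_units * 64        # 4096
clock_ghz = 1.0                               # assumed peak clock
tflops_fp32 = 2 * stream_processors * clock_ghz / 1000
print(f"MI8 peak FP32 ~= {tflops_fp32:.1f} TFLOPS")   # ~8.2
tflops_fp16 = tflops_fp32                     # Fiji lacks packed FP16, so the half precision rate is identical
```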

The Radeon Instinct MI6 is a Polaris-based card and slightly slower than the MI8, despite having four times the memory at 16 GB of GDDR5. The main reason is its smaller GPU: 36 Compute Units for a total of 2,304 stream processors, paired with much lower memory bandwidth at only 224 GB/s. That still adds up to a respectable 5.7 TFLOPS of single or half precision floating point performance (which, again, does not double at half precision as it does on Vega).
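The same hypothetical arithmetic works for the MI6; with 2,304 stream processors, a clock near 1.24 GHz (assumed here, in line with contemporary Polaris parts, not an official specification) reproduces the quoted figure:

```python
# Hypothetical back-of-the-envelope check of the MI6's quoted 5.7 TFLOPS.
stream_processors = 36 * 64                   # 36 CUs x 64 = 2304
clock_ghz = 1.24                              # assumed, not an official specification
tflops = 2 * stream_processors * clock_ghz / 1000
print(f"MI6 peak FP32/FP16 ~= {tflops:.1f} TFLOPS")   # ~5.7
```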

AMD Announces the Radeon Instinct MI25 Deep Learning Accelerator

AMD's EPYC launch presentation focused mainly on its line of datacenter processors, but fans of AMD's new Vega GPU lineup may be interested in another high-end product announced during the presentation. The Radeon Instinct MI25 is a deep learning accelerator and, as such, hardly intended for consumers, but it is Vega-based and potentially a very potent part of the company's portfolio all the same. Claiming a massive 24.6 TFLOPS of Half Precision floating point performance (12.3 TFLOPS Single Precision) from its 64 "next-gen" Compute Units, this accelerator is well suited to deep learning and machine intelligence workloads. It comes with no less than 16 GB of HBM2 memory, and has 484 GB/s of memory bandwidth to play with.
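To illustrate the doubled half precision figure, here is a hypothetical sketch of how Vega's packed FP16 math gets there: each 32-bit ALU lane can execute two FP16 operations per clock, so the FP16 rate is simply twice the FP32 rate. The ~1.5 GHz clock below is back-calculated from the quoted 12.3 TFLOPS, not an official number:

```python
# Hypothetical illustration of Vega's packed-FP16 "double rate" arithmetic.
stream_processors = 64 * 64                   # 64 "next-gen" CUs x 64 = 4096
clock_ghz = 1.5                               # assumed, inferred from 12.3 TFLOPS
tflops_fp32 = 2 * stream_processors * clock_ghz / 1000    # ~12.3
tflops_fp16 = tflops_fp32 * 2                 # two FP16 ops per 32-bit lane per clock -> ~24.6
print(f"FP32 ~= {tflops_fp32:.1f} TFLOPS, FP16 ~= {tflops_fp16:.1f} TFLOPS")
```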

ARM Reveals Its Plan for World Domination: Announces DynamIQ Technology

ARM processors have been making forays into hitherto untapped markets, with its technology and processor architectures scoring an ever-increasing number of design wins. Most recently, Microsoft itself announced a platform meant to use ARM processors in a server environment. Now, ARM has put forward its plan to ship a grand total of 100 billion chips in the 2017-2021 time frame.

To put that goal in perspective, ARM is looking to ship as many ARM-powered processors in the 2017-2021 time frame as it did between 1991 and 2017. This is no easy task, at least if ARM were to stay in its known markets, where it has already achieved near-total saturation. The plan: to widen the appeal of its processor designs with big bets on AI, automotive, and XR (which encompasses the Virtual Reality, Augmented Reality, and Mixed Reality markets), leveraging what ARM does best: hyper-efficient processors.

"Zoom and Enhance" to Become a Reality Thanks to Machine Learning

The one phrase from television that makes IT people and creative professionals cringe the most is "zoom and enhance" - the notion that you can zoom into a digital image and, at the push of a button, turn a pixelated blur into something with real detail, which lets CSI catch the bad guys. Up until now, this has been laughably impossible. Images are made up of dots called pixels, and the more pixels you have, the more detail your image can hold (its resolution). Zoom in far enough and you eventually get a colorful checkerboard that's proud of its identity.

Google is tapping into machine learning in an attempt to change this. The company has reportedly come up with a machine learning technique that attempts to reconstruct details in low-resolution images. Google is calling it RAISR (Rapid and Accurate Image Super-Resolution). It works by having the software learn the "edges" of a picture (portions of the image with drastic changes in color and brightness gradients) and attempt to reconstruct them. What sets it apart from conventional upscaling methods is the machine-learning component: the low-resolution image is analyzed so the software can derive the upscaling filters most effective for that particular image, in situ. While its application in law enforcement is tricky, and will likely only become a reality if a reasonably high court of law sets a spectacular precedent, this technology could have commercial applications in upscaling low-resolution movies to newer formats such as 4K Ultra HD, and perhaps even 8K.
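For the curious, below is a deliberately toy sketch of the general idea described above, not Google's implementation: a cheap upscale is refined by small filters learned per "edge bucket" (simplified here to the quantized gradient angle of each patch). All names and parameters are illustrative only:

```python
# Toy, hypothetical sketch of RAISR-style super-resolution: learn small
# per-edge-bucket filters from a low-res/high-res pair, then apply them to
# refine a cheap 2x upscale. Bucketing is simplified to gradient angle only.
import numpy as np

PATCH, BUCKETS = 5, 8                          # 5x5 filters, 8 gradient-angle buckets

def cheap_upscale(img, s=2):
    return np.kron(img, np.ones((s, s)))       # nearest-neighbour 2x upscale

def patches_and_buckets(img):
    """Yield (flattened 5x5 patch, angle bucket, centre row, centre col)."""
    gy, gx = np.gradient(img)
    r = PATCH // 2
    for i in range(r, img.shape[0] - r):
        for j in range(r, img.shape[1] - r):
            patch = img[i - r:i + r + 1, j - r:j + r + 1].ravel()
            angle = np.arctan2(gy[i, j], gx[i, j]) % np.pi
            yield patch, int(angle / np.pi * BUCKETS) % BUCKETS, i, j

def train(high_res):
    """Learn one least-squares filter per bucket from a single image pair."""
    low_res = high_res[::2, ::2]               # synthesise the low-res counterpart
    base = cheap_upscale(low_res)[:high_res.shape[0], :high_res.shape[1]]
    ata = np.zeros((BUCKETS, PATCH**2, PATCH**2))
    atb = np.zeros((BUCKETS, PATCH**2))
    for p, b, i, j in patches_and_buckets(base):
        ata[b] += np.outer(p, p)
        atb[b] += p * high_res[i, j]
    return np.stack([np.linalg.lstsq(ata[b], atb[b], rcond=None)[0]
                     for b in range(BUCKETS)])

def upscale(low_res, filters):
    """Cheap upscale, then re-estimate each pixel with its bucket's filter."""
    base = cheap_upscale(low_res)
    refined = base.copy()
    for p, b, i, j in patches_and_buckets(base):
        refined[i, j] = p @ filters[b]
    return refined

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    filters = train(rng.random((64, 64)))      # stand-in training image
    print(upscale(rng.random((32, 32)), filters).shape)   # (64, 64)
```

A real implementation would bucket on gradient strength and coherence as well as angle, train on many image pairs, and vectorise the per-pixel loops, but the two-step structure the article describes (cheap upscale, then learned edge-aware filtering) is the same.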