News Posts matching "Machine Learning"


Logic Supply Unveils Karbon 300 Compact Rugged PC, Built For IoT

Global industrial and IoT hardware manufacturer Logic Supply has combined the latest vision processing, security protocols, wireless communication technologies, and proven cloud architectures to create the Karbon 300 rugged fanless computer. The system has been engineered to help innovators overcome the limitations of deploying reliable computer hardware in challenging environments.

"Computing at the edge is increasingly at the core of today's Industry 4.0 and Industrial IoT solutions," says Logic Supply VP of Products Murat Erdogan. "These devices are being deployed in environments that would quickly destroy traditional computer hardware. The builders and creators we work with require a careful combination of connectivity, processing and environmental protections. With Karbon 300, we're providing the ideal mix of capabilities to help make the next generation of industry-shaping innovation a reality, and enable innovators to truly challenge what's possible."

QNAP Officially Launches the TS-2888X AI-Ready NAS for Machine Learning

QNAP Systems, Inc. today officially launched the TS-2888X, an AI-Ready NAS specifically optimized for AI model training. Built on powerful Intel Xeon W processors with up to 18 cores and employing a flash-optimized hybrid storage architecture for IOPS-intensive workloads, the TS-2888X also supports installing up to four high-end graphics cards and runs QNAP's AI developer package "QuAI". The TS-2888X packs everything required for machine learning, greatly reducing latency, accelerating data transfer, and eliminating performance bottlenecks caused by network connectivity to expedite AI implementation.

Google Cloud Introduces NVIDIA Tesla P4 GPUs, for $430 per Month

Today, we are excited to announce a new addition to the Google Cloud Platform (GCP) GPU family that's optimized for graphics-intensive applications and machine learning inference: the NVIDIA Tesla P4 GPU.

We've come a long way since we introduced our first-generation compute accelerator, the K80 GPU, adding along the way P100 and V100 GPUs that are optimized for machine learning and HPC workloads. The new P4 accelerators, now in beta, provide a good balance of price/performance for remote display applications and real-time machine learning inference.
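For those who want to experiment with the beta, the sketch below shows one way a P4 could be attached to a Compute Engine instance through the Google API Python client. It is a minimal sketch rather than official sample code: the project, zone, machine type and boot image are placeholder assumptions, and the nvidia-tesla-p4 accelerator type is only exposed in regions where the beta is available.

# Hedged sketch: requesting an NVIDIA Tesla P4 on a Compute Engine VM via the
# Google API Python client. Project, zone, machine type and image are placeholders.
from googleapiclient import discovery

project, zone = "my-project", "us-central1-a"        # assumptions, not from the article
compute = discovery.build("compute", "v1")

body = {
    "name": "p4-inference-vm",
    "machineType": f"zones/{zone}/machineTypes/n1-standard-4",
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-9"
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
    # Attach a single Tesla P4; GPU instances cannot live-migrate, so they
    # must be set to terminate on host maintenance.
    "guestAccelerators": [{
        "acceleratorType": f"zones/{zone}/acceleratorTypes/nvidia-tesla-p4",
        "acceleratorCount": 1,
    }],
    "scheduling": {"onHostMaintenance": "TERMINATE"},
}

compute.instances().insert(project=project, zone=zone, body=body).execute()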

Khronos Group Releases NNEF 1.0 Standard for Neural Network Exchange

The Khronos Group, an open consortium of leading hardware and software companies creating advanced acceleration standards, announces the release of the Neural Network Exchange Format (NNEF) 1.0 Provisional Specification for universal exchange of trained neural networks between training frameworks and inference engines. NNEF reduces machine learning deployment fragmentation by enabling a rich mix of neural network training tools and inference engines to be used by applications across a diverse range of devices and platforms. The release of NNEF 1.0 as a provisional specification enables feedback from the industry to be incorporated before the specification is finalized - comments and feedback are welcome on the NNEF GitHub repository.
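As a rough illustration of what consuming the format looks like, the snippet below uses the Python parser shipped in the KhronosGroup/NNEF-Tools repository to walk a trained network stored as NNEF. The file name is a placeholder, and the exact attribute names shown reflect our reading of that tooling rather than anything quoted in the announcement.

# Hedged sketch: loading an NNEF-formatted network with the Python parser from
# the KhronosGroup/NNEF-Tools repository. "model.nnef" is a placeholder path.
import nnef

graph = nnef.load_graph("model.nnef")      # parse the textual NNEF graph description
for op in graph.operations:                # walk the network's operations in order
    print(op.name, op.inputs, op.outputs)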

AMD also Announces Radeon Instinct MI8 and MI6 Machine Learning Accelerators

AMD also announced the Radeon Instinct MI8 and MI6 machine learning GPUs, based on Fiji and Polaris cores, respectively. These parts form the more "budget" end of what is still a decidedly non-consumer, high-end machine learning lineup. Still, with all parts using fairly modern cores, they aim to make an impact in their respective segments.

Starting with the Radeon Instinct MI8, we have a Fiji-based core with the familiar 4 GB of HBM1 memory and 512 GB/s of total memory bandwidth. It delivers 8.2 TFLOPS of either single-precision or half-precision floating point performance (so, unlike its bigger Vega-based sibling the MI25, performance does not double when moving to half precision). It features 64 compute units.

The Radeon Instinct MI6 is a Polaris-based card and slightly slower than the MI8, despite having four times as much memory at 16 GB of GDDR5. The likely reason is its lower memory bandwidth of only 224 GB/s. It also has fewer compute units, 36 in total, for 2304 stream processors. This all adds up to a still respectable 5.7 TFLOPS of single- or half-precision floating point performance (which, again, does not double at half precision as it does on Vega).
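For readers curious how those headline numbers are derived, the quoted figures line up with the usual peak-throughput formula of stream processors times two FMA operations per clock times clock speed. The short Python sketch below reproduces the arithmetic; the clock speeds are assumptions on our part, since the announcement does not quote them.

# Back-of-the-envelope check of the quoted peak-FLOPS figures.
# Peak TFLOPS = stream processors x 2 ops per clock (FMA) x clock (GHz) / 1000.
# The clock speeds below are assumptions; the announcement does not list them.
def peak_tflops(stream_processors, clock_ghz):
    return stream_processors * 2 * clock_ghz / 1000.0

print(peak_tflops(4096, 1.00))   # MI8: Fiji, 64 CUs x 64 SPs -> ~8.2 TFLOPS
print(peak_tflops(2304, 1.24))   # MI6: Polaris, 36 CUs x 64 SPs -> ~5.7 TFLOPS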

AMD Announces the Radeon Instinct MI25 Deep Learning Accelerator

AMD's EPYC launch presentation focused mainly on its line of datacenter processors, but fans of AMD's new Vega GPU lineup may be interested in another high-end product announced during the presentation. The Radeon Instinct MI25 is a deep learning accelerator, and as such is hardly intended for consumers, but it is Vega-based and potentially a very potent part of the company's portfolio all the same. Claiming a massive 24.6 TFLOPS of half-precision floating point performance (12.3 TFLOPS single precision) from its 64 "next-gen" compute units, this card is well suited to deep learning and machine intelligence oriented applications. It comes with no less than 16 GB of HBM2 memory and has 484 GB/s of memory bandwidth to play with.
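The MI25's numbers follow the same arithmetic as the MI8 and MI6 above, with the difference that Vega's packed math executes two half-precision operations per FP32 lane, which is where the doubling comes from. A brief continuation of the earlier sketch, with the roughly 1.5 GHz clock again being our assumption:

# MI25: 64 "next-gen" CUs x 64 stream processors, ~1.5 GHz (assumed clock).
sps = 64 * 64
fp32 = sps * 2 * 1.5 / 1000.0    # ~12.3 TFLOPS single precision
fp16 = fp32 * 2                  # ~24.6 TFLOPS half precision via packed math
print(fp32, fp16)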

ARM Reveals Its Plan for World Domination: Announces DynamIQ Technology

ARM processors have been making forays into markets where the company previously had only a shallow presence, with its technology and processor architectures securing an ever-increasing number of design wins. Most recently, Microsoft itself announced a platform meant to use ARM processors in a server environment. Now, ARM has put forward its plan for achieving a grand total of 100 billion chips shipped in the 2017-2021 time frame.

To put that goal in perspective, ARM is looking to ship as many ARM-powered processors in the 2017-2021 time frame as it did between 1991 and 2017. This is no easy task - at least not if ARM were to stay in its established markets, where it has already achieved almost total saturation. The plan: widen the appeal of its processor designs with big bets on the AI, automotive, and XR markets (XR encompassing virtual reality, augmented reality, and mixed reality), leveraging what ARM does best: hyper-efficient processors.

"Zoom and Enhance" to Become a Reality Thanks to Machine Learning

The one phrase from television that makes IT people and creative professionals cringe the most is "zoom and enhance" - the notion that you zoom into a digital image and, at the push of a button, it converts a pixellated image into something with details - which lets CSI catch the bad guys. Up until now, this has been laughably impossible. Images are made up of dots called pixels, and the more pixels you have, the more details you can have in your image (resolution). Zooming into images eventually shows you a colorful checkerboard that's proud of its identity.

Google is tapping into machine learning in an attempt to change this. The company has reportedly come up with a machine-learning technique that attempts to reconstruct details in low-resolution images, which it calls RAISR (Rapid and Accurate Image Super-Resolution). The technology works by having the software learn the "edges" of a picture (portions of the image with drastic changes in color and brightness gradients) and attempt to reconstruct them. What makes this different from conventional upscaling methods is its machine-learning component: the low-resolution image is analyzed so that the software can derive the upscaling filters most effective for that particular image, in situ. While its application in law enforcement is tricky, and will likely only become a reality once a reasonably high court of law sets a spectacular precedent, this technology could have commercial applications in upscaling low-resolution movies to newer formats such as 4K Ultra HD, and perhaps even 8K.
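To give a flavor of how a learned upscaler differs from plain interpolation, here is a heavily simplified sketch in the spirit of RAISR, not Google's actual code or algorithm: it trains a single least-squares filter that maps patches of a naively upscaled image back to the corresponding ground-truth pixels, whereas RAISR learns a whole bank of such filters indexed by each patch's gradient angle, strength and coherence, which is what lets it reconstruct edges rather than apply one average sharpening.

# Simplified, single-filter sketch of RAISR-style learned upscaling.
# Assumptions: 2x factor, one global filter instead of RAISR's gradient-hashed
# filter bank, and image dimensions divisible by the upscaling factor.
import numpy as np

def cheap_upscale(img, factor=2):
    # Naive nearest-neighbour upscale; stands in for the fast initial upsampler.
    return np.kron(img, np.ones((factor, factor)))

def train_filter(high_res_images, patch=7, factor=2):
    # Learn one linear filter that maps a patch of the cheap upscale to the
    # ground-truth pixel at its centre, via least squares.
    A, b = [], []
    r = patch // 2
    for hi in high_res_images:
        lo = hi[::factor, ::factor]              # simulate the low-res input
        up = cheap_upscale(lo, factor)
        for y in range(r, hi.shape[0] - r):
            for x in range(r, hi.shape[1] - r):
                A.append(up[y - r:y + r + 1, x - r:x + r + 1].ravel())
                b.append(hi[y, x])
    coeffs, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return coeffs.reshape(patch, patch)

def apply_filter(low_res, filt, factor=2):
    # Upscale naively, then re-estimate every pixel with the learned filter.
    up = cheap_upscale(low_res, factor)
    r = filt.shape[0] // 2
    out = up.copy()
    for y in range(r, up.shape[0] - r):
        for x in range(r, up.shape[1] - r):
            out[y, x] = np.sum(up[y - r:y + r + 1, x - r:x + r + 1] * filt)
    return out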