News Posts matching #Machine Learning


Unreal Engine 5.4 is Now Available With Improvements to Nanite, AI and Machine Learning, TSR, and More

Unreal Engine 5.4 is here, and it's packed with new features and improvements to performance, visual fidelity, and productivity that will benefit game developers and creators across industries. With this release, we're delivering the toolsets we've been using internally to build and ship Fortnite Chapter 5, Rocket Racing, Fortnite Festival, and LEGO Fortnite. Here are some of the highlights.

Animation
Character rigging and animation authoring
This release sees substantial updates to Unreal Engine's built-in animation toolset, enabling you to quickly, easily, and enjoyably rig characters and author animation directly in engine, without the frustrating and time-consuming need to round trip to external applications. With an Experimental new Modular Control Rig feature, you can build animation rigs from understandable modular parts instead of complex granular graphs, while Automatic Retargeting makes it easier to get great results when reusing bipedal character animations. There are also extensions to the Skeletal Editor and a suite of new deformer functions to make the Deformer Graph more accessible.

ASUS Unveils ProArt Display and PC Solutions for AI, XR, and Virtual Production at NAB Show 2024

ASUS today announced its participation in the upcoming NAB Show 2024, themed "A Glimpse into Tomorrow's Tech." Visitors to the ASUS booth (C2934, LVCC Central Hall, Exhibition Time: April 14-17, 2024) will have the opportunity to explore the future of content creation with hands-on experiences and demonstrations of groundbreaking innovations. The ASUS showcase will highlight a range of cutting-edge technology ideal for the challenges of XR and virtual production, including the ProArt Display PA32KCX, the world's first 8K Mini LED professional monitor, in addition to AI-powered workstations and color management solutions.

Shooting in 8K gives creators at the cutting-edge of XR and virtual production exceptionally high-resolution footage, affording them wide flexibility in post-production without sacrificing the ability to produce final products in a full 4K resolution. To allow these creators to work with raw 8K footage in full detail, ASUS today announced the ProArt Display PA32KCX, the world's first 8K Mini LED professional monitor. This 32-inch 8K (7680 x 4320) display offers an average ΔE value of less than one for world-leading color accuracy, and it covers 97% of the cinema-grade DCI-P3 color gamut for beautifully saturated color reproduction. With its 4096-zone Mini LED backlight capable of 1200 nits peak brightness and industry-leading 1000 nits full-screen sustained brightness, the ProArt Display PA32KCX is an ideal candidate for HDR workflows. It supports multiple HDR metadata formats, including HLG and HDR10, allowing creators to check how content will appear for a wide range of target displays before it is sent off for final delivery.

Microsoft Auto-updating Eligible Windows 11 PCs to Version 23H2

Windows 11 version 23H2 started rolling out last October, but many users of Microsoft's flagship operating system opted out of an upgrade, thanks to a handy "optional" toggle. News outlets have latched onto a freshly published (February 20) Windows 11 "Release Health" notice—the official Microsoft dashboard alert states that the Windows 11 2023 Update "is now entering a new rollout phase." Fastidious users will not be happy to discover that "eligible Windows 11 devices" are now subject to an automatic bump up to version 23H2. Very passive-aggressive tactics have been utilized in the past—Microsoft is seemingly eager to get its audience upgraded onto its latest and greatest feature-rich experience.

According to NeoWin, an official announcement from last week alerted users to an "impending end of optional preview updates on Windows 11 22H2." Yesterday's "23H2" dashboard confessional provided a little bit more context—unsurprisingly involving artificial intelligence: "This automatic update targets Windows 11 devices that have reached or are approaching end of servicing, and it follows the machine learning-based (ML) training we have utilized so far. We will continue to train our intelligent ML model to safely roll out this new Windows version in phases to deliver a smooth update experience."

AMD ROCm 6.0 Adds Support for Radeon PRO W7800 & RX 7900 GRE GPUs

Building on our previously announced support of the AMD Radeon RX 7900 XT, XTX and Radeon PRO W7900 GPUs with AMD ROCm 5.7 and PyTorch, we are now expanding our client-based ML Development offering, both from the hardware and software side with AMD ROCm 6.0. Firstly, AI researchers and ML engineers can now also develop on Radeon PRO W7800 and on Radeon RX 7900 GRE GPUs. With support for such a broad product portfolio, AMD is helping the AI community to get access to desktop graphics cards at even more price points and at different performance levels.

Furthermore, we are complementing our solution stack with support for ONNX Runtime. ONNX, short for Open Neural Network Exchange, is an intermediary Machine Learning framework used to convert AI models between different ML frameworks. As a result, users can now perform inference on a wider range of source data on local AMD hardware. This also adds INT8 via MIGraphX—AMD's own graph inference engine—to the available data types (including FP32 and FP16). With AMD ROCm 6.0, we are continuing our support for the PyTorch framework, bringing mixed-precision FP32/FP16 training to Machine Learning workflows.
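As a rough illustration of the mixed-precision workflow described above, the sketch below uses PyTorch's standard automatic mixed precision (AMP) API; on ROCm builds of PyTorch, supported Radeon GPUs are addressed through the familiar "cuda" device alias. The model, data, and hyperparameters are placeholders, not an AMD-provided example.

# Minimal sketch: FP16/FP32 mixed-precision training with PyTorch AMP.
# Assumes a ROCm- or CUDA-enabled PyTorch build with a supported GPU.
import torch
import torch.nn as nn

device = torch.device("cuda")
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()            # keeps FP16 gradients numerically stable
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    x = torch.randn(64, 512, device=device)     # stand-in for a real mini-batch
    y = torch.randint(0, 10, (64,), device=device)
    optimizer.zero_grad(set_to_none=True)
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = loss_fn(model(x), y)              # forward pass runs in FP16 where safe
    scaler.scale(loss).backward()                # backward pass on the scaled loss
    scaler.step(optimizer)
    scaler.update()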

Nubis Communications and Alphawave Semi Showcase First Demonstration of Optical PCI Express 6.0 Technology

Nubis Communications, Inc., provider of low-latency, high-density optical interconnect (HDI/O), and Alphawave Semi (LN: AWE), a global leader in high-speed connectivity and compute silicon for the world's technology infrastructure, today announced their upcoming demonstration of PCI Express 6.0 technology driving over an optical link at 64 GT/s per lane. Data Center providers are exploring the use of PCIe over Optics to greatly expand the reach and flexibility of the interconnect for memory, CPUs, GPUs, and custom silicon accelerators to enable more scalable and energy-efficient clusters for Artificial Intelligence and Machine Learning (AI/ML) architectures.

Nubis Communications and Alphawave Semi will be showing a live demonstration in the Tektronix booth at DesignCon, the leading conference for advanced chip, board, and system design technologies. An Alphawave Semi PCIe Subsystem with PiCORE Controller IP and PipeCORE PHY will directly drive and receive PCIe 6.0 traffic through a Nubis XT1600 linear optical engine to demonstrate a PCIe 6.0 optical link at 64 GT/s per fiber, with optical output waveform measured on a Tektronix sampling scope with a high-speed optical probe.
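For context on the 64 GT/s per-lane figure, here is a back-of-envelope sketch of raw PCIe 6.0 link bandwidth at various lane widths; it ignores FLIT, FEC, and protocol overhead, so deliverable payload throughput is somewhat lower than these raw numbers.

# Rough arithmetic: raw PCIe 6.0 bandwidth at 64 GT/s per lane, per direction.
GT_PER_LANE = 64e9                      # 64 GT/s, roughly one payload bit per transfer
for lanes in (1, 4, 8, 16):
    raw_gbytes = GT_PER_LANE * lanes / 8 / 1e9
    print(f"x{lanes:<2} link: ~{raw_gbytes:.0f} GB/s per direction (raw, before overhead)")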

Nuvoton Unveils New Production-Ready Endpoint AI Platform for Machine Learning

Nuvoton is pleased to announce its new Endpoint AI Platform to accelerate the development of fully-featured microcontroller (MCU) AI products. These solutions are enabled by Nuvoton's powerful new MCU and MPU silicon, including the NuMicro M55M1 equipped with Ethos U55 NPU, NuMicro MA35D1, and NuMicro M467 series. These MCUs are a valuable addition to the modern AI-centric computing toolkit and demonstrate how Nuvoton continues to work closely with Arm and other companies to develop a user-friendly and complete Endpoint AI Ecosystem.

Development on these platforms is made easy by Nuvoton's NuEdgeWise: a well-rounded, simple-to-adopt tool for machine learning (ML) development that is nonetheless suitable for cutting-edge tasks. This powerful core hardware, combined with uniquely rich development tools, cements Nuvoton's reputation as a leading microcontroller platform provider. These new single-chip-based platforms are ideal for applications including smart home appliances and security, smart city services, industry, agriculture, entertainment, environmental protection, education, highly accurate voice-control tasks, and sports, health, and fitness.
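To give a sense of what endpoint-AI preparation typically involves, the sketch below quantizes a small Keras model to int8 with the generic TensorFlow Lite converter, the usual prerequisite before deploying to an MCU-class NPU such as the Ethos-U55. This is an illustrative, hypothetical flow with made-up layer sizes, not Nuvoton's NuEdgeWise tooling.

# Generic endpoint-AI sketch: post-training int8 quantization for MCU deployment.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 10, 1)),        # e.g. audio features for keyword spotting
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(4, activation="softmax"),
])

def representative_data():                            # calibration samples for quantization
    for _ in range(100):
        yield [np.random.rand(1, 49, 10, 1).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
open("model_int8.tflite", "wb").write(converter.convert())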

MAINGEAR Unveils Powerful Workstation PCs Designed for Creatives and Professionals

MAINGEAR, the leader in premium-quality, high-performance, custom PCs, today announced the launch of its latest lineup of Pro Series Workstation PCs, meticulously engineered and configurable with the industry's most powerful components, to cater to the diverse needs of professionals across multiple industries.

Ideal for game developers, photo editors, graphics designers, videographers, 3D rendering artists, music producers, CAD engineers, data scientists, and AI/Machine Learning developers, the MAINGEAR ProWS Series introduces a range of desktop workstations crafted to crush the most intensive tasks, elevate productivity and streamline workflow.

NVIDIA Introduces Generative AI Foundry Service on Microsoft Azure for Enterprises and Startups Worldwide

NVIDIA today introduced an AI foundry service to supercharge the development and tuning of custom generative AI applications for enterprises and startups deploying on Microsoft Azure.

The NVIDIA AI foundry service pulls together three elements—a collection of NVIDIA AI Foundation Models, NVIDIA NeMo framework and tools, and NVIDIA DGX Cloud AI supercomputing services—that give enterprises an end-to-end solution for creating custom generative AI models. Businesses can then deploy their customized models with NVIDIA AI Enterprise software to power generative AI applications, including intelligent search, summarization and content generation.

Google Introduces Cloud TPU v5e and Announces A3 Instance Availability

We're at a once-in-a-generation inflection point in computing. The traditional ways of designing and building computing infrastructure are no longer adequate for the exponentially growing demands of workloads like generative AI and LLMs. In fact, the number of parameters in LLMs has increased by 10x per year over the past five years. As a result, customers need AI-optimized infrastructure that is both cost effective and scalable.
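A quick back-of-envelope check of that growth claim: 10x per year compounded over five years implies roughly a 100,000x increase in parameter count.

# Simple arithmetic behind the stated growth rate.
growth_per_year, years = 10, 5
print(f"Implied total growth over {years} years: {growth_per_year ** years:,}x")  # 100,000x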

For two decades, Google has built some of the industry's leading AI capabilities: from the creation of Google's Transformer architecture that makes gen AI possible, to our AI-optimized infrastructure, which is built to deliver the global scale and performance required by Google products that serve billions of users like YouTube, Gmail, Google Maps, Google Play, and Android. We are excited to bring decades of innovation and research to Google Cloud customers as they pursue transformative opportunities in AI. We offer a complete solution for AI, from computing infrastructure optimized for AI to the end-to-end software and services that support the full lifecycle of model training, tuning, and serving at global scale.

AMD's CTO Discusses Founding of Ultra Ethernet Consortium

Mark Papermaster, AMD's Chief Technology Officer and Executive Vice President of Technology and Engineering announced: "Over the past 50 years, Ethernet has grown to dominate general networking. One of its key strengths is flexibility - the ability to adapt to different workloads, scale and computing environments. One of the places that it hasn't been well-known, though, is in high-performance networking environments.

Now, the Ultra Ethernet Consortium (UEC) was formed by leading technology companies to focus on tuning the Ethernet foundation for high-performance Artificial Intelligence, Machine Learning, and High-Performance Computing (AI/ML/HPC) workloads. This includes work at the Physical, Link, Transport, and Software layers with robust security and congestion protections.

NVIDIA Triton Inference Server Running A100 Tensor Core GPUs Boosts Bing Advert Delivery

Inference software enables shift to NVIDIA A100 Tensor Core GPUs, delivering 7x throughput for the search giant. Jiusheng Chen's team just got accelerated. They're delivering personalized ads to users of Microsoft Bing with 7x throughput at reduced cost, thanks to NVIDIA Triton Inference Server running on NVIDIA A100 Tensor Core GPUs. It's an amazing achievement for the principal software engineering manager and his crew.

Tuning a Complex System
Bing's ad service uses hundreds of models that are constantly evolving. Each must respond to a request within as little as 10 milliseconds, about 10x faster than the blink of an eye. The latest speedup got its start with two innovations the team delivered to make AI models run faster: Bang and EL-Attention. Together, they apply sophisticated techniques to do more work in less time with less computer memory. Model training was based on Azure Machine Learning for efficiency.
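For readers unfamiliar with Triton, below is a minimal client-side sketch using the tritonclient Python package over HTTP. The server address, model name, and tensor names are placeholders and do not reflect Bing's production configuration.

# Minimal sketch: querying a model hosted by NVIDIA Triton Inference Server over HTTP.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

features = np.random.rand(1, 128).astype(np.float32)      # stand-in feature vector
inp = httpclient.InferInput("INPUT__0", list(features.shape), "FP32")
inp.set_data_from_numpy(features)
out = httpclient.InferRequestedOutput("OUTPUT__0")

result = client.infer(model_name="ad_ranker", inputs=[inp], outputs=[out])
print(result.as_numpy("OUTPUT__0"))                        # model scores returned by the server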

Apple Introduces M2 Ultra

Apple today announced M2 Ultra, a new system on a chip (SoC) that delivers huge performance increases to the Mac and completes the M2 family. M2 Ultra is the largest and most capable chip Apple has ever created, and it makes the new Mac Studio and Mac Pro the most powerful Mac desktops ever made. M2 Ultra is built using a second-generation 5-nanometer process and uses Apple's groundbreaking UltraFusion technology to connect the die of two M2 Max chips, doubling the performance. M2 Ultra consists of 134 billion transistors—20 billion more than M1 Ultra. Its unified memory architecture supports up to a breakthrough 192 GB of memory capacity, which is 50 percent more than M1 Ultra, and features 800 GB/s of memory bandwidth—twice that of M2 Max. M2 Ultra features a more powerful CPU that's 20 percent faster than M1 Ultra, a larger GPU that's up to 30 percent faster, and a Neural Engine that's up to 40 percent faster. It also features a media engine with twice the capabilities of M2 Max for blazing ProRes acceleration. With all these advancements, M2 Ultra takes Mac performance to a whole new level yet again.

"M2 Ultra delivers astonishing performance and capabilities for our pro users' most demanding workflows, while maintaining Apple silicon's industry-leading power efficiency," said Johny Srouji, Apple's senior vice president of Hardware Technologies. "With huge performance gains in the CPU, GPU, and Neural Engine, combined with massive memory bandwidth in a single SoC, M2 Ultra is the world's most powerful chip ever created for a personal computer."

NVIDIA Collaborates With Microsoft to Accelerate Enterprise-Ready Generative AI

NVIDIA today announced that it is integrating its NVIDIA AI Enterprise software into Microsoft's Azure Machine Learning to help enterprises accelerate their AI initiatives. The integration will create a secure, enterprise-ready platform that enables Azure customers worldwide to quickly build, deploy and manage customized applications using the more than 100 NVIDIA AI frameworks and tools that come fully supported in NVIDIA AI Enterprise, the software layer of NVIDIA's AI platform.

"With the coming wave of generative AI applications, enterprises are seeking secure accelerated tools and services that drive innovation," said Manuvir Das, vice president of enterprise computing at NVIDIA. "The combination of NVIDIA AI Enterprise software and Azure Machine Learning will help enterprises speed up their AI initiatives with a straight, efficient path from development to production."

TYAN to Showcase Cloud Platforms for Data Centers at CloudFest 2023

TYAN, an industry-leading server platform design manufacturer and a MiTAC Computing Technology Corporation subsidiary, will showcase its latest cloud server platforms powered by AMD EPYC 9004 Series processors and 4th Gen Intel Xeon Scalable processors for next-generation data centers at CloudFest 2023, Booth #H12 in Europa-Park from March 21-23.

"With the exponential advancement of technologies like AI and Machine Learning, data centers require robust hardware and infrastructure to handle complex computations while running AI workloads and processing big data," said Danny Hsu, Vice President of MiTAC Computing Technology Corporation's Server Infrastructure BU. "TYAN's cloud server platforms with storage performance and computing capability can support the ever-increasing demand for computational power and data processing."

UK Government Seeks to Invest £900 Million in Supercomputer, Native Research into Advanced AI Deemed Essential

The UK Treasury has set aside a budget of £900 million to invest in the development of a supercomputer that would be powerful enough to chew through more than one billion billion simple calculations a second. A new exascale computer would fit the bill, for utilization by newly established advanced AI research bodies. It is speculated that one key goal is to establish a "BritGPT" system. The British government has been keeping tabs on recent breakthroughs in large language models, the most notable example being OpenAI's ChatGPT. Ambitions to match such efforts were revealed in a statement, with the emphasis: "to advance UK sovereign capability in foundation models, including large language models."

The current roster of United Kingdom-based supercomputers looks to be unfit for the task of training complex AI models. In light of being outpaced by drives in other countries to ramp up supercomputer budgets, the UK Government outlined its own future investments: "Because AI needs computing horsepower, I today commit around £900 million of funding, for an exascale supercomputer," said the chancellor, Jeremy Hunt. The government has declared that quantum technologies will receive an investment of £2.5 billion over the next decade. Proponents of the technology have declared that it will supercharge machine learning.

AMD Announces Appointment of New Corporate Fellows

AMD today announced the appointment of five technical leaders to the role of AMD Corporate Fellow. These appointments recognize each leader's significant impact on semiconductor innovation across various areas, from graphics architecture to advanced packaging. "David, Nathan, Suresh, Ben and Ralph - whose engineering contributions have already left an indelible mark on our industry - represent the best of our innovation culture," said Mark Papermaster, chief technology officer and executive vice president of Technology and Engineering at AMD. "Their appointments to Corporate Fellow will enable AMD to innovate in new dimensions as we work to deliver the most significant breakthroughs in high-performance computing in the decade ahead."

Appointment to AMD Corporate Fellow is an honor bestowed on the most accomplished AMD innovators. AMD Corporate Fellows are appointed after a rigorous review process that assesses not only specific technical contributions to the company, but also involvement in the industry, mentoring of others and improving the long-term strategic position of the company. Currently, only 13 engineers at AMD hold the title of Corporate Fellow.

Hewlett Packard Enterprise Brings HPE Cray EX and HPE Cray XD Supercomputers to Enterprise Customers

Hewlett Packard Enterprise (NYSE: HPE) today announced it is making supercomputing accessible for more enterprises to harness insights, solve problems and innovate faster by delivering its world-leading, energy-efficient supercomputers in a smaller form factor and at a lower price point.

The expanded portfolio includes new HPE Cray EX and HPE Cray XD supercomputers, which are based on HPE's exascale innovation that delivers end-to-end, purpose-built technologies in compute, accelerated compute, interconnect, storage, software, and flexible power and cooling options. The supercomputers provide significant performance and AI-at-scale capabilities to tackle demanding, data-intensive workloads, speed up AI and machine learning initiatives, and accelerate innovation to deliver products and services to market faster.

Inventec's Rhyperior Is the Powerhouse GPU Accelerator System Every Business in the AI and ML World Needs

Taiwan-based leading server manufacturing company Inventec's powerhouse GPU accelerator system, Rhyperior, is everything any modern-day business needs in the digital era, especially those relying heavily on Artificial Intelligence (AI) and Machine Learning (ML). A unique and optimal combination of GPUs and CPUs, this 4U GPU accelerator system is based on the NVIDIA A100 Tensor Core GPU and 3rd Gen Intel Xeon processors (Whitley platform). Rhyperior is also equipped with NVIDIA NVSwitch to dramatically enhance performance, making it an effective tool for modern workloads.

In a world where technology is disrupting life as we know it, GPU acceleration is critical: it essentially speeds up processes that would otherwise take much longer. Acceleration boosts execution of complex computational problems that can be broken down into similar, parallel operations. In other words, an excellent accelerator can be a game changer for industries like gaming and healthcare, which increasingly rely on the latest technologies like AI and ML to deliver better, more robust solutions for consumers.
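A simple way to see the idea: a large matrix multiplication decomposes into many independent dot products that a GPU can execute in parallel. The sketch below times the same operation on CPU and GPU; the matrix size and any resulting speedup are purely illustrative and depend entirely on the hardware used.

# Illustrative CPU-vs-GPU timing of a parallelizable workload (matrix multiply).
import time
import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

t0 = time.perf_counter()
_ = a @ b                                   # CPU: work spread across a few cores
cpu_s = time.perf_counter() - t0

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()
    t0 = time.perf_counter()
    _ = a_gpu @ b_gpu                       # GPU: thousands of threads work in parallel
    torch.cuda.synchronize()
    gpu_s = time.perf_counter() - t0
    print(f"CPU: {cpu_s:.3f}s  GPU: {gpu_s:.3f}s")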

CXL Consortium Releases Compute Express Link 3.0 Specification to Expand Fabric Capabilities and Management

The CXL Consortium, an industry standards body dedicated to advancing Compute Express Link (CXL) technology, today announced the release of the CXL 3.0 specification. The CXL 3.0 specification expands on previous technology generations to increase scalability and to optimize system level flows with advanced switching and fabric capabilities, efficient peer-to-peer communications, and fine-grained resource sharing across multiple compute domains.

"Modern datacenters require heterogenous and composable architectures to support compute intensive workloads for applications such as Artificial Intelligence and Machine Learning - and we continue to evolve CXL technology to meet industry requirements," said Siamak Tavallaei, president, CXL Consortium. "Developed by our dedicated technical workgroup members, the CXL 3.0 specification will enable new usage models in composable disaggregated infrastructure."

Cerebras Systems Sets Record for Largest AI Models Ever Trained on A Single Device

Cerebras Systems, the pioneer in high performance artificial intelligence (AI) computing, today announced, for the first time ever, the ability to train models with up to 20 billion parameters on a single CS-2 system - a feat not possible on any other single device. By enabling a single CS-2 to train these models, Cerebras reduces the system engineering time necessary to run large natural language processing (NLP) models from months to minutes. It also eliminates one of the most painful aspects of NLP—namely the partitioning of the model across hundreds or thousands of small graphics processing units (GPUs).

"In NLP, bigger models are shown to be more accurate. But traditionally, only a very select few companies had the resources and expertise necessary to do the painstaking work of breaking up these large models and spreading them across hundreds or thousands of graphics processing units," said Andrew Feldman, CEO and Co-Founder of Cerebras Systems. "As a result, only very few companies could train large NLP models - it was too expensive, time-consuming and inaccessible for the rest of the industry. Today we are proud to democratize access to GPT-3 1.3B, GPT-J 6B, GPT-3 13B and GPT-NeoX 20B, enabling the entire AI ecosystem to set up large models in minutes and train them on a single CS-2."

SMART Modular Announces the SMART Kestral PCIe Optane Memory Add-in-Card to Enable Memory Expansion and Acceleration

SMART Modular Technologies, Inc. ("SMART"), a division of SGH and a global leader in memory solutions, solid-state drives, and hybrid storage products, announces its new SMART Kestral PCIe Optane Memory Add-in-Card (AIC), which is able to add up to 2 TB of Optane Memory expansion on a PCIe-Gen4-x16 or PCIe-Gen3-x16 interface independent of the motherboard CPU. SMART's Kestral AICs accelerate selected algorithms by offloading software-defined storage functions from the host CPU to the Intel FPGA on the AIC. SMART's Kestral memory AICs are ideal for hyperscale, data center, and other similar environments that run large memory applications, and would benefit from memory acceleration or system acceleration through computational storage.

"With the advancement of new interconnect standards such as CXL and OpenCAPI, SMART's new family of SMART Kestral AICs addresses the industry's need for a variety of new memory module form factors and interfaces for memory expansion and acceleration," stated Mike Rubino, SMART Modular's vice president of engineering. "SMART is able to leverage our many years of experience in developing and productizing controller-based memory solutions to meet today's emerging and continually evolving memory add-on needs of server and storage system customers."

Supermicro Breakthrough Universal GPU System - Supports All Major CPU, GPU, and Fabric Architectures

Super Micro Computer, Inc. (SMCI), a global leader in enterprise computing, storage, networking solutions, and green computing technology, has announced a revolutionary technology that simplifies large-scale GPU deployments in a future-proof design that supports yet-to-be-announced technologies. The Universal GPU server provides the ultimate flexibility in a resource-saving server.

The Universal GPU system architecture combines the latest technologies supporting multiple GPU form factors, CPU choices, storage, and networking options optimized together to deliver uniquely-configured and highly scalable systems. Systems can be optimized for each customer's specific Artificial Intelligence (AI), Machine Learning (ML), and High-Performance Computing (HPC) applications. Organizations worldwide are demanding new options for their next generation of computing environments, which have the thermal headroom for the next generation of CPUs and GPUs.

Google Uses Artificial Intelligence to Develop Faster and Smaller Hardware Accelerators

Designing Artificial Intelligence / Machine Learning hardware accelerators takes effort from hardware engineers working in conjunction with scientists in the AI/ML field itself. A few years ago, we started seeing AI incorporated into parts of electronic design automation (EDA) software tools, helping chip designers speed up the process of creating hardware. So far, AI has typically handled only a couple of tasks, such as placement and routing, and even having those automated is a huge deal. However, the power of AI for chip design is not going to stop there. Researchers at Google and UC Berkeley have built a research project in which AI designs and develops AI-tailored accelerators that are smaller and faster than anything humans have made.

In the published paper, the researchers present PRIME, a framework that creates AI processors based on a database of blueprints. The PRIME framework draws on an offline database containing accelerator designs and their corresponding performance metrics (e.g., latency and power) to design next-generation hardware accelerators. According to Google, PRIME can do so without further hardware simulation and produces processors that are ready for use. As per the paper, PRIME improves performance over state-of-the-art simulation-driven methods by as much as 1.2x-1.5x while reducing the required total simulation time by 93% to 99%. The framework is also capable of architecting accelerators for unseen applications.
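As a conceptual sketch of the offline, data-driven idea behind PRIME (not Google's actual implementation, which additionally trains a conservative surrogate to avoid over-optimistic out-of-distribution designs), the example below fits a simple surrogate model on logged design-to-latency pairs and then searches candidate designs without running any new simulations. All data here is synthetic and the design parameters are hypothetical.

# Conceptual sketch: surrogate-based offline search over accelerator design parameters.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
designs = rng.integers(1, 64, size=(500, 4))          # e.g. PE count, buffer sizes (synthetic)
latency = designs @ np.array([0.5, 1.2, 0.3, 0.8]) + rng.normal(0, 2, 500)  # logged metric

surrogate = RandomForestRegressor(n_estimators=200).fit(designs, latency)

candidates = rng.integers(1, 64, size=(10_000, 4))    # candidate design space to screen
predicted = surrogate.predict(candidates)
best = candidates[np.argmin(predicted)]
print("Design predicted to be fastest:", best)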

Tanzanite Silicon Solutions Demonstrates Industry's First CXL Based Memory Expansion and Memory Pooling Products

Tanzanite Silicon Solutions Inc., the leader in the development of Compute Express Link (CXL) based products, is unveiling its architectural vision and product roadmap with an SoC mapped to an FPGA proof-of-concept vehicle demonstrating Memory Expansion and Memory Pooling, with multi-host CXL based connectivity. Explosive demand for memory and compute to meet the needs of emerging applications such as Artificial Intelligence (AI), Machine Learning (ML), blockchain technology, and the metaverse is outpacing monolithic systems. A disaggregated data center design with composable components for CPU, memory, storage, GPU, and XPU is needed to provide flexible and dynamic pooling of resources to meet the varying demands of heterogeneous workloads in an optimal and efficient manner.

Tanzanite's visionary TanzanoidTZ architecture and purpose-built design of a "Smart Logic Interface Connector" (SLICTZ) SoC enables independent scaling and sharing of memory and compute in a pool with low latency within and across server racks. The Tanzanite solution provides a highly scalable architecture for exascale-level memory capacity and compute acceleration, supporting multiple industry-standard form factors, ranging from E1.S and E3.S modules to memory expansion boards and memory appliances.

Ceremorphic Exits Stealth Mode; Unveils Technology Plans to Deliver a New Architecture Specifically Designed for Reliable Performance Computing

Armed with more than 100 patents and leveraging multi-decade expertise in creating industry-leading silicon systems, Ceremorphic Inc. today announced its plans to deliver a complete silicon system that provides the performance needed for next-generation applications such as AI model training, HPC, automotive processing, drug discovery, and metaverse processing. Designed in advanced silicon geometry (TSMC 5 nm node), this new architecture was built from the ground up to solve today's high-performance computing problems in reliability, security and energy consumption to serve all performance-demanding market segments.

Ceremorphic was founded in April 2020 by industry veteran Dr. Venkat Mattela, the Founding CEO of Redpine Signals, which sold its wireless assets to Silicon Labs, Inc. in March 2020 for $308 million. Under his leadership, the team at Redpine Signals delivered breakthrough innovations and industry-first products that led to the development of an ultra-low-power wireless solution that outperformed products from industry giants in the wireless space by as much as 26 times on energy consumption. Ceremorphic leverages its own patented multi-thread processor technology, ThreadArch, combined with cutting-edge new technology developed by the silicon, algorithm, and software engineers currently employed by Ceremorphic. This team is leveraging its deep expertise and patented technology to design an ultra-low-power training supercomputing chip.