News Posts matching #AI


NVIDIA Collaborates With Microsoft to Accelerate Enterprise-Ready Generative AI

NVIDIA today announced that it is integrating its NVIDIA AI Enterprise software into Microsoft's Azure Machine Learning to help enterprises accelerate their AI initiatives. The integration will create a secure, enterprise-ready platform that enables Azure customers worldwide to quickly build, deploy and manage customized applications using the more than 100 NVIDIA AI frameworks and tools that come fully supported in NVIDIA AI Enterprise, the software layer of NVIDIA's AI platform.

"With the coming wave of generative AI applications, enterprises are seeking secure accelerated tools and services that drive innovation," said Manuvir Das, vice president of enterprise computing at NVIDIA. "The combination of NVIDIA AI Enterprise software and Azure Machine Learning will help enterprises speed up their AI initiatives with a straight, efficient path from development to production."

Dell and NVIDIA Introduce Project Helix for Secure On-Premises Generative AI

Dell Technologies and NVIDIA announced a joint initiative to make it easier for businesses to build and use generative AI models on-premises to quickly and securely deliver better customer service, market intelligence, enterprise search and a range of other capabilities. Project Helix will deliver a series of full-stack solutions with technical expertise and pre-built tools based on Dell and NVIDIA infrastructure and software. It includes a complete blueprint to help enterprises use their proprietary data and more easily deploy generative AI responsibly and accurately.

"Project Helix gives enterprises purpose-built AI models to more quickly and securely gain value from the immense amounts of data underused today," said Jeff Clarke, vice chairman and co-chief operating officer, Dell Technologies. "With highly scalable and efficient infrastructure, enterprises can create a new wave of generative AI solutions that can reinvent their industries."

"We are at a historic moment, when incredible advances in generative AI are intersecting with enterprise demand to do more with less," said Jensen Huang, founder and CEO, NVIDIA. "With Dell Technologies, we've designed extremely scalable, highly efficient infrastructure that enables enterprises to transform their business by securely using their own data to build and operate generative AI applications."

Adobe Introduces Generative AI Creative Co-Pilot in Photoshop

Today, Adobe unveiled Generative Fill in Photoshop, bringing Adobe Firefly generative AI capabilities directly into design workflows. The new Firefly-powered Generative Fill is the world's first co-pilot in creative and design workflows, giving users a magical new way to work by easily adding, extending or removing content from images non-destructively in seconds using simple text prompts. This beta release of Photoshop is Adobe's first Creative Cloud application to deeply integrate Firefly with an exciting roadmap ahead that will transform workflows across Creative Cloud, Document Cloud, Experience Cloud and Adobe Express.

Adobe has over a decade-long history of AI innovation, delivering hundreds of intelligent capabilities through Adobe Sensei into applications that hundreds of millions of people rely upon. Features like Neural Filters in Photoshop, Content Aware Fill in After Effects, Customer AI in Adobe Experience Platform and Liquid Mode in Acrobat empower Adobe customers to create, edit, measure, optimize and review billions of pieces of content. Firefly, Adobe's family of creative generative AI models, now brings even more precision, power, speed and ease directly into workflows where content is created or modified.

AMD Accelerates AI Adoption on Windows 11 With New Developer Tools for Ryzen AI

AI is one of the most transformative technologies to debut in the past 40 years, but taking full advantage of its capabilities requires more than just the latest processor or graphics card. Previous major shifts in computing, like the adoption of 64-bit software and the introduction of 3D-accelerated graphics in the late 1990s, required significant software support as well as sophisticated new hardware capabilities. Today, AMD and Microsoft are stepping up to this challenge, working together to create the building blocks developers and consumers will need to take full advantage of AI both today and in the future.

While many AI tasks can run on a conventional CPU or GPU, these often aren't the ideal places to execute a machine learning inference workload. As AI models and workloads evolve, it will be vital to use specialized AI engines on silicon designed specifically for AI tasks to deliver the right performance and experience to end users. Select new AMD Ryzen 7040 Series processors with Ryzen AI deliver cutting-edge AI features via a dedicated on-die processing block that's purpose-built to handle AI workloads - the first of its kind for x86 Windows 11 laptops. Integrating this functionality into ultrathin laptops will give these systems a level of AI performance they could not reach with more conventional processors alone.

Anthropic Raises $450 Million to Develop Next Generation AI Assistants

We are pleased to announce that we have raised $450 million in Series C funding led by Spark Capital with participation from Google, Salesforce Ventures, Sound Ventures, Zoom Ventures, and others. The funding will support our continued work developing helpful, harmless, and honest AI systems—including Claude, an AI assistant that can perform a wide variety of conversational and text processing tasks.

Anthropic was founded to build AI products that people can rely on and to generate research about the opportunities and risks of AI. Our CEO, Dario Amodei, says, "We are thrilled that these leading investors and technology companies are supporting Anthropic's mission: AI research and products that put safety at the frontier. The systems we are building are being designed to provide reliable AI services that can positively impact businesses and consumers now and in the future."

Intel Falcon Shores is Initially a GPU, Gaudi Accelerators to Disappear

During the ISC High Performance 2023 international conference, Intel announced interesting updates to its high-performance computing (HPC) and artificial intelligence (AI) roadmap. With the scrapping of Rialto Bridge and Lancaster Sound, Intel merged these accelerator lines into the Falcon Shores processor for HPC and AI, initially planned as a CPU+GPU solution on a single package. However, during its ISC 2023 talk, the company announced a change of plans: Falcon Shores is now a GPU-only solution destined for a 2025 launch. Originally, Intel wanted to combine x86-64 cores with Xe GPU cores to form an "XPU" module to power HPC and AI workloads. However, Intel did not see a point in forcing customers to choose between the specific CPU-to-GPU core ratios that an XPU accelerator would lock in. Instead, a regular GPU paired with a separate CPU is Intel's choice for now. In the future, as workloads become better defined, XPU solutions are still a possibility, just delayed from the original plan.

Regarding Intel's Gaudi accelerators, the story is about to end. The company originally paid $2 billion for Habana Labs and its Gaudi hardware. However, Intel now plans to stop developing Gaudi as a standalone accelerator and instead integrate its IP into the Falcon Shores GPU. Using a modular, tile-based architecture, the Falcon Shores GPU features standard Ethernet switching, up to 288 GB of HBM3 running at 9.8 TB/s throughput, I/O optimized for scaling, and support for the FP8 and FP16 floating-point precision needed for AI and other workloads. As noted, the creation of an XPU was premature, and the initial Falcon Shores GPU will now serve as an accelerator for HPC, AI, or a mix of both, depending on the specific application. You can see the roadmap below for more information.

Google Expands Flood Hub Platform's Global Reach

Natural disasters, like flooding, are increasing in frequency and intensity due to climate change, threatening people's safety and livelihood. It's estimated that flooding affects more than 250 million people globally each year and causes around $10 billion in economic damages.

As part of our work to use AI to address the climate crisis, today we're expanding our flood forecasting capabilities to 80 countries. With the addition of 60 new countries across Africa, the Asia-Pacific region, Europe, and South and Central America, our platform Flood Hub now includes some of the territories with the highest percentages of population exposed to flood risk and experiencing more extreme weather, covering 460 million people globally.

Intel Delivers AI-Accelerated HPC Performance

At the ISC High Performance Conference, Intel showcased leadership performance for high performance computing (HPC) and artificial intelligence (AI) workloads; shared its portfolio of future HPC and AI products, unified by the oneAPI open programming model; and announced an ambitious international effort to use the Aurora supercomputer to develop generative AI models for science and society.

"Intel is committed to serving the HPC and AI community with products that help customers and end-users make breakthrough discoveries faster," said Jeff McVeigh, Intel corporate vice president and general manager of the Super Compute Group. "Our product portfolio spanning Intel Xeon CPU Max Series, Intel Data Center GPU Max Series, 4th Generation Intel Xeon Scalable Processors and Habana Gaudi 2 are outperforming the competition on a variety of workloads, offering energy and total cost of ownership advantages, democratizing AI and providing choice, openness and flexibility."

RIKEN and Intel Collaborate on "Road to Exascale"

RIKEN and Intel Corporation (hereafter referred to as Intel) have signed a memorandum of understanding on collaboration and cooperation to accelerate joint research in next-generation computing fields such as AI (artificial intelligence), high-performance computing, and quantum computers. The signing ceremony took place on May 18, 2023. As part of this MOU, RIKEN will work with Intel Foundry Services (IFS) to prototype these new solutions.

NVIDIA Grace Drives Wave of New Energy-Efficient Arm Supercomputers

NVIDIA today announced a supercomputer built on the NVIDIA Grace CPU Superchip, adding to a wave of new energy-efficient supercomputers based on the Arm Neoverse platform. The Isambard 3 supercomputer to be based at the Bristol & Bath Science Park, in the U.K., will feature 384 Arm-based NVIDIA Grace CPU Superchips to power medical and scientific research, and is expected to deliver 6x the performance and energy efficiency of Isambard 2, placing it among Europe's most energy-efficient systems.

It will achieve about 2.7 petaflops of FP64 peak performance and consume less than 270 kilowatts of power, ranking it among the world's three greenest non-accelerated supercomputers. The project is being led by the University of Bristol, as part of the research consortium the GW4 Alliance, together with the universities of Bath, Cardiff and Exeter.

Supermicro Launches Industry's First NVIDIA HGX H100 8 and 4-GPU H100 Servers with Liquid Cooling

Supermicro, Inc., a Total IT Solution Provider for Cloud, AI/ML, Storage, and 5G/Edge, continues to expand its data center offerings with liquid cooled NVIDIA HGX H100 rack scale solutions. Advanced liquid cooling technologies entirely from Supermicro reduce the lead time for a complete installation, increase performance, and result in lower operating expenses while significantly reducing the PUE of data centers. Savings for a data center are estimated to be 40% for power when using Supermicro liquid cooling solutions compared to an air-cooled data center. In addition, up to 86% reduction in direct cooling costs compared to existing data centers may be realized.

"Supermicro continues to lead the industry supporting the demanding needs of AI workloads and modern data centers worldwide," said Charles Liang, president and CEO of Supermicro. "Our innovative GPU servers that use our liquid cooling technology significantly lower the power requirements of data centers. With the amount of power required to enable today's rapidly evolving large scale AI models, optimizing TCO and the Total Cost to Environment (TCE) is crucial to data center operators. We have proven expertise in designing and building entire racks of high-performance servers. These GPU systems are designed from the ground up for rack scale integration with liquid cooling to provide superior performance, efficiency, and ease of deployments, allowing us to meet our customers' requirements with a short lead time."

Samsung Trademark Applications Hint at Next Gen DRAM for HPC & AI Platforms

The Korea Intellectual Property Rights Information Service (KIPRIS) has been processing a bunch of trademark applications in recent weeks, submitted by Samsung Electronics Corporation. News outlets pointed out earlier this month that the South Korean multinational manufacturing conglomerate was attempting to secure the term "Snowbolt" as a moniker for an unreleased HBM3P DRAM-based product. Industry insiders and Samsung representatives have indicated that this high bandwidth memory (with 5 TB/s bandwidth per stack) will be featured in upcoming cloud servers and high-performance and AI computing platforms - slated for release later in 2023.

A Samsung-focused news outlet, SamMobile, has reported (on May 15) of further trademark applications for next generation DRAM (Dynamic Random Access Memory) products. Samsung has filed for two additional monikers - "Shinebolt" and "Flamebolt" - details published online show that these products share the same "designated goods" descriptors with the preceding "Snowbolt" registration: "DRAM modules with high bandwidth for use in high-performance computing equipment, artificial intelligence, and supercomputing equipment" and "DRAM with high bandwidth for use in graphic cards." Kye Hyun Kyung, CEO of Samsung Semiconductor, has been talking up his company's ambitions of competing with rival TSMC in providing cutting edge component technology, especially in the field of AI computing. It is too early to determine whether these "-bolt" DRAM products will be part of that competitive move, but it is good to know that speedier memory is on the way - future generation GPUs are set to benefit.

Server Shipments to Fall an Estimated 2.85% YoY in 2023

TrendForce reveals that alongside the four major CSPs reducing their procurement volumes, OEMs like Dell and HPE have also scaled back their annual shipment volume forecasts at some point between February and April, predicting YoY declines of 15% and 12%, respectively. Furthermore, server demand in China is facing headwinds due to geopolitical and economic challenges. Consequently, TrendForce projects a downward revision in global server shipment volumes for this year—a 2.85% YoY decrease to 13.835 million units.

TrendForce emphasizes that the server market in 1H23 remains pessimistic, with 1Q23 shipments experiencing a 15.9% QoQ decrease due to off-season factors and end-user inventory adjustments. The expected industry boom in 2Q23 failed to materialize, leading to a modest QoQ growth estimate of only 9.23%. Persistent influences on server shipments include OEMs lowering shipment volumes, subdued domestic demand in China, and continuous supply chain inventory adjustments. ESG issues have also led CSPs to prolong server lifecycles and reduce procurement volume. Moreover, OEMs are lengthening support periods for older platforms as businesses seek to control capital expenditures, further contributing to market strain.

TWS Showcases Enterprise-level Large-scale Traditional Chinese Language Models at the AIHPCcon Taiwan AI Supercomputing Conference

ASUS today announced that TWS, Taiwan's leading AI company, showcased its Formosa Foundation Model at AIHPCcon Taiwan AI Supercomputing Conference. The TWS Formosa Foundation Model is powered by the Taiwania 2 supercomputer and boasts an impressive scale of 176 billion parameters. The theme of this year's annual technology event, held on May 17th, was AI 2.0, Supercomputing, and the New Ecosystem. Numerous startups and AI 2.0 partners were invited to showcase their AI intelligence applications.

The Formosa Foundation Model combines the ability to comprehend and generate text with traditional Chinese semantics, offering enterprise-level generative AI solutions through a novel business model. These solutions provide flexibility, security, and rapid optimization tailored to industry applications while leveraging ecosystem partnerships, creating trusted AI 2.0 opportunities, and driving AI intelligence application innovation to capture the trends and opportunities in AI digital business.

Intel, BCG Collaborate to Deliver Enterprise-Grade, Secure Generative AI

Intel Corporation and Boston Consulting Group (BCG) today announced a strategic collaboration to enable generative artificial intelligence (GenAI) using end-to-end Intel AI hardware and software. The GenAI solution delivers fully custom and proprietary solutions to enterprise clients while keeping private data in the isolation of their trusted environments.

BCG leveraged Intel's AI supercomputer powered by Intel Xeon Scalable processors and AI-optimized Habana Gaudi hardware accelerators, as well as production-ready hybrid cloud-scale software. The Intel system powers BCG's GenAI model that delivers insights based on over 50 years' worth of highly confidential and proprietary data. In addition, all AI training and inferencing was completed within BCG's security perimeter.

Google Photos Augmented with AI-Powered Image Editor

Since its launch in 2015, Google Photos has used AI to help you get the most out of your memories - from automatically organizing and resurfacing your photos to helping you edit them with advanced tools like Magic Eraser and Photo Unblur. Today at I/O, we gave a sneak peek of Magic Editor, a new experimental editing experience that uses generative AI to help you reimagine your photos and make editing even easier.

Reimagine your photos with AI
With Magic Editor, you'll be able to make complex edits without pro-level editing skills. Using a combination of AI techniques, including generative AI, it will help you make edits to specific parts of an image - like the subject, sky or background - so you have even more control over the final look and feel of your photo. Sometimes the difference between a so-so picture and the perfect shot comes down to getting the framing just right. Magic Editor will help you improve the overall composition of your photo by allowing you to reposition the subject of your shot to the best spot.

AMD to Showcase Next-Generation Data Center and AI Technology at June 13 Livestream Event

Today, AMD (NASDAQ: AMD) announced the "AMD Data Center and AI Technology Premiere," an in-person and livestreamed event to showcase the company's growth strategy and expanding product portfolio and capabilities for data center and AI. AMD Chair and CEO Dr. Lisa Su will be joined by other AMD executives and key customers to detail new products and momentum across data center, AI, adaptive and high-performance computing solutions.

The live stream will start at 10 a.m. PT on Tuesday, June 13 at www.amd.com/datacenter as well as the AMD YouTube channel.

NVIDIA CEO and Founder Jensen Huang to Keynote Live at COMPUTEX 2023

TAITRA (Taiwan External Trade Development Council) today announced that NVIDIA founder and CEO Jensen Huang will deliver the keynote address in person at COMPUTEX 2023. The keynote will take place at the Taipei Nangang Exhibition Center Hall 2 on Monday, May 29, at 11:00 AM (UTC+8), and cover advanced developments in the fields of accelerated computing and artificial intelligence. Registration is now open, and a livestream and replay of the keynote will be available.

NVIDIA, the pioneer in accelerated computing that enabled the AI revolution, announced at its most recent GTC conference a series of breakthroughs in generative AI, simulation and collaboration that are boosting productivity and efficiency for leading companies around the world. At this year's COMPUTEX Forum, NVIDIA's Greg Estes, VP of Corporate Marketing and Developer Programs, will deliver a talk entitled "Racing Towards the Industrial Metaverse", sharing how NVIDIA and its partners are using Omniverse, generative AI, and accelerated computing to enable an exciting new era of 3D workflows. His talk will take place on Tuesday, May 30, from 2:30 to 3:00 p.m.

IBM to Replace 7,800 Jobs Using Automation and AI

With the rise of generative artificial intelligence (AI) and other AI tools, the new models pose a threat to workers around the globe. Today, as reported by Bloomberg News, we find out that International Business Machines, or IBM for short, is planning to replace thousands of jobs using automation and AI technology. According to the report, IBM Chief Executive Officer Arvind Krishna expects to pause hiring for roles that AI could replace in the coming years. The cuts will mainly affect non-customer-facing roles such as the back office. There are 26,000 people at IBM working in the back office, and the CEO noted that "I could easily see 30% of that getting replaced by AI and automation over a five-year period."

This translates to roughly 7,800 jobs impacted by AI over the next five years. The CEO also added that departments such as human resources could see a hiring slowdown or even a full suspension of hiring. IBM has 260,000 employees, and the company continues to hire for engineering and customer-facing roles.

"Godfather of AI" Geoffrey Hinton Departs Google, Voices Concern Over Dangers of AI

Geoffrey Hinton, British-Canadian psychologist, computer scientist, and 2018 Turing Award winner in deep learning, has departed the Google Brain team after a decade-long tenure. His research on AI and neural networks dating back to the 1980s has helped shape the current landscape of deep learning, neural processing, and artificial intelligence algorithms with direct and indirect contributions over the years. 2012's AlexNet, designed and developed in collaboration with his students Alex Krizhevsky and Ilya Sutskever, formed the modern backbone of computer vision and AI image recognition used today in Generative AI. Hinton joined Google when the company won the bid for the tiny startup he and his two students formed in the months following the reveal of AlexNet. Ilya Sutskever left their cohort at Google in 2015 to become co-founder and Chief Scientist of OpenAI; creators of ChatGPT and one of Google's most prominent competitors.

In an interview with the New York Times, Hinton said that he quit his position at Google so that he may speak freely about the risks of AI, and that a part of him regrets his life's work in the field. He said that during his time there, Google acted as a "proper steward" of AI development and was careful about releasing anything that might be harmful. His view of the industry shifted within the last year as Microsoft's Bing Chat took aim at Google's core search business, leading Google to respond with Bard in a manner more reactionary than deliberate. The concern is that as these companies battle for AI supremacy, they won't take proper precautions against bad-faith actors using the technologies to flood the internet with false photos, text, and even videos, to the point that the average person would no longer be able to tell what was real and what was generated from an AI prompt.

MIT Researchers Grow Transistors on Top of Silicon Wafers

MIT researchers have developed a groundbreaking technology that allows for the growth of 2D transition metal dichalcogenide (TMD) materials directly on fully fabricated silicon chips, enabling denser integrations. Conventional methods require temperatures of about 600°C, which can damage silicon transistors and circuits as they break down above 400°C. The MIT team overcame this challenge by creating a low-temperature growth process that preserves the chip's integrity, allowing 2D semiconductor transistors to be directly integrated on top of standard silicon circuits. The new approach grows a smooth, highly uniform layer across an entire 8-inch wafer, unlike previous methods that involved growing 2D materials elsewhere before transferring them to a chip or wafer. This process often led to imperfections that negatively impacted device and chip performance.

Additionally, the novel technology can grow a uniform layer of TMD material in less than an hour over 8-inch wafers, a significant improvement from previous methods that required over a day for a single layer. The enhanced speed and uniformity of this technology make it suitable for commercial applications, where 8-inch or larger wafers are essential. The researchers focused on molybdenum disulfide, a flexible, transparent 2D material with powerful electronic and photonic properties ideal for semiconductor transistors. They designed a new furnace for the metal-organic chemical vapor deposition process, which has separate low and high-temperature regions. The silicon wafer is placed in the low-temperature region while vaporized molybdenum and sulfur precursors flow into the furnace. Molybdenum remains in the low-temperature region, while the sulfur precursor decomposes in the high-temperature region before flowing back into the low-temperature region to grow molybdenum disulfide on the wafer surface.

NVIDIA H100 Compared to A100 for Training GPT Large Language Models

NVIDIA's H100 has recently become available to use via Cloud Service Providers (CSPs), and it was only a matter of time before someone decided to benchmark its performance and compare it to the previous generation's A100 GPU. Today, thanks to the benchmarks of MosaicML, a startup company led by the ex-CEO of Nervana and GM of Artificial Intelligence (AI) at Intel, Naveen Rao, we have some comparison between these two GPUs with a fascinating insight about the cost factor. Firstly, MosaicML has taken Generative Pre-trained Transformer (GPT) models of various sizes and trained them using bfloat16 and FP8 Floating Point precision formats. All training occurred on CoreWeave cloud GPU instances.

Regarding performance, the NVIDIA H100 GPU achieved anywhere from a 2.2x to 3.3x speedup. However, an interesting finding emerges when comparing the cost of running these GPUs in the cloud. CoreWeave prices the H100 SXM at $4.76/hr/GPU, while the A100 80 GB SXM gets $2.21/hr/GPU pricing. While the H100 is about 2.2x more expensive per hour, its performance makes up for it, resulting in less time to train a model and a lower overall price for the training run. This inherently makes the H100 more attractive for researchers and companies wanting to train Large Language Models (LLMs), and makes choosing the newer GPU more viable despite the higher hourly rate. Below, you can see tables comparing the two GPUs in training time, speedup, and cost of training.
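The tradeoff described above can be made concrete with a quick back-of-the-envelope calculation. This sketch uses the CoreWeave hourly rates quoted in the article; the 3.0x speedup and the 10,000 GPU-hour job size are illustrative assumptions (the speedup sits inside MosaicML's reported 2.2x-3.3x range):

```python
# Back-of-the-envelope cost comparison for training the same model on
# A100 vs. H100 cloud instances. Rates are the CoreWeave prices quoted
# in the article; the speedup and job size are assumed for illustration.

A100_RATE = 2.21   # USD per GPU-hour, A100 80 GB SXM (quoted)
H100_RATE = 4.76   # USD per GPU-hour, H100 SXM (quoted)

def training_cost(gpu_hours_on_a100: float, speedup: float) -> tuple[float, float]:
    """Return (A100 cost, H100 cost) in USD for a job that needs
    `gpu_hours_on_a100` on A100s and runs `speedup` times faster on H100s."""
    a100_cost = gpu_hours_on_a100 * A100_RATE
    h100_cost = (gpu_hours_on_a100 / speedup) * H100_RATE
    return a100_cost, h100_cost

# Hypothetical 10,000 GPU-hour training run at an assumed 3.0x speedup:
a100_cost, h100_cost = training_cost(gpu_hours_on_a100=10_000, speedup=3.0)
print(f"A100: ${a100_cost:,.0f}   H100: ${h100_cost:,.0f}")

# The break-even speedup is simply the price ratio: any speedup above
# ~2.15x (4.76 / 2.21) makes the H100 run cheaper overall.
print(f"break-even speedup: {H100_RATE / A100_RATE:.2f}x")
```

Since the break-even point (~2.15x) falls just below the bottom of the reported 2.2x-3.3x speedup range, the H100 comes out cheaper in total across that entire range, which is the article's point.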

Samsung Electronics Announces First Quarter 2023 Results, Profits Lowest in 14 Years

Samsung Electronics today reported financial results for the first quarter ended March 31, 2023. The Company posted KRW 63.75 trillion in consolidated revenue, a 10% decline from the previous quarter, as overall consumer spending slowed amid the uncertain global macroeconomic environment. Operating profit was KRW 0.64 trillion as the DS (Device Solutions) Division faced decreased demand, while profit in the DX (Device eXperience) Division increased.

The DS Division's profit declined from the previous quarter due to weak demand in the Memory Business, a decline in utilization rates in the Foundry Business and continued weak demand and inventory adjustments from customers. Samsung Display Corporation (SDC) saw earnings in the mobile panel business decline quarter-on-quarter amid a market contraction, while the large panel business slightly narrowed its losses. The DX Division's results improved on the back of strong sales of the premium Galaxy S23 series as well as an enhanced sales mix focusing on premium TVs.

Cigent Protects SSD Data from Ransomware Attacks Using AI Technology

Cigent Technology, a company specializing in data security, has unveiled its unique Cigent Secure SSD+ drives. In contrast to the earlier Secure SSD series, the SSD+ incorporates a cutting-edge AI microprocessor that leverages machine learning (ML) to combat ransomware attacks and protect drive data. The Cigent Secure SSD+ emphasizes a proactive approach, integrating attack prevention within the storage. Its AI microprocessor applies ML algorithms to monitor SSD activity, mitigating ransomware threats. Users can customize detection sensitivity to reduce false positives. Working with Cigent Data Defense software, the Secure SSD+ provides several protective layers upon detecting potential attacks. It can initiate a "Shields Up" mode, demanding multi-factor authentication (MFA) for accessing secure files. The software can also auto-secure drive data or set read-only mode, preventing unauthorized access or modifications.

When an attack is detected, the software notifies security personnel to activate "Shields Up" on other Cigent-protected network systems. The Cigent Secure SSD+ logs data access, allowing a review of all activity performed on the drive. The company has also introduced safeguards to deter bad actors from disabling its security features. However, the Cigent Secure SSD+ has limitations. To perform as intended, the drive must be installed as the system's primary storage, and for now, only Windows is supported; Linux support is expected to follow as the software is enabled for that platform. Also, the ML algorithms used are undisclosed, leaving their effectiveness an open question. Exact specifications, pricing, and availability have yet to be announced, but don't expect it to come cheap: the regular Secure SSD can cost over $1,000 for a 1 TB option. Additionally, Cigent has confirmed that these SSDs come in a standard M.2 2280 form factor with a double-sided design, so some ultra-thin laptops may not support it. On the company website, Cigent offers a pre-order option, where you need to fill out a form.
Cigent Secure SSD+

QNAP Partners with Hailo to Bring Cutting-edge AI-powered Surveillance Solutions

QNAP Systems, Inc., a leading computing, networking and storage solution innovator, announced a partnership with Hailo, the pioneering chipmaker of edge artificial intelligence (AI) processors. This partnership brings cutting-edge AI-powered facial recognition and people counting solutions to surveillance systems with boosted performance using the Hailo-8 M.2 AI acceleration module.

The partnership sees Hailo's advanced AI acceleration technology integrated with QNAP's QVR Face, a smart facial recognition solution, and QVR Human people counting solution. The combined solution boosts video recognition performance and capabilities, making it the ideal choice for an economical centralized security and AI surveillance solution in large-scale deployment.
