News Posts matching #AI


NVIDIA Wants to Set Guardrails for Large Language Models Such as ChatGPT

ChatGPT has surged in popularity over the past few months, and has been regarded as one of the fastest-growing apps ever. Based on a Large Language Model (LLM) called GPT-3.5/4, ChatGPT forms answers to user input based on the extensive dataset used in its training. With billions of parameters, the GPT models behind ChatGPT can give precise answers; however, these models sometimes hallucinate: asked about a non-existent topic or subject, ChatGPT can simply make up information. To prevent these hallucinations, NVIDIA, the maker of GPUs used for training and inferencing LLMs, has released a software library to keep AI in check, called NeMo Guardrails.

As the NVIDIA repository states: "NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems. Guardrails (or "rails" for short) are specific ways of controlling the output of a large language model, such as not talking about politics, responding in a particular way to specific user requests, following a predefined dialog path, using a particular language style, extracting structured data, and more." These guardrails are easily programmable and can stop LLMs from outputting unwanted content. For a company that invests heavily in both the hardware and software landscape, this launch is a logical move to stay ahead in setting the infrastructure for future LLM-based applications.
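The core idea of a "rail" can be illustrated with a minimal, self-contained sketch: screen the conversation against disallowed topics and substitute a refusal when one is detected. This is not the NeMo Guardrails API (which uses its own Colang configuration language); the function and topic list below are invented for illustration.

```python
# Toy illustration of the guardrail concept, NOT the NeMo Guardrails API.
# A rail inspects the conversation and overrides the model's reply when a
# disallowed topic appears.

BLOCKED_TOPICS = ["politics", "medical advice"]  # hypothetical policy

def apply_rails(user_message: str, model_reply: str) -> str:
    """Return the model's reply, or a canned refusal if a blocked topic is detected."""
    text = (user_message + " " + model_reply).lower()
    for topic in BLOCKED_TOPICS:
        if topic in text:
            return "Sorry, I can't discuss that topic."
    return model_reply

print(apply_rails("Tell me about politics", "Here is my take..."))
print(apply_rails("What's 2+2?", "4"))
```

A real deployment would match intents semantically rather than by keyword, but the control flow (check, then allow or deflect) is the same.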

PMIC Issue with Server DDR5 RDIMMs Reported, Convergence of DDR5 Server DRAM Price Decline

TrendForce reports that mass production of new server platforms—such as Intel Sapphire Rapids and AMD Genoa—is imminent. However, recent market reports have indicated a PMIC compatibility issue for server DDR5 RDIMMs; DRAM suppliers and PMIC vendors are working to address the problem. TrendForce believes this will have two effects: First, DRAM suppliers will temporarily procure more PMICs from Monolithic Power Systems (MPS), which supplies PMICs without any issues. Second, supply will inevitably be affected in the short term as current DDR5 server DRAM production still uses older processes, which will lead to a convergence in the price decline of DDR5 server DRAM in 2Q23—from the previously estimated 15~20% to 13~18%.

As previously mentioned, PMIC issues and a production process that relies on older nodes are having a short-term impact on the supply of DDR5 server DRAM. SK hynix has gradually ramped up production and sales of its 1α-nm process, which, unlike 1y-nm, has yet to be fully verified by customers. Current production is still dominated by Samsung and SK hynix's 1y-nm and Micron's 1z-nm processes; 1α-nm and 1β-nm production is projected to increase in 2H23.

Square Enix Unearths Old Crime Puzzler - The Portopia Serial Murder Case, Remaster Features AI Interaction

At the turn of the 1980s, most PC adventure games were played using only the keyboard. In those days, adventure games didn't use action menus like more modern games, but simply presented the player with a command line where they could freely input text to decide the actions that characters would take and proceed through the story. Free text input systems like these allowed players to feel a great deal of freedom. However, they did come with one common source of frustration: players knowing what action they wanted to perform but being unable to do so because they could not find the right wording. This problem was caused by the limitations of PC performance and NLP technology of the time.

40 years have passed since then, and PC performance has drastically improved, as have the capabilities of NLP technology. Using "The Portopia Serial Murder Case" as a test case, we'd like to show you the capabilities of modern NLP and the impact it can have on adventure games, as well as deepen your understanding of NLP technologies.
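The frustration described above comes from exact-match parsing: the player must hit a verb the game already knows, and a synonym with the same intent fails. A minimal sketch of such a 1980s-style parser (all command names and responses invented for illustration):

```python
# Minimal sketch of a 1980s-style free-text command parser. The player must
# type an exact verb the game recognizes; a synonym with the same intent
# fails, which is the classic frustration. Commands and text are invented.

COMMANDS = {
    "look": "You are in the harbor district of Portopia.",
    "ask": "Yasu says: 'The boss was found dead this morning.'",
}

def handle(text: str) -> str:
    """Dispatch on the first word of the player's input."""
    words = text.strip().lower().split()
    verb = words[0] if words else ""
    return COMMANDS.get(verb, "I don't understand that command.")

print(handle("look around"))       # recognized verb
print(handle("examine the room"))  # same intent, wrong wording: fails
```

A modern NLP layer, by contrast, would map "examine the room" and "look around" to the same intent, which is exactly the improvement the remaster sets out to demonstrate.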

Google Bard AI Chatbot Smart Enough to Assist in Software Coding

Alphabet Incorporated's Google AI division has today revealed a planned update for its Bard conversational artificial intelligence chatbot. The experimental generative AI application will become capable of assisting people in writing computer code - the American multinational technology company hopes that Bard will be of great help in the area of software development. Paige Bailey, a group product manager at Google Research, has introduced the upcoming changes: "Since we launched Bard, our experiment that lets you collaborate with generative AI, coding has been one of the top requests we've received from our users. As a product lead in Google Research - and a passionate engineer who still programs every day - I'm excited that today we're updating Bard to include that capability."

The Bard chatbot was made available, on a trial basis, to users in the USA and UK last month. Google's AI team is reported to be under great pressure to advance the Bard chatbot into a suitably powerful state in order to compete with its closest rival - Microsoft Corporation. The Redmond-based giant has invested heavily into OpenAI's industry-leading ChatGPT application. Google's latest volley against its rivals shows that Bard has become very sophisticated - so much so that the app is able to chew through a variety of programming languages. Bailey outlines these features in the company's latest blog: "Starting now, Bard can help with programming and software development tasks, including code generation, debugging and code explanation. We're launching these capabilities in more than 20 programming languages including C++, Go, Java, JavaScript, Python and TypeScript. And you can easily export Python code to Google Colab - no copy and paste required." Critics of AI-driven large language models have posited that the technology could potentially eliminate humans from the job market - it will be interesting to observe the coder community's reaction to Google's marketing of Bard as a helpful tool in software development.

Google Merges its AI Subsidiaries into Google DeepMind

Google has announced that the company is officially merging its subsidiaries focused on artificial intelligence to form a single group. More specifically, Google Brain and DeepMind companies are now joining forces to become a single unit called Google DeepMind. As Google CEO Sundar Pichai notes: "This group, called Google DeepMind, will bring together two leading research groups in the AI field: the Brain team from Google Research, and DeepMind. Their collective accomplishments in AI over the last decade span AlphaGo, Transformers, word2vec, WaveNet, AlphaFold, sequence to sequence models, distillation, deep reinforcement learning, and distributed systems and software frameworks like TensorFlow and JAX for expressing, training and deploying large scale ML models."

As CEO of this group, Demis Hassabis, previously CEO of DeepMind, will work together with Jeff Dean, now promoted to Google's Chief Scientist, who will report to Sundar Pichai. In his new role, Jeff Dean will serve as Chief Scientist across Google Research and Google DeepMind, where he will set the direction for AI research at both units. This corporate restructuring should help the two previously separate teams work from a single plan and advance AI capabilities faster. We are eager to see what these teams accomplish next.

NVIDIA's Tiny RTX 4000 Ada Lovelace Graphics Card is now Available

NVIDIA has begun selling its compact RTX 4000 Ada Lovelace graphics card, offering GeForce RTX 3070-like performance at a mere 70 W power consumption, allowing it to fit in almost all desktop PCs. The low-profile, dual-slot board is priced higher than the RTX 4080 as it targets professional users, but it can still be used in a regular gaming computer. PNY's RTX 4000 Ada generation graphics card is the first to reach consumer shelves, currently available for $1,444 at ShopBLT, a retailer known for obtaining hardware before its competitors. The card comes with four Mini-DisplayPort connectors, so an additional mDP-DP or mDP-HDMI adapter must be factored into the cost.

The NVIDIA RTX 4000 SFF Ada generation board features an AD104 GPU with 6,144 CUDA cores, 20 GB of GDDR6 ECC memory, and a 160-bit interface. With the boost frequency capped at around 1,560 MHz to reduce overall board power consumption, the GPU is rated for just 70 W. To emphasize its efficiency, the card requires no external PCIe power connector; all the juice is fed through the PCIe slot. The AD104 graphics processor in this configuration delivers a peak FP32 performance of 19.2 TFLOPS, comparable to the GeForce RTX 3070. The 20 GB of memory makes the card especially valuable for professionals and AI researchers needing compact solutions. Although the card's raw performance is overshadowed by the recently launched GeForce RTX 4070, the RTX 4000 SFF Ada's professional drivers, support for professional software ISVs, and additional features make it a strong contender in the semi-professional market. Availability and pricing are expected to improve in the coming weeks as the card becomes more widely accessible.
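The quoted 19.2 TFLOPS figure can be sanity-checked from the other specs: each CUDA core performs two FP32 operations per clock (a fused multiply-add), so peak throughput is cores x 2 x clock.

```python
# Sanity check of the peak FP32 figure from the specs above:
# CUDA cores x 2 FLOPs per clock (FMA) x boost clock.
cuda_cores = 6144
boost_clock_ghz = 1.560

tflops = cuda_cores * 2 * boost_clock_ghz / 1000
print(f"{tflops:.1f} TFLOPS")  # ~19.2, matching the quoted spec
```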


EdgeCortix Expands Delivery of its Industry Leading SAKURA-I AI Co-processor Devices

EdgeCortix Inc., the innovative Edge Artificial Intelligence (AI) platform company focused on delivering class-leading compute efficiency and ultra-low latency for AI inference, announced that it is shipping its industry-leading, energy-efficient, turn-key AI co-processor, the EdgeCortix SAKURA-I, to its global Early Access Program members.

"We are very pleased to be announcing the fulfillment of our first-generation semiconductor solution, the EdgeCortix SAKURA-I AI co-processor. Designed and engineered in Japan, SAKURA-I features up to 40 trillion operations per second (TOPs) of dedicated AI performance at sub-10 watts of power consumption.", said Sakyasingha Dasgupta, CEO and Founder of EdgeCortix, "We are delivering a complete Edge AI platform to our Early Access Program members, comprising both software and hardware solutions, which includes our recently updated MERA software suite. Program members include numerous global industry leading enterprise customers across both the commercial and defense sectors. We developed the EdgeCortix Early Access Program (EAP) with a focus on offering customers the opportunity to assess EdgeCortix's products and services at scale, by deploying them within their own complex, heterogeneous environments. The goal of the EAP offering is three-fold: showcasing the ease of integration into customer's existing heterogeneous systems, enabling customers to prove-out the effectiveness and efficiency of EdgeCortix solutions versus competing products and facilitating a direct dialog with EdgeCortix product management, enabling tailor-made fit in certain cases."

Microsoft Working on Custom AI Processor Codenamed Project Athena

According to The Information, Microsoft has been working on custom processors for AI under a project codenamed Athena. Built on TSMC's 5 nm process, these chips are designed to accelerate AI workloads and scale to hundreds or even thousands of chips. With the boom of Large Language Models (LLMs) that require billions of parameters, training demands a rapid increase in computational power, to the point where companies purchase hundreds of thousands of GPUs from the likes of NVIDIA. Creating custom processors is familiar territory for hyperscalers, however: AWS, Google, and Meta have already invested in processors for AI training, and Microsoft is now joining them.

While we don't have much information about these processors, we know that Microsoft started the project in 2019, and today the chips are in the hands of select Microsoft and OpenAI employees who work on AI projects and need the computational horsepower. Interestingly, some projections suggest that if Microsoft could match NVIDIA's GPU performance, the cost would be only a third of NVIDIA's offerings. However, that is hard to verify until more information is available. Microsoft plans to make these chips more widely available as early as next year; there is no specific information on when and how, but Azure cloud customers would be the most logical place to start.

Samsung Could Replace Google Search on its Mobile Devices

Google's business of providing the world's largest search engine is reportedly in jeopardy, as the latest reports indicate that Samsung could replace Google Search with another search engine as a default solution on its mobile devices. Samsung, which sells millions of devices per year, is looking to replace the current search engine, Google Search, in favor of more modern AI-powered models. Currently, Google and Samsung have a contract where Google pays the South Korean giant a sum of three billion US dollars per year to keep its search engine as a default option on Samsung's devices. However, this decision is flexible, as the contract is up for renewal and new terms are being negotiated.

With the release of ChatGPT and the AI-powered search that Microsoft Bing enables, Google is reportedly working hard to keep up and integrate Large Language Models (LLMs) into Search. Google's answer to Microsoft Bing is codenamed Project Magi, an initiative to bring AI-powered search reportedly as soon as next month. Underscoring how important the default position is, Google has been willing to give up three billion US dollars a year in revenue to Samsung to keep Google Search as the default search engine, an arrangement that has stood for some 12 years. However, with the emergence of better solutions like Microsoft Bing, Samsung is considering a replacement. The deal is still open, terms are still being negotiated, and for now there is no official mention of Bing. As a reminder, Google has a similar agreement with Apple, worth 20 billion US dollars, and Google Search was valued at 162 billion US dollars last year.

HBM Supply Leader SK Hynix's Market Share to Exceed 50% in 2023 Due to Demand for AI Servers

A strong growth in AI server shipments has driven demand for high bandwidth memory (HBM). TrendForce reports that the top three HBM suppliers in 2022 were SK hynix, Samsung, and Micron, with 50%, 40%, and 10% market share, respectively. Furthermore, the specifications of high-end AI GPUs designed for deep learning have led to HBM product iteration. To prepare for the launch of NVIDIA H100 and AMD MI300 in 2H23, all three major suppliers are planning for the mass production of HBM3 products. At present, SK hynix is the only supplier that mass produces HBM3 products, and as a result, is projected to increase its market share to 53% as more customers adopt HBM3. Samsung and Micron are expected to start mass production sometime towards the end of this year or early 2024, with HBM market shares of 38% and 9%, respectively.

AI server shipment volume expected to increase by 15.4% in 2023
AI servers designed for DL/ML workloads are equipped with an average of four or eight high-end graphics cards and two mainstream x86 server CPUs. These servers are primarily used by top US cloud service providers such as Google, AWS, Meta, and Microsoft. TrendForce analysis indicates that shipments of servers with high-end GPGPUs increased by around 9% in 2022, with approximately 80% of those shipments concentrated among eight major cloud service providers in China and the US. Looking ahead to 2023, Microsoft, Meta, Baidu, and ByteDance will launch generative AI products and services, further boosting AI server shipments. It is estimated that AI server shipments will increase by 15.4% this year, and a 12.2% CAGR for AI server shipments is projected from 2023 to 2027.
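The projected 12.2% CAGR compounds over the four year-on-year steps from 2023 to 2027, which implies shipments roughly 1.58x the 2023 level by 2027. A quick check of that arithmetic:

```python
# Compounding TrendForce's projected 12.2% CAGR over 2023-2027.
# Shipment volume is indexed to 1.0 in 2023 for illustration.
base_2023 = 1.0
cagr = 0.122
years = 4  # 2023 -> 2027 is four annual steps

projected_2027 = base_2023 * (1 + cagr) ** years
print(f"~{projected_2027:.2f}x the 2023 shipment volume by 2027")
```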

Elon Musk AI-Powered Empire Expands Again, X.AI Startup Incorporated in Nevada

Elon Musk has formed a new AI-focused company, as reported by the Wall Street Journal yesterday. The entity registered under the name X.AI was incorporated via a filing in Nevada last month, and Musk appears to be listed as the company's only director with Jared Birchall joining him under the role of secretary. Birchall heads the Musk family office, Excession LLC, and he serves as CEO of Neuralink - a neurotechnology company that was co-founded by Musk back in 2016. It is widely speculated that Birchall serves as a type of fixer - go watch the TV series "Ray Donovan" if you would like to observe a crude (and obviously fictional) example - in corporate affairs.

Reports emerged earlier this week placing Musk at the forefront of a massive purchase of GPUs destined to arrive shortly at his data centers - this impressive chunk of hardware is speculated to power AI-related number crunching at Twitter in the near future. The founding of X.AI could provide another home for a portion of the 10,000-GPU order, but industry insiders firmly believe that Twitter will need to tool up quickly for its new AI-driven endeavor - the GPUs will likely be set to work on a chatbot system to underpin the social media platform. Musk has already recruited researchers from DeepMind and set up a lab for them at one of his operations. It remains to be seen how the X.AI startup will run alongside efforts at other Musk-owned companies - it is theorized that he wants to beat OpenAI at their own game, and compete with similar undertakings at Google, Microsoft and Amazon.

Arm-based PCs to Nearly Double Market Share by 2027, Says Report

Personal computers (PCs) based on Arm architecture will grow in popularity and their market share will almost double from 14% now to 25% by 2027, according to Counterpoint Research's latest projections. The ability of Arm-based hardware to run macOS has allowed Apple to capture 90% of the Arm-based notebook computer market. However, the full support of Windows and Office 365 and the speed of native Arm-based app adoption are also critical factors in determining the Arm SoC penetration rate in PCs. Once these factors are addressed, Arm-based PCs will become a viable option for both daily users and businesses.

As more existing PC OEMs/ODMs and smartphone manufacturers enter the market, they will bring their expertise in Arm-based hardware and software, which will further boost the popularity of Arm-based PCs. The availability of more native Arm-based apps will also increase user comfort and familiarity with the platform. Overall, the trend towards Arm-based PCs is expected to continue and their market share will likely increase significantly in the coming years.

Alibaba Developing an Equivalent to ChatGPT

Last Tuesday, Alibaba announced its intentions to put out its own artificial intelligence (AI) chatbot product called Tongyi Qianwen - another rival to take on OpenAI's pioneering ChatGPT natural language processing tool. The Chinese technology giant is hoping to retrofit the new chatbot system into several arms of its business operations. Alibaba had revealed initial plans for chatbot integration earlier this year, and mentioned that it was providing an alternative to the already well established ChatGPT tool. Alibaba's workplace messaging application - DingTalk - is slated to receive the first AI-powered update in the near future, although the company did not provide a firm timeline for Tongyi Qianwen's release window.

The product name "Tongyi Qianwen" loosely translates to "seeking an answer by asking a thousand questions" - Alibaba did not provide an official English language translation at last week's press conference. Their chatbot is reported to function in both Mandarin and English language modes. Advanced AI voice recognition is set for usage in the Tmall Genie range of smart speakers (similar in function to the Amazon Echo). Alibaba expects to expand Tongyi Qianwen's reach into applications relating to e-commerce and mapping services.

AMD Introduces Alveo MA35D Media Accelerator

AMD today announced the AMD Alveo MA35D media accelerator featuring two 5 nm, ASIC-based video processing units (VPUs) supporting the AV1 compression standard and purpose-built to power a new era of live interactive streaming services at scale. With over 70% of the global video market dominated by live content, a new class of low-latency, high-volume interactive streaming applications is emerging, such as watch parties, live shopping, online auctions, and social streaming.

The Alveo MA35D media accelerator delivers the high channel density (up to 32x 1080p60 streams per card), power efficiency, and ultra-low-latency performance critical to reducing the skyrocketing infrastructure costs of scaling such compute-intensive content delivery. Compared to the previous-generation Alveo U30 media accelerator, the Alveo MA35D delivers up to 4x higher channel density, up to 4x lower latency in 4K, and 1.8x greater compression efficiency at the same VMAF score—a common video quality metric.

IBM z16 and LinuxONE 4 Get Single Frame and Rack Mount Options

IBM today unveiled new single frame and rack mount configurations of IBM z16 and IBM LinuxONE 4, expanding their capabilities to a broader range of data center environments. Based on IBM's Telum processor, the new options are designed with sustainability in mind for highly efficient data centers, helping clients adapt to a digitized economy and ongoing global uncertainty.

Introduced in April 2022, the IBM z16 multi frame has helped transform industries with real-time AI inferencing at scale and quantum-safe cryptography. IBM LinuxONE Emperor 4, launched in September 2022, features capabilities that can reduce both energy consumption and data center floor space while delivering the scale, performance and security that clients need. The new single frame and rack mount configurations expand client infrastructure choices and help bring these benefits to data center environments where space, sustainability and standardization are paramount.

Adlink launches portable GPU accelerator with NVIDIA RTX A500

ADLINK Technology Inc., a global leader in edge computing, today launched Pocket AI - the first ever ultra-portable GPU accelerator to offer exceptional power at a cost-effective price point. With hardware and software compatibility, Pocket AI is the perfect tool to boost performance and productivity. It provides plug-and-play scalability from development to deployment for AI developers, professional graphics users and embedded industrial applications.

Pocket AI is a simple, reliable route to impressive GPU acceleration at a fraction of the cost of a laptop with equivalent GPU power. Its many benefits include a strong power/performance balance from the NVIDIA RTX A500 GPU; high functionality driven by NVIDIA CUDA-X and accelerated libraries; quick, easy connectivity and power via a Thunderbolt 3 interface with USB PD; and compatibility supported by NVIDIA developer tools. For the ultimate portability, the Pocket AI is compact and lightweight, measuring roughly 106 x 72 x 25 mm and weighing about 250 grams.

Microsoft Aims to Modernize its Upcoming Windows 12 with Modular Design

Insider sources at Microsoft have spoken of continued efforts to modernize the core of its operating system, with the work-in-progress Windows 12 cited as the ideal candidate for substantial updates. The engineering team is reported to be adopting a modular design, which will allow for a reduced operating-system footprint, similar in principle to ChromeOS. According to a Windows Report article, the operating system development team is hard at work on a spiritual successor to the abandoned Windows Core OS project. The new effort is reported to be called "Windows CorePC," and Microsoft is aiming for the same goals it set for its Windows 10X edition, which was cancelled in mid-2021, while also targeting native support for legacy applications on devices that require it.

Windows Core OS was shut down after years of development and internal testing; it was hoped that a modular, Universal Windows Platform-first (UWP-first) operating system would be more lightweight, gain stronger security features, and receive regular updates more easily. The inside sources stated that Windows Core OS will not be developed any further, at least not for desktop computers. The Microsoft team anticipates that its new CorePC project will lead to new configurations of Windows capable of scaling up and down depending on the hardware. In some use cases, Windows PCs and devices do not require the full breadth of legacy Win32 application support, and CorePC will enable different configurations of Windows to be installed on a custom basis.

With Security Copilot, Microsoft brings the power of AI to cyberdefense

Microsoft Corp. on Tuesday announced it is bringing the next generation of AI to cybersecurity with the launch of Microsoft Security Copilot, giving defenders a much-needed tool to quickly detect and respond to threats and better understand the threat landscape overall. Security Copilot will combine Microsoft's vast threat intelligence footprint with industry-leading expertise to augment the work of security professionals through an easy-to-use AI assistant.

"Today the odds remain stacked against cybersecurity professionals. Too often, they fight an asymmetric battle against relentless and sophisticated attackers," said Vasu Jakkal, corporate vice president, Microsoft Security. "With Security Copilot, we are shifting the balance of power into our favor. Security Copilot is the first and only generative AI security product enabling defenders to move at the speed and scale of AI."

UP Team Collaborates with Leopard Imaging to Bring AI Vision to the Edge

Following its successful demonstration of a MIPI camera solution using the UP Squared Pro 7000 at Embedded World, AAEON has announced a MIPI Expansion Kit for the newly released third-generation board. Equipped with the Intel Atom x7000E Series, Intel Core i3-N305, and Intel Processor N Series (formerly Alder Lake-N) processors and an FPC port supporting the MIPI-CSI interface, the UP Squared Pro 7000 emphasizes AAEON's continued dedication to bringing the latest Intel technologies to a wider variety of platforms. One such pairing is the LI-ADL-ADP-IMX415-MIPI-081H, a 4K MIPI camera module from leading intelligent embedded camera creator Leopard Imaging Inc.

The company recently announced that the camera has been designed with full-function drivers to support the Intel Atom Processors x7000E Series, Intel Core i3-N305 Processor, and Intel Processor N Series (formerly Alder Lake-N), allowing users to take advantage of the peripheral Intel technologies that come with the processor platform, such as Intel UHD Graphics on a Windows 10 IoT Enterprise OS.

Lenovo Announces New AI Powered Legion Gaming Laptops and New Ultra-Wide Gaming Monitors

Today, Lenovo announced the latest 8th generation of Lenovo Legion Slim laptops, allowing gamers to harness the freedom that the newest series has to offer. The Lenovo Legion Slim series is all about empowering players to crush their gaming goals while also pursuing their creative passions, and there are more options than ever among the new Lenovo Legion Slim 7i and 7 (16", 8), Lenovo Legion Slim 5i and 5 (16", 8), and—an all-new size for this year—the Lenovo Legion Slim 5 (14", 8), which elevates laptop power and portability to a whole new level. This newest generation of the series is also the first to sport the Lenovo Artificial Intelligence (LA) family of chips. These are onboard physical AI chips that power Lenovo AI Engine+, which dynamically adjusts the Lenovo Legion ColdFront 5.0 thermals to optimize cooling on the fly and maintain maximum output with minimal noise.

Lenovo Legion Slim series laptops are designed to meet the multi-faceted needs of gamers, with an SD card slot, rapid-charging battery technology, Windows 11, three months of free Xbox Game Pass Ultimate, as well as access to Nahimic by SteelSeries 3D immersive audio, and Lenovo Vantage helping users get the most out of their machines. Offering peace of mind, Lenovo's Legion Ultimate Support service is available with round-the-clock tech support, guidance and assistance so gamers don't have to miss a beat, and with Legion Arena, users can create their ultimate gaming hub with all their titles accessible in one place rather than having to switch between apps. Additionally, gamers looking to expand their horizons—literally—can look forward to the new Lenovo Legion R45w-30 44.5" 32:9 ultrawide curved display and the new Lenovo Legion Y34wz-30 Gaming Monitor, which delivers extreme clarity and vivid color with its 34-inch mini-LED backlit panel.

NCSOFT Unveils its Digital Human for the First Time at GDC 2023

NCSOFT, a global premier game developer and publisher, unveiled its digital human technology together with Project M's trailer at the Game Developers Conference 2023 (GDC), currently being held in San Francisco, CA.

On March 22, NCSOFT revealed the video at State of Unreal, Epic Games' opening session at GDC held to introduce new Unreal Engine tools and technologies in collaboration with its partners. Here, Songyee Yoon, chief strategy officer (CSO) at NCSOFT, was on stage to introduce the company's latest project, Project M, and its trailer.

Aetina and CVEDIA Join Forces to Launch Advanced AI Video Analytics Solutions Powered by NVIDIA Jetson Orin SoM

Aetina teams up with CVEDIA to launch advanced AI-powered video analytics solutions for enhanced public safety, smarter cities and safer traffic networks. The solutions involve Aetina's embedded computers built with the latest NVIDIA Jetson Orin system-on-modules (SoMs) and CVEDIA's synthetic AI models, enabling low-latency object and human motion detections at the edge. By adopting AI-enabled video analytics, security companies and system integrators can optimize operations and enhance security measures. During ISC West 2023, Aetina and CVEDIA will showcase the solutions to demonstrate how they leverage deep learning to accurately detect, track, and analyze objects and events from videos, enabling businesses to quickly generate insights and improve their decision-making processes.

Aetina's embedded computers that support the NVIDIA Jetson AGX Orin, Orin NX, or Orin Nano SoMs are designed for the creation of different types of vertical AI. The embedded computers provide the computing power for CVEDIA's AI solutions to run real-time inference processes; they are also compatible with any kind of camera to receive high-resolution videos for the analytics tasks.

Opera goes live with generative AI tools in Desktop browser and Opera GX

As generative AI tools continue to reshape the digital landscape, Opera is integrating AI capabilities into its desktop browsers. Shipping today in Opera and Opera GX are AI Prompts and sidebar integration of the popular GPT-based services ChatGPT and ChatSonic. Users can explore AIGC (AI-generated content) tools within Opera and Opera GX.

"AIGC is a game-changer for web browsing. Our goal is to use these technologies to give our users new browsing superpowers - reimagining how they learn, create, and research," said Joanna Czajka, Product Director at Opera.

ASUS Announces NVIDIA-Certified Servers and ProArt Studiobook Pro 16 OLED at GTC

ASUS today announced its participation in NVIDIA GTC, a developer conference for the era of AI and the metaverse. ASUS will offer comprehensive NVIDIA-certified server solutions that support the latest NVIDIA L4 Tensor Core GPU—which accelerates real-time video AI and generative AI—as well as the NVIDIA BlueField-3 DPU, igniting unprecedented innovation for supercomputing infrastructure. ASUS will also launch the new ProArt Studiobook Pro 16 OLED laptop with the NVIDIA RTX 3000 Ada Generation Laptop GPU for mobile creative professionals.

Purpose-built GPU servers for generative AI
Generative AI applications enable businesses to develop better products and services, and deliver original content tailored to the unique needs of customers and audiences. ASUS ESC8000 and ESC4000 are fully certified NVIDIA servers that support up to eight NVIDIA L4 Tensor Core GPUs, which deliver universal acceleration and energy efficiency for AI with up to 2.7x more generative AI performance than the previous GPU generation. ASUS ESC and RS series servers are engineered for HPC workloads, with support for the NVIDIA BlueField-3 DPU to transform data center infrastructure, as well as NVIDIA AI Enterprise applications for streamlined AI workflows and deployment.

Mitsui and NVIDIA Announce World's First Generative AI Supercomputer for Pharmaceutical Industry

Mitsui & Co., Ltd., one of Japan's largest business conglomerates, is collaborating with NVIDIA on Tokyo-1—an initiative to supercharge the nation's pharmaceutical leaders with technology, including high-resolution molecular dynamics simulations and generative AI models for drug discovery.

Announced today at the NVIDIA GTC global AI conference, the Tokyo-1 project features an NVIDIA DGX AI supercomputer that will be accessible to Japan's pharma companies and startups. The effort is poised to accelerate Japan's $100 billion pharma industry, the world's third largest following the U.S. and China.