News Posts matching #OpenAI


ASUS ROG Updates Virtual Assistant With New AI Module

ASUS Republic of Gamers (ROG) today released a significant update to its bundled Virtual Assistant software (formerly known as Virtual Pet). This new software package comes preinstalled on the ROG Zephyrus G16 gaming laptop and leverages the incredible power of AI to significantly level up the capabilities of the Virtual Assistant, including an intelligent chat and Q&A interface, written document summarization, and voice transcription tools. This update is available on laptop models with AMD Ryzen AI 300 Series processors as a free download via ASUS Live Update.

Intelligent chat support
The Virtual Assistant gives users a leg up when they're using an unfamiliar program or system tool. With a local chat and Q&A feature, even when disconnected from the internet, the Virtual Assistant can help users navigate complicated menus and activate the features and settings they need. For example, if a new user is looking to adjust fan settings, they can request that from the Virtual Assistant, and it will direct them to the appropriate settings menu within the Armoury Crate app. Applications like MyASUS, GlideX, and ProArt Creator Hub are supported, and the chat functionality adds a new layer of support for end users.

Apple Debuts the iPhone 16 Pro and iPhone 16 Pro Max - Now with a Camera Button

Apple today introduced iPhone 16 Pro and iPhone 16 Pro Max, featuring Apple Intelligence, larger display sizes, new creative capabilities with innovative pro camera features, stunning graphics for immersive gaming, and more—all powered by the A18 Pro chip. With Apple Intelligence, powerful Apple-built generative models come to iPhone in the easy-to-use personal intelligence system that understands personal context to deliver intelligence that is helpful and relevant while protecting user privacy. Camera Control unlocks a fast, intuitive way to tap into visual intelligence and easily interact with the advanced camera system. Featuring a new 48MP Fusion camera with a faster quad-pixel sensor that enables 4K120 FPS video recording in Dolby Vision, these new Pro models achieve the highest resolution and frame-rate combination ever available on iPhone. Additional advancements include a new 48MP Ultra Wide camera for higher-resolution photography, including macro; a 5x Telephoto camera on both Pro models; and studio-quality mics to record more true-to-life audio. The durable titanium design is strong yet lightweight, with larger display sizes, the thinnest borders on any Apple product, and a huge leap in battery life—with iPhone 16 Pro Max offering the best battery life on iPhone ever.

iPhone 16 Pro and iPhone 16 Pro Max will be available in four stunning finishes: black titanium, natural titanium, white titanium, and desert titanium. Pre-orders begin Friday, September 13, with availability beginning Friday, September 20.

Cerebras Launches the World's Fastest AI Inference

Today, Cerebras Systems, the pioneer in high performance AI compute, announced Cerebras Inference, the fastest AI inference solution in the world. Delivering 1,800 tokens per second for Llama3.1 8B and 450 tokens per second for Llama3.1 70B, Cerebras Inference is 20 times faster than NVIDIA GPU-based solutions in hyperscale clouds. Starting at just 10c per million tokens, Cerebras Inference is priced at a fraction of GPU solutions, providing 100x higher price-performance for AI workloads.

Unlike alternative approaches that compromise accuracy for performance, Cerebras offers the fastest performance while maintaining state of the art accuracy by staying in the 16-bit domain for the entire inference run. Cerebras Inference is priced at a fraction of GPU-based competitors, with pay-as-you-go pricing of 10 cents per million tokens for Llama 3.1 8B and 60 cents per million tokens for Llama 3.1 70B.
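The quoted throughput and pay-as-you-go rates can be combined into a rough cost-of-throughput figure. The sketch below is our own back-of-the-envelope arithmetic using only the numbers Cerebras quotes, not an official pricing calculator:

```python
# Back-of-the-envelope: cost to run each model flat-out for one hour
# at the quoted Cerebras Inference throughput and per-token rates.
models = {
    # name: (tokens per second, USD per million tokens)
    "Llama 3.1 8B": (1800, 0.10),
    "Llama 3.1 70B": (450, 0.60),
}

for name, (tps, usd_per_mtok) in models.items():
    tokens_per_hour = tps * 3600
    cost_per_hour = tokens_per_hour / 1_000_000 * usd_per_mtok
    print(f"{name}: {tokens_per_hour:,} tokens/h -> ${cost_per_hour:.3f}/h")
```

At the quoted rates, an hour of sustained single-stream generation works out to well under a dollar for either model, which is the basis of the price-performance claim.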

Report: AI Software Sales to Experience Massive Growth with 40.6% CAGR Over the Next Five Years

The market for artificial intelligence (AI) platforms software grew at a rapid pace in 2023 and is projected to maintain its remarkable momentum, driven by the increasing adoption of AI across many industries. A new International Data Corporation (IDC) forecast shows that worldwide revenue for AI platforms software will grow to $153.0 billion in 2028 with a compound annual growth rate (CAGR) of 40.6% over the 2023-2028 forecast period.
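The forecast figures imply a 2023 baseline that IDC does not state in the excerpt above. A quick sketch recovers it, assuming the 40.6% CAGR compounds over the five years from 2023 to 2028 (IDC's exact methodology may differ):

```python
# Implied 2023 base revenue from IDC's forecast: $153.0B in 2028,
# growing at a 40.6% CAGR over 2023-2028 (5 compounding years assumed).
cagr = 0.406
revenue_2028 = 153.0  # billions USD
years = 5

revenue_2023 = revenue_2028 / (1 + cagr) ** years
print(f"Implied 2023 revenue: ${revenue_2023:.1f}B")  # roughly $27.8B
```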

"The AI platforms market shows no signs of slowing down. Rapid innovations in generative AI are changing how companies think about their products, how they develop and deploy AI applications, and how they leverage technology themselves for reinventing their business models and competitive positioning," said Ritu Jyoti, group vice president and general manager of IDC's Artificial Intelligence, Automation, Data and Analytics research. "IDC expects this upward trajectory will continue to accelerate with the emergence of unified platforms for predictive and generative AI that support interoperating APIs, ecosystem extensibility, and responsible AI adoption at scale."

OpenAI in Talks with Broadcom About Developing Custom AI Chips to Power Next Generation Models

According to The Information, OpenAI is reportedly in talks with Broadcom about developing a custom AI accelerator to power OpenAI's growing demand for high-performance solutions. Broadcom is a fabless chip designer known for a wide range of silicon solutions spanning networking, PCIe, SSD controllers, and PHYs, all the way up to custom ASICs. The latter is what OpenAI wants to focus on, but all the aforementioned IP developed by Broadcom is of use in a data center. If OpenAI decides to use Broadcom's solutions, the fabless silicon designer offers a complete vertical stack of products: inter-system communication using protocols such as PCIe, system-to-system communication using Ethernet networking with Broadcom Tomahawk 6 and future revisions, alongside storage solutions and many other complementary elements of a data center.

As a company skilled in making various IPs, Broadcom also builds ASIC solutions for other companies and has assisted Google in making its Tensor Processing Unit (TPU), now in its sixth generation. Google TPUs are massively successful, as Google deploys millions of them and provides AI solutions to billions of users across the globe. Now, OpenAI wants to be part of the AI chip game, and Broadcom could come to the rescue with its established AI silicon track record and its broad portfolio of data center componentry to help build a custom AI accelerator powering the infrastructure OpenAI needs for its next generation of AI models. With each new AI model OpenAI releases, compute demand spikes dramatically, and having an AI accelerator that exactly matches its needs will help the company move faster and run even bigger AI models.

AI Startup Etched Unveils Transformer ASIC Claiming 20x Speed-up Over NVIDIA H100

A new startup emerged from stealth mode today to power the next generation of generative AI. Etched is a company making an application-specific integrated circuit (ASIC) to process Transformers. The transformer is a deep learning model architecture developed by Google and is now the powerhouse behind models like OpenAI's GPT-4o in ChatGPT, Anthropic's Claude, Google's Gemini, and Meta's Llama family. Etched set out to create an ASIC that processes only transformer models, resulting in a chip called Sohu. The claim is that Sohu outperforms NVIDIA's latest and greatest by an entire order of magnitude: where a server with eight NVIDIA H100 GPUs pushes Llama-3 70B at 25,000 tokens per second, and the latest eight-GPU B200 "Blackwell" configuration pushes 43,000 tokens/s, an eight-chip Sohu server manages to output 500,000 tokens per second.

Why is this important? Not only does the ASIC outperform Hopper by 20x and Blackwell by 10x, but it also serves so many tokens per second that it enables an entirely new fleet of AI applications requiring real-time output. The Sohu architecture is so efficient that 90% of its FLOPS can be used, while traditional GPUs manage a 30-40% FLOPS utilization rate. That gap translates into inefficiency and wasted power, which Etched hopes to solve by building an accelerator dedicated to powering transformers (the "T" in GPT) at massive scale. Given that frontier model development costs more than one billion US dollars, and hardware costs are measured in tens of billions of US dollars, an accelerator dedicated to a specific application can help advance AI faster. AI researchers often say that "scale is all you need" (echoing the legendary "Attention Is All You Need" paper), and Etched wants to build on that.
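The claimed speedups can be sanity-checked against the quoted throughput figures. A small sketch (the division is ours; note the B200 numbers actually work out to roughly 11.6x, which the article rounds down to 10x):

```python
# Sanity-check Etched's claimed speedups from the quoted Llama-3 70B
# throughput figures for eight-accelerator server configurations.
throughput = {"H100 x8": 25_000, "B200 x8": 43_000, "Sohu x8": 500_000}  # tokens/s

for rival in ("H100 x8", "B200 x8"):
    speedup = throughput["Sohu x8"] / throughput[rival]
    print(f"Sohu vs {rival}: {speedup:.1f}x")
```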

OpenAI Co-Founder Ilya Sutskever Launches a New Venture: Safe Superintelligence Inc.

OpenAI's co-founder and ex-chief scientist, Ilya Sutskever, has announced the formation of a new company promising a safe path to artificial superintelligence (ASI). Called Safe Superintelligence Inc. (SSI), the company has a simple mission: achieving ASI with safety at the front. "We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead," notes the SSI website, adding that "Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures."

Interestingly, safety is a stated priority at only a few frontier AI labs. In recent history, OpenAI's safety team drew the spotlight for being neglected, and the company's safety lead, Jan Leike, publicly criticized its safety practices before moving to Anthropic. Anthropic is focused on providing safe AI models, with its Claude Opus being one of the leading AI models to date. What is to come out of SSI? We still don't know. However, given the team of Ilya Sutskever, Daniel Gross, and Daniel Levy, we assume they have attracted best-in-class talent for developing next-generation AI models with a focus on safety. With offices in Palo Alto and Tel Aviv, SSI can tap a vast network of AI researchers and policymakers to establish safe ASI, free from short-term commercial pressure and focused on research and development. "Our team, investors, and business model are all aligned to achieve SSI," says the SSI website.

The Race is Heating Up, Elon Musk's AI Startup xAI Raises $6 Billion

Elon Musk's AI company xAI just scored big (according to Reuters), raising a massive $6 billion in new funding. This sky-high investment values xAI at a whopping $24 billion as investors go all-in on challengers to top AI players like OpenAI. Big-name funders like Andreessen Horowitz and Sequoia backed the funding round, according to xAI's blog post on Sunday. Before this, xAI was valued at $18 billion, Musk said on social app X.

The huge cash influx will help xAI launch its first products, build advanced tech, and turbocharge their research, the company stated. "More news coming soon," Musk teased cryptically after the funding announcement. It's an AI investment frenzy as tech giants like Microsoft and Google parent Alphabet pour fortunes into leading the red-hot generative AI race. With its new war chest, xAI is gearing up to make some serious waves.

AMD Instinct MI300X Accelerators Power Microsoft Azure OpenAI Service Workloads and New Azure ND MI300X V5 VMs

Today at Microsoft Build, AMD (NASDAQ: AMD) showcased its latest end-to-end compute and software capabilities for Microsoft customers and developers. By using AMD solutions such as AMD Instinct MI300X accelerators, ROCm open software, Ryzen AI processors and software, and Alveo MA35D media accelerators, Microsoft is able to provide a powerful suite of tools for AI-based deployments across numerous markets. The new Microsoft Azure ND MI300X virtual machines (VMs) are now generally available, giving customers like Hugging Face access to impressive performance and efficiency for their most demanding AI workloads.

"The AMD Instinct MI300X and ROCm software stack is powering the Azure OpenAI ChatGPT 3.5 and 4 services, which are some of the world's most demanding AI workloads," said Victor Peng, president, AMD. "With the general availability of the new VMs from Azure, AI customers have broader access to MI300X to deliver high-performance and efficient solutions for AI applications."

ChatGPT Comes to Desktop with OpenAI's Latest GPT-4o Model That Talks With Users

At OpenAI's spring update, a lot of eyes were fixed on the company that spurred the AI boom with ChatGPT. Now almost a must-have app for consumers and prosumers alike, ChatGPT is the de facto application for the latest AI innovation, backed by researchers and scientists from OpenAI. Today, OpenAI announced a new model called GPT-4o (Omni), which promises advanced intelligence, improved overall capabilities, and real-time voice interaction with users. The ChatGPT application now aims to become a personal assistant that actively communicates with users and provides much broader capabilities. OpenAI claims that GPT-4o can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, similar to human response times in conversation.

OpenAI says the new GPT-4o model will be available to free users as well as Plus and Team subscribers, with paid subscribers getting 5x higher usage limits and early access to the model. Interestingly, GPT-4o is much improved across a variety of standard benchmarks like MMLU, MATH, HumanEval, and GPQA, where it now surpasses almost all models, trailing only Claude 3 Opus in MGSM. It understands more than 50 languages and can do real-time translation. In addition to the new model, OpenAI announced a desktop ChatGPT app that can act as a personal assistant and see what is happening on screen, but only when the user commands it. This should deliver a much more refined user experience and let users lean on AI as a second pair of eyes to help understand on-screen content. The app is initially available only on macOS; we are waiting for OpenAI to launch a Windows ChatGPT application so everyone can experience the new technology.

Report: 3 Out of 4 Laptop PCs Sold in 2027 will be AI Laptop PCs

Personal computers (PCs) have been used as the major productivity device for several decades. But now we are entering a new era of PCs based on artificial intelligence (AI), thanks to the boom witnessed in generative AI (GenAI). We believe the inventory correction and demand weakness in the global PC market have already normalized, with the impacts from COVID-19 largely being factored in. All this has created a comparatively healthy backdrop for reshaping the PC industry. Counterpoint estimates that almost half a billion AI laptop PCs will be sold during the 2023-2027 period, with AI PCs reviving the replacement demand.

Counterpoint separates GenAI laptop PCs into three categories - AI basic laptop, AI-advanced laptop and AI-capable laptop - based on their level of computational performance, corresponding use cases, and computational efficiency. We believe AI basic laptops, which are already in the market, can perform basic AI tasks but not full GenAI tasks. Starting this year, they will be supplanted by AI-advanced and AI-capable models with enough TOPS (tera operations per second), powered by an NPU (neural processing unit) or GPU (graphics processing unit), to perform advanced GenAI tasks well.

Microsoft Prepares MAI-1 In-House AI Model with 500B Parameters

According to The Information, Microsoft is developing a new AI model, internally named MAI-1, designed to compete with the leading models from Google, Anthropic, and OpenAI. This significant step forward in the tech giant's AI capabilities is led by Mustafa Suleyman, the former Google AI leader who served as CEO of Inflection AI before Microsoft acquired the majority of its staff and intellectual property for $650 million in March. MAI-1 is a custom Microsoft creation that utilizes training data and technology from Inflection but is not a transferred model. It is also distinct from Inflection's previously released Pi models, as confirmed by two Microsoft insiders familiar with the project. With approximately 500 billion parameters, MAI-1 will be significantly larger than its predecessors, surpassing the capabilities of Microsoft's smaller, open-source models.

For comparison, OpenAI's GPT-4 reportedly has 1.8 trillion parameters in a sparse Mixture of Experts design, while open-source models from Meta and Mistral feature dense designs of around 70 billion parameters. Microsoft's investment in MAI-1 highlights its commitment to staying competitive in the rapidly evolving AI landscape. The development of this large-scale model represents a significant step forward for the tech giant as it seeks to challenge industry leaders in the field. The increased computing power, training data, and financial resources required for MAI-1 demonstrate Microsoft's dedication to pushing the boundaries of AI capabilities and its intention to compete on its own. With the involvement of Mustafa Suleyman, a renowned expert in AI, the company is well-positioned to make significant strides in this field.
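Parameter counts of this size translate directly into memory requirements. The sketch below is our own illustration, assuming dense weights stored at 16 bits per parameter (the reported figures above are rumors, and production deployments may quantize further):

```python
# Rough weight-memory footprint for models of the sizes mentioned,
# assuming dense weights at 16 bits (2 bytes) per parameter.
def weight_memory_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Return approximate weight storage in GB for a dense model."""
    return params_billion * 1e9 * bytes_per_param / 1e9

for name, size in [("MAI-1 (reported)", 500), ("Llama/Mistral-class", 70)]:
    print(f"{name}: {size}B params -> ~{weight_memory_gb(size):,.0f} GB of weights")
```

A 500-billion-parameter dense model would need on the order of a terabyte just for its weights, which is why models of this class span many accelerators.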

Jensen Huang Will Discuss AI's Future at NVIDIA GTC 2024

NVIDIA's GTC 2024 AI conference will set the stage for another leap forward in AI. At the heart of this highly anticipated event: the opening keynote by Jensen Huang, NVIDIA's visionary founder and CEO, who speaks on Monday, March 18, at 1 p.m. Pacific, at the SAP Center in San Jose, California.

Planning Your GTC Experience
There are two ways to watch. Register to attend GTC in person to secure a spot for an immersive experience at the SAP Center. The center is a short walk from the San Jose Convention Center, where the rest of the conference takes place. Doors open at 11 a.m., and badge pickup starts at 10:30 a.m. The keynote will also be livestreamed at www.nvidia.com/gtc/keynote/.

CNET Demoted to Untrusted Sources by Wikipedia Editors Due to AI-Generated Content

Once trusted as a staple of technology journalism, CNET has been publicly demoted to the Untrusted Sources list on Wikipedia. CNET has faced public criticism since late 2022 for publishing AI-generated articles without disclosing that humans did not write them. This practice culminated in CNET being demoted from Trusted to Untrusted Sources on Wikipedia, following extensive debates between Wikipedia editors. CNET's reputation first declined in 2020 when it was acquired by publisher Red Ventures, which appeared to prioritize advertising and SEO traffic over editorial standards. However, the AI content scandal accelerated CNET's fall from grace. After discovering the AI-written articles, Wikipedia editors argued that CNET should be removed entirely as a reliable source, citing Red Ventures' pattern of misinformation.

One editor called for targeting Red Ventures as "a spam network." AI-generated content poses challenges familiar from spam bots: machine-created text that is frequently low quality or inaccurate. However, CNET claims it has stopped publishing AI content. This controversy highlights rising concerns about AI-generated text online. Using AI-generated stories might seem attractive because it cuts publishing time; however, such stories usually rank low in Google's search index, as the engine detects and penalizes AI-generated content, possibly because Google's detection models were trained on datasets similar to those used by the models that wrote the text. Lawsuits like The New York Times v. OpenAI also allege that AI companies scraped vast amounts of text without permission. As AI capabilities advance, maintaining information quality on the web will require increased diligence. But demoting once-reputable sites like CNET as trusted sources when they disregard ethics and quality control helps set a necessary precedent. Below, you can see the Wikipedia table about CNET.

Elon Musk Sues OpenAI and Sam Altman for Breach of Founding Contract

Elon Musk, in his individual capacity, has sued Sam Altman, Gregory Brockman, OpenAI, and its affiliate companies for breach of founding contract and for deviating from the founding goal of being a non-profit tasked with developing AI for the benefit of humanity. The lawsuit comes in the wake of OpenAI's relationship with Microsoft, which Musk says compromises that founding contract. Musk alleges breach of contract, breach of fiduciary duty, and unfair business practices against OpenAI, and demands that the company revert to being open-source with all its technology and function as a non-profit.

Musk also requests an injunction to prevent OpenAI and the other defendants from profiting off OpenAI technology. In particular, Musk alleges that GPT-4 isn't open-source, claiming that only OpenAI and Microsoft know its inner workings, and that Microsoft stands to monetize GPT-4 "for a fortune." Microsoft, interestingly, was not named as a defendant in the lawsuit. Elon Musk sat on the original board of OpenAI until his departure in 2018 and is said to have been a key sponsor of the AI acceleration hardware used in OpenAI's pioneering work.

Intel Announces Intel 14A (1.4 nm) and Intel 3T Foundry Nodes, Launches World's First Systems Foundry Designed for the AI Era

Intel Corp. today launched Intel Foundry as a more sustainable systems foundry business designed for the AI era and announced an expanded process roadmap designed to establish leadership into the latter part of this decade. The company also highlighted customer momentum and support from ecosystem partners - including Synopsys, Cadence, Siemens and Ansys - who outlined their readiness to accelerate Intel Foundry customers' chip designs with tools, design flows and IP portfolios validated for Intel's advanced packaging and Intel 18A process technologies.

The announcements were made at Intel's first foundry event, Intel Foundry Direct Connect, where the company gathered customers, ecosystem companies and leaders from across the industry. Among the participants and speakers were U.S. Secretary of Commerce Gina Raimondo, Arm CEO Rene Haas, Microsoft CEO Satya Nadella, OpenAI CEO Sam Altman and others.

Jensen Huang to Unveil Latest AI Breakthroughs at GTC 2024 Conference

NVIDIA today announced it will host its flagship GTC 2024 conference at the San Jose Convention Center from March 18-21. More than 300,000 people are expected to register to attend in person or virtually. NVIDIA founder and CEO Jensen Huang will deliver the keynote from the SAP Center on Monday, March 18, at 1 p.m. Pacific time. It will be livestreamed and available on demand. Registration is not required to view the keynote online. Since Huang first highlighted machine learning in his 2014 GTC keynote, NVIDIA has been at the forefront of the AI revolution. The company's platforms have played a crucial role in enabling AI across numerous domains including large language models, biology, cybersecurity, data center and cloud computing, conversational AI, networking, physics, robotics, and quantum, scientific and edge computing.

The event's 900 sessions and over 300 exhibitors will showcase how organizations are deploying NVIDIA platforms to achieve remarkable breakthroughs across industries, including aerospace, agriculture, automotive and transportation, cloud services, financial services, healthcare and life sciences, manufacturing, retail and telecommunications. "Generative AI has moved to center stage as governments, industries and organizations everywhere look to harness its transformative capabilities," Huang said. "GTC has become the world's most important AI conference because the entire ecosystem is there to share knowledge and advance the state of the art. Come join us."

Jim Keller Offers to Design AI Chips for Sam Altman for Less Than $1 Trillion

In case you missed it, Sam Altman of OpenAI took the Internet by storm late last week with the unveiling of Sora, the generative AI that can conjure up photoreal video clips based on prompts with deadly accuracy. While Altman and his colleagues in the generative AI industry had a ton of fun generating videos based on prompts from the public on X, it became all too clear that the only thing holding back the democratization of generative AI is the volume of AI accelerator chips. Altman wants to solve this by designing his own AI acceleration hardware from the ground up, for which he initially pitched an otherworldly $7 trillion in investment—a sum impossible to raise on the financial markets, and plausible only by "printing money" or through sovereign wealth fund investments.

Jim Keller needs no introduction—the celebrity VLSI architect has spent decades designing number-crunching devices of all shapes and sizes for some of the biggest tech companies out there, including Intel, Apple, and AMD, just to name a few. When, as part of his "are you not entertained?" victory lap, Altman suggested that his vision for the future needs an even larger $8 trillion investment, Keller responded that he could design an AI chip for less than $1 trillion. Does Altman really need several trillion dollars to build a ground-up AI chip at the costs and volumes needed to mainstream AI?

Sora by OpenAI is the Text-to-Video AI Model Beyond Our Wildest Imagination

Sam Altman of OpenAI just unveiled Sora, the all-new text-to-video AI model that works exactly the way science fiction would want such a thing to work—imagine fluid, photorealistic, true-color video clips based entirely on text prompts. Sora is generative AI on an exponentially higher scale than DALL-E, and presumably requires an enormously higher amount of compute power. But to those who can afford to rent a large hardware instance, this means the power to create a video of just about anything. Everything democratizes with time, and in a few years Sora could become the greatest tool for independent content creators, letting them draw up entire worlds using just prompts and green screens. Sora strapped to a mixed-reality headset such as the Apple Vision Pro is basically a Holodeck.

OpenAI Potentially Seeking $5-7 Trillion Investment in Establishment of Fab Network

Various news outlets have been keeping tabs on OpenAI CEO Sam Altman—the AI technology evangelist was reported to be pursuing an ambitious proprietary AI chip project in early 2024. Inside sources pointed to late-January negotiations with important investment figures in the Middle East—many believe that OpenAI's leadership is exploring the idea of establishing its own network of semiconductor production plants. Late last week, The Wall Street Journal followed up on last month's AI industry rumors: "(Altman) has another great ambition: raising trillions of dollars to reshape the global semiconductor industry. The OpenAI chief executive officer is in talks with investors including the United Arab Emirates government to raise funds for a wildly ambitious tech initiative that would boost the world's chip-building capacity, expand its ability to power AI." One anonymous insider reckons that "the project could require raising as much as $5 trillion to $7 trillion."

TSMC is reportedly in the equation—Altman allegedly conducted talks with its top brass last month. The foundry's expertise in cutting-edge fabrication techniques would be of great value, although it may be wary of revealing too many industry secrets given the sheer scale of OpenAI's (reported) aggressive expansion plans. The Wall Street Journal (WSJ) suggests that the embryonic venture is far more "open" than previously reported—a collaborative venture could be established once funding is secured, although Altman & Co. face "significant obstacles" en route. WSJ proposes that the somewhat OpenAI-centric fabrication network is best founded as a joint partnership involving multiple investors, contract chip manufacturers (perhaps TSMC), and energy/power providers. OpenAI appears to be the "primary buyer" of the resulting AI chips, with manufacturing services also offered to other clients. The scale of such an endeavor is put into perspective by WSJ's analysis (via inside sources): "Such a sum of investment would dwarf the current size of the global semiconductor industry. Global sales of chips were $527 billion last year and are expected to rise to $1 trillion annually by 2030. Global sales of semiconductor manufacturing equipment—the costly machinery needed to run chip factories—last year were $100 billion, according to an estimate by the industry group SEMI."

OpenAI Reportedly Talking to TSMC About Custom Chip Venture

OpenAI is reported to be initiating R&D on a proprietary AI processing solution—the research organization's CEO, Sam Altman, has commented on the inefficient operation of data centers running NVIDIA H100 and A100 GPUs. He foresees a future in which his company becomes less reliant on Team Green's off-the-shelf AI crunchers, with a deployment of bespoke AI processors. A short Reuters interview also underlined Altman's desire to find alternative sources of power: "It motivates us to go invest more in (nuclear) fusion." The growth of artificial intelligence industries has put an unprecedented strain on energy providers, so tech firms could be semi-forced into seeking out frugal enterprise hardware.

The Financial Times has followed up on last week's Bloomberg report of OpenAI courting investment partners in the Middle East. FT's news piece alleges that Altman is in talks with billionaire businessman Sheikh Tahnoon bin Zayed al-Nahyan, a very well connected member of the United Arab Emirates Royal Family. OpenAI's leadership is reportedly negotiating with TSMC—The Financial Times alleges that Taiwan's top chip foundry is an ideal manufacturing partner. This revelation contradicts Bloomberg's recent reports of a potential custom OpenAI AI chip venture involving purpose-built manufacturing facilities. The whole project is said to be at an early stage of development, so Altman and his colleagues are most likely exploring a variety of options.

FTC Launches Inquiry into Generative AI Investments and Partnerships

The Federal Trade Commission announced today that it issued orders to five companies requiring them to provide information regarding recent investments and partnerships involving generative AI companies and major cloud service providers. The agency's 6(b) inquiry will scrutinize corporate partnerships and investments with AI providers to build a better internal understanding of these relationships and their impact on the competitive landscape. The compulsory orders were sent to Alphabet, Inc., Amazon.com, Inc., Anthropic PBC, Microsoft Corp., and OpenAI, Inc.

"History shows that new technologies can create new markets and healthy competition. As companies race to develop and monetize AI, we must guard against tactics that foreclose this opportunity," said FTC Chair Lina M. Khan. "Our study will shed light on whether investments and partnerships pursued by dominant companies risk distorting innovation and undermining fair competition."

OpenAI CEO Reportedly Seeking Funds for Purpose-built Chip Foundries

OpenAI CEO Sam Altman had a turbulent winter 2023 career moment, but appears to be going all in on his company's future interests. A Bloomberg report suggests that the tech visionary has initiated a major fundraising initiative for the construction of OpenAI-specific semiconductor production plants. The AI evangelist reckons that his industry will become prevalent enough to demand a dedicated network of manufacturing facilities—the U.S.-based artificial intelligence (AI) research organization is (reportedly) exploring custom AI chip designs. Proprietary AI-focused GPUs and accelerators are no novelty at this point—many top tech companies rely on NVIDIA solutions today but are keen to deploy custom-built hardware in the near future.

OpenAI's popular ChatGPT system relies on NVIDIA H100 and A100 GPUs, but tailor-made alternatives seem to be the desired route for Altman & Co. The "on their own terms" pathway seemingly skips the expected/traditional chip manufacturing process—the big foundries could struggle to keep up with demand for AI-oriented silicon. G42 (an Abu Dhabi-based AI development holding company) and SoftBank Group are mentioned as prime investment partners in OpenAI's fledgling scheme—Bloomberg proposes that Altman's team is negotiating an $8 to $10 billion deal with top brass at G42. OpenAI's planned creation of its own foundry network is certainly a lofty and costly goal—the report does not specify whether existing facilities will be purchased and overhauled, or new plants constructed entirely from scratch.

Microsoft Announces Copilot Pro AI Assistant for Office Suite, Coming as a Subscription

Today, Microsoft launched Copilot Pro, bringing its AI-powered productivity features to mainstream consumers. Available as a $20 per month add-on to Microsoft 365 Personal or Home subscriptions, Copilot Pro injects AI directly into Office apps to help users write faster and work smarter. Key features include AI-generated text suggestions and summaries in Word, automated data analysis and graph creation in Excel, and even entire PowerPoint presentations generated from a text prompt. Copilot can also suggest email replies in Outlook and assist with other productivity tasks. What sets Copilot Pro apart from the free Copilot assistant is its tight integration with Office apps and its ability to summarize long Word documents and generate entire decks. The Pro version also offers priority access to the latest AI models from OpenAI, Microsoft's key partner. The regular Copilot only serves GPT-4 during non-peak times, while Copilot Pro provides access to GPT-4 and GPT-4 Turbo at all times.

While regular Copilot is available to anyone with a Microsoft account, Copilot Pro is reserved for those with active Microsoft 365 subscriptions. However, it brings previously business-only AI capabilities like PowerPoint generation to mainstream users. Divya Kumar, Microsoft's global head of marketing for search and AI, told The Verge that Copilot Pro will continue to expand over time: "Given that pattern we've been in, that rolling thunder, you can expect we're going to do the exact same thing for Copilot Pro." So, while the initial launch focuses on core Office apps, expect Copilot Pro to pop up across Microsoft's software and services eventually. For now, it aims to boost productivity for Office power users willing to pay a little extra for AI assistance.

AMD Showcases Growing Momentum for AMD Powered AI Solutions from the Data Center to PCs

Today at the "Advancing AI" event, AMD was joined by industry leaders including Microsoft, Meta, Oracle, Dell Technologies, HPE, Lenovo, Supermicro, Arista, Broadcom and Cisco to showcase how these companies are working with AMD to deliver advanced AI solutions spanning from cloud to enterprise and PCs. AMD launched multiple new products at the event, including the AMD Instinct MI300 Series data center AI accelerators, ROCm 6 open software stack with significant optimizations and new features supporting Large Language Models (LLMs) and Ryzen 8040 Series processors with Ryzen AI.

"AI is the future of computing and AMD is uniquely positioned to power the end-to-end infrastructure that will define this AI era, from massive cloud installations to enterprise clusters and AI-enabled intelligent embedded devices and PCs," said AMD Chair and CEO Dr. Lisa Su. "We are seeing very strong demand for our new Instinct MI300 GPUs, which are the highest-performance accelerators in the world for generative AI. We are also building significant momentum for our data center AI solutions with the largest cloud companies, the industry's top server providers, and the most innovative AI startups, who we are working closely with to rapidly bring Instinct MI300 solutions to market that will dramatically accelerate the pace of innovation across the entire AI ecosystem."