News Posts matching #OpenAI


Jensen Huang Will Discuss AI's Future at NVIDIA GTC 2024

NVIDIA's GTC 2024 AI conference will set the stage for another leap forward in AI. At the heart of this highly anticipated event: the opening keynote by Jensen Huang, NVIDIA's visionary founder and CEO, who speaks on Monday, March 18, at 1 p.m. Pacific, at the SAP Center in San Jose, California.

Planning Your GTC Experience
There are two ways to watch. Register to attend GTC in person to secure a spot for an immersive experience at the SAP Center. The center is a short walk from the San Jose Convention Center, where the rest of the conference takes place. Doors open at 11 a.m., and badge pickup starts at 10:30 a.m. The keynote will also be livestreamed at www.nvidia.com/gtc/keynote/.

CNET Demoted to Untrusted Sources by Wikipedia Editors Due to AI-Generated Content

Once trusted as a staple of technology journalism, CNET has been publicly demoted to Untrusted Sources on Wikipedia. CNET has faced public criticism since late 2022 for publishing AI-generated articles without disclosing that humans did not write them. The practice culminated in CNET being demoted from Trusted to Untrusted Sources on Wikipedia, following extensive debate among Wikipedia editors. CNET's reputation first declined in 2020, when it was acquired by publisher Red Ventures, which appeared to prioritize advertising and SEO traffic over editorial standards. The AI content scandal, however, accelerated CNET's fall from grace. After discovering the AI-written articles, Wikipedia editors argued that CNET should be removed entirely as a reliable source, citing Red Ventures' pattern of misinformation.

One editor called for targeting Red Ventures as "a spam network." AI-generated content poses challenges familiar from spam bots: machine-created text that is frequently low quality or inaccurate. CNET, however, claims it has stopped publishing AI content. The controversy highlights rising concerns about AI-generated text online. Publishing AI-generated stories may look attractive because it cuts production time; however, such stories tend to rank poorly in Google search, as the engine detects and penalizes AI-generated content, possibly because Google's detection algorithms were trained on the same datasets as the models that wrote the text. Lawsuits such as The New York Times v. OpenAI also allege that AI companies scraped vast amounts of text without permission. As AI capabilities advance, maintaining information quality on the web will require increased diligence. But demoting once-reputable sites like CNET when they disregard ethics and quality control helps set a necessary precedent. Below, you can see the Wikipedia table about CNET.

Elon Musk Sues OpenAI and Sam Altman for Breach of Founding Contract

Elon Musk, in his individual capacity, has sued Sam Altman, Gregory Brockman, OpenAI, and its affiliate companies for breach of founding contract and for deviating from the founding goal of being a non-profit tasked with developing AI for the benefit of humanity. The lawsuit comes in the wake of OpenAI's relationship with Microsoft, which Musk says compromises that founding contract. Musk alleges breach of contract, breach of fiduciary duty, and unfair business practices against OpenAI, and demands that the company revert to being open-source with all its technology and function as a non-profit.

Musk also requests an injunction to prevent OpenAI and the other defendants from profiting off OpenAI technology. In particular, Musk alleges that GPT-4 isn't open-source, claiming that only OpenAI and Microsoft know its inner workings, and that Microsoft stands to monetize GPT-4 "for a fortune." Microsoft, interestingly, was not named as a defendant in the lawsuit. Elon Musk sat on the original board of OpenAI until his departure in 2018, and is said to have been a key sponsor of the AI acceleration hardware used in OpenAI's pioneering work.

Intel Announces Intel 14A (1.4 nm) and Intel 3T Foundry Nodes, Launches World's First Systems Foundry Designed for the AI Era

Intel Corp. today launched Intel Foundry as a more sustainable systems foundry business designed for the AI era and announced an expanded process roadmap designed to establish leadership into the latter part of this decade. The company also highlighted customer momentum and support from ecosystem partners - including Synopsys, Cadence, Siemens and Ansys - who outlined their readiness to accelerate Intel Foundry customers' chip designs with tools, design flows and IP portfolios validated for Intel's advanced packaging and Intel 18A process technologies.

The announcements were made at Intel's first foundry event, Intel Foundry Direct Connect, where the company gathered customers, ecosystem companies and leaders from across the industry. Among the participants and speakers were U.S. Secretary of Commerce Gina Raimondo, Arm CEO Rene Haas, Microsoft CEO Satya Nadella, OpenAI CEO Sam Altman and others.

Jensen Huang to Unveil Latest AI Breakthroughs at GTC 2024 Conference

NVIDIA today announced it will host its flagship GTC 2024 conference at the San Jose Convention Center from March 18-21. More than 300,000 people are expected to register to attend in person or virtually. NVIDIA founder and CEO Jensen Huang will deliver the keynote from the SAP Center on Monday, March 18, at 1 p.m. Pacific time. It will be livestreamed and available on demand. Registration is not required to view the keynote online. Since Huang first highlighted machine learning in his 2014 GTC keynote, NVIDIA has been at the forefront of the AI revolution. The company's platforms have played a crucial role in enabling AI across numerous domains including large language models, biology, cybersecurity, data center and cloud computing, conversational AI, networking, physics, robotics, and quantum, scientific and edge computing.

The event's 900 sessions and over 300 exhibitors will showcase how organizations are deploying NVIDIA platforms to achieve remarkable breakthroughs across industries, including aerospace, agriculture, automotive and transportation, cloud services, financial services, healthcare and life sciences, manufacturing, retail and telecommunications. "Generative AI has moved to center stage as governments, industries and organizations everywhere look to harness its transformative capabilities," Huang said. "GTC has become the world's most important AI conference because the entire ecosystem is there to share knowledge and advance the state of the art. Come join us."

Jim Keller Offers to Design AI Chips for Sam Altman for Less Than $1 Trillion

In case you missed it, Sam Altman of OpenAI took the Internet by storm late last week with the unveiling of Sora, the generative AI that can conjure up photoreal video clips based on prompts, with deadly accuracy. While Altman and his colleagues in the generative AI industry had a ton of fun generating videos based on prompts from the public on X, it became all too clear that the only thing holding back the democratization of generative AI is the volume of AI accelerator chips. Altman wants to solve this by designing his own AI acceleration hardware from the ground up, for which he initially pitched an otherworldly $7 trillion in investment: something impossible through the financial markets, and possible only by "printing money" or through sovereign wealth fund investments.

Jim Keller needs no introduction: the celebrity VLSI architect has for decades designed number-crunching devices of all shapes and sizes for some of the biggest tech companies out there, including Intel, Apple, and AMD, to name a few. When, as part of his "are you not entertained?" victory lap, Altman suggested that his vision for the future needs an even larger $8 trillion investment, Keller responded that he could design an AI chip for less than $1 trillion. Does Altman really need several trillion dollars to build a ground-up AI chip at the costs and volumes needed to mainstream AI?

Sora by OpenAI is the Text-to-Video AI Model Beyond Our Wildest Imagination

Sam Altman of OpenAI just unveiled Sora, the all-new text-to-video AI model that works exactly the way science fiction would want such a thing to work: fluid, photorealistic, true-color video clips generated entirely from text prompts. Sora is generative AI on an exponentially higher scale than DALL-E, and presumably requires an enormously higher amount of compute power. But to those who can afford to rent a large hardware instance, this means the power to create a video of just about anything. Everything democratizes with time, and in a few years Sora could become the greatest tool for independent content creators, who could draw up entire worlds using just prompts and green screens. Sora strapped to a mixed-reality headset such as the Apple Vision Pro is basically a Holodeck.

OpenAI Potentially Seeking $5-7 Trillion Investment in Establishment of Fab Network

Various news outlets have been keeping tabs on OpenAI's CEO—Sam Altman—the AI technology evangelist was reported to be pursuing an ambitious proprietary AI chip project in early 2024. Inside sources pointed to late-January negotiations with important investment personnel in the Middle East—many believe that OpenAI leadership is exploring the idea of establishing its own network of semiconductor production plants. Late last week, The Wall Street Journal followed up on last month's AI industry rumors: "(Altman) has another great ambition: raising trillions of dollars to reshape the global semiconductor industry. The OpenAI chief executive officer is in talks with investors including the United Arab Emirates government to raise funds for a wildly ambitious tech initiative that would boost the world's chip-building capacity, expand its ability to power AI." One anonymous insider reckons that "the project could require raising as much as $5 trillion to $7 trillion."

TSMC is reportedly in the equation: Altman allegedly held talks with its top brass last month. The foundry's expertise in cutting-edge fabrication techniques would be of great value, although revealing too many industry secrets carries risk given the sheer scale of OpenAI's (reported) aggressive expansion plans. The Wall Street Journal (WSJ) suggests that the embryonic venture is far more "open" than previously reported: a collaborative venture could be established once funding is secured, although Altman & Co. face "significant obstacles" en route. WSJ proposes that the somewhat OpenAI-centric fabrication network is best founded through a joint partnership involving multiple investors, contract chip manufacturers (perhaps TSMC), and energy/power providers. OpenAI appears to be the "primary buyer" of the resulting AI chips, with manufacturing services also offered to other clients. The scale of such an endeavor is put into perspective by WSJ's analysis (via inside sources): "Such a sum of investment would dwarf the current size of the global semiconductor industry. Global sales of chips were $527 billion last year and are expected to rise to $1 trillion annually by 2030. Global sales of semiconductor manufacturing equipment—the costly machinery needed to run chip factories—last year were $100 billion, according to an estimate by the industry group SEMI."
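The mismatch between the rumored raise and the markets it would reshape is easy to quantify. A quick back-of-the-envelope sketch, using only the WSJ figures quoted above, shows the multiples involved:

```python
# Compare the rumored $5-7 trillion raise against the WSJ-quoted market sizes.
# All figures come straight from the article; only the ratio math is new.

raise_low, raise_high = 5e12, 7e12   # rumored fundraising range (USD)
chip_sales_2023 = 527e9              # global chip sales last year
chip_sales_2030 = 1e12               # projected annual chip sales by 2030
fab_equipment_2023 = 100e9           # fab equipment sales (SEMI estimate)

for name, market in [("2023 chip sales", chip_sales_2023),
                     ("projected 2030 chip sales", chip_sales_2030),
                     ("2023 fab equipment sales", fab_equipment_2023)]:
    print(f"Raise vs. {name}: {raise_low / market:.1f}x to {raise_high / market:.1f}x")
```

Even against the projected 2030 market, the low end of the range is five times annual global chip sales, which is why the sum is described as dwarfing the entire industry.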

OpenAI Reportedly Talking to TSMC About Custom Chip Venture

OpenAI is reported to be initiating R&D on a proprietary AI processing solution. The research organization's CEO, Sam Altman, has commented on the inefficient operation of data centers running NVIDIA H100 and A100 GPUs. He foresees a future in which his company becomes less reliant on Team Green's off-the-shelf AI crunchers, thanks to a deployment of bespoke AI processors. A short Reuters interview also underlined Altman's desire to find alternative sources of power: "It motivates us to go invest more in (nuclear) fusion." The growth of artificial intelligence industries has put an unprecedented strain on energy providers, so tech firms could be semi-forced into seeking out frugal enterprise hardware.

The Financial Times has followed up on last week's Bloomberg report of OpenAI courting investment partners in the Middle East. FT's news piece alleges that Altman is in talks with billionaire businessman Sheikh Tahnoon bin Zayed al-Nahyan, a very well connected member of the United Arab Emirates Royal Family. OpenAI's leadership is reportedly negotiating with TSMC—The Financial Times alleges that Taiwan's top chip foundry is an ideal manufacturing partner. This revelation contradicts Bloomberg's recent reports of a potential custom OpenAI AI chip venture involving purpose-built manufacturing facilities. The whole project is said to be at an early stage of development, so Altman and his colleagues are most likely exploring a variety of options.

FTC Launches Inquiry into Generative AI Investments and Partnerships

The Federal Trade Commission announced today that it issued orders to five companies requiring them to provide information regarding recent investments and partnerships involving generative AI companies and major cloud service providers. The agency's 6(b) inquiry will scrutinize corporate partnerships and investments with AI providers to build a better internal understanding of these relationships and their impact on the competitive landscape. The compulsory orders were sent to Alphabet, Inc., Amazon.com, Inc., Anthropic PBC, Microsoft Corp., and OpenAI, Inc.

"History shows that new technologies can create new markets and healthy competition. As companies race to develop and monetize AI, we must guard against tactics that foreclose this opportunity," said FTC Chair Lina M. Khan. "Our study will shed light on whether investments and partnerships pursued by dominant companies risk distorting innovation and undermining fair competition."

OpenAI CEO Reportedly Seeking Funds for Purpose-built Chip Foundries

OpenAI CEO Sam Altman had a turbulent winter 2023 career moment, but appears to be going all in on his company's future interests. A Bloomberg report suggests that the tech visionary has initiated a major fundraising initiative for the construction of OpenAI-specific semiconductor production plants. The AI evangelist reckons that his industry will become prevalent enough to demand a dedicated network of manufacturing facilities; the U.S.-based artificial intelligence (AI) research organization is (reportedly) exploring custom artificial intelligence chip designs. Proprietary AI-focused GPUs and accelerators are no novelty at this point: many top tech companies rely on NVIDIA solutions, but are keen to deploy custom-built hardware in the near future.

OpenAI's popular ChatGPT system relies on NVIDIA H100 and A100 GPUs, but tailor-made alternatives seem to be the desired route for Altman & Co. The "on their own terms" pathway seemingly skips the traditional chip manufacturing process, since the big foundries could struggle to keep up with demand for AI-oriented silicon. G42 (an Abu Dhabi-based AI development holding company) and SoftBank Group are mentioned as prime investment partners in OpenAI's fledgling scheme; Bloomberg proposes that Altman's team is negotiating an $8 to $10 billion deal with top brass at G42. OpenAI's planned creation of its own foundry network is certainly a lofty and costly goal, and the report does not specify whether existing facilities would be purchased and overhauled, or new plants constructed entirely from scratch.

Microsoft Announces Copilot Pro AI Assistant for Office Suite, Coming as a Subscription

Today, Microsoft launched Copilot Pro, bringing its AI-powered productivity features to mainstream consumers. Available as a $20 per month add-on to Microsoft 365 Personal or Home subscriptions, Copilot Pro injects AI directly into Office apps to help users write faster and work smarter. The key features include AI-generated text suggestions and summaries in Word, automated data analysis and graph creation in Excel, and even entire PowerPoint presentation generation from a text prompt. Copilot can also suggest email replies in Outlook and assist with other productivity tasks. What sets Copilot Pro apart from the free Copilot assistant is its tight integration with Office apps and ability to summarize long Word documents and generate entire decks. The Pro version also offers priority access to the latest AI models from OpenAI, Microsoft's key partner. The regular Copilot uses GPT-4 during non-priority times, while the Copilot Pro aims to serve its users with GPT-4 and GPT-4 Turbo all the time.

While regular Copilot is available to anyone with a Microsoft account, Copilot Pro is reserved for those with active Microsoft 365 subscriptions. However, it brings previously business-only AI capabilities like PowerPoint generation to mainstream users. Divya Kumar, Microsoft's global head of marketing for search and AI, stated in a discussion with The Verge that Copilot Pro will continue to expand over time: "Given that pattern we've been in, that rolling thunder, you can expect we're going to do the exact same thing for Copilot Pro." So, while the initial launch focuses on core Office apps, expect Copilot Pro to pop up across Microsoft's software and services eventually. For now, it aims to boost productivity for Office power users willing to pay a little extra for AI assistance.

AMD Showcases Growing Momentum for AMD Powered AI Solutions from the Data Center to PCs

Today at the "Advancing AI" event, AMD was joined by industry leaders including Microsoft, Meta, Oracle, Dell Technologies, HPE, Lenovo, Supermicro, Arista, Broadcom and Cisco to showcase how these companies are working with AMD to deliver advanced AI solutions spanning from cloud to enterprise and PCs. AMD launched multiple new products at the event, including the AMD Instinct MI300 Series data center AI accelerators, ROCm 6 open software stack with significant optimizations and new features supporting Large Language Models (LLMs) and Ryzen 8040 Series processors with Ryzen AI.

"AI is the future of computing and AMD is uniquely positioned to power the end-to-end infrastructure that will define this AI era, from massive cloud installations to enterprise clusters and AI-enabled intelligent embedded devices and PCs," said AMD Chair and CEO Dr. Lisa Su. "We are seeing very strong demand for our new Instinct MI300 GPUs, which are the highest-performance accelerators in the world for generative AI. We are also building significant momentum for our data center AI solutions with the largest cloud companies, the industry's top server providers, and the most innovative AI startups, who we are working closely with to rapidly bring Instinct MI300 solutions to market that will dramatically accelerate the pace of innovation across the entire AI ecosystem."

Sam Altman to Return as OpenAI CEO, Days After Board's Decision for Removal

Over the past few days, the OpenAI drama has continued to reveal more about the relationship between OpenAI's board, employees, and even the executive layer of the company. As we have covered previously, the OpenAI board fired the company's CEO, Sam Altman, last Friday, November 17. Over the weekend, Mr. Altman was approached by Microsoft CEO Satya Nadella and offered a role leading an AI unit within the Redmond giant; however, the employment was never finalized. Today, we learned that Sam Altman has reached an agreement with the board to return to OpenAI, along with Greg Brockman and many other OpenAI employees.

After starting a wave of posts on the X/Twitter platform saying, "OpenAI is nothing without its people," the employees of OpenAI signed a letter requesting that the board bring back Sam Altman. With the deal now in place, employees are expected to continue working for OpenAI under Sam Altman's leadership. The new initial board of OpenAI is composed of Bret Taylor (Chair), Larry Summers, and Adam D'Angelo. Sam Altman said in a post on X: "i love openai, and everything i've done over the past few days has been in service of keeping this team and its mission together. when i decided to join msft on sun evening, it was clear that was the best path for me and the team. with the new board and w satya's support, i'm looking forward to returning to openai, and building on our strong partnership with msft." Microsoft CEO Satya Nadella added: "We are encouraged by the changes to the OpenAI board. We believe this is a first essential step on a path to more stable, well-informed, and effective governance. Sam, Greg, and I have talked and agreed they have a key role to play along with the OAI leadership team in ensuring OAI continues to thrive and build on its mission. We look forward to building on our strong partnership and delivering the value of this next generation of AI to our customers and partners."

OpenAI Names Emmett Shear as CEO, Sam Altman Joins Microsoft and Drags Hundreds of Employees With Him

On Friday, the AI world was taken by storm as the board of directors of OpenAI, the maker of ChatGPT and other AI software, fired its CEO, Sam Altman. According to multiple sources reporting on the state of OpenAI, Sam Altman was stunned by the board's decision to remove him. The company published a public statement with many remarks, primarily informing the public that "Mr. Altman's departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI."

After Sam Altman's departure, Greg Brockman, president and co-founder of OpenAI, announced that he was also leaving the company. Satya Nadella, CEO of Microsoft, and other investors stepped in to lead negotiations between the OpenAI board and Sam Altman over his return to the position of CEO of the non-profit company. However, according to The Information, Sam Altman will not be returning as CEO; instead, Emmett Shear will be appointed interim CEO of OpenAI. It is also reported that Sam Altman's departure is being followed by that of three senior researchers, Jakub Pachocki, Aleksander Madry, and Szymon Sidor, who have left the company to join Sam Altman's next adventure. They were willing to return to OpenAI if Mr. Altman did; however, with Emmett Shear now appointed interim CEO, the company is in disarray, with the employment of its senior staff in question.

Update 15:30 UTC: Sam Altman has joined Microsoft alongside Greg Brockman to lead Microsoft's advanced AI research efforts, with hundreds of OpenAI staff wanting to work on projects under Sam Altman's lead. Of OpenAI's roughly 700 staff members, 505 reportedly plan to follow Mr. Altman and Mr. Brockman under Microsoft's wing.

Microsoft Introduces 128-Core Arm CPU for Cloud and Custom AI Accelerator

During its Ignite conference, Microsoft introduced a duo of custom-designed silicon chips made to accelerate AI and excel in cloud workloads. First of the two is Microsoft's Azure Cobalt 100 CPU, a 128-core design implementing the 64-bit Armv9 instruction set in a cloud-native package that is set to become part of Microsoft's offerings. While there aren't many details regarding the configuration, the company claims a performance uplift of up to 40% over the current generation of Arm servers running on the Azure cloud. The SoC uses Arm's Neoverse CSS platform customized for Microsoft, presumably with Arm Neoverse N2 cores.

The next and hottest topic in the server space is AI acceleration, which is needed for running today's large language models. Microsoft hosts OpenAI's ChatGPT, Microsoft's Copilot, and many other AI services. To help them run as fast as possible, Microsoft's Project Athena now bears the name Maia 100 AI accelerator, manufactured on TSMC's 5 nm process. It features 105 billion transistors and supports various MX data formats, even those smaller than 8-bit, for maximum performance. It is currently being tested on GPT-3.5 Turbo, and we have yet to see performance figures and comparisons with competing hardware from NVIDIA, like the H100/H200, and from AMD, with the MI300X. The Maia 100 has an aggregate bandwidth of 4.8 Terabits per accelerator and uses a custom Ethernet-based networking protocol for scaling. These chips are expected to appear in Microsoft data centers early next year, and we hope to get some performance numbers soon.

NVIDIA Announces up to 5x Faster TensorRT-LLM for Windows, and ChatGPT API-like Interface

Even as CPU vendors work to mainstream accelerated AI for client PCs, and Microsoft sets the pace for more AI in everyday applications with the Windows 11 23H2 Update, NVIDIA is out there reminding you that every GeForce RTX GPU is an AI accelerator. This is thanks to its Tensor cores and the SIMD muscle of the ubiquitous CUDA cores. NVIDIA has been making these for over five years now and has an install base of over 100 million. The company is hence focusing on bringing generative AI acceleration to more client- and enthusiast-relevant use cases, such as large language models.

At the Microsoft Ignite event, NVIDIA announced new optimizations, models, and resources to bring accelerated AI to everyone with an NVIDIA GPU that meets the hardware requirements. To begin with, the company introduced an update to TensorRT-LLM for Windows, a library that leverages the NVIDIA RTX architecture to accelerate large language models (LLMs). The new TensorRT-LLM version 0.6.0 will release later this month and improves LLM inference performance by up to 5x in tokens per second, compared to the initial TensorRT-LLM release from October 2023. In addition, TensorRT-LLM 0.6.0 introduces support for popular LLMs, including Mistral 7B and Nemotron-3 8B. Accelerating these two will require a GeForce RTX 30-series "Ampere" or 40-series "Ada" GPU with at least 8 GB of memory.

OpenAI Could Make Custom Chips to Power Next-Generation AI Models

OpenAI, the company behind ChatGPT and the GPT-4 large language model, is reportedly exploring the possibility of creating custom silicon to power its next-generation AI models. According to Reuters, insider sources have even alluded to the firm evaluating potential acquisitions of chip design firms. While a final decision is yet to be made, conversations dating back to last year highlight OpenAI's struggle with the growing scarcity and escalating cost of AI chips, with NVIDIA being its primary supplier. OpenAI CEO Sam Altman has been rather vocal about the shortage of GPUs, a sector dominated by NVIDIA, which controls an astounding 80% of the global market for AI-optimized chips.

Back in 2020, OpenAI banked on a colossal supercomputer crafted by Microsoft, a significant investor in OpenAI, which harnesses the power of 10,000 NVIDIA GPUs. This setup is instrumental in driving the operations of ChatGPT which, per Bernstein analyst Stacy Rasgon, comes with a hefty price tag: each interaction with ChatGPT is estimated to cost around 4 cents. Drawing a comparison with Google search, if ChatGPT queries ever grew to a mere tenth of Google's search volume, the initial GPU investment would skyrocket to an overwhelming $48.1 billion, with a recurring annual expenditure of approximately $16 billion for sustained operations. When invited to comment, OpenAI declined to provide a statement. The potential entry into the world of custom silicon signals a strategic move toward greater self-reliance and cost optimization, so that further development of AI can be sustained.
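Those headline numbers can be sanity-checked with simple arithmetic. The sketch below uses the article's 4-cent-per-query estimate; the assumed Google search volume (roughly 8.5 billion queries per day) is a commonly cited figure that is not in the article, so treat the result as an order-of-magnitude check rather than Rasgon's actual model:

```python
# Order-of-magnitude check on the annual cost of serving a ChatGPT that
# handles one tenth of Google's search volume. The per-query cost is from
# the article; the daily Google query volume is an outside assumption.

cost_per_query = 0.04            # USD per ChatGPT interaction (Rasgon's estimate)
google_queries_per_day = 8.5e9   # assumed daily Google search volume, NOT from the article
chatgpt_share = 0.10             # "a mere tenth" of Google's volume

chatgpt_queries_per_day = google_queries_per_day * chatgpt_share
annual_serving_cost = chatgpt_queries_per_day * cost_per_query * 365

print(f"ChatGPT queries/day: {chatgpt_queries_per_day:.2e}")
print(f"Annual serving cost: ${annual_serving_cost / 1e9:.1f} billion")
```

Under these assumptions the result lands around $12 billion a year, the same ballpark as the roughly $16 billion recurring figure cited above; the gap presumably reflects differences in the analyst's query-volume and utilization assumptions.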

Run AI on Your PC? NVIDIA GeForce Users Are Ahead of the Curve

Generative AI is no longer just for tech giants. With GeForce, it's already at your fingertips. Gone are the days when AI was the domain of sprawling data centers or elite researchers. For GeForce RTX users, AI is now running on your PC. It's personal, enhancing every keystroke, every frame and every moment. Gamers are already enjoying the benefits of AI in over 300 RTX games. Meanwhile, content creators have access to over 100 RTX creative and design apps, with AI enhancing everything from video and photo editing to asset generation. And for GeForce enthusiasts, it's just the beginning. RTX is the platform for today and the accelerator that will power the AI of tomorrow.

How Did AI and Gaming Converge?
NVIDIA pioneered the integration of AI and gaming with DLSS, a technique that uses AI to generate pixels in video games automatically and which has increased frame rates by up to 4x. And with the recent introduction of DLSS 3.5, NVIDIA has enhanced the visual quality in some of the world's top titles, setting a new standard for visually richer and more immersive gameplay. But NVIDIA's AI integration doesn't stop there. Tools like RTX Remix empower game modders to remaster classic content using high-quality textures and materials generated by AI.

OpenAI Degrades GPT-4 Performance While GPT-3.5 Gets Better

When OpenAI announced its GPT-4 model, it first became a part of ChatGPT, behind the paywall for premium users. GPT-4 is the latest installment in the Generative Pre-trained Transformer (GPT) family of Large Language Models (LLMs), and aims to be a more capable version of the GPT-3.5 that initially powered ChatGPT. However, it seems that GPT-4's performance has been steadily dropping since its introduction. Many users noted the regression, and now researchers from Stanford University and UC Berkeley have benchmarked the model, comparing its performance in March 2023 against its performance in June 2023 on tasks like solving math problems, visual reasoning, code generation, and answering sensitive questions.

The results? The paper shows that GPT-4's performance degraded significantly across all the tasks. This could be attributed to efforts to improve stability, lower the massive compute demand, and more. Unexpectedly, GPT-3.5 saw a significant uplift over the same period. Below, you can see the examples benchmarked by the researchers, which compare GPT-4 and GPT-3.5 performance in all cases.

NVIDIA Espouses Generative AI for Improved Productivity Across Industries

A watershed moment on Nov. 22, 2022, was mostly virtual, yet it shook the foundations of nearly every industry on the planet. On that day, OpenAI released ChatGPT, the most advanced artificial intelligence chatbot ever developed. This set off demand for generative AI applications that help businesses become more efficient, from providing consumers with answers to their questions to accelerating the work of researchers as they seek scientific breakthroughs, and much, much more.

Businesses that previously dabbled in AI are now rushing to adopt and deploy the latest applications. Generative AI—the ability of algorithms to create new text, images, sounds, animations, 3D models and even computer code—is moving at warp speed, transforming the way people work and play. By employing large language models (LLMs) to handle queries, the technology can dramatically reduce the time people devote to manual tasks like searching for and compiling information.

OpenAI Considers Exit From Europe - Faces Planned Legislation from Regulators

OpenAI's CEO, Sam Altman, is currently exploring the UK and Europe on a PR-related "mini" world tour, and protesters have been following these proceedings with much interest. UK news outlets have reported that a demonstration took place outside of a university building in London yesterday, where the UCL Events organization hosted Altman as part of a fireside discussion about the benefits and problems relating to advanced AI systems. Attendees noted that Altman expressed optimism about AI's potential for the creation of more jobs and reduction in inequality - despite calls for a major pause on development. He also visited 10 Downing Street during the British leg of his PR journey - alongside other AI company leaders - to talk about potential risks (originating from his industry) with the UK's prime minister. Discussed topics were reported to include national security, existential threats and disinformation.

At the UCL event, Altman touched upon his recent meetings with European regulators, who are drafting advanced legislation that could lead to laws targeted at the AI industry. He said that his company is "gonna try to comply" with these potential new rules and agreed that some form of regulation is necessary: "something between the traditional European approach and the traditional US approach" would be preferred. He took issue with the potential branding of large AI models (such as OpenAI's ChatGPT and GPT-4) as "high risk" under the European Union's AI Act provisions: "Either we'll be able to solve those requirements or not...If we can comply, we will, and if we can't, we'll cease operating… We will try. But there are technical limits to what's possible."

"Godfather of AI" Geoffrey Hinton Departs Google, Voices Concern Over Dangers of AI

Geoffrey Hinton, the British-Canadian cognitive psychologist and computer scientist who received the 2018 Turing Award for his work on deep learning, has departed the Google Brain team after a decade-long tenure. His research on AI and neural networks, dating back to the 1980s, helped shape the current landscape of deep learning, neural processing, and artificial intelligence algorithms through direct and indirect contributions over the years. AlexNet, designed and developed in 2012 in collaboration with his students Alex Krizhevsky and Ilya Sutskever, forms the backbone of modern computer vision and the AI image recognition used today in generative AI. Hinton joined Google when the company won the bid for the tiny startup he and his two students formed in the months following AlexNet's reveal. Sutskever left Google in 2015 to become co-founder and chief scientist of OpenAI, the creator of ChatGPT and one of Google's most prominent competitors.

In an interview with The New York Times, Hinton said that he quit his position at Google so that he could speak freely about the risks of AI, and that a part of him regrets his life's work in the field. He said that during his time there, Google acted as a "proper steward" of AI development and was careful about releasing anything that might be harmful. His view of the industry shifted within the last year as Microsoft's Bing Chat took aim at Google's core business, web search, leading Google to respond with Bard in a manner more reactive than deliberate. The concern is that as these companies battle for AI supremacy, they won't take proper precautions against bad-faith actors using the technologies to flood the internet with false photos, text, and even videos, until the average person can no longer tell what is real and what was manufactured from an AI prompt.

Google Bard AI Chatbot Smart Enough to Assist in Software Coding

Alphabet Incorporated's Google AI division has today revealed a planned update for its Bard conversational artificial intelligence chatbot. The experimental generative AI application will become capable of assisting people in writing computer code; the American multinational technology company hopes that Bard will be of great help in software development. Paige Bailey, a group product manager at Google Research, introduced the upcoming changes: "Since we launched Bard, our experiment that lets you collaborate with generative AI, coding has been one of the top requests we've received from our users. As a product lead in Google Research - and a passionate engineer who still programs every day - I'm excited that today we're updating Bard to include that capability."

The Bard chatbot was made available, on a trial basis, to users in the USA and UK last month. Google's AI team is reported to be under great pressure to advance Bard into a suitably powerful state in order to compete with its closest rival - Microsoft Corporation. The Redmond-based giant has invested heavily in OpenAI's industry-leading ChatGPT application. Google's latest volley against its rivals shows that Bard has become very sophisticated - so much so that the app is able to chew through a variety of programming languages. Bailey outlines these features in the company's latest blog post: "Starting now, Bard can help with programming and software development tasks, including code generation, debugging and code explanation. We're launching these capabilities in more than 20 programming languages including C++, Go, Java, JavaScript, Python and TypeScript. And you can easily export Python code to Google Colab - no copy and paste required." Critics of AI-driven large language models have posited that the technology could eliminate humans from the job market - it will be interesting to observe the coder community's reaction to Google's marketing of Bard as a helpful tool in software development.
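To illustrate the kind of task Bailey describes (code generation paired with a plain-language explanation), here is a short, hypothetical Python snippet of the sort a coding assistant might produce; the prompt and function are invented for illustration, not taken from Bard's output.

```python
# Hypothetical example of assistant-generated code for the prompt
# "write a function that checks whether a string is a palindrome".

def is_palindrome(text: str) -> bool:
    """Return True if `text` reads the same forwards and backwards.

    Case and non-alphanumeric characters are ignored, so
    "A man, a plan, a canal: Panama" counts as a palindrome.
    """
    # Keep only letters and digits, lowercased, then compare
    # the cleaned string against its reverse.
    cleaned = "".join(ch.lower() for ch in text if ch.isalnum())
    return cleaned == cleaned[::-1]


print(is_palindrome("A man, a plan, a canal: Panama"))  # True
print(is_palindrome("Bard"))                            # False
```

Debugging and code-explanation requests, the other capabilities Bailey mentions, would work on snippets like this one: the assistant annotates or corrects code the user supplies rather than generating it from scratch.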

Gigabyte Extends Its Leading GPU Portfolio of Servers

Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance servers, server motherboards, and workstations, today announced a lineup of powerful GPU-centric servers with the latest AMD and Intel CPUs, including NVIDIA HGX H100 servers with both 4-GPU and 8-GPU modules. With growing interest in HPC and AI applications, specifically generative AI (GAI), this breed of server relies heavily on GPU resources to tackle compute-heavy workloads that handle large amounts of data. With the advent of OpenAI's ChatGPT and other AI chatbots, large GPU clusters are being deployed with system-level optimization to train large language models (LLMs). These LLMs can be trained and run on GIGABYTE's new design-optimized systems, which offer a high level of customization based on users' workloads and requirements.

The GIGABYTE G-series servers are built first and foremost to support dense GPU compute and the latest PCIe technology. Starting with the 2U servers, the new G293 servers can support up to 8 dual-slot GPUs or 16 single-slot GPUs, depending on the server model. For the ultimate in CPU and GPU performance, the 4U G493 servers offer plenty of networking options and storage configurations alongside support for eight (Gen 5 x16) GPUs. And for the highest level of GPU compute for HPC and AI, the G393 & G593 series support NVIDIA H100 Tensor Core GPUs. All of these new dual-socket servers are designed for either 4th Gen AMD EPYC processors or 4th Gen Intel Xeon Scalable processors.