News Posts matching #AWS

AMD Details New EPYC CPUs, Next-Generation AMD Instinct Accelerator, and Networking Portfolio for Cloud and Enterprise

Today, at the "Data Center and AI Technology Premiere," AMD announced the products, strategy, and ecosystem partners that will shape the future of computing, highlighting the next phase of data center innovation. AMD was joined on stage by executives from Amazon Web Services (AWS), Citadel, Hugging Face, Meta, Microsoft Azure, and PyTorch to showcase the technology partnerships bringing the next generation of high-performance CPU and AI accelerator solutions to market.

"Today, we took another significant step forward in our data center strategy as we expanded our 4th Gen EPYC processor family with new leadership solutions for cloud and technical computing workloads and announced new public instances and internal deployments with the largest cloud providers," said AMD Chair and CEO Dr. Lisa Su. "AI is the defining technology shaping the next generation of computing and the largest strategic growth opportunity for AMD. We are laser focused on accelerating the deployment of AMD AI platforms at scale in the data center, led by the launch of our Instinct MI300 accelerators planned for later this year and the growing ecosystem of enterprise-ready AI software optimized for our hardware."

IonQ Aria Now Available on Amazon Braket Cloud Quantum Computing Service

Today at Commercialising Quantum Global 2023, IonQ (NYSE: IONQ), an industry leader in quantum computing, announced the availability of IonQ Aria on Amazon Braket, AWS's quantum computing service. This expands upon IonQ's existing presence on Amazon Braket, following the debut of IonQ's Harmony system on the platform in 2020. With broader access to IonQ Aria, IonQ's flagship system with 25 algorithmic qubits (#AQ)—more than 65,000 times more powerful than IonQ Harmony—users can now explore, design, and run more complex quantum algorithms to tackle some of the most challenging problems of today.
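The "more than 65,000 times more powerful" figure is consistent with IonQ's premise that useful capacity scales as 2 raised to the #AQ count: 65,536 is 2^16, a 16-AQ gap, which would put Harmony at roughly #AQ 9. The short sketch below checks that arithmetic; note that Harmony's #AQ value is an inference from the quoted ratio, not a figure stated in this article.

```python
# Assumption: capacity scales as 2 ** AQ (the premise behind IonQ's
# "algorithmic qubits" comparisons). Aria's #AQ of 25 is from the
# article; Harmony's #AQ of 9 is inferred from the quoted ratio.
aria_aq = 25
harmony_aq = 9
ratio = 2 ** aria_aq / 2 ** harmony_aq
print(ratio)  # → 65536.0, i.e. "more than 65,000 times"
```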

"We are excited for IonQ Aria to become available on Amazon Braket, as we expand the ways users can access our leading quantum computer on the most broadly adopted cloud service provider," said Peter Chapman, CEO and President, IonQ. "Amazon Braket has been instrumental in commercializing quantum, and we look forward to seeing what new approaches will come from the brightest, most curious, minds in the space."

Microsoft Activision Blizzard Merger Blocked by UK Market Regulator Citing "Cloud Gaming Concerns"

The United Kingdom Competition and Markets Authority (CMA) on Wednesday blocked the proposed $68.7 billion merger of Microsoft and Activision Blizzard. In its press release announcing the final decision of its investigation into how the merger would affect consumer choice and innovation, the CMA says the merger would alter the future of cloud gaming and lead to "reduced innovation and less choice for United Kingdom gamers over the years to come." Cloud gaming in this context means games rendered in the cloud and consumed on the edge by gamers; NVIDIA's GeForce NOW is one such service.

Microsoft Azure is one of the big-three cloud computing providers (besides AWS and Google Cloud), and the CMA fears that Microsoft's acquisition of Activision-Blizzard IP (besides its control over the Xbox and Windows PC ecosystems), would "strengthen that advantage giving it the ability to undermine new and innovative competitors." The CMA report continues: "Cloud gaming needs a free, competitive market to drive innovation and choice. That is best achieved by allowing the current competitive dynamics in cloud gaming to continue to do their job." Microsoft and Activision-Blizzard are unsurprisingly unhappy with the verdict.

Linux Foundation Launches New TLA+ Organization

SAN FRANCISCO, April 21, 2023 -- The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced the launch of the TLA+ Foundation to promote the adoption and development of the TLA+ specification language and its community of TLA+ practitioners. Inaugural members include Amazon Web Services (AWS), Oracle and Microsoft. TLA+ is a high-level language for modeling programs and systems, especially concurrent and distributed ones. TLA+ has been successfully used by companies to verify complex software systems, reducing errors and improving reliability. The language helps detect design flaws early in the development process, saving time and resources.

TLA+ and its tools are useful for eliminating fundamental design errors, which are hard to find and expensive to correct in code. The language is based on the idea that the best way to describe things precisely is with simple mathematics. The language was invented decades ago by the pioneering computer scientist Leslie Lamport, now a distinguished scientist with Microsoft Research. After years of Lamport's stewardship and Microsoft's support, TLA+ has found a new home at the Linux Foundation.
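TLA+'s core idea, describing a system as a state machine and exhaustively checking every reachable state against an invariant, can be illustrated without TLA+ itself. The minimal Python sketch below (purely illustrative; it is neither TLA+ nor its TLC model checker) explores all interleavings of two processes performing a non-atomic increment on a shared counter and finds the classic lost-update flaw, the kind of design error TLA+ is built to catch before any code is written:

```python
from collections import deque

# Two processes each perform a non-atomic increment on a shared counter:
# read the counter into a local temp (pc 0 -> 1), then write temp + 1
# back (pc 1 -> 2). A state is (pc1, tmp1, pc2, tmp2, counter).
INIT = (0, None, 0, None, 0)

def steps(state):
    """All states reachable from `state` in one atomic step."""
    pc1, tmp1, pc2, tmp2, counter = state
    out = []
    if pc1 == 0:                                  # process 1 reads
        out.append((1, counter, pc2, tmp2, counter))
    elif pc1 == 1:                                # process 1 writes
        out.append((2, tmp1, pc2, tmp2, tmp1 + 1))
    if pc2 == 0:                                  # process 2 reads
        out.append((pc1, tmp1, 1, counter, counter))
    elif pc2 == 1:                                # process 2 writes
        out.append((pc1, tmp1, 2, tmp2, tmp2 + 1))
    return out

def check():
    """Breadth-first exploration of every reachable state; collect
    terminal states violating the invariant `counter == 2`."""
    seen, frontier, bad = {INIT}, deque([INIT]), []
    while frontier:
        state = frontier.popleft()
        successors = steps(state)
        if not successors and state[4] != 2:
            bad.append(state)
        for nxt in successors:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return bad

print(check())  # → [(2, 0, 2, 0, 1)]: both read 0, so one increment is lost
```

The checker reports the terminal state `(2, 0, 2, 0, 1)`: both processes read the counter as 0 before either wrote, so one increment is lost, exactly the sort of interleaving bug that is hard to find and expensive to correct in code.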

AMD Joins AWS ISV Accelerate Program

AMD announced it has joined the Amazon Web Services (AWS) Independent Software Vendor (ISV) Accelerate Program, a co-sell program for AWS Partners, such as AMD, that provide integrated solutions on AWS. The program helps AWS Partners drive new business by directly connecting participating ISVs with the AWS Sales organization.

Through the AWS ISV Accelerate Program, AMD will receive focused co-selling support from AWS, including access to further sales enablement resources, reduced AWS Marketplace listing fees, and incentives for AWS Sales teams. The program also gives participating ISVs access to millions of active AWS customers globally.

Microsoft Working on Custom AI Processor Codenamed Project Athena

According to The Information, Microsoft has been working on custom processors for AI under a project codenamed Athena. Built on TSMC's 5 nm process, these chips are designed to accelerate AI workloads and scale to hundreds or even thousands of chips. With the boom of Large Language Models (LLMs) carrying billions of parameters, training them requires such a rapid increase in computational power that companies purchase hundreds of thousands of GPUs from the likes of NVIDIA. Creating custom processors is familiar territory for a company like Microsoft, though: hyperscalers like AWS, Google, and Meta have already invested in processors for AI training, and Microsoft is now joining them.

While we don't have much information about these processors, we know that Microsoft started the project in 2019, and today the chips are in the hands of select Microsoft and OpenAI employees who work on AI projects and need the computational horsepower. Interestingly, some projections suggest that if Microsoft could match NVIDIA's GPU performance, its chips would cost only a third as much as NVIDIA's offerings. However, that is hard to verify until more information is available. Microsoft plans to make the chips more widely available as early as next year, though there is no specific information on when and how; Azure cloud customers would be the most logical place to start.

Shipments of AI Servers Will Climb at CAGR of 10.8% from 2022 to 2026

According to TrendForce's latest survey of the server market, many cloud service providers (CSPs) have begun large-scale investments in the kinds of equipment that support artificial intelligence (AI) technologies. This development is in response to the emergence of new applications such as self-driving cars, artificial intelligence of things (AIoT), and edge computing since 2018. TrendForce estimates that in 2022, AI servers equipped with general-purpose GPUs (GPGPUs) accounted for almost 1% of annual global server shipments. Moving into 2023, shipments of AI servers are projected to grow by 8% YoY thanks to chatbots and similar applications generating demand across AI-related fields. Furthermore, shipments of AI servers are forecast to increase at a CAGR of 10.8% from 2022 to 2026.
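As a rough illustration of how the quoted 10.8% CAGR compounds, the sketch below projects shipments from a hypothetical 2022 baseline of 100 (an index chosen for illustration; the survey quotes no absolute unit figure):

```python
def project(base, cagr, years):
    """Compound an annual growth rate over a number of years."""
    return base * (1 + cagr) ** years

base_2022 = 100.0  # hypothetical index, not a figure from the survey
for year in range(2023, 2027):
    print(year, round(project(base_2022, 0.108, year - 2022), 1))
# 2023 comes out near 110.8 and 2026 near 150.7,
# i.e. roughly +51% in total over the four-year span
```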

Data Center CPU Landscape Allows Ampere Computing to Gain Traction

Once upon a time, the data center market was a duopoly of x86-64 makers AMD and Intel. In recent years, however, companies have started developing custom Arm-based processors that handle equally complex workloads within smaller power envelopes and with greater efficiency. The latest data from research firm Counterpoint highlights a significant new player in the data center world: Ampere Computing. The firm's data center revenue share report breaks out Intel and AMD x86-64 revenue alongside AWS and Ampere Arm CPU revenue, and for the first time a third-party company, Ampere Computing, managed to capture as much as 1.54% of the entire data center market's revenue in 2022. Because its CPUs ship in off-the-shelf servers from OEMs, enterprises and cloud providers can easily integrate Ampere Altra processors.

Intel, still the most significant player, held a 70.77% share of overall revenue; however, that is a drop from the 80.71% revenue share it recorded in 2021, a 16% year-over-year decline. The reduction is not due to low demand for server processors, as global data center CPU revenue registered only a 4.4% YoY decline in 2022, but to high demand for AMD EPYC solutions: AMD grabbed 19.84% of 2022 revenue, a 62% YoY growth from the prior year's 11.74% share. Slowly but surely, AMD is eating Intel's lunch. Another revenue source is Amazon Web Services (AWS), which fills its own demand with Graviton CPUs based on the Arm ISA; AWS Graviton CPUs accounted for 3.16% of market revenue, up 74% from 1.82% in 2021.
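The vendor growth figures above combine a change in revenue share with the overall market's 4.4% revenue decline. The sketch below reproduces that arithmetic for AMD and Intel; because the published shares are rounded, the results land near, rather than exactly on, the quoted percentages:

```python
def revenue_yoy(share_new, share_old, market_yoy):
    """YoY revenue growth implied by a share change in a market
    whose total revenue itself grew or shrank by market_yoy."""
    return (share_new / share_old) * (1 + market_yoy) - 1

MARKET_YOY = -0.044  # overall data center CPU revenue change in 2022

amd = revenue_yoy(0.1984, 0.1174, MARKET_YOY)    # ≈ +62%, matching the article
intel = revenue_yoy(0.7077, 0.8071, MARKET_YOY)  # ≈ -16%, matching the article
print(f"AMD {amd:+.0%}, Intel {intel:+.0%}")
```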

Marvell and AWS Collaborate to Enable Cloud-First Silicon Design

Marvell Technology, Inc., a leader in data infrastructure semiconductor solutions, announced today that it has selected Amazon Web Services, Inc. (AWS) as its cloud provider for electronic design automation (EDA). A cloud-first approach helps Marvell to rapidly and securely scale its service on the world's leading cloud, rise to the challenges brought by increasingly complex chip design processes, and deliver continuous innovation for the expanding needs across the automotive, carrier, data center, and enterprise infrastructure markets it serves. The work extends the longstanding relationship between the two companies—Marvell is also a key semiconductor supplier for AWS, helping the company support the design and rapid delivery of cloud services that best meet customers' demanding requirements.

EDA refers to the specialized and compute-intensive processes used in chip making and is a critical piece of Marvell's R&D. Over the years, the number of transistors on an integrated chip has increased exponentially. Each advance in chip design calls for a calculated application of software modules overseeing logic design, debugging, component placement, wire routing, optimization of time and power consumption, and verification. Due to the computationally intensive nature of EDA workloads, it is no longer cost-effective or timely to run EDA on premises. By powering its EDA with AWS, Marvell leverages an unmatched portfolio of services including secure, elastic, high-performance compute capacity in the cloud to solve challenges around speed, latency, security of IP, and data transfer.

Projected YoY Growth Rate of Global Server Shipments for 2023 Has Been Lowered to 1.87% Due to North American Cloud Service Providers Cutting Demand

Facing global economic headwinds, the four major North American cloud service providers (CSPs) have scaled back their server procurement quantities for 2023 and could make further downward corrections in the future. Meta is the leader among the four in terms of server demand reduction, followed by Microsoft, Google, and AWS. TrendForce has lowered the YoY growth rate of their total server procurement quantity for this year from the original projection of 6.9% to the latest projection of 4.4%. With CSPs cutting demand, global server shipments are now estimated to grow by just 1.87% YoY for 2023. Regarding the server DRAM market, prices there are estimated to drop by around 20~25% QoQ for 1Q23 as CSPs' downward corrections exacerbate the oversupply situation.

Looking at the four CSPs individually, the YoY decline of Meta's server procurement quantity has been widened to 3.0% and could get larger. The instability of the global economy remains the largest variable for all CSPs. Besides this, Meta has also encountered a notable obstacle in expanding its operation in Europe: its data center in Denmark has not met the regional standard for emissions, an issue expected to hinder its progress in setting up additional data centers across the EU. Moreover, advertising-related businesses account for about 98% of Meta's revenue, so the decline in e-commerce activity amidst the recent easing of the COVID-19 pandemic has impacted Meta's growth momentum. Additionally, Meta's server demand has been affected by the high level of component inventory held by server ODMs.

AWS Updates Custom CPU Offerings with Graviton3E for HPC Workloads

Amazon Web Services (AWS) cloud division is extensively developing custom Arm-based CPU solutions to suit its enterprise clients and is releasing new iterations of the Graviton series. Today, during the company's re:Invent week, we are getting a new CPU custom-tailored to high-performance computing (HPC) workloads called Graviton3E. Given that HPC workloads require higher bandwidth, wider datapaths, and data types spanning multiple dimensions, AWS redesigned the Graviton3 processor and enhanced it with new vector-processing capabilities under a new name: Graviton3E. The CPU promises up to 35% higher performance in workloads that depend on heavy vector processing.

With the rising popularity of HPC in the cloud, AWS sees a significant market opportunity and is trying to capture it. Offered through AWS EC2, the chip will be available with up to 64 vCPU cores and 128 GiB of memory. The EC2 tiers carrying this enhanced chip are the C7gn and Hpc7g instances, which provide 200 Gbps of dedicated network bandwidth optimized for traffic between instances in the same VPC. In addition, Intel-based R7iz instances are available for HPC users in the cloud, now powered by 4th Generation Xeon Scalable processors codenamed Sapphire Rapids.

Arm Announces Next-Generation Neoverse Cores for High Performance Computing

The demand for data is insatiable, from 5G to the cloud to smart cities. As a society we want more autonomy, information to fuel our decisions and habits, and connection - to people, stories, and experiences.

To address these demands, the cloud infrastructure of tomorrow will need to handle the coming data explosion and the effective processing of ever more complex workloads … all while increasing power efficiency and minimizing carbon footprint. It's why the industry is increasingly looking to the performance, power efficiency, specialized processing and workload acceleration enabled by Arm Neoverse to redefine and transform the world's computing infrastructure.

Qualcomm Wants Server Market to Run its New Processors, a Re-Launch Could Happen

Qualcomm is well known for designing the processors inside a vast majority of smartphones. The San Diego company has, however, been trying to break out of its smartphone focus and establish new markets where its knack for efficient processor design could show its potential. According to Bloomberg, Qualcomm is planning to re-enter the server market and try again to compete in the now very diverse space. In 2014, Qualcomm announced it was developing an Arm ISA-based server CPU, called Centriq, intended as an efficient alternative for cloud service providers. Later, in November of 2017, the company announced the first such CPU, the Centriq 2400, with 48 custom Falkor cores, six-channel DDR4 memory, and 60 MB of L3 cache.

What happened later is that changing company management slowly abandoned the project, and the Arm CPU market proved a dead end for many such efforts. In recent years, however, many companies have begun designing Arm processors, and the market is now ready for a player like Qualcomm to re-enter this space. With the acquisition of Nuvia Inc., which developed very fast CPU IP under the leadership of industry veterans, these designs could soon see the light of day. Qualcomm is reportedly in talks with Amazon's AWS cloud division, which has agreed to evaluate Qualcomm's offerings.

AWS Graviton3 CPU with 64 Cores and DDR5 Memory Available with Three Sockets Per Motherboard

Amazon's AWS division has been making Graviton processors for a few years now, and the company recently announced that its Graviton3 design would soon be available in the cloud. Today marks the full launch of Graviton3 CPUs, with the first instances available in the AWS Cloud. In the C7g instances, AWS customers can now scale their workloads across 1-64 vCPU instance variants. Graviton3 packs 64 cores running at 2.6 GHz, up to 300 GB/s of memory bandwidth from a DDR5 memory controller, a seven-die chiplet design, and 256-bit SVE (Scalable Vector Extension) support, all across 55 billion transistors. Paired with up to 128 GiB of DDR5 memory, these processors target compute-intensive workloads. AWS noted that it kept the compute cores and memory-controller logic on a single monolithic die to reduce latency and improve performance.

One interesting thing to note is the motherboard AWS hosts Graviton3 processors in. Server motherboards are usually single-, dual-, or quad-socket solutions, but AWS implemented a unique three-socket design. The tri-socket setup treats each CPU as an independent processor, managed by a Nitro Card that handles exactly three CPUs. The company notes that the CPU is now generally available through C7g instances.

ARM-based Server Penetration Rate to Reach 22% by 2025 with Cloud Data Centers Leading the Way, Says TrendForce

According to TrendForce research, corporate demand for digital transformation including artificial intelligence and high-performance computing has accelerated in recent years, which has led to increasing adoption of cloud computing. In order to improve service flexibility, the world's major cloud service providers have gradually introduced ARM-based servers. The penetration rate of ARM architecture in data center servers is expected to reach 22% by 2025.

In the past few years, ARM architecture processors have matured in the fields of mobile terminals and the Internet of Things, but progress in the server field has been relatively slow. However, companies have diversified their cloud workloads in recent years, and the market has begun to pay attention to the benefits ARM architecture processors can bring to data centers. TrendForce believes that ARM-based processors have three major advantages. First, they can support diverse and rapidly changing workloads and are more scalable and cost-effective. Second, ARM-based processors provide higher customization for different niche markets with a more flexible ecosystem. Third, their physical footprint is relatively small, which meets the needs of today's micro data centers.

AMD EPYC Powers Amazon EC2 C6a HPC Instance

AMD announced the expansion of Amazon Web Services Inc. (AWS) AMD EPYC processor-based offerings with the general availability of compute optimized Amazon EC2 C6a instances. According to AWS, the C6a instances offer up to 15% better compute price performance over previous generation C5a instances for a variety of compute focused workloads.

The C6a instances support compute-intensive workloads such as batch processing, distributed analytics, ad serving, highly scalable multiplayer gaming, and video encoding. This is the second AWS EC2 instance type powered by 3rd Gen AMD EPYC processors, following the release of the M6a instances. These instances continue the collaboration between AWS and AMD providing AWS EC2 customers access to the performance and price performance capabilities of the latest generation of AMD EPYC processors.

Arm Appoints Rene Haas as Chief Executive Officer

Arm announced today that its board of directors has appointed 35-year semiconductor industry leader Rene Haas as chief executive officer and member of the board of directors, effective immediately. Mr. Haas succeeds Simon Segars, who has stepped down as chief executive officer and member of the board of directors after 30 years with the company. In the near-term, Mr. Segars will support the leadership transition in an advisory role for Arm.

"Rene is the right leader to accelerate Arm's growth as the company starts making preparations to re-enter the public markets," said Masayoshi Son, Representative Director, Corporate Officer, Chairman & CEO of SoftBank Group Corp. "I would like to thank Simon for his leadership, contributions and dedication to Arm over the past 30 years."

AMD Highlights Growing Cloud Momentum With New Amazon EC2 Instances for High Performance Computing

AMD announced Amazon Web Services (AWS) has expanded its AMD EPYC processor-based offerings with the general availability of the new Amazon EC2 Hpc6a instances, which are purpose-built for high performance computing (HPC) workloads in the cloud. According to AWS, Amazon EC2 Hpc6a instances deliver up to 65 percent better price-performance compared to similar Amazon EC2 instances. Hpc6a will help customers run their most compute-intensive HPC workloads, such as genomics, computational fluid dynamics, weather forecasting, financial risk modeling, EDA for semiconductor design, computer-aided engineering, and seismic imaging.

Throughout the HPC industry, there has been a growing preference for AMD as showcased by AMD EPYC processors powering 73 supercomputers on the latest Top500 list and holding 70 HPC world records. The new Hpc6a instances bring the leadership performance and capabilities of 3rd Gen AMD EPYC processors to compute-optimized Amazon EC2 instances used for highly complex HPC workloads.

Amazon Announces Arm Based Graviton3 Processors, Opens up EC2 C7g Preview Instances

Amazon is continuing to grow its AWS business, both with new instances powered by AMD's third-generation EPYC processors and with new EC2 C5g instances powered by its current Graviton2 processors and NVIDIA's T4G Tensor Core GPUs. However, the company is also opening up its first EC2 C7g preview instances using its brand-new Graviton3 processors, which the company claims offer vastly improved performance over Graviton2 on specific workloads.

EC2 stands for Elastic Compute Cloud, and judging by the claims that Graviton3 has up to twice the FPU performance for scientific workloads, is twice as fast for cryptographic workloads, and is up to three times faster for machine-learning workloads, you can guess who these new EC2 instances are intended for. Amazon didn't reveal much in the way of technical detail about Graviton3, but it will utilize DDR5 memory, making it one of the first server CPUs, if not the first, to do so. It's also said to use up to 60 percent less energy than Graviton2 while delivering up to 25 percent more compute performance. It's implied that it uses the Arm v9 architecture and Neoverse N2 cores, although this hasn't been officially announced.

AMD EPYC Processor Offerings Continue to Grow at AWS With New Instances for General Purpose Compute

AMD (NASDAQ: AMD) announced Amazon Web Services, Inc. (AWS) has expanded its AMD EPYC processor-based offerings with the general availability of general-purpose Amazon EC2 M6a instances. The M6a instances are powered by 3rd Gen AMD EPYC processors delivering, according to AWS, up to 35% better price-performance compared to the previous M5a instances and a 10% lower cost than comparable x86-based EC2 instances.

"Our 3rd Gen AMD EPYC processors provide Amazon EC2 users excellent scalability and impressive price-performance compared to previous generation Amazon EC2 M5a instances. This announcement shows our strong collaboration as well as highlights our overall momentum in cloud infrastructure," said Lynn Comp, corporate vice president, Cloud Business, AMD. "Our work with AWS exemplifies our commitment to giving end users innovation and performance for their cloud environments and workloads."

Intel Reports Third-Quarter 2021 Financial Results

Intel Corporation today reported third-quarter 2021 financial results. "Q3 shone an even greater spotlight on the global demand for semiconductors, where Intel has the unique breadth and scale to lead. Our focus on execution continued as we started delivering on our IDM 2.0 commitments. We broke ground on new fabs, shared our accelerated path to regain process performance leadership, and unveiled our most dramatic architectural innovations in a decade. We also announced major customer wins across every part of our business," said Pat Gelsinger, Intel CEO. "We are still in the early stages of our journey, but I see the enormous opportunity ahead, and I couldn't be prouder of the progress we are making towards that opportunity."

In the third quarter, the company generated $9.9 billion in cash from operations and paid dividends of $1.4 billion. Intel CFO George Davis announced plans to retire from Intel in May 2022; he will continue to serve in his current role while Intel conducts a search for a new CFO and until his successor is appointed. Third-quarter revenue was led by strong recovery in the Enterprise portion of DCG and in IOTG, which saw higher demand amid recovery from the economic impacts of COVID-19. The Client Computing Group (CCG) was down on lower notebook volumes amid industry-wide component shortages and on lower adjacent revenue, partially offset by higher average selling prices (ASPs) and strength in desktop.

Alibaba Goes Anti-x86: Open-Source RISC-V and 128-Core Arm Server Processors on the Horizon

Large hyperscale cloud providers have been experiencing all sorts of trouble with the x86 architecture, from high power consumption to the high pricing structure of these processors, and companies like Amazon Web Services (AWS) already build their own processors based on third-party instruction set architecture designs. Today, Alibaba, the Chinese giant, announced the launch of two processors made in-house to serve everything from the edge to central server processing. First in line is the RISC-V-based Xuantie series of processors, which can run operating systems including AliOS, FreeRTOS, RT-Thread, Linux, and Android. These processors are open source, capable of modest processing performance, and designed as IP that anyone can use. You can check them out in the T-Head GitHub repositories here.

The other thing Alibaba announced is a 128-core custom processor based on the Arm architecture, the Yitian 710 server SoC, which TSMC manufactures for the company on its 5 nm node. So far, Alibaba hasn't revealed details about the SoC or which Arm cores it uses. The move signals that the company seeks technological independence from outside sources and wants to take everything in-house: with custom RISC-V processors for lower-power tasks and custom Arm server CPUs, the whole infrastructure is covered. It may be just a matter of time before Alibaba replaces x86 makers entirely, but given the significant number of chips the company needs, it likely won't happen anytime soon.

Intel Powers Latest Amazon EC2 General Purpose Instances with 3rd Gen Intel Xeon Scalable Processors

Intel today announced AWS customers can access the latest 3rd Gen Intel Xeon Scalable processors via the new Amazon Elastic Compute Cloud (Amazon EC2) M6i instances. Optimized for high-performance, general-purpose compute, the latest Intel-powered Amazon EC2 instances provide customers increased flexibility and more choices when running their Intel-powered infrastructure within the AWS cloud. Today's news is a further continuation of Intel and AWS' close collaboration, giving customers scalable compute instances in the cloud for almost 15 years.

"Our latest 3rd Gen Intel Xeon Scalable processors are our highest performance data center CPU and provide AWS customers an excellent platform to run their most critical business applications. We look forward to continuing our long-term collaboration with AWS to deploy industry-leading technologies within AWS' cloud infrastructure." -Sandra Rivera, Intel executive vice president and general manager, Datacenter and AI Group.

Penetration Rate of Ice Lake CPUs in Server Market Expected to Surpass 30% by Year's End as x86 Architecture Remains Dominant, Says TrendForce

While the server industry transitions to the latest generation of processors based on the x86 platform, the Intel Ice Lake and AMD Milan CPUs entered mass production earlier this year and were shipped to certain customers, such as North American CSPs and telecommunication companies, at a low volume in 1Q21, according to TrendForce's latest investigations. These processors are expected to begin seeing widespread adoption in the server market in 3Q21. TrendForce believes that Ice Lake represents a step-up in computing performance from the previous generation due to its higher scalability and support for more memory channels. On the other hand, the new normal that emerged in the post-pandemic era is expected to drive clients in the server sector to partially migrate to the Ice Lake platform, whose share in the server market is expected to surpass 30% in 4Q21.

Linux Foundation to Form New Open 3D Foundation

The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced an intent to form the Open 3D Foundation to accelerate developer collaboration on 3D game and simulation technology. The Open 3D Foundation will support open source projects that advance capabilities related to 3D graphics, rendering, authoring, and development. As the first project governed by the new foundation, Amazon Web Services, Inc. (AWS) is contributing an updated version of the Amazon Lumberyard game engine as the Open 3D Engine (O3DE), under the permissive Apache 2.0 license. The Open 3D Engine enables developers and content creators to build 3D experiences unencumbered by commercial terms and will provide the support and infrastructure of an open source community through forums, code repositories, and developer events. A developer preview of O3DE is available on GitHub today. For more information and/or to contribute, please visit: https://o3de.org

3D engines are used to create a range of virtual experiences, including games and simulations, by providing capabilities such as 3D rendering, content authoring tools, animation, physics systems, and asset processing. Many developers are seeking ways to build their intellectual property on top of an open source engine whose roadmap is highly visible, openly governed, and collaborative with the community as a whole. More and more developers want to create or augment their current technological foundations with highly collaborative solutions that can be used in any development environment. O3DE introduces a new ecosystem for developers and content creators to innovate, build, share, and distribute immersive 3D worlds that will inspire their users with rich experiences that bring the imaginations of their creators to life.