News Posts matching #Enterprise

AI Demand Drives Rapid Growth in QLC Enterprise SSD Shipments for 2024

North American customers are increasing their orders for storage products as energy efficiency becomes a key priority for AI inference servers. This, in turn, is driving up demand for QLC enterprise SSDs. Currently, only Solidigm and Samsung have certified QLC products, with Solidigm actively promoting its lineup and standing to benefit the most from this surge in demand. TrendForce predicts shipments of QLC enterprise SSD bits will reach 30 exabytes in 2024—a fourfold increase in volume over 2023.
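As a back-of-the-envelope check, the fourfold growth claim implies a 2023 baseline of roughly 7.5 EB; a minimal sketch:

```python
# Back-of-the-envelope check on TrendForce's QLC enterprise SSD forecast.
# Figures from the article: 30 EB shipped in 2024, a fourfold increase over 2023.
shipments_2024_eb = 30        # exabytes, TrendForce forecast for 2024
growth_multiplier = 4         # "a fourfold increase in volume over 2023"

implied_2023_eb = shipments_2024_eb / growth_multiplier
print(f"Implied 2023 QLC enterprise SSD bit shipments: ~{implied_2023_eb:.1f} EB")
# -> Implied 2023 QLC enterprise SSD bit shipments: ~7.5 EB
```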

TrendForce identifies two main reasons for the increasing use of QLC SSDs in AI applications: the products' fast read speeds and their TCO advantages. AI inference servers primarily perform read operations and write data far less frequently than AI training servers do. Compared with HDDs, QLC enterprise SSDs offer superior read speeds and capacities that have expanded to 64 TB.
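For a rough sense of the density argument, the sketch below compares how many drives are needed per petabyte of raw capacity; the 64 TB QLC figure comes from the paragraph above, while the 24 TB nearline HDD capacity is an assumed value for illustration only.

```python
# Rough drive-count comparison per petabyte of raw capacity.
# 64 TB QLC SSD capacity is cited in the article; the 24 TB nearline HDD
# capacity is an assumed, illustrative figure.
import math

target_tb = 1000          # 1 PB target, decimal terabytes
qlc_ssd_tb = 64           # per-drive SSD capacity cited in the article
nearline_hdd_tb = 24      # assumed HDD capacity for comparison

ssds_needed = math.ceil(target_tb / qlc_ssd_tb)
hdds_needed = math.ceil(target_tb / nearline_hdd_tb)
print(f"Drives per PB: {ssds_needed} x 64 TB QLC SSDs vs {hdds_needed} x 24 TB HDDs")
# -> Drives per PB: 16 x 64 TB QLC SSDs vs 42 x 24 TB HDDs
```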

MSI Showcases GPU Servers for Media and Entertainment Industry at 2024 NAB Show

MSI, a leading global server provider, is showcasing its latest GPU servers powered by AMD processors at the 2024 NAB Show, Booth #SL9137 in the Las Vegas Convention Center from April 14-17. These servers are designed to meet the evolving needs of modern creative projects in the Media and Entertainment industry. "As AI continues to reshape the Media and Entertainment industry, it brings unprecedented speed and performance to tasks such as animation, visual effects, video editing, and rendering," said Danny Hsu, General Manager of Enterprise Platform Solutions. "MSI's GPU platforms empower content creators to execute every project with efficiency, speed, and uncompromising quality."

The G4101 is a 4U 4GPU server platform, purpose-built to unleash the full potential of creative professionals in the Media and Entertainment industry. It supports a single AMD EPYC 9004 Series processor equipped with a liquid cooling module, along with twelve DDR5 RDIMM slots. Additionally, it features four PCIe 5.0 x16 slots tailored for triple-slot graphics cards with coolers, ensuring increased airflow and sustained performance. With twelve front 2.5-inch U.2 NVMe/SATA drive bays, it offers high-speed and flexible storage options, catering to the diverse needs of AI workloads. The G4101 combines airflow spacing and closed-loop liquid cooling, making it the optimal thermal management solution for even the most demanding tasks.

PNY Expands Enterprise Portfolio With PICO's Enterprise VR Solutions

PNY Technologies, a global leader in memory and storage solutions, has expanded its enterprise portfolio through a strategic partnership with PICO, a VR solutions company with independent innovation and R&D capabilities. This partnership highlights PNY's dedication to providing state-of-the-art solutions across a spectrum of applications, from enhancing sports, entertainment, and video consumption to revolutionizing industries such as education, healthcare, and corporate training.

This partnership leverages PICO's XR platform solutions, enhancing PNY's enterprise offerings with unparalleled performance, scalability, and cost efficiency. This move reinforces PNY's position as a key player in the enterprise market.

Samsung Introduces "Petabyte SSD as a Service" at GTC 2024, "Petascale" Servers Showcased

Leaked Samsung PBSSD presentation material popped up online a couple of days prior to the kick-off day of NVIDIA's GTC 2024 conference (March 18)—reports (at the time) jumped on the potential introduction of a "petabyte (PB)-level SSD solution," alongside an enterprise subscription service for the US market. Tom's Hardware took the time to investigate this matter—in-person—on the showroom floor up in San Jose, California. It turns out that interpretations of pre-event information were slightly off—according to on-site investigations: "despite the name, PBSSD is not a petabyte-scale solid-state drive (Samsung's highest-capacity drive can store circa 240 TB), but rather a 'petascale' storage system that can scale-out all-flash storage capacity to petabytes."

Samsung showcased a Supermicro Petascale server design, but a lone unit is nowhere near capable of providing a petabyte of storage—the Tom's Hardware reporter found out that the demonstration model housed: "sixteen 15.36 TB SSDs, so for now the whole 1U unit can only pack up to 245.76 TB of 3D NAND storage (which is pretty far from a petabyte), so four of such units will be needed to store a petabyte of data." Company representatives also had another Supermicro product at their booth: "(an) H13 all-flash petascale system with CXL support that can house eight E3.S SSDs (with) four front-loading E3.S CXL bays for memory expansion."
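The capacity math quoted by Tom's Hardware is straightforward to verify; the short sketch below reproduces it and shows that four such units actually land just shy of a full decimal petabyte.

```python
# Capacity math behind the Tom's Hardware quote above.
ssd_capacity_tb = 15.36   # per-drive capacity in TB
ssd_count = 16            # drives installed in the 1U demo unit

per_unit_tb = ssd_count * ssd_capacity_tb
four_units_tb = 4 * per_unit_tb

print(f"Per 1U unit: {per_unit_tb:.2f} TB")            # 245.76 TB
print(f"Four units:  {four_units_tb:.2f} TB (~1 PB)")  # 983.04 TB, just under a decimal petabyte
```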

Dell Expands Generative AI Solutions Portfolio, Selects NVIDIA Blackwell GPUs

Dell Technologies is strengthening its collaboration with NVIDIA to help enterprises adopt AI technologies. By expanding the Dell Generative AI Solutions portfolio, including with the new Dell AI Factory with NVIDIA, organizations can accelerate integration of their data, AI tools and on-premises infrastructure to maximize their generative AI (GenAI) investments. "Our enterprise customers are looking for an easy way to implement AI solutions—that is exactly what Dell Technologies and NVIDIA are delivering," said Michael Dell, founder and CEO, Dell Technologies. "Through our combined efforts, organizations can seamlessly integrate data with their own use cases and streamline the development of customized GenAI models."

"AI factories are central to creating intelligence on an industrial scale," said Jensen Huang, founder and CEO, NVIDIA. "Together, NVIDIA and Dell are helping enterprises create AI factories to turn their proprietary data into powerful insights."

NVIDIA Announces New Switches Optimized for Trillion-Parameter GPU Computing and AI Infrastructure

NVIDIA today announced a new wave of networking switches, the X800 series, designed for massive-scale AI. The world's first networking platforms capable of end-to-end 800 Gb/s throughput, NVIDIA Quantum-X800 InfiniBand and NVIDIA Spectrum-X800 Ethernet push the boundaries of networking performance for computing and AI workloads. They feature software that further accelerates AI, cloud, data processing and HPC applications in every type of data center, including those that incorporate the newly released NVIDIA Blackwell architecture-based product lineup.

"NVIDIA Networking is central to the scalability of our AI supercomputing infrastructure," said Gilad Shainer, senior vice president of Networking at NVIDIA. "NVIDIA X800 switches are end-to-end networking platforms that enable us to achieve trillion-parameter-scale generative AI essential for new AI infrastructures."

NVIDIA Launches Blackwell-Powered DGX SuperPOD for Generative AI Supercomputing at Trillion-Parameter Scale

NVIDIA today announced its next-generation AI supercomputer—the NVIDIA DGX SuperPOD powered by NVIDIA GB200 Grace Blackwell Superchips—for processing trillion-parameter models with constant uptime for superscale generative AI training and inference workloads.

Featuring a new, highly efficient, liquid-cooled rack-scale architecture, the new DGX SuperPOD is built with NVIDIA DGX GB200 systems and provides 11.5 exaflops of AI supercomputing at FP4 precision and 240 terabytes of fast memory—scaling to more with additional racks.
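For a rough per-rack breakdown of those headline numbers, the sketch below assumes the base configuration of eight DGX GB200 (GB200 NVL72) racks that NVIDIA has described for this SuperPOD; the rack count is an assumption not stated in the paragraph above.

```python
# Per-rack breakdown of the DGX SuperPOD headline figures, assuming a base
# configuration of eight DGX GB200 (NVL72) racks. The rack count is an
# assumption; the totals come from the announcement above.
total_fp4_exaflops = 11.5
total_fast_memory_tb = 240
racks = 8                      # assumed base configuration

print(f"Per rack: {total_fp4_exaflops / racks:.2f} FP4 exaflops, "
      f"{total_fast_memory_tb / racks:.0f} TB fast memory")
# -> Per rack: 1.44 FP4 exaflops, 30 TB fast memory
```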

Samsung Expected to Unveil Enterprise "PBSSD" Subscription Service at GTC

Samsung Electronics is all set to discuss the future of AI, alongside Jensen Huang, at NVIDIA's upcoming GTC 2024 conference. South Korean insiders have leaked the company's intentions, only days before the event's March 18 kickoff time. Their recently unveiled 36 GB HBM3E 12H DRAM product is expected to be the main focus of official presentations—additionally, a new storage subscription service is marked down for a possible live introduction. An overall "Redefining AI Infrastructure" presentation could include—according to BusinessKorea—a planned launch of: "petabyte (PB)-level SSD solution, dubbed 'PBSSD,' along with a subscription service in the US market within the second quarter (of 2024) to address the era of ultra-high-capacity data."

A Samsung statement—likely sourced from leaked material—summarized this business model: "the subscription service will help reduce initial investment costs in storage infrastructure for our customers and cut down on maintenance expenses." Under agreed-upon conditions, customers are not required to purchase ultra-high-capacity SSD solutions outright: "enterprises using the service can flexibly utilize SSD storage without the need to build separate infrastructure, while simultaneously receiving various services from Samsung Electronics related to storage management, security, and upgrades." A special session—"The Value of Storage as a Service for AI/ML and Data Analysis"—is alleged to be on the company's GTC schedule.

Microsoft Z1000 960 GB NVMe SSD Leaked

According to TPU's SSD database, the Microsoft Z1000 M.2 22110 form factor solid-state drive launched back in 2020—last week, well-known hardware tipster, yuuki_ans, leaked a set of photos and specifications. Their March 7 social media post showcases close-ups of a potential enterprise product—sporting a CNEX Labs CNX-2670AA-CB2T controller, Toshiba BiCS4 96-layer eTLC NAND flash dies and 1 GB Micron MT40A1G8SA-075:E DDR4 RAM cache. The mysterious storage device appears to be an engineering sample (PV1.1)—an attached label lists a possible manufacturing date of May 18, 2020, but its part number and serial code are redacted in yuuki's set of photos. PCIe specifications are not disclosed, but experts reckon that a 4.0 standard is present here (given the prototype's age).

The long form factor and presence of a CNEX Labs controller suggest that Microsoft has readied a 960 GB capacity model for usage in data servers. Unoccupied spaces on the board provide evidence of different configurations. Extra BGA mounting points could introduce another DRAM chip, and there is enough room for additional capacitors—via solder pads on both sides of the Z1000's PCB. It is speculated that 2 TB and 4 TB variants exist alongside the leaked 960 GB example—a "broad portfolio" of finalized Z1000 products could be in service right now, but the wider public is unlikely to see these items outside of Microsoft facilities.

NVIDIA Calls for Global Investment into Sovereign AI

Nations have long invested in domestic infrastructure to advance their economies, control their own data and take advantage of technology opportunities in areas such as transportation, communications, commerce, entertainment and healthcare. AI, the most important technology of our time, is turbocharging innovation across every facet of society. It's expected to generate trillions of dollars in economic dividends and productivity gains. Countries are investing in sovereign AI to develop and harness such benefits on their own. Sovereign AI refers to a nation's capabilities to produce artificial intelligence using its own infrastructure, data, workforce and business networks.

Why Sovereign AI Is Important
The global imperative for nations to invest in sovereign AI capabilities has grown since the rise of generative AI, which is reshaping markets, challenging governance models, inspiring new industries and transforming others—from gaming to biopharma. It's also rewriting the nature of work, as people in many fields start using AI-powered "copilots." Sovereign AI encompasses both physical and data infrastructures. The latter includes sovereign foundation models, such as large language models, developed by local teams and trained on local datasets to promote inclusiveness with specific dialects, cultures and practices. For example, speech AI models can help preserve, promote and revitalize indigenous languages. And LLMs aren't just for teaching AIs human languages, but for writing software code, protecting consumers from financial fraud, teaching robots physical skills and much more.

Enterprise SSD Industry Hits US$23.1 Billion in Revenue in 4Q23, Growth Trend to Continue into Q1 This Year

The third quarter of 2023 witnessed suppliers dramatically cutting production, which underpinned enterprise SSD prices. The fourth quarter saw a resurgence in contract prices, driven by robust buying activity and heightened demand from server brands and buoyed by optimistic capital expenditure forecasts for 2024. This, combined with increased demand from various end products entering their peak sales period and ongoing reductions in OEM NAND Flash inventories, resulted in some capacity shortages. Consequently, fourth-quarter enterprise SSD prices surged by over 15%. TrendForce highlights that this surge in demand and prices led to a 47.6% QoQ increase in enterprise SSD industry revenues in 4Q23, reaching approximately $23.1 billion.

The stage is set for continued fervor as we settle into the new year and momentum from server brand orders continues to heat up—particularly from Chinese clients. On the supply side, falling inventory levels and efforts to exit loss-making positions have prompted enterprise SSD prices to climb, with contract prices expected to increase by over 25%. This is anticipated to fuel a 20% revenue growth in Q1.
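TrendForce's quarter-over-quarter figures can be reproduced directly; a quick sketch:

```python
# Reproducing the enterprise SSD revenue math from TrendForce's figures.
q4_2023_revenue_busd = 23.1    # billions of US dollars, 4Q23
qoq_growth = 0.476             # 47.6% QoQ increase in 4Q23

implied_q3_2023 = q4_2023_revenue_busd / (1 + qoq_growth)
projected_q1_2024 = q4_2023_revenue_busd * 1.20   # ~20% revenue growth forecast for Q1

print(f"Implied 3Q23 revenue:   ~${implied_q3_2023:.1f} billion")    # ~$15.7 billion
print(f"Projected 1Q24 revenue: ~${projected_q1_2024:.1f} billion")  # ~$27.7 billion
```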

IBM Intros AI-enhanced Data Resilience Solution - a Cyberattack Countermeasure

Cyberattacks are an existential risk, with 89% of organizations ranking ransomware as one of the top five threats to their viability, according to a November 2023 report from TechTarget's Enterprise Strategy Group, a leading analyst firm. And this is just one of many risks to corporate data—insider threats, data exfiltration, hardware failures, and natural disasters also pose significant danger. Moreover, as the just-released 2024 IBM X-Force Threat Intelligence Index states, as the generative AI market becomes more established, it could trigger the maturity of AI as an attack surface, mobilizing even further investment in new tools from cybercriminals. The report notes that enterprises should also recognize that their existing underlying infrastructure is a gateway to their AI models that doesn't require novel tactics from attackers to target.

To help clients counter these threats with earlier and more accurate detection, we're announcing new AI-enhanced versions of the IBM FlashCore Module technology available inside new IBM Storage FlashSystem products and a new version of IBM Storage Defender software to help organizations improve their ability to detect and respond to ransomware and other cyberattacks that threaten their data. The newly available fourth generation of FlashCore Module (FCM) technology enables artificial intelligence capabilities within the IBM Storage FlashSystem family. FCM works with Storage Defender to provide end-to-end data resilience across primary and secondary workloads with AI-powered sensors designed for earlier notification of cyber threats to help enterprises recover faster.

Huawei Launches OptiXtrans DC908 Pro, a Next-gen DCI Platform for the AI Era

At MWC Barcelona 2024, Huawei launched the Huawei OptiXtrans DC908 Pro, a new platform for Data Center Interconnect (DCI) designed for the intelligent era. This innovative platform ensures the efficient, secure, and stable transmission of data between data centers (DCs), setting a new standard for DCI networks. As AI continues to proliferate across various service scenarios, the demand for foundation models has intensified, leading to an explosion in data volume. DCs are now operating at the petabyte level, and DCI networks have evolved from single-wavelength 100 Gbit/s to single-wavelength Tbit/s.

In response to the challenges posed by massive data transmission in the intelligent era, Huawei introduces the next-generation DCI platform, the Huawei OptiXtrans DC908 Pro. Compared to its predecessor, the DC908 Pro offers higher bandwidth, reliability, and intelligence.

NVIDIA AI GPU Customers Reportedly Selling Off Excess Hardware

The NVIDIA H100 Tensor Core GPU was last year's hot item for HPC and AI industry segments—the largest purchasers were reported to have acquired up to 150,000 units each. Demand grew so much that lead times of 36 to 52 weeks became the norm for H100-based server equipment. The latest rumblings indicate that things have stabilized—so much so that some organizations are "offloading chips" as the supply crunch cools off. Apparently it is more cost-effective to rent AI processing sessions through cloud service providers (CSPs)—the big three being Amazon Web Services, Google Cloud, and Microsoft Azure.

According to a mid-February Seeking Alpha report, wait times for the NVIDIA H100 80 GB GPU model have been reduced to around three to four months. The Information believes that some companies have already reduced their order counts, while others have hardware sitting around, completely unused. Maintenance complexity and costs are reportedly cited as the main factors in "offloading" unneeded equipment and turning to renting server time from CSPs. Despite improved supply conditions, AI GPU demand is still growing—driven mainly by organizations working with large language models. A prime example is OpenAI—as pointed out by The Information, insider murmurings have Sam Altman & Co. seeking out alternative solutions and production avenues.
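The rent-versus-own trade-off driving this behavior reduces to a simple break-even comparison; the sketch below uses purely illustrative inputs (the hardware cost, hosting cost, hourly cloud rate, and utilization are assumptions, not figures from the report).

```python
# Illustrative rent-vs-own comparison for an AI GPU. All inputs are assumed
# example values, not figures from the report above.
server_cost_per_gpu = 35_000   # assumed purchase cost per GPU, incl. share of server (USD)
hosting_per_gpu_year = 4_000   # assumed annual power/cooling/maintenance per GPU (USD)
cloud_rate_per_hour = 4.00     # assumed on-demand CSP rate per GPU-hour (USD)
utilization = 0.50             # fraction of the year the GPU is actually busy

hours_used_per_year = 8760 * utilization
cloud_cost_per_year = cloud_rate_per_hour * hours_used_per_year
own_cost_year_one = server_cost_per_gpu + hosting_per_gpu_year

print(f"Year-one cost of owning:  ${own_cost_year_one:,.0f}")
print(f"Year-one cost of renting: ${cloud_cost_per_year:,.0f}")
# At low utilization, renting from a CSP undercuts ownership, which is the
# dynamic the report describes.
```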

NVIDIA Prepared to Offer Custom Chip Designs to AI Clients

NVIDIA is reported to be setting up an AI-focused semi-custom chip design business unit, according to inside sources known to Reuters—it is believed that Team Green leadership is adapting to demands leveraged by key data-center customers. Many companies are seeking cheaper alternatives, or have devised their own designs (budget/war chest permitting)—NVIDIA's current range of AI GPUs are simply off-the-shelf solutions. OpenAI has generated the most industry noise—their alleged early 2024 fund-raising pursuits have attracted plenty of speculative/kind-of-serious interest from notable semiconductor personalities.

Team Green is seemingly reacting to emerging market trends—Jensen Huang (CEO, president and co-founder) has hinted that NVIDIA custom chip designing services are on the cusp. Stephen Nellis—a Reuters reporter specializing in tech industry developments—has highlighted select NVIDIA boss quotes from an upcoming interview piece: "We're always open to do that. Usually, the customization, after some discussion, could fall into system reconfigurations or recompositions of systems." The Team Green chief teased that his engineering team is prepared to take on the challenge of meeting exact requests: "But if it's not possible to do that, we're more than happy to do a custom chip. And the benefit to the customer, as you can imagine, is really quite terrific. It allows them to extend our architecture with their know-how and their proprietary information." The rumored NVIDIA semi-custom chip design business unit could be introduced in an official capacity at next month's GTC 2024 Conference.

Cervoz Embraces Edge Computing with its M.2 Compact Solutions

Seizing the Edge: Cervoz Adapts to Shifting Data Landscape—The rapid emergence of technologies like AIoT and 5G and their demand for high-speed data processing have accelerated the data transition from the cloud to the edge. This shift exposes data to unpredictable environments with extreme temperature variations, vibrations, and space constraints, making it critical for edge devices to thrive in these settings. Cervoz strategically targets the booming edge computing sector by introducing an extensive array of compact product lines, enhancing its existing SSDs, DRAM, and Modular Expansion Cards to meet the unique needs of edge computing.

Cervoz Reveals NVMe M.2 SSDs and Connectivity Solutions to Power the Edge
Cervoz introduces its latest compact PCIe Gen 3 x2 SSD offerings, the T421 M.2 2242 (B+M key) and T425 M.2 2230 (A+E key). Their space-efficient designs and low power consumption deliver exceptional performance, catering to the storage needs of fanless embedded PCs and motherboards for purpose-built edge applications. Cervoz is also leading the way in developing connectivity solutions—including Ethernet, Wi-Fi, Serial, USB, and CAN Bus—all available in M.2 2230 (A+E key) and M.2 2242/2260/2280 (B+M key) form factors. The M.2 (B+M key) 2242/2260/2280 card is a versatile three-in-one solution designed for maximum adaptability: while it initially comes in a 2280 form factor, it can easily be adjusted to fit 2260 or 2242 sizes, offering an effortless upgrade path for existing systems without sacrificing connection capability, especially in edge devices.
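For context on what a PCIe Gen 3 x2 link provides, its theoretical bandwidth works out as below; these are standard PCIe 3.0 figures, not Cervoz performance claims.

```python
# Theoretical bandwidth of a PCIe 3.0 x2 link (per direction), using the
# standard 8 GT/s rate and 128b/130b encoding. Generic PCIe figures, not
# Cervoz specifications.
lanes = 2
gt_per_s_per_lane = 8.0           # PCIe 3.0 transfer rate per lane
encoding_efficiency = 128 / 130   # 128b/130b line code

gbytes_per_s = lanes * gt_per_s_per_lane * encoding_efficiency / 8
print(f"PCIe 3.0 x{lanes} theoretical bandwidth: ~{gbytes_per_s:.2f} GB/s per direction")
# -> ~1.97 GB/s per direction
```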

Edged Energy Launches Four Ultra-Efficient AI-Ready Data Centers in USA

Edged Energy, a subsidiary of Endeavour devoted to carbon neutral data center infrastructure, announced today the launch of its first four U.S. data centers, all designed for today's high-density AI workloads and equipped with advanced waterless cooling and ultra-efficient energy systems. The facilities will bring more than 300 MW of critical capacity with an industry-leading average Power Usage Effectiveness (PUE) of 1.15 portfolio-wide. Edged has nearly a dozen new data centers operating or under construction across Europe and North America and a gigawatt-scale project pipeline.

The first phase of this U.S. expansion includes a 168 MW campus in Atlanta, a 96 MW campus in the Chicago area, 36 MW in Phoenix and 24 MW in Kansas City. At a time of growing water scarcity where rivers, aquifers and watersheds are at dangerously low levels, it is more critical than ever that IT infrastructure conserve precious water resources. The new Edged facilities are expected to save more than 1.2 billion gallons of water each year compared to conventional data centers. "The rise of AI and machine learning is requiring more power, and often more water, to cool outdated servers. While traditional data centers struggle to adapt, Edged facilities are ready for the advanced computing of today and tomorrow without consuming any water for cooling," said Bryant Farland, Chief Executive Officer for Edged. "Sustainability is at the core of our platform. It is why our data centers are uniquely optimized for energy efficiency and water conservation. We are excited to be partnering with local communities to bring future-proof solutions to a growing digital economy."
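Since PUE is defined as total facility power divided by IT power, the announced capacities and the 1.15 average can be combined as in the sketch below; treating the quoted megawatt figures as IT (critical) load is an assumption.

```python
# Combining the announced campus capacities with the portfolio-wide PUE.
# Treating the quoted MW figures as IT (critical) load is an assumption.
campuses_mw = {"Atlanta": 168, "Chicago area": 96, "Phoenix": 36, "Kansas City": 24}
pue = 1.15   # announced portfolio-wide average Power Usage Effectiveness

it_load_mw = sum(campuses_mw.values())
facility_power_mw = it_load_mw * pue          # PUE = total facility power / IT power
overhead_mw = facility_power_mw - it_load_mw  # cooling, power distribution, etc.

print(f"Total critical IT capacity: {it_load_mw} MW")            # 324 MW
print(f"Implied facility draw at PUE {pue}: {facility_power_mw:.1f} MW "
      f"({overhead_mw:.1f} MW overhead)")
```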

NVIDIA Unveils "Eos" to Public - a Top Ten Supercomputer

Providing a peek at the architecture powering advanced AI factories, NVIDIA released a video that offers the first public look at Eos, its latest data-center-scale supercomputer. An extremely large-scale NVIDIA DGX SuperPOD, Eos is where NVIDIA developers create their AI breakthroughs using accelerated computing infrastructure and fully optimized software. Eos is built with 576 NVIDIA DGX H100 systems, NVIDIA Quantum-2 InfiniBand networking and software, providing a total of 18.4 exaflops of FP8 AI performance. Revealed in November at the Supercomputing 2023 trade show, Eos—named for the Greek goddess said to open the gates of dawn each day—reflects NVIDIA's commitment to advancing AI technology.

Eos Supercomputer Fuels Innovation
Each DGX H100 system is equipped with eight NVIDIA H100 Tensor Core GPUs. Eos features a total of 4,608 H100 GPUs. As a result, Eos can handle the largest AI workloads to train large language models, recommender systems, quantum simulations and more. It's a showcase of what NVIDIA's technologies can do when working at scale. Eos is arriving at the perfect time. People are changing the world with generative AI, from drug discovery to chatbots to autonomous machines and beyond. To achieve these breakthroughs, they need more than AI expertise and development skills. They need an AI factory—a purpose-built AI engine that's always available and can help ramp their capacity to build AI models at scale. Eos delivers. Ranked No. 9 in the TOP500 list of the world's fastest supercomputers, Eos pushes the boundaries of AI technology and infrastructure.
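Those counts line up neatly; a quick arithmetic check of the published figures:

```python
# Sanity check on Eos's published figures: 576 DGX H100 systems, 8 GPUs each,
# 18.4 exaflops of FP8 AI performance in total.
systems = 576
gpus_per_system = 8
total_fp8_exaflops = 18.4

total_gpus = systems * gpus_per_system
fp8_pflops_per_gpu = total_fp8_exaflops * 1000 / total_gpus

print(f"Total GPUs: {total_gpus}")                          # 4,608
print(f"FP8 per GPU: ~{fp8_pflops_per_gpu:.1f} petaflops")  # ~4.0 PF per H100
```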

Samsung Announces the Galaxy Tab Active5

Samsung Electronics America has announced the Galaxy Tab Active5, a business-ready ruggedized tablet built to handle the rigors of frontline work. Building on the power of the Galaxy Tab Active3, the Galaxy Tab Active5 delivers significant improvements in performance, durability and security to help businesses conquer their challenges and boost productivity in the field, even in harsh working environments. Additionally, the Galaxy Tab Active5 is available as an Enterprise Edition, making it easy for businesses to enroll, configure, manage and analyze hundreds of devices.

Designed with the needs of retailers in mind, the Galaxy Tab Active5 features a high-resolution camera, near-field communication (NFC) and push-to-talk functionality to enable more efficient barcode-scanning, mobile point-of-sale (mPOS), in-store communication and more. In addition to retail, the Galaxy Tab Active5 also delivers new capabilities in other industries that require a high degree of durability, including foodservice, manufacturing, transportation, construction and the public sector.

Intel NEX "Bartlett Lake-S" CPUs Reportedly in Pipeline

Supply chain insiders have claimed that Intel is working on extending the lifespan of its LGA 1700 platform—a BenchLife report proposes that the "Bartlett Lake-S" processor family is due soon, courtesy of Team Blue's Network and Edge (NEX) business group. Only a few days ago, the rumor mill had placed "Bartlett Lake-S" CPUs in a mainstream desktop category, due to alleged connections with the Raptor Lake-S Refresh series—the former is also (supposedly) based on the Intel 7 process node. BenchLife believes that DDR4 and DDR5 memory will be supported, but with no mention of possible ECC functionality. Confusingly, chip industry tipsters believe that the unannounced processors could be launched as 15th Gen Core parts.

BenchLife has a history of discovering and reporting on Intel product roadmaps—apparently Bartlett Lake-S can leverage the same core configurations as seen on Raptor Lake-S; namely 8 Raptor Cove P-Cores and 16 Gracemont E-Cores. An insider source claims that a new pure P-Core-only design could exist, sporting up to twelve Raptor Cove units. According to a leaked Bartlett Lake-S series specification sheet: "the iGPU part will use (existing) Intel Xe architecture, up to Intel UHD Graphics 770." The publication alludes to some type of AI performance enhancement as a distinguishing feature for Bartlett Lake-S, when lined up against 14th Gen Core desktop SKUs. Folks salivating at the prospect of a mainstream DIY launch will have to wait and see (according to BenchLife's supply chain insider): "judging from various specifications, this product belonging to the Intel NEX business group may also be decentralized to the consumer market, but the source did not make this part too clear and reserved some room for maneuver."

PNY Expands Enterprise Portfolio with Innovative VAST Data Platform

PNY Technologies, a global leader in memory and storage solutions, has expanded its enterprise portfolio through a strategic partnership with VAST Data, the AI data platform company. This collaboration underscores PNY's commitment to delivering cutting-edge solutions to meet the evolving needs of enterprises integrating AI and HPC into their core processes. This partnership leverages the VAST Data Platform's DataStore capabilities, enhancing PNY's enterprise offerings with unparalleled performance, scalability, and cost efficiency. This move reinforces PNY's position as a key player in the enterprise market.

Key highlights of the partnership include:
  • Revolutionary Solutions: PNY now offers VAST Data's innovative data platform, known for its simplicity and transformative performance, serving data to the world's most demanding supercomputers.
  • Unmatched Scalability: VAST Data's industry-disrupting DASE architecture enables businesses to enjoy nearly limitless scale as their data sets and AI pipelines grow, allowing them to adapt to the changing demands of today's increasingly data-driven world.
  • Cost-Effective Data Management: VAST Data and PNY will empower enterprises to achieve significant cost savings through improved data reduction (VAST Similarity), infrastructure efficiency and simplified management.
  • Enhanced Data Analytics: The VAST DataBase facilitates deeper insights from both structured and unstructured data, accelerating decision-making and enabling data-driven innovation across various business functions.
  • Exceptional Customer Support: PNY extends its commitment to exceptional customer support to VAST Data solutions, providing reliable technical assistance and guidance.

IBM Storage Ceph Positioned as the Ideal Foundation for Modern Data Lakehouses

It's been one year since IBM integrated Red Hat storage product roadmaps and teams into IBM Storage. In that time, organizations have been faced with unprecedented data challenges to scale AI due to the rapid growth of data in more locations and formats, but with poorer quality. Helping clients combat this problem has meant modernizing their infrastructure with cutting-edge solutions as a part of their digital transformations. Largely, this involves delivering consistent application and data storage across on-premises and cloud environments. Also, crucially, this includes helping clients adopt cloud-native architectures to realize the benefits of public cloud like cost, speed, and elasticity. Formerly Red Hat Ceph—now IBM Storage Ceph—a state-of-the-art open-source software-defined storage platform, is a keystone in this effort.

Software-defined storage (SDS) has emerged as a transformative force when it comes to data management, offering a host of advantages over traditional legacy storage arrays including extreme flexibility and scalability that are well-suited to handle modern use cases like generative AI. With IBM Storage Ceph, storage resources are abstracted from the underlying hardware, allowing for dynamic allocation and efficient utilization of data storage. This flexibility not only simplifies management but also enhances agility in adapting to evolving business needs and scaling compute and capacity as new workloads are introduced. This self-healing and self-managing platform is designed to deliver unified file, block, and object storage services at scale on industry standard hardware. Unified storage helps provide clients a bridge from legacy applications running on independent file or block storage to a common platform that includes those and object storage in a single appliance.
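As a concrete, if generic, illustration of the hardware-abstracted object storage model described above, a minimal write/read against a Ceph cluster through the standard python-rados bindings might look like the sketch below; the configuration path and pool name are placeholders, and nothing here is specific to IBM Storage Ceph's packaging.

```python
# Minimal object write/read against a Ceph cluster using python-rados.
# The conffile path and pool name are placeholders for illustration.
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")  # cluster connection settings
cluster.connect()
try:
    ioctx = cluster.open_ioctx("example-pool")         # assumes the pool already exists
    try:
        ioctx.write_full("hello-object", b"stored via software-defined object storage")
        data = ioctx.read("hello-object")
        print(data.decode())
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```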

Financial Analyst Outs AMD Instinct MI300X "Projected" Pricing

AMD's December 2023 launch of new Instinct series accelerators has generated a lot of tech news buzz and excitement within the financial world, but not many folks are privy to Team Red's MSRP for the CDNA 3.0-powered MI300X and MI300A models. A Citi report has pulled back the curtain, albeit with "projected" figures—an inside source claims that Microsoft has purchased the Instinct MI300X 192 GB model for ~$10,000 apiece. North American enterprise customers appear to have taken delivery of the latest MI300 products around mid-January—inevitably, top secret information has leaked out to news investigators. SeekingAlpha's article (based on Citi's findings) alleges that the Microsoft data center division is AMD's top buyer of MI300X hardware—GPT-4 is reportedly up and running on these brand new accelerators.

The leakers claim that businesses further down the (AI and HPC) food chain are having to shell out $15,000 per MI300X unit, but this is a bargain when compared to NVIDIA's closest competing package—the venerable H100 SXM5 80 GB professional card. Team Green, similarly, does not reveal its enterprise pricing to the wider public—Tom's Hardware has kept tabs on H100 insider info and market leaks: "over the recent quarters, we have seen NVIDIA's H100 80 GB HBM2E add-in-card available for $30,000, $40,000, and even much more at eBay. Meanwhile, the more powerful H100 80 GB SXM with 80 GB of HBM3 memory tends to cost more than an H100 80 GB AIB." Citi's projection has Team Green charging up to four times more for its H100 product, when compared to Team Red MI300X pricing. NVIDIA's dominant AI GPU market position could be challenged by cheaper yet still very performant alternatives—additionally, chip shortages have caused Jensen & Co. to step outside their comfort zone. Tom's Hardware reached out to AMD for comment on the Citi pricing claims—a company representative declined this invitation.
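Citi's "up to four times" multiple follows directly from the quoted price points; a small sketch using the figures in the report:

```python
# Price ratios implied by the figures quoted in the report above.
mi300x_microsoft = 10_000       # reported price per MI300X for Microsoft (USD)
mi300x_others = 15_000          # reported price for other enterprise buyers (USD)
h100_street = (30_000, 40_000)  # observed H100 80 GB add-in-card street prices (USD)

for h100_price in h100_street:
    print(f"H100 at ${h100_price:,}: {h100_price / mi300x_microsoft:.1f}x the Microsoft "
          f"MI300X price, {h100_price / mi300x_others:.1f}x the $15k price")
# -> up to 4.0x versus the ~$10,000 figure, consistent with Citi's projection
```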

GIGABYTE Enterprise Servers & Motherboards Roll Out on European E-commerce Platform

GIGABYTE Technology, a pioneer in computer hardware, has taken a significant stride in shaping its European business model. Today, GIGABYTE has broadened its e-commerce platform, shop.gigabyte.eu, by integrating enterprise server and server motherboard solutions into its product portfolio. Being at the forefront of computer hardware manufacturing, GIGABYTE recognizes that it is imperative to expand its presence in the EMEA region to maintain its leadership across all markets. With the introduction of our enterprise-level server and motherboard solutions, we are dedicated to delivering a diverse range of high-performance products directly to our B2B clients.

GIGABYTE offers a complete product portfolio that addresses all workloads from the data center to the edge—from traditional and emerging workloads in HPC and AI to data analytics, 5G/edge, cloud computing, and more. Our enduring partnerships with key technology leaders ensure that our new products are at the forefront of innovation and launch with new partner platforms. Our systems embody performance, security, scalability, and sustainability. Within the e-commerce product portfolio, we offer a selection of models from our Edge, Rack, GPU, and Storage series. Additionally, the platform provides server motherboards for custom integration. The current selection comprises a mix of solutions tailored to online sales. For more complex solutions, customers can get in touch via the integrated contact form.

Samsung Showcases B2B Displays with Advanced Connectivity at ISE 2024

Samsung Electronics today at Integrated Systems Europe (ISE) 2024 in Barcelona is showcasing how SmartThings will bolster its B2B displays to shape the future of business connectivity. Samsung's "SmartThings for Business" exhibition emphasizes the new advancements that the cutting-edge internet-of-things (IoT) platform will offer, as well as Samsung's commitment to providing more connected, easy-to-control digital signage across industries. "In a commercial display sector where operational efficiency is key, Samsung digital signage is leveraging SmartThings to deliver next-gen connectivity and features to organizations of all sizes," said SW Yong, President and Head of Visual Display Business at Samsung Electronics. "This further expansion of the SmartThings ecosystem will serve to elevate experiences for customers and partners from a wide variety of industries."

How Businesses Can Leverage Connected Tech Through SmartThings—From the Smart Store to Smart Office
At the event, Samsung is showcasing how SmartThings enables business owners to leverage their digital signage to connect and gain more control of their smart devices across various landscapes. By offering the SmartThings connectivity feature to commercial display products such as Smart Signage and Hotel TVs, users can experience the convenience of hyper-connectivity in their business environments. These changes will include Samsung smart devices, as well as other devices that support the industry's latest IoT specifications, Matter and the Home Connectivity Alliance (HCA). Through the application of SmartThings to various business environments, Samsung contributes to the more efficient management of space and energy by transforming places of business into interconnected smart spaces. These connectivity improvements have been designed to benefit all types of business customers, from small and mid-sized business owners to enterprises. Examples of the smart spaces—including a smart store, smart office and smart hotel—are on display at Samsung's booth at ISE 2024.