News Posts matching #NVIDIA


MSI Updates Mini ITX Version of the GeForce GTX 1650 Aero ITX 4G OC - Now OCV1

MSI has launched a new revision of its GeForce GTX 1650 Aero ITX 4G OC, the company's original Mini ITX graphics card based on NVIDIA's TU116 chip. The GeForce GTX 1650 Aero ITX 4G OCV1 is only 170 mm long, 111 mm wide, and 38 mm thick, weighs 375 grams, and features a single cooling fan that is more than sufficient to keep the 284 mm² die cool. Boost clock is rated at 1,695 MHz. A 128-bit memory bus ferries data between the GPU and its 4 GB of GDDR5 memory. I/O-wise, there's 1x DisplayPort 1.4, 1x HDMI 2.0b, and 1x dual-link DVI (with built-in triple-display support). There's no additional power connector, which points to a below-75 W power consumption figure.

NVIDIA GeForce "Ampere" GPUs Built on Samsung 8nm Instead of TSMC 7nm?

NVIDIA's upcoming GeForce "Ampere" family of GPUs will be built almost entirely on Samsung's 8 nanometer silicon fabrication process, which is derived from its 10 nm node, rather than TSMC's 7 nm process, according to kopite7kimi, a source with a high hit-rate on past NVIDIA rumors. The 8LPP silicon fabrication node by Samsung is an extension of the company's 10LPP (10 nm) node. Both have the same fin pitch, but the gate pitch is reduced by 6%, resulting in a transistor density of over 61 million transistors/mm². Apparently NVIDIA's entire high-end product stack, including the GA102 silicon that powers at least three high-end consumer SKUs, is expected to be based on Samsung 8LPP.

NVIDIA Dismisses Investor Claims of $1 billion Wrongdoing in Company Finance Reporting Amidst Crypto Boom

NVIDIA has (not surprisingly) dismissed allegations that it misled investors regarding demand for its GeForce graphics products circa 2017. The original allegation claimed that NVIDIA misrepresented GeForce division sales to investors by including crypto-focused sales on its bottom line. This, investors claim, painted NVIDIA stock as a safer investment than it actually was, with the volatility of the crypto market and the associated unpredictability of demand for NVIDIA GeForce products being the sore point. Demand for GeForce products among gamers is considered less risky and less elastic than crypto-driven sales.

NVIDIA says that investors cherry-picked corporate statements while ignoring others that, according to NVIDIA, showed transparency. The class-action suit, amended in May 2020 from its original 2017 filing, accuses NVIDIA CEO Jensen Huang and Jeff Fisher, head of gaming, claiming they knew the rise in GeForce GPU sales was linked to the crypto-mining boom and wasn't going to last in the long term. NVIDIA says that executives didn't lie when they described crypto sales as a "small portion" of its revenue (which was disclosed at $6.9 billion for the year 2017). Another point of contention from NVIDIA is that its executives (and the company itself) had no way of knowing exactly what purpose its sold GPUs were being put to.

ASUS ROG Strix GeForce RTX 3080 Ti Leaked

Here's possibly the first picture of an ASUS ROG Strix GeForce RTX 3080 Ti graphics card, which not only confirms NVIDIA's nomenclature for its next-generation GeForce RTX graphics cards, but also provides fascinating insights into the direction ASUS is taking with its next-generation ROG Strix graphics cards. The design language involves matte black metal surfaces accented by brushed metal elements that conceal more RGB LED elements. ASUS's Axial Tech fans do the heavy lifting along with a large aluminium fin-stack heatsink underneath. The mention of "RTX 3080 Ti" also casts a shadow of doubt over "RTX 3090" leading the lineup. We should learn more about what ASUS and NVIDIA have in store, as we inch closer to the September unveil of this series.

Death Stranding with DLSS 2.0 Enables 4K-60 FPS on Any RTX 20-series GPU: Report

Ahead of its PC platform release on July 14, testing of a pre-release build by Tom's Hardware reveals that "Death Stranding" will offer 4K 60 frames per second on any NVIDIA RTX 20-series graphics card if DLSS 2.0 is enabled. NVIDIA's performance-enhancing feature renders the game at a resolution lower than that of the display head, and uses AI to reconstruct details. We've detailed DLSS 2.0 in an older article. The PC version has a frame-rate limit of 240 FPS, ultra-wide resolution support, and a photo mode (unsure if it's an Ansel implementation). It has rather relaxed recommended system requirements for 1080p 60 FPS gaming (sans DLSS).

Microsoft Details Hardware-Accelerated GPU Scheduling

Microsoft posted a technical brief of the new Hardware-Accelerated GPU Scheduling feature introduced with the Windows 10 May 2020 Update and its latest Windows Display Driver Model (WDDM) version. In a blog post by Steve Pronovost, a tech lead with Microsoft DirectX, Microsoft finally set out to explain what WDDM GPU scheduling is. WDDM 1.0, introduced with Windows Vista, brought GPU scheduling: a software component that allocates workloads from multiple sources onto a GPU. Prior to it, every application that needed GPU acceleration would send as much traffic as it could to the GPU driver. With the growing complexity of the modern 3D rendering pipeline, a scheduler, not unlike the OS thread scheduler, became necessary.

In the following section, Microsoft went on to detail what Hardware-Accelerated GPU Scheduling is. Apparently, some newer generations of GPUs (by NVIDIA, AMD, and Intel) have a built-in hardware component that performs scheduling. When this component is combined with the Windows 10 May 2020 Update and a WDDM 2.7-compliant graphics driver, Windows offloads GPU scheduling onto it, freeing up some CPU resources and potentially reducing latencies at various stages of the graphics rendering pipeline. Windows retains control over scheduling, but by talking to this hardware scheduling component instead of a CPU-executed software stack.

GALAX Designs a GeForce GTX 1650 "Ultra" with TU106 Silicon

NVIDIA board partners carving out GeForce RTX 20-series and GTX 16-series SKUs from ASICs they weren't originally based on is becoming more common, but GALAX has taken things a step further. The company just launched a GeForce GTX 1650 (GDDR6) graphics card based on the "TU106" silicon (ASIC code: TU106-125-A1). The company carved a GTX 1650 out of this chip by disabling all of its RT cores, all of its tensor cores, and a whopping 61% of its CUDA cores, along with proportionate reductions in TMU and ROP counts. The memory bus width has been halved from 256-bit down to 128-bit.

The card, however, is only listed by GALAX's Chinese regional arm. The card's marketing name is "GALAX GeForce GTX 1650 Ultra," with "Ultra" being a GALAX brand extension, and not an NVIDIA SKU (i.e. the GPU isn't called "GTX 1650 Ultra"). The GPU clock speeds for this card are identical to those of the original TU117-based GTX 1650 - 1,410 MHz base, 1,590 MHz GPU Boost, and 12 Gbps (GDDR6-effective) memory.
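The 61% figure can be sanity-checked with back-of-the-envelope arithmetic, assuming the commonly cited shader counts: a full TU106 carries 2,304 CUDA cores, while the GTX 1650 exposes 896.

```python
# Sketch: fraction of TU106's CUDA cores disabled for the GTX 1650 Ultra.
# The core counts are assumptions based on commonly cited specifications,
# not figures from GALAX's listing.
TU106_FULL_CORES = 2304   # full TU106 (as in the RTX 2070)
GTX_1650_CORES = 896      # CUDA cores exposed by the GTX 1650

disabled_fraction = 1 - GTX_1650_CORES / TU106_FULL_CORES
print(f"{disabled_fraction:.1%} of TU106's CUDA cores disabled")  # → 61.1%
```

This lines up with the "whopping 61%" quoted above.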

CD Projekt Red Releases Latest Cyberpunk 2077 Trailer Featuring DLSS 2.0 & Ray Tracing

CD Projekt Red has released its latest trailer for Cyberpunk 2077, showcasing the spectacular visuals of the upcoming action-adventure RPG. NVIDIA has confirmed that the trailer included Ray-Traced Diffuse Illumination, Ray-Traced Reflections, Ray-Traced Ambient Occlusion, and Ray-Traced Shadows real-time ray-tracing effects. These effects are powered by DirectX 12 Ultimate's DXR ray-tracing API and are optimized for GeForce RTX GPUs.

NVIDIA also announced that the game would feature DLSS 2.0 tech which uses custom deep learning neural networks to boost game FPS. When Cyberpunk 2077 launches later this year, it will be available on GeForce NOW, with full support for ray-tracing effects and DLSS 2.0 at 1080p 60 FPS.

NVIDIA Releases GeForce Game Ready 451.48 WHQL Drivers

NVIDIA today released the latest version of its GeForce "Game Ready" software. Version 451.48 WHQL drivers are NVIDIA's first official/public GeForce software release with Windows 10 May 2020 Update (version 2004) support, complete with WDDM 2.7 support, and DirectX 12 Ultimate API support, including Windows 10 hardware-accelerated GPU scheduling support. The drivers also add official support for the Vulkan 1.2 graphics API. NVIDIA also expanded the list of G-SYNC compatible monitors to include several new monitor models. Grab the drivers from the link below.
DOWNLOAD: NVIDIA GeForce 451.48 WHQL Drivers

Intel Gives its First Comments on Apple's Departure from x86

Apple on Monday formalized the beginning of its departure from the Intel x86 machine architecture for its Mac computers. Apple makes up to 4 percent of Intel's annual CPU sales, according to a MarketWatch report. Apple is now scaling its own A-series SoCs, which use Arm CPU cores, up to performance levels relevant to Macs, and has implemented support not just for new and upcoming software ported to the Arm machine architecture, but also for software brought over from the iOS and iPadOS ecosystems, starting with its macOS "Big Sur" operating system. We reached out to Intel for some of its first comments on the development.

In a comment to TechPowerUp, an Intel spokesperson said "Apple is a customer across several areas of our business, and we will continue to support them. Intel remains focused on delivering the most advanced PC experiences and a wide range of technology choices that redefine computing. We believe Intel-powered PCs—like those based on our forthcoming Tiger Lake mobile platform—provide global customers the best experience in the areas they value most, as well as the most open platform for developers, both today and into the future."

Mercedes-Benz, NVIDIA Partner to Build Advanced, Software-Defined Vehicles

Mercedes-Benz, one of the largest manufacturers of premium passenger cars, and NVIDIA, the global leader in accelerated computing, plan to enter into a cooperation to create a revolutionary in-vehicle computing system and AI computing infrastructure. Starting in 2024, this will be rolled out across the fleet of next-generation Mercedes-Benz vehicles, enabling them with upgradable automated driving functions. Working together, the companies plan to develop the most sophisticated and advanced computing architecture ever deployed in an automobile.

The new software-defined architecture will be built on the NVIDIA DRIVE platform and will be standard in Mercedes-Benz's next-generation fleet, enabling state-of-the-art automated driving functionalities. A primary feature will be the ability to automate driving of regular routes from address to address. In addition, there will be numerous future safety and convenience applications. Customers will be able to purchase and add capabilities, software applications and subscription services through over-the-air software updates during the life of the car.

Acer Announces Predator X25 Monitor: 25", 1080p, 360 Hz

Acer today announced one of the world's highest refresh-rate monitors in the form of the Predator X25, joining the likes of Alienware and ASUS, who have already announced their own 360 Hz monitors in the form of the AW2521H and ROG Swift 360, respectively. The Predator X25's über-high refresh rate will leave players with no excuse for not reacting to a threat in time.

The 360 Hz refresh rate comes with compromises (resolution is only 1080p), and NVIDIA's G-Sync is on board - though assembling a graphics card and CPU combo that can actually push 360 FPS to make use of this refresh rate is a tougher deal. Alienware has confirmed that its AW2521H monitor uses IPS panel technology, so it seems likely that Acer makes use of the same panel technology on the Predator X25; however, we'll have to wait and see. The monitor brings some quality-of-life features, such as ergonomic tilt (25 degrees backward, 5 degrees forward), swivel (30 degrees), and height adjustment (4.7 inches). There is also RGB lighting on the back of the monitor that can light up according to the scenes being rendered or the music you're playing, automatic brightness adjustment, and a friendly reminder schedule that pops up a warning for users who have been at the monitor for too long, reminding them to take a little walk. No pricing or release date were available at the time of writing.
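To put the refresh rate in perspective, a quick sketch of the frame-time budget: at 360 Hz, the whole rendering pipeline has under 3 ms per frame, versus roughly 16.7 ms at 60 Hz.

```python
# Frame-time budget per refresh rate: time available to render one frame.
def frame_time_ms(refresh_hz: float) -> float:
    return 1000.0 / refresh_hz

for hz in (60, 144, 240, 360):
    print(f"{hz:3d} Hz -> {frame_time_ms(hz):5.2f} ms per frame")
```

This is why the pairing hardware matters: the CPU and GPU must both deliver a frame inside that ~2.78 ms window for the panel's refresh rate to be fully exploited.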

Acer Announces Swift 5 Notebooks Powered by Intel "Tiger Lake" Processors

Acer today announced its new Swift 5 notebook that offers a new take on productivity, powerful yet light enough to be carried around throughout the day. The ultraportable device is a beacon of both design and performance, sporting a professional aesthetic that is backed up by impressive functionality. This year's model features ultra-narrow bezels that allow for a 90% screen-to-body ratio, a touchscreen with Antimicrobial Corning Gorilla Glass and new colors, such as mist green.

"The new Swift 5 pushes the envelope on what thin-and-light notebooks can be," said James Lin, General Manager, Notebooks, IT Products Business, Acer Inc. "A productivity powerhouse housed in a sleek and ultraportable chassis, the Swift 5 is an excellent option for professionals who are always on the move and seeking a device capable of keeping up with them."

GIGABYTE Introduces a Broad Portfolio of G-series Servers Powered by NVIDIA A100 PCIe

GIGABYTE, an industry leader in high-performance servers and workstations, announced the validation plan for its G-series servers. Following the NVIDIA A100 PCIe GPU announcement today, GIGABYTE has completed compatibility validation of the G481-HA0 and G292-Z40 and added the NVIDIA A100 to the support list for these two servers. The remaining G-series servers will be divided into two waves to complete their respective compatibility tests soon. At the same time, GIGABYTE also launched a new G492 series server based on the AMD EPYC 7002 processor family, which provides PCIe Gen4 support for up to 10 NVIDIA A100 PCIe GPUs. The G492 is the server with the highest computing power for AI model training on the market today. GIGABYTE will offer two SKUs for the G492: the G492-Z50 will be at a more approachable price point, whereas the G492-Z51 will be geared towards higher performance.

The G492 is GIGABYTE's second-generation 4U G-series server. Based on the first generation G481 (Intel architecture) / G482 (AMD architecture) servers, the user-friendly design and scalability have been further optimized. In addition to supporting two 280 W 2nd Gen AMD EPYC 7002 processors, the 32 DDR4 memory slots support up to 8 TB of memory and maintain data transmission at 3200 MHz. The G492 has built-in PCIe Gen4 switches, which can provide more PCIe Gen4 lanes. PCIe Gen4 has twice the I/O performance of PCIe Gen3 and fully enables the computing power of the NVIDIA A100 Tensor Core GPU, or it can be applied to PCIe storage to help provide a storage upgrade path that is native to the G492.

MSI Setting Up a Gaming Desktop Build-to-Order Production Line in the US

MSI is reportedly setting up a build-to-order desktop PC production line in the U.S. With an online storefront taking orders, this unit will assemble gaming desktops from MSI components imported from Taiwan and China. The decision to set up a pre-built unit in the U.S. may have been influenced by U.S. import tariff regimes that favor importing components. MSI will label the finished products "MIA" (short for "Made in America"), a marketing name that isn't strictly intended to convey the "Made in the USA" tagline used by American manufacturers.

The first MIA gaming desktops will be four variants of the Aegis, which combines 10th Gen Intel Core processors with NVIDIA GeForce RTX 20-series GPUs. Options include the Core i9-10900K, i7-10700, and i5-10400F, along with GPU options spanning from the RTX 2080 Super through the GTX 1660 Super, with prices ranging from USD $799 to $1,699. MSI's component portfolio spans motherboards, graphics cards, coolers, cases, and gaming peripherals.

Raijintek Unveils MORPHEUS 8057 VGA Air Cooler

Raijintek today unveiled the MORPHEUS 8057, a large graphics card air-cooler. The cooler consists of a gargantuan aluminium fin-stack heatsink that's ventilated by a pair of 120 mm fans (not included). The heatsink features a large mirror-finish copper base, from which six 6 mm thick heat pipes emerge in either direction of the base (Raijintek for some reason counts this as 12 heat pipes), conveying heat to a large fin-stack with 112 aluminium fins that have ceramic coating.

The MORPHEUS 8057 heatsink measures 254 mm x 100 mm x 44 mm (WxDxH), weighing 515 g. Among the secondary heatsinks included are 12 heatsinks of various sizes for memory and VRM components; thermal pads, retention clips, and some thermal paste. Among the graphics cards supported are AMD "Navi 10" based graphics cards (RX 5700 series and RX 5600 series); and NVIDIA RTX 2080 Ti, RTX 2080/SUPER, RTX 2070/SUPER, and RTX 2060/SUPER. The company didn't reveal pricing.

NVIDIA Announces A100 PCIe Tensor Core Accelerator Based on Ampere Architecture

NVIDIA and partners today announced a new way for interested users to partake in the AI-training capabilities of their Ampere graphics architecture in the form of the A100 PCIe. Diving a little deeper, and as the name implies, this solution differs from the SXM form-factor in that it can be deployed through systems' existing PCIe slots. The change in interface comes with a reduction in TDP from 400 W down to 250 W for the PCIe version - and correspondingly reduced sustained performance.

NVIDIA says peak throughput is the same across the SXM and PCIe versions of its A100 accelerator. The difference comes in sustained workloads, where NVIDIA quotes the A100 PCIe as delivering 10% less performance than its SXM brethren. The A100 PCIe comes with the same 2.4 Gbps, 40 GB HBM2 memory footprint as the SXM version, and all other chip resources are the same. We're thus looking at the same 826 mm² silicon chip and 6,912 CUDA cores across both models. The difference is that the PCIe accelerator can more easily be integrated into existing server infrastructure.
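The quoted per-pin data rate translates into the accelerator's aggregate memory bandwidth with simple arithmetic - a sketch, assuming the commonly reported 5,120-bit HBM2 interface of the 40 GB A100 (this bus width is an assumption, not a figure from the announcement):

```python
# Approximate A100 memory bandwidth from the per-pin data rate.
PIN_RATE_GBPS = 2.4      # Gb/s per pin (from the article)
BUS_WIDTH_BITS = 5120    # assumption: 5 HBM2 stacks x 1024 bits each

bandwidth_gbs = PIN_RATE_GBPS * BUS_WIDTH_BITS / 8  # bits -> bytes
print(f"~{bandwidth_gbs:.0f} GB/s aggregate bandwidth")  # → ~1536 GB/s
```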

World's Top System Makers Unveil NVIDIA A100-Powered Servers

NVIDIA and the world's leading server manufacturers today announced NVIDIA A100-powered systems in a variety of designs and configurations to tackle the most complex challenges in AI, data science and scientific computing. More than 50 A100-powered servers from leading vendors around the world — including ASUS, Atos, Cisco, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Inspur, Lenovo, One Stop Systems, Quanta/QCT and Supermicro — are expected following last month's launch of the NVIDIA Ampere architecture and the NVIDIA A100 GPU.

Availability of the servers varies, with 30 systems expected this summer, and over 20 more by the end of the year. "Adoption of NVIDIA A100 GPUs into leading server manufacturers' offerings is outpacing anything we've previously seen," said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. "The sheer breadth of NVIDIA A100 servers coming from our partners ensures that customers can choose the very best options to accelerate their data centers for high utilization and low total cost of ownership."

NVIDIA Unveils AI Platform to Minimize Downtime in Supercomputing Data Centers

NVIDIA today unveiled the NVIDIA Mellanox UFM Cyber-AI platform, which minimizes downtime in InfiniBand data centers by harnessing AI-powered analytics to detect security threats and operational issues, as well as predict network failures.

This extension of the UFM platform product portfolio — which has managed InfiniBand systems for nearly a decade — applies AI to learn a data center's operational cadence and network workload patterns, drawing on both real-time and historic telemetry and workload data. Against this baseline, it tracks the system's health and network modifications, and detects performance degradations, usage and profile changes.

NVIDIA GeForce "Ampere" Hits 3DMark Time Spy Charts, 30% Faster than RTX 2080 Ti

An unknown NVIDIA GeForce "Ampere" GPU model surfaced on 3DMark Time Spy online database. We don't know if this is the RTX 3080 (RTX 2080 successor), or the top-tier RTX 3090 (RTX 2080 Ti successor). Rumored specs of the two are covered in our older article. The 3DMark Time Spy score unearthed by _rogame (Hardware Leaks) is 18257 points, which is close to 31 percent faster than the RTX 2080 Ti Founders Edition, 22 percent faster than the TITAN RTX, and just a tiny bit slower than KINGPIN's record-setting EVGA RTX 2080 Ti XC. Futuremark SystemInfo reads the GPU clock speeds of the "Ampere" card as 1935 MHz, and its memory clock at "6000 MHz." Normally, SystemInfo reads the memory actual clock (i.e. 1750 MHz for 14 Gbps GDDR6 effective). Perhaps SystemInfo isn't yet optimized for reading memory clocks on "Ampere."
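The actual-versus-effective conversion SystemInfo normally applies is straightforward: GDDR6 transfers eight bits per pin per memory-clock cycle (a 16n prefetch on a double-data-rate interface), so the actual clock is the effective data rate divided by eight. A quick sketch:

```python
# Convert a GDDR6 effective data rate (in Gbps per pin) to the actual
# memory clock in MHz: GDDR6 moves 8 bits per pin per clock cycle.
def gddr6_actual_clock_mhz(effective_gbps: float) -> float:
    return effective_gbps * 1000 / 8

print(gddr6_actual_clock_mhz(14))  # 14 Gbps GDDR6 -> 1750.0 MHz
```

Under this conversion, a "6000 MHz" actual-clock reading makes no sense for GDDR6, supporting the suspicion that SystemInfo simply isn't reading "Ampere" memory clocks correctly yet.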

ASUS Announces ESC4000A-E10 GPGPU Server with NVIDIA A100 Tensor Core GPUs

ASUSTeK, the leading IT company in server systems, server motherboards and workstations, today announced its new NVIDIA A100-powered server, the ESC4000A-E10, built to accelerate and optimize data centers for high utilization and low total cost of ownership with PCIe Gen 4 expansion, OCP 3.0 networking, faster compute, and better GPU performance. ASUS continues building a strong partnership with NVIDIA to deliver unprecedented acceleration and flexibility to power the world's highest-performing elastic data centers for AI, data analytics, and HPC applications.

The ASUS ESC4000A-E10 is a 2U server powered by the AMD EPYC 7002 series processors, which deliver up to 2x the performance and 4x the floating-point capability in a single socket versus the previous 7001 generation. Targeted at AI, HPC, and VDI applications in data center or enterprise environments that require powerful CPU cores, support for more GPUs, and faster transmission speeds, the ESC4000A-E10 focuses on delivering GPU-optimized performance with support for up to four double-deck high-performance or eight single-deck GPUs, including the latest NVIDIA Ampere-architecture A100 as well as Tesla and Quadro cards. This also benefits virtualization, consolidating GPU resources into a shared pool that users can draw on more efficiently.

IBASE Announces NVIDIA Jetson TX2 AI Computing Platform for AIoT Applications

IBASE Technology Inc. (TPEx: 8050), a world-leading manufacturer of industrial motherboards and embedded computing solutions, has revealed its latest EC-3200 AI computing platform, based on the cutting-edge, power-efficient, high-performance NVIDIA Jetson Tegra X2 (TX2) processor. The NVIDIA Jetson TX2 pairs a dual-core Denver 2 with a quad-core Arm Cortex-A57 processor and provides 256 CUDA cores on NVIDIA's advanced Pascal GPU architecture with up to 1.33 TFLOPS, delivering exceptional AI performance.

Designed to operate under an extended temperature range from -20°C to +60°C, the EC-3200 leverages the integrated GPU-computing power of the Jetson TX2 platform in building edge inference servers to analyze and better manage real time traffic flow in smart cities or to optimize the efficiency of operational processes in intelligent factories. "Current edge and cloud-based AI products require better computational and video analytics capability to perform demanding real-time data processing and overcome latency issues," said Wilson Lin, Director of IBASE Product Planning Department. "The EC-3200 is no doubt a durable solution, especially built with a fanless design for non-stop operation and takes advantage of the GPU-accelerated parallel processing of NVIDIA Jetson TX2 to handle data intensive and mission-critical workloads with power efficiency and unmatched reliability."

Xiaomi Launches Mi Display 165 Hz Version Monitor: 27" IPS, 1440p, 165 Hz, VRR, Display HDR400

Xiaomi today soft-launched a new, high-refresh rate IPS gaming monitor in its Mi line. The Xiaomi Mi Display 165 Hz brings a 27", 8-bit IPS panel with a 2560 x 1440 resolution and a pretty impressive 95% DCI-P3 color space coverage. Response time is quoted at 4 ms (1 ms GtG, as most manufacturers insist on quoting their response time speeds). There is support for VRR technologies (AMD's FreeSync and NVIDIA's G-Sync Compatible), as well as Display HDR 400 certification (the lowest that can be had, but still, it's certified). Peak luminance in HDR is set at 400 nits, with typical brightness standing at 320 nits.

I/O-wise, we're looking at 3x USB 3.0, 1x HDMI, 1x DisplayPort, and 1x 3.5 mm headset port. The reported specifications, including the DCI-P3 color space coverage, should make this monitor interesting for users ranging from gamers to amateur and semi-professional photo and video editors. Pricing, however, is the icing on the cake: Xiaomi will be selling this monitor for 2,199 yuan (roughly $299) - a bargain when one considers the overall specifications of this monitor. The monitor will be available from June 17 in China, with international availability to follow.

Club3D Intros DisplayPort to HDMI 4K-120Hz Adapter

Club 3D is proud to open up a new chapter for adapters which convert DisplayPort to HDMI with our brand-new CAC-1085 DisplayPort 1.4 to HDMI 4K120Hz HDR Active Adapter M/F. This adapter uses DisplayPort 1.4 DSC video compression technology to reach the new resolutions and refresh rates. Highly requested by users all around the world and industry-leading companies in IT and AV, this new adapter is the perfect solution to bring 4K at an astonishing 120 Hz to the latest big-size TVs and displays with HDMI 2.1 inputs.

The CAC-1085 follows the Club 3D tradition of bringing new technologies to market at an early stage and setting new milestones, as we did with our CAC-1070 (DisplayPort 1.2 to HDMI 4K60Hz Active Adapter M/F) in 2016 and CAC-1080 (DP 1.4 to HDMI 2.0b, adding additional HDR functionality) in 2018.
The first delivery of CAC-1085 is planned to reach our warehouse by end of June. In the week to follow the adapter should be available worldwide.
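A rough bandwidth calculation shows why DSC is needed for this adapter: even ignoring blanking intervals, a 4K 120 Hz signal at 10 bits per channel exceeds the roughly 25.92 Gb/s of usable bandwidth DisplayPort 1.4 offers after 8b/10b encoding, so the stream must be compressed to fit. The figures below are a simplified sketch, not Club 3D's own numbers.

```python
# Raw (uncompressed, blanking ignored) video bandwidth in Gb/s.
def raw_gbps(width: int, height: int, hz: int, bits_per_pixel: int) -> float:
    return width * height * hz * bits_per_pixel / 1e9

# DP 1.4 HBR3: 4 lanes x 8.1 Gb/s raw, ~25.92 Gb/s after 8b/10b overhead.
DP14_PAYLOAD_GBPS = 25.92

signal = raw_gbps(3840, 2160, 120, 30)  # 10 bits per channel x RGB
print(f"{signal:.2f} Gb/s raw vs {DP14_PAYLOAD_GBPS} Gb/s available")
```

With DSC's typical up-to-3:1 visually lossless compression, the same signal fits comfortably inside the DP 1.4 link.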

Supply Chain Confused with Intel, AMD, and NVIDIA Launch Dates

According to a report from DigiTimes, which cites industry sources, the global supply chain of Intel, AMD, and NVIDIA has been rather confused recently by their launch dates and launch procedures. Due to the worldwide pandemic, there are no electronics shows like Computex designed for companies to showcase their products, and thus there is no unified launch window in which you can expect a product to be launched. What's worse, the companies have now started keeping their launch dates secret in the latest round of games with the competition. Launch dates have started to shift, and a launch remains uncertain even when a launch day is provided.

This has a massive effect on the industry supply chain. By not giving it concrete dates, the companies have left suppliers to wonder when a product will launch. This hurts their ability to prepare for an upcoming product and could cause delays later on. If not given enough time, the supply chain may not adapt fast enough, and products could reach consumers' hands later.