HBM and DRAM shortages: how AI hype is triggering a new global memory chip crisis
11.12.2025

While the world debates whether artificial intelligence will take away copywriters' jobs, it has quietly, and without waiting in line, taken away the most important things: memory chips, server capacity, and the nerves of electronics manufacturers, who suddenly found themselves in a queue for Nvidia hardware longer than any government's list of promises.

The global boom in artificial intelligence, which began as a fashionable toy for tech corporations, has turned into a very concrete problem: the world simply does not have enough memory chips to "feed" all these neural networks with data.[1][2][3] Demand for high-performance graphics processors and their memory has grown so much that manufacturers, from Korean giants to Taiwanese contract fabs, are working at the limit of their capacity and still cannot keep up with the queue the AI boom has created.[1][2][4] The result is a new wave of global supply crisis: memory prices are soaring, smartphone makers and data center owners alike have forgotten about calm planning, and investors have received yet another argument for why "buying a few more chipmaker shares" is supposedly a good idea.[1][2][5]
What's happening: memory is in short supply, data centers are in panic
The heart of the current crisis is not the AI chips themselves but what is attached to them: high-speed HBM (High Bandwidth Memory) and the large amounts of RAM that modern AI models require.[1][2][4] Nvidia sells its flagship processors with integrated HBM stacks, which are manufactured by only a few companies in the world, primarily SK hynix, Samsung, and Micron; according to industry analysts, HBM production capacity is already effectively booked through the end of 2025, and in some cases beyond.[1][4][6] This means that any player who decides to "enter AI" today faces a simple fact: the queue for the hardware behind their big idea already stretches around the block, occupied not by startups but by giants on the level of Microsoft, Google, and Amazon.[2][4][6]
In parallel with HBM, prices for conventional DRAM modules are rising rapidly: manufacturers are reallocating capacity toward higher-margin server configurations and limiting supply to the consumer segment.[1][3][5] One of the largest memory manufacturers, Micron, has already announced that it is scaling back its focus on the consumer market and reorienting production toward AI needs. In plain terms: if your laptop or smartphone suddenly becomes more expensive this year, you can thank another data center full of chatbots for it.[1][5]
HBM: the "golden" memory without which AI does not work
The reason for the hype is quite simple: modern artificial intelligence models operate on terabytes of parameters and data, and all those numbers need to be stored somewhere and moved very quickly between chips.[1][4][6] HBM is multi-layer memory mounted directly next to the GPU, providing enormous bandwidth unattainable by conventional DDR memory; without it, top-end GPUs turn into painfully expensive but practically powerless metal souvenirs.[4][6] The problem is that manufacturing HBM is a technologically complex and expensive process: it requires advanced production lines and high yields of usable dies, and only a very limited number of manufacturers are capable of doing it at industrial scale at all.[4][6]
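To see why bandwidth, not just capacity, is the bottleneck, a back-of-envelope calculation helps. The sketch below estimates the memory bandwidth needed just to stream a model's weights once per generated token; the model size, precision, and speed are illustrative assumptions, not vendor specifications.

```python
# Back-of-envelope estimate of why AI inference is memory-bandwidth-bound.
# All figures below are illustrative assumptions, not vendor specifications.

def min_bandwidth_gbps(params_billions: float, bytes_per_param: int,
                       tokens_per_second: float) -> float:
    """Rough lower bound on memory bandwidth (GB/s) needed to stream
    all model weights once per generated token."""
    weight_bytes = params_billions * 1e9 * bytes_per_param
    return weight_bytes * tokens_per_second / 1e9

# A hypothetical 70-billion-parameter model at 16-bit (2-byte) precision,
# generating 20 tokens per second:
bw = min_bandwidth_gbps(70, 2, 20)
print(f"~{bw:.0f} GB/s required")  # ~2800 GB/s
```

At roughly 2.8 TB/s for this hypothetical workload, a single DDR5 channel (on the order of tens of GB/s) is hopeless, while a package with several HBM stacks, delivering terabytes per second in aggregate, is in the right ballpark; that is the whole reason the queue forms at HBM.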
According to market reports, SK hynix and Micron have already booked HBM volumes for key partners, primarily Nvidia, for years ahead, and other players' attempts to join this club run into a plain shortage of equipment and specialists.[1][4][6] This creates a "closed bar" effect: even if you are willing to pay more, it does not mean you will be let in; the seats are already taken by those who signed contracts earlier and in much larger volumes.[2][4][6]
Nvidia, Blackwell and the GPU shortage: why even the giants don't have enough hardware
The logical question: if HBM plays the key role, what does Nvidia have to do with it? The answer is simple: it was Nvidia that made HBM the de facto standard for training and running large AI models by bundling these modules with its GPUs as part of an integrated product.[1][2][4] Demand for the H100 and H200 lines and the new Blackwell generation has been so fierce that, according to market estimates, capacity for these chips is booked a year in advance, and some configurations were effectively sold out before mass deliveries even began.[2][4][6] Corporations that did not sign long-term contracts in time are now forced to buy infrastructure through intermediaries or rent capacity from cloud players at significantly higher rates.[2][4][6]
This, in turn, increases market concentration: those who already have large data centers and long-term contracts with Nvidia gain an even greater advantage over smaller competitors stuck in the queue.[2][3][4] For startups, this means that "building your own ChatGPT" is no longer just a matter of talent or code but a very concrete challenge: where to find affordable computing resources when most of the capacity has been booked by Big Tech and a handful of government projects.[2][3][6]
Implications for consumer electronics: smartphones and laptops also pay for AI
Although the center of the storm is in the data center segment, the shockwave has already reached consumer electronics.[1][3][5] Memory manufacturers, refocusing their lines on HBM and server modules, are cutting supplies of conventional DRAM and NAND for smartphones, laptops, and other gadgets.[1][5] According to analysts, wholesale prices for mobile-device memory have already gone up, which means that in the coming quarters manufacturers will have to either reduce the amount of memory in base configurations or quietly build these costs into the final price of the device.[3][5][8]
It is especially ironic when smartphones and laptops simultaneously advertise "on-board AI features" and suffer from the same AI boom that pushes them down the priority list for component supply.[2][3][5] Some segments are already seeing delays in new model launches and revisions of product lineups as companies are forced to calculate how much memory they can actually get rather than how much they would like to install.[3][5][8]
Who profits from the crisis: memory as the new oil
For memory manufacturers, the current situation feels like long-awaited revenge after several years of low prices and excess inventory.[1][2][5] SK hynix, Samsung, and Micron, which only recently cut production to stabilize the market, now find themselves able to dictate terms: demand for HBM and server DRAM exceeds supply, and the order book is packed with key customers for years ahead.[1][4][6] Some companies are already announcing plans to expand production lines, but even optimistic forecasts suggest the problem will not be fully resolved for another two to three years.[4][6][7]
At the same time, rising memory prices and component shortages are creating a perfect storm for everyone who depends on affordable chips: from server manufacturers to companies building local data centers for banks, telecoms, and government agencies.[2][3][7] In this configuration, every gigabyte of memory becomes a political and economic question: who gets the next container of modules, the cloud giant or the local provider that wants to launch its own AI services?[2][3][7]
How it hits supply chains: post-COVID lessons not learned
If you think you have heard of a "supply crisis" before, during the pandemic and the Suez Canal blockage, you are right: history is repeating itself, only this time it is memory chips, not containers of masks, that are missing.[1][2][3] Semiconductor supply chains remain highly concentrated: most advanced chips are produced by a few fabs in East Asia, logistics depend on a limited number of routes, and political tension between the US and China only adds nerves to an already fragile system.[1][3][9]
Signals are coming in from around the world: companies are being forced to revise product release schedules, push some plans into future years, and in some cases freeze or cut investments in new services, because there is no guarantee the necessary hardware will physically be available.[2][3][7] The global chain that was supposed to become "flexible" after the lessons of COVID is again showing its rigidity: a single segment (HBM) is unbalancing the entire market, from servers to smartphones.[1][2][9]
Geopolitics and chip control: why the US and China are back in the spotlight
The memory crisis is not only about business but also about politics: the US continues to tighten export controls on advanced chips and the equipment used to produce them, trying to limit China's access to cutting-edge technologies.[1][3][9] This strains supply chains even further: companies must balance Washington's demands, the needs of customers in China, and the desire not to lose a market that has already become one of the largest consumers of AI solutions outside the US.[3][6][9] In response, Beijing is promoting its own import-substitution programs and subsidizing domestic chip manufacturers, but it has not yet closed the technological gap in top-end GPUs and HBM.[3][6][9]
Against this background, Europe, including Ukraine, finds itself in the role of an economy-class passenger on a plane whose business class is split between the US and China: dependence on external supplies of chips and memory remains, and the ability to influence the rules of the game is limited.[1][3][9] European semiconductor support programs are still in their infancy, and even in the best-case scenario it will take years to change the balance of power in the global market.[3][9]
What does this mean for Ukraine and the region?
For Ukraine, which is simultaneously fighting a war, digitizing, and trying to develop its own IT products, the global memory crisis means several very practical things.[3][7][11] First, the cost of renting capacity from cloud providers, who already factor rising hardware prices into their tariffs, will inevitably increase; this applies both to international players and to local data centers that buy the same servers with the same scarce memory modules.[3][7] Second, for Ukrainian startups working with AI, access to capacity is becoming an even more sensitive issue: competition for resources in the region will intensify, and cheap "sandboxes for experiments" may become a thing of the past.[3][7][11]
Third, government projects, from e-government to military development, also depend on the availability of capacity and components; in a situation where even large corporations queue for HBM and GPUs, planning infrastructure modernization becomes a harder exercise than writing any digital transformation strategy.[3][7][11] That is why supplier diversification, development of domestic data centers, and a well-thought-out policy on the use of AI tools are becoming not a fashionable trend for Ukraine but a factor of stability for the years ahead.[3][7][11]
Is there a way out: investments, new players and changing priorities?
Industry experts agree on one thing: the current crisis will not resolve itself within a few quarters, as has sometimes happened in the memory market in the past.[2][4][6] Balancing supply and demand requires major investment in new HBM and DRAM production lines, expanded testing and chip-packaging capacity, and the emergence of additional players able to produce critical components at a level that satisfies Nvidia and other makers of AI accelerators.[4][6][7] Even under the most optimistic forecasts, that means two to three years of a tense transition, during which the market will live in a mode of scarcity and customer prioritization.[4][6][7]
At the same time, there are voices calling for smarter use of available resources: model optimization, more efficient architectures that require less memory, and better utilization of capacity already installed in data centers.[3][6][7] But a market that currently evaluates everything through the prism of "who can train a bigger model first" is not much inclined toward asceticism: as long as investors reward companies for aggressively building AI capacity, it is unlikely anyone will voluntarily abandon the race for additional GPUs and terabytes of HBM.[2][3][6]
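One concrete form of that "smarter use" is quantization: storing weights at lower precision shrinks the memory footprint roughly linearly with bit width. A minimal sketch, using an assumed 70-billion-parameter model purely for illustration:

```python
# Illustrative sketch of how lower-precision weights shrink memory needs.
# The 70B model size is an assumption for the example, not a real product.

def weight_footprint_gb(params_billions: float, bits_per_param: int) -> float:
    """Gigabytes needed just to hold the model weights."""
    return params_billions * 1e9 * bits_per_param / 8 / 1e9

for bits in (32, 16, 8, 4):
    print(f"{bits:>2}-bit weights: {weight_footprint_gb(70, bits):.0f} GB")
# 32-bit: 280 GB, 16-bit: 140 GB, 8-bit: 70 GB, 4-bit: 35 GB
```

Halving precision halves the memory a deployment has to buy, which is why inference-side optimization is one of the few levers that does not require standing in the HBM queue.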
AI as a new stress test for globalization
The current memory supply crisis has become the first serious test of what happens to the global economy when AI suddenly stops being a buzzword and becomes basic infrastructure.[1][2][3] It turned out that supply chains built over decades around other demand cycles were not ready for a situation in which several corporations and states simultaneously wanted tens of times more memory than yesterday.[1][3][9] And while politicians talk about "digital strategies" and "ethical AI," a much less poetic story is unfolding at the level of factories and contracts: those who managed to reserve chips are writing the script of the future, and those who did not will read it later, on someone else's terms.[2][3][6]
Sources
1. Reuters, "AI frenzy is driving a memory chip supply crisis": an overview of the global memory supply crisis amid the AI boom
2. The Straits Times: analysis of the AI boom's impact on global memory chip supply chains
3. International semiconductor media: materials on the new wave of chip supply crisis after the pandemic
4. Semiconductor industry publications: estimates of the HBM shortage and capacity utilization at SK hynix, Samsung, and Micron
5. CNBC and other business media: news on the strategy shift of Micron and other memory manufacturers toward the AI segment
6. Data center market analytics: assessments of the impact of GPU and memory shortages on cloud providers' expansion plans
7. International research on digital infrastructure: the impact of chip shortages on the development of data centers and AI services
8. Publications on US export controls and China's chip policy: analysis of the restrictions' impact on the global semiconductor market
9. European press and think tanks: assessments of Europe's chances of reducing dependence on Asian semiconductor manufacturers
10. Regional overviews of the cloud services market in Central and Eastern Europe: the impact of hardware shortages on local IT ecosystems
11. Technical and market reviews of GPU platforms: materials on HBM integration with top-end AI GPUs and AI-GPU pricing
12. Think tanks on the geopolitics of technology: materials on competition among the US, China, and other players in the AI market
13. DRAM and NAND market research: data on price changes and supply structure in the consumer segment amid the AI boom

