In July, 36kr cited industry sources reporting that Nationz Technologies has begun volume shipments to a "global top-tier power management vendor," with related power monitoring chips in stable mass production at unit prices of $1.5–2. Meanwhile, two communication chips are expected to be sampled in July, targeting deeper entry into the customer's core domestic MCU ecosystem.
On the surface, this reads like another domestic chip supply chain breakthrough story.
But what's really worth examining are three specific details: volume shipments, AI power demand, and the $1.5–2 ASP.
I haven't verified this within the customer's AVL or design-in pipeline, so I must reserve judgment on which "global top-tier power management vendor" this is. However, judging purely from the description, this isn't lab validation — it has crossed the sample stage and entered volume procurement.
Volume shipments, unit price $1.5–2, power monitoring chips in stable mass production. A large number of overseas AI power and optical communication companies are beginning large-scale procurement of domestic MCUs to address rapidly expanding compute and AI power demands.
The real signal this conveys isn't about MCUs — it's about long-tail components in AI infrastructure starting to tighten.
02 What This Really Means
This isn't simply "a Chinese MCU company wins an overseas customer."
What it's actually saying is: AI infrastructure expansion has propagated from GPUs, HBM, and optical modules into power management and board-level control layers — and has reached the MCU segment that almost no one writes long analyses about.
Why does this matter?
Because the reality of AI infra isn't just training clusters and inference tokens. What truly determines whether clusters can be delivered, racks can go online, and PSUs can run stably is an entire underestimated ecosystem of supporting components: power management ICs, monitoring MCUs, BMCs, fan controllers, thermal management, optical communication auxiliary controllers. No matter how expensive the GPU, without these components, racks remain nothing more than construction-in-progress on the balance sheet.
The issue isn't how advanced the MCU itself is, but whether it becomes a delivery bottleneck.
For AI builders, this means the inference cost curve is no longer just a function of "model efficiency improvements" or "GPU unit price declines." What actually gets priced is complete system deliverability. As long as any link in the power chain, optical communication chain, or board-level control chain experiences shortage, token prices cannot smoothly decline.
This is also why the seemingly small ASP of $1.5–2 is worth noting. When a low-ASP component enters a critical system node, its strategic value far exceeds its BOM percentage. What made AWS truly formidable back then wasn't buying the cheapest servers, but being able to systematize all component supply, operations, scheduling, and racking cadence. AI infra is now replaying the same logic.
I haven't dissected the performance, reliability, and firmware ecosystem of this specific Nationz power monitoring chip, so I can't assert it has a long-term technical moat. But if overseas AI power and optical communication companies are beginning large-scale procurement of domestic MCUs, it means at least two things have already happened:
First, supply-demand tension in non-GPU segments is sufficient to alter global procurement preferences.
Second, customers are willing to accept new vendor qualification costs for supply assurance.
This weighs far more than the phrase "domestic substitution." Once customers are willing to change their AVL, switching costs have already been partially prepaid. From there, as long as quality and delivery don't hit major issues, share won't easily revert completely.
03 Historical Analogy / Structural Comparison
This resembles the AWS supply chain moment around 2014, not the ChatGPT product moment of 2022.
ChatGPT represents demand explosion.
AWS's true inflection point came when the market realized: cloud isn't a bunch of servers, but a complete industrial system spanning chips, networking, storage, and power through to software orchestration. Whoever controls the delivery chain controls the profit pool.
The AI industry today is moving from "model as product" toward "system as product."
Over the past two years, market attention has been almost entirely on model release cadence: OpenAI, Anthropic, Google, Meta, DeepSeek, Qwen. But if you extend the timeline, you'll find that what truly affects token economics is often not benchmark rankings, but implicit constraints on supply-side infrastructure: HBM capacity, CoWoS packaging, rack power density, liquid cooling, power conversion efficiency, optical interconnects, and a host of control chips surrounding these systems.
After the 2008 financial crisis, many people came to understand a fact: prices aren't determined by paper assets, but by liquidity.
AI infra is the same. Model capability isn't the only constraint; system liquidity is. "Liquidity" here means the total volume of hardware that can be procured, verified, mass-produced, and brought online.
I may be overestimating the MCU's importance in overall AI capex — after all, its value is nowhere near the same magnitude as GPUs or optical modules. But structurally it fits the typical inflection-point signal: when the most inconspicuous supporting components begin benefiting from AI demand, it indicates expansion has penetrated deep into the supply chain's capillaries.
This is typically not short-term hype, but a precursor to capacity restructuring.
04 What This Means for AI Builders
For most AI builders, this news doesn't mean you need to study MCU datasheets.
It means you need to reassess your upstream cost and risk exposure.
First, if you're a model API consumer, stop assuming token prices will decline linearly. Competition at the model layer is fierce — OpenAI, Anthropic, Google, and DeepSeek are all cutting prices or offering indirect concessions. But if non-core components in underlying AI infra are also tight, price-reduction transmission will be partially absorbed by system delivery costs. What you should do is implement model routing, prompt caching, and batch API utilization more aggressively, rather than passively waiting for the supply side to automatically yield cheaper prices.
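To make that first point concrete, here is a minimal sketch of model routing plus prompt caching. The model names, prices, and length-based routing heuristic are hypothetical placeholders; a real implementation would call a provider SDK where the stub function sits, and would use a smarter routing signal than prompt length.

```python
import hashlib

# Hypothetical model tiers and per-1K-token prices; real prices vary by provider.
MODELS = {
    "small": {"price_per_1k": 0.15},
    "large": {"price_per_1k": 2.50},
}

_cache: dict[str, str] = {}


def route(prompt: str) -> str:
    """Route short, simple prompts to the cheap model; everything else to the large one."""
    return "small" if len(prompt) < 500 else "large"


def complete(prompt: str, call_model) -> str:
    """Cache identical prompts so repeated requests cost nothing extra."""
    key = hashlib.sha256(prompt.encode()).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(route(prompt), prompt)
    return _cache[key]


# Stub standing in for a real provider SDK call; records which model was billed.
calls = []


def fake_call(model: str, prompt: str) -> str:
    calls.append(model)
    return f"[{model}] answer"


print(complete("What is 2+2?", fake_call))  # routed to "small"
print(complete("What is 2+2?", fake_call))  # served from cache, no second billed call
```

Batch endpoints, where providers offer them, extend the same idea: accumulate non-urgent requests and submit them together at a discounted rate instead of paying interactive prices for everything.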
Second, if you're doing private deployment or industry-specific large model projects, you should now scrutinize the hardware delivery chain more granularly. It's not just GPU procurement contracts that matter — power, cooling, switching, optical modules, board-level control chips, and firmware support can all determine project launch timing. I haven't participated in your specific project's data center acceptance, but the most common failure mode for such projects isn't that models can't run, but that systems cannot be stably delivered.
Third, if you're an AI infra startup, this news reminds you that moats may not lie in the sexiest layers. The market loves talking about models, Agents, MCP, IDEs, and Copilots, but truly stable profits sometimes sink into boring components. As long as your product sits at a necessary node in a high-growth system, even with a modest ASP, there's opportunity for repricing.
Fourth, if you're an investor or entrepreneur looking at China supply chain opportunities, focus should shift from "can it substitute" to "can it enter mainstream global design chains." Once a vendor has entered design chains and completed volume validation, the probability of capturing more sockets rises significantly. During AI expansion phases, what customers care most about isn't slogans, but second sources, delivery stability, and lifecycle management.
This is also why I categorize this news under infrastructure rather than ordinary semiconductor news. It reflects that AI compute expansion is propagating deep into the supply chain, with even the most fundamental control components beginning to see structural opportunities.
05 Counterarguments / Risks
The strongest counterargument is: this may just be supply chain news amplified by "AI," still far from a true industry inflection point.
First, the source is industry chain and media reports, not a customer announcement, with no clear disclosure of customer name, shipment scale, or annualized revenue contribution. Without this information, I cannot elevate it to ironclad evidence of a formal restructuring of the global AI power chain.
Second, MCU design-in doesn't automatically equal long-term share. Power- and communication-related components seem low-threshold, but actually demand extremely high reliability, certification, firmware stability, and long-term supply. Securing samples or volume orders today doesn't mean no replacement next year. I may be underestimating overseas customers' path dependence on mature suppliers.
Third, AI demand is indeed driving up power and optical communication accessories, but this may not spill over to all domestic MCU vendors. Winners may concentrate among the few companies that can pass validation and bind themselves to leading customers, not in a sector-wide rally. In other words, this isn't "domestic MCUs broadly benefiting," but more likely "the very few suppliers entering critical chains benefiting."
Fourth, from an AI builder's perspective, this news may not affect short-term business decisions as much as I've written. The vast majority of application teams buy APIs, not racks. As long as OpenAI, Anthropic, Google, AWS, Azure, and CoreWeave continue expanding capacity, end developers may not directly feel MCU shortage friction.
So a more prudent judgment is: this isn't a major event that determines the model landscape, but it is an edge signal on the supply side worth taking seriously.
The value of edge signals is that they often tell you, earlier than the headlines do, where the industry is truly stuck.
And this time, the bottleneck clearly isn't just in models. It has already moved into power systems.