Phenomenon and Business Essence

At the 2026 Intelligent Electric Vehicle Development High-Level Forum, Alibaba Cloud revealed a striking statistic: more than 30 automotive manufacturers and autonomous driving solution providers are conducting autonomous driving R&D on its platform, and deployments of T-Head's self-developed "Zhenwu" PPU have exceeded 100,000 chips, the largest-scale use of self-developed AI chips on a public cloud platform in the automotive industry. This is not a press release; it is a collective vote cast through procurement decisions. The underlying business logic can be summed up in one sentence: the fixed cost of building your own computing facilities is losing to the flexible, pay-as-you-go model of cloud services.

Dimensional Analogy

In the 1980s, General Motors invested heavily in its own steel mills and component factories, attempting to control the entire supply chain. The result: fixed assets crippled its flexibility, and it was ultimately outcompeted by Toyota's just-in-time, outsourced supplier model. Today's computing power is repeating this history.

Building your own GPU cluster means hundreds of millions in hardware investment, roughly 18-month depreciation cycles, and constant anxiety that your chips are always one generation behind. Cloud computing means renting computing power by the project, with spending scaled to the size of the R&D team and nothing paid while models sit idle. The gap between these two cost structures will only widen as model iteration accelerates. What Alibaba Cloud provides is not just computing power but a three-layer collaborative architecture of chips, cloud platform, and the open-source large model "Qianwen": in effect, a complete "computing power as production line" rental service for automotive companies.
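
To make the cost comparison concrete, here is a minimal break-even sketch. Every symbol is an illustrative assumption of ours, not a figure from the forum or from Alibaba Cloud:

```latex
% Break-even sketch (illustrative; all symbols are assumptions, not reported figures).
% C_cap: upfront cluster cost      T: useful life in hours (shrinks as chips iterate)
% c_op: operating cost per hour    N: number of accelerators
% u: average utilization           p: cloud price per accelerator-hour
\[
  c_{\text{own}} \;=\; \frac{C_{\text{cap}}/T + c_{\text{op}}}{u\,N}
  \qquad\qquad
  c_{\text{cloud}} \;=\; p
\]
\[
  \text{self-build pays off only if}\quad
  u \;>\; \frac{C_{\text{cap}}/T + c_{\text{op}}}{N\,p}
\]
```

Faster chip iteration shortens the useful life T, which raises the utilization threshold an owned cluster must clear; that is the arithmetic behind "the gap will only widen."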

Industry Reconfiguration and Endgame Projection

Andy Grove's theory of the "strategic inflection point" holds that when the fundamental way a critical resource is obtained changes, every player relying on the old approach faces a life-or-death choice. Autonomous driving computing power is at exactly such an inflection point.

  • Winners: Small and medium-sized auto manufacturers and autonomous driving solution providers. They previously couldn't compete with Huawei and Tesla on computing power. Cloud platforms let them "rent" instead of "buy," shortening R&D cycles and enabling rapid model validation.
  • Under Pressure: Large OEMs that have heavily invested in building their own data centers. Sunk costs will slow their migration to cloud, but competitors' iteration speed won't wait.
  • Potential Exits: Autonomous driving computing service providers dependent on an exclusive NVIDIA GPU supply. Alibaba Cloud is building a closed ecosystem around its self-developed chips; once scale effects materialize, the cost of displacing that ecosystem will be extremely high.

Time window: according to public information, autonomous driving model iteration cycles have compressed to just a few months. Within 18 to 24 months, the lock-in effects of the cloud computing ecosystem will have largely solidified.

Two Paths for Executives

Path One (Cloud Migration): Join Alibaba Cloud's autonomous driving ecosystem, rent computing power by the project, and convert R&D costs from fixed assets into variable expenses. First step: measure current monthly autonomous driving R&D compute consumption, compare it with cloud pricing, and make a decision within one quarter.
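
A minimal sketch of that first step, assuming a simple amortized-cost comparison. Every figure below (cluster CapEx, GPU count, utilization, cloud rate, monthly consumption) is a hypothetical placeholder to be replaced with your own measurements and quotes, and the helper `own_cost_per_gpu_hour` is ours, not an Alibaba Cloud API:

```python
# Illustrative first-step comparison: effective cost per utilized GPU-hour for a
# self-built cluster versus an on-demand cloud rate. Every number here is a
# hypothetical placeholder; substitute your own CapEx, utilization, and quotes.

def own_cost_per_gpu_hour(capex: float, life_months: int, opex_per_month: float,
                          num_gpus: int, utilization: float) -> float:
    """Amortized cost per *utilized* GPU-hour of an owned cluster."""
    hours = life_months * 30 * 24                          # rough hours over the depreciation window
    total_cost = capex + opex_per_month * life_months      # hardware plus power/cooling/staffing
    return total_cost / (hours * num_gpus * utilization)   # only utilized hours create value

# Hypothetical inputs: a 512-GPU cluster with an 18-month useful life.
own = own_cost_per_gpu_hour(capex=25_000_000, life_months=18,
                            opex_per_month=800_000, num_gpus=512, utilization=0.45)

cloud_rate = 4.0             # assumed on-demand price per GPU-hour from the cloud quote
monthly_gpu_hours = 120_000  # measured current R&D consumption

print(f"owned cluster: {own:.2f} per GPU-hour, cloud: {cloud_rate:.2f} per GPU-hour")
print(f"monthly difference at current consumption: {(own - cloud_rate) * monthly_gpu_hours:,.0f}")
```

If the owned-cluster figure stays above the quoted cloud rate at realistic utilization levels, the quarter's decision largely makes itself.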

Path Two (Build Moats): If enterprise data security or competitive barriers require data to remain on-premises, concentrate resources on a small-scale private cluster, but simultaneously partner with a domestic chip manufacturer (such as Cambricon or Huawei's Ascend) for long-term iteration support, avoiding the risks of a single-source NVIDIA supply chain. First step: lock in the chip partner and sign a 3+ year supply agreement to secure volume pricing.