Import AI editor Jack Clark puts a number on it: better than a 60% probability that, by the end of 2028, AI systems will be able to autonomously build their own successor versions. This is not sci-fi speculation but a judgment grounded in arXiv papers and observations of how frontier companies deploy their products.
What this is
Clark calls it "AI R&D without human involvement." Simply put: today's AI models take massive collaboration among engineers and researchers to develop. Clark believes all the necessary technical components are already in place, and that AI systems will soon be able to train their next generation end to end.
The timeline is not 2026. But within the next one to two years, we may see proofs of concept at the non-frontier model level. Frontier models are harder, with higher costs and greater human investment.
The core logic chain: AI systems are fundamentally software, software is built from code, and AI has already demonstrated significant progress in code generation. If scaling trends continue, models will gain enough creativity to propose new research directions, displacing the role of the human researcher. Clark describes this as "crossing the Rubicon": once achieved, there is no going back.
Industry view
Clark himself admits this is a "reluctant conclusion": the implications are massive, and it is unclear whether society is prepared. He relies on trend charts pieced together from multiple benchmarks, yet every benchmark has flaws, and individual data points are unreliable. Trend extrapolation is inherently risky; past performance does not guarantee future results.
The more substantial skepticism concerns cost. Training a frontier model routinely runs into the tens of millions of dollars and requires a tightly coordinated engineering team. That AI can write code does not mean it can manage a distributed training project spanning hundreds of people and thousands of GPUs; the engineering complexity and capital thresholds may be harder to cross than "creativity." So far, the public evidence mostly shows that AI can complete partial coding tasks, not that it can orchestrate a complete model R&D project.
Another perspective argues that even if AI can automate the engineering work, the intuitive judgment and taste in research, knowing which directions are worth pursuing and which are dead ends, will still rest with humans in the short term. Clark's prediction is essentially a bet that scaling laws will cause this "taste" to emerge as well.
Impact on regular people
For enterprise IT: if AI R&D automation becomes reality, the cost of acquiring custom models will drop significantly, but technological moats will erode even faster, and windows of competitive advantage will shrink.
For individual careers: AI engineers and researchers are the groups most directly affected. In the short term, the likelier outcome is a change in hiring structure: fewer junior roles, with senior roles that can judge research direction retained.
For the consumer market: what end users will feel is not "AI building AI" itself, but the faster product update cadence that comes with faster model iteration. Today's advanced features may be baseline configurations three months from now.