GROWTH TRAJECTORY OF AI IS HIGHLY UNCERTAIN THROUGH 2030
EXPERTS EXPRESS LOW CONFIDENCE IN THEIR ABILITY TO PREDICT
As the India AI Impact Summit 2026, branded as the first-ever such event in the Global South, is scheduled to be held in New Delhi next week from February 16 to February 20, many claims about AI’s growth and impact, often contradictory, are making headlines. However, a study, “Exploring Possible AI Trajectories Through 2030”, finds AI’s path highly uncertain, with experts expressing low confidence in their ability to predict anything with certainty.
This artificial intelligence paper, published recently by the OECD, suggests that AI progress by 2030 has a plausible range that includes both a plateau at approximately today’s level of capabilities and rapid improvement leading to AI systems that broadly surpass human capabilities. Current evidence suggests that four broad scenario classes are all plausible through 2030: progress stalling, progress slowing, progress continuing, and progress accelerating. Moreover, the state of the evidence is insufficient to discount any of the scenarios outlined in the paper, or variations thereupon.
The report says that the consulted experts expressed high uncertainty and low confidence in their ability to predict the rate of AI progress through 2030 and beyond. This reflects the extremely rapid pace of innovation in AI systems over recent years, combined with high uncertainty about the extent to which recent drivers of AI progress will continue to drive further progress.
The OECD paper suggests that policymakers could consider four broad scenario classes, all of which are plausible by 2030.
The first scenario is “Progress Stalls”. In this scenario, progress in the most advanced AI systems largely halts and capabilities remain largely unchanged. The rapid gains observed over recent years stop and AI progress plateaus, while diffusion and application development continue for existing capabilities. In 2030, AI systems can quickly undertake a range of tasks that would take humans hours to perform, but issues of robustness and hallucination impact reliability. AI systems typically rely upon substantial support from humans to complete tasks, such as detailed prompting, review and provision of context. Potential variations could be: AI as a Narrow Tool; and Simple AI Agents.
The second scenario is “Progress Slows”. In this scenario, incremental gains in the most advanced AI systems deliver continued but slower progress. In 2030, AI systems have a deep knowledge base, excel at standard forms of structured reasoning, and can act as useful assistants for tasks that require them to use a computer, navigate the web or undertake limited interaction with people or services on behalf of the user. AI systems can quickly undertake well-scoped tasks that would take humans hours or days to perform. They typically rely on humans to provide clearly scoped tasks, review important decisions or actions, and provide detailed guidance and context. Potential variations could include: Simple Robots; and Socially-Limited AI.
The third scenario is “Progress Continues”. In this scenario, rapid progress continues. In 2030, AI systems can perform many professional tasks in digital environments that might take humans a month to complete. Deficits in AI systems’ continual learning and generalisation to complex real-world environments and situations persist. AI systems typically rely on humans to provide high-level directions and bounds for their behaviour, but can often operate with high autonomy within these bounds towards a given objective, including autonomously interacting with a range of stakeholders. Potential variations could include: Forgetful AI; and Digital-only AI.
The fourth scenario is “Progress Accelerates”. In this scenario, dramatic progress leads to AI systems as capable as, or more capable than, humans across most or all capability dimensions. In 2030, AI systems can operate with levels of autonomy and cognitive ability that match or surpass humans in cognitive tasks, autonomously working towards broad strategic goals that they can reflect upon and revise if circumstances change, while also collaborating with humans where necessary. AI-guided robots can handle complex tasks in dynamic real-world environments across many industries and roles, though they still largely lag humans in these roles unless developed specifically for them. Potential variations could include: Artificial General Intelligence; and Superintelligence.
The paper says that AI has advanced rapidly in recent years. AI systems are now able to draft academic essays at the level of university students and solve coding problems at the level of human programmers. More advanced AI systems are routinely developed while governments, economies and societies race to keep up with the pace of change. Nevertheless, several uncertainties remain.
The paper enumerates six key uncertainties about AI capabilities: the relationship between scaling of pretraining and performance gains; gains from reinforcement learning for reasoning and scaling of inference compute; progress in memory and continual learning; progress in physical capabilities; progress on robust agentic behaviour and metacognition; and progress on creativity and the ability to solve novel problems.
The paper also enumerates four key uncertainties about future AI inputs: scaling of compute and data inputs; algorithmic efficiency gains; the use of AI systems in AI development; and other social, economic and institutional factors influencing AI progress.
The study uses nine capability indicators: language; social interaction; problem solving; creativity; metacognition and critical thinking; knowledge, learning and memory; vision; physical manipulation; and robotic intelligence.
Experts have noted that both the recognised uncertainties discussed in the paper and potential unknown unknowns in future AI developments contribute to this uncertainty, which reflects the extremely rapid pace of innovation in AI systems over recent years and doubts about the extent to which recent drivers of AI progress will continue to drive further progress. Therefore, policymakers should consider the full range of possible AI trajectories by 2030 when developing policies, to ensure that they can capture the benefits of AI technologies and manage the potential impacts of continued – or stalled – AI progress.