Where (I Think) AI Is Headed
I’m writing this on March 5th, 2025, sharing my current take on where AI is headed. Like anyone trying to predict the future, I’ll probably miss the mark on some (or many) of these points - but hey, that’s part of the fun of making predictions.
- The path to artificial super-intelligence likely lies in pre-trained foundation models (trained on multimodal next-token prediction) that continually learn through reinforcement learning as they encounter new data and interact with the environment (this is known as continual reinforcement learning).
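To make "continual reinforcement learning" concrete, here is a minimal toy sketch: a tabular Q-learner that keeps updating from a stream of interactions with no fixed training phase. The two-state environment, reward scheme, and all hyperparameters are invented purely for illustration; real continual RL on foundation models would operate at a vastly different scale.

```python
import random

random.seed(0)  # for reproducibility of this toy run

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
STATES, ACTIONS = [0, 1], [0, 1]
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action):
    """Invented environment: only action 1 taken in state 1 pays off."""
    reward = 1.0 if (state == 1 and action == 1) else 0.0
    return reward, random.choice(STATES)

state = 0
for _ in range(10_000):  # in a continual setting, this loop never terminates
    # Epsilon-greedy action selection: mostly exploit, occasionally explore.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    reward, next_state = step(state, action)
    # Online TD update: knowledge is revised after every single interaction.
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = next_state

# The agent has learned which action is rewarding in state 1.
print(Q[(1, 1)] > Q[(1, 0)])
```

The point of the sketch is the shape of the loop, not the algorithm: learning is interleaved with acting, forever, rather than being a one-off pre-training run.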
- The final question we must ultimately address is: “What should self-improving AI optimise for?”
- We’ll solve the challenges of continual learning and self-improvement without human supervision not by entirely eliminating catastrophic forgetting in LLMs (both LLMs and human brains have capacity limits), but by strategically forgetting less important information or compressing it into summaries to make room for new knowledge.
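A hypothetical sketch of what "strategic forgetting" could look like: a fixed-capacity memory that, when full, evicts its least important entries but keeps a compressed summary of them instead of losing everything. The importance scores and the `summarise()` stand-in are invented for this example; a real system might use an LLM to write the summary.

```python
CAPACITY = 4

def summarise(entries):
    # Stand-in for a learned summariser (e.g. an LLM compressing old memories).
    return "summary(" + "; ".join(text for text, _ in entries) + ")"

def remember(memory, entry, importance):
    memory.append((entry, importance))
    if len(memory) > CAPACITY:
        # Evict the two least important memories...
        memory.sort(key=lambda m: m[1])
        evicted, memory[:] = memory[:2], memory[2:]
        # ...but retain a compressed trace of them to free up capacity.
        memory.append((summarise(evicted), max(i for _, i in evicted)))
    return memory

memory = []
for text, importance in [("met Alice", 0.9), ("saw a red car", 0.1),
                         ("ate lunch", 0.2), ("deadline moved", 0.8),
                         ("new project brief", 0.7)]:
    remember(memory, text, importance)

print([m[0] for m in memory])
```

Trivia like "saw a red car" and "ate lunch" survive only inside a single compressed summary entry, while high-importance items stay verbatim.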
- Human work will slowly shift toward high-level tasks while AI handles (more of the) routine operations. The risks of completely automating menial work will drive us toward becoming a test-driven society, in which systems (including AI) validate (other) AI outputs. I believe this transition is already underway for developers who use AI tools to assist with coding. For critical applications, extensive testing protocols will remain essential.
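The "test-driven" pattern above can be sketched as a simple generate-validate-retry loop: a candidate produced by a generator is only accepted once it passes an independent test suite. Here `fake_generate()` is a stub standing in for a code-generating model, and the test cases are invented for illustration.

```python
def fake_generate(attempt):
    # Stand-in for a code-generating model; its first draft is buggy.
    if attempt == 0:
        return lambda a, b: a - b   # wrong implementation
    return lambda a, b: a + b       # corrected on retry

def run_tests(candidate):
    # Independent validation: the candidate must satisfy all known cases.
    cases = [((1, 2), 3), ((0, 0), 0), ((-1, 1), 0)]
    return all(candidate(*args) == expected for args, expected in cases)

def generate_with_validation(max_attempts=3):
    for attempt in range(max_attempts):
        candidate = fake_generate(attempt)
        if run_tests(candidate):  # output only ships if the tests agree
            return candidate, attempt
    raise RuntimeError("no candidate passed validation")

add, attempts_used = generate_with_validation()
print(attempts_used)
```

Tools that let an AI write code, run the tests, and retry on failure already follow this loop; a "test-driven society" generalises the same pattern beyond code.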
- The “bitter lesson” of AI development will continue: scaling simple algorithms will outperform more complex systems in the long run.
- Governments will recognise AI’s strategic importance (some of them are already doing so), redirecting significant portions of their budgets toward AI development, effectively initiating a new AI-focused cold war.
- Massive computing clusters will train and power the most advanced AI systems. While decentralised computing will exist alongside these clusters, open-source AI will consistently lag slightly behind closed-source systems. Ultimately, due to superior computing resources, closed-source AI will pull significantly ahead. I honestly hope this does not happen, but it seems like the most plausible scenario.
- Each of us will have a personalised AI assistant, customised to our unique data and goals.
- While humans can learn through passive observation alone, they achieve optimal learning gains by alternating between observing and taking action. This same principle will be applied to AI systems, particularly embodied agents.
- Foundational model companies currently compete in an environment where users can easily switch platforms (Claude, ChatGPT, Perplexity, Gemini). The future market may shift toward tool-based subscriptions like Cursor (for coding) or Notion (for writing) that manage multiple models through a single subscription. This won’t create winner-take-all scenarios but will change how consumers access AI capabilities.