
OpenAI predicts that by 2028, AI systems will be capable of making significant discoveries in science, medicine, and education, far beyond current chatbot capabilities.
New Delhi: Artificial intelligence (AI) systems may make only minor discoveries in 2026, but by 2028 and beyond they could be capable of more significant breakthroughs, US-based AI company OpenAI has said.
In a recent blog post, OpenAI noted that much of the world still views AI primarily as chatbots or advanced search tools. “But today, we have systems that can outperform the smartest humans at some of our most challenging intellectual competitions. Although AI systems are still spiky and face serious weaknesses, systems that can solve such hard problems seem more like 80 per cent of the way to an AI researcher than 20 per cent of the way,” the company said.
OpenAI highlighted the gap between current public use and the full potential of AI, noting that systems capable of discovering new knowledge autonomously or enhancing human capabilities could have a transformative impact.
The company cited recent progress in AI, which has advanced from performing tasks a human can complete in seconds to tasks that would normally take a person over an hour. “We expect to have systems that can do tasks that take a person days or weeks soon; we do not know how to think about systems that can do tasks that would take a person centuries. At the same time, the cost per unit of a given level of intelligence has fallen steeply: 40x per year is a reasonable estimate over the last few years,” OpenAI said.
Looking ahead, AI systems are expected to deepen understanding of human health, accelerate research in materials science, drug development and climate modelling, and expand access to personalised education globally. OpenAI said demonstrating these tangible benefits helps create a shared vision of a world where AI improves life, not just efficiency.
However, the company also cautioned about the potential risks of superintelligent AI, describing them as “potentially catastrophic.” OpenAI emphasised the importance of safety and alignment research to ensure robust control before deploying systems capable of recursive self-improvement. “No one should deploy super-intelligent systems without being able to robustly align and control them, and this requires more technical work,” it added.
IANS


