
Beyond the Hype: Understanding the Limitations of AI

By Sarah Hoffman - Director of Research, AI | September 3, 2025

AI is already transforming how we work, learn, and create. But the technology is far from perfect. The recent release of GPT-5 delivered important improvements over OpenAI’s previous model, but it also reminded us that even powerful systems come with limitations. Understanding these limitations is essential for deriving real value from the technology.

AI’s Current Limitations

Even as models like GPT‑5 grow more capable, their core limitations remain. But these limits don’t make the technology useless; rather, they define where it creates the most value when paired with human judgment.

Dependence on Data Quality

AI can only be as good as the data you give it. If the data is biased, outdated, or incomplete, the system will reflect and even amplify those flaws. In finance, outdated or incomplete market datasets can cause AI models to miss major shifts, leading to costly missteps. A recent U.K. study found that 90% of estate agents believe AI software is routinely undervaluing properties due to limited data sources.

As one AI expert noted in an AlphaSense transcript: “It's not that it can't make the decisions, it just doesn't have access to all of the information that sometimes we or researchers might have. Its decision-making is just limited by the information that's available to it at the present time.”

The flip side, however, is that when AI is equipped with clean and trustworthy data, it can surface insights at a speed and scale that humans could never reach.
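
To make this concrete, here is a minimal sketch, in Python, of the kind of pre-flight check that keeps stale or incomplete data away from a model. The record fields ("timestamp", "value") and the thresholds are illustrative assumptions, not a real schema:

```python
from datetime import datetime, timedelta, timezone

# Illustrative thresholds; tune for your domain and data cadence.
MAX_AGE = timedelta(days=30)     # reject records older than 30 days
MAX_MISSING_RATIO = 0.05         # reject feeds with >5% missing values

def is_fit_for_model(records: list[dict]) -> bool:
    """Gate a data feed before it reaches the model.

    Assumes each record carries a timezone-aware 'timestamp' and a
    'value' field; both names are hypothetical, not a real schema.
    """
    if not records:
        return False
    now = datetime.now(timezone.utc)
    stale = sum(1 for r in records if now - r["timestamp"] > MAX_AGE)
    missing = sum(1 for r in records if r.get("value") is None)
    # Refuse to run on stale or incomplete data rather than letting
    # the model silently reflect and amplify those flaws.
    return stale == 0 and missing / len(records) <= MAX_MISSING_RATIO
```

The specifics matter less than the pattern: quality is enforced before inference, so the model never sees inputs whose flaws it would otherwise amplify.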

Hallucinations

Even the most advanced models can fabricate information. And they do it with confidence. A 2024 research paper introducing Google’s healthcare AI model Med-Gemini contained a hallucination in which the AI identified a nonexistent part of the brain. In March, a Norwegian man filed a complaint against OpenAI after ChatGPT falsely claimed he had murdered two of his children.

Contrary to expectations, some newer reasoning models actually hallucinate more than earlier versions. And while the newly released GPT-5 appears to hallucinate less than its predecessor, the problem is far from solved.

That said, as long as outputs can be verified, hallucinations can be managed — and in some cases, even turned into a feature. In brainstorming sessions or creative projects, hallucinations can function as sparks, surfacing unusual connections or prompting new lines of thought.
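
One way to make outputs verifiable, sketched here under the assumption that the output is a summary of a known source text, is to flag any sentence with no apparent support in the source and route it to a reviewer. The function below is a deliberately crude lexical proxy; a production system would use retrieval or entailment checks instead:

```python
import re

def unsupported_claims(summary: str, source: str) -> list[str]:
    """Flag summary sentences whose key terms never appear in the source.

    A crude lexical proxy for verification, not a hallucination
    detector: flagged sentences go to a human for review.
    """
    source_words = set(re.findall(r"[a-z']+", source.lower()))
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", summary.strip()):
        words = set(re.findall(r"[a-z']+", sentence.lower()))
        key_terms = {w for w in words if len(w) > 3}  # skip short filler words
        if key_terms and not key_terms & source_words:
            flagged.append(sentence)
    return flagged
```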

Lack of Truly Novel Ideas

AI excels at remixing known patterns. And while that can be incredibly powerful — sparking ideas faster, combining perspectives from across disciplines, and helping us push past creative blocks — it cannot generate truly novel ideas, goals, or strategies. In fact, despite worries to the contrary, demand for creative freelancers seems to be growing.

As more people rely on the same large language models trained on similar data, outputs are starting to sound the same. While this may work for writing generic emails or summarizing information, it is insufficient on its own for developing standout brands or breakthrough product ideas. AI can accelerate the innovation process, but it must be combined with human creativity to produce something materially new.

Missing Emotional Intelligence

AI can mimic empathy, but it lacks true emotional awareness. It also cannot read subtle human cues. And in settings like healthcare, therapy, or even customer support, that distinction matters.

In August, OpenAI was sued by the parents of a 16-year-old who died by suicide. While ChatGPT did express empathy, it failed to recognize signs of a crisis and provided harmful responses. This isn’t an isolated incident. A growing number of high-profile cases show AI chatbots being blamed for encouraging self-harm or failing to stop it.

Additionally, emotional nuance, such as detecting sarcasm, subtle body language, or a moment of hesitation, is still far beyond AI’s reach. And when the tone is off, too cold or too detached, users disengage, as the backlash to the GPT-5 release showed. Emotional intelligence isn’t something you can scale with more data or GPUs. But that doesn’t mean AI lacks a role. Used thoughtfully, it can support empathy at scale: triaging conversations (sketched below), suggesting language that helps humans respond with more care, and freeing people to focus on the interactions that require genuine human connection.
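
As a rough illustration of that triage idea, the sketch below escalates a conversation to a person before any model reply goes out whenever possible crisis signals appear. The keyword list is a stand-in; a real deployment would rely on trained classifiers, richer context, and clinically reviewed protocols:

```python
# Hypothetical crisis signals; a stand-in for a trained classifier.
CRISIS_SIGNALS = ("hurt myself", "end my life", "suicide", "no way out")

def route_conversation(user_message: str, model_reply: str) -> dict:
    """Escalate possible-crisis conversations instead of auto-replying."""
    text = user_message.lower()
    if any(signal in text for signal in CRISIS_SIGNALS):
        # Never let the model answer alone when a crisis is possible;
        # the draft is passed along to help the human respond faster.
        return {"action": "escalate_to_human", "draft": model_reply}
    return {"action": "send", "reply": model_reply}
```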

No General Intelligence

Today’s AI is highly capable in specific tasks but unable to transfer knowledge across domains or apply common sense. For example, an AI model trained to generate images cannot create a website. With Artificial General Intelligence (AGI), AI systems would be able to solve complex problems in contexts they were never explicitly trained on, but that remains a long-term goal, not a present reality. Even the most advanced models still lack the flexible, contextual understanding that humans bring to everyday problem-solving.

Many debate how close we are to true AGI. But whether it’s a few years away, decades away, or never arrives, we can still unlock enormous value by combining today’s AI with human adaptability and judgment.

Why These Limitations Matter

These limitations are important because they shape how and where AI can be trusted and used. A hallucination in a movie summary is harmless. The same error in a medical report, a financial document, or a legal filing could be disastrous.

Trust and Adoption

Trust in AI is not improving as much as many hoped, and it is fragmented by geography, demographics, and use case. Frequent hallucinations and inconsistencies erode enterprise confidence in critical domains, and companies may hesitate to automate decision-making in high-stakes areas like healthcare and finance. However, with strong oversight and clear transparency tools, organizations can adopt AI in ways that balance efficiency with safety.

The Human Advantage

The most AI-proof skills of 2025 are the most human. Skills like critical thinking, adaptability, creativity, and social fluency are increasingly valuable. Companies that invest in these capabilities will be more resilient than those purely looking to automate.

Effective Design

AI works best when framed as a tool that augments human insight, rather than replaces it. Designing for imperfection means acknowledging that AI can be wrong and building systems that reflect that. Transparency tools are also needed to expose how AI arrived at a decision.

The most resilient systems keep humans in the loop, set boundaries that fit the domain and risk level, and treat explainability as part of the user experience. Effective design isn’t just about reducing mistakes. It’s about making them visible, manageable, and recoverable.
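
A minimal sketch of that pattern, assuming the system can attach a confidence score and a rationale to each output (both assumptions; not every model exposes them), might look like this:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    output: str
    confidence: float  # assumed to come from the model or a verifier
    rationale: str     # explanation surfaced to the user, not hidden

# Illustrative bar; a medical or legal workflow would set it higher
# and route far more to review.
REVIEW_THRESHOLD = 0.90

def resolve(decision: Decision) -> str:
    """Auto-approve only high-confidence outputs; queue the rest."""
    if decision.confidence >= REVIEW_THRESHOLD:
        # Explainability as part of the user experience: ship the rationale too.
        return f"{decision.output}\n(Why: {decision.rationale})"
    return queue_for_human_review(decision)

def queue_for_human_review(decision: Decision) -> str:
    # Placeholder for a real review queue (ticketing, dashboards).
    return f"PENDING HUMAN REVIEW: {decision.output}"
```

The threshold is where domain and risk level enter the design: lower-stakes uses can tolerate more autonomy, while high-stakes workflows should send far more to people.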

What to Expect Next

The AI landscape in 2025 is evolving, but some things are clear:

  • Hallucinations won’t disappear. They may decline, but the core mechanism remains.
  • More agentic behavior is coming, with AI agents taking action on behalf of users, but with constrained autonomy (a minimal sketch follows this list).
  • AGI hype will grow, especially as new models look smarter. Companies will need to cut through the noise and design around what AI can actually do today.
  • Human plus machine will always win. The most effective systems will blend human oversight with machine efficiency.
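
On the agentic point above, constrained autonomy can start as something as simple as an allow-list: low-risk actions run unattended, and everything else blocks until a person approves. The action names here are hypothetical:

```python
# Hypothetical allow-list of actions an agent may take unattended.
LOW_RISK_ACTIONS = {"search_documents", "summarize", "draft_email"}

def execute(action: str, payload: dict, approved: bool = False) -> dict:
    """Run low-risk actions autonomously; gate everything else."""
    if action not in LOW_RISK_ACTIONS and not approved:
        raise PermissionError(
            f"Action '{action}' requires human approval before execution."
        )
    return run(action, payload)

def run(action: str, payload: dict) -> dict:
    # Placeholder dispatcher; a real agent framework would route this
    # to tools with logging, rate limits, and rollback.
    return {"action": action, "status": "executed", "payload": payload}
```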

We don’t need to wait for perfect AI; we already have extremely powerful tools today. The strongest approaches will accept imperfection, anticipate it, and design for it. Progress won’t come from pretending AI is flawless, or from holding out for a better model, but from building systems that acknowledge and account for its limitations from the start. By doing so, we create solutions that are not only powerful, but also resilient, trustworthy, and deeply human-centered.

About the Author
Sarah Hoffman is Director of Research, AI at AlphaSense, where she explores artificial intelligence trends that will matter most to AlphaSense’s customers. Previously, Sarah was Vice President of AI and ML Research for Fidelity Investments, led FactSet’s ML and Language Technology team, and worked as an Information Technology Analyst at Lehman Brothers. With a career spanning two decades in AI, ML, natural language processing, and other technologies, Sarah’s expertise has been featured in The Wall Street Journal, CNBC, VentureBeat, and on Bloomberg TV. Sarah holds a master’s degree in computer science from Columbia University, with a focus on natural language processing, and a B.B.A. in computer information systems from Baruch College. Sarah is based in New York.
