AI Web Search Risks: How Companies Can Protect Themselves from Data Accuracy Threats

Nikita Shekhawat
November 18, 2025 · 4 min read

More than half of UK adults now rely on AI to search the web. Yet despite this rapid adoption, the data accuracy of many AI tools remains stubbornly low—creating serious risks for businesses that depend on reliable information.

A recent investigation reveals a widening gap between how much users trust AI tools and how accurate these tools actually are. This mismatch introduces new threats to compliance, legal security, financial planning, and overall corporate governance.

The Rise of AI-Driven Search—and the New Shadow IT Problem

According to a September 2025 survey of 4,189 UK adults, nearly one-third of AI users believe these tools are already more important than traditional web search. If employees are comfortable using AI for personal queries, it’s highly likely they’re also using them at work—often without oversight.

Consumer group Which? conducted an investigation showing how this unverified reliance can expose companies to costly mistakes. Around half of users said they trust AI outputs to a “reasonable” or “great” extent, yet the actual accuracy of responses often fails to justify that confidence.

AI Search Accuracy: Popularity Doesn’t Equal Performance

The study evaluated six major tools—ChatGPT, Google Gemini (standard and AI Overviews), Microsoft Copilot, Meta AI, and Perplexity—across 40 questions in finance, law, and consumer rights.

Overall Accuracy Scores

  • Perplexity – 71% (highest)

  • Google Gemini AI Overviews – 70%

  • Google Gemini (standard) – close behind its AI Overviews score

  • Microsoft Copilot – ranked above ChatGPT

  • ChatGPT – 64%

  • Meta AI – 55% (lowest)

Despite widespread usage, ChatGPT ranked second-lowest in accuracy, underscoring that adoption and reliability don’t always align.

Across all platforms, the investigation found frequent misinterpretations, incomplete answers, and subtle errors—many of which could pose significant business risks.

Example: Financial Guidance Errors

When asked how to invest a £25,000 ISA allowance—a deliberate mistake, since the annual ISA allowance is capped at £20,000—ChatGPT and Copilot failed to catch the error and proceeded to offer advice that could lead to HMRC violations.

Gemini, Meta, and Perplexity identified the mistake, but the inconsistency across tools shows why businesses must enforce human review for financial guidance.

Example: Legal Ambiguity and Jurisdiction Errors

AI tools often generalized UK regulations and failed to distinguish between different regional laws (such as Scotland vs England and Wales).
Such errors could mislead legal teams, increase compliance risks, or result in incorrect contract assessments.

Example: Overconfident and Risky Advice

AI tools rarely recommended consulting a professional—even for high-stakes queries.

In one case, Gemini suggested withholding payment during a dispute with a builder—advice that legal experts say could breach a contract and weaken a user’s position.

This type of overconfidence can lead employees to make premature or legally damaging decisions.

Opaque Sourcing Creates Additional Governance Risks

For enterprises, knowing where information comes from is essential. The investigation found that AI-generated responses frequently referenced vague, outdated, or low-quality sources, including old forum posts.

In one example involving tax codes, ChatGPT and Perplexity directed users toward paid third-party tax services rather than free, official HMRC resources—an outcome that could inflate company spending or violate procurement policy.

Major AI vendors acknowledge the issue. Microsoft notes that Copilot “distills information from multiple sources” and must be manually verified. OpenAI points to industry-wide efforts to improve accuracy, noting that GPT-5 represents its most advanced model to date.

How Businesses Can Reduce AI-Related Data Risks

Banning AI tools is rarely effective and often pushes their use underground. Instead, organisations should build robust policies that ensure accurate, compliant use of AI-generated information.

1. Require More Specific Prompts

Vague prompts lead to vague—and risky—answers.
Training should emphasise specifying the following (a sketch of one way to enforce this follows the list):

  • Jurisdiction

  • Industry

  • Region

  • Regulatory context
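
A minimal sketch of such a prompt-policy helper is shown below, written in Python. The `PromptContext` class, `build_prompt` function, and field names are assumptions for illustration, not part of any vendor SDK.

```python
# A minimal sketch of a prompt-policy helper. PromptContext and
# build_prompt are illustrative names, not part of any vendor SDK.
from dataclasses import dataclass


@dataclass
class PromptContext:
    jurisdiction: str        # e.g. "England and Wales", not just "the UK"
    industry: str            # e.g. "e-commerce"
    regulatory_context: str  # e.g. "Consumer Rights Act 2015"


def build_prompt(question: str, ctx: PromptContext) -> str:
    """Block vague prompts: every context field must be filled in."""
    for field_name, value in vars(ctx).items():
        if not value.strip():
            raise ValueError(f"Prompt blocked: '{field_name}' must be specified")
    return (
        f"Jurisdiction: {ctx.jurisdiction}\n"
        f"Industry: {ctx.industry}\n"
        f"Regulatory context: {ctx.regulatory_context}\n\n"
        f"Question: {question}"
    )


print(build_prompt(
    "What are a customer's cancellation rights for an online purchase?",
    PromptContext("England and Wales", "e-commerce", "Consumer Rights Act 2015"),
))
```

Refusing to send an under-specified prompt at all is a deliberate design choice: it forces the jurisdiction question to be answered before the AI ever sees the query, rather than after a wrong answer comes back.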

2. Mandate Source Verification

Employees should never rely on a single AI-generated answer for critical business decisions.
Companies should (one possible cross-checking workflow is sketched after this list):

  • Require viewing cited sources

  • Encourage cross-checking across multiple AI tools

  • Use AI tools that provide transparent link previews (like Gemini AI Overviews)
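
The sketch below illustrates one way to structure that cross-check, assuming each tool is wrapped in a callable that returns an answer plus its cited source URLs. The wrapper functions, demo answers, and result fields are hypothetical placeholders, not real vendor APIs.

```python
# A minimal sketch of a cross-checking step. The tool wrappers and the
# result fields are illustrative assumptions, not real vendor APIs.
from typing import Callable

AskFn = Callable[[str], tuple[str, list[str]]]  # question -> (answer, source URLs)


def cross_check(question: str, tools: dict[str, AskFn]) -> dict[str, dict]:
    """Ask every tool the same question and flag unverifiable answers."""
    results = {}
    for name, ask in tools.items():
        answer, sources = ask(question)
        results[name] = {
            "answer": answer,
            "sources": sources,
            # An answer that cites nothing cannot be verified and needs review.
            "needs_review": not sources,
        }
    return results


# Hypothetical stand-ins for real tool integrations:
demo_tools: dict[str, AskFn] = {
    "tool_a": lambda q: ("You can reclaim overpaid tax via HMRC.", ["https://www.gov.uk"]),
    "tool_b": lambda q: ("Use a paid tax-refund service.", []),  # no sources: flagged
}

for name, result in cross_check("How do I reclaim overpaid tax?", demo_tools).items():
    print(name, "needs review:", result["needs_review"])
```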

3. Build a “Second Opinion” Workflow

AI output should be treated as a first draft, not a final verdict.
For high-risk areas—finance, legal, compliance—professional human review must remain mandatory.
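
One way to operationalise this is a routing gate that holds AI output on high-risk topics for human sign-off instead of releasing it directly. The topic keywords and review queue below are illustrative assumptions.

```python
# A minimal sketch of a "second opinion" gate: AI output touching high-risk
# topics is queued for professional review instead of being used directly.
# The topic keywords and the review queue are illustrative assumptions.
from typing import Optional

HIGH_RISK_TOPICS = {"tax", "isa", "contract", "compliance", "dispute", "payment"}
review_queue: list[dict] = []


def gate(question: str, ai_answer: str) -> Optional[str]:
    """Return the answer only for low-risk topics; queue the rest for review."""
    if any(topic in question.lower() for topic in HIGH_RISK_TOPICS):
        review_queue.append({"question": question, "draft": ai_answer})
        return None  # caller must wait for legal/finance sign-off
    return ai_answer


print(gate("Can we withhold payment during a builder dispute?", "Withhold it."))
print(review_queue)  # the draft now sits with a human reviewer
```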

AI Accuracy Is Improving, but Business Risk Remains High

GenAI tools are advancing quickly, and their web-search accuracy will continue to improve. But today, relying on them without verification can lead to compliance failures, legal exposure, or financial losses.

The difference between AI-driven efficiency and AI-driven risk lies in rigorous human oversight.

Nikita Shekhawat

Nikita is a digital marketing expert with a strong background in SEO, content strategy, and performance marketing. She specializes in driving brand growth through data-driven campaigns, social media optimization, and AI-powered marketing strategies. With a passion for innovation, Nikita helps businesses enhance their online presence, attract the right audience, and achieve measurable results.
