AI Web Search Risks: How Companies Can Protect Themselves from Data Accuracy Threats

Nikita Shekhawat

Junior SEO Specialist

November 18, 2025 · 4 min read

More than half of people now rely on AI to search the web. Yet despite this rapid adoption, the data accuracy of many AI tools remains stubbornly low—creating serious risks for businesses that depend on reliable information.

A recent investigation reveals a widening gap between how much users trust AI tools and how accurate these tools actually are. This mismatch introduces new threats to compliance, legal security, financial planning, and overall corporate governance.

The Rise of AI-Driven Search—and the New Shadow IT Problem

According to a September 2025 survey of 4,189 UK adults, nearly one-third of AI users believe these tools are already more important than traditional web search. If employees are comfortable using AI for personal queries, it’s highly likely they’re also using them at work—often without oversight.

Consumer group Which? conducted an investigation showing how this unverified reliance can expose companies to costly mistakes. Around half of users said they trust AI outputs to a “reasonable” or “great” extent, yet the actual accuracy of responses often fails to justify that confidence.

AI Search Accuracy: Popularity Doesn’t Equal Performance

The study evaluated six major tools—ChatGPT, Google Gemini (standard and AI Overviews), Microsoft Copilot, Meta AI, and Perplexity—across 40 questions in finance, law, and consumer rights.

Overall Accuracy Scores

  • Perplexity – 71% (highest)

  • Google Gemini AI Overviews – 70%

  • Google Gemini (standard) – close behind its AI Overviews score

  • ChatGPT – 64%

  • Meta AI – 55% (lowest)

Despite widespread usage, ChatGPT ranked second-lowest in accuracy, underscoring that adoption and reliability don’t always align.

Across all platforms, the investigation found frequent misinterpretations, incomplete answers, and subtle errors—many of which could pose significant business risks.

Example: Financial Guidance Errors

When asked how to invest a £25,000 ISA allowance (a deliberate error, since the annual ISA allowance is capped at £20,000), ChatGPT and Copilot failed to catch the mistake and proceeded to offer advice that could lead to HMRC violations.

Gemini, Meta, and Perplexity identified the mistake, but the inconsistency across tools shows why businesses must enforce human review for financial guidance.

Example: Legal Ambiguity and Jurisdiction Errors

AI tools often generalized UK regulations and failed to distinguish between different regional laws (such as Scotland vs England and Wales).
Such errors could mislead legal teams, increase compliance risks, or result in incorrect contract assessments.

Example: Overconfident and Risky Advice

AI tools rarely recommended consulting a professional—even for high-stakes queries.

In one case, Gemini suggested withholding payment during a dispute with a builder—advice that legal experts say could breach a contract and weaken a user’s position.

This type of overconfidence can lead employees to make premature or legally damaging decisions.

Opaque Sourcing Creates Additional Governance Risks

For enterprises, knowing where information comes from is essential. The investigation found that AI-generated responses frequently referenced vague, outdated, or low-quality sources, including old forum posts.

In one example involving tax codes, ChatGPT and Perplexity directed users toward paid third-party tax services rather than free, official HMRC resources—an outcome that could inflate company spending or violate procurement policy.

Major AI vendors acknowledge the issue. Microsoft notes that Copilot “distills information from multiple sources” and must be manually verified. OpenAI emphasized industry-wide efforts to improve accuracy, noting that GPT-5 represents their most advanced model to date.

How Businesses Can Reduce AI-Related Data Risks

Banning AI tools is rarely effective and often pushes their use underground. Instead, organisations should build robust policies that ensure accurate, compliant use of AI-generated information.

1. Require More Specific Prompts

Vague prompts lead to vague—and risky—answers.
Training should emphasise specifying:

  • Jurisdiction

  • Industry

  • Region

  • Regulatory context
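The four context fields above can be baked into a reusable prompt template so employees cannot accidentally omit them. This is a minimal illustrative sketch, not a tool from the article; the function and field names are assumptions:

```python
# Minimal sketch: prefix every AI query with the context the article
# recommends, so the tool cannot silently generalise across jurisdictions.
# All names here are illustrative assumptions, not a real product API.
def build_prompt(question: str, jurisdiction: str, industry: str,
                 region: str, regulatory_context: str) -> str:
    """Return a question pinned to an explicit legal and regulatory context."""
    return (
        f"Jurisdiction: {jurisdiction}\n"
        f"Industry: {industry}\n"
        f"Region: {region}\n"
        f"Regulatory context: {regulatory_context}\n"
        f"Question: {question}"
    )

prompt = build_prompt(
    question="What are the cooling-off rules for consumer contracts?",
    jurisdiction="England and Wales",
    industry="Retail",
    region="UK",
    regulatory_context="Consumer Contracts Regulations 2013",
)
```

A template like this also gives reviewers an audit trail: the context the employee supplied is visible alongside the answer they received.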

2. Mandate Source Verification

Employees should never rely on a single AI-generated answer for critical business decisions.
Companies should:

  • Require viewing cited sources

  • Encourage cross-checking across multiple AI tools

  • Use AI tools that provide transparent link previews (like Gemini AI Overviews)
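The cross-checking rule can be expressed as a simple gate: an answer is only usable without escalation when at least two tools cite sources and those sourced tools agree. This is a sketch under assumed data shapes, not part of the original investigation:

```python
# Minimal sketch of the verification rule above. The data shape
# (tool name -> {'text', 'sources'}) is an illustrative assumption.
def needs_human_review(answers: dict) -> bool:
    """True when an AI answer should not be used without human review:
    fewer than two tools cited any source, or the sourced tools disagree."""
    sourced = {name for name, a in answers.items() if a["sources"]}
    if len(sourced) < 2:
        return True  # not enough independently sourced answers
    texts = {answers[name]["text"] for name in sourced}
    return len(texts) > 1  # sourced tools disagree

answers = {
    "tool_a": {"text": "The cap is £20,000", "sources": ["gov.uk"]},
    "tool_b": {"text": "The cap is £20,000", "sources": ["gov.uk/hmrc"]},
}
```

Here both tools cite sources and agree, so the gate passes; a single unsourced answer, or two sourced answers that conflict, would be flagged for review.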

3. Build a “Second Opinion” Workflow

AI output should be treated as a first draft, not a final verdict.
For high-risk areas—finance, legal, compliance—professional human review must remain mandatory.
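The routing rule is simple enough to encode directly: drafts touching high-risk domains always go to a human reviewer, everything else to a lighter self-check. The domain list and function are illustrative assumptions, not a prescribed implementation:

```python
# Minimal sketch of the "second opinion" routing rule. The domain set
# mirrors the high-risk areas named in the article; everything else
# about this function is an illustrative assumption.
HIGH_RISK_DOMAINS = {"finance", "legal", "compliance"}

def route_draft(domain: str) -> str:
    """Decide how an AI-generated draft is reviewed before use."""
    if domain.lower() in HIGH_RISK_DOMAINS:
        return "human_review"  # professional sign-off is mandatory
    return "self_check"        # author verifies sources themselves
```

The point of keeping the rule this blunt is that it fails safe: if a domain is ambiguous, classifying it as high-risk costs a review cycle, while misclassifying it the other way can cost a compliance failure.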

AI Accuracy Is Improving, but Business Risk Remains High

GenAI tools are advancing quickly, and their web-search accuracy will continue to improve. But today, relying on them without verification can lead to compliance failures, legal exposure, or financial losses.

The difference between AI-driven efficiency and AI-driven risk lies in rigorous human oversight.

Nikita Shekhawat

Junior SEO Specialist

Nikita Shekhawat is a junior SEO specialist focused on off-page SEO and link-building initiatives that support organic visibility. Her work involves managing outreach, guest collaborations, and contextual link placements across technology-focused domains. She takes a practical, execution-oriented approach to improving authority and discoverability through consistent, relationship-driven SEO efforts.
