When accurate AI is still dangerously incomplete

Published: February 18, 2026 at 11:00 AM EST
3 min read

Source: VentureBeat

Introduction

Typically, when building, training, and deploying AI, enterprises prioritize accuracy. Accuracy is, no doubt, important, but in highly complex, nuanced industries like law, it isn't enough on its own. Higher stakes mean higher standards: model outputs must be assessed for relevancy, authority, citation accuracy, and hallucination rates.

“There’s no such [thing] as ‘perfect AI’ because you never get 100% accuracy or 100% relevancy, especially in complex, high‑stake domains like legal,” said Min Chen, LexisNexis SVP and chief AI officer, on the VentureBeat Beyond the Pilot podcast.

The goal is to manage that uncertainty as much as possible and translate it into consistent customer value. “At the end of the day, what matters most for us is the quality of the AI outcome, and that is a continuous journey of experimentation, iteration and improvement,” Chen said.

Getting ‘complete’ answers to multi-faceted questions

To evaluate models and their outputs, Chen’s team has established more than a half‑dozen “sub‑metrics” to measure “usefulness” based on several factors — authority, citation accuracy, hallucination rates — as well as “comprehensiveness.” This metric evaluates whether a generative AI response fully addresses all aspects of a user’s legal question.

“So it’s not just about relevancy. Completeness speaks directly to legal reliability,” Chen explained.

  • Example: A user asks a question that requires an answer covering five distinct legal considerations. A generative AI may accurately address three of these. While relevant, the partial answer is incomplete and, from a user perspective, insufficient. This can be misleading and pose real‑life risks.
  • Citations: Citations may be semantically relevant to a user’s question, but they might point to arguments or instances that were ultimately overruled in court. “Our lawyers will consider them not citable,” Chen said. “If they’re not citable, they’re not useful.”

Moving beyond standard RAG

LexisNexis launched its flagship generative AI product, Lexis+ AI—a legal AI tool for drafting, research, and analysis—in 2023. It was built on a standard Retrieval‑Augmented Generation (RAG) framework and hybrid vector search that grounds responses in LexisNexis’ trusted, authoritative knowledge base.
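The article doesn't detail how Lexis+ AI implements its RAG pipeline, but the general pattern can be sketched. Below is a minimal, self-contained illustration of hybrid retrieval (blending a semantic similarity score with a keyword-overlap score) feeding a grounded prompt; the corpus, scoring functions, and weighting are all hypothetical stand-ins, not LexisNexis's actual system.

```python
from collections import Counter
import math

# Hypothetical mini-corpus standing in for an authoritative knowledge base.
CORPUS = {
    "doc1": "statute of limitations for breach of contract claims",
    "doc2": "negligence standard of care in tort law",
    "doc3": "breach of contract remedies and damages",
}

def _vec(text):
    # Toy bag-of-words vector; a real system would use learned embeddings.
    return Counter(text.lower().split())

def _cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def _keyword_score(query, text):
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def hybrid_search(query, corpus, alpha=0.5, top_k=2):
    """Blend semantic (cosine) and keyword scores; return top_k doc ids."""
    qv = _vec(query)
    scored = sorted(
        ((alpha * _cosine(qv, _vec(text))
          + (1 - alpha) * _keyword_score(query, text), doc_id)
         for doc_id, text in corpus.items()),
        reverse=True,
    )
    return [doc_id for _, doc_id in scored[:top_k]]

def build_grounded_prompt(query, corpus):
    """RAG step: retrieve passages, then ground the model prompt in them."""
    hits = hybrid_search(query, corpus)
    context = "\n".join(f"[{d}] {corpus[d]}" for d in hits)
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"
```

The key design point is that the model is asked to answer only from retrieved, trusted passages rather than from its parametric memory, which is what grounds responses in the knowledge base.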

The company then released its personal legal assistant, Protégé, in 2024. This agent incorporates a knowledge‑graph layer on top of vector search to overcome a “key limitation” of pure semantic search. Although “very good” at retrieving contextually relevant content, semantic search “doesn’t always guarantee authoritative answers,” Chen noted.

Process

  1. Initial semantic search returns what it deems relevant content.
  2. Chen’s team traverses those returns across a “point of law” graph to further filter the most highly authoritative documents.
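The two-step process above can be sketched as a post-retrieval filter: semantic hits are checked against a graph of legal authority, and anything overruled (hence not citable) is dropped. The graph structure, field names, and threshold below are hypothetical illustrations, not the actual "point of law" graph.

```python
# Hypothetical "point of law" graph: each node carries an authority signal
# and a flag for whether the holding was later overruled in court.
GRAPH = {
    "case_a": {"cites": ["case_b"], "authority": 0.9, "overruled": False},
    "case_b": {"cites": [], "authority": 0.7, "overruled": True},
    "case_c": {"cites": ["case_a"], "authority": 0.6, "overruled": False},
}

def filter_by_authority(semantic_hits, graph, min_authority=0.5):
    """Step 2: keep only hits that are still good law, ranked by authority.

    semantic_hits: doc ids returned by the initial semantic search (step 1).
    """
    citable = [
        doc for doc in semantic_hits
        if doc in graph
        and not graph[doc]["overruled"]
        and graph[doc]["authority"] >= min_authority
    ]
    return sorted(citable, key=lambda d: graph[d]["authority"], reverse=True)
```

This mirrors the distinction Chen draws: a result can be semantically relevant (it survives step 1) yet still be excluded because it is no longer citable.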

Beyond this, the team is developing agentic graphs and accelerating automation so agents can plan and execute complex multi‑step tasks. Examples include:

  • Planner agents for research Q&A that break user questions into multiple sub‑questions. Human users can review and edit these to refine and personalize final answers.
  • Reflection agents for transactional document drafting that automatically and dynamically critique an initial draft, then incorporate feedback and refine the document in real time.
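The two agent patterns above (plan, then critique-and-refine) can be sketched generically. The stub `decompose`, `critique`, and `revise` functions below stand in for LLM calls and are purely illustrative; the real planner and reflection agents are not described at this level of detail in the article.

```python
def plan_subquestions(question, decompose):
    """Planner step: break a question into sub-questions a human can review
    and edit before the final answer is assembled."""
    return decompose(question)

def naive_decompose(question):
    # Stub decomposer standing in for an LLM planner call.
    return [part.strip().rstrip("?") + "?"
            for part in question.split(" and ")]

def reflect_and_refine(draft, critique, revise, max_rounds=3):
    """Reflection loop: critique the draft, apply feedback, repeat until the
    critic has no further feedback or the round budget is exhausted."""
    for _ in range(max_rounds):
        feedback = critique(draft)
        if not feedback:
            break
        draft = revise(draft, feedback)
    return draft

# Demo stubs standing in for an LLM critic and reviser.
def demo_critique(draft):
    return "fill in placeholder" if "[TBD]" in draft else ""

def demo_revise(draft, feedback):
    return draft.replace("[TBD]", "the parties agree")
```

The planner output being a plain list is what makes the human-in-the-loop step possible: users can reorder, edit, or delete sub-questions before execution.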

Chen emphasizes that these advances are not meant to replace humans. “Human experts and AI agents can learn, reason, and grow together. I see the future as a deeper collaboration between humans and AI.”

Podcast topics

  • How LexisNexis’ acquisition of Henchman helped ground AI models with proprietary LexisNexis data and customer data
  • The difference between deterministic and non‑deterministic evaluation
  • Why enterprises should identify KPIs and definitions of success before rushing to experimentation
  • The importance of focusing on a “triangle” of key components: cost, speed, and quality
