Detect and Mitigate Ungrounded Model Outputs
Published Mar 28, 2024
Microsoft

Today, we are pleased to announce “Groundedness Detection,” alongside other advancements and features now available in Azure AI.

 

Ungrounded model outputs are consistently cited as a top risk to broad deployments of Copilots and other Generative AI-powered applications, particularly in high-stakes domains such as law, medicine and finance.   

 

Ungroundedness, otherwise known as hallucination, refers to model outputs that are plausible yet unsupported. While research shows that ungrounded output is an inherent feature of Generative AI models, it can be significantly reduced through continuous monitoring and mitigation. That, however, requires a way to detect ungrounded outputs at a scale far beyond what manual checks allow. Today, Azure AI makes this possible for the first time with groundedness detection.

  

This feature detects ungrounded statements in Generative AI output for applications that rely on grounding documents, such as Q&A Copilots and document summarization applications. When an ungrounded claim is detected, customers can take one of several mitigation steps (a minimal request sketch follows the list):

  • Test their AI implementation pre-deployment against groundedness metrics,   
  • Highlight ungrounded statements for internal users, triggering fact checks or mitigations such as metaprompt improvements or knowledge base editing,  
  • Trigger a rewrite of ungrounded statements before returning the completion to the end user, or   
  • When generating synthetic data, evaluate the groundedness of synthetic training data before using it to fine-tune their language model.  
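
To make the detection step concrete, here is a minimal sketch of calling the Azure AI Content Safety groundedness detection REST API from Python for a Q&A scenario. The endpoint path, API version, and request/response field names shown here reflect the public preview at the time of writing and may differ by region or API version; treat them as assumptions to verify against the current API reference.

```python
import requests

# Assumed values: replace with your Content Safety resource endpoint and key.
ENDPOINT = "https://<your-content-safety-resource>.cognitiveservices.azure.com"
API_KEY = "<your-key>"

def detect_ungrounded_claims(query: str, completion: str, sources: list[str]) -> dict:
    """Ask the groundedness detection API whether `completion` is supported
    by `sources` for the given user `query` (Q&A task)."""
    url = (
        f"{ENDPOINT}/contentsafety/text:detectGroundedness"
        "?api-version=2024-02-15-preview"  # preview version; confirm the current value
    )
    payload = {
        "domain": "Generic",
        "task": "QnA",
        "qna": {"query": query},
        "text": completion,            # the model output to check
        "groundingSources": sources,   # the documents the output should be grounded in
    }
    headers = {
        "Ocp-Apim-Subscription-Key": API_KEY,
        "Content-Type": "application/json",
    }
    resp = requests.post(url, json=payload, headers=headers, timeout=30)
    resp.raise_for_status()
    return resp.json()

result = detect_ungrounded_claims(
    query="When was the contract signed?",
    completion="The contract was signed on June 1, 2021 in Paris.",
    sources=["The agreement was executed on June 1, 2021."],
)

# Assumed response shape: ungroundedDetected, ungroundedPercentage, and
# ungroundedDetails listing the unsupported spans.
if result.get("ungroundedDetected"):
    for detail in result.get("ungroundedDetails", []):
        print("Ungrounded span:", detail.get("text"))
```

An application could route any flagged spans into the mitigations above, for example highlighting them for reviewers or prompting the model to rewrite the statement before the completion reaches the end user.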

How does Groundedness Detection work?  

Previously, some Generative AI applications would chain an extra request to an LLM, asking whether a completion was grounded relative to a grounding document. This ad hoc approach has yielded insufficient recall of ungrounded claims to adequately de-risk Generative AI applications.
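
For context, that ad hoc pattern looks roughly like the sketch below, which assumes an OpenAI-style chat client; the prompt wording and model name are illustrative only, not a recommended implementation.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; any chat-capable model works

def llm_judge_grounded(completion: str, source: str) -> str:
    """Ad hoc check: ask a general-purpose LLM whether the completion is
    supported by the source document. Recall on ungrounded claims is often poor."""
    prompt = (
        "Source document:\n"
        f"{source}\n\n"
        "Candidate answer:\n"
        f"{completion}\n\n"
        "Is every claim in the candidate answer supported by the source document? "
        "Reply with GROUNDED or UNGROUNDED."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content.strip()
```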

Azure AI’s groundedness detection feature is built from the ground up to detect ungrounded claims accurately. We built a custom language model fine-tuned for a natural language processing task called Natural Language Inference (NLI), which evaluates whether a claim is entailed by, refuted by, or neutral with respect to a source document.
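
To illustrate the underlying NLI task (not Azure's proprietary model), the sketch below runs a publicly available NLI checkpoint from Hugging Face against a source sentence and a claim; the checkpoint name and its label set are assumptions about that particular public model, and Azure's groundedness detection model is a separate, fine-tuned model that is not available as an open checkpoint.

```python
from transformers import pipeline

# Illustrative public NLI checkpoint; labels are typically
# ENTAILMENT / NEUTRAL / CONTRADICTION for this model family.
nli = pipeline("text-classification", model="roberta-large-mnli")

source = "The agreement was executed on June 1, 2021."       # premise (grounding document)
claim = "The contract was signed on June 1, 2021 in Paris."  # hypothesis (claim from the model output)

# Score whether the source entails, contradicts, or is neutral toward the claim.
# An ENTAILMENT label suggests the claim is grounded; here the unsupported
# "in Paris" detail should keep it from being entailed.
result = nli({"text": source, "text_pair": claim})
print(result)
```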

Azure AI Content Safety’s groundedness detection model will keep improving as Microsoft continues to push the envelope of Responsible AI innovation.

 

 
