
Beyond the Output: Analyzing Hallucinations, Bias, and Evaluation in Large Language Models

Praveen Krishna Murthy
3 min read · Dec 22, 2024

The best solutions are yet to come.

Introduction

Artificial Intelligence (AI) has reached an unprecedented inflection point this year, transcending its origins in research labs to dominate global conversations. From boardrooms to dinner tables, AI is now a central focus, with over $60 billion in venture capital flowing into the sector, surpassing sectors such as healthcare and consumer. Yet beneath the excitement lies a critical need to address the imperfections in AI systems, particularly hallucinations, biases, and evaluation frameworks in large language models (LLMs).

Understanding Hallucinations and Bias in LLMs

LLMs are trained on vast datasets drawn from the internet, including encyclopedias, articles, and books. While this diversity is a strength, it also introduces inaccuracies and societal biases inherent in the data.
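To make the idea of data-inherited bias a bit more concrete, here is a small toy sketch (my own illustration, not something from the article or a real audit pipeline): it simply counts how often gendered pronouns co-occur with occupation words in a handful of sentences. Skews like these in web-scale training text are exactly what a model can absorb and later reproduce.

```python
# Toy bias probe: count gendered-word / occupation co-occurrences in a corpus.
# Purely illustrative; real audits use large corpora and statistical tests.
from collections import Counter
from itertools import product

GENDERED = {"he", "she"}
OCCUPATIONS = {"doctor", "nurse", "engineer"}

def cooccurrence_counts(sentences: list[str]) -> Counter:
    """Count (gendered word, occupation) pairs appearing in the same sentence."""
    counts = Counter()
    for sentence in sentences:
        words = set(sentence.lower().split())
        for g, occ in product(GENDERED & words, OCCUPATIONS & words):
            counts[(g, occ)] += 1
    return counts

if __name__ == "__main__":
    corpus = [
        "He is a doctor at the clinic",
        "She works as a nurse on nights",
        "He became an engineer last year",
        "She is a nurse in the ward",
    ]
    for pair, n in cooccurrence_counts(corpus).most_common():
        print(pair, n)
    # Skewed counts such as ('she', 'nurse'): 2 are the kind of association
    # a model can pick up from its training data.
```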

Hallucinations
Hallucinations in LLMs occur when models generate text that is factually incorrect or completely fabricated. These errors arise from several factors:

  • Data Gaps: Models often lack the knowledge to answer niche or domain-specific questions due to incomplete training data.
  • Fiction in Training Data: Fictional or opinion-based content can distort the accuracy of outputs.
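As a rough illustration of how such hallucinations can be surfaced, here is a minimal sketch (the function names and the lexical-overlap heuristic are my own assumptions, not the article's method; real systems typically rely on retrieval or entailment models instead): it splits a model's answer into sentences and flags any sentence whose vocabulary has little overlap with the reference passages it is supposed to be grounded in.

```python
# Minimal grounding check: flag answer sentences with weak lexical support
# in the reference documents (a rough proxy for possible hallucination).
import re

def _tokens(text: str) -> set[str]:
    """Lowercase word tokens, ignoring punctuation."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def flag_unsupported_sentences(answer: str, references: list[str],
                               min_overlap: float = 0.5) -> list[str]:
    """Return sentences from `answer` whose word overlap with the references is low."""
    ref_vocab = set().union(*(_tokens(r) for r in references)) if references else set()
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        words = _tokens(sentence)
        if not words:
            continue
        overlap = len(words & ref_vocab) / len(words)
        if overlap < min_overlap:  # weak support -> possible hallucination
            flagged.append(sentence)
    return flagged

if __name__ == "__main__":
    refs = ["The Eiffel Tower was completed in 1889 and stands in Paris."]
    answer = ("The Eiffel Tower was completed in 1889. "
              "It was designed by Leonardo da Vinci as a royal observatory.")
    print(flag_unsupported_sentences(answer, refs))
    # -> ['It was designed by Leonardo da Vinci as a royal observatory.']
```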

Written by Praveen Krishna Murthy

ML fanatic | Book lover | Coffee | Learning from Chaos
