RAG Playground Documentation

Official guide for using and understanding the RAG Playground platform.

What is RAG Playground?

RAG Playground is an interactive platform to compare and analyze different Retrieval-Augmented Generation (RAG) architectures on your own PDF documents. It enables you to upload documents, ask questions, and see how various RAG pipelines retrieve and generate answers, with full transparency into sources, metrics, and performance.

  • Upload and process PDF documents (text-based, up to 10MB)
  • Choose from multiple RAG architectures for querying
  • Compare answers, sources, and analytics side-by-side
  • View detailed performance and confidence metrics

Supported RAG Architectures

SimpleRAG

Performs fast vector similarity search using embeddings. Best for direct factual queries and quick lookups.

  • Uses a vector database (FAISS) for retrieval
  • Low latency, high throughput
  • May miss nuanced or multi-part queries
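
A minimal sketch of this retrieval step in Python, assuming the sentence-transformers and faiss-cpu packages; the sample chunks are hypothetical, and the code illustrates the idea rather than the Playground's exact implementation:

    import faiss
    from sentence_transformers import SentenceTransformer

    # Hypothetical chunks produced by the document-chunking step.
    chunks = [
        "The warranty covers manufacturing defects for two years.",
        "Returns are accepted within 30 days of purchase.",
        "Shipping is free on orders over $50.",
    ]

    # Embed all chunks once; normalized vectors make inner product = cosine similarity.
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(chunks, normalize_embeddings=True)

    index = faiss.IndexFlatIP(int(embeddings.shape[1]))
    index.add(embeddings)

    # Embed the query the same way and fetch the two most similar chunks.
    query_vec = model.encode(["How long is the warranty?"], normalize_embeddings=True)
    scores, ids = index.search(query_vec, 2)
    for score, i in zip(scores[0], ids[0]):
        print(f"{score:.3f}  {chunks[i]}")

Because the embeddings are normalized, the inner-product index behaves as cosine-similarity search, which is what keeps this architecture fast.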

HybridRAG

Combines semantic vector search and keyword-based BM25 retrieval for improved accuracy and recall.

  • Retrieves with both vector and keyword search
  • Balances speed and accuracy
  • Helps reduce hallucinations and broadens context coverage
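
A minimal sketch of the hybrid idea, assuming the rank_bm25 package for keyword scoring; the reciprocal rank fusion (RRF) step is an illustrative assumption, since the Playground's exact fusion method is not documented here:

    import faiss
    from rank_bm25 import BM25Okapi
    from sentence_transformers import SentenceTransformer

    chunks = [
        "The warranty covers manufacturing defects for two years.",
        "Returns are accepted within 30 days of purchase.",
        "Shipping is free on orders over $50.",
    ]

    # Keyword side: BM25 over whitespace-tokenized chunks.
    bm25 = BM25Okapi([c.lower().split() for c in chunks])

    # Semantic side: normalized embeddings in a FAISS index.
    model = SentenceTransformer("all-MiniLM-L6-v2")
    emb = model.encode(chunks, normalize_embeddings=True)
    index = faiss.IndexFlatIP(int(emb.shape[1]))
    index.add(emb)

    def hybrid_search(query, k=2):
        # Rank every chunk by BM25 score and, separately, by vector similarity.
        kw_scores = bm25.get_scores(query.lower().split())
        bm25_rank = sorted(range(len(chunks)), key=lambda i: -kw_scores[i])
        _, ids = index.search(model.encode([query], normalize_embeddings=True), len(chunks))
        vec_rank = [int(i) for i in ids[0]]
        # Reciprocal rank fusion: a chunk scores well if either list ranks it highly.
        fused = {i: 1.0 / (60 + bm25_rank.index(i)) + 1.0 / (60 + vec_rank.index(i))
                 for i in range(len(chunks))}
        return [chunks[i] for i in sorted(fused, key=fused.get, reverse=True)[:k]]

    print(hybrid_search("how long is the warranty"))

RRF rewards chunks that rank highly in either list, so exact keyword matches and semantically similar passages both surface.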

ReRankerRAG

Uses a cross-encoder to re-rank retrieved passages for maximum answer relevance and precision.

  • Reranks top retrieved chunks using a cross-encoder
  • Best for analytical or multi-context questions
  • Higher computational cost, but typically the most accurate of the three architectures
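
A minimal sketch of the reranking step, assuming the sentence-transformers CrossEncoder class; the specific cross-encoder model named below is an illustrative choice, not necessarily the one the Playground uses:

    from sentence_transformers import CrossEncoder

    query = "How long is the warranty?"
    candidates = [  # e.g. the top chunks returned by a first-stage retriever
        "Returns are accepted within 30 days of purchase.",
        "The warranty covers manufacturing defects for two years.",
    ]

    # Score each (query, chunk) pair jointly with a cross-encoder.
    reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    scores = reranker.predict([(query, c) for c in candidates])

    # Keep the highest-scoring chunks as context for generation.
    for score, chunk in sorted(zip(scores, candidates), key=lambda p: -p[0]):
        print(f"{score:.2f}  {chunk}")

Unlike a bi-encoder, the cross-encoder reads the query and each chunk together, which is why it is slower but more precise.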

How to Use

  1. Upload a text-based PDF document (max 10MB; an optional pre-upload check is sketched below).
  2. Select one or more RAG architectures to compare.
  3. Type your question in the query box.
  4. Click Generate Response.
  5. Review answers, sources, and analytics for each architecture.

Tip: For best results, ask clear, specific questions.
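
If you want to verify a file before uploading, here is an optional local check, assuming the pypdf package; the filename is hypothetical and this script is not part of the Playground itself:

    import os
    from pypdf import PdfReader

    def check_pdf(path, max_bytes=10 * 1024 * 1024):
        # Size check: the Playground accepts PDFs up to 10MB.
        if os.path.getsize(path) > max_bytes:
            return "Too large: the Playground accepts PDFs up to 10MB."
        # Text check: scanned/image-only PDFs yield no extractable text.
        reader = PdfReader(path)
        text = "".join(page.extract_text() or "" for page in reader.pages)
        return "OK" if text.strip() else "No extractable text: scanned PDFs are not supported."

    print(check_pdf("report.pdf"))  # "report.pdf" is a hypothetical filename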

Metrics & Analytics

  • Processing Time: Time taken for each step (chunking, embedding, retrieval, generation).
  • Context Usage: How much of the retrieved context was used in the answer.
  • Diversity: Measures the uniqueness of retrieved sources (one possible computation is sketched after this list).
  • Confidence Score: Indicates the model's confidence in the answer.
  • Memory Usage: Embedding and chunk statistics for each query.
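
The exact formulas behind these metrics are not documented here; as an illustration, the sketch below computes one plausible diversity score, the average pairwise cosine distance between retrieved-chunk embeddings (an assumption, not necessarily the Playground's formula):

    import numpy as np

    def diversity(embeddings):
        # embeddings: (n, d) array of L2-normalized chunk embeddings.
        sims = embeddings @ embeddings.T                        # pairwise cosine similarities
        off_diag = sims[~np.eye(len(embeddings), dtype=bool)]   # drop self-similarity
        return float(np.mean(1.0 - off_diag))                   # 0 = identical, higher = more diverse

    vecs = np.array([[1.0, 0.0], [0.0, 1.0], [0.6, 0.8]])  # toy normalized vectors
    print(f"Diversity: {diversity(vecs):.2f}")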

Frequently Asked Questions

Q: What types of PDFs are supported?
A: Only text-based PDFs. Scanned/image-only PDFs are not supported.

Q: Can I upload multiple PDFs?
A: Multiple PDF upload is not supported at this time.

Q: What models are used?
A: Llama 3 (70B) for generation, HuggingFace MiniLM for embeddings, BM25 for keyword retrieval, and a cross-encoder for reranking.

Q: Where can I get support?
A: Visit dipakchaudhari.com for help or to contact the author.