
    GenAI RAG with LlamaIndex, Ollama and Elasticsearch

    Posted By: IrGens

    .MP4, AVC, 1920x1080, 30 fps | English, AAC, 2 Ch | 1h 49m | 706 MB
    Instructor: Andreas Kretz

    Local GenAI RAG with LlamaIndex & Ollama

    Retrieval-Augmented Generation (RAG) is the next practical step after semantic indexing and search. In this course, you’ll build a complete, local-first RAG pipeline that ingests PDFs, stores chunked vectors in Elasticsearch, retrieves the right context, and generates grounded answers with the Mistral LLM running locally via Ollama.

    We’ll work end-to-end on a concrete scenario: searching student CVs to answer questions like “Who worked in Ireland?” or “Who has Spark experience?”. You’ll set up a Dockerized stack (FastAPI, Elasticsearch, Kibana, Streamlit, Ollama) and wire it together with LlamaIndex so you can focus on the logic, not boilerplate. Along the way, you’ll learn where RAG shines, where it struggles (precision/recall, hallucinations), and how to design for production.

    By the end, you’ll have a working app: upload PDFs → extract text → produce clean JSON → chunk & embed → index into Elasticsearch → query via Streamlit → generate answers with Mistral, fully reproducible on your machine.

    What will I learn?

    From Search to RAG


    Revisit semantic search and extend it to RAG: retrieve relevant chunks first, then generate grounded answers. See how LlamaIndex connects your data to the LLM and why chunk size and overlap matter for recall and precision.
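
    To make those moving parts concrete, here is a minimal sketch of pointing LlamaIndex at a local Ollama-served Mistral model and a local embedding model, with chunk size and overlap set on the splitter. The embedding model name (nomic-embed-text) and the specific values are illustrative assumptions, not settings taken from the course.

```python
# Minimal sketch: point LlamaIndex at a local Ollama LLM and a local embedding
# model, and control chunking with a SentenceSplitter. Model names and the
# chunk settings below are illustrative, not values from the course.
from llama_index.core import Settings
from llama_index.core.node_parser import SentenceSplitter
from llama_index.embeddings.ollama import OllamaEmbedding
from llama_index.llms.ollama import Ollama

Settings.llm = Ollama(model="mistral", request_timeout=120.0)
Settings.embed_model = OllamaEmbedding(model_name="nomic-embed-text")

# Smaller chunks with some overlap tend to raise precision; larger chunks
# help recall but can dilute the context handed to the LLM.
splitter = SentenceSplitter(chunk_size=512, chunk_overlap=64)
```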

    Building the Pipeline

    Use FastAPI to accept PDF uploads and trigger the ingestion flow: text extraction, JSON shaping, chunking, embeddings, and indexing into Elasticsearch, all orchestrated with LlamaIndex to minimize boilerplate.
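
    As a rough sketch of what that entry point might look like (the /upload route and the ingest_pdf helper are placeholders, not the course's actual code):

```python
# Sketch of a FastAPI entry point: accept a PDF upload and hand the bytes to
# an ingestion function (extract -> JSON -> chunk -> embed -> index).
# The /upload route and ingest_pdf() are illustrative placeholders.
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

def ingest_pdf(filename: str, data: bytes) -> int:
    """Placeholder for the LlamaIndex ingestion flow; returns chunks indexed."""
    raise NotImplementedError

@app.post("/upload")
async def upload(file: UploadFile = File(...)):
    pdf_bytes = await file.read()
    n_chunks = ingest_pdf(file.filename, pdf_bytes)
    return {"filename": file.filename, "chunks_indexed": n_chunks}
```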

    Working with Elasticsearch

    Create an index for CV chunks with vectors and metadata. Understand similarity search vs. keyword queries, how vector fields are stored, and how to explore documents and scores with Kibana.
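
    For illustration only, a mapping along these lines stores each chunk with its text, metadata, and vector; in practice LlamaIndex's ElasticsearchStore can create and manage the index for you. The index name, field names, and 768-dimensional vectors are assumptions and must match your embedding model.

```python
# Illustration of what a CV-chunk index could look like: text, metadata, and a
# dense_vector whose dims must match the embedding model (768 is an assumption).
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")
es.indices.create(
    index="cv_chunks",
    mappings={
        "properties": {
            "content":   {"type": "text"},
            "person":    {"type": "keyword"},
            "embedding": {"type": "dense_vector", "dims": 768,
                          "index": True, "similarity": "cosine"},
        }
    },
)
```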

    Streamlit Chat Interface

    Build a simple Streamlit UI to ask questions in natural language. Toggle debug mode to inspect which chunks supported the answer, and use metadata filters (e.g., by person) to boost precision for targeted queries.
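
    A bare-bones sketch of such a page might look like the following; query_rag is a hypothetical stand-in for the call into the FastAPI/LlamaIndex backend.

```python
# Bare-bones Streamlit chat page with a debug toggle and an optional person
# filter. query_rag() is a hypothetical stand-in for the backend call and is
# assumed to return (answer, retrieved_chunks).
import streamlit as st

def query_rag(question: str, person: str | None = None):
    """Placeholder: call the FastAPI/LlamaIndex backend."""
    return "No backend wired up yet.", []

st.title("CV Search")
debug = st.sidebar.checkbox("Show supporting chunks")
person = st.sidebar.text_input("Filter by person (optional)")

question = st.chat_input("e.g. 'Who has Spark experience?'")
if question:
    answer, chunks = query_rag(question, person or None)
    st.chat_message("user").write(question)
    st.chat_message("assistant").write(answer)
    if debug:
        for chunk in chunks:
            st.code(chunk)
```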

    Ingestion & JSON Shaping

    Extract raw text from PDFs using PyMuPDF, then have Ollama (Mistral) produce lossless JSON (escaped characters, preserved structure). Handle occasional JSON formatting issues with retries and strict prompting.
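
    A hedged sketch of that ingestion step, assuming the ollama Python client and a simple retry loop (the prompt wording and retry count are illustrative):

```python
# Sketch: pull raw text out of a PDF with PyMuPDF, then ask Mistral (via the
# ollama client) for strict JSON, retrying when the reply fails to parse.
# Prompt wording and retry count are illustrative.
import json
import fitz  # PyMuPDF
import ollama

def pdf_to_text(path: str) -> str:
    with fitz.open(path) as doc:
        return "\n".join(page.get_text() for page in doc)

def text_to_json(raw_text: str, retries: int = 3) -> dict:
    prompt = (
        "Convert the following CV text to valid JSON. Escape special characters, "
        "preserve the structure, and return JSON only.\n\n" + raw_text
    )
    for _ in range(retries):
        reply = ollama.chat(model="mistral",
                            messages=[{"role": "user", "content": prompt}])
        try:
            return json.loads(reply["message"]["content"])
        except json.JSONDecodeError:
            continue  # malformed output: try again with the same strict prompt
    raise ValueError("Model did not return parseable JSON")
```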

    Improving Answer Quality

    Learn practical levers to improve results (a short retrieval-tuning sketch follows the list):

    • Tune chunk size/overlap and top-K retrieval
    • Add richer metadata (roles, skills, locations) for hybrid filtering
    • Experiment with embedding models and stricter prompts
    • Consider structured outputs (e.g., JSON lists) for list-style questions
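
    As a sketch of the first two levers, assuming the Ollama LLM/embedding Settings from the earlier snippet are already configured: top-K and a metadata filter can be set directly on the query engine. The tiny in-memory index and the "person" metadata key are purely illustrative.

```python
# Sketch of tuning top-K and adding a metadata filter. Assumes the Ollama
# LLM/embedding Settings from the earlier snippet; the in-memory index and
# the "person" metadata key are purely illustrative.
from llama_index.core import Document, VectorStoreIndex
from llama_index.core.vector_stores import ExactMatchFilter, MetadataFilters

docs = [Document(text="Built Spark ETL pipelines in Dublin.",
                 metadata={"person": "Jane Doe"})]
index = VectorStoreIndex.from_documents(docs)

filters = MetadataFilters(filters=[ExactMatchFilter(key="person", value="Jane Doe")])
query_engine = index.as_query_engine(similarity_top_k=8, filters=filters)
print(query_engine.query("Who has Spark experience?"))
```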

    Dockerized Setup

    Use Docker Compose to spin up the full stack (FastAPI, Elasticsearch, Kibana, Streamlit, and Ollama with Mistral) so you can run the entire system locally with consistent configuration.

    Bonus: Production Patterns

    Explore how this prototype maps to a scalable design:

    • Store uploads in a data lake (e.g., S3) and trigger processing with a message queue (Kafka/SQS)
    • Autoscale workers for chunking and embeddings
    • Swap LLM backends (e.g., Bedrock/OpenAI) behind clean APIs
    • Persist chat history in MongoDB/Postgres and replace Streamlit with a React/Next.js UI

