# LangChain RAG Demo
This repository demonstrates how to use LangChain for a Retrieval-Augmented Generation (RAG) application. The code retrieves Hacker News front page stories, categorizes them, stores them in a vector store, and performs retrieval based on user preferences.
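As a rough illustration of that flow, the sketch below fetches front-page stories from the public Hacker News API and asks an OpenAI chat model to categorize them. The helper names, model choice, and category list are assumptions for illustration, not the repository's actual code; `indexing.py` additionally extracts full article text with the Jina AI Reader, which is omitted here.

```python
# Illustrative sketch only -- helper names, model, and categories are assumptions,
# not the repository's actual code.
import requests
from langchain_openai import ChatOpenAI

HN_API = "https://hacker-news.firebaseio.com/v0"

def fetch_front_page(limit: int = 5) -> list[dict]:
    """Fetch the top `limit` Hacker News stories via the public Firebase API."""
    ids = requests.get(f"{HN_API}/topstories.json", timeout=10).json()[:limit]
    return [requests.get(f"{HN_API}/item/{i}.json", timeout=10).json() for i in ids]

def categorize(title: str, categories: list[str]) -> str:
    """Ask the chat model to assign one category to a story title."""
    llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)  # reads OPENAI_API_KEY
    prompt = f"Classify this Hacker News story into exactly one of {categories}: {title}"
    return llm.invoke(prompt).content

if __name__ == "__main__":
    for story in fetch_front_page():
        print(story["title"], "->", categorize(story["title"], ["AI", "Programming", "Other"]))
```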
## Getting Started
1. Set the following environment variables:
   - `OPENAI_API_KEY`: Your OpenAI API key for the chat and embedding models.
   - `JINA_AI_KEY`: Your Jina AI Reader key for text extraction.
2. Start a local Weaviate vector store instance (see the indexing sketch after these steps):
   `docker compose up -d`
3. Run the RAG application:
   `uv run python indexing.py`
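With the keys from step 1 set and the Weaviate container from step 2 running, the indexing and retrieval side of the application can be sketched roughly as follows. The index name, documents, and query are placeholders, not the repository's actual data or schema.

```python
# Illustrative sketch -- index name, documents, and query are placeholders.
import weaviate
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_weaviate.vectorstores import WeaviateVectorStore

client = weaviate.connect_to_local()  # the Weaviate instance started by docker compose

docs = [
    Document(page_content="Show HN: A tiny Rust web framework", metadata={"category": "Programming"}),
    Document(page_content="Running LLM inference on consumer GPUs", metadata={"category": "AI"}),
]

store = WeaviateVectorStore.from_documents(
    docs,
    embedding=OpenAIEmbeddings(model="text-embedding-3-small"),  # reads OPENAI_API_KEY
    client=client,
    index_name="HackerNewsStories",
)

# Retrieve the stories closest to a stated user preference.
for doc in store.similarity_search("I like systems programming", k=2):
    print(doc.metadata["category"], "-", doc.page_content)

client.close()
```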
Adjust the constants in `indexing.py` to change the number of stories to fetch and the categories to use. You can optionally enable MLflow tracing by setting `ENABLE_MLFLOW_TRACING=True` there (make sure to run `mlflow server` first).
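A minimal sketch of how that flag can be wired to MLflow's LangChain autologging is shown below; the tracking URI is an assumption (the default address that `mlflow server` listens on), and how `indexing.py` actually reads the flag may differ.

```python
# Hedged sketch of MLflow tracing setup; the tracking URI is an assumption
# (the default address `mlflow server` listens on locally).
import mlflow

ENABLE_MLFLOW_TRACING = True  # the flag exposed in indexing.py

if ENABLE_MLFLOW_TRACING:
    mlflow.set_tracking_uri("http://127.0.0.1:5000")
    mlflow.langchain.autolog()  # log LangChain calls as traces viewable in the MLflow UI
```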