# LangChain RAG Demo
This repository demonstrates how to use LangChain for a Retrieval-Augmented Generation (RAG) application. The code retrieves Hacker News front page stories, categorizes them, stores them in a vector store, and performs retrieval based on user preferences.
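At a high level, the pipeline can be sketched as follows. This is a minimal illustration of the flow described above, not the contents of `indexing.py`: the index name, model choices, and the use of the official Hacker News Firebase API are assumptions, and the categorization step with the chat model is omitted for brevity.

```python
# Minimal sketch of the pipeline described above. Assumptions: the official
# Hacker News Firebase API, OpenAI embeddings, and a local Weaviate instance.
# The real indexing.py may structure this differently.
import requests
import weaviate
from langchain_core.documents import Document
from langchain_openai import OpenAIEmbeddings
from langchain_weaviate.vectorstores import WeaviateVectorStore

# 1. Fetch the current Hacker News front page stories.
top_ids = requests.get(
    "https://hacker-news.firebaseio.com/v0/topstories.json", timeout=10
).json()[:30]
stories = [
    requests.get(
        f"https://hacker-news.firebaseio.com/v0/item/{story_id}.json", timeout=10
    ).json()
    for story_id in top_ids
]

# 2. Wrap each story as a LangChain Document (title as content, URL as metadata).
docs = [
    Document(page_content=s["title"], metadata={"url": s.get("url", "")})
    for s in stories
    if s and "title" in s
]

# 3. Embed the documents and store them in the local Weaviate instance.
client = weaviate.connect_to_local()
store = WeaviateVectorStore.from_documents(
    docs, embedding=OpenAIEmbeddings(), client=client, index_name="HackerNews"
)

# 4. Retrieve stories matching a user preference via similarity search.
for doc in store.similarity_search("distributed systems and databases", k=5):
    print(doc.page_content, "-", doc.metadata["url"])

client.close()
```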
## Getting Started

1. Set the following environment variables (a quick setup check is sketched after this list):
   - `OPENAI_API_KEY`: your OpenAI API key for the chat and embedding models.
   - `JINA_AI_KEY`: your Jina AI Reader key for text extraction.
   - `SLACK_BOT_TOKEN`: your Slack bot token for sending messages (optional).

2. Start a local Weaviate vector store instance: `docker compose up -d`

3. Run the RAG application: `uv run python indexing.py`
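Before running `indexing.py`, you can sanity-check the environment with a short script like the sketch below. It assumes the `weaviate` Python client is installed and that the docker compose setup exposes Weaviate on its default local ports.

```python
# Quick setup check (a sketch; assumes Weaviate is exposed on its default local ports).
import os
import weaviate

# Fail early if a required key is missing (SLACK_BOT_TOKEN is optional).
for key in ("OPENAI_API_KEY", "JINA_AI_KEY"):
    if not os.environ.get(key):
        raise RuntimeError(f"Missing required environment variable: {key}")

# Confirm the local Weaviate instance started by docker compose is reachable.
client = weaviate.connect_to_local()
print("Weaviate ready:", client.is_ready())
client.close()
```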
Adjust the constants in `indexing.py` to configure the behavior of the application.
You can optionally enable MLflow tracing by setting `ENABLE_MLFLOW_TRACING = True` there (make sure to run `mlflow server` first).
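If you enable tracing, the wiring inside `indexing.py` presumably looks something like the sketch below; the tracking URI and experiment name are assumptions, not taken from the repository.

```python
# Sketch of how MLflow tracing could be enabled when ENABLE_MLFLOW_TRACING is True.
# The tracking URI and experiment name are assumptions, not taken from indexing.py.
import mlflow

ENABLE_MLFLOW_TRACING = True

if ENABLE_MLFLOW_TRACING:
    mlflow.set_tracking_uri("http://127.0.0.1:5000")  # the server started by `mlflow server`
    mlflow.set_experiment("langchain-rag-demo")
    mlflow.langchain.autolog()  # traces LangChain chain, LLM, and retriever calls
```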