A question-answering chatbot built with Retrieval-Augmented Generation (RAG) and conversation memory. The project uses LangChain, a choice of LLM providers, and pluggable vector stores to answer questions about Jessup Cellars winery.
- RAG-based question answering
- Conversation memory to maintain context
- Support for multiple LLM options (Groq, OpenAI)
- Vector store options (Pinecone, FAISS)
- Environment variable configuration
- Flexible embedding models (HuggingFace, OpenAI)
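The core retrieve-then-generate loop with conversation memory can be sketched independently of LangChain. This is an illustrative toy (keyword-overlap retrieval standing in for a real vector store, hypothetical function names), not the project's actual pipeline:

```python
# Toy sketch of RAG with conversation memory (illustrative only;
# the project uses LangChain chains and a real vector store).

def retrieve(question, corpus, top_k=2):
    """Naive keyword-overlap retrieval standing in for embedding search."""
    q_words = set(question.lower().split())
    ranked = sorted(corpus,
                    key=lambda doc: -len(q_words & set(doc.lower().split())))
    return ranked[:top_k]

def build_prompt(question, docs, history):
    """Combine retrieved context and prior turns into a single prompt."""
    context = "\n".join(docs)
    memory = "\n".join(f"Q: {q}\nA: {a}" for q, a in history)
    return f"Context:\n{context}\n\nHistory:\n{memory}\n\nQuestion: {question}"

# Example with a toy corpus and one remembered turn
corpus = [
    "Jessup Cellars is a winery in Napa Valley.",
    "The tasting room offers art alongside wine.",
]
history = [("Where is the winery?", "Napa Valley.")]
prompt = build_prompt("What does the tasting room offer?",
                      retrieve("tasting room offerings", corpus),
                      history)
print("tasting room" in prompt)  # True
```

The prompt built this way carries both retrieved context and the running conversation, which is what lets follow-up questions resolve pronouns like "it" or "there".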
```bash
git clone https://github.com/Piyush-sri11/QA-chatbot.git
virtualenv env
env/Scripts/activate        # Windows
# source env/bin/activate   # macOS/Linux
pip install -r requirements.txt
```
Create a `.env` file in the project root with the following keys:

```env
LANGCHAIN_API_KEY="your_langchain_api_key"
LANGCHAIN_PROJECT="RAG QA Chatbot with Memory"
OPENAI_API_KEY="your_openai_api_key"
GOOGLE_API_KEY="your_google_api_key"
GROQ_API_KEY="your_groq_api_key"
HUGGING_FACE_TOKEN="your_huggingface_token"
PINECONE_API_KEY="your_pinecone_api_key"
```
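Loading these variables at startup is usually handled by a library such as python-dotenv; as a minimal stdlib sketch of what that loading does (the function name and its line-based interface are this sketch's own, not the project's):

```python
import os

def load_env(lines):
    """Minimal .env parser. `lines` is any iterable of KEY="value" lines,
    e.g. open(".env"). Libraries like python-dotenv do this robustly."""
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # setdefault: real environment variables win over the .env file
        os.environ.setdefault(key.strip(), value.strip().strip('"'))

# Usage: load_env(open(".env"))
```

Keys already present in the real environment are left untouched, so exported shell variables take precedence over the file.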
If running on Google Colab, store the keys as Colab secrets and load them instead:

```python
import os
from google.colab import userdata

os.environ['OPENAI_API_KEY'] = userdata.get('OPENAI_API_KEY')
os.environ['GROQ_API_KEY'] = userdata.get('GROQ_API_KEY')
# Add other API keys as needed
```
The project defaults to Groq's Llama3-70b-8192 model, a capable open-weight alternative to paid APIs:
```python
llm = ChatGroq(
    groq_api_key=groq_api_key,
    model_name="Llama3-70b-8192",
    temperature=0,
)
```
To use OpenAI's models instead (requires API credits):

```python
llm = ChatOpenAI(
    model="gpt-4",
    temperature=0,
    max_tokens=None,
    timeout=None,
    max_retries=4,
    api_key=openai_api_key,
)
```
The project uses Pinecone as the default vector store for production use.
For local development or testing, uncomment the FAISS implementation in the code.
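Whichever backend is used, a vector store ranks documents by similarity between embedding vectors. A toy pure-Python sketch of that search (cosine similarity over a hypothetical two-dimensional index; real stores like FAISS and Pinecone do this at scale over high-dimensional embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def search(query_vec, index, top_k=1):
    """index: list of (doc_id, vector) pairs; returns the top_k doc ids."""
    ranked = sorted(index, key=lambda pair: -cosine(query_vec, pair[1]))
    return [doc_id for doc_id, _ in ranked[:top_k]]

# Toy 2-D "embeddings" for two document chunks
index = [("wine_styles", [0.9, 0.1]), ("tasting_hours", [0.1, 0.9])]
print(search([0.8, 0.2], index))  # ['wine_styles']
```

Swapping FAISS for Pinecone (or vice versa) changes where this index lives (local memory vs. a managed service) but not the retrieval idea.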
1. Ensure all configurations are set up properly.
2. Run the main script:

   ```bash
   python yard.py
   ```

3. Start asking questions about Jessup Cellars.
4. Type `exit` to end the session.
Example session:

```text
Enter your question: What makes Jessup Cellars wines special?
[Response will appear here]
Response time: [time in seconds]
##################################################
Enter your question:
```
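The session above follows a simple read-answer-print loop. A sketch of that loop, written to take an iterable of questions rather than `input()` so it can be exercised without a terminal (the `chat_loop` and `answer_fn` names are this sketch's own; `answer_fn` stands in for the RAG chain):

```python
import time

def chat_loop(answer_fn, questions):
    """Drive a Q&A session: answer each question, print timing,
    stop on 'exit'. Returns the transcript for inspection."""
    transcript = []
    for question in questions:
        if question.strip().lower() == "exit":
            break
        start = time.time()
        answer = answer_fn(question)
        elapsed = time.time() - start
        transcript.append((question, answer, elapsed))
        print(answer)
        print(f"Response time: {elapsed:.2f} seconds")
        print("#" * 50)
    return transcript

# Example with a stub answerer in place of the real chain
log = chat_loop(lambda q: f"(answer to: {q})",
                ["What makes Jessup Cellars wines special?", "exit"])
```

In the real script the questions come from `input()` and `answer_fn` invokes the LangChain pipeline, but the control flow is the same.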
- This project was developed using open-source LLMs (Groq's Llama3-70b-8192) due to OpenAI API credit limitations. The code includes commented sections for OpenAI integration if you have API credits available.
- See `corpus_info.md` for detailed information about the Jessup Cellars knowledge base used in this project.