# RAG Chatbot Application

## Project Overview

A modular Retrieval-Augmented Generation (RAG) chatbot application built with FastAPI, supporting multiple LLM providers and embedding models.

## Project Structure

- `config/`: Configuration management
- `src/`: Main application source code
- `tests/`: Unit and integration tests
- `data/`: Document storage and ingestion

## Prerequisites

- Python 3.9+
- pip
- (Optional) Virtual environment

## Installation

1. Clone the repository
   ```bash
   git clone https://your-repo-url.git
   cd rag-chatbot
   ```
2. Create and activate a virtual environment
   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows use `venv\Scripts\activate`
   ```
3. Install dependencies
   ```bash
   pip install -r requirements.txt
   ```
4. Set up environment variables
   ```bash
   cp .env.example .env
   # Edit .env with your credentials
   ```

## Configuration

### Environment Variables

- `OPENAI_API_KEY`: OpenAI API key
- `OLLAMA_BASE_URL`: Ollama server URL
- `EMBEDDING_MODEL`: Hugging Face embedding model
- `CHROMA_PATH`: Vector store persistence path
- `DEBUG`: Enable debug mode
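
For reference, a filled-in `.env` might look like the following. All values are illustrative placeholders (the Ollama URL uses Ollama's default port, and the embedding model is just one common Hugging Face choice); substitute your own credentials and paths:

```bash
# Illustrative values only -- replace with your own credentials and paths
OPENAI_API_KEY=sk-your-key-here
OLLAMA_BASE_URL=http://localhost:11434
EMBEDDING_MODEL=sentence-transformers/all-MiniLM-L6-v2
CHROMA_PATH=./chroma_db
DEBUG=false
```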
## Running the Application

### Development Server

```bash
uvicorn src.main:app --reload
```

### Production Deployment

```bash
gunicorn -w 4 -k uvicorn.workers.UvicornWorker src.main:app
```

## Testing

```bash
pytest tests/
```
## Features

- Multiple LLM provider support (e.g., OpenAI and Ollama, per the configuration above)
- Retrieval-Augmented Generation over ingested documents
- Document ingestion into a persistent vector store
- Flexible, environment-based configuration
- FastAPI backend
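
At its core, the retrieval step of RAG scores each stored document embedding against the query embedding and passes the top matches to the LLM as context. The sketch below illustrates that scoring with cosine similarity over toy vectors (pure Python, no real embedding model; the function and variable names are invented for illustration, not this project's API):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec: list[float], store: dict[str, list[float]], k: int = 2) -> list[str]:
    """Return the ids of the k documents most similar to the query vector.

    `store` maps document id -> embedding; in the real application this role
    is played by the vector store persisted at CHROMA_PATH.
    """
    ranked = sorted(store,
                    key=lambda doc_id: cosine_similarity(query_vec, store[doc_id]),
                    reverse=True)
    return ranked[:k]

# Toy 3-dimensional "embeddings" standing in for real model output.
store = {
    "doc_fastapi": [0.9, 0.1, 0.0],
    "doc_chroma":  [0.1, 0.9, 0.0],
    "doc_ollama":  [0.0, 0.2, 0.9],
}
print(retrieve([1.0, 0.0, 0.1], store, k=2))  # most similar document ids first
```

In the application itself, the embedding model named by `EMBEDDING_MODEL` produces the vectors, and the top-k document texts are concatenated into the prompt sent to the configured LLM provider.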
## Contributing

1. Fork the repository
2. Create your feature branch
3. Commit your changes
4. Push to the branch
5. Create a Pull Request