PrivateGPT lets you chat with your documents using local LLMs. 100% private - your data never leaves your computer.
Quick Deploy
```bash
# Clone the repo
git clone https://github.com/imartinez/privateGPT.git
cd privateGPT

# Install dependencies
pip install -r requirements.txt

# Download a model (e.g., Llama 2)
# Place in models/ folder

# Ingest your documents
python ingest.py

# Start chatting
python privateGPT.py
```
Docker Deploy
```bash
docker-compose up -d
# Access at http://localhost:8080
```
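For reference, a compose file for this kind of setup typically builds the image and mounts the model and document folders. The sketch below is a hypothetical example only; check the repo for the actual service name, ports, and volume paths.

```yaml
# Hypothetical docker-compose.yml sketch (not the repo's actual file).
services:
  privategpt:
    build: .
    ports:
      - "8080:8080"   # assumed host:container port mapping
    volumes:
      - ./models:/app/models                      # local model weights
      - ./source_documents:/app/source_documents  # documents to ingest
```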
Use Cases
Legal Document Review: Ask questions about contracts without uploading sensitive data to the cloud.
Research Assistant: Chat with academic papers, extract key findings.
Codebase Q&A: Ingest your code, ask "how does the auth system work?"
Company Knowledge Base: Build internal chatbots on proprietary docs.
Medical Records: Query patient data locally. Keeping records on-premises can help with HIPAA obligations, though compliance depends on your overall deployment, not the tool alone.
Financial Analysis: Analyze reports without data exposure.
Supported Formats
- PDF, DOCX, TXT
- CSV, XLSX
- Python, JavaScript, and other code files
- Markdown, HTML
- Email archives
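Ingestion tools like this commonly pick a loader based on file extension. The sketch below shows that dispatch pattern; the mapping and loader names are hypothetical, not PrivateGPT's actual registry.

```python
from pathlib import Path

# Hypothetical extension-to-loader mapping; the real project's ingest
# code may use different keys and loader classes.
LOADERS = {
    ".pdf": "PDFLoader",
    ".docx": "DocxLoader",
    ".txt": "TextLoader",
    ".csv": "CSVLoader",
    ".md": "MarkdownLoader",
    ".html": "HTMLLoader",
    ".py": "CodeLoader",
    ".js": "CodeLoader",
}

def pick_loader(path):
    # Normalize the extension so "Report.PDF" and "report.pdf" match.
    suffix = Path(path).suffix.lower()
    try:
        return LOADERS[suffix]
    except KeyError:
        raise ValueError(f"Unsupported format: {suffix}")
```

Unsupported extensions fail fast with a clear error, which is usually preferable to silently skipping files during a large ingest.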
Pro Tips
- Larger or higher-quality models give noticeably better answers
- Chunk size affects retrieval quality: too small loses context, too large dilutes relevance
- More documents = slower retrieval but more comprehensive answers
- A GPU dramatically speeds up responses
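To make the chunk-size tip concrete, here is a minimal overlapping-chunk splitter. It is a sketch only: PrivateGPT's actual splitter and default sizes may differ, and `chunk_text` is a hypothetical helper name.

```python
def chunk_text(text, chunk_size=500, overlap=50):
    """Split text into overlapping character chunks (illustrative helper).

    Overlap keeps sentences that straddle a boundary retrievable from
    either neighboring chunk.
    """
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks

# Example: 1000 characters with 400-char chunks and 100-char overlap
# yields chunks starting at offsets 0, 300, 600, and 900.
parts = chunk_text("a" * 1000, chunk_size=400, overlap=100)
```

Tuning is a trade-off: smaller chunks give more precise retrieval hits, larger ones give the model more surrounding context per hit.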
Hardware
- CPU-only: works, but responses are slow
- 8GB VRAM: good performance
- 16GB+ VRAM: best experience