Configuration
AI Providers
GitSage supports two primary intelligence modes: Cloud (Powered by GitSage's Intelligence Layer) for maximum speed and accuracy, and Local (Ollama) for absolute privacy.
Provider Comparison
Cloud Mode
ai_provider: "gitsage"
- ✅ Sub-2s analysis speed
- ✅ Deep reasoning on complex diffs
- ✅ 95%+ scope accuracy
- ⚠️ Requires GitSage key
Stealth Mode (Local)
ai_provider: "local"
- ✅ Zero data leaves your machine
- ✅ No API key required
- ✅ Works offline
- ⚠️ ~5s latency on average hardware
- ⚠️ Requires Ollama installed
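Both modes are selected by the single `ai_provider` key shown in the cards above. A minimal config sketch, assuming GitSage reads a repository-level `.gitsage.yml` (the filename and the `local` sub-keys are assumptions for illustration, not documented behavior):

```yaml
# .gitsage.yml — hypothetical repository-level config file
ai_provider: "gitsage"   # "gitsage" for Cloud Mode, "local" for Stealth Mode

# Assumed keys, only relevant when ai_provider is "local":
local:
  model: "mistral"       # any of: mistral, llama3, codellama, gemma2
```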
Setting Up Cloud Mode
- Create an account at the GitSage Portal and generate an API key.
- Configure GitSage with your API key via the CLI.
- Run `gitsage` in any repository to start auto-committing.
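The original command block was not preserved here; the following is a hypothetical sketch of the three steps, where the `gitsage config set` subcommand and the `GITSAGE_API_KEY` variable name are assumptions:

```bash
# Hypothetical commands — subcommand and variable names are assumed, not documented
export GITSAGE_API_KEY="sk-..."         # key generated in the GitSage Portal
gitsage config set ai_provider gitsage  # select Cloud Mode
gitsage                                 # run in any repository to auto-commit
```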
Setting Up Ollama (Local)
- Install Ollama from ollama.ai.
- Pull a supported model (Mistral recommended).
- Switch GitSage to local mode in your current repository.
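A sketch of the steps above: `ollama pull` is Ollama's actual model-download command, while the `gitsage` line is hypothetical (the subcommand name is an assumption):

```bash
ollama pull mistral                   # download the recommended local model
gitsage config set ai_provider local  # hypothetical: switch this repo to local mode
```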
On success, GitSage reports "Stealth Active" and "Ollama Ready".
Supported local models: `mistral`, `llama3`, `codellama`, `gemma2`
Switching Providers on the Fly
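No command survives in this section; a hypothetical sketch, assuming the provider is a single config key that can be flipped per repository at any time:

```bash
# Hypothetical: flip providers without changing anything else
gitsage config set ai_provider local    # go fully offline for sensitive repos
gitsage config set ai_provider gitsage  # back to Cloud Mode for speed
```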