What makes Tally different
🏠 Automatic local backend
Starts a local server automatically when you launch the app, with intelligent fallback to the cloud API and an offline mode.
🔄 Automatic updates
Built-in Tauri updater with signed installers. Get the latest features and fixes seamlessly.
🧠 Context-aware summaries
Tally reads diffs, file types, and history to produce focused, useful summaries that match developer intent.
🔐 Privacy-first design
Local processing with Ollama by default. The app automatically checks for compatible models and guides you through setup.
⚡ Fast by design
Rust + incremental git scanning makes it instant for small diffs and responsive for large repos.
🔎 Risk & impact signals
Highlights security-sensitive changes, potential regressions, and migration risks before merge.
How Tally works
Launch & Connect
Tally automatically starts a local backend server when you launch it. The app connects locally first, then falls back to cloud API if needed.
Set Up AI Models
Tally checks for compatible Ollama models on startup. It guides you to install models like phi4-mini or llama3 if needed.
Analyze Commits
Connect your Git repository and Tally analyzes commits with AI-powered summaries, security insights, and impact assessments.
Stay Updated
Automatic updates keep you on the latest version with signed installers and seamless upgrades.
How Tally Works
🏠 Automatic Local Backend
When you launch Tally, it automatically starts a local backend server that handles all your commit analysis:
🏠 Local Server (Primary)
Starts automatically when you launch Tally - no setup required.
- Runs locally on your machine
- Processes Git commits instantly
- Works with Ollama for AI summaries
- Complete privacy - no data leaves your device
🌐 Cloud API (Fallback)
Automatically falls back to the cloud API if the local server is unavailable (see the sketch below).
- Team collaboration features
- Advanced cloud AI models
- Real-time synchronization
- Always available as backup
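Conceptually, that selection comes down to a quick health check with a fallback. The sketch below is illustrative only; the port, the /health path, and the cloud URL are invented placeholders, not Tally's actual endpoints.

```typescript
// Minimal sketch of local-first backend selection with a cloud fallback.
// Port, paths, and URLs are illustrative placeholders, not Tally's real API.
const LOCAL_BASE = "http://127.0.0.1:4317";   // hypothetical local backend
const CLOUD_BASE = "https://api.example.com"; // hypothetical cloud API

async function isHealthy(baseUrl: string, timeoutMs = 1500): Promise<boolean> {
  try {
    // Abort the probe quickly so startup never hangs on a dead endpoint.
    const res = await fetch(`${baseUrl}/health`, {
      signal: AbortSignal.timeout(timeoutMs),
    });
    return res.ok;
  } catch {
    return false;
  }
}

// Prefer the local server; fall back to the cloud API, then to offline mode.
export async function pickBackend(): Promise<string | "offline"> {
  if (await isHealthy(LOCAL_BASE)) return LOCAL_BASE;
  if (await isHealthy(CLOUD_BASE)) return CLOUD_BASE;
  return "offline";
}
```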
🧠 Ollama Integration
Automatically detects compatible AI models on your machine (see the sketch below).
- Checks for installed models on startup
- Guides you to install compatible models
- Supports phi4-mini, llama3, and more
- Works completely offline
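Ollama serves a small HTTP API on localhost, and a model check like the one described above can be sketched against its /api/tags endpoint, which lists installed models. This is an illustration rather than Tally's actual implementation; the compatible-model list mirrors the models named on this page.

```typescript
// Sketch: find a compatible installed model via Ollama's local API.
// GET http://localhost:11434/api/tags returns { models: [{ name: "llama3:latest", ... }] }.
const COMPATIBLE = ["phi4-mini", "llama3", "phi3", "mistral", "codellama"];

interface TagsResponse {
  models: { name: string }[];
}

export async function findCompatibleModel(): Promise<string | null> {
  try {
    const res = await fetch("http://localhost:11434/api/tags");
    if (!res.ok) return null; // Ollama responded, but not usefully
    const { models }: TagsResponse = await res.json();
    // Installed names look like "phi4-mini:3.8b" or "llama3:latest".
    const match = models.find((m) => COMPATIBLE.some((c) => m.name.startsWith(c)));
    return match?.name ?? null; // null => prompt the user to run an ollama pull
  } catch {
    return null; // Ollama not installed or not running
  }
}
```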
🔄 Automatic Updates
Tally includes built-in automatic updates powered by Tauri's secure updater (sketched in code below):
🔐 Signed Updates
All updates are cryptographically signed and verified for security.
⚡ Seamless Installation
Updates install automatically in the background with minimal user interaction.
🎯 Smart Notifications
Get notified of new features and improvements without interrupting your workflow.
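Assuming Tally is built on Tauri v2, the check-and-install flow would use the official updater and process plugins roughly as follows. This is a generic sketch of Tauri's documented updater API, not Tally's exact code, and it presumes an update endpoint and signing public key are configured in tauri.conf.json.

```typescript
// Sketch of Tauri v2's updater flow, assuming @tauri-apps/plugin-updater and
// @tauri-apps/plugin-process are installed and configured.
import { check } from "@tauri-apps/plugin-updater";
import { relaunch } from "@tauri-apps/plugin-process";

export async function updateIfAvailable(): Promise<void> {
  const update = await check();      // queries the configured update endpoint
  if (update === null) return;       // already on the latest version

  // The downloaded artifact's signature is verified before installation.
  await update.downloadAndInstall();
  await relaunch();                  // restart into the new version
}
```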
See Tally in Action
🔍 Real-time Commit Analysis
Tally analyzes your commits and provides intelligent insights in real time: context-aware analysis, security insights, and productivity metrics that help you understand your code changes better.
Simple, Transparent Pricing
Free
- ✓ Local AI processing with Ollama
- ✓ Unlimited commit analysis
- ✓ All platforms (macOS, Windows, Linux)
- ✓ Privacy-first design
- ✓ Open source
Pro
- ✓ Everything in Free
- ✓ Cloud AI integration (OpenAI, Anthropic)
- ✓ Advanced analytics & insights
- ✓ Team collaboration features
- ✓ Priority support
Enterprise
- ✓ Everything in Pro
- ✓ On-premise deployment
- ✓ Custom AI models
- ✓ SSO integration
- ✓ Dedicated support
⚠️ Security Warnings: Normal for Unsigned Apps
Windows and macOS may show security warnings when downloading Tally. This is normal for unsigned applications.
🚀 Quick Start Guide
📥 Download & Install
- Download Tally for your platform above
- Install the app (drag to Applications on macOS)
- Launch Tally from your Applications folder
🧠 Set Up AI Analysis (Optional)
- Install Ollama (optional, for local AI)
- Run ollama pull phi4-mini:3.8b or ollama pull llama3 - Tally will automatically detect and use your models
- Connect your Git repository in Tally
✨ Start Analyzing
- Launch Tally - backend starts automatically
- Browse your commit history
- Get AI-powered summaries (with Ollama models)
- Copy insights to pull requests
🛠️ Installation Help
🍎 macOS
If Gatekeeper blocks the app, right-click → Open, or go to System Preferences → Security & Privacy → General → Allow apps downloaded from.
🪟 Windows
Click "More info" → "Run anyway" if Windows Defender blocks the download, or add an exception in Windows Security.
🐧 Linux
Make the AppImage executable with chmod +x Tally.AppImage and run it with ./Tally.AppImage.
🔐 Verification & Security
All releases are signed and checksummed for your security. Verify your download against the checksums published with each release.
Frequently Asked Questions
Is Tally free to use?
Yes! Tally is completely free for local use with Ollama. You get unlimited commit analysis, all platforms, and full privacy. We also offer Pro plans with cloud AI integration and advanced features.
How does the local backend work?
When you launch Tally, it automatically starts a local backend server on your machine. This server handles all commit analysis locally. If the local server fails to start, Tally automatically falls back to the cloud API. Everything happens automatically - no configuration needed.
How do updates work?
Tally uses Tauri's built-in updater with signed installers. Updates are checked automatically, and you'll be notified when new versions are available. All updates are cryptographically signed for security.
Is my code private?
By default, Tally processes everything locally using Ollama. Your code never leaves your machine unless you explicitly choose to use cloud AI providers. All analysis happens on your device.
Which AI models does Tally support?
Tally automatically checks for compatible Ollama models when you launch it. Compatible local models include phi4-mini, llama3, phi3, mistral, and codellama. If no compatible model is found, Tally will notify you with installation instructions. For cloud analysis, OpenAI GPT-4, Anthropic Claude, and other major providers are supported.
Can I use Tally with my team?
Yes! Our Pro plan includes team collaboration features, shared analytics, and centralized configuration. Enterprise plans offer on-premise deployment and custom AI models.
How do I get started?
1. Download and install Tally.
2. Launch the app - the backend starts automatically.
3. (Optional) Install Ollama and run ollama pull phi4-mini:3.8b for AI summaries - Tally will check for models and guide you.
4. Connect your Git repository and start analyzing - the local backend handles everything.