What is Tally?
Tally (formerly Time2Trace) is an advanced, AI-powered development assistant designed to bridge the gap between raw code changes and human understanding. It serves as a "universal translator" for development work, converting cryptic git commits and diffs into clear, actionable narratives tailored to specific audiences.
Cognitive Offloading
Automates the mental effort of summarizing complex technical work, letting you focus on coding.
Privacy-First AI
Designed to work with local LLMs (Ollama) so sensitive code never leaves your machine.
Multi-Persona
Generates summaries for different audiences: Engineers, Product Managers, or Personal Journals.
Evolution & Roadmap
Phase 1: Time2Trace Prototype
Originally a command-line tool and simple web dashboard focused on basic git history visualization. Monolithic Node.js architecture.
Completed
Phase 2: AI Integration
Introduction of the "Summaries" capability and Persona system. Integration with OpenAI API, laying the groundwork for intelligent code analysis.
Completed
Phase 3: Tally Desktop & Local AI
Rebranded to "Tally". Major architectural shift to Tauri desktop app. Local AI (Ollama) as first-class citizen. Auto-save, improved personas, and robust offline support.
Current Release v0.0.4
Phase 4: Collaboration & Scale
Batch processing for weekly/monthly summaries. Team sync capabilities to share insights. VS Code extension integration for seamless workflow.
Planned
What makes Tally different
🏠 Automatic local backend
Starts a local server automatically when you launch the app, with intelligent fallback to cloud API and offline mode.
🔄 Automatic updates
Built-in Tauri updater with signed installers. Get the latest features and fixes seamlessly.
🧠 Context-aware summaries
Tally reads diffs, file types, and history to produce focused, useful summaries that match developer intent.
🔐 Privacy-first design
Local processing with Ollama by default. The app automatically checks for compatible models and guides you through setup.
⚡ Fast by design
Rust plus incremental git scanning keeps Tally instant for small diffs and responsive in large repos.
🔎 Risk & impact signals
Highlights security-sensitive changes, potential regressions, and migration risks before merge.
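The "fast by design" claim rests on incremental scanning: rather than re-reading the whole history on every launch, only commits newer than the last processed one are analyzed. Here is a minimal sketch of that idea; the `newCommitsSince` function and the `Commit` shape are illustrative assumptions, not Tally's actual code.

```typescript
// Incremental scanning sketch: only commits that appeared after the
// last-seen hash are re-processed on the next scan.
interface Commit {
  hash: string;
  message: string;
}

// Commits are ordered oldest -> newest. A null cursor means first scan.
function newCommitsSince(commits: Commit[], lastSeen: string | null): Commit[] {
  if (lastSeen === null) return commits; // first scan: everything is new
  const idx = commits.findIndex((c) => c.hash === lastSeen);
  // If the cursor is gone (e.g. history was rewritten), rescan everything.
  return idx === -1 ? commits : commits.slice(idx + 1);
}

const history: Commit[] = [
  { hash: "a1", message: "init" },
  { hash: "b2", message: "add parser" },
  { hash: "c3", message: "fix lexer bug" },
];

console.log(newCommitsSince(history, "a1").map((c) => c.hash)); // → [ 'b2', 'c3' ]
```

In a real scanner the same cursor idea maps onto a `git log <lastSeen>..HEAD` range query, so the work per scan is proportional to the new commits, not the repository size.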
How Tally Works
The Architecture
Tally runs entirely on your machine. It pairs a native Tauri shell (Rust) with a specialized Node.js sidecar backend that handles complex Git operations and AI orchestration.
Tauri Frontend: Lightweight, native UI that renders your insights and manages the application lifecycle.
Node.js Sidecar: A hidden local server that processes Git history and diffs and communicates with the AI model.
Ollama: Your local LLM provider. Tally sends anonymized diffs to Ollama and receives structured summaries.
From Commit to Insight
Git Analysis
Tally scans your repository for recent commits. It extracts the raw diffs, file names, and commit messages.
Context Construction
The sidecar builds a prompt containing the diff and the selected "Persona" (e.g., "Explain like I'm 5" or "Technical Deep Dive").
Local Inference
The prompt is sent to your local Ollama instance. No data leaves your machine. The model generates a summary.
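The three steps above can be sketched in a few lines. This is an illustrative sketch only: the persona texts and the `phi4` model choice are assumptions, though the `/api/generate` endpoint and its `model`/`prompt`/`stream` fields are Ollama's standard generate API.

```typescript
// Sketch of the commit-to-insight pipeline: persona + diff -> prompt -> Ollama.
const personas: Record<string, string> = {
  "eli5": "Explain these code changes like I'm 5.",
  "deep-dive": "Give a technical deep dive on these changes.",
};

// Step 2, context construction: combine the persona and the raw diff.
function buildPrompt(persona: string, diff: string): string {
  return `${personas[persona]}\n\n--- DIFF ---\n${diff}`;
}

// Step 3, local inference: POST the prompt to the local Ollama instance.
async function summarize(persona: string, diff: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    body: JSON.stringify({
      model: "phi4",
      prompt: buildPrompt(persona, diff),
      stream: false, // wait for the full summary instead of streaming tokens
    }),
  });
  const data = (await res.json()) as { response: string };
  return data.response;
}
```

Because the request targets `localhost:11434`, the diff never crosses the network boundary of your machine.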
🦙 Setting up Ollama (Required)
To use Tally's privacy-first AI features, you need to have Ollama running with a compatible model.
Step 1: Install Ollama
Download Ollama from ollama.com and run the installer for your platform.
Step 2: Pull a Model
Open your terminal and run the following command to download the recommended model (phi4):
ollama pull phi4
Note: You can also use `llama3`, `mistral`, or other models. Tally will detect them automatically.
Step 3: Launch Tally
Open Tally. It will automatically connect to your running Ollama instance. Look for the "AI Ready" indicator in the status bar.
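The "detect them automatically" behavior can be approximated with Ollama's `/api/tags` endpoint, which lists installed models. The endpoint is real, but the compatibility list and matching logic below are an illustrative assumption, not Tally's implementation.

```typescript
// Sketch: auto-detect a compatible installed Ollama model.
// List mirrors the models named on this page; not exhaustive.
const COMPATIBLE = ["phi4", "llama3", "phi3", "mistral", "codellama"];

// Installed names look like "llama3:latest" or "phi4-mini:3.8b".
// Return the first installed model whose base name matches the list.
function pickModel(installed: string[]): string | null {
  for (const name of installed) {
    const base = name.split(":")[0];
    if (COMPATIBLE.some((c) => base.startsWith(c))) return name;
  }
  return null;
}

// Query the local Ollama instance for its installed models.
async function detectModel(): Promise<string | null> {
  const res = await fetch("http://localhost:11434/api/tags");
  const data = (await res.json()) as { models: { name: string }[] };
  return pickModel(data.models.map((m) => m.name));
}
```

If `detectModel` resolves to `null`, an app would surface setup guidance, which is the behavior the status-bar "AI Ready" indicator reflects.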
See Tally in Action
🔍 Real-time Commit Analysis
Tally analyzes your commits and provides intelligent insights in real time. Here's what you'll see:
Tally provides context-aware analysis, security insights, and productivity metrics to help you understand your code changes better.
Frequently Asked Questions
Is Tally free to use?
Yes! Tally is completely free for local use with Ollama. You get unlimited commit analysis, all platforms, and full privacy. We also offer Pro plans with cloud AI integration and advanced features.
How does the local backend work?
When you launch Tally, it automatically starts a local backend server on your machine. This server handles all commit analysis locally. If the local server fails to start, Tally automatically falls back to the cloud API. Everything happens automatically; no configuration is needed.
How do updates work?
Tally uses Tauri's built-in updater with signed installers. Updates are checked automatically, and you'll be notified when new versions are available. All updates are cryptographically signed for security.
Does my code leave my machine?
By default, Tally processes everything locally using Ollama. Your code never leaves your machine unless you explicitly choose to use cloud AI providers. All analysis happens on your device.
Which AI models does Tally support?
Tally automatically checks for compatible Ollama models when you launch it. Compatible models include phi4-mini, llama3, phi3, mistral, and codellama. If no compatible model is found, Tally will notify you with installation instructions. For cloud: OpenAI GPT-4, Anthropic Claude, and other major providers are supported.
Can my team use Tally?
Yes! Our Pro plan includes team collaboration features, shared analytics, and centralized configuration. Enterprise plans offer on-premise deployment and custom AI models.
How do I get started?
1. Download and install Tally.
2. Launch the app; the backend starts automatically.
3. (Optional) Install Ollama and run `ollama pull phi4-mini:3.8b` for AI summaries. Tally will check and guide you.
4. Connect your Git repository and start analyzing!
The local backend handles everything.