What is Tally?

Tally (formerly Time2Trace) is an advanced, AI-powered development assistant designed to bridge the gap between raw code changes and human understanding. It serves as a "universal translator" for development work, converting cryptic git commits and diffs into clear, actionable narratives tailored to specific audiences.

🧠

Cognitive Offloading

Automates the mental effort of summarizing complex technical work, letting you focus on coding.

🔒

Privacy-First AI

Designed to work with local LLMs (Ollama) so sensitive code never leaves your machine.

🎭

Multi-Persona

Generates summaries for different audiences: Engineers, Product Managers, or Personal Journals.

Evolution & Roadmap

Early 2025

Phase 1: Time2Trace Prototype

Originally a command-line tool and simple web dashboard focused on basic git history visualization. Monolithic Node.js architecture.

Completed
Mid 2025

Phase 2: AI Integration

Introduction of the "Summaries" capability and Persona system. Integration with the OpenAI API, laying the groundwork for intelligent code analysis.

Completed
Dec 2025 (Current)

Phase 3: Tally Desktop & Local AI

Rebranded to "Tally". Major architectural shift to a Tauri desktop app, with local AI (Ollama) as a first-class citizen. Auto-save, improved personas, and robust offline support.

Current Release v0.0.4
2026 & Beyond

Phase 4: Collaboration & Scale

Batch processing for weekly/monthly summaries. Team sync capabilities to share insights. VS Code extension integration for seamless workflow.

Planned

What makes Tally different

🏠 Automatic local backend

Starts a local server automatically when you launch the app, with intelligent fallback to the cloud API or offline mode.

🔄 Automatic updates

Built-in Tauri updater with signed installers. Get the latest features and fixes seamlessly.

🧠 Context-aware summaries

Tally reads diffs, file types, and history to produce focused, useful summaries that match developer intent.

🔐 Privacy-first design

Local processing with Ollama by default. The app automatically checks for compatible models and guides you through setup.

⚡ Fast by design

Rust + incremental git scanning makes it instant for small diffs and responsive for large repos.

🔎 Risk & impact signals

Highlights security-sensitive changes, potential regressions, and migration risks before merge.

How Tally Works

The Architecture

Tally runs entirely on your machine. It pairs a lightweight Tauri desktop shell (a Rust core driving a native webview UI) with a specialized Node.js sidecar backend that handles complex Git operations and AI orchestration.

🖥️ Frontend (Tauri)

Lightweight, native UI that renders your insights and manages the application lifecycle.

⚙️ Sidecar (Node.js)

A hidden local server that processes Git history, diffs, and communicates with the AI model.

🧠 AI Engine (Ollama)

Your local LLM provider. Tally sends anonymized diffs to Ollama and receives structured summaries.
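For illustration, the handoff between the UI and the sidecar could look roughly like the sketch below. The endpoint name, port, and payload shape are assumptions made for this example, not Tally's actual API.

```ts
// Hypothetical sketch of the frontend → sidecar boundary.
// Endpoint, port, and field names are illustrative only.

interface SummaryRequest {
  commitHash: string; // which commit to analyze
  persona: string;    // e.g. "Engineer", "Product Manager", "Personal Journal"
}

interface SummaryResponse {
  summary: string;      // text produced by the local model
  filesChanged: number; // quick stats for the UI
}

// The Tauri UI asks the local Node.js sidecar for a summary over HTTP.
async function requestSummary(req: SummaryRequest): Promise<SummaryResponse> {
  const res = await fetch("http://127.0.0.1:4100/summarize", { // assumed port
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`Sidecar error: ${res.status}`);
  return res.json();
}
```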

From Commit to Insight

1

Git Analysis

Tally scans your repository for recent commits. It extracts the raw diffs, file names, and commit messages.

2

Context Construction

The sidecar builds a prompt containing the diff and the selected "Persona" (e.g., "Explain like I'm 5" or "Technical Deep Dive").

3

Local Inference

The prompt is sent to your local Ollama instance. No data leaves your machine. The model generates a summary.
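As a rough sketch of these three steps in TypeScript: the git commands, prompt template, and default model below are illustrative choices, while the /api/generate request shape is Ollama's standard API.

```ts
// Simplified sketch of the commit-to-insight pipeline; not Tally's internals.
import { execSync } from "node:child_process";

// Step 1: Git analysis — pull the latest commit message and diff.
function latestCommit(repoPath: string): { message: string; diff: string } {
  const message = execSync("git log -1 --pretty=%B", { cwd: repoPath }).toString().trim();
  const diff = execSync("git show --stat --patch HEAD", { cwd: repoPath }).toString();
  return { message, diff };
}

// Step 2: Context construction — combine the diff with the selected persona.
function buildPrompt(persona: string, message: string, diff: string): string {
  return `Summarize this code change for the following audience: ${persona}.\n` +
         `Commit message: ${message}\n\nDiff:\n${diff}\n\nSummary:`;
}

// Step 3: Local inference — call the Ollama instance on its default port.
async function summarize(repoPath: string, persona: string, model = "phi4"): Promise<string> {
  const { message, diff } = latestCommit(repoPath);
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    body: JSON.stringify({ model, prompt: buildPrompt(persona, message, diff), stream: false }),
  });
  const body = await res.json();
  return body.response; // Ollama returns the generated text in `response`
}
```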

🦙 Setting up Ollama (Required)

To use Tally's privacy-first AI features, you need to have Ollama running with a compatible model.

Step 1: Install Ollama

Download and install Ollama from the official website.

Download Ollama ↗

Step 2: Pull a Model

Open your terminal and run the following command to download the recommended model (phi4):

ollama pull phi4

Note: You can also use `llama3`, `mistral`, or other models. Tally will detect them automatically.

Step 3: Launch Tally

Open Tally. It will automatically connect to your running Ollama instance. Look for the "AI Ready" indicator in the status bar.

See Tally in Action

🔍 Real-time Commit Analysis

Tally analyzes your commits and provides intelligent insights in real-time. Here's what you'll see:

# Tally Analysis
📝 Commit: feat: add user authentication system
# Files changed: 3 (+127, -23)
+ Added JWT token validation
+ Implemented password hashing with bcrypt
+ Created user registration endpoint
# AI Summary:
This commit introduces a comprehensive authentication system with secure password handling, JWT tokens, and user registration. The implementation follows security best practices and includes proper error handling.
# Impact: High - Core security feature
# Risk: Medium - New authentication flow

Tally provides context-aware analysis, security insights, and productivity metrics to help you understand your code changes better.

Download Tally

Get started with AI-powered commit analysis in seconds. Choose your platform and start analyzing your code like never before.

🍎

macOS

Universal binary • macOS 10.15+

Download for macOS
🪟

Windows

Windows 10+ • x64 (Build from source)

Build Instructions
🐧

Linux

AppImage • x64 (Build from source)

Build Instructions
Signed & Verified
No Installation Required
Works Offline

Frequently Asked Questions

Is Tally really free?

Yes! Tally is completely free for local use with Ollama. You get unlimited commit analysis, all platforms, and full privacy. We also offer Pro plans with cloud AI integration and advanced features.

How does Tally connect to the backend?

When you launch Tally, it automatically starts a local backend server on your machine. This server handles all commit analysis locally. If the local server fails to start, Tally automatically falls back to the cloud API. Everything happens automatically - no configuration needed.
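For illustration only, that fallback order could be modeled like this; the health-check URLs, port, and endpoint names are hypothetical, not Tally's real configuration.

```ts
// Hypothetical sketch of the local → cloud → offline fallback described above.
type Backend = { kind: "local" | "cloud" | "offline"; baseUrl?: string };

async function isReachable(url: string, timeoutMs = 1500): Promise<boolean> {
  try {
    const res = await fetch(url, { signal: AbortSignal.timeout(timeoutMs) });
    return res.ok;
  } catch {
    return false; // unreachable or timed out
  }
}

async function pickBackend(): Promise<Backend> {
  // 1. Prefer the local sidecar started alongside the app.
  if (await isReachable("http://127.0.0.1:4100/health")) {
    return { kind: "local", baseUrl: "http://127.0.0.1:4100" };
  }
  // 2. Fall back to the hosted API if the local server did not start.
  if (await isReachable("https://api.example.com/health")) {
    return { kind: "cloud", baseUrl: "https://api.example.com" };
  }
  // 3. Otherwise run in offline mode without AI summaries.
  return { kind: "offline" };
}
```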

How do automatic updates work?

Tally uses Tauri's built-in updater with signed installers. Updates are checked automatically, and you'll be notified when new versions are available. All updates are cryptographically signed for security.

How does the privacy-first approach work?

By default, Tally processes everything locally using Ollama. Your code never leaves your machine unless you explicitly choose to use cloud AI providers. All analysis happens on your device.

What AI models are supported and how does Tally check for them?

Tally automatically checks for compatible Ollama models when you launch it. Compatible models include: phi4-mini, llama3, phi3, mistral, and codellama. If no compatible model is found, Tally will notify you with installation instructions. For cloud: OpenAI GPT-4, Anthropic Claude, and other major providers are supported.
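As an illustration of how such a check can work, Ollama exposes a standard /api/tags endpoint that lists locally pulled models; the sketch below matches them against the compatibility list from this answer (it is not Tally's actual detection code).

```ts
// Sketch: detect a compatible local model via Ollama's /api/tags endpoint.
const COMPATIBLE = ["phi4-mini", "llama3", "phi3", "mistral", "codellama"];

async function findCompatibleModel(): Promise<string | null> {
  const res = await fetch("http://localhost:11434/api/tags"); // lists pulled models
  if (!res.ok) return null;                                   // Ollama not running
  const { models } = (await res.json()) as { models: { name: string }[] };
  // Names look like "llama3:latest" or "phi4-mini:3.8b"; match on the base name.
  const match = models.find((m) => COMPATIBLE.some((c) => m.name.startsWith(c)));
  return match ? match.name : null;
}
```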

Is there a team version?

Yes! Our Pro plan includes team collaboration features, shared analytics, and centralized configuration. Enterprise plans offer on-premise deployment and custom AI models.

How do I get started?

1. Download and install Tally.
2. Launch the app; the backend starts automatically.
3. (Optional) Install Ollama and run `ollama pull phi4-mini:3.8b` for AI summaries; Tally will check for models and guide you.
4. Connect your Git repository and start analyzing.

The local backend handles everything.