<div align="center">
<img src="assets/logo-ver2.png" alt="DeepTutor" width="140" style="border-radius: 15px;">
# DeepTutor: Agent-Native Personalized Tutoring
<a href="https://trendshift.io/repositories/17099" target="_blank"><img src="https://trendshift.io/api/badge/repositories/17099" alt="HKUDS%2FDeepTutor | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
[![Python 3.11+](https://img.shields.io/badge/Python-3.11%2B-3776AB?style=flat-square&logo=python&logoColor=white)](https://www.python.org/downloads/)
[![Next.js 16](https://img.shields.io/badge/Next.js-16-000000?style=flat-square&logo=next.js&logoColor=white)](https://nextjs.org/)
[![License](https://img.shields.io/badge/License-Apache_2.0-blue?style=flat-square)](LICENSE)
[![GitHub release](https://img.shields.io/github/v/release/HKUDS/DeepTutor?style=flat-square&color=brightgreen)](https://github.com/HKUDS/DeepTutor/releases)
[![arXiv](https://img.shields.io/badge/arXiv-Coming_Soon-b31b1b?style=flat-square&logo=arxiv&logoColor=white)](#)
[![Discord](https://img.shields.io/badge/Discord-Community-5865F2?style=flat-square&logo=discord&logoColor=white)](https://discord.gg/eRsjPgMU4t)
[![Feishu](https://img.shields.io/badge/Feishu-Group-00D4AA?style=flat-square&logo=feishu&logoColor=white)](./Communication.md)
[![WeChat](https://img.shields.io/badge/WeChat-Group-07C160?style=flat-square&logo=wechat&logoColor=white)](https://github.com/HKUDS/DeepTutor/issues/78)
[Features](#-key-features) · [Get Started](#-get-started) · [Explore](#-explore-deeptutor) · [TutorBot](#-tutorbot--persistent-autonomous-ai-tutors) · [CLI](#%EF%B8%8F-deeptutor-cli--agent-native-interface) · [Community](#-community--ecosystem)
[🇨🇳 中文](assets/README/README_CN.md) · [🇯🇵 日本語](assets/README/README_JA.md) · [🇪🇸 Español](assets/README/README_ES.md) · [🇫🇷 Français](assets/README/README_FR.md) · [🇸🇦 العربية](assets/README/README_AR.md) · [🇷🇺 Русский](assets/README/README_RU.md) · [🇮🇳 हिन्दी](assets/README/README_HI.md) · [🇵🇹 Português](assets/README/README_PT.md) · [🇹🇭 ภาษาไทย](assets/README/README_TH.md)
</div>
---
### 📦 Releases
> **[2026.4.18]** [v1.1.2](https://github.com/HKUDS/DeepTutor/releases/tag/v1.1.2) — Schema-driven Channels tab with secret masking, RAG collapsed to single pipeline, RAG/KB consistency hardening, externalized chat prompts, and Thai README.
> **[2026.4.17]** [v1.1.1](https://github.com/HKUDS/DeepTutor/releases/tag/v1.1.1) — Universal "Answer now" across all capabilities, Co-Writer scroll sync, Save-to-Notebook message selection, unified settings panel, streaming Stop button, and TutorBot atomic config writes.
> **[2026.4.15]** [v1.1.0](https://github.com/HKUDS/DeepTutor/releases/tag/v1.1.0) — LaTeX block math parsing overhaul, LLM diagnostic probe via agents.yaml, extra headers forwarding fix, SaveToNotebook UUID fix, and Docker + local LLM guidance.
> **[2026.4.14]** [v1.1.0-beta](https://github.com/HKUDS/DeepTutor/releases/tag/v1.1.0-beta) — URL-based bookmarkable sessions, Snow theme, WebSocket heartbeat & auto-reconnect, ChatComposer performance fix, embedding provider registry overhaul, and Serper search provider.
> **[2026.4.13]** [v1.0.3](https://github.com/HKUDS/DeepTutor/releases/tag/v1.0.3) — Question Notebook with bookmarks & categories, Mermaid in Visualize, embedding mismatch detection, Qwen/vLLM compatibility, LM Studio & llama.cpp support, and Glass theme.
> **[2026.4.11]** [v1.0.2](https://github.com/HKUDS/DeepTutor/releases/tag/v1.0.2) — Search consolidation with SearXNG fallback, provider switch fix, and frontend resource leak fixes.
> **[2026.4.10]** [v1.0.1](https://github.com/HKUDS/DeepTutor/releases/tag/v1.0.1) — Visualize capability (Chart.js/SVG), quiz duplicate prevention, and o4-mini model support.
> **[2026.4.10]** [v1.0.0-beta.4](https://github.com/HKUDS/DeepTutor/releases/tag/v1.0.0-beta.4) — Embedding progress tracking with rate-limit retry, cross-platform dependency fixes, and MIME validation fix.
> **[2026.4.8]** [v1.0.0-beta.3](https://github.com/HKUDS/DeepTutor/releases/tag/v1.0.0-beta.3) — Native OpenAI/Anthropic SDK (drop litellm), Windows Math Animator support, robust JSON parsing, and full Chinese i18n.
> **[2026.4.7]** [v1.0.0-beta.2](https://github.com/HKUDS/DeepTutor/releases/tag/v1.0.0-beta.2) — Hot settings reload, MinerU nested output, WebSocket fix, and Python 3.11+ minimum.
> **[2026.4.4]** [v1.0.0-beta.1](https://github.com/HKUDS/DeepTutor/releases/tag/v1.0.0-beta.1) — Agent-native architecture rewrite (~200k lines): Tools + Capabilities plugin model, CLI & SDK, TutorBot, Co-Writer, Guided Learning, and persistent memory.
<details>
<summary><b>Past releases</b></summary>

> **[2026.1.23]** [v0.6.0](https://github.com/HKUDS/DeepTutor/releases/tag/v0.6.0) — Session persistence, incremental document upload, flexible RAG pipeline import, and full Chinese localization.
> **[2026.1.18]** [v0.5.2](https://github.com/HKUDS/DeepTutor/releases/tag/v0.5.2) — Docling support for RAG-Anything, logging system optimization, and bug fixes.
> **[2026.1.15]** [v0.5.0](https://github.com/HKUDS/DeepTutor/releases/tag/v0.5.0) — Unified service configuration, RAG pipeline selection per knowledge base, question generation overhaul, and sidebar customization.
> **[2026.1.9]** [v0.4.0](https://github.com/HKUDS/DeepTutor/releases/tag/v0.4.0) — Multi-provider LLM & embedding support, new home page, RAG module decoupling, and environment variable refactor.
> **[2026.1.5]** [v0.3.0](https://github.com/HKUDS/DeepTutor/releases/tag/v0.3.0) — Unified PromptManager architecture, GitHub Actions CI/CD, and pre-built Docker images on GHCR.
> **[2026.1.2]** [v0.2.0](https://github.com/HKUDS/DeepTutor/releases/tag/v0.2.0) — Docker deployment, Next.js 16 & React 19 upgrade, WebSocket security hardening, and critical vulnerability fixes.
</details>
### 📰 News
> **[2026.4.4]** Long time no see! ✨ DeepTutor v1.0.0 is finally here — an agent-native evolution featuring a ground-up architecture rewrite, TutorBot, and flexible mode switching under the Apache-2.0 license. A new chapter begins, and our story continues!
> **[2026.2.6]** 🚀 We've reached 10k stars in just 39 days! A huge thank you to our incredible community for the support!
> **[2026.1.1]** Happy New Year! Join our [Discord](https://discord.gg/eRsjPgMU4t), [WeChat](https://github.com/HKUDS/DeepTutor/issues/78), or [Discussions](https://github.com/HKUDS/DeepTutor/discussions) — let's shape the future of DeepTutor together!
> **[2025.12.29]** DeepTutor is officially released!
## ✨ Key Features
- **Unified Chat Workspace** — Five modes, one thread. Chat, Deep Solve, Quiz Generation, Deep Research, and Math Animator share the same context — start a conversation, escalate to multi-agent problem solving, generate quizzes, then deep-dive into research, all without losing a single message.
- **Personal TutorBots** — Not chatbots — autonomous tutors. Each TutorBot lives in its own workspace with its own memory, personality, and skill set. They set reminders, learn new abilities, and evolve as you grow. Powered by [nanobot](https://github.com/HKUDS/nanobot).
- **AI Co-Writer** — A Markdown editor where AI is a first-class collaborator. Select text, rewrite, expand, or summarize — drawing from your knowledge base and the web. Every piece feeds back into your learning ecosystem.
- **Guided Learning** — Turn your materials into structured, visual learning journeys. DeepTutor designs multi-step plans, generates interactive pages for each knowledge point, and lets you discuss alongside each step.
- **Knowledge Hub** — Upload PDFs, Markdown, and text files to build RAG-ready knowledge bases. Organize insights across sessions in color-coded notebooks. Your documents don't just sit there — they actively power every conversation.
- **Persistent Memory** — DeepTutor builds a living profile of you: what you've studied, how you learn, and where you're heading. Shared across all features and TutorBots, it gets sharper with every interaction.
- **Agent-Native CLI** — Every capability, knowledge base, session, and TutorBot is one command away. Rich terminal output for humans, structured JSON for AI agents and pipelines. Hand DeepTutor a [`SKILL.md`](SKILL.md) and your agents can operate it autonomously.
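Because the CLI emits structured JSON for agents and pipelines, downstream tooling can consume it with ordinary JSON parsing. A minimal sketch of that consumption side — the payload below is invented for illustration and is not DeepTutor's actual response schema:

```python
import json

# Hypothetical CLI output; field names are illustrative only,
# not DeepTutor's real schema.
raw = """
{"status": "ok",
 "sessions": [
   {"id": "sess-1", "title": "Linear Algebra Review"},
   {"id": "sess-2", "title": "Thermodynamics Quiz"}
 ]}
"""

payload = json.loads(raw)
titles = [s["title"] for s in payload["sessions"]]
print(titles)  # a pipeline step could now act on these sessions
```

The point is simply that machine-readable output lets an agent branch on `status` and iterate over results without scraping terminal text.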
---
## 🚀 Get Started
### Prerequisites
Before you begin, make sure the following are installed on your system:
| Requirement | Version | Check | Notes |
|:---|:---|:---|:---|
| [Git](https://git-scm.com/) | Any | `git --version` | For cloning the repository |
| [Python](https://www.python.org/downloads/) | 3.11+ | `python --version` | Backend runtime |
| [Node.js](https://nodejs.org/) | 18+ | `node --version` | Frontend build (not needed for CLI-only or Docker) |
| [npm](https://www.npmjs.com/) | 9+ | `npm --version` | Bundled with Node.js |
You'll also need an **API key** from at least one LLM provider (e.g. [OpenAI](https://platform.openai.com/api-keys), [DeepSeek](https://platform.deepseek.com/), [Anthropic](https://console.anthropic.com/)). The Setup Tour will walk you through entering it.
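If you want to script the Python check from the table above (3.11+), a minimal sketch — the helper name is ours for illustration, not part of the DeepTutor codebase:

```python
import sys

def meets_python_floor(version=None, minimum=(3, 11)):
    """Return True when the interpreter meets the 3.11+ requirement.

    Illustrative helper only; DeepTutor's own setup tour performs
    its own environment checks.
    """
    version = version if version is not None else sys.version_info
    return tuple(version[:2]) >= minimum

if __name__ == "__main__":
    print(meets_python_floor())
```

Comparing only the `(major, minor)` prefix avoids false negatives from patch or release-level fields in `sys.version_info`.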
### Option A — Setup Tour (Recommended)
A **single interactive script** that walks you through everything: dependency installation, environment configuration, live connection testing, and launch. No manual `.env` editing needed.
```bash
git clone https://github.com/HKUDS/DeepTutor.git
cd DeepTutor
# Create a Python virtual environment (pick one):
conda create -n deeptutor python=3.11 && conda activate deeptutor # if you use Anaconda/Miniconda
python -m venv .venv && source .venv/bin/activate # otherwise (macOS/Linux)
python -m venv .venv && .venv\Scripts\activate # otherwise (Windows)

# Launch the guided tour
python scripts/start_tour.py
```
The tour asks how you'd like to use DeepTutor:
- **Web mode** (recommended) — Installs all dependencies (pip + npm), spins up a temporary server, and opens the **Settings** page in your browser. A four-step guided tour walks you through LLM, Embedding, and Search provider setup with live connection testing. Once complete, DeepTutor restarts automatically with your configuration.
- **CLI mode** — A fully interactive terminal flow: choose a dependency profile, install dependencies, configure providers, verify connections, and apply — all without leaving the shell.
Either way, you end up with a running DeepTutor at [http://localhost:3782](http://localhost:3782).
> **Daily launch** — The tour is only needed once. From now on, start DeepTutor with:
>
> ```bash
> python scripts/start_web.py
> ```
>
> This boots both the backend and frontend in one command and opens the browser automatically. Re-run `start_tour.py` only if you need to reconfigure providers or reinstall dependencies.
### Option B — Manual Local Install
If you prefer full control, install and configure everything yourself.
**1. Install dependencies**
```bash
git clone https://github.com/HKUDS/DeepTutor.git
cd DeepTutor
# Create & activate a Python virtual environment (same as Option A)
conda create -n deeptutor python=3.11 && conda activate deeptutor
# Install DeepTutor with backend + web server dependencies
pip install -e ".[server]"
# Install frontend dependencies (requires Node.js 18+)
cd web && npm install && cd ..
```
**2. Configure environment**
```bash
cp .env.example .env
```
Edit `.env` and fill in at least the required fields:
```dotenv
# LLM (Required)
LLM_BINDING=openai
LLM_MODEL=gpt-4o-mini
LLM_API_KEY=sk-xxx
LLM_HOST=https://api.openai.com/v1
# Embedding (Required for Knowledge Base)
EMBEDDING_BINDING=openai
EMBEDDING_MODEL=text-embedding-3-large
EMBEDDING_API_KEY=sk-xxx
EMBEDDING_HOST=https://api.openai.com/v1
EMBEDDING_DIMENSION=3072
```
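If you script your setup, it can help to fail fast when a required key is missing before any service starts. The sketch below is not DeepTutor's actual loader — just a minimal, self-contained illustration of validating the required fields listed above:

```python
# Minimal sketch (not DeepTutor's loader): parse .env-style text and report
# which of the required configuration keys are missing or empty.
REQUIRED = [
    "LLM_BINDING", "LLM_MODEL", "LLM_API_KEY", "LLM_HOST",
    "EMBEDDING_BINDING", "EMBEDDING_MODEL", "EMBEDDING_API_KEY",
    "EMBEDDING_HOST", "EMBEDDING_DIMENSION",
]

def parse_dotenv(text: str) -> dict:
    """Parse KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

def missing_keys(env: dict) -> list:
    """Return required keys that are absent or empty."""
    return [k for k in REQUIRED if not env.get(k)]

sample = "LLM_BINDING=openai\nLLM_MODEL=gpt-4o-mini\n"
print(len(missing_keys(parse_dotenv(sample))))  # → 7
```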
<details>
<summary><b>Supported LLM Providers</b></summary>

| Provider | Binding | Default Base URL |
|:--|:--|:--|
| AiHubMix | `aihubmix` | `https://aihubmix.com/v1` |
| Anthropic | `anthropic` | `https://api.anthropic.com/v1` |
| Azure OpenAI | `azure_openai` | — |
| BytePlus | `byteplus` | `https://ark.ap-southeast.bytepluses.com/api/v3` |
| BytePlus Coding Plan | `byteplus_coding_plan` | `https://ark.ap-southeast.bytepluses.com/api/coding/v3` |
| Custom (OpenAI-compat) | `custom` | — |
| DashScope (Qwen) | `dashscope` | `https://dashscope.aliyuncs.com/compatible-mode/v1` |
| DeepSeek | `deepseek` | `https://api.deepseek.com` |
| Gemini | `gemini` | `https://generativelanguage.googleapis.com/v1beta/openai/` |
| GitHub Copilot | `github_copilot` | `https://api.githubcopilot.com` |
| Groq | `groq` | `https://api.groq.com/openai/v1` |
| llama.cpp | `llama_cpp` | `http://localhost:8080/v1` |
| LM Studio | `lm_studio` | `http://localhost:1234/v1` |
| MiniMax | `minimax` | `https://api.minimax.io/v1` |
| Mistral | `mistral` | `https://api.mistral.ai/v1` |
| Moonshot (Kimi) | `moonshot` | `https://api.moonshot.ai/v1` |
| Ollama | `ollama` | `http://localhost:11434/v1` |
| OpenAI | `openai` | `https://api.openai.com/v1` |
| OpenAI Codex | `openai_codex` | `https://chatgpt.com/backend-api` |
| OpenRouter | `openrouter` | `https://openrouter.ai/api/v1` |
| OpenVINO Model Server | `ovms` | `http://localhost:8000/v3` |
| Qianfan (Ernie) | `qianfan` | `https://qianfan.baidubce.com/v2` |
| SiliconFlow | `siliconflow` | `https://api.siliconflow.cn/v1` |
| Step Fun | `stepfun` | `https://api.stepfun.com/v1` |
| vLLM | `vllm` | `http://localhost:8000/v1` |
| VolcEngine | `volcengine` | `https://ark.cn-beijing.volces.com/api/v3` |
| VolcEngine Coding Plan | `volcengine_coding_plan` | `https://ark.cn-beijing.volces.com/api/coding/v3` |
| Xiaomi MIMO | `xiaomi_mimo` | `https://api.xiaomimimo.com/v1` |
| Zhipu AI (GLM) | `zhipu` | `https://open.bigmodel.cn/api/paas/v4` |
</details>
<details>
<summary><b>Supported Embedding Providers</b></summary>

| Provider | Binding | Model Example | Default Dim |
|:--|:--|:--|:--|
| OpenAI | `openai` | `text-embedding-3-large` | 3072 |
| Azure OpenAI | `azure_openai` | deployment name | — |
| Cohere | `cohere` | `embed-v4.0` | 1024 |
| Jina | `jina` | `jina-embeddings-v3` | 1024 |
| Ollama | `ollama` | `nomic-embed-text` | 768 |
| vLLM / LM Studio | `vllm` | Any embedding model | — |
| Any OpenAI-compatible | `custom` | — | — |
OpenAI-compatible providers (DashScope, SiliconFlow, etc.) work via the `custom` or `openai` binding.
</details>
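One detail worth checking up front: `EMBEDDING_DIMENSION` must equal the vector length your provider actually returns, or the vector index is built with the wrong shape. A hypothetical sanity check (not DeepTutor's code) looks like this:

```python
# Hypothetical check (not DeepTutor's code): verify the configured
# EMBEDDING_DIMENSION matches the vector length the provider returns.
def check_dimension(vector: list, expected_dim: int) -> bool:
    if len(vector) != expected_dim:
        raise ValueError(
            f"EMBEDDING_DIMENSION={expected_dim}, but the provider "
            f"returned {len(vector)} dimensions"
        )
    return True

fake_embedding = [0.0] * 3072  # text-embedding-3-large returns 3072 dims
print(check_dimension(fake_embedding, 3072))  # → True
```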
<details>
<summary><b>Supported Web Search Providers</b></summary>

| Provider | Env Key | Notes |
|:--|:--|:--|
| Brave | `BRAVE_API_KEY` | Recommended, free tier available |
| Tavily | `TAVILY_API_KEY` | |
| Jina | `JINA_API_KEY` | |
| SearXNG | — | Self-hosted, no API key needed |
| DuckDuckGo | — | No API key needed |
| Perplexity | `PERPLEXITY_API_KEY` | Requires API key |
</details>
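Web search is optional. To enable it, pick a provider from the table above and add its key to `.env` — for example Tavily (the key value below is a placeholder):

```dotenv
# Optional web search (Tavily shown as an example)
SEARCH_PROVIDER=tavily
TAVILY_API_KEY=your-tavily-key
```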
**3. Start services**
The quickest way to launch everything:
```bash
python scripts/start_web.py
```
This starts both the backend and frontend and opens the browser automatically.
Alternatively, start each service manually in separate terminals:
```bash
# Backend (FastAPI)
python -m deeptutor.api.run_server
# Frontend (Next.js) — in a separate terminal
cd web && npm run dev -- -p 3782
```
| Service | Default Port |
|:---:|:---:|
| Backend | `8001` |
| Frontend | `3782` |
Open [http://localhost:3782](http://localhost:3782) and you're ready to go.
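When scripting the startup (CI jobs, remote machines), it can be useful to wait until the ports above actually accept connections before proceeding. This is a generic TCP readiness probe, not part of DeepTutor:

```python
# Generic readiness probe: poll a TCP port until it accepts connections
# (default ports assumed from the table above: backend 8001, frontend 3782).
import socket
import time

def wait_for_port(port: int, host: str = "127.0.0.1",
                  timeout: float = 30.0) -> bool:
    """Return True once (host, port) accepts a TCP connection."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1):
                return True
        except OSError:
            time.sleep(0.5)  # not up yet; retry until the deadline
    return False
```

Usage: `wait_for_port(8001)` blocks until the backend is reachable (or 30 s elapse), then `wait_for_port(3782)` does the same for the frontend.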
### Option C — Docker Deployment
Docker wraps the backend and frontend into a single container — no local Python or Node.js required. You only need [Docker Desktop](https://www.docker.com/products/docker-desktop/) (or Docker Engine + Compose on Linux).
**1. Configure environment variables** (required for both options below)
```bash
git clone https://github.com/HKUDS/DeepTutor.git
cd DeepTutor
cp .env.example .env
```
Edit `.env` and fill in at least the required fields (same as [Option B](#option-b--manual-local-install) above).
**2a. Pull official image (recommended)**
Official images are published to [GitHub Container Registry](https://github.com/HKUDS/DeepTutor/pkgs/container/deeptutor) on every release, built for `linux/amd64` and `linux/arm64`.
```bash
docker compose -f docker-compose.ghcr.yml up -d
```
To pin a specific version, edit the image tag in `docker-compose.ghcr.yml`:
```yaml
image: ghcr.io/hkuds/deeptutor:1.0.0 # or :latest
```
**2b. Build from source**
```bash
docker compose up -d
```
This builds the image locally from `Dockerfile` and starts the container.
**3. Verify & manage**
Open [http://localhost:3782](http://localhost:3782) once the container is healthy.
```bash
docker compose logs -f # tail logs
docker compose down # stop and remove container
```
<details>
<summary><b>Cloud / remote server deployment</b></summary>

When deploying to a remote server, the browser needs to know the public URL of the backend API. Add one more variable to your `.env`:
```dotenv
# Set to the public URL where the backend is reachable
NEXT_PUBLIC_API_BASE_EXTERNAL=https://your-server.com:8001
```
The frontend startup script applies this value at runtime — no rebuild needed.
</details>
<details>
<summary><b>Development mode (hot-reload)</b></summary>

Layer the dev override to mount source code and enable hot-reload for both services:
```bash
docker compose -f docker-compose.yml -f docker-compose.dev.yml up
```
Changes to `deeptutor/`, `deeptutor_cli/`, `scripts/`, and `web/` are reflected immediately.
</details>
<details>
<summary><b>Custom ports</b></summary>

Override the default ports in `.env`:
```dotenv
BACKEND_PORT=9001
FRONTEND_PORT=4000
```
Then restart:
```bash
docker compose up -d # or docker compose -f docker-compose.ghcr.yml up -d
```
</details>
<details>
<summary><b>Data persistence</b></summary>

User data and knowledge bases are persisted via Docker volumes mapped to local directories:
| Container path | Host path | Content |
|:---|:---|:---|
| `/app/data/user` | `./data/user` | Settings, memory, workspace, sessions, logs |
| `/app/data/knowledge_bases` | `./data/knowledge_bases` | Uploaded documents & vector indices |
These directories survive `docker compose down` and are reused on the next `docker compose up`.
</details>
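Before upgrading or pruning containers, you may want a backup of those two directories. A small sketch (paths assumed from the volume table above; this is not an official tool):

```python
# Sketch: archive the persisted DeepTutor data directories. Run from the
# repository root, ideally after `docker compose down`.
import datetime
import pathlib
import shutil

def backup_data(root: str = ".") -> list:
    """Create .tar.gz archives of data/user and data/knowledge_bases."""
    stamp = datetime.date.today().isoformat()
    base = pathlib.Path(root)
    archives = []
    for sub in ("data/user", "data/knowledge_bases"):
        src = base / sub
        if src.is_dir():
            dest = base / f"deeptutor-{src.name}-{stamp}"
            # shutil.make_archive appends the .tar.gz suffix itself
            archives.append(shutil.make_archive(str(dest), "gztar", src))
    return archives
```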
<details>
<summary><b>Environment variables reference</b></summary>

| Variable | Required | Description |
|:---|:---:|:---|
| `LLM_BINDING` | **Yes** | LLM provider (`openai`, `anthropic`, etc.) |
| `LLM_MODEL` | **Yes** | Model name (e.g. `gpt-4o`) |
| `LLM_API_KEY` | **Yes** | Your LLM API key |
| `LLM_HOST` | **Yes** | API endpoint URL |
| `EMBEDDING_BINDING` | **Yes** | Embedding provider |
| `EMBEDDING_MODEL` | **Yes** | Embedding model name |
| `EMBEDDING_API_KEY` | **Yes** | Embedding API key |
| `EMBEDDING_HOST` | **Yes** | Embedding endpoint |
| `EMBEDDING_DIMENSION` | **Yes** | Vector dimension |
| `SEARCH_PROVIDER` | No | Search provider (`tavily`, `jina`, `serper`, `perplexity`, etc.) |
| `SEARCH_API_KEY` | No | Search API key |
| `BACKEND_PORT` | No | Backend port (default `8001`) |
| `FRONTEND_PORT` | No | Frontend port (default `3782`) |
| `NEXT_PUBLIC_API_BASE_EXTERNAL` | No | Public backend URL for cloud deployment |
| `DISABLE_SSL_VERIFY` | No | Disable SSL verification (default `false`) |
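Put together, a minimal `.env` might look like the sketch below. All values are illustrative placeholders (the model names, endpoints, and dimension shown here assume an OpenAI-compatible setup); only the variable names come from the table above.

```bash
# Required: LLM provider
LLM_BINDING=openai
LLM_MODEL=gpt-4o
LLM_API_KEY=your-llm-api-key
LLM_HOST=https://api.openai.com/v1

# Required: embeddings
EMBEDDING_BINDING=openai
EMBEDDING_MODEL=text-embedding-3-small
EMBEDDING_API_KEY=your-embedding-api-key
EMBEDDING_HOST=https://api.openai.com/v1
EMBEDDING_DIMENSION=1536   # must match the chosen embedding model

# Optional: web search and ports
SEARCH_PROVIDER=tavily
SEARCH_API_KEY=your-search-api-key
BACKEND_PORT=8001
FRONTEND_PORT=3782
```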
</details>
### Option D — CLI Only
If you just want the CLI without the web frontend:
```bash
pip install -e ".[cli]"
```
You still need to configure your LLM provider. The quickest way:
```bash
cp .env.example .env # then edit .env to fill in your API keys
```
Once configured, you're ready to go:
```bash
deeptutor chat # Interactive REPL
deeptutor run chat "Explain Fourier transform" # One-shot capability
deeptutor run deep_solve "Solve x^2 = 4" # Multi-agent problem solving
deeptutor kb create my-kb --doc textbook.pdf # Build a knowledge base
```
> See [DeepTutor CLI](#%EF%B8%8F-deeptutor-cli--agent-native-interface) for the full feature guide and command reference.
### What's Next?
Once DeepTutor is running, here are some things to try first:
1. **Upload a document** — Go to the Knowledge page and create a knowledge base from a PDF or Markdown file.
2. **Start a conversation** — Open Chat, select your knowledge base, and ask a question.
3. **Try Deep Solve** — Switch to Deep Solve mode for a step-by-step, multi-agent solution with citations.
4. **Create a TutorBot** — Build a persistent AI tutor with its own personality and memory.
Explore all features in the [Explore DeepTutor](#-explore-deeptutor) section below.
---
## 📖 Explore DeepTutor
<div align="center">
<img src="assets/figs/deeptutor-architecture.png" alt="DeepTutor Architecture" width="800">
</div>
### 💬 Chat — Unified Intelligent Workspace
<div align="center">
<img src="assets/figs/dt-chat.png" alt="Chat Workspace" width="800">
</div>
Five distinct modes coexist in a single workspace, bound by a **unified context management system**. Conversation history, knowledge bases, and references persist across modes — switch between them freely within the same topic, whenever the moment calls for it.
| Mode | What It Does |
|:---|:---|
| **Chat** | Fluid, tool-augmented conversation. Choose from RAG retrieval, web search, code execution, deep reasoning, brainstorming, and paper search — mix and match as needed. |
| **Deep Solve** | Multi-agent problem solving: plan, investigate, solve, and verify — with precise source citations at every step. |
| **Quiz Generation** | Generate assessments grounded in your knowledge base, with built-in validation. |
| **Deep Research** | Decompose a topic into subtopics, dispatch parallel research agents across RAG, web, and academic papers, and produce a fully cited report. |
| **Math Animator** | Turn mathematical concepts into visual animations and storyboards powered by Manim. |
Tools are **decoupled from workflows** — in every mode, you decide which tools to enable, how many to use, or whether to use any at all. The workflow orchestrates the reasoning; the tools are yours to compose.
> Start with a quick chat question, escalate to Deep Solve when it gets hard, generate quiz questions to test yourself, then launch a Deep Research to go deeper — all in one continuous thread.
### ✍️ Co-Writer — AI Inside Your Editor
<div align="center">
<img src="assets/figs/dt-cowriter.png" alt="Co-Writer" width="800">
</div>
Co-Writer brings the intelligence of Chat directly into a writing surface. It is a full-featured Markdown editor where AI is a first-class collaborator — not a sidebar, not an afterthought.
Select any text and choose **Rewrite**, **Expand**, or **Shorten** — optionally drawing context from your knowledge base or the web. The editing flow is non-destructive with full undo/redo, and every piece you write can be saved straight to your notebooks, feeding back into your learning ecosystem.
### 🎓 Guided Learning — Visual, Step-by-Step Mastery
<div align="center">
<img src="assets/figs/dt-guide.png" alt="Guided Learning" width="800">
</div>
Guided Learning turns your personal materials into structured, multi-step learning journeys. Provide a topic, optionally link notebook records, and DeepTutor will:
1. **Design a learning plan** — Identify 3–5 progressive knowledge points from your materials.
2. **Generate interactive pages** — Each point becomes a rich visual HTML page with explanations, diagrams, and examples.
3. **Enable contextual Q&A** — Chat alongside each step for deeper exploration.
4. **Summarize your progress** — Upon completion, receive a learning summary of everything you've covered.
Sessions are persistent — pause, resume, or revisit any step at any time.
### 📚 Knowledge Management — Your Learning Infrastructure
<div align="center">
<img src="assets/figs/dt-knowledge.png" alt="Knowledge Management" width="800">
</div>
Knowledge is where you build and manage the document collections that power everything else in DeepTutor.
- **Knowledge Bases** — Upload PDF, TXT, or Markdown files to create searchable, RAG-ready collections. Add documents incrementally as your library grows.
- **Notebooks** — Organize learning records across sessions. Save insights from Chat, Guided Learning, Co-Writer, or Deep Research into categorized, color-coded notebooks.
Your knowledge base is not passive storage — it actively participates in every conversation, every research session, and every learning path you create.
### 🧠 Memory — DeepTutor Learns As You Learn
<div align="center">
<img src="assets/figs/dt-memory.png" alt="Memory" width="800">
</div>
DeepTutor maintains a persistent, evolving understanding of you through two complementary dimensions:
- **Summary** — A running digest of your learning progress: what you've studied, which topics you've explored, and how your understanding has developed.
- **Profile** — Your learner identity: preferences, knowledge level, goals, and communication style — automatically refined through every interaction.
Memory is shared across all features and all your TutorBots. The more you use DeepTutor, the more personalized and effective it becomes.
---
### 🦞 TutorBot — Persistent, Autonomous AI Tutors
<div align="center">
<img src="assets/figs/tutorbot-architecture.png" alt="TutorBot Architecture" width="800">
</div>
TutorBot is not a chatbot — it is a **persistent, multi-instance agent** built on [nanobot](https://github.com/HKUDS/nanobot). Each TutorBot runs its own agent loop with independent workspace, memory, and personality. Create a Socratic math tutor, a patient writing coach, and a rigorous research advisor — all running simultaneously, each evolving with you.
<div align="center">
<img src="assets/figs/tb.png" alt="TutorBot" width="800">
</div>
- **Soul Templates** — Define your tutor's personality, tone, and teaching philosophy through editable Soul files. Choose from built-in archetypes (Socratic, encouraging, rigorous) or craft your own — the soul shapes every response.
- **Independent Workspace** — Each bot has its own directory with separate memory, sessions, skills, and configuration — fully isolated yet able to access DeepTutor's shared knowledge layer.
- **Proactive Heartbeat** — Bots don't just respond — they initiate. The built-in Heartbeat system enables recurring study check-ins, review reminders, and scheduled tasks. Your tutor shows up even when you don't.
- **Full Tool Access** — Every bot reaches into DeepTutor's complete toolkit: RAG retrieval, code execution, web search, academic paper search, deep reasoning, and brainstorming.
- **Skill Learning** — Teach your bot new abilities by adding skill files to its workspace. As your needs evolve, so does your tutor's capability.
- **Multi-Channel Presence** — Connect bots to Telegram, Discord, Slack, Feishu, WeChat Work, DingTalk, Email, and more. Your tutor meets you wherever you are.
- **Team & Sub-Agents** — Spawn background sub-agents or orchestrate multi-agent teams within a single bot for complex, long-running tasks.
```bash
deeptutor bot create math-tutor --persona "Socratic math teacher who uses probing questions"
deeptutor bot create writing-coach --persona "Patient, detail-oriented writing mentor"
deeptutor bot list # See all your active tutors
```
---
### ⌨️ DeepTutor CLI — Agent-Native Interface
<div align="center">
<img src="assets/figs/cli-architecture.png" alt="DeepTutor CLI Architecture" width="800">
</div>
DeepTutor is fully CLI-native. Every capability, knowledge base, session, memory, and TutorBot is one command away — no browser required. The CLI serves both humans (with rich terminal rendering) and AI agents (with structured JSON output).
Hand the [`SKILL.md`](SKILL.md) at the project root to any tool-using agent ([nanobot](https://github.com/HKUDS/nanobot), or any LLM with tool access), and it can configure and operate DeepTutor autonomously.
**One-shot execution** — Run any capability directly from the terminal:
```bash
deeptutor run chat "Explain the Fourier transform" -t rag --kb textbook
deeptutor run deep_solve "Prove that √2 is irrational" -t reason
deeptutor run deep_question "Linear algebra" --config num_questions=5
deeptutor run deep_research "Attention mechanisms in transformers"
```
**Interactive REPL** — A persistent chat session with live mode switching:
```bash
deeptutor chat --capability deep_solve --kb my-kb
# Inside the REPL: /cap, /tool, /kb, /history, /notebook, /config to switch on the fly
```
**Knowledge base lifecycle** — Build, query, and manage RAG-ready collections entirely from the terminal:
```bash
deeptutor kb create my-kb --doc textbook.pdf # Create from document
deeptutor kb add my-kb --docs-dir ./papers/ # Add a folder of papers
deeptutor kb search my-kb "gradient descent" # Search directly
deeptutor kb set-default my-kb # Set as default for all commands
```
**Dual output mode** — Rich rendering for humans, structured JSON for pipelines:
```bash
deeptutor run chat "Summarize chapter 3" -f rich # Colored, formatted output
deeptutor run chat "Summarize chapter 3" -f json # Line-delimited JSON events
```
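For pipelines and agents, the `-f json` stream can be consumed line by line. The sketch below is a minimal, hedged example: it assumes only that each output line is a standalone JSON object (the exact event schema is not specified here), and the subprocess usage in the comments assumes `deeptutor` is on your `PATH`.

```python
import json


def iter_events(lines):
    """Parse line-delimited JSON events, skipping blank lines.

    Each non-empty line is assumed to be one standalone JSON object,
    as emitted by `deeptutor run ... -f json`.
    """
    for line in lines:
        line = line.strip()
        if not line:
            continue
        yield json.loads(line)


# Hypothetical usage (requires a configured deeptutor install):
# import subprocess
# proc = subprocess.Popen(
#     ["deeptutor", "run", "chat", "Summarize chapter 3", "-f", "json"],
#     stdout=subprocess.PIPE, text=True,
# )
# for event in iter_events(proc.stdout):
#     print(event)
```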
**Session continuity** — Resume any conversation right where you left off:
```bash
deeptutor session list # List all sessions
deeptutor session open <id> # Resume in REPL
```
<details>
<summary><b>Full CLI command reference</b></summary>
**Top-level**
| Command | Description |
|:---|:---|
| `deeptutor run <capability> <message>` | Run any capability in a single turn (`chat`, `deep_solve`, `deep_question`, `deep_research`, `math_animator`) |
| `deeptutor chat` | Interactive REPL with optional `--capability`, `--tool`, `--kb`, `--language` |
| `deeptutor serve` | Start the DeepTutor API server |
**`deeptutor bot`**
| Command | Description |
|:---|:---|
| `deeptutor bot list` | List all TutorBot instances |
| `deeptutor bot create <id>` | Create and start a new bot (`--name`, `--persona`, `--model`) |
| `deeptutor bot start <id>` | Start a bot |
| `deeptutor bot stop <id>` | Stop a bot |
**`deeptutor kb`**
| Command | Description |
|:---|:---|
| `deeptutor kb list` | List all knowledge bases |
| `deeptutor kb info <name>` | Show knowledge base details |
| `deeptutor kb create <name>` | Create from documents (`--doc`, `--docs-dir`) |
| `deeptutor kb add <name>` | Add documents incrementally |
| `deeptutor kb search <name> <query>` | Search a knowledge base |
| `deeptutor kb set-default <name>` | Set as default KB |
| `deeptutor kb delete <name>` | Delete a knowledge base (`--force`) |
**`deeptutor memory`**
| Command | Description |
|:---|:---|
| `deeptutor memory show [file]` | View memory (`summary`, `profile`, or `all`) |
| `deeptutor memory clear [file]` | Clear memory (`--force`) |
**`deeptutor session`**
| Command | Description |
|:---|:---|
| `deeptutor session list` | List sessions (`--limit`) |
| `deeptutor session show <id>` | View session messages |
| `deeptutor session open <id>` | Resume session in REPL |
| `deeptutor session rename <id>` | Rename a session (`--title`) |
| `deeptutor session delete <id>` | Delete a session |
**`deeptutor notebook`**
| Command | Description |
|:---|:---|
| `deeptutor notebook list` | List notebooks |
| `deeptutor notebook create <name>` | Create a notebook (`--description`) |
| `deeptutor notebook show <id>` | View notebook records |
| `deeptutor notebook add-md <id> <path>` | Import markdown as record |
| `deeptutor notebook replace-md <id> <rec> <path>` | Replace a markdown record |
| `deeptutor notebook remove-record <id> <rec>` | Remove a record |
**`deeptutor config` / `plugin` / `provider`**
| Command | Description |
|:---|:---|
| `deeptutor config show` | Print current configuration summary |
| `deeptutor plugin list` | List registered tools and capabilities |
| `deeptutor plugin info <name>` | Show tool or capability details |
| `deeptutor provider login <provider>` | Provider auth (`openai-codex` OAuth login; `github-copilot` validates an existing Copilot auth session) |
</details>
## 🗺️ Roadmap
| Status | Milestone |
|:---:|:---|
| 🎯 | **Authentication & Login** — Optional login page for public deployments with multi-user support |
| 🎯 | **Themes & Appearance** — Diverse theme options and customizable UI appearance |
| 🎯 | **Interaction Improvement** — Optimize icon design and interaction details |
| 🔜 | **Better Memories** — Integrate improved memory management |
| 🔜 | **LightRAG Integration** — Integrate [LightRAG](https://github.com/HKUDS/LightRAG) as an advanced knowledge base engine |
| 🔜 | **Documentation Site** — Comprehensive docs page with guides, API reference, and tutorials |
> If you find DeepTutor useful, [give us a star](https://github.com/HKUDS/DeepTutor/stargazers) — it helps us keep going!
---
## 🌐 Community & Ecosystem
DeepTutor stands on the shoulders of outstanding open-source projects:
| Project | Role in DeepTutor |
|:---|:---|
| [**nanobot**](https://github.com/HKUDS/nanobot) | Ultra-lightweight agent engine powering TutorBot |
| [**LlamaIndex**](https://github.com/run-llama/llama_index) | RAG pipeline and document indexing backbone |
| [**ManimCat**](https://github.com/Wing900/ManimCat) | AI-driven math animation generation for Math Animator |

**From the HKUDS ecosystem:**

| [⚡ LightRAG](https://github.com/HKUDS/LightRAG) | [🤖 AutoAgent](https://github.com/HKUDS/AutoAgent) | [🔬 AI-Researcher](https://github.com/HKUDS/AI-Researcher) | [🧬 nanobot](https://github.com/HKUDS/nanobot) |
|:---:|:---:|:---:|:---:|
| Simple & Fast RAG | Zero-Code Agent Framework | Automated Research | Ultra-Lightweight AI Agent |
## 🤝 Contributing
<div align="center">
We hope DeepTutor becomes a gift for the community. 🎁
<a href="https://github.com/HKUDS/DeepTutor/graphs/contributors">
<img src="https://contrib.rocks/image?repo=HKUDS/DeepTutor&max=999" alt="Contributors" />
</a>
</div>
See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines on setting up your development environment, code standards, and pull request workflow.
## ⭐ Star History
<div align="center">
<a href="https://www.star-history.com/#HKUDS/DeepTutor&type=timeline&legend=top-left">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=HKUDS/DeepTutor&type=timeline&theme=dark&legend=top-left" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=HKUDS/DeepTutor&type=timeline&legend=top-left" />
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=HKUDS/DeepTutor&type=timeline&legend=top-left" />
</picture>
</a>
</div>
<p align="center">
<a href="https://www.star-history.com/hkuds/deeptutor">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/badge?repo=HKUDS/DeepTutor&theme=dark" />
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/badge?repo=HKUDS/DeepTutor" />
<img alt="Star History Rank" src="https://api.star-history.com/badge?repo=HKUDS/DeepTutor" />
</picture>
</a>
</p>
<div align="center">
**[Data Intelligence Lab @ HKU](https://github.com/HKUDS)**

[⭐ Star us](https://github.com/HKUDS/DeepTutor/stargazers) · [🐛 Report a bug](https://github.com/HKUDS/DeepTutor/issues) · [💬 Discussions](https://github.com/HKUDS/DeepTutor/discussions)

---
Licensed under the [Apache License 2.0](LICENSE).
<p>
<img src="https://visitor-badge.laobi.icu/badge?page_id=HKUDS.DeepTutor&style=for-the-badge&color=00d4ff" alt="Views">
</p>
</div>