* Add prompt template support and LM Studio SDK integration
Features:
- Prompt template support for embedding models (via --embedding-prompt-template)
- LM Studio SDK integration for automatic context length detection
- Hybrid token limit discovery (Ollama → LM Studio → Registry → Default)
- Client-side token truncation to prevent silent failures
- Automatic persistence of embedding_options to .meta.json
Implementation:
- Added _query_lmstudio_context_limit() with Node.js subprocess bridge
- Modified compute_embeddings_openai() to apply prompt templates before truncation
- Extended CLI with --embedding-prompt-template flag for build and search
- URL detection for LM Studio (port 1234 or lmstudio/lm.studio keywords)
- HTTP→WebSocket URL conversion for SDK compatibility
Tests:
- 60 passing tests across 5 test files
- Comprehensive coverage of prompt templates, LM Studio integration, and token handling
- Parametrized tests for maintainability and clarity
* Add integration tests and fix LM Studio SDK bridge
Features:
- End-to-end integration tests for prompt template with EmbeddingGemma
- Integration tests for hybrid token limit discovery mechanism
- Tests verify real-world functionality with live services (LM Studio, Ollama)
Fixes:
- LM Studio SDK bridge now uses client.embedding.load() for embedding models
- Fixed NODE_PATH resolution to include npm global modules
- Fixed integration test to use WebSocket URL (ws://) for SDK bridge
Tests:
- test_prompt_template_e2e.py: 8 integration tests covering:
- Prompt template prepending with LM Studio (EmbeddingGemma)
- LM Studio SDK bridge for context length detection
- Ollama dynamic token limit detection
- Hybrid discovery fallback mechanism (registry, default)
- All tests marked with @pytest.mark.integration for selective execution
- Tests gracefully skip when services unavailable
Documentation:
- Updated tests/README.md with integration test section
- Added prerequisites and running instructions
- Documented that prompt templates are ONLY for EmbeddingGemma
- Added integration marker to pyproject.toml
Test Results:
- All 8 integration tests passing with live services
- Confirmed prompt templates work correctly with EmbeddingGemma
- Verified LM Studio SDK bridge auto-detects context length (2048)
- Validated hybrid token limit discovery across all backends
* Add prompt template support to Ollama mode
Extends prompt template functionality from OpenAI mode to Ollama for backend consistency.
Changes:
- Add provider_options parameter to compute_embeddings_ollama()
- Apply prompt template before token truncation (lines 1005-1011)
- Pass provider_options through compute_embeddings() call chain
Tests:
- test_ollama_embedding_with_prompt_template: Verifies templates work with Ollama
- test_ollama_prompt_template_affects_embeddings: Confirms embeddings differ with/without template
- Both tests pass with live Ollama service (2/2 passing)
Usage:
leann build --embedding-mode ollama --embedding-prompt-template "query: " ...
* Fix LM Studio SDK bridge to respect JIT auto-evict settings
Problem: SDK bridge called client.embedding.load() which loaded models into
LM Studio memory and bypassed JIT auto-evict settings, causing duplicate
model instances to accumulate.
Root cause analysis (from Perplexity research):
- Explicit SDK load() commands are treated as "pinned" models
- JIT auto-evict only applies to models loaded reactively via API requests
- SDK-loaded models remain in memory until explicitly unloaded
Solutions implemented:
1. Add model.unload() after metadata query (line 243)
- Load model temporarily to get context length
- Unload immediately to hand control back to JIT system
- Subsequent API requests trigger JIT load with auto-evict
2. Add token limit caching to prevent repeated SDK calls
- Cache discovered limits in _token_limit_cache dict (line 48)
- Key: (model_name, base_url), Value: token_limit
- Prevents duplicate load/unload cycles within same process
- Cache shared across all discovery methods (Ollama, SDK, registry)
Tests:
- TestTokenLimitCaching: 5 tests for cache behavior (integrated into test_token_truncation.py)
- Manual testing confirmed no duplicate models in LM Studio after fix
- All existing tests pass
Impact:
- Respects user's LM Studio JIT and auto-evict settings
- Reduces model memory footprint
- Faster subsequent builds (cached limits)
* Document prompt template and LM Studio SDK features
Added comprehensive documentation for new optional embedding features:
Configuration Guide (docs/configuration-guide.md):
- New section: "Optional Embedding Features"
- Task-Specific Prompt Templates subsection:
- Explains EmbeddingGemma use case with document/query prompts
- CLI and Python API examples
- Clear warnings about compatible vs incompatible models
- References to GitHub issue #155 and HuggingFace blog
- LM Studio Auto-Detection subsection:
- Prerequisites (Node.js + @lmstudio/sdk)
- How auto-detection works (4-step process)
- Benefits and optional nature clearly stated
FAQ (docs/faq.md):
- FAQ #2: When should I use prompt templates?
- DO/DON'T guidance with examples
- Links to detailed configuration guide
- FAQ #3: Why is LM Studio loading multiple copies?
- Explains the JIT auto-evict fix
- Troubleshooting steps if still seeing issues
- FAQ #4: Do I need Node.js and @lmstudio/sdk?
- Clarifies it's completely optional
- Lists benefits if installed
- Installation instructions
Cross-references between documents for easy navigation between quick reference and detailed guides.
* Add separate build/query template support for task-specific models
Task-specific models like EmbeddingGemma require different templates for indexing vs searching. Store both templates at build time and auto-apply query template during search with backward compatibility.
* Consolidate prompt template tests from 44 to 37 tests
Merged redundant no-op tests, removed low-value implementation tests, consolidated parameterized CLI tests, and removed hanging over-mocked test. All tests pass with improved focus on behavioral testing.
* Fix query template application in compute_query_embedding
Query templates were only applied in the fallback code path, not when using the embedding server (default path). This meant stored query templates in index metadata were ignored during MCP and CLI searches.
Changes:
- Move template application to before any computation path (searcher_base.py:109-110)
- Add comprehensive tests for both server and fallback paths
- Consolidate tests into test_prompt_template_persistence.py
Tests verify:
- Template applied when using embedding server
- Template applied in fallback path
- Consistent behavior between both paths
* Apply ruff formatting and fix linting issues
- Remove unused imports
- Fix import ordering
- Remove unused variables
- Apply code formatting
* Fix CI test failures: mock OPENAI_API_KEY in tests
Tests were failing in CI because compute_embeddings_openai() checks for OPENAI_API_KEY before using the mocked client. Added monkeypatch to set fake API key in test fixture.
* fix: Fix Twitter bookmarks anchor link
- Convert Twitter Bookmarks from collapsible details to proper header
- Update internal link to match new anchor format
- Ensures external links to #twitter-bookmarks-your-personal-tweet-library work correctly
Fixes broken link: https://github.com/yichuan-w/LEANN?tab=readme-ov-file#twitter-bookmarks-your-personal-tweet-library
* fix: Fix Slack messages anchor link as well
- Convert Slack Messages from collapsible details to proper header
- Update internal link to match new anchor format
- Ensures external links to #slack-messages-search-your-team-conversations work correctly
Both Twitter and Slack MCP sections now have reliable anchor links.
* fix: Point Slack and Twitter links to main MCP section
- Both Slack and Twitter are subsections under MCP Integration
- Links should point to #mcp-integration-rag-on-live-data-from-any-platform
- Users will land on the MCP section and can find both Slack and Twitter subsections there
This matches the actual document structure where Slack and Twitter are under the MCP Integration section.
* Improve Slack MCP integration with retry logic and comprehensive setup guide
- Add retry mechanism with exponential backoff for cache sync issues
- Handle 'users cache is not ready yet' errors gracefully
- Add max-retries and retry-delay CLI arguments for better control
- Create comprehensive Slack setup guide with troubleshooting
- Update README with link to detailed setup guide
- Improve error messages and user experience
* Fix trailing whitespace in slack setup guide
Pre-commit hooks formatting fixes
* Add comprehensive Slack setup guide with success screenshot
- Create detailed setup guide with step-by-step instructions
- Add troubleshooting section for common issues like cache sync errors
- Include real terminal output example from successful integration
- Add screenshot showing VS Code interface with Slack channel data
- Remove excessive emojis for more professional documentation
- Document retry logic improvements and CLI arguments
* Fix formatting issues in Slack setup guide
- Remove trailing whitespace
- Fix end of file formatting
- Pre-commit hooks formatting fixes
* Add real RAG example showing intelligent Slack query functionality
- Add detailed example of asking 'What is LEANN about?'
- Show retrieved messages from Slack channels
- Demonstrate intelligent answer generation based on context
- Add command example for running real RAG queries
- Explain the 4-step process: retrieve, index, generate, cite
* Update Slack setup guide with bot invitation requirements
- Add important section about inviting bot to channels before RAG queries
- Explain the 'not_in_channel' errors and their meaning
- Provide clear steps for bot invitation process
- Document realistic scenario where bot needs explicit channel access
- Update documentation to be more professional and less cursor-style
* Docs: add real RAG example for Sky Lab #random
- Embed screenshot videos/rag-sky-random.png
- Add step-by-step commands and notes
- Include helper test script tests/test_channel_by_id_or_name.py
- Redact example tokens from docs
* Docs/CI: fix broken image paths and ruff lint\n\n- Move screenshot to docs/videos and update references\n- Remove obsolete rag-query-results image\n- Rename variable to satisfy ruff
* Docs: fix image path for lychee (use videos/ relative under docs/)
* Docs: finalize Slack setup guide with Sky random RAG example and image path fixes\n\n- Redact example tokens from docs
* Fix Slack MCP integration and update documentation
- Fix SlackMCPReader to use conversations_history instead of channels_list
- Add fallback imports for leann.interactive_utils and leann.settings
- Update slack-setup-guide.md with real screenshots and improved text
- Remove old screenshot files
* Add Slack integration screenshots to docs/videos
- Add slack_integration.png showing RAG query results
- Add slack_integration_2.png showing additional demo functionality
- Fixes lychee link checker errors for missing image files
* Update Slack integration screenshot with latest changes
* Remove test_channel_by_id_or_name.py
- Clean up temporary test file that was used for debugging
- Keep only the main slack_rag.py application for production use
* Update Slack RAG example to show LEANN announcement retrieval
- Change query from 'PUBPOL 290' to 'What is LEANN about?' for more challenging retrieval
- Update command to use python -m apps.slack_rag instead of test script
- Add expected response showing Yichuan Wang's LEANN announcement message
- Emphasize this demonstrates ability to find specific announcements in conversation history
- Update description to highlight challenging query capabilities
* Update Slack RAG integration with improved CSV parsing and new screenshots
- Fixed CSV message parsing in slack_mcp_reader.py to properly handle individual messages
- Updated slack_rag.py to filter empty channel strings
- Enhanced slack-setup-guide.md with two new query examples:
- Advisor Models query: 'train black-box models to adopt to your personal data'
- Barbarians at the Gate query: 'AI-driven research systems ADRS'
- Replaced old screenshots with four new ones showing both query examples
- Updated documentation to use User OAuth Token (xoxp-) instead of Bot Token (xoxb-)
- Added proper command examples with --no-concatenate-conversations and --force-rebuild flags
* Update Slack RAG documentation with Ollama integration and new screenshots
- Updated slack-setup-guide.md with comprehensive Ollama setup instructions
- Added 6 new screenshots showing complete RAG workflow:
- Command setup, search results, and LLM responses for both queries
- Removed simulated LLM references, now uses real Ollama with llama3.2:1b
- Enhanced documentation with step-by-step Ollama installation
- Updated troubleshooting checklist to include Ollama-specific checks
- Fixed command syntax and added proper Ollama configuration
- Demonstrates working Slack RAG with real AI-generated responses
* Remove Key Features section from Slack RAG examples
- Simplified documentation by removing the bullet point list
- Keeps the focus on the actual examples and screenshots
* feat: finance bench
* docs: results
* chore: ignroe data README
* feat: fix financebench
* feat: laion, also required idmaps support
* style: format
* style: format
* fix: resolve ruff linting errors
- Remove unused variables in benchmark scripts
- Rename unused loop variables to follow convention
* feat: enron email bench
* experiments for running DiskANN & BM25 on Arch 4090
* style: format
* chore(ci): remove paru-bin submodule and config to fix checkout --recurse-submodules
* docs: data
* docs: data updated
* fix: as package
* fix(ci): only run pre-commit
* chore: use http url of astchunk; use group for some dev deps
* fix(ci): should checkout modules as well since `uv sync` checks
* fix(ci): run with lint only
* fix: find links to install wheels available
* CI: force local wheels in uv install step
* CI: install local wheels via file paths
* CI: pick wheels matching current Python tag
* CI: handle python tag mismatches for local wheels
* CI: use matrix python venv and set macOS deployment target
* CI: revert install step to match main
* CI: use uv group install with local wheel selection
* CI: rely on setup-uv for Python and tighten group install
* CI: install build deps with uv python interpreter
* CI: use temporary uv venv for build deps
* CI: add build venv scripts path for wheel repair
* Metadata filtering initial version
* Metadata filtering initial version
* Fixes linter issues
* Cleanup code
* Clean up and readme
* Fix after review
* Use UV in example
* Merge main into feature/metadata-filtering
* feat(core): Add AST-aware code chunking with astchunk integration
This PR introduces intelligent code chunking that preserves semantic boundaries
(functions, classes, methods) for better code understanding in RAG applications.
Key Features:
- AST-aware chunking for Python, Java, C#, TypeScript files
- Graceful fallback to traditional chunking for unsupported languages
- New specialized code RAG application for repositories
- Enhanced CLI with --use-ast-chunking flag
- Comprehensive test suite with integration tests
Technical Implementation:
- New chunking_utils.py module with enhanced chunking logic
- Extended base RAG framework with AST chunking arguments
- Updated document RAG with --enable-code-chunking flag
- CLI integration with proper error handling and fallback
Benefits:
- Better semantic understanding of code structure
- Improved search quality for code-related queries
- Maintains backward compatibility with existing workflows
- Supports mixed content (code + documentation) seamlessly
Dependencies:
- Added astchunk and tree-sitter parsers to pyproject.toml
- All dependencies are optional - fallback works without them
Testing:
- Comprehensive test suite in test_astchunk_integration.py
- Integration tests with document RAG
- Error handling and edge case coverage
Documentation:
- Updated README.md with AST chunking highlights
- Added ASTCHUNK_INTEGRATION.md with complete guide
- Updated features.md with new capabilities
* Refactored chunk utils
* Remove useless import
* Update README.md
* Update apps/chunking/utils.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update apps/code_rag.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Fix issue
* apply suggestion from @Copilot
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Fixes after pr review
* Fix tests not passing
* Fix linter error for documentation files
* Update .gitignore with unwanted files
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Andy Lee <andylizf@outlook.com>
* docs: add SkyPilot template and instructions for running embeddings/index build on cloud GPU
* docs: add low-resource note in README; point to config guide; suggest OpenAI embeddings, SkyPilot remote build, and --no-recompute
* docs: consolidate low-resource guidance into config guide; README points to it
* cli: add --no-recompute and --no-recompute-embeddings flags; docs: clarify HNSW requires --no-compact when disabling recompute
* docs: dedupe recomputation guidance; keep single Low-resource setups section
* sky: expand leann-build.yaml with configurable params and flags (backend, recompute, compact, embedding options)
* hnsw: auto-disable compact when --no-recompute is used; docs: expand SkyPilot with -e overrides and copy-back example
* docs+sky: simplify SkyPilot flow (auto-build on launch, rsync copy-back); clarify HNSW auto non-compact when no-recompute
* feat: auto compact for hnsw when recompute
* reader: non-destructive portability (relative hints + fallback); fix comments; sky: refine yaml
* cli: unify flags to --recompute/--no-recompute for build/search/ask; docs: update references
* chore: remove
* hnsw: move pruned/no-recompute assertion into backend; api: drop global assertion; docs: will adjust after benchmarking
* cli: use argparse.BooleanOptionalAction for paired flags (--recompute/--compact) across build/search/ask
* docs: a real example on recompute
* benchmarks: fix and extend HNSW+DiskANN recompute vs no-recompute; docs: add fresh numbers and DiskANN notes
* benchmarks: unify HNSW & DiskANN into one clean script; isolate groups, fixed ports, warm-up, param complexity
* docs: diskann recompute
* core: auto-cleanup for LeannSearcher/LeannChat (__enter__/__exit__/__del__); ensure server terminate/kill robustness; benchmarks: use searcher.cleanup(); docs: suggest uv run
* fix: hang on warnings
* docs: boolean flags
* docs: leann help
* feat: Add graph partition support for DiskANN backend
- Add GraphPartitioner class for advanced graph partitioning
- Add partition_graph_simple function for easy-to-use partitioning
- Add pybind11 dependency for C++ executable building
- Update __init__.py to export partition functions
- Include test scripts for partition functionality
The partition functionality allows optimizing disk-based indices
for better search performance and memory efficiency.
* chore: Update DiskANN submodule to latest with graph partition tools
- Update DiskANN submodule to commit b2dc4ea
- Includes graph partition tools and CMake integration
- Enables graph partitioning functionality in DiskANN backend
* merge
* ruff
* add a path related fix
* fix: always use relative path in metadata
* docs: tool cli install
* chore: more data
* fix: diskann building and partitioning
* tests: diskann and partition
* docs: highlight diskann readiness and add performance comparison
* docs: add ldg-times parameter for diskann graph locality optimization
* fix: update pre-commit ruff version and format compliance
* fix: format test files with latest ruff version for CI compatibility
* fix: pin ruff version to 0.12.7 across all environments
- Pin ruff==0.12.7 in pyproject.toml dev dependencies
- Update CI to use exact ruff version instead of latest
- Add comments explaining version pinning rationale
- Ensures consistent formatting across local, CI, and pre-commit
* fix: use uv tool install for ruff instead of uv pip install
- uv tool install is the correct way to install CLI tools like ruff
- uv pip install --system is for Python packages, not tools
* debug: add detailed logging for CI path resolution debugging
- Add logging in DiskANN embedding server to show metadata_file_path
- Add debug logging in PassageManager to trace path resolution
- This will help identify why CI fails to find passage files
* fix: force install local wheels in CI to prevent PyPI version conflicts
- Change from --find-links to direct wheel installation with --force-reinstall
- This ensures CI uses locally built packages with latest source code
- Prevents uv from using PyPI packages with same version number but old code
- Fixes CI test failures where old code (without metadata_file_path) was used
Root cause: CI was installing leann-backend-diskann v0.2.1 from PyPI
instead of the locally built wheel with same version number.
* debug: add more CI diagnostics for DiskANN module import issue
- Check wheel contents before and after auditwheel repair
- Verify _diskannpy module installation after pip install
- List installed package directory structure
- Add explicit platform tag for auditwheel repair
This helps diagnose why ImportError: cannot import name '_diskannpy' occurs
* fix: remove invalid --plat argument from auditwheel repair
- Remove '--plat linux_x86_64' which is not a valid platform tag
- Let auditwheel automatically determine the correct platform
- Based on CI output, it will use manylinux_2_35_x86_64
This was causing auditwheel repair to fail, preventing proper wheel repair
* fix: ensure CI installs correct Python version wheel packages
- Use --find-links with --no-index to let uv select correct wheel
- Prevents installing wrong Python version wheel (e.g., cp310 for Python 3.11)
- Fixes ImportError: _diskannpy.cpython-310-x86_64-linux-gnu.so in Python 3.11
The issue was that *.whl glob matched all Python versions, causing
uv to potentially install a cp310 wheel in a Python 3.11 environment.
* fix: ensure venv uses correct Python version from matrix
- Explicitly specify Python version when creating venv with uv
- Prevents mismatch between build Python (e.g., 3.10) and test Python
- Fixes: _diskannpy.cpython-310-x86_64-linux-gnu.so in Python 3.11 error
The issue: uv venv was defaulting to Python 3.11 regardless of matrix version
* fix: resolve dependency issues in CI package installation
- Ubuntu: Install all packages from local builds with --no-index
- macOS: Install core packages from PyPI, backends from local builds
- Remove --no-index for macOS backend installation to allow dependency resolution
- Pin versions when installing from PyPI to ensure consistency
Fixes error: 'leann-core was not found in the provided package locations'
* fix: Python 3.9 compatibility - replace Union type syntax
- Replace 'int | None' with 'Optional[int]' everywhere
- Replace 'subprocess.Popen | None' with 'Optional[subprocess.Popen]'
- Add Optional import to all affected files
- Update ruff target-version from py310 to py39
- The '|' syntax for Union types was introduced in Python 3.10 (PEP 604)
Fixes TypeError: unsupported operand type(s) for |: 'type' and 'NoneType'
* ci: build all packages on all platforms; install from local wheels only
- Build leann-core and leann on macOS too
- Install all packages via --find-links and --no-index across platforms
- Lower macOS MACOSX_DEPLOYMENT_TARGET to 12.0 for wider compatibility
This ensures consistency and avoids PyPI drift while improving macOS compatibility.
* ci: allow resolving third-party deps from index; still prefer local wheels for our packages
- Remove --no-index so numpy/scipy/etc can be resolved on Python 3.13
- Keep --find-links to force our packages from local dist
Fixes: dependency resolution failure on Ubuntu Python 3.13 (numpy missing)
* ci(macOS): set MACOSX_DEPLOYMENT_TARGET back to 13.3
- Fix build failure: 'sgesdd_' only available on macOS 13.3+
- Keep other CI improvements (local builds, find-links installs)
* fix(py39): replace union type syntax in chat.py
- validate_model_and_suggest: str | None -> Optional[str]
- OpenAIChat.__init__: api_key: str | None -> Optional[str]
- get_llm: dict[str, Any] | None -> Optional[dict[str, Any]]
Ensures Python 3.9 compatibility for CI macOS 3.9.
* style: organize imports per ruff; finish py39 Optional changes
- Fix import ordering in embedding servers and graph_partition_simple
- Remove duplicate Optional import
- Complete Optional[...] replacements
* fix(py39): replace remaining '| None' in diskann graph_partition (module-level function)
* fix(py39): remove zip(strict=...) usage in api; Python 3.9 compatibility
* style: organize imports; fix process-group stop for embedding server
* chore: keep embedding server stdout/stderr visible; still use new session and pg-kill on stop
* fix: add timeout to final wait() in stop_server to prevent infinite hang
* fix: prevent hang in CI by flushing print statements and redirecting embedding server output
- Add flush=True to all print statements in convert_to_csr.py to prevent buffer deadlock
- Redirect embedding server stdout/stderr to DEVNULL in CI environment (CI=true)
- Fix timeout in embedding_server_manager.stop_server() final wait call
* fix: resolve CI hanging by removing problematic wait() in stop_server
* fix: remove hardcoded paths from MCP server and documentation
* feat: add CI timeout protection for tests
* fix: skip OpenAI test in CI to avoid failures and API costs
- Add CI skip for test_document_rag_openai
- Test was failing because it incorrectly used --llm simulated which isn't supported by document_rag.py
* feat: add simulated LLM option to document_rag.py
- Add 'simulated' to the LLM choices in base_rag_example.py
- Handle simulated case in get_llm_config() method
- This allows tests to use --llm simulated to avoid API costs
* feat: add comprehensive debugging capabilities with tmate integration
1. Tmate SSH Debugging:
- Added manual workflow_dispatch trigger with debug_enabled option
- Integrated mxschmitt/action-tmate@v3 for SSH access to CI runner
- Can be triggered manually or by adding [debug] to commit message
- Detached mode with 30min timeout, limited to actor only
- Also triggers on test failure when debug is enabled
2. Enhanced Pytest Output:
- Added --capture=no to see real-time output
- Added --log-cli-level=DEBUG for maximum verbosity
- Added --tb=short for cleaner tracebacks
- Pipe output to tee for both display and logging
- Show last 20 lines of output on completion
3. Environment Diagnostics:
- Export PYTHONUNBUFFERED=1 for immediate output
- Show Python/Pytest versions at start
- Display relevant environment variables
- Check network ports before/after tests
4. Diagnostic Script:
- Created scripts/diagnose_hang.sh for comprehensive system checks
- Shows processes, network, file descriptors, memory, ZMQ status
- Automatically runs on timeout for detailed debugging info
This allows debugging CI hangs via SSH when needed while providing extensive logging by default.
* fix: add diagnostic script (force add to override .gitignore)
The diagnose_hang.sh script needs to be in git for CI to use it.
Using -f to override *.sh rule in .gitignore.
* test: investigate hanging [debug]
* fix: move tmate debug session inside pytest step to avoid hanging
The issue was that tmate was placed before pytest step, but the hang
occurs during pytest execution. Now tmate starts inside the test step
and provides connection info before running tests.
* debug: trigger tmate debug session [debug]
* fix: debug variable values and add commit message [debug] trigger
- Add debug output to show variable values
- Support both manual trigger and [debug] in commit message
* fix: force debug mode for investigation branch
- Auto-enable debug mode for debug/clean-state-investigation branch
- Add more debug info to troubleshoot trigger issues
- This ensures tmate will start regardless of trigger method
* fix: use github.head_ref for PR branch detection
For pull requests, github.ref is refs/pull/N/merge, but github.head_ref
contains the actual branch name. This should fix debug mode detection.
* fix: FORCE debug mode on - no more conditions
Just always enable debug mode on this branch.
We need tmate to work for investigation!
* fix: improve tmate connection info retrieval
- Add proper wait and retry logic for tmate initialization
- Tmate needs time to connect to servers before showing SSH info
- Try multiple times with delays to get connection details
* fix: ensure OpenMP is found during DiskANN build on macOS
- Add OpenMP environment variables directly in build step
- Should fix the libomp.dylib not found error on macOS-14
* fix: simplify macOS OpenMP configuration to match main branch
- Remove complex OpenMP environment variables
- Use simplified configuration from working main branch
- Remove redundant OpenMP setup in DiskANN build step
- Keep essential settings: OpenMP_ROOT, CMAKE_PREFIX_PATH, LDFLAGS, CPPFLAGS
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: revert DiskANN submodule to stable version
The debug branch had updated DiskANN submodule to a version with
hardcoded OpenMP paths that break macOS 13 builds. This reverts
to the stable version used in main branch.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: update faiss submodule to latest stable version
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* refactor: remove upterm/tmate debug code and clean CI workflow
- Remove all upterm/tmate SSH debugging infrastructure
- Restore clean CI workflow from main branch
- Remove diagnostic script that was only for SSH debugging
- Keep valuable DiskANN and HNSW backend improvements
This provides a clean base to add targeted pytest hang debugging
without the complexity of SSH sessions.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* debug: increase timeouts to 600s for comprehensive hang investigation
- Increase pytest timeout from 300s to 600s for thorough testing
- Increase import testing timeout from 60s to 120s
- Allow more time for C++ extension loading (faiss/diskann)
- Still provides timeout protection against infinite hangs
This gives the system more time to complete imports and tests
while still catching genuine hangs that exceed reasonable limits.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: remove debug_enabled parameter from build-and-publish workflow
- Remove debug_enabled input parameter that no longer exists in build-reusable.yml
- Keep workflow_dispatch trigger but without debug options
- Fixes workflow validation error: 'debug_enabled is not defined'
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* debug: fix YAML syntax and add post-pytest cleanup monitoring
- Fix Python code formatting in YAML (pre-commit fixed indentation issues)
- Add comprehensive post-pytest cleanup monitoring
- Monitor for hanging processes after test completion
- Focus on teardown phase based on previous hang analysis
This addresses the root cause identified: hang occurs after tests pass,
likely during cleanup/teardown of C++ extensions or embedding servers.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* debug: add external process monitoring and unbuffered output for precise hang detection
* fix
* feat: add comprehensive hang detection for pytest CI debugging
- Add Python faulthandler integration with signal-triggered stack dumps
- Implement periodic stack dumps at 5min and 10min intervals
- Add external process monitoring with SIGUSR1 signal on hang detection
- Use debug_pytest.py wrapper to capture exact hang location in C++ cleanup
- Enhance CPU stability monitoring to trigger precise stack traces
This addresses the persistent pytest hanging issue in Ubuntu 22.04 CI by
providing detailed stack traces to identify the exact code location where
the hang occurs during test cleanup phase.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* CI: move pytest hang-debug script into scripts/ci_debug_pytest.py; sort imports and apply ruff suggestion; update workflow to call the script
* fix: improve hang detection to monitor actual pytest process
* fix: implement comprehensive solution for CI pytest hangs
Key improvements:
1. Replace complex monitoring with simpler process group management
2. Add pytest conftest.py with per-test timeouts and aggressive cleanup
3. Skip problematic tests in CI that cause infinite loops
4. Enhanced cleanup at session start/end and after each test
5. Shorter timeouts (3min per test, 10min total) with better monitoring
This should resolve the hanging issues by:
- Preventing individual tests from running too long
- Automatically cleaning up hanging processes
- Skipping known problematic tests in CI
- Using process groups for more reliable cleanup
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: correct pytest_runtest_call hook parameter in conftest.py
- Change invalid 'puretest' parameter to proper pytest hooks
- Replace problematic pytest_runtest_call with pytest_runtest_setup/teardown
- This fixes PluginValidationError preventing pytest from starting
- Remove unused time import
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: prevent wrapper script from killing itself in cleanup
- Remove overly aggressive pattern 'python.*pytest' that matched wrapper itself
- Add current PID check to avoid killing wrapper process
- Add exclusion for wrapper and debug script names
- This fixes exit code 137 (SIGKILL) issue where wrapper killed itself
Root cause: cleanup function was killing the wrapper process itself,
causing immediate termination with no output in CI.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: prevent wrapper from detecting itself as remaining process
- Add PID and script name checks in post-test verification
- Avoid false positive detection of wrapper process as 'remaining'
- This prevents unnecessary cleanup calls that could cause hangs
- Root cause: wrapper was trying to clean up itself in verification phase
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: implement graceful shutdown for embedding servers
- Replace daemon threads with coordinated shutdown mechanism
- Add shutdown_event for thread synchronization
- Implement proper ZMQ resource cleanup
- Wait for threads to complete before exit
- Add ZMQ timeout to allow periodic shutdown checks
- Move signal handlers into server functions for proper scope access
- Fix protobuf class names and variable references
- Simplify resource cleanup to avoid variable scope issues
Root cause: Original servers used daemon threads + direct sys.exit(0)
which interrupted ZMQ operations and prevented proper resource cleanup,
causing hangs during process termination in CI environments.
This should resolve the core pytest hanging issue without complex wrappers.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: simplify embedding server process management
- Remove start_new_session=True to fix signal handling issues
- Simplify termination logic to use standard SIGTERM/SIGKILL
- Remove complex process group management that could cause hangs
- Add timeout-based cleanup to prevent CI hangs while ensuring proper resource cleanup
- Give graceful shutdown more time (5s) since we fixed the server shutdown logic
- Remove unused signal import
This addresses the remaining process management issues that could
cause startup failures and hanging during termination.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix: increase CI test timeouts to accommodate model download
Analysis of recent CI failures shows:
- Model download takes ~12 seconds
- Embedding server startup + first search takes additional ~78 seconds
- Total time needed: ~90-100 seconds
Updated timeouts:
- test_readme_basic_example: 90s -> 180s
- test_backend_options: 60s -> 150s
- test_llm_config_simulated: 75s -> 150s
Root cause: Initial model download from huggingface.co in CI environment
is slower than local development, causing legitimate timeouts rather than
actual hanging processes.
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* debug: preserve stderr in CI to debug embedding server startup failures
Previous fix revealed the real issue: embedding server fails to start within 120s,
not timeout issues. The error was hidden because both stdout and stderr were
redirected to DEVNULL in CI.
Changes:
- Keep stderr output in CI environment for debugging
- Only redirect stdout to DEVNULL to avoid buffer deadlock
- This will help us see why embedding server startup is failing
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <noreply@anthropic.com>
* fix(embedding-server): ensure shutdown-capable ZMQ threads create/bind their own REP sockets and poll with timeouts; fix undefined socket causing startup crash and CI hangs on Ubuntu 22.04
* style(hnsw-server): apply ruff-format after robustness changes
* fix(hnsw-server): be lenient to nested [[ids]] for both distance and embedding requests to match client expectations; prevents missing ID lookup when wrapper nests the list
* refactor(hnsw-server): remove duplicate legacy ZMQ thread; keep single shutdown-capable server implementation to reduce surface and avoid hangs
* ci: simplify test step to run pytest uniformly across OS; drop ubuntu-22.04 wrapper special-casing
* chore(ci): remove unused pytest wrapper and debug runner
* refactor(diskann): remove redundant graph_partition_simple; keep single partition API (graph_partition)
* refactor(hnsw-convert): remove global print override; rely on default flushing in CI
* tests: drop custom ci_timeout decorator and helpers; rely on pytest defaults and simplified CI
* tests: remove conftest global timeouts/cleanup; keep test suite minimal and rely on simplified CI + robust servers
* tests: call searcher.cleanup()/chat.cleanup() to ensure background embedding servers terminate after tests
* tests: fix ruff warnings in minimal conftest
* core: add weakref.finalize and atexit-based cleanup in EmbeddingServerManager to ensure server stops on interpreter exit/GC
* tests: remove minimal conftest to validate atexit/weakref cleanup path
* core: adopt compatible running server (record PID) and ensure stop_server() can terminate adopted processes; clear server_port on stop
* ci/core: skip compatibility scanning in CI (LEANN_SKIP_COMPAT=1) to avoid slow/hanging process scans; always pick a fresh available port
* core: unify atexit to always call _finalize_process (covers both self-launched and adopted servers)
* zmq: set SNDTIMEO=1s and LINGER=0 for REP sockets to avoid send blocking during shutdown; reduces CI hang risk
* tests(ci): skip DiskANN branch of README basic example on CI to avoid core dump in constrained runners; HNSW still validated
* diskann(ci): avoid stdout/stderr FD redirection in CI to prevent aborts from low-level dup2; no-op contextmanager on CI
* core: purge dead helpers and comments from EmbeddingServerManager; keep only minimal in-process flow
* core: fix lint (remove unused passages_file); keep per-instance reuse only
* fix: keep backward-compat
---------
Co-authored-by: yichuan520030910320 <yichuan_wang@berkeley.edu>
Co-authored-by: Claude <noreply@anthropic.com>
* feat: Add Ollama embedding support for local embedding models
* docs: Add clear documentation for Ollama embedding usage
* feat: Enhance Ollama embedding with better error handling and concurrent processing
- Add intelligent model validation and suggestions (inspired by OllamaChat)
- Implement concurrent processing for better performance
- Add retry mechanism with timeout handling
- Provide user-friendly error messages with emojis
- Auto-detect and recommend embedding models
- Add text truncation for long texts
- Improve progress bar display logic
* docs: don't mention it in README
* docs: config guidance
* feat: add comprehensive configuration guide and update README
- Create docs/configuration-guide.md with detailed guidance on:
- Embedding model selection (small/medium/large)
- Index selection (HNSW vs DiskANN)
- LLM engine and model comparison
- Parameter tuning (build/search complexity, top-k)
- Performance optimization tips
- Deep dive into LEANN's recomputation feature
- Update README.md to link to the configuration guide
- Include latest 2025 model recommendations (Qwen3, DeepSeek-R1, O3-mini)
* chore: move evaluation data .gitattributes to correct location
* docs: Weaken DiskANN emphasis in README
- Change backend description to emphasize HNSW as default
- DiskANN positioned as optional for billion-scale datasets
- Simplify evaluation commands to be more generic
* docs: Adjust DiskANN positioning in features and roadmap
- features.md: Put HNSW/FAISS first as default, DiskANN as optional
- roadmap.md: Reorder to show HNSW integration before DiskANN
- Consistent with positioning DiskANN as advanced option for large-scale use
* docs: Improve configuration guide based on feedback
- List specific files in default data/ directory (2 AI papers, literature, tech report)
- Update examples to use English and better RAG-suitable queries
- Change full dataset reference to use --max-items -1
- Adjust small model guidance about upgrading to larger models when time allows
- Update top-k defaults to reflect actual default of 20
- Ensure consistent use of full model name Qwen/Qwen3-Embedding-0.6B
- Reorder optimization steps, move MLX to third position
- Remove incorrect chunk size tuning guidance
- Change README from 'Having trouble' to 'Need best practices'
* docs: Address all configuration guide feedback
- Fix grammar: 'If time is not a constraint' instead of 'time expense is not large'
- Highlight Qwen3-Embedding-0.6B performance (nearly OpenAI API level)
- Add OpenAI quick start section with configuration example
- Fold Cloud vs Local trade-offs into collapsible section
- Update HNSW as 'default and recommended for extreme low storage'
- Add DiskANN beta warning and explain PQ+rerank architecture
- Expand Ollama models: add qwen3:0.6b, 4b, 7b variants
- Note OpenAI as current default but recommend Ollama switch
- Add 'need to install extra software' warning for Ollama
- Remove incorrect latency numbers from search-complexity recommendations
* docs: add a link
* refactor: Unify examples interface with BaseRAGExample
- Create BaseRAGExample base class for all RAG examples
- Refactor 4 examples to use unified interface:
- document_rag.py (replaces main_cli_example.py)
- email_rag.py (replaces mail_reader_leann.py)
- browser_rag.py (replaces google_history_reader_leann.py)
- wechat_rag.py (replaces wechat_history_reader_leann.py)
- Maintain 100% parameter compatibility with original files
- Add interactive mode support for all examples
- Unify parameter names (--max-items replaces --max-emails/--max-entries)
- Update README.md with new examples usage
- Add PARAMETER_CONSISTENCY.md documenting all parameter mappings
- Keep main_cli_example.py for backward compatibility with migration notice
All default values, LeannBuilder parameters, and chunking settings
remain identical to ensure full compatibility with existing indexes.
* fix: Update CI tests for new unified examples interface
- Rename test_main_cli.py to test_document_rag.py
- Update all references from main_cli_example.py to document_rag.py
- Update tests/README.md documentation
The tests now properly test the new unified interface while maintaining
the same test coverage and functionality.
* fix: Fix pre-commit issues and update tests
- Fix import sorting and unused imports
- Update type annotations to use built-in types (list, dict) instead of typing.List/Dict
- Fix trailing whitespace and end-of-file issues
- Fix Chinese fullwidth comma to regular comma
- Update test_main_cli.py to test_document_rag.py
- Add backward compatibility test for main_cli_example.py
- Pass all pre-commit hooks (ruff, ruff-format, etc.)
* refactor: Remove old example scripts and migration references
- Delete old example scripts (mail_reader_leann.py, google_history_reader_leann.py, etc.)
- Remove migration hints and backward compatibility
- Update tests to use new unified examples directly
- Clean up all references to old script names
- Users now only see the new unified interface
* fix: Restore embedding-mode parameter to all examples
- All examples now have --embedding-mode parameter (unified interface benefit)
- Default is 'sentence-transformers' (consistent with original behavior)
- Users can now use OpenAI or MLX embeddings with any data source
- Maintains functional equivalence with original scripts
* docs: Improve parameter categorization in README
- Clearly separate core (shared) vs specific parameters
- Move LLM and embedding examples to 'Example Commands' section
- Add descriptive comments for all specific parameters
- Keep only truly data-source-specific parameters in specific sections
* docs: Make example commands more representative
- Add default values to parameter descriptions
- Replace generic examples with real-world use cases
- Focus on data-source-specific features in examples
- Remove redundant demonstrations of common parameters
* docs: Reorganize parameter documentation structure
- Move common parameters to a dedicated section before all examples
- Rename sections to 'X-Specific Arguments' for clarity
- Remove duplicate common parameters from individual examples
- Better information architecture for users
* docs: polish applications
* docs: Add CLI installation instructions
- Add two installation options: venv and global uv tool
- Clearly explain when to use each option
- Make CLI more accessible for daily use
* docs: Clarify CLI global installation process
- Explain the transition from venv to global installation
- Add upgrade command for global installation
- Make it clear that global install allows usage without venv activation
* docs: Add collapsible section for CLI installation
- Wrap CLI installation instructions in details/summary tags
- Keep consistent with other collapsible sections in README
- Improve document readability and navigation
* style: format
* docs: Fix collapsible sections
- Make Common Parameters collapsible (as it's lengthy reference material)
- Keep CLI Installation visible (important for users to see immediately)
- Better information hierarchy
* docs: Add introduction for Common Parameters section
- Add 'Flexible Configuration' heading with descriptive sentence
- Create parallel structure with 'Generation Model Setup' section
- Improve document flow and readability
* docs: nit
* fix: Fix issues in unified examples
- Add smart path detection for data directory
- Fix add_texts -> add_text method call
- Handle both running from project root and examples directory
* fix: Fix async/await and add_text issues in unified examples
- Remove incorrect await from chat.ask() calls (not async)
- Fix add_texts -> add_text method calls
- Verify search-complexity correctly maps to efSearch parameter
- All examples now run successfully
* feat: Address review comments
- Add complexity parameter to LeannChat initialization (default: search_complexity)
- Fix chunk-size default in README documentation (256, not 2048)
- Add more index building parameters as CLI arguments:
- --backend-name (hnsw/diskann)
- --graph-degree (default: 32)
- --build-complexity (default: 64)
- --no-compact (disable compact storage)
- --no-recompute (disable embedding recomputation)
- Update README to document all new parameters
* feat: Add chunk-size parameters and improve file type filtering
- Add --chunk-size and --chunk-overlap parameters to all RAG examples
- Preserve original default values for each data source:
- Document: 256/128 (optimized for general documents)
- Email: 256/25 (smaller overlap for email threads)
- Browser: 256/128 (standard for web content)
- WeChat: 192/64 (smaller chunks for chat messages)
- Make --file-types optional filter instead of restriction in document_rag
- Update README to clarify interactive mode and parameter usage
- Fix LLM default model documentation (gpt-4o, not gpt-4o-mini)
* feat: Update documentation based on review feedback
- Add MLX embedding example to README
- Clarify examples/data content description (two papers, Pride and Prejudice, Chinese README)
- Move chunk parameters to common parameters section
- Remove duplicate chunk parameters from document-specific section
* docs: Emphasize diverse data sources in examples/data description
* fix: update default embedding models for better performance
- Change WeChat, Browser, and Email RAG examples to use all-MiniLM-L6-v2
- Previous Qwen/Qwen3-Embedding-0.6B was too slow for these use cases
- all-MiniLM-L6-v2 is a fast 384-dim model, ideal for large-scale personal data
* add response highlight
* change rebuild logic
* fix some example
* feat: check if k is larger than #docs
* fix: WeChat history reader bugs and refactor wechat_rag to use unified architecture
* fix email wrong -1 to process all file
* refactor: reorgnize all examples/ and test/
* refactor: reorganize examples and add link checker
* fix: add init.py
* fix: handle certificate errors in link checker
* fix wechat
* merge
* docs: update README to use proper module imports for apps
- Change from 'python apps/xxx.py' to 'python -m apps.xxx'
- More professional and pythonic module calling
- Ensures proper module resolution and imports
- Better separation between apps/ (production tools) and examples/ (demos)
---------
Co-authored-by: yichuan520030910320 <yichuan_wang@berkeley.edu>
* fix: auto-detect normalized embeddings and use cosine distance
- Add automatic detection for normalized embedding models (OpenAI, Voyage AI, Cohere)
- Automatically set distance_metric='cosine' for normalized embeddings
- Add warnings when using non-optimal distance metrics
- Implement manual L2 normalization in HNSW backend (custom Faiss build lacks normalize_L2)
- Fix DiskANN zmq_port compatibility with lazy loading strategy
- Add documentation for normalized embeddings feature
This fixes the low accuracy issue when using OpenAI text-embedding-3-small model with default MIPS metric.
* style: format
- Manual Release workflow now directly publishes to PyPI after downloading CI artifacts
- No more duplicate builds - reuses artifacts from CI
- build-and-publish.yml renamed to 'CI - Build Multi-Platform Packages'
- Publishing in CI workflow only for emergency manual triggers
- Updated RELEASE.md to reflect the new streamlined process
This fixes the issue where releases would trigger redundant builds.
- Add continue-on-error to TestPyPI step
- Check if TEST_PYPI_API_TOKEN exists before attempting upload
- Add graceful failure handling with clear messages
- Update docs to explain TestPyPI token configuration
- Clarify that TestPyPI testing is optional
Now the release won't fail if TestPyPI is not configured or upload fails
- Download pre-built wheels from successful CI runs
- Avoids duplicate builds and ensures consistency
- CI artifacts are already tested across all platforms
- Faster release process (no build time)
- Updates release documentation to reflect new flow
This ensures the released packages are exactly what was tested in CI.
- Add ci.yml for continuous integration on every commit
- Test builds on Ubuntu/macOS with Python 3.9/3.10/3.11
- Ensure code quality before any release
- Add release-manual.yml for controlled releases
- Manual trigger prevents accidental releases
- Version validation and tag creation
- Optional TestPyPI testing before production
- Only creates tag after validation passes
- Keep build-and-publish.yml for automated PyPI deployment
- Triggered by new tags (separation of concerns)
- Handles multi-platform wheel building
- Allows retry if PyPI upload fails
- Update RELEASE.md with clear prerequisites and workflow
This setup ensures:
1. Every commit is tested (CI)
2. Releases are deliberate (manual trigger)
3. Failed CI won't create broken tags
4. PyPI publish can be retried independently