Fixes #175
Problem:
When --file-types .pdf was specified, PDFs were processed twice:
1. Separately with PyMuPDF/pdfplumber extractors
2. Again in the 'other file types' section via SimpleDirectoryReader
This caused duplicate processing and potential conflicts.
Solution:
- Exclude .pdf from other_file_extensions when PDFs are already
processed separately
- Only load other file types if there are extensions to process
- Prevents duplicate PDF processing
Changes:
- Added logic to filter out .pdf from code_extensions when loading
other file types if PDFs were processed separately
- Updated SimpleDirectoryReader to use filtered extensions
- Added check to skip loading if no other extensions to process
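A minimal sketch of the fix, assuming llama-index's `SimpleDirectoryReader` and hypothetical `code_extensions`/`documents` variables (the real code may differ):
```python
from llama_index.core import SimpleDirectoryReader

# Drop .pdf since PDFs were already handled by the dedicated extractors.
other_file_extensions = [ext for ext in code_extensions if ext.lower() != ".pdf"]

# Only invoke the reader when something is left to process.
if other_file_extensions:
    reader = SimpleDirectoryReader(
        input_dir=source_dir,
        required_exts=other_file_extensions,
        recursive=True,
    )
    documents.extend(reader.load_data())
```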
* Fix prompt template bugs in build and search
Bug 1: Build template ignored in new format
- Updated compute_embeddings_openai() to read build_prompt_template or prompt_template
- Updated compute_embeddings_ollama() with same fix
- Maintains backward compatibility with old single-template format
Bug 2: Runtime override not wired up
- Wired CLI search to pass provider_options to searcher.search()
- Enables runtime template override during search via --embedding-prompt-template
All 42 prompt template tests passing.
Fixes #155
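A minimal sketch of the backward-compatible lookup from Bug 1; `provider_options` is the parameter named above, the surrounding code is an assumption:
```python
# New format: build_prompt_template; old format: single prompt_template.
options = provider_options or {}
template = options.get("build_prompt_template") or options.get("prompt_template")
if template:
    # Templates are prepended to each text before embedding.
    texts = [template + text for text in texts]
```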
* Fix: Prevent embedding server from applying templates during search
- Filter out all prompt templates (build_prompt_template, query_prompt_template, prompt_template) from provider_options when launching embedding server during search
- Templates are already applied in compute_query_embedding() before server call
- Prevents double-templating and ensures runtime override works correctly
This fixes the issue where --embedding-prompt-template during search was ignored because the server was applying build_prompt_template instead.
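A hedged sketch of the filtering; the three option keys come from the bullet above, while `launch_embedding_server` is a hypothetical name:
```python
TEMPLATE_KEYS = {"build_prompt_template", "query_prompt_template", "prompt_template"}

# Strip template options so the server never re-applies them during search.
server_options = {k: v for k, v in provider_options.items() if k not in TEMPLATE_KEYS}
launch_embedding_server(**server_options)  # hypothetical launcher
```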
* Format code with ruff
* Add prompt template support and LM Studio SDK integration
Features:
- Prompt template support for embedding models (via --embedding-prompt-template)
- LM Studio SDK integration for automatic context length detection
- Hybrid token limit discovery (Ollama → LM Studio → Registry → Default)
- Client-side token truncation to prevent silent failures
- Automatic persistence of embedding_options to .meta.json
Implementation:
- Added _query_lmstudio_context_limit() with Node.js subprocess bridge
- Modified compute_embeddings_openai() to apply prompt templates before truncation
- Extended CLI with --embedding-prompt-template flag for build and search
- URL detection for LM Studio (port 1234 or lmstudio/lm.studio keywords)
- HTTP→WebSocket URL conversion for SDK compatibility
Tests:
- 60 passing tests across 5 test files
- Comprehensive coverage of prompt templates, LM Studio integration, and token handling
- Parametrized tests for maintainability and clarity
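A sketch of the hybrid discovery chain, assuming the probe helpers return `None` on a miss and that the registry is the `EMBEDDING_MODEL_LIMITS` dict named later in this log:
```python
DEFAULT_TOKEN_LIMIT = 512  # fallback value is an assumption

def discover_token_limit(model_name: str, base_url: str) -> int:
    # Ollama first, then the LM Studio SDK bridge, then the static registry.
    for probe in (_query_ollama_context_limit, _query_lmstudio_context_limit):
        limit = probe(model_name, base_url)
        if limit:
            return limit
    return EMBEDDING_MODEL_LIMITS.get(model_name, DEFAULT_TOKEN_LIMIT)
```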
* Add integration tests and fix LM Studio SDK bridge
Features:
- End-to-end integration tests for prompt template with EmbeddingGemma
- Integration tests for hybrid token limit discovery mechanism
- Tests verify real-world functionality with live services (LM Studio, Ollama)
Fixes:
- LM Studio SDK bridge now uses client.embedding.load() for embedding models
- Fixed NODE_PATH resolution to include npm global modules
- Fixed integration test to use WebSocket URL (ws://) for SDK bridge
Tests:
- test_prompt_template_e2e.py: 8 integration tests covering:
- Prompt template prepending with LM Studio (EmbeddingGemma)
- LM Studio SDK bridge for context length detection
- Ollama dynamic token limit detection
- Hybrid discovery fallback mechanism (registry, default)
- All tests marked with @pytest.mark.integration for selective execution
- Tests gracefully skip when services unavailable
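A minimal sketch of the marker-plus-skip pattern; the reachability probe is an assumption (LM Studio's default port 1234 is noted elsewhere in this log):
```python
import pytest
import requests

@pytest.mark.integration  # marker registered in pyproject.toml
def test_prompt_template_with_lmstudio():
    try:
        requests.get("http://localhost:1234/v1/models", timeout=2)
    except requests.RequestException:
        pytest.skip("LM Studio not reachable on port 1234")
    ...  # real assertions exercise the live service
```
Run selectively with `pytest -m integration`.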
Documentation:
- Updated tests/README.md with integration test section
- Added prerequisites and running instructions
- Documented that prompt templates are ONLY for EmbeddingGemma
- Added integration marker to pyproject.toml
Test Results:
- All 8 integration tests passing with live services
- Confirmed prompt templates work correctly with EmbeddingGemma
- Verified LM Studio SDK bridge auto-detects context length (2048)
- Validated hybrid token limit discovery across all backends
* Add prompt template support to Ollama mode
Extends prompt template functionality from OpenAI mode to Ollama for backend consistency.
Changes:
- Add provider_options parameter to compute_embeddings_ollama()
- Apply prompt template before token truncation (lines 1005-1011)
- Pass provider_options through compute_embeddings() call chain
Tests:
- test_ollama_embedding_with_prompt_template: Verifies templates work with Ollama
- test_ollama_prompt_template_affects_embeddings: Confirms embeddings differ with/without template
- Both tests pass with live Ollama service (2/2 passing)
Usage:
leann build --embedding-mode ollama --embedding-prompt-template "query: " ...
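A sketch of the ordering inside compute_embeddings_ollama(): prepend first, truncate second, so the template prefix is never cut off. `truncate_to_token_limit` is named elsewhere in this log; the surrounding variables are assumptions:
```python
template = (provider_options or {}).get("build_prompt_template")
if template:
    texts = [template + text for text in texts]  # template applied first
texts = [truncate_to_token_limit(text, token_limit) for text in texts]  # then truncate
```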
* Fix LM Studio SDK bridge to respect JIT auto-evict settings
Problem: SDK bridge called client.embedding.load(), which loaded models into
LM Studio memory and bypassed JIT auto-evict settings, causing duplicate
model instances to accumulate.
Root cause analysis (from Perplexity research):
- Explicit SDK load() commands are treated as "pinned" models
- JIT auto-evict only applies to models loaded reactively via API requests
- SDK-loaded models remain in memory until explicitly unloaded
Solutions implemented:
1. Add model.unload() after metadata query (line 243)
- Load model temporarily to get context length
- Unload immediately to hand control back to JIT system
- Subsequent API requests trigger JIT load with auto-evict
2. Add token limit caching to prevent repeated SDK calls
- Cache discovered limits in _token_limit_cache dict (line 48)
- Key: (model_name, base_url), Value: token_limit
- Prevents duplicate load/unload cycles within same process
- Cache shared across all discovery methods (Ollama, SDK, registry)
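A sketch of the cache from solution 2, reusing the `discover_token_limit()` sketch above; only the dict name and key shape come from the commit:
```python
_token_limit_cache: dict[tuple[str, str], int] = {}

def _token_limit_with_cache(model_name: str, base_url: str) -> int:
    key = (model_name, base_url)
    if key not in _token_limit_cache:
        # First call per (model, url) may load/unload a model; later calls don't.
        _token_limit_cache[key] = discover_token_limit(model_name, base_url)
    return _token_limit_cache[key]
```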
Tests:
- TestTokenLimitCaching: 5 tests for cache behavior (integrated into test_token_truncation.py)
- Manual testing confirmed no duplicate models in LM Studio after fix
- All existing tests pass
Impact:
- Respects user's LM Studio JIT and auto-evict settings
- Reduces model memory footprint
- Faster subsequent builds (cached limits)
* Document prompt template and LM Studio SDK features
Added comprehensive documentation for new optional embedding features:
Configuration Guide (docs/configuration-guide.md):
- New section: "Optional Embedding Features"
- Task-Specific Prompt Templates subsection:
- Explains EmbeddingGemma use case with document/query prompts
- CLI and Python API examples
- Clear warnings about compatible vs incompatible models
- References to GitHub issue #155 and HuggingFace blog
- LM Studio Auto-Detection subsection:
- Prerequisites (Node.js + @lmstudio/sdk)
- How auto-detection works (4-step process)
- Benefits and optional nature clearly stated
FAQ (docs/faq.md):
- FAQ #2: When should I use prompt templates?
- DO/DON'T guidance with examples
- Links to detailed configuration guide
- FAQ #3: Why is LM Studio loading multiple copies?
- Explains the JIT auto-evict fix
- Troubleshooting steps if still seeing issues
- FAQ #4: Do I need Node.js and @lmstudio/sdk?
- Clarifies it's completely optional
- Lists benefits if installed
- Installation instructions
Cross-references between documents for easy navigation between quick reference and detailed guides.
* Add separate build/query template support for task-specific models
Task-specific models like EmbeddingGemma require different templates for indexing vs searching. Store both templates at build time and auto-apply query template during search with backward compatibility.
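An illustrative sketch of the stored metadata; the two keys match this commit, but the template strings are made-up examples, not any model's real prompts:
```python
meta["embedding_options"] = {
    "build_prompt_template": "document: ",  # applied when indexing
    "query_prompt_template": "query: ",     # auto-applied during search
}
```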
* Consolidate prompt template tests from 44 to 37 tests
Merged redundant no-op tests, removed low-value implementation tests, consolidated parameterized CLI tests, and removed hanging over-mocked test. All tests pass with improved focus on behavioral testing.
* Fix query template application in compute_query_embedding
Query templates were only applied in the fallback code path, not when using the embedding server (default path). This meant stored query templates in index metadata were ignored during MCP and CLI searches.
Changes:
- Move template application to before any computation path (searcher_base.py:109-110)
- Add comprehensive tests for both server and fallback paths
- Consolidate tests into test_prompt_template_persistence.py
Tests verify:
- Template applied when using embedding server
- Template applied in fallback path
- Consistent behavior between both paths
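A sketch of the reordered method, assuming hypothetical helper names; the point is that the template is applied once, before either path runs:
```python
def compute_query_embedding(self, query: str):
    template = self.embedding_options.get("query_prompt_template")
    if template:
        query = template + query  # applied before any computation path
    if self._embedding_server_available():  # default path
        return self._embed_via_server(query)
    return self._embed_locally(query)  # fallback path
```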
* Apply ruff formatting and fix linting issues
- Remove unused imports
- Fix import ordering
- Remove unused variables
- Apply code formatting
* Fix CI test failures: mock OPENAI_API_KEY in tests
Tests were failing in CI because compute_embeddings_openai() checks for OPENAI_API_KEY before using the mocked client. Added monkeypatch to set fake API key in test fixture.
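A minimal sketch of the fixture (whether it is autouse in the real suite is an assumption):
```python
import pytest

@pytest.fixture(autouse=True)
def fake_openai_key(monkeypatch):
    # compute_embeddings_openai() checks the env var before the mocked
    # client is ever touched, so CI needs a dummy value in place.
    monkeypatch.setenv("OPENAI_API_KEY", "sk-test-dummy")
```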
* feat: enhance token limits with dynamic discovery + AST metadata
Improves upon upstream PR #154 with two major enhancements, plus better token limits and comprehensive tests:
1. **Hybrid Token Limit Discovery**
- Dynamic: Query Ollama /api/show for context limits
- Fallback: Registry for LM Studio/OpenAI
- Zero maintenance for Ollama users
- Respects custom num_ctx settings
2. **AST Metadata Preservation**
- create_ast_chunks() returns dict format with metadata
- Preserves file_path, file_name, timestamps
- Includes astchunk metadata (line numbers, node counts)
- Fixes content extraction bug (checks "content" key)
- Enables --show-metadata flag
3. **Better Token Limits**
- nomic-embed-text: 2048 tokens (vs 512)
- nomic-embed-text-v1.5: 2048 tokens
- Added OpenAI models: 8192 tokens
4. **Comprehensive Tests**
- 11 tests for token truncation
- 545 new lines in test_astchunk_integration.py
- All metadata preservation tests passing
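A sketch of the dynamic Ollama probe: /api/show is Ollama's real model-info endpoint, while the key scan for the context length is an assumption about the response layout:
```python
import requests

def _query_ollama_context_limit(model_name: str, base_url: str) -> int | None:
    resp = requests.post(
        f"{base_url}/api/show", json={"model": model_name}, timeout=10
    )
    resp.raise_for_status()
    model_info = resp.json().get("model_info", {})
    for key, value in model_info.items():
        if key.endswith(".context_length"):  # e.g. "bert.context_length"
            return int(value)
    return None  # let the caller fall through to the next discovery source
```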
* fix: merge EMBEDDING_MODEL_LIMITS and remove redundant validation
- Merged upstream's model list with our corrected token limits
- Kept our corrected nomic-embed-text: 2048 (not 512)
- Removed post-chunking validation (redundant with embedding-time truncation)
- All tests passing except 2 pre-existing integration test failures
* style: apply ruff formatting and restore PR #154 version handling
- Remove duplicate truncate_to_token_limit and get_model_token_limit functions
- Restore version handling logic (model:latest -> model) from PR #154
- Restore partial matching fallback for model name variations
- Apply ruff formatting to all modified files
- All 11 token truncation tests passing
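A sketch of the restored lookup; the version-tag stripping and partial matching come from the bullets above, the default value is an assumption:
```python
def get_model_token_limit(model_name: str, default: int = 512) -> int:
    base_name = model_name.split(":")[0]  # "nomic-embed-text:latest" -> "nomic-embed-text"
    if base_name in EMBEDDING_MODEL_LIMITS:
        return EMBEDDING_MODEL_LIMITS[base_name]
    for known_model, limit in EMBEDDING_MODEL_LIMITS.items():
        if known_model in base_name:  # partial-match fallback for name variants
            return limit
    return default
```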
* style: sort imports alphabetically (pre-commit auto-fix)
* fix: show AST token limit warning only once per session
- Add module-level flag to track if warning shown
- Prevents spam when processing multiple files
- Add clarifying note that auto-truncation happens at embedding time
- Addresses issue where warning appeared for every code file
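A minimal sketch of the once-per-session flag (names are assumptions):
```python
_ast_warning_shown = False  # module-level, so it persists across files

def _warn_token_limit_once(logger, message: str) -> None:
    global _ast_warning_shown
    if not _ast_warning_shown:
        logger.warning(message)  # emitted for the first file only
        _ast_warning_shown = True
```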
* enhance: add detailed logging for token truncation
- Track and report truncation statistics (count, tokens removed, max length)
- Show first 3 individual truncations with exact token counts
- Provide comprehensive summary when truncation occurs
- Use WARNING level for data loss visibility
- Silent (DEBUG level only) when no truncation needed
Replaces misleading "truncated where necessary" message that appeared
even when nothing was truncated.
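A sketch of the logging described above; the counters and wording are assumptions:
```python
if truncated_count > 0:
    logger.warning(
        "Truncated %d/%d texts to %d tokens (%d tokens removed; longest was %d)",
        truncated_count, len(texts), token_limit, tokens_removed, max_input_len,
    )
else:
    # Silent at normal verbosity when nothing was actually truncated.
    logger.debug("All texts within the %d-token limit", token_limit)
```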
* feat: add metadata output to search results
- Add --show-metadata flag to display file paths in search results
- Preserve document metadata (file_path, file_name, timestamps) during chunking
- Update MCP tool schema to support show_metadata parameter
- Enhance CLI search output to display metadata when requested
- Fix pre-existing bug: args.backend -> args.backend_name
Resolves yichuan-w/LEANN#144
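A sketch of the per-chunk metadata that --show-metadata surfaces; the timestamp field name is an assumption:
```python
from pathlib import Path

def make_chunk(chunk_text: str, source_path: Path) -> dict:
    return {
        "content": chunk_text,
        "metadata": {
            "file_path": str(source_path),
            "file_name": source_path.name,
            "last_modified": source_path.stat().st_mtime,
        },
    }
```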
* fix: resolve ZMQ linking issues in Python extension
- Use pkg_check_modules IMPORTED_TARGET to create PkgConfig::ZMQ
- Set PKG_CONFIG_PATH to prioritize ARM64 Homebrew on Apple Silicon
- Override macOS -undefined dynamic_lookup to force proper symbol resolution
- Use PUBLIC linkage for ZMQ in faiss library for transitive linking
- Mark cppzmq includes as SYSTEM to suppress warnings
Fixes editable install ZMQ symbol errors while maintaining compatibility
across Linux, macOS Intel, and macOS ARM64 platforms.
* style: apply ruff formatting
* chore: update faiss submodule to use ww2283 fork
Use ww2283/faiss fork with fix/zmq-linking branch to resolve CI checkout
failures. The ZMQ linking fixes are not yet merged upstream.
* feat: implement true batch processing for Ollama embeddings
Migrate from the deprecated /api/embeddings endpoint to the modern /api/embed endpoint,
which supports batch inputs. This reduces HTTP overhead by sending
32 texts per request instead of making individual API calls.
Changes:
- Update endpoint from /api/embeddings to /api/embed
- Change parameter from 'prompt' (single) to 'input' (array)
- Update response parsing for batch embeddings array
- Increase timeout to 60s for batch processing
- Improve error handling for batch requests
Performance:
- Reduces API calls by 32x (batch size)
- Eliminates HTTP connection overhead per text
- Note: Ollama still processes batch items sequentially internally
Related: #151
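A sketch of the batch call: /api/embed with an `input` array is Ollama's current embeddings API, and the 60s timeout matches the commit; the rest is an assumption:
```python
import requests

def embed_batch(texts: list[str], model: str, base_url: str) -> list[list[float]]:
    resp = requests.post(
        f"{base_url}/api/embed",
        json={"model": model, "input": texts},  # array input replaces 'prompt'
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["embeddings"]  # one vector per input text
```
Called with slices of 32 texts at a time per the batch size above.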
* Fall back to original faiss as I merge the PR
---------
Co-authored-by: yichuan520030910320 <yichuan_wang@berkeley.edu>
* Printing query time
* Adding source name to chunks
Adding the source name as metadata to chunks, then printing the sources when searching
* Printing the context provided to LLM
To check the data transmitted to the LLM, display the relevance, ID, content, and source of each chunk sent.
* Correcting the source metadata on chunks
* Applying ruff format
* Applying Ruff formatting
* Ruff formatting
* BREAKING CHANGE: trust_remote_code now defaults to False for security
- Set trust_remote_code=False by default in HFChat class
- Add explicit trust_remote_code parameter to HFChat.__init__()
- Add security warning when trust_remote_code=True is used
- Update get_llm() function to support trust_remote_code parameter
- Update benchmark utilities (load_hf_model, load_vllm_model, load_qwen_vl_model)
- Add comprehensive documentation for security implications
Security Benefits:
- Prevents arbitrary code execution from compromised model repositories
- Requires explicit opt-in for models that need remote code execution
- Shows clear warnings when security is reduced
- Follows security-by-default principle
Migration Guide:
- Most users: No changes needed (more secure by default)
- Users with models requiring remote code: Add trust_remote_code=True explicitly
- Config users: Add 'trust_remote_code': true to LLM config if needed
Fixes #136
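A sketch of the secure-by-default constructor; everything beyond the trust_remote_code handling is an assumption about HFChat's internals:
```python
import logging
from transformers import AutoModelForCausalLM

logger = logging.getLogger(__name__)

class HFChat:
    def __init__(self, model_name: str, trust_remote_code: bool = False, **kwargs):
        if trust_remote_code:
            logger.warning(
                "trust_remote_code=True lets the model repository execute "
                "arbitrary code on this machine; enable it only for trusted repos."
            )
        self.model = AutoModelForCausalLM.from_pretrained(
            model_name, trust_remote_code=trust_remote_code, **kwargs
        )
```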
* feat: Add MCP integration support for Slack and Twitter
- Implement SlackMCPReader for connecting to Slack MCP servers
- Implement TwitterMCPReader for connecting to Twitter MCP servers
- Add SlackRAG and TwitterRAG applications with full CLI support
- Support live data fetching via Model Context Protocol (MCP)
- Add comprehensive documentation and usage examples
- Include connection testing capabilities with --test-connection flag
- Add standalone tests for core functionality
- Update README with detailed MCP integration guide
- Add Aakash Suresh to Active Contributors
Resolves #36
* fix: Resolve linting issues in MCP integration
- Replace deprecated typing.Dict/List with built-in dict/list
- Fix boolean comparisons (== True/False) to direct checks
- Remove unused variables in demo script
- Update type annotations to use modern Python syntax
All pre-commit hooks should now pass.
* fix: Apply final formatting fixes for pre-commit hooks
- Remove unused imports (asyncio, pathlib.Path)
- Remove unused class imports in demo script
- Ensure all files pass ruff format and pre-commit checks
This should resolve all remaining CI linting issues.
* fix: Apply pre-commit formatting changes
- Fix trailing whitespace in all files
- Apply ruff formatting to match project standards
- Ensure consistent code style across all MCP integration files
This commit applies the exact changes that pre-commit hooks expect.
* fix: Apply pre-commit hooks formatting fixes
- Remove trailing whitespace from all files
- Fix ruff formatting issues (2 errors resolved)
- Apply consistent code formatting across 3 files
- Ensure all files pass pre-commit validation
This resolves all CI formatting failures.
* fix: Update MCP RAG classes to match BaseRAGExample signature
- Fix SlackMCPRAG and TwitterMCPRAG __init__ methods to provide required parameters
- Add name, description, and default_index_name to super().__init__ calls
- Resolves test failures: test_slack_rag_initialization and test_twitter_rag_initialization
This fixes the TypeError caused by BaseRAGExample requiring additional parameters.
* style: Apply ruff formatting - add trailing commas
- Add trailing commas to super().__init__ calls in SlackMCPRAG and TwitterMCPRAG
- Fixes ruff format pre-commit hook requirements
* fix: Resolve SentenceTransformer model_kwargs parameter conflict
- Fix local_files_only parameter conflict in embedding_compute.py
- Create separate copies of model_kwargs and tokenizer_kwargs for local vs network loading
- Prevents parameter conflicts when falling back from local to network loading
- Resolves TypeError in test_readme_examples.py tests
This addresses the SentenceTransformer initialization issues in CI tests.
* fix: Add comprehensive SentenceTransformer version compatibility
- Handle both old and new sentence-transformers versions
- Gracefully fallback from advanced parameters to basic initialization
- Catch TypeError for model_kwargs/tokenizer_kwargs and use basic SentenceTransformer init
- Ensures compatibility across different CI environments and local setups
- Maintains optimization benefits where supported while ensuring broad compatibility
This resolves test failures in CI environments with older sentence-transformers versions.
* style: Apply ruff formatting to embedding_compute.py
- Break long logger.warning lines for better readability
- Fixes pre-commit hook formatting requirements
* docs: Comprehensive documentation improvements for better user experience
- Add clear step-by-step Getting Started Guide for new users
- Add comprehensive CLI Reference with all commands and options
- Improve installation instructions with clear steps and verification
- Add detailed troubleshooting section for common issues (Ollama, OpenAI, etc.)
- Clarify difference between CLI commands and specialized apps
- Add environment variables documentation
- Improve MCP integration documentation with CLI integration examples
- Address user feedback about confusing installation and setup process
This resolves documentation gaps that made LEANN difficult for non-specialists to use.
* style: Remove trailing whitespace from README.md
- Fix trailing whitespace issues found by pre-commit hooks
- Ensures consistent formatting across documentation
* docs: Simplify README by removing excessive documentation
- Remove overly complex CLI reference and getting started sections (lines 61-334)
- Remove emojis from section headers for cleaner appearance
- Keep README simple and focused as requested
- Maintain essential MCP integration documentation
This addresses feedback to keep documentation minimal and avoid auto-generated content.
* docs: Address maintainer feedback on README improvements
- Restore emojis in section headers (Prerequisites and Quick Install)
- Add MCP live data feature mention in line 23 with links to Slack and Twitter
- Add detailed API credential setup instructions for Slack:
- Step-by-step Slack App creation process
- Required OAuth scopes and permissions
- Clear token identification (xoxb- vs xapp-)
- Add detailed API credential setup instructions for Twitter:
- Twitter Developer Account application process
- API v2 requirements for bookmarks access
- Required permissions and scopes
This addresses maintainer feedback to make API setup more user-friendly.
* Add readline support to interactive command line interfaces
- Implement readline history, navigation, and editing for CLI, API, and RAG chat modes
- Create shared InteractiveSession class to consolidate readline functionality
- Add command history persistence across sessions with separate files per context
- Support built-in commands: help, clear, history, quit/exit
- Enable arrow key navigation and command editing in all interactive modes
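A minimal sketch of the history-persistence piece of InteractiveSession (function and path handling are assumptions; importing readline is what enables arrow-key navigation and line editing):
```python
import atexit
import os
import readline

def init_history(history_file: str) -> None:
    # Separate files per context keep CLI/API/RAG histories independent.
    path = os.path.expanduser(history_file)
    try:
        readline.read_history_file(path)
    except FileNotFoundError:
        pass  # first run: no history yet
    readline.set_history_length(1000)
    atexit.register(readline.write_history_file, path)
```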
* Improvements based on feedback
* feat: finance bench
* docs: results
* chore: ignore data README
* feat: fix financebench
* feat: LAION bench (also required idmaps support)
* style: format
* style: format
* fix: resolve ruff linting errors
- Remove unused variables in benchmark scripts
- Rename unused loop variables to follow convention
* feat: enron email bench
* experiments for running DiskANN & BM25 on Arch 4090
* style: format
* chore(ci): remove paru-bin submodule and config to fix checkout --recurse-submodules
* docs: data
* docs: data updated
* fix: as package
* fix(ci): only run pre-commit
* chore: use http url of astchunk; use group for some dev deps
* fix(ci): check out submodules as well, since `uv sync` checks them
* fix(ci): run with lint only
* fix: use find-links to install the available wheels
* CI: force local wheels in uv install step
* CI: install local wheels via file paths
* CI: pick wheels matching current Python tag
* CI: handle python tag mismatches for local wheels
* CI: use matrix python venv and set macOS deployment target
* CI: revert install step to match main
* CI: use uv group install with local wheel selection
* CI: rely on setup-uv for Python and tighten group install
* CI: install build deps with uv python interpreter
* CI: use temporary uv venv for build deps
* CI: add build venv scripts path for wheel repair
* feat: Add GitHub PR and issue templates for better contributor experience
* simplify: Make templates more concise and user-friendly
* fix: enable is_compact=False, is_recompute=True
* feat: update when recompute
* test
* fix: real recompute
* refactor
* fix: compare with no-recompute
* fix: test
* feat: Add ARM64 Linux wheel support for leann-backend-hnsw
* fix: Use OpenBLAS for ARM64 Linux builds instead of Intel MKL
* fix: Configure Faiss with SVE optimization for ARM64 builds
- Set FAISS_OPT_LEVEL to "sve" for ARM64 architecture
- Disable x86-specific SIMD instructions (AVX2, AVX512, SSE4.1)
- Use ARM64-native SVE optimization as per Faiss conda build scripts
- Add architecture detection and proper configuration messages
Fixes compilation error: "xmmintrin.h: No such file or directory"
on ubuntu-24.04-arm runners.
* fix: Apply ARM64 compatibility fix directly to Faiss submodule
- Modify faiss/impl/pq.cpp to use x86-specific preprocessor conditions
- Remove patch file approach in favor of direct submodule modification
- Update CMakeLists.txt to reflect the submodule changes
- Fixes ARM64 Linux compilation by preventing x86 SIMD header inclusion
This resolves the "xmmintrin.h: No such file or directory" error
when building ARM64 Linux wheels for Docker compatibility.
* chore: Update Faiss submodule to include ARM64 compatibility fix
- Points to commit ed96ff7d with x86-specific preprocessor conditions
- Enables successful ARM64 Linux wheel builds
* retrigger ci
* fix: Use different optimization levels for ARM64 based on platform
- Use SVE optimization only for ARM64 Linux
- Use generic optimization for ARM64 macOS to avoid clang SVE issues
- Fixes macOS ARM64 compilation errors with SVE instructions
* feat: Update DiskANN submodule with OpenBLAS fallback support
- Points to commit 5c396c4 with ARM64 Linux OpenBLAS support
- Enables DiskANN to build on ARM64 Linux using standard BLAS libraries
- Resolves Intel MKL dependency issues for Docker ARM64 deployments
* fix: Update DiskANN submodule with ZeroMQ polling configuration
- Points to commit 3a1016e with explicit polling method setup
- Resolves ZeroMQ autodetection issues on ARM64 Linux
- Ensures stable cross-platform ZeroMQ builds
* retrigger ci
* fix: Update DiskANN submodule with ARM64 compiler flags fix
- Points to commit a0dc600 with architecture-specific compiler flags
- Removes x86 SIMD flags on ARM64 Linux to fix compilation errors
- Enables successful ARM64 Linux wheel builds
* fix: Update DiskANN submodule with ARM64 compiler flags fix
- Points to commit 0921664 with architecture-specific compiler flags
- Removes x86 SIMD flags on ARM64 Linux to fix compilation errors
- Enables successful ARM64 Linux wheel builds
* retrigger ci
* fix: Update DiskANN submodule with cross-platform prefetch support
- Points to commit 39192d6 with unified prefetch macros
- Replaces all Intel-specific _mm_prefetch calls with cross-platform macros
- Enables ARM64 Linux compatibility while maintaining x86 performance
- Resolves all remaining compilation errors for ARM64 builds
* fix: Update DiskANN submodule with corrected ARM64 compatibility fixes
- Points to commit 3cb87a8 with proper x86 platform detection
- Includes ARM64 fallback for AVXDistanceInnerProductFloat function
- Resolves all remaining '__m256 was not declared' compilation errors
- Enables successful ARM64 Linux wheel builds for Docker compatibility
* fix: Update DiskANN submodule with template type handling fix
- Points to commit d396bc3 with corrected template type handling
- Fixes DistanceInnerProduct template instantiation for int8_t/uint8_t types
- Resolves 'cannot convert const signed char* to const float*' error
- Completes ARM64 Linux compilation compatibility
* fix: Update DiskANN submodule with DistanceFastL2::norm template fix
- Points to commit 69d9a99 with corrected template type handling
- Fixes DistanceFastL2::norm template instantiation for int8_t/uint8_t types
- Resolves another 'cannot convert const signed char* to const float*' error
- Continues ARM64 Linux compilation compatibility improvements
* fix: Update DiskANN submodule with LAPACKE header detection
- Points to commit 64a9e01 with LAPACKE header path configuration
- Adds pkg-config based detection for LAPACKE include directories
- Resolves 'lapacke.h: No such file or directory' compilation error
- Completes OpenBLAS integration for ARM64 Linux builds
* fix: Update DiskANN submodule with enhanced LAPACKE header detection
- Points to commit 18d0721 with fallback LAPACKE header search paths
- Checks multiple standard locations for lapacke.h on various systems
- Improves ARM64 Linux compatibility for OpenBLAS builds
- Should resolve 'lapacke.h: No such file or directory' errors
* fix: Add liblapacke-dev package for ARM64 Linux builds
- Add liblapacke-dev to ARM64 dependencies alongside libopenblas-dev
- Provides lapacke.h header file needed for LAPACK C interface
- Fixes 'lapacke.h: No such file or directory' compilation error
- Enables complete OpenBLAS + LAPACKE support for ARM64 wheel builds
* fix: Update DiskANN submodule with cosine_similarity.h x86 intrinsics fix
- Points to commit dbb17eb with corrected conditional compilation
- Fixes immintrin.h inclusion for ARM64 compatibility in cosine_similarity.h
- Resolves 'immintrin.h: No such file or directory' error
- Continues systematic ARM64 Linux compilation fixes
* fix: Update DiskANN submodule with LAPACKE library linking fix
- Points to commit 19f9603 with explicit LAPACKE library discovery and linking
- Resolves 'undefined symbol: LAPACKE_sgesdd' runtime error on ARM64 Linux
- Completes ARM64 Linux wheel build compatibility for Docker deployments
---------
Co-authored-by: Claude <noreply@anthropic.com>
* Metadata filtering initial version
* Metadata filtering initial version
* Fixes linter issues
* Cleanup code
* Clean up code and README
* Fix after review
* Use UV in example
* Merge main into feature/metadata-filtering
* feat(core): Add AST-aware code chunking with astchunk integration
This PR introduces intelligent code chunking that preserves semantic boundaries
(functions, classes, methods) for better code understanding in RAG applications.
Key Features:
- AST-aware chunking for Python, Java, C#, TypeScript files
- Graceful fallback to traditional chunking for unsupported languages
- New specialized code RAG application for repositories
- Enhanced CLI with --use-ast-chunking flag
- Comprehensive test suite with integration tests
Technical Implementation:
- New chunking_utils.py module with enhanced chunking logic
- Extended base RAG framework with AST chunking arguments
- Updated document RAG with --enable-code-chunking flag
- CLI integration with proper error handling and fallback
Benefits:
- Better semantic understanding of code structure
- Improved search quality for code-related queries
- Maintains backward compatibility with existing workflows
- Supports mixed content (code + documentation) seamlessly
Dependencies:
- Added astchunk and tree-sitter parsers to pyproject.toml
- All dependencies are optional - fallback works without them
Testing:
- Comprehensive test suite in test_astchunk_integration.py
- Integration tests with document RAG
- Error handling and edge case coverage
Documentation:
- Updated README.md with AST chunking highlights
- Added ASTCHUNK_INTEGRATION.md with complete guide
- Updated features.md with new capabilities
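A hedged sketch of the AST-aware path with its graceful fallback; the astchunk API shown is an assumption, as is the traditional_chunks() helper:
```python
SUPPORTED_LANGUAGES = {"python", "java", "c_sharp", "typescript"}

def chunk_source(text: str, language: str | None):
    if language in SUPPORTED_LANGUAGES:
        try:
            from astchunk import ASTChunkBuilder  # optional dependency
            builder = ASTChunkBuilder(
                max_chunk_size=512, language=language, metadata_template="default"
            )
            return builder.chunkify(text)  # chunks respect function/class boundaries
        except ImportError:
            pass  # astchunk missing: fall through to traditional chunking
    return traditional_chunks(text)  # hypothetical fallback helper
```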
* Refactored chunk utils
* Remove useless import
* Update README.md
* Update apps/chunking/utils.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Update apps/code_rag.py
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Fix issue
* apply suggestion from @Copilot
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Fixes after pr review
* Fix tests not passing
* Fix linter error for documentation files
* Update .gitignore with unwanted files
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Andy Lee <andylizf@outlook.com>