Fixed the actual root cause instead of just masking it in tests:
1. Root Problem:
- The C++-side ZmqDistanceComputer creates ZMQ connections but never cleans them up
- Python 3.9/3.13 are more sensitive to cleanup timing during shutdown
2. Core Fixes in SearcherBase and LeannSearcher:
- Added cleanup() method to BaseSearcher that cleans ZMQ and embedding server
- LeannSearcher.cleanup() now also handles ZMQ context cleanup
- Both HNSW and DiskANN searchers now properly delete C++ index objects
3. Backend-Specific Cleanup:
- HNSWSearcher.cleanup(): Deletes self.index to trigger C++ destructors
- DiskannSearcher.cleanup(): Deletes self._index and resets state
- Both force garbage collection after deletion
4. Test Infrastructure:
- Added auto_cleanup_searcher fixture for explicit resource management
- Global cleanup now more aggressive with ZMQ context destruction
This is the proper fix: cleaning up resources at the source, not just
working around the issue in tests. The hanging was caused by C++-side
ZMQ connections not being properly terminated when is_recompute=True.
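The backend cleanup flow above can be sketched as follows. The class and attribute names (HNSWSearcher, self.index) follow the commit text, but the method body is an assumption about the implementation, not the actual code:

```python
import gc

class HNSWSearcher:
    """Illustrative stand-in for the real backend searcher class."""

    def __init__(self, index):
        self.index = index  # C++ index object from the backend bindings

    def cleanup(self):
        # Dropping the last Python reference triggers the C++
        # destructor, which tears down the ZMQ connections it holds.
        if hasattr(self, "index"):
            del self.index
        # Collect immediately so the destructor runs now, not during
        # interpreter shutdown when ZMQ teardown timing is unreliable.
        gc.collect()
```

DiskannSearcher.cleanup() would follow the same shape with self._index; the hasattr guard makes repeated cleanup() calls safe.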
Based on the user's excellent analysis, implemented comprehensive fixes:
1. ZMQ Socket Cleanup:
- Set LINGER=0 on all ZMQ sockets (client and server)
- Use try-finally blocks to ensure socket.close() and context.term() always run
- Prevents blocking on exit when ZMQ contexts have pending operations
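The LINGER=0 plus try-finally pattern can be sketched as below. The socket and context are passed in (duck-typed) so the teardown logic is shown without assuming a live pyzmq install; ZMQ_LINGER mirrors pyzmq's zmq.LINGER constant, and the helper name is illustrative:

```python
ZMQ_LINGER = 17  # libzmq option number behind pyzmq's zmq.LINGER

def close_zmq_pair(sock, ctx):
    """Close a socket and terminate its context without blocking.

    With LINGER=0, close() discards unsent messages instead of waiting
    for delivery, so ctx.term() returns immediately even while the
    interpreter is shutting down.
    """
    try:
        sock.setsockopt(ZMQ_LINGER, 0)  # never wait on pending sends
    finally:
        sock.close()  # returns immediately with LINGER=0
        ctx.term()    # safe: no pending operations can block it
```

With the default LINGER=-1, term() blocks until all queued messages are delivered, which is exactly the shutdown hang described above.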
2. Global Test Cleanup:
- Added tests/conftest.py with session-scoped cleanup fixture
- Cleans up leftover ZMQ contexts and child processes after all tests
- Lists remaining threads for debugging
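A minimal sketch of that session-level cleanup, assuming stdlib-only process tracking (the real conftest.py also uses psutil for processes multiprocessing doesn't know about). In tests/conftest.py this helper would be wrapped in a session-scoped autouse fixture:

```python
import gc
import multiprocessing
import threading

# In conftest.py this pairs with:
#     @pytest.fixture(scope="session", autouse=True)
#     def global_cleanup():
#         yield
#         cleanup_leftovers()

def cleanup_leftovers() -> list[str]:
    # Terminate any child processes the tests left behind.
    for proc in multiprocessing.active_children():
        proc.terminate()
        proc.join(timeout=5)
    # Collect unreachable objects now; pyzmq contexts call term() in
    # their destructor, so this releases leaked ZMQ resources.
    gc.collect()
    # Report surviving threads to help debug hung shutdowns.
    return [t.name for t in threading.enumerate()]
```

The function names and return shape are illustrative; the point is doing the teardown inside the pytest session rather than relying on atexit handlers.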
3. CI Improvements:
- Apply timeout to ALL Python versions on Linux (not just 3.13)
- Increased timeout to 180s for better reliability
- Added process cleanup (pkill) on timeout
4. Dependencies:
- Added psutil>=5.9.0 to test dependencies for process management
Root cause: Python 3.9/3.13 are more sensitive to cleanup timing during
interpreter shutdown. ZMQ's default LINGER=-1 was blocking exit, and
atexit handlers were unreliable for cleanup.
This should resolve the 'all tests pass but CI hangs' issue.
- Added OS check (== Linux) before using the timeout command
- macOS doesn't have GNU timeout by default, so skip it there
- Still run tests with verbose output on all platforms
- This avoids 'timeout: command not found' error on macOS CI
- Changed pytest-anyio to anyio (the correct package name)
- The anyio package includes built-in pytest plugin support
- pytest-anyio==0.0.0 was causing dependency resolution failures
- anyio>=4.0 provides the pytest plugin for async test support
- Added timeout --signal=INT to pytest runs on Python 3.13
- This will interrupt hanging tests and provide full traceback
- Added extra debugging steps for Python 3.13 to isolate the issue:
- Test collection only with timeout
- Run single simple test with timeout
- Reference: https://youtu.be/QRywzsBftfc (debugging hanging tests)
- Will help identify if hanging occurs during collection or execution
- Updated pytest to >=8.3.0 (required for Python 3.13 support)
- Updated pytest-cov to >=5.0
- Updated pytest-xdist to >=3.5
- Updated pytest-timeout to >=2.3
- Added pytest-anyio>=4.0 for async test support with Python 3.13
- These version requirements ensure compatibility with Python 3.13
- No need to disable Python 3.13 in CI matrix
- Skip the test in CI environment to avoid hanging on OpenAI API calls
- Add 60-second timeout decorator for local runs
- Import ci_timeout from test_timeout module
- The test uses OpenAI embeddings which can hang due to network/API issues
- Add 'simulated' to the LLM choices in base_rag_example.py
- Handle simulated case in get_llm_config() method
- This allows tests to use --llm simulated to avoid API costs
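An illustrative sketch of how get_llm_config() might branch on the new choice; the method signature, return shape, and the other provider names are placeholders, not the actual base_rag_example.py code:

```python
def get_llm_config(llm: str) -> dict:
    """Map the --llm CLI choice to a provider config."""
    if llm == "simulated":
        # No API key and no network calls: deterministic canned
        # responses, so tests run without incurring API costs.
        return {"type": "simulated"}
    if llm in ("openai", "ollama", "hf"):  # placeholder choices
        return {"type": llm}
    raise ValueError(f"unknown --llm choice: {llm}")
```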
- Improve grammar and sentence structure in MCP section
- Add proper markdown image formatting with relative paths
- Optimize mcp_leann.png size (1.3MB -> 224KB)
- Update data description to be more specific about Chinese content
- Add flush=True to all print statements in convert_to_csr.py to prevent buffer deadlock
- Redirect embedding server stdout/stderr to DEVNULL in CI environment (CI=true)
- Fix timeout in embedding_server_manager.stop_server() final wait call
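The first two fixes can be sketched like this; the helper names are illustrative, but the mechanics (flush=True on prints, DEVNULL under CI=true) follow the commit text:

```python
import os
import subprocess
import sys

def log(msg: str) -> None:
    # flush=True stops the child from blocking when its stdout pipe
    # buffer fills while the parent isn't reading yet (the deadlock
    # convert_to_csr.py hit).
    print(msg, flush=True)

def start_server(cmd):
    # In CI, discard server output entirely so a full, unread pipe
    # can never stall the embedding server.
    sink = subprocess.DEVNULL if os.environ.get("CI") == "true" else None
    return subprocess.Popen(cmd, stdout=sink, stderr=sink)
```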
- Remove --no-index so numpy/scipy/etc can be resolved on Python 3.13
- Keep --find-links to force our packages from local dist
Fixes: dependency resolution failure on Ubuntu Python 3.13 (numpy missing)
- Build leann-core and leann on macOS too
- Install all packages via --find-links and --no-index across platforms
- Lower macOS MACOSX_DEPLOYMENT_TARGET to 12.0 for wider compatibility
This ensures consistency and avoids PyPI drift while improving macOS compatibility.
- Replace 'int | None' with 'Optional[int]' everywhere
- Replace 'subprocess.Popen | None' with 'Optional[subprocess.Popen]'
- Add Optional import to all affected files
- Update ruff target-version from py310 to py39
- The '|' syntax for Union types was introduced in Python 3.10 (PEP 604)
Fixes TypeError: unsupported operand type(s) for |: 'type' and 'NoneType'
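A small example of the rewrite (the function itself is made up for illustration; only the annotation change matters):

```python
import subprocess
from typing import Optional

# Before (requires Python 3.10+, PEP 604):
#     def exit_code(proc: subprocess.Popen | None) -> int | None: ...
# After (works on Python 3.9):
def exit_code(proc: Optional[subprocess.Popen],
              default: Optional[int] = None) -> Optional[int]:
    """Return the process's exit code, or `default` when proc is None."""
    if proc is None:
        return default
    return proc.wait()
```

The `X | None` form fails at import time on 3.9 because annotations on function definitions are evaluated eagerly, which is why the error surfaces as a TypeError rather than a SyntaxError.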
- Ubuntu: Install all packages from local builds with --no-index
- macOS: Install core packages from PyPI, backends from local builds
- Remove --no-index for macOS backend installation to allow dependency resolution
- Pin versions when installing from PyPI to ensure consistency
Fixes error: 'leann-core was not found in the provided package locations'
- Explicitly specify Python version when creating venv with uv
- Prevents mismatch between build Python (e.g., 3.10) and test Python
- Fixes: _diskannpy.cpython-310-x86_64-linux-gnu.so in Python 3.11 error
The issue: uv venv was defaulting to Python 3.11 regardless of matrix version
- Use --find-links with --no-index to let uv select correct wheel
- Prevents installing wrong Python version wheel (e.g., cp310 for Python 3.11)
- Fixes ImportError: _diskannpy.cpython-310-x86_64-linux-gnu.so in Python 3.11
The issue was that *.whl glob matched all Python versions, causing
uv to potentially install a cp310 wheel in a Python 3.11 environment.
- Remove '--plat linux_x86_64' which is not a valid platform tag
- Let auditwheel automatically determine the correct platform
- Based on CI output, it will use manylinux_2_35_x86_64
This was causing auditwheel repair to fail, preventing proper wheel repair
- Check wheel contents before and after auditwheel repair
- Verify _diskannpy module installation after pip install
- List installed package directory structure
- Add explicit platform tag for auditwheel repair
This helps diagnose why ImportError: cannot import name '_diskannpy' occurs
- Change from --find-links to direct wheel installation with --force-reinstall
- This ensures CI uses locally built packages with latest source code
- Prevents uv from using PyPI packages with same version number but old code
- Fixes CI test failures where old code (without metadata_file_path) was used
Root cause: CI was installing leann-backend-diskann v0.2.1 from PyPI
instead of the locally built wheel with same version number.
- Add logging in DiskANN embedding server to show metadata_file_path
- Add debug logging in PassageManager to trace path resolution
- This will help identify why CI fails to find passage files
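A hedged sketch of the PassageManager tracing; the logger name, function name, and resolution strategy are assumptions based on the commit text:

```python
import logging
import os

logger = logging.getLogger("leann.passage_manager")

def resolve_passage_path(metadata_file_path: str, passage_file: str) -> str:
    # Resolve relative passage paths against the metadata file's
    # directory, logging each step so CI failures show exactly where
    # path resolution diverged from the local run.
    base = os.path.dirname(os.path.abspath(metadata_file_path))
    resolved = os.path.join(base, passage_file)
    logger.debug("metadata_file_path=%s", metadata_file_path)
    logger.debug("resolved passage path=%s (exists=%s)",
                 resolved, os.path.exists(resolved))
    return resolved
```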
- Pin ruff==0.12.7 in pyproject.toml dev dependencies
- Update CI to use exact ruff version instead of latest
- Add comments explaining version pinning rationale
- Ensures consistent formatting across local, CI, and pre-commit