Compare commits: feature/ad...feature/sk
24 Commits
| Author | SHA1 | Date |
|---|---|---|
| | 3de0a94efc | |
| | 58c12e3eed | |
| | 92739c7899 | |
| | 6709afe38b | |
| | ded0701504 | |
| | e3518a31ed | |
| | d5f6ca61ed | |
| | b13b52e78c | |
| | 79ca32e87b | |
| | 16f4572fe7 | |
| | 2bd557d1cf | |
| | 3e162fb177 | |
| | b988f0ab5b | |
| | 43cb500ed8 | |
| | 0361725323 | |
| | 3f81861cba | |
| | fa2a775867 | |
| | 737dfc960c | |
| | c994635af6 | |
| | 23b80647c5 | |
| | 50121972ee | |
| | 07e5f10204 | |
| | 58711bff7e | |
| | a69464eb16 | |
README.md (17 changed lines)
@@ -31,7 +31,7 @@ LEANN achieves this through *graph-based selective recomputation* with *high-deg
   <img src="assets/effects.png" alt="LEANN vs Traditional Vector DB Storage Comparison" width="70%">
 </p>
 
-> **The numbers speak for themselves:** Index 60 million text chunks in just 6GB instead of 201GB. From emails to browser history, everything fits on your laptop. [See detailed benchmarks for different applications below ↓](#-storage-comparison)
+> **The numbers speak for themselves:** Index 60 million text chunks in just 6GB instead of 201GB. From emails to browser history, everything fits on your laptop. [See detailed benchmarks for different applications below ↓](#storage-comparison)
 
 
 🔒 **Privacy:** Your data never leaves your laptop. No OpenAI, no cloud, no "terms of service".
@@ -70,8 +70,8 @@ uv venv
 source .venv/bin/activate
 uv pip install leann
 ```
-<!--
-> Low-resource? See “Low-resource setups” in the [Configuration Guide](docs/configuration-guide.md#low-resource-setups). -->
+
+> Low-resource? See “Low-resource setups” in the [Configuration Guide](docs/configuration-guide.md#low-resource-setups).
 
 <details>
 <summary>
@@ -426,21 +426,21 @@ Once the index is built, you can ask questions like:
 **The future of code assistance is here.** Transform your development workflow with LEANN's native MCP integration for Claude Code. Index your entire codebase and get intelligent code assistance directly in your IDE.
 
 **Key features:**
-- 🔍 **Semantic code search** across your entire project, fully local index and lightweight
+- 🔍 **Semantic code search** across your entire project
 - 📚 **Context-aware assistance** for debugging and development
 - 🚀 **Zero-config setup** with automatic language detection
 
 ```bash
 # Install LEANN globally for MCP integration
-uv tool install leann-core --with leann
-claude mcp add --scope user leann-server -- leann_mcp
+uv tool install leann-core
+
 # Setup is automatic - just start using Claude Code!
 ```
 Try our fully agentic pipeline with auto query rewriting, semantic search planning, and more:
 
 
 
-**🔥 Ready to supercharge your coding?** [Complete Setup Guide →](packages/leann-mcp/README.md)
+**Ready to supercharge your coding?** [Complete Setup Guide →](packages/leann-mcp/README.md)
 
 ## 🖥️ Command Line Interface
 
@@ -457,8 +457,7 @@ leann --help
 **To make it globally available:**
 ```bash
 # Install the LEANN CLI globally using uv tool
-uv tool install leann-core --with leann
-
+uv tool install leann-core
 
 # Now you can use leann from anywhere without activating venv
 leann --help
@@ -46,7 +46,6 @@ def compute_embeddings(
             - "sentence-transformers": Use sentence-transformers library (default)
             - "mlx": Use MLX backend for Apple Silicon
             - "openai": Use OpenAI embedding API
-            - "gemini": Use Google Gemini embedding API
         use_server: Whether to use embedding server (True for search, False for build)
 
     Returns:
@@ -680,52 +680,6 @@ class HFChat(LLMInterface):
         return response.strip()
 
 
-class GeminiChat(LLMInterface):
-    """LLM interface for Google Gemini models."""
-
-    def __init__(self, model: str = "gemini-2.5-flash", api_key: Optional[str] = None):
-        self.model = model
-        self.api_key = api_key or os.getenv("GEMINI_API_KEY")
-
-        if not self.api_key:
-            raise ValueError(
-                "Gemini API key is required. Set GEMINI_API_KEY environment variable or pass api_key parameter."
-            )
-
-        logger.info(f"Initializing Gemini Chat with model='{model}'")
-
-        try:
-            import google.genai as genai
-
-            self.client = genai.Client(api_key=self.api_key)
-        except ImportError:
-            raise ImportError(
-                "The 'google-genai' library is required for Gemini models. Please install it with 'uv pip install google-genai'."
-            )
-
-    def ask(self, prompt: str, **kwargs) -> str:
-        logger.info(f"Sending request to Gemini with model {self.model}")
-
-        try:
-            # Set generation configuration
-            generation_config = {
-                "temperature": kwargs.get("temperature", 0.7),
-                "max_output_tokens": kwargs.get("max_tokens", 1000),
-            }
-
-            # Handle top_p parameter
-            if "top_p" in kwargs:
-                generation_config["top_p"] = kwargs["top_p"]
-
-            response = self.client.models.generate_content(
-                model=self.model, contents=prompt, config=generation_config
-            )
-            return response.text.strip()
-        except Exception as e:
-            logger.error(f"Error communicating with Gemini: {e}")
-            return f"Error: Could not get a response from Gemini. Details: {e}"
-
-
 class OpenAIChat(LLMInterface):
     """LLM interface for OpenAI models."""
 
@@ -839,8 +793,6 @@ def get_llm(llm_config: Optional[dict[str, Any]] = None) -> LLMInterface:
         return HFChat(model_name=model or "deepseek-ai/deepseek-llm-7b-chat")
     elif llm_type == "openai":
         return OpenAIChat(model=model or "gpt-4o", api_key=llm_config.get("api_key"))
-    elif llm_type == "gemini":
-        return GeminiChat(model=model or "gemini-2.5-flash", api_key=llm_config.get("api_key"))
     elif llm_type == "simulated":
         return SimulatedChat()
     else:
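For context (not part of the diff): on the old side a Gemini-backed chat is obtained through `get_llm`. A minimal usage sketch; the `type`/`model` config keys are assumptions inferred from the dispatch above (only `api_key` appears verbatim), and the prompt and kwargs are illustrative:

```python
# Hypothetical usage of the old-side Gemini path via get_llm (keys "type"/"model" assumed).
llm = get_llm({"type": "gemini", "model": "gemini-2.5-flash", "api_key": "<GEMINI_API_KEY>"})
answer = llm.ask("Summarize this repository's indexing approach.", temperature=0.2, max_tokens=256)
print(answer)
```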
@@ -148,30 +148,6 @@ Examples:
         type=str,
         help="Comma-separated list of file extensions to include (e.g., '.txt,.pdf,.pptx'). If not specified, uses default supported types.",
     )
-    build_parser.add_argument(
-        "--doc-chunk-size",
-        type=int,
-        default=256,
-        help="Document chunk size in tokens/characters (default: 256)",
-    )
-    build_parser.add_argument(
-        "--doc-chunk-overlap",
-        type=int,
-        default=128,
-        help="Document chunk overlap (default: 128)",
-    )
-    build_parser.add_argument(
-        "--code-chunk-size",
-        type=int,
-        default=512,
-        help="Code chunk size in tokens/lines (default: 512)",
-    )
-    build_parser.add_argument(
-        "--code-chunk-overlap",
-        type=int,
-        default=50,
-        help="Code chunk overlap (default: 50)",
-    )
 
     # Search command
     search_parser = subparsers.add_parser("search", help="Search documents")
@@ -750,37 +726,6 @@ Examples:
             print(f"Index '{index_name}' already exists. Use --force to rebuild.")
             return
 
-        # Configure chunking based on CLI args before loading documents
-        # Guard against invalid configurations
-        doc_chunk_size = max(1, int(args.doc_chunk_size))
-        doc_chunk_overlap = max(0, int(args.doc_chunk_overlap))
-        if doc_chunk_overlap >= doc_chunk_size:
-            print(
-                f"⚠️ Adjusting doc chunk overlap from {doc_chunk_overlap} to {doc_chunk_size - 1} (must be < chunk size)"
-            )
-            doc_chunk_overlap = doc_chunk_size - 1
-
-        code_chunk_size = max(1, int(args.code_chunk_size))
-        code_chunk_overlap = max(0, int(args.code_chunk_overlap))
-        if code_chunk_overlap >= code_chunk_size:
-            print(
-                f"⚠️ Adjusting code chunk overlap from {code_chunk_overlap} to {code_chunk_size - 1} (must be < chunk size)"
-            )
-            code_chunk_overlap = code_chunk_size - 1
-
-        self.node_parser = SentenceSplitter(
-            chunk_size=doc_chunk_size,
-            chunk_overlap=doc_chunk_overlap,
-            separator=" ",
-            paragraph_separator="\n\n",
-        )
-        self.code_parser = SentenceSplitter(
-            chunk_size=code_chunk_size,
-            chunk_overlap=code_chunk_overlap,
-            separator="\n",
-            paragraph_separator="\n\n",
-        )
-
         all_texts = self.load_documents(docs_paths, args.file_types)
         if not all_texts:
             print("No documents found")
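For reference (not part of the diff), the guard removed above keeps the overlap strictly below the chunk size. A standalone sketch of that rule, with a hypothetical helper name:

```python
def clamp_chunking(chunk_size: int, chunk_overlap: int) -> tuple[int, int]:
    # Mirrors the removed CLI guard: size >= 1, overlap >= 0, overlap < size.
    chunk_size = max(1, int(chunk_size))
    chunk_overlap = max(0, int(chunk_overlap))
    if chunk_overlap >= chunk_size:
        chunk_overlap = chunk_size - 1  # e.g. size=256, overlap=300 -> overlap=255
    return chunk_size, chunk_overlap
```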
@@ -57,8 +57,6 @@ def compute_embeddings(
         return compute_embeddings_mlx(texts, model_name)
     elif mode == "ollama":
         return compute_embeddings_ollama(texts, model_name, is_build=is_build)
-    elif mode == "gemini":
-        return compute_embeddings_gemini(texts, model_name, is_build=is_build)
     else:
         raise ValueError(f"Unsupported embedding mode: {mode}")
 
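For orientation (not part of the diff), the dispatch above selects a backend from `mode`. A hypothetical call; the full `compute_embeddings` signature is not shown in this diff, so the argument order and names here are assumptions:

```python
import numpy as np

# Assumed signature: compute_embeddings(texts, model_name, mode=..., is_build=...)
texts = ["LEANN keeps indexes small.", "Embeddings are recomputed on demand."]
emb = compute_embeddings(texts, "all-MiniLM-L6-v2", mode="sentence-transformers", is_build=True)
assert isinstance(emb, np.ndarray) and emb.shape[0] == len(texts)
```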
@@ -265,16 +263,8 @@ def compute_embeddings_openai(texts: list[str], model_name: str) -> np.ndarray:
     print(f"len of texts: {len(texts)}")
 
     # OpenAI has limits on batch size and input length
-    max_batch_size = 800  # Conservative batch size because the token limit is 300K
+    max_batch_size = 1000  # Conservative batch size
     all_embeddings = []
-    # get the avg len of texts
-    avg_len = sum(len(text) for text in texts) / len(texts)
-    print(f"avg len of texts: {avg_len}")
-    # if avg len is less than 1000, use the max batch size
-    if avg_len > 300:
-        max_batch_size = 500
-
-    # if avg len is less than 1000, use the max batch size
 
     try:
         from tqdm import tqdm
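The old side picks the OpenAI batch size from the average text length. A small standalone sketch of that heuristic (hypothetical helper; thresholds copied from the removed lines):

```python
def pick_openai_batch_size(texts: list[str]) -> int:
    # Old-side heuristic: start at 800, drop to 500 when texts are long on average.
    avg_len = sum(len(t) for t in texts) / len(texts)
    return 500 if avg_len > 300 else 800
```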
@@ -660,83 +650,3 @@ def compute_embeddings_ollama(
     logger.info(f"Generated {len(embeddings)} embeddings, dimension: {embeddings.shape[1]}")
 
     return embeddings
-
-
-def compute_embeddings_gemini(
-    texts: list[str], model_name: str = "text-embedding-004", is_build: bool = False
-) -> np.ndarray:
-    """
-    Compute embeddings using Google Gemini API.
-
-    Args:
-        texts: List of texts to compute embeddings for
-        model_name: Gemini model name (default: "text-embedding-004")
-        is_build: Whether this is a build operation (shows progress bar)
-
-    Returns:
-        Embeddings array, shape: (len(texts), embedding_dim)
-    """
-    try:
-        import os
-
-        import google.genai as genai
-    except ImportError as e:
-        raise ImportError(f"Google GenAI package not installed: {e}")
-
-    api_key = os.getenv("GEMINI_API_KEY")
-    if not api_key:
-        raise RuntimeError("GEMINI_API_KEY environment variable not set")
-
-    # Cache Gemini client
-    cache_key = "gemini_client"
-    if cache_key in _model_cache:
-        client = _model_cache[cache_key]
-    else:
-        client = genai.Client(api_key=api_key)
-        _model_cache[cache_key] = client
-        logger.info("Gemini client cached")
-
-    logger.info(
-        f"Computing embeddings for {len(texts)} texts using Gemini API, model: '{model_name}'"
-    )
-
-    # Gemini supports batch embedding
-    max_batch_size = 100  # Conservative batch size for Gemini
-    all_embeddings = []
-
-    try:
-        from tqdm import tqdm
-
-        total_batches = (len(texts) + max_batch_size - 1) // max_batch_size
-        batch_range = range(0, len(texts), max_batch_size)
-        batch_iterator = tqdm(
-            batch_range, desc="Computing embeddings", unit="batch", total=total_batches
-        )
-    except ImportError:
-        # Fallback when tqdm is not available
-        batch_iterator = range(0, len(texts), max_batch_size)
-
-    for i in batch_iterator:
-        batch_texts = texts[i : i + max_batch_size]
-
-        try:
-            # Use the embed_content method from the new Google GenAI SDK
-            response = client.models.embed_content(
-                model=model_name,
-                contents=batch_texts,
-                config=genai.types.EmbedContentConfig(
-                    task_type="RETRIEVAL_DOCUMENT"  # For document embedding
-                ),
-            )
-
-            # Extract embeddings from response
-            for embedding_data in response.embeddings:
-                all_embeddings.append(embedding_data.values)
-        except Exception as e:
-            logger.error(f"Batch {i} failed: {e}")
-            raise
-
-    embeddings = np.array(all_embeddings, dtype=np.float32)
-    logger.info(f"Generated {len(embeddings)} embeddings, dimension: {embeddings.shape[1]}")
-
-    return embeddings
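A minimal sketch of how the removed `compute_embeddings_gemini` would be called, assuming `GEMINI_API_KEY` is set in the environment (illustrative only; the function exists only on the old side):

```python
# Assumes: export GEMINI_API_KEY=...  (the function raises RuntimeError otherwise)
vectors = compute_embeddings_gemini(
    ["local-first semantic search", "graph-based selective recomputation"],
    model_name="text-embedding-004",
    is_build=True,  # per the docstring, shows a progress bar during builds
)
print(vectors.shape)  # (2, embedding_dim), float32
```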
@@ -64,6 +64,19 @@ def handle_request(request):
                 "required": ["index_name", "query"],
             },
         },
+        {
+            "name": "leann_status",
+            "description": "📊 Check the health and stats of your code indexes - like a medical checkup for your codebase knowledge!",
+            "inputSchema": {
+                "type": "object",
+                "properties": {
+                    "index_name": {
+                        "type": "string",
+                        "description": "Optional: Name of specific index to check. If not provided, shows status of all indexes.",
+                    }
+                },
+            },
+        },
         {
             "name": "leann_list",
             "description": "📋 Show all your indexed codebases - your personal code library! Use this to see what's available for search.",
@@ -105,6 +118,15 @@ def handle_request(request):
         ]
         result = subprocess.run(cmd, capture_output=True, text=True)
 
+    elif tool_name == "leann_status":
+        if args.get("index_name"):
+            # Check specific index status - for now, we'll use leann list and filter
+            result = subprocess.run(["leann", "list"], capture_output=True, text=True)
+            # We could enhance this to show more detailed status per index
+        else:
+            # Show all indexes status
+            result = subprocess.run(["leann", "list"], capture_output=True, text=True)
+
     elif tool_name == "leann_list":
         result = subprocess.run(["leann", "list"], capture_output=True, text=True)
 
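For context (not part of the diff), an MCP client would invoke the new `leann_status` tool with a `tools/call` request shaped roughly like the dict below; only the tool name and the `index_name` argument come from the schema above, the rest is an illustrative payload:

```python
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "leann_status",
        "arguments": {"index_name": "my-project"},  # omit to report on all indexes
    },
}
# Per the handler above, this shells out to `leann list` under the hood.
```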
@@ -13,20 +13,10 @@ This installs the `leann` CLI into an isolated tool environment and includes bot
 
 ## 🚀 Quick Setup
 
-Add the LEANN MCP server to Claude Code. Choose the scope based on how widely you want it available. Below is the command to install it globally; if you prefer a local install, skip this step:
+Add the LEANN MCP server to Claude Code:
 
 ```bash
-# Global (recommended): available in all projects for your user
-claude mcp add --scope user leann-server -- leann_mcp
-```
-
-- `leann-server`: the display name of the MCP server in Claude Code (you can change it).
-- `leann_mcp`: the Python entry point installed with LEANN that starts the MCP server.
-
-Verify it is registered globally:
-
-```bash
-claude mcp list | cat
+claude mcp add leann-server -- leann_mcp
 ```
 
 ## 🛠️ Available Tools
@@ -35,36 +25,27 @@ Once connected, you'll have access to these powerful semantic search tools in Cl
 
 - **`leann_list`** - List all available indexes across your projects
 - **`leann_search`** - Perform semantic searches across code and documents
+- **`leann_ask`** - Ask natural language questions and get AI-powered answers from your codebase
 
 ## 🎯 Quick Start Example
 
 ```bash
-# Add locally if you did not add it globally (current folder only; default if --scope is omitted)
-claude mcp add leann-server -- leann_mcp
-
 # Build an index for your project (change to your actual path)
-# See the advanced examples below for more ways to configure indexing
-# Set the index name (replace 'my-project' with your own)
-leann build my-project --docs $(git ls-files)
+leann build my-project --docs ./
 
 # Start Claude Code
 claude
 ```
 
-## 🚀 Advanced Usage Examples to build the index
+## 🚀 Advanced Usage Examples
 
 ### Index Entire Git Repository
 ```bash
-# Index all tracked files in your Git repository.
-# Note: submodules are currently skipped; we can add them back if needed.
+# Index all tracked files in your git repository, note right now we will skip submodules, but we can add it back easily if you want
 leann build my-repo --docs $(git ls-files) --embedding-mode sentence-transformers --embedding-model all-MiniLM-L6-v2 --backend hnsw
 
-# Index only tracked Python files from Git.
+# Index only specific file types from git
 leann build my-python-code --docs $(git ls-files "*.py") --embedding-mode sentence-transformers --embedding-model all-MiniLM-L6-v2 --backend hnsw
-
-# If you encounter empty requests caused by empty files (e.g., __init__.py), exclude zero-byte files. Thanks @ww2283 for pointing [that](https://github.com/yichuan-w/LEANN/issues/48) out
-leann build leann-prospec-lig --docs $(find ./src -name "*.py" -not -empty) --embedding-mode openai --embedding-model text-embedding-3-small
 ```
 
 ### Multiple Directories and Files
@@ -92,7 +73,7 @@ leann build docs-and-configs --docs $(git ls-files "*.md" "*.yml" "*.yaml" "*.js
 ```
 
 
-## **Try this in Claude Code:**
+**Try this in Claude Code:**
 ```
 Help me understand this codebase. List available indexes and search for authentication patterns.
 ```
@@ -101,7 +82,6 @@ Help me understand this codebase. List available indexes and search for authenti
 <img src="../../assets/claude_code_leann.png" alt="LEANN in Claude Code" width="80%">
 </p>
 
-If you see a prompt asking whether to proceed with LEANN, you can now use it in your chat!
 
 ## 🧠 How It Works
 
@@ -137,11 +117,3 @@ To remove LEANN
 ```
 uv pip uninstall leann leann-backend-hnsw leann-core
 ```
-
-To globally remove LEANN (for version update)
-```
-uv tool list | cat
-uv tool uninstall leann-core
-command -v leann || echo "leann gone"
-command -v leann_mcp || echo "leann_mcp gone"
-```
@@ -1 +0,0 @@
-__all__ = []
@@ -136,9 +136,5 @@ def export_sqlite(
     connection.commit()
 
 
-def main():
-    app()
-
-
 if __name__ == "__main__":
-    main()
+    app()
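For context (not part of the diff): the removed `main()` was a thin wrapper over a Typer app, matching the `[project.scripts]` entry dropped from pyproject.toml below. A minimal sketch of that pattern with a hypothetical command (the real exporter's options are not shown here):

```python
import typer

app = typer.Typer()

@app.command()
def export(db_path: str, out: str = "export.sqlite") -> None:
    """Hypothetical command; stands in for the real export_sqlite options."""
    typer.echo(f"Exporting {db_path} -> {out}")

def main() -> None:
    # Console-script target, as in the removed
    # [project.scripts] wechat-exporter = "wechat_exporter.main:main"
    app()

if __name__ == "__main__":
    main()
```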
@@ -10,7 +10,6 @@ requires-python = ">=3.9"
 dependencies = [
     "leann-core",
     "leann-backend-hnsw",
-    "typer>=0.12.3",
     "numpy>=1.26.0",
     "torch",
     "tqdm",
@@ -85,11 +84,6 @@ documents = [
 
 [tool.setuptools]
 py-modules = []
-packages = ["wechat_exporter"]
-package-dir = { "wechat_exporter" = "packages/wechat-exporter" }
-
-[project.scripts]
-wechat-exporter = "wechat_exporter.main:main"
 
 
 [tool.uv.sources]