* refactor: Unify examples interface with BaseRAGExample
  - Create BaseRAGExample base class for all RAG examples (see the sketch after this list)
  - Refactor 4 examples to use the unified interface:
    - document_rag.py (replaces main_cli_example.py)
    - email_rag.py (replaces mail_reader_leann.py)
    - browser_rag.py (replaces google_history_reader_leann.py)
    - wechat_rag.py (replaces wechat_history_reader_leann.py)
  - Maintain 100% parameter compatibility with the original files
  - Add interactive mode support for all examples
  - Unify parameter names (--max-items replaces --max-emails/--max-entries)
  - Update README.md with new examples usage
  - Add PARAMETER_CONSISTENCY.md documenting all parameter mappings
  - Keep main_cli_example.py for backward compatibility, with a migration notice

  All default values, LeannBuilder parameters, and chunking settings remain identical to ensure full compatibility with existing indexes.

* fix: Update CI tests for the new unified examples interface
  - Rename test_main_cli.py to test_document_rag.py
  - Update all references from main_cli_example.py to document_rag.py
  - Update tests/README.md documentation

  The tests now properly exercise the new unified interface while maintaining the same test coverage and functionality.

* fix: Fix pre-commit issues and update tests
  - Fix import sorting and unused imports
  - Update type annotations to use built-in types (list, dict) instead of typing.List/Dict
  - Fix trailing whitespace and end-of-file issues
  - Replace a Chinese fullwidth comma with a regular comma
  - Update test_main_cli.py to test_document_rag.py
  - Add a backward compatibility test for main_cli_example.py
  - Pass all pre-commit hooks (ruff, ruff-format, etc.)

* refactor: Remove old example scripts and migration references
  - Delete old example scripts (mail_reader_leann.py, google_history_reader_leann.py, etc.)
  - Remove migration hints and backward compatibility
  - Update tests to use the new unified examples directly
  - Clean up all references to old script names
  - Users now only see the new unified interface

* fix: Restore embedding-mode parameter to all examples
  - All examples now have the --embedding-mode parameter (a unified interface benefit)
  - Default is 'sentence-transformers' (consistent with the original behavior)
  - Users can now use OpenAI or MLX embeddings with any data source
  - Maintains functional equivalence with the original scripts

* docs: Improve parameter categorization in README
  - Clearly separate core (shared) vs. specific parameters
  - Move LLM and embedding examples to the 'Example Commands' section
  - Add descriptive comments for all specific parameters
  - Keep only truly data-source-specific parameters in the specific sections

* docs: Make example commands more representative
  - Add default values to parameter descriptions
  - Replace generic examples with real-world use cases
  - Focus on data-source-specific features in examples
  - Remove redundant demonstrations of common parameters

* docs: Reorganize parameter documentation structure
  - Move common parameters to a dedicated section before all examples
  - Rename sections to 'X-Specific Arguments' for clarity
  - Remove duplicate common parameters from individual examples
  - Better information architecture for users

* docs: polish applications

* docs: Add CLI installation instructions
  - Add two installation options: venv and global uv tool
  - Clearly explain when to use each option
  - Make the CLI more accessible for daily use

* docs: Clarify CLI global installation process
  - Explain the transition from venv to global installation
  - Add an upgrade command for the global installation
  - Make it clear that a global install allows usage without venv activation

* docs: Add collapsible section for CLI installation
  - Wrap CLI installation instructions in details/summary tags
  - Keep consistent with other collapsible sections in README
  - Improve document readability and navigation

* style: format

* docs: Fix collapsible sections
  - Make Common Parameters collapsible (as it's lengthy reference material)
  - Keep CLI Installation visible (important for users to see immediately)
  - Better information hierarchy

* docs: Add introduction for Common Parameters section
  - Add a 'Flexible Configuration' heading with a descriptive sentence
  - Create parallel structure with the 'Generation Model Setup' section
  - Improve document flow and readability

* docs: nit

* fix: Fix issues in unified examples
  - Add smart path detection for the data directory
  - Fix add_texts -> add_text method call
  - Handle running from both the project root and the examples directory

* fix: Fix async/await and add_text issues in unified examples
  - Remove incorrect await from chat.ask() calls (not async)
  - Fix add_texts -> add_text method calls
  - Verify that search-complexity correctly maps to the efSearch parameter
  - All examples now run successfully

* feat: Address review comments
  - Add a complexity parameter to LeannChat initialization (default: search_complexity)
  - Fix chunk-size default in the README documentation (256, not 2048)
  - Add more index-building parameters as CLI arguments:
    - --backend-name (hnsw/diskann)
    - --graph-degree (default: 32)
    - --build-complexity (default: 64)
    - --no-compact (disable compact storage)
    - --no-recompute (disable embedding recomputation)
  - Update README to document all new parameters

* feat: Add chunk-size parameters and improve file type filtering
  - Add --chunk-size and --chunk-overlap parameters to all RAG examples
  - Preserve the original default values for each data source:
    - Document: 256/128 (optimized for general documents)
    - Email: 256/25 (smaller overlap for email threads)
    - Browser: 256/128 (standard for web content)
    - WeChat: 192/64 (smaller chunks for chat messages)
  - Make --file-types an optional filter instead of a restriction in document_rag
  - Update README to clarify interactive mode and parameter usage
  - Fix LLM default model documentation (gpt-4o, not gpt-4o-mini)

* feat: Update documentation based on review feedback
  - Add an MLX embedding example to README
  - Clarify the examples/data content description (two papers, Pride and Prejudice, Chinese README)
  - Move chunk parameters to the common parameters section
  - Remove duplicate chunk parameters from the document-specific section

* docs: Emphasize diverse data sources in the examples/data description

* fix: update default embedding models for better performance
  - Change the WeChat, Browser, and Email RAG examples to use all-MiniLM-L6-v2
  - The previous Qwen/Qwen3-Embedding-0.6B was too slow for these use cases
  - all-MiniLM-L6-v2 is a fast 384-dim model, ideal for large-scale personal data

* add response highlight

* change rebuild logic

* fix some examples

* feat: check if k is larger than the number of docs

* fix: WeChat history reader bugs and refactor wechat_rag to use the unified architecture

* fix: email example now correctly treats -1 as "process all files"

* refactor: reorganize all of examples/ and test/

* refactor: reorganize examples and add a link checker

* fix: add __init__.py

* fix: handle certificate errors in the link checker

* fix wechat

* merge

* docs: update README to use proper module imports for apps
  - Change from 'python apps/xxx.py' to 'python -m apps.xxx'
  - More professional and Pythonic module invocation
  - Ensures proper module resolution and imports
  - Better separation between apps/ (production tools) and examples/ (demos)

---------

Co-authored-by: yichuan520030910320 <yichuan_wang@berkeley.edu>
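For orientation, here is a minimal sketch of what the unified interface described above could look like. It only illustrates the shared CLI surface named in the commit list (--max-items, --embedding-mode, --chunk-size/--chunk-overlap, and the index-building flags); the method names, defaults, and everything inside run() are illustrative assumptions, not the actual BaseRAGExample implementation in this repo.

```python
# Hypothetical sketch of the unified examples interface (illustrative only;
# the real BaseRAGExample may differ in structure and in how it calls LEANN).
import argparse
from abc import ABC, abstractmethod


class BaseRAGExample(ABC):
    """Shared CLI surface for the document/email/browser/WeChat RAG examples."""

    def __init__(self, name: str, default_chunk_size: int = 256, default_chunk_overlap: int = 128):
        self.name = name
        self.default_chunk_size = default_chunk_size
        self.default_chunk_overlap = default_chunk_overlap

    def build_parser(self) -> argparse.ArgumentParser:
        parser = argparse.ArgumentParser(description=f"{self.name} RAG example")
        # Core parameters shared by every example (names taken from the commit list above).
        parser.add_argument("--max-items", type=int, default=-1,
                            help="Max items to index (-1 = all); replaces --max-emails/--max-entries")
        parser.add_argument("--embedding-mode", default="sentence-transformers",
                            choices=["sentence-transformers", "openai", "mlx"])
        parser.add_argument("--chunk-size", type=int, default=self.default_chunk_size)
        parser.add_argument("--chunk-overlap", type=int, default=self.default_chunk_overlap)
        # Index-building parameters added during review.
        parser.add_argument("--backend-name", default="hnsw", choices=["hnsw", "diskann"])
        parser.add_argument("--graph-degree", type=int, default=32)
        parser.add_argument("--build-complexity", type=int, default=64)
        parser.add_argument("--no-compact", action="store_true", help="Disable compact storage")
        parser.add_argument("--no-recompute", action="store_true", help="Disable embedding recomputation")
        # Each subclass contributes only its data-source-specific flags.
        self.add_specific_arguments(parser)
        return parser

    @abstractmethod
    def add_specific_arguments(self, parser: argparse.ArgumentParser) -> None:
        """Add data-source-specific flags (e.g. --file-types for documents)."""

    @abstractmethod
    def load_texts(self, args: argparse.Namespace) -> list[str]:
        """Return the raw texts to index for this data source."""

    def run(self) -> None:
        args = self.build_parser().parse_args()
        texts = self.load_texts(args)[: None if args.max_items == -1 else args.max_items]
        print(f"[{self.name}] indexing {len(texts)} items "
              f"(chunk_size={args.chunk_size}, overlap={args.chunk_overlap})")
        # ...build the LEANN index and start the interactive chat loop here...
```

Under this sketch, a concrete example such as document_rag.py would subclass BaseRAGExample, add its own flags in add_specific_arguments, and call run().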
142 lines · 4.3 KiB · Python
import time

import matplotlib.pyplot as plt
import mlx.core as mx
import numpy as np
import torch
from mlx_lm import load
from sentence_transformers import SentenceTransformer

# --- Configuration ---
MODEL_NAME_TORCH = "Qwen/Qwen3-Embedding-0.6B"
MODEL_NAME_MLX = "mlx-community/Qwen3-Embedding-0.6B-4bit-DWQ"
BATCH_SIZES = [1, 8, 16, 32, 64, 128]
NUM_RUNS = 10  # Number of runs to average for each batch size
WARMUP_RUNS = 2  # Number of warm-up runs

# --- Generate Dummy Data ---
DUMMY_SENTENCES = ["This is a test sentence for benchmarking." * 5] * max(BATCH_SIZES)

# --- Benchmark Functions ---


def benchmark_torch(model, sentences):
    start_time = time.time()
    model.encode(sentences, convert_to_numpy=True)
    end_time = time.time()
    return (end_time - start_time) * 1000  # Return time in ms


def benchmark_mlx(model, tokenizer, sentences):
    start_time = time.time()

    # Tokenize sentences using the MLX tokenizer
    tokens = []
    for sentence in sentences:
        token_ids = tokenizer.encode(sentence)
        tokens.append(token_ids)

    # Pad sequences to the same length
    max_len = max(len(t) for t in tokens)
    input_ids = []
    attention_mask = []

    for token_seq in tokens:
        # Pad sequence with the EOS token
        padded = token_seq + [tokenizer.eos_token_id] * (max_len - len(token_seq))
        input_ids.append(padded)
        # Create attention mask (1 for real tokens, 0 for padding)
        mask = [1] * len(token_seq) + [0] * (max_len - len(token_seq))
        attention_mask.append(mask)

    # Convert to MLX arrays
    input_ids = mx.array(input_ids)
    attention_mask = mx.array(attention_mask)

    # Get embeddings
    embeddings = model(input_ids)

    # Mean pooling over real (non-padding) tokens
    mask = mx.expand_dims(attention_mask, -1)
    sum_embeddings = (embeddings * mask).sum(axis=1)
    sum_mask = mask.sum(axis=1)
    pooled = sum_embeddings / sum_mask

    mx.eval(pooled)  # Force MLX's lazy computation to finish before stopping the timer
    end_time = time.time()
    return (end_time - start_time) * 1000  # Return time in ms


# --- Main Execution ---
def main():
    print("--- Initializing Models ---")
    # Load PyTorch model
    print(f"Loading PyTorch model: {MODEL_NAME_TORCH}")
    device = "mps" if torch.backends.mps.is_available() else "cpu"
    model_torch = SentenceTransformer(MODEL_NAME_TORCH, device=device)
    print(f"PyTorch model loaded on: {device}")

    # Load MLX model
    print(f"Loading MLX model: {MODEL_NAME_MLX}")
    model_mlx, tokenizer_mlx = load(MODEL_NAME_MLX)
    print("MLX model loaded.")

    # --- Warm-up ---
    print("\n--- Performing Warm-up Runs ---")
    for _ in range(WARMUP_RUNS):
        benchmark_torch(model_torch, DUMMY_SENTENCES[:1])
        benchmark_mlx(model_mlx, tokenizer_mlx, DUMMY_SENTENCES[:1])
    print("Warm-up complete.")

    # --- Benchmarking ---
    print("\n--- Starting Benchmark ---")
    results_torch = []
    results_mlx = []

    for batch_size in BATCH_SIZES:
        print(f"Benchmarking batch size: {batch_size}")
        sentences_batch = DUMMY_SENTENCES[:batch_size]

        # Benchmark PyTorch
        torch_times = [benchmark_torch(model_torch, sentences_batch) for _ in range(NUM_RUNS)]
        results_torch.append(np.mean(torch_times))

        # Benchmark MLX
        mlx_times = [
            benchmark_mlx(model_mlx, tokenizer_mlx, sentences_batch) for _ in range(NUM_RUNS)
        ]
        results_mlx.append(np.mean(mlx_times))

    print("\n--- Benchmark Results (Average time per batch in ms) ---")
    print(f"Batch Sizes: {BATCH_SIZES}")
    print(f"PyTorch ({device}): {[f'{t:.2f}' for t in results_torch]}")
    print(f"MLX: {[f'{t:.2f}' for t in results_mlx]}")

    # --- Plotting ---
    print("\n--- Generating Plot ---")
    plt.figure(figsize=(10, 6))
    plt.plot(
        BATCH_SIZES,
        results_torch,
        marker="o",
        linestyle="-",
        label=f"PyTorch ({device})",
    )
    plt.plot(BATCH_SIZES, results_mlx, marker="s", linestyle="-", label="MLX")

    plt.title(f"Embedding Performance: MLX vs PyTorch\nModel: {MODEL_NAME_TORCH}")
    plt.xlabel("Batch Size")
    plt.ylabel("Average Time per Batch (ms)")
    plt.xticks(BATCH_SIZES)
    plt.grid(True)
    plt.legend()

    # Save the plot
    output_filename = "embedding_benchmark.png"
    plt.savefig(output_filename)
    print(f"Plot saved to {output_filename}")


if __name__ == "__main__":
    main()