* refactor: Unify examples interface with BaseRAGExample
  - Create BaseRAGExample base class for all RAG examples
  - Refactor 4 examples to use the unified interface:
    - document_rag.py (replaces main_cli_example.py)
    - email_rag.py (replaces mail_reader_leann.py)
    - browser_rag.py (replaces google_history_reader_leann.py)
    - wechat_rag.py (replaces wechat_history_reader_leann.py)
  - Maintain 100% parameter compatibility with the original files
  - Add interactive mode support for all examples
  - Unify parameter names (--max-items replaces --max-emails/--max-entries)
  - Update README.md with new examples usage
  - Add PARAMETER_CONSISTENCY.md documenting all parameter mappings
  - Keep main_cli_example.py for backward compatibility, with a migration notice

  All default values, LeannBuilder parameters, and chunking settings remain identical to ensure full compatibility with existing indexes.

* fix: Update CI tests for the new unified examples interface
  - Rename test_main_cli.py to test_document_rag.py
  - Update all references from main_cli_example.py to document_rag.py
  - Update tests/README.md documentation

  The tests now properly exercise the new unified interface while maintaining the same test coverage and functionality.

* fix: Fix pre-commit issues and update tests
  - Fix import sorting and unused imports
  - Update type annotations to use built-in types (list, dict) instead of typing.List/Dict
  - Fix trailing whitespace and end-of-file issues
  - Fix a Chinese fullwidth comma to a regular comma
  - Update test_main_cli.py to test_document_rag.py
  - Add a backward-compatibility test for main_cli_example.py
  - Pass all pre-commit hooks (ruff, ruff-format, etc.)

* refactor: Remove old example scripts and migration references
  - Delete old example scripts (mail_reader_leann.py, google_history_reader_leann.py, etc.)
  - Remove migration hints and backward compatibility
  - Update tests to use the new unified examples directly
  - Clean up all references to old script names
  - Users now only see the new unified interface

* fix: Restore embedding-mode parameter to all examples
  - All examples now have the --embedding-mode parameter (a unified-interface benefit)
  - Default is 'sentence-transformers' (consistent with original behavior)
  - Users can now use OpenAI or MLX embeddings with any data source
  - Maintains functional equivalence with the original scripts

* docs: Improve parameter categorization in README
  - Clearly separate core (shared) vs. specific parameters
  - Move LLM and embedding examples to the 'Example Commands' section
  - Add descriptive comments for all specific parameters
  - Keep only truly data-source-specific parameters in the specific sections

* docs: Make example commands more representative
  - Add default values to parameter descriptions
  - Replace generic examples with real-world use cases
  - Focus on data-source-specific features in examples
  - Remove redundant demonstrations of common parameters

* docs: Reorganize parameter documentation structure
  - Move common parameters to a dedicated section before all examples
  - Rename sections to 'X-Specific Arguments' for clarity
  - Remove duplicate common parameters from individual examples
  - Better information architecture for users

* docs: Polish applications

* docs: Add CLI installation instructions
  - Add two installation options: venv and global uv tool
  - Clearly explain when to use each option
  - Make the CLI more accessible for daily use

* docs: Clarify the CLI global installation process
  - Explain the transition from venv to global installation
  - Add an upgrade command for the global installation
  - Make it clear that a global install allows usage without venv activation

* docs: Add a collapsible section for CLI installation
  - Wrap the CLI installation instructions in details/summary tags
  - Keep consistent with the other collapsible sections in the README
  - Improve document readability and navigation

* style: Format

* docs: Fix collapsible sections
  - Make Common Parameters collapsible (it is lengthy reference material)
  - Keep CLI Installation visible (important for users to see immediately)
  - Better information hierarchy

* docs: Add an introduction for the Common Parameters section
  - Add a 'Flexible Configuration' heading with a descriptive sentence
  - Create a parallel structure with the 'Generation Model Setup' section
  - Improve document flow and readability

* docs: Nit

* fix: Fix issues in the unified examples
  - Add smart path detection for the data directory
  - Fix add_texts -> add_text method call
  - Handle running from both the project root and the examples directory

* fix: Fix async/await and add_text issues in the unified examples
  - Remove incorrect await from chat.ask() calls (it is not async)
  - Fix add_texts -> add_text method calls
  - Verify that search-complexity correctly maps to the efSearch parameter
  - All examples now run successfully

* feat: Address review comments
  - Add a complexity parameter to LeannChat initialization (default: search_complexity)
  - Fix the chunk-size default in the README documentation (256, not 2048)
  - Add more index-building parameters as CLI arguments:
    - --backend-name (hnsw/diskann)
    - --graph-degree (default: 32)
    - --build-complexity (default: 64)
    - --no-compact (disable compact storage)
    - --no-recompute (disable embedding recomputation)
  - Update the README to document all new parameters

* feat: Add chunk-size parameters and improve file-type filtering
  - Add --chunk-size and --chunk-overlap parameters to all RAG examples
  - Preserve the original default values for each data source:
    - Document: 256/128 (optimized for general documents)
    - Email: 256/25 (smaller overlap for email threads)
    - Browser: 256/128 (standard for web content)
    - WeChat: 192/64 (smaller chunks for chat messages)
  - Make --file-types an optional filter instead of a restriction in document_rag
  - Update the README to clarify interactive mode and parameter usage
  - Fix the LLM default-model documentation (gpt-4o, not gpt-4o-mini)

* feat: Update documentation based on review feedback
  - Add an MLX embedding example to the README
  - Clarify the examples/data content description (two papers, Pride and Prejudice, a Chinese README)
  - Move chunk parameters to the common parameters section
  - Remove duplicate chunk parameters from the document-specific section

* docs: Emphasize diverse data sources in the examples/data description

* fix: Update default embedding models for better performance
  - Change the WeChat, Browser, and Email RAG examples to use all-MiniLM-L6-v2
  - The previous Qwen/Qwen3-Embedding-0.6B was too slow for these use cases
  - all-MiniLM-L6-v2 is a fast 384-dim model, ideal for large-scale personal data

* feat: Add response highlighting

* refactor: Change the rebuild logic

* fix: Fix some examples

* feat: Check whether k is larger than the number of documents

* fix: Fix WeChat history reader bugs and refactor wechat_rag to use the unified architecture

* fix: Correct the email reader's -1 handling so -1 processes all files

* refactor: Reorganize all of examples/ and tests/

* refactor: Reorganize examples and add a link checker

* fix: Add __init__.py

* fix: Handle certificate errors in the link checker

* fix: Fix WeChat

* merge

* docs: Update README to use proper module imports for apps
  - Change from 'python apps/xxx.py' to 'python -m apps.xxx'
  - More professional and Pythonic module invocation
  - Ensures proper module resolution and imports
  - Better separation between apps/ (production tools) and examples/ (demos)

---------

Co-authored-by: yichuan520030910320 <yichuan_wang@berkeley.edu>
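For reference, the unified interface all four examples share looks roughly like this. This is a minimal sketch: the method names (`_add_specific_arguments`, `load_data`, `run`) and the `super().__init__` keywords are taken from `wechat_rag.py` below, but the subclass itself and its flag are illustrative, not part of the PR.

```python
import asyncio

from base_rag_example import BaseRAGExample  # shared base class introduced by this PR


class MyRAG(BaseRAGExample):
    def __init__(self):
        super().__init__(
            name="My Source",
            description="Process and query my data with LEANN",
            default_index_name="my_index",  # illustrative name
        )

    def _add_specific_arguments(self, parser):
        # Data-source-specific flags; common flags (--max-items,
        # --embedding-mode, ...) come from the base class parser
        group = parser.add_argument_group("My Source Parameters")
        group.add_argument("--my-flag", action="store_true", help="Illustrative flag")

    async def load_data(self, args) -> list[str]:
        # Return the text chunks to index
        return ["example chunk"]


if __name__ == "__main__":
    asyncio.run(MyRAG().run())
```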
"""
|
|
WeChat History RAG example using the unified interface.
|
|
Supports WeChat chat history export and search.
|
|
"""

import subprocess
import sys
from pathlib import Path

# Add parent directory to path for imports
sys.path.insert(0, str(Path(__file__).parent))

from base_rag_example import BaseRAGExample

# Absolute import to match the sys.path setup above; the original relative
# import (.history_data) breaks when this file is run as a script
from history_data.wechat_history import WeChatHistoryReader

class WeChatRAG(BaseRAGExample):
    """RAG example for WeChat chat history."""

    def __init__(self):
        # Set default values BEFORE calling super().__init__
        self.max_items_default = -1  # Match original default
        self.embedding_model_default = (
            "sentence-transformers/all-MiniLM-L6-v2"  # Fast 384-dim model
        )

        super().__init__(
            name="WeChat History",
            description="Process and query WeChat chat history with LEANN",
            default_index_name="wechat_history_magic_test_11Debug_new",
        )

    def _add_specific_arguments(self, parser):
        """Add WeChat-specific arguments."""
        wechat_group = parser.add_argument_group("WeChat Parameters")
        wechat_group.add_argument(
            "--export-dir",
            type=str,
            default="./wechat_export",
            help="Directory to store WeChat exports (default: ./wechat_export)",
        )
        wechat_group.add_argument(
            "--force-export",
            action="store_true",
            help="Force re-export of WeChat data even if exports exist",
        )
        wechat_group.add_argument(
            "--chunk-size", type=int, default=192, help="Text chunk size (default: 192)"
        )
        wechat_group.add_argument(
            "--chunk-overlap", type=int, default=64, help="Text chunk overlap (default: 64)"
        )

    def _export_wechat_data(self, export_dir: Path) -> bool:
        """Export WeChat data using wechattweak-cli."""
        print("Exporting WeChat data...")

        # Check if WeChat is running
        try:
            result = subprocess.run(["pgrep", "WeChat"], capture_output=True, text=True)
            if result.returncode != 0:
                print("WeChat is not running. Please start WeChat first.")
                return False
        except Exception:
            pass  # pgrep might not be available on all systems

        # Create export directory
        export_dir.mkdir(parents=True, exist_ok=True)

        # Run export command
        cmd = ["packages/wechat-exporter/wechattweak-cli", "export", str(export_dir)]

        try:
            print(f"Running: {' '.join(cmd)}")
            result = subprocess.run(cmd, capture_output=True, text=True)

            if result.returncode == 0:
                print("WeChat data exported successfully!")
                return True
            else:
                print(f"Export failed: {result.stderr}")
                return False

        except FileNotFoundError:
            print("\nError: wechattweak-cli not found!")
            print("Please install it first:")
            print("  sudo packages/wechat-exporter/wechattweak-cli install")
            return False
        except Exception as e:
            print(f"Export error: {e}")
            return False

    async def load_data(self, args) -> list[str]:
        """Load WeChat history and convert to text chunks."""
        # Initialize WeChat reader with export capabilities
        reader = WeChatHistoryReader()

        # Find existing exports or create new ones using the centralized method
        export_dirs = reader.find_or_export_wechat_data(args.export_dir)
        if not export_dirs:
            print("Failed to find or export WeChat data. Trying to find any existing exports...")
            # Try to find any existing exports in common locations
            export_dirs = reader.find_wechat_export_dirs()
            if not export_dirs:
                print("No WeChat data found. Please ensure WeChat exports exist.")
                return []

        # Load documents from all found export directories
        all_documents = []
        total_processed = 0

        for i, export_dir in enumerate(export_dirs):
            print(f"\nProcessing WeChat export {i + 1}/{len(export_dirs)}: {export_dir}")

            try:
                # Apply the max_items limit across exports (-1 means no limit)
                max_per_export = -1
                if args.max_items > 0:
                    remaining = args.max_items - total_processed
                    if remaining <= 0:
                        break
                    max_per_export = remaining

                documents = reader.load_data(
                    wechat_export_dir=str(export_dir),
                    max_count=max_per_export,
                    concatenate_messages=True,  # Enable message concatenation for better context
                )

                if documents:
                    print(f"Loaded {len(documents)} chat documents from {export_dir}")
                    all_documents.extend(documents)
                    total_processed += len(documents)
                else:
                    print(f"No documents loaded from {export_dir}")

            except Exception as e:
                print(f"Error processing {export_dir}: {e}")
                continue

        if not all_documents:
            print("No documents loaded from any source. Exiting.")
            return []

        print(f"\nLoaded a total of {len(all_documents)} chat documents from {len(export_dirs)} exports")
        print("Now splitting documents into text chunks... this may take some time")

        # Convert to text chunks with contact information; build the splitter
        # once, since chunk size and overlap are the same for every document
        from llama_index.core.node_parser import SentenceSplitter

        text_splitter = SentenceSplitter(
            chunk_size=args.chunk_size, chunk_overlap=args.chunk_overlap
        )

        all_texts = []
        for doc in all_documents:
            # Split the document into chunks
            nodes = text_splitter.get_nodes_from_documents([doc])

            for node in nodes:
                # Add contact information to each chunk; this exact prefix format
                # is kept for compatibility with existing indexes
                contact_name = doc.metadata.get("contact_name", "Unknown")
                text = f"[Contact] means the message is from: {contact_name}\n" + node.get_content()
                all_texts.append(text)

        print(f"Created {len(all_texts)} text chunks from {len(all_documents)} documents")
        return all_texts


if __name__ == "__main__":
    import asyncio

    # Check platform
    if sys.platform != "darwin":
        print("\n⚠️ Warning: WeChat export is only supported on macOS")
        print("   You can still query existing exports on other platforms\n")

    # Example queries for WeChat RAG
    print("\n💬 WeChat History RAG Example")
    print("=" * 50)
    print("\nExample queries you can try:")
    print("- 'Show me conversations about travel plans'")
    print("- 'Find group chats about weekend activities'")
    # Chinese example ("I want to buy a Magic Johnson jersey; show me the related chat logs"):
    print("- '我想买魔术师约翰逊的球衣,给我一些对应聊天记录?'")
    print("- 'What did we discuss about the project last month?'")
    print("\nNote: WeChat must be running for export to work\n")

    rag = WeChatRAG()
    asyncio.run(rag.run())