* refactor: Unify examples interface with BaseRAGExample
  - Create BaseRAGExample base class for all RAG examples
  - Refactor 4 examples to use the unified interface:
    - document_rag.py (replaces main_cli_example.py)
    - email_rag.py (replaces mail_reader_leann.py)
    - browser_rag.py (replaces google_history_reader_leann.py)
    - wechat_rag.py (replaces wechat_history_reader_leann.py)
  - Maintain 100% parameter compatibility with the original files
  - Add interactive mode support for all examples
  - Unify parameter names (--max-items replaces --max-emails/--max-entries)
  - Update README.md with new examples usage
  - Add PARAMETER_CONSISTENCY.md documenting all parameter mappings
  - Keep main_cli_example.py for backward compatibility, with a migration notice

  All default values, LeannBuilder parameters, and chunking settings remain identical to ensure full compatibility with existing indexes.

* fix: Update CI tests for the new unified examples interface
  - Rename test_main_cli.py to test_document_rag.py
  - Update all references from main_cli_example.py to document_rag.py
  - Update tests/README.md documentation

  The tests now properly exercise the new unified interface while maintaining the same test coverage and functionality.

* fix: Fix pre-commit issues and update tests
  - Fix import sorting and unused imports
  - Update type annotations to use built-in types (list, dict) instead of typing.List/Dict
  - Fix trailing whitespace and end-of-file issues
  - Fix a Chinese fullwidth comma to a regular comma
  - Update test_main_cli.py to test_document_rag.py
  - Add a backward compatibility test for main_cli_example.py
  - Pass all pre-commit hooks (ruff, ruff-format, etc.)

* refactor: Remove old example scripts and migration references
  - Delete old example scripts (mail_reader_leann.py, google_history_reader_leann.py, etc.)
  - Remove migration hints and backward compatibility
  - Update tests to use the new unified examples directly
  - Clean up all references to old script names
  - Users now only see the new unified interface

* fix: Restore the embedding-mode parameter to all examples
  - All examples now have the --embedding-mode parameter (a unified-interface benefit)
  - The default is 'sentence-transformers' (consistent with the original behavior)
  - Users can now use OpenAI or MLX embeddings with any data source
  - Maintains functional equivalence with the original scripts

* docs: Improve parameter categorization in the README
  - Clearly separate core (shared) vs. specific parameters
  - Move LLM and embedding examples to the 'Example Commands' section
  - Add descriptive comments for all specific parameters
  - Keep only truly data-source-specific parameters in the specific sections

* docs: Make example commands more representative
  - Add default values to parameter descriptions
  - Replace generic examples with real-world use cases
  - Focus on data-source-specific features in the examples
  - Remove redundant demonstrations of common parameters

* docs: Reorganize the parameter documentation structure
  - Move common parameters to a dedicated section before all examples
  - Rename sections to 'X-Specific Arguments' for clarity
  - Remove duplicate common parameters from individual examples
  - Better information architecture for users

* docs: Polish applications

* docs: Add CLI installation instructions
  - Add two installation options: venv and global uv tool
  - Clearly explain when to use each option
  - Make the CLI more accessible for daily use

* docs: Clarify the CLI global installation process
  - Explain the transition from venv to global installation
  - Add an upgrade command for the global installation
  - Make it clear that a global install allows usage without venv activation

* docs: Add a collapsible section for CLI installation
  - Wrap the CLI installation instructions in details/summary tags
  - Keep consistent with the other collapsible sections in the README
  - Improve document readability and navigation

* style: Format

* docs: Fix collapsible sections
  - Make Common Parameters collapsible (it is lengthy reference material)
  - Keep CLI Installation visible (important for users to see immediately)
  - Better information hierarchy

* docs: Add an introduction for the Common Parameters section
  - Add a 'Flexible Configuration' heading with a descriptive sentence
  - Create a parallel structure with the 'Generation Model Setup' section
  - Improve document flow and readability

* docs: Nit

* fix: Fix issues in the unified examples
  - Add smart path detection for the data directory
  - Fix an add_texts -> add_text method call
  - Handle running from both the project root and the examples directory

* fix: Fix async/await and add_text issues in the unified examples
  - Remove incorrect await from chat.ask() calls (it is not async)
  - Fix add_texts -> add_text method calls
  - Verify that search-complexity correctly maps to the efSearch parameter
  - All examples now run successfully

* feat: Address review comments
  - Add a complexity parameter to LeannChat initialization (default: search_complexity)
  - Fix the chunk-size default in the README documentation (256, not 2048)
  - Add more index-building parameters as CLI arguments:
    - --backend-name (hnsw/diskann)
    - --graph-degree (default: 32)
    - --build-complexity (default: 64)
    - --no-compact (disable compact storage)
    - --no-recompute (disable embedding recomputation)
  - Update the README to document all new parameters

* feat: Add chunk-size parameters and improve file type filtering
  - Add --chunk-size and --chunk-overlap parameters to all RAG examples
  - Preserve the original default values for each data source:
    - Document: 256/128 (optimized for general documents)
    - Email: 256/25 (smaller overlap for email threads)
    - Browser: 256/128 (standard for web content)
    - WeChat: 192/64 (smaller chunks for chat messages)
  - Make --file-types an optional filter instead of a restriction in document_rag
  - Update the README to clarify interactive mode and parameter usage
  - Fix the LLM default model documentation (gpt-4o, not gpt-4o-mini)

* feat: Update documentation based on review feedback
  - Add an MLX embedding example to the README
  - Clarify the examples/data content description (two papers, Pride and Prejudice, a Chinese README)
  - Move chunk parameters to the common parameters section
  - Remove duplicate chunk parameters from the document-specific section

* docs: Emphasize diverse data sources in the examples/data description

* fix: Update default embedding models for better performance
  - Change the WeChat, Browser, and Email RAG examples to use all-MiniLM-L6-v2
  - The previous Qwen/Qwen3-Embedding-0.6B was too slow for these use cases
  - all-MiniLM-L6-v2 is a fast 384-dim model, ideal for large-scale personal data

* Add response highlighting
* Change the rebuild logic
* Fix some examples
* feat: Check whether k is larger than the number of documents
* fix: Fix WeChat history reader bugs and refactor wechat_rag to use the unified architecture
* fix: Treat max_count=-1 in the email reader as "process all files"
* refactor: Reorganize all of examples/ and tests/
* refactor: Reorganize examples and add a link checker
* fix: Add __init__.py
* fix: Handle certificate errors in the link checker
* Fix WeChat
* Merge

* docs: Update the README to use proper module imports for apps
  - Change from 'python apps/xxx.py' to 'python -m apps.xxx'
  - More professional and Pythonic module invocation
  - Ensures proper module resolution and imports
  - Better separation between apps/ (production tools) and examples/ (demos)

---------

Co-authored-by: yichuan520030910320 <yichuan_wang@berkeley.edu>
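For orientation, here is a minimal sketch of how one data source might plug into the unified interface described above. The `BaseRAGExample` class name, the example module names, and the CLI flags come from the commits; the hook method, import path, and class internals are assumptions:

```python
# Hypothetical sketch of a BaseRAGExample subclass; the hook name
# (load_texts), the run() entry point, and the import path are assumed.
from apps.base_rag_example import BaseRAGExample  # assumed module path


class EmailRAG(BaseRAGExample):
    """Email example sharing --max-items, --embedding-mode, --chunk-size, etc."""

    # Data-source-specific defaults preserved from the original script
    default_chunk_size = 256
    default_chunk_overlap = 25

    def load_texts(self, args) -> list[str]:
        # args.max_items replaces the old --max-emails flag
        raise NotImplementedError


if __name__ == "__main__":
    EmailRAG().run()  # invoked as: python -m apps.email_rag
```

As the commits document, all examples are run as modules, e.g. `python -m apps.email_rag --max-items 1000`.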
Python · 168 lines · 7.0 KiB
import email
import os
from pathlib import Path
from typing import Any

from llama_index.core import Document
from llama_index.core.readers.base import BaseReader


def find_all_messages_directories(root: str | None = None) -> list[Path]:
    """
    Recursively find all 'Messages' directories under the given root.

    Returns a list of Path objects.
    """
    if root is None:
        # Auto-detect the user's mail path
        home_dir = os.path.expanduser("~")
        root = os.path.join(home_dir, "Library", "Mail")

    messages_dirs = []
    for dirpath, _dirnames, _filenames in os.walk(root):
        if os.path.basename(dirpath) == "Messages":
            messages_dirs.append(Path(dirpath))
    return messages_dirs
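
# Example usage (illustrative; not part of the original module): list every
# Apple Mail 'Messages' folder found under ~/Library/Mail.
#
#     for messages_dir in find_all_messages_directories():
#         print(messages_dir)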


class EmlxReader(BaseReader):
    """
    Apple Mail .emlx file reader with embedded metadata.

    Reads individual .emlx files from Apple Mail's storage format.
    """

    def __init__(self, include_html: bool = False) -> None:
        """
        Initialize.

        Args:
            include_html: Whether to include HTML content in the email body (default: False)
        """
        self.include_html = include_html

    def load_data(self, input_dir: str, **load_kwargs: Any) -> list[Document]:
        """
        Load data from the input directory containing .emlx files.

        Args:
            input_dir: Directory containing .emlx files
            **load_kwargs:
                max_count (int): Maximum number of messages to read; -1 reads all.
        """
        docs: list[Document] = []
        max_count = load_kwargs.get("max_count", 1000)
        count = 0
        total_files = 0
        successful_files = 0
        failed_files = 0

        print(f"Starting to process directory: {input_dir}")

        # Walk through the directory recursively
        for dirpath, dirnames, filenames in os.walk(input_dir):
            # Skip hidden directories
            dirnames[:] = [d for d in dirnames if not d.startswith(".")]

            for filename in filenames:
                # Stop once max_count is reached (max_count == -1 means no limit)
                if max_count > 0 and count >= max_count:
                    break

                if filename.endswith(".emlx"):
                    total_files += 1
                    filepath = os.path.join(dirpath, filename)
                    try:
                        # Read the .emlx file
                        with open(filepath, encoding="utf-8", errors="ignore") as f:
                            content = f.read()

                        # .emlx files have a length prefix followed by the email content:
                        # the first line holds the length, the remainder is the message
                        lines = content.split("\n", 1)
                        if len(lines) >= 2:
                            email_content = lines[1]

                            # Parse the email using Python's email module
                            try:
                                msg = email.message_from_string(email_content)

                                # Extract email metadata
                                subject = msg.get("Subject", "No Subject")
                                from_addr = msg.get("From", "Unknown")
                                to_addr = msg.get("To", "Unknown")
                                date = msg.get("Date", "Unknown")

                                # Extract the email body
                                body = ""
                                if msg.is_multipart():
                                    for part in msg.walk():
                                        if (
                                            part.get_content_type() == "text/plain"
                                            or part.get_content_type() == "text/html"
                                        ):
                                            if (
                                                part.get_content_type() == "text/html"
                                                and not self.include_html
                                            ):
                                                continue
                                            try:
                                                payload = part.get_payload(decode=True)
                                                if payload:
                                                    body += payload.decode("utf-8", errors="ignore")
                                            except Exception as e:
                                                print(f"Error decoding payload: {e}")
                                                continue
                                else:
                                    try:
                                        payload = msg.get_payload(decode=True)
                                        if payload:
                                            body = payload.decode("utf-8", errors="ignore")
                                    except Exception as e:
                                        print(f"Error decoding single part payload: {e}")
                                        body = ""

                                # Only create a document if we have some content
                                if body.strip() or subject != "No Subject":
                                    # Create document content with metadata embedded in the text
                                    doc_content = f"""
[File]: {filename}
[From]: {from_addr}
[To]: {to_addr}
[Subject]: {subject}
[Date]: {date}
[EMAIL BODY Start]:
{body}
"""

                                    # No separate metadata - everything is in the text
                                    doc = Document(text=doc_content, metadata={})
                                    docs.append(doc)
                                    count += 1
                                    successful_files += 1

                                    # Print the first few successful files for debugging
                                    if successful_files <= 3:
                                        print(
                                            f"Successfully loaded: {filename} - Subject: {subject[:50]}..."
                                        )

                            except Exception as e:
                                failed_files += 1
                                if failed_files <= 5:  # Only print the first few errors
                                    print(f"Error parsing email from {filepath}: {e}")
                                continue

                    except Exception as e:
                        failed_files += 1
                        if failed_files <= 5:  # Only print the first few errors
                            print(f"Error reading file {filepath}: {e}")
                        continue

        print("Processing summary:")
        print(f"  Total .emlx files found: {total_files}")
        print(f"  Successfully loaded: {successful_files}")
        print(f"  Failed to load: {failed_files}")
        print(f"  Final documents: {len(docs)}")

        return docs
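
A minimal usage sketch for the reader above (the import path `emlx_reader` is hypothetical; adjust it to wherever this module lives):

```python
# Discover Apple Mail folders and load a sample of messages as Documents.
# "emlx_reader" is a placeholder module name for the file above.
from emlx_reader import EmlxReader, find_all_messages_directories

reader = EmlxReader(include_html=False)
docs = []
for messages_dir in find_all_messages_directories():
    # Read at most 100 messages per folder; pass max_count=-1 to read everything
    docs.extend(reader.load_data(str(messages_dir), max_count=100))

print(f"Loaded {len(docs)} email documents")
```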