* refactor: Unify examples interface with BaseRAGExample
  - Create a BaseRAGExample base class for all RAG examples (a rough sketch of the pattern follows this commit list)
  - Refactor 4 examples to use the unified interface:
    - document_rag.py (replaces main_cli_example.py)
    - email_rag.py (replaces mail_reader_leann.py)
    - browser_rag.py (replaces google_history_reader_leann.py)
    - wechat_rag.py (replaces wechat_history_reader_leann.py)
  - Maintain 100% parameter compatibility with the original files
  - Add interactive-mode support to all examples
  - Unify parameter names (--max-items replaces --max-emails/--max-entries)
  - Update README.md with usage for the new examples
  - Add PARAMETER_CONSISTENCY.md documenting all parameter mappings
  - Keep main_cli_example.py for backward compatibility, with a migration notice

  All default values, LeannBuilder parameters, and chunking settings remain identical to ensure full compatibility with existing indexes.

* fix: Update CI tests for the new unified examples interface
  - Rename test_main_cli.py to test_document_rag.py
  - Update all references from main_cli_example.py to document_rag.py
  - Update the tests/README.md documentation

  The tests now properly exercise the new unified interface while keeping the same coverage and functionality.

* fix: Fix pre-commit issues and update tests
  - Fix import sorting and unused imports
  - Update type annotations to use built-in types (list, dict) instead of typing.List/Dict
  - Fix trailing-whitespace and end-of-file issues
  - Fix a Chinese fullwidth comma to a regular comma
  - Update test_main_cli.py to test_document_rag.py
  - Add a backward-compatibility test for main_cli_example.py
  - Pass all pre-commit hooks (ruff, ruff-format, etc.)

* refactor: Remove old example scripts and migration references
  - Delete the old example scripts (mail_reader_leann.py, google_history_reader_leann.py, etc.)
  - Remove migration hints and backward compatibility
  - Update tests to use the new unified examples directly
  - Clean up all references to the old script names
  - Users now only see the new unified interface

* fix: Restore the embedding-mode parameter to all examples
  - All examples now have an --embedding-mode parameter (a benefit of the unified interface)
  - The default is 'sentence-transformers' (consistent with the original behavior)
  - Users can now use OpenAI or MLX embeddings with any data source
  - Maintains functional equivalence with the original scripts

* docs: Improve parameter categorization in README
  - Clearly separate core (shared) vs. specific parameters
  - Move LLM and embedding examples to the 'Example Commands' section
  - Add descriptive comments for all specific parameters
  - Keep only truly data-source-specific parameters in the specific sections

* docs: Make example commands more representative
  - Add default values to parameter descriptions
  - Replace generic examples with real-world use cases
  - Focus on data-source-specific features in the examples
  - Remove redundant demonstrations of common parameters

* docs: Reorganize the parameter documentation structure
  - Move common parameters to a dedicated section before all examples
  - Rename sections to 'X-Specific Arguments' for clarity
  - Remove duplicate common parameters from individual examples
  - Better information architecture for users

* docs: Polish applications

* docs: Add CLI installation instructions
  - Add two installation options: venv and global uv tool
  - Clearly explain when to use each option
  - Make the CLI more accessible for daily use

* docs: Clarify the CLI global installation process
  - Explain the transition from venv to global installation
  - Add an upgrade command for the global installation
  - Make it clear that a global install allows usage without venv activation

* docs: Add a collapsible section for CLI installation
  - Wrap the CLI installation instructions in details/summary tags
  - Keep consistent with the other collapsible sections in README
  - Improve document readability and navigation

* style: Format

* docs: Fix collapsible sections
  - Make Common Parameters collapsible (it is lengthy reference material)
  - Keep CLI Installation visible (important for users to see immediately)
  - Better information hierarchy

* docs: Add an introduction for the Common Parameters section
  - Add a 'Flexible Configuration' heading with a descriptive sentence
  - Create a parallel structure with the 'Generation Model Setup' section
  - Improve document flow and readability

* docs: Nit

* fix: Fix issues in the unified examples
  - Add smart path detection for the data directory
  - Fix an add_texts -> add_text method call
  - Handle running from both the project root and the examples directory

* fix: Fix async/await and add_text issues in the unified examples
  - Remove incorrect await from chat.ask() calls (it is not async)
  - Fix add_texts -> add_text method calls
  - Verify that search-complexity correctly maps to the efSearch parameter
  - All examples now run successfully

* feat: Address review comments
  - Add a complexity parameter to LeannChat initialization (default: search_complexity)
  - Fix the chunk-size default in the README documentation (256, not 2048)
  - Add more index-building parameters as CLI arguments:
    - --backend-name (hnsw/diskann)
    - --graph-degree (default: 32)
    - --build-complexity (default: 64)
    - --no-compact (disable compact storage)
    - --no-recompute (disable embedding recomputation)
  - Update README to document all new parameters

* feat: Add chunk-size parameters and improve file-type filtering
  - Add --chunk-size and --chunk-overlap parameters to all RAG examples
  - Preserve the original default values for each data source:
    - Document: 256/128 (optimized for general documents)
    - Email: 256/25 (smaller overlap for email threads)
    - Browser: 256/128 (standard for web content)
    - WeChat: 192/64 (smaller chunks for chat messages)
  - Make --file-types an optional filter instead of a restriction in document_rag
  - Update README to clarify interactive mode and parameter usage
  - Fix the LLM default-model documentation (gpt-4o, not gpt-4o-mini)

* feat: Update documentation based on review feedback
  - Add an MLX embedding example to README
  - Clarify the examples/data content description (two papers, Pride and Prejudice, a Chinese README)
  - Move the chunk parameters to the common parameters section
  - Remove duplicate chunk parameters from the document-specific section

* docs: Emphasize the diverse data sources in the examples/data description

* fix: Update default embedding models for better performance
  - Change the WeChat, Browser, and Email RAG examples to use all-MiniLM-L6-v2
  - The previous Qwen/Qwen3-Embedding-0.6B was too slow for these use cases
  - all-MiniLM-L6-v2 is a fast 384-dim model, ideal for large-scale personal data

* Add response highlighting

* Change the rebuild logic

* Fix some examples

* feat: Check whether k is larger than the number of documents

* fix: WeChat history reader bugs, and refactor wechat_rag to use the unified architecture

* fix: Correct a wrong -1 in the email reader so that all files are processed

* refactor: Reorganize all of examples/ and tests/

* refactor: Reorganize examples and add a link checker

* fix: Add __init__.py

* fix: Handle certificate errors in the link checker

* Fix WeChat

* Merge

* docs: Update README to use proper module imports for apps
  - Change 'python apps/xxx.py' to 'python -m apps.xxx'
  - More professional and Pythonic module invocation
  - Ensures proper module resolution and imports
  - Better separation between apps/ (production tools) and examples/ (demos)

---------

Co-authored-by: yichuan520030910320 <yichuan_wang@berkeley.edu>
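To make the refactor concrete, here is a minimal sketch of the subclassing pattern the first commit describes. The hook names, method signatures, and defaults below are assumptions reconstructed from the commit messages above, not the repository's actual BaseRAGExample API.

```python
# Illustrative sketch only: BaseRAGExample's real hooks and defaults in this
# repository may differ. The flag names are taken from the commit messages.
import argparse


class BaseRAGExample:
    """Shared CLI surface: every RAG example exposes the same core flags."""

    def build_parser(self) -> argparse.ArgumentParser:
        parser = argparse.ArgumentParser(description=type(self).__doc__)
        # Unified names, e.g. --max-items instead of --max-emails / --max-entries.
        parser.add_argument("--max-items", type=int, default=1000)
        parser.add_argument("--embedding-mode", default="sentence-transformers")
        parser.add_argument("--chunk-size", type=int, default=256)
        parser.add_argument("--chunk-overlap", type=int, default=128)
        return parser

    def load_data(self, args: argparse.Namespace) -> list[str]:
        """Data-source-specific hook; each example overrides only this."""
        raise NotImplementedError

    def run(self) -> None:
        args = self.build_parser().parse_args()
        texts = self.load_data(args)
        print(f"Loaded {len(texts)} items")  # index build / chat loop would follow


class BrowserRAG(BaseRAGExample):
    """Chrome-history example: only data loading differs from the base."""

    def load_data(self, args: argparse.Namespace) -> list[str]:
        return [f"visited page {i}" for i in range(min(args.max_items, 3))]


if __name__ == "__main__":
    BrowserRAG().run()
```

The design intent, as the commits read, is that each data source overrides only the loading step, while argument parsing, index building, and the interactive chat loop live once in the base class.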
187 lines · 6.3 KiB · Python
import os
import sqlite3
from pathlib import Path
from typing import Any

from llama_index.core import Document
from llama_index.core.readers.base import BaseReader


class ChromeHistoryReader(BaseReader):
    """
    Chrome browser history reader that extracts browsing data from the SQLite database.

    Reads Chrome history from the default Chrome profile location and creates documents
    with embedded metadata, similar to the email reader structure.
    """

    def __init__(self) -> None:
        """Initialize."""
        pass

    def load_data(self, input_dir: str | None = None, **load_kwargs: Any) -> list[Document]:
        """
        Load Chrome history data from the default Chrome profile location.

        Args:
            input_dir: Not used for Chrome history (kept for compatibility).
            **load_kwargs:
                max_count (int): Maximum number of history entries to read.
                chrome_profile_path (str): Custom path to the Chrome profile directory.
        """
        docs: list[Document] = []
        max_count = load_kwargs.get("max_count", 1000)
        chrome_profile_path = load_kwargs.get("chrome_profile_path", None)

        # Default Chrome profile path on macOS
        if chrome_profile_path is None:
            chrome_profile_path = os.path.expanduser(
                "~/Library/Application Support/Google/Chrome/Default"
            )

        history_db_path = os.path.join(chrome_profile_path, "History")

        if not os.path.exists(history_db_path):
            print(f"Chrome history database not found at: {history_db_path}")
            return docs

        try:
            # Connect to the Chrome history database
            print(f"Connecting to database: {history_db_path}")
            conn = sqlite3.connect(history_db_path)
            cursor = conn.cursor()

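            # Note on the timestamp arithmetic in the query below: Chrome stores
            # last_visit_time as microseconds since 1601-01-01 UTC (the Windows
            # FILETIME epoch). Dividing by 1,000,000 converts to seconds, and
            # subtracting 11644473600 (the seconds between 1601-01-01 and
            # 1970-01-01) shifts the value to the Unix epoch that SQLite's
            # datetime() expects with the 'unixepoch' modifier.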
            # Query to get browsing history with metadata (removed created_time column)
            query = """
                SELECT
                    datetime(last_visit_time/1000000-11644473600,'unixepoch','localtime') as last_visit,
                    url,
                    title,
                    visit_count,
                    typed_count,
                    hidden
                FROM urls
                ORDER BY last_visit_time DESC
            """

print(f"Executing query on database: {history_db_path}")
|
|
cursor.execute(query)
|
|
rows = cursor.fetchall()
|
|
print(f"Query returned {len(rows)} rows")
|
|
|
|
count = 0
|
|
for row in rows:
|
|
if count >= max_count and max_count > 0:
|
|
break
|
|
|
|
last_visit, url, title, visit_count, typed_count, hidden = row
|
|
|
|
# Create document content with metadata embedded in text
|
|
doc_content = f"""
|
|
[Title]: {title}
|
|
[URL of the page]: {url}
|
|
[Last visited time]: {last_visit}
|
|
[Visit times]: {visit_count}
|
|
[Typed times]: {typed_count}
|
|
"""
|
|
|
|
# Create document with embedded metadata
|
|
doc = Document(text=doc_content, metadata={"title": title[0:150]})
|
|
# if len(title) > 150:
|
|
# print(f"Title is too long: {title}")
|
|
docs.append(doc)
|
|
count += 1
|
|
|
|
conn.close()
|
|
print(f"Loaded {len(docs)} Chrome history documents")
|
|
|
|
except Exception as e:
|
|
print(f"Error reading Chrome history: {e}")
|
|
# add you may need to close your browser to make the database file available
|
|
# also highlight in red
|
|
print(
|
|
"\033[91mYou may need to close your browser to make the database file available\033[0m"
|
|
)
|
|
return docs
|
|
|
|
return docs
|
|
|
|
    @staticmethod
    def find_chrome_profiles() -> list[Path]:
        """
        Find all Chrome profile directories.

        Returns:
            List of Path objects pointing to Chrome profile directories.
        """
        chrome_base_path = Path(os.path.expanduser("~/Library/Application Support/Google/Chrome"))
        profile_dirs = []

        if not chrome_base_path.exists():
            print(f"Chrome directory not found at: {chrome_base_path}")
            return profile_dirs

        # Find all profile directories
        for profile_dir in chrome_base_path.iterdir():
            if profile_dir.is_dir() and profile_dir.name != "System Profile":
                history_path = profile_dir / "History"
                if history_path.exists():
                    profile_dirs.append(profile_dir)
                    print(f"Found Chrome profile: {profile_dir}")

        print(f"Found {len(profile_dirs)} Chrome profiles")
        return profile_dirs

    @staticmethod
    def export_history_to_file(
        output_file: str = "chrome_history_export.txt", max_count: int = 1000
    ):
        """
        Export Chrome history to a text file using the same SQL query format.

        Args:
            output_file: Path to the output file.
            max_count: Maximum number of entries to export.
        """
        chrome_profile_path = os.path.expanduser(
            "~/Library/Application Support/Google/Chrome/Default"
        )
        history_db_path = os.path.join(chrome_profile_path, "History")

        if not os.path.exists(history_db_path):
            print(f"Chrome history database not found at: {history_db_path}")
            return

        try:
            conn = sqlite3.connect(history_db_path)
            cursor = conn.cursor()

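            # Same WebKit-epoch timestamp conversion as in load_data above.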
            query = """
                SELECT
                    datetime(last_visit_time/1000000-11644473600,'unixepoch','localtime') as last_visit,
                    url,
                    title,
                    visit_count,
                    typed_count,
                    hidden
                FROM urls
                ORDER BY last_visit_time DESC
                LIMIT ?
            """

            cursor.execute(query, (max_count,))
            rows = cursor.fetchall()

            with open(output_file, "w", encoding="utf-8") as f:
                for row in rows:
                    last_visit, url, title, visit_count, typed_count, hidden = row
                    f.write(
                        f"{last_visit}\t{url}\t{title}\t{visit_count}\t{typed_count}\t{hidden}\n"
                    )

            conn.close()
            print(f"Exported {len(rows)} history entries to {output_file}")

        except Exception as e:
            print(f"Error exporting Chrome history: {e}")
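
# ---------------------------------------------------------------------------
# Usage sketch (not part of the original file): one way the reader above might
# be driven. Only the ChromeHistoryReader API defined in this file is assumed.
# ---------------------------------------------------------------------------
if __name__ == "__main__":
    # Discover profiles that actually contain a History database.
    profiles = ChromeHistoryReader.find_chrome_profiles()

    # Read up to 500 entries, optionally from a specific profile.
    reader = ChromeHistoryReader()
    docs = reader.load_data(max_count=500)
    if profiles:
        docs = reader.load_data(max_count=500, chrome_profile_path=str(profiles[0]))
    print(f"Got {len(docs)} documents")

    # Dump the raw history to a tab-separated text file.
    ChromeHistoryReader.export_history_to_file(max_count=500)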