Compare commits

1 Commit

| Author | SHA1 | Date |
|---|---|---|
|  | 55f0973524 |  |

15  .gitignore (vendored)
@@ -93,18 +93,3 @@ packages/leann-backend-diskann/third_party/DiskANN/_deps/
batchtest.py
tests/__pytest_cache__/
tests/__pycache__/
paru-bin/

CLAUDE.md
CLAUDE.local.md
.claude/*.local.*
.claude/local/*

benchmarks/data/
!benchmarks/data/prompts_g5/*.txt
!benchmarks/run_all.sh
!benchmarks/run_speed_bench_all.sh
!benchmarks/simple_mac_tpt_test.py
!benchmarks/run_speed_bench_all.sh
!benchmarks/run_speed_bench_all.sh
!benchmarks/run_speed_bench_all.sh

@@ -13,5 +13,4 @@ repos:
    rev: v0.12.7  # Fixed version to match pyproject.toml
    hooks:
      - id: ruff
        args: [--fix, --exit-non-zero-on-fix]
      - id: ruff-format
55  README.md
@@ -176,8 +176,6 @@ response = chat.ask("How much storage does LEANN save?", top_k=1)

LEANN supports RAG on various data sources including documents (`.pdf`, `.txt`, `.md`), Apple Mail, Google Search History, WeChat, and more.

### Generation Model Setup

LEANN supports multiple LLM providers for text generation (OpenAI API, HuggingFace, Ollama).

@@ -220,8 +218,7 @@ ollama pull llama3.2:1b

</details>

## ⭐ Flexible Configuration
### ⭐ Flexible Configuration

LEANN provides flexible parameters for embedding models, search strategies, and data processing to fit your specific needs.

@@ -297,12 +294,6 @@ python -m apps.document_rag --data-dir "~/Documents/Papers" --chunk-size 1024

# Filter only markdown and Python files with smaller chunks
python -m apps.document_rag --data-dir "./docs" --chunk-size 256 --file-types .md .py

# Enable AST-aware chunking for code files
python -m apps.document_rag --enable-code-chunking --data-dir "./my_project"

# Or use the specialized code RAG for better code understanding
python -m apps.code_rag --repo-dir "./my_codebase" --query "How does authentication work?"
```

</details>
@@ -477,20 +468,10 @@ Once the index is built, you can ask questions like:

### 🚀 Claude Code Integration: Transform Your Development Workflow!

<details>
<summary><strong>NEW!! AST‑Aware Code Chunking</strong></summary>

LEANN features intelligent code chunking that preserves semantic boundaries (functions, classes, methods) for Python, Java, C#, and TypeScript, improving code understanding compared to text-based chunking.

📖 Read the [AST Chunking Guide →](docs/ast_chunking_guide.md)

</details>

**The future of code assistance is here.** Transform your development workflow with LEANN's native MCP integration for Claude Code. Index your entire codebase and get intelligent code assistance directly in your IDE.

**Key features:**
- 🔍 **Semantic code search** across your entire project, fully local index and lightweight
- 🧠 **AST-aware chunking** preserves code structure (functions, classes)
- 📚 **Context-aware assistance** for debugging and development
- 🚀 **Zero-config setup** with automatic language detection

@@ -553,8 +534,7 @@ leann remove my-docs

**Key CLI features:**
- Auto-detects document formats (PDF, TXT, MD, DOCX, PPTX + code files)
- **🧠 AST-aware chunking** for Python, Java, C#, TypeScript files
- Smart text chunking with overlap for all other content
- Smart text chunking with overlap
- Multiple LLM providers (Ollama, OpenAI, HuggingFace)
- Organized index storage in `.leann/indexes/` (project-local)
- Support for advanced search parameters
@@ -627,33 +607,6 @@ Options:

</details>

## 🚀 Advanced Features

### 🎯 Metadata Filtering

LEANN supports a simple metadata filtering system to enable sophisticated use cases like document filtering by date/type, code search by file extension, and content management based on custom criteria.

```python
# Add metadata during indexing
builder.add_text(
    "def authenticate_user(token): ...",
    metadata={"file_extension": ".py", "lines_of_code": 25}
)

# Search with filters
results = searcher.search(
    query="authentication function",
    metadata_filters={
        "file_extension": {"==": ".py"},
        "lines_of_code": {"<": 100}
    }
)
```

**Supported operators**: `==`, `!=`, `<`, `<=`, `>`, `>=`, `in`, `not_in`, `contains`, `starts_with`, `ends_with`, `is_true`, `is_false`

📖 **[Complete Metadata filtering guide →](docs/metadata_filtering.md)**

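As an illustration of the other operator families (not part of the README diff itself), here is a hedged sketch reusing the `searcher.search` call above; the metadata keys are hypothetical:

```python
# Illustrative sketch only: combines a membership test ("in") with a string
# operator ("contains") from the operator list above. The metadata keys
# "file_extension" and "file_path" are assumed for the example, not prescribed.
results = searcher.search(
    query="token validation helpers",
    metadata_filters={
        "file_extension": {"in": [".py", ".ts"]},
        "file_path": {"contains": "auth"},
    },
)
```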
## 🏗️ Architecture & How It Works

<p align="center">
@@ -693,7 +646,6 @@ results = searcher.search(

```bash
uv pip install -e ".[dev]"  # Install dev dependencies
python benchmarks/run_evaluation.py  # Will auto-download evaluation data and run benchmarks
python benchmarks/run_evaluation.py benchmarks/data/indices/rpj_wiki/rpj_wiki --num-queries 2000  # After downloading data, you can run the benchmark with our biggest index
```

The evaluation script downloads data automatically on first run. The last three results were tested with partial personal data, and you can reproduce them with your own data!

@@ -733,9 +685,6 @@ MIT License - see [LICENSE](LICENSE) for details.

Core Contributors: [Yichuan Wang](https://yichuan-w.github.io/) & [Zhifei Li](https://github.com/andylizf).

Active Contributors: [Gabriel Dehan](https://github.com/gabriel-dehan)

We welcome more contributors! Feel free to open issues or submit PRs.

This work is done at [**Berkeley Sky Computing Lab**](https://sky.cs.berkeley.edu/).
@@ -11,6 +11,7 @@ from typing import Any
import dotenv
from leann.api import LeannBuilder, LeannChat
from leann.registry import register_project_directory
from llama_index.core.node_parser import SentenceSplitter

dotenv.load_dotenv()

@@ -108,38 +109,6 @@ class BaseRAGExample(ABC):
            help="Thinking budget for reasoning models (low/medium/high). Supported by GPT-Oss:20b and other reasoning models.",
        )

        # AST Chunking parameters
        ast_group = parser.add_argument_group("AST Chunking Parameters")
        ast_group.add_argument(
            "--use-ast-chunking",
            action="store_true",
            help="Enable AST-aware chunking for code files (requires astchunk)",
        )
        ast_group.add_argument(
            "--ast-chunk-size",
            type=int,
            default=512,
            help="Maximum characters per AST chunk (default: 512)",
        )
        ast_group.add_argument(
            "--ast-chunk-overlap",
            type=int,
            default=64,
            help="Overlap between AST chunks (default: 64)",
        )
        ast_group.add_argument(
            "--code-file-extensions",
            nargs="+",
            default=None,
            help="Additional code file extensions to process with AST chunking (e.g., .py .java .cs .ts)",
        )
        ast_group.add_argument(
            "--ast-fallback-traditional",
            action="store_true",
            default=True,
            help="Fall back to traditional chunking if AST chunking fails (default: True)",
        )

        # Search parameters
        search_group = parser.add_argument_group("Search Parameters")
        search_group.add_argument(

@@ -299,6 +268,7 @@ class BaseRAGExample(ABC):
        chat = LeannChat(
            index_path,
            llm_config=self.get_llm_config(args),
            system_prompt=f"You are a helpful assistant that answers questions about {self.name} data.",
            complexity=args.search_complexity,
        )

@@ -340,3 +310,21 @@
            await self.run_single_query(args, index_path, args.query)
        else:
            await self.run_interactive_chat(args, index_path)


def create_text_chunks(documents, chunk_size=256, chunk_overlap=25) -> list[str]:
    """Helper function to create text chunks from documents."""
    node_parser = SentenceSplitter(
        chunk_size=chunk_size,
        chunk_overlap=chunk_overlap,
        separator=" ",
        paragraph_separator="\n\n",
    )

    all_texts = []
    for doc in documents:
        nodes = node_parser.get_nodes_from_documents([doc])
        if nodes:
            all_texts.extend(node.get_content() for node in nodes)

    return all_texts
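For reference, a minimal sketch of how the `create_text_chunks` helper added above can be driven; the sample document text is invented, and the import mirrors the one used by `apps/document_rag.py`:

```python
# Minimal usage sketch of the create_text_chunks helper defined above.
# The document text is illustrative only.
from llama_index.core import Document

from base_rag_example import create_text_chunks  # same import style as apps/document_rag.py

docs = [Document(text="LEANN keeps the index small.\n\nEmbeddings are recomputed at query time.")]
chunks = create_text_chunks(docs, chunk_size=256, chunk_overlap=25)
print(f"{len(chunks)} chunk(s); first starts with: {chunks[0][:40]!r}")
```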
@@ -1,22 +0,0 @@
"""
Chunking utilities for LEANN RAG applications.
Provides AST-aware and traditional text chunking functionality.
"""

from .utils import (
    CODE_EXTENSIONS,
    create_ast_chunks,
    create_text_chunks,
    create_traditional_chunks,
    detect_code_files,
    get_language_from_extension,
)

__all__ = [
    "CODE_EXTENSIONS",
    "create_ast_chunks",
    "create_text_chunks",
    "create_traditional_chunks",
    "detect_code_files",
    "get_language_from_extension",
]
@@ -1,320 +0,0 @@
"""
Enhanced chunking utilities with AST-aware code chunking support.
Provides unified interface for both traditional and AST-based text chunking.
"""

import logging
from pathlib import Path
from typing import Optional

from llama_index.core.node_parser import SentenceSplitter

logger = logging.getLogger(__name__)

# Code file extensions supported by astchunk
CODE_EXTENSIONS = {
    ".py": "python",
    ".java": "java",
    ".cs": "csharp",
    ".ts": "typescript",
    ".tsx": "typescript",
    ".js": "typescript",
    ".jsx": "typescript",
}

# Default chunk parameters for different content types
DEFAULT_CHUNK_PARAMS = {
    "code": {
        "max_chunk_size": 512,
        "chunk_overlap": 64,
    },
    "text": {
        "chunk_size": 256,
        "chunk_overlap": 128,
    },
}


def detect_code_files(documents, code_extensions=None) -> tuple[list, list]:
    """
    Separate documents into code files and regular text files.

    Args:
        documents: List of LlamaIndex Document objects
        code_extensions: Dict mapping file extensions to languages (defaults to CODE_EXTENSIONS)

    Returns:
        Tuple of (code_documents, text_documents)
    """
    if code_extensions is None:
        code_extensions = CODE_EXTENSIONS

    code_docs = []
    text_docs = []

    for doc in documents:
        # Get file path from metadata
        file_path = doc.metadata.get("file_path", "")
        if not file_path:
            # Fallback to file_name
            file_path = doc.metadata.get("file_name", "")

        if file_path:
            file_ext = Path(file_path).suffix.lower()
            if file_ext in code_extensions:
                # Add language info to metadata
                doc.metadata["language"] = code_extensions[file_ext]
                doc.metadata["is_code"] = True
                code_docs.append(doc)
            else:
                doc.metadata["is_code"] = False
                text_docs.append(doc)
        else:
            # If no file path, treat as text
            doc.metadata["is_code"] = False
            text_docs.append(doc)

    logger.info(f"Detected {len(code_docs)} code files and {len(text_docs)} text files")
    return code_docs, text_docs


def get_language_from_extension(file_path: str) -> Optional[str]:
    """Get the programming language from file extension."""
    ext = Path(file_path).suffix.lower()
    return CODE_EXTENSIONS.get(ext)


def create_ast_chunks(
    documents,
    max_chunk_size: int = 512,
    chunk_overlap: int = 64,
    metadata_template: str = "default",
) -> list[str]:
    """
    Create AST-aware chunks from code documents using astchunk.

    Args:
        documents: List of code documents
        max_chunk_size: Maximum characters per chunk
        chunk_overlap: Number of AST nodes to overlap between chunks
        metadata_template: Template for chunk metadata

    Returns:
        List of text chunks with preserved code structure
    """
    try:
        from astchunk import ASTChunkBuilder
    except ImportError as e:
        logger.error(f"astchunk not available: {e}")
        logger.info("Falling back to traditional chunking for code files")
        return create_traditional_chunks(documents, max_chunk_size, chunk_overlap)

    all_chunks = []

    for doc in documents:
        # Get language from metadata (set by detect_code_files)
        language = doc.metadata.get("language")
        if not language:
            logger.warning(
                "No language detected for document, falling back to traditional chunking"
            )
            traditional_chunks = create_traditional_chunks([doc], max_chunk_size, chunk_overlap)
            all_chunks.extend(traditional_chunks)
            continue

        try:
            # Configure astchunk
            configs = {
                "max_chunk_size": max_chunk_size,
                "language": language,
                "metadata_template": metadata_template,
                "chunk_overlap": chunk_overlap if chunk_overlap > 0 else 0,
            }

            # Add repository-level metadata if available
            repo_metadata = {
                "file_path": doc.metadata.get("file_path", ""),
                "file_name": doc.metadata.get("file_name", ""),
                "creation_date": doc.metadata.get("creation_date", ""),
                "last_modified_date": doc.metadata.get("last_modified_date", ""),
            }
            configs["repo_level_metadata"] = repo_metadata

            # Create chunk builder and process
            chunk_builder = ASTChunkBuilder(**configs)
            code_content = doc.get_content()

            if not code_content or not code_content.strip():
                logger.warning("Empty code content, skipping")
                continue

            chunks = chunk_builder.chunkify(code_content)

            # Extract text content from chunks
            for chunk in chunks:
                if hasattr(chunk, "text"):
                    chunk_text = chunk.text
                elif isinstance(chunk, dict) and "text" in chunk:
                    chunk_text = chunk["text"]
                elif isinstance(chunk, str):
                    chunk_text = chunk
                else:
                    # Try to convert to string
                    chunk_text = str(chunk)

                if chunk_text and chunk_text.strip():
                    all_chunks.append(chunk_text.strip())

            logger.info(
                f"Created {len(chunks)} AST chunks from {language} file: {doc.metadata.get('file_name', 'unknown')}"
            )

        except Exception as e:
            logger.warning(f"AST chunking failed for {language} file: {e}")
            logger.info("Falling back to traditional chunking")
            traditional_chunks = create_traditional_chunks([doc], max_chunk_size, chunk_overlap)
            all_chunks.extend(traditional_chunks)

    return all_chunks


def create_traditional_chunks(
    documents, chunk_size: int = 256, chunk_overlap: int = 128
) -> list[str]:
    """
    Create traditional text chunks using LlamaIndex SentenceSplitter.

    Args:
        documents: List of documents to chunk
        chunk_size: Size of each chunk in characters
        chunk_overlap: Overlap between chunks

    Returns:
        List of text chunks
    """
    # Handle invalid chunk_size values
    if chunk_size <= 0:
        logger.warning(f"Invalid chunk_size={chunk_size}, using default value of 256")
        chunk_size = 256

    # Ensure chunk_overlap is not negative and not larger than chunk_size
    if chunk_overlap < 0:
        chunk_overlap = 0
    if chunk_overlap >= chunk_size:
        chunk_overlap = chunk_size // 2

    node_parser = SentenceSplitter(
        chunk_size=chunk_size,
        chunk_overlap=chunk_overlap,
        separator=" ",
        paragraph_separator="\n\n",
    )

    all_texts = []
    for doc in documents:
        try:
            nodes = node_parser.get_nodes_from_documents([doc])
            if nodes:
                chunk_texts = [node.get_content() for node in nodes]
                all_texts.extend(chunk_texts)
                logger.debug(f"Created {len(chunk_texts)} traditional chunks from document")
        except Exception as e:
            logger.error(f"Traditional chunking failed for document: {e}")
            # As last resort, add the raw content
            content = doc.get_content()
            if content and content.strip():
                all_texts.append(content.strip())

    return all_texts


def create_text_chunks(
    documents,
    chunk_size: int = 256,
    chunk_overlap: int = 128,
    use_ast_chunking: bool = False,
    ast_chunk_size: int = 512,
    ast_chunk_overlap: int = 64,
    code_file_extensions: Optional[list[str]] = None,
    ast_fallback_traditional: bool = True,
) -> list[str]:
    """
    Create text chunks from documents with optional AST support for code files.

    Args:
        documents: List of LlamaIndex Document objects
        chunk_size: Size for traditional text chunks
        chunk_overlap: Overlap for traditional text chunks
        use_ast_chunking: Whether to use AST chunking for code files
        ast_chunk_size: Size for AST chunks
        ast_chunk_overlap: Overlap for AST chunks
        code_file_extensions: Custom list of code file extensions
        ast_fallback_traditional: Fall back to traditional chunking on AST errors

    Returns:
        List of text chunks
    """
    if not documents:
        logger.warning("No documents provided for chunking")
        return []

    # Create a local copy of supported extensions for this function call
    local_code_extensions = CODE_EXTENSIONS.copy()

    # Update supported extensions if provided
    if code_file_extensions:
        # Map extensions to languages (simplified mapping)
        ext_mapping = {
            ".py": "python",
            ".java": "java",
            ".cs": "c_sharp",
            ".ts": "typescript",
            ".tsx": "typescript",
        }
        for ext in code_file_extensions:
            if ext.lower() not in local_code_extensions:
                # Try to guess language from extension
                if ext.lower() in ext_mapping:
                    local_code_extensions[ext.lower()] = ext_mapping[ext.lower()]
                else:
                    logger.warning(f"Unsupported extension {ext}, will use traditional chunking")

    all_chunks = []

    if use_ast_chunking:
        # Separate code and text documents using local extensions
        code_docs, text_docs = detect_code_files(documents, local_code_extensions)

        # Process code files with AST chunking
        if code_docs:
            logger.info(f"Processing {len(code_docs)} code files with AST chunking")
            try:
                ast_chunks = create_ast_chunks(
                    code_docs, max_chunk_size=ast_chunk_size, chunk_overlap=ast_chunk_overlap
                )
                all_chunks.extend(ast_chunks)
                logger.info(f"Created {len(ast_chunks)} AST chunks from code files")
            except Exception as e:
                logger.error(f"AST chunking failed: {e}")
                if ast_fallback_traditional:
                    logger.info("Falling back to traditional chunking for code files")
                    traditional_code_chunks = create_traditional_chunks(
                        code_docs, chunk_size, chunk_overlap
                    )
                    all_chunks.extend(traditional_code_chunks)
                else:
                    raise

        # Process text files with traditional chunking
        if text_docs:
            logger.info(f"Processing {len(text_docs)} text files with traditional chunking")
            text_chunks = create_traditional_chunks(text_docs, chunk_size, chunk_overlap)
            all_chunks.extend(text_chunks)
            logger.info(f"Created {len(text_chunks)} traditional chunks from text files")
    else:
        # Use traditional chunking for all files
        logger.info(f"Processing {len(documents)} documents with traditional chunking")
        all_chunks = create_traditional_chunks(documents, chunk_size, chunk_overlap)

    logger.info(f"Total chunks created: {len(all_chunks)}")
    return all_chunks
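For context, a hedged sketch of how the removed `chunking.create_text_chunks` entry point was typically invoked (keyword names come from the deleted code above; the directory path is a placeholder, and `astchunk` must be installed for the AST path):

```python
# Sketch only: exercises the deleted chunking.create_text_chunks API shown above.
from llama_index.core import SimpleDirectoryReader

from chunking import create_text_chunks  # package removed by this commit

documents = SimpleDirectoryReader("./my_project", recursive=True).load_data()
chunks = create_text_chunks(
    documents,
    chunk_size=256,             # traditional chunking for non-code files
    chunk_overlap=128,
    use_ast_chunking=True,      # route .py/.java/.cs/.ts files through astchunk
    ast_chunk_size=512,
    ast_chunk_overlap=64,
    ast_fallback_traditional=True,
)
print(f"Created {len(chunks)} chunks")
```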
211  apps/code_rag.py
@@ -1,211 +0,0 @@
"""
Code RAG example using AST-aware chunking for optimal code understanding.
Specialized for code repositories with automatic language detection and
optimized chunking parameters.
"""

import sys
from pathlib import Path

# Add parent directory to path for imports
sys.path.insert(0, str(Path(__file__).parent))

from base_rag_example import BaseRAGExample
from chunking import CODE_EXTENSIONS, create_text_chunks
from llama_index.core import SimpleDirectoryReader


class CodeRAG(BaseRAGExample):
    """Specialized RAG example for code repositories with AST-aware chunking."""

    def __init__(self):
        super().__init__(
            name="Code",
            description="Process and query code repositories with AST-aware chunking",
            default_index_name="code_index",
        )
        # Override defaults for code-specific usage
        self.embedding_model_default = "facebook/contriever"  # Good for code
        self.max_items_default = -1  # Process all code files by default

    def _add_specific_arguments(self, parser):
        """Add code-specific arguments."""
        code_group = parser.add_argument_group("Code Repository Parameters")

        code_group.add_argument(
            "--repo-dir",
            type=str,
            default=".",
            help="Code repository directory to index (default: current directory)",
        )
        code_group.add_argument(
            "--include-extensions",
            nargs="+",
            default=list(CODE_EXTENSIONS.keys()),
            help="File extensions to include (default: supported code extensions)",
        )
        code_group.add_argument(
            "--exclude-dirs",
            nargs="+",
            default=[
                ".git",
                "__pycache__",
                "node_modules",
                "venv",
                ".venv",
                "build",
                "dist",
                "target",
            ],
            help="Directories to exclude from indexing",
        )
        code_group.add_argument(
            "--max-file-size",
            type=int,
            default=1000000,  # 1MB
            help="Maximum file size in bytes to process (default: 1MB)",
        )
        code_group.add_argument(
            "--include-comments",
            action="store_true",
            help="Include comments in chunking (useful for documentation)",
        )
        code_group.add_argument(
            "--preserve-imports",
            action="store_true",
            default=True,
            help="Try to preserve import statements in chunks (default: True)",
        )

    async def load_data(self, args) -> list[str]:
        """Load code files and convert to AST-aware chunks."""
        print(f"🔍 Scanning code repository: {args.repo_dir}")
        print(f"📁 Including extensions: {args.include_extensions}")
        print(f"🚫 Excluding directories: {args.exclude_dirs}")

        # Check if repository directory exists
        repo_path = Path(args.repo_dir)
        if not repo_path.exists():
            raise ValueError(f"Repository directory not found: {args.repo_dir}")

        # Load code files with filtering
        reader_kwargs = {
            "recursive": True,
            "encoding": "utf-8",
            "required_exts": args.include_extensions,
            "exclude_hidden": True,
        }

        # Create exclusion filter
        def file_filter(file_path: str) -> bool:
            """Filter out unwanted files and directories."""
            path = Path(file_path)

            # Check file size
            try:
                if path.stat().st_size > args.max_file_size:
                    print(f"⚠️ Skipping large file: {path.name} ({path.stat().st_size} bytes)")
                    return False
            except Exception:
                return False

            # Check if in excluded directory
            for exclude_dir in args.exclude_dirs:
                if exclude_dir in path.parts:
                    return False

            return True

        try:
            # Load documents with file filtering
            documents = SimpleDirectoryReader(
                args.repo_dir,
                file_extractor=None,  # Use default extractors
                **reader_kwargs,
            ).load_data(show_progress=True)

            # Apply custom filtering
            filtered_docs = []
            for doc in documents:
                file_path = doc.metadata.get("file_path", "")
                if file_filter(file_path):
                    filtered_docs.append(doc)

            documents = filtered_docs

        except Exception as e:
            print(f"❌ Error loading code files: {e}")
            return []

        if not documents:
            print(
                f"❌ No code files found in {args.repo_dir} with extensions {args.include_extensions}"
            )
            return []

        print(f"✅ Loaded {len(documents)} code files")

        # Show breakdown by language/extension
        ext_counts = {}
        for doc in documents:
            file_path = doc.metadata.get("file_path", "")
            if file_path:
                ext = Path(file_path).suffix.lower()
                ext_counts[ext] = ext_counts.get(ext, 0) + 1

        print("📊 Files by extension:")
        for ext, count in sorted(ext_counts.items()):
            print(f"  {ext}: {count} files")

        # Use AST-aware chunking by default for code
        print(
            f"🧠 Using AST-aware chunking (chunk_size: {args.ast_chunk_size}, overlap: {args.ast_chunk_overlap})"
        )

        all_texts = create_text_chunks(
            documents,
            chunk_size=256,  # Fallback for non-code files
            chunk_overlap=64,
            use_ast_chunking=True,  # Always use AST for code RAG
            ast_chunk_size=args.ast_chunk_size,
            ast_chunk_overlap=args.ast_chunk_overlap,
            code_file_extensions=args.include_extensions,
            ast_fallback_traditional=True,
        )

        # Apply max_items limit if specified
        if args.max_items > 0 and len(all_texts) > args.max_items:
            print(f"⏳ Limiting to {args.max_items} chunks (from {len(all_texts)})")
            all_texts = all_texts[: args.max_items]

        print(f"✅ Generated {len(all_texts)} code chunks")
        return all_texts


if __name__ == "__main__":
    import asyncio

    # Example queries for code RAG
    print("\n💻 Code RAG Example")
    print("=" * 50)
    print("\nExample queries you can try:")
    print("- 'How does the embedding computation work?'")
    print("- 'What are the main classes in this codebase?'")
    print("- 'Show me the search implementation'")
    print("- 'How is error handling implemented?'")
    print("- 'What design patterns are used?'")
    print("- 'Explain the chunking logic'")
    print("\n🚀 Features:")
    print("- ✅ AST-aware chunking preserves code structure")
    print("- ✅ Automatic language detection")
    print("- ✅ Smart filtering of large files and common excludes")
    print("- ✅ Optimized for code understanding")
    print("\nUsage examples:")
    print("  python -m apps.code_rag --repo-dir ./my_project")
    print(
        "  python -m apps.code_rag --include-extensions .py .js --query 'How does authentication work?'"
    )
    print("\nOr run without --query for interactive mode\n")

    rag = CodeRAG()
    asyncio.run(rag.run())
@@ -9,8 +9,7 @@ from pathlib import Path
# Add parent directory to path for imports
sys.path.insert(0, str(Path(__file__).parent))

from base_rag_example import BaseRAGExample
from chunking import create_text_chunks
from base_rag_example import BaseRAGExample, create_text_chunks
from llama_index.core import SimpleDirectoryReader


@@ -45,11 +44,6 @@ class DocumentRAG(BaseRAGExample):
        doc_group.add_argument(
            "--chunk-overlap", type=int, default=128, help="Text chunk overlap (default: 128)"
        )
        doc_group.add_argument(
            "--enable-code-chunking",
            action="store_true",
            help="Enable AST-aware chunking for code files in the data directory",
        )

    async def load_data(self, args) -> list[str]:
        """Load documents and convert to text chunks."""
@@ -82,22 +76,9 @@ class DocumentRAG(BaseRAGExample):

        print(f"Loaded {len(documents)} documents")

        # Determine chunking strategy
        use_ast = args.enable_code_chunking or getattr(args, "use_ast_chunking", False)

        if use_ast:
            print("Using AST-aware chunking for code files")

        # Convert to text chunks with optional AST support
        # Convert to text chunks
        all_texts = create_text_chunks(
            documents,
            chunk_size=args.chunk_size,
            chunk_overlap=args.chunk_overlap,
            use_ast_chunking=use_ast,
            ast_chunk_size=getattr(args, "ast_chunk_size", 512),
            ast_chunk_overlap=getattr(args, "ast_chunk_overlap", 64),
            code_file_extensions=getattr(args, "code_file_extensions", None),
            ast_fallback_traditional=getattr(args, "ast_fallback_traditional", True),
            documents, chunk_size=args.chunk_size, chunk_overlap=args.chunk_overlap
        )

        # Apply max_items limit if specified
@@ -121,10 +102,6 @@ if __name__ == "__main__":
    print(
        "- 'What is the problem of developing pan gu model Huawei meets? (盘古大模型开发中遇到什么问题?)'"
    )
    print("\n🚀 NEW: Code-aware chunking available!")
    print("- Use --enable-code-chunking to enable AST-aware chunking for code files")
    print("- Supports Python, Java, C#, TypeScript files")
    print("- Better semantic understanding of code structure")
    print("\nOr run without --query for interactive mode\n")

    rag = DocumentRAG()
82  benchmarks/data/.gitattributes (vendored, new file)
@@ -0,0 +1,82 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mds filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
# Video files - compressed
*.mp4 filter=lfs diff=lfs merge=lfs -text
*.webm filter=lfs diff=lfs merge=lfs -text
ground_truth/dpr/id_map.json filter=lfs diff=lfs merge=lfs -text
indices/dpr/dpr_diskann.passages.idx filter=lfs diff=lfs merge=lfs -text
indices/dpr/dpr_diskann.passages.jsonl filter=lfs diff=lfs merge=lfs -text
indices/dpr/dpr_diskann_disk.index filter=lfs diff=lfs merge=lfs -text
indices/dpr/leann.labels.map filter=lfs diff=lfs merge=lfs -text
indices/rpj_wiki/leann.labels.map filter=lfs diff=lfs merge=lfs -text
indices/rpj_wiki/rpj_wiki.index filter=lfs diff=lfs merge=lfs -text
indices/rpj_wiki/rpj_wiki.passages.0.idx filter=lfs diff=lfs merge=lfs -text
indices/rpj_wiki/rpj_wiki.passages.0.jsonl filter=lfs diff=lfs merge=lfs -text
indices/rpj_wiki/rpj_wiki.passages.1.idx filter=lfs diff=lfs merge=lfs -text
indices/rpj_wiki/rpj_wiki.passages.1.jsonl filter=lfs diff=lfs merge=lfs -text
indices/rpj_wiki/rpj_wiki.passages.2.idx filter=lfs diff=lfs merge=lfs -text
indices/rpj_wiki/rpj_wiki.passages.2.jsonl filter=lfs diff=lfs merge=lfs -text
indices/rpj_wiki/rpj_wiki.passages.3.idx filter=lfs diff=lfs merge=lfs -text
indices/rpj_wiki/rpj_wiki.passages.3.jsonl filter=lfs diff=lfs merge=lfs -text
indices/rpj_wiki/rpj_wiki.passages.4.idx filter=lfs diff=lfs merge=lfs -text
indices/rpj_wiki/rpj_wiki.passages.4.jsonl filter=lfs diff=lfs merge=lfs -text
indices/rpj_wiki/rpj_wiki.passages.5.idx filter=lfs diff=lfs merge=lfs -text
indices/rpj_wiki/rpj_wiki.passages.5.jsonl filter=lfs diff=lfs merge=lfs -text
indices/rpj_wiki/rpj_wiki.passages.6.idx filter=lfs diff=lfs merge=lfs -text
indices/rpj_wiki/rpj_wiki.passages.6.jsonl filter=lfs diff=lfs merge=lfs -text
indices/rpj_wiki/rpj_wiki.passages.7.idx filter=lfs diff=lfs merge=lfs -text
indices/rpj_wiki/rpj_wiki.passages.7.jsonl filter=lfs diff=lfs merge=lfs -text
@@ -1,44 +0,0 @@
---
license: mit
---

# LEANN-RAG Evaluation Data

This repository contains the necessary data to run the recall evaluation scripts for the [LEANN-RAG](https://huggingface.co/LEANN-RAG) project.

## Dataset Components

This dataset is structured into three main parts:

1. **Pre-built LEANN Indices**:
   * `dpr/`: A pre-built index for the DPR dataset.
   * `rpj_wiki/`: A pre-built index for the RPJ-Wiki dataset.
   These indices were created using the `leann-core` library and are required by the `LeannSearcher`.

2. **Ground Truth Data**:
   * `ground_truth/`: Contains the ground truth files (`flat_results_nq_k3.json`) for both the DPR and RPJ-Wiki datasets. These files map queries to the original passage IDs from the Natural Questions benchmark, evaluated using the Contriever model.

3. **Queries**:
   * `queries/`: Contains the `nq_open.jsonl` file with the Natural Questions queries used for the evaluation.

## Usage

To use this data, you can download it locally using the `huggingface-hub` library. First, install the library:

```bash
pip install huggingface-hub
```

Then, you can download the entire dataset to a local directory (e.g., `data/`) with the following Python script:

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="LEANN-RAG/leann-rag-evaluation-data",
    repo_type="dataset",
    local_dir="data"
)
```

This will download all the necessary files into a local `data` folder, preserving the repository structure. The evaluation scripts in the main [LEANN-RAG Space](https://huggingface.co/LEANN-RAG) are configured to work with this data structure.
File diff suppressed because it is too large.
File diff suppressed because one or more lines are too long.
File diff suppressed because one or more lines are too long.
@@ -1,484 +0,0 @@
=== Prompt Dump for TRIVIA + HNSW ===
Total prompts: 50
Showing first 20 prompts:

==================================================
PROMPT #1:
==================================================
Jason Lee also portrays David Seville in live action/CGI films starring Alvin and the Chipmunks, which use a combination of live-action acting and computer animation. While Ross Bagdasarian Jr. does not do any voices for the film series, the films are all produced in association with Bagdasarian Productions, which owns the rights to all of the characters. Portrayed by Filmography Films Television See also References Fictional characters introduced in 1958 Alter egos Alvin and the Chipmunks Fictional managers Fictional producers American male characters in televisionRoss Dickran Bagdasarian (born May 6, 1949) is an American actor, animator and producer, known for his work on the Alvin and the Chipmunks franchise. He is the son of the franchise's creator, Ross Bagdasarian. Early life Bagdasarian was born in Fresno, California, the son of Armenian-American parents Armenuhi Bagdasarian (née Kulhanjian) and Ross Bagdasarian (1919–1972). As a child, he worked with his father on The Alvin Show by helping edit and coordinate the soundtracks and falsetto voice-overs of the Chipmunks. Career Bagdasarian graduated from law school. He succeeded his father as president of Bagdasarian Productions in 1972 after the death of the elder Bagdasarian. The company had fallen into obscurity after significant success between 1958 and the late 1960s. Bagdasarian was also admitted to the California bar as an attorney in 1975. Under Bagdasarian's supervision, new Chipmunks records were created shortly after his marriage to Karman, including Chipmunk Punk. In 1981, the Chipmunks returned to television in the cartoon special A Chipmunk Christmas. Two years later, Ruby-Spears Productions' Alvin and the Chipmunks Saturday morning cartoon series debuted on NBC. Based on that series, a feature film, The Chipmunk Adventure was released in 1987. Bagdasarian voices Alvin, Simon, and Dave Seville, and Karman voices Theodore and the Chipettes (Brittany, Jeanette, and Eleanor). Bagdasarian and Karman hold tight creative and financial control over the Chipmunk franchise, reviewing each and every business contract in great detail. In the mid-90s, Bagdasarian bought out his brother's and sister's portions of the Chipmunk rights, to take complete control of the franchise.Alvin and the Chipmunks, originally David Seville and the Chipmunks or simply The Chipmunks, are an American animated virtual band and media franchise first created by Ross Bagdasarian for novelty records in 1958. The group consists of three singing animated anthropomorphic chipmunks named Alvin, Simon, and Theodore who are originally managed by their human adoptive father, David "Dave" Seville. Bagdasarian provided the group's voices by producing sped-up recordings of his own, a technique pioneered on the successful "Witch Doctor". Later in 1958, Bagdasarian released the similarly-engineered "The Chipmunk Song" for which he came up with the chipmunk characters and their human father, attributing the track to them. David Seville and the Chipmunks released several more records over the following decade until Bagdasarian's death in 1972. The franchise was revived in 1979 with the characters' voices provided by his son Ross Bagdasarian Jr. and the latter's wife Janice Karman. Through the successful franchise, the Chipmunks have become one of the most successful children's artists of all time. It has garnered two number-one singles on the Billboard Hot 100 and won five Grammy Awards, having four Top 10 albums on the Billboard 200 and three certified platinum albums. 
"The Chipmunk Song" became one of the best-selling singles of all time at 5 million physical copies sold. The Chipmunks were first depicted in animated form in The Alvin Show (1961). The characters have since featured in several television series and films, as well as other media. In 2019, The Chipmunks received a star on the Hollywood Walk of Fame.
Think hard, but answer shortly and concisely. Only give direct answers to the questions. No additional explanations. Directly answer these questions:
Q: Rita Coolidge sang the title song for which Bond film??
A: Octopussy

Q: Which Lloyd Webber musical premiered in the US on 10th December 1993??
A: Sunset Boulevard

Q: Who was the next British Prime Minister after Arthur Balfour??
A: Campbell-Bannerman

Q: Who had a 70s No 1 hit with Kiss You All Over??
A: Exile

Q: What claimed the life of singer Kathleen Ferrier??
A: Cancer

Q: Who was the man behind The Chipmunks?
A:
==================================================

==================================================
PROMPT #2:
==================================================
and the drum set. Their film counterparts are Michelle and Eleni. Production history Broadway (2015-2019) Auditions began on January 19, 2015 for children ages nine through fifteen. Some recruiting was done through the School of Rock after-school educational program (which predated the film by several years) and open calls were held in New York at the Winter Garden, in Chicago and in Los Angeles. The production closed on January 20, 2019, after 1,309 performances. West End (2016–2020) On 7 December 2015, following the show's Broadway opening, it was announced by Andrew Lloyd Webber that the show would transfer to London's West End in autumn 2016, with the intention to open at the London Palladium. On 20 May 2016, the musical was confirmed at the Gillian Lynne Theatre instead of the Palladium with previews starting on 24 October 2016, opening night on 14 November 2016, and public booking opening on 25 May 2016. Lloyd Webber revealed that the production was able to open several months earlier than anticipated due to finding the child musician actors easily. Anna Louizos' scenery has been modified to fit the architecture of the Gillian Lynne Theatre from the traditional proscenium arch stage at Winter Garden Theatre. Changes include the removal of the pre-show curtain, the use of a revolving stage and action taking place in the aisles of the stalls. While the show remains to be set in America, the script has been adapted to include some minor references for a British audience. The original London cast includes David Fynn as DeweyThe Sound of Music, Camelot and Fiddler on the Roof played at the theatre in the early 1980s. In 1984, the interior was extensively modified by the introduction of a 'race track' that ran through the audience, for the show Starlight Express with performers on roller skates. The show premièred on 27 March, composed by Andrew Lloyd Webber and directed by Trevor Nunn and ran for 7,406 performances, over 18 years. With the removal of the 'tracks', the interior was extensively restored by architects Jaques Muir and Partners. This included the removal of 3,500 incandescent lamps that had become difficult to maintain and consumed a considerable amount of power. These were replaced by 88,000 low power LEDs specially designed for the theatre, creating the first auditorium completely lit in this way. Another Lloyd Webber production followed, Bombay Dreams premièred on 19 June 2002. It was created by A. R. Rahman with lyrics by Don Black and was directed by Steven Pimlott, closing after 1,500 performances on 13 June 2004. This was followed by the return to the West End of the Bee Gee's musical Saturday Night Fever on 6 July 2004, closing 22 October 2005 to tour. This was followed on 10 April 2006 by the jukebox musical Movin' Out, featuring the music of Billy Joel. This starred James Fox but ran for only two months. The Broadway musical Wicked received its London première at the venue on 27 September 2006 with a cast featuring Idina Menzel as Elphaba, Helen Dallimore as Glinda, Nigel Planer asand also starred comedian Tim Minchin as Judas Iscariot, former Spice Girl Melanie C as Mary Magdalene and BBC Radio 1 DJ Chris Moyles as King Herod. Tickets for most venues went on sale on 18 May 2012. In 2013, Lloyd Webber reunited with Christopher Hampton and Don Black on Stephen Ward the Musical. 
For his next project, a 2015 musical adaptation of the 2003 film School of Rock, auditions were held for children aged nine to fifteen in cooperation with the School of Rock music education program, which predated the film by several years. In April 2016, the English National Opera staged a revival of Sunset Boulevard at the London Coliseum. The limited run, semi-staged production directed by Lonny Price brought Glenn Close to reprise her star turn as "Norma Desmond", which was her first time performing the role in London; she had originated the role in Los Angeles in December 1993 and then on Broadway in November 1994 (which won her the 1995 Tony Award for Best Actress in a Musical). The 2016 London revival was so well-received that the production transferred to the Palace Theatre on Broadway in February 2017, making Lloyd Webber the first musical-theatre composer since 1953 to have four musicals running simultaneously on Broadway – a feat that his heroes Rodgers and Hammerstein had previously achieved. Lloyd Webber's memoir, Unmasked, was published in 2018. On 9 September 2018, Lloyd Webber, along with Tim Rice and John Legend each won an Emmy for Jesus Christ Superstar Live in Concert. With this
Think hard, but answer shortly and concisely. Only give direct answers to the questions. No additional explanations. Directly answer these questions:
Q: Who was the man behind The Chipmunks??
A: David Seville

Q: Rita Coolidge sang the title song for which Bond film??
A: Octopussy

Q: Who was the next British Prime Minister after Arthur Balfour??
A: Campbell-Bannerman

Q: Who had a 70s No 1 hit with Kiss You All Over??
A: Exile

Q: What claimed the life of singer Kathleen Ferrier??
A: Cancer

Q: Which Lloyd Webber musical premiered in the US on 10th December 1993?
A:
==================================================

==================================================
PROMPT #3:
==================================================
Cabinet Louis Botha, Prime Minister of the Union of South Africa (1910–1919) Behind Churchill are: George Barnes, leader of the National Democratic and Labour Party Sir Robert Borden, Prime Minister of Canada (1911–1920) To their right are: Arthur Balfour, 1st Earl of Balfour, former Prime Minister of the United Kingdom (1902–1905); First Lord of the Admiralty (1915–1916) and Foreign Secretary (1916–1919) (standing adlocutio in a black suit) H. H. Asquith, 1st Earl of Oxford and Asquith, Prime Minister of the United Kingdom (1908–1916) (sitting in front) Sir Eric Geddes, First Lord of the Admiralty (1917–1919) (behind, cleanshaven) Bonar Law, Leader of the Opposition (United Kingdom) (1911–1915), Secretary of State for the Colonies (1915–1916), Chancellor of the Exchequer (1916–1919) (later Prime Minister of the United Kingdom, 1922–1923) (dark moustache) Edward Morris, 1st Baron Morris, Prime Minister of Newfoundland (1909–1917) (white moustache, in the shadows) Herbert Kitchener, 1st Earl Kitchener, Secretary of State for War (1914–1916) (in the shadows) Bailey decided that the painting should include British and Dominion civilian leaders in office at the beginning and the end of the First World War. It includes Prime Ministers of Australia, Canada, Newfoundland, and New Zealand, and the Prime Ministers, Foreign Secretaries, Secretaries of War, and First Lords of the Admiralty of the United Kingdom, together with two leaders of the British Conservative and Labour parties. The Maharaja of Bikaner, a member of the Imperial War Cabinet and the Indian delegate to the Versailles Peace Conference, stands to the left next to Botha, both in military uniform. Kitchener standsArthur James Balfour, 1st Earl of Balfour, (, ; 25 July 184819 March 1930), also known as Lord Balfour, was a British Conservative statesman who served as Prime Minister of the United Kingdom from 1902 to 1905. As foreign secretary in the Lloyd George ministry, he issued the Balfour Declaration of 1917 on behalf of the cabinet, which supported a "home for the Jewish people" in Palestine. Entering Parliament in 1874, Balfour achieved prominence as Chief Secretary for Ireland, in which position he suppressed agrarian unrest whilst taking measures against absentee landlords. He opposed Irish Home Rule, saying there could be no half-way house between Ireland remaining within the United Kingdom or becoming independent. From 1891 he led the Conservative Party in the House of Commons, serving under his uncle, Lord Salisbury, whose government won large majorities in 1895 and 1900. An esteemed debater, he was bored by the mundane tasks of party management. In July 1902, he succeeded his uncle as prime minister. In domestic policy he passed the Land Purchase (Ireland) Act 1903, which bought out most of the Anglo-Irish land owners. The Education Act 1902 had a major long-term impact in modernising the school system in England and Wales and provided financial support for schools operated by the Church of England and by the Catholic Church. Nonconformists were outraged and mobilised their voters, but were unable to reverse it. In foreign and defence policy, he oversaw reform of British defence policy and supported Jackie Fisher's naval innovations. He secured the Entente Cordiale withthe county of Haddington. In October 1922 he, with most of the Conservative leadership, resigned with Lloyd George's government following the Carlton Club meeting, a Conservative back-bench revolt against continuance of the coalition. 
Bonar Law became prime minister. Like many Coalition leaders, he did not hold office in the Conservative governments of 1922–1924, but as an elder statesman, he was consulted by the King in the choice of Stanley Baldwin as Bonar Law's successor as Conservative leader in May 1923. His advice was strongly in favour of Baldwin, ostensibly due to Baldwin's being an MP but in reality motivated by his personal dislike of Curzon. Later that evening, he met a mutual friend who asked 'Will dear George be chosen?' to which he replied with 'feline Balfourian satisfaction,' 'No, dear George will not.' His hostess replied, 'Oh, I am so sorry to hear that. He will be terribly disappointed.' Balfour retorted, 'Oh, I don't know. After all, even if he has lost the hope of glory he still possesses the means of Grace.' Balfour was not initially included in Baldwin's second government in 1924, but in 1925, he returned to the Cabinet, in place of the late Lord Curzon as Lord President of the Council, until the government ended in 1929. With 28 years of government service, Balfour had one of the longest ministerial careers in modern British politics, second only to Winston Churchill . Last years Lord Balfour had generally good health until 1928 and remained until then a regular tennis player. Four years previously
Think hard, but answer shortly and concisely. Only give direct answers to the questions. No additional explanations. Directly answer these questions:
Q: Who was the man behind The Chipmunks??
A: David Seville

Q: Which Lloyd Webber musical premiered in the US on 10th December 1993??
A: Sunset Boulevard

Q: Rita Coolidge sang the title song for which Bond film??
A: Octopussy

Q: Who had a 70s No 1 hit with Kiss You All Over??
A: Exile

Q: What claimed the life of singer Kathleen Ferrier??
A: Cancer

Q: Who was the next British Prime Minister after Arthur Balfour?
A:
==================================================

==================================================
PROMPT #4:
==================================================
classic '70s pop song." In 1992, Mexican trio Pandora released a cover version titled "Pierdo el Control" on their album Ilegal. In 1979 Ginger Rogers sang this song on The Love Boat in the episode "Critical Success / The Love Lamp Is Lit / Take My Boyfriend, Please / Rent a Family / The Man in Her Life: Parts 1 & 2" In 2001, the film Get Over It featured a dance to this song at the beginning by some of the cast. References 1973 songs 1975 debut singles Songs written by Neil Sedaka Songs with lyrics by Howard Greenfield Neil Sedaka songs Captain & Tennille songs Andy Williams songs Number-one singles in Australia Billboard Hot 100 number-one singles Cashbox number-one singles RPM Top Singles number-one singles Grammy Award for Record of the Year A&M Records singles Juno Award for Best Selling Single singlesMusic Week rated the song four out of five, concluding, "A third huge hit for the boys." Tracklisting CD single "Kiss You All Over" (Radio Edit) - 4:31 "Kiss You All Over" (Club Mix) - 5:53 "Bonita" (Radio Edit) - 3:54 "Bonita" (Club Mix) - 7:08 Charts Release history References 1978 songs 1978 singles 1997 singles 1998 singles Billboard Hot 100 number-one singles Cashbox number-one singles Exile (American band) songs Number-one singles in New Zealand Number-one singles in South Africa Number-one singles in Australia Songs written by Mike Chapman Song recordings produced by Frank Farian Song recordings produced by Mike Chapman Songs written by Nicky Chinn RAK Records singles Curb Records singles Hilltak Records singles Warner Records singles Arista Records singles No Mercy (pop band) songs Songs about kissing Phyllis Hyman songs"Kiss You All Over" is a 1978 song performed by American group Exile, written by Mike Chapman and Nicky Chinn. It was included on the band's third album, Mixed Emotions (1978), and featured lead vocalist Jimmy Stokley and guitarist J.P. Pennington on vocals. On the American Top 40 broadcast of May 26, 1979, Casey Kasem reported that Chapman stated his source of inspiration for "Kiss You All Over" was "It's Ecstasy When You Lay Down Next to Me" by Barry White. The song was a number one single in the United States, but proved to be Exile's only big hit in the pop market (they would later have great success on the country music charts). It held the number one spot on the Billboard Hot 100 for four weeks (starting September 30), and Billboard ranked it as the No. 5 song for 1978. The track also reached number-one in at least three other nations. In the United Kingdom, the song was released on Mickie Most's RAK Records, and peaked at number 6 on the UK Singles Chart. The strings are played with a synthesizer in a backing track. In 2010, Billboard ranked the song tenth on its list of "The 50 Sexiest Songs of All Time". Lead vocalist on the number, Stokley was ousted from the band in 1979, his health declining thereafter until he died at the age of 41 in 1985. After the success of soft rock singles from the albums Mixed Emotions and All There Is, the band moved into country music in
Think hard, but answer shortly and concisely. Only give direct answers to the questions. No additional explanations. Directly answer these questions:
Q: Who was the man behind The Chipmunks??
A: David Seville

Q: Which Lloyd Webber musical premiered in the US on 10th December 1993??
A: Sunset Boulevard

Q: Who was the next British Prime Minister after Arthur Balfour??
A: Campbell-Bannerman

Q: Rita Coolidge sang the title song for which Bond film??
A: Octopussy

Q: What claimed the life of singer Kathleen Ferrier??
A: Cancer

Q: Who had a 70s No 1 hit with Kiss You All Over?
A:
==================================================

==================================================
PROMPT #5:
==================================================
21st century world: "We dislike low-lying voices, for one thing— contraltos now sound freakish and headmistressy, and even the majority of mezzo-sopranos should more accurately be categorised as almost-sopranos". However, she was "a singer of, and for, her time — a time of grief and weariness, national self-respect and a belief in human nobility". In this context "her artistry stands upright, austere, unfussy, fundamental and sincere". Shortly after Ferrier's death an appeal was launched by Barbirolli, Walter, Myra Hess and others, to establish a cancer research fund in Ferrier's name. Donations were received from all over the world. To publicise the fund a special concert was given at the Royal Festival Hall on 7 May 1954, at which Barbirolli and Walter shared the conducting duties without payment. Among the items was a rendition of Purcell's When I am laid in earth, which Ferrier had often sung; on this occasion the vocal part was played by a solo cor anglais. The Kathleen Ferrier Cancer Research Fund helped establish the Kathleen Ferrier Chair of Clinical Oncology at University College Hospital, in 1984. , it was continuing to fund oncology research. As the result of a separate appeal, augmented by the sales proceeds of a memoir edited by Neville Cardus, the Kathleen Ferrier Memorial Scholarship Fund was created to encourage young British and Commonwealth singers of either sex. The Fund, which has operated from 1956 under the auspices of the Royal Philharmonic Society, initially provided an annual award covering the cost of a year's study to a single prizewinner.In the course of her professional life the English contralto Kathleen Ferrier made a large number of recordings. In the summer of 1944 she signed a contract with Columbia, which lasted until February 1946. She then transferred to Decca, and remained with them until her death in October 1953. Apart from her studio recordings, many of her live performances and broadcast recitals were recorded, sometimes privately. Some of these were later issued as commercial recordings; others are held by individuals or in the archives of broadcasting companies. The following list is neither up to date nor entirely accurate, particularly in regard to a CD issue, entitled 'Kathleen Ferrier Remembered', released in June 2017, on SOMM264, comprising 26 tracks, 19 of which have never previously been issued. Most of these 19 are not listed below. They include Lieder by Schubert, Brahms, Wolf and Mahler and songs by Stanford, Parry, Jacobson and Rubbra, all taken from BBC broadcasts between 1947 and 1952. In April 2019, a recording of Ferrier singing in Bach's 'Magnificat' during the 1950 Vienna International Bach Festival was issued for the first time. The CD catalogue number is SOMM Ariadne 5004 and it also features Irmgard Seefried and Friedl Riegler (sopranos), Hugo Meyer-Welfing (tenor) and Otto Edelmann (bass). The Vienna Philharmonic Orchestra and Chorus of the Vienna State Opera are conducted by Volkmar Andreae. The existence of this recording was not known until a vinyl disc was offered for sale on an internet auction site in 2018. In superb recorded sound, this discovery is aKathleen Mary Ferrier, CBE (22 April 19128 October 1953) was an English contralto singer who achieved an international reputation as a stage, concert and recording artist, with a repertoire extending from folksong and popular ballads to the classical works of Bach, Brahms, Mahler and Elgar. 
Her death from cancer, at the height of her fame, was a shock to the musical world and particularly to the general public, which was kept in ignorance of the nature of her illness until after her death. The daughter of a Lancashire village schoolmaster, Ferrier showed early talent as a pianist, and won numerous amateur piano competitions while working as a telephonist with the General Post Office. She did not take up singing seriously until 1937, when after winning a prestigious singing competition at the Carlisle Festival she began to receive offers of professional engagements as a vocalist. Thereafter she took singing lessons, first with J.E. Hutchinson and later with Roy Henderson. After the outbreak of the Second World War Ferrier was recruited by the Council for the Encouragement of Music and the Arts (CEMA), and in the following years sang at concerts and recitals throughout the UK. In 1942 her career was boosted when she met the conductor Malcolm Sargent, who recommended her to the influential Ibbs and Tillett concert management agency. She became a regular performer at leading London and provincial venues, and made numerous BBC radio broadcasts. In 1946, Ferrier made her stage debut, in the Glyndebourne Festival premiere of Benjamin Britten's opera The Rape of Lucretia.
Think hard, but answer shortly and concisely. Only give direct answers to the questions. No additional explanations. Directly answer these questions:
Q: Who was the man behind The Chipmunks??
A: David Seville

Q: Which Lloyd Webber musical premiered in the US on 10th December 1993??
A: Sunset Boulevard

Q: Who was the next British Prime Minister after Arthur Balfour??
A: Campbell-Bannerman

Q: Who had a 70s No 1 hit with Kiss You All Over??
A: Exile

Q: Rita Coolidge sang the title song for which Bond film??
A: Octopussy

Q: What claimed the life of singer Kathleen Ferrier?
A:
==================================================

==================================================
PROMPT #6:
==================================================
"You Only Live Twice", performed by Nancy Sinatra, is the theme song to the 1967 James Bond film of the same name. The music was by veteran Bond film composer John Barry, with lyrics by Leslie Bricusse. The song is widely recognized for its striking opening bars, featuring a simple 2-bar theme in the high octaves of the violins and lush harmonies from French horns. It is considered by some to be among the best James Bond theme songs, and has become one of Nancy Sinatra's best known hits. Shortly after Barry's production, Sinatra's producer Lee Hazlewood released a more guitar-based single version. The song has been covered by many artists including Coldplay, Soft Cell, Björk and Shirley Bassey. In 1998, Robbie Williams re-recorded portions of the song (including the opening strings) for use in his UK number-one single "Millennium". Background James Bond veteran John Barry returned to the franchise to produce the score. The lyrics were by Leslie Bricusse, who had previously cowritten the lyrics for the theme to Goldfinger. An initial version of the song was performed by Julie Rogers and recorded with a 50 or 60 piece orchestra at CTS Studios. However, this version was not used since Barry decided to re-write and re-record the song: "It was usually the producers that said 'this isn't working, there's a certain something that it needed'. If that energy wasn't there, if that mysterioso kind of thing wasn't there, then it wasn't going to work for the movie." The Rogers song shares only two lines withBassey belting out the fantastic title song." He added that the remastered edition's sound quality was "impeccable". Chart positions Track listing Credits Project manager: Herb Agner Creative director: Michelle Azzopardi Composer, conductor, primary artist: John Barry Primary artist, vocals: Shirley Bassey Liner notes: Jeff Bond Composer, lyricist: Leslie Bricusse Project manager: Wendy Brueder Producer, reissue producer: Frank Collura Remastering: Bob Fisher Guitar, soloist: Vic Flick Art direction, design: Peter Grant Orchestra contractor: Sid Margo Lyricist: Anthony Newley A&R: Gregg Ogorzelec Engineer: John Richards Saxophone, soloist: John Scott Source: Aftermath Following the success of her performance on the title track, Shirley Bassey sang the title songs for two later Bond films, Diamonds Are Forever and Moonraker. John Barry used the Goldfinger theme on his 1965 John Barry Plays Goldfinger album that featured Robert Brownjohn artwork. References Footnotes Citations Bibliography Soundtrack albums from James Bond films Soundtrack 1964 soundtrack albums EMI Records soundtracks John Barry (composer) soundtracksJames Bond (Roger Moore), and the title evidently refers to the key aerial sequences featured in the movie. Prior to Rita Coolidge being assigned the Octopussy theme, Mari Wilson was a contender, a British singer whose retro-image evoked the mid-'60s when the Bond series originated; but Wilson's lack of a US-profile led to a negative decision. In January 1983, the producer of Octopussy: Cubby Broccoli, stated that he hoped to have current hitmaker Laura Branigan sing the movie's theme song, an artist choice which both Barry and Rice have stated would have pleased them. However, on March 29, 1983 Rita Coolidge was revealed as the singer, a seemingly surprising choice in that Coolidge's career peak had occurred some six years previously. 
Coolidge recalls that Barbara Broccoli, daughter of Cubby Broccoli and herself the assistant director of Octopussy, was a fan of Coolidge and made a point of playing Coolidge records around her father until "one day [he said], "Who is that? That's the voice I want for the movie." Rice still had to complete his contribution as the singer arrived in the studio, with Coolidge stating that "we were waiting for the lyrics as the instrumental track had already been done." The chorus of "All Time High" features a lyric similar to that of Coolidge's #2 hit "(Your Love Has Lifted Me) Higher and Higher" whose lyric "When you wrap your loving arms around me I can stand up and face the world again" is echoed by the "All Time High" lyric "We'll take on the
Think hard, but answer shortly and concisely. Only give direct answers to the questions. No additional explanations. Directly answer these questions:
Q: Who was the man behind The Chipmunks??
A: David Seville

Q: Which Lloyd Webber musical premiered in the US on 10th December 1993??
A: Sunset Boulevard

Q: Who was the next British Prime Minister after Arthur Balfour??
A: Campbell-Bannerman

Q: Who had a 70s No 1 hit with Kiss You All Over??
A: Exile

Q: What claimed the life of singer Kathleen Ferrier??
A: Cancer

Q: Rita Coolidge sang the title song for which Bond film?
A:
==================================================

==================================================
PROMPT #7:
==================================================
which allowed the first legal beer sales since the beginning of Prohibition on January 16, 1920. In 1933 state conventions ratified the Twenty-first Amendment, which repealed Prohibition. The Amendment was fully ratified on December 5, 1933. Federal laws enforcing Prohibition were then repealed. Dry counties Following repeal some states continued prohibition within their own jurisdictions. Almost two-thirds of the states adopted some form of local option which enabled residents in political subdivisions to vote for or against local prohibition. For a time, 38 percent of Americans lived in areas with Prohibition. By 1966, however, all states had repealed their statewide prohibition laws, with Mississippi the last state to do so. Notes Sources Walker, Robert S. and Samuel C. Patterson, Oklahoma Goes Wet: The Repeal of Prohibiton, Eagleton Institute, Rutgers University, (1961). External links Repeal Day is December Fifth See more related images by selecting the "Alcohol" subject at the Persuasive Cartography, The PJ Mode Collection, Cornell University Library Prohibition in the United States Economic history of the United States 1933 in the United States Articles containing video clipsimportation of alcoholic beverages in the United States. The resolution was sent to the states for ratification and became the Eighteenth Amendment to the U.S. Constitution. On January 8, 1918, Mississippi became the first state to ratify the amendment and on January 16, 1919, Nebraska became the 36th state to do so, securing its passage with the required three-fourths of the states. By the end of February 1919, only three states remained as hold-outs to ratification: New Jersey, Connecticut and Rhode Island. The National Prohibition Act, also known as the Volstead Act, was enacted on October 18, 1919. Prohibition in the United States went into effect on January 17, 1920. Nationwide prohibition was repealed in 1933 with the passage of the Twenty-first Amendment on February 20 and its ratification on December 5. List of formerly dry states This table lists the effective dates each state went dry and any dates of repeal that do not coincide with the end of national prohibition in 1933. See also Dry county Alcoholic beverage control state List of alcohol laws of the United States by state Notes Alcohol law in the United States Prohibition in the United StatesAugust 19. PPS functionals were completed August 21. GATV 5006 was then transferred to complex 14 for mating with the Atlas. July 27, 1966 (Wednesday) Following the announcement of his austerity programme, British Prime Minister Harold Wilson survived a vote of censure in the House of Commons, as members of his Labour Party (with an 88-seat majority) supported him. The final result was 246 votes in favor, and 325 against. On the same day, the nation's chief labor union, the Trades Union Congress, voted 20 to 12 in support of a resolution pledging to halt strikes that had been threatened during the six-month freeze against raising wages. For the first time in 58 years, liquor was legally served in Mississippi, the last of the United States to have repealed its prohibition laws. Effective July 1, individual local governments were allowed to hold referendum elections on whether to allow the sale of liquor at state-approved resorts, and Harrison County voters had endorsed the measure. At 6:55 p.m., after police cars escorted a liquor delivery truck into Biloxi. 
The first drink in the state was poured at the Broadwater Beach Hotel, and Louis Cobb, the first legal bartender in Mississippi, sold a glass of scotch whiskey to hotel manager T.M. Dorsett. Biloxi Mayor Dan Guice then cut the ribbon to open the entrance to the hotel's bar.Died: Brenda Sue Brown, 11, was beaten to death after walking with her sister to summer school in Shelby, North Carolina. Police were unable to charge a suspect with the crime, until
Think hard, but answer shortly and concisely. Only give direct answers to the questions. No additional explanations. Directly answer these questions:
Q: Who was the man behind The Chipmunks??
A: David Seville

Q: Which Lloyd Webber musical premiered in the US on 10th December 1993??
A: Sunset Boulevard

Q: Who was the next British Prime Minister after Arthur Balfour??
A: Campbell-Bannerman

Q: Who had a 70s No 1 hit with Kiss You All Over??
A: Exile

Q: What claimed the life of singer Kathleen Ferrier??
A: Cancer

Q: What was the last US state to reintroduce alcohol after prohibition?
A:
==================================================

==================================================
PROMPT #8:
==================================================
to New York City for work in summer stock theatre shortly before winning a supporting role in MGM's These Glamour Girls (1939) opposite Lana Turner and Lew Ayres. The role of Betty was said to have been written especially with Hunt in mind. Other roles in major studio productions soon followed, including supporting roles as Mary Bennet in MGM's version of Pride and Prejudice (1940) with Laurence Olivier, and as Martha Scott's surrogate child Hope Thompson in Cheers for Miss Bishop (1941). Years at MGM In 1941, Hunt signed a contract with MGM, where she remained for the next six years. While filming Blossoms in the Dust, film director Mervyn LeRoy lauded Hunt for her heartfelt and genuine acting ability. During this period she had starring roles in 21 films, including The Penalty (1941) opposite Lionel Barrymore, Panama Hattie (1942) opposite Ann Sothern and Red Skelton, and the war drama Pilot No. 5 (1943) in which she was cast as the love interest of Franchot Tone, and The Valley of Decision (1945). In 1944 she polled seventh in a list by exhibitors of "Stars of Tomorrow". She previously did a screen test to play Melanie Hamilton in Gone with the Wind (1939) and was told by David O. Selznick she would play the role, but to "keep it a secret for now." Three days later, it was announced that Olivia de Havilland was cast. In 1944, she appeared in None Shall Escape, a film that is now regarded as the first about the Holocaust. She playedMiss America 1941, the 15th Miss America pageant, was held at the Boardwalk Hall in Atlantic City, New Jersey on September 6, 1941. Shortly after the crowning of Miss California, Rosemary LaPlanche, who had been first runner-up in 1940, the pageant committee adopted this rule: "No contestant can compete in Atlantic City for the title of Miss America more than once", thus eliminating future state winners with more than one attempt at the national title. LaPlanche became a film actress, as did her sister, Louise LaPlanche. 1941 was also the first year that the special award, “Miss Congeniality” was created. It went to Mifaunwy Shunatona, a member of the Otoe and Pawnee tribes — she was also the first American Indian contestant in the pageant’s history. Results Awards Preliminary awards Other awards Contestants References Secondary sources External links Miss America official website 1941 1941 in the United States 1941 in New Jersey September 1941 events Events in Atlantic City, New JerseyMiss America 1942, the 16th Miss America pageant, was held at the Warner Theater in Atlantic City, New Jersey on September 12, 1942. Miss Texas, Jo-Carroll Dennison won the title after winning the swimsuit and talent categories. She was the first Miss Texas to win the Miss America title. Dennison became an actress and had roles in films such as Winged Victory. She was married at one time to comedian Phil Silvers. Results Awards Preliminary awards Other awards Contestants References Secondary sources External links Miss America (1942) 1942 1942 in the United States 1942 in New Jersey September 1942 events Events in Atlantic City, New Jersey
Think hard, but answer shortly and concisely. Only give direct answers to the questions. No additional explanations. Directly answer these questions:
Q: Who was the man behind The Chipmunks??
A: David Seville

Q: Which Lloyd Webber musical premiered in the US on 10th December 1993??
A: Sunset Boulevard

Q: Who was the next British Prime Minister after Arthur Balfour??
A: Campbell-Bannerman

Q: Who had a 70s No 1 hit with Kiss You All Over??
A: Exile

Q: What claimed the life of singer Kathleen Ferrier??
A: Cancer

Q: Which actress was voted Miss Greenwich Village in 1942?
A:
==================================================

==================================================
PROMPT #9:
==================================================
De Tokyo Stock Price Index (Japans: 東証株価指数) of TOPIX is een belangrijke aandelenindex van de Tokyo Stock Exchange. Berekening In deze index zijn alle bedrijven opgenomen die op de beurs van Tokio staan genoteerd in de First Section. Dit zijn de grootste en meest liquide aandelen die op de beurs worden verhandeld. Tot medio 2006 werd het gewicht van de individuele bedrijven in de index bepaald op basis van de marktkapitalisatie, hierna wordt ook de free float in de berekening meegenomen. Het effect van deze verandering was significant, daar veel Japanse bedrijven aandelen houden in andere Japanse bedrijven, ook wel bekend als crossholdings, om daarmee de langdurige zakenrelatie te onderstrepen. Deze belangen worden voor lange tijd gehouden en worden niet tot de free float gerekend. De index heeft 4 januari 1968 als startdatum, maar ging op 1 juli 1969 daadwerkelijk van start. Een andere belangrijke beursindex in Japan is de Nikkei 225. In deze index zijn 225 bedrijven opgenomen en dit is een prijsgewogen index. Samenstelling Eind maart 2021 bestond de index uit 2187 aandelen. Door het grote aantal aandelen is het gewicht van de individuele namen zeer klein. De top 10 aandelen hebben een gezamenlijk gewicht in de index van slechts 18,4% en de lijst zag er als volgt uit, met de gewichten tussen de haakjes: De belangrijkste drie sectoren zijn: elektronische apparatuur, informatie technologie en chemie. Deze drie vertegenwoordigen tezamen zo'n 34% van de index, waarvan de sector elektronische apparatuur het grootst is met een gewicht van 17,5%. Koershistorie De hoogste stand van deTOPIX steht für Tōkyō Stock Price Index (jap. , Tōshō kabuka shisū) und ist neben dem Nikkei 225 ein Kursindex der Tokioter Börse. Berechnet wird der TOPIX seit dem 1. Juli 1969. Die Index-Basis liegt bei 100 Punkten per 4. Januar 1968. Er enthält alle japanischen Aktien, welche im amtlichen Handel zugelassen sind. Die Gewichtung der einzelnen Unternehmen im Index erfolgt anhand der Marktkapitalisierung. Gegenwärtig (8. September 2021) setzt sich der Index aus 2.189 Aktien zusammen. Wegen dieser hohen Zahl an vertretenen Unternehmen wird der TOPIX als aussagekräftiger für den Zustand der japanischen Wirtschaft angesehen als der Nikkei 225. Weblinks Beschreibung des TOPIX (engl.) TOPIX in Echtzeit Jährliche Entwicklung des TOPIX seit 1949 (Daten vor 1969 – dem Einführungsjahr des TOPIX – sind rückgerechnet; XLS-Format, 31,5 KB; abgerufen am 12. Oktober 2017) Einzelnachweise Aktienindex Wirtschaft (Japan) Abkürzung, commonly known as TOPIX, along with the Nikkei 225, is an important stock market index for the Tokyo Stock Exchange (TSE) in Japan, tracking all domestic companies of the exchange's Prime market division. It is calculated and published by the TSE. , there were 1,669 companies listed on the First Section of the TSE, and the market value for the index was ¥197.4 trillion. The index transitioned from a system where a company's weighting is based on the total number of shares outstanding to a weighting based on the number of shares available for trading (called the free float). This transition took place in three phases starting in October 2005 and was completed in June 2006. 
Although the change is a technicality, it had a significant effect on the weighting of many companies in the index, because many companies in Japan hold a significant number of shares of their business partners as a part of intricate business alliances, and such shares are no longer included in calculating the weight of companies in the index. The TOPIX index is traded as a future on the Osaka Exchange under the ticker symbol JTPX. The CQG contract specifications for the TOPIX Index are listed below. TSE currently calculates and distributes TOPIX every second and further plans to launch a new High-Speed Index dissemination service provided at the millisecond level starting from February 28, 2011. History of TOPIX 1969-07-01 TSE to begin calculating and publishing “TOPIX” and “TOPIX Sector Indices” 1969-08-18 TSE to begin calculating and publishing “Tokyo Stock
Think hard, but answer shortly and concisely. Only give direct answers to the questions. No additional explanations. Directly answer these questions:
Q: Who was the man behind The Chipmunks??
A: David Seville

Q: Which Lloyd Webber musical premiered in the US on 10th December 1993??
A: Sunset Boulevard

Q: Who was the next British Prime Minister after Arthur Balfour??
A: Campbell-Bannerman

Q: Who had a 70s No 1 hit with Kiss You All Over??
A: Exile

Q: What claimed the life of singer Kathleen Ferrier??
A: Cancer

Q: What is the Japanese share index called?
A:
==================================================

==================================================
PROMPT #10:
==================================================
Man in the Music: The Creative Life and Work of Michael Jackson is a non-fiction book written by Joseph Vogel, published in June 2011 by the Sterling Publishing. Reception Man in the Music: The Creative Life and Work of Michael Jackson, was described by the Associated Press as "a fascinating read and really a must have for any fan of Jackson." Filmmaker Spike Lee characterized it as having "brilliantly cracked the DNA, the code, the artistry of Michael Joseph Jackson." References Works about Michael Jackson 2011 non-fiction books Sterling Publishing booksMoonwalk is a 1988 autobiography written by American recording artist Michael Jackson. The book was first published by Doubleday on February 1, 1988, five months after the release of Jackson's 1987 Bad album, and named after Jackson's signature dance move, the moonwalk. The book contains a foreword by Jacqueline Onassis. It reached number one on the New York Times Best Seller list. The book was reissued by Doubleday on October 13, 2009, following Jackson's death on June 25, 2009. Production Jacqueline Onassis, who was an editor at Doubleday, secured the book deal and paid Jackson a $300,000 advance. As part of the deal Jackson wanted Onassis to write a foreword, which she initially refused not wanting her name on any books she worked on but agreed to three paragraphs. She also edited the book. The first manuscript of the book was written by Robert Hilburn and was refused by the publishers, Doubleday, because it lacked "juicy details". A second manuscript was written by Stephen Davis, which Jackson drastically edited. Jackson finally decided to write the book himself, with help from Shaye Areheart. Due to the public interest in Jackson, Moonwalk was prepared for publication in secret. Relatives of Doubleday employees were hired as couriers, to deliver portions of the book from the company's head office in Manhattan to the printing plant in Fairfield, Pennsylvania. At the printing plant, the book was given the code name "Neil Armstrong", after the first "moonwalker". Narrative Dedicated to Fred Astaire, the book discusses Jackson's show business friends, girlfriends and hisMichael Jackson: Unauthorized in a 1994 biography of the late pop star Michael Jackson, written by celebrity biographer Christopher Andersen. Development According to Andersen, work started on the book in early 1991 when he received a call from a fellow journalist, who told him that two workers at Jackson's Neverland Ranch allegedly witnessed Jackson fondling a young celebrity. Andersen tried to interview Jackson several times, but was turned down. When Michael was publicly accused of child molestation in 1993, Andersen was told that he was under surveillance from investigators. Reception The book was largely overlooked by the public. Dana Kennedy of Entertainment Weekly felt that, with its "killer material", Anderson "probably could have retired from the celebrity-bio grind for good" had it been released five years before. People magazine found it to be a "sad book", considering its dark revelations about Jackson's behaviour. References 1994 non-fiction books Unauthorized biographies Works about the Michael Jackson sexual abuse allegations Biographies about musicians
Think hard, but answer shortly and concisely. Only give direct answers to the questions. No additional explanations. Directly answer these questions:
Q: Who was the man behind The Chipmunks??
A: David Seville

Q: Which Lloyd Webber musical premiered in the US on 10th December 1993??
A: Sunset Boulevard

Q: Who was the next British Prime Minister after Arthur Balfour??
A: Campbell-Bannerman

Q: Who had a 70s No 1 hit with Kiss You All Over??
A: Exile

Q: What claimed the life of singer Kathleen Ferrier??
A: Cancer

Q: What was the name of Michael Jackson's autobiography written in 1988?
A:
==================================================

==================================================
PROMPT #11:
==================================================
including popular titles by Sérgio Mendes and Herb Alpert were released with this audio process starting in September 1968. Other record labels soon followed suit, and an estimated 10% of all stereophonic albums released during the late 1960s and early 1970s employed the system. Other labels known to have used the system include Warner Bros. Records and Reprise Records. One of the biggest selling albums using the process is The Association's Greatest Hits, released in 1968. This recording has sold more than 2 million copies in the United States. The process was also used on the 1968 Frank Sinatra album Cycles as well as on most of the studio recordings on Wheels of Fire by Cream. Early 1968 copies of Neil Young's self-titled debut album also used the system. Use of Haeco-CSG in promotional recordings for radio The original intention of using Haeco-CSG on commercial LP releases was rather short lived, however, use of the process continued well into the mid-1970s on promotional records sent to radio stations. Many commercial FM Rock stations did not transition from mono to stereo broadcasting until the mid to late 1970s. AM Pop music stations continued to broadcast in mono, as AM stereo broadcasting was not introduced until 1982 and was never widely adopted. Many promotional singles and some commercial singles from the Warner/Reprise/Atlantic label group from this era had "CSG Mono Process" or "CSG Process" printed on the labels. Artists included Frank Sinatra, Gordon Lightfoot, James Taylor, Seals and Crofts. Warner subsidiary labels such as Atlantic issued a serieswas introduced to the public on December 13, 1957, at the Times Auditorium in New York City. 500 copies of this initial demonstration record were pressed. On December 16, 1957, Frey advertised in the trade magazine Billboard that he would send a free copy to anyone in the industry who wrote to him on company letterhead. Frey became known as "Mr. Stereo" during that era. Stereophonic sound was not entirely new to the public. In 1952 sound engineer Emory Cook developed a "Binaural" disk that used two separate grooves and playback needles to produce stereophonic sound; the following year he had a catalog of about 25 disks available for audiophiles. Multi-channel sound was integral to the widescreen motion picture processes Cinerama (1952) and CinemaScope (1953). Stereophonic audio tapes had been commercially available to audiophiles, although expensive, since the early-1950s. After the release of the Audio Fidelity demonstration disks, the other spur to the popularity of stereo disks was the reduction in price of a stereo magnetic cartridge, for playing the disks, from $250 to $29.95 in June 1958. The first four stereo discs available to the general public were released by Audio Fidelity in March, 1958--Johnny Puleo and his Harmonica Gang Volume 1 (AFSD 5830), Railroad - Sounds of a Vanishing Era (AFSD 5843), Lionel - Lionel Hampton and his Orchestra (AFSD 5849) and Marching Along with the Dukes of Dixieland Volume 3 (AFSD 5851). By the end of March the company had four more stereo LPs available. In the summer of 1958, Audio Fidelity recordedin 1957, with his Essex Records office manager George Phillips, he founded Somerset Records and Somerset Stereo Fidelity Records budget albums. His greatest claim to fame was selling large amounts of cheaply priced albums, with Somerset claiming to have manufactured the first stereo budget albums. 
The name of Somerset high fidelity albums was suggested by Miller International's West Coast distributor, Jimmy Warren, with the name of Stereo Fidelity (stereo albums) thought of by Wally Hill to capitalize on the public's interest in both high fidelity and stereophonic sound. The economy came from Miller starting his own record factory in Swarthmore, Pennsylvania, using public domain music and non union musicians from outside the United States to record cover versions of hit songs of the time. Many original tunes were written by Monty Kelly, Robert Lowden, and Joseph Kuhn with the music published by Miller's own music publisher, Chesdel Music created in 1962. Miller had his own distribution channels of his records in supermarkets and drugstores with the cheap albums being sold in metal racks similar to those holding paperback books or cardboard record holders called "dumps" that could be placed anywhere. Miller's record albums were sold wholesale for 93 cents to salesmen who sold them to merchants who sold them to the public for $1.98. Somerset Records used artist Anthony "Chic" Laganella to create attractive eye catching album covers. Miller used the name 101 Strings for several German orchestras; their first album appearing in September 1957. In 1958 Somerset released 24 101 Strings titles. Miller International's philosophy
Think hard, but answer shortly and concisely. Only give direct answers to the questions. No additional explanations. Directly answer these questions:
Q: Who was the man behind The Chipmunks??
A: David Seville

Q: Which Lloyd Webber musical premiered in the US on 10th December 1993??
A: Sunset Boulevard

Q: Who was the next British Prime Minister after Arthur Balfour??
A: Campbell-Bannerman

Q: Who had a 70s No 1 hit with Kiss You All Over??
A: Exile

Q: What claimed the life of singer Kathleen Ferrier??
A: Cancer

Q: In which decade did stereo records first go on sale?
A:
==================================================

==================================================
PROMPT #12:
==================================================
Flack in 1896) to win gold medals in both the 800 m and 1500 m in the same Olympics. Billy Mills, an unfancied runner, became the only American to win the gold in the men's 10,000 m. Bob Hayes won the 100 metre title in a time of 10.06 seconds, equaling the world record, and set the current record for the fastest relay leg in the 4×100 m. Joe Frazier, future heavyweight champion of the world, won a gold medal in heavyweight boxing while competing with a broken thumb. This was the last Summer Olympics to use a cinder running track for athletic events, and the first to use fiberglass poles for pole vaulting. Zambia declared its independence on the day of the closing ceremony of the 1964 Summer Olympics, thereby becoming the first country ever to have entered an Olympic games as one country, and left it as another. This was celebrated in the ceremony itself by the team using a placard with "Zambia" instead of the "Northern Rhodesia" placard from the opening ceremony. Zambia was the only team to use a placard in the closing ceremony. The start of operations for the first Japanese "bullet train" (the Tōkaidō Shinkansen) between Tokyo Station and Shin-Ōsaka Station was scheduled to coincide with the Olympic games. The first regularly scheduled train ran on 1 October 1964, just nine days before the opening of the games, transporting passengers in about four hours, and connecting the three major metropolitan areas of Tokyo, Nagoya, and Osaka. Ranatunge Karunananda who representedsystems were used: official hand timing, hand started photo-finish times, and the Gustavus Town Kirby timing device, which was designed by Kirby to determine the correct order of finish in horse races. The official report for 1932 Olympics states: "In addition to hand timing, two auxiliary electrical timing devices were used. Both were started by an attachment to the starters gun. One was stopped by hand at the time the runners hit the tape. The other was provided with a motion picture camera which photographed the runner at the tape and the dial of the time indicator simultaneously." Kirby's system was also used at the 1932 US. Olympic Trials, where Ralph Metcalfe's winning time of 10.62 in the 100 meters is considered possibly the first automatically timed world record. FAT was also used in 1936, but very few times have been found. In 1948, Bulova began developing the Phototimer, a unique combination of photo-finish camera and precision electronic timing instrument. The Phototimer was the first automatic timing device to be used in competitive sports. It was used extensively in North America, including at the 1948 US Olympic trials. The Bulova device was activated by the sound of the starting gun firing, rather than by a direct connection, which means that the times were around 0.02 seconds faster than reality. The 1948 Olympics, however, continued to use Omega timing with a device called the 'Magic Eye', developed by British Race Finish Recording Co. Ltd. The automatic times produced in the 1948 Olympics have never been released, butWhile the most notable story coming out of 1968 was socio-political, politics involved with the Olympics was not something unique to this year. However, the year marked the beginning of several emerging elements of contemporary track and field. 
Automatic timing While timing to the 100th of a second had been experimented with for many years, the 1968 Summer Olympics were the first to use Fully Automatic Timing, in not only athletics, but in canoeing, rowing, cycling, equestrian and swimming competitions. Subsequently, systems to record such times became more common and thus the accuracy of Fully Automatic Timing became mandated for World Record acceptance. While this rule was officially put into place in 1977, many 1968 records still stood as the first Automatically timed record. All weather tracks This technology too had been developing, but Tartan tracks were used as the competition surface for the first time at an Olympics. Since then an all-weather running track was required for all top-level competition. Subsequently, the inconsistency of the running surface became a significantly smaller factor in athletic performance. Altitude With the Olympics happening in Mexico City, at high altitude, the effect of the thin air on athletic performance became a factor on world records. This was already a known phenomenon, and the American team was selected by holding the Olympic Trials at high altitude at Echo Summit, California. In 1955, Lou Jones set the world record in the 400 meters at altitude in Mexico City. Following the 1968 Summer Olympics the: Men's 100 meters record, set by Jim
Think hard, but answer shortly and concisely. Only give direct answers to the questions. No additional explanations. Directly answer these questions:
Q: Who was the man behind The Chipmunks??
A: David Seville

Q: Which Lloyd Webber musical premiered in the US on 10th December 1993??
A: Sunset Boulevard

Q: Who was the next British Prime Minister after Arthur Balfour??
A: Campbell-Bannerman

Q: Who had a 70s No 1 hit with Kiss You All Over??
A: Exile

Q: What claimed the life of singer Kathleen Ferrier??
A: Cancer

Q: In what year's Olympics were electric timing devices and a public-address system used for the first time?
A:
==================================================

==================================================
PROMPT #13:
==================================================
A list of stratovolcanoes follows below. Africa Cameroon Mount Cameroon Democratic Republic of Congo Mount Nyiragongo, Goma; designated as a Decade Volcano It contains an active lava lake inside its crater which overflowed due to cracks in 2002. Mount Mikeno Eritrea Alid Volcano Dubbi Volcano Nabro Volcano Ethiopia Adwa Borawli, Afar Region Dabbahu Volcano Mount Fentale Kenya Mount Kenya, which contains several volcanic plugs on its peak. Mount Longonot Rwanda Mount Bisoke, on the border between Rwanda and the Democratic Republic of the Congo. Mount Gahinga, on the border between Rwanda and Uganda. Mount Karisimbi, on the border between Rwanda and the Democratic Republic of the Congo. Mount Muhabura, on the border between Rwanda and Uganda. Mount Sabyinyo, marks the border between Rwanda, Uganda, and the Democratic Republic of the Congo. Tanzania Ol Doinyo Lengai, the Earth's only active carbonatite lava-producing volcano. Mount Kilimanjaro, a dormant stratovolcano. It is the highest point of Africa. Mount Meru Mid-Atlantic Ridge Mount Pico in Pico Island, Azores, Portugal Teide in Tenerife, Canary Islands, Spain; designated as a Decade Volcano Cumbre Vieja in La Palma, Canary Islands, Spain Mount Fogo in Fogo, Cape Verde Green Mountain, Ascension Island Pico de las Nieves in Gran Canaria, Canary Islands, Spain Americas Caribbean La Grande Soufrière on Basse-Terre Island, Guadeloupe Soufriere Hills on the island Montserrat Its 1995 eruptions resulted in the abandonment of its capital city, Plymouth. Soufrière on the island Saint Vincent Mount Pelée on the island Martinique Its devastating eruption on 8 May 1902 resulted in the complete destruction ofMount Kilimanjaro is a volcano in Tanzania and the highest mountain in Africa. Kilimanjaro may also refer to: Tanzania Kilimanjaro National Park comprises the whole of Mount Kilimanjaro above the tree line and six forest corridors stretching down Kilimanjaro Region, a region in Tanzania Kilimanjaro (ward), a ward in the Moshi Urban district of Kilimanjaro Region, Tanzania Kilimanjaro International Airport in Tanzania a Tanzanian beer, see Beer in Africa#Eastern Africa a Tanzanite jewellery brand owned by F. Hinds Music Killamanjaro, a Jamaican reggae sound system Albums Kilimanjaro, an album by German artist Superpitcher Kilimanjaro (The Rippingtons album), a 1988 album by The Rippingtons Kilimanjaro (The Teardrop Explodes album), an album by The Teardrop Explodes Songs "Kilimanjaro", song by The Del Vikings 1962 "Kilimanjaro", song by Manhattan Brothers 1955 "Kilimanjaro", song by The Teardrop Explodes 1980 "Kilimanjaro", song by Juluka 1984 "Kilimandjaro" (song), a 1966 French-language song by French singer Pascal Danel "Kilimanjaro" (song), a 2010 song by A.R. Rahman from the film Enthiran "Kilimanjaro", a song by KSI from the 2016 extended play Keep Up Film Kilimanjaro (film), a 2013 American film Nigeria Kilimanjaro restaurant, a fast-food chain in Nigeria. See also The Snows of Kilimanjaro (disambiguation)Mount Kilimanjaro () is a dormant volcano located in Kilimanjaro Region of Tanzania. It has three volcanic cones: Kibo, Mawenzi, and Shira. It is the highest mountain in Africa and the highest single free-standing mountain above sea level in the world: above sea level and about above its plateau base. It is the highest volcano in Africa and the Eastern Hemisphere. Kilimanjaro is the fourth most topographically prominent peak on Earth. It is part of Kilimanjaro National Park and is a major hiking and climbing destination. 
Because of its shrinking glaciers and ice fields, which are projected to disappear between 2025 and 2035, it has been the subject of many scientific studies. Toponymy The origin of the name Kilimanjaro is not known, but a number of theories exist. European explorers had adopted the name by 1860 and reported that Kilimanjaro was the mountain's Kiswahili name. The 1907 edition of The Nuttall Encyclopædia also records the name of the mountain as Kilima-Njaro. Johann Ludwig Krapf wrote in 1860 that Swahilis along the coast called the mountain Kilimanjaro. Although he did not offer any support, he claimed that Kilimanjaro meant either mountain of greatness or mountain of caravans. Under the latter meaning, kilima meant mountain and jaro meant caravans. Jim Thompson claimed in 1885, again without support, that the term Kilima-Njaro "has generally been understood to mean" the mountain (kilima) of greatness (njaro). He also suggested "though not improbably it may mean" the white mountain. Njaro is an ancient Kiswahili word for shining. Similarly, Krapf wrote that a
Think hard, but answer shortly and concisely. Only give direct answers to the questions. No additional explanations. Directly answer these questions:
Q: Who was the man behind The Chipmunks??
A: David Seville

Q: Which Lloyd Webber musical premiered in the US on 10th December 1993??
A: Sunset Boulevard

Q: Who was the next British Prime Minister after Arthur Balfour??
A: Campbell-Bannerman

Q: Who had a 70s No 1 hit with Kiss You All Over??
A: Exile

Q: What claimed the life of singer Kathleen Ferrier??
A: Cancer

Q: Which volcano in Tanzania is the highest mountain in Africa?
A:
==================================================

==================================================
PROMPT #14:
==================================================
of the Libyan Draft Constitutional Charter for the Transitional Stage: The national flag shall have the following shape and dimensions: Its length shall be double its width, its shall be divided into three parallel coloured stripes, the uppermost being red, the centre black and lowest green, the black stripe shall be equal in area to the other two stripes together and shall bear in its centre a white crescent, between the two extremities of which there shall be a five-pointed white star. On 10 March 2011, France was the first country to recognise the council as the official government of Libya, as well as the first to allow the Libyan embassy staff to raise the flag. On 21 March, the flag was flown by the Permanent Mission of Libya to the United Nations and appeared on their official website, and thereafter in late August by the Arab League and by Libya's own telecommunications authority, the Libya Telecom & Technology, on its own website. In the following months many other Libyan embassies replaced the green flag of Gaddafi with the tricolour flag. This original flag of Libya is now the only flag used by the United Nations to represent Libya, according to the following UN statement: "Following the adoption by the General Assembly of resolution 66/1, the Permanent Mission of Libya to the United Nations formally notified the United Nations of a Declaration by the National Transitional Council of 3 August 2011 changing the official name of the Libyan Arab Jamahiriya to 'Libya' as well as athe flag's colours and symbols. According to Omar Faiek Shennib, "red was selected for the blood sacrificed for the freedom of Libya, black to remember the dark days that Libyans lived under the occupation of the Italians and green to represent its primary wealth, agriculture, [Libya once being referred to as the 'agricultural basket' or 'breadbasket' of the Ottoman Empire] and the future prosperity of the country. The star and crescent were placed within the black central strip of the flag as a reference to the Senussi flag and the role of King Idris in leading the country to independence". The flag's colours also echo the colours of the flags of the three regions of Libya: Fezzan (red), Cyrenaica (black), and Tripolitania (green). Under Muammar Gaddafi's dictatorship, Libya had a red-white-black flag from 1969 to 1977, and it was replaced by the all-green flag from 1977 to 2011, during which it was the only flag in the world to have one color and no design. During the Libyan Civil War against the rule of Muammar Gaddafi, the 1951–69 flag – as well as various makeshift versions without the crescent and star symbol, or without the green stripe – came back into use in areas held by the Libyan opposition and by protesters at several Libyan diplomatic missions abroad. The National Transitional Council, formed on 27 February 2011, adopted the flag previously used in the Kingdom of Libya between 1951 and 1969 as the "emblem of the Libyan Republic". The flag was officially defined in article threeThe flag of Libya from 1977 to 2011 was used by the Socialist People's Libyan Arab Jamahiriya from 1977 to 1986 and later the Great Socialist People's Libyan Arab Jamahiriya until 2011. The design is a green field in 1:2 ratio and was considered the only solid colour national flag in the world during its time. In 2011, after the collapse of Gaddafi's government, the 1951–1969 flag from the Kingdom of Libya was re-adopted but the flag introduced by Gaddafi remained in use by Pro-Gaddafists and Gaddafi loyalists. 
Before 1977, the country was called the Libyan Arab Republic from 1969 to 1977 and used a red-white-black flag similar to most traditional Arab national flags bearing a resemblance to the modern flag of Yemen. in 1977 after the Egyptian-Libyan War, the blank green flag was introduced to replace the red-white-black flag to avoid similarities with Egypt. History of Libya under Muammar Gaddafi Flags introduced in 1977 1977 establishments in Libya 2011 disestablishments in Libya
Think hard, but answer shortly and concisely. Only give direct answers to the questions. No additional explanations. Directly answer these questions:
Q: Who was the man behind The Chipmunks??
A: David Seville

Q: Which Lloyd Webber musical premiered in the US on 10th December 1993??
A: Sunset Boulevard

Q: Who was the next British Prime Minister after Arthur Balfour??
A: Campbell-Bannerman

Q: Who had a 70s No 1 hit with Kiss You All Over??
A: Exile

Q: What claimed the life of singer Kathleen Ferrier??
A: Cancer

Q: The flag of Libya is a plain rectangle of which color?
A:
==================================================

==================================================
PROMPT #15:
==================================================
la Francophonie. Places of worship Niger being a predominantly Muslim country, mosques are the most common places of worship, with the Grande Mosquée being the largest in the city. There are also various Christian churches, most notably Our Lady of Perpetual Help Cathedral and the Cathedral de Maourey. Governance Administration Niamey makes up a special capital district of Niger, which is surrounded by the Region of Tillabéri. The city of Niamey itself is governed as an autonomous first-level administrative block, the Niamey Urban Community (Fr. Communauté Urbaine de Niamey, or CUN). It includes five Urban Communes, divided into 44 "Districts" and 99 "Quartiers", including formerly independent towns. It is a co-equal first division subdivision with the seven Regions of Niger. The Niamey Urban Community includes an administration and Governor appointed by national leaders. Like the rest of Niger, Niamey has seen a decentralisation of governance since 2000. Government Ordinance n°2010–56 and Presidential Decree n°2010-679 of September 2010 mandated an elected City Council for the city of Niamey, subsumed under the CUN. This excludes some outlying areas of the CUN. Forty-five councillors are popularly elected and in turn elect the Mayor of the City of Niamey. In July 2011, the first Mayor under the new system, Oumarou Dogari Moumouni, was installed by the Governor of the CUN Mrs. Aïchatou Boulama Kané and the City Council. The City Council and Mayor have limited roles compared to the CUN Governor. Niamey has a third layer of government in the Commune system. Each Commune elects its own council, and outsidein Niger Niamey NigerNiamey () is the capital and largest city of Niger. Niamey lies on the Niger River, primarily situated on the east bank. Niamey's population was counted as 1,026,848 as of the 2012 census. As of 2017, population projections show the capital district growing at a slower rate than the country as a whole, which has the world's highest fertility rate. The city is located in a pearl millet growing region, while manufacturing industries include bricks, ceramic goods, cement, and weaving. History Niamey was probably founded in the 18th century and originated as a cluster of small villages (Gaweye, Kalley, Maourey, Zongo and Foulani Koira). Niamey was of little importance until the French developed it as a colonial centre in the late 1890s. The town, then with an estimated population of some 1,800, was chosen as the capital of the newly created Military Territory of Niger in 1905, however, the capital was shifted to the more established city of Zinder in 1912. Zinder's proximity to the Nigerian border and distance from French-controlled ports prompted the French to move the capital back to Niamey in 1926, by which time the city had some 3,000 inhabitants. A series of devastating droughts prompted significant population growth during this period, and by 1945 the population was about 8,000. Prior to 1926-27 the Upper Volta-Niger border ran along the Niger river, meaning that Niamey lay directly on the boundary. At the time of independence in 1960 the population had grown to around 30,000. The period from 1970 to 1988 was one in
Think hard, but answer shortly and concisely. Only give direct answers to the questions. No additional explanations. Directly answer these questions:
Q: Who was the man behind The Chipmunks??
A: David Seville

Q: Which Lloyd Webber musical premiered in the US on 10th December 1993??
A: Sunset Boulevard

Q: Who was the next British Prime Minister after Arthur Balfour??
A: Campbell-Bannerman

Q: Who had a 70s No 1 hit with Kiss You All Over??
A: Exile

Q: What claimed the life of singer Kathleen Ferrier??
A: Cancer

Q: Of which African country is Niamey the capital?
A:
==================================================

==================================================
PROMPT #16:
==================================================
James Walter McCord Jr. (January 26, 1924 – June 15, 2017) was an American CIA officer, later head of security for President Richard Nixon's 1972 reelection campaign. He was involved as an electronics expert in the burglaries which precipitated the Watergate scandal. Career McCord was born in Waurika, Oklahoma. He served as a bombardier with the rank of second lieutenant in the Army Air Forces during World War II. He briefly attended Baylor University before receiving a B.B.A. from the University of Texas at Austin in 1949. In 1965, he received an M.S. in international affairs from George Washington University. After beginning his career at the Federal Bureau of Investigation (FBI), McCord worked for the Central Intelligence Agency (CIA), ultimately ascending to the GS-15 directorship of the Agency's Office of Security. For a period of time, he was in charge of physical security at the Agency's Langley headquarters. L. Fletcher Prouty, a former colonel in the United States Air Force, claimed then-Director of Central Intelligence Allen Dulles introduced McCord to him as "my top man.". In 1961, under his direction, a counter-intelligence program was launched against the Fair Play for Cuba Committee. He also held the rank of lieutenant colonel in the United States Air Force Reserve. Watergate scandal Shortly after resigning from the CIA, McCord was interviewed and then hired by Jack Caulfield in January 1972 "for strict, solely defensive security work at the Republican National Committee (RNC) and the Committee to Re-Elect the President (CRP)." Some of the money from this contract came fromadministration as assistant director of the Bureau of the Budget, devoting most of his time to Defense matters. In 1971, President Nixon appointed Schlesinger a member of the Atomic Energy Commission (AEC) and designated him as chairman. Serving in this position for about a year and a half, Schlesinger instituted extensive organizational and management changes in an effort to improve the AEC's regulatory performance. CIA Director Schlesinger was CIA Director from February 2, 1973, to July 2, 1973. He was succeeded by William Colby. Schlesinger was extremely unpopular with CIA staff, as he reduced CIA staff by 7%, and was considered a Nixon loyalist seeking to make the agency more obedient to Nixon. He had a CCTV camera installed near his official portrait at the CIA headquarters in Langley, Va., as it was believed that vandalism of the portrait by disgruntled staff was likely. Secretary of Defense (1973–1975) Schlesinger left the CIA to become Secretary of Defense on July 2, aged 44. As a university professor, researcher at Rand, and government official in three agencies, he had acquired an impressive resume in national security affairs. Nuclear strategy Shortly after assuming office, Schlesinger outlined the basic objectives that would guide his administration: maintain a "strong defense establishment"; "assure the military balance so necessary to deterrence and a more enduring peace"; obtain for members of the military "the respect, dignity and support that are their due"; assume "an . . . obligation to use our citizens' resources wisely"; and "become increasingly competitive with potential adversaries.... [W]e must nota conventional North Vietnamese assault in 1975. The CORDS model and its approach influenced U.S. strategy and thinking on counterinsurgency in the 2000s in Iraq and Afghanistan. CIA HQ: Director Colby returned to Washington in July 1971 and became executive director of CIA. 
After long-time DCI Richard Helms was dismissed by President Nixon in 1973, James Schlesinger assumed the helm at the Agency. A strong believer in reform of the CIA and the intelligence community more broadly, Schlesinger had written a 1971 Bureau of the Budget report outlining his views on the subject. Colby, who had had a somewhat unorthodox career in the CIA focused on political action and counterinsurgency, agreed with Schlesinger's reformist approach. Schlesinger appointed him head of the clandestine branch in early 1973. When Nixon reshuffled his agency heads and made Schlesinger secretary of defense, Colby emerged as a natural candidate for DCI—apparently on the basis of the recommendation that he was a professional who would not make waves. Colby was known as a media-friendly CIA director. His tenure as DCI, which lasted two and a half tumultuous years, was overshadowed by the Church and Pike congressional investigations into alleged U.S. intelligence malfeasance over the preceding 25 years, including 1975, the so-called Year of Intelligence. Colby's time as DCI was also eventful on the world stage. Shortly after he assumed leadership, the Yom Kippur War broke out, an event that surprised not only the American intelligence agencies but also the Israelis. This intelligence surprise reportedly affected Colby's credibility with the Nixon administration. Colby
|
||||
Think hard, but answer shortly and concisely. Only give direct answers to the questions. No additional explanations. Directly answer these questions:
|
||||
Q: Who was the man behind The Chipmunks??
|
||||
A: David Seville
|
||||
|
||||
Q: Which Lloyd Webber musical premiered in the US on 10th December 1993??
|
||||
A: Sunset Boulevard
|
||||
|
||||
Q: Who was the next British Prime Minister after Arthur Balfour??
|
||||
A: Campbell-Bannerman
|
||||
|
||||
Q: Who had a 70s No 1 hit with Kiss You All Over??
|
||||
A: Exile
|
||||
|
||||
Q: What claimed the life of singer Kathleen Ferrier??
|
||||
A: Cancer
|
||||
|
||||
Q: Who was the director of the CIA from 1976-81?
|
||||
A:
|
||||
==================================================
|
||||
|
||||
==================================================
|
||||
PROMPT #17:
|
||||
==================================================
|
||||
"On the Street Where You Live" is a song with music by Frederick Loewe and lyrics by Alan Jay Lerner from the 1956 Broadway musical My Fair Lady. It is sung in the musical by the character Freddy Eynsford-Hill, who was portrayed by John Michael King in the original production. In the 1964 film version, it was sung by Bill Shirley, dubbing for actor Jeremy Brett. Recorded versions The most popular single of the song was recorded by Vic Damone in 1956 for Columbia Records. It reached No. 4 on the Billboard chart and No. 6 on Cashbox magazine's chart. It was a No. 1 hit in the UK Singles Chart in 1958. Eddie Fisher also had a top 20 Billboard hit with the song in 1956, reaching No. 18. Lawrence Welk and His Orchestra released a version that went to No. 96 in 1956. Andy Williams' recording appeared in the Billboard top 40 in 1964, reaching No. 3 on the adult contemporary chart and No. 28 on the Billboard Hot 100. The song has been recorded by a wide variety of other performers, including Ray Conniff and Bing Crosby, who recorded the song in 1956 for use on his radio show and it was subsequently included in the boxed set The Bing Crosby CBS Radio Recordings (1954–56) issued by Mosaic Records (catalog MD7-245) in 2009, Lawrence Welk (whose band also performed it on his weekly TV series numerous times), Shirley Horn, Doris Day, George Shearing, Frank Chacksfield, Alfie Boe, Bobby Darin, Dean Martin, Mario Lanza,The Times praised it as "Alan Jay Lerner's terrific autobiography". The Street Where I Live was reissued in 1989 by Columbus Books and in 1994 by the Da Capo Press. In 2000, BBC radio broadcast a serialization of the book, read by Henry Goodman, which The Times called "one of the delights of the evening schedule". References Sources Non-fiction books about musical theatre"On the Street Where You Live" is a song from the 1956 Broadway musical My Fair Lady. On the Street Where You Live may also refer to: On the Street Where You Live (TV series), an Irish documentary television series On The Street Where You Live, a 2001 novel by Mary Higgins Clark
|
||||
Think hard, but answer shortly and concisely. Only give direct answers to the questions. No additional explanations. Directly answer these questions:
|
||||
Q: Who was the man behind The Chipmunks??
|
||||
A: David Seville
|
||||
|
||||
Q: Which Lloyd Webber musical premiered in the US on 10th December 1993??
|
||||
A: Sunset Boulevard
|
||||
|
||||
Q: Who was the next British Prime Minister after Arthur Balfour??
|
||||
A: Campbell-Bannerman
|
||||
|
||||
Q: Who had a 70s No 1 hit with Kiss You All Over??
|
||||
A: Exile
|
||||
|
||||
Q: What claimed the life of singer Kathleen Ferrier??
|
||||
A: Cancer
|
||||
|
||||
Q: Which musical featured the song The Street Where You Live?
|
||||
A:
|
||||
==================================================
|
||||
|
||||
==================================================
|
||||
PROMPT #18:
|
||||
==================================================
|
||||
engineers were ordered to end construction work. The Allies were unaware of this and mounted further attacks on the site as part of the United States Army Air Forces experimental Operation Aphrodite, involving radio-controlled B-24 Liberators packed with explosives. Two such attacks were mounted but failed; in the second such attack, on 12 August, Lt Joseph P. Kennedy, Jr. – the elder brother of future US President John F. Kennedy – was killed when the drone aircraft exploded prematurely. By the end of the bombing campaign, over 4,100 tons of bombs had been dropped on Mimoyecques, more than on any other V-weapons site. The Mimoyecques site was never formally abandoned, but German forces left it at the start of September 1944 as the Allies advanced northeast from Normandy towards the Pas de Calais. It was captured on 5 September by the Canadian 3rd Infantry Division. Subsequent investigations and attempted demolition In September 1944, Duncan Sandys ordered the constitution of a Technical Inter-Services Mission under Colonel T.R.B. Sanders. It was given the task of investigating the V-weapons sites at Mimoyecques, Siracourt, Watten, and Wizernes, collectively known to the Allies as the "Heavy Crossbow" sites. Sanders' report was submitted to the War Cabinet on 19 March 1945. Even at this stage the true purpose of the site was unclear. Claims that it had been intended to be used for "electro-magnetic projectors" (railguns), firing huge shells at London, were debunked by Lord Cherwell, Winston Churchill's scientific adviser, who calculated that it would take sixty times the output of Battersearesearched at a facility in Peenemünde along with the V-1 flying bomb. The V-2's first target was Paris on 8 September 1944. The program while advanced proved to be an impediment to the war economy. The large capital investment was not repaid in military effectiveness. The rockets were built at an underground factory at Mittelwerk. Labor to build the A4 rockets came from the Mittelbau-Dora concentration camp. Of the 60,000 people who ended up at the camp 20,000 died, due to the appalling conditions. On 14 April 1944, Speer lost control of Organisation Todt to his Deputy, Franz Xaver Dorsch. He opposed the assassination attempt against Hitler on 20 July 1944. He was not involved in the plot, and played a minor role in the regime's efforts to regain control over Berlin after Hitler survived. After the plot Speer's rivals attacked some of his closest allies and his management system fell out of favor with radicals in the party. He lost yet more authority. Defeat of Nazi Germany Losses of territory and a dramatic expansion of the Allied strategic bombing campaign caused the collapse of the German economy from late 1944. Air attacks on the transport network were particularly effective, as they cut the main centres of production off from essential coal supplies. In January 1945, Speer told Goebbels that armaments production could be sustained for at least a year. However, he concluded that the war was lost after Soviet forces captured the important Silesian industrial region later that month. Nevertheless, Speer believed that Germany shouldof 1944 the Allies continued their gains in the Mediterranean Theatre and massed men and materiel for a European invasion along the French channel coastline. The conspirators began to organize for another attempt to assassinate Hitler and take over both German civil government and its military. 
The von Stauffenberg bomb attempt and aftermath By the summer of 1944 unrest in the German military and diplomatic ranks was widespread. The Allied landing at Normandy in June and failed German response raised the specter of doom among the upper ranks even of German field marshals. The Schwarze Kapelle responded by organizing a deadly attempt on Hitler's life at his Wolf's Lair compound in East Prussia. Undertaken by an aristocratic member of a hereditarily military family, Colonel Claus von Stauffenberg, the July 20 Plot nearly succeeded. Although surrounded by fatalities from the bomb Hitler escaped with a concussion and various injuries. In the aftermath he was determined to get vengeance upon the plotters. The Gestapo rounded up the members of the Schwarze Kapelle and many, many more it believed were either implicated in or sympathetic to it; according to its records it put 7,000 of them to death. Stauffenberg and three others were summarily shot that night. Most of the conspirators were put on trial in the Volksgerichtshof (People's Court) between August 1944 to February 1945. Many were executed the day after their convictions by hanging from meat hooks at Plötzensee Prison. Architect of the 1943 bomb plot on Hitler's plane Fabian von Schlabrendorff only escaped death because an
|
||||
Think hard, but answer shortly and concisely. Only give direct answers to the questions. No additional explanations. Directly answer these questions:
|
||||
Q: Who was the man behind The Chipmunks??
|
||||
A: David Seville
|
||||
|
||||
Q: Which Lloyd Webber musical premiered in the US on 10th December 1993??
|
||||
A: Sunset Boulevard
|
||||
|
||||
Q: Who was the next British Prime Minister after Arthur Balfour??
|
||||
A: Campbell-Bannerman
|
||||
|
||||
Q: Who had a 70s No 1 hit with Kiss You All Over??
|
||||
A: Exile
|
||||
|
||||
Q: What claimed the life of singer Kathleen Ferrier??
|
||||
A: Cancer
|
||||
|
||||
Q: "Who was the target of the failed ""Bomb Plot"" of 1944?"
|
||||
A:
|
||||
==================================================
|
||||
|
||||
==================================================
|
||||
PROMPT #19:
|
||||
==================================================
|
||||
propelling him into the first rank of international superstars. The album contained the number-one hit "All Night Long", a Caribbean-flavored dance number that was promoted by a colorful music video produced by former Monkee Michael Nesmith. In 1984, he performed "All Night Long" at the ending ceremony of the XXIII Olympic Games in Los Angeles. Several more Top 10 hits followed, the most successful of which was the ballad "Hello" (1984), a sentimental love song that showed how far he had moved from his R&B roots. Richie had three more top ten hits in 1984, "Stuck on You" (No. 3), "Running with the Night" (No. 7) and "Penny Lover" (No. 8), as well as writing and producing "Missing You" for former labelmate and duet partner Diana Ross (No. 10 Pop, No. 1 R&B). In 1985, he wrote and performed "Say You, Say Me" for the film White Nights. The song won an Academy Award and reached No. 1 on the U.S. charts, staying there for four weeks, making it the number-two song of 1986 according to Billboards Year-End Hot 100 chart, behind the charity single "That's What Friends Are For" by Dionne and Friends. He also collaborated with Michael Jackson on the charity single "We Are the World" by USA for Africa, another number-one hit. In 1986, Richie released Dancing on the Ceiling, his last widely popular album, which produced a run of five US and UK hits, "Say You, Say Me" (U.S. No. 1), "Dancing on the Ceiling" (U.S. No. 2), "Love Will Conquer All"top 20 US R&B chart hit in 1972. Their first few recordings were released on Buddah Records, including "Hold Back the Night", which was a hit on the Billboard R&B chart in 1973, before a re-release saw it climb in the UK two years later. Several R&B hits followed during a stay with Philadelphia International subsidiary Golden Fleece (run by Baker-Harris-Young) before they signed to Atlantic Records. Their single "Disco Inferno" (1976), which was included on the Grammy Award-winning Saturday Night Fever: The Original Movie Sound Track in 1977, reached No. 11 on the Billboard Hot 100 chart in May 1978. Other major hits included "Hold Back the Night" (1975) (UK No. 5) and "That's Where the Happy People Go" (1976). In late 1977, the Trammps released the song "The Night the Lights Went Out" to commemorate the electrical blackout that affected New York City on July 13–14, 1977. Their signature song "Disco Inferno" has been covered by Tina Turner and Cyndi Lauper. In addition, Graham Parker covered "Hold Back the Night" on "The Pink Parker EP" in 1977, and reached No. 24 in the UK Singles Chart, and top 60 in the US. In 2021, "Disco Inferno" was certified Silver by the British Phonographic Industry, together with "Can We Come Together" (from the album Where the Happy People Go). Dissolution and aftermath On September 19, 2005, the group's "Disco Inferno" was inducted into the Dance Music Hall of Fame at a ceremony held in New York. The song was part-written by Ron Kersey, a producer-arranger"Hold On to the Nights" is a power ballad written and performed by American rock singer/songwriter/musician Richard Marx. This was the fourth and final single released from his self-titled debut album, and his first to reach number one on the US Billboard Hot 100 chart. The song has been re-released on numerous albums and is included on Marx's live performance DVD A Night Out with Friends (2012). Release "Hold On to the Nights" reached the Billboard Hot 100 number 1 position on July 23, 1988, preventing Def Leppard's "Pour Some Sugar on Me" from reaching the top spot that same week. 
The song was on the chart for twenty-one weeks, and left the chart at number 91. The song also reached at number three on the Billboard Adult Contemporary chart. Chart performance Charts Personnel Richard Marx – vocals, keyboards, acoustic piano Michael Landau – guitars Patrick O'Hearn – bass Tris Imboden – drums Paulinho da Costa – percussion Other performances Marx appeared as lounge singer/piano player Buddy Daquiri in the "Poison Fire Teats Universe" episode of the TV series Life in Pieces in 2017, in which he played the song on the piano while whistling. References 1987 songs 1988 singles Richard Marx songs Billboard Hot 100 number-one singles Songs written by Richard Marx Pop ballads Rock ballads EMI Records singles Songs about nights
|
||||
Think hard, but answer shortly and concisely. Only give direct answers to the questions. No additional explanations. Directly answer these questions:
|
||||
Q: Who was the man behind The Chipmunks??
|
||||
A: David Seville
|
||||
|
||||
Q: Which Lloyd Webber musical premiered in the US on 10th December 1993??
|
||||
A: Sunset Boulevard
|
||||
|
||||
Q: Who was the next British Prime Minister after Arthur Balfour??
|
||||
A: Campbell-Bannerman
|
||||
|
||||
Q: Who had a 70s No 1 hit with Kiss You All Over??
|
||||
A: Exile
|
||||
|
||||
Q: What claimed the life of singer Kathleen Ferrier??
|
||||
A: Cancer
|
||||
|
||||
Q: Who had an 80s No 1 hit with Hold On To The Nights?
|
||||
A:
|
||||
==================================================
|
||||
|
||||
==================================================
|
||||
PROMPT #20:
|
||||
==================================================
|
||||
Turner Classic Movies in November 2006 features directors Steven Spielberg, Clint Eastwood, and Martin Scorsese, who suggest that the string of classic films Ford directed during 1936 to 1941 was due in part to an intense six-month extramarital affair with Katharine Hepburn, the star of Mary of Scotland (1936), an Elizabethan costume drama. 1939–1941 Stagecoach (1939) was Ford's first western since 3 Bad Men in 1926, and it was his first with sound. Orson Welles claimed that he watched Stagecoach forty times in preparation for making Citizen Kane. It remains one of the most admired and imitated of all Hollywood movies, not least for its climactic stagecoach chase and the hair-raising horse-jumping scene, performed by the stuntman Yakima Canutt. The Dudley Nichols–Ben Hecht screenplay was based on an Ernest Haycox story that Ford had spotted in Collier's magazine and he purchased the screen rights for just $2500. Production chief Walter Wanger urged Ford to hire Gary Cooper and Marlene Dietrich for the lead roles, but eventually accepted Ford's decision to cast Claire Trevor as Dallas and a virtual unknown, his friend John Wayne, as Ringo; Wanger reportedly had little further influence over the production. In making Stagecoach, Ford faced entrenched industry prejudice about the now-hackneyed genre which he had helped to make so popular. Although low-budget western features and serials were still being churned out in large numbers by "Poverty Row" studios, the genre had fallen out of favor with the big studios during the 1930s and they were regarded as B-grade "pulp" movies at best.Stagecoach is a 1986 American made-for-television Western action drama film and remake of the classic 1939 film Stagecoach, directed by Ted Post and starring Kris Kristofferson as the Ringo Kid, the role originally played by John Wayne. Willie Nelson portrays famous gunslinger and dentist Doc Holliday, Johnny Cash portrays Marshal Curly Wilcox and Waylon Jennings plays the gambler Hatfield. The four main stars of the film (Nelson, Kristofferson, Cash and Jennings) were associated as members of the country music supergroup The Highwaymen. The supporting cast features Elizabeth Ashley, Anthony Newley, Tony Franciosa, Mary Crosby, June Carter Cash and Jessi Colter. Plot In 1880, a group of strangers boards the east-bound stagecoach from Tonto, Arizona Territory, to Lordsburg, New Mexico Territory. The travelers seem ordinary, but many have secrets from which they are running. Among them are Dallas, a prostitute, who is being driven out of town; an alcoholic dentist, Doc Holliday; pregnant Lucy Mallory, who is meeting her cavalry officer husband; and whiskey salesman Trevor Peacock. As the stage sets out, U.S. Cavalry Lieutenant Blanchard announces that Geronimo and his Apaches are on the warpath; his small troop will provide an escort to Dry Fork. Cast Willie Nelson as Doc Holliday Kris Kristofferson as Ringo / Ringo Kid / Bill Williams Johnny Cash as Marshal Curly Wilcox Waylon Jennings as Hatfield (Gambler) John Schneider as Buck (Overland Stage Driver) Elizabeth Ashley as Dallas Anthony Newley as Trevor Peacock (Old John's Whiskey Salesman) Tony Franciosa as Henry Gatewood (Tonto Banker) Merritt Butrick as Lieutenant Blanchard Mary CrosbyStagecoach is a 1939 American Western film directed by John Ford and starring Claire Trevor and John Wayne in his breakthrough role. The screenplay by Dudley Nichols is an adaptation of "The Stage to Lordsburg", a 1937 short story by Ernest Haycox. 
The film follows a group of strangers riding on a stagecoach through dangerous Apache territory. The film has long been recognized as an important work that transcends the Western genre. Philosopher Robert B. Pippin has observed that both the collection of characters and their journey "are archetypal rather than merely individual" and that the film is a "mythic representation of the American aspiration toward a form of politically meaningful equality." In 1995, the film was deemed "culturally, historically, or aesthetically significant" by the United States Library of Congress and selected for preservation in their National Film Registry. Still, Stagecoach has not avoided controversy. Like most Westerns of the era, its depiction of Native Americans as simplistic savages has been criticized. Stagecoach was the first of many Westerns that Ford shot in Monument Valley, on the Arizona–Utah border in the American Southwest. Many of the movies Ford shot there also starred John Wayne. Scenes from Stagecoach, including a sequence introducing John Wayne's character the Ringo Kid, blended shots of Monument Valley with shots filmed on the Iverson Movie Ranch in Chatsworth, California, RKO Encino Movie Ranch, and other locations. Geographic incongruities are visible throughout the film, including the closing scene where Ringo (Wayne) and Dallas (Trevor) depart Lordsburg, in southwestern New Mexico, by way of
|
||||
Think hard, but answer shortly and concisely. Only give direct answers to the questions. No additional explanations. Directly answer these questions:
|
||||
Q: Who was the man behind The Chipmunks??
|
||||
A: David Seville
|
||||
|
||||
Q: Which Lloyd Webber musical premiered in the US on 10th December 1993??
|
||||
A: Sunset Boulevard
|
||||
|
||||
Q: Who was the next British Prime Minister after Arthur Balfour??
|
||||
A: Campbell-Bannerman
|
||||
|
||||
Q: Who had a 70s No 1 hit with Kiss You All Over??
|
||||
A: Exile
|
||||
|
||||
Q: What claimed the life of singer Kathleen Ferrier??
|
||||
A: Cancer
|
||||
|
||||
Q: Who directed the classic 30s western Stagecoach?
|
||||
A:
|
||||
==================================================
|
||||
|
||||
@@ -1,114 +0,0 @@
|
||||
import argparse
|
||||
import re
|
||||
import sys
|
||||
import time
|
||||
from pathlib import Path
|
||||
from statistics import mean
|
||||
|
||||
from leann.chat import get_llm
|
||||
|
||||
|
||||
def parse_prompts_from_file(file_path: str) -> list[str]:
|
||||
"""
|
||||
Parse a prompt dump file into individual prompt strings.
|
||||
|
||||
Splits by lines that look like: "PROMPT #<n>:".
|
||||
Keeps the content from each marker up to the next marker (or EOF).
|
||||
"""
|
||||
with open(file_path, "r", encoding="utf-8") as f:
|
||||
text = f.read()
|
||||
|
||||
matches = list(re.finditer(r"^PROMPT\s+#\d+:\s*$", text, flags=re.MULTILINE))
|
||||
if not matches:
|
||||
# Fallback: try a more permissive pattern
|
||||
matches = list(
|
||||
re.finditer(r"^=+\nPROMPT\s+#\d+:\n=+\s*$", text, flags=re.MULTILINE)
|
||||
)
|
||||
|
||||
prompts: list[str] = []
|
||||
if not matches:
|
||||
# No explicit markers; treat the whole file as a single prompt
|
||||
return [text]
|
||||
|
||||
for i, m in enumerate(matches):
|
||||
start = m.end()
|
||||
end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
|
||||
block = text[start:end].strip()
|
||||
# Reattach the marker line content above the block for full context
|
||||
header_line_start = text.rfind("\n", 0, m.start()) + 1
|
||||
header = text[header_line_start : m.end()].strip()
|
||||
prompts.append(f"{header}\n{block}".strip())
|
||||
|
||||
return prompts
|
||||
|
||||
|
||||
def main():
|
||||
parser = argparse.ArgumentParser(
|
||||
description=(
|
||||
"Iterate prompts in a dump file, time generations, print outputs, and report last-10 average time."
|
||||
)
|
||||
)
|
||||
parser.add_argument(
|
||||
"--path",
|
||||
default="benchmarks/data/prompts_g5/prompt_dump_nq_hnsw.txt",
|
||||
help="Path to the prompt dump file",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--type",
|
||||
default="ollama",
|
||||
choices=["hf", "openai", "ollama", "gemini", "simulated"],
|
||||
help="LLM backend type",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--model",
|
||||
default="Qwen/Qwen3-4B",
|
||||
help="Model identifier (depends on backend)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--max_tokens",
|
||||
type=int,
|
||||
default=512,
|
||||
help="Max new tokens to generate per prompt",
|
||||
)
|
||||
args = parser.parse_args()
|
||||
|
||||
llm_config = {"type": args.type, "model": args.model}
|
||||
chat = get_llm(llm_config)
|
||||
|
||||
prompts = parse_prompts_from_file(args.path)
|
||||
print(f"Found {len(prompts)} prompts in {args.path}")
|
||||
|
||||
times: list[float] = []
|
||||
for idx, prompt in enumerate(prompts, start=1):
|
||||
print("\n" + "=" * 80)
|
||||
print(f"PROMPT {idx}/{len(prompts)}")
|
||||
print("-" * 80)
|
||||
start = time.perf_counter()
|
||||
try:
|
||||
output = chat.ask(prompt, max_tokens=args.max_tokens)
|
||||
except Exception as e:
|
||||
output = f"<error: {e}>"
|
||||
elapsed = time.perf_counter() - start
|
||||
times.append(elapsed)
|
||||
print(f"Time: {elapsed:.3f}s")
|
||||
print("-" * 80)
|
||||
print(output)
|
||||
print("=" * 80)
|
||||
|
||||
if times:
|
||||
window = times[-10:] if len(times) >= 10 else times
|
||||
avg_last_10 = mean(window)
|
||||
print(
|
||||
f"\nAverage time over last {len(window)} prompts: {avg_last_10:.3f}s"
|
||||
)
|
||||
else:
|
||||
print("No prompts processed.")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
main()
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
@@ -1,49 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
set -euo pipefail
|
||||
|
||||
# Common parameters
|
||||
INDEX_PATH="benchmarks/data/indices/rpj_wiki/rpj_wiki"
|
||||
NUM_QUERIES=20
|
||||
BATCH_SIZE=128
|
||||
LLM_MODEL="qwen3:4b"
|
||||
TOP_K=3
|
||||
|
||||
# Log directory (timestamped)
|
||||
LOG_DIR="logs/eval_runs_$(date +%Y%m%d_%H%M%S)"
|
||||
mkdir -p "$LOG_DIR"
|
||||
|
||||
# dataset -> list of ef values
|
||||
declare -A EF_MAP=(
|
||||
[nq_open.jsonl]="32 62 190"
|
||||
[trivia_qa.jsonl]="77 150 249"
|
||||
[gpqa.jsonl]="41 72 124"
|
||||
[hotpot_qa.jsonl]="137 299 1199"
|
||||
)
|
||||
|
||||
# Iterate datasets in the specified order
|
||||
ORDERED_DATASETS=(nq_open.jsonl trivia_qa.jsonl gpqa.jsonl hotpot_qa.jsonl)
|
||||
|
||||
for dataset in "${ORDERED_DATASETS[@]}"; do
|
||||
for ef in ${EF_MAP[$dataset]}; do
|
||||
log_file="${LOG_DIR}/${dataset%.jsonl}_ef${ef}.log"
|
||||
|
||||
# Display and record the command to be executed
|
||||
cmd=(python benchmarks/run_evaluation.py "$INDEX_PATH" \
|
||||
--num-queries "$NUM_QUERIES" \
|
||||
--ef "$ef" \
|
||||
--batch-size "$BATCH_SIZE" \
|
||||
--llm-model "$LLM_MODEL" \
|
||||
--top-k "$TOP_K" \
|
||||
--queries-file "$dataset")
|
||||
|
||||
echo "=== Running dataset=${dataset} ef=${ef} ===" | tee -a "$log_file"
|
||||
printf 'CMD: '; printf '%q ' "${cmd[@]}" | tee -a "$log_file"; echo | tee -a "$log_file"
|
||||
|
||||
# Output to both the console and the log file
|
||||
"${cmd[@]}" 2>&1 | tee -a "$log_file"
|
||||
|
||||
echo | tee -a "$log_file"
|
||||
done
|
||||
done
|
||||
|
||||
echo "All runs completed. Logs in: $LOG_DIR"
|
||||
@@ -12,7 +12,7 @@ import time
|
||||
from pathlib import Path
|
||||
|
||||
import numpy as np
|
||||
from leann.api import LeannBuilder, LeannChat, LeannSearcher
|
||||
from leann.api import LeannBuilder, LeannSearcher
|
||||
|
||||
|
||||
def download_data_if_needed(data_root: Path, download_embeddings: bool = False):
|
||||
@@ -197,34 +197,6 @@ def main():
|
||||
parser.add_argument(
|
||||
"--ef-search", type=int, default=120, help="The 'efSearch' parameter for HNSW."
|
||||
)
|
||||
parser.add_argument(
|
||||
"--batch-size",
|
||||
type=int,
|
||||
default=0,
|
||||
help="Batch size for HNSW batched search (0 disables batching)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--queries-file",
|
||||
type=str,
|
||||
default="nq_open.jsonl",
|
||||
help=(
|
||||
"Queries file to use. Provide a filename under benchmarks/data/queries "
|
||||
"or an absolute path to a .jsonl file (default: nq_open.jsonl)."
|
||||
),
|
||||
)
|
||||
parser.add_argument(
|
||||
"--llm-type",
|
||||
type=str,
|
||||
choices=["ollama", "hf", "openai", "gemini", "simulated"],
|
||||
default="ollama",
|
||||
help="LLM backend type to optionally query during evaluation (default: ollama)",
|
||||
)
|
||||
parser.add_argument(
|
||||
"--llm-model",
|
||||
type=str,
|
||||
default="qwen3:1.7b",
|
||||
help="LLM model identifier for the chosen backend (default: qwen3:1.7b)",
|
||||
)
|
||||
args = parser.parse_args()
|
||||
|
||||
# --- Path Configuration ---
|
||||
@@ -323,52 +295,8 @@ def main():
|
||||
dataset_type = Path(args.index_path).name
|
||||
print(f"WARNING: Could not detect dataset type from path, inferred '{dataset_type}'.")
|
||||
|
||||
# Resolve queries file (supports absolute path or name under data/queries)
|
||||
queries_file_candidate = Path(args.queries_file)
|
||||
if queries_file_candidate.is_absolute():
|
||||
queries_file = queries_file_candidate
|
||||
else:
|
||||
queries_file = data_root / "queries" / args.queries_file
|
||||
|
||||
if not queries_file.exists():
|
||||
print(f"Error: Queries file not found: {queries_file}")
|
||||
print("Tip: Use --queries-file with a filename under benchmarks/data/queries or an absolute path.")
|
||||
sys.exit(1)
|
||||
|
||||
# Infer ground-truth file from the queries filename
|
||||
qname = queries_file.name.lower()
|
||||
if "hotpot" in qname:
|
||||
task_key = "hotpot"
|
||||
elif "trivia" in qname:
|
||||
task_key = "trivia"
|
||||
elif "gpqa" in qname:
|
||||
task_key = "gpqa"
|
||||
elif "nq" in qname:
|
||||
task_key = "nq"
|
||||
else:
|
||||
print(
|
||||
"Error: Could not infer task from queries filename. Supported names include 'nq', 'hotpot', 'trivia', 'gpqa'."
|
||||
)
|
||||
print(f"Filename was: {queries_file.name}")
|
||||
sys.exit(1)
|
||||
|
||||
golden_results_file = data_root / "ground_truth" / dataset_type / f"flat_results_{task_key}_k3.json"
|
||||
if not golden_results_file.exists():
|
||||
gt_dir = data_root / "ground_truth" / dataset_type
|
||||
try:
|
||||
available = sorted(p.name for p in gt_dir.glob("flat_results_*_k3.json"))
|
||||
except Exception:
|
||||
available = []
|
||||
print(
|
||||
f"Error: Ground truth file not found for task '{task_key}' under dataset '{dataset_type}': {golden_results_file}"
|
||||
)
|
||||
if available:
|
||||
print("Available ground truth files:")
|
||||
for name in available:
|
||||
print(f" - {name}")
|
||||
else:
|
||||
print(f"No ground truth files found in {gt_dir}")
|
||||
sys.exit(1)
|
||||
queries_file = data_root / "queries" / "nq_open.jsonl"
|
||||
golden_results_file = data_root / "ground_truth" / dataset_type / "flat_results_nq_k3.json"
|
||||
|
||||
print(f"INFO: Detected dataset type: {dataset_type}")
|
||||
print(f"INFO: Using queries file: {queries_file}")
|
||||
@@ -390,24 +318,9 @@ def main():
|
||||
|
||||
for i in range(num_eval_queries):
|
||||
start_time = time.time()
|
||||
new_results = searcher.search(
|
||||
queries[i],
|
||||
top_k=args.top_k,
|
||||
complexity=args.ef_search,
|
||||
batch_size=args.batch_size,
|
||||
)
|
||||
new_results = searcher.search(queries[i], top_k=args.top_k, ef=args.ef_search)
|
||||
search_times.append(time.time() - start_time)
|
||||
|
||||
# Optional: also call the LLM with configurable backend/model (does not affect recall)
|
||||
# llm_config = {"type": args.llm_type, "model": args.llm_model}
|
||||
# chat = LeannChat(args.index_path, llm_config=llm_config, searcher=searcher)
|
||||
# answer = chat.ask(
|
||||
# queries[i],
|
||||
# top_k=args.top_k,
|
||||
# complexity=args.ef_search,
|
||||
# batch_size=args.batch_size,
|
||||
# )
|
||||
# print(f"Answer: {answer}")
|
||||
# Correct Recall Calculation: Based on TEXT content
|
||||
new_texts = {result.text for result in new_results}
|
||||
|
||||
@@ -431,16 +344,10 @@ def main():
|
||||
avg_recall = np.mean(recall_scores) if recall_scores else 0
|
||||
avg_time = np.mean(search_times) if search_times else 0
|
||||
|
||||
print(f"search time: {search_times}")
|
||||
|
||||
print("\n🎉 --- Evaluation Complete ---")
|
||||
print(f"Avg. Recall@{args.top_k} (efSearch={args.ef_search}): {avg_recall:.4f}")
|
||||
print(f"Avg. Search Time: {avg_time:.4f}s")
|
||||
|
||||
# avg last 10 search times
|
||||
avg_last_10_search_times = np.mean(search_times[-10:])
|
||||
print(f"Avg. Last 10 Search Times: {avg_last_10_search_times:.4f}s")
|
||||
|
||||
except Exception as e:
|
||||
print(f"\n❌ An error occurred during evaluation: {e}")
|
||||
import traceback
|
||||
|
||||
@@ -1,55 +0,0 @@
|
||||
#!/usr/bin/env bash
|
||||
set -euo pipefail
|
||||
|
||||
# Absolute paths (adjust if needed)
|
||||
PROMPTS_DIR="/home/tony/yichuan/leann/benchmarks/data/prompts_g5"
|
||||
SCRIPT_PATH="/home/tony/yichuan/leann/benchmarks/generation_speed_bench.py"
|
||||
|
||||
# Common args
|
||||
MAX_TOKENS=2048
|
||||
OLLAMA_MODEL="qwen3:4b"
|
||||
HF_MODEL="Qwen/Qwen3-4B"
|
||||
|
||||
# Logs
|
||||
LOG_DIR="/home/tony/yichuan/leann/logs/speed_bench_$(date +%Y%m%d_%H%M%S)"
|
||||
mkdir -p "$LOG_DIR"
|
||||
|
||||
echo "Scanning: $PROMPTS_DIR"
|
||||
|
||||
# Iterate all .txt files under PROMPTS_DIR
|
||||
while IFS= read -r -d '' file; do
|
||||
base_name=$(basename "$file")
|
||||
stem_name="${base_name%.*}"
|
||||
|
||||
# 1) Ollama
|
||||
log_ollama="${LOG_DIR}/${stem_name}_ollama.log"
|
||||
cmd_ollama=(python "$SCRIPT_PATH" \
|
||||
--path "$file" \
|
||||
--type ollama \
|
||||
--model "$OLLAMA_MODEL" \
|
||||
--max_tokens "$MAX_TOKENS")
|
||||
|
||||
echo "=== Running (ollama) file=${file} model=${OLLAMA_MODEL} ===" | tee -a "$log_ollama"
|
||||
printf 'CMD: '; printf '%q ' "${cmd_ollama[@]}" | tee -a "$log_ollama"; echo | tee -a "$log_ollama"
|
||||
"${cmd_ollama[@]}" 2>&1 | tee -a "$log_ollama"
|
||||
echo | tee -a "$log_ollama"
|
||||
|
||||
# 2) HF
|
||||
log_hf="${LOG_DIR}/${stem_name}_hf.log"
|
||||
cmd_hf=(python "$SCRIPT_PATH" \
|
||||
--path "$file" \
|
||||
--type hf \
|
||||
--model "$HF_MODEL" \
|
||||
--max_tokens "$MAX_TOKENS")
|
||||
|
||||
echo "=== Running (hf) file=${file} model=${HF_MODEL} ===" | tee -a "$log_hf"
|
||||
printf 'CMD: '; printf '%q ' "${cmd_hf[@]}" | tee -a "$log_hf"; echo | tee -a "$log_hf"
|
||||
"${cmd_hf[@]}" 2>&1 | tee -a "$log_hf"
|
||||
echo | tee -a "$log_hf"
|
||||
|
||||
done < <(find "$PROMPTS_DIR" -type f -name '*.txt' -print0)
|
||||
|
||||
|
||||
echo "All runs completed. Logs in: $LOG_DIR"
|
||||
|
||||
|
||||
@@ -20,7 +20,7 @@ except ImportError:
|
||||
|
||||
@dataclass
|
||||
class BenchmarkConfig:
|
||||
model_path: str = "facebook/contriever-msmarco"
|
||||
model_path: str = "facebook/contriever"
|
||||
batch_sizes: list[int] = None
|
||||
seq_length: int = 256
|
||||
num_runs: int = 5
|
||||
@@ -34,7 +34,7 @@ class BenchmarkConfig:
|
||||
|
||||
def __post_init__(self):
|
||||
if self.batch_sizes is None:
|
||||
self.batch_sizes = [1, 2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]
|
||||
self.batch_sizes = [1, 2, 4, 8, 16, 32, 64]
|
||||
|
||||
|
||||
class MLXBenchmark:
|
||||
@@ -179,14 +179,11 @@ class Benchmark:
|
||||
|
||||
def _run_inference(self, input_ids: torch.Tensor) -> float:
|
||||
attention_mask = torch.ones_like(input_ids)
|
||||
# print shape of input_ids and attention_mask
|
||||
print(f"input_ids shape: {input_ids.shape}")
|
||||
print(f"attention_mask shape: {attention_mask.shape}")
|
||||
|
||||
start_time = time.time()
|
||||
with torch.no_grad():
|
||||
self.model(input_ids=input_ids, attention_mask=attention_mask)
|
||||
if torch.cuda.is_available():
|
||||
torch.cuda.synchronize()
|
||||
# mps sync
|
||||
if torch.backends.mps.is_available():
|
||||
torch.mps.synchronize()
|
||||
end_time = time.time()
|
||||
|
||||
@@ -1,128 +0,0 @@
|
||||
# AST-Aware Code Chunking Guide
|
||||
|
||||
## Overview
|
||||
|
||||
This guide covers best practices for using AST-aware code chunking in LEANN. AST chunking provides better semantic understanding of code structure compared to traditional text-based chunking.
|
||||
|
||||
## Quick Start
|
||||
|
||||
### Basic Usage
|
||||
|
||||
```bash
|
||||
# Enable AST chunking for mixed content (code + docs)
|
||||
python -m apps.document_rag --enable-code-chunking --data-dir ./my_project
|
||||
|
||||
# Specialized code repository indexing
|
||||
python -m apps.code_rag --repo-dir ./my_codebase
|
||||
|
||||
# Global CLI with AST support
|
||||
leann build my-code-index --docs ./src --use-ast-chunking
|
||||
```
|
||||
|
||||
### Installation
|
||||
|
||||
```bash
|
||||
# Install LEANN with AST chunking support
|
||||
uv pip install -e "."
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
### When to Use AST Chunking
|
||||
|
||||
✅ **Recommended for:**
|
||||
- Code repositories with multiple languages
|
||||
- Mixed documentation and code content
|
||||
- Complex codebases with deep function/class hierarchies
|
||||
- When working with Claude Code for code assistance
|
||||
|
||||
❌ **Not recommended for:**
|
||||
- Pure text documents
|
||||
- Very large files (>1MB)
|
||||
- Languages not supported by tree-sitter
|
||||
|
||||
### Optimal Configuration
|
||||
|
||||
```bash
|
||||
# Recommended settings for most codebases
|
||||
python -m apps.code_rag \
|
||||
--repo-dir ./src \
|
||||
--ast-chunk-size 768 \
|
||||
--ast-chunk-overlap 96 \
|
||||
--exclude-dirs .git __pycache__ node_modules build dist
|
||||
```
|
||||
|
||||
### Supported Languages
|
||||
|
||||
| Extension | Language | Status |
|
||||
|-----------|----------|--------|
|
||||
| `.py` | Python | ✅ Full support |
|
||||
| `.java` | Java | ✅ Full support |
|
||||
| `.cs` | C# | ✅ Full support |
|
||||
| `.ts`, `.tsx` | TypeScript | ✅ Full support |
|
||||
| `.js`, `.jsx` | JavaScript | ✅ Via TypeScript parser |
|
||||
|
||||
## Integration Examples
|
||||
|
||||
### Document RAG with Code Support
|
||||
|
||||
```bash
|
||||
# Enable code chunking in document RAG
|
||||
python -m apps.document_rag \
|
||||
--enable-code-chunking \
|
||||
--data-dir ./project \
|
||||
--query "How does authentication work in the codebase?"
|
||||
```
|
||||
|
||||
### Claude Code Integration
|
||||
|
||||
When used with the Claude Code MCP server, AST chunking provides better context for:
|
||||
- Code completion and suggestions
|
||||
- Bug analysis and debugging
|
||||
- Architecture understanding
|
||||
- Refactoring assistance
|
||||
|
||||
## Troubleshooting
|
||||
|
||||
### Common Issues
|
||||
|
||||
1. **Fallback to Traditional Chunking**
|
||||
- Normal behavior for unsupported languages
|
||||
- Check logs for specific language support
|
||||
|
||||
2. **Performance with Large Files**
|
||||
- Adjust `--max-file-size` parameter
|
||||
- Use `--exclude-dirs` to skip unnecessary directories
|
||||
|
||||
3. **Quality Issues**
|
||||
- Try different `--ast-chunk-size` values (512, 768, 1024)
|
||||
- Adjust overlap for better context preservation
|
||||
|
||||
### Debug Mode
|
||||
|
||||
```bash
|
||||
export LEANN_LOG_LEVEL=DEBUG
|
||||
python -m apps.code_rag --repo-dir ./my_code
|
||||
```
|
||||
|
||||
## Migration from Traditional Chunking
|
||||
|
||||
Existing workflows continue to work without changes. To enable AST chunking:
|
||||
|
||||
```bash
|
||||
# Before
|
||||
python -m apps.document_rag --chunk-size 256
|
||||
|
||||
# After (maintains traditional chunking for non-code files)
|
||||
python -m apps.document_rag --enable-code-chunking --chunk-size 256 --ast-chunk-size 768
|
||||
```
|
||||
|
||||
## References
|
||||
|
||||
- [astchunk GitHub Repository](https://github.com/yilinjz/astchunk)
|
||||
- [LEANN MCP Integration](../packages/leann-mcp/README.md)
|
||||
- [Research Paper](https://arxiv.org/html/2506.15655v1)
|
||||
|
||||
---
|
||||
|
||||
**Note**: AST chunking maintains full backward compatibility while enhancing code understanding capabilities.
|
||||
@@ -3,7 +3,6 @@
|
||||
## 🔥 Core Features
|
||||
|
||||
- **🔄 Real-time Embeddings** - Eliminate heavy embedding storage by computing embeddings on the fly, using optimized ZMQ servers, an overlapping-and-batching search paradigm, and a highly optimized embedding engine
|
||||
- **🧠 AST-Aware Code Chunking** - Intelligent code chunking that preserves semantic boundaries (functions, classes, methods) for Python, Java, C#, and TypeScript files
|
||||
- **📈 Scalable Architecture** - Handles millions of documents on consumer hardware; the larger your dataset, the more LEANN can save
|
||||
- **🎯 Graph Pruning** - Advanced techniques to minimize the storage overhead of vector search to a limited footprint
|
||||
- **🏗️ Pluggable Backends** - HNSW/FAISS (default), with optional DiskANN for large-scale deployments
|
||||
|
||||
@@ -1,300 +0,0 @@
|
||||
# LEANN Metadata Filtering Usage Guide
|
||||
|
||||
## Overview
|
||||
|
||||
LEANN provides metadata filtering that lets you restrict search results by arbitrary metadata fields set during chunking. This enables use cases such as spoiler-free book search, document filtering by date or type, and code search by file type, among others.
|
||||
|
||||
## Basic Usage
|
||||
|
||||
### Adding Metadata to Your Documents
|
||||
|
||||
When building your index, add metadata to each text chunk:
|
||||
|
||||
```python
|
||||
from leann.api import LeannBuilder
|
||||
|
||||
builder = LeannBuilder("hnsw")
|
||||
|
||||
# Add text with metadata
|
||||
builder.add_text(
|
||||
text="Chapter 1: Alice falls down the rabbit hole",
|
||||
metadata={
|
||||
"chapter": 1,
|
||||
"character": "Alice",
|
||||
"themes": ["adventure", "curiosity"],
|
||||
"word_count": 150
|
||||
}
|
||||
)
|
||||
|
||||
builder.build_index("alice_in_wonderland_index")
|
||||
```
|
||||
|
||||
### Searching with Metadata Filters
|
||||
|
||||
Use the `metadata_filters` parameter in search calls:
|
||||
|
||||
```python
|
||||
from leann.api import LeannSearcher
|
||||
|
||||
searcher = LeannSearcher("alice_in_wonderland_index")
|
||||
|
||||
# Search with filters
|
||||
results = searcher.search(
|
||||
query="What happens to Alice?",
|
||||
top_k=10,
|
||||
metadata_filters={
|
||||
"chapter": {"<=": 5}, # Only chapters 1-5
|
||||
"spoiler_level": {"!=": "high"} # No high spoilers
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
## Filter Syntax
|
||||
|
||||
### Basic Structure
|
||||
|
||||
```python
|
||||
metadata_filters = {
|
||||
"field_name": {"operator": value},
|
||||
"another_field": {"operator": value}
|
||||
}
|
||||
```
|
||||
|
||||
### Supported Operators
|
||||
|
||||
#### Comparison Operators
|
||||
- `"=="`: Equal to
|
||||
- `"!="`: Not equal to
|
||||
- `"<"`: Less than
|
||||
- `"<="`: Less than or equal
|
||||
- `">"`: Greater than
|
||||
- `">="`: Greater than or equal
|
||||
|
||||
```python
|
||||
# Examples
|
||||
{"chapter": {"==": 1}} # Exactly chapter 1
|
||||
{"page": {">": 100}} # Pages after 100
|
||||
{"rating": {">=": 4.0}} # Rating 4.0 or higher
|
||||
{"word_count": {"<": 500}} # Short passages
|
||||
```
|
||||
|
||||
#### Membership Operators
|
||||
- `"in"`: Value is in list
|
||||
- `"not_in"`: Value is not in list
|
||||
|
||||
```python
|
||||
# Examples
|
||||
{"character": {"in": ["Alice", "Bob"]}} # Alice OR Bob
|
||||
{"genre": {"not_in": ["horror", "thriller"]}} # Exclude genres
|
||||
{"tags": {"in": ["fiction", "adventure"]}} # Any of these tags
|
||||
```
|
||||
|
||||
#### String Operators
|
||||
- `"contains"`: String contains substring
|
||||
- `"starts_with"`: String starts with prefix
|
||||
- `"ends_with"`: String ends with suffix
|
||||
|
||||
```python
|
||||
# Examples
|
||||
{"title": {"contains": "alice"}} # Title contains "alice"
|
||||
{"filename": {"ends_with": ".py"}} # Python files
|
||||
{"author": {"starts_with": "Dr."}} # Authors with "Dr." prefix
|
||||
```
|
||||
|
||||
#### Boolean Operators
|
||||
- `"is_true"`: Field is truthy
|
||||
- `"is_false"`: Field is falsy
|
||||
|
||||
```python
|
||||
# Examples
|
||||
{"is_published": {"is_true": True}} # Published content
|
||||
{"is_draft": {"is_false": False}} # Not drafts
|
||||
```
|
||||
|
||||
### Multiple Operators on Same Field
|
||||
|
||||
You can apply multiple operators to the same field (AND logic):
|
||||
|
||||
```python
|
||||
metadata_filters = {
|
||||
"word_count": {
|
||||
">=": 100, # At least 100 words
|
||||
"<=": 500 # At most 500 words
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Compound Filters
|
||||
|
||||
Multiple fields are combined with AND logic:
|
||||
|
||||
```python
|
||||
metadata_filters = {
|
||||
"chapter": {"<=": 10}, # Up to chapter 10
|
||||
"character": {"==": "Alice"}, # About Alice
|
||||
"spoiler_level": {"!=": "high"} # No major spoilers
|
||||
}
|
||||
```
|
||||
|
||||
## Use Case Examples
|
||||
|
||||
### 1. Spoiler-Free Book Search
|
||||
|
||||
```python
|
||||
# Reader has only read up to chapter 5
|
||||
def search_spoiler_free(query, max_chapter):
|
||||
return searcher.search(
|
||||
query=query,
|
||||
metadata_filters={
|
||||
"chapter": {"<=": max_chapter},
|
||||
"spoiler_level": {"in": ["none", "low"]}
|
||||
}
|
||||
)
|
||||
|
||||
results = search_spoiler_free("What happens to Alice?", max_chapter=5)
|
||||
```
|
||||
|
||||
### 2. Document Management by Date
|
||||
|
||||
```python
|
||||
# Find recent documents
|
||||
recent_docs = searcher.search(
|
||||
query="project updates",
|
||||
metadata_filters={
|
||||
"date": {">=": "2024-01-01"},
|
||||
"document_type": {"==": "report"}
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
### 3. Code Search by File Type
|
||||
|
||||
```python
|
||||
# Search only Python files
|
||||
python_code = searcher.search(
|
||||
query="authentication function",
|
||||
metadata_filters={
|
||||
"file_extension": {"==": ".py"},
|
||||
"lines_of_code": {"<": 100}
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
### 4. Content Filtering by Audience
|
||||
|
||||
```python
|
||||
# Age-appropriate content
|
||||
family_content = searcher.search(
|
||||
query="adventure stories",
|
||||
metadata_filters={
|
||||
"age_rating": {"in": ["G", "PG"]},
|
||||
"content_warnings": {"not_in": ["violence", "adult_themes"]}
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
### 5. Multi-Book Series Management
|
||||
|
||||
```python
|
||||
# Search across first 3 books only
|
||||
early_series = searcher.search(
|
||||
query="character development",
|
||||
metadata_filters={
|
||||
"series": {"==": "Harry Potter"},
|
||||
"book_number": {"<=": 3}
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
## Running the Example
|
||||
|
||||
You can see metadata filtering in action with our spoiler-free book RAG example:
|
||||
|
||||
```bash
|
||||
# Don't forget to set up the environment
|
||||
uv venv
|
||||
source .venv/bin/activate
|
||||
|
||||
# Set your OpenAI API key (required for embeddings; you can edit the example locally to use Ollama instead)
|
||||
export OPENAI_API_KEY="your-api-key-here"
|
||||
|
||||
# Run the spoiler-free book RAG example
|
||||
uv run examples/spoiler_free_book_rag.py
|
||||
```
|
||||
|
||||
This example demonstrates:
|
||||
- Building an index with metadata (chapter numbers, characters, themes, locations)
|
||||
- Searching with filters to avoid spoilers (e.g., only show results up to chapter 5)
|
||||
- Different scenarios for readers at various points in the book
|
||||
|
||||
The example uses Alice's Adventures in Wonderland as sample data and shows how you can search for information without revealing plot points from later chapters.
|
||||
|
||||
## Advanced Patterns
|
||||
|
||||
### Custom Chunking with Metadata

The helpers referenced below (`parse_chapters`, `extract_characters`, `classify_themes`, `assess_spoiler_level`, `split_paragraphs`, `calculate_reading_level`) are placeholders for your own preprocessing code.
|
||||
|
||||
```python
|
||||
def chunk_book_with_metadata(book_text, book_info):
|
||||
chunks = []
|
||||
|
||||
for chapter_num, chapter_text in parse_chapters(book_text):
|
||||
# Extract entities, themes, etc.
|
||||
characters = extract_characters(chapter_text)
|
||||
themes = classify_themes(chapter_text)
|
||||
spoiler_level = assess_spoiler_level(chapter_text, chapter_num)
|
||||
|
||||
# Create chunks with rich metadata
|
||||
for paragraph in split_paragraphs(chapter_text):
|
||||
chunks.append({
|
||||
"text": paragraph,
|
||||
"metadata": {
|
||||
"book_title": book_info["title"],
|
||||
"chapter": chapter_num,
|
||||
"characters": characters,
|
||||
"themes": themes,
|
||||
"spoiler_level": spoiler_level,
|
||||
"word_count": len(paragraph.split()),
|
||||
"reading_level": calculate_reading_level(paragraph)
|
||||
}
|
||||
})
|
||||
|
||||
return chunks
|
||||
```
|
||||
|
||||
## Performance Considerations
|
||||
|
||||
### Efficient Filtering Strategies
|
||||
|
||||
1. **Post-search filtering**: Filters are applied after the vector search, which should be efficient for typical result sets (10-100 results); see the sketch after this list.
|
||||
|
||||
2. **Metadata design**: Keep metadata fields simple and avoid deeply nested structures.
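To make the post-search approach concrete, here is a minimal, hypothetical sketch (not LEANN's internal implementation) that evaluates the filter operators described above against each result's metadata and keeps only the matching candidates:

```python
# Hypothetical sketch of post-search metadata filtering; LEANN's real
# implementation may differ. Each filter field maps to {operator: expected}.
def matches(metadata: dict, filters: dict) -> bool:
    ops = {
        "==": lambda a, b: a == b,
        "!=": lambda a, b: a != b,
        "<": lambda a, b: a < b,
        "<=": lambda a, b: a <= b,
        ">": lambda a, b: a > b,
        ">=": lambda a, b: a >= b,
        "in": lambda a, b: a in b,
        "not_in": lambda a, b: a not in b,
        "contains": lambda a, b: b in a,
        "starts_with": lambda a, b: str(a).startswith(b),
        "ends_with": lambda a, b: str(a).endswith(b),
        "is_true": lambda a, b: bool(a),
        "is_false": lambda a, b: not bool(a),
    }
    for field, conditions in filters.items():
        value = metadata.get(field)
        for op, expected in conditions.items():
            if value is None or not ops[op](value, expected):
                return False
    return True


# Tiny self-contained example: chapter 3 passes a "chapter <= 5" filter.
candidate = {"chapter": 3, "spoiler_level": "low"}
print(matches(candidate, {"chapter": {"<=": 5}, "spoiler_level": {"!=": "high"}}))  # True
```

Because filtering happens after retrieval, you may want to over-fetch (for example, a larger `top_k`) when you expect the filters to discard many candidates.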
|
||||
|
||||
### Best Practices
|
||||
|
||||
1. **Consistent metadata schema**: Use consistent field names and value types across your documents.
|
||||
|
||||
2. **Reasonable metadata size**: Keep metadata reasonably sized to avoid storage overhead.
|
||||
|
||||
3. **Type consistency**: Use consistent data types for the same fields (e.g., always integers for chapter numbers).
|
||||
|
||||
4. **Index multiple granularities**: Consider chunking at different levels (paragraph, section, chapter) with appropriate metadata, as sketched below.
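A rough sketch of the multi-granularity idea, using only the `LeannBuilder` API shown earlier; the `granularity` field name and the naive paragraph split are assumptions for illustration, not LEANN conventions:

```python
from leann.api import LeannBuilder

# Sample chapters; in practice these would come from your own parsing code.
chapters = [
    (1, "Alice was beginning to get very tired.\n\nSuddenly a White Rabbit ran by."),
    (2, "Alice found herself falling down a very deep well."),
]

builder = LeannBuilder("hnsw")
for chapter_num, chapter_text in chapters:
    # Coarse granularity: one chunk per chapter.
    builder.add_text(
        text=chapter_text,
        metadata={"chapter": chapter_num, "granularity": "chapter"},
    )
    # Fine granularity: one chunk per paragraph, tagged with the same chapter.
    for paragraph in chapter_text.split("\n\n"):
        if paragraph.strip():
            builder.add_text(
                text=paragraph,
                metadata={"chapter": chapter_num, "granularity": "paragraph"},
            )

builder.build_index("multi_granularity_index")
```

At query time you can then restrict a search to one level, for example `metadata_filters={"granularity": {"==": "paragraph"}}`.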
|
||||
|
||||
### Adding Metadata to Existing Indices
|
||||
|
||||
To add metadata filtering to existing indices, you'll need to rebuild them with metadata:
|
||||
|
||||
```python
|
||||
# Read existing passages and add metadata
|
||||
def add_metadata_to_existing_chunks(chunks):
|
||||
for chunk in chunks:
|
||||
# Extract or assign metadata based on content
|
||||
chunk["metadata"] = extract_metadata(chunk["text"])
|
||||
return chunks
|
||||
|
||||
# Rebuild index with metadata
|
||||
enhanced_chunks = add_metadata_to_existing_chunks(existing_chunks)
|
||||
builder = LeannBuilder("hnsw")
|
||||
for chunk in enhanced_chunks:
|
||||
builder.add_text(chunk["text"], chunk["metadata"])
|
||||
builder.build_index("enhanced_index")
|
||||
```
|
||||
@@ -1,250 +0,0 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Spoiler-Free Book RAG Example using LEANN Metadata Filtering
|
||||
|
||||
This example demonstrates how to use LEANN's metadata filtering to create
|
||||
a spoiler-free book RAG system where users can search for information
|
||||
up to a specific chapter they've read.
|
||||
|
||||
Usage:
|
||||
python spoiler_free_book_rag.py
|
||||
"""
|
||||
|
||||
import os
|
||||
import sys
|
||||
from typing import Any, Optional
|
||||
|
||||
# Add LEANN to path (adjust path as needed)
|
||||
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../packages/leann-core/src"))
|
||||
|
||||
from leann.api import LeannBuilder, LeannSearcher
|
||||
|
||||
|
||||
def chunk_book_with_metadata(book_title: str = "Sample Book") -> list[dict[str, Any]]:
|
||||
"""
|
||||
Create sample book chunks with metadata for demonstration.
|
||||
|
||||
In a real implementation, this would parse actual book files (epub, txt, etc.)
|
||||
and extract chapter boundaries, character mentions, etc.
|
||||
|
||||
Args:
|
||||
book_title: Title of the book
|
||||
|
||||
Returns:
|
||||
List of chunk dictionaries with text and metadata
|
||||
"""
|
||||
# Sample book chunks with metadata
|
||||
# In practice, you'd use proper text processing libraries
|
||||
|
||||
sample_chunks = [
|
||||
{
|
||||
"text": "Alice was beginning to get very tired of sitting by her sister on the bank, and of having nothing to do.",
|
||||
"metadata": {
|
||||
"book": book_title,
|
||||
"chapter": 1,
|
||||
"page": 1,
|
||||
"characters": ["Alice", "Sister"],
|
||||
"themes": ["boredom", "curiosity"],
|
||||
"location": "riverbank",
|
||||
},
|
||||
},
|
||||
{
|
||||
"text": "So she was considering in her own mind (as well as she could, for the hot day made her feel very sleepy and stupid), whether the pleasure of making a daisy-chain would be worth the trouble of getting up and picking the daisies, when suddenly a White Rabbit with pink eyes ran close by her.",
|
||||
"metadata": {
|
||||
"book": book_title,
|
||||
"chapter": 1,
|
||||
"page": 2,
|
||||
"characters": ["Alice", "White Rabbit"],
|
||||
"themes": ["decision", "surprise", "magic"],
|
||||
"location": "riverbank",
|
||||
},
|
||||
},
|
||||
{
|
||||
"text": "Alice found herself falling down a very deep well. Either the well was very deep, or she fell very slowly, for she had plenty of time as she fell to look about her and to wonder what was going to happen next.",
|
||||
"metadata": {
|
||||
"book": book_title,
|
||||
"chapter": 2,
|
||||
"page": 15,
|
||||
"characters": ["Alice"],
|
||||
"themes": ["falling", "wonder", "transformation"],
|
||||
"location": "rabbit hole",
|
||||
},
|
||||
},
|
||||
{
|
||||
"text": "Alice meets the Cheshire Cat, who tells her that everyone in Wonderland is mad, including Alice herself.",
|
||||
"metadata": {
|
||||
"book": book_title,
|
||||
"chapter": 6,
|
||||
"page": 85,
|
||||
"characters": ["Alice", "Cheshire Cat"],
|
||||
"themes": ["madness", "philosophy", "identity"],
|
||||
"location": "Duchess's house",
|
||||
},
|
||||
},
|
||||
{
|
||||
"text": "At the Queen's croquet ground, Alice witnesses the absurd trial that reveals the arbitrary nature of Wonderland's justice system.",
|
||||
"metadata": {
|
||||
"book": book_title,
|
||||
"chapter": 8,
|
||||
"page": 120,
|
||||
"characters": ["Alice", "Queen of Hearts", "King of Hearts"],
|
||||
"themes": ["justice", "absurdity", "authority"],
|
||||
"location": "Queen's court",
|
||||
},
|
||||
},
|
||||
{
|
||||
"text": "Alice realizes that Wonderland was all a dream, even the Rabbit, as she wakes up on the riverbank next to her sister.",
|
||||
"metadata": {
|
||||
"book": book_title,
|
||||
"chapter": 12,
|
||||
"page": 180,
|
||||
"characters": ["Alice", "Sister", "Rabbit"],
|
||||
"themes": ["revelation", "reality", "growth"],
|
||||
"location": "riverbank",
|
||||
},
|
||||
},
|
||||
]
|
||||
|
||||
return sample_chunks
|
||||
|
||||
|
||||
def build_spoiler_free_index(book_chunks: list[dict[str, Any]], index_name: str) -> str:
|
||||
"""
|
||||
Build a LEANN index with book chunks that include spoiler metadata.
|
||||
|
||||
Args:
|
||||
book_chunks: List of book chunks with metadata
|
||||
index_name: Name for the index
|
||||
|
||||
Returns:
|
||||
Path to the built index
|
||||
"""
|
||||
print(f"📚 Building spoiler-free book index: {index_name}")
|
||||
|
||||
# Initialize LEANN builder
|
||||
builder = LeannBuilder(
|
||||
backend_name="hnsw", embedding_model="text-embedding-3-small", embedding_mode="openai"
|
||||
)
|
||||
|
||||
# Add each chunk with its metadata
|
||||
for chunk in book_chunks:
|
||||
builder.add_text(text=chunk["text"], metadata=chunk["metadata"])
|
||||
|
||||
# Build the index
|
||||
index_path = f"{index_name}_book_index"
|
||||
builder.build_index(index_path)
|
||||
|
||||
print(f"✅ Index built successfully: {index_path}")
|
||||
return index_path
|
||||
|
||||
|
||||
def spoiler_free_search(
|
||||
index_path: str,
|
||||
query: str,
|
||||
max_chapter: int,
|
||||
character_filter: Optional[list[str]] = None,
|
||||
) -> list[dict[str, Any]]:
|
||||
"""
|
||||
Perform a spoiler-free search on the book index.
|
||||
|
||||
Args:
|
||||
index_path: Path to the LEANN index
|
||||
query: Search query
|
||||
max_chapter: Maximum chapter number to include
|
||||
character_filter: Optional list of characters to focus on
|
||||
|
||||
Returns:
|
||||
List of search results safe for the reader
|
||||
"""
|
||||
print(f"🔍 Searching: '{query}' (up to chapter {max_chapter})")
|
||||
|
||||
searcher = LeannSearcher(index_path)
|
||||
|
||||
metadata_filters = {"chapter": {"<=": max_chapter}}
|
||||
|
||||
if character_filter:
|
||||
metadata_filters["characters"] = {"contains": character_filter[0]}
|
||||
|
||||
results = searcher.search(query=query, top_k=10, metadata_filters=metadata_filters)
|
||||
|
||||
return results
|
||||
|
||||
|
||||
def demo_spoiler_free_rag():
|
||||
"""
|
||||
Demonstrate the spoiler-free book RAG system.
|
||||
"""
|
||||
print("🎭 Spoiler-Free Book RAG Demo")
|
||||
print("=" * 40)
|
||||
|
||||
# Step 1: Prepare book data
|
||||
book_title = "Alice's Adventures in Wonderland"
|
||||
book_chunks = chunk_book_with_metadata(book_title)
|
||||
|
||||
print(f"📖 Loaded {len(book_chunks)} chunks from '{book_title}'")
|
||||
|
||||
# Step 2: Build the index (in practice, this would be done once)
|
||||
try:
|
||||
index_path = build_spoiler_free_index(book_chunks, "alice_wonderland")
|
||||
except Exception as e:
|
||||
print(f"❌ Failed to build index (likely missing dependencies): {e}")
|
||||
print(
|
||||
"💡 This demo shows the filtering logic - actual indexing requires LEANN dependencies"
|
||||
)
|
||||
return
|
||||
|
||||
# Step 3: Demonstrate various spoiler-free searches
|
||||
search_scenarios = [
|
||||
{
|
||||
"description": "Reader who has only read Chapter 1",
|
||||
"query": "What can you tell me about the rabbit?",
|
||||
"max_chapter": 1,
|
||||
},
|
||||
{
|
||||
"description": "Reader who has read up to Chapter 5",
|
||||
"query": "Tell me about Alice's adventures",
|
||||
"max_chapter": 5,
|
||||
},
|
||||
{
|
||||
"description": "Reader who has read most of the book",
|
||||
"query": "What does the Cheshire Cat represent?",
|
||||
"max_chapter": 10,
|
||||
},
|
||||
{
|
||||
"description": "Reader who has read the whole book",
|
||||
"query": "What can you tell me about the rabbit?",
|
||||
"max_chapter": 12,
|
||||
},
|
||||
]
|
||||
|
||||
for scenario in search_scenarios:
|
||||
print(f"\n📚 Scenario: {scenario['description']}")
|
||||
print(f" Query: {scenario['query']}")
|
||||
|
||||
try:
|
||||
results = spoiler_free_search(
|
||||
index_path=index_path,
|
||||
query=scenario["query"],
|
||||
max_chapter=scenario["max_chapter"],
|
||||
)
|
||||
|
||||
print(f" 📄 Found {len(results)} results:")
|
||||
for i, result in enumerate(results[:3], 1): # Show top 3
|
||||
chapter = result.metadata.get("chapter", "?")
|
||||
location = result.metadata.get("location", "?")
|
||||
print(f" {i}. Chapter {chapter} ({location}): {result.text[:80]}...")
|
||||
|
||||
except Exception as e:
|
||||
print(f" ❌ Search failed: {e}")
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
print("📚 LEANN Spoiler-Free Book RAG Example")
|
||||
print("=====================================")
|
||||
|
||||
try:
|
||||
demo_spoiler_free_rag()
|
||||
except ImportError as e:
|
||||
print(f"❌ Cannot run demo due to missing dependencies: {e}")
|
||||
except Exception as e:
|
||||
print(f"❌ Error running demo: {e}")
|
||||
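As a reading aid for the example above, here is a minimal, hedged sketch of combining several filter operators in one `LeannSearcher.search()` call. The filter format and operator names (`"<="`, `"contains"`, `"!="`, etc.) come from the `search()` docstring later in this diff; the import path, index name, and field values are illustrative assumptions, not confirmed API details.

```python
from leann.api import LeannSearcher  # import path assumed; adjust to your installation

searcher = LeannSearcher("alice_wonderland_book_index")  # index built by the example above

# All filters are ANDed together: early chapters only, Alice present, Queen's court excluded.
results = searcher.search(
    query="What has Alice learned so far?",
    top_k=5,
    metadata_filters={
        "chapter": {"<=": 5},
        "characters": {"contains": "Alice"},
        "location": {"!=": "Queen's court"},
    },
)
for r in results:
    print(r.metadata.get("chapter"), r.text[:60])
```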
@@ -83,7 +83,9 @@ def create_diskann_embedding_server(
|
||||
|
||||
logger.info(f"Loading PassageManager with metadata_file_path: {passages_file}")
|
||||
passages = PassageManager(meta["passage_sources"], metadata_file_path=passages_file)
|
||||
logger.info(f"Loaded PassageManager with {len(passages)} passages from metadata")
|
||||
logger.info(
|
||||
f"Loaded PassageManager with {len(passages.global_offset_map)} passages from metadata"
|
||||
)
|
||||
|
||||
# Import protobuf after ensuring the path is correct
|
||||
try:
|
||||
|
||||
@@ -4,8 +4,8 @@ build-backend = "scikit_build_core.build"
|
||||
|
||||
[project]
|
||||
name = "leann-backend-diskann"
|
||||
version = "0.3.2"
|
||||
dependencies = ["leann-core==0.3.2", "numpy", "protobuf>=3.19.0"]
|
||||
version = "0.3.1"
|
||||
dependencies = ["leann-core==0.3.1", "numpy", "protobuf>=3.19.0"]
|
||||
|
||||
[tool.scikit-build]
|
||||
# Key: simplified CMake path
|
||||
|
||||
@@ -1,7 +1,6 @@
|
||||
import logging
|
||||
import os
|
||||
import shutil
|
||||
import time
|
||||
from pathlib import Path
|
||||
from typing import Any, Literal, Optional
|
||||
|
||||
@@ -237,7 +236,6 @@ class HNSWSearcher(BaseSearcher):
|
||||
distances = np.empty((batch_size_query, top_k), dtype=np.float32)
|
||||
labels = np.empty((batch_size_query, top_k), dtype=np.int64)
|
||||
|
||||
search_time = time.time()
|
||||
self._index.search(
|
||||
query.shape[0],
|
||||
faiss.swig_ptr(query),
|
||||
@@ -246,8 +244,7 @@ class HNSWSearcher(BaseSearcher):
|
||||
faiss.swig_ptr(labels),
|
||||
params,
|
||||
)
|
||||
search_time = time.time() - search_time
|
||||
logger.info(f" Search time in HNSWSearcher.search() backend: {search_time} seconds")
|
||||
|
||||
string_labels = [[str(int_label) for int_label in batch_labels] for batch_labels in labels]
|
||||
|
||||
return {"labels": string_labels, "distances": distances}
|
||||
|
||||
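For orientation on the return value above: the HNSW backend hands back passage IDs as strings plus a per-query distance matrix, and the core `LeannSearcher` (shown later in this diff) resolves each ID through `PassageManager.get_passage`. The sketch below is a simplified, hedged illustration of that hand-off, not the exact enrichment code in `api.py`; the score-from-distance mapping is an assumption.

```python
import numpy as np

def enrich(backend_out: dict, passages) -> list[dict]:
    """Pair each returned passage ID with its stored text and metadata.

    `passages` is assumed to behave like PassageManager in this diff:
    get_passage(passage_id) -> {"text": ..., "metadata": ...}.
    """
    enriched = []
    labels = backend_out["labels"][0]                  # IDs for the first query in the batch
    distances = np.asarray(backend_out["distances"])[0]
    for passage_id, dist in zip(labels, distances):
        passage = passages.get_passage(passage_id)
        enriched.append(
            {
                "id": passage_id,
                "score": float(dist),                  # simplified: raw distance used as score
                "text": passage["text"],
                "metadata": passage.get("metadata", {}),
            }
        )
    return enriched
```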
@@ -90,7 +90,9 @@ def create_hnsw_embedding_server(
|
||||
embedding_dim: int = int(meta.get("dimensions", 0))
|
||||
except Exception:
|
||||
embedding_dim = 0
|
||||
logger.info(f"Loaded PassageManager with {len(passages)} passages from metadata")
|
||||
logger.info(
|
||||
f"Loaded PassageManager with {len(passages.global_offset_map)} passages from metadata"
|
||||
)
|
||||
|
||||
# (legacy ZMQ thread removed; using shutdown-capable server only)
|
||||
|
||||
|
||||
@@ -6,10 +6,10 @@ build-backend = "scikit_build_core.build"
|
||||
|
||||
[project]
|
||||
name = "leann-backend-hnsw"
|
||||
version = "0.3.2"
|
||||
version = "0.3.1"
|
||||
description = "Custom-built HNSW (Faiss) backend for the Leann toolkit."
|
||||
dependencies = [
|
||||
"leann-core==0.3.2",
|
||||
"leann-core==0.3.1",
|
||||
"numpy",
|
||||
"pyzmq>=23.0.0",
|
||||
"msgpack>=1.0.0",
|
||||
|
||||
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
|
||||
|
||||
[project]
|
||||
name = "leann-core"
|
||||
version = "0.3.2"
|
||||
version = "0.3.1"
|
||||
description = "Core API and plugin system for LEANN"
|
||||
readme = "README.md"
|
||||
requires-python = ">=3.9"
|
||||
|
||||
@@ -10,7 +10,7 @@ import time
|
||||
import warnings
|
||||
from dataclasses import dataclass, field
|
||||
from pathlib import Path
|
||||
from typing import Any, Literal, Optional, Union
|
||||
from typing import Any, Literal, Optional
|
||||
|
||||
import numpy as np
|
||||
|
||||
@@ -18,7 +18,6 @@ from leann.interface import LeannBackendSearcherInterface
|
||||
|
||||
from .chat import get_llm
|
||||
from .interface import LeannBackendFactoryInterface
|
||||
from .metadata_filter import MetadataFilterEngine
|
||||
from .registry import BACKEND_REGISTRY
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
@@ -120,13 +119,9 @@ class PassageManager:
|
||||
def __init__(
|
||||
self, passage_sources: list[dict[str, Any]], metadata_file_path: Optional[str] = None
|
||||
):
|
||||
self.offset_maps: dict[str, dict[str, int]] = {}
|
||||
self.passage_files: dict[str, str] = {}
|
||||
# Avoid materializing a single gigantic global map to reduce memory
|
||||
# footprint on very large corpora (e.g., 60M+ passages). Instead, keep
|
||||
# per-shard maps and do a lightweight per-shard lookup on demand.
|
||||
self._total_count: int = 0
|
||||
self.filter_engine = MetadataFilterEngine() # Initialize filter engine
|
||||
self.offset_maps = {}
|
||||
self.passage_files = {}
|
||||
self.global_offset_map = {} # Combined map for fast lookup
|
||||
|
||||
# Derive index base name for standard sibling fallbacks, e.g., <index_name>.passages.*
|
||||
index_name_base = None
|
||||
@@ -147,25 +142,12 @@ class PassageManager:
|
||||
default_name: Optional[str],
|
||||
source_dict: dict[str, Any],
|
||||
) -> list[Path]:
|
||||
"""
|
||||
Build an ordered list of candidate paths. For relative paths specified in
|
||||
metadata, prefer resolution relative to the metadata file directory first,
|
||||
then fall back to CWD-based resolution, and finally to conventional
|
||||
sibling defaults (e.g., <index_base>.passages.idx / .jsonl).
|
||||
"""
|
||||
candidates: list[Path] = []
|
||||
# 1) Primary path
|
||||
# 1) Primary as-is (absolute or relative)
|
||||
if primary:
|
||||
p = Path(primary)
|
||||
if p.is_absolute():
|
||||
candidates.append(p)
|
||||
else:
|
||||
# Prefer metadata-relative resolution for relative paths
|
||||
if metadata_file_path:
|
||||
candidates.append(Path(metadata_file_path).parent / p)
|
||||
# Also consider CWD-relative as a fallback for legacy layouts
|
||||
candidates.append(Path.cwd() / p)
|
||||
# 2) metadata-relative explicit relative key (if present)
|
||||
candidates.append(p if p.is_absolute() else (Path.cwd() / p))
|
||||
# 2) metadata-relative explicit relative key
|
||||
if metadata_file_path and source_dict.get(relative_key):
|
||||
candidates.append(Path(metadata_file_path).parent / source_dict[relative_key])
|
||||
# 3) metadata-relative standard sibling filename
|
||||
@@ -195,78 +177,23 @@ class PassageManager:
|
||||
raise FileNotFoundError(f"Passage index file not found: {index_file}")
|
||||
|
||||
with open(index_file, "rb") as f:
|
||||
offset_map: dict[str, int] = pickle.load(f)
|
||||
offset_map = pickle.load(f)
|
||||
self.offset_maps[passage_file] = offset_map
|
||||
self.passage_files[passage_file] = passage_file
|
||||
self._total_count += len(offset_map)
|
||||
|
||||
# Build global map for O(1) lookup
|
||||
for passage_id, offset in offset_map.items():
|
||||
self.global_offset_map[passage_id] = (passage_file, offset)
|
||||
|
||||
def get_passage(self, passage_id: str) -> dict[str, Any]:
|
||||
# Fast path: check each shard map (there are typically few shards).
|
||||
# This avoids building a massive combined dict while keeping lookups
|
||||
# bounded by the number of shards.
|
||||
for passage_file, offset_map in self.offset_maps.items():
|
||||
try:
|
||||
offset = offset_map[passage_id]
|
||||
with open(passage_file, encoding="utf-8") as f:
|
||||
f.seek(offset)
|
||||
return json.loads(f.readline())
|
||||
except KeyError:
|
||||
continue
|
||||
if passage_id in self.global_offset_map:
|
||||
passage_file, offset = self.global_offset_map[passage_id]
|
||||
# Lazy file opening - only open when needed
|
||||
with open(passage_file, encoding="utf-8") as f:
|
||||
f.seek(offset)
|
||||
return json.loads(f.readline())
|
||||
raise KeyError(f"Passage ID not found: {passage_id}")
|
||||
|
||||
def filter_search_results(
|
||||
self,
|
||||
search_results: list[SearchResult],
|
||||
metadata_filters: Optional[dict[str, dict[str, Union[str, int, float, bool, list]]]],
|
||||
) -> list[SearchResult]:
|
||||
"""
|
||||
Apply metadata filters to search results.
|
||||
|
||||
Args:
|
||||
search_results: List of SearchResult objects
|
||||
metadata_filters: Filter specifications to apply
|
||||
|
||||
Returns:
|
||||
Filtered list of SearchResult objects
|
||||
"""
|
||||
if not metadata_filters:
|
||||
return search_results
|
||||
|
||||
logger.debug(f"Applying metadata filters to {len(search_results)} results")
|
||||
|
||||
# Convert SearchResult objects to dictionaries for the filter engine
|
||||
result_dicts = []
|
||||
for result in search_results:
|
||||
result_dicts.append(
|
||||
{
|
||||
"id": result.id,
|
||||
"score": result.score,
|
||||
"text": result.text,
|
||||
"metadata": result.metadata,
|
||||
}
|
||||
)
|
||||
|
||||
# Apply filters using the filter engine
|
||||
filtered_dicts = self.filter_engine.apply_filters(result_dicts, metadata_filters)
|
||||
|
||||
# Convert back to SearchResult objects
|
||||
filtered_results = []
|
||||
for result_dict in filtered_dicts:
|
||||
filtered_results.append(
|
||||
SearchResult(
|
||||
id=result_dict["id"],
|
||||
score=result_dict["score"],
|
||||
text=result_dict["text"],
|
||||
metadata=result_dict["metadata"],
|
||||
)
|
||||
)
|
||||
|
||||
logger.debug(f"Filtered results: {len(filtered_results)} remaining")
|
||||
return filtered_results
|
||||
|
||||
def __len__(self) -> int:
|
||||
return self._total_count
|
||||
|
||||
|
||||
class LeannBuilder:
|
||||
def __init__(
|
||||
@@ -630,8 +557,6 @@ class LeannSearcher:
|
||||
self.passage_manager = PassageManager(
|
||||
self.meta_data.get("passage_sources", []), metadata_file_path=self.meta_path_str
|
||||
)
|
||||
# Preserve backend name for conditional parameter forwarding
|
||||
self.backend_name = backend_name
|
||||
backend_factory = BACKEND_REGISTRY.get(backend_name)
|
||||
if backend_factory is None:
|
||||
raise ValueError(f"Backend '{backend_name}' not found.")
|
||||
@@ -651,44 +576,15 @@ class LeannSearcher:
|
||||
recompute_embeddings: bool = True,
|
||||
pruning_strategy: Literal["global", "local", "proportional"] = "global",
|
||||
expected_zmq_port: int = 5557,
|
||||
metadata_filters: Optional[dict[str, dict[str, Union[str, int, float, bool, list]]]] = None,
|
||||
batch_size: int = 0,
|
||||
**kwargs,
|
||||
) -> list[SearchResult]:
|
||||
"""
|
||||
Search for nearest neighbors with optional metadata filtering.
|
||||
|
||||
Args:
|
||||
query: Text query to search for
|
||||
top_k: Number of nearest neighbors to return
|
||||
complexity: Search complexity/candidate list size, higher = more accurate but slower
|
||||
beam_width: Number of parallel search paths/IO requests per iteration
|
||||
prune_ratio: Ratio of neighbors to prune via approximate distance (0.0-1.0)
|
||||
recompute_embeddings: Whether to fetch fresh embeddings from server vs use stored codes
|
||||
pruning_strategy: Candidate selection strategy - "global" (default), "local", or "proportional"
|
||||
expected_zmq_port: ZMQ port for embedding server communication
|
||||
metadata_filters: Optional filters to apply to search results based on metadata.
|
||||
Format: {"field_name": {"operator": value}}
|
||||
Supported operators:
|
||||
- Comparison: "==", "!=", "<", "<=", ">", ">="
|
||||
- Membership: "in", "not_in"
|
||||
- String: "contains", "starts_with", "ends_with"
|
||||
Example: {"chapter": {"<=": 5}, "tags": {"in": ["fiction", "drama"]}}
|
||||
**kwargs: Backend-specific parameters
|
||||
|
||||
Returns:
|
||||
List of SearchResult objects with text, metadata, and similarity scores
|
||||
"""
|
||||
logger.info("🔍 LeannSearcher.search() called:")
|
||||
logger.info(f" Query: '{query}'")
|
||||
logger.info(f" Top_k: {top_k}")
|
||||
logger.info(f" Metadata filters: {metadata_filters}")
|
||||
logger.info(f" Additional kwargs: {kwargs}")
|
||||
|
||||
# Smart top_k detection and adjustment
|
||||
# Use PassageManager length (sum of shard sizes) to avoid
|
||||
# depending on a massive combined map
|
||||
total_docs = len(self.passage_manager)
|
||||
total_docs = len(self.passage_manager.global_offset_map)
|
||||
original_top_k = top_k
|
||||
if top_k > total_docs:
|
||||
top_k = total_docs
|
||||
@@ -717,33 +613,23 @@ class LeannSearcher:
|
||||
use_server_if_available=recompute_embeddings,
|
||||
zmq_port=zmq_port,
|
||||
)
|
||||
logger.info(f" Generated embedding shape: {query_embedding.shape}")
|
||||
embedding_time = time.time() - start_time
|
||||
logger.info(f" Embedding time: {embedding_time} seconds")
|
||||
# logger.info(f" Generated embedding shape: {query_embedding.shape}")
|
||||
# time.time() - start_time
|
||||
# logger.info(f" Embedding time: {embedding_time} seconds")
|
||||
|
||||
start_time = time.time()
|
||||
backend_search_kwargs: dict[str, Any] = {
|
||||
"complexity": complexity,
|
||||
"beam_width": beam_width,
|
||||
"prune_ratio": prune_ratio,
|
||||
"recompute_embeddings": recompute_embeddings,
|
||||
"pruning_strategy": pruning_strategy,
|
||||
"zmq_port": zmq_port,
|
||||
}
|
||||
# Only HNSW supports batching; forward conditionally
|
||||
if self.backend_name == "hnsw":
|
||||
backend_search_kwargs["batch_size"] = batch_size
|
||||
|
||||
# Merge any extra kwargs last
|
||||
backend_search_kwargs.update(kwargs)
|
||||
|
||||
results = self.backend_impl.search(
|
||||
query_embedding,
|
||||
top_k,
|
||||
**backend_search_kwargs,
|
||||
complexity=complexity,
|
||||
beam_width=beam_width,
|
||||
prune_ratio=prune_ratio,
|
||||
recompute_embeddings=recompute_embeddings,
|
||||
pruning_strategy=pruning_strategy,
|
||||
zmq_port=zmq_port,
|
||||
**kwargs,
|
||||
)
|
||||
search_time = time.time() - start_time
|
||||
logger.info(f" Search time in search() LEANN searcher: {search_time} seconds")
|
||||
# logger.info(f" Search time: {search_time} seconds")
|
||||
logger.info(f" Backend returned: labels={len(results.get('labels', [[]])[0])} results")
|
||||
|
||||
enriched_results = []
|
||||
@@ -782,13 +668,6 @@ class LeannSearcher:
|
||||
f" {RED}✗{RESET} [{i + 1:2d}] ID: '{string_id}' -> {RED}ERROR: Passage not found!{RESET}"
|
||||
)
|
||||
|
||||
# Apply metadata filters if specified
|
||||
if metadata_filters:
|
||||
logger.info(f" 🔍 Applying metadata filters: {metadata_filters}")
|
||||
enriched_results = self.passage_manager.filter_search_results(
|
||||
enriched_results, metadata_filters
|
||||
)
|
||||
|
||||
# Define color codes outside the loop for final message
|
||||
GREEN = "\033[92m"
|
||||
RESET = "\033[0m"
|
||||
@@ -829,15 +708,9 @@ class LeannChat:
|
||||
index_path: str,
|
||||
llm_config: Optional[dict[str, Any]] = None,
|
||||
enable_warmup: bool = False,
|
||||
searcher: Optional[LeannSearcher] = None,
|
||||
**kwargs,
|
||||
):
|
||||
if searcher is None:
|
||||
self.searcher = LeannSearcher(index_path, enable_warmup=enable_warmup, **kwargs)
|
||||
self._owns_searcher = True
|
||||
else:
|
||||
self.searcher = searcher
|
||||
self._owns_searcher = False
|
||||
self.searcher = LeannSearcher(index_path, enable_warmup=enable_warmup, **kwargs)
|
||||
self.llm = get_llm(llm_config)
|
||||
|
||||
def ask(
|
||||
@@ -851,8 +724,6 @@ class LeannChat:
|
||||
pruning_strategy: Literal["global", "local", "proportional"] = "global",
|
||||
llm_kwargs: Optional[dict[str, Any]] = None,
|
||||
expected_zmq_port: int = 5557,
|
||||
metadata_filters: Optional[dict[str, dict[str, Union[str, int, float, bool, list]]]] = None,
|
||||
batch_size: int = 0,
|
||||
**search_kwargs,
|
||||
):
|
||||
if llm_kwargs is None:
|
||||
@@ -867,12 +738,10 @@ class LeannChat:
|
||||
recompute_embeddings=recompute_embeddings,
|
||||
pruning_strategy=pruning_strategy,
|
||||
expected_zmq_port=expected_zmq_port,
|
||||
metadata_filters=metadata_filters,
|
||||
batch_size=batch_size,
|
||||
**search_kwargs,
|
||||
)
|
||||
search_time = time.time() - search_time
|
||||
logger.info(f" Search time: {search_time} seconds")
|
||||
# logger.info(f" Search time: {search_time} seconds")
|
||||
context = "\n\n".join([r.text for r in results])
|
||||
prompt = (
|
||||
"Here is some retrieved context that might help answer your question:\n\n"
|
||||
@@ -908,9 +777,7 @@ class LeannChat:
|
||||
This method should be called after you're done using the chat interface,
|
||||
especially in test environments or batch processing scenarios.
|
||||
"""
|
||||
# Only stop the embedding server if this LeannChat instance created the searcher.
|
||||
# When a shared searcher is passed in, avoid shutting down the server to enable reuse.
|
||||
if getattr(self, "_owns_searcher", False) and hasattr(self.searcher, "cleanup"):
|
||||
if hasattr(self.searcher, "cleanup"):
|
||||
self.searcher.cleanup()
|
||||
|
||||
# Enable automatic cleanup patterns
|
||||
|
||||
@@ -522,8 +522,6 @@ class OllamaChat(LLMInterface):
|
||||
logger.debug(f"Sending request to Ollama: {payload}")
|
||||
try:
|
||||
logger.info("Sending request to Ollama and waiting for response...")
|
||||
max_tokens = kwargs.get("max_tokens", 1000)
|
||||
payload["options"]["max_tokens"] = max_tokens
|
||||
response = requests.post(full_url, data=json.dumps(payload))
|
||||
response.raise_for_status()
|
||||
|
||||
@@ -622,8 +620,8 @@ class HFChat(LLMInterface):
|
||||
is_qwen_model = "qwen" in self.model.config._name_or_path.lower()
|
||||
|
||||
# For Qwen models, automatically add /no_think to the prompt
|
||||
# if is_qwen_model and "/no_think" not in prompt and "/think" not in prompt:
|
||||
# prompt = prompt + " /no_think"
|
||||
if is_qwen_model and "/no_think" not in prompt and "/think" not in prompt:
|
||||
prompt = prompt + " /no_think"
|
||||
|
||||
# Prepare chat template
|
||||
messages = [{"role": "user", "content": prompt}]
|
||||
|
||||
@@ -1,8 +1,7 @@
|
||||
import argparse
|
||||
import asyncio
|
||||
import sys
|
||||
from pathlib import Path
|
||||
from typing import Any, Optional, Union
|
||||
from typing import Optional, Union
|
||||
|
||||
from llama_index.core import SimpleDirectoryReader
|
||||
from llama_index.core.node_parser import SentenceSplitter
|
||||
@@ -181,29 +180,6 @@ Examples:
|
||||
default=50,
|
||||
help="Code chunk overlap (default: 50)",
|
||||
)
|
||||
build_parser.add_argument(
|
||||
"--use-ast-chunking",
|
||||
action="store_true",
|
||||
help="Enable AST-aware chunking for code files (requires astchunk)",
|
||||
)
|
||||
build_parser.add_argument(
|
||||
"--ast-chunk-size",
|
||||
type=int,
|
||||
default=768,
|
||||
help="AST chunk size in characters (default: 768)",
|
||||
)
|
||||
build_parser.add_argument(
|
||||
"--ast-chunk-overlap",
|
||||
type=int,
|
||||
default=96,
|
||||
help="AST chunk overlap in characters (default: 96)",
|
||||
)
|
||||
build_parser.add_argument(
|
||||
"--ast-fallback-traditional",
|
||||
action="store_true",
|
||||
default=True,
|
||||
help="Fall back to traditional chunking if AST chunking fails (default: True)",
|
||||
)
|
||||
|
||||
# Search command
|
||||
search_parser = subparsers.add_parser("search", help="Search documents")
|
||||
@@ -857,7 +833,6 @@ Examples:
|
||||
docs_paths: Union[str, list],
|
||||
custom_file_types: Union[str, None] = None,
|
||||
include_hidden: bool = False,
|
||||
args: Optional[dict[str, Any]] = None,
|
||||
):
|
||||
# Handle both single path (string) and multiple paths (list) for backward compatibility
|
||||
if isinstance(docs_paths, str):
|
||||
@@ -1163,50 +1138,18 @@ Examples:
|
||||
}
|
||||
|
||||
print("start chunking documents")
|
||||
# Add progress bar for document chunking
|
||||
for doc in tqdm(documents, desc="Chunking documents", unit="doc"):
|
||||
# Check if this is a code file based on source path
|
||||
source_path = doc.metadata.get("source", "")
|
||||
is_code_file = any(source_path.endswith(ext) for ext in code_file_exts)
|
||||
|
||||
# Check if AST chunking is requested
|
||||
use_ast = getattr(args, "use_ast_chunking", False)
|
||||
# Use appropriate parser based on file type
|
||||
parser = self.code_parser if is_code_file else self.node_parser
|
||||
nodes = parser.get_nodes_from_documents([doc])
|
||||
|
||||
if use_ast:
|
||||
print("🧠 Using AST-aware chunking for code files")
|
||||
try:
|
||||
# Import enhanced chunking utilities
|
||||
# Add apps directory to path to import chunking utilities
|
||||
apps_dir = Path(__file__).parent.parent.parent.parent.parent / "apps"
|
||||
if apps_dir.exists():
|
||||
sys.path.insert(0, str(apps_dir))
|
||||
|
||||
from chunking import create_text_chunks
|
||||
|
||||
# Use enhanced chunking with AST support
|
||||
all_texts = create_text_chunks(
|
||||
documents,
|
||||
chunk_size=self.node_parser.chunk_size,
|
||||
chunk_overlap=self.node_parser.chunk_overlap,
|
||||
use_ast_chunking=True,
|
||||
ast_chunk_size=getattr(args, "ast_chunk_size", 768),
|
||||
ast_chunk_overlap=getattr(args, "ast_chunk_overlap", 96),
|
||||
code_file_extensions=None, # Use defaults
|
||||
ast_fallback_traditional=getattr(args, "ast_fallback_traditional", True),
|
||||
)
|
||||
|
||||
except ImportError as e:
|
||||
print(f"⚠️ AST chunking not available ({e}), falling back to traditional chunking")
|
||||
use_ast = False
|
||||
|
||||
if not use_ast:
|
||||
# Use traditional chunking logic
|
||||
for doc in tqdm(documents, desc="Chunking documents", unit="doc"):
|
||||
# Check if this is a code file based on source path
|
||||
source_path = doc.metadata.get("source", "")
|
||||
is_code_file = any(source_path.endswith(ext) for ext in code_file_exts)
|
||||
|
||||
# Use appropriate parser based on file type
|
||||
parser = self.code_parser if is_code_file else self.node_parser
|
||||
nodes = parser.get_nodes_from_documents([doc])
|
||||
|
||||
for node in nodes:
|
||||
all_texts.append(node.get_content())
|
||||
for node in nodes:
|
||||
all_texts.append(node.get_content())
|
||||
|
||||
print(f"Loaded {len(documents)} documents, {len(all_texts)} chunks")
|
||||
return all_texts
|
||||
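For readers following the CLI chunking path above, here is a small, hedged sketch of calling the `create_text_chunks` helper directly, using only keyword arguments visible in this hunk; the numeric values are illustrative defaults and `documents` stands in for LlamaIndex documents loaded elsewhere.

```python
from chunking import create_text_chunks  # lives under apps/, per the sys.path insert above

chunks = create_text_chunks(
    documents,                  # LlamaIndex Document objects
    chunk_size=256,
    chunk_overlap=50,
    use_ast_chunking=True,      # falls back to traditional chunking if astchunk is missing
    ast_chunk_size=768,
    ast_chunk_overlap=96,
    code_file_extensions=None,  # use the helper's defaults
    ast_fallback_traditional=True,
)
print(f"{len(chunks)} text chunks ready for indexing")
```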
@@ -1273,7 +1216,7 @@ Examples:
|
||||
)
|
||||
|
||||
all_texts = self.load_documents(
|
||||
docs_paths, args.file_types, include_hidden=args.include_hidden, args=args
|
||||
docs_paths, args.file_types, include_hidden=args.include_hidden
|
||||
)
|
||||
if not all_texts:
|
||||
print("No documents found")
|
||||
|
||||
@@ -6,7 +6,6 @@ Preserves all optimization parameters to ensure performance
|
||||
|
||||
import logging
|
||||
import os
|
||||
import time
|
||||
from typing import Any
|
||||
|
||||
import numpy as np
|
||||
@@ -21,9 +20,6 @@ logger.setLevel(log_level)
|
||||
# Global model cache to avoid repeated loading
|
||||
_model_cache: dict[str, Any] = {}
|
||||
|
||||
# Enable fast tokenizer multithreading by default
|
||||
os.environ.setdefault("TOKENIZERS_PARALLELISM", "true")
|
||||
|
||||
|
||||
def compute_embeddings(
|
||||
texts: list[str],
|
||||
@@ -32,8 +28,6 @@ def compute_embeddings(
|
||||
is_build: bool = False,
|
||||
batch_size: int = 32,
|
||||
adaptive_optimization: bool = True,
|
||||
manual_tokenize: bool = False,
|
||||
max_length: int = 256,
|
||||
) -> np.ndarray:
|
||||
"""
|
||||
Unified embedding computation entry point
|
||||
@@ -56,8 +50,6 @@ def compute_embeddings(
|
||||
is_build=is_build,
|
||||
batch_size=batch_size,
|
||||
adaptive_optimization=adaptive_optimization,
|
||||
manual_tokenize=manual_tokenize,
|
||||
max_length=max_length,
|
||||
)
|
||||
elif mode == "openai":
|
||||
return compute_embeddings_openai(texts, model_name)
|
||||
@@ -73,18 +65,13 @@ def compute_embeddings(
|
||||
|
||||
def compute_embeddings_sentence_transformers(
|
||||
texts: list[str],
|
||||
model_name: str,
|
||||
model_name: str,
|
||||
use_fp16: bool = True,
|
||||
device: str = "auto",
|
||||
batch_size: int = 32,
|
||||
is_build: bool = False,
|
||||
adaptive_optimization: bool = True,
|
||||
manual_tokenize: bool = False,
|
||||
max_length: int = 256,
|
||||
) -> np.ndarray:
|
||||
manual_tokenize = False
|
||||
batch_size = 512
|
||||
|
||||
"""
|
||||
Compute embeddings using SentenceTransformer with model caching and adaptive optimization
|
||||
|
||||
@@ -125,7 +112,7 @@ def compute_embeddings_sentence_transformers(
|
||||
# Keep original batch_size for CPU
|
||||
|
||||
# Create cache key
|
||||
cache_key = f"sentence_transformers_{model_name}_{device}_{use_fp16}_optimized_len{max_length}"
|
||||
cache_key = f"sentence_transformers_{model_name}_{device}_{use_fp16}_optimized"
|
||||
|
||||
# Check if model is already cached
|
||||
if cache_key in _model_cache:
|
||||
@@ -164,18 +151,13 @@ def compute_embeddings_sentence_transformers(
|
||||
"torch_dtype": torch.float16 if use_fp16 else torch.float32,
|
||||
"low_cpu_mem_usage": True,
|
||||
"_fast_init": True,
|
||||
"attn_implementation": "eager", # Use eager attention for speed
|
||||
}
|
||||
# Prefer SDPA on CUDA; fall back to eager elsewhere
|
||||
if device == "cuda":
|
||||
model_kwargs["attn_implementation"] = "sdpa"
|
||||
else:
|
||||
model_kwargs["attn_implementation"] = "eager"
|
||||
|
||||
tokenizer_kwargs = {
|
||||
"use_fast": True,
|
||||
"padding": "max_length",
|
||||
"padding": True,
|
||||
"truncation": True,
|
||||
"max_length": max_length,
|
||||
}
|
||||
|
||||
try:
|
||||
@@ -227,181 +209,25 @@ def compute_embeddings_sentence_transformers(
|
||||
for param in model.parameters():
|
||||
param.requires_grad_(False)
|
||||
|
||||
# Enforce max sequence length for encode path
|
||||
try:
|
||||
if hasattr(model, "max_seq_length"):
|
||||
model.max_seq_length = max_length
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
# Cache the model
|
||||
_model_cache[cache_key] = model
|
||||
logger.info(f"Model cached: {cache_key}")
|
||||
|
||||
# Compute embeddings with optimized inference mode
|
||||
logger.info(
|
||||
f"Starting embedding computation... (batch_size: {batch_size}, manual_tokenize={manual_tokenize})"
|
||||
)
|
||||
logger.info(f"Starting embedding computation... (batch_size: {batch_size})")
|
||||
|
||||
start_time = time.time()
|
||||
if not manual_tokenize:
|
||||
# Use SentenceTransformer's optimized encode path (default)
|
||||
# Log average input length and model precision before encoding
|
||||
with torch.inference_mode():
|
||||
# print avg len of texts
|
||||
avg_len = sum(len(text) for text in texts) / len(texts)
|
||||
logger.info(f"Avg len of texts: {avg_len}")
|
||||
# print the precision of the model
|
||||
logger.info(f"Model precision: {model.dtype}")
|
||||
time_start = time.time()
|
||||
embeddings = model.encode(
|
||||
texts,
|
||||
batch_size=batch_size,
|
||||
show_progress_bar=is_build, # Don't show progress bar in server environment
|
||||
convert_to_tensor=True,
|
||||
normalize_embeddings=False,
|
||||
device=device,
|
||||
max_length=max_length,
|
||||
)
|
||||
|
||||
# Synchronize if CUDA to measure accurate wall time
|
||||
try:
|
||||
# if torch.cuda.is_available():
|
||||
# torch.cuda.synchronize()
|
||||
time_end = time.time()
|
||||
embedding_time, embedding_tpt = (
|
||||
time_end - time_start,
|
||||
embeddings.shape[0] / (time_end - time_start),
|
||||
)
|
||||
logger.info(
|
||||
f"Time taken in embedding {batch_size} texts in embedding model: {embedding_time} seconds, embedding tpt: {embedding_tpt} seqs/s"
|
||||
)
|
||||
except Exception:
|
||||
pass
|
||||
# Single CPU copy after timing (avoid per-batch D2H sync)
|
||||
if isinstance(embeddings, torch.Tensor):
|
||||
embeddings = embeddings.float().cpu().numpy()
|
||||
else:
|
||||
time_start = time.time()
|
||||
# Manual tokenization + forward pass using HF AutoTokenizer/AutoModel
|
||||
try:
|
||||
from transformers import AutoModel, AutoTokenizer # type: ignore
|
||||
except Exception as e:
|
||||
raise ImportError(f"transformers is required for manual_tokenize=True: {e}")
|
||||
|
||||
# Cache tokenizer and model
|
||||
tok_cache_key = f"hf_tokenizer_{model_name}_len{max_length}_padmax"
|
||||
mdl_cache_key = f"hf_model_{model_name}_{device}_{use_fp16}_len{max_length}"
|
||||
if tok_cache_key in _model_cache and mdl_cache_key in _model_cache:
|
||||
hf_tokenizer = _model_cache[tok_cache_key]
|
||||
hf_model = _model_cache[mdl_cache_key]
|
||||
logger.info("Using cached HF tokenizer/model for manual path")
|
||||
else:
|
||||
logger.info("Loading HF tokenizer/model for manual tokenization path")
|
||||
hf_tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
|
||||
torch_dtype = torch.float16 if (use_fp16 and device == "cuda") else torch.float32
|
||||
hf_model = AutoModel.from_pretrained(model_name, torch_dtype=torch_dtype)
|
||||
hf_model.to(device)
|
||||
hf_model.eval()
|
||||
# Optional compile on supported devices
|
||||
if device in ["cuda", "mps"]:
|
||||
try:
|
||||
hf_model = torch.compile(hf_model, mode="reduce-overhead", dynamic=True) # type: ignore
|
||||
except Exception:
|
||||
pass
|
||||
_model_cache[tok_cache_key] = hf_tokenizer
|
||||
_model_cache[mdl_cache_key] = hf_model
|
||||
|
||||
emb_list: list[torch.Tensor] = []
|
||||
# Progress bar when building or for large inputs
|
||||
show_progress = is_build or len(texts) > 32
|
||||
show_progress = False
|
||||
try:
|
||||
if show_progress:
|
||||
from tqdm import tqdm # type: ignore
|
||||
|
||||
batch_iter = tqdm(
|
||||
range(0, len(texts), batch_size),
|
||||
desc="Embedding (manual)",
|
||||
unit="batch",
|
||||
)
|
||||
else:
|
||||
batch_iter = range(0, len(texts), batch_size)
|
||||
except Exception:
|
||||
batch_iter = range(0, len(texts), batch_size)
|
||||
|
||||
start_time_manual = time.time()
|
||||
with torch.inference_mode():
|
||||
for start_index in batch_iter:
|
||||
end_index = min(start_index + batch_size, len(texts))
|
||||
batch_texts = texts[start_index:end_index]
|
||||
tokenize_start_time = time.time()
|
||||
inputs = hf_tokenizer(
|
||||
batch_texts,
|
||||
padding="max_length",
|
||||
truncation=True,
|
||||
max_length=max_length,
|
||||
return_tensors="pt",
|
||||
)
|
||||
tokenize_end_time = time.time()
|
||||
logger.debug(
|
||||
f"Tokenize time taken: {tokenize_end_time - tokenize_start_time} seconds"
|
||||
)
|
||||
to_device_start_time = time.time()
|
||||
# Pin CPU memory then transfer non-blocking to GPU when available
|
||||
inputs = {
|
||||
k: (v.pin_memory() if (device == "cuda" and v.device.type == "cpu") else v)
|
||||
for k, v in inputs.items()
|
||||
}
|
||||
inputs = {
|
||||
k: v.to(device, non_blocking=(device == "cuda")) for k, v in inputs.items()
|
||||
}
|
||||
to_device_end_time = time.time()
|
||||
logger.debug(
|
||||
f"To device time taken: {to_device_end_time - to_device_start_time} seconds"
|
||||
)
|
||||
# if device == "cuda":
|
||||
# torch.cuda.synchronize()
|
||||
forward_start_time = time.time()
|
||||
outputs = hf_model(**inputs)
|
||||
# if device == "cuda":
|
||||
# torch.cuda.synchronize()
|
||||
forward_end_time = time.time()
|
||||
logger.debug(f"Forward time taken: {forward_end_time - forward_start_time} seconds")
|
||||
last_hidden_state = outputs.last_hidden_state # (B, L, H)
|
||||
attention_mask = inputs.get("attention_mask")
|
||||
if attention_mask is None:
|
||||
# Fallback: assume all tokens are valid
|
||||
pooled = last_hidden_state.mean(dim=1)
|
||||
else:
|
||||
mask = attention_mask.unsqueeze(-1).to(last_hidden_state.dtype)
|
||||
masked = last_hidden_state * mask
|
||||
lengths = mask.sum(dim=1).clamp(min=1)
|
||||
pooled = masked.sum(dim=1) / lengths
|
||||
# Accumulate on-device; single D2H copy after loop
|
||||
emb_list.append(pooled.detach())
|
||||
|
||||
# Concatenate and single-copy to CPU/NumPy
|
||||
embeddings_tensor = torch.cat(emb_list, dim=0)
|
||||
embeddings = embeddings_tensor.float().cpu().numpy()
|
||||
# try:
|
||||
# if torch.cuda.is_available():
|
||||
# torch.cuda.synchronize()
|
||||
# except Exception:
|
||||
# pass
|
||||
end_time = time.time()
|
||||
logger.info(f"Manual tokenize time taken: {end_time - start_time_manual} seconds")
|
||||
time_end = time.time()
|
||||
tokenize_time, tokenize_tpt = (
|
||||
time_end - time_start,
|
||||
embeddings.shape[0] / (time_end - time_start),
|
||||
# Use torch.inference_mode for optimal performance
|
||||
with torch.inference_mode():
|
||||
embeddings = model.encode(
|
||||
texts,
|
||||
batch_size=batch_size,
|
||||
show_progress_bar=is_build, # Don't show progress bar in server environment
|
||||
convert_to_numpy=True,
|
||||
normalize_embeddings=False,
|
||||
device=device,
|
||||
)
|
||||
logger.info(
|
||||
f"Tokenize time taken: {tokenize_time} seconds, tokenize tpt: {tokenize_tpt} seqs/s"
|
||||
)
|
||||
end_time = time.time()
|
||||
|
||||
logger.info(f"Generated {len(embeddings)} embeddings, dimension: {embeddings.shape[1]}")
|
||||
logger.info(f"Time taken: {end_time - start_time} seconds")
|
||||
|
||||
# Validate results
|
||||
if np.isnan(embeddings).any() or np.isinf(embeddings).any():
|
||||
|
||||
@@ -192,7 +192,6 @@ class EmbeddingServerManager:
|
||||
stderr_target = None # Direct to console for visible logs
|
||||
|
||||
# Start embedding server subprocess
|
||||
logger.info(f"Starting server process with command: {' '.join(command)}")
|
||||
self.server_process = subprocess.Popen(
|
||||
command,
|
||||
cwd=project_root,
|
||||
|
||||
@@ -1,240 +0,0 @@
|
||||
"""
|
||||
Metadata filtering engine for LEANN search results.
|
||||
|
||||
This module provides generic metadata filtering capabilities that can be applied
|
||||
to search results from any LEANN backend. The filtering supports various
|
||||
operators for different data types including numbers, strings, booleans, and lists.
|
||||
"""
|
||||
|
||||
import logging
|
||||
from typing import Any, Union
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Type alias for filter specifications
|
||||
FilterValue = Union[str, int, float, bool, list]
|
||||
FilterSpec = dict[str, FilterValue]
|
||||
MetadataFilters = dict[str, FilterSpec]
|
||||
|
||||
|
||||
class MetadataFilterEngine:
|
||||
"""
|
||||
Engine for evaluating metadata filters against search results.
|
||||
|
||||
Supports various operators for filtering based on metadata fields:
|
||||
- Comparison: ==, !=, <, <=, >, >=
|
||||
- Membership: in, not_in
|
||||
- String operations: contains, starts_with, ends_with
|
||||
- Boolean operations: is_true, is_false
|
||||
"""
|
||||
|
||||
def __init__(self):
|
||||
"""Initialize the filter engine with supported operators."""
|
||||
self.operators = {
|
||||
"==": self._equals,
|
||||
"!=": self._not_equals,
|
||||
"<": self._less_than,
|
||||
"<=": self._less_than_or_equal,
|
||||
">": self._greater_than,
|
||||
">=": self._greater_than_or_equal,
|
||||
"in": self._in,
|
||||
"not_in": self._not_in,
|
||||
"contains": self._contains,
|
||||
"starts_with": self._starts_with,
|
||||
"ends_with": self._ends_with,
|
||||
"is_true": self._is_true,
|
||||
"is_false": self._is_false,
|
||||
}
|
||||
|
||||
def apply_filters(
|
||||
self, search_results: list[dict[str, Any]], metadata_filters: MetadataFilters
|
||||
) -> list[dict[str, Any]]:
|
||||
"""
|
||||
Apply metadata filters to a list of search results.
|
||||
|
||||
Args:
|
||||
search_results: List of result dictionaries, each containing 'metadata' field
|
||||
metadata_filters: Dictionary of filter specifications
|
||||
Format: {"field_name": {"operator": value}}
|
||||
|
||||
Returns:
|
||||
Filtered list of search results
|
||||
"""
|
||||
if not metadata_filters:
|
||||
return search_results
|
||||
|
||||
logger.debug(f"Applying filters: {metadata_filters}")
|
||||
logger.debug(f"Input results count: {len(search_results)}")
|
||||
|
||||
filtered_results = []
|
||||
for result in search_results:
|
||||
if self._evaluate_filters(result, metadata_filters):
|
||||
filtered_results.append(result)
|
||||
|
||||
logger.debug(f"Filtered results count: {len(filtered_results)}")
|
||||
return filtered_results
|
||||
|
||||
def _evaluate_filters(self, result: dict[str, Any], filters: MetadataFilters) -> bool:
|
||||
"""
|
||||
Evaluate all filters against a single search result.
|
||||
|
||||
All filters must pass (AND logic) for the result to be included.
|
||||
|
||||
Args:
|
||||
result: Full search result dictionary (including metadata, text, etc.)
|
||||
filters: Filter specifications to evaluate
|
||||
|
||||
Returns:
|
||||
True if all filters pass, False otherwise
|
||||
"""
|
||||
for field_name, filter_spec in filters.items():
|
||||
if not self._evaluate_field_filter(result, field_name, filter_spec):
|
||||
return False
|
||||
return True
|
||||
|
||||
def _evaluate_field_filter(
|
||||
self, result: dict[str, Any], field_name: str, filter_spec: FilterSpec
|
||||
) -> bool:
|
||||
"""
|
||||
Evaluate a single field filter against a search result.
|
||||
|
||||
Args:
|
||||
result: Full search result dictionary
|
||||
field_name: Name of the field to filter on
|
||||
filter_spec: Filter specification for this field
|
||||
|
||||
Returns:
|
||||
True if the filter passes, False otherwise
|
||||
"""
|
||||
# First check top-level fields, then check metadata
|
||||
field_value = result.get(field_name)
|
||||
if field_value is None:
|
||||
# Try to get from metadata if not found at top level
|
||||
metadata = result.get("metadata", {})
|
||||
field_value = metadata.get(field_name)
|
||||
|
||||
# Handle missing fields - they fail all filters except existence checks
|
||||
if field_value is None:
|
||||
logger.debug(f"Field '{field_name}' not found in result or metadata")
|
||||
return False
|
||||
|
||||
# Evaluate each operator in the filter spec
|
||||
for operator, expected_value in filter_spec.items():
|
||||
if operator not in self.operators:
|
||||
logger.warning(f"Unsupported operator: {operator}")
|
||||
return False
|
||||
|
||||
try:
|
||||
if not self.operators[operator](field_value, expected_value):
|
||||
logger.debug(
|
||||
f"Filter failed: {field_name} {operator} {expected_value} "
|
||||
f"(actual: {field_value})"
|
||||
)
|
||||
return False
|
||||
except Exception as e:
|
||||
logger.warning(
|
||||
f"Error evaluating filter {field_name} {operator} {expected_value}: {e}"
|
||||
)
|
||||
return False
|
||||
|
||||
return True
|
||||
|
||||
# Comparison operators
|
||||
def _equals(self, field_value: Any, expected_value: Any) -> bool:
|
||||
"""Check if field value equals expected value."""
|
||||
return field_value == expected_value
|
||||
|
||||
def _not_equals(self, field_value: Any, expected_value: Any) -> bool:
|
||||
"""Check if field value does not equal expected value."""
|
||||
return field_value != expected_value
|
||||
|
||||
def _less_than(self, field_value: Any, expected_value: Any) -> bool:
|
||||
"""Check if field value is less than expected value."""
|
||||
return self._numeric_compare(field_value, expected_value, lambda a, b: a < b)
|
||||
|
||||
def _less_than_or_equal(self, field_value: Any, expected_value: Any) -> bool:
|
||||
"""Check if field value is less than or equal to expected value."""
|
||||
return self._numeric_compare(field_value, expected_value, lambda a, b: a <= b)
|
||||
|
||||
def _greater_than(self, field_value: Any, expected_value: Any) -> bool:
|
||||
"""Check if field value is greater than expected value."""
|
||||
return self._numeric_compare(field_value, expected_value, lambda a, b: a > b)
|
||||
|
||||
def _greater_than_or_equal(self, field_value: Any, expected_value: Any) -> bool:
|
||||
"""Check if field value is greater than or equal to expected value."""
|
||||
return self._numeric_compare(field_value, expected_value, lambda a, b: a >= b)
|
||||
|
||||
# Membership operators
|
||||
def _in(self, field_value: Any, expected_value: Any) -> bool:
|
||||
"""Check if field value is in the expected list/collection."""
|
||||
if not isinstance(expected_value, (list, tuple, set)):
|
||||
raise ValueError("'in' operator requires a list, tuple, or set")
|
||||
return field_value in expected_value
|
||||
|
||||
def _not_in(self, field_value: Any, expected_value: Any) -> bool:
|
||||
"""Check if field value is not in the expected list/collection."""
|
||||
if not isinstance(expected_value, (list, tuple, set)):
|
||||
raise ValueError("'not_in' operator requires a list, tuple, or set")
|
||||
return field_value not in expected_value
|
||||
|
||||
# String operators
|
||||
def _contains(self, field_value: Any, expected_value: Any) -> bool:
|
||||
"""Check if field value contains the expected substring."""
|
||||
field_str = str(field_value)
|
||||
expected_str = str(expected_value)
|
||||
return expected_str in field_str
|
||||
|
||||
def _starts_with(self, field_value: Any, expected_value: Any) -> bool:
|
||||
"""Check if field value starts with the expected prefix."""
|
||||
field_str = str(field_value)
|
||||
expected_str = str(expected_value)
|
||||
return field_str.startswith(expected_str)
|
||||
|
||||
def _ends_with(self, field_value: Any, expected_value: Any) -> bool:
|
||||
"""Check if field value ends with the expected suffix."""
|
||||
field_str = str(field_value)
|
||||
expected_str = str(expected_value)
|
||||
return field_str.endswith(expected_str)
|
||||
|
||||
# Boolean operators
|
||||
def _is_true(self, field_value: Any, expected_value: Any) -> bool:
|
||||
"""Check if field value is truthy."""
|
||||
return bool(field_value)
|
||||
|
||||
def _is_false(self, field_value: Any, expected_value: Any) -> bool:
|
||||
"""Check if field value is falsy."""
|
||||
return not bool(field_value)
|
||||
|
||||
# Helper methods
|
||||
def _numeric_compare(self, field_value: Any, expected_value: Any, compare_func) -> bool:
|
||||
"""
|
||||
Helper for numeric comparisons with type coercion.
|
||||
|
||||
Args:
|
||||
field_value: Value from metadata
|
||||
expected_value: Value to compare against
|
||||
compare_func: Comparison function to apply
|
||||
|
||||
Returns:
|
||||
Result of comparison
|
||||
"""
|
||||
try:
|
||||
# Try to convert both values to numbers for comparison
|
||||
if isinstance(field_value, str) and isinstance(expected_value, str):
|
||||
# String comparison if both are strings
|
||||
return compare_func(field_value, expected_value)
|
||||
|
||||
# Numeric comparison - attempt to convert to float
|
||||
field_num = (
|
||||
float(field_value) if not isinstance(field_value, (int, float)) else field_value
|
||||
)
|
||||
expected_num = (
|
||||
float(expected_value)
|
||||
if not isinstance(expected_value, (int, float))
|
||||
else expected_value
|
||||
)
|
||||
|
||||
return compare_func(field_num, expected_num)
|
||||
except (ValueError, TypeError):
|
||||
# Fall back to string comparison if numeric conversion fails
|
||||
return compare_func(str(field_value), str(expected_value))
|
||||
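To make the filter engine shown in the hunk above easier to follow, here is a short, hedged sketch of calling `apply_filters` directly on plain result dictionaries. The absolute import path is an assumption based on the relative `from .metadata_filter import MetadataFilterEngine` line earlier in this diff; the result dicts are made up for illustration.

```python
# Hypothetical import path; the diff only shows a package-relative import.
from leann.metadata_filter import MetadataFilterEngine

engine = MetadataFilterEngine()

results = [
    {"id": "1", "score": 0.9, "text": "Chapter 3 scene", "metadata": {"chapter": 3, "location": "garden"}},
    {"id": "2", "score": 0.8, "text": "Chapter 9 scene", "metadata": {"chapter": 9, "location": "Queen's court"}},
]

# Keep only results at or before chapter 5 whose location starts with "g".
filtered = engine.apply_filters(
    results,
    {"chapter": {"<=": 5}, "location": {"starts_with": "g"}},
)
assert [r["id"] for r in filtered] == ["1"]
```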
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
|
||||
|
||||
[project]
|
||||
name = "leann"
|
||||
version = "0.3.2"
|
||||
version = "0.3.1"
|
||||
description = "LEANN - The smallest vector index in the world. RAG Everything with LEANN!"
|
||||
readme = "README.md"
|
||||
requires-python = ">=3.9"
|
||||
|
||||
@@ -46,13 +46,6 @@ dependencies = [
|
||||
"pathspec>=0.12.1",
|
||||
"nbconvert>=7.16.6",
|
||||
"gitignore-parser>=0.1.12",
|
||||
# AST-aware code chunking dependencies
|
||||
"astchunk>=0.1.0",
|
||||
"tree-sitter>=0.20.0",
|
||||
"tree-sitter-python>=0.20.0",
|
||||
"tree-sitter-java>=0.20.0",
|
||||
"tree-sitter-c-sharp>=0.20.0",
|
||||
"tree-sitter-typescript>=0.20.0",
|
||||
]
|
||||
|
||||
[project.optional-dependencies]
|
||||
|
||||
@@ -1,397 +0,0 @@
|
||||
"""
|
||||
Test suite for astchunk integration with LEANN.
|
||||
Tests AST-aware chunking functionality, language detection, and fallback mechanisms.
|
||||
"""
|
||||
|
||||
import os
|
||||
import subprocess
|
||||
import sys
|
||||
import tempfile
|
||||
from pathlib import Path
|
||||
from unittest.mock import patch
|
||||
|
||||
import pytest
|
||||
|
||||
# Add apps directory to path for imports
|
||||
sys.path.insert(0, str(Path(__file__).parent.parent / "apps"))
|
||||
|
||||
from typing import Optional
|
||||
|
||||
from chunking import (
|
||||
create_ast_chunks,
|
||||
create_text_chunks,
|
||||
create_traditional_chunks,
|
||||
detect_code_files,
|
||||
get_language_from_extension,
|
||||
)
|
||||
|
||||
|
||||
class MockDocument:
|
||||
"""Mock LlamaIndex Document for testing."""
|
||||
|
||||
def __init__(self, content: str, file_path: str = "", metadata: Optional[dict] = None):
|
||||
self.content = content
|
||||
self.metadata = metadata or {}
|
||||
if file_path:
|
||||
self.metadata["file_path"] = file_path
|
||||
|
||||
def get_content(self) -> str:
|
||||
return self.content
|
||||
|
||||
|
||||
class TestCodeFileDetection:
|
||||
"""Test code file detection and language mapping."""
|
||||
|
||||
def test_detect_code_files_python(self):
|
||||
"""Test detection of Python files."""
|
||||
docs = [
|
||||
MockDocument("print('hello')", "/path/to/file.py"),
|
||||
MockDocument("This is text", "/path/to/file.txt"),
|
||||
]
|
||||
|
||||
code_docs, text_docs = detect_code_files(docs)
|
||||
|
||||
assert len(code_docs) == 1
|
||||
assert len(text_docs) == 1
|
||||
assert code_docs[0].metadata["language"] == "python"
|
||||
assert code_docs[0].metadata["is_code"] is True
|
||||
assert text_docs[0].metadata["is_code"] is False
|
||||
|
||||
def test_detect_code_files_multiple_languages(self):
|
||||
"""Test detection of multiple programming languages."""
|
||||
docs = [
|
||||
MockDocument("def func():", "/path/to/script.py"),
|
||||
MockDocument("public class Test {}", "/path/to/Test.java"),
|
||||
MockDocument("interface ITest {}", "/path/to/test.ts"),
|
||||
MockDocument("using System;", "/path/to/Program.cs"),
|
||||
MockDocument("Regular text content", "/path/to/document.txt"),
|
||||
]
|
||||
|
||||
code_docs, text_docs = detect_code_files(docs)
|
||||
|
||||
assert len(code_docs) == 4
|
||||
assert len(text_docs) == 1
|
||||
|
||||
languages = [doc.metadata["language"] for doc in code_docs]
|
||||
assert "python" in languages
|
||||
assert "java" in languages
|
||||
assert "typescript" in languages
|
||||
assert "csharp" in languages
|
||||
|
||||
def test_detect_code_files_no_file_path(self):
|
||||
"""Test handling of documents without file paths."""
|
||||
docs = [
|
||||
MockDocument("some content"),
|
||||
MockDocument("other content", metadata={"some_key": "value"}),
|
||||
]
|
||||
|
||||
code_docs, text_docs = detect_code_files(docs)
|
||||
|
||||
assert len(code_docs) == 0
|
||||
assert len(text_docs) == 2
|
||||
for doc in text_docs:
|
||||
assert doc.metadata["is_code"] is False
|
||||
|
||||
def test_get_language_from_extension(self):
|
||||
"""Test language detection from file extensions."""
|
||||
assert get_language_from_extension("test.py") == "python"
|
||||
assert get_language_from_extension("Test.java") == "java"
|
||||
assert get_language_from_extension("component.tsx") == "typescript"
|
||||
assert get_language_from_extension("Program.cs") == "csharp"
|
||||
assert get_language_from_extension("document.txt") is None
|
||||
assert get_language_from_extension("") is None
|
||||
|
||||
|
||||
class TestChunkingFunctions:
|
||||
"""Test various chunking functionality."""
|
||||
|
||||
def test_create_traditional_chunks(self):
|
||||
"""Test traditional text chunking."""
|
||||
docs = [
|
||||
MockDocument(
|
||||
"This is a test document. It has multiple sentences. We want to test chunking."
|
||||
)
|
||||
]
|
||||
|
||||
chunks = create_traditional_chunks(docs, chunk_size=50, chunk_overlap=10)
|
||||
|
||||
assert len(chunks) > 0
|
||||
assert all(isinstance(chunk, str) for chunk in chunks)
|
||||
assert all(len(chunk.strip()) > 0 for chunk in chunks)
|
||||
|
||||
def test_create_traditional_chunks_empty_docs(self):
|
||||
"""Test traditional chunking with empty documents."""
|
||||
chunks = create_traditional_chunks([], chunk_size=50, chunk_overlap=10)
|
||||
assert chunks == []
|
||||
|
||||
@pytest.mark.skipif(
|
||||
os.environ.get("CI") == "true",
|
||||
reason="Skip astchunk tests in CI - dependency may not be available",
|
||||
)
|
||||
def test_create_ast_chunks_with_astchunk_available(self):
|
||||
"""Test AST chunking when astchunk is available."""
|
||||
python_code = '''
|
||||
def hello_world():
|
||||
"""Print hello world message."""
|
||||
print("Hello, World!")
|
||||
|
||||
def add_numbers(a, b):
|
||||
"""Add two numbers and return the result."""
|
||||
return a + b
|
||||
|
||||
class Calculator:
|
||||
"""A simple calculator class."""
|
||||
|
||||
def __init__(self):
|
||||
self.history = []
|
||||
|
||||
def add(self, a, b):
|
||||
result = a + b
|
||||
self.history.append(f"{a} + {b} = {result}")
|
||||
return result
|
||||
'''
|
||||
|
||||
docs = [MockDocument(python_code, "/test/calculator.py", {"language": "python"})]
|
||||
|
||||
try:
|
||||
chunks = create_ast_chunks(docs, max_chunk_size=200, chunk_overlap=50)
|
||||
|
||||
# Should have multiple chunks due to different functions/classes
|
||||
assert len(chunks) > 0
|
||||
assert all(isinstance(chunk, str) for chunk in chunks)
|
||||
assert all(len(chunk.strip()) > 0 for chunk in chunks)
|
||||
|
||||
# Check that code structure is somewhat preserved
|
||||
combined_content = " ".join(chunks)
|
||||
assert "def hello_world" in combined_content
|
||||
assert "class Calculator" in combined_content
|
||||
|
||||
except ImportError:
|
||||
# astchunk not available, should fall back to traditional chunking
|
||||
chunks = create_ast_chunks(docs, max_chunk_size=200, chunk_overlap=50)
|
||||
assert len(chunks) > 0 # Should still get chunks from fallback
|
||||
|
||||
def test_create_ast_chunks_fallback_to_traditional(self):
|
||||
"""Test AST chunking falls back to traditional when astchunk is not available."""
|
||||
docs = [MockDocument("def test(): pass", "/test/script.py", {"language": "python"})]
|
||||
|
||||
# Mock astchunk import to fail
|
||||
with patch("chunking.create_ast_chunks"):
|
||||
# First call (actual test) should import astchunk and potentially fail
|
||||
# Let's call the actual function to test the import error handling
|
||||
chunks = create_ast_chunks(docs)
|
||||
|
||||
# Should return some chunks (either from astchunk or fallback)
|
||||
assert isinstance(chunks, list)
|
||||
|
||||
def test_create_text_chunks_traditional_mode(self):
|
||||
"""Test text chunking in traditional mode."""
|
||||
docs = [
|
||||
MockDocument("def test(): pass", "/test/script.py"),
|
||||
MockDocument("This is regular text.", "/test/doc.txt"),
|
||||
]
|
||||
|
||||
chunks = create_text_chunks(docs, use_ast_chunking=False, chunk_size=50, chunk_overlap=10)
|
||||
|
||||
assert len(chunks) > 0
|
||||
assert all(isinstance(chunk, str) for chunk in chunks)
|
||||
|
||||
def test_create_text_chunks_ast_mode(self):
|
||||
"""Test text chunking in AST mode."""
|
||||
docs = [
|
||||
MockDocument("def test(): pass", "/test/script.py"),
|
||||
MockDocument("This is regular text.", "/test/doc.txt"),
|
||||
]
|
||||
|
||||
chunks = create_text_chunks(
|
||||
docs,
|
||||
use_ast_chunking=True,
|
||||
ast_chunk_size=100,
|
||||
ast_chunk_overlap=20,
|
||||
chunk_size=50,
|
||||
chunk_overlap=10,
|
||||
)
|
||||
|
||||
assert len(chunks) > 0
|
||||
assert all(isinstance(chunk, str) for chunk in chunks)
|
||||
|
||||
def test_create_text_chunks_custom_extensions(self):
|
||||
"""Test text chunking with custom code file extensions."""
|
||||
docs = [
|
||||
MockDocument("function test() {}", "/test/script.js"), # Not in default extensions
|
||||
MockDocument("Regular text", "/test/doc.txt"),
|
||||
]
|
||||
|
||||
# First without custom extensions - should treat .js as text
|
||||
chunks_without = create_text_chunks(docs, use_ast_chunking=True, code_file_extensions=None)
|
||||
|
||||
# Then with custom extensions - should treat .js as code
|
||||
chunks_with = create_text_chunks(
|
||||
docs, use_ast_chunking=True, code_file_extensions=[".js", ".jsx"]
|
||||
)
|
||||
|
||||
# Both should return chunks
|
||||
assert len(chunks_without) > 0
|
||||
assert len(chunks_with) > 0
|
||||
|
||||
|
||||
class TestIntegrationWithDocumentRAG:
|
||||
"""Integration tests with the document RAG system."""
|
||||
|
||||
@pytest.fixture
|
||||
def temp_code_dir(self):
|
||||
"""Create a temporary directory with sample code files."""
|
||||
with tempfile.TemporaryDirectory() as temp_dir:
|
||||
temp_path = Path(temp_dir)
|
||||
|
||||
# Create sample Python file
|
||||
python_file = temp_path / "example.py"
|
||||
python_file.write_text('''
|
||||
def fibonacci(n):
|
||||
"""Calculate fibonacci number."""
|
||||
if n <= 1:
|
||||
return n
|
||||
return fibonacci(n-1) + fibonacci(n-2)
|
||||
|
||||
class MathUtils:
|
||||
@staticmethod
|
||||
def factorial(n):
|
||||
if n <= 1:
|
||||
return 1
|
||||
return n * MathUtils.factorial(n-1)
|
||||
''')
|
||||
|
||||
# Create sample text file
|
||||
text_file = temp_path / "readme.txt"
|
||||
text_file.write_text("This is a sample text file for testing purposes.")
|
||||
|
||||
yield temp_path
|
||||
|
||||
@pytest.mark.skipif(
|
||||
os.environ.get("CI") == "true",
|
||||
reason="Skip integration tests in CI to avoid dependency issues",
|
||||
)
|
||||
def test_document_rag_with_ast_chunking(self, temp_code_dir):
|
||||
"""Test document RAG with AST chunking enabled."""
|
||||
with tempfile.TemporaryDirectory() as index_dir:
|
||||
cmd = [
|
||||
sys.executable,
|
||||
"apps/document_rag.py",
|
||||
"--llm",
|
||||
"simulated",
|
||||
"--embedding-model",
|
||||
"facebook/contriever",
|
||||
"--embedding-mode",
|
||||
"sentence-transformers",
|
||||
"--index-dir",
|
||||
index_dir,
|
||||
"--data-dir",
|
||||
str(temp_code_dir),
|
||||
"--enable-code-chunking",
|
||||
"--query",
|
||||
"How does the fibonacci function work?",
|
||||
]
|
||||
|
||||
env = os.environ.copy()
|
||||
env["HF_HUB_DISABLE_SYMLINKS"] = "1"
|
||||
env["TOKENIZERS_PARALLELISM"] = "false"
|
||||
|
||||
try:
|
||||
result = subprocess.run(
|
||||
cmd,
|
||||
capture_output=True,
|
||||
text=True,
|
||||
timeout=300, # 5 minutes
|
||||
env=env,
|
||||
)
|
||||
|
||||
# Should succeed even if astchunk is not available (fallback)
|
||||
assert result.returncode == 0, f"Command failed: {result.stderr}"
|
||||
|
||||
output = result.stdout + result.stderr
|
||||
assert "Index saved to" in output or "Using existing index" in output
|
||||
|
||||
except subprocess.TimeoutExpired:
|
||||
pytest.skip("Test timed out - likely due to model download in CI")
|
||||
|
||||
@pytest.mark.skipif(
|
||||
os.environ.get("CI") == "true",
|
||||
reason="Skip integration tests in CI to avoid dependency issues",
|
||||
)
|
||||
def test_code_rag_application(self, temp_code_dir):
|
||||
"""Test the specialized code RAG application."""
|
||||
with tempfile.TemporaryDirectory() as index_dir:
|
||||
cmd = [
|
||||
sys.executable,
|
||||
"apps/code_rag.py",
|
||||
"--llm",
|
||||
"simulated",
|
||||
"--embedding-model",
|
||||
"facebook/contriever",
|
||||
"--index-dir",
|
||||
index_dir,
|
||||
"--repo-dir",
|
||||
str(temp_code_dir),
|
||||
"--query",
|
||||
"What classes are defined in this code?",
|
||||
]
|
||||
|
||||
env = os.environ.copy()
|
||||
env["HF_HUB_DISABLE_SYMLINKS"] = "1"
|
||||
env["TOKENIZERS_PARALLELISM"] = "false"
|
||||
|
||||
try:
|
||||
result = subprocess.run(cmd, capture_output=True, text=True, timeout=300, env=env)
|
||||
|
||||
# Should succeed
|
||||
assert result.returncode == 0, f"Command failed: {result.stderr}"
|
||||
|
||||
output = result.stdout + result.stderr
|
||||
assert "Using AST-aware chunking" in output or "traditional chunking" in output
|
||||
|
||||
except subprocess.TimeoutExpired:
|
||||
pytest.skip("Test timed out - likely due to model download in CI")
|
||||
|
||||
|
||||
class TestErrorHandling:
    """Test error handling and edge cases."""

    def test_text_chunking_empty_documents(self):
        """Test text chunking with empty document list."""
        chunks = create_text_chunks([])
        assert chunks == []

    def test_text_chunking_invalid_parameters(self):
        """Test text chunking with invalid parameters."""
        docs = [MockDocument("test content")]

        # Should handle zero or negative chunk sizes gracefully
        chunks = create_text_chunks(
            docs, chunk_size=0, chunk_overlap=0, ast_chunk_size=0, ast_chunk_overlap=0
        )

        # Should still return some result
        assert isinstance(chunks, list)

    def test_create_ast_chunks_no_language(self):
        """Test AST chunking with documents missing language metadata."""
        docs = [MockDocument("def test(): pass", "/test/script.py")]  # No language set

        chunks = create_ast_chunks(docs)

        # Should fall back to traditional chunking
        assert isinstance(chunks, list)
        assert len(chunks) >= 0  # May be empty if fallback also fails

    def test_create_ast_chunks_empty_content(self):
        """Test AST chunking with empty content."""
        docs = [MockDocument("", "/test/script.py", {"language": "python"})]

        chunks = create_ast_chunks(docs)

        # Should handle empty content gracefully
        assert isinstance(chunks, list)


if __name__ == "__main__":
    pytest.main([__file__, "-v"])
@@ -57,51 +57,6 @@ def test_document_rag_simulated(test_data_dir):
        assert "This is a simulated answer" in output


@pytest.mark.skipif(
    os.environ.get("CI") == "true",
    reason="Skip AST chunking tests in CI to avoid dependency issues",
)
def test_document_rag_with_ast_chunking(test_data_dir):
    """Test document_rag with AST-aware chunking enabled."""
    with tempfile.TemporaryDirectory() as temp_dir:
        # Use a subdirectory that doesn't exist yet to force index creation
        index_dir = Path(temp_dir) / "test_ast_index"
        cmd = [
            sys.executable,
            "apps/document_rag.py",
            "--llm",
            "simulated",
            "--embedding-model",
            "facebook/contriever",
            "--embedding-mode",
            "sentence-transformers",
            "--index-dir",
            str(index_dir),
            "--data-dir",
            str(test_data_dir),
            "--enable-code-chunking",  # Enable AST chunking
            "--query",
            "What is Pride and Prejudice about?",
        ]

        env = os.environ.copy()
        env["HF_HUB_DISABLE_SYMLINKS"] = "1"
        env["TOKENIZERS_PARALLELISM"] = "false"

        result = subprocess.run(cmd, capture_output=True, text=True, timeout=600, env=env)

        # Check return code
        assert result.returncode == 0, f"Command failed: {result.stderr}"

        # Verify output
        output = result.stdout + result.stderr
        assert "Index saved to" in output or "Using existing index" in output
        assert "This is a simulated answer" in output

        # Should mention AST chunking if code files are present
        # (might not be relevant for the test data, but command should succeed)


@pytest.mark.skipif(not os.environ.get("OPENAI_API_KEY"), reason="OpenAI API key not available")
@pytest.mark.skipif(
    os.environ.get("CI") == "true", reason="Skip OpenAI tests in CI to avoid API costs"
@@ -1,365 +0,0 @@
"""
Comprehensive tests for metadata filtering functionality.

This module tests the MetadataFilterEngine class and its integration
with the LEANN search system.
"""

import os

# Import the modules we're testing
import sys
from unittest.mock import Mock, patch

sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../packages/leann-core/src"))

from leann.api import PassageManager, SearchResult
from leann.metadata_filter import MetadataFilterEngine


class TestMetadataFilterEngine:
    """Test suite for the MetadataFilterEngine class."""

    def setup_method(self):
        """Setup test fixtures."""
        self.engine = MetadataFilterEngine()

        # Sample search results for testing
        self.sample_results = [
            {
                "id": "doc1",
                "score": 0.95,
                "text": "This is chapter 1 content",
                "metadata": {
                    "chapter": 1,
                    "character": "Alice",
                    "tags": ["adventure", "fantasy"],
                    "word_count": 150,
                    "is_published": True,
                    "genre": "fiction",
                },
            },
            {
                "id": "doc2",
                "score": 0.87,
                "text": "This is chapter 3 content",
                "metadata": {
                    "chapter": 3,
                    "character": "Bob",
                    "tags": ["mystery", "thriller"],
                    "word_count": 250,
                    "is_published": True,
                    "genre": "fiction",
                },
            },
            {
                "id": "doc3",
                "score": 0.82,
                "text": "This is chapter 5 content",
                "metadata": {
                    "chapter": 5,
                    "character": "Alice",
                    "tags": ["romance", "drama"],
                    "word_count": 300,
                    "is_published": False,
                    "genre": "non-fiction",
                },
            },
            {
                "id": "doc4",
                "score": 0.78,
                "text": "This is chapter 10 content",
                "metadata": {
                    "chapter": 10,
                    "character": "Charlie",
                    "tags": ["action", "adventure"],
                    "word_count": 400,
                    "is_published": True,
                    "genre": "fiction",
                },
            },
        ]

    def test_engine_initialization(self):
        """Test that the filter engine initializes correctly."""
        assert self.engine is not None
        assert len(self.engine.operators) > 0
        assert "==" in self.engine.operators
        assert "contains" in self.engine.operators
        assert "in" in self.engine.operators

    def test_direct_instantiation(self):
        """Test direct instantiation of the engine."""
        engine = MetadataFilterEngine()
        assert isinstance(engine, MetadataFilterEngine)

    def test_no_filters_returns_all_results(self):
        """Test that passing None or empty filters returns all results."""
        # Test with None
        result = self.engine.apply_filters(self.sample_results, None)
        assert len(result) == len(self.sample_results)

        # Test with empty dict
        result = self.engine.apply_filters(self.sample_results, {})
        assert len(result) == len(self.sample_results)

    # Test comparison operators
    def test_equals_filter(self):
        """Test equals (==) filter."""
        filters = {"chapter": {"==": 1}}
        result = self.engine.apply_filters(self.sample_results, filters)
        assert len(result) == 1
        assert result[0]["id"] == "doc1"

    def test_not_equals_filter(self):
        """Test not equals (!=) filter."""
        filters = {"genre": {"!=": "fiction"}}
        result = self.engine.apply_filters(self.sample_results, filters)
        assert len(result) == 1
        assert result[0]["metadata"]["genre"] == "non-fiction"

    def test_less_than_filter(self):
        """Test less than (<) filter."""
        filters = {"chapter": {"<": 5}}
        result = self.engine.apply_filters(self.sample_results, filters)
        assert len(result) == 2
        chapters = [r["metadata"]["chapter"] for r in result]
        assert all(ch < 5 for ch in chapters)

    def test_less_than_or_equal_filter(self):
        """Test less than or equal (<=) filter."""
        filters = {"chapter": {"<=": 5}}
        result = self.engine.apply_filters(self.sample_results, filters)
        assert len(result) == 3
        chapters = [r["metadata"]["chapter"] for r in result]
        assert all(ch <= 5 for ch in chapters)

    def test_greater_than_filter(self):
        """Test greater than (>) filter."""
        filters = {"word_count": {">": 200}}
        result = self.engine.apply_filters(self.sample_results, filters)
        assert len(result) == 3  # Documents with word_count 250, 300, 400
        word_counts = [r["metadata"]["word_count"] for r in result]
        assert all(wc > 200 for wc in word_counts)

    def test_greater_than_or_equal_filter(self):
        """Test greater than or equal (>=) filter."""
        filters = {"word_count": {">=": 250}}
        result = self.engine.apply_filters(self.sample_results, filters)
        assert len(result) == 3
        word_counts = [r["metadata"]["word_count"] for r in result]
        assert all(wc >= 250 for wc in word_counts)

    # Test membership operators
    def test_in_filter(self):
        """Test in filter."""
        filters = {"character": {"in": ["Alice", "Bob"]}}
        result = self.engine.apply_filters(self.sample_results, filters)
        assert len(result) == 3
        characters = [r["metadata"]["character"] for r in result]
        assert all(ch in ["Alice", "Bob"] for ch in characters)

    def test_not_in_filter(self):
        """Test not_in filter."""
        filters = {"character": {"not_in": ["Alice", "Bob"]}}
        result = self.engine.apply_filters(self.sample_results, filters)
        assert len(result) == 1
        assert result[0]["metadata"]["character"] == "Charlie"

    # Test string operators
    def test_contains_filter(self):
        """Test contains filter."""
        filters = {"genre": {"contains": "fiction"}}
        result = self.engine.apply_filters(self.sample_results, filters)
        assert len(result) == 4  # Both "fiction" and "non-fiction"

    def test_starts_with_filter(self):
        """Test starts_with filter."""
        filters = {"genre": {"starts_with": "non"}}
        result = self.engine.apply_filters(self.sample_results, filters)
        assert len(result) == 1
        assert result[0]["metadata"]["genre"] == "non-fiction"

    def test_ends_with_filter(self):
        """Test ends_with filter."""
        filters = {"text": {"ends_with": "content"}}
        result = self.engine.apply_filters(self.sample_results, filters)
        assert len(result) == 4  # All sample texts end with "content"

    # Test boolean operators
    def test_is_true_filter(self):
        """Test is_true filter."""
        filters = {"is_published": {"is_true": True}}
        result = self.engine.apply_filters(self.sample_results, filters)
        assert len(result) == 3
        assert all(r["metadata"]["is_published"] for r in result)

    def test_is_false_filter(self):
        """Test is_false filter."""
        filters = {"is_published": {"is_false": False}}
        result = self.engine.apply_filters(self.sample_results, filters)
        assert len(result) == 1
        assert not result[0]["metadata"]["is_published"]

    # Test compound filters (AND logic)
    def test_compound_filters(self):
        """Test multiple filters applied together (AND logic)."""
        filters = {"genre": {"==": "fiction"}, "chapter": {"<=": 5}}
        result = self.engine.apply_filters(self.sample_results, filters)
        assert len(result) == 2
        for r in result:
            assert r["metadata"]["genre"] == "fiction"
            assert r["metadata"]["chapter"] <= 5

    def test_multiple_operators_same_field(self):
        """Test multiple operators on the same field."""
        filters = {"word_count": {">=": 200, "<=": 350}}
        result = self.engine.apply_filters(self.sample_results, filters)
        assert len(result) == 2
        for r in result:
            wc = r["metadata"]["word_count"]
            assert 200 <= wc <= 350

    # Test edge cases
    def test_missing_field_fails_filter(self):
        """Test that missing metadata fields fail filters."""
        filters = {"nonexistent_field": {"==": "value"}}
        result = self.engine.apply_filters(self.sample_results, filters)
        assert len(result) == 0

    def test_invalid_operator(self):
        """Test that invalid operators are handled gracefully."""
        filters = {"chapter": {"invalid_op": 1}}
        result = self.engine.apply_filters(self.sample_results, filters)
        assert len(result) == 0  # Should filter out all results

    def test_type_coercion_numeric(self):
        """Test numeric type coercion in comparisons."""
        # Add a result with string chapter number
        test_results = [
            *self.sample_results,
            {
                "id": "doc5",
                "score": 0.75,
                "text": "String chapter test",
                "metadata": {"chapter": "2", "genre": "test"},
            },
        ]

        filters = {"chapter": {"<": 3}}
        result = self.engine.apply_filters(test_results, filters)
        # Should include doc1 (chapter=1) and doc5 (chapter="2")
        assert len(result) == 2
        ids = [r["id"] for r in result]
        assert "doc1" in ids
        assert "doc5" in ids

    def test_list_membership_with_nested_tags(self):
        """Test membership operations with list metadata."""
        # Note: This tests the metadata structure, not list field filtering
        # For list field filtering, we'd need to modify the test data
        filters = {"character": {"in": ["Alice"]}}
        result = self.engine.apply_filters(self.sample_results, filters)
        assert len(result) == 2
        assert all(r["metadata"]["character"] == "Alice" for r in result)

    def test_empty_results_list(self):
        """Test filtering on empty results list."""
        filters = {"chapter": {"==": 1}}
        result = self.engine.apply_filters([], filters)
        assert len(result) == 0


class TestPassageManagerFiltering:
    """Test suite for PassageManager filtering integration."""

    def setup_method(self):
        """Setup test fixtures."""
        # Mock the passage manager without actual file I/O
        self.passage_manager = Mock(spec=PassageManager)
        self.passage_manager.filter_engine = MetadataFilterEngine()

        # Sample SearchResult objects
        self.search_results = [
            SearchResult(
                id="doc1",
                score=0.95,
                text="Chapter 1 content",
                metadata={"chapter": 1, "character": "Alice"},
            ),
            SearchResult(
                id="doc2",
                score=0.87,
                text="Chapter 5 content",
                metadata={"chapter": 5, "character": "Bob"},
            ),
            SearchResult(
                id="doc3",
                score=0.82,
                text="Chapter 10 content",
                metadata={"chapter": 10, "character": "Alice"},
            ),
        ]

    def test_search_result_filtering(self):
        """Test filtering SearchResult objects."""
        # Create a real PassageManager instance just for the filtering method
        # We'll mock the file operations
        with patch("builtins.open"), patch("json.loads"), patch("pickle.load"):
            pm = PassageManager([{"type": "jsonl", "path": "test.jsonl"}])

            filters = {"chapter": {"<=": 5}}
            result = pm.filter_search_results(self.search_results, filters)

            assert len(result) == 2
            chapters = [r.metadata["chapter"] for r in result]
            assert all(ch <= 5 for ch in chapters)

    def test_filter_search_results_no_filters(self):
        """Test that None filters return all results."""
        with patch("builtins.open"), patch("json.loads"), patch("pickle.load"):
            pm = PassageManager([{"type": "jsonl", "path": "test.jsonl"}])

            result = pm.filter_search_results(self.search_results, None)
            assert len(result) == len(self.search_results)

    def test_filter_maintains_search_result_type(self):
        """Test that filtering returns SearchResult objects."""
        with patch("builtins.open"), patch("json.loads"), patch("pickle.load"):
            pm = PassageManager([{"type": "jsonl", "path": "test.jsonl"}])

            filters = {"character": {"==": "Alice"}}
            result = pm.filter_search_results(self.search_results, filters)

            assert len(result) == 2
            for r in result:
                assert isinstance(r, SearchResult)
                assert r.metadata["character"] == "Alice"


# Integration tests would go here, but they require actual LEANN backend setup
# These would test the full pipeline from LeannSearcher.search() with metadata_filters

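# A minimal sketch of the full-pipeline test described in the comment above,
# left skipped by default. It assumes a prebuilt index at a hypothetical path
# and that LeannSearcher (import location assumed to be leann.api) exposes
# search() with `top_k` and a `metadata_filters` keyword taking the same filter
# shape exercised in the unit tests above; these are assumptions, not verified API.
import pytest


@pytest.mark.skip(reason="Requires a real LEANN backend and a prebuilt index")
def test_search_with_metadata_filters_end_to_end():
    from leann.api import LeannSearcher  # assumed import location

    searcher = LeannSearcher("test_indexes/sample.leann")  # hypothetical index path
    results = searcher.search(
        "What happens in the early chapters?",
        top_k=10,
        metadata_filters={"chapter": {"<=": 5}, "is_published": {"is_true": True}},
    )
    # Every result that survives filtering should satisfy both conditions
    assert all(r.metadata["chapter"] <= 5 for r in results)
    assert all(r.metadata["is_published"] for r in results)
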
if __name__ == "__main__":
    # Run basic smoke tests
    engine = MetadataFilterEngine()

    sample_data = [
        {
            "id": "test1",
            "score": 0.9,
            "text": "Test content",
            "metadata": {"chapter": 1, "published": True},
        }
    ]

    # Test basic filtering
    result = engine.apply_filters(sample_data, {"chapter": {"==": 1}})
    assert len(result) == 1
    print("✅ Basic filtering test passed")

    result = engine.apply_filters(sample_data, {"chapter": {"==": 2}})
    assert len(result) == 0
    print("✅ No match filtering test passed")

    print("🎉 All smoke tests passed!")