Function Signature
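The original signature block was not preserved in this copy. Based on the `/2` arity and the parameters described below, the call shape is roughly the following (the `Arcana.Agent` module name is an assumption):

```elixir
# Rough call shape: takes the agent context and a keyword list of
# options, returns an updated context.
ctx = Arcana.Agent.reason(ctx, max_iterations: 2)
```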
Purpose
This step implements multi-hop reasoning by:
- Asking the LLM if the current results can answer the question
- If not, getting a follow-up query and searching again
- Repeating until the results are sufficient or the maximum number of iterations is reached

Queries are recorded in `queries_tried` to prevent searching the same query twice.
Parameters
- The agent context from the pipeline
- Options for the reason step
Options
- `max_iterations` - Maximum additional searches (default: 2). Limits how many times the agent can perform follow-up searches.
- Custom prompt function (`fn question, chunks -> prompt_string end`) - Allows customizing how the LLM evaluates result sufficiency.
- LLM override - Overrides the LLM function for this step.
Context Updates
- Search results - Updated with additional search results if follow-up searches were performed. Chunks are deduplicated across all searches.
- `queries_tried` - Set of all queries that have been searched (prevents duplicates)
- Iteration count - Number of additional searches performed (0 if results were sufficient)
Examples
Basic Usage
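The original example was not preserved here. As a sketch (the `import Arcana.Agent` form and the `search` step name are assumptions; this page only names `gate/2`, `select/2`, `reason/2`, and rerank):

```elixir
import Arcana.Agent

# Run the initial search, then let reason/2 decide whether
# follow-up searches are needed (default: up to 2).
ctx
|> search()
|> reason()
```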
Multi-Hop Scenario
With Custom Max Iterations
With Custom Prompt
Complete Multi-Hop Pipeline
Default Sufficiency Prompt
Expected LLM Response
When Sufficient
When Insufficient
Deduplication
Results are deduplicated by chunk ID across all searches.
Query Tracking
Prevents infinite loops by tracking tried queries.
Collection Selection for Follow-Up
Follow-up searches use collections in this priority:
- `ctx.collections` (from `select/2`)
- Collection from the first result
- Fallback: `"default"`
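The priority order can be pictured as a small helper (a sketch, not the library's actual source; it assumes chunks carry a `collection` field):

```elixir
# Pick collections for a follow-up search, in priority order.
defp follow_up_collections(ctx, chunks) do
  cond do
    ctx.collections not in [nil, []] -> ctx.collections
    chunks != [] -> [hd(chunks).collection]
    true -> ["default"]
  end
end
```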
Skip Retrieval
If `gate/2` sets `skip_retrieval: true`, reasoning is skipped.
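The pass-through behavior can be pictured as a guard clause on the context (a sketch, assuming `skip_retrieval` lives on the context struct; not the library's actual source):

```elixir
# When gate/2 has decided no retrieval is needed, return the
# context unchanged instead of evaluating sufficiency.
def reason(%{skip_retrieval: true} = ctx, _opts), do: ctx
```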
Telemetry Event
Emits `[:arcana, :agent, :reason]` with metadata.
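One way to observe iteration behavior is a standard `:telemetry` handler; since the metadata keys are not listed in this copy, the handler below just inspects whatever arrives:

```elixir
:telemetry.attach(
  "log-reason-step",
  [:arcana, :agent, :reason],
  fn _event, _measurements, metadata, _config ->
    IO.inspect(metadata, label: "reason step")
  end,
  nil
)
```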
When to Use
Use `reason/2` when:
- Initial search may miss important information
- Questions require connecting multiple pieces of information
- You want the agent to autonomously gather more context
- Complex questions benefit from iterative refinement
Examples of Multi-Hop Reasoning
Example 1: Missing Context
Question: “How do I fix a GenServer that’s crashing?”
- Initial search: “GenServer crashes”
- LLM: “Need more about crash causes”
- Follow-up: “Common GenServer crash causes”
- LLM: “Need more about debugging”
- Follow-up: “GenServer debugging and tracing”
- LLM: “Sufficient”
Example 2: Connecting Concepts
Question: “What’s the relationship between Supervisors and GenServers?”
- Initial search: “Supervisors GenServers relationship”
- LLM: “Need more about supervision trees”
- Follow-up: “Supervisor tree structure”
- LLM: “Sufficient”
Best Practices
- Set a reasonable `max_iterations` - 2-3 is usually sufficient
- Use after the initial search - let `reason/2` handle follow-ups
- Combine with rerank - rerank the final merged results
- Monitor iterations - track via telemetry to tune `max_iterations`
- Consider cost - each iteration adds LLM calls and searches
Trade-offs
Benefits:
- More comprehensive answers
- Handles complex questions requiring multiple information sources
- Autonomous gap-filling

Costs:
- Additional LLM calls (1 per iteration)
- Additional searches (1+ per iteration)
- Increased latency
- Higher token usage
Performance Considerations
- Each iteration adds ~1-2 seconds (LLM eval + search)
- With `max_iterations: 3`, the worst case is 3 additional searches
- Consider user timeout tolerance
- Monitor actual iteration counts to optimize `max_iterations`