Initial commit

skills/langgraph-master/01_core_concepts_edge.md (new file, 170 lines)

# Edge

Control flow that defines transitions between nodes.

## Overview

Edges determine "what to do next". Nodes perform processing, and edges dictate the next action.

## Types of Edges

### 1. Normal Edges (Fixed Transitions)

Always transition to a specific node:

```python
from langgraph.graph import START, END

# From START to node_a
builder.add_edge(START, "node_a")

# From node_a to node_b
builder.add_edge("node_a", "node_b")

# From node_b to end
builder.add_edge("node_b", END)
```

### 2. Conditional Edges (Dynamic Transitions)

Determine the destination based on state:

```python
from typing import Literal

def should_continue(state: State) -> Literal["continue", "end"]:
    if state["iteration"] < state["max_iterations"]:
        return "continue"
    return "end"

# Add conditional edge
builder.add_conditional_edges(
    "agent",
    should_continue,
    {
        "continue": "tools",  # Go to tools if continue
        "end": END            # End if end
    }
)
```

### 3. Entry Points

Define the starting point of the graph:

```python
# Simple entry
builder.add_edge(START, "first_node")

# Conditional entry
builder.add_conditional_edges(
    START,
    route_start,
    {
        "path_a": "node_a",
        "path_b": "node_b"
    }
)
```

## Parallel Execution

Nodes with multiple outgoing edges will have **all destination nodes execute in parallel** in the next step:

```python
# From node_a to multiple nodes
builder.add_edge("node_a", "node_b")
builder.add_edge("node_a", "node_c")

# node_b and node_c execute in parallel
```

To aggregate results from parallel execution, use a Reducer:

```python
from operator import add

class State(TypedDict):
    results: Annotated[list, add]  # Aggregate results from multiple nodes
```

## Edge Control with Command

Specify the next destination from within a node:

```python
from langgraph.types import Command

def smart_node(state: State) -> Command:
    result = analyze(state["data"])

    if result["confidence"] > 0.8:
        return Command(
            update={"result": result},
            goto="finalize"
        )
    else:
        return Command(
            update={"result": result, "needs_review": True},
            goto="human_review"
        )
```

## Conditional Branching Implementation Patterns

### Pattern 1: Tool Call Loop

```python
def should_continue(state: State) -> Literal["continue", "end"]:
    messages = state["messages"]
    last_message = messages[-1]

    # Continue if there are tool calls
    if last_message.tool_calls:
        return "continue"
    return "end"

builder.add_conditional_edges(
    "agent",
    should_continue,
    {
        "continue": "tools",
        "end": END
    }
)
```

### Pattern 2: Routing

```python
def route_query(state: State) -> Literal["search", "calculate", "general"]:
    query = state["query"]

    if "calculate" in query or "+" in query:
        return "calculate"
    elif "search" in query:
        return "search"
    return "general"

builder.add_conditional_edges(
    "router",
    route_query,
    {
        "search": "search_node",
        "calculate": "calculator_node",
        "general": "general_node"
    }
)
```

## Important Principles

1. **Explicit Control Flow**: Transitions should be transparent and traceable
2. **Type Safety**: Explicitly specify destinations with Literal
3. **Leverage Parallel Execution**: Execute independent tasks in parallel

## Related Pages

- [01_core_concepts_node.md](01_core_concepts_node.md) - Node implementation
- [02_graph_architecture_routing.md](02_graph_architecture_routing.md) - Routing patterns
- [05_advanced_features_map_reduce.md](05_advanced_features_map_reduce.md) - Parallel processing patterns

skills/langgraph-master/01_core_concepts_node.md (new file, 132 lines)

# Node

Python functions that execute individual tasks.

## Overview

Nodes are "processing units" that read state, perform some processing, and return updates.

## Basic Implementation

```python
def my_node(state: State) -> dict:
    # Get information from state
    messages = state["messages"]

    # Execute processing
    result = process_messages(messages)

    # Return updates (don't modify state directly)
    return {"result": result, "count": state["count"] + 1}
```

## Types of Nodes

### 1. LLM Call Node

```python
def llm_node(state: State):
    messages = state["messages"]
    response = llm.invoke(messages)

    return {"messages": [response]}
```

### 2. Tool Execution Node

```python
from langgraph.prebuilt import ToolNode

tools = [search_tool, calculator_tool]
tool_node = ToolNode(tools)
```

### 3. Processing Node

```python
def process_node(state: State):
    data = state["raw_data"]

    # Data processing
    processed = clean_and_transform(data)

    return {"processed_data": processed}
```

## Node Signature

Nodes can accept the following parameters:

```python
from langchain_core.runnables import RunnableConfig
from langgraph.types import Command

def advanced_node(
    state: State,
    config: RunnableConfig,  # Optional
) -> dict | Command:
    # Get configuration from config
    thread_id = config["configurable"]["thread_id"]

    # Processing...
    result = ...  # placeholder for the node's actual work

    return {"result": result}
```

## Control with Command API

Specify state updates and control flow simultaneously:

```python
from langgraph.graph import END
from langgraph.types import Command

def decision_node(state: State) -> Command:
    if state["should_continue"]:
        return Command(
            update={"status": "continuing"},
            goto="next_node"
        )
    else:
        return Command(
            update={"status": "done"},
            goto=END
        )
```

## Important Principles

1. **Idempotency**: Return the same output for the same input
2. **Return Updates**: Return update contents instead of directly modifying state (see the sketch below)
3. **Single Responsibility**: Each node does one thing well
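
A minimal sketch of principle 2, reusing the `State` schema from the basic implementation above: contrast mutating the incoming state object with returning an update.

```python
# Anti-pattern: mutating the incoming state in place; with reducers and
# checkpointing, such changes can be lost or applied twice
def bad_counter_node(state: State) -> dict:
    state["count"] = state["count"] + 1  # avoid this
    return state

# Preferred: return only the keys that changed and let LangGraph merge them
def good_counter_node(state: State) -> dict:
    return {"count": state["count"] + 1}
```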

## Adding Nodes

```python
from langgraph.graph import StateGraph

builder = StateGraph(State)

# Add nodes
builder.add_node("analyze", analyze_node)
builder.add_node("decide", decide_node)
builder.add_node("execute", execute_node)

# Add tool node
builder.add_node("tools", tool_node)
```

## Error Handling

```python
def robust_node(state: State) -> dict:
    try:
        result = risky_operation(state["data"])
        return {"result": result, "error": None}
    except Exception as e:
        return {"result": None, "error": str(e)}
```

## Related Pages

- [01_core_concepts_state.md](01_core_concepts_state.md) - How to define State
- [01_core_concepts_edge.md](01_core_concepts_edge.md) - Connections between nodes
- [04_tool_integration_overview.md](04_tool_integration_overview.md) - Tool node details

skills/langgraph-master/01_core_concepts_overview.md (new file, 57 lines)

# 01. Core Concepts

Understanding the three core elements of LangGraph.

## Overview

LangGraph is a framework that models agent workflows as **graphs**. By decomposing complex workflows into **discrete steps (nodes)**, it achieves the following:

- **Improved Resilience**: Create checkpoints at node boundaries
- **Enhanced Visibility**: Enable state inspection between each step
- **Independent Testing**: Easy unit testing of individual nodes
- **Error Handling**: Apply different strategies for each error type

## Three Core Elements

### 1. [State](01_core_concepts_state.md)
- Memory shared across all nodes in the graph
- Snapshot of the current execution state
- Defined with TypedDict or Pydantic models

### 2. [Node](01_core_concepts_node.md)
- Python functions that execute individual tasks
- Receive the current state and return updates
- Basic unit of processing

### 3. [Edge](01_core_concepts_edge.md)
- Define transitions between nodes
- Fixed transitions or conditional branching
- Determine control flow

## Design Philosophy

The core concept of LangGraph is **decomposition into discrete steps**:

```python
# Split agent into individual nodes
graph = StateGraph(State)
graph.add_node("analyze", analyze_node)   # Analysis step
graph.add_node("decide", decide_node)     # Decision step
graph.add_node("execute", execute_node)   # Execution step
```

This approach allows each step to operate independently, building a robust system as a whole.

## Important Principles

1. **Store Raw Data**: Store raw data in State, format prompts dynamically within nodes
2. **Return Updates**: Nodes return update contents instead of directly modifying state
3. **Transparent Control Flow**: Explicitly declare the next destination with Command objects

## Next Steps

For details on each element, refer to the following pages:

- [01_core_concepts_state.md](01_core_concepts_state.md) - State management details
- [01_core_concepts_node.md](01_core_concepts_node.md) - How to implement nodes
- [01_core_concepts_edge.md](01_core_concepts_edge.md) - Edges and control flow

skills/langgraph-master/01_core_concepts_state.md (new file, 102 lines)

# State

Memory shared across all nodes in the graph.

## Overview

State is like a "notebook" that records everything the agent learns and decides. It is a **shared data structure** accessible to all nodes and edges in the graph.

## Definition Methods

### Using TypedDict

```python
from typing import TypedDict

class State(TypedDict):
    messages: list[str]
    user_name: str
    count: int
```

### Using Pydantic Model

```python
from pydantic import BaseModel

class State(BaseModel):
    messages: list[str]
    user_name: str
    count: int = 0  # Default value
```

## Reducer (Controlling Update Methods)

A function that specifies how each key is updated. If not specified, it defaults to **value overwrite**.

### Addition (Adding to List)

```python
from typing import Annotated
from operator import add

class State(TypedDict):
    messages: Annotated[list[str], add]  # Add to existing list
    count: int  # Overwrite
```

### Custom Reducer

```python
def concat_strings(existing: str, new: str) -> str:
    return existing + " " + new

class State(TypedDict):
    text: Annotated[str, concat_strings]
```

## MessagesState (LLM Preset)

For LLM conversations, LangGraph's prebuilt `MessagesState` is convenient:

```python
from langgraph.graph import MessagesState

# This is equivalent to:
class MessagesState(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]
```

The `add_messages` reducer (illustrated below):
- Adds new messages
- Updates existing messages (ID-based)
- Supports OpenAI format shorthand
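
A small sketch of that behavior, calling the reducer directly as a plain function (the message contents and IDs are illustrative):

```python
from langchain_core.messages import AIMessage, HumanMessage
from langgraph.graph.message import add_messages

history = [HumanMessage(content="Hi", id="1")]

# New messages are appended
history = add_messages(history, [AIMessage(content="Hello!", id="2")])

# A message that reuses an existing ID replaces the old one
history = add_messages(history, [AIMessage(content="Hello again!", id="2")])

# OpenAI-style dicts are accepted as shorthand
history = add_messages(history, [{"role": "user", "content": "Thanks"}])
```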

## Important Principles

1. **Store Raw Data**: Format prompts within nodes (see the sketch below)
2. **Clear Schema**: Define types with TypedDict or Pydantic
3. **Control with Reducer**: Explicitly specify update methods
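
A minimal sketch of principle 1, assuming an `llm` client as in the other examples: keep raw values in State and build the prompt inside the node, rather than storing a pre-formatted prompt.

```python
from typing import TypedDict

class QAState(TypedDict):
    user_name: str  # raw data lives in State...
    question: str
    answer: str

def answer_node(state: QAState) -> dict:
    # ...and the prompt is formatted here, at the point of use
    prompt = f"User {state['user_name']} asks: {state['question']}"
    return {"answer": llm.invoke(prompt)}
```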

## Example

```python
from typing import Annotated, TypedDict
from operator import add

class AgentState(TypedDict):
    # Messages are added to the list
    messages: Annotated[list[str], add]

    # User information is overwritten
    user_id: str
    user_name: str

    # Counter is also overwritten
    iteration_count: int
```

## Related Pages

- [01_core_concepts_node.md](01_core_concepts_node.md) - How to use State in nodes
- [03_memory_management_overview.md](03_memory_management_overview.md) - State persistence

skills/langgraph-master/02_graph_architecture_agent.md (new file, 338 lines)

# Agent (Autonomous Tool Usage)

A pattern where the LLM dynamically determines tool selection to handle unpredictable problem-solving.

## Overview

The Agent pattern follows **ReAct** (Reasoning + Acting), where the LLM dynamically selects and executes tools to solve problems.

## ReAct Pattern

**ReAct** = Reasoning + Acting

1. **Reasoning**: Think "What should I do next?"
2. **Acting**: Take action using tools
3. **Observing**: Observe the results
4. **Repeat steps 1-3** until reaching a final answer

## Implementation Example: Basic Agent

```python
from typing import Literal

from langchain_core.tools import tool
from langgraph.graph import StateGraph, START, END, MessagesState
from langgraph.prebuilt import ToolNode

# Tool definitions
@tool
def search(query: str) -> str:
    """Execute web search"""
    return perform_search(query)

@tool
def calculator(expression: str) -> float:
    """Execute calculation"""
    return eval(expression)  # for illustration only; avoid eval on untrusted input

tools = [search, calculator]

# Bind the tools to the chat model (an `llm` chat model is assumed to exist)
llm_with_tools = llm.bind_tools(tools)

# Agent node
def agent_node(state: MessagesState):
    """LLM determines tool usage"""
    messages = state["messages"]

    # Invoke LLM with tools
    response = llm_with_tools.invoke(messages)

    return {"messages": [response]}

# Continue decision
def should_continue(state: MessagesState) -> Literal["tools", "end"]:
    """Check if there are tool calls"""
    last_message = state["messages"][-1]

    # Continue if there are tool calls
    if last_message.tool_calls:
        return "tools"

    # End if no tool calls (final answer)
    return "end"

# Build graph
builder = StateGraph(MessagesState)

builder.add_node("agent", agent_node)
builder.add_node("tools", ToolNode(tools))

builder.add_edge(START, "agent")

# ReAct loop
builder.add_conditional_edges(
    "agent",
    should_continue,
    {
        "tools": "tools",
        "end": END
    }
)

# Return to agent after tool execution
builder.add_edge("tools", "agent")

graph = builder.compile()
```

## Tool Definitions

### Basic Tools

```python
from langchain_core.tools import tool

@tool
def get_weather(location: str) -> str:
    """Get weather for the specified location.

    Args:
        location: City name (e.g., "Tokyo", "New York")
    """
    return fetch_weather_data(location)

@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send an email.

    Args:
        to: Recipient email address
        subject: Email subject
        body: Email body
    """
    return send_email_api(to, subject, body)
```

### Structured Output Tools

```python
from pydantic import BaseModel, Field

class WeatherResponse(BaseModel):
    location: str
    temperature: float
    condition: str
    humidity: int

@tool(response_format="content_and_artifact")
def get_detailed_weather(location: str) -> tuple[str, WeatherResponse]:
    """Get detailed weather information"""
    data = fetch_weather_data(location)

    weather = WeatherResponse(
        location=location,
        temperature=data["temp"],
        condition=data["condition"],
        humidity=data["humidity"]
    )

    message = f"Weather in {location}: {weather.condition}, {weather.temperature}°C"

    return message, weather
```

## Advanced Patterns

### Pattern 1: Multi-Agent Collaboration

```python
# Specialist agents
def research_agent(state: State):
    """Research specialist agent"""
    response = research_llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}

def coding_agent(state: State):
    """Coding specialist agent"""
    response = coding_llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}

# Router
def route_to_specialist(state: State) -> Literal["research", "coding"]:
    """Select specialist based on task"""
    last_message = state["messages"][-1]

    if "research" in last_message.content or "search" in last_message.content:
        return "research"
    elif "code" in last_message.content or "implement" in last_message.content:
        return "coding"

    return "research"  # Default
```

### Pattern 2: Agent with Memory

```python
from langgraph.checkpoint.memory import MemorySaver

class AgentState(TypedDict):
    messages: Annotated[list, add_messages]
    context: dict  # Long-term memory

def agent_with_memory(state: AgentState):
    """Agent utilizing context"""
    messages = state["messages"]
    context = state.get("context", {})

    # Add context to prompt
    system_message = f"Context: {context}"

    response = llm_with_tools.invoke([
        {"role": "system", "content": system_message},
        *messages
    ])

    return {"messages": [response]}

# Compile with checkpointer
checkpointer = MemorySaver()
graph = builder.compile(checkpointer=checkpointer)
```

### Pattern 3: Human-in-the-Loop Agent

```python
from langgraph.types import interrupt

def careful_agent(state: State):
    """Confirm with human before important actions"""
    response = llm_with_tools.invoke(state["messages"])

    # Request confirmation for important tool calls
    if response.tool_calls:
        for tool_call in response.tool_calls:
            if tool_call["name"] in ["send_email", "delete_data"]:
                # Wait for human approval
                approved = interrupt({
                    "action": tool_call["name"],
                    "args": tool_call["args"],
                    "message": "Approve this action?"
                })

                if not approved:
                    return {
                        "messages": [
                            {"role": "assistant", "content": "Action cancelled by user"}
                        ]
                    }

    return {"messages": [response]}
```

### Pattern 4: Error Handling and Retry

```python
class RobustAgentState(TypedDict):
    messages: Annotated[list, add_messages]
    retry_count: int
    errors: list[str]

def robust_tool_node(state: RobustAgentState):
    """Tool execution with error handling"""
    last_message = state["messages"][-1]
    tool_results = []

    for tool_call in last_message.tool_calls:
        try:
            result = execute_tool(tool_call)
            tool_results.append(result)

        except Exception as e:
            error_msg = f"Tool {tool_call['name']} failed: {str(e)}"

            # Check if retry is possible
            if state.get("retry_count", 0) < 3:
                tool_results.append({
                    "tool_call_id": tool_call["id"],
                    "error": error_msg,
                    "retry": True
                })
            else:
                tool_results.append({
                    "tool_call_id": tool_call["id"],
                    "error": "Max retries exceeded",
                    "retry": False
                })

    return {
        "messages": tool_results,
        "retry_count": state.get("retry_count", 0) + 1
    }
```

## Advanced Tool Features

### Dynamic Tool Generation

```python
def create_tool_for_api(api_spec: dict):
    """Dynamically generate a tool from an API specification"""

    @tool
    def dynamic_api_tool(**kwargs) -> str:
        """Call a dynamically configured API endpoint."""
        return call_api(api_spec["endpoint"], kwargs)

    # An f-string is not treated as a docstring, so attach the spec's
    # description and parameter info to the generated tool object instead
    # (exact mechanism may vary by langchain-core version)
    dynamic_api_tool.description = (
        f"{api_spec['description']}\n\nArgs: {api_spec['parameters']}"
    )

    return dynamic_api_tool
```

### Conditional Tool Usage

```python
def conditional_agent(state: State):
    """Change toolset based on situation"""
    context = state.get("context", {})

    # Basic tools only for beginners
    if context.get("user_level") == "beginner":
        tools = [basic_search, simple_calculator]
    # Advanced tools for advanced users
    else:
        tools = [advanced_search, scientific_calculator, code_executor]

    llm_with_selected_tools = llm.bind_tools(tools)
    response = llm_with_selected_tools.invoke(state["messages"])

    return {"messages": [response]}
```

## Benefits

✅ **Flexibility**: Dynamically responds to unpredictable problems
✅ **Autonomy**: LLM selects optimal tools and strategies
✅ **Extensibility**: Extend functionality by simply adding tools
✅ **Adaptability**: Solves complex multi-step tasks

## Considerations

⚠️ **Unpredictability**: May behave differently with the same input
⚠️ **Cost**: Multiple LLM calls occur
⚠️ **Infinite Loops**: Proper termination conditions required
⚠️ **Tool Misuse**: LLM may use tools incorrectly

## Best Practices

1. **Clear Tool Descriptions**: Write detailed tool docstrings
2. **Maximum Iterations**: Set an upper limit for loops (see the sketch below)
3. **Error Handling**: Handle tool execution errors appropriately
4. **Logging**: Make agent behavior traceable
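
One way to apply practice 2 is LangGraph's recursion limit; a minimal sketch, where the limit value, the input message, and the fallback text are illustrative:

```python
from langgraph.errors import GraphRecursionError

config = {"recursion_limit": 10}  # maximum number of super-steps per run

try:
    result = graph.invoke(
        {"messages": [("user", "Find the weather in Tokyo")]},
        config=config,
    )
except GraphRecursionError:
    result = {"messages": [("assistant", "Stopped after too many tool-calling iterations.")]}
```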

## Summary

The Agent pattern is optimal for **dynamic and uncertain problem-solving**. It autonomously solves problems using tools through the ReAct loop.

## Related Pages

- [02_graph_architecture_workflow_vs_agent.md](02_graph_architecture_workflow_vs_agent.md) - Differences between Workflow and Agent
- [04_tool_integration_overview.md](04_tool_integration_overview.md) - Tool details
- [05_advanced_features_human_in_the_loop.md](05_advanced_features_human_in_the_loop.md) - Human intervention

skills/langgraph-master/02_graph_architecture_evaluator_optimizer.md (new file, 335 lines)

# Evaluator-Optimizer (Evaluation-Improvement Loop)

A pattern that repeats generation and evaluation, continuing iterative improvement until acceptance criteria are met.

## Overview

Evaluator-Optimizer is a pattern that repeats the **generate → evaluate → improve** loop, continuing until quality standards are met.

## Use Cases

- Code generation and quality verification
- Translation accuracy improvement
- Gradual content improvement
- Iterative solution of optimization problems

## Implementation Example: Translation Quality Improvement

```python
from typing import Literal, TypedDict

from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    original_text: str
    translated_text: str
    quality_score: float
    iteration: int
    max_iterations: int
    feedback: str

def generator_node(state: State):
    """Generate or improve translation"""
    if state.get("translated_text"):
        # Improve existing translation
        prompt = f"""
        Original: {state['original_text']}
        Current translation: {state['translated_text']}
        Feedback: {state['feedback']}

        Improve the translation based on the feedback.
        """
    else:
        # Initial translation
        prompt = f"Translate to Japanese: {state['original_text']}"

    translated = llm.invoke(prompt)

    return {
        "translated_text": translated,
        "iteration": state.get("iteration", 0) + 1
    }

def evaluator_node(state: State):
    """Evaluate translation quality"""
    evaluation_prompt = f"""
    Original: {state['original_text']}
    Translation: {state['translated_text']}

    Rate the translation quality (0-1) and provide specific feedback.
    Format: SCORE: 0.X\nFEEDBACK: ...
    """

    result = llm.invoke(evaluation_prompt)

    # Extract score and feedback
    score = extract_score(result)
    feedback = extract_feedback(result)

    return {
        "quality_score": score,
        "feedback": feedback
    }

def should_continue(state: State) -> Literal["improve", "done"]:
    """Continuation decision"""
    # Check if quality standard is met
    if state["quality_score"] >= 0.9:
        return "done"

    # Check if maximum iterations reached
    if state["iteration"] >= state["max_iterations"]:
        return "done"

    return "improve"

# Build graph
builder = StateGraph(State)

builder.add_node("generator", generator_node)
builder.add_node("evaluator", evaluator_node)

builder.add_edge(START, "generator")
builder.add_edge("generator", "evaluator")

builder.add_conditional_edges(
    "evaluator",
    should_continue,
    {
        "improve": "generator",  # Loop
        "done": END
    }
)

graph = builder.compile()
```
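
The example above assumes `extract_score` and `extract_feedback` helpers; one possible regex-based sketch, matching the `SCORE:`/`FEEDBACK:` format requested in the evaluation prompt:

```python
import re

def extract_score(text: str) -> float:
    """Parse the 'SCORE: 0.X' line produced by the evaluator prompt."""
    match = re.search(r"SCORE:\s*([0-9]*\.?[0-9]+)", text)
    return float(match.group(1)) if match else 0.0

def extract_feedback(text: str) -> str:
    """Parse everything after 'FEEDBACK:' as free-form feedback."""
    match = re.search(r"FEEDBACK:\s*(.*)", text, re.DOTALL)
    return match.group(1).strip() if match else ""
```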

## Advanced Patterns

### Pattern 1: Multiple Evaluation Criteria

```python
class MultiEvalState(TypedDict):
    content: str
    scores: dict[str, float]  # Multiple evaluation scores
    min_scores: dict[str, float]  # Minimum value for each criterion

def multi_evaluator(state: MultiEvalState):
    """Evaluate from multiple perspectives"""
    content = state["content"]

    # Evaluate each perspective
    scores = {
        "accuracy": evaluate_accuracy(content),
        "readability": evaluate_readability(content),
        "completeness": evaluate_completeness(content)
    }

    return {"scores": scores}

def multi_should_continue(state: MultiEvalState):
    """Check if all criteria are met"""
    for criterion, min_score in state["min_scores"].items():
        if state["scores"][criterion] < min_score:
            return "improve"

    return "done"
```

### Pattern 2: Progressive Criteria Increase

```python
def adaptive_evaluator(state: State):
    """Adjust criteria based on iteration"""
    iteration = state["iteration"]

    # Start with lenient criteria, gradually stricter
    threshold = 0.7 + (iteration * 0.05)
    threshold = min(threshold, 0.95)  # Maximum 0.95

    score = evaluate(state["content"])

    return {
        "quality_score": score,
        "threshold": threshold
    }

def adaptive_should_continue(state: State):
    if state["quality_score"] >= state["threshold"]:
        return "done"

    if state["iteration"] >= state["max_iterations"]:
        return "done"

    return "improve"
```

### Pattern 3: Multiple Improvement Strategies

```python
from typing import Literal

def strategy_router(state: State) -> Literal["minor_fix", "major_rewrite"]:
    """Select improvement strategy based on score"""
    score = state["quality_score"]

    if score >= 0.7:
        # Minor adjustments sufficient
        return "minor_fix"
    else:
        # Major rewrite needed
        return "major_rewrite"

def minor_fix_node(state: State):
    """Small improvements"""
    prompt = f"Make minor improvements: {state['content']}\n{state['feedback']}"
    return {"content": llm.invoke(prompt)}

def major_rewrite_node(state: State):
    """Major rewrite"""
    prompt = f"Completely rewrite: {state['content']}\n{state['feedback']}"
    return {"content": llm.invoke(prompt)}

builder.add_conditional_edges(
    "evaluator",
    strategy_router,
    {
        "minor_fix": "minor_fix",
        "major_rewrite": "major_rewrite"
    }
)
```

### Pattern 4: Early Termination and Timeout

```python
import time

class TimedState(TypedDict):
    content: str
    quality_score: float
    iteration: int
    start_time: float
    max_duration: float  # seconds

def timed_should_continue(state: TimedState):
    """Check both quality criteria and timeout"""
    # Quality standard met
    if state["quality_score"] >= 0.9:
        return "done"

    # Timeout
    elapsed = time.time() - state["start_time"]
    if elapsed >= state["max_duration"]:
        return "timeout"

    # Maximum iterations
    if state["iteration"] >= 10:
        return "max_iterations"

    return "improve"

builder.add_conditional_edges(
    "evaluator",
    timed_should_continue,
    {
        "improve": "generator",
        "done": END,
        "timeout": "timeout_handler",
        "max_iterations": "max_iter_handler"
    }
)
```

## Evaluator Implementation Patterns

### Pattern 1: Rule-Based Evaluation

```python
def rule_based_evaluator(state: State):
    """Rule-based evaluation"""
    content = state["content"]
    score = 0.0
    feedback = []

    # Length check
    if 100 <= len(content) <= 500:
        score += 0.3
    else:
        feedback.append("Length should be 100-500 characters")

    # Keyword check
    required_keywords = state["required_keywords"]
    if all(kw in content for kw in required_keywords):
        score += 0.3
    else:
        missing = [kw for kw in required_keywords if kw not in content]
        feedback.append(f"Missing keywords: {missing}")

    # Structure check
    if has_proper_structure(content):
        score += 0.4
    else:
        feedback.append("Improve structure")

    return {
        "quality_score": score,
        "feedback": "\n".join(feedback)
    }
```

### Pattern 2: LLM-Based Evaluation

```python
def llm_evaluator(state: State):
    """LLM evaluation"""
    evaluation_prompt = f"""
    Evaluate this content on a scale of 0-1:
    {state['content']}

    Criteria:
    - Clarity
    - Completeness
    - Accuracy

    Provide:
    1. Overall score (0-1)
    2. Specific feedback for improvement
    """

    result = llm.invoke(evaluation_prompt)

    return {
        "quality_score": parse_score(result),
        "feedback": parse_feedback(result)
    }
```

## Benefits

✅ **Quality Assurance**: Continue improvement until standards are met
✅ **Automatic Optimization**: Quality improvement without manual intervention
✅ **Feedback Loop**: Use evaluation results for the next improvement
✅ **Adaptive**: Iteration count varies based on problem difficulty

## Considerations

⚠️ **Infinite Loops**: Set termination conditions appropriately
⚠️ **Cost**: Multiple LLM calls occur
⚠️ **No Convergence Guarantee**: May not always meet standards
⚠️ **Local Optima**: Improvement may get stuck

## Best Practices

1. **Clear Termination Conditions**: Set maximum iterations and a timeout
2. **Progressive Feedback**: Provide specific improvement points
3. **Progress Tracking**: Record scores for each iteration (see the sketch below)
4. **Fallback**: Handle cases where standards cannot be met
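
A sketch of practice 3, assuming the `evaluate` helper used in the patterns above: give the history key an additive reducer so each iteration's score is preserved.

```python
from operator import add
from typing import Annotated, TypedDict

class TrackedState(TypedDict):
    content: str
    quality_score: float
    score_history: Annotated[list[float], add]  # one entry per iteration

def evaluator_with_history(state: TrackedState) -> dict:
    score = evaluate(state["content"])
    return {"quality_score": score, "score_history": [score]}
```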

## Summary

Evaluator-Optimizer is optimal when **iterative improvement is needed until quality standards are met**. Clear evaluation criteria and termination conditions are key to success.

## Related Pages

- [02_graph_architecture_prompt_chaining.md](02_graph_architecture_prompt_chaining.md) - Basic sequential processing
- [02_graph_architecture_agent.md](02_graph_architecture_agent.md) - Combining with Agent
- [05_advanced_features_human_in_the_loop.md](05_advanced_features_human_in_the_loop.md) - Human evaluation

skills/langgraph-master/02_graph_architecture_orchestrator_worker.md (new file, 262 lines)

# Orchestrator-Worker (Master-Worker)

A pattern where an orchestrator decomposes tasks and delegates them to multiple workers.

## Overview

Orchestrator-Worker is a pattern where a **master node** decomposes tasks into multiple subtasks and delegates them in parallel to **worker nodes**. Also known as the Map-Reduce pattern.

## Use Cases

- Parallel processing of multiple documents
- Dividing large tasks into smaller subtasks
- Distributed processing of datasets
- Parallel API calls

## Implementation Example: Summarizing Multiple Documents

```python
from typing import TypedDict, Annotated
from operator import add

from langgraph.graph import StateGraph, START, END
from langgraph.types import Send

class State(TypedDict):
    documents: list[str]
    summaries: Annotated[list[str], add]
    final_summary: str

class WorkerState(TypedDict):
    document: str
    summary: str

def orchestrator_node(state: State):
    """Prepare the work; the fan-out itself happens on the conditional edge below"""
    return {}

def assign_workers(state: State):
    """Decompose the task: send each document to a worker instance"""
    return [
        Send("worker", {"document": doc})
        for doc in state["documents"]
    ]

def worker_node(state: WorkerState):
    """Summarize an individual document"""
    summary = llm.invoke(f"Summarize: {state['document']}")
    return {"summaries": [summary]}

def reducer_node(state: State):
    """Integrate all summaries"""
    all_summaries = "\n".join(state["summaries"])
    final = llm.invoke(f"Create final summary from:\n{all_summaries}")
    return {"final_summary": final}

# Build graph
builder = StateGraph(State)

builder.add_node("orchestrator", orchestrator_node)
builder.add_node("worker", worker_node)
builder.add_node("reducer", reducer_node)

builder.add_edge(START, "orchestrator")

# Orchestrator to workers (dynamic fan-out via Send)
builder.add_conditional_edges("orchestrator", assign_workers, ["worker"])

# Workers to aggregation node
builder.add_edge("worker", "reducer")
builder.add_edge("reducer", END)

graph = builder.compile()
```

## Using the Send API

Generate **node instances dynamically** with `Send` objects:

```python
def orchestrator(state: State):
    # Generate a worker instance for each item
    return [
        Send("worker", {"item": item, "index": i})
        for i, item in enumerate(state["items"])
    ]
```

## Advanced Patterns

### Pattern 1: Hierarchical Processing

```python
def master_orchestrator(state: State):
    """Master delegates to multiple sub-orchestrators"""
    return [
        Send("sub_orchestrator", {"category": cat, "items": items})
        for cat, items in group_by_category(state["all_items"])
    ]

def sub_orchestrator(state: SubState):
    """Sub-orchestrator delegates to individual workers"""
    return [
        Send("worker", {"item": item})
        for item in state["items"]
    ]
```

### Pattern 2: Conditional Worker Selection

```python
def smart_orchestrator(state: State):
    """Select different workers based on task characteristics"""
    tasks = []

    for item in state["items"]:
        if is_complex(item):
            tasks.append(Send("advanced_worker", {"item": item}))
        else:
            tasks.append(Send("simple_worker", {"item": item}))

    return tasks
```

### Pattern 3: Batch Processing

```python
def batch_orchestrator(state: State):
    """Divide items into batches"""
    batch_size = 10
    batches = [
        state["items"][i:i+batch_size]
        for i in range(0, len(state["items"]), batch_size)
    ]

    return [
        Send("batch_worker", {"batch": batch, "batch_id": i})
        for i, batch in enumerate(batches)
    ]

def batch_worker(state: BatchState):
    """Process a batch"""
    results = [process(item) for item in state["batch"]]
    return {"results": results}
```

### Pattern 4: Error Handling and Retry

```python
from langgraph.types import Command, Send

class WorkerState(TypedDict):
    item: str
    retry_count: int
    result: str
    error: str | None

def robust_worker(state: WorkerState):
    """Worker with error handling"""
    try:
        result = process_item(state["item"])
        return {"result": result, "error": None}
    except Exception as e:
        if state.get("retry_count", 0) < 3:
            # Retry by re-dispatching this worker; inside a node, a Send
            # must be wrapped in Command(goto=...)
            return Command(goto=Send("worker", {
                "item": state["item"],
                "retry_count": state.get("retry_count", 0) + 1
            }))
        else:
            # Maximum retries reached
            return {"error": str(e)}
```

## Dynamic Parallelism Control

```python
import os

def adaptive_orchestrator(state: State):
    """Adjust parallelism based on system resources"""
    max_workers = int(os.getenv("MAX_WORKERS", "5"))

    # Divide items into chunks
    items = state["items"]
    chunk_size = max(1, len(items) // max_workers)

    chunks = [
        items[i:i+chunk_size]
        for i in range(0, len(items), chunk_size)
    ]

    return [
        Send("worker", {"chunk": chunk})
        for chunk in chunks
    ]
```

## Reducer Implementation Patterns

### Pattern 1: Simple Aggregation

```python
from operator import add

class State(TypedDict):
    results: Annotated[list, add]

def reducer(state: State):
    """Simple aggregation of results"""
    return {"total": sum(state["results"])}
```

### Pattern 2: Complex Aggregation

```python
def advanced_reducer(state: State):
    """Calculate statistics"""
    results = state["results"]

    return {
        "total": sum(results),
        "average": sum(results) / len(results),
        "min": min(results),
        "max": max(results)
    }
```

### Pattern 3: LLM-Based Integration

```python
def llm_reducer(state: State):
    """Integrate multiple results with an LLM"""
    all_results = "\n".join(state["summaries"])

    final = llm.invoke(
        f"Synthesize these summaries into one:\n{all_results}"
    )

    return {"final_summary": final}
```

## Benefits

✅ **Scalability**: Workers are generated automatically based on task count
✅ **Parallel Processing**: High-speed processing of large amounts of data
✅ **Flexibility**: Dynamically adjustable worker count
✅ **Distributed Processing**: Distributable across multiple servers

## Considerations

⚠️ **Memory Consumption**: Many worker instances are generated
⚠️ **Reducer Design**: Appropriately design the result aggregation method
⚠️ **Error Handling**: Handle cases where some workers fail
⚠️ **Resource Management**: May need to limit parallelism

## Best Practices

1. **Batch Size Adjustment**: Too small causes overhead, too large reduces parallelism
2. **Error Isolation**: One failure shouldn't affect the whole
3. **Progress Tracking**: Visualize progress for large task counts
4. **Resource Limits**: Set an upper limit on parallelism

## Summary

Orchestrator-Worker is optimal for **parallel processing of large task volumes**. Workers are generated dynamically with the Send API, and results are aggregated with a Reducer.

## Related Pages

- [02_graph_architecture_parallelization.md](02_graph_architecture_parallelization.md) - Comparison with static parallel processing
- [05_advanced_features_map_reduce.md](05_advanced_features_map_reduce.md) - Map-Reduce details
- [01_core_concepts_state.md](01_core_concepts_state.md) - Reducer details

skills/langgraph-master/02_graph_architecture_overview.md (new file, 59 lines)

# 02. Graph Architecture

Six major graph patterns and agent design.

## Overview

LangGraph supports various architectural patterns. It's important to select the optimal pattern based on the nature of the problem.

## [Workflow vs Agent](02_graph_architecture_workflow_vs_agent.md)

First, understand the difference between Workflow and Agent:

- **Workflow**: Predetermined code paths, operates in a specific order
- **Agent**: Dynamic, defines its own processes and tool usage

## Six Major Patterns

### 1. [Prompt Chaining (Sequential Processing)](02_graph_architecture_prompt_chaining.md)
Each LLM call processes the previous output. Suitable for translation and stepwise processing.

### 2. [Parallelization (Parallel Processing)](02_graph_architecture_parallelization.md)
Execute multiple independent tasks simultaneously. Used for speed improvement and reliability verification.

### 3. [Routing (Branching Processing)](02_graph_architecture_routing.md)
Route to specialized flows based on input. Optimal for customer support.

### 4. [Orchestrator-Worker (Master-Worker)](02_graph_architecture_orchestrator_worker.md)
The orchestrator decomposes tasks and delegates them to multiple workers.

### 5. [Evaluator-Optimizer (Evaluation-Improvement Loop)](02_graph_architecture_evaluator_optimizer.md)
Repeat generation and evaluation, iteratively improving until acceptance criteria are met.

### 6. [Agent (Autonomous Tool Usage)](02_graph_architecture_agent.md)
The LLM dynamically determines tool selection, handling unpredictable problem-solving.

## [Subgraph](02_graph_architecture_subgraph.md)

Build hierarchical graph structures and modularize complex systems.

## Pattern Selection Guide

| Pattern | Use Case | Example |
|---------|----------|---------|
| Prompt Chaining | Stepwise processing | Translation → Summary → Analysis |
| Parallelization | Simultaneous execution of independent tasks | Evaluation by multiple criteria |
| Routing | Type-based routing | Support inquiry classification |
| Orchestrator-Worker | Task decomposition and delegation | Parallel processing of multiple documents |
| Evaluator-Optimizer | Iterative improvement | Quality improvement loop |
| Agent | Dynamic problem solving | Uncertain tasks |

## Important Principles

1. **Workflow if structure is clear**: When the task structure can be predefined
2. **Agent if uncertain**: When the problem or solution is uncertain and LLM judgment is needed
3. **Subgraph for modularization**: Organize complex systems with a hierarchical structure

## Next Steps

For details on each pattern, refer to the individual pages. We recommend starting with [02_graph_architecture_workflow_vs_agent.md](02_graph_architecture_workflow_vs_agent.md).

skills/langgraph-master/02_graph_architecture_parallelization.md (new file, 182 lines)

# Parallelization (Parallel Processing)

A pattern for executing multiple independent tasks simultaneously.

## Overview

Parallelization is a pattern that executes **multiple tasks that don't depend on each other** simultaneously, achieving speed improvements and reliability verification.

## Use Cases

- Scoring documents with multiple evaluation criteria
- Analysis from different perspectives (technical/business/legal)
- Comparing results from multiple translation engines
- Implementing the Map-Reduce pattern

## Implementation Example

```python
from typing import Annotated, TypedDict
from operator import add

from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    document: str
    scores: Annotated[list[dict], add]  # Aggregate multiple results
    final_score: float

def technical_review(state: State):
    """Review from a technical perspective"""
    score = llm.invoke(
        f"Technical review: {state['document']}"
    )
    return {"scores": [{"type": "technical", "score": score}]}

def business_review(state: State):
    """Review from a business perspective"""
    score = llm.invoke(
        f"Business review: {state['document']}"
    )
    return {"scores": [{"type": "business", "score": score}]}

def legal_review(state: State):
    """Review from a legal perspective"""
    score = llm.invoke(
        f"Legal review: {state['document']}"
    )
    return {"scores": [{"type": "legal", "score": score}]}

def aggregate_scores(state: State):
    """Aggregate scores"""
    total = sum(s["score"] for s in state["scores"])
    return {"final_score": total / len(state["scores"])}

# Build graph
builder = StateGraph(State)

# Nodes to be executed in parallel
builder.add_node("technical", technical_review)
builder.add_node("business", business_review)
builder.add_node("legal", legal_review)
builder.add_node("aggregate", aggregate_scores)

# Edges for parallel execution
builder.add_edge(START, "technical")
builder.add_edge(START, "business")
builder.add_edge(START, "legal")

# To aggregation node
builder.add_edge("technical", "aggregate")
builder.add_edge("business", "aggregate")
builder.add_edge("legal", "aggregate")
builder.add_edge("aggregate", END)

graph = builder.compile()
```

## Important Concept: Reducer

A **Reducer** is essential for aggregating results from parallel execution:

```python
from operator import add

class State(TypedDict):
    # Additively aggregate results from multiple nodes
    results: Annotated[list, add]

    # Keep the maximum value
    max_score: Annotated[int, max]

    # Custom Reducer
    combined: Annotated[dict, combine_dicts]
```

## Benefits

✅ **Speed**: Time reduction through parallel task execution
✅ **Reliability**: Verification by comparing multiple results
✅ **Scalability**: Adjust parallelism based on task count
✅ **Robustness**: Can continue if some tasks succeed even if others fail

## Considerations

⚠️ **Reducer Required**: Explicitly define the result aggregation method
⚠️ **Resource Consumption**: Increased memory and API calls from parallel execution
⚠️ **Uncertain Order**: Execution order is not guaranteed
⚠️ **Debugging Complexity**: Parallel execution troubleshooting is difficult

## Advanced Patterns

### Pattern 1: Fan-out / Fan-in

```python
# Fan-out: one node to multiple
builder.add_edge("router", "task_a")
builder.add_edge("router", "task_b")
builder.add_edge("router", "task_c")

# Fan-in: multiple nodes to one aggregation node
builder.add_edge("task_a", "aggregator")
builder.add_edge("task_b", "aggregator")
builder.add_edge("task_c", "aggregator")
```

### Pattern 2: Balancing (defer=True)

Wait for branches of different lengths before aggregating:

```python
from operator import add

class State(TypedDict):
    results: Annotated[list, add]

# defer=True marks the aggregation node as deferred: it runs only after all
# upstream branches have finished, even when the branches have different
# lengths (assumes a LangGraph version that supports deferred nodes)
builder.add_node("aggregate", aggregate_results, defer=True)

graph = builder.compile(checkpointer=checkpointer)
```

### Pattern 3: Reliability Through Redundancy

```python
def provider_a(state: State):
    """Provider A"""
    return {"responses": [call_api_a(state["query"])]}

def provider_b(state: State):
    """Provider B (backup)"""
    return {"responses": [call_api_b(state["query"])]}

def provider_c(state: State):
    """Provider C (backup)"""
    return {"responses": [call_api_c(state["query"])]}

def select_best(state: State):
    """Select the best response"""
    responses = state["responses"]
    best = max(responses, key=lambda r: r.confidence)
    return {"result": best}
```

## vs Other Patterns

| Pattern | Parallelization | Prompt Chaining |
|---------|----------------|-----------------|
| Execution Order | Parallel | Sequential |
| Dependencies | None | Yes |
| Execution Time | Short | Long |
| Result Aggregation | Reducer required | Not required |

## Summary

Parallelization is optimal for **simultaneous execution of independent tasks**. It's important to properly aggregate results using a Reducer.

## Related Pages

- [02_graph_architecture_orchestrator_worker.md](02_graph_architecture_orchestrator_worker.md) - Dynamic parallel processing
- [05_advanced_features_map_reduce.md](05_advanced_features_map_reduce.md) - Map-Reduce pattern
- [01_core_concepts_state.md](01_core_concepts_state.md) - Reducer details

skills/langgraph-master/02_graph_architecture_prompt_chaining.md (new file, 138 lines)

# Prompt Chaining (Sequential Processing)

A sequential pattern where each LLM call processes the previous output.

## Overview

Prompt Chaining is a pattern that **chains multiple LLM calls in sequence**. The output of each step becomes the input for the next step.

## Use Cases

- Stepwise processing like translation → summary → analysis
- Content generation → validation → correction pipeline
- Data extraction → transformation → validation flow

## Implementation Example

```python
from langgraph.graph import StateGraph, START, END
from typing import TypedDict

class State(TypedDict):
    text: str
    translated: str
    summarized: str
    analyzed: str

def translate_node(state: State):
    """Translate English → Japanese"""
    translated = llm.invoke(
        f"Translate to Japanese: {state['text']}"
    )
    return {"translated": translated}

def summarize_node(state: State):
    """Summarize translated text"""
    summarized = llm.invoke(
        f"Summarize this text: {state['translated']}"
    )
    return {"summarized": summarized}

def analyze_node(state: State):
    """Analyze summary"""
    analyzed = llm.invoke(
        f"Analyze sentiment: {state['summarized']}"
    )
    return {"analyzed": analyzed}

# Build graph
builder = StateGraph(State)
builder.add_node("translate", translate_node)
builder.add_node("summarize", summarize_node)
builder.add_node("analyze", analyze_node)

# Edges for sequential execution
builder.add_edge(START, "translate")
builder.add_edge("translate", "summarize")
builder.add_edge("summarize", "analyze")
builder.add_edge("analyze", END)

graph = builder.compile()
```

## Benefits

✅ **Simple**: Processing flow is linear and easy to understand
✅ **Predictable**: Always executes in the same order
✅ **Easy to Debug**: Each step can be tested independently
✅ **Gradual Improvement**: Quality improves at each step

## Considerations

⚠️ **Accumulated Delay**: Takes time as each step executes sequentially
⚠️ **Error Propagation**: Earlier errors affect later stages
⚠️ **Lack of Flexibility**: Dynamic branching is difficult

## Advanced Patterns

### Pattern 1: Chain with Validation

```python
def validate_translation(state: State):
    """Validate translation quality"""
    is_valid = check_quality(state["translated"])
    return {"is_valid": is_valid}

def route_after_validation(state: State):
    if state["is_valid"]:
        return "continue"
    return "retry"

# Validation → continue or retry
builder.add_conditional_edges(
    "validate",
    route_after_validation,
    {
        "continue": "summarize",
        "retry": "translate"
    }
)
```

### Pattern 2: Gradual Refinement

```python
def draft_node(state: State):
    """Create a draft"""
    draft = llm.invoke(f"Write a draft: {state['topic']}")
    return {"draft": draft}

def refine_node(state: State):
    """Refine the draft"""
    refined = llm.invoke(f"Improve this draft: {state['draft']}")
    return {"refined": refined}

def polish_node(state: State):
    """Final polish"""
    polished = llm.invoke(f"Polish this text: {state['refined']}")
    return {"final": polished}
```

## vs Other Patterns

| Pattern | Prompt Chaining | Parallelization |
|---------|----------------|-----------------|
| Execution Order | Sequential | Parallel |
| Dependencies | Yes | No |
| Execution Time | Long | Short |
| Use Case | Stepwise processing | Independent tasks |

## Summary

Prompt Chaining is the simplest pattern, optimal for **cases requiring stepwise processing**. Use it when each step depends on the previous step.

## Related Pages

- [02_graph_architecture_parallelization.md](02_graph_architecture_parallelization.md) - Comparison with parallel processing
- [02_graph_architecture_evaluator_optimizer.md](02_graph_architecture_evaluator_optimizer.md) - Combination with validation loop
- [01_core_concepts_edge.md](01_core_concepts_edge.md) - Edge basics
263
skills/langgraph-master/02_graph_architecture_routing.md
Normal file
@@ -0,0 +1,263 @@
|
||||
# Routing (Branching Processing)
|
||||
|
||||
A pattern for routing to specialized flows based on input.
|
||||
|
||||
## Overview
|
||||
|
||||
Routing is a pattern that **selects the appropriate processing path** based on input characteristics. Used for customer support question classification, etc.
|
||||
|
||||
## Use Cases
|
||||
|
||||
- Route customer questions to specialized teams by type
|
||||
- Different processing pipelines by document type
|
||||
- Prioritization by urgency/importance
|
||||
- Processing flow selection by language
|
||||
|
||||
## Implementation Example: Customer Support
|
||||
|
||||
```python
|
||||
from typing import Literal, TypedDict
|
||||
|
||||
class State(TypedDict):
|
||||
query: str
|
||||
category: str
|
||||
response: str
|
||||
|
||||
def router_node(state: State):
    """Classify the question and store the category in state"""
    query = state["query"]

    # Keyword-based classification; an LLM call could be used instead, e.g.:
    # category = llm.invoke(f"Classify this customer query into: pricing, refund, or technical\nQuery: {query}\nCategory:")
    if "price" in query or "cost" in query:
        category = "pricing"
    elif "refund" in query or "cancel" in query:
        category = "refund"
    else:
        category = "technical"

    # Nodes return state updates; the conditional edge below reads "category"
    return {"category": category}
|
||||
|
||||
def pricing_node(state: State):
|
||||
"""Handle pricing queries"""
|
||||
response = handle_pricing_query(state["query"])
|
||||
return {"response": response, "category": "pricing"}
|
||||
|
||||
def refund_node(state: State):
|
||||
"""Handle refund queries"""
|
||||
response = handle_refund_query(state["query"])
|
||||
return {"response": response, "category": "refund"}
|
||||
|
||||
def technical_node(state: State):
|
||||
"""Handle technical issues"""
|
||||
response = handle_technical_query(state["query"])
|
||||
return {"response": response, "category": "technical"}
|
||||
|
||||
# Build graph
|
||||
builder = StateGraph(State)
|
||||
|
||||
builder.add_node("router", router_node)
|
||||
builder.add_node("pricing", pricing_node)
|
||||
builder.add_node("refund", refund_node)
|
||||
builder.add_node("technical", technical_node)
|
||||
|
||||
# Routing edges
|
||||
builder.add_edge(START, "router")
|
||||
builder.add_conditional_edges(
|
||||
"router",
|
||||
lambda state: state.get("category", "technical"),
|
||||
{
|
||||
"pricing": "pricing",
|
||||
"refund": "refund",
|
||||
"technical": "technical"
|
||||
}
|
||||
)
|
||||
|
||||
# End from each node
|
||||
builder.add_edge("pricing", END)
|
||||
builder.add_edge("refund", END)
|
||||
builder.add_edge("technical", END)
|
||||
|
||||
graph = builder.compile()
|
||||
```
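
A minimal run of this graph, assuming the handler functions above are implemented:

```python
result = graph.invoke({"query": "I want a refund for my last order"})

print(result["category"])   # "refund"
print(result["response"])
```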
|
||||
|
||||
## Advanced Patterns
|
||||
|
||||
### Pattern 1: Multi-Stage Routing
|
||||
|
||||
```python
|
||||
def first_router(state: State) -> Literal["sales", "support"]:
|
||||
"""Stage 1: Sales or Support"""
|
||||
if "purchase" in state["query"] or "quote" in state["query"]:
|
||||
return "sales"
|
||||
return "support"
|
||||
|
||||
def support_router(state: State) -> Literal["billing", "technical"]:
|
||||
"""Stage 2: Classification within Support"""
|
||||
if "billing" in state["query"]:
|
||||
return "billing"
|
||||
return "technical"
|
||||
|
||||
# Multi-stage routing
|
||||
builder.add_conditional_edges("first_router", first_router, {...})
|
||||
builder.add_conditional_edges("support_router", support_router, {...})
|
||||
```
|
||||
|
||||
### Pattern 2: Priority-Based Routing
|
||||
|
||||
```python
|
||||
from typing import Literal
|
||||
|
||||
def priority_router(state: State) -> Literal["urgent", "normal", "low"]:
|
||||
"""Route by urgency"""
|
||||
query = state["query"]
|
||||
|
||||
# Urgent keywords
|
||||
if any(word in query for word in ["urgent", "immediately", "asap"]):
|
||||
return "urgent"
|
||||
|
||||
# Importance determination
|
||||
importance = analyze_importance(query)
|
||||
if importance > 0.7:
|
||||
return "normal"
|
||||
|
||||
return "low"
|
||||
|
||||
builder.add_conditional_edges(
|
||||
"priority_router",
|
||||
priority_router,
|
||||
{
|
||||
"urgent": "urgent_handler", # Immediate processing
|
||||
"normal": "normal_queue", # Normal queue
|
||||
"low": "batch_processor" # Batch processing
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
### Pattern 3: Semantic Routing (Embedding-Based)
|
||||
|
||||
```python
|
||||
import numpy as np
|
||||
from typing import Literal
|
||||
|
||||
def semantic_router(state: State) -> Literal["product", "account", "general"]:
|
||||
"""Semantic routing based on embeddings"""
|
||||
query_embedding = embed(state["query"])
|
||||
|
||||
# Representative embeddings for each category
|
||||
categories = {
|
||||
"product": embed("product, features, how to use"),
|
||||
"account": embed("account, login, password"),
|
||||
"general": embed("general questions")
|
||||
}
|
||||
|
||||
# Select closest category
|
||||
similarities = {
|
||||
cat: cosine_similarity(query_embedding, emb)
|
||||
for cat, emb in categories.items()
|
||||
}
|
||||
|
||||
return max(similarities, key=similarities.get)
|
||||
```
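
The snippet above assumes `embed` and `cosine_similarity` helpers. A minimal sketch of both, where `embeddings_model` stands in for any embedding client (for example a LangChain `Embeddings` instance):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # embed_query is the standard LangChain Embeddings method
    return np.array(embeddings_model.embed_query(text))

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```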
|
||||
|
||||
### Pattern 4: Dynamic Routing (LLM Judgment)
|
||||
|
||||
```python
|
||||
def llm_router(state: State):
|
||||
"""Have LLM determine optimal route"""
|
||||
routes = ["expert_a", "expert_b", "expert_c", "general"]
|
||||
|
||||
prompt = f"""
|
||||
Select the most appropriate expert to handle this question:
|
||||
- expert_a: Database specialist
|
||||
- expert_b: API specialist
|
||||
- expert_c: UI specialist
|
||||
- general: General questions
|
||||
|
||||
Question: {state['query']}
|
||||
|
||||
Selection: """
|
||||
|
||||
route = llm.invoke(prompt).strip()
|
||||
return route if route in routes else "general"
|
||||
|
||||
builder.add_conditional_edges(
|
||||
"router",
|
||||
llm_router,
|
||||
{
|
||||
"expert_a": "database_expert",
|
||||
"expert_b": "api_expert",
|
||||
"expert_c": "ui_expert",
|
||||
"general": "general_handler"
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
## Benefits
|
||||
|
||||
✅ **Specialization**: Specialized processing for each type
|
||||
✅ **Efficiency**: Skip unnecessary processing
|
||||
✅ **Maintainability**: Improve each route independently
|
||||
✅ **Scalability**: Easy to add new routes
|
||||
|
||||
## Considerations
|
||||
|
||||
⚠️ **Classification Accuracy**: Routing errors affect the whole
|
||||
⚠️ **Coverage**: Need to cover all cases
|
||||
⚠️ **Fallback**: Handling unknown cases is important
|
||||
⚠️ **Balance**: Consider load balancing between routes
|
||||
|
||||
## Best Practices
|
||||
|
||||
### 1. Provide Fallback Route
|
||||
|
||||
```python
|
||||
def safe_router(state: State):
|
||||
try:
|
||||
route = determine_route(state)
|
||||
if route in valid_routes:
|
||||
return route
|
||||
except Exception:
|
||||
pass
|
||||
|
||||
# Fallback
|
||||
return "general_handler"
|
||||
```
|
||||
|
||||
### 2. Log Routing Reasons
|
||||
|
||||
```python
|
||||
def logged_router(state: State):
|
||||
route = determine_route(state)
|
||||
|
||||
return {
|
||||
"route": route,
|
||||
"routing_reason": f"Routed to {route} because..."
|
||||
}
|
||||
```
|
||||
|
||||
### 3. Dynamic Route Addition
|
||||
|
||||
```python
|
||||
# Load routes from configuration file
|
||||
ROUTES = load_routes_config()
|
||||
|
||||
builder.add_conditional_edges(
|
||||
"router",
|
||||
determine_route,
|
||||
{route: handler for route, handler in ROUTES.items()}
|
||||
)
|
||||
```
|
||||
|
||||
## Summary
|
||||
|
||||
Routing is optimal when you need to **select the appropriate processing path based on input characteristics**. Classification accuracy and fallback handling are the keys to success.
|
||||
|
||||
## Related Pages
|
||||
|
||||
- [02_graph_architecture_agent.md](02_graph_architecture_agent.md) - Combining with Agent
|
||||
- [01_core_concepts_edge.md](01_core_concepts_edge.md) - Conditional edge details
|
||||
- [02_graph_architecture_workflow_vs_agent.md](02_graph_architecture_workflow_vs_agent.md) - Pattern usage
|
||||
282
skills/langgraph-master/02_graph_architecture_subgraph.md
Normal file
@@ -0,0 +1,282 @@
|
||||
# Subgraph
|
||||
|
||||
A pattern for building hierarchical graph structures and modularizing complex systems.
|
||||
|
||||
## Overview
|
||||
|
||||
Subgraph is a pattern for hierarchically organizing complex systems by **embedding graphs as nodes in other graphs**.
|
||||
|
||||
## Use Cases
|
||||
|
||||
- Modularizing large-scale agent systems
|
||||
- Integrating multiple specialized agents
|
||||
- Reusable workflow components
|
||||
- Multi-level hierarchical structures
|
||||
|
||||
## Two Implementation Approaches
|
||||
|
||||
### Approach 1: Add Graph as Node
|
||||
|
||||
Use when **sharing state keys**.
|
||||
|
||||
```python
|
||||
# Subgraph definition
|
||||
class SubState(TypedDict):
|
||||
messages: Annotated[list, add_messages]
|
||||
sub_result: str
|
||||
|
||||
def sub_node_a(state: SubState):
|
||||
return {"messages": [{"role": "assistant", "content": "Sub A"}]}
|
||||
|
||||
def sub_node_b(state: SubState):
|
||||
return {"sub_result": "Sub B completed"}
|
||||
|
||||
# Build subgraph
|
||||
sub_builder = StateGraph(SubState)
|
||||
sub_builder.add_node("sub_a", sub_node_a)
|
||||
sub_builder.add_node("sub_b", sub_node_b)
|
||||
sub_builder.add_edge(START, "sub_a")
|
||||
sub_builder.add_edge("sub_a", "sub_b")
|
||||
sub_builder.add_edge("sub_b", END)
|
||||
|
||||
sub_graph = sub_builder.compile()
|
||||
|
||||
# Use subgraph as node in parent graph
|
||||
class ParentState(TypedDict):
|
||||
messages: Annotated[list, add_messages] # Shared key
|
||||
sub_result: str # Shared key
|
||||
parent_data: str
|
||||
|
||||
parent_builder = StateGraph(ParentState)
|
||||
|
||||
# Add subgraph directly as node
|
||||
parent_builder.add_node("subgraph", sub_graph)
|
||||
|
||||
parent_builder.add_edge(START, "subgraph")
|
||||
parent_builder.add_edge("subgraph", END)
|
||||
|
||||
parent_graph = parent_builder.compile()
|
||||
```
|
||||
|
||||
### Approach 2: Call Graph from Within Node
|
||||
|
||||
Use when having **different state schemas**.
|
||||
|
||||
```python
|
||||
# Subgraph (own state)
|
||||
class SubGraphState(TypedDict):
|
||||
input_text: str
|
||||
output_text: str
|
||||
|
||||
def process_node(state: SubGraphState):
|
||||
return {"output_text": process(state["input_text"])}
|
||||
|
||||
sub_builder = StateGraph(SubGraphState)
|
||||
sub_builder.add_node("process", process_node)
|
||||
sub_builder.add_edge(START, "process")
|
||||
sub_builder.add_edge("process", END)
|
||||
|
||||
sub_graph = sub_builder.compile()
|
||||
|
||||
# Parent graph (different state)
|
||||
class ParentState(TypedDict):
|
||||
user_query: str
|
||||
result: str
|
||||
|
||||
def invoke_subgraph_node(state: ParentState):
|
||||
"""Call subgraph within node"""
|
||||
# Convert parent state to subgraph state
|
||||
sub_input = {"input_text": state["user_query"]}
|
||||
|
||||
# Execute subgraph
|
||||
sub_output = sub_graph.invoke(sub_input)
|
||||
|
||||
# Convert subgraph output to parent state
|
||||
return {"result": sub_output["output_text"]}
|
||||
|
||||
parent_builder = StateGraph(ParentState)
|
||||
parent_builder.add_node("call_subgraph", invoke_subgraph_node)
|
||||
parent_builder.add_edge(START, "call_subgraph")
|
||||
parent_builder.add_edge("call_subgraph", END)
|
||||
|
||||
parent_graph = parent_builder.compile()
|
||||
```
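
Running the parent graph is then an ordinary invocation; the wrapper node handles the translation between the two schemas (assuming `process()` is implemented):

```python
result = parent_graph.invoke({"user_query": "Summarize this report"})
print(result["result"])
```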
|
||||
|
||||
## Multi-Level Subgraphs
|
||||
|
||||
Multiple levels of subgraphs (parent → child → grandchild) are also possible:
|
||||
|
||||
```python
|
||||
# Grandchild graph
|
||||
class GrandchildState(TypedDict):
|
||||
data: str
|
||||
|
||||
grandchild_builder = StateGraph(GrandchildState)
|
||||
grandchild_builder.add_node("process", lambda s: {"data": f"Processed: {s['data']}"})
|
||||
grandchild_builder.add_edge(START, "process")
|
||||
grandchild_builder.add_edge("process", END)
|
||||
grandchild_graph = grandchild_builder.compile()
|
||||
|
||||
# Child graph (includes grandchild graph)
|
||||
class ChildState(TypedDict):
|
||||
data: str
|
||||
|
||||
child_builder = StateGraph(ChildState)
|
||||
child_builder.add_node("grandchild", grandchild_graph) # Add grandchild graph
|
||||
child_builder.add_edge(START, "grandchild")
|
||||
child_builder.add_edge("grandchild", END)
|
||||
child_graph = child_builder.compile()
|
||||
|
||||
# Parent graph (includes child graph)
|
||||
class ParentState(TypedDict):
|
||||
data: str
|
||||
|
||||
parent_builder = StateGraph(ParentState)
|
||||
parent_builder.add_node("child", child_graph) # Add child graph
|
||||
parent_builder.add_edge(START, "child")
|
||||
parent_builder.add_edge("child", END)
|
||||
parent_graph = parent_builder.compile()
|
||||
```
|
||||
|
||||
## Navigation Between Subgraphs
|
||||
|
||||
Transition from subgraph to another node in parent graph:
|
||||
|
||||
```python
|
||||
from langgraph.types import Command
|
||||
|
||||
def sub_node_with_navigation(state: SubState):
|
||||
"""Navigate from subgraph node to parent graph"""
|
||||
result = process(state["data"])
|
||||
|
||||
if need_parent_intervention(result):
|
||||
# Transition to another node in parent graph
|
||||
return Command(
|
||||
update={"result": result},
|
||||
goto="parent_handler",
|
||||
graph=Command.PARENT
|
||||
)
|
||||
|
||||
return {"result": result}
|
||||
```
|
||||
|
||||
## Persistence and Debugging
|
||||
|
||||
### Automatic Checkpointer Propagation
|
||||
|
||||
```python
|
||||
from langgraph.checkpoint.memory import MemorySaver
|
||||
|
||||
# Set checkpointer only on parent graph
|
||||
checkpointer = MemorySaver()
|
||||
|
||||
parent_graph = parent_builder.compile(
|
||||
checkpointer=checkpointer # Automatically propagates to child graphs
|
||||
)
|
||||
```
|
||||
|
||||
### Streaming Including Subgraph Output
|
||||
|
||||
```python
|
||||
# Stream including subgraph details
|
||||
for chunk in parent_graph.stream(
|
||||
inputs,
|
||||
stream_mode="values",
|
||||
subgraphs=True # Include subgraph output
|
||||
):
|
||||
print(chunk)
|
||||
```
|
||||
|
||||
## Practical Example: Multi-Agent System
|
||||
|
||||
```python
|
||||
# Research agent (subgraph)
|
||||
class ResearchState(TypedDict):
|
||||
messages: Annotated[list, add_messages]
|
||||
research_result: str
|
||||
|
||||
research_builder = StateGraph(ResearchState)
|
||||
research_builder.add_node("search", search_node)
|
||||
research_builder.add_node("analyze", analyze_node)
|
||||
research_builder.add_edge(START, "search")
|
||||
research_builder.add_edge("search", "analyze")
|
||||
research_builder.add_edge("analyze", END)
|
||||
research_graph = research_builder.compile()
|
||||
|
||||
# Coding agent (subgraph)
|
||||
class CodingState(TypedDict):
|
||||
messages: Annotated[list, add_messages]
|
||||
code: str
|
||||
|
||||
coding_builder = StateGraph(CodingState)
|
||||
coding_builder.add_node("generate", generate_code_node)
|
||||
coding_builder.add_node("test", test_code_node)
|
||||
coding_builder.add_edge(START, "generate")
|
||||
coding_builder.add_edge("generate", "test")
|
||||
coding_builder.add_edge("test", END)
|
||||
coding_graph = coding_builder.compile()
|
||||
|
||||
# Integrated system (parent graph)
|
||||
class SystemState(TypedDict):
|
||||
messages: Annotated[list, add_messages]
|
||||
research_result: str
|
||||
code: str
|
||||
task_type: str
|
||||
|
||||
def router(state: SystemState):
|
||||
if "research" in state["messages"][-1].content:
|
||||
return "research"
|
||||
return "coding"
|
||||
|
||||
system_builder = StateGraph(SystemState)
|
||||
|
||||
# Add subgraphs
|
||||
system_builder.add_node("research_agent", research_graph)
|
||||
system_builder.add_node("coding_agent", coding_graph)
|
||||
|
||||
# Routing
|
||||
system_builder.add_conditional_edges(
|
||||
START,
|
||||
router,
|
||||
{
|
||||
"research": "research_agent",
|
||||
"coding": "coding_agent"
|
||||
}
|
||||
)
|
||||
|
||||
system_builder.add_edge("research_agent", END)
|
||||
system_builder.add_edge("coding_agent", END)
|
||||
|
||||
system_graph = system_builder.compile()
|
||||
```
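
An example invocation, assuming the node functions referenced above are implemented; the router sends this request to the research subgraph because the message mentions "research":

```python
result = system_graph.invoke({
    "messages": [{"role": "user", "content": "research the latest LangGraph release"}]
})
```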
|
||||
|
||||
## Benefits
|
||||
|
||||
✅ **Modularization**: Divide complex systems into smaller parts
|
||||
✅ **Reusability**: Use subgraphs in multiple parent graphs
|
||||
✅ **Maintainability**: Improve each subgraph independently
|
||||
✅ **Testability**: Test subgraphs individually
|
||||
|
||||
## Considerations
|
||||
|
||||
⚠️ **State Sharing**: Carefully design which keys to share
|
||||
⚠️ **Debugging Complexity**: Deep hierarchies are hard to track
|
||||
⚠️ **Performance**: Multi-level increases overhead
|
||||
⚠️ **Circular References**: Watch for circular dependencies between subgraphs
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Shallow Hierarchy**: Keep hierarchy as shallow as possible (2-3 levels)
|
||||
2. **Clear Responsibilities**: Clearly define role of each subgraph
|
||||
3. **Minimize State**: Share only necessary state keys
|
||||
4. **Independence**: Subgraphs should operate as independently as possible
|
||||
|
||||
## Summary
|
||||
|
||||
Subgraph is optimal for **hierarchical organization of complex systems**. Choose between two approaches depending on state sharing method.
|
||||
|
||||
## Related Pages
|
||||
|
||||
- [02_graph_architecture_agent.md](02_graph_architecture_agent.md) - Combination with multi-agent
|
||||
- [01_core_concepts_state.md](01_core_concepts_state.md) - State design
|
||||
- [03_memory_management_persistence.md](03_memory_management_persistence.md) - Checkpointer propagation
|
||||
156
skills/langgraph-master/02_graph_architecture_workflow_vs_agent.md
Normal file
@@ -0,0 +1,156 @@
|
||||
# Workflow vs Agent
|
||||
|
||||
Differences and usage between Workflow and Agent.
|
||||
|
||||
## Basic Differences
|
||||
|
||||
### Workflow
|
||||
> "predetermined code paths and are designed to operate in a certain order"
|
||||
|
||||
|
||||
- **Pre-defined**: Processing flow is clear
|
||||
- **Predictable**: Follows same path for same input
|
||||
- **Controlled Execution**: Developer has complete control over control flow
|
||||
|
||||
### Agent
|
||||
> "dynamic and define their own processes and tool usage"
|
||||
|
||||
|
||||
- **Dynamic**: LLM decides next action
|
||||
- **Autonomous**: Self-determines tool selection
|
||||
- **Uncertain**: May follow different paths for the same input
|
||||
|
||||
## Implementation Comparison
|
||||
|
||||
### Workflow Example: Translation Pipeline
|
||||
|
||||
```python
|
||||
def translate_node(state: State):
|
||||
return {"text": translate(state["text"])}
|
||||
|
||||
def summarize_node(state: State):
|
||||
return {"summary": summarize(state["text"])}
|
||||
|
||||
def validate_node(state: State):
|
||||
return {"valid": check_quality(state["summary"])}
|
||||
|
||||
# Fixed flow
|
||||
builder.add_edge(START, "translate")
|
||||
builder.add_edge("translate", "summarize")
|
||||
builder.add_edge("summarize", "validate")
|
||||
builder.add_edge("validate", END)
|
||||
```
|
||||
|
||||
### Agent Example: Problem-Solving Agent
|
||||
|
||||
```python
|
||||
def agent_node(state: State):
|
||||
# LLM determines tool usage
|
||||
response = llm_with_tools.invoke(state["messages"])
|
||||
return {"messages": [response]}
|
||||
|
||||
def should_continue(state: State):
|
||||
last_message = state["messages"][-1]
|
||||
# Continue if there are tool calls
|
||||
if last_message.tool_calls:
|
||||
return "continue"
|
||||
return "end"
|
||||
|
||||
# LLM decides dynamically
|
||||
builder.add_conditional_edges(
|
||||
"agent",
|
||||
should_continue,
|
||||
{"continue": "tools", "end": END}
|
||||
)
|
||||
```
|
||||
|
||||
## Selection Criteria
|
||||
|
||||
### Choose Workflow When
|
||||
|
||||
✅ **Structure is Clear**
|
||||
- Processing steps are known in advance
|
||||
- Execution order is fixed
|
||||
|
||||
✅ **Predictability is Important**
|
||||
- Compliance requirements exist
|
||||
- Debugging needs to be easy
|
||||
|
||||
✅ **Cost Efficiency**
|
||||
- Want to minimize LLM calls
|
||||
- Want to reduce token consumption
|
||||
|
||||
**Examples**: Data processing pipelines, approval workflows, translation chains
|
||||
|
||||
### Choose Agent When
|
||||
|
||||
✅ **Problem is Uncertain**
|
||||
- Don't know which tools are needed
|
||||
- Variable number of steps
|
||||
|
||||
✅ **Flexibility is Needed**
|
||||
- Different approaches based on situation
|
||||
- Diverse user questions
|
||||
|
||||
✅ **Autonomy is Valuable**
|
||||
- Want to leverage LLM's judgment
|
||||
- ReAct (reasoning + action) pattern is suitable
|
||||
|
||||
**Examples**: Customer support, research assistant, complex problem solving
|
||||
|
||||
## Hybrid Approach
|
||||
|
||||
Many practical systems combine both:
|
||||
|
||||
```python
|
||||
# Embed Agent within Workflow
|
||||
builder.add_edge(START, "input_validation") # Workflow
|
||||
builder.add_edge("input_validation", "agent") # Agent part
|
||||
builder.add_conditional_edges(
    "agent",
    should_continue,
    {"continue": "tools", "end": "output_formatting"}
)
builder.add_edge("tools", "agent")
|
||||
builder.add_edge("output_formatting", END) # Workflow
|
||||
```
|
||||
|
||||
## ReAct Pattern (Agent Foundation)
|
||||
|
||||
Agent follows the **ReAct** (Reasoning + Acting) pattern:
|
||||
|
||||
1. **Reasoning**: Think "What should I do next?"
|
||||
2. **Acting**: Take action using tools
|
||||
3. **Observing**: Observe results
|
||||
4. Repeat until reaching final answer
|
||||
|
||||
```python
|
||||
# ReAct loop implementation
|
||||
def agent(state):
|
||||
# Reasoning: Determine next action
|
||||
response = llm_with_tools.invoke(state["messages"])
|
||||
return {"messages": [response]}
|
||||
|
||||
def tools(state):
|
||||
# Acting: Execute tools
|
||||
results = execute_tools(state["messages"][-1].tool_calls)
|
||||
return {"messages": results}
|
||||
|
||||
# Observing & Repeat
|
||||
builder.add_conditional_edges("agent", should_continue, ...)
|
||||
```
|
||||
|
||||
## Summary
|
||||
|
||||
| Aspect | Workflow | Agent |
|--------|----------|-------|
| Control | Developer has complete control | LLM decides dynamically |
| Predictability | High | Low |
| Flexibility | Low | High |
| Cost | Low | High |
| Use Case | Structured tasks | Uncertain tasks |
|
||||
|
||||
**Important**: Both are built from the same building blocks (State, Node, Edge) in LangGraph. The choice of pattern depends on the nature of the problem.
|
||||
|
||||
## Related Pages
|
||||
|
||||
- [02_graph_architecture_prompt_chaining.md](02_graph_architecture_prompt_chaining.md) - Workflow pattern example
|
||||
- [02_graph_architecture_agent.md](02_graph_architecture_agent.md) - Agent pattern details
|
||||
- [02_graph_architecture_routing.md](02_graph_architecture_routing.md) - Hybrid approach example
|
||||
224
skills/langgraph-master/03_memory_management_checkpointer.md
Normal file
@@ -0,0 +1,224 @@
|
||||
# Checkpointer
|
||||
|
||||
Implementation details for saving and restoring state.
|
||||
|
||||
## Overview
|
||||
|
||||
Checkpointer implements the `BaseCheckpointSaver` interface and is responsible for state persistence.
|
||||
|
||||
## Checkpointer Implementations
|
||||
|
||||
### 1. MemorySaver (For Experimentation & Testing)
|
||||
|
||||
Saves checkpoints in memory:
|
||||
|
||||
```python
|
||||
from langgraph.checkpoint.memory import MemorySaver
|
||||
|
||||
checkpointer = MemorySaver()
|
||||
graph = builder.compile(checkpointer=checkpointer)
|
||||
|
||||
# All data is lost when the process terminates
|
||||
```
|
||||
|
||||
**Use Case**: Local testing, prototyping
|
||||
|
||||
### 2. SqliteSaver (For Local Development)
|
||||
|
||||
Saves to SQLite database:
|
||||
|
||||
```python
|
||||
from langgraph.checkpoint.sqlite import SqliteSaver
|
||||
|
||||
# File-based
|
||||
checkpointer = SqliteSaver.from_conn_string("checkpoints.db")
|
||||
|
||||
# Or from connection object
|
||||
import sqlite3
|
||||
conn = sqlite3.connect("checkpoints.db")
|
||||
checkpointer = SqliteSaver(conn)
|
||||
|
||||
graph = builder.compile(checkpointer=checkpointer)
|
||||
```
|
||||
|
||||
**Use Case**: Local development, single-user applications
|
||||
|
||||
### 3. PostgresSaver (For Production)
|
||||
|
||||
Saves to PostgreSQL database:
|
||||
|
||||
```python
|
||||
from langgraph.checkpoint.postgres import PostgresSaver
|
||||
from psycopg_pool import ConnectionPool
|
||||
|
||||
# Connection pool
|
||||
pool = ConnectionPool(
|
||||
conninfo="postgresql://user:password@localhost:5432/db"
|
||||
)
|
||||
|
||||
checkpointer = PostgresSaver(pool)
|
||||
graph = builder.compile(checkpointer=checkpointer)
|
||||
```
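
With the Postgres checkpointer, the backing tables usually have to be created once before first use; in recent `langgraph-checkpoint-postgres` releases this is done with `setup()` (the exact API may vary by version):

```python
# One-time table creation before the first run (version-dependent)
checkpointer.setup()
```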
|
||||
|
||||
**Use Case**: Production environments, multi-user applications
|
||||
|
||||
## BaseCheckpointSaver Interface
|
||||
|
||||
All checkpointers implement the following methods:
|
||||
|
||||
```python
|
||||
class BaseCheckpointSaver:
|
||||
def put(
|
||||
self,
|
||||
config: RunnableConfig,
|
||||
checkpoint: Checkpoint,
|
||||
metadata: dict
|
||||
) -> RunnableConfig:
|
||||
"""Save a checkpoint"""
|
||||
|
||||
def get_tuple(
|
||||
self,
|
||||
config: RunnableConfig
|
||||
) -> CheckpointTuple | None:
|
||||
"""Retrieve a checkpoint"""
|
||||
|
||||
def list(
|
||||
self,
|
||||
config: RunnableConfig,
|
||||
*,
|
||||
before: RunnableConfig | None = None,
|
||||
limit: int | None = None
|
||||
) -> Iterator[CheckpointTuple]:
|
||||
"""Get list of checkpoints"""
|
||||
```
|
||||
|
||||
## Custom Checkpointer
|
||||
|
||||
Implement your own persistence logic:
|
||||
|
||||
```python
|
||||
from langgraph.checkpoint.base import BaseCheckpointSaver
|
||||
|
||||
class RedisCheckpointer(BaseCheckpointSaver):
|
||||
def __init__(self, redis_client):
|
||||
self.redis = redis_client
|
||||
|
||||
def put(self, config, checkpoint, metadata):
|
||||
thread_id = config["configurable"]["thread_id"]
|
||||
checkpoint_id = checkpoint["id"]
|
||||
|
||||
key = f"checkpoint:{thread_id}:{checkpoint_id}"
|
||||
self.redis.set(key, serialize(checkpoint))
|
||||
|
||||
return config
|
||||
|
||||
def get_tuple(self, config):
|
||||
thread_id = config["configurable"]["thread_id"]
|
||||
# Retrieve the latest checkpoint
|
||||
# ...
|
||||
|
||||
def list(self, config, before=None, limit=None):
|
||||
# Return list of checkpoints
|
||||
# ...
|
||||
```
|
||||
|
||||
## Checkpointer Configuration
|
||||
|
||||
### Namespaces
|
||||
|
||||
Share the same checkpointer across multiple graphs:
|
||||
|
||||
```python
|
||||
checkpointer = MemorySaver()
|
||||
|
||||
graph1 = builder1.compile(
|
||||
checkpointer=checkpointer,
|
||||
name="graph1" # Namespace
|
||||
)
|
||||
|
||||
graph2 = builder2.compile(
|
||||
checkpointer=checkpointer,
|
||||
name="graph2" # Different namespace
|
||||
)
|
||||
```
|
||||
|
||||
### Automatic Propagation
|
||||
|
||||
Parent graph's checkpointer automatically propagates to subgraphs:
|
||||
|
||||
```python
|
||||
# Set only on parent graph
|
||||
parent_graph = parent_builder.compile(checkpointer=checkpointer)
|
||||
|
||||
# Automatically propagates to child graphs
|
||||
```
|
||||
|
||||
## Checkpoint Management
|
||||
|
||||
### Deleting Old Checkpoints
|
||||
|
||||
```python
|
||||
# Delete after a certain period (implementation-dependent)
|
||||
import datetime
|
||||
|
||||
cutoff = datetime.datetime.now() - datetime.timedelta(days=30)
|
||||
|
||||
# Implementation example (SQLite)
|
||||
checkpointer.conn.execute(
|
||||
"DELETE FROM checkpoints WHERE created_at < ?",
|
||||
(cutoff,)
|
||||
)
|
||||
```
|
||||
|
||||
### Optimizing Checkpoint Size
|
||||
|
||||
```python
|
||||
class State(TypedDict):
|
||||
# Avoid large data
|
||||
messages: Annotated[list, add_messages]
|
||||
|
||||
# Store references only
|
||||
large_data_id: str # Actual data in separate storage
|
||||
|
||||
def node(state: State):
|
||||
# Retrieve large data from external source
|
||||
large_data = fetch_from_storage(state["large_data_id"])
|
||||
# ...
|
||||
```
|
||||
|
||||
## Performance Considerations
|
||||
|
||||
### Connection Pool (PostgreSQL)
|
||||
|
||||
```python
|
||||
from psycopg_pool import ConnectionPool
|
||||
|
||||
pool = ConnectionPool(
|
||||
conninfo=conn_string,
|
||||
min_size=5,
|
||||
max_size=20
|
||||
)
|
||||
|
||||
checkpointer = PostgresSaver(pool)
|
||||
```
|
||||
|
||||
### Async Checkpointer
|
||||
|
||||
```python
|
||||
from langgraph.checkpoint.postgres import AsyncPostgresSaver
|
||||
|
||||
async_checkpointer = AsyncPostgresSaver(async_pool)
|
||||
|
||||
# Async execution
|
||||
async for chunk in graph.astream(input, config):
|
||||
print(chunk)
|
||||
```
|
||||
|
||||
## Summary
|
||||
|
||||
Checkpointer determines how state is persisted. It's important to choose the appropriate implementation for your use case.
|
||||
|
||||
## Related Pages
|
||||
|
||||
- [03_memory_management_persistence.md](03_memory_management_persistence.md) - How to use persistence
|
||||
- [03_memory_management_store.md](03_memory_management_store.md) - Differences from long-term memory
|
||||
152
skills/langgraph-master/03_memory_management_overview.md
Normal file
@@ -0,0 +1,152 @@
|
||||
# 03. Memory Management
|
||||
|
||||
State management through persistence and checkpoint features.
|
||||
|
||||
## Overview
|
||||
|
||||
LangGraph's **built-in persistence layer** allows you to save and restore agent state. This enables conversation continuation, error recovery, and time travel.
|
||||
|
||||
## Memory Types
|
||||
|
||||
### Short-term Memory: [Checkpointer](03_memory_management_checkpointer.md)
|
||||
- Automatically saves state at each superstep
|
||||
- Thread-based conversation management
|
||||
- Time travel functionality
|
||||
|
||||
### Long-term Memory: [Store](03_memory_management_store.md)
|
||||
- Share information across threads
|
||||
- Persist user information
|
||||
- Semantic search
|
||||
|
||||
## Key Features
|
||||
|
||||
### 1. [Persistence](03_memory_management_persistence.md)
|
||||
|
||||
**Checkpoints**: Save state at each superstep
|
||||
- Snapshot state at each stage of graph execution
|
||||
- Recoverable from failures
|
||||
- Track execution history
|
||||
|
||||
**Threads**: Unit of conversation
|
||||
- Identify conversations by `thread_id`
|
||||
- Each thread maintains independent state
|
||||
- Manage multiple conversations in parallel
|
||||
|
||||
**StateSnapshot**: Representation of checkpoints
|
||||
- `values`: State at that point in time
|
||||
- `next`: Nodes to execute next
|
||||
- `config`: Checkpoint configuration
|
||||
- `metadata`: Metadata
|
||||
|
||||
### 2. Human-in-the-Loop
|
||||
|
||||
**State Inspection**: Check state at any point
|
||||
```python
|
||||
state = graph.get_state(config)
|
||||
print(state.values)
|
||||
```
|
||||
|
||||
**Approval Flow**: Human approval before critical operations
|
||||
```python
|
||||
# Pause graph and wait for approval
|
||||
```
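
A minimal sketch of such a pause using `interrupt()` inside a node and resuming with a `Command` (assumes a checkpointer is configured; the `approved` key is illustrative):

```python
from langgraph.types import Command, interrupt

def approval_node(state):
    # Pause execution and surface a payload to the human reviewer
    decision = interrupt({"question": "Approve this operation?"})
    return {"approved": bool(decision)}

# Resume the paused thread with the reviewer's answer
# graph.invoke(Command(resume=True), config)
```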
|
||||
|
||||
### 3. Memory
|
||||
|
||||
**Conversation Memory**: Memory within a thread
|
||||
```python
|
||||
# Conversation continues when called with the same thread_id
|
||||
config = {"configurable": {"thread_id": "conversation-1"}}
|
||||
graph.invoke(input, config)
|
||||
```
|
||||
|
||||
**Long-term Memory**: Memory across threads
|
||||
```python
|
||||
# Save user information in Store
|
||||
store.put(("user", user_id), "preferences", user_prefs)
|
||||
```
|
||||
|
||||
### 4. Time Travel
|
||||
|
||||
Replay and fork past executions:
|
||||
```python
|
||||
# Resume from specific checkpoint
|
||||
history = graph.get_state_history(config)
|
||||
for state in history:
|
||||
print(f"Checkpoint: {state.config['configurable']['checkpoint_id']}")
|
||||
|
||||
# Re-execute from past checkpoint
|
||||
graph.invoke(input, past_checkpoint_config)
|
||||
```
|
||||
|
||||
## Checkpointer Implementations
|
||||
|
||||
LangGraph provides multiple checkpointer implementations:
|
||||
|
||||
### InMemorySaver (For Experimentation)
|
||||
```python
|
||||
from langgraph.checkpoint.memory import MemorySaver
|
||||
|
||||
checkpointer = MemorySaver()
|
||||
graph = builder.compile(checkpointer=checkpointer)
|
||||
```
|
||||
|
||||
### SqliteSaver (For Local Development)
|
||||
```python
|
||||
from langgraph.checkpoint.sqlite import SqliteSaver
|
||||
|
||||
checkpointer = SqliteSaver.from_conn_string("checkpoints.db")
|
||||
graph = builder.compile(checkpointer=checkpointer)
|
||||
```
|
||||
|
||||
### PostgresSaver (For Production)
|
||||
```python
|
||||
from langgraph.checkpoint.postgres import PostgresSaver
|
||||
|
||||
checkpointer = PostgresSaver.from_conn_string(
|
||||
"postgresql://user:pass@localhost/db"
|
||||
)
|
||||
graph = builder.compile(checkpointer=checkpointer)
|
||||
```
|
||||
|
||||
## Basic Usage Example
|
||||
|
||||
```python
|
||||
from langgraph.checkpoint.memory import MemorySaver
|
||||
|
||||
# Compile with checkpointer
|
||||
checkpointer = MemorySaver()
|
||||
graph = builder.compile(checkpointer=checkpointer)
|
||||
|
||||
# Execute with thread_id
|
||||
config = {"configurable": {"thread_id": "user-123"}}
|
||||
|
||||
# First execution
|
||||
result1 = graph.invoke({"messages": [("user", "Hello")]}, config)
|
||||
|
||||
# Continue in same thread
|
||||
result2 = graph.invoke({"messages": [("user", "How are you?")]}, config)
|
||||
|
||||
# Check state
|
||||
state = graph.get_state(config)
|
||||
print(state.values) # All messages so far
|
||||
|
||||
# Check history
|
||||
for state in graph.get_state_history(config):
|
||||
print(f"Step: {state.values}")
|
||||
```
|
||||
|
||||
## Key Principles
|
||||
|
||||
1. **Thread ID Management**: Use unique thread_id for each conversation
|
||||
2. **Checkpointer Selection**: Choose appropriate implementation for your use case
|
||||
3. **State Minimization**: Save only necessary information to keep checkpoint size small
|
||||
4. **Cleanup**: Periodically delete old checkpoints
|
||||
|
||||
## Next Steps
|
||||
|
||||
For details on each feature, refer to the following pages:
|
||||
|
||||
- [03_memory_management_persistence.md](03_memory_management_persistence.md) - Persistence details
|
||||
- [03_memory_management_checkpointer.md](03_memory_management_checkpointer.md) - Checkpointer implementation
|
||||
- [03_memory_management_store.md](03_memory_management_store.md) - Long-term memory management
|
||||
264
skills/langgraph-master/03_memory_management_persistence.md
Normal file
@@ -0,0 +1,264 @@
|
||||
# Persistence
|
||||
|
||||
Functionality to save and restore graph state.
|
||||
|
||||
## Overview
|
||||
|
||||
Persistence is a feature that **automatically saves** state at each stage of graph execution and allows you to restore it later.
|
||||
|
||||
## Basic Concepts
|
||||
|
||||
### Checkpoints
|
||||
|
||||
State is automatically saved after each **superstep** (set of nodes executed in parallel).
|
||||
|
||||
```python
|
||||
# Superstep 1: node_a and node_b execute in parallel
|
||||
# → Checkpoint 1
|
||||
|
||||
# Superstep 2: node_c executes
|
||||
# → Checkpoint 2
|
||||
|
||||
# Superstep 3: node_d executes
|
||||
# → Checkpoint 3
|
||||
```
|
||||
|
||||
### Threads
|
||||
|
||||
A thread groups a series of executions and accumulates their state under a single identifier:
|
||||
|
||||
```python
|
||||
config = {"configurable": {"thread_id": "conversation-123"}}
|
||||
```
|
||||
|
||||
Executing with the same `thread_id` continues from the previous state.
|
||||
|
||||
## Implementation Example
|
||||
|
||||
```python
|
||||
from langgraph.checkpoint.memory import MemorySaver
|
||||
from langgraph.graph import StateGraph, MessagesState
|
||||
|
||||
# Define graph
|
||||
builder = StateGraph(MessagesState)
|
||||
builder.add_node("chatbot", chatbot_node)
|
||||
builder.add_edge(START, "chatbot")
|
||||
builder.add_edge("chatbot", END)
|
||||
|
||||
# Compile with checkpointer
|
||||
checkpointer = MemorySaver()
|
||||
graph = builder.compile(checkpointer=checkpointer)
|
||||
|
||||
# Execute with thread ID
|
||||
config = {"configurable": {"thread_id": "user-001"}}
|
||||
|
||||
# First execution
|
||||
graph.invoke(
|
||||
{"messages": [{"role": "user", "content": "My name is Alice"}]},
|
||||
config
|
||||
)
|
||||
|
||||
# Continue in same thread (retains previous state)
|
||||
response = graph.invoke(
|
||||
{"messages": [{"role": "user", "content": "What's my name?"}]},
|
||||
config
|
||||
)
|
||||
|
||||
# → "Your name is Alice"
|
||||
```
|
||||
|
||||
## StateSnapshot Object
|
||||
|
||||
Checkpoints are represented as `StateSnapshot` objects:
|
||||
|
||||
```python
|
||||
class StateSnapshot:
|
||||
values: dict # State at that point in time
|
||||
next: tuple[str] # Nodes to execute next
|
||||
config: RunnableConfig # Checkpoint configuration
|
||||
metadata: dict # Metadata
|
||||
tasks: tuple[PregelTask] # Scheduled tasks
|
||||
```
|
||||
|
||||
### Getting Latest State
|
||||
|
||||
```python
|
||||
state = graph.get_state(config)
|
||||
|
||||
print(state.values) # Current state
|
||||
print(state.next) # Next nodes
|
||||
print(state.config) # Checkpoint configuration
|
||||
```
|
||||
|
||||
### Getting History
|
||||
|
||||
```python
|
||||
# Get list of StateSnapshots in chronological order
|
||||
for state in graph.get_state_history(config):
|
||||
print(f"Checkpoint: {state.config['configurable']['checkpoint_id']}")
|
||||
print(f"Values: {state.values}")
|
||||
print(f"Next: {state.next}")
|
||||
print("---")
|
||||
```
|
||||
|
||||
## Time Travel Feature
|
||||
|
||||
Resume execution from a specific checkpoint:
|
||||
|
||||
```python
|
||||
# Get specific checkpoint from history
|
||||
history = list(graph.get_state_history(config))
|
||||
|
||||
# Checkpoint from 3 steps ago
|
||||
past_state = history[3]
|
||||
|
||||
# Re-execute from that checkpoint
|
||||
result = graph.invoke(
|
||||
{"messages": [{"role": "user", "content": "New question"}]},
|
||||
past_state.config
|
||||
)
|
||||
```
|
||||
|
||||
### Validating Alternative Paths
|
||||
|
||||
```python
|
||||
# Get current state
|
||||
current_state = graph.get_state(config)
|
||||
|
||||
# Try with different input
|
||||
alt_result = graph.invoke(
|
||||
{"messages": [{"role": "user", "content": "Different question"}]},
|
||||
current_state.config
|
||||
)
|
||||
|
||||
# Original execution is not affected
|
||||
```
|
||||
|
||||
## Updating State
|
||||
|
||||
Directly update checkpoint state:
|
||||
|
||||
```python
|
||||
# Get current state
|
||||
state = graph.get_state(config)
|
||||
|
||||
# Update state
|
||||
graph.update_state(
|
||||
config,
|
||||
{"messages": [{"role": "assistant", "content": "Updated message"}]}
|
||||
)
|
||||
|
||||
# Resume from updated state
|
||||
graph.invoke({"messages": [...]}, config)
|
||||
```
|
||||
|
||||
## Use Cases
|
||||
|
||||
### 1. Conversation Continuation
|
||||
|
||||
```python
|
||||
# Session 1
|
||||
config = {"configurable": {"thread_id": "chat-1"}}
|
||||
graph.invoke({"messages": [("user", "Hello")]}, config)
|
||||
|
||||
# Session 2 (days later)
|
||||
# Remembers previous conversation
|
||||
graph.invoke({"messages": [("user", "Continuing from last time")]}, config)
|
||||
```
|
||||
|
||||
### 2. Error Recovery
|
||||
|
||||
```python
|
||||
try:
|
||||
graph.invoke(input, config)
|
||||
except Exception as e:
|
||||
# Even if error occurs, can recover from checkpoint
|
||||
print(f"Error: {e}")
|
||||
|
||||
# Check latest state
|
||||
state = graph.get_state(config)
|
||||
|
||||
# Fix state and re-execute
|
||||
graph.update_state(config, {"error_fixed": True})
|
||||
graph.invoke(input, config)
|
||||
```
|
||||
|
||||
### 3. A/B Testing
|
||||
|
||||
```python
|
||||
# Base execution
|
||||
base_result = graph.invoke(input, base_config)
|
||||
|
||||
# Alternative execution 1
|
||||
alt_config_1 = base_config.copy()
|
||||
alt_result_1 = graph.invoke(modified_input_1, alt_config_1)
|
||||
|
||||
# Alternative execution 2
|
||||
alt_config_2 = base_config.copy()
|
||||
alt_result_2 = graph.invoke(modified_input_2, alt_config_2)
|
||||
|
||||
# Compare results
|
||||
```
|
||||
|
||||
### 4. Debugging and Tracing
|
||||
|
||||
```python
|
||||
# Execute
|
||||
graph.invoke(input, config)
|
||||
|
||||
# Check each step
|
||||
for i, state in enumerate(graph.get_state_history(config)):
|
||||
print(f"Step {i}:")
|
||||
print(f" State: {state.values}")
|
||||
print(f" Next: {state.next}")
|
||||
```
|
||||
|
||||
## Important Considerations
|
||||
|
||||
### Thread ID Uniqueness
|
||||
|
||||
```python
|
||||
# Use different thread_id per user
|
||||
user_config = {"configurable": {"thread_id": f"user-{user_id}"}}
|
||||
|
||||
# Use different thread_id per conversation
|
||||
conversation_config = {"configurable": {"thread_id": f"conv-{conv_id}"}}
|
||||
```
|
||||
|
||||
### Checkpoint Cleanup
|
||||
|
||||
```python
|
||||
# Delete old checkpoints (implementation-dependent)
|
||||
checkpointer.cleanup(before_timestamp=old_timestamp)
|
||||
```
|
||||
|
||||
### Multi-user Support
|
||||
|
||||
```python
|
||||
# Combine user ID and session ID
|
||||
def get_config(user_id: str, session_id: str):
|
||||
return {
|
||||
"configurable": {
|
||||
"thread_id": f"{user_id}-{session_id}"
|
||||
}
|
||||
}
|
||||
|
||||
config = get_config("user123", "session456")
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Meaningful thread_id**: Format that can identify user, session, conversation
|
||||
2. **Regular Cleanup**: Delete old checkpoints
|
||||
3. **Appropriate Checkpointer**: Choose implementation based on use case
|
||||
4. **Error Handling**: Properly handle errors when retrieving checkpoints
|
||||
|
||||
## Summary
|
||||
|
||||
Persistence enables **state persistence and restoration**, making conversation continuation, error recovery, and time travel possible.
|
||||
|
||||
## Related Pages
|
||||
|
||||
- [03_memory_management_checkpointer.md](03_memory_management_checkpointer.md) - Checkpointer implementation details
|
||||
- [03_memory_management_store.md](03_memory_management_store.md) - Combining with long-term memory
|
||||
- [05_advanced_features_human_in_the_loop.md](05_advanced_features_human_in_the_loop.md) - Applications of state inspection
|
||||
287
skills/langgraph-master/03_memory_management_store.md
Normal file
@@ -0,0 +1,287 @@
|
||||
# Store (Long-term Memory)
|
||||
|
||||
Long-term memory for sharing information across multiple threads.
|
||||
|
||||
## Overview
|
||||
|
||||
Checkpointer only saves state within a single thread. To share information across multiple threads, use **Store**.
|
||||
|
||||
## Checkpointer vs Store
|
||||
|
||||
| Feature | Checkpointer | Store |
|---------|-------------|-------|
| Scope | Single thread | All threads |
| Purpose | Conversation state | User information |
| Auto-save | Yes | No (manual) |
| Search | thread_id | Namespace |
|
||||
|
||||
## Basic Usage
|
||||
|
||||
```python
|
||||
from langgraph.store.memory import InMemoryStore
|
||||
|
||||
# Create Store
|
||||
store = InMemoryStore()
|
||||
|
||||
# Save user information
|
||||
store.put(
|
||||
namespace=("users", "user-123"),
|
||||
key="preferences",
|
||||
value={
|
||||
"language": "en",
|
||||
"theme": "dark",
|
||||
"notifications": True
|
||||
}
|
||||
)
|
||||
|
||||
# Retrieve user information
|
||||
user_prefs = store.get(("users", "user-123"), "preferences")
|
||||
```
|
||||
|
||||
## Namespace
|
||||
|
||||
Namespaces are grouped by **tuples**:
|
||||
|
||||
```python
|
||||
# User information
|
||||
("users", user_id)
|
||||
|
||||
# Session information
|
||||
("sessions", session_id)
|
||||
|
||||
# Project information
|
||||
("projects", project_id, "documents")
|
||||
|
||||
# Hierarchical structure
|
||||
("organization", org_id, "department", dept_id)
|
||||
```
|
||||
|
||||
## Store Operations
|
||||
|
||||
### Save
|
||||
|
||||
```python
|
||||
store.put(
|
||||
namespace=("users", "alice"),
|
||||
key="profile",
|
||||
value={
|
||||
"name": "Alice",
|
||||
"email": "alice@example.com",
|
||||
"joined": "2024-01-01"
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
### Retrieve
|
||||
|
||||
```python
|
||||
# Single item
|
||||
profile = store.get(("users", "alice"), "profile")
|
||||
|
||||
# All items in namespace
|
||||
items = store.search(("users", "alice"))
|
||||
```
|
||||
|
||||
### Search
|
||||
|
||||
```python
|
||||
# Filter by namespace
|
||||
all_users = store.search(("users",))
|
||||
|
||||
# Filter by key
|
||||
profiles = store.search(("users",), filter={"key": "profile"})
|
||||
```
|
||||
|
||||
### Delete
|
||||
|
||||
```python
|
||||
# Single item
|
||||
store.delete(("users", "alice"), "profile")
|
||||
|
||||
# Entire namespace
|
||||
store.delete_namespace(("users", "alice"))
|
||||
```
|
||||
|
||||
## Integration with Graph
|
||||
|
||||
```python
|
||||
from langgraph.store.memory import InMemoryStore
|
||||
|
||||
store = InMemoryStore()
|
||||
|
||||
# Integrate Store with graph
|
||||
graph = builder.compile(
|
||||
checkpointer=checkpointer,
|
||||
store=store
|
||||
)
|
||||
|
||||
# Use Store within nodes
|
||||
def personalized_node(state: State, *, store):
|
||||
user_id = state["user_id"]
|
||||
|
||||
# Get user preferences
|
||||
prefs = store.get(("users", user_id), "preferences")
|
||||
|
||||
# Process based on preferences
|
||||
if prefs and prefs.value.get("language") == "en":
|
||||
response = generate_english_response(state)
|
||||
else:
|
||||
response = generate_default_response(state)
|
||||
|
||||
return {"response": response}
|
||||
```
|
||||
|
||||
## Semantic Search
|
||||
|
||||
Store implementations with vector search capability:
|
||||
|
||||
```python
|
||||
from langgraph.store.memory import InMemoryStore
|
||||
|
||||
store = InMemoryStore(index={"embed": embedding_function, "dims": 1536})  # embedding_function: any LangChain Embeddings instance or callable
|
||||
|
||||
# Save documents (automatically vectorized)
|
||||
store.put(
|
||||
("documents", "doc-1"),
|
||||
"content",
|
||||
{"text": "LangGraph is an agent framework"}
|
||||
)
|
||||
|
||||
# Semantic search
|
||||
results = store.search(
|
||||
("documents",),
|
||||
query="agent development"
|
||||
)
|
||||
```
|
||||
|
||||
## Practical Example: User Profile
|
||||
|
||||
```python
|
||||
class ProfileState(TypedDict):
|
||||
user_id: str
|
||||
messages: Annotated[list, add_messages]
|
||||
|
||||
def save_user_info(state: ProfileState, *, store):
|
||||
"""Extract and save user information from conversation"""
|
||||
messages = state["messages"]
|
||||
user_id = state["user_id"]
|
||||
|
||||
# Extract information with LLM
|
||||
info = extract_user_info(messages)
|
||||
|
||||
if info:
|
||||
# Save to Store
|
||||
current = store.get(("users", user_id), "profile")
|
||||
|
||||
if current:
|
||||
# Merge with existing information
|
||||
updated = {**current.value, **info}
|
||||
else:
|
||||
updated = info
|
||||
|
||||
store.put(
|
||||
("users", user_id),
|
||||
"profile",
|
||||
updated
|
||||
)
|
||||
|
||||
return {}
|
||||
|
||||
def personalized_response(state: ProfileState, *, store):
|
||||
"""Personalize using user information"""
|
||||
user_id = state["user_id"]
|
||||
|
||||
# Get user information
|
||||
profile = store.get(("users", user_id), "profile")
|
||||
|
||||
if profile:
|
||||
context = f"User context: {profile.value}"
|
||||
messages = [
|
||||
{"role": "system", "content": context},
|
||||
*state["messages"]
|
||||
]
|
||||
else:
|
||||
messages = state["messages"]
|
||||
|
||||
response = llm.invoke(messages)
|
||||
return {"messages": [response]}
|
||||
```
|
||||
|
||||
## Practical Example: Knowledge Base
|
||||
|
||||
```python
|
||||
def query_knowledge_base(state: State, *, store):
|
||||
"""Search for knowledge related to question"""
|
||||
query = state["messages"][-1].content
|
||||
|
||||
# Semantic search
|
||||
relevant_docs = store.search(
|
||||
("knowledge",),
|
||||
query=query,
|
||||
limit=3
|
||||
)
|
||||
|
||||
# Add relevant information to context
|
||||
context = "\n".join([
|
||||
doc.value["text"]
|
||||
for doc in relevant_docs
|
||||
])
|
||||
|
||||
# Pass to LLM
|
||||
response = llm.invoke([
|
||||
{"role": "system", "content": f"Context:\n{context}"},
|
||||
*state["messages"]
|
||||
])
|
||||
|
||||
return {"messages": [response]}
|
||||
```
|
||||
|
||||
## Store Implementations
|
||||
|
||||
### InMemoryStore
|
||||
|
||||
```python
|
||||
from langgraph.store.memory import InMemoryStore
|
||||
|
||||
store = InMemoryStore()
|
||||
```
|
||||
|
||||
### Custom Store
|
||||
|
||||
```python
|
||||
import json

from langgraph.store.base import BaseStore
|
||||
|
||||
class RedisStore(BaseStore):
|
||||
def __init__(self, redis_client):
|
||||
self.redis = redis_client
|
||||
|
||||
def put(self, namespace, key, value):
|
||||
ns_key = f"{':'.join(namespace)}:{key}"
|
||||
self.redis.set(ns_key, json.dumps(value))
|
||||
|
||||
def get(self, namespace, key):
|
||||
ns_key = f"{':'.join(namespace)}:{key}"
|
||||
data = self.redis.get(ns_key)
|
||||
return json.loads(data) if data else None
|
||||
|
||||
def search(self, namespace, filter=None):
|
||||
pattern = f"{':'.join(namespace)}:*"
|
||||
keys = self.redis.keys(pattern)
|
||||
return [self.get_by_key(k) for k in keys]
|
||||
```
|
||||
|
||||
## Best Practices
|
||||
|
||||
1. **Namespace Design**: Hierarchical and meaningful structure
|
||||
2. **Key Naming**: Clear and consistent naming conventions
|
||||
3. **Data Size**: Store references only for large data
|
||||
4. **Cleanup**: Periodic deletion of old data
|
||||
|
||||
## Summary
|
||||
|
||||
Store is long-term memory for sharing information across multiple threads. Use it for persisting user profiles, knowledge bases, settings, etc.
|
||||
|
||||
## Related Pages
|
||||
|
||||
- [03_memory_management_checkpointer.md](03_memory_management_checkpointer.md) - Differences from short-term memory
|
||||
- [03_memory_management_persistence.md](03_memory_management_persistence.md) - Persistence basics
|
||||
280
skills/langgraph-master/04_tool_integration_command_api.md
Normal file
@@ -0,0 +1,280 @@
|
||||
# Command API
|
||||
|
||||
An advanced API that integrates state updates and control flow.
|
||||
|
||||
## Overview
|
||||
|
||||
The Command API is a feature that allows nodes to specify **state updates** and **control flow** simultaneously.
|
||||
|
||||
## Basic Usage
|
||||
|
||||
```python
|
||||
from langgraph.types import Command
|
||||
|
||||
def decision_node(state: State) -> Command:
|
||||
"""Update state and specify the next node"""
|
||||
result = analyze(state["data"])
|
||||
|
||||
if result["confidence"] > 0.8:
|
||||
return Command(
|
||||
update={"result": result, "confident": True},
|
||||
goto="finalize"
|
||||
)
|
||||
else:
|
||||
return Command(
|
||||
update={"result": result, "confident": False},
|
||||
goto="review"
|
||||
)
|
||||
```
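
Because the destination is chosen at runtime, LangGraph cannot infer edges from a node that returns a bare `Command`; annotating the return type with the possible targets keeps the graph structure (and its rendering) accurate. A sketch based on the node above:

```python
from typing import Literal
from langgraph.types import Command

def decision_node(state: State) -> Command[Literal["finalize", "review"]]:
    """The Literal annotation declares which nodes this node may jump to."""
    result = analyze(state["data"])
    if result["confidence"] > 0.8:
        return Command(update={"result": result, "confident": True}, goto="finalize")
    return Command(update={"result": result, "confident": False}, goto="review")
```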
|
||||
|
||||
## Command Object Parameters
|
||||
|
||||
```python
|
||||
Command(
|
||||
update: dict, # Updates to state
|
||||
goto: str | list[str], # Next node(s) (single or multiple)
|
||||
graph: str | None = None # For subgraph navigation
|
||||
)
|
||||
```
|
||||
|
||||
## vs Traditional State Updates
|
||||
|
||||
### Traditional Method
|
||||
|
||||
```python
|
||||
def node(state: State) -> dict:
|
||||
return {"result": "value"}
|
||||
|
||||
# Control flow in edges
|
||||
def route(state: State) -> str:
|
||||
if state["result"] == "value":
|
||||
return "next_node"
|
||||
return "other_node"
|
||||
|
||||
builder.add_conditional_edges("node", route, {...})
|
||||
```
|
||||
|
||||
### Command API
|
||||
|
||||
```python
|
||||
def node(state: State) -> Command:
|
||||
return Command(
|
||||
update={"result": "value"},
|
||||
goto="next_node" # Specify control flow as well
|
||||
)
|
||||
|
||||
# No edges needed (Command controls flow)
|
||||
```
|
||||
|
||||
## Advanced Patterns
|
||||
|
||||
### Pattern 1: Conditional Branching
|
||||
|
||||
```python
|
||||
def validator(state: State) -> Command:
|
||||
"""Validate and determine next node"""
|
||||
is_valid = validate(state["data"])
|
||||
|
||||
if is_valid:
|
||||
return Command(
|
||||
update={"valid": True},
|
||||
goto="process"
|
||||
)
|
||||
else:
|
||||
return Command(
|
||||
update={"valid": False, "errors": get_errors(state["data"])},
|
||||
goto="error_handler"
|
||||
)
|
||||
```
|
||||
|
||||
### Pattern 2: Parallel Execution
|
||||
|
||||
```python
|
||||
def fan_out_node(state: State) -> Command:
|
||||
"""Branch to multiple nodes in parallel"""
|
||||
return Command(
|
||||
update={"started": True},
|
||||
goto=["worker_a", "worker_b", "worker_c"] # Parallel execution
|
||||
)
|
||||
```
|
||||
|
||||
### Pattern 3: Loop Control
|
||||
|
||||
```python
|
||||
def iterator_node(state: State) -> Command:
|
||||
"""Iterative processing"""
|
||||
iteration = state.get("iteration", 0) + 1
|
||||
result = process_iteration(state["data"], iteration)
|
||||
|
||||
if iteration < state["max_iterations"] and not result["done"]:
|
||||
return Command(
|
||||
update={"iteration": iteration, "result": result},
|
||||
goto="iterator_node" # Loop back to self
|
||||
)
|
||||
else:
|
||||
return Command(
|
||||
update={"final_result": result},
|
||||
goto=END
|
||||
)
|
||||
```
|
||||
|
||||
### Pattern 4: Subgraph Navigation
|
||||
|
||||
```python
|
||||
def sub_node(state: State) -> Command:
|
||||
"""Navigate from subgraph to parent graph"""
|
||||
result = process(state["data"])
|
||||
|
||||
if need_parent_intervention(result):
|
||||
return Command(
|
||||
update={"sub_result": result},
|
||||
goto="parent_handler",
|
||||
graph=Command.PARENT # Navigate to parent graph
|
||||
)
|
||||
|
||||
return {"sub_result": result}
|
||||
```
|
||||
|
||||
## Integration with Tools
|
||||
|
||||
### Control After Tool Execution
|
||||
|
||||
```python
|
||||
def tool_node_with_command(state: MessagesState) -> Command:
|
||||
"""Determine next action after tool execution"""
|
||||
last_message = state["messages"][-1]
|
||||
tool_results = []
|
||||
|
||||
for tool_call in last_message.tool_calls:
|
||||
tool = tool_map[tool_call["name"]]
|
||||
result = tool.invoke(tool_call["args"])
|
||||
|
||||
tool_results.append(
|
||||
ToolMessage(
|
||||
content=str(result),
|
||||
tool_call_id=tool_call["id"]
|
||||
)
|
||||
)
|
||||
|
||||
# Determine next node based on results
|
||||
if any("error" in r.content.lower() for r in tool_results):
|
||||
return Command(
|
||||
update={"messages": tool_results},
|
||||
goto="error_handler"
|
||||
)
|
||||
else:
|
||||
return Command(
|
||||
update={"messages": tool_results},
|
||||
goto="agent"
|
||||
)
|
||||
```
|
||||
|
||||
### Command from Within Tools

```python
from langgraph.types import interrupt

@tool
def send_email(to: str, subject: str, body: str) -> str:
    """Send email (with approval)"""

    # Request approval
    approved = interrupt({
        "action": "send_email",
        "to": to,
        "subject": subject,
        "message": "Approve sending this email?"
    })

    if approved:
        result = actually_send_email(to, subject, body)
        return f"Email sent to {to}"
    else:
        return "Email cancelled by user"
```

## Dynamic Routing

```python
def dynamic_router(state: State) -> Command:
    """Dynamically select route based on state"""
    score = evaluate(state["data"])

    # Select route based on score
    if score > 0.9:
        route = "expert_handler"
    elif score > 0.7:
        route = "standard_handler"
    else:
        route = "basic_handler"

    return Command(
        update={"confidence_score": score},
        goto=route
    )
```

## Error Recovery

```python
def processor_with_fallback(state: State) -> Command:
    """Fallback on error"""
    try:
        result = risky_operation(state["data"])

        return Command(
            update={"result": result, "error": None},
            goto="success_handler"
        )

    except Exception as e:
        return Command(
            update={"error": str(e)},
            goto="fallback_handler"
        )
```

## State Machine Implementation

```python
def state_machine_node(state: State) -> Command:
    """State machine"""
    current_state = state.get("state", "initial")

    transitions = {
        "initial": ("validate", {"state": "validating"}),
        "validating": ("process" if state.get("valid") else "error", {"state": "processing"}),
        "processing": ("finalize", {"state": "finalizing"}),
        "finalizing": (END, {"state": "done"})
    }

    next_node, update = transitions[current_state]

    return Command(
        update=update,
        goto=next_node
    )
```

## Benefits

✅ **Conciseness**: Define state updates and control flow in one place
✅ **Readability**: Node intent is clear
✅ **Flexibility**: Dynamic routing is easier
✅ **Debugging**: Control flow is easier to track

## Considerations

⚠️ **Complexity**: Avoid overly complex conditional branching
⚠️ **Testing**: All branches need to be tested (see the test sketch below)
⚠️ **Parallel Execution**: Order of parallel nodes is non-deterministic

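Because a node that returns `Command` is just a function, each branch can be unit-tested directly. A minimal sketch (assuming the `validator` node from Pattern 1 lives in a hypothetical `my_graph` module and that the test inputs can be shaped to make `validate()` pass or fail):

```python
# test_validator.py -- minimal sketch, not a definitive test suite
from my_graph import validator  # hypothetical module containing the node

def test_valid_data_routes_to_process():
    state = {"data": {"name": "ok"}}  # shaped so validate() succeeds
    command = validator(state)
    assert command.goto == "process"
    assert command.update == {"valid": True}

def test_invalid_data_routes_to_error_handler():
    state = {"data": {}}  # shaped so validate() fails
    command = validator(state)
    assert command.goto == "error_handler"
    assert command.update["valid"] is False
```
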
## Summary

The Command API integrates state updates and control flow, enabling more flexible and readable graph construction.

## Related Pages

- [01_core_concepts_node.md](01_core_concepts_node.md) - Node basics
- [01_core_concepts_edge.md](01_core_concepts_edge.md) - Comparison with edges
- [02_graph_architecture_subgraph.md](02_graph_architecture_subgraph.md) - Subgraph navigation

158
skills/langgraph-master/04_tool_integration_overview.md
Normal file
158
skills/langgraph-master/04_tool_integration_overview.md
Normal file
@@ -0,0 +1,158 @@
|
||||
# 04. Tool Integration
|
||||
|
||||
Integration and execution control of external tools.
|
||||
|
||||
## Overview
|
||||
|
||||
In LangGraph, LLMs can interact with external systems by calling **tools**. Tools provide various capabilities such as search, calculation, API calls, and more.
|
||||
|
||||
## Key Components
|
||||
|
||||
### 1. [Tool Definition](04_tool_integration_tool_definition.md)
|
||||
|
||||
How to define tools:
|
||||
- `@tool` decorator
|
||||
- Function descriptions and parameters
|
||||
- Structured output
|
||||
|
||||
### 2. [Tool Node](04_tool_integration_tool_node.md)
|
||||
|
||||
Nodes that execute tools:
|
||||
- Using `ToolNode`
|
||||
- Error handling
|
||||
- Custom tool nodes
|
||||
|
||||
### 3. [Command API](04_tool_integration_command_api.md)
|
||||
|
||||
Controlling tool execution:
|
||||
- Integration of state updates and control flow
|
||||
- Transition control from tools
|
||||
|
||||
## Basic Implementation
|
||||
|
||||
```python
|
||||
from langchain_core.tools import tool
|
||||
from langgraph.prebuilt import ToolNode
|
||||
from langgraph.graph import MessagesState, StateGraph
|
||||
|
||||
# 1. Define tools
|
||||
@tool
|
||||
def search(query: str) -> str:
|
||||
"""Perform a web search.
|
||||
|
||||
Args:
|
||||
query: Search query
|
||||
"""
|
||||
return perform_search(query)
|
||||
|
||||
@tool
|
||||
def calculator(expression: str) -> float:
|
||||
"""Calculate a mathematical expression.
|
||||
|
||||
Args:
|
||||
expression: Expression to calculate (e.g., "2 + 2")
|
||||
"""
|
||||
return eval(expression)
|
||||
|
||||
tools = [search, calculator]
|
||||
|
||||
# 2. Bind tools to LLM
|
||||
llm_with_tools = llm.bind_tools(tools)
|
||||
|
||||
# 3. Agent node
|
||||
def agent(state: MessagesState):
|
||||
response = llm_with_tools.invoke(state["messages"])
|
||||
return {"messages": [response]}
|
||||
|
||||
# 4. Tool node
|
||||
tool_node = ToolNode(tools)
|
||||
|
||||
# 5. Build graph
|
||||
builder = StateGraph(MessagesState)
|
||||
builder.add_node("agent", agent)
|
||||
builder.add_node("tools", tool_node)
|
||||
|
||||
# 6. Conditional edges
|
||||
def should_continue(state: MessagesState):
|
||||
last_message = state["messages"][-1]
|
||||
if last_message.tool_calls:
|
||||
return "tools"
|
||||
return END
|
||||
|
||||
builder.add_edge(START, "agent")
|
||||
builder.add_conditional_edges("agent", should_continue)
|
||||
builder.add_edge("tools", "agent")
|
||||
|
||||
graph = builder.compile()
|
||||
```
|
||||
|
||||
## Types of Tools
|
||||
|
||||
### Search Tools
|
||||
|
||||
```python
|
||||
@tool
|
||||
def web_search(query: str) -> str:
|
||||
"""Search the web"""
|
||||
return search_api(query)
|
||||
```
|
||||
|
||||
### Calculator Tools
|
||||
|
||||
```python
|
||||
@tool
|
||||
def calculator(expression: str) -> float:
|
||||
"""Calculate a mathematical expression"""
|
||||
return eval(expression)
|
||||
```
|
||||
|
||||
### API Tools
|
||||
|
||||
```python
|
||||
@tool
|
||||
def get_weather(city: str) -> dict:
|
||||
"""Get weather information"""
|
||||
return weather_api(city)
|
||||
```
|
||||
|
||||
### Database Tools
|
||||
|
||||
```python
|
||||
@tool
|
||||
def query_database(sql: str) -> list[dict]:
|
||||
"""Query the database"""
|
||||
return execute_sql(sql)
|
||||
```
|
||||
|
||||
## Tool Execution Flow
|
||||
|
||||
```
|
||||
User Query
|
||||
↓
|
||||
[Agent Node]
|
||||
↓
|
||||
LLM decides: Use tool?
|
||||
↓ Yes
|
||||
[Tool Node] ← Execute tool
|
||||
↓
|
||||
[Agent Node] ← Tool result
|
||||
↓
|
||||
LLM decides: Continue?
|
||||
↓ No
|
||||
Final Answer
|
||||
```
|
||||
|
||||
## Key Principles
|
||||
|
||||
1. **Clear Descriptions**: Write detailed docstrings for tools
|
||||
2. **Error Handling**: Handle tool execution errors appropriately
|
||||
3. **Type Safety**: Explicitly specify parameter types
|
||||
4. **Approval Flow**: Incorporate Human-in-the-Loop for critical tools (all four principles are sketched below)
|
||||
|
||||
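A minimal sketch that applies all four principles to a single tool (the `transfer_funds` tool and its `execute_transfer` backend are hypothetical, used only for illustration):

```python
from langchain_core.tools import tool
from langgraph.types import interrupt

@tool
def transfer_funds(account_id: str, amount: float) -> str:
    """Transfer funds from the specified account.

    Args:
        account_id: Identifier of the source account
        amount: Amount to transfer, in the account currency
    """
    # Principle 4: critical operation, so ask a human before acting
    approved = interrupt({
        "action": "transfer_funds",
        "account_id": account_id,
        "amount": amount
    })
    if not approved:
        return "Transfer cancelled by user"

    # Principle 2: report errors back to the LLM instead of crashing the graph
    try:
        confirmation = execute_transfer(account_id, amount)  # hypothetical backend call
        return f"Transfer completed: {confirmation}"
    except Exception as e:
        return f"Error executing transfer: {e}"
```

Principles 1 and 3 are covered by the docstring and the typed parameters.
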
## Next Steps
|
||||
|
||||
For details on each component, please refer to the following pages:
|
||||
|
||||
- [04_tool_integration_tool_definition.md](04_tool_integration_tool_definition.md) - How to define tools
|
||||
- [04_tool_integration_tool_node.md](04_tool_integration_tool_node.md) - Tool node implementation
|
||||
- [04_tool_integration_command_api.md](04_tool_integration_command_api.md) - Using the Command API
|
||||
227
skills/langgraph-master/04_tool_integration_tool_definition.md
Normal file
227
skills/langgraph-master/04_tool_integration_tool_definition.md
Normal file
@@ -0,0 +1,227 @@
|
||||
# Tool Definition
|
||||
|
||||
How to define tools and design patterns.
|
||||
|
||||
## Basic Definition
|
||||
|
||||
```python
|
||||
from langchain_core.tools import tool
|
||||
|
||||
@tool
|
||||
def search(query: str) -> str:
|
||||
"""Perform a web search.
|
||||
|
||||
Args:
|
||||
query: Search query
|
||||
"""
|
||||
return perform_search(query)
|
||||
```
|
||||
|
||||
## Key Elements
|
||||
|
||||
### 1. Docstring
|
||||
|
||||
Description for the LLM to understand the tool:
|
||||
|
||||
```python
|
||||
@tool
|
||||
def get_weather(location: str, unit: str = "celsius") -> str:
|
||||
"""Get the current weather for a specified location.
|
||||
|
||||
This tool provides up-to-date weather information for cities around the world.
|
||||
It includes detailed information such as temperature, humidity, and weather conditions.
|
||||
|
||||
Args:
|
||||
location: City name (e.g., "Tokyo", "New York", "London")
|
||||
unit: Temperature unit ("celsius" or "fahrenheit"), default is "celsius"
|
||||
|
||||
Returns:
|
||||
A string containing weather information
|
||||
|
||||
Examples:
|
||||
>>> get_weather("Tokyo")
|
||||
"Tokyo weather: Sunny, Temperature: 25°C, Humidity: 60%"
|
||||
"""
|
||||
return fetch_weather(location, unit)
|
||||
```
|
||||
|
||||
### 2. Type Annotations
|
||||
|
||||
Explicitly specify parameter and return value types:
|
||||
|
||||
```python
|
||||
from typing import List, Dict
|
||||
|
||||
@tool
|
||||
def search_products(
|
||||
query: str,
|
||||
max_results: int = 10,
|
||||
category: str | None = None
|
||||
) -> List[Dict[str, any]]:
|
||||
"""Search for products.
|
||||
|
||||
Args:
|
||||
query: Search keywords
|
||||
max_results: Maximum number of results
|
||||
category: Category filter (optional)
|
||||
"""
|
||||
return database.search(query, max_results, category)
|
||||
```
|
||||
|
||||
## Structured Output
|
||||
|
||||
Structured output using Pydantic models:
|
||||
|
||||
```python
|
||||
from pydantic import BaseModel, Field
|
||||
|
||||
class WeatherInfo(BaseModel):
|
||||
temperature: float = Field(description="Temperature in Celsius")
|
||||
humidity: int = Field(description="Humidity (%)")
|
||||
condition: str = Field(description="Weather condition")
|
||||
location: str = Field(description="Location")
|
||||
|
||||
@tool(response_format="content_and_artifact")
|
||||
def get_detailed_weather(location: str) -> tuple[str, WeatherInfo]:
|
||||
"""Get detailed weather information.
|
||||
|
||||
Args:
|
||||
location: City name
|
||||
"""
|
||||
data = fetch_weather_data(location)
|
||||
|
||||
weather = WeatherInfo(
|
||||
temperature=data["temp"],
|
||||
humidity=data["humidity"],
|
||||
condition=data["condition"],
|
||||
location=location
|
||||
)
|
||||
|
||||
summary = f"{location} weather: {weather.condition}, {weather.temperature}°C"
|
||||
|
||||
return summary, weather
|
||||
```
|
||||
|
||||
## Best Practices for Tool Design
|
||||
|
||||
### 1. Single Responsibility
|
||||
|
||||
```python
|
||||
# Good: Does one thing well
|
||||
@tool
|
||||
def send_email(to: str, subject: str, body: str) -> str:
|
||||
"""Send an email"""
|
||||
|
||||
# Bad: Multiple responsibilities
|
||||
@tool
|
||||
def send_and_log_email(to: str, subject: str, body: str, log_file: str) -> str:
|
||||
"""Send an email and log it"""
|
||||
# Two different responsibilities
|
||||
```
|
||||
|
||||
### 2. Clear Parameters
|
||||
|
||||
```python
|
||||
# Good: Clear parameters
|
||||
@tool
|
||||
def book_meeting(
|
||||
title: str,
|
||||
start_time: str, # "2024-01-01 10:00"
|
||||
duration_minutes: int,
|
||||
attendees: List[str]
|
||||
) -> str:
|
||||
"""Book a meeting"""
|
||||
|
||||
# Bad: Ambiguous parameters
|
||||
@tool
|
||||
def book_meeting(data: dict) -> str:
|
||||
"""Book a meeting"""
|
||||
```
|
||||
|
||||
### 3. Error Handling
|
||||
|
||||
```python
|
||||
@tool
|
||||
def divide(a: float, b: float) -> float:
|
||||
"""Divide two numbers.
|
||||
|
||||
Args:
|
||||
a: Dividend
|
||||
b: Divisor
|
||||
|
||||
Raises:
|
||||
ValueError: If b is 0
|
||||
"""
|
||||
if b == 0:
|
||||
raise ValueError("Cannot divide by zero")
|
||||
|
||||
return a / b
|
||||
```
|
||||
|
||||
## Dynamic Tool Generation
|
||||
|
||||
Automatically generate tools from API schemas:
|
||||
|
||||
```python
|
||||
import requests
from langchain_core.tools import tool

def create_api_tool(endpoint: str, method: str, description: str):
    """Generate tools from API specifications"""

    def api_tool(**kwargs) -> dict:
        response = requests.request(
            method=method,
            url=endpoint,
            json=kwargs
        )
        return response.json()

    # An f-string inside the function body is not treated as a docstring,
    # so set the description explicitly before wrapping it as a tool
    api_tool.__doc__ = f"{description}\n\nAPI Endpoint: {endpoint}\nMethod: {method}"
    return tool(api_tool)

# Example usage
create_user_tool = create_api_tool(
    endpoint="https://api.example.com/users",
    method="POST",
    description="Create a new user"
)
```
|
||||
|
||||
## Grouping Tools
|
||||
|
||||
Group related tools together:
|
||||
|
||||
```python
|
||||
# Database tool group
|
||||
database_tools = [
|
||||
query_users_tool,
|
||||
update_user_tool,
|
||||
delete_user_tool
|
||||
]
|
||||
|
||||
# Search tool group
|
||||
search_tools = [
|
||||
web_search_tool,
|
||||
image_search_tool,
|
||||
news_search_tool
|
||||
]
|
||||
|
||||
# Select based on context
|
||||
if user.role == "admin":
|
||||
tools = database_tools + search_tools
|
||||
else:
|
||||
tools = search_tools
|
||||
```
|
||||
|
||||
## Summary
|
||||
|
||||
Tool definitions require clear and detailed docstrings, appropriate type annotations, and error handling.
|
||||
|
||||
## Related Pages
|
||||
|
||||
- [04_tool_integration_tool_node.md](04_tool_integration_tool_node.md) - Using tools in tool nodes
|
||||
- [04_tool_integration_command_api.md](04_tool_integration_command_api.md) - Integration with Command API
|
||||
318
skills/langgraph-master/04_tool_integration_tool_node.md
Normal file
318
skills/langgraph-master/04_tool_integration_tool_node.md
Normal file
@@ -0,0 +1,318 @@
|
||||
# Tool Node
|
||||
|
||||
Implementation of nodes that execute tools.
|
||||
|
||||
## ToolNode (Built-in)
|
||||
|
||||
The simplest approach:
|
||||
|
||||
```python
|
||||
from langgraph.prebuilt import ToolNode
|
||||
|
||||
tools = [search_tool, calculator_tool]
|
||||
tool_node = ToolNode(tools)
|
||||
|
||||
# Add to graph
|
||||
builder.add_node("tools", tool_node)
|
||||
```
|
||||
|
||||
## How It Works
|
||||
|
||||
ToolNode:
|
||||
1. Extracts `tool_calls` from the last message
|
||||
2. Executes each tool
|
||||
3. Returns results as `ToolMessage`
|
||||
|
||||
```python
|
||||
# Input
|
||||
{
|
||||
"messages": [
|
||||
AIMessage(tool_calls=[
|
||||
{"name": "search", "args": {"query": "weather"}, "id": "1"}
|
||||
])
|
||||
]
|
||||
}
|
||||
|
||||
# ToolNode execution
|
||||
|
||||
# Output
|
||||
{
|
||||
"messages": [
|
||||
ToolMessage(
|
||||
content="Sunny, 25°C",
|
||||
tool_call_id="1"
|
||||
)
|
||||
]
|
||||
}
|
||||
```
|
||||
|
||||
## Custom Tool Node
|
||||
|
||||
For finer control:
|
||||
|
||||
```python
|
||||
def custom_tool_node(state: MessagesState):
|
||||
"""Custom tool node"""
|
||||
last_message = state["messages"][-1]
|
||||
tool_results = []
|
||||
|
||||
for tool_call in last_message.tool_calls:
|
||||
# Find the tool
|
||||
tool = tool_map.get(tool_call["name"])
|
||||
|
||||
if not tool:
|
||||
result = f"Tool {tool_call['name']} not found"
|
||||
else:
|
||||
try:
|
||||
# Execute the tool
|
||||
result = tool.invoke(tool_call["args"])
|
||||
except Exception as e:
|
||||
result = f"Error: {str(e)}"
|
||||
|
||||
# Create ToolMessage
|
||||
tool_results.append(
|
||||
ToolMessage(
|
||||
content=str(result),
|
||||
tool_call_id=tool_call["id"]
|
||||
)
|
||||
)
|
||||
|
||||
return {"messages": tool_results}
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
### Basic Error Handling
|
||||
|
||||
```python
|
||||
def robust_tool_node(state: MessagesState):
|
||||
"""Tool node with error handling"""
|
||||
last_message = state["messages"][-1]
|
||||
tool_results = []
|
||||
|
||||
for tool_call in last_message.tool_calls:
|
||||
try:
|
||||
tool = tool_map[tool_call["name"]]
|
||||
result = tool.invoke(tool_call["args"])
|
||||
|
||||
tool_results.append(
|
||||
ToolMessage(
|
||||
content=str(result),
|
||||
tool_call_id=tool_call["id"]
|
||||
)
|
||||
)
|
||||
|
||||
except KeyError:
|
||||
# Tool not found
|
||||
tool_results.append(
|
||||
ToolMessage(
|
||||
content=f"Error: Tool '{tool_call['name']}' not found",
|
||||
tool_call_id=tool_call["id"]
|
||||
)
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
# Execution error
|
||||
tool_results.append(
|
||||
ToolMessage(
|
||||
content=f"Error executing tool: {str(e)}",
|
||||
tool_call_id=tool_call["id"]
|
||||
)
|
||||
)
|
||||
|
||||
return {"messages": tool_results}
|
||||
```
|
||||
|
||||
### Retry Logic
|
||||
|
||||
```python
|
||||
import time
|
||||
|
||||
def tool_node_with_retry(state: MessagesState, max_retries: int = 3):
|
||||
"""Tool node with retry"""
|
||||
last_message = state["messages"][-1]
|
||||
tool_results = []
|
||||
|
||||
for tool_call in last_message.tool_calls:
|
||||
tool = tool_map[tool_call["name"]]
|
||||
retry_count = 0
|
||||
|
||||
while retry_count < max_retries:
|
||||
try:
|
||||
result = tool.invoke(tool_call["args"])
|
||||
|
||||
tool_results.append(
|
||||
ToolMessage(
|
||||
content=str(result),
|
||||
tool_call_id=tool_call["id"]
|
||||
)
|
||||
)
|
||||
break
|
||||
|
||||
except TransientError as e:
|
||||
retry_count += 1
|
||||
if retry_count >= max_retries:
|
||||
tool_results.append(
|
||||
ToolMessage(
|
||||
content=f"Failed after {max_retries} retries: {str(e)}",
|
||||
tool_call_id=tool_call["id"]
|
||||
)
|
||||
)
|
||||
else:
|
||||
time.sleep(2 ** retry_count) # Exponential backoff
|
||||
|
||||
except Exception as e:
|
||||
# Non-retryable error
|
||||
tool_results.append(
|
||||
ToolMessage(
|
||||
content=f"Error: {str(e)}",
|
||||
tool_call_id=tool_call["id"]
|
||||
)
|
||||
)
|
||||
break
|
||||
|
||||
return {"messages": tool_results}
|
||||
```
|
||||
|
||||
## Conditional Tool Execution
|
||||
|
||||
```python
|
||||
def conditional_tool_node(state: MessagesState, *, store):
|
||||
"""Tool node with permission checking"""
|
||||
user_id = state.get("user_id")
|
||||
user = store.get(("users", user_id), "profile")
|
||||
|
||||
last_message = state["messages"][-1]
|
||||
tool_results = []
|
||||
|
||||
for tool_call in last_message.tool_calls:
|
||||
tool = tool_map[tool_call["name"]]
|
||||
|
||||
# Permission check
|
||||
if not has_permission(user, tool.name):
|
||||
tool_results.append(
|
||||
ToolMessage(
|
||||
content=f"Permission denied for tool '{tool.name}'",
|
||||
tool_call_id=tool_call["id"]
|
||||
)
|
||||
)
|
||||
continue
|
||||
|
||||
# Execute
|
||||
result = tool.invoke(tool_call["args"])
|
||||
tool_results.append(
|
||||
ToolMessage(
|
||||
content=str(result),
|
||||
tool_call_id=tool_call["id"]
|
||||
)
|
||||
)
|
||||
|
||||
return {"messages": tool_results}
|
||||
```
|
||||
|
||||
## Logging Tool Execution
|
||||
|
||||
```python
|
||||
import logging
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
def logged_tool_node(state: MessagesState):
|
||||
"""Tool node with logging"""
|
||||
last_message = state["messages"][-1]
|
||||
tool_results = []
|
||||
|
||||
for tool_call in last_message.tool_calls:
|
||||
tool = tool_map[tool_call["name"]]
|
||||
|
||||
logger.info(
|
||||
f"Executing tool: {tool.name}",
|
||||
extra={
|
||||
"tool": tool.name,
|
||||
"args": tool_call["args"],
|
||||
"call_id": tool_call["id"]
|
||||
}
|
||||
)
|
||||
|
||||
try:
|
||||
start = time.time()
|
||||
result = tool.invoke(tool_call["args"])
|
||||
duration = time.time() - start
|
||||
|
||||
logger.info(
|
||||
f"Tool completed: {tool.name}",
|
||||
extra={
|
||||
"tool": tool.name,
|
||||
"duration": duration,
|
||||
"success": True
|
||||
}
|
||||
)
|
||||
|
||||
tool_results.append(
|
||||
ToolMessage(
|
||||
content=str(result),
|
||||
tool_call_id=tool_call["id"]
|
||||
)
|
||||
)
|
||||
|
||||
except Exception as e:
|
||||
logger.error(
|
||||
f"Tool failed: {tool.name}",
|
||||
extra={
|
||||
"tool": tool.name,
|
||||
"error": str(e)
|
||||
},
|
||||
exc_info=True
|
||||
)
|
||||
|
||||
tool_results.append(
|
||||
ToolMessage(
|
||||
content=f"Error: {str(e)}",
|
||||
tool_call_id=tool_call["id"]
|
||||
)
|
||||
)
|
||||
|
||||
return {"messages": tool_results}
|
||||
```
|
||||
|
||||
## Parallel Tool Execution
|
||||
|
||||
```python
|
||||
from concurrent.futures import ThreadPoolExecutor
|
||||
|
||||
def parallel_tool_node(state: MessagesState):
|
||||
"""Execute tools in parallel"""
|
||||
last_message = state["messages"][-1]
|
||||
|
||||
def execute_tool(tool_call):
|
||||
tool = tool_map[tool_call["name"]]
|
||||
try:
|
||||
result = tool.invoke(tool_call["args"])
|
||||
return ToolMessage(
|
||||
content=str(result),
|
||||
tool_call_id=tool_call["id"]
|
||||
)
|
||||
except Exception as e:
|
||||
return ToolMessage(
|
||||
content=f"Error: {str(e)}",
|
||||
tool_call_id=tool_call["id"]
|
||||
)
|
||||
|
||||
with ThreadPoolExecutor(max_workers=5) as executor:
|
||||
tool_results = list(executor.map(
|
||||
execute_tool,
|
||||
last_message.tool_calls
|
||||
))
|
||||
|
||||
return {"messages": tool_results}
|
||||
```
|
||||
|
||||
## Summary
|
||||
|
||||
ToolNode executes tools and returns results as ToolMessage. You can add error handling, permission checks, logging, and more.
|
||||
|
||||
## Related Pages
|
||||
|
||||
- [04_tool_integration_tool_definition.md](04_tool_integration_tool_definition.md) - Tool definition
|
||||
- [04_tool_integration_command_api.md](04_tool_integration_command_api.md) - Integration with Command API
|
||||
- [05_advanced_features_human_in_the_loop.md](05_advanced_features_human_in_the_loop.md) - Combining with approval flows
|
||||
@@ -0,0 +1,289 @@
|
||||
# Human-in-the-Loop (Approval Flow)
|
||||
|
||||
A feature to pause graph execution and request human intervention.
|
||||
|
||||
## Overview
|
||||
|
||||
Human-in-the-Loop is a feature that requests **human approval or input** before important decisions or actions.
|
||||
|
||||
## Dynamic Interrupt (Recommended)
|
||||
|
||||
### Basic Usage
|
||||
|
||||
```python
|
||||
from langgraph.types import interrupt
|
||||
|
||||
def approval_node(state: State):
|
||||
"""Request approval"""
|
||||
approved = interrupt("Do you approve this action?")
|
||||
|
||||
if approved:
|
||||
return {"status": "approved"}
|
||||
else:
|
||||
return {"status": "rejected"}
|
||||
```
|
||||
|
||||
### Execution
|
||||
|
||||
```python
from langgraph.types import Command

# Initial execution (stops at interrupt)
result = graph.invoke(input, config)

# Check interrupt information
print(result["__interrupt__"])  # [Interrupt(value='Do you approve this action?', ...)]

# Approve and resume
graph.invoke(Command(resume=True), config)

# Or reject
graph.invoke(Command(resume=False), config)
```
|
||||
|
||||
## Application Patterns
|
||||
|
||||
### Pattern 1: Approve or Reject
|
||||
|
||||
```python
|
||||
def action_approval(state: State):
|
||||
"""Approval before action execution"""
|
||||
action_details = prepare_action(state)
|
||||
|
||||
approved = interrupt({
|
||||
"question": "Approve this action?",
|
||||
"details": action_details
|
||||
})
|
||||
|
||||
if approved:
|
||||
result = execute_action(action_details)
|
||||
return {"result": result, "approved": True}
|
||||
else:
|
||||
return {"result": None, "approved": False}
|
||||
```
|
||||
|
||||
### Pattern 2: Editable Approval
|
||||
|
||||
```python
|
||||
def review_and_edit(state: State):
|
||||
"""Review and edit generated content"""
|
||||
generated = generate_content(state)
|
||||
|
||||
edited_content = interrupt({
|
||||
"instruction": "Review and edit this content",
|
||||
"content": generated
|
||||
})
|
||||
|
||||
return {"final_content": edited_content}
|
||||
|
||||
# Resume with the edited version
graph.invoke(Command(resume=edited_version), config)
|
||||
```
|
||||
|
||||
### Pattern 3: Tool Execution Approval
|
||||
|
||||
```python
|
||||
@tool
|
||||
def send_email(to: str, subject: str, body: str):
|
||||
"""Send email (with approval)"""
|
||||
response = interrupt({
|
||||
"action": "send_email",
|
||||
"to": to,
|
||||
"subject": subject,
|
||||
"body": body,
|
||||
"message": "Approve sending this email?"
|
||||
})
|
||||
|
||||
if response.get("action") == "approve":
|
||||
# When approved, parameters can also be edited
|
||||
final_to = response.get("to", to)
|
||||
final_subject = response.get("subject", subject)
|
||||
final_body = response.get("body", body)
|
||||
|
||||
return actually_send_email(final_to, final_subject, final_body)
|
||||
else:
|
||||
return "Email cancelled by user"
|
||||
```
|
||||
|
||||
### Pattern 4: Input Validation Loop
|
||||
|
||||
```python
|
||||
def get_valid_input(state: State):
|
||||
"""Loop until valid input is obtained"""
|
||||
prompt = "Enter a positive number:"
|
||||
|
||||
while True:
|
||||
answer = interrupt(prompt)
|
||||
|
||||
if isinstance(answer, (int, float)) and answer > 0:
|
||||
break
|
||||
|
||||
prompt = f"'{answer}' is invalid. Enter a positive number:"
|
||||
|
||||
return {"value": answer}
|
||||
```
|
||||
|
||||
## Static Interrupt (For Debugging)
|
||||
|
||||
Set breakpoints at compile time:
|
||||
|
||||
```python
|
||||
graph = builder.compile(
|
||||
checkpointer=checkpointer,
|
||||
interrupt_before=["risky_node"], # Stop before node execution
|
||||
interrupt_after=["generate_content"] # Stop after node execution
|
||||
)
|
||||
|
||||
# Execute (stops before specified node)
|
||||
graph.invoke(input, config)
|
||||
|
||||
# Check state
|
||||
state = graph.get_state(config)
|
||||
|
||||
# Resume
|
||||
graph.invoke(None, config)
|
||||
```
|
||||
|
||||
## Practical Example: Multi-Stage Approval Workflow
|
||||
|
||||
```python
|
||||
from langgraph.types import interrupt, Command
|
||||
|
||||
class ApprovalState(TypedDict):
|
||||
request: str
|
||||
draft: str
|
||||
reviewed: str
|
||||
approved: bool
|
||||
|
||||
def draft_node(state: ApprovalState):
|
||||
"""Create draft"""
|
||||
draft = create_draft(state["request"])
|
||||
return {"draft": draft}
|
||||
|
||||
def review_node(state: ApprovalState):
|
||||
"""Review and edit"""
|
||||
reviewed = interrupt({
|
||||
"type": "review",
|
||||
"content": state["draft"],
|
||||
"instruction": "Review and improve the draft"
|
||||
})
|
||||
|
||||
return {"reviewed": reviewed}
|
||||
|
||||
def approval_node(state: ApprovalState):
|
||||
"""Final approval"""
|
||||
approved = interrupt({
|
||||
"type": "approval",
|
||||
"content": state["reviewed"],
|
||||
"question": "Approve for publication?"
|
||||
})
|
||||
|
||||
if approved:
|
||||
return Command(
|
||||
update={"approved": True},
|
||||
goto="publish"
|
||||
)
|
||||
else:
|
||||
return Command(
|
||||
update={"approved": False},
|
||||
goto="draft" # Return to draft
|
||||
)
|
||||
|
||||
def publish_node(state: ApprovalState):
|
||||
"""Publish"""
|
||||
publish(state["reviewed"])
|
||||
return {"status": "published"}
|
||||
|
||||
# Build graph
|
||||
builder.add_node("draft", draft_node)
|
||||
builder.add_node("review", review_node)
|
||||
builder.add_node("approval", approval_node)
|
||||
builder.add_node("publish", publish_node)
|
||||
|
||||
builder.add_edge(START, "draft")
|
||||
builder.add_edge("draft", "review")
|
||||
builder.add_edge("review", "approval")
|
||||
# approval node determines control flow with Command
|
||||
builder.add_edge("publish", END)
|
||||
```
|
||||
|
||||
## Important Rules
|
||||
|
||||
### ✅ Recommendations
|
||||
|
||||
- Pass values in JSON format
|
||||
- Keep `interrupt()` call order consistent
|
||||
- Make processing before `interrupt()` idempotent (see the sketch after these lists)
|
||||
|
||||
### ❌ Prohibitions
|
||||
|
||||
- Don't catch `interrupt()` with `try-except`
|
||||
- Don't skip `interrupt()` conditionally
|
||||
- Don't pass non-serializable objects
|
||||
|
||||
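The idempotency rule matters because everything in the node before `interrupt()` runs again when the graph resumes from that node. A minimal sketch of the pitfall and its fix (`charge_payment` is a hypothetical side effect):

```python
from langgraph.types import interrupt

def risky_node(state: State):
    # BAD: runs twice -- once on the first pass, again when resuming
    charge_payment(state["order_id"])          # hypothetical side effect
    approved = interrupt("Confirm the charge?")
    return {"approved": approved}

def safe_node(state: State):
    # GOOD: ask first, perform the side effect only after resuming
    approved = interrupt("Confirm the charge?")
    if approved:
        charge_payment(state["order_id"])      # hypothetical side effect
    return {"approved": approved}
```
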
## Use Cases
|
||||
|
||||
### 1. High-Risk Operation Approval
|
||||
|
||||
```python
|
||||
def delete_data(state: State):
|
||||
"""Delete data (approval required)"""
|
||||
approved = interrupt({
|
||||
"action": "delete_data",
|
||||
"warning": "This cannot be undone!",
|
||||
"data_count": len(state["data_to_delete"])
|
||||
})
|
||||
|
||||
if approved:
|
||||
execute_delete(state["data_to_delete"])
|
||||
return {"deleted": True}
|
||||
return {"deleted": False}
|
||||
```
|
||||
|
||||
### 2. Creative Work Review
|
||||
|
||||
```python
|
||||
def creative_generation(state: State):
|
||||
"""Creative content generation and review"""
|
||||
versions = []
|
||||
|
||||
for _ in range(3):
|
||||
version = generate_creative(state["prompt"])
|
||||
versions.append(version)
|
||||
|
||||
selected = interrupt({
|
||||
"type": "select_version",
|
||||
"versions": versions,
|
||||
"instruction": "Select the best version or request regeneration"
|
||||
})
|
||||
|
||||
return {"final_version": selected}
|
||||
```
|
||||
|
||||
### 3. Incremental Data Input
|
||||
|
||||
```python
|
||||
def collect_user_info(state: State):
|
||||
"""Collect user information incrementally"""
|
||||
name = interrupt("What is your name?")
|
||||
|
||||
age = interrupt(f"Hello {name}, what is your age?")
|
||||
|
||||
city = interrupt("What city do you live in?")
|
||||
|
||||
return {
|
||||
"user_info": {
|
||||
"name": name,
|
||||
"age": age,
|
||||
"city": city
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
## Summary
|
||||
|
||||
Human-in-the-Loop is a feature for incorporating human judgment in important decisions. Dynamic interrupt is flexible and recommended.
|
||||
|
||||
## Related Pages
|
||||
|
||||
- [03_memory_management_persistence.md](03_memory_management_persistence.md) - Checkpointer is required
|
||||
- [02_graph_architecture_agent.md](02_graph_architecture_agent.md) - Combination with agents
|
||||
- [04_tool_integration_tool_node.md](04_tool_integration_tool_node.md) - Approval before tool execution
|
||||
283
skills/langgraph-master/05_advanced_features_map_reduce.md
Normal file
283
skills/langgraph-master/05_advanced_features_map_reduce.md
Normal file
@@ -0,0 +1,283 @@
|
||||
# Map-Reduce (Parallel Processing Pattern)
|
||||
|
||||
A pattern for parallel processing and aggregation of large datasets.
|
||||
|
||||
## Overview
|
||||
|
||||
Map-Reduce is a pattern that combines **Map** (parallel processing) and **Reduce** (aggregation). In LangGraph, it's implemented using the Send API.
|
||||
|
||||
## Basic Implementation
|
||||
|
||||
```python
|
||||
from langgraph.types import Send
|
||||
from typing import Annotated
|
||||
from operator import add
|
||||
|
||||
class MapReduceState(TypedDict):
|
||||
items: list[str]
|
||||
results: Annotated[list[str], add]
|
||||
final_result: str
|
||||
|
||||
def map_node(state: MapReduceState):
|
||||
"""Map: Send each item to worker"""
|
||||
return [
|
||||
Send("worker", {"item": item})
|
||||
for item in state["items"]
|
||||
]
|
||||
|
||||
def worker_node(item_state: dict):
|
||||
"""Process individual item"""
|
||||
result = process_item(item_state["item"])
|
||||
return {"results": [result]}
|
||||
|
||||
def reduce_node(state: MapReduceState):
|
||||
"""Reduce: Aggregate results"""
|
||||
final = aggregate_results(state["results"])
|
||||
return {"final_result": final}
|
||||
|
||||
# Build graph
builder = StateGraph(MapReduceState)
builder.add_node("worker", worker_node)
builder.add_node("reduce", reduce_node)

# map_node returns Send objects, so wire it in as a conditional edge
# rather than as a regular node
builder.add_conditional_edges(START, map_node, ["worker"])
builder.add_edge("worker", "reduce")
builder.add_edge("reduce", END)

graph = builder.compile()
|
||||
```
|
||||
|
||||
## Types of Reducers
|
||||
|
||||
### Addition (List Concatenation)
|
||||
|
||||
```python
|
||||
from operator import add
|
||||
|
||||
class State(TypedDict):
|
||||
results: Annotated[list, add] # Concatenate lists
|
||||
|
||||
# [1, 2] + [3, 4] = [1, 2, 3, 4]
|
||||
```
|
||||
|
||||
### Custom Reducer
|
||||
|
||||
```python
|
||||
def merge_dicts(left: dict, right: dict) -> dict:
|
||||
"""Merge dictionaries"""
|
||||
return {**left, **right}
|
||||
|
||||
class State(TypedDict):
|
||||
data: Annotated[dict, merge_dicts]
|
||||
```
|
||||
|
||||
## Application Patterns
|
||||
|
||||
### Pattern 1: Parallel Document Summarization
|
||||
|
||||
```python
|
||||
class DocSummaryState(TypedDict):
|
||||
documents: list[str]
|
||||
summaries: Annotated[list[str], add]
|
||||
final_summary: str
|
||||
|
||||
def map_documents(state: DocSummaryState):
|
||||
"""Send each document to worker"""
|
||||
return [
|
||||
Send("summarize_worker", {"doc": doc, "index": i})
|
||||
for i, doc in enumerate(state["documents"])
|
||||
]
|
||||
|
||||
def summarize_worker(worker_state: dict):
|
||||
"""Summarize individual document"""
|
||||
summary = llm.invoke(f"Summarize: {worker_state['doc']}")
|
||||
return {"summaries": [summary]}
|
||||
|
||||
def final_summary_node(state: DocSummaryState):
|
||||
"""Integrate all summaries"""
|
||||
combined = "\n".join(state["summaries"])
|
||||
final = llm.invoke(f"Create final summary from:\n{combined}")
|
||||
return {"final_summary": final}
|
||||
```
|
||||
|
||||
### Pattern 2: Hierarchical Map-Reduce
|
||||
|
||||
```python
|
||||
def level1_map(state: State):
|
||||
"""Level 1: Split data into chunks"""
|
||||
chunks = create_chunks(state["data"], chunk_size=100)
|
||||
return [
|
||||
Send("level1_worker", {"chunk": chunk})
|
||||
for chunk in chunks
|
||||
]
|
||||
|
||||
def level1_worker(worker_state: dict):
|
||||
"""Level 1 worker: Aggregate within chunk"""
|
||||
partial_result = aggregate_chunk(worker_state["chunk"])
|
||||
return {"level1_results": [partial_result]}
|
||||
|
||||
def level2_map(state: State):
|
||||
"""Level 2: Further aggregate partial results"""
|
||||
return [
|
||||
Send("level2_worker", {"partial": result})
|
||||
for result in state["level1_results"]
|
||||
]
|
||||
|
||||
def level2_worker(worker_state: dict):
|
||||
"""Level 2 worker: Final aggregation"""
|
||||
final = final_aggregate(worker_state["partial"])
|
||||
return {"final_result": final}
|
||||
```
|
||||
|
||||
### Pattern 3: Dynamic Parallelism Control
|
||||
|
||||
```python
|
||||
import os
|
||||
|
||||
def adaptive_map(state: State):
|
||||
"""Adjust parallelism based on system resources"""
|
||||
max_workers = int(os.getenv("MAX_WORKERS", "10"))
|
||||
items = state["items"]
|
||||
|
||||
# Split items into batches
|
||||
batch_size = max(1, len(items) // max_workers)
|
||||
batches = [
|
||||
items[i:i+batch_size]
|
||||
for i in range(0, len(items), batch_size)
|
||||
]
|
||||
|
||||
return [
|
||||
Send("batch_worker", {"batch": batch})
|
||||
for batch in batches
|
||||
]
|
||||
|
||||
def batch_worker(worker_state: dict):
|
||||
"""Process batch"""
|
||||
results = [process_item(item) for item in worker_state["batch"]]
|
||||
return {"results": results}
|
||||
```
|
||||
|
||||
### Pattern 4: Error-Resilient Map-Reduce
|
||||
|
||||
```python
|
||||
class RobustState(TypedDict):
|
||||
items: list[str]
|
||||
successes: Annotated[list, add]
|
||||
failures: Annotated[list, add]
|
||||
|
||||
def robust_worker(worker_state: dict):
|
||||
"""Worker with error handling"""
|
||||
try:
|
||||
result = process_item(worker_state["item"])
|
||||
return {"successes": [{"item": worker_state["item"], "result": result}]}
|
||||
|
||||
except Exception as e:
|
||||
return {"failures": [{"item": worker_state["item"], "error": str(e)}]}
|
||||
|
||||
def error_handler(state: RobustState):
|
||||
"""Process failed items"""
|
||||
if state["failures"]:
|
||||
# Retry or log failed items
|
||||
log_failures(state["failures"])
|
||||
|
||||
return {"final_result": aggregate(state["successes"])}
|
||||
```
|
||||
|
||||
## Performance Optimization
|
||||
|
||||
### Batch Size Adjustment
|
||||
|
||||
```python
|
||||
def optimal_batching(items: list, target_batch_time: float = 1.0):
|
||||
"""Calculate optimal batch size"""
|
||||
# Estimate processing time per item
|
||||
sample_time = estimate_processing_time(items[0])
|
||||
|
||||
# Batch size to reach target time
|
||||
batch_size = max(1, int(target_batch_time / sample_time))
|
||||
|
||||
batches = [
|
||||
items[i:i+batch_size]
|
||||
for i in range(0, len(items), batch_size)
|
||||
]
|
||||
|
||||
return batches
|
||||
```
|
||||
|
||||
### Progress Tracking
|
||||
|
||||
```python
|
||||
from langgraph.config import get_stream_writer
|
||||
|
||||
def map_with_progress(state: State):
|
||||
"""Map that reports progress"""
|
||||
writer = get_stream_writer()
|
||||
total = len(state["items"])
|
||||
|
||||
sends = []
|
||||
for i, item in enumerate(state["items"]):
|
||||
sends.append(Send("worker", {"item": item}))
|
||||
writer({"progress": f"{i+1}/{total}"})
|
||||
|
||||
return sends
|
||||
```
|
||||
|
||||
## Aggregation Patterns
|
||||
|
||||
### Statistical Aggregation
|
||||
|
||||
```python
|
||||
def statistical_reduce(state: State):
|
||||
"""Calculate statistics"""
|
||||
results = state["results"]
|
||||
|
||||
return {
|
||||
"total": sum(results),
|
||||
"average": sum(results) / len(results),
|
||||
"min": min(results),
|
||||
"max": max(results),
|
||||
"count": len(results)
|
||||
}
|
||||
```
|
||||
|
||||
### LLM-Based Integration
|
||||
|
||||
```python
|
||||
def llm_reduce(state: State):
|
||||
"""Integrate multiple results with LLM"""
|
||||
all_results = "\n\n".join([
|
||||
f"Result {i+1}:\n{r}"
|
||||
for i, r in enumerate(state["results"])
|
||||
])
|
||||
|
||||
final = llm.invoke(
|
||||
f"Synthesize these results into a comprehensive answer:\n\n{all_results}"
|
||||
)
|
||||
|
||||
return {"final_result": final}
|
||||
```
|
||||
|
||||
## Advantages
|
||||
|
||||
✅ **Scalability**: Efficiently process large datasets
|
||||
✅ **Parallelism**: Execute independent tasks concurrently
|
||||
✅ **Flexibility**: Dynamically adjust number of workers
|
||||
✅ **Error Isolation**: One failure doesn't affect the whole
|
||||
|
||||
## Considerations
|
||||
|
||||
⚠️ **Memory Consumption**: Many worker instances
|
||||
⚠️ **Order Non-deterministic**: Worker execution order is not guaranteed
|
||||
⚠️ **Overhead**: Inefficient for small tasks
|
||||
⚠️ **Reducer Design**: Design appropriate aggregation method
|
||||
|
||||
## Summary
|
||||
|
||||
Map-Reduce is a pattern that uses Send API to process large datasets in parallel and aggregates with Reducers. Optimal for large-scale data processing.
|
||||
|
||||
## Related Pages
|
||||
|
||||
- [02_graph_architecture_orchestrator_worker.md](02_graph_architecture_orchestrator_worker.md) - Orchestrator-Worker pattern
|
||||
- [02_graph_architecture_parallelization.md](02_graph_architecture_parallelization.md) - Comparison with static parallelization
|
||||
- [01_core_concepts_state.md](01_core_concepts_state.md) - Details on Reducers
|
||||
73
skills/langgraph-master/05_advanced_features_overview.md
Normal file
73
skills/langgraph-master/05_advanced_features_overview.md
Normal file
@@ -0,0 +1,73 @@
|
||||
# 05. Advanced Features

Advanced features and implementation patterns.

## Overview

By leveraging LangGraph's advanced features, you can build more sophisticated agent systems.

## Key Features

### 1. [Human-in-the-Loop (Approval Flow)](05_advanced_features_human_in_the_loop.md)

Pause graph execution and request human intervention:
- Dynamic interrupt
- Static interrupt
- Approval, editing, and rejection flows

### 2. [Streaming](05_advanced_features_streaming.md)

Monitor progress in real-time:
- LLM token streaming
- State update streaming
- Custom event streaming

### 3. [Map-Reduce (Parallel Processing Pattern)](05_advanced_features_map_reduce.md)

Parallel processing of large datasets:
- Dynamic worker generation with Send API
- Result aggregation with Reducers
- Hierarchical parallel processing

## Feature Comparison

| Feature | Use Case | Implementation Complexity |
|---------|----------|--------------------------|
| Human-in-the-Loop | Approval flows, quality control | Medium |
| Streaming | Real-time monitoring, UX improvement | Low |
| Map-Reduce | Large-scale data processing | High |

## Combination Patterns

### Human-in-the-Loop + Streaming

```python
from langgraph.types import Command

# Stream while requesting approval
for chunk in graph.stream(input, config, stream_mode="values"):
    print(chunk)

    # Pause at interrupt
    if chunk.get("__interrupt__"):
        approval = input("Approve? (y/n): ")
        graph.invoke(Command(resume=(approval == "y")), config)
```

### Map-Reduce + Streaming

```python
# Stream progress of parallel processing
for chunk in graph.stream(
    {"items": large_dataset},
    stream_mode="updates",
    subgraphs=True  # Also show worker progress
):
    print(f"Progress: {chunk}")
```

## Next Steps

For details on each feature, refer to the following pages:

- [05_advanced_features_human_in_the_loop.md](05_advanced_features_human_in_the_loop.md) - Implementation of approval flows
- [05_advanced_features_streaming.md](05_advanced_features_streaming.md) - How to use streaming
- [05_advanced_features_map_reduce.md](05_advanced_features_map_reduce.md) - Map-Reduce pattern

220
skills/langgraph-master/05_advanced_features_streaming.md
Normal file
220
skills/langgraph-master/05_advanced_features_streaming.md
Normal file
@@ -0,0 +1,220 @@
|
||||
# Streaming
|
||||
|
||||
A feature to monitor graph execution progress in real-time.
|
||||
|
||||
## Overview
|
||||
|
||||
Streaming is a feature that receives **real-time updates** during graph execution. You can stream LLM tokens, state changes, custom events, and more.
|
||||
|
||||
## Types of stream_mode
|
||||
|
||||
### 1. values (Complete State Snapshot)
|
||||
|
||||
Complete state after each step:
|
||||
|
||||
```python
|
||||
for chunk in graph.stream(input, stream_mode="values"):
|
||||
print(chunk)
|
||||
|
||||
# Example output
|
||||
# {"messages": [{"role": "user", "content": "Hello"}]}
|
||||
# {"messages": [{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "Hi!"}]}
|
||||
```
|
||||
|
||||
### 2. updates (Only State Changes)
|
||||
|
||||
Only changes at each step:
|
||||
|
||||
```python
|
||||
for chunk in graph.stream(input, stream_mode="updates"):
|
||||
print(chunk)
|
||||
|
||||
# Example output
|
||||
# {"messages": [{"role": "assistant", "content": "Hi!"}]}
|
||||
```
|
||||
|
||||
### 3. messages (LLM Tokens)
|
||||
|
||||
Stream at token level from LLM:
|
||||
|
||||
```python
|
||||
for msg, metadata in graph.stream(input, stream_mode="messages"):
|
||||
if msg.content:
|
||||
print(msg.content, end="", flush=True)
|
||||
|
||||
# Output: "H" "i" "!" " " "H" "o" "w" ... (token by token)
|
||||
```
|
||||
|
||||
### 4. debug (Debug Information)
|
||||
|
||||
Detailed graph execution information:
|
||||
|
||||
```python
|
||||
for chunk in graph.stream(input, stream_mode="debug"):
|
||||
print(chunk)
|
||||
|
||||
# Details like node execution, edge transitions, etc.
|
||||
```
|
||||
|
||||
### 5. custom (Custom Data)
|
||||
|
||||
Send custom data from nodes:
|
||||
|
||||
```python
|
||||
from langgraph.config import get_stream_writer
|
||||
|
||||
def my_node(state: State):
|
||||
writer = get_stream_writer()
|
||||
|
||||
for i in range(10):
|
||||
writer({"progress": i * 10}) # Custom data
|
||||
|
||||
return {"result": "done"}
|
||||
|
||||
for mode, chunk in graph.stream(input, stream_mode=["updates", "custom"]):
|
||||
if mode == "custom":
|
||||
print(f"Progress: {chunk['progress']}%")
|
||||
```
|
||||
|
||||
## LLM Token Streaming
|
||||
|
||||
### Stream Only Specific Nodes
|
||||
|
||||
```python
|
||||
for msg, metadata in graph.stream(input, stream_mode="messages"):
|
||||
# Display tokens only from specific node
|
||||
if metadata["langgraph_node"] == "chatbot":
|
||||
if msg.content:
|
||||
print(msg.content, end="", flush=True)
|
||||
|
||||
print() # Newline
|
||||
```
|
||||
|
||||
### Filter by Tags
|
||||
|
||||
```python
|
||||
# Set tags on LLM
|
||||
llm = init_chat_model("gpt-5", tags=["main_llm"])
|
||||
|
||||
for msg, metadata in graph.stream(input, stream_mode="messages"):
|
||||
if "main_llm" in metadata.get("tags", []):
|
||||
if msg.content:
|
||||
print(msg.content, end="", flush=True)
|
||||
```
|
||||
|
||||
## Using Multiple Modes Simultaneously
|
||||
|
||||
```python
|
||||
for mode, chunk in graph.stream(input, stream_mode=["values", "messages"]):
|
||||
if mode == "values":
|
||||
print(f"\nState: {chunk}")
|
||||
elif mode == "messages":
|
||||
if chunk[0].content:
|
||||
print(chunk[0].content, end="", flush=True)
|
||||
```
|
||||
|
||||
## Subgraph Streaming
|
||||
|
||||
```python
|
||||
# Include subgraph outputs
|
||||
for chunk in graph.stream(
|
||||
input,
|
||||
stream_mode="updates",
|
||||
subgraphs=True # Include subgraphs
|
||||
):
|
||||
print(chunk)
|
||||
```
|
||||
|
||||
## Practical Example: Progress Bar
|
||||
|
||||
```python
|
||||
from tqdm import tqdm
|
||||
|
||||
def process_with_progress(items: list):
|
||||
"""Processing with progress bar"""
|
||||
total = len(items)
|
||||
|
||||
with tqdm(total=total) as pbar:
|
||||
for chunk in graph.stream(
|
||||
{"items": items},
|
||||
stream_mode="custom"
|
||||
):
|
||||
if "progress" in chunk:
|
||||
pbar.update(1)
|
||||
|
||||
return "Complete!"
|
||||
```
|
||||
|
||||
## Practical Example: Real-time UI Updates
|
||||
|
||||
```python
|
||||
import streamlit as st
|
||||
|
||||
def run_with_ui_updates(user_input: str):
|
||||
"""Update Streamlit UI in real-time"""
|
||||
status = st.empty()
|
||||
output = st.empty()
|
||||
|
||||
full_response = ""
|
||||
|
||||
for msg, metadata in graph.stream(
|
||||
{"messages": [{"role": "user", "content": user_input}]},
|
||||
stream_mode="messages"
|
||||
):
|
||||
if msg.content:
|
||||
full_response += msg.content
|
||||
output.markdown(full_response + "▌")
|
||||
|
||||
status.text(f"Node: {metadata['langgraph_node']}")
|
||||
|
||||
output.markdown(full_response)
|
||||
status.text("Complete!")
|
||||
```
|
||||
|
||||
## Async Streaming
|
||||
|
||||
```python
|
||||
async def async_stream_example():
|
||||
"""Async streaming"""
|
||||
async for chunk in graph.astream(input, stream_mode="updates"):
|
||||
print(chunk)
|
||||
await asyncio.sleep(0) # Yield to other tasks
|
||||
```
|
||||
|
||||
## Sending Custom Events
|
||||
|
||||
```python
|
||||
from langgraph.config import get_stream_writer
|
||||
|
||||
def multi_step_node(state: State):
|
||||
"""Report progress of multiple steps"""
|
||||
writer = get_stream_writer()
|
||||
|
||||
# Step 1
|
||||
writer({"status": "Analyzing..."})
|
||||
analysis = analyze_data(state["data"])
|
||||
|
||||
# Step 2
|
||||
writer({"status": "Processing..."})
|
||||
result = process_analysis(analysis)
|
||||
|
||||
# Step 3
|
||||
writer({"status": "Finalizing..."})
|
||||
final = finalize(result)
|
||||
|
||||
return {"result": final}
|
||||
|
||||
# Receive
|
||||
for mode, chunk in graph.stream(input, stream_mode=["updates", "custom"]):
|
||||
if mode == "custom":
|
||||
print(chunk["status"])
|
||||
```
|
||||
|
||||
## Summary
|
||||
|
||||
Streaming monitors progress in real-time and improves user experience. Choose the appropriate stream_mode based on your use case.
|
||||
|
||||
## Related Pages
|
||||
|
||||
- [02_graph_architecture_agent.md](02_graph_architecture_agent.md) - Agent streaming
|
||||
- [05_advanced_features_human_in_the_loop.md](05_advanced_features_human_in_the_loop.md) - Combining streaming and approval
|
||||
299
skills/langgraph-master/06_llm_model_ids.md
Normal file
299
skills/langgraph-master/06_llm_model_ids.md
Normal file
@@ -0,0 +1,299 @@
|
||||
# LLM Model ID Reference
|
||||
|
||||
List of model IDs for major LLM providers commonly used in LangGraph. For detailed information and best practices for each provider, please refer to the individual pages.
|
||||
|
||||
> **Last Updated**: 2025-11-24
|
||||
> **Note**: Model availability and names may change. Please refer to each provider's official documentation for the latest information.
|
||||
|
||||
## 📚 Provider-Specific Documentation
|
||||
|
||||
### [Google Gemini Models](06_llm_model_ids_gemini.md)
|
||||
|
||||
Google's latest LLM models featuring large-scale context (up to 1M tokens).
|
||||
|
||||
**Key Models**:
|
||||
|
||||
- `google/gemini-3-pro-preview` - Latest high-performance model
|
||||
- `gemini-2.5-flash` - Fast response version (1M tokens)
|
||||
- `gemini-2.5-flash-lite` - Lightweight fast version
|
||||
|
||||
**Details**: [Gemini Model ID Complete Guide](06_llm_model_ids_gemini.md)
|
||||
|
||||
---
|
||||
|
||||
### [Anthropic Claude Models](06_llm_model_ids_claude.md)
|
||||
|
||||
Anthropic's Claude 4.x series featuring balanced performance and cost.
|
||||
|
||||
**Key Models**:
|
||||
|
||||
- `claude-opus-4-1-20250805` - Most powerful model
|
||||
- `claude-sonnet-4-5` - Balanced (recommended)
|
||||
- `claude-haiku-4-5-20251001` - Fast and low-cost
|
||||
|
||||
**Details**: [Claude Model ID Complete Guide](06_llm_model_ids_claude.md)
|
||||
|
||||
---
|
||||
|
||||
### [OpenAI GPT Models](06_llm_model_ids_openai.md)
|
||||
|
||||
OpenAI's GPT-5 series supporting a wide range of tasks, with 400K context and advanced reasoning capabilities.
|
||||
|
||||
**Key Models**:
|
||||
|
||||
- `gpt-5` - GPT-5 standard version
|
||||
- `gpt-5-mini` - Small version (cost-efficient ◎)
|
||||
- `gpt-5.1-thinking` - Adaptive reasoning model
|
||||
|
||||
**Details**: [OpenAI Model ID Complete Guide](06_llm_model_ids_openai.md)
|
||||
|
||||
---
|
||||
|
||||
## 🚀 Quick Start
|
||||
|
||||
### Basic Usage
|
||||
|
||||
```python
|
||||
from langchain_anthropic import ChatAnthropic
|
||||
from langchain_openai import ChatOpenAI
|
||||
from langchain_google_genai import ChatGoogleGenerativeAI
|
||||
|
||||
# Use Claude
|
||||
claude_llm = ChatAnthropic(model="claude-sonnet-4-5")
|
||||
|
||||
# Use OpenAI
|
||||
openai_llm = ChatOpenAI(model="gpt-5")
|
||||
|
||||
# Use Gemini
|
||||
gemini_llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash")
|
||||
```
|
||||
|
||||
### Using with LangGraph
|
||||
|
||||
```python
|
||||
from langgraph.graph import StateGraph
|
||||
from langchain_anthropic import ChatAnthropic
|
||||
from typing import TypedDict, Annotated
|
||||
from langgraph.graph.message import add_messages
|
||||
|
||||
# State definition
|
||||
class State(TypedDict):
|
||||
messages: Annotated[list, add_messages]
|
||||
|
||||
# Model initialization
|
||||
llm = ChatAnthropic(model="claude-sonnet-4-5")
|
||||
|
||||
# Node definition
|
||||
def chat_node(state: State):
|
||||
response = llm.invoke(state["messages"])
|
||||
return {"messages": [response]}
|
||||
|
||||
# Graph construction
|
||||
graph = StateGraph(State)
|
||||
graph.add_node("chat", chat_node)
|
||||
graph.set_entry_point("chat")
|
||||
graph.set_finish_point("chat")
|
||||
|
||||
app = graph.compile()
|
||||
```
|
||||
|
||||
## 📊 Model Selection Guide
|
||||
|
||||
### Recommended Models by Use Case
|
||||
|
||||
| Use Case | Recommended Model | Reason |
|
||||
| ---------------------- | ------------------------------------------------------------- | ------------------------- |
|
||||
| **Cost-focused** | `claude-haiku-4-5`<br>`gpt-5-mini`<br>`gemini-2.5-flash-lite` | Low cost and fast |
|
||||
| **Balance-focused** | `claude-sonnet-4-5`<br>`gpt-5`<br>`gemini-2.5-flash` | Balance of performance and cost |
|
||||
| **Performance-focused** | `claude-opus-4-1`<br>`gpt-5-pro`<br>`gemini-3-pro` | Maximum performance |
|
||||
| **Reasoning-specialized** | `gpt-5.1-thinking`<br>`gpt-5.1-instant` | Adaptive reasoning, math, science |
|
||||
| **Large-scale context** | `gemini-2.5-pro` | 1M token context |
|
||||
|
||||
### Selection by Task Complexity
|
||||
|
||||
```python
|
||||
def select_model(task_complexity: str, budget: str = "normal"):
|
||||
"""Select optimal model based on task and budget"""
|
||||
|
||||
# Budget-focused
|
||||
if budget == "low":
|
||||
models = {
|
||||
"simple": "claude-haiku-4-5-20251001",
|
||||
"medium": "gpt-5-mini",
|
||||
"complex": "claude-sonnet-4-5"
|
||||
}
|
||||
return models.get(task_complexity, "gpt-5-mini")
|
||||
|
||||
# Performance-focused
|
||||
if budget == "high":
|
||||
models = {
|
||||
"simple": "claude-sonnet-4-5",
|
||||
"medium": "gpt-5",
|
||||
"complex": "claude-opus-4-1-20250805"
|
||||
}
|
||||
return models.get(task_complexity, "claude-opus-4-1-20250805")
|
||||
|
||||
# Balance-focused (default)
|
||||
models = {
|
||||
"simple": "gpt-5-mini",
|
||||
"medium": "claude-sonnet-4-5",
|
||||
"complex": "gpt-5"
|
||||
}
|
||||
return models.get(task_complexity, "claude-sonnet-4-5")
|
||||
```
|
||||
|
||||
## 🔄 Multi-Model Strategy
|
||||
|
||||
### Fallback Between Providers
|
||||
|
||||
```python
|
||||
from langchain_anthropic import ChatAnthropic
|
||||
from langchain_openai import ChatOpenAI
|
||||
|
||||
# Primary model and fallback
|
||||
primary = ChatAnthropic(model="claude-sonnet-4-5")
|
||||
fallback1 = ChatOpenAI(model="gpt-5")
|
||||
fallback2 = ChatGoogleGenerativeAI(model="gemini-2.5-flash")
|
||||
|
||||
llm_with_fallback = primary.with_fallbacks([fallback1, fallback2])
|
||||
|
||||
# Automatically fallback until one model succeeds
|
||||
response = llm_with_fallback.invoke("Question content")
|
||||
```
|
||||
|
||||
### Cost-Optimized Auto-Routing
|
||||
|
||||
```python
|
||||
from langgraph.graph import StateGraph
|
||||
from typing import TypedDict, Annotated, Literal
|
||||
from langgraph.graph.message import add_messages
|
||||
|
||||
class State(TypedDict):
|
||||
messages: Annotated[list, add_messages]
|
||||
complexity: Literal["simple", "medium", "complex"]
|
||||
|
||||
# Use different models based on complexity
|
||||
simple_llm = ChatAnthropic(model="claude-haiku-4-5-20251001") # Low cost
|
||||
medium_llm = ChatOpenAI(model="gpt-5-mini") # Balance
|
||||
complex_llm = ChatAnthropic(model="claude-opus-4-1-20250805") # High performance
|
||||
|
||||
def analyze_complexity(state: State):
|
||||
"""Analyze message complexity"""
|
||||
message = state["messages"][-1].content
|
||||
# Simple complexity determination
|
||||
if len(message) < 50:
|
||||
complexity = "simple"
|
||||
elif len(message) < 200:
|
||||
complexity = "medium"
|
||||
else:
|
||||
complexity = "complex"
|
||||
return {"complexity": complexity}
|
||||
|
||||
def route_by_complexity(state: State):
|
||||
"""Route based on complexity"""
|
||||
routes = {
|
||||
"simple": "simple_node",
|
||||
"medium": "medium_node",
|
||||
"complex": "complex_node"
|
||||
}
|
||||
return routes[state["complexity"]]
|
||||
|
||||
def simple_node(state: State):
|
||||
response = simple_llm.invoke(state["messages"])
|
||||
return {"messages": [response]}
|
||||
|
||||
def medium_node(state: State):
|
||||
response = medium_llm.invoke(state["messages"])
|
||||
return {"messages": [response]}
|
||||
|
||||
def complex_node(state: State):
|
||||
response = complex_llm.invoke(state["messages"])
|
||||
return {"messages": [response]}
|
||||
|
||||
# Graph construction
|
||||
graph = StateGraph(State)
|
||||
graph.add_node("analyze", analyze_complexity)
|
||||
graph.add_node("simple_node", simple_node)
|
||||
graph.add_node("medium_node", medium_node)
|
||||
graph.add_node("complex_node", complex_node)
|
||||
|
||||
graph.set_entry_point("analyze")
|
||||
graph.add_conditional_edges("analyze", route_by_complexity)
|
||||
|
||||
app = graph.compile()
|
||||
```
|
||||
|
||||
## 🔧 Best Practices
|
||||
|
||||
### 1. Environment Variable Management
|
||||
|
||||
```python
|
||||
import os
|
||||
|
||||
# Flexibly manage models with environment variables
|
||||
DEFAULT_MODEL = os.getenv("DEFAULT_LLM_MODEL", "claude-sonnet-4-5")
|
||||
FAST_MODEL = os.getenv("FAST_LLM_MODEL", "claude-haiku-4-5-20251001")
|
||||
SMART_MODEL = os.getenv("SMART_LLM_MODEL", "claude-opus-4-1-20250805")
|
||||
|
||||
# Switch provider based on environment
|
||||
PROVIDER = os.getenv("LLM_PROVIDER", "anthropic")
|
||||
|
||||
if PROVIDER == "anthropic":
|
||||
llm = ChatAnthropic(model=DEFAULT_MODEL)
|
||||
elif PROVIDER == "openai":
|
||||
llm = ChatOpenAI(model="gpt-5")
|
||||
elif PROVIDER == "google":
|
||||
llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash")
|
||||
```
### 2. Fixed Model Version (Production)

```python
# ✅ Recommended: Use dated version (production)
prod_llm = ChatAnthropic(model="claude-sonnet-4-20250514")

# ⚠️ Caution: No version specified (potential unexpected updates)
dev_llm = ChatAnthropic(model="claude-sonnet-4")
```

### 3. Cost Monitoring

```python
from langchain.callbacks import get_openai_callback

# OpenAI cost tracking
with get_openai_callback() as cb:
    response = openai_llm.invoke("question")
    print(f"Total Cost: ${cb.total_cost}")
    print(f"Tokens: {cb.total_tokens}")

# For other providers, track manually
# Refer to each provider's detail pages
```
## 📖 Detailed Documentation

For detailed information on each provider, please refer to the following pages:

- **[Gemini Model ID](06_llm_model_ids_gemini.md)**: Model list, usage, advanced settings, multimodal features
- **[Claude Model ID](06_llm_model_ids_claude.md)**: Model list, platform-specific IDs, tool usage, deprecated model information
- **[OpenAI Model ID](06_llm_model_ids_openai.md)**: Model list, reasoning models, vision features, Azure OpenAI

## 🔗 Reference Links

### Official Documentation

- [Google Gemini API](https://ai.google.dev/gemini-api/docs/models)
- [Anthropic Claude API](https://docs.anthropic.com/en/docs/about-claude/models/overview)
- [OpenAI Platform](https://platform.openai.com/docs/models)

### Integration Guides

- [LangChain Chat Models](https://docs.langchain.com/oss/python/modules/model_io/chat/)
- [LangGraph Documentation](https://langchain-ai.github.io/langgraph/)

### Pricing Information

- [Gemini Pricing](https://ai.google.dev/pricing)
- [Claude Pricing](https://www.anthropic.com/pricing)
- [OpenAI Pricing](https://openai.com/pricing)
127
skills/langgraph-master/06_llm_model_ids_claude.md
Normal file
@@ -0,0 +1,127 @@
|
||||
# Anthropic Claude Model IDs
|
||||
|
||||
List of available model IDs for the Anthropic Claude API.
|
||||
|
||||
> **Last Updated**: 2025-11-24
|
||||
|
||||
## Model List
|
||||
|
||||
### Claude 4.x (2025)
|
||||
|
||||
| Model ID | Context | Max Output | Release | Features |
|
||||
|-----------|------------|---------|---------|------|
|
||||
| `claude-opus-4-1-20250805` | 200K | 32K | 2025-08 | Most powerful. Complex reasoning & code generation |
|
||||
| `claude-sonnet-4-5` | 1M | 64K | 2025-09 | Latest balanced model (recommended) |
|
||||
| `claude-sonnet-4-20250514` | 200K (1M beta) | 64K | 2025-05 | Production recommended (date-fixed) |
|
||||
| `claude-haiku-4-5-20251001` | 200K | 64K | 2025-10 | Fast & low-cost |
|
||||
|
||||
**Model Characteristics**:
|
||||
- **Opus**: Highest performance, complex tasks (200K context)
|
||||
- **Sonnet**: Balanced, general-purpose (1M context)
|
||||
- **Haiku**: Fast & low-cost ($1/M input, $5/M output)
|
||||
|
||||
## Basic Usage
|
||||
|
||||
```python
|
||||
from langchain_anthropic import ChatAnthropic
|
||||
|
||||
# Recommended: Latest Sonnet
|
||||
llm = ChatAnthropic(model="claude-sonnet-4-5")
|
||||
|
||||
# Production: Date-fixed version
|
||||
llm = ChatAnthropic(model="claude-sonnet-4-20250514")
|
||||
|
||||
# Fast & low-cost
|
||||
llm = ChatAnthropic(model="claude-haiku-4-5-20251001")
|
||||
|
||||
# Highest performance
|
||||
llm = ChatAnthropic(model="claude-opus-4-1-20250805")
|
||||
```
|
||||
|
||||
### Environment Variables
|
||||
|
||||
```bash
|
||||
export ANTHROPIC_API_KEY="sk-ant-..."
|
||||
```
|
||||
|
||||
## Model Selection Guide
|
||||
|
||||
| Use Case | Recommended Model |
|
||||
|------|-----------|
|
||||
| Cost-focused | `claude-haiku-4-5-20251001` |
|
||||
| Balanced | `claude-sonnet-4-5` |
|
||||
| Performance-focused | `claude-opus-4-1-20250805` |
|
||||
| Production | `claude-sonnet-4-20250514` (date-fixed) |
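As a quick illustration of the table above, a task-to-model lookup can keep the choice in one place; the mapping below simply mirrors the table (it assumes `ChatAnthropic` is imported as in the Basic Usage section), so adjust it to your own workload:

```python
# Illustrative use case -> Claude model ID mapping, taken from the selection guide above.
CLAUDE_MODELS = {
    "cost": "claude-haiku-4-5-20251001",
    "balanced": "claude-sonnet-4-5",
    "performance": "claude-opus-4-1-20250805",
    "production": "claude-sonnet-4-20250514",  # date-fixed
}

llm = ChatAnthropic(model=CLAUDE_MODELS["balanced"])
```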
|
||||
|
||||
## Claude Features
|
||||
|
||||
### 1. Large Context Window
|
||||
|
||||
Claude Sonnet 4.5 supports **1M tokens** context window:
|
||||
|
||||
| Model | Standard Context | Max Output | Notes |
|
||||
|--------|---------------|---------|------|
|
||||
| Sonnet 4.5 | 1M | 64K | Latest version |
|
||||
| Sonnet 4 | 200K (1M beta) | 64K | 1M available with beta header |
|
||||
| Opus 4.1 | 200K | 32K | High-performance version |
|
||||
| Haiku 4.5 | 200K | 64K | Fast version |
|
||||
|
||||
```python
# Using 1M context (Sonnet 4.5)
llm = ChatAnthropic(
    model="claude-sonnet-4-5",
    max_tokens=64000  # Max output: 64K
)

# Enable 1M context for Sonnet 4 (beta)
# Note: "context-1m-2025-08-07" is the 1M-context beta flag at the time of writing;
# check Anthropic's documentation for the current value.
llm = ChatAnthropic(
    model="claude-sonnet-4-20250514",
    default_headers={"anthropic-beta": "context-1m-2025-08-07"}
)
```
|
||||
|
||||
### 2. Date-Fixed Versions
|
||||
|
||||
For production environments, date-fixed versions are recommended to prevent unexpected updates:
|
||||
|
||||
```python
|
||||
# ✅ Recommended (production)
|
||||
llm = ChatAnthropic(model="claude-sonnet-4-20250514")
|
||||
|
||||
# ⚠️ Caution (development only)
|
||||
llm = ChatAnthropic(model="claude-sonnet-4")
|
||||
```
|
||||
|
||||
### 3. Tool Use (Function Calling)
|
||||
|
||||
Claude has powerful tool use capabilities (see [Tool Use Guide](06_llm_model_ids_claude_tools.md) for details).
|
||||
|
||||
### 4. Multi-Platform Support
|
||||
|
||||
Available on multiple cloud platforms (see [Platform-Specific Guide](06_llm_model_ids_claude_platforms.md) for details):
|
||||
|
||||
- Anthropic API (direct)
|
||||
- Google Vertex AI
|
||||
- AWS Bedrock
|
||||
- Azure AI (Microsoft Foundry)
|
||||
|
||||
## Deprecated Models
|
||||
|
||||
| Model | Deprecation Date | Migration Target |
|
||||
|--------|-------|--------|
|
||||
| Claude 3 Opus | 2025-07-21 | `claude-opus-4-1-20250805` |
|
||||
| Claude 3 Sonnet | 2025-07-21 | `claude-sonnet-4-5` |
|
||||
| Claude 2.1 | 2025-07-21 | `claude-sonnet-4-5` |
|
||||
|
||||
## Detailed Documentation
|
||||
|
||||
For advanced settings and parameters:
|
||||
- **[Claude Advanced Features](06_llm_model_ids_claude_advanced.md)** - Parameter configuration, streaming, caching
|
||||
- **[Platform-Specific Guide](06_llm_model_ids_claude_platforms.md)** - Usage on Vertex AI, AWS Bedrock, Azure AI
|
||||
- **[Tool Use Guide](06_llm_model_ids_claude_tools.md)** - Function Calling implementation
|
||||
|
||||
## Reference Links
|
||||
|
||||
- [Claude API Official](https://docs.anthropic.com/en/docs/about-claude/models/overview)
|
||||
- [Anthropic Console](https://console.anthropic.com/)
|
||||
- [LangChain Integration](https://docs.langchain.com/oss/python/integrations/chat/anthropic)
|
||||
262
skills/langgraph-master/06_llm_model_ids_claude_advanced.md
Normal file
@@ -0,0 +1,262 @@
|
||||
# Claude Advanced Features
|
||||
|
||||
Advanced settings and parameter tuning for Claude models.
|
||||
|
||||
## Context Window and Output Limits
|
||||
|
||||
| Model | Context Window | Max Output Tokens | Notes |
|
||||
|--------|-------------------|---------------|------|
|
||||
| `claude-opus-4-1-20250805` | 200,000 | 32,000 | Highest performance |
|
||||
| `claude-sonnet-4-5` | 1,000,000 | 64,000 | Latest version |
|
||||
| `claude-sonnet-4-20250514` | 200,000 (1M beta) | 64,000 | 1M with beta header |
|
||||
| `claude-haiku-4-5-20251001` | 200,000 | 64,000 | Fast version |
|
||||
|
||||
**Note**: To use 1M context with Sonnet 4, a beta header is required.
|
||||
|
||||
## Parameter Configuration
|
||||
|
||||
```python
|
||||
from langchain_anthropic import ChatAnthropic
|
||||
|
||||
llm = ChatAnthropic(
|
||||
model="claude-sonnet-4-5",
|
||||
temperature=0.7, # Creativity (0.0-1.0)
|
||||
max_tokens=64000, # Max output (Sonnet 4.5: 64K)
|
||||
top_p=0.9, # Diversity
|
||||
top_k=40, # Sampling
|
||||
)
|
||||
|
||||
# Opus 4.1 (max output 32K)
|
||||
llm_opus = ChatAnthropic(
|
||||
model="claude-opus-4-1-20250805",
|
||||
max_tokens=32000,
|
||||
)
|
||||
```
|
||||
|
||||
## Using 1M Context
|
||||
|
||||
### Sonnet 4.5 (Standard)
|
||||
|
||||
```python
|
||||
llm = ChatAnthropic(
|
||||
model="claude-sonnet-4-5",
|
||||
max_tokens=64000
|
||||
)
|
||||
|
||||
# Can process 1M tokens of context
|
||||
long_document = "..." * 500000 # Long document
|
||||
response = llm.invoke(f"Please analyze the following document:\n\n{long_document}")
|
||||
```
|
||||
|
||||
### Sonnet 4 (Beta Header)
|
||||
|
||||
```python
# Enable 1M context with a beta header
# Note: "context-1m-2025-08-07" is the 1M-context beta flag at the time of writing;
# check Anthropic's documentation for the current value.
llm = ChatAnthropic(
    model="claude-sonnet-4-20250514",
    max_tokens=64000,
    default_headers={
        "anthropic-beta": "context-1m-2025-08-07"
    }
)
```
|
||||
|
||||
## Streaming
|
||||
|
||||
```python
|
||||
llm = ChatAnthropic(
|
||||
model="claude-sonnet-4-5",
|
||||
streaming=True
|
||||
)
|
||||
|
||||
for chunk in llm.stream("question"):
|
||||
print(chunk.content, end="", flush=True)
|
||||
```
|
||||
|
||||
## Prompt Caching
|
||||
|
||||
Cache parts of long prompts for efficiency:
|
||||
|
||||
```python
|
||||
from langchain_anthropic import ChatAnthropic
|
||||
|
||||
llm = ChatAnthropic(
|
||||
model="claude-sonnet-4-5",
|
||||
max_tokens=4096
|
||||
)
|
||||
|
||||
# System prompt for caching
|
||||
system_prompt = """
|
||||
You are a professional code reviewer.
|
||||
Please review according to the following coding guidelines:
|
||||
[long guidelines...]
|
||||
"""
|
||||
|
||||
# Use cache: cache_control belongs on a content block (list form), not on the message dict itself
response = llm.invoke(
    [
        {
            "role": "system",
            "content": [
                {"type": "text", "text": system_prompt, "cache_control": {"type": "ephemeral"}}
            ],
        },
        {"role": "user", "content": "Please review this code"},
    ]
)
```
|
||||
|
||||
**Cache Benefits**:
|
||||
- Cost reduction (90% off on cache hits)
|
||||
- Latency reduction (faster processing on reuse)
|
||||
|
||||
## Vision (Image Processing)
|
||||
|
||||
```python
|
||||
from langchain_anthropic import ChatAnthropic
|
||||
from langchain_core.messages import HumanMessage
|
||||
|
||||
llm = ChatAnthropic(model="claude-sonnet-4-5")
|
||||
|
||||
message = HumanMessage(
|
||||
content=[
|
||||
{"type": "text", "text": "What's in this image?"},
|
||||
{
|
||||
"type": "image_url",
|
||||
"image_url": {
|
||||
"url": "https://example.com/image.jpg"
|
||||
}
|
||||
}
|
||||
]
|
||||
)
|
||||
|
||||
response = llm.invoke([message])
|
||||
```
|
||||
|
||||
## Structured Output (JSON)

The Anthropic API does not expose an OpenAI-style `response_format` parameter. When structured output is needed, use LangChain's `with_structured_output`:

```python
from pydantic import BaseModel

class UserInfo(BaseModel):
    name: str
    age: int

llm = ChatAnthropic(model="claude-sonnet-4-5")
structured_llm = llm.with_structured_output(UserInfo)

user = structured_llm.invoke("Extract the user information: Taro, 29 years old")
print(user.name, user.age)
```
|
||||
|
||||
## Token Usage Tracking
|
||||
|
||||
```python
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(model="claude-sonnet-4-5")

# get_openai_callback only counts OpenAI usage; for Claude, read the usage metadata
# that LangChain attaches to the response message.
response = llm.invoke("question")
usage = response.usage_metadata
print(f"Input Tokens: {usage['input_tokens']}")
print(f"Output Tokens: {usage['output_tokens']}")
print(f"Total Tokens: {usage['total_tokens']}")
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
```python
|
||||
from anthropic import AnthropicError, RateLimitError
|
||||
|
||||
try:
|
||||
llm = ChatAnthropic(model="claude-sonnet-4-5")
|
||||
response = llm.invoke("question")
|
||||
except RateLimitError:
|
||||
print("Rate limit reached")
|
||||
except AnthropicError as e:
|
||||
print(f"Anthropic error: {e}")
|
||||
```
|
||||
|
||||
## Rate Limit Handling
|
||||
|
||||
```python
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential
from anthropic import RateLimitError

@retry(
    wait=wait_exponential(multiplier=1, min=4, max=60),
    stop=stop_after_attempt(5),
    retry=retry_if_exception_type(RateLimitError),  # retry only on rate-limit errors
)
def invoke_with_retry(llm, messages):
    return llm.invoke(messages)

llm = ChatAnthropic(model="claude-sonnet-4-5")
response = invoke_with_retry(llm, "question")
```
|
||||
|
||||
## Listing Models
|
||||
|
||||
```python
|
||||
import anthropic
|
||||
import os
|
||||
|
||||
client = anthropic.Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY"))
|
||||
models = client.models.list()
|
||||
|
||||
for model in models.data:
|
||||
print(f"{model.id} - {model.display_name}")
|
||||
```
|
||||
|
||||
## Cost Optimization
|
||||
|
||||
### Cost Management by Model Selection
|
||||
|
||||
```python
|
||||
# Low-cost version (simple tasks)
|
||||
llm_cheap = ChatAnthropic(model="claude-haiku-4-5-20251001")
|
||||
|
||||
# Balanced version (general tasks)
|
||||
llm_balanced = ChatAnthropic(model="claude-sonnet-4-5")
|
||||
|
||||
# High-performance version (complex tasks)
|
||||
llm_powerful = ChatAnthropic(model="claude-opus-4-1-20250805")
|
||||
|
||||
# Select based on task
|
||||
def get_llm_for_task(complexity):
|
||||
if complexity == "simple":
|
||||
return llm_cheap
|
||||
elif complexity == "medium":
|
||||
return llm_balanced
|
||||
else:
|
||||
return llm_powerful
|
||||
```
|
||||
|
||||
### Cost Reduction with Prompt Caching
|
||||
|
||||
```python
|
||||
# Cache a long system prompt (cache_control lives on a content block)
system = {
    "role": "system",
    "content": [
        {"type": "text", "text": long_guidelines, "cache_control": {"type": "ephemeral"}}
    ],
}

# Reuse the cache across multiple calls (up to ~90% input-cost reduction on cache hits)
for user_input in user_inputs:
    response = llm.invoke([system, {"role": "user", "content": user_input}])
```
|
||||
|
||||
## Leveraging Large Context
|
||||
|
||||
```python
|
||||
llm = ChatAnthropic(model="claude-sonnet-4-5")
|
||||
|
||||
# Process large documents at once (1M token support)
|
||||
documents = load_large_documents() # Large document collection
|
||||
|
||||
response = llm.invoke(f"""
|
||||
Please analyze the following multiple documents:
|
||||
|
||||
{documents}
|
||||
|
||||
Tell me the main themes and conclusions.
|
||||
""")
|
||||
```
|
||||
|
||||
## Reference Links
|
||||
|
||||
- [Claude API Documentation](https://docs.anthropic.com/)
|
||||
- [Anthropic API Reference](https://docs.anthropic.com/en/api/)
|
||||
- [Claude Models Overview](https://docs.anthropic.com/en/docs/about-claude/models/overview)
|
||||
- [Prompt Caching Guide](https://docs.anthropic.com/en/docs/build-with-claude/prompt-caching)
|
||||
219
skills/langgraph-master/06_llm_model_ids_claude_platforms.md
Normal file
@@ -0,0 +1,219 @@
|
||||
# Claude Platform-Specific Guide
|
||||
|
||||
How to use Claude on different cloud platforms.
|
||||
|
||||
## Anthropic API (Direct)
|
||||
|
||||
### Basic Usage
|
||||
|
||||
```python
|
||||
from langchain_anthropic import ChatAnthropic
|
||||
|
||||
llm = ChatAnthropic(
|
||||
model="claude-sonnet-4-5",
|
||||
anthropic_api_key="sk-ant-..."
|
||||
)
|
||||
```
|
||||
|
||||
### Listing Models
|
||||
|
||||
```python
|
||||
import anthropic
|
||||
import os
|
||||
|
||||
client = anthropic.Anthropic(api_key=os.getenv("ANTHROPIC_API_KEY"))
|
||||
models = client.models.list()
|
||||
|
||||
for model in models.data:
|
||||
print(f"{model.id} - {model.display_name}")
|
||||
```
|
||||
|
||||
## Google Vertex AI
|
||||
|
||||
### Model ID Format
|
||||
|
||||
Vertex AI uses `@` notation:
|
||||
|
||||
```
|
||||
claude-opus-4-1@20250805
|
||||
claude-sonnet-4@20250514
|
||||
claude-haiku-4.5@20251001
|
||||
```
|
||||
|
||||
### Usage
|
||||
|
||||
```python
|
||||
from langchain_google_vertexai import ChatVertexAI
|
||||
|
||||
llm = ChatVertexAI(
|
||||
model="claude-haiku-4.5@20251001",
|
||||
project="your-gcp-project",
|
||||
location="us-central1"
|
||||
)
|
||||
```
|
||||
|
||||
### Environment Setup
|
||||
|
||||
```bash
|
||||
# GCP authentication
|
||||
gcloud auth application-default login
|
||||
|
||||
# Environment variables
|
||||
export GOOGLE_CLOUD_PROJECT="your-project-id"
|
||||
export GOOGLE_CLOUD_LOCATION="us-central1"
|
||||
```
|
||||
|
||||
## AWS Bedrock
|
||||
|
||||
### Model ID Format
|
||||
|
||||
Bedrock uses ARN format:
|
||||
|
||||
```
|
||||
anthropic.claude-opus-4-1-20250805-v1:0
|
||||
anthropic.claude-sonnet-4-20250514-v1:0
|
||||
anthropic.claude-haiku-4-5-20251001-v1:0
|
||||
```
|
||||
|
||||
### Usage
|
||||
|
||||
```python
|
||||
from langchain_aws import ChatBedrock
|
||||
|
||||
llm = ChatBedrock(
|
||||
model_id="anthropic.claude-haiku-4-5-20251001-v1:0",
|
||||
region_name="us-east-1",
|
||||
model_kwargs={
|
||||
"temperature": 0.7,
|
||||
"max_tokens": 4096
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
### Environment Setup
|
||||
|
||||
```bash
|
||||
# AWS CLI configuration
|
||||
aws configure
|
||||
|
||||
# Or environment variables
|
||||
export AWS_ACCESS_KEY_ID="your-access-key"
|
||||
export AWS_SECRET_ACCESS_KEY="your-secret-key"
|
||||
export AWS_DEFAULT_REGION="us-east-1"
|
||||
```
|
||||
|
||||
## Azure AI (Microsoft Foundry)
|
||||
|
||||
> **Release**: Public preview started in November 2025
|
||||
|
||||
### Model ID Format
|
||||
|
||||
Azure AI uses the same format as Anthropic API:
|
||||
|
||||
```
|
||||
claude-opus-4-1
|
||||
claude-sonnet-4-5
|
||||
claude-haiku-4-5
|
||||
```
|
||||
|
||||
### Available Models
|
||||
|
||||
- **Claude Opus 4.1** (`claude-opus-4-1`)
|
||||
- **Claude Sonnet 4.5** (`claude-sonnet-4-5`)
|
||||
- **Claude Haiku 4.5** (`claude-haiku-4-5`)
|
||||
|
||||
### Usage
|
||||
|
||||
```python
|
||||
# Calling Claude using Azure OpenAI SDK
|
||||
import os
|
||||
from openai import AzureOpenAI
|
||||
|
||||
client = AzureOpenAI(
|
||||
azure_endpoint=os.getenv("AZURE_FOUNDRY_ENDPOINT"),
|
||||
api_key=os.getenv("AZURE_FOUNDRY_API_KEY"),
|
||||
api_version="2024-12-01-preview"
|
||||
)
|
||||
|
||||
# Specify deployment name (default is same as model ID)
|
||||
response = client.chat.completions.create(
|
||||
model="claude-sonnet-4-5", # Or your custom deployment name
|
||||
messages=[
|
||||
{"role": "user", "content": "Hello"}
|
||||
]
|
||||
)
|
||||
```
|
||||
|
||||
### Custom Deployments
|
||||
|
||||
You can set custom deployment names in the Foundry portal:
|
||||
|
||||
```python
|
||||
# Using custom deployment name
|
||||
response = client.chat.completions.create(
|
||||
model="my-custom-claude-deployment",
|
||||
messages=[...]
|
||||
)
|
||||
```
|
||||
|
||||
### Environment Setup
|
||||
|
||||
```bash
|
||||
export AZURE_FOUNDRY_ENDPOINT="https://your-foundry-resource.azure.com"
|
||||
export AZURE_FOUNDRY_API_KEY="your-api-key"
|
||||
```
|
||||
|
||||
### Region Limitations
|
||||
|
||||
Currently available in the following regions:
|
||||
- **East US2**
|
||||
- **Sweden Central**
|
||||
|
||||
Deployment type: **Global Standard**
|
||||
|
||||
## Platform-Specific Features
|
||||
|
||||
| Platform | Model ID Format | Benefits | Drawbacks |
|
||||
|----------------|------------|---------|-----------|
|
||||
| **Anthropic API** | `claude-sonnet-4-5` | Instant access to latest models | Single provider dependency |
|
||||
| **Vertex AI** | `claude-sonnet-4@20250514` | Integration with GCP services | Complex setup |
|
||||
| **AWS Bedrock** | `anthropic.claude-sonnet-4-20250514-v1:0` | Integration with AWS ecosystem | Complex model ID format |
|
||||
| **Azure AI** | `claude-sonnet-4-5` | Azure + GPT and Claude integration | Region limitations |
|
||||
|
||||
## Cross-Platform Fallback
|
||||
|
||||
```python
|
||||
from langchain_anthropic import ChatAnthropic
|
||||
from langchain_google_vertexai import ChatVertexAI
|
||||
from langchain_aws import ChatBedrock
|
||||
|
||||
# Primary and fallback (multi-platform support)
|
||||
primary = ChatAnthropic(model="claude-sonnet-4-5")
|
||||
fallback_gcp = ChatVertexAI(
|
||||
model="claude-sonnet-4@20250514",
|
||||
project="your-project"
|
||||
)
|
||||
fallback_aws = ChatBedrock(
|
||||
model_id="anthropic.claude-sonnet-4-20250514-v1:0",
|
||||
region_name="us-east-1"
|
||||
)
|
||||
|
||||
# Fallback across three platforms
|
||||
llm = primary.with_fallbacks([fallback_gcp, fallback_aws])
|
||||
```
|
||||
|
||||
## Model ID Comparison Table
|
||||
|
||||
| Anthropic API | Vertex AI | AWS Bedrock | Azure AI |
|
||||
|--------------|-----------|-------------|----------|
|
||||
| `claude-opus-4-1-20250805` | `claude-opus-4-1@20250805` | `anthropic.claude-opus-4-1-20250805-v1:0` | `claude-opus-4-1` |
|
||||
| `claude-sonnet-4-5` | `claude-sonnet-4@20250514` | `anthropic.claude-sonnet-4-20250514-v1:0` | `claude-sonnet-4-5` |
|
||||
| `claude-haiku-4-5-20251001` | `claude-haiku-4.5@20251001` | `anthropic.claude-haiku-4-5-20251001-v1:0` | `claude-haiku-4-5` |
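If your code needs to target more than one platform, a small lookup table that mirrors the rows above keeps the IDs in one place. The values below are copied from the comparison table; verify them against each platform's model catalog:

```python
# Anthropic API model ID -> platform-specific IDs (from the comparison table above).
PLATFORM_IDS = {
    "claude-sonnet-4-20250514": {
        "vertex": "claude-sonnet-4@20250514",
        "bedrock": "anthropic.claude-sonnet-4-20250514-v1:0",
    },
    "claude-haiku-4-5-20251001": {
        "vertex": "claude-haiku-4.5@20251001",
        "bedrock": "anthropic.claude-haiku-4-5-20251001-v1:0",
    },
}
```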
|
||||
|
||||
## Reference Links
|
||||
|
||||
- [Anthropic API Documentation](https://docs.anthropic.com/)
|
||||
- [Vertex AI Claude Models](https://docs.cloud.google.com/vertex-ai/generative-ai/docs/partner-models/claude)
|
||||
- [AWS Bedrock Claude Models](https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters-claude.html)
|
||||
- [Azure AI Claude Documentation](https://learn.microsoft.com/en-us/azure/ai-foundry/foundry-models/how-to/use-foundry-models-claude)
|
||||
- [Claude in Microsoft Foundry Announcement](https://www.anthropic.com/news/claude-in-microsoft-foundry)
|
||||
216
skills/langgraph-master/06_llm_model_ids_claude_tools.md
Normal file
@@ -0,0 +1,216 @@
|
||||
# Claude Tool Use Guide
|
||||
|
||||
Implementation methods for Claude's tool use (Function Calling).
|
||||
|
||||
## Basic Tool Definition
|
||||
|
||||
```python
|
||||
from langchain_anthropic import ChatAnthropic
|
||||
from langchain_core.tools import tool
|
||||
|
||||
@tool
|
||||
def get_weather(location: str) -> str:
|
||||
"""Get weather for a specified location.
|
||||
|
||||
Args:
|
||||
location: Location to check weather (e.g., "Tokyo")
|
||||
"""
|
||||
return f"The weather in {location} is sunny"
|
||||
|
||||
@tool
|
||||
def calculate(expression: str) -> float:
|
||||
"""Calculate a mathematical expression.
|
||||
|
||||
Args:
|
||||
expression: Mathematical expression to calculate (e.g., "2 + 2")
|
||||
"""
|
||||
return eval(expression)
|
||||
|
||||
# Bind tools
|
||||
llm = ChatAnthropic(model="claude-sonnet-4-5")
|
||||
llm_with_tools = llm.bind_tools([get_weather, calculate])
|
||||
|
||||
# Usage
|
||||
response = llm_with_tools.invoke("Tell me Tokyo's weather and 2+2")
|
||||
print(response.tool_calls)
|
||||
```
|
||||
|
||||
## Tool Integration with LangGraph
|
||||
|
||||
```python
|
||||
from langgraph.prebuilt import create_react_agent
|
||||
from langchain_anthropic import ChatAnthropic
|
||||
from langchain_core.tools import tool
|
||||
|
||||
@tool
|
||||
def search_database(query: str) -> str:
|
||||
"""Search the database.
|
||||
|
||||
Args:
|
||||
query: Search query
|
||||
"""
|
||||
return f"Search results for '{query}'"
|
||||
|
||||
# Create agent
|
||||
llm = ChatAnthropic(model="claude-sonnet-4-5")
|
||||
tools = [search_database]
|
||||
|
||||
agent = create_react_agent(llm, tools)
|
||||
|
||||
# Execute
|
||||
result = agent.invoke({
|
||||
"messages": [("user", "Search for user information")]
|
||||
})
|
||||
```
|
||||
|
||||
## Custom Tool Node Implementation
|
||||
|
||||
```python
|
||||
from langgraph.graph import StateGraph
|
||||
from langchain_anthropic import ChatAnthropic
|
||||
from typing import TypedDict, Annotated
|
||||
from langgraph.graph.message import add_messages
|
||||
|
||||
class State(TypedDict):
|
||||
messages: Annotated[list, add_messages]
|
||||
|
||||
@tool
|
||||
def get_stock_price(symbol: str) -> float:
|
||||
"""Get stock price"""
|
||||
return 150.25
|
||||
|
||||
llm = ChatAnthropic(model="claude-sonnet-4-5")
|
||||
llm_with_tools = llm.bind_tools([get_stock_price])
|
||||
|
||||
def agent_node(state: State):
|
||||
response = llm_with_tools.invoke(state["messages"])
|
||||
return {"messages": [response]}
|
||||
|
||||
def tool_node(state: State):
|
||||
# Execute tool calls
|
||||
last_message = state["messages"][-1]
|
||||
tool_calls = last_message.tool_calls
|
||||
|
||||
results = []
|
||||
for tool_call in tool_calls:
|
||||
tool_result = get_stock_price.invoke(tool_call["args"])
|
||||
results.append({
|
||||
"tool_call_id": tool_call["id"],
|
||||
"output": tool_result
|
||||
})
|
||||
|
||||
return {"messages": results}
|
||||
|
||||
# Build graph
|
||||
graph = StateGraph(State)
|
||||
graph.add_node("agent", agent_node)
|
||||
graph.add_node("tools", tool_node)
|
||||
# ... Add edges, etc.
|
||||
```
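The example above stops before the edges are added. One way to finish wiring it, assuming the `agent` and `tools` nodes defined above, is sketched below:

```python
from langgraph.graph import START, END

def route_after_agent(state: State):
    # Send the conversation to the tool node while the model keeps requesting tools.
    last_message = state["messages"][-1]
    return "tools" if getattr(last_message, "tool_calls", None) else END

graph.add_edge(START, "agent")
graph.add_conditional_edges("agent", route_after_agent)
graph.add_edge("tools", "agent")

app = graph.compile()
```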
|
||||
|
||||
## Streaming + Tool Use
|
||||
|
||||
```python
|
||||
from langchain_anthropic import ChatAnthropic
|
||||
from langchain_core.tools import tool
|
||||
|
||||
@tool
|
||||
def get_info(topic: str) -> str:
|
||||
"""Get information"""
|
||||
return f"Information about {topic}"
|
||||
|
||||
llm = ChatAnthropic(
|
||||
model="claude-sonnet-4-5",
|
||||
streaming=True
|
||||
)
|
||||
llm_with_tools = llm.bind_tools([get_info])
|
||||
|
||||
for chunk in llm_with_tools.stream("Tell me about Python"):
|
||||
if hasattr(chunk, 'tool_calls') and chunk.tool_calls:
|
||||
print(f"Tool: {chunk.tool_calls}")
|
||||
elif chunk.content:
|
||||
print(chunk.content, end="", flush=True)
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
```python
|
||||
from langchain_anthropic import ChatAnthropic
|
||||
from langchain_core.tools import tool
|
||||
import anthropic
|
||||
|
||||
@tool
|
||||
def risky_operation(data: str) -> str:
|
||||
"""Risky operation"""
|
||||
if not data:
|
||||
raise ValueError("Data is required")
|
||||
return f"Processing complete: {data}"
|
||||
|
||||
try:
|
||||
llm = ChatAnthropic(model="claude-sonnet-4-5")
|
||||
llm_with_tools = llm.bind_tools([risky_operation])
|
||||
response = llm_with_tools.invoke("Execute operation")
|
||||
except anthropic.BadRequestError as e:
|
||||
print(f"Invalid request: {e}")
|
||||
except Exception as e:
|
||||
print(f"Error: {e}")
|
||||
```
|
||||
|
||||
## Tool Best Practices
|
||||
|
||||
### 1. Clear Documentation
|
||||
|
||||
```python
|
||||
@tool
|
||||
def analyze_sentiment(text: str, language: str = "en") -> dict:
|
||||
"""Perform sentiment analysis on text.
|
||||
|
||||
Args:
|
||||
text: Text to analyze (max 1000 characters)
|
||||
language: Language of text ("ja", "en", etc.) defaults to English
|
||||
|
||||
Returns:
|
||||
{"sentiment": "positive|negative|neutral", "score": 0.0-1.0}
|
||||
"""
|
||||
# Implementation
|
||||
return {"sentiment": "positive", "score": 0.8}
|
||||
```
|
||||
|
||||
### 2. Use Type Hints
|
||||
|
||||
```python
|
||||
from typing import List, Dict
|
||||
|
||||
@tool
|
||||
def batch_process(items: List[str]) -> Dict[str, int]:
|
||||
"""Batch process multiple items.
|
||||
|
||||
Args:
|
||||
items: List of items to process
|
||||
|
||||
Returns:
|
||||
Dictionary of processing results for each item
|
||||
"""
|
||||
return {item: len(item) for item in items}
|
||||
```
|
||||
|
||||
### 3. Proper Error Handling
|
||||
|
||||
```python
|
||||
@tool
|
||||
def safe_operation(data: str) -> str:
|
||||
"""Safe operation"""
|
||||
try:
|
||||
# Execute operation
|
||||
result = process(data)
|
||||
return result
|
||||
except ValueError as e:
|
||||
return f"Input error: {e}"
|
||||
except Exception as e:
|
||||
return f"Unexpected error: {e}"
|
||||
```
|
||||
|
||||
## Reference Links
|
||||
|
||||
- [Claude Tool Use Guide](https://docs.anthropic.com/en/docs/tool-use)
|
||||
- [LangGraph Tools Documentation](https://langchain-ai.github.io/langgraph/concepts/agentic_concepts/)
|
||||
115
skills/langgraph-master/06_llm_model_ids_gemini.md
Normal file
@@ -0,0 +1,115 @@
|
||||
# Google Gemini Model IDs
|
||||
|
||||
List of available model IDs for the Google Gemini API.
|
||||
|
||||
> **Last Updated**: 2025-11-24
|
||||
|
||||
## Model List
|
||||
|
||||
While there are many models available, `gemini-2.5-flash` is generally recommended for development at this time. It offers a good balance of cost and performance for a wide range of use cases.
|
||||
|
||||
### Gemini 3.x (Latest)
|
||||
|
||||
| Model ID | Context | Max Output | Use Case |
|
||||
| ---------------------------------------- | ------------ | -------- | ------------------ |
|
||||
| `google/gemini-3-pro-preview` | - | 64K | Latest high-performance model |
|
||||
| `google/gemini-3-pro-image-preview` | - | - | Image generation |
|
||||
| `google/gemini-3-pro-image-preview-edit` | - | - | Image editing |
|
||||
|
||||
### Gemini 2.5
|
||||
|
||||
| Model ID | Context | Max Output | Use Case |
|
||||
| ----------------------- | ------------ | -------- | ---------------------- |
|
||||
| `google/gemini-2.5-pro` | 1M (2M planned) | - | High performance |
|
||||
| `gemini-2.5-flash` | 1M | - | Fast balanced model (recommended) |
|
||||
| `gemini-2.5-flash-lite` | 1M | - | Lightweight and fast |
|
||||
|
||||
**Note**: Free tier is limited to approximately 32K tokens. Gemini Advanced (2.5 Pro) supports 1M tokens.
|
||||
|
||||
### Gemini 2.0
|
||||
|
||||
| Model ID | Context | Max Output | Use Case |
|
||||
| ------------------ | ------------ | -------- | ------ |
|
||||
| `gemini-2.0-flash` | 1M | - | Stable version |
|
||||
|
||||
## Basic Usage
|
||||
|
||||
```python
|
||||
from langchain_google_genai import ChatGoogleGenerativeAI
|
||||
|
||||
# Recommended: Balanced model
|
||||
llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash")
|
||||
|
||||
# Also works with prefix
|
||||
llm = ChatGoogleGenerativeAI(model="models/gemini-2.5-flash")
|
||||
|
||||
# High-performance version
|
||||
llm = ChatGoogleGenerativeAI(model="google/gemini-3-pro")
|
||||
|
||||
# Lightweight version
|
||||
llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash-lite")
|
||||
```
|
||||
|
||||
### Environment Variables
|
||||
|
||||
```bash
|
||||
export GOOGLE_API_KEY="your-api-key"
|
||||
```
|
||||
|
||||
## Model Selection Guide
|
||||
|
||||
| Use Case | Recommended Model |
|
||||
| ------------------ | ------------------------------ |
|
||||
| Cost-focused | `gemini-2.5-flash-lite` |
|
||||
| Balanced | `gemini-2.5-flash` |
|
||||
| Performance-focused | `google/gemini-3-pro` |
|
||||
| Large context | `gemini-2.5-pro` (1M tokens) |
|
||||
|
||||
## Gemini Features
|
||||
|
||||
### 1. Large Context Window
|
||||
|
||||
Gemini is the **industry's first model to support 1M tokens**:
|
||||
|
||||
| Tier | Context Limit |
|
||||
| ------------------------- | ---------------- |
|
||||
| Gemini Advanced (2.5 Pro) | 1M tokens |
|
||||
| Vertex AI | 1M tokens |
|
||||
| Free tier | ~32K tokens |
|
||||
|
||||
**Use Cases**:
|
||||
|
||||
- Long document analysis
|
||||
- Understanding entire codebases
|
||||
- Long conversation history
|
||||
|
||||
```python
|
||||
# Processing large context
|
||||
llm = ChatGoogleGenerativeAI(
|
||||
model="gemini-2.5-pro",
|
||||
max_tokens=8192 # Specify output token count
|
||||
)
|
||||
```
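Before relying on the full window, it can help to count tokens up front, since the usable context depends on your tier. A sketch using the `google-generativeai` SDK's `count_tokens` (assumes `GOOGLE_API_KEY` is set; the 32K threshold reflects the free-tier note above):

```python
import os
import google.generativeai as genai

genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))
model = genai.GenerativeModel("gemini-2.5-flash")

prompt = "..."  # the text you are about to send
if model.count_tokens(prompt).total_tokens > 32_000:
    print("Prompt may exceed the free-tier context limit")
```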
|
||||
|
||||
**Future**: Gemini 2.5 Pro is planned to support 2M token context windows.
|
||||
|
||||
### 2. Multimodal Support
|
||||
|
||||
Image input and generation capabilities (see [Advanced Features](06_llm_model_ids_gemini_advanced.md) for details).
|
||||
|
||||
## Important Notes
|
||||
|
||||
- ❌ **Deprecated**: Gemini 1.0, 1.5 series are no longer available
|
||||
- ✅ **Migration Recommended**: Use `gemini-2.5-flash` or later models
|
||||
|
||||
## Detailed Documentation
|
||||
|
||||
For advanced configuration and multimodal features, see:
|
||||
|
||||
- **[Gemini Advanced Features](06_llm_model_ids_gemini_advanced.md)**
|
||||
|
||||
## Reference Links
|
||||
|
||||
- [Gemini API Official](https://ai.google.dev/gemini-api/docs/models)
|
||||
- [Google AI Studio](https://makersuite.google.com/)
|
||||
- [LangChain Integration](https://docs.langchain.com/oss/python/integrations/chat/google_generative_ai)
|
||||
118
skills/langgraph-master/06_llm_model_ids_gemini_advanced.md
Normal file
@@ -0,0 +1,118 @@
|
||||
# Gemini Advanced Features
|
||||
|
||||
Advanced configuration and multimodal features for Google Gemini models.
|
||||
|
||||
## Context Window and Output Limits
|
||||
|
||||
| Model | Context Window | Max Output Tokens |
|
||||
|--------|-------------------|---------------|
|
||||
| Gemini 3 Pro | - | 64K |
|
||||
| Gemini 2.5 Pro | 1M (2M planned) | - |
|
||||
| Gemini 2.5 Flash | 1M | - |
|
||||
| Gemini 2.0 Flash | 1M | - |
|
||||
|
||||
**Tier-based Limits**:
|
||||
- Gemini Advanced / Vertex AI: 1M tokens
|
||||
- Free tier: ~32K tokens
|
||||
|
||||
## Parameter Configuration
|
||||
|
||||
```python
|
||||
from langchain_google_genai import ChatGoogleGenerativeAI
|
||||
|
||||
llm = ChatGoogleGenerativeAI(
|
||||
model="gemini-2.5-flash",
|
||||
temperature=0.7, # Creativity (0.0-1.0)
|
||||
top_p=0.9, # Diversity
|
||||
top_k=40, # Sampling
|
||||
max_tokens=8192, # Max output
|
||||
)
|
||||
```
|
||||
|
||||
## Multimodal Features
|
||||
|
||||
### Image Input
|
||||
|
||||
```python
|
||||
from langchain_google_genai import ChatGoogleGenerativeAI
|
||||
from langchain_core.messages import HumanMessage
|
||||
|
||||
llm = ChatGoogleGenerativeAI(model="gemini-2.5-flash")
|
||||
|
||||
message = HumanMessage(
|
||||
content=[
|
||||
{"type": "text", "text": "What is in this image?"},
|
||||
{"type": "image_url", "image_url": "https://example.com/image.jpg"}
|
||||
]
|
||||
)
|
||||
|
||||
response = llm.invoke([message])
|
||||
```
|
||||
|
||||
### Image Generation (Gemini 3.x)
|
||||
|
||||
```python
|
||||
llm = ChatGoogleGenerativeAI(model="google/gemini-3-pro-image-preview")
|
||||
response = llm.invoke("Generate a beautiful sunset landscape")
|
||||
```
|
||||
|
||||
## Streaming
|
||||
|
||||
```python
|
||||
llm = ChatGoogleGenerativeAI(
|
||||
model="gemini-2.5-flash",
|
||||
streaming=True
|
||||
)
|
||||
|
||||
for chunk in llm.stream("Question"):
|
||||
print(chunk.content, end="", flush=True)
|
||||
```
|
||||
|
||||
## Safety Settings
|
||||
|
||||
```python
|
||||
from langchain_google_genai import (
|
||||
ChatGoogleGenerativeAI,
|
||||
HarmBlockThreshold,
|
||||
HarmCategory
|
||||
)
|
||||
|
||||
llm = ChatGoogleGenerativeAI(
|
||||
model="gemini-2.5-flash",
|
||||
safety_settings={
|
||||
HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
|
||||
HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
|
||||
}
|
||||
)
|
||||
```
|
||||
|
||||
## Retrieving Model List
|
||||
|
||||
```python
|
||||
import google.generativeai as genai
|
||||
import os
|
||||
|
||||
genai.configure(api_key=os.getenv("GOOGLE_API_KEY"))
|
||||
|
||||
for model in genai.list_models():
|
||||
if 'generateContent' in model.supported_generation_methods:
|
||||
print(f"{model.name}: {model.input_token_limit} tokens")
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
```python
|
||||
from google.api_core import exceptions
|
||||
|
||||
try:
|
||||
response = llm.invoke("Question")
|
||||
except exceptions.ResourceExhausted:
|
||||
print("Rate limit reached")
|
||||
except exceptions.InvalidArgument as e:
|
||||
print(f"Invalid argument: {e}")
|
||||
```
|
||||
|
||||
## Reference Links
|
||||
|
||||
- [Gemini API Models](https://ai.google.dev/gemini-api/docs/models)
|
||||
- [Vertex AI](https://docs.cloud.google.com/vertex-ai/generative-ai/docs/models)
|
||||
186
skills/langgraph-master/06_llm_model_ids_openai.md
Normal file
@@ -0,0 +1,186 @@
|
||||
# OpenAI GPT Model IDs
|
||||
|
||||
List of available model IDs for the OpenAI API.
|
||||
|
||||
> **Last Updated**: 2025-11-24
|
||||
|
||||
## Model List
|
||||
|
||||
### GPT-5 Series
|
||||
|
||||
> **Released**: August 2025
|
||||
|
||||
| Model ID | Context | Max Output | Features |
|
||||
|-----------|------------|---------|------|
|
||||
| `gpt-5` | 400K | 128K | Full-featured. High-quality general-purpose tasks |
|
||||
| `gpt-5-pro` | 400K | 272K | Extended reasoning version. Complex enterprise and research use cases |
|
||||
| `gpt-5-mini` | 400K | 128K | Small high-speed version. Low latency |
|
||||
| `gpt-5-nano` | 400K | 128K | Ultra-lightweight version. Resource optimized |
|
||||
|
||||
**Performance**: Achieved 94.6% on AIME 2025, 74.9% on SWE-bench Verified
|
||||
**Note**: Context window is the combined length of input + output
|
||||
|
||||
### GPT-5.1 Series (Latest Update)
|
||||
|
||||
| Model ID | Context | Max Output | Features |
|
||||
|-----------|------------|---------|------|
|
||||
| `gpt-5.1` | 128K (ChatGPT) / 400K (API) | 128K | Balance of intelligence and speed |
|
||||
| `gpt-5.1-instant` | 128K / 400K | 128K | Adaptive reasoning. Balances speed and accuracy |
|
||||
| `gpt-5.1-thinking` | 128K / 400K | 128K | Adjusts thinking time based on problem complexity |
|
||||
| `gpt-5.1-mini` | 128K / 400K | 128K | Compact version |
|
||||
| `gpt-5.1-codex` | 400K | 128K | Code-specialized version (for GitHub Copilot) |
|
||||
| `gpt-5.1-codex-mini` | 400K | 128K | Code-specialized compact version |
|
||||
|
||||
## Basic Usage
|
||||
|
||||
```python
|
||||
from langchain_openai import ChatOpenAI
|
||||
|
||||
# Latest: GPT-5
|
||||
llm = ChatOpenAI(model="gpt-5")
|
||||
|
||||
# Latest update: GPT-5.1
|
||||
llm = ChatOpenAI(model="gpt-5.1")
|
||||
|
||||
# High performance: GPT-5 Pro
|
||||
llm = ChatOpenAI(model="gpt-5-pro")
|
||||
|
||||
# Cost-conscious: Compact version
|
||||
llm = ChatOpenAI(model="gpt-5-mini")
|
||||
|
||||
# Ultra-lightweight
|
||||
llm = ChatOpenAI(model="gpt-5-nano")
|
||||
```
|
||||
|
||||
### Environment Variables
|
||||
|
||||
```bash
|
||||
export OPENAI_API_KEY="sk-..."
|
||||
```
|
||||
|
||||
## Model Selection Guide
|
||||
|
||||
| Use Case | Recommended Model |
|
||||
|------|-----------|
|
||||
| **Maximum Performance** | `gpt-5-pro` |
|
||||
| **General-Purpose Tasks** | `gpt-5` or `gpt-5.1` |
|
||||
| **Cost-Conscious** | `gpt-5-mini` |
|
||||
| **Ultra-Lightweight** | `gpt-5-nano` |
|
||||
| **Adaptive Reasoning** | `gpt-5.1-instant` or `gpt-5.1-thinking` |
|
||||
| **Code Generation** | `gpt-5.1-codex` or `gpt-5` |
|
||||
|
||||
## GPT-5 Features
|
||||
|
||||
### 1. Large Context Window
|
||||
|
||||
GPT-5 series has a **400K token** context window:
|
||||
|
||||
```python
|
||||
llm = ChatOpenAI(
|
||||
model="gpt-5",
|
||||
max_tokens=128000 # Max output: 128K
|
||||
)
|
||||
|
||||
# GPT-5 Pro has a maximum output of 272K
|
||||
llm_pro = ChatOpenAI(
|
||||
model="gpt-5-pro",
|
||||
max_tokens=272000
|
||||
)
|
||||
```
|
||||
|
||||
**Use Cases**:
|
||||
- Batch processing of long documents
|
||||
- Analysis of large codebases
|
||||
- Maintaining long conversation histories
|
||||
|
||||
### 2. Software On-Demand Generation
|
||||
|
||||
```python
|
||||
llm = ChatOpenAI(model="gpt-5")
|
||||
response = llm.invoke("Generate a web application")
|
||||
```
|
||||
|
||||
### 3. Advanced Reasoning Capabilities
|
||||
|
||||
**Performance Metrics**:
|
||||
- AIME 2025: 94.6%
|
||||
- SWE-bench Verified: 74.9%
|
||||
- Aider Polyglot: 88%
|
||||
- MMMU: 84.2%
|
||||
|
||||
### 4. GPT-5.1 Adaptive Reasoning
|
||||
|
||||
Automatically adjusts thinking time based on problem complexity:
|
||||
|
||||
```python
|
||||
# Balance between speed and accuracy
|
||||
llm = ChatOpenAI(model="gpt-5.1-instant")
|
||||
|
||||
# Tasks requiring deep thought
|
||||
llm = ChatOpenAI(model="gpt-5.1-thinking")
|
||||
```
|
||||
|
||||
**Compaction Technology**: GPT-5.1 introduces technology that effectively handles longer contexts.
|
||||
|
||||
### 5. GPT-5 Pro - Extended Reasoning
|
||||
|
||||
Advanced reasoning for enterprise and research environments. **Maximum output of 272K tokens**:
|
||||
|
||||
```python
|
||||
llm = ChatOpenAI(
|
||||
model="gpt-5-pro",
|
||||
max_tokens=272000 # Larger output possible than other models
|
||||
)
|
||||
# More detailed and reliable responses
|
||||
```
|
||||
|
||||
### 6. Code-Specialized Models
|
||||
|
||||
```python
|
||||
# Used in GitHub Copilot
|
||||
llm = ChatOpenAI(model="gpt-5.1-codex")
|
||||
|
||||
# Compact version
|
||||
llm = ChatOpenAI(model="gpt-5.1-codex-mini")
|
||||
```
|
||||
|
||||
## Multimodal Support
|
||||
|
||||
GPT-5 supports images and audio (see [Advanced Features](06_llm_model_ids_openai_advanced.md) for details).
|
||||
|
||||
## JSON Mode
|
||||
|
||||
When structured output is needed:
|
||||
|
||||
```python
|
||||
llm = ChatOpenAI(
|
||||
model="gpt-5",
|
||||
model_kwargs={"response_format": {"type": "json_object"}}
|
||||
)
|
||||
```
|
||||
|
||||
## Retrieving Model List
|
||||
|
||||
```python
|
||||
from openai import OpenAI
|
||||
import os
|
||||
|
||||
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
|
||||
models = client.models.list()
|
||||
|
||||
for model in models:
|
||||
if model.id.startswith("gpt-5"):
|
||||
print(model.id)
|
||||
```
|
||||
|
||||
## Detailed Documentation
|
||||
|
||||
For advanced settings, vision features, and Azure OpenAI:
|
||||
- **[OpenAI Advanced Features](06_llm_model_ids_openai_advanced.md)**
|
||||
|
||||
## Reference Links
|
||||
|
||||
- [OpenAI GPT-5](https://openai.com/index/introducing-gpt-5/)
|
||||
- [OpenAI GPT-5.1](https://openai.com/index/gpt-5-1/)
|
||||
- [OpenAI Platform](https://platform.openai.com/)
|
||||
- [LangChain Integration](https://docs.langchain.com/oss/python/integrations/chat/openai)
|
||||
289
skills/langgraph-master/06_llm_model_ids_openai_advanced.md
Normal file
@@ -0,0 +1,289 @@
|
||||
# OpenAI GPT-5 Advanced Features
|
||||
|
||||
Advanced settings and multimodal features for GPT-5 models.
|
||||
|
||||
## Parameter Settings
|
||||
|
||||
```python
|
||||
from langchain_openai import ChatOpenAI
|
||||
|
||||
llm = ChatOpenAI(
|
||||
model="gpt-5",
|
||||
temperature=0.7, # Creativity (0.0-2.0)
|
||||
max_tokens=128000, # Max output (GPT-5: 128K)
|
||||
top_p=0.9, # Diversity
|
||||
frequency_penalty=0.0, # Repetition penalty
|
||||
presence_penalty=0.0, # Topic diversity
|
||||
)
|
||||
|
||||
# GPT-5 Pro (larger max output)
|
||||
llm_pro = ChatOpenAI(
|
||||
model="gpt-5-pro",
|
||||
max_tokens=272000, # GPT-5 Pro: 272K
|
||||
)
|
||||
```
|
||||
|
||||
## Context Window and Output Limits
|
||||
|
||||
| Model | Context Window | Max Output Tokens |
|
||||
|--------|-------------------|---------------|
|
||||
| `gpt-5` | 400,000 (API) | 128,000 |
|
||||
| `gpt-5-mini` | 400,000 (API) | 128,000 |
|
||||
| `gpt-5-nano` | 400,000 (API) | 128,000 |
|
||||
| `gpt-5-pro` | 400,000 | 272,000 |
|
||||
| `gpt-5.1` | 128,000 (ChatGPT) / 400,000 (API) | 128,000 |
|
||||
| `gpt-5.1-codex` | 400,000 | 128,000 |
|
||||
|
||||
**Note**: Context window is the combined length of input + output.
|
||||
|
||||
## Vision (Image Processing)
|
||||
|
||||
```python
|
||||
from langchain_openai import ChatOpenAI
|
||||
from langchain_core.messages import HumanMessage
|
||||
|
||||
llm = ChatOpenAI(model="gpt-5")
|
||||
|
||||
message = HumanMessage(
|
||||
content=[
|
||||
{"type": "text", "text": "What is shown in this image?"},
|
||||
{
|
||||
"type": "image_url",
|
||||
"image_url": {
|
||||
"url": "https://example.com/image.jpg",
|
||||
"detail": "high" # "low", "high", "auto"
|
||||
}
|
||||
}
|
||||
]
|
||||
)
|
||||
|
||||
response = llm.invoke([message])
|
||||
```
|
||||
|
||||
## Tool Use (Function Calling)
|
||||
|
||||
```python
|
||||
from langchain_openai import ChatOpenAI
|
||||
from langchain_core.tools import tool
|
||||
|
||||
@tool
|
||||
def get_weather(location: str) -> str:
|
||||
"""Get weather"""
|
||||
return f"The weather in {location} is sunny"
|
||||
|
||||
@tool
|
||||
def calculate(expression: str) -> float:
|
||||
"""Calculate"""
|
||||
return eval(expression)
|
||||
|
||||
llm = ChatOpenAI(model="gpt-5")
|
||||
llm_with_tools = llm.bind_tools([get_weather, calculate])
|
||||
|
||||
response = llm_with_tools.invoke("Tell me the weather in Tokyo and 2+2")
|
||||
print(response.tool_calls)
|
||||
```
|
||||
|
||||
## Parallel Tool Calling
|
||||
|
||||
```python
|
||||
@tool
|
||||
def get_stock_price(symbol: str) -> float:
|
||||
"""Get stock price"""
|
||||
return 150.25
|
||||
|
||||
@tool
|
||||
def get_company_info(symbol: str) -> dict:
|
||||
"""Get company information"""
|
||||
return {"name": "Apple Inc.", "industry": "Technology"}
|
||||
|
||||
llm = ChatOpenAI(model="gpt-5")
|
||||
llm_with_tools = llm.bind_tools([get_stock_price, get_company_info])
|
||||
|
||||
# Call multiple tools in parallel
|
||||
response = llm_with_tools.invoke("Tell me the stock price and company info for AAPL")
|
||||
```
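The returned message carries one entry per requested call; a quick way to inspect them using LangChain's standard `tool_calls` structure:

```python
# Each tool call is a dict with "name", "args", and "id".
for tool_call in response.tool_calls:
    print(tool_call["name"], tool_call["args"])
```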
|
||||
|
||||
## Streaming
|
||||
|
||||
```python
|
||||
llm = ChatOpenAI(
|
||||
model="gpt-5",
|
||||
streaming=True
|
||||
)
|
||||
|
||||
for chunk in llm.stream("Question"):
|
||||
print(chunk.content, end="", flush=True)
|
||||
```
|
||||
|
||||
## JSON Mode
|
||||
|
||||
```python
|
||||
llm = ChatOpenAI(
|
||||
model="gpt-5",
|
||||
model_kwargs={"response_format": {"type": "json_object"}}
|
||||
)
|
||||
|
||||
response = llm.invoke("Return user information in JSON format")
|
||||
```
|
||||
|
||||
## Using GPT-5.1 Adaptive Reasoning
|
||||
|
||||
### Instant Mode
|
||||
|
||||
Balance between speed and accuracy:
|
||||
|
||||
```python
|
||||
llm = ChatOpenAI(model="gpt-5.1-instant")
|
||||
|
||||
# Adaptively adjusts reasoning time
|
||||
response = llm.invoke("Solve this problem...")
|
||||
```
|
||||
|
||||
### Thinking Mode
|
||||
|
||||
Deep thought for complex problems:
|
||||
|
||||
```python
|
||||
llm = ChatOpenAI(model="gpt-5.1-thinking")
|
||||
|
||||
# Improves accuracy with longer thinking time
|
||||
response = llm.invoke("Complex math problem...")
|
||||
```
|
||||
|
||||
## Leveraging GPT-5 Pro
|
||||
|
||||
Extended reasoning for enterprise and research environments:
|
||||
|
||||
```python
|
||||
llm = ChatOpenAI(
|
||||
model="gpt-5-pro",
|
||||
temperature=0.3, # Precision-focused
|
||||
max_tokens=272000 # Large output possible
|
||||
)
|
||||
|
||||
# More detailed and reliable responses
|
||||
response = llm.invoke("Detailed analysis of...")
|
||||
```
|
||||
|
||||
## Code Generation Specialized Models
|
||||
|
||||
```python
|
||||
# Codex used in GitHub Copilot
|
||||
llm = ChatOpenAI(model="gpt-5.1-codex")
|
||||
|
||||
response = llm.invoke("Implement quicksort in Python")
|
||||
|
||||
# Compact version (fast)
|
||||
llm_mini = ChatOpenAI(model="gpt-5.1-codex-mini")
|
||||
```
|
||||
|
||||
## Tracking Token Usage
|
||||
|
||||
```python
|
||||
from langchain.callbacks import get_openai_callback
|
||||
|
||||
llm = ChatOpenAI(model="gpt-5")
|
||||
|
||||
with get_openai_callback() as cb:
|
||||
response = llm.invoke("Question")
|
||||
print(f"Total Tokens: {cb.total_tokens}")
|
||||
print(f"Prompt Tokens: {cb.prompt_tokens}")
|
||||
print(f"Completion Tokens: {cb.completion_tokens}")
|
||||
print(f"Total Cost (USD): ${cb.total_cost}")
|
||||
```
|
||||
|
||||
## Azure OpenAI Service
|
||||
|
||||
GPT-5 is also available on Azure:
|
||||
|
||||
```python
|
||||
from langchain_openai import AzureChatOpenAI
|
||||
|
||||
llm = AzureChatOpenAI(
|
||||
azure_endpoint="https://your-resource.openai.azure.com/",
|
||||
api_key="your-azure-api-key",
|
||||
api_version="2024-12-01-preview",
|
||||
deployment_name="gpt-5",
|
||||
model="gpt-5"
|
||||
)
|
||||
```
|
||||
|
||||
### Environment Variables (Azure)
|
||||
|
||||
```bash
|
||||
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
|
||||
export AZURE_OPENAI_API_KEY="your-azure-api-key"
|
||||
export AZURE_OPENAI_DEPLOYMENT_NAME="gpt-5"
|
||||
```
|
||||
|
||||
## Error Handling
|
||||
|
||||
```python
|
||||
from langchain_openai import ChatOpenAI
|
||||
from openai import OpenAIError, RateLimitError
|
||||
|
||||
try:
|
||||
llm = ChatOpenAI(model="gpt-5")
|
||||
response = llm.invoke("Question")
|
||||
except RateLimitError:
|
||||
print("Rate limit reached")
|
||||
except OpenAIError as e:
|
||||
print(f"OpenAI error: {e}")
|
||||
```
|
||||
|
||||
## Handling Rate Limits
|
||||
|
||||
```python
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential
from openai import RateLimitError

@retry(
    wait=wait_exponential(multiplier=1, min=4, max=60),
    stop=stop_after_attempt(5),
    retry=retry_if_exception_type(RateLimitError),  # retry only on rate-limit errors
)
def invoke_with_retry(llm, messages):
    return llm.invoke(messages)

llm = ChatOpenAI(model="gpt-5")
response = invoke_with_retry(llm, "Question")
```
|
||||
|
||||
## Leveraging Large Context
|
||||
|
||||
Utilizing GPT-5's 400K context window:
|
||||
|
||||
```python
|
||||
llm = ChatOpenAI(model="gpt-5")
|
||||
|
||||
# Process large amounts of documents at once
|
||||
long_document = "..." * 100000 # Long document
|
||||
|
||||
response = llm.invoke(f"""
|
||||
Please analyze the following document:
|
||||
|
||||
{long_document}
|
||||
|
||||
Provide a summary and key points.
|
||||
""")
|
||||
```
|
||||
|
||||
## Compaction Technology
|
||||
|
||||
GPT-5.1 introduces technology that effectively handles longer contexts:
|
||||
|
||||
```python
|
||||
# Processing very long conversation histories or documents
|
||||
llm = ChatOpenAI(model="gpt-5.1")
|
||||
|
||||
# Efficiently processed through Compaction
|
||||
response = llm.invoke(very_long_context)
|
||||
```
|
||||
|
||||
## Reference Links
|
||||
|
||||
- [OpenAI GPT-5 Documentation](https://openai.com/gpt-5/)
|
||||
- [OpenAI GPT-5.1 Documentation](https://openai.com/index/gpt-5-1/)
|
||||
- [OpenAI API Reference](https://platform.openai.com/docs/api-reference)
|
||||
- [OpenAI Platform Models](https://platform.openai.com/docs/models)
|
||||
- [Azure OpenAI Documentation](https://learn.microsoft.com/azure/ai-services/openai/)
|
||||
131
skills/langgraph-master/README.md
Normal file
@@ -0,0 +1,131 @@
|
||||
# langgraph-master
|
||||
|
||||
**PROACTIVE SKILL** - Comprehensive guide for building AI agents with LangGraph. Claude invokes this skill automatically when LangGraph development is detected, providing architecture patterns, implementation guidance, and best practices.
|
||||
|
||||
## Installation
|
||||
|
||||
```
|
||||
/plugin marketplace add hiroshi75/ccplugins
|
||||
/plugin install langgraph-master-plugin@hiroshi75
|
||||
```
|
||||
|
||||
## Automatic Triggers
|
||||
|
||||
Claude **automatically invokes** this skill when:
|
||||
|
||||
- **LangGraph development** - Detecting LangGraph imports or StateGraph usage
|
||||
- **Agent architecture** - Planning or implementing AI agent workflows
|
||||
- **Graph patterns** - Working with nodes, edges, or state management
|
||||
- **Keywords detected** - When user mentions: LangGraph, StateGraph, agent workflow, node, edge, checkpointer
|
||||
- **Implementation requests** - Building chatbots, RAG agents, or autonomous systems
|
||||
|
||||
**No manual action required** - Claude provides LangGraph expertise automatically.
|
||||
|
||||
## Workflow
|
||||
|
||||
```
|
||||
Detect LangGraph context → Auto-invoke skill → Provide patterns/guidance → Implement with best practices
|
||||
```
|
||||
|
||||
## Manual Invocation (Optional)
|
||||
|
||||
To manually trigger LangGraph guidance:
|
||||
|
||||
```
|
||||
/langgraph-master-plugin:langgraph-master
|
||||
```
|
||||
|
||||
For learning specific patterns:
|
||||
|
||||
```
|
||||
/langgraph-master-plugin:langgraph-master "explain routing pattern"
|
||||
```
|
||||
|
||||
## Learning Resources
|
||||
|
||||
The skill provides comprehensive documentation covering:
|
||||
|
||||
| Category | Topics | Files |
|
||||
|----------|--------|-------|
|
||||
| **Core Concepts** | State, Node, Edge fundamentals | 01_core_concepts_*.md |
|
||||
| **Architecture** | 6 major graph patterns (Routing, Agent, etc.) | 02_graph_architecture_*.md |
|
||||
| **Memory** | Checkpointer, Store, Persistence | 03_memory_management_*.md |
|
||||
| **Tools** | Tool definition, Command API, Tool Node | 04_tool_integration_*.md |
|
||||
| **Advanced** | Human-in-the-Loop, Streaming, Map-Reduce | 05_advanced_features_*.md |
|
||||
| **Models** | Gemini, Claude, OpenAI model IDs | 06_llm_model_ids*.md |
|
||||
| **Examples** | Chatbot, RAG agent implementations | example_*.md |
|
||||
|
||||
## Subagent: langgraph-engineer
|
||||
|
||||
The skill includes a specialized **langgraph-master-plugin:langgraph-engineer** subagent for efficient parallel development:
|
||||
|
||||
### Key Features
|
||||
- **Functional Module Scope**: Implements complete features (2-5 nodes) as cohesive units
|
||||
- **Parallel Execution**: Multiple subagents can develop different modules simultaneously
|
||||
- **Production-Ready**: No TODOs or placeholders, fully functional code only
|
||||
- **Skill-Driven**: Always references langgraph-master documentation before implementation
|
||||
|
||||
### When to Use
|
||||
1. **Feature Module Implementation**: RAG search, intent analysis, approval workflows
|
||||
2. **Subgraph Patterns**: Complete functional units with nodes, edges, and state
|
||||
3. **Tool Integration**: Full tool integration modules with error handling
|
||||
|
||||
### Parallel Development Pattern
|
||||
```
|
||||
Planner → Decompose into functional modules
|
||||
├─ langgraph-engineer 1: Intent analysis module (parallel)
|
||||
│ └─ analyze + classify + route nodes
|
||||
└─ langgraph-engineer 2: RAG search module (parallel)
|
||||
└─ retrieve + rerank + generate nodes
|
||||
Orchestrator → Integrate modules into complete graph
|
||||
```
|
||||
|
||||
## How It Works
|
||||
|
||||
1. **Context Detection** - Claude monitors LangGraph-related activities
|
||||
2. **Trigger Evaluation** - Checks if auto-invoke conditions are met
|
||||
3. **Skill Invocation** - Automatically invokes langgraph-master skill
|
||||
4. **Pattern Guidance** - Provides architecture patterns and best practices
|
||||
5. **Implementation Support** - Assists with code generation using documented patterns
|
||||
|
||||
## Example Use Cases
|
||||
|
||||
### Automatic Guidance
|
||||
```python
|
||||
# Claude detects LangGraph usage and automatically provides guidance
|
||||
from langgraph.graph import StateGraph
|
||||
|
||||
# Skill auto-invoked → Provides state management patterns
|
||||
class AgentState(TypedDict):
|
||||
messages: list[str]
|
||||
```
|
||||
|
||||
### Pattern Implementation
|
||||
```
|
||||
User: "Build a RAG agent with LangGraph"
|
||||
Claude: [Auto-invokes skill]
|
||||
→ Provides RAG architecture pattern
|
||||
→ Suggests node structure (retrieve → rerank → generate)
|
||||
→ Implements with checkpointer for state persistence
|
||||
```
|
||||
|
||||
### Subagent Delegation
|
||||
```
|
||||
User: "Create a chatbot with intent classification and RAG search"
|
||||
Claude: → Decomposes into 2 modules
|
||||
→ Spawns langgraph-engineer for each module (parallel)
|
||||
→ Integrates completed modules into final graph
|
||||
```
|
||||
|
||||
## Benefits
|
||||
|
||||
- **Faster Development**: Pre-validated architecture patterns reduce trial and error
|
||||
- **Best Practices**: Automatically applies LangGraph best practices and conventions
|
||||
- **Parallel Implementation**: Efficient development through subagent delegation
|
||||
- **Complete Documentation**: 40+ documentation files covering all aspects
|
||||
- **Production-Ready**: Guidance ensures robust, maintainable implementations
|
||||
|
||||
## Reference Links
|
||||
|
||||
- [LangGraph Official Docs](https://docs.langchain.com/oss/python/langgraph/overview)
|
||||
- [LangGraph GitHub](https://github.com/langchain-ai/langgraph)
|
||||
193
skills/langgraph-master/SKILL.md
Normal file
@@ -0,0 +1,193 @@
|
||||

---
name: langgraph-master
description: Use when specifying or implementing LangGraph applications - from architecture planning and specification writing to actual code implementation. Also use for designing agent workflows or learning LangGraph patterns. This is a comprehensive guide for building AI agents with LangGraph, covering core concepts, architecture patterns, memory management, tool integration, and advanced features.
---

# LangGraph Agent Construction Skill

A comprehensive guide for building AI agents using LangGraph.

## 📚 Learning Content

### [01. Core Concepts](01_core_concepts_overview.md)

Understanding the three core elements of LangGraph

- [State](01_core_concepts_state.md)
- [Node](01_core_concepts_node.md)
- [Edge](01_core_concepts_edge.md)
- Advantages of the graph-based approach

### [02. Graph Architecture](02_graph_architecture_overview.md)

Six major graph patterns and agent design

- [Workflow vs Agent Differences](02_graph_architecture_workflow_vs_agent.md)
- [Prompt Chaining (Sequential Processing)](02_graph_architecture_prompt_chaining.md)
- [Parallelization](02_graph_architecture_parallelization.md)
- [Routing (Branching)](02_graph_architecture_routing.md)
- [Orchestrator-Worker](02_graph_architecture_orchestrator_worker.md)
- [Evaluator-Optimizer](02_graph_architecture_evaluator_optimizer.md)
- [Agent (Autonomous Tool Usage)](02_graph_architecture_agent.md)
- [Subgraph](02_graph_architecture_subgraph.md)

### [03. Memory Management](03_memory_management_overview.md)

Persistence and checkpoint functionality

- [Checkpointer](03_memory_management_checkpointer.md)
- [Store (Long-term Memory)](03_memory_management_store.md)
- [Persistence](03_memory_management_persistence.md)

### [04. Tool Integration](04_tool_integration_overview.md)

External tool integration and execution control

- [Tool Definition](04_tool_integration_tool_definition.md)
- [Command API (Control API)](04_tool_integration_command_api.md)
- [Tool Node](04_tool_integration_tool_node.md)

### [05. Advanced Features](05_advanced_features_overview.md)

Advanced functionality and implementation patterns

- [Human-in-the-Loop (Approval Flow)](05_advanced_features_human_in_the_loop.md)
- [Streaming](05_advanced_features_streaming.md)
- [Map-Reduce Pattern](05_advanced_features_map_reduce.md)

### [06. LLM Model IDs](06_llm_model_ids.md)

Model ID reference for major LLM providers. Always refer to this document when selecting model IDs. Do not use models not listed in this document.

- Google Gemini model list
- Anthropic Claude model list
- OpenAI GPT model list
- Usage examples and best practices with LangGraph

### Implementation Examples

Practical agent implementation examples

- [Basic Chatbot](example_basic_chatbot.md)
- [RAG Agent](example_rag_agent.md)

## 📖 How to Use

Each section can be read independently, but reading them in order is recommended:

1. First understand LangGraph fundamentals in "Core Concepts"
2. Learn design patterns in "Graph Architecture"
3. Grasp implementation details in "Memory Management" and "Tool Integration"
4. Master advanced features in "Advanced Features"
5. Check practical usage in "Implementation Examples"

Each file is kept short and concise, allowing you to reference only the sections you need.

## 🤖 Efficient Implementation: Utilizing Subagents

To accelerate LangGraph application development, utilize the dedicated subagent `langgraph-master-plugin:langgraph-engineer`.

### Subagent Characteristics

**langgraph-master-plugin:langgraph-engineer** is an agent specialized in implementing functional modules:

- **Functional Unit Scope**: Implements complete functionality with multiple nodes, edges, and state definitions as a set
- **Parallel Execution Optimization**: Designed for multiple agents to develop different functional modules simultaneously
- **Skill-Driven**: Always references the langgraph-master skill before implementation
- **Complete Implementation**: Generates fully functional modules (no TODOs or placeholders)
- **Appropriate Size**: Functional units of about 2-5 nodes (subgraphs, workflow patterns, tool integrations, etc.)

### When to Use

Use langgraph-master-plugin:langgraph-engineer in the following cases:

1. **When functional module implementation is needed**

   - Decompose the application into functional units
   - Efficiently develop each function through parallel execution

2. **Subgraph and pattern implementation**

   - RAG search functionality (retrieve → rerank → generate)
   - Human-in-the-Loop approval flow (propose → wait_approval → execute)
   - Intent analysis functionality (analyze → classify → route)

3. **Tool integration and memory setup**

   - Complete tool integration module (definition → execution → processing → error handling)
   - Memory management module (checkpoint setup → persistence → restoration)

### Practical Example

**Task**: Build a chatbot with intent analysis and RAG search

**Parallel Execution Pattern**:

```
Planner → Decompose into functional units
  ├─ langgraph-master-plugin:langgraph-engineer 1: Intent analysis module (parallel)
  │    └─ analyze + classify + route nodes + conditional edges
  └─ langgraph-master-plugin:langgraph-engineer 2: RAG search module (parallel)
       └─ retrieve + rerank + generate nodes + state management
Orchestrator → Integrate modules to assemble graph
```
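
One way the orchestrator step might assemble independently built modules is to compile each one as a subgraph and attach it as a node in a parent graph that shares the state schema. The sketch below is illustrative: the module internals are stubs and the names are assumptions, not output of the subagents.

```python
from typing import TypedDict

from langgraph.graph import StateGraph, START, END


class AppState(TypedDict):
    query: str
    intent: str
    answer: str


def build_intent_module() -> StateGraph:
    # Stub standing in for the module delivered by engineer 1 (analyze → classify → route)
    sub = StateGraph(AppState)
    sub.add_node("classify", lambda state: {"intent": "rag" if "?" in state["query"] else "chitchat"})
    sub.add_edge(START, "classify")
    sub.add_edge("classify", END)
    return sub


def build_rag_module() -> StateGraph:
    # Stub standing in for the module delivered by engineer 2 (retrieve → rerank → generate)
    sub = StateGraph(AppState)
    sub.add_node("generate", lambda state: {"answer": f"Answer to: {state['query']}"})
    sub.add_edge(START, "generate")
    sub.add_edge("generate", END)
    return sub


# Orchestrator: wire the compiled modules into the parent graph as subgraph nodes
builder = StateGraph(AppState)
builder.add_node("intent", build_intent_module().compile())
builder.add_node("rag", build_rag_module().compile())
builder.add_edge(START, "intent")
builder.add_edge("intent", "rag")
builder.add_edge("rag", END)
graph = builder.compile()
```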

### Usage Method

1. **Decompose into functional modules**

   - Decompose large LangGraph applications into functional units
   - Verify that each module can be implemented and tested independently
   - Verify that module size is appropriate (about 2-5 nodes)

2. **Implement common parts first** (see the sketch after this list)

   - State used across the entire graph
   - Common tool definitions and common nodes used throughout

3. **Parallel Execution**

   Assign one functional module to each langgraph-master-plugin:langgraph-engineer agent and run them in parallel

   - Implement independent functional modules simultaneously

4. **Integration**

   - Incorporate completed modules into the graph
   - Verify operation through integration testing
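
As a reference for step 2 above, the "common parts" usually amount to a shared state schema with reducers that every module reads and writes. A minimal sketch (field names are illustrative assumptions):

```python
from operator import add
from typing import Annotated, TypedDict

from langgraph.graph.message import add_messages


class SharedState(TypedDict):
    # Conversation history shared by all modules; add_messages appends new messages
    messages: Annotated[list, add_messages]
    # Results that modules running in parallel can append to without overwriting each other
    results: Annotated[list, add]
    # Routing decision written by the intent-analysis module
    intent: str
```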

### Testing Method

- Perform unit testing for each functional module
- Verify end-to-end operation after integration. An API key is usually available in `.env`, so load it and run at least one successful end-to-end case
- If that case fails, code review helps, but first narrow down the failing area, add targeted logging to identify the cause, think it through, and then fix it
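
As an illustration of that workflow, an integration smoke test could look like the sketch below. It assumes your package exposes the compiled graph as `my_app.graph` and that `.env` contains the provider key (e.g. `ANTHROPIC_API_KEY`); both names are assumptions, not fixed conventions.

```python
# smoke_test.py - minimal happy-path check after integration (illustrative)
from dotenv import load_dotenv

from my_app.graph import graph  # assumption: your application exposes the compiled graph here

load_dotenv()  # loads the API key (e.g. ANTHROPIC_API_KEY) from .env

config = {"configurable": {"thread_id": "smoke-test-1"}}
result = graph.invoke(
    {"messages": [{"role": "user", "content": "What is LangGraph?"}]},
    config,
)

# One successful end-to-end case: the agent must produce a non-empty answer
assert result["messages"][-1].content, "Smoke test failed: empty response"
print(result["messages"][-1].content)
```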

### Functional Module Examples

**Appropriate Size (langgraph-master-plugin:langgraph-engineer scope)**:

- RAG search functionality: retrieve + rerank + generate (3 nodes)
- Intent analysis: analyze + classify + route (2-3 nodes)
- Approval workflow: propose + wait_approval + execute (3 nodes)
- Tool integration: tool_call + execute + process + error_handling (3-4 nodes)

**Too Small (individual implementation is sufficient)**:

- Single node only
- Single edge only
- State field definition only

**Too Large (further decomposition needed)**:

- Complete chatbot application
- Entire system containing multiple independent functions

### Notes

- **Appropriate Scope Setting**: Verify that each task is limited to one functional module
- **Functional Independence**: Minimize dependencies between modules
- **Interface Design**: Clearly document state contracts between modules
- **Integration Plan**: Plan in advance how modules will be integrated once implemented

## 🔗 Reference Links

- [LangGraph Official Documentation](https://docs.langchain.com/oss/python/langgraph/overview)
- [LangGraph GitHub](https://github.com/langchain-ai/langgraph)

117
skills/langgraph-master/example_basic_chatbot.md
Normal file
@@ -0,0 +1,117 @@

# Basic Chatbot

Implementation example of a basic chatbot using LangGraph.

## Complete Code

```python
from typing import Annotated
from langgraph.graph import StateGraph, START, END, MessagesState
from langgraph.graph.message import add_messages
from langgraph.checkpoint.memory import MemorySaver
from langchain_anthropic import ChatAnthropic

# 1. Initialize LLM
llm = ChatAnthropic(model="claude-sonnet-4-5-20250929")

# 2. Define node
def chatbot_node(state: MessagesState):
    """Chatbot node"""
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

# 3. Build graph
builder = StateGraph(MessagesState)
builder.add_node("chatbot", chatbot_node)
builder.add_edge(START, "chatbot")
builder.add_edge("chatbot", END)

# 4. Compile with checkpointer
checkpointer = MemorySaver()
graph = builder.compile(checkpointer=checkpointer)

# 5. Execute
config = {"configurable": {"thread_id": "conversation-1"}}

while True:
    user_input = input("User: ")
    if user_input.lower() in ["quit", "exit", "q"]:
        break

    # Send message
    for chunk in graph.stream(
        {"messages": [{"role": "user", "content": user_input}]},
        config,
        stream_mode="values"
    ):
        chunk["messages"][-1].pretty_print()
```

## Explanation

### 1. MessagesState

```python
from langgraph.graph import MessagesState

# MessagesState is equivalent to:
class MessagesState(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]
```

- `messages`: List of messages
- `add_messages`: Reducer that adds new messages

### 2. Checkpointer

```python
from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()
graph = builder.compile(checkpointer=checkpointer)
```

- Saves conversation state
- Continues conversation with same `thread_id`
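
For example, two invocations that share a `thread_id` behave as one conversation, because the checkpointer restores the earlier messages before the second turn runs. A small sketch using the graph compiled above:

```python
config = {"configurable": {"thread_id": "conversation-1"}}

# First turn
graph.invoke({"messages": [{"role": "user", "content": "My name is Alice."}]}, config)

# Second turn on the same thread: earlier messages are restored from the checkpoint,
# so the model can answer "Alice" here
result = graph.invoke({"messages": [{"role": "user", "content": "What is my name?"}]}, config)
print(result["messages"][-1].content)
```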

### 3. Streaming

```python
for chunk in graph.stream(input, config, stream_mode="values"):
    chunk["messages"][-1].pretty_print()
```

- `stream_mode="values"`: Complete state after each step
- `pretty_print()`: Displays messages in a readable format
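
If you want token-level output instead of full state snapshots, LangGraph also offers `stream_mode="messages"`, which yields LLM message chunks as they are produced. A hedged sketch (the exact tuple shape may differ between versions):

```python
# Stream LLM tokens as they are generated, rather than one state snapshot per step
for message_chunk, metadata in graph.stream(
    {"messages": [{"role": "user", "content": "Hello!"}]},
    config,
    stream_mode="messages",
):
    if message_chunk.content:
        print(message_chunk.content, end="", flush=True)
```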

## Extension Examples

### Adding System Message

```python
def chatbot_with_system(state: MessagesState):
    """With system message"""
    system_msg = {
        "role": "system",
        "content": "You are a helpful assistant."
    }

    response = llm.invoke([system_msg] + state["messages"])
    return {"messages": [response]}
```

### Limiting Message History

```python
def chatbot_with_limit(state: MessagesState):
    """Use only the latest 10 messages"""
    recent_messages = state["messages"][-10:]
    response = llm.invoke(recent_messages)
    return {"messages": [response]}
```

## Related Pages

- [01_core_concepts_overview.md](01_core_concepts_overview.md) - Understanding fundamental concepts
- [03_memory_management_overview.md](03_memory_management_overview.md) - Checkpointer details
- [example_rag_agent.md](example_rag_agent.md) - More advanced example

169
skills/langgraph-master/example_rag_agent.md
Normal file
@@ -0,0 +1,169 @@

# RAG Agent

Implementation example of a RAG (Retrieval-Augmented Generation) agent with search functionality.

## Complete Code

```python
from typing import Annotated, Literal
from langgraph.graph import StateGraph, START, END, MessagesState
from langgraph.prebuilt import ToolNode
from langgraph.checkpoint.memory import MemorySaver
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool

# 1. Define tool
@tool
def retrieve_documents(query: str) -> str:
    """Retrieve relevant documents.

    Args:
        query: Search query
    """
    # In practice, search with vector store, etc.
    # Using dummy data here
    docs = [
        "LangGraph is an agent framework.",
        "StateGraph manages state.",
        "You can extend agents with tools."
    ]

    return "\n".join(docs)

tools = [retrieve_documents]

# 2. Bind tools to LLM
llm = ChatAnthropic(model="claude-sonnet-4-5-20250929")
llm_with_tools = llm.bind_tools(tools)

# 3. Define nodes
def agent_node(state: MessagesState):
    """Agent node"""
    response = llm_with_tools.invoke(state["messages"])
    return {"messages": [response]}

def should_continue(state: MessagesState) -> Literal["tools", "end"]:
    """Determine tool usage"""
    last_message = state["messages"][-1]

    if last_message.tool_calls:
        return "tools"
    return "end"

# 4. Build graph
builder = StateGraph(MessagesState)

builder.add_node("agent", agent_node)
builder.add_node("tools", ToolNode(tools))

builder.add_edge(START, "agent")
builder.add_conditional_edges(
    "agent",
    should_continue,
    {
        "tools": "tools",
        "end": END
    }
)
builder.add_edge("tools", "agent")

# 5. Compile
checkpointer = MemorySaver()
graph = builder.compile(checkpointer=checkpointer)

# 6. Execute
config = {"configurable": {"thread_id": "rag-session-1"}}

query = "What is LangGraph?"

for chunk in graph.stream(
    {"messages": [{"role": "user", "content": query}]},
    config,
    stream_mode="values"
):
    chunk["messages"][-1].pretty_print()
```

## Execution Flow

```
User Query: "What is LangGraph?"
    ↓
[Agent Node]
    ↓
LLM: "I'll search for information" + ToolCall(retrieve_documents)
    ↓
[Tool Node] ← Execute search
    ↓
ToolMessage: "LangGraph is an agent framework..."
    ↓
[Agent Node] ← Use search results
    ↓
LLM: "LangGraph is a framework for building agents..."
    ↓
END
```

## Extension Examples

### Multiple Search Tools

```python
@tool
def web_search(query: str) -> str:
    """Search the web"""
    return search_web(query)

@tool
def database_search(query: str) -> str:
    """Search database"""
    return search_database(query)

tools = [retrieve_documents, web_search, database_search]
```

### Vector Search Implementation

```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

# Initialize vector store
embeddings = OpenAIEmbeddings()
vectorstore = FAISS.from_texts(
    ["LangGraph is an agent framework.", ...],
    embeddings
)

@tool
def semantic_search(query: str) -> str:
    """Perform semantic search"""
    docs = vectorstore.similarity_search(query, k=3)
    return "\n".join([doc.page_content for doc in docs])
```

### Adding Human-in-the-Loop

```python
from langgraph.types import interrupt

@tool
def sensitive_search(query: str) -> str:
    """Search sensitive information (requires approval)"""
    approved = interrupt({
        "action": "sensitive_search",
        "query": query,
        "message": "Approve this sensitive search?"
    })

    if approved:
        return perform_sensitive_search(query)
    else:
        return "Search cancelled by user"
```
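
When a tool calls `interrupt`, the run pauses at that point and is typically resumed by invoking the graph again with a `Command` carrying the human decision. A minimal sketch, assuming the graph above (compiled with a checkpointer) and the same `thread_id`:

```python
from langgraph.types import Command

config = {"configurable": {"thread_id": "rag-session-1"}}

# First run: pauses when sensitive_search calls interrupt(...)
graph.invoke(
    {"messages": [{"role": "user", "content": "Look up the confidential report"}]},
    config,
)

# Resume the paused run; the value passed to resume becomes the return value of interrupt()
graph.invoke(Command(resume=True), config)
```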

## Related Pages

- [02_graph_architecture_agent.md](02_graph_architecture_agent.md) - Agent pattern
- [04_tool_integration_overview.md](04_tool_integration_overview.md) - Tool details
- [example_basic_chatbot.md](example_basic_chatbot.md) - Basic chatbot