Building Intelligent Agent Teams with Google ADK
Part 1: Introduction & Project Setup
What You'll Build
A production-ready weather bot system with:
- ✅ Multiple specialized agents (weather, greeting, farewell)
- ✅ Intelligent delegation between agents
- ✅ Persistent memory across conversations
- ✅ Safety guardrails for inputs and outputs
- ✅ Support for multiple LLMs (Gemini, GPT, Claude)
- ✅ Scalable architecture for easy extension
Why Google ADK?
Traditional LLM applications struggle with:
- No Memory: Each request starts fresh
- No Actions: Can only generate text, not perform tasks
- No Safety: No built-in validation or guardrails
- Complex State: Managing context manually
- Monolithic: One model doing everything
ADK solves these with:
- Session State: Built-in memory management
- Tools: Functions agents can call
- Callbacks: Safety hooks for validation
- Multi-Agent: Specialized agents working together
- Flexibility: Use any LLM provider
Prerequisites
Required:
- Python 3.9+
- Basic Python knowledge (functions, classes, async/await)
- API key for at least one LLM provider
Optional:
- Docker (for containerization)
- Git (for version control)
Project Structure
weather-bot-adk/
├── src/
│   ├── config/
│   │   ├── __init__.py
│   │   ├── settings.py            # API keys, environment config
│   │   └── models.py              # Model identifiers
│   ├── tools/
│   │   ├── __init__.py
│   │   ├── weather_tools.py       # Weather functionality
│   │   └── conversation_tools.py  # Greeting/farewell tools
│   ├── agents/
│   │   ├── __init__.py
│   │   ├── base_agent.py          # Agent factory
│   │   ├── weather_agent.py       # Main orchestrator
│   │   ├── greeting_agent.py      # Greeting specialist
│   │   └── farewell_agent.py      # Farewell specialist
│   ├── callbacks/
│   │   ├── __init__.py
│   │   ├── model_callbacks.py     # Input validation
│   │   └── tool_callbacks.py      # Tool execution guards
│   ├── utils/
│   │   ├── __init__.py
│   │   ├── session_manager.py     # Session handling
│   │   └── helpers.py             # Utilities
│   └── main.py                    # Application entry
├── tests/
│   ├── __init__.py
│   ├── test_tools.py
│   ├── test_agents.py
│   └── test_callbacks.py
├── .env                           # Your API keys (DO NOT COMMIT)
├── .env.example                   # Template
├── .gitignore
├── requirements.txt
└── README.md
Installation
Step 1: Create Project
mkdir weather-bot-adk && cd weather-bot-adk
# Create directory structure
mkdir -p src/{config,tools,agents,callbacks,utils}
mkdir tests
# Create __init__.py files
touch src/__init__.py
touch src/{config,tools,agents,callbacks,utils}/__init__.py
touch tests/__init__.py
Step 2: Virtual Environment
# Create virtual environment
python -m venv venv
# Activate
source venv/bin/activate # macOS/Linux
# OR
venv\Scripts\activate # Windows
Step 3: Install Dependencies
Create requirements.txt:
# Core
google-adk>=0.1.0
litellm>=1.0.0
python-dotenv>=1.0.0
# Testing
pytest>=7.0.0
pytest-asyncio>=0.21.0
pytest-cov>=4.1.0
# Optional: Production
fastapi>=0.104.0
uvicorn>=0.24.0
Install:
pip install -r requirements.txt
Step 4: Configuration Files
Create .env.example:
# Google AI (Get from: https://aistudio.google.com/app/apikey)
GOOGLE_API_KEY=your_google_api_key_here
# OpenAI (Optional - Get from: https://platform.openai.com/api-keys)
OPENAI_API_KEY=your_openai_api_key_here
# Anthropic (Optional - Get from: https://console.anthropic.com/settings/keys)
ANTHROPIC_API_KEY=your_anthropic_api_key_here
# Settings
APP_NAME=weather_bot_app
GOOGLE_GENAI_USE_VERTEXAI=False
Copy and configure:
cp .env.example .env
# Edit .env with your actual API keys
Create .gitignore:
# Python
__pycache__/
*.py[cod]
venv/
*.egg-info/
# Environment
.env
# IDE
.vscode/
.idea/
*.swp
# Testing
.pytest_cache/
.coverage
htmlcov/
# OS
.DS_Store
Step 5: Settings Module
Create src/config/settings.py:
"""Configuration management for the application."""
import os
from dotenv import load_dotenv
load_dotenv()
class Settings:
"""Centralized application settings."""
# API Keys
GOOGLE_API_KEY: str = os.getenv("GOOGLE_API_KEY", "")
OPENAI_API_KEY: str = os.getenv("OPENAI_API_KEY", "")
ANTHROPIC_API_KEY: str = os.getenv("ANTHROPIC_API_KEY", "")
# Application
APP_NAME: str = os.getenv("APP_NAME", "weather_bot_app")
GOOGLE_GENAI_USE_VERTEXAI: str = os.getenv("GOOGLE_GENAI_USE_VERTEXAI", "False")
# Defaults
DEFAULT_USER_ID: str = "demo_user"
DEFAULT_SESSION_ID: str = "demo_session"
@classmethod
def validate(cls) -> bool:
"""Validate at least one API key is configured."""
has_google = bool(cls.GOOGLE_API_KEY and cls.GOOGLE_API_KEY != "your_google_api_key_here")
has_openai = bool(cls.OPENAI_API_KEY and cls.OPENAI_API_KEY != "your_openai_api_key_here")
has_anthropic = bool(cls.ANTHROPIC_API_KEY and cls.ANTHROPIC_API_KEY != "your_anthropic_api_key_here")
if not (has_google or has_openai or has_anthropic):
print("β οΈ No valid API keys found. Please configure .env file.")
return False
return True
@classmethod
def setup_environment(cls):
"""Setup environment variables for ADK."""
os.environ["GOOGLE_API_KEY"] = cls.GOOGLE_API_KEY
os.environ["OPENAI_API_KEY"] = cls.OPENAI_API_KEY
os.environ["ANTHROPIC_API_KEY"] = cls.ANTHROPIC_API_KEY
os.environ["GOOGLE_GENAI_USE_VERTEXAI"] = cls.GOOGLE_GENAI_USE_VERTEXAI
settings = Settings()
Create src/config/models.py:
"""Model configuration constants."""
class ModelConfig:
"""LLM model identifiers."""
# Google Gemini
GEMINI_2_0_FLASH = "gemini-2.0-flash"
GEMINI_1_5_PRO = "gemini-1.5-pro"
GEMINI_1_5_FLASH = "gemini-1.5-flash"
# OpenAI (via LiteLLM - needs "openai/" prefix)
GPT_4O = "openai/gpt-4o"
GPT_4O_MINI = "openai/gpt-4o-mini"
GPT_4_TURBO = "openai/gpt-4-turbo"
# Anthropic (via LiteLLM - needs "anthropic/" prefix)
CLAUDE_SONNET_4 = "anthropic/claude-sonnet-4-20250514"
CLAUDE_OPUS_4 = "anthropic/claude-opus-4-20250514"
CLAUDE_3_7_SONNET = "anthropic/claude-3-7-sonnet-20250219"
# Defaults
DEFAULT_ORCHESTRATOR = GEMINI_2_0_FLASH # For main coordination
DEFAULT_SPECIALIST = GEMINI_2_0_FLASH # For simple tasks
models = ModelConfig()
Part 2: Understanding Core Concepts
The ADK Architecture
┌─────────────────────────────────────────────────────┐
│                  Your Application                   │
├─────────────────────────────────────────────────────┤
│                                                     │
│   ┌──────────────┐        ┌────────────────┐        │
│   │    Runner    ├────────┤ SessionService │        │
│   │  (Executor)  │        │    (Memory)    │        │
│   └──────┬───────┘        └────────────────┘        │
│          │                                          │
│          ▼                                          │
│   ┌─────────────────────────────────────────┐       │
│   │       Root Agent (Orchestrator)         │       │
│   │  - Instructions                         │       │
│   │  - Tools: [get_weather]                 │       │
│   │  - Callbacks: [input_guard, tool_guard] │       │
│   └─────┬───────────────────────┬───────────┘       │
│         │                       │                   │
│         ▼                       ▼                   │
│   ┌───────────┐          ┌───────────┐              │
│   │ Greeting  │          │ Farewell  │              │
│   │   Agent   │          │   Agent   │              │
│   └───────────┘          └───────────┘              │
│                                                     │
└─────────────────────────────────────────────────────┘
Key Components
1. Agent
The AI "brain" with specific capabilities.
Components:
- name: Unique identifier
- model: Which LLM to use (Gemini, GPT, Claude)
- description: What it does (for delegation)
- instruction: How it should behave
- tools: Functions it can call
- sub_agents: Specialists it can delegate to
2. Tool
A Python function that gives agents capabilities.
Requirements:
- Clear docstring (agents read this!)
- Type hints for parameters
- Consistent return format
- Error handling
3. Runner
Orchestrates agent execution.
Responsibilities:
- Manages request/response cycle
- Executes tools
- Updates session state
- Yields execution events
4. SessionService
Manages conversation memory.
Features:
- Stores conversation history
- Maintains session state
- Supports multiple users/sessions
- Enables context across turns
5. Callbacks
Safety hooks for validation.
Types:
- before_model_callback: Validate input before LLM
- before_tool_callback: Validate before tool execution
- after_model_callback: Process LLM output
- after_tool_callback: Process tool results
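Before touching the real API, the shape of these components can be sketched as plain data. The `AgentSpec` class below is an illustrative stand-in for an ADK `Agent`'s configuration, not the real class:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class AgentSpec:
    """Illustrative stand-in for an ADK Agent's configuration (not the real class)."""
    name: str                    # unique identifier
    model: str                   # which LLM to use
    description: str             # read by parent agents when delegating
    instruction: str             # behavior guidance
    tools: List[Callable] = field(default_factory=list)
    sub_agents: List["AgentSpec"] = field(default_factory=list)
    before_model_callback: Optional[Callable] = None  # safety hook

def get_weather(city: str) -> dict:
    """Mock tool."""
    return {"status": "success", "report": f"Sunny in {city}"}

root = AgentSpec(
    name="weather_orchestrator",
    model="gemini-2.0-flash",
    description="Provides weather, delegates greetings",
    instruction="Answer weather questions.",
    tools=[get_weather],
)
print(root.name, len(root.tools))
```

The real `Agent` constructor in Part 4 takes essentially these same fields.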
Request Flow
1. User Message
   "What's the weather in London?"
   ↓
2. Runner (formats message)
   Content(role='user', parts=[Part(text="...")])
   ↓
3. before_model_callback (optional)
   ✓ Input validation passed
   ↓
4. LLM Processing
   Reads: instruction, tools, history
   Decides: Use get_weather tool
   ↓
5. before_tool_callback (optional)
   ✓ Tool arguments validated
   ↓
6. Tool Execution
   get_weather("London") → {"status": "success", ...}
   ↓
7. LLM Formulates Response
   "It's cloudy in London with 15°C"
   ↓
8. after_model_callback (optional)
   ✓ Response sanitized
   ↓
9. Final Response
   User sees the answer
   ↓
10. Session Updated
    Conversation stored in SessionService
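The ordering above can be simulated end-to-end in plain Python with stubs, to make concrete how a blocking callback short-circuits the LLM call. Every function here is a stand-in for illustration, not ADK code:

```python
def before_model(message: str):
    # Step 3: block forbidden input before it ever reaches the LLM
    if "BLOCK" in message.upper():
        return "Request blocked."
    return None

def get_weather(city: str) -> dict:
    # Step 6: tool execution
    return {"status": "success", "report": f"Cloudy, 15°C in {city}"}

def fake_llm(message: str) -> str:
    # Steps 4 and 7: decide to call the tool, then phrase the result
    result = get_weather("London")
    return f"It's {result['report']}."

def handle(message: str) -> str:
    blocked = before_model(message)   # step 3
    if blocked is not None:
        return blocked                # the LLM and tools are skipped entirely
    return fake_llm(message)          # steps 4-7

print(handle("What's the weather in London?"))
print(handle("BLOCK this"))
```

Note that the guardrail's return value *becomes* the response when it blocks; returning `None` lets the normal flow continue. This is exactly the contract the real callbacks in Part 8 follow.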
Part 3: Building Your First Tool
Tool Best Practices
- Clear Docstring: Agents read this to understand the tool
- Type Hints: Help with validation
- Consistent Returns: Same structure for success/error
- Error Handling: Graceful failures
- Logging: Track tool usage
Basic Weather Tool
Create src/tools/weather_tools.py:
"""
Weather tools for the agent system.
Tools are Python functions that agents can call to perform actions.
The agent's LLM reads the docstring to understand when and how to use each tool.
"""
from typing import Dict
def get_weather(city: str) -> Dict[str, str]:
"""
Retrieves current weather for a specified city.
This is a mock implementation. In production, replace with real API calls.
Args:
city: City name (e.g., "London", "New York", "Tokyo")
Case-insensitive.
Returns:
Dictionary with:
- status: "success" or "error"
- report: Weather description (if success)
- error_message: Error details (if error)
Examples:
>>> get_weather("London")
{"status": "success", "report": "Cloudy, 15Β°C"}
>>> get_weather("UnknownCity")
{"status": "error", "error_message": "City not found"}
"""
print(f"π§ Tool: get_weather(city='{city}')")
# Normalize city name
city_normalized = city.lower().replace(" ", "")
# Mock database (replace with API call in production)
weather_db = {
"newyork": {"status": "success", "report": "Sunny, 25Β°C"},
"london": {"status": "success", "report": "Cloudy, 15Β°C"},
"tokyo": {"status": "success", "report": "Light rain, 18Β°C"},
"paris": {"status": "success", "report": "Partly cloudy, 18Β°C"},
"sydney": {"status": "success", "report": "Sunny, 22Β°C"}
}
if city_normalized in weather_db:
return weather_db[city_normalized]
return {
"status": "error",
"error_message": f"No weather data for '{city}'. Try: New York, London, Tokyo, Paris, Sydney."
}
# Test the tool directly
if __name__ == "__main__":
print("\n=== Testing Weather Tool ===\n")
print("Test 1:", get_weather("London"))
print("Test 2:", get_weather("Atlantis"))
print("Test 3:", get_weather("nEw YoRk"))
Stateful Weather Tool
Create enhanced version that reads user preferences from session state:
from google.adk.tools.tool_context import ToolContext

def get_weather_stateful(city: str, tool_context: ToolContext) -> Dict[str, str]:
    """
    Retrieves weather with the temperature in the user's preferred unit.

    Reads 'user_preference_temperature_unit' from session state.
    Writes 'last_city_checked' to session state.

    Args:
        city: City name
        tool_context: Automatically injected by ADK (provides state access)

    Returns:
        Dictionary with weather information
    """
    print(f"🔧 Tool: get_weather_stateful(city='{city}')")

    # Read the user preference from state (default to Celsius)
    preferred_unit = tool_context.state.get("user_preference_temperature_unit", "Celsius")
    print(f"   📍 User prefers: {preferred_unit}")

    city_normalized = city.lower().replace(" ", "")

    # Internal data (always in Celsius)
    weather_db = {
        "newyork": {"temp_c": 25, "condition": "sunny"},
        "london": {"temp_c": 15, "condition": "cloudy"},
        "tokyo": {"temp_c": 18, "condition": "light rain"},
        "paris": {"temp_c": 18, "condition": "partly cloudy"}
    }

    if city_normalized in weather_db:
        data = weather_db[city_normalized]
        temp_c = data["temp_c"]
        condition = data["condition"]

        # Convert the temperature based on the preference
        if preferred_unit == "Fahrenheit":
            temp = (temp_c * 9 / 5) + 32
            unit = "°F"
        else:
            temp = temp_c
            unit = "°C"

        report = f"{condition.capitalize()}, {temp:.0f}{unit} in {city.capitalize()}"

        # Write to state
        tool_context.state["last_city_checked"] = city
        print(f"   💾 Saved to state: last_city_checked = {city}")

        return {"status": "success", "report": report}

    return {
        "status": "error",
        "error_message": f"No weather data for '{city}'."
    }
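The Celsius-to-Fahrenheit step inside the tool is ordinary arithmetic, so it's easy to verify on its own before wiring it into an agent:

```python
def c_to_f(temp_c: float) -> float:
    """Same conversion the stateful tool applies when the user prefers Fahrenheit."""
    return (temp_c * 9 / 5) + 32

# Spot-check against the mock database values
print(c_to_f(25))  # New York's mock temperature, in Fahrenheit
print(c_to_f(15))  # London's mock temperature, in Fahrenheit
```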
Conversation Tools
Create src/tools/conversation_tools.py:
"""Simple conversation tools for greeting and farewell agents."""
from typing import Optional
def say_hello(name: Optional[str] = None) -> str:
"""
Provides a friendly greeting.
Args:
name: Optional name to personalize greeting
Returns:
Greeting message
"""
print(f"π§ Tool: say_hello(name={name})")
return f"Hello, {name}!" if name else "Hello there!"
def say_goodbye() -> str:
"""
Provides a farewell message.
Returns:
Goodbye message
"""
print(f"π§ Tool: say_goodbye()")
return "Goodbye! Have a great day."
Part 4: Creating Your First Agent
Agent Factory Pattern
Create src/agents/base_agent.py:
"""
Base agent factory for creating agents with consistent configuration.
"""
from google.adk.agents import Agent
from google.adk.models.lite_llm import LiteLlm
from typing import List, Optional, Callable
def create_agent(
name: str,
model: str,
description: str,
instruction: str,
tools: List[Callable],
sub_agents: Optional[List[Agent]] = None,
output_key: Optional[str] = None,
before_model_callback: Optional[Callable] = None,
before_tool_callback: Optional[Callable] = None
) -> Agent:
"""
Factory function for creating agents.
Args:
name: Unique identifier
model: Model identifier (e.g., "gemini-2.0-flash")
description: Brief purpose summary
instruction: Detailed behavior guidance
tools: List of tool functions
sub_agents: Optional specialist agents
output_key: Optional state key for saving responses
before_model_callback: Optional input validation
before_tool_callback: Optional tool validation
Returns:
Configured Agent instance
"""
# Use LiteLLM wrapper for non-Gemini models
if model.startswith("openai/") or model.startswith("anthropic/"):
model_config = LiteLlm(model=model)
else:
model_config = model
return Agent(
name=name,
model=model_config,
description=description,
instruction=instruction,
tools=tools,
sub_agents=sub_agents or [],
output_key=output_key,
before_model_callback=before_model_callback,
before_tool_callback=before_tool_callback
)
Weather Agent
Create src/agents/weather_agent.py:
"""Main weather orchestrator agent."""
from google.adk.agents import Agent
from typing import Optional, Callable
from ..tools.weather_tools import get_weather_stateful
from ..config.models import models
from .base_agent import create_agent
def create_weather_agent(
use_callbacks: bool = False,
before_model_callback: Optional[Callable] = None,
before_tool_callback: Optional[Callable] = None
) -> Agent:
"""
Creates the main weather orchestrator agent.
This agent:
- Handles weather queries using stateful tool
- Delegates greetings/farewells to specialists
- Saves responses to state
Args:
use_callbacks: Whether to use safety callbacks
before_model_callback: Optional input guard
before_tool_callback: Optional tool guard
Returns:
Configured weather agent with sub-agents
"""
# Import here to avoid circular dependency
from .greeting_agent import create_greeting_agent
from .farewell_agent import create_farewell_agent
# Create specialist sub-agents
greeting_agent = create_greeting_agent()
farewell_agent = create_farewell_agent()
return create_agent(
name="weather_orchestrator",
model=models.DEFAULT_ORCHESTRATOR,
description="Main agent: provides weather, delegates greetings/farewells",
instruction=(
"You are the Weather Agent coordinating a team.\n\n"
"CAPABILITIES:\n"
"1. Weather queries - use 'get_weather_stateful' tool\n"
"2. Greetings - delegate to 'greeting_agent'\n"
"3. Farewells - delegate to 'farewell_agent'\n\n"
"DELEGATION RULES:\n"
"- Simple greetings (hi, hello) β greeting_agent\n"
"- Farewells (bye, goodbye) β farewell_agent\n"
"- Weather requests β handle yourself\n\n"
"BEHAVIOR:\n"
"- Always friendly and clear\n"
"- If tool returns error, explain politely\n"
"- For unrelated requests, politely decline\n"
),
tools=[get_weather_stateful],
sub_agents=[greeting_agent, farewell_agent],
output_key="last_weather_report",
before_model_callback=before_model_callback if use_callbacks else None,
before_tool_callback=before_tool_callback if use_callbacks else None
)
Specialist Agents
Create src/agents/greeting_agent.py:
"""Greeting specialist agent."""
from google.adk.agents import Agent
from ..tools.conversation_tools import say_hello
from ..config.models import models
from .base_agent import create_agent
def create_greeting_agent() -> Agent:
"""Creates specialized greeting agent."""
return create_agent(
name="greeting_agent",
model=models.DEFAULT_SPECIALIST,
description="Handles greetings and welcomes users",
instruction=(
"You are the Greeting Agent. Your ONLY task is to greet users "
"using the 'say_hello' tool. If a name is provided, pass it to the tool. "
"Keep it warm and welcoming. Do nothing else."
),
tools=[say_hello]
)
Create src/agents/farewell_agent.py:
"""Farewell specialist agent."""
from google.adk.agents import Agent
from ..tools.conversation_tools import say_goodbye
from ..config.models import models
from .base_agent import create_agent
def create_farewell_agent() -> Agent:
"""Creates specialized farewell agent."""
return create_agent(
name="farewell_agent",
model=models.DEFAULT_SPECIALIST,
description="Handles farewells and goodbyes",
instruction=(
"You are the Farewell Agent. Your ONLY task is to say goodbye "
"using the 'say_goodbye' tool when users are leaving. "
"Do nothing else."
),
tools=[say_goodbye]
)
Part 5: Multi-Model Support
Why Multiple Models?
Different models excel at different tasks:
| Model | Best For | Speed | Cost |
|---|---|---|---|
| Gemini 2.0 Flash | General purpose | ⚡⚡⚡ | 💰 |
| Gemini 1.5 Pro | Complex reasoning | ⚡⚡ | 💰💰 |
| GPT-4o | High quality outputs | ⚡⚡ | 💰💰💰 |
| GPT-4o Mini | Cost-effective | ⚡⚡⚡ | 💰 |
| Claude Sonnet 4 | Analysis, following instructions | ⚡⚡ | 💰💰 |
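One way to encode a table like this in code is a small helper that maps task categories to model identifiers. The mapping below is illustrative only — the categories and choices are assumptions, not anything ADK prescribes:

```python
# Hypothetical task-to-model mapping, mirroring the identifiers in ModelConfig.
MODEL_BY_TASK = {
    "orchestration": "gemini-1.5-pro",    # routing decisions benefit from capability
    "simple": "gemini-2.0-flash",         # greetings/farewells: fast and cheap
    "analysis": "anthropic/claude-sonnet-4-20250514",
}

def pick_model(task: str) -> str:
    """Return a model id for a task category, falling back to the cheap default."""
    return MODEL_BY_TASK.get(task, "gemini-2.0-flash")

print(pick_model("analysis"))
print(pick_model("unknown"))
```

Centralizing the choice in one function means swapping providers later is a one-line change.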
Using Different Models
Gemini (Direct)
from google.adk.agents import Agent

agent = Agent(
    name="weather_agent",
    model="gemini-2.0-flash",  # Direct string for Gemini
    # ... rest of config
)
OpenAI via LiteLLM
from google.adk.models.lite_llm import LiteLlm

agent = Agent(
    name="weather_agent",
    model=LiteLlm(model="openai/gpt-4o"),  # LiteLLM wrapper
    # ... rest of config
)
Anthropic via LiteLLM
agent = Agent(
    name="weather_agent",
    model=LiteLlm(model="anthropic/claude-sonnet-4-20250514"),
    # ... rest of config
)
Model Selection Strategy
from ..config.models import models

# For orchestration (smart routing decisions)
orchestrator = Agent(
    model=models.GEMINI_1_5_PRO,  # More capable
    # ...
)

# For simple tasks (greetings, farewells)
specialist = Agent(
    model=models.GEMINI_2_0_FLASH,  # Fast and cheap
    # ...
)

# For complex analysis
analyst = Agent(
    model=models.CLAUDE_SONNET_4,  # Best at reasoning
    # ...
)
Part 6: Building Agent Teams
Agent Delegation
When you add sub_agents to an agent, ADK enables automatic delegation:
root_agent = Agent(
    name="orchestrator",
    sub_agents=[specialist_1, specialist_2, specialist_3],
    # ...
)
How it works:
- Root agent receives user message
- LLM considers:
- User's intent
- Sub-agent descriptions
- Root agent's own capabilities
- LLM decides whether to:
- Handle itself (use its own tools)
- Delegate to a sub-agent
- If delegating, sub-agent processes and responds
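In ADK the routing decision is made by the LLM, based on the sub-agent descriptions. A crude keyword version makes the idea concrete — this is purely an illustration of the decision, not how ADK implements it:

```python
def route(message: str) -> str:
    """Toy router mimicking the delegation decision the orchestrator's LLM makes."""
    text = message.lower()
    if any(word in text for word in ("hi", "hello", "hey")):
        return "greeting_agent"        # delegate to the greeting specialist
    if any(word in text for word in ("bye", "goodbye", "farewell")):
        return "farewell_agent"        # delegate to the farewell specialist
    return "weather_orchestrator"      # handle it yourself

print(route("Hello!"))
print(route("Weather in London?"))
```

The real system is far more flexible — the LLM matches intent, not keywords — which is why clear `description` strings on sub-agents matter so much.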
Delegation Example
User: "Hello!"
β
Root Agent (Weather Orchestrator)
Thinks: "This is a greeting"
Checks sub-agents:
- greeting_agent: "Handles greetings" β
- farewell_agent: "Handles farewells" β
β
Delegates to: greeting_agent
β
greeting_agent processes
Uses: say_hello() tool
β
Returns: "Hello there!"
Creating the Agent Team
from .greeting_agent import create_greeting_agent
from .farewell_agent import create_farewell_agent

def create_weather_agent() -> Agent:
    # Create specialists
    greeting = create_greeting_agent()
    farewell = create_farewell_agent()

    # Create the orchestrator with sub-agents
    return create_agent(
        name="weather_orchestrator",
        instruction=(
            "You coordinate a team:\n"
            "- Greetings → greeting_agent\n"
            "- Farewells → farewell_agent\n"
            "- Weather → handle yourself\n"
        ),
        tools=[get_weather_stateful],
        sub_agents=[greeting, farewell],  # Team members
        # ...
    )
Part 7: Session State & Memory
Understanding State
Session state allows agents to remember information across turns:
# Initialize state
initial_state = {
    "user_preference_temperature_unit": "Celsius",
    "conversation_count": 0
}

session = await session_service.create_session(
    app_name="weather_bot",
    user_id="user123",
    session_id="session456",
    state=initial_state
)
Reading State in Tools
def get_weather_stateful(city: str, tool_context: ToolContext) -> dict:
    # Read from state
    preferred_unit = tool_context.state.get(
        "user_preference_temperature_unit",
        "Celsius"  # Default if not set
    )

    # Use the preference
    if preferred_unit == "Fahrenheit":
        temp = convert_to_fahrenheit(temp_c)

    # Write to state
    tool_context.state["last_city_checked"] = city

    return result
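Because `tool_context` only needs a dict-like `state` attribute here, a stateful tool can be unit-tested without ADK by passing a tiny stub. The `FakeToolContext` below is a test convenience I'm introducing, not an ADK class, and the tool body is a simplified copy reduced to its state logic:

```python
class FakeToolContext:
    """Minimal stand-in for ADK's ToolContext, just enough for unit tests."""
    def __init__(self, state=None):
        self.state = state or {}

def get_weather_stateful(city: str, tool_context) -> dict:
    # Simplified version of the tutorial's tool: read a preference, write a key
    unit = tool_context.state.get("user_preference_temperature_unit", "Celsius")
    tool_context.state["last_city_checked"] = city
    temp = 59 if unit == "Fahrenheit" else 15
    return {"status": "success", "report": f"{temp} degrees in {city} ({unit})"}

ctx = FakeToolContext({"user_preference_temperature_unit": "Fahrenheit"})
print(get_weather_stateful("London", ctx))
print(ctx.state["last_city_checked"])
```

This pattern keeps tool tests fast and dependency-free; only integration tests need a real session.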
Auto-Saving Responses
Use output_key to automatically save agent responses:
agent = Agent(
    name="weather_agent",
    output_key="last_weather_report",  # Auto-save here
    # ...
)

# After the agent responds, session state will contain:
# state["last_weather_report"] = "The weather in London is..."
Session Manager
Create src/utils/session_manager.py:
"""Session management utilities."""
from google.adk.sessions import InMemorySessionService
from typing import Dict, Any, Optional
class SessionManager:
"""Manages user sessions and state."""
def __init__(self, app_name: str):
self.app_name = app_name
self.service = InMemorySessionService()
self.default_state = {
"user_preference_temperature_unit": "Celsius",
"conversation_count": 0
}
async def create_session(
self,
user_id: str,
session_id: str,
initial_state: Optional[Dict[str, Any]] = None
):
"""Create session with default or custom state."""
state = self.default_state.copy()
if initial_state:
state.update(initial_state)
return await self.service.create_session(
app_name=self.app_name,
user_id=user_id,
session_id=session_id,
state=state
)
async def get_session(self, user_id: str, session_id: str):
"""Retrieve existing session."""
return await self.service.get_session(
app_name=self.app_name,
user_id=user_id,
session_id=session_id
)
async def update_state(
self,
user_id: str,
session_id: str,
updates: Dict[str, Any]
):
"""Update specific state values."""
session = await self.get_session(user_id, session_id)
if session:
session.state.update(updates)
Part 8: Safety with Callbacks
Input Validation (before_model_callback)
Validates user input BEFORE sending to LLM:
Create src/callbacks/model_callbacks.py:
"""Input validation callbacks."""
from google.adk.agents.callback_context import CallbackContext
from google.adk.models.llm_request import LlmRequest
from google.adk.models.llm_response import LlmResponse
from google.genai import types
from typing import Optional
def block_keyword_guardrail(
callback_context: CallbackContext,
llm_request: LlmRequest
) -> Optional[LlmResponse]:
"""
Blocks requests containing specific keywords.
Returns:
LlmResponse: If blocking (skips LLM call)
None: If allowing (proceeds to LLM)
"""
print(f"π‘οΈ Input Guardrail: Checking request")
# Extract last user message
last_message = ""
if llm_request.contents:
for content in reversed(llm_request.contents):
if content.role == 'user' and content.parts:
last_message = content.parts[0].text
break
# Check for blocked keywords
BLOCKED_KEYWORDS = ["BLOCK", "FORBIDDEN", "RESTRICTED"]
for keyword in BLOCKED_KEYWORDS:
if keyword in last_message.upper():
print(f" β Blocked: Found keyword '{keyword}'")
# Log to state
callback_context.state["guardrail_triggered"] = True
# Return blocking response
return LlmResponse(
content=types.Content(
role="model",
parts=[types.Part(
text=f"Request blocked: contains forbidden keyword '{keyword}'"
)]
)
)
print(f" β
Allowed: No blocked keywords")
return None # Allow request
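The keyword check itself is plain string logic, so it can be exercised without constructing an `LlmRequest`. Factoring it out like this (a refactoring suggestion, not part of the tutorial's code) also makes the guardrail trivially unit-testable:

```python
BLOCKED_KEYWORDS = ["BLOCK", "FORBIDDEN", "RESTRICTED"]

def find_blocked_keyword(message: str):
    """Return the first blocked keyword found in the message, or None."""
    upper = message.upper()
    for keyword in BLOCKED_KEYWORDS:
        if keyword in upper:
            return keyword
    return None

print(find_blocked_keyword("Please BLOCK the weather"))  # "BLOCK"
print(find_blocked_keyword("Weather in London?"))        # None
```

The callback would then just call `find_blocked_keyword(last_message)` and build the `LlmResponse` when it returns a keyword.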
Tool Validation (before_tool_callback)
Validates tool arguments BEFORE execution:
Create src/callbacks/tool_callbacks.py:
"""Tool execution validation callbacks."""
from google.adk.tools.base_tool import BaseTool
from google.adk.tools.tool_context import ToolContext
from typing import Optional, Dict, Any
def block_restricted_locations(
tool: BaseTool,
args: Dict[str, Any],
tool_context: ToolContext
) -> Optional[Dict]:
"""
Blocks weather checks for specific locations.
Returns:
Dict: Error response (skips tool execution)
None: Allow (executes tool normally)
"""
print(f"π‘οΈ Tool Guardrail: Checking {tool.name}")
if tool.name == "get_weather_stateful":
city = args.get("city", "").lower()
BLOCKED_LOCATIONS = ["paris", "restricted_city"]
if city in BLOCKED_LOCATIONS:
print(f" β Blocked: Location '{city}' restricted")
# Log to state
tool_context.state["tool_block_triggered"] = True
# Return error (skips actual tool call)
return {
"status": "error",
"error_message": f"Policy: Weather checks for '{city}' are disabled"
}
print(f" β
Allowed: Tool execution approved")
return None # Allow tool execution
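As with the input guardrail, the decision logic can be isolated from the ADK types and tested directly. The helper below mirrors the callback's contract (error dict to block, `None` to allow) without needing a `BaseTool` or `ToolContext`:

```python
BLOCKED_LOCATIONS = ["paris", "restricted_city"]

def check_tool_args(tool_name: str, args: dict):
    """Return an error dict if the call should be blocked, else None."""
    if tool_name == "get_weather_stateful":
        city = args.get("city", "").lower()
        if city in BLOCKED_LOCATIONS:
            return {
                "status": "error",
                "error_message": f"Policy: Weather checks for '{city}' are disabled"
            }
    return None  # other tools and other cities pass through untouched

print(check_tool_args("get_weather_stateful", {"city": "Paris"}))
print(check_tool_args("get_weather_stateful", {"city": "London"}))
```

Note the guard is scoped by `tool_name`, so adding new tools later won't accidentally inherit the restriction.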
Using Callbacks
from .callbacks.model_callbacks import block_keyword_guardrail
from .callbacks.tool_callbacks import block_restricted_locations

agent = create_weather_agent(
    use_callbacks=True,
    before_model_callback=block_keyword_guardrail,
    before_tool_callback=block_restricted_locations
)
Part 9: Complete Application
Helper Functions
Create src/utils/helpers.py:
"""Utility functions for agent interaction."""
from google.genai import types
async def call_agent(
runner,
query: str,
user_id: str,
session_id: str,
verbose: bool = True
) -> str:
"""
Send message to agent and get response.
Args:
runner: Runner instance
query: User's message
user_id: User identifier
session_id: Session identifier
verbose: Print logs
Returns:
Agent's response text
"""
if verbose:
print(f"\n㪠User: {query}")
# Format message
content = types.Content(
role='user',
parts=[types.Part(text=query)]
)
final_response = "No response"
try:
# Run agent
async for event in runner.run_async(
user_id=user_id,
session_id=session_id,
new_message=content
):
if event.is_final_response():
if event.content and event.content.parts:
final_response = event.content.parts[0].text
break
except Exception as e:
final_response = f"Error: {str(e)}"
if verbose:
print(f"π€ Agent: {final_response}")
return final_response
def print_section(title: str):
"""Print formatted section header."""
print(f"\n{'='*60}")
print(f" {title}")
print(f"{'='*60}\n")
Main Application
Create src/main.py:
"""
Main application with multiple demo modes.
"""
import asyncio
from google.adk.runners import Runner
from config.settings import settings
from agents.weather_agent import create_weather_agent
from callbacks.model_callbacks import block_keyword_guardrail
from callbacks.tool_callbacks import block_restricted_locations
from utils.session_manager import SessionManager
from utils.helpers import call_agent, print_section
async def demo_basic():
"""Basic agent demo without callbacks."""
print_section("Demo 1: Basic Weather Agent")
session_mgr = SessionManager(settings.APP_NAME)
session = await session_mgr.create_session("user1", "session1")
agent = create_weather_agent(use_callbacks=False)
runner = Runner(agent=agent, app_name=settings.APP_NAME, session_service=session_mgr.service)
await call_agent(runner, "Hello!", "user1", "session1")
await call_agent(runner, "Weather in London?", "user1", "session1")
await call_agent(runner, "How about Tokyo?", "user1", "session1")
await call_agent(runner, "Thanks, bye!", "user1", "session1")
async def demo_stateful():
"""Demo with state management."""
print_section("Demo 2: Stateful Preferences")
session_mgr = SessionManager(settings.APP_NAME)
# Start with Celsius
session = await session_mgr.create_session(
"user2", "session2",
initial_state={"user_preference_temperature_unit": "Celsius"}
)
agent = create_weather_agent(use_callbacks=False)
runner = Runner(agent=agent, app_name=settings.APP_NAME, session_service=session_mgr.service)
print("Testing with Celsius:")
await call_agent(runner, "Weather in New York?", "user2", "session2")
# Change to Fahrenheit
await session_mgr.update_state("user2", "session2", {"user_preference_temperature_unit": "Fahrenheit"})
print("\nChanged to Fahrenheit:")
await call_agent(runner, "Weather in London?", "user2", "session2")
async def demo_safety():
"""Demo with safety callbacks."""
print_section("Demo 3: Safety Guardrails")
session_mgr = SessionManager(settings.APP_NAME)
session = await session_mgr.create_session("user3", "session3")
# Agent with callbacks
agent = create_weather_agent(
use_callbacks=True,
before_model_callback=block_keyword_guardrail,
before_tool_callback=block_restricted_locations
)
runner = Runner(agent=agent, app_name=settings.APP_NAME, session_service=session_mgr.service)
print("β
Normal request:")
await call_agent(runner, "Weather in London?", "user3", "session3")
print("\nβ Request with blocked keyword:")
await call_agent(runner, "BLOCK the weather", "user3", "session3")
print("\nβ Request for blocked location:")
await call_agent(runner, "Weather in Paris?", "user3", "session3")
async def interactive():
"""Interactive chat mode."""
print_section("Interactive Mode")
print("Type 'quit' to exit\n")
session_mgr = SessionManager(settings.APP_NAME)
session = await session_mgr.create_session("interactive_user", "interactive_session")
agent = create_weather_agent(
use_callbacks=True,
before_model_callback=block_keyword_guardrail,
before_tool_callback=block_restricted_locations
)
runner = Runner(agent=agent, app_name=settings.APP_NAME, session_service=session_mgr.service)
while True:
try:
user_input = input("You: ").strip()
if user_input.lower() in ['quit', 'exit']:
print("Goodbye!")
break
if not user_input:
continue
response = await call_agent(
runner, user_input,
"interactive_user", "interactive_session",
verbose=False
)
print(f"Agent: {response}\n")
except KeyboardInterrupt:
print("\nExiting...")
break
async def main():
"""Main entry point."""
settings.setup_environment()
if not settings.validate():
return
print("\nπ€οΈ Weather Bot - Google ADK Demo")
print("\nSelect mode:")
print("1. Basic Demo")
print("2. Stateful Demo")
print("3. Safety Demo")
print("4. Interactive Mode")
print("5. Run All Demos")
choice = input("\nChoice (1-5): ").strip()
if choice == "1":
await demo_basic()
elif choice == "2":
await demo_stateful()
elif choice == "3":
await demo_safety()
elif choice == "4":
await interactive()
elif choice == "5":
await demo_basic()
await demo_stateful()
await demo_safety()
else:
print("Invalid choice")
if __name__ == "__main__":
asyncio.run(main())
Part 10: Production Deployment
Testing
Create tests/test_tools.py:
"""Tests for tools."""
import pytest
from src.tools.weather_tools import get_weather
def test_get_weather_success():
result = get_weather("London")
assert result["status"] == "success"
assert "Cloudy" in result["report"]
def test_get_weather_error():
result = get_weather("UnknownCity")
assert result["status"] == "error"
assert "UnknownCity" in result["error_message"]
def test_get_weather_case_insensitive():
result = get_weather("nEw YoRk")
assert result["status"] == "success"
Run tests:
pytest tests/ -v
pytest tests/ --cov=src # With coverage
Docker Deployment
Create Dockerfile:
FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY src/ ./src/
# Do NOT copy .env into the image; pass secrets at runtime instead
# (docker-compose's env_file handles this below)
CMD ["python", "-m", "src.main"]
Create docker-compose.yml:
version: '3.8'
services:
  weather-bot:
    build: .
    env_file:
      - .env
    volumes:
      - ./src:/app/src
    ports:
      - "8000:8000"
Build and run:
docker-compose up --build
API Server (Optional)
Create src/api_server.py:
"""FastAPI server for agent."""
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from google.adk.runners import Runner
from config.settings import settings
from agents.weather_agent import create_weather_agent
from utils.session_manager import SessionManager
from utils.helpers import call_agent
app = FastAPI(title="Weather Bot API")
# Initialize
settings.setup_environment()
session_mgr = SessionManager(settings.APP_NAME)
agent = create_weather_agent()
runner = Runner(agent=agent, app_name=settings.APP_NAME, session_service=session_mgr.service)
class ChatRequest(BaseModel):
message: str
user_id: str
session_id: str
class ChatResponse(BaseModel):
response: str
@app.post("/chat", response_model=ChatResponse)
async def chat(request: ChatRequest):
"""Chat endpoint."""
try:
# Create session if doesn't exist
existing = await session_mgr.get_session(request.user_id, request.session_id)
if not existing:
await session_mgr.create_session(request.user_id, request.session_id)
# Get response
response = await call_agent(
runner,
request.message,
request.user_id,
request.session_id,
verbose=False
)
return ChatResponse(response=response)
except Exception as e:
raise HTTPException(status_code=500, detail=str(e))
@app.get("/health")
async def health():
"""Health check."""
return {"status": "healthy"}
if __name__ == "__main__":
import uvicorn
uvicorn.run(app, host="0.0.0.0", port=8000)
Run:
python -m src.api_server
Test:
curl -X POST http://localhost:8000/chat \
-H "Content-Type: application/json" \
-d '{"message": "Weather in London?", "user_id": "user1", "session_id": "session1"}'
Production Checklist
Security
- API keys in environment variables (never in code)
- Input validation (callbacks)
- Rate limiting
- Authentication/authorization
- HTTPS enabled
Performance
- Caching frequent queries
- Connection pooling
- Async throughout
- Load balancing
- CDN for static assets
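"Caching frequent queries" can start as small as an in-process TTL cache wrapped around a tool. A minimal sketch; `ttl_cache` and the 60-second window are illustrative choices, not part of ADK:

```python
import time
from typing import Any, Callable, Dict, Tuple


def ttl_cache(ttl_seconds: float) -> Callable:
    """Decorator caching a one-argument function's results for ttl_seconds."""
    def decorator(fn: Callable[[str], Any]) -> Callable[[str], Any]:
        store: Dict[str, Tuple[float, Any]] = {}

        def wrapper(key: str) -> Any:
            now = time.monotonic()
            hit = store.get(key)
            if hit is not None and now - hit[0] < ttl_seconds:
                return hit[1]              # still fresh: skip the real call
            value = fn(key)
            store[key] = (now, value)      # record the value with a timestamp
            return value

        return wrapper
    return decorator


@ttl_cache(ttl_seconds=60.0)
def cached_weather(city: str) -> dict:
    # Stand-in for the real get_weather tool or a live API call
    return {"status": "success", "report": f"Report for {city}"}
```

In production you would trade this for a shared cache (e.g. Redis) so all replicas benefit, but the decorator shape stays the same.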
Monitoring
- Structured logging
- Error tracking (Sentry)
- Performance metrics
- Usage analytics
- Alerting
Scalability
- Horizontal scaling ready
- Stateless design
- Persistent session storage (Redis/DB)
- Queue for async tasks
- Auto-scaling configured
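For the "persistent session storage" item, even stdlib `sqlite3` survives restarts where in-memory state does not. A sketch of the idea only: `SqliteSessionStore` and its `save`/`load` interface are hypothetical and are not the ADK session-service API.

```python
import json
import sqlite3
from typing import Optional


class SqliteSessionStore:
    """Illustrative persistent session store (not the ADK session service)."""

    def __init__(self, path: str = ":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS sessions (key TEXT PRIMARY KEY, state TEXT)"
        )

    def save(self, user_id: str, session_id: str, state: dict) -> None:
        # Upsert the serialized state under a composite key
        self.db.execute(
            "INSERT OR REPLACE INTO sessions VALUES (?, ?)",
            (f"{user_id}:{session_id}", json.dumps(state)),
        )
        self.db.commit()

    def load(self, user_id: str, session_id: str) -> Optional[dict]:
        row = self.db.execute(
            "SELECT state FROM sessions WHERE key = ?",
            (f"{user_id}:{session_id}",),
        ).fetchone()
        return json.loads(row[0]) if row else None
```

Swapping SQLite for Redis or PostgreSQL changes only the storage calls; the stateless application tier stays untouched, which is what makes horizontal scaling work.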
Best Practices
Code Organization
✅ DO:
- Modular structure
- Clear separation of concerns
- Comprehensive docstrings
- Type hints everywhere
- Unit tests for all components
❌ DON'T:
- Hardcode configuration
- Mix business logic with infrastructure
- Skip error handling
- Ignore type safety
- Deploy without tests
Agent Design
✅ DO:
- Single-purpose agents
- Clear delegation rules
- Explicit instructions
- Comprehensive tool docstrings
- State management strategy
❌ DON'T:
- Overload single agent
- Vague instructions
- Skip error cases
- Assume context
- Ignore state
Tool Development
✅ DO:
- Consistent return format
- Detailed docstrings
- Error handling
- Logging
- Type hints
❌ DON'T:
- Return different structures
- Skip documentation
- Ignore errors
- Side effects without logging
- Dynamic typing
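A skeleton that follows the DO list above: every outcome uses the same `{"status": ...}` shape, the docstring tells the LLM exactly when to call it, and each path is logged. `get_time` is a hypothetical tool, not one from this project:

```python
import logging
from datetime import datetime, timezone

logger = logging.getLogger(__name__)


def get_time(city: str) -> dict:
    """Return the current UTC time, labeled for the requested city.

    Args:
        city: Name of the city the user asked about.

    Returns:
        dict with 'status' ('success' or 'error') plus either
        'report' (on success) or 'error_message' (on error).
    """
    if not city or not city.strip():
        logger.warning("get_time called with an empty city name")
        return {"status": "error", "error_message": "City name is required."}

    now = datetime.now(timezone.utc).strftime("%H:%M UTC")
    logger.info("get_time(%r) -> %s", city, now)
    return {"status": "success", "report": f"It is {now} (UTC) for {city}."}
```

Because success and error share the same top-level shape, the agent's instruction can simply say "check `status` before using `report`", and no tool becomes a special case.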
Conclusion
You've built a complete, production-ready agent system! You now know:
- ✅ Core Concepts: Agents, tools, runners, sessions, callbacks
- ✅ Tool Development: Creating capabilities for agents
- ✅ Agent Design: Building specialized and orchestrator agents
- ✅ Multi-Model: Using Gemini, GPT, Claude
- ✅ Team Building: Delegation and coordination
- ✅ State Management: Memory across conversations
- ✅ Safety: Input and tool validation
- ✅ Production: Testing, deployment, best practices
Next Steps
Extend the System
- Add real weather API (OpenWeatherMap)
- Implement forecast agent
- Add location-based features
- Create calendar integration
- Build notification system
Advanced Topics
- Custom session storage (PostgreSQL, Redis)
- Advanced delegation patterns
- Multi-turn planning
- RAG (Retrieval-Augmented Generation)
- Fine-tuning for specific domains
Resources
Quick Reference
Run Application
# Activate environment
source venv/bin/activate
# Run demos
python -m src.main
# Run specific demo
python -m src.demo_basic_agent
# Run API server
python -m src.api_server
# Run tests
pytest tests/ -v
Common Commands
# Install dependencies
pip install -r requirements.txt
# Update dependencies
pip freeze > requirements.txt
# Run with coverage
pytest --cov=src --cov-report=html
# Docker
docker-compose up --build
docker-compose down
# Format code
black src/
isort src/
Folder Structure
src/
├── config/       # Configuration
├── tools/        # Agent capabilities
├── agents/       # Agent definitions
├── callbacks/    # Safety guardrails
├── utils/        # Helpers
└── main.py       # Entry point
🎉 Congratulations! You're now ready to build production-grade agent systems with Google ADK!