
Adapters

Detailed guide for each built-in adapter.

For full signatures, see the API Reference. The tables below focus on the most commonly used constructor arguments.

N8nAdapter

Connects n8n workflows to the A2A Protocol via webhook.

```python
from a2a_adapter import N8nAdapter, serve_agent

adapter = N8nAdapter(
    webhook_url="http://localhost:5678/webhook/my-workflow",
    name="N8n Math Agent",
    description="Math operations powered by n8n",
    timeout=30,
    message_field="message",  # Field name in the webhook payload
)

serve_agent(adapter, port=9000)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `webhook_url` | `str` | required | Your n8n webhook URL |
| `name` | `str` | `"N8nAdapter"` | Agent name |
| `description` | `str` | `""` | Agent description |
| `timeout` | `int` | `30` | HTTP request timeout (seconds) |
| `message_field` | `str` | `"message"` | Webhook payload field name |

Streaming: No (n8n webhooks are request/response)
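The adapter delivers each incoming A2A message to your workflow as a JSON POST body keyed by `message_field`, so an n8n Webhook node can read the text directly. A minimal sketch of the payload construction (the exact body shape is an assumption based on the parameters above):

```python
def build_webhook_payload(message_field: str, user_input: str) -> dict:
    """Build the JSON body presumably POSTed to the n8n webhook."""
    return {message_field: user_input}

# With the default message_field, the n8n Webhook node would receive:
payload = build_webhook_payload("message", "What is 2 + 2?")
```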


LangChainAdapter

Wraps any LangChain Runnable (chains, models, prompts) as an A2A agent.

```python
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from a2a_adapter import LangChainAdapter, serve_agent

chain = ChatPromptTemplate.from_template("Answer: {input}") | ChatOpenAI(model="gpt-4o-mini")

adapter = LangChainAdapter(
    runnable=chain,
    input_key="input",
    name="Chat Agent",
    description="General-purpose chat powered by GPT-4o-mini",
)

serve_agent(adapter, port=9000)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `runnable` | `Runnable` | required | Any LangChain Runnable |
| `input_key` | `str` | `"input"` | Key for the runnable's input dict |
| `name` | `str` | `"LangChainAdapter"` | Agent name |
| `description` | `str` | `""` | Agent description |
| `input_mapper` | `callable \| None` | `None` | Custom input transform function |

Streaming: Auto-detected. If the Runnable supports .astream(), streaming is enabled automatically.
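The `input_mapper` parameter lets you reshape the incoming text before it reaches the runnable, instead of relying on `input_key` alone. A hedged sketch, assuming the mapper receives the raw user string and returns the dict passed to the Runnable (the exact callback signature may differ — check the API Reference):

```python
def context_mapper(user_input: str) -> dict:
    """Hypothetical mapper: wrap the user text and add a fixed context field."""
    return {
        "input": user_input,
        "context": "You are answering on behalf of an A2A agent.",
    }

# adapter = LangChainAdapter(runnable=chain, input_mapper=context_mapper, ...)
```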


LangGraphAdapter

Wraps a compiled LangGraph graph as an A2A agent.

```python
from a2a_adapter import LangGraphAdapter, serve_agent

adapter = LangGraphAdapter(
    graph=compiled_graph,
    input_key="messages",
    output_key="output",
    name="Research Workflow",
    description="Multi-step research workflow",
)

serve_agent(adapter, port=9000)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `graph` | `CompiledGraph` | required | A compiled LangGraph graph |
| `input_key` | `str` | `"messages"` | Input state key |
| `output_key` | `str` | `"output"` | Output state key to extract the response from |
| `name` | `str` | `"LangGraphAdapter"` | Agent name |
| `description` | `str` | `""` | Agent description |
| `input_mapper` | `callable \| None` | `None` | Custom input transform function |

Streaming: Auto-detected. Streams state deltas during graph execution.
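With the default `input_key="messages"`, the adapter presumably seeds the graph state with the user text under that key. If your graph expects LangChain-style message tuples instead of a bare string, an `input_mapper` (signature assumed, as noted for LangChainAdapter) can do the conversion:

```python
def messages_mapper(user_input: str) -> dict:
    """Hypothetical mapper: wrap plain text as a single user message."""
    return {"messages": [("user", user_input)]}

# adapter = LangGraphAdapter(graph=compiled_graph, input_mapper=messages_mapper, ...)
```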


CrewAIAdapter

Wraps a CrewAI Crew as an A2A agent.

```python
from a2a_adapter import CrewAIAdapter, serve_agent

adapter = CrewAIAdapter(
    crew=your_crew_instance,
    name="Research Crew",
    description="Multi-agent research team",
)

serve_agent(adapter, port=9000)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `crew` | `Crew` | required | A CrewAI Crew instance |
| `name` | `str` | `"CrewAIAdapter"` | Agent name |
| `description` | `str` | `""` | Agent description |
| `input_mapper` | `callable \| None` | `None` | Custom input transform function |

Streaming: No. Uses crew.kickoff_async() with sync fallback.
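The async-with-sync-fallback behavior can be pictured as follows. This is an illustrative sketch of the pattern, not the adapter's actual source:

```python
import asyncio

async def run_crew(crew, inputs: dict):
    """Prefer crew.kickoff_async(); fall back to running the blocking kickoff() in a thread."""
    if hasattr(crew, "kickoff_async"):
        return await crew.kickoff_async(inputs=inputs)
    # Crews without an async entry point only expose the blocking kickoff()
    return await asyncio.to_thread(crew.kickoff, inputs=inputs)
```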


OllamaAdapter

Connects to a local Ollama instance for running LLMs as A2A agents.

```python
from a2a_adapter import OllamaAdapter, serve_agent

adapter = OllamaAdapter(
    model="llama3.2:3b",
    name="Local LLM",
    description="Llama 3.2 via Ollama",
    system_prompt="You are a helpful assistant",
)

serve_agent(adapter, port=9000)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `model` | `str` | required | Ollama model name |
| `name` | `str` | `"OllamaAdapter"` | Agent name |
| `description` | `str` | `""` | Agent description |
| `system_prompt` | `str \| None` | `None` | System prompt for the model |
| `base_url` | `str` | `"http://localhost:11434"` | Ollama API base URL |

Streaming: Yes. Uses NDJSON streaming from the Ollama /api/chat endpoint.
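Ollama's `/api/chat` endpoint emits one JSON object per line; the adapter presumably extracts the incremental `message.content` from each line until `done` is true. A minimal sketch of that parsing:

```python
import json

def extract_chunks(ndjson_lines):
    """Yield incremental content from Ollama-style NDJSON chat lines."""
    for line in ndjson_lines:
        if not line.strip():
            continue
        obj = json.loads(line)
        chunk = obj.get("message", {}).get("content", "")
        if chunk:
            yield chunk
        if obj.get("done"):
            break
```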


OpenClawAdapter

Wraps OpenClaw, a local AI coding assistant that runs on your machine, as an A2A agent.

```python
from a2a_adapter import OpenClawAdapter, serve_agent

adapter = OpenClawAdapter(
    thinking="low",
    name="OpenClaw Agent",
    description="Personal AI powered by OpenClaw",
)

serve_agent(adapter, port=9000)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `thinking` | `str` | `"low"` | Thinking level |
| `name` | `str` | `"OpenClawAdapter"` | Agent name |
| `description` | `str` | `""` | Agent description |

Streaming: No. Uses subprocess execution with JSON parsing (checks both stdout and stderr). Supports POSIX signal cancellation.


CallableAdapter

Wraps any async function as an A2A agent.

```python
from a2a_adapter import CallableAdapter, serve_agent

async def my_agent(inputs: dict) -> str:
    return f"You said: {inputs['message']}"

adapter = CallableAdapter(
    func=my_agent,
    name="Echo Agent",
    description="Echoes back user input",
)

serve_agent(adapter, port=9000)
```
| Parameter | Type | Default | Description |
|---|---|---|---|
| `func` | `Callable` | required | Async function to wrap |
| `name` | `str` | `"CallableAdapter"` | Agent name |
| `description` | `str` | `""` | Agent description |
| `streaming` | `bool` | `False` | Whether the function is treated as a streaming callable |

Streaming: Optional. Use CallableAdapter(func=my_streaming_agent, streaming=True) for async-generator callables.
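A streaming callable is an async generator that yields chunks instead of returning a single string. A sketch, assuming the adapter passes the same `inputs` dict as in the non-streaming case:

```python
from typing import AsyncIterator

async def my_streaming_agent(inputs: dict) -> AsyncIterator[str]:
    """Yield the echoed reply word by word."""
    for word in f"You said: {inputs['message']}".split():
        yield word + " "

# adapter = CallableAdapter(func=my_streaming_agent, streaming=True)
```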


Custom Adapter

For full control, subclass BaseA2AAdapter directly:

```python
from a2a_adapter import BaseA2AAdapter, AdapterMetadata, serve_agent

class MyAdapter(BaseA2AAdapter):
    async def invoke(self, user_input: str, context_id=None, **kwargs) -> str:
        # Your logic here
        return f"Response to: {user_input}"

    async def stream(self, user_input: str, context_id=None, **kwargs):
        # Optional: streaming support
        for word in user_input.split():
            yield word + " "

    def get_metadata(self) -> AdapterMetadata:
        return AdapterMetadata(
            name="My Custom Agent",
            description="Does amazing things",
            skills=[
                {"id": "skill-1", "name": "Process", "description": "Processes input"}
            ],
        )

serve_agent(MyAdapter(), port=9000)
```