Quick Start
Get your first A2A-compatible agent running in about 5 minutes.
Step 1 — Install
pip install a2a-adapter

Step 2 — Your First Agent (3 Lines)
Save the following as my_agent.py:
from a2a_adapter import N8nAdapter, serve_agent
adapter = N8nAdapter(webhook_url="http://localhost:5678/webhook/my-workflow")
serve_agent(adapter, port=9000)

Replace the webhook URL with your real n8n webhook URL.
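In n8n, make sure the workflow is active and that it returns a response to the webhook call (for example via a Respond to Webhook node), so the adapter has a reply to relay back to A2A clients.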
Step 3 — Start the Agent Server
python my_agent.py

Leave this terminal running. The server listens on port 9000 and serves:
- A2A endpoint at http://localhost:9000
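To confirm the server is up before wiring a client, you can fetch the agent card the server publishes. The well-known path below is an assumption based on the A2A specification; adjust it if your a2a-adapter version serves the card elsewhere.

import requests

# Assumption: the agent card is published at the A2A well-known path.
card = requests.get("http://localhost:9000/.well-known/agent.json").json()
print(card.get("name"), card.get("description"))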
Step 4 — Test the Agent
Open a second terminal and use the A2A client:
git clone https://github.com/hybroai/a2a-adapter.git
cd a2a-adapter
python examples/single_agent_client.py

The client sends a test message to http://localhost:9000 and prints the response.
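If you would rather not clone the repository, you can send a request yourself. The payload below follows the JSON-RPC "message/send" shape described in the A2A specification; the method and field names are assumptions and may differ between protocol versions, so treat this as a sketch.

import uuid
import requests

# Minimal A2A request sketch: JSON-RPC 2.0 over HTTP POST to the agent's root URL.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "message/send",
    "params": {
        "message": {
            "role": "user",
            "messageId": str(uuid.uuid4()),
            "parts": [{"kind": "text", "text": "Hello, agent!"}],
        }
    },
}
print(requests.post("http://localhost:9000", json=payload).json())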
Other Frameworks
LangChain (with streaming)
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from a2a_adapter import LangChainAdapter, serve_agent
chain = ChatPromptTemplate.from_template("Answer: {input}") | ChatOpenAI(model="gpt-4o-mini")
adapter = LangChainAdapter(runnable=chain, input_key="input", name="Chat Agent")
serve_agent(adapter, port=9000)  # Streaming auto-detected

LangGraph
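LangGraphAdapter wraps any compiled LangGraph graph. If you do not have one yet, a minimal graph for testing could look like the sketch below; the state keys are chosen to match the input_key and output_key passed to the adapter, and none of it is part of a2a-adapter itself.

from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    messages: str
    output: str

def respond(state: State) -> dict:
    # Echo the input back; replace with real LLM or tool calls.
    return {"output": f"Echo: {state['messages']}"}

builder = StateGraph(State)
builder.add_node("respond", respond)
builder.add_edge(START, "respond")
builder.add_edge("respond", END)
your_compiled_graph = builder.compile()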
from a2a_adapter import LangGraphAdapter, serve_agent
adapter = LangGraphAdapter(
    graph=your_compiled_graph,
    input_key="messages",
    output_key="output",
    name="Workflow Agent",
)
serve_agent(adapter, port=9000)

CrewAI
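The adapter takes an existing crewai.Crew. If you need a quick one to test with, a single-agent crew might look like this; it is an illustrative sketch (assumes an OpenAI API key in the environment), and how the adapter feeds the incoming message into the crew is handled by a2a-adapter, not shown here.

from crewai import Agent, Crew, Task

researcher = Agent(
    role="Researcher",
    goal="Answer questions briefly and accurately",
    backstory="A concise research assistant.",
)
answer_task = Task(
    description="Research the user's question and answer briefly.",
    expected_output="A short, direct answer.",
    agent=researcher,
)
your_crew_instance = Crew(agents=[researcher], tasks=[answer_task])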
from a2a_adapter import CrewAIAdapter, serve_agent
adapter = CrewAIAdapter(crew=your_crew_instance, name="Research Crew")
serve_agent(adapter, port=9000)

Ollama (local LLM)
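Requires a running local Ollama server with the model already pulled (e.g. ollama pull llama3.2:8b).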
from a2a_adapter import OllamaAdapter, serve_agent
adapter = OllamaAdapter(model="llama3.2:8b", name="Local LLM")
serve_agent(adapter, port=9000)  # Streaming enabled by default

Custom Function
from a2a_adapter import CallableAdapter, serve_agent
async def my_agent(inputs: dict) -> str:
    return f"You said: {inputs['message']}"
adapter = CallableAdapter(func=my_agent, name="Echo Agent")
serve_agent(adapter, port=9000)

Custom Class
from a2a_adapter import BaseA2AAdapter, AdapterMetadata, serve_agent
class SentimentAdapter(BaseA2AAdapter):
    async def invoke(self, user_input: str, context_id=None, **kwargs) -> str:
        sentiment = "positive" if "good" in user_input.lower() else "negative"
        return f"Sentiment: {sentiment}"

    def get_metadata(self) -> AdapterMetadata:
        return AdapterMetadata(
            name="Sentiment Analyzer",
            description="Analyzes text sentiment",
        )
serve_agent(SentimentAdapter(), port=9000)

Production Deployment
For production, use to_a2a() to get an ASGI app you can deploy with any ASGI server:
from a2a_adapter import N8nAdapter, to_a2a
adapter = N8nAdapter(webhook_url="http://localhost:5678/webhook/agent")
app = to_a2a(adapter)
# Deploy with: gunicorn app:app -k uvicorn.workers.UvicornWorker
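For local testing you can also hand the same app to uvicorn directly; the module name app.py below is assumed to match the gunicorn command above.

# app.py
import uvicorn
from a2a_adapter import N8nAdapter, to_a2a

adapter = N8nAdapter(webhook_url="http://localhost:5678/webhook/agent")
app = to_a2a(adapter)

if __name__ == "__main__":
    # Development server; in production keep the gunicorn command above.
    uvicorn.run(app, host="0.0.0.0", port=9000)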