Overview
While OneRun manages simulations, personas, and evaluations, Workers are where your agent’s intelligence lives. They:
- Poll for conversations assigned to your agent during active simulations
- Implement conversation logic using any AI framework or model
- Handle turn-by-turn exchanges between personas and your agent
- Run independently from the OneRun platform as separate processes
How Workers Function
Workers operate on a polling model:
- Poll for Tasks - Continuously check for new conversations assigned to your agent
- Process Conversations - Handle the actual conversation logic and AI responses
- Exchange Messages - Send and receive messages with personas through the OneRun API
- Manage Concurrency - Handle multiple conversations simultaneously
- Report Status - Update each conversation’s status as it progresses
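The polling model above can be sketched as a simple loop. Everything here is a stand-in for illustration: `StubClient`, its `poll_task` method, and `run_worker` are assumptions, not the real OneRun SDK, which makes HTTP calls where the stub pops from a list.

```python
import time
from typing import Optional

class StubClient:
    """Illustrative stand-in that hands out a fixed queue of tasks."""
    def __init__(self, tasks):
        self._tasks = list(tasks)

    def poll_task(self) -> Optional[dict]:
        # A real client would make an HTTP call to the OneRun API here.
        return self._tasks.pop(0) if self._tasks else None


def run_worker(client, handle_conversation, idle_sleep=0.0, max_idle_polls=1):
    """Core polling loop: fetch assigned conversations and process each one."""
    idle = 0
    processed = []
    while idle < max_idle_polls:
        task = client.poll_task()
        if task is None:
            idle += 1
            time.sleep(idle_sleep)  # back off briefly before polling again
            continue
        idle = 0
        processed.append(handle_conversation(task))
    return processed


client = StubClient([{"conversation_id": "c1"}, {"conversation_id": "c2"}])
results = run_worker(client, lambda task: task["conversation_id"])
# results == ["c1", "c2"]
```

A production loop would poll indefinitely with a sensible backoff rather than exiting after idle polls; the bound here just keeps the sketch finite.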
Worker Architecture
Entrypoint Function
The core of every worker is an entrypoint function that handles individual conversations.

Worker Configuration
Workers are configured with essential parameters:
- Client - OneRun API client for communication
- Project ID - Which project the worker serves
- Agent ID - Which agent this worker implements
- Entrypoint - The function that handles conversations
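Putting the entrypoint and its configuration together, a minimal sketch might look like the following. The `Conversation` type, its `send` method, and the `WorkerConfig` fields are assumptions chosen to mirror the parameters listed above, not the real SDK surface.

```python
from dataclasses import dataclass, field
from typing import Callable

# All names below are illustrative stand-ins for the OneRun SDK.
@dataclass
class Conversation:
    persona_message: str
    replies: list = field(default_factory=list)
    done: bool = False

    def send(self, text: str) -> None:
        """Record the agent's reply; a real client would POST it to OneRun."""
        self.replies.append(text)
        self.done = True  # end after one exchange to keep the sketch finite


def entrypoint(conversation: Conversation) -> None:
    """Handle a single conversation, turn by turn, until it completes."""
    while not conversation.done:
        # Replace this echo with a call into your AI framework or model.
        conversation.send(f"Agent reply to: {conversation.persona_message}")


@dataclass
class WorkerConfig:
    client: object              # OneRun API client for communication
    project_id: str             # which project the worker serves
    agent_id: str               # which agent this worker implements
    entrypoint: Callable[[Conversation], None]  # per-conversation handler


config = WorkerConfig(client=None, project_id="proj_123",
                      agent_id="agent_abc", entrypoint=entrypoint)
convo = Conversation(persona_message="Hello")
config.entrypoint(convo)
# convo.replies == ["Agent reply to: Hello"]
```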
Conversation Flow
Workers manage conversations through a request-response cycle:
- Receive Context - Get conversation details from OneRun
- Generate Response - Use your AI model to create agent responses
- Send to Persona - Submit response and receive persona’s reply
- Continue or End - Loop until conversation completes naturally
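The four steps of the cycle map onto a loop like the one below. `StubConversation` and its methods are hypothetical; the real SDK exposes the same ideas through its own API.

```python
# Sketch of the request-response cycle with stubbed persona turns;
# class and method names are illustrative, not the real SDK.
class StubConversation:
    def __init__(self, persona_turns):
        self._turns = list(persona_turns)
        self.transcript = []

    def next_persona_message(self):
        # Returns None once the persona has nothing more to say.
        return self._turns.pop(0) if self._turns else None

    def send(self, text):
        self.transcript.append(("agent", text))


def handle(conversation, generate):
    while True:
        message = conversation.next_persona_message()  # 1. receive context
        if message is None:                            # 4. end naturally
            break
        conversation.transcript.append(("persona", message))
        reply = generate(message)                      # 2. generate response
        conversation.send(reply)                       # 3. send to persona


convo = StubConversation(["Hi", "Bye"])
handle(convo, lambda m: m.upper())
# convo.transcript alternates persona and agent turns:
# [("persona", "Hi"), ("agent", "HI"), ("persona", "Bye"), ("agent", "BYE")]
```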
Deployment Patterns
Local Development
Workers run as separate Python processes during development, connecting to your local OneRun instance to poll for assigned conversations.

Production Deployment
Deploy workers using various patterns:
- Docker containers for consistent environments
- Kubernetes pods for scalability and resilience
- Cloud functions for serverless execution
- Background services on dedicated servers
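Whichever pattern you choose, the worker itself is just a long-running process. A sketch of a container-friendly entry point follows; the `ONERUN_PROJECT_ID` environment variable is an assumed name, and the poll count is bounded only so the example terminates.

```python
import os
import signal

running = True

def _stop(signum, frame):
    """Flip the flag so the poll loop exits cleanly."""
    global running
    running = False

def main():
    # Read deployment-specific settings from the environment so the same
    # image runs unchanged locally, in Docker, or in Kubernetes.
    # ONERUN_PROJECT_ID is a hypothetical variable name.
    project_id = os.environ.get("ONERUN_PROJECT_ID", "proj_local")
    signal.signal(signal.SIGTERM, _stop)  # shut down cleanly on pod eviction
    polls = 0
    while running and polls < 3:  # bounded here; a real worker polls forever
        polls += 1                # a real worker would poll the OneRun API
    return project_id, polls

result = main()
```

Handling SIGTERM matters in Kubernetes and Docker, which send it before force-killing a container; finishing or checkpointing in-flight conversations on shutdown avoids dropped turns.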
Scaling Considerations
Workers support concurrent conversation processing:
- Multiple workers can serve the same agent for higher throughput
- Concurrent tasks within a single worker for efficiency
- Load balancing happens automatically through the polling mechanism
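Concurrent tasks within a single worker can be sketched with `asyncio`; whether the real SDK manages this for you is an assumption to verify against its documentation.

```python
import asyncio

async def handle_conversation(conversation_id: str) -> str:
    """Stand-in for one conversation; the sleep simulates API I/O."""
    await asyncio.sleep(0)  # yield control while "waiting" on the OneRun API
    return f"{conversation_id}:done"

async def worker(conversation_ids):
    # Process all assigned conversations concurrently in one process.
    return await asyncio.gather(
        *(handle_conversation(cid) for cid in conversation_ids)
    )

results = asyncio.run(worker(["c1", "c2", "c3"]))
# results == ["c1:done", "c2:done", "c3:done"]
```

Because each conversation awaits independently, one slow model call does not block the others, which is what makes a single worker efficient under load.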
Framework Flexibility
Workers can integrate with any AI framework or service:
- LangChain for complex agent workflows
- Direct API calls to AI providers (Anthropic, OpenAI, etc.)
- Custom models and local inference
- Hybrid approaches combining multiple AI services
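One way this flexibility falls out naturally: if the entrypoint depends only on a `generate(prompt) -> str` callable, any backend plugs in. The factory below is an illustrative pattern, not part of the SDK.

```python
from typing import Callable

def make_entrypoint(generate: Callable[[str], str]):
    """Build an entrypoint around any text-generation backend."""
    def entrypoint(persona_message: str) -> str:
        # The worker is indifferent to what produces the text: a LangChain
        # chain, a provider API call, or a local model all fit here.
        return generate(persona_message)
    return entrypoint


# A local echo "model" stands in for any real backend in this sketch.
echo_entrypoint = make_entrypoint(lambda prompt: f"echo: {prompt}")
reply = echo_entrypoint("hello")
# reply == "echo: hello"
```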
Key Concepts
Polling Model
Workers continuously poll OneRun for new conversation assignments, ensuring reliable message delivery without requiring persistent connections.

Stateless Processing
Each conversation is handled independently, making workers resilient to failures and easy to scale horizontally.

Framework Agnostic
Workers act as adapters between OneRun’s evaluation system and your existing AI infrastructure, regardless of the underlying technology.

Getting Started
Ready to implement your first worker? See the Connecting an Agent guide for step-by-step instructions and complete code examples.

Workers can be implemented in any programming language that can make HTTP API calls, though the Python SDK provides the most convenient integration.