Overview
In OneRun, agents are the subjects of your testing. They represent customer support bots, sales assistants, technical advisors, or any other conversational AI system you want to evaluate and improve.

How Agents Work
Agents in OneRun represent the AI systems you want to evaluate. Each agent has a name and a description and belongs to a specific project. The description is particularly important: together with the simulation scenario, it guides persona generation, producing realistic personas whose stories and purposes match the kinds of people who would actually interact with your agent.
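As a concrete sketch, here is what creating an agent might look like against a hypothetical REST API. The base URL, endpoint path, field names, and response shape below are illustrative assumptions, not OneRun's documented interface.

```python
import requests

API_BASE = "https://api.onerun.example/v1"   # hypothetical base URL
API_KEY = "YOUR_API_KEY"

# The description carries real weight: it steers persona generation
# during simulations, so describe the audience as well as the task.
agent = {
    "name": "Billing Support Bot",
    "project_id": "proj_123",                # hypothetical project identifier
    "description": (
        "Handles billing questions for a SaaS product: refunds, invoices, "
        "and plan changes. Talks to paying customers who may be frustrated."
    ),
}

resp = requests.post(
    f"{API_BASE}/agents",
    json=agent,
    headers={"Authorization": f"Bearer {API_KEY}"},
)
resp.raise_for_status()
agent_id = resp.json()["id"]                 # assumed response shape
```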
Agent Metadata

You can store relevant evaluation context in the agent’s metadata:
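For example, metadata might record which model, prompt revision, and environment a given evaluation actually exercised. The keys below are illustrative, not a required schema.

```python
# Illustrative metadata: examples of evaluation context, not a prescribed schema.
agent_metadata = {
    "model": "gpt-4o",                        # model backing the agent
    "prompt_version": "v3.2",                 # system prompt revision under test
    "environment": "staging",                 # deployment the evaluation targets
    "knowledge_base": "help-center-2024-06",  # data snapshot the agent can cite
    "owner_team": "support-platform",
}
```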
Agent Types

- Customer Support: Handle customer inquiries, complaints, and support requests with empathy and accuracy
- Sales Assistant: Engage prospects, handle objections, and guide customers through purchase decisions
- Technical Advisor: Provide technical guidance, troubleshooting, and documentation assistance
- Product Guide: Help users navigate features, onboard new customers, and provide usage tips
Evaluation Process
Once an agent is created, it can be evaluated through four steps (a code sketch of the full flow follows the list):

- Simulation Creation - Define test scenarios and objectives
- Persona Generation - Create diverse conversation participants
- Conversation Execution - Run interactions between personas and the agent
- Performance Analysis - Review results against defined objectives
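Under the same hypothetical REST API as the earlier sketch, the four steps might map onto calls like these. Every endpoint, parameter, and response field here is an assumption for illustration.

```python
import requests

API_BASE = "https://api.onerun.example/v1"   # hypothetical base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# 1. Simulation creation: define the scenario and the objectives to score against.
sim = requests.post(f"{API_BASE}/simulations", headers=HEADERS, json={
    "agent_id": "agt_123",                   # the agent under test (assumed ID format)
    "scenario": "A customer disputes a duplicate charge on their invoice.",
    "objectives": [
        "Identify the duplicate charge",
        "Remain empathetic under frustration",
        "Offer a clear refund path",
    ],
}).json()

# 2. Persona generation: personas are derived from the agent description
#    and the scenario; here we ask for five of them (assumed parameter).
requests.post(
    f"{API_BASE}/simulations/{sim['id']}/personas",
    headers=HEADERS,
    json={"count": 5},
)

# 3. Conversation execution: run each persona against the agent.
requests.post(f"{API_BASE}/simulations/{sim['id']}/run", headers=HEADERS)

# 4. Performance analysis: fetch results scored against the objectives.
results = requests.get(
    f"{API_BASE}/simulations/{sim['id']}/results",
    headers=HEADERS,
).json()
for conv in results.get("conversations", []):   # assumed response shape
    print(conv["persona"], conv["objective_scores"])
```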
Best Practices
Store important configuration details in the metadata field for easy reference and version tracking.
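For instance, assuming the same hypothetical API as above, you might update the metadata whenever the prompt or model changes so each evaluation result stays traceable to a specific revision:

```python
import requests

API_BASE = "https://api.onerun.example/v1"   # hypothetical base URL
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

# Keep metadata in step with what is actually deployed, so every simulation
# result can be traced back to the prompt/model revision it exercised.
resp = requests.patch(
    f"{API_BASE}/agents/agt_123",            # assumed agent ID and endpoint
    headers=HEADERS,
    json={"metadata": {
        "prompt_version": "v3.3",            # bump whenever the system prompt changes
        "model": "gpt-4o",
        "last_evaluated": "2024-07-02",
    }},
)
resp.raise_for_status()
```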