# Introduction

Source: https://docs.zeroeval.com/autotune/introduction

Run evaluations on models and prompts to find the best variants for your agents.

Prompt optimization is a different approach to the traditional evals experience. Instead of setting up complex eval pipelines, we simply ingest your production traces and let you optimize your prompts based on your feedback.

## How it works

1. Replace hardcoded prompts with `ze.prompt()` calls in Python or `ze.prompt({...})` in TypeScript.
2. Each time you modify your prompt content, a new version is automatically created and tracked.
3. ZeroEval automatically tracks all LLM interactions and their outcomes.
4. Use the UI to run experiments, vote on outputs, and identify the best prompt/model combinations.
5. Winning configurations are automatically deployed to your application without code changes.

Learn how to integrate `ze.prompt()` into your Python or TypeScript codebase, then run experiments and deploy winning combinations.

# Models

Source: https://docs.zeroeval.com/autotune/prompts/models

Evaluate your agent's performance across multiple models.