Use @send_email() and send Pete an email with the details.
Tell the customer we will notify them when we have cars that fit their budget and model preference.
ADL Studio
A complete environment for building, testing, and refining your agents.
Reliable Agent Logic Authoring
Automated Testing
Performance Analytics
AI-Powered Suggestions
Interactive Chat Playground
Contract-Based Reliability
Get up and running quickly with Docker.
Run the latest version of the ADL Server using Docker. Be sure to replace [OPENAI_API_KEY] with your actual API key and
create a local directory called my-adls to mount as your ADL Storage.
docker run -p 8080:8080 \
  -v "./my-adls":/app/adls \
  -e ARC_AI_KEY=[OPENAI_API_KEY] \
  -e ARC_MODEL=gpt-4o \
  -e ARC_CLIENT=openai \
  ghcr.io/eclipse-lmos/adl-server:latest
Examples can be found here: https://github.com/eclipse-lmos/adl/tree/main/adl-examples
A test MCP server can be started with:
docker pull ghcr.io/eclipse-lmos/demo-mcp:latest
Alternatively, use the Docker Compose file at https://github.com/eclipse-lmos/adl/tree/main/docker-compose.yml
and set the MCP server URL (http://demo-mcp:8088) under Settings.
The ADL Studio is still in the beta phase and may not completely implement all ADL features. We appreciate your feedback and patience as we continue to improve the platform.
Why Agent Programming?
Prompt-based systems are not reliable or verifiable.
ADL defines programmable agent behavior for production systems.
Agent behavior must be explicitly defined.
ADL enforces rules, boundaries, and execution structure.
Agents execute within scoped instructions.
Behavior is constrained to reduce ambiguity and failure.
Agents maintain structured state across interactions.
Workflows become persistent and verifiable.
ADL Examples
See the difference between standard LLM behavior and ADL's controlled execution, and how ADL can improve dialog design.
LLMs often rush to complete tasks in a single turn. ADL's Steps break down interactions, creating natural, stateful conversations.
"Ask the customer for their budget and if they want to trade in their old car."
LLMs are "lazy" and often skip invisible backend tasks if they feel the user doesn't need to know. ADL ensures tools are called every time.
"When a customer is interested, tell them to contact sales. Use inform_interest to signal our department."
LLM agents can get stuck in repetitive loops. ADL Skills detect these loops and resolve them automatically so the conversation can progress safely.
"Help the customer reset access after a failed login while asking for confirmation only once."
Instead of writing prompts and hoping the agent behaves correctly, you describe what you want in clear, structured ADL Skills.
ADL turns those ADL Skills into reliable behavior.
The agent follows your rules, keeps context, and works the way you expect.
Skill: refund_customer
Skill: upgrade_customer
ADL Skill Format & Capabilities
ADL separates agent behavior definition from LLM prompting, providing a structured format backed by rules and conventions.
Each ADL Skill defines how the agent responds to a specific scenario.
Customer needs to reset their password.
Build adaptive agents that change their behavior based on context. Conditionals act like "if statements" for your prompts, letting you include or exclude instructions based on user attributes, dates, or conversation state.
ADL defines several built-in conditionals, and you can inject custom logic at runtime to handle complex business rules.
<c1,c2> - Multiple conditions (AND).
<c1 or c2> - Multiple conditions (OR).
<!condition> - Negation (e.g. <!is_weekend>).
<else> - Fallback branch. True if no other Conditional applies.
<is_weekend> - True if the current date is a weekend.
<date> - Matches the current date, for example, <10.02.2006>.
<step_n> - True on the given turn (e.g. <step_1>, <step_2>).
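The combinator semantics (AND via commas, OR via the `or` keyword, negation via `!`) can be sketched in Python. This is an illustration only, not the ADL Engine's actual implementation; `branch_applies` and its string-parsing rules are assumptions for the sake of the example.

```python
def branch_applies(expr: str, facts: set) -> bool:
    """Evaluate an ADL-style conditional expression against a set of
    currently-true condition names (illustrative semantics only)."""
    # "c1 or c2" -> true if any part is true (OR)
    if " or " in expr:
        return any(branch_applies(p.strip(), facts) for p in expr.split(" or "))
    # "c1,c2" -> true only if every part is true (AND)
    if "," in expr:
        return all(branch_applies(p.strip(), facts) for p in expr.split(","))
    # "!c" -> true if the condition does NOT hold (negation)
    if expr.startswith("!"):
        return not branch_applies(expr[1:], facts)
    # Plain condition name: true if it is in the current fact set
    return expr in facts

# Suppose it is a weekend and the conversation is on its first turn:
facts = {"is_weekend", "step_1"}
print(branch_applies("is_weekend,step_1", facts))   # → True  (AND)
print(branch_applies("step_2 or is_weekend", facts))  # → True  (OR)
print(branch_applies("!step_2", facts))             # → True  (negation)
```

An `<else>` branch would then fire only when every other conditional in the Skill evaluated to false.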
Multi-line conditionals are also supported:
Customer asks for the current news.
Use markdown code blocks to insert code directly into your Skill.
Predefined Functions
Empower your agents to take action. Define tools inline, and the ADL Engine will handle the orchestration—calling the function securely and feeding the result back to the agent.
By explicitly declaring the tools that are required, the ADL Engine can:
Sometimes you need absolute compliance. Static responses bypass the LLM entirely for specific turns, ensuring that legal disclaimers, greetings, or fallback messages are delivered exactly as written.
In this example, the Agent will always return the text within the brackets.
Conversation flows enable the author to convey decision trees in their use cases.
The Skill above describes a simple conversation flow with multiple branches. The ADL Engine parses the Skill and ensures that the Agent follows the defined flow, while still allowing the user to jump to other Skills if needed.
Skill presented as a decision tree:
HTML templates can be defined in the Skill, allowing you to create rich, styled responses that go beyond plain text. By using placeholders for dynamic content, you can ensure that your agents deliver visually appealing and contextually relevant information.
In the example above, the agent will return a styled HTML snippet with the title of the top news article. The placeholder <news.title> will be replaced with the actual title retrieved by the @get_news() tool.
Clients displaying this content should support Tailwind CSS to render the styles correctly.
The template language Mustache is used for placeholders, allowing for simple variable interpolation.
HTML comments can be used to inform the system how to extract the variables from the generated output.
Precision in language is crucial for defining agent behavior. While "should" or "can" imply optionality, "MUST" is a definitive directive. In ADL, "MUST" is a reserved keyword that signifies a mandatory requirement.
The ADL Engine extracts these "MUST" instructions to:
ADL supports Mustache variables to inject dynamic context into your instructions.
This allows you to reference user memory, profile information, or any other context variable directly within your Skill definition.
The ADL Engine resolves these variables before executing the agent logic, ensuring personalized and context-aware behavior.
Like Conditionals, these variables can be injected at runtime when calling the ADL engine.
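Variable resolution of this kind can be sketched in Python. This is a minimal illustration of Mustache-style interpolation, not the ADL Engine's actual implementation; the `render` helper and the sample context keys are assumptions.

```python
import re

def render(template: str, context: dict) -> str:
    """Resolve simple {{dotted.path}} Mustache-style placeholders by
    walking the context dictionary (illustrative only)."""
    def resolve(match):
        value = context
        # Follow each segment of the dotted path into nested dicts
        for key in match.group(1).strip().split("."):
            value = value[key]
        return str(value)
    return re.sub(r"\{\{([^{}]+)\}\}", resolve, template)

instruction = "Greet {{user.name}} and mention their city, {{user.city}}."
context = {"user": {"name": "Pete", "city": "Bonn"}}
print(render(instruction, context))
# → Greet Pete and mention their city, Bonn.
```

In ADL, the engine performs this substitution over the Skill text before the instructions reach the agent, so the LLM only ever sees the resolved values.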
Comments are a great way to add notes in ADL that are not fed to the agent.
In ADL, any line starting with // is treated as a comment.
The ADL engine removes these lines before passing the final instructions to the agent.
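This comment-stripping pass can be sketched in a few lines of Python. It is illustrative only; `strip_comments` is a made-up helper, not part of the ADL API.

```python
def strip_comments(adl_text: str) -> str:
    """Drop every line whose first non-whitespace characters are //
    (mirroring how the engine removes comments before prompting)."""
    return "\n".join(
        line for line in adl_text.splitlines()
        if not line.lstrip().startswith("//")
    )

skill = """Skill: refund_customer
// internal note: check with legal before changing this flow
Ask the customer for the order number."""
print(strip_comments(skill))
# → Skill: refund_customer
# → Ask the customer for the order number.
```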
Connect Anywhere
The ADL Engine accepts ADL files and exposes them via standard protocols, integrating seamlessly into your existing ecosystem.
Structured agent definitions
Core execution runtime
Integration is effortless. ADL Server exposes a standard /v1/chat/completions endpoint.
This allows you to swap out your existing OpenAI API calls for ADL Server calls, instantly upgrading your application with ADL's structured capabilities without rewriting your client code.
curl https://adl-server/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $KEY" \
  -d '{
    "messages": [
      { "role": "user", "content": "Analyze the Q3 report attached." }
    ]
  }'
import os

from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage

# Point the client at the ADL Server, for example: http://localhost:8080/v1
ADL_SERVER_URL = "http://localhost:8080/v1"
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]
MODEL_NAME = "gpt-4o"

def main():
    print(f"Connecting to ADL Server at {ADL_SERVER_URL}...")

    # Initialize the ChatOpenAI client pointing to the ADL Server
    chat = ChatOpenAI(
        base_url=ADL_SERVER_URL,
        api_key=OPENAI_API_KEY,
        model=MODEL_NAME,
        temperature=0.7,
    )

    # Create a simple message
    messages = [HumanMessage(content="Hello ADL")]

    try:
        # Send the message to the server
        print("Sending message: 'Hello ADL'")
        response = chat.invoke(messages)

        # Print the response
        print("-" * 20)
        print("Response from ADL Server:")
        print(response.content)
        print("-" * 20)
    except Exception as e:
        print(f"Error communicating with ADL Server: {e}")

if __name__ == "__main__":
    main()
Source for this Python example: https://github.com/eclipse-lmos/adl/tree/main/examples/langchain-python
Comprehensive API for managing, compiling, testing, and evaluating your ADLs.
Return the supported ADL version.
List ADLs that semantically match a searchTerm.
Retrieve a single ADL by ID.
Find ADL Skills matching a conversation context.
Save ADL definitions to storage.
Auto-generate test cases.
Fetch test cases.
Run evaluation logic against inputs.
Compile ADL to an agent-ready format.
Check syntax and tool references.
Generate the full system prompt.
Get AI-suggested improvements.