Agent Programming, Not Prompting

  • Prompting is not a production model.
  • Production agents must exhibit bounded, deterministic behavior.
  • ADL defines programmable agent logic with enforced execution rules.
Skill: customer_wants_to_buy_a_car
Description
Customer would like to buy a new car.
Steps
  • Ask the customer how much they want to spend on the car.
  • Ask the customer if they would like to trade in their old car.
Solution

Use @send_email() and send Pete an email with the details.

Tell the customer we will notify them when we have cars that fit their budget and model preference.

<is_weekend> As it is the weekend, tell the customer that they will receive a response on Monday.
Context
Pete is our sales representative for car sales.

The Face

ADL Studio

A complete environment for building, testing, and refining your agents.

ADL Studio Interface

Reliable Agent Logic Authoring

Author human-maintainable agent logic with enforced behavioral structure.

Automated Testing

Define test cases and run them against your prompts to ensure consistent and reliable behavior.

Performance Analytics

Get detailed performance metrics and scores for your prompts to identify areas for improvement.

AI-Powered Suggestions

Leverage AI to get suggestions for improving your prompts and generating new test cases automatically.

Interactive Chat Playground

Test your prompts in a real-time chat interface to see how they perform in a conversational context.

Contract-Based Reliability

Enforce critical tests as contracts to ensure your most important use cases never break at runtime.

Run ADL Server + Studio

Get up and running quickly with Docker.

Run the latest version of the ADL Server using Docker. Be sure to replace [OPENAI_API_KEY] with your actual API key and create a local directory called my-adls to mount as your ADL Storage.

docker run -p 8080:8080 \
  -v "./my-adls":/app/adls \
  -e ARC_AI_KEY=[OPENAI_API_KEY] \
  -e ARC_MODEL=gpt-4o \
  -e ARC_CLIENT=openai \
  ghcr.io/eclipse-lmos/adl-server:latest

Examples can be found here: https://github.com/eclipse-lmos/adl/tree/main/adl-examples

A test MCP server can be started with:

docker pull ghcr.io/eclipse-lmos/demo-mcp:latest

Use the Docker Compose file at https://github.com/eclipse-lmos/adl/tree/main/docker-compose.yml

and set the MCP server URL (http://demo-mcp:8088) under Settings.

The ADL Studio is still in the beta phase and may not completely implement all ADL features. We appreciate your feedback and patience as we continue to improve the platform.

Core Philosophy

Why Agent Programming?

Prompt-based systems are not reliable or verifiable.
ADL defines programmable agent behavior for production systems.

Formal Structure

Agent behavior must be explicitly defined.
ADL enforces rules, boundaries, and execution structure.

Controlled Complexity

Agents execute within scoped instructions.
Behavior is constrained to reduce ambiguity and failure.

Stateful Execution

Agents maintain structured state across interactions.
Workflows become persistent and verifiable.

In Action

ADL Examples

See how ADL's controlled execution differs from standard LLM behavior, and how ADL can improve dialog design.

Steps vs. Monologue

LLMs often rush to complete tasks in a single turn. ADL's Steps break down interactions, creating natural, stateful conversations.

Scenario

"Ask the customer for their budget and if they want to trade in their old car."

Standard LLM
User: I want to buy a car.
Assistant: Sure! What is your budget? And would you like to exchange your current car?
Response overload. Asks too many questions at once.
ADL
User: I want to buy a car.
Assistant: Sure! What is your budget?
User replies...
Assistant: Sounds good. Would you like to exchange your current car?
Step-by-step execution. One question at a time.

Enforced Tool Use

LLMs are "lazy" and often skip invisible backend tasks if they feel the user doesn't need to know. ADL ensures tools are called every time.

Scenario

"When a customer is interested, tell them to contact sales. Use inform_interest to signal our department."

Standard LLM
User: I'd love to buy this car!
Assistant: Great! Please contact our sales department at 555-0123.
[Internal thought] User informed. No need to call tool.
Tool Skipped. The backend system was never notified.
ADL
User: I'd love to buy this car!
Executing: @inform_interest()
Assistant: Great! Please contact our sales department at 555-0123.
Tool Enforced. Sales team notified automatically.

Loop Detection

LLM agents can get stuck in repetitive loops. ADL Skills detect these loops and resolve them automatically so the conversation can progress safely.

Scenario

"Help the customer reset access after a failed login while asking for confirmation only once."

Standard LLM
User: I cannot log in.
Assistant: Please confirm your email address.
User: jane@example.com
Assistant: Please confirm your email address.
User: jane@example.com
Assistant: Please confirm your email address.
Endless loop detected too late. The conversation stalls.
ADL
User: I cannot log in.
Assistant: Please confirm your email address.
User: jane@example.com
Loop detected: repeated prompt pattern - applying recovery step.
Assistant: Thanks. I have verified your email and sent a reset link. Please check your inbox.
Loop automatically detected and resolved by ADL Skills.
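Purely as an illustration (this is not ADL's actual detection algorithm), such loops can be spotted by watching for the same assistant message repeating:

```python
def detect_loop(assistant_messages, window=2):
    """Return True if the last `window + 1` assistant messages are identical,
    a simple signal that the conversation is stuck in a repetitive loop."""
    if len(assistant_messages) < window + 1:
        return False
    tail = assistant_messages[-(window + 1):]
    return len(set(tail)) == 1

history = [
    "Please confirm your email address.",
    "Please confirm your email address.",
    "Please confirm your email address.",
]
print(detect_loop(history))  # the same prompt repeated three times -> True
```

A real engine would likely also compare near-duplicate phrasings and trigger a recovery step, as in the transcript above.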

The Paradigm Shift

ADL Skills

Instead of writing prompts and hoping the agent behaves correctly, you describe what you want in clear, structured ADL Skills.

ADL turns those ADL Skills into reliable behavior.

The agent follows your rules, keeps context, and works the way you expect.

requirements.md

Requirements

Skill: refund_customer

Skill: upgrade_customer

SystemPrompt
Tools
Evals
Guardrails

Technical Specification

ADL Skill Format & Capabilities

ADL separates agent behavior definition from LLM prompting, providing a structured format backed by rules and conventions.

ADL Skill Structure

Required | Optional

Each ADL Skill defines how the agent responds to a specific scenario.

  • Name Unique identifier (lowercase with underscores).
  • Description Detailed explanation of the scenario.
  • Goal (Optional) The objective of the skill, useful for evals and tracking business objectives.
  • Examples (Optional) Defines a list of utterances that trigger the skill. For example:
    • i would like to know the weather
    • weather?
    • what is the weather forecast for my area?
  • Context (Optional) Extra information the LLM may need to answer questions the customer may have.
  • Steps The Steps block lists instructions that are fed to the agent one at a time, with ADL skipping steps that do not apply.
  • Solution The recommended resolution path.
  • Alternative Solution (Optional) Logic that is activated after the primary solution has been used.
  • Fallback (Optional) Logic that is activated when neither the primary nor the alternative solution works.

Features

Conditionals

Skill: password_reset
Description

Customer needs to reset their password.

Solution
Provide the customer with the following link to reset their password:
<isBusinessCustomer>https://example.com/b2b/reset-password
<isPrivateCustomer>https://example.com/reset-password

Build adaptive agents that change their behavior based on context. Conditionals act like "if statements" for your prompts, letting you include or exclude instructions based on user attributes, dates, or conversation state.

ADL defines several built-in conditionals, and you can inject custom logic at runtime to handle complex business rules.

  • <c1, c2> Multiple conditions (AND).
  • <c1 or c2> Multiple conditions (OR).
  • <!condition> Negation (e.g. <!is_weekend>).
  • <else> Fallback branch. True if no other Conditional applies.
  • <is_weekend> True if the current date is a weekend.
  • <date> Matches the current date, for example, <10.02.2006>
  • <step_n> True for each turn (e.g. <step_1>, <step_2>).

Multi-line conditionals are also supported:

Skill: password_reset
Solution
<isBusinessCustomer>
A multi line
conditional
</>
<else>
A multi line
conditional
</>
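As an illustration of the semantics above (this is a sketch, not the ADL Engine's real evaluator), a conditional marker can be checked against a set of active conditionals:

```python
def conditional_applies(marker, active):
    """Evaluate a conditional marker such as 'is_weekend', '!is_weekend',
    'c1, c2' (AND) or 'c1 or c2' (OR) against a set of active conditionals.
    Illustrative only -- not the ADL Engine's implementation."""
    marker = marker.strip()
    if " or " in marker:  # OR: any branch may match
        return any(conditional_applies(p, active) for p in marker.split(" or "))
    if "," in marker:  # AND: all branches must match
        return all(conditional_applies(p, active) for p in marker.split(","))
    if marker.startswith("!"):  # negation
        return marker[1:].strip() not in active
    return marker in active

active = {"isBusinessCustomer", "is_weekend"}
print(conditional_applies("isBusinessCustomer, is_weekend", active))  # True
print(conditional_applies("!is_weekend", active))                     # False
```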

Executable Code

Skill: current_news
Description

Customer asks for the current news.

Solution
```kotlin
"Today's News is: ${httpGet("https://news.com")}"
```

Use markdown coding blocks to insert code directly into your Skill.

Predefined Functions

  • httpGet(url) - Performs an HTTP GET and returns the response
  • time(zoneId?) - Current time (HH:mm)
  • date(zoneId?) - Current date (dd.MM)
  • year(zoneId?) - Current year (yyyy)
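The formats these helpers return could be reproduced in Python's standard library as follows (the optional zoneId handling is an assumption about how the functions behave):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def _now(zone_id=None):
    # Use the given IANA zone if provided, otherwise the local clock.
    return datetime.now(ZoneInfo(zone_id)) if zone_id else datetime.now()

def time_str(zone_id=None):   # mirrors time(zoneId?) -> HH:mm
    return _now(zone_id).strftime("%H:%M")

def date_str(zone_id=None):   # mirrors date(zoneId?) -> dd.MM
    return _now(zone_id).strftime("%d.%m")

def year_str(zone_id=None):   # mirrors year(zoneId?) -> yyyy
    return _now(zone_id).strftime("%Y")

print(time_str())
```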

Tool Calls

Skill: password_reset
Solution
Call @password_reset_link()! to generate a reset link.
Provide the link and guide the customer through the process.

Empower your agents to take action. Define tools inline, and the ADL Engine will handle the orchestration: calling the function securely and feeding the result back to the agent.

Syntax: @tool_name()
Enforce: @tool_name()!

By explicitly declaring the required tools, the ADL Engine can:

  • Dynamically load tools.
  • Validate availability.
  • Ensure execution.
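A sketch of how such declarations could be extracted from a Solution text (the regex is illustrative, not the ADL Engine's actual parser):

```python
import re

# @tool_name() with an optional trailing '!' marking the call as enforced.
TOOL_PATTERN = re.compile(r"@([a-z_][a-z0-9_]*)\(\)(!?)")

def extract_tools(solution_text):
    """Return (tool_name, enforced) pairs for every @tool() reference."""
    return [(name, bang == "!") for name, bang in TOOL_PATTERN.findall(solution_text)]

text = "Call @password_reset_link()! to generate a reset link."
print(extract_tools(text))  # [('password_reset_link', True)]
```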

Static Responses

Skill: greeting
Description
Customer is saying hello or greeting the agent.
Solution
"Hello! I am the ADL AI assistant. How can I assist you today?"

Sometimes you need absolute compliance. Static responses bypass the LLM entirely for specific turns, ensuring that legal disclaimers, greetings, or fallback messages are delivered exactly as written.

In this example, the agent will always return the quoted text verbatim.

Conversation flows

Skill: buy_a_new_phone
Description
Customer wants to buy a new phone.
Solution
Ask the customer what phone they want to buy.
[android] goto #buy_android
[ios] goto #buy_ios
[else] thank the customer...

buy_ios
Inform the customer that we currently don't have iphones in stock.

buy_android
Ask the customer what color they would like.
[color] goto #complete_buy_phone
[pink] inform the customer that the color pink is currently out of stock.

complete_buy_phone
Use the @buy_phone() tool to complete the purchase.

Conversation flows enable the author to convey decision trees in their use cases.

The Skill above describes a simple conversation flow with multiple branches. The ADL Engine parses the Skill and ensures the agent follows the defined flow, while still allowing the user to jump to other Skills if needed.

Skill presented as a decision tree:

Start Conversation
├─ [android] → buy_android: What color?
│   ├─ [color] → complete_buy_phone: @buy_phone()
│   └─ [pink] → Out of Stock
└─ [ios] → buy_ios: Sold Out

Styled Output

Skill: current_news
Description
Customer asks for the current news.
Solution
Return the top article from http://news.com using @get_news().
```html
<!-- news.title - The title of the news article -->
<div class="bg-blue-500 p-4 rounded-lg shadow-md max-w-md mx-auto">
  <h2 class="text-xl font-semibold text-gray-800 mb-2">{{news.title}}</h2>
</div>
```

HTML templates can be defined in the Skill, allowing you to create rich, styled responses that go beyond plain text. By using placeholders for dynamic content, you can ensure that your agents deliver visually appealing and contextually relevant information.

In the example above, the agent will return a styled HTML snippet with the title of the top news article. The placeholder {{news.title}} will be replaced with the actual title retrieved by the @get_news() tool.

Clients displaying this content should support Tailwind CSS to render the styles correctly.

The template language Mustache is used for placeholders, allowing for simple variable interpolation.

HTML comments can be used to tell the system how to extract the variables from the generated output.

The MUST Command

Skill: secure_password
Description
Customer wants to set a new password.
Solution
Use the @set_password() tool to set the new password.
You MUST inform the customer that updating their password will take up to 24 hours to propagate across all systems.

Precision in language is crucial for defining agent behavior. While "should" or "can" imply optionality, "MUST" is a definitive directive. In ADL, "MUST" is a reserved keyword that signifies a mandatory requirement.

The ADL Engine extracts these "MUST" instructions to:

  • Reinforce agent behavior by prioritizing these instructions in the system prompt.
  • Generate automatic evaluation criteria to verify that the output adheres to these mandatory constraints.
  • Ensure critical business logic is never bypassed.
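As an illustration of the extraction step (a sketch only; the engine's real extraction may differ), MUST instructions can be collected line by line:

```python
def extract_must_rules(solution_text):
    """Collect lines containing the reserved keyword MUST so they can be
    reinforced in the system prompt and turned into evaluation criteria."""
    return [
        line.strip()
        for line in solution_text.splitlines()
        if "MUST" in line
    ]

solution = (
    "Use the @set_password() tool to set the new password.\n"
    "You MUST inform the customer that updating their password "
    "will take up to 24 hours."
)
print(extract_must_rules(solution))
```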

Variables

Skill: greeting
Description
Greet the logged in user.
Solution
"Hello {{user.name}}! How can I help you today?"

ADL supports Mustache variables to inject dynamic context into your instructions.

This allows you to reference user memory, profile information, or any other context variable directly within your Skill definition.

The ADL Engine resolves these variables before executing the agent logic, ensuring personalized and context-aware behavior.

Like Conditionals, these variables can be injected at runtime when calling the ADL engine.
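A minimal sketch of this interpolation step (not the engine's actual template resolver) resolves dotted Mustache paths against a nested context:

```python
import re

def render(template, context):
    """Replace {{dotted.path}} placeholders with values from a nested dict."""
    def lookup(match):
        value = context
        for key in match.group(1).strip().split("."):
            value = value[key]
        return str(value)
    return re.sub(r"\{\{([^}]+)\}\}", lookup, template)

print(render("Hello {{user.name}}!", {"user": {"name": "Jane"}}))  # Hello Jane!
```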

Comments

Skill: car_sales_intro
Solution
// Internal note: keep this step short and polite
When a customer is interested in buying a car, ask for their budget first.
// Internal note: this comment will not be sent to the agent

Comments are a great way to add notes in ADL that are not fed to the agent.

In ADL, any line starting with // is treated as a comment.

The ADL engine removes these lines before passing the final instructions to the agent.
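The stripping step can be sketched in a few lines (illustrative only, not the engine's code):

```python
def strip_comments(adl_text):
    """Drop lines starting with // (after leading whitespace) so internal
    notes never reach the agent."""
    return "\n".join(
        line for line in adl_text.splitlines()
        if not line.lstrip().startswith("//")
    )

skill = (
    "// Internal note: keep this step short and polite\n"
    "When a customer is interested in buying a car, ask for their budget first."
)
print(strip_comments(skill))
```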

Integration

Connect Anywhere

The ADL Engine accepts ADL files and exposes them via standard protocols, integrating seamlessly into your existing ecosystem.

ADL Files

Structured agent definitions

ADL Engine

Core execution runtime

OpenAI Completions
Standard chat endpoint
MCP Protocol
Tool & context bridge
REST / GraphQL
Management & Control

OpenAI Compatible Endpoint

Integration is effortless. ADL Server exposes a standard /v1/chat/completions endpoint.

This allows you to swap out your existing OpenAI API calls for ADL Server calls, instantly upgrading your application with ADL's structured capabilities without rewriting your client code.

  • Drop-in replacement for OpenAI SDKs
  • Streaming support for real-time UIs
  • Model agnostic - Connect to any LLM backend
cURL Request
curl https://adl-server/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $KEY" \
  -d '{
    "messages": [
      {
        "role": "user",
        "content": "Analyze the Q3 report attached."
      }
    ]
  }'
Python Example
import os

from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI

# Connection settings; adjust these to your deployment.
ADL_SERVER_URL = os.getenv("ADL_SERVER_URL", "http://localhost:8080/v1")
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY", "")
MODEL_NAME = os.getenv("MODEL_NAME", "gpt-4o")

def main():
    print(f"Connecting to ADL Server at {ADL_SERVER_URL}...")

    # Initialize the ChatOpenAI client pointing to the ADL Server, for example: http://localhost:8080/v1
    chat = ChatOpenAI(
        base_url=ADL_SERVER_URL,
        api_key=OPENAI_API_KEY,
        model=MODEL_NAME,
        temperature=0.7,
    )

    # Create a simple message
    messages = [HumanMessage(content="Hello ADL")]

    try:
        # Send the message to the server
        print("Sending message: 'Hello ADL'")
        response = chat.invoke(messages)

        # Print the response
        print("-" * 20)
        print("Response from ADL Server:")
        print(response.content)
        print("-" * 20)
    except Exception as e:
        print(f"Error communicating with ADL Server: {e}")

if __name__ == "__main__":
    main()

Source for this Python example: https://github.com/eclipse-lmos/adl/tree/main/examples/langchain-python

GraphQL API

Comprehensive API for managing, compiling, testing, and evaluating your ADLs.

Queries

Core ADL

  • version(): String

    Returns the supported ADL version.

  • list(searchTerm: SearchCriteria?): [Adl]

    List ADLs that semantically match a searchTerm.

  • searchById(id: String): Adl

    Retrieve a single ADL by ID.

  • search(conversation: [Message], ...): [SkillMatch]

    Find ADL Skills matching a conversation context.

  • store(id: String, content: String, ...): StorageResult

    Save ADL definitions to storage.

Eval & Tests

  • createTests(Skill: String): [TestCase]

    Auto-generate test cases.

  • testCases(adlId: String): [TestCase]

    Fetch test cases.

  • eval(input: EvalInput): EvalOutput

    Run evaluation logic against inputs.
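As an illustration, the version() query above could be issued with a plain HTTP POST. The /graphql path and server URL here are assumptions; check your deployment:

```python
import json
import urllib.request

def build_payload(query):
    """Wrap a GraphQL query string in the standard request envelope."""
    return json.dumps({"query": query}).encode("utf-8")

def query_version(server_url="http://localhost:8080/graphql"):
    """POST the version() query and return the reported ADL version.
    The endpoint path is an assumption for this sketch."""
    request = urllib.request.Request(
        server_url,
        data=build_payload("{ version }"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)["data"]["version"]

# query_version() would contact a running ADL Server, e.g.:
# print(query_version())
```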

Mutations

Build & Validate

  • compile(adl: String, conditionals: [String]): CompileResult

    Compiles ADL to agent-ready format.

  • validate(adl: String): ValidationResult

    Check syntax and tool references.

  • systemPrompt(adl: String, ...): SystemPromptResult

    Generate the full system prompt.

  • improveSkill(Skill: String): ImprovementResponse

    Get AI-suggested improvements.