BaseRock-AI/docs

Image 01

User Guide

Version 1.4

About BaseRock Agentic QA Platform

BaseRock Agentic QA is an AI-driven automation platform purpose-built to automate functional testing through intelligent, self-learning QA agents. It lets teams validate software behavior across complex user journeys, APIs, and integrations without relying heavily on manually written test scripts.

Deployment Architecture

Image 02

The BaseRock Agentic QA Platform is composed of two primary components — the BaseRock Control Plane and the BaseRock Agent.

BaseRock Control Plane

The Control Plane serves as the central hub for managing the entire QA process. It enables users to:

  • Onboard services and define testing requirements
  • Strategize test plans and prepare comprehensive test cases
  • Integrate with external systems such as source code repositories, MCP Servers, and LLM foundation models

The Control Plane is available as both a SaaS offering and a self-hosted deployment. It includes a web-based UI portal that allows users to easily create connectors, configure environments, and manage automation.

BaseRock Agent

The BaseRock Agent operates within the customer’s environment — typically behind their firewall — and acts as a secure bridge between the Control Plane and local systems.

The Agent:

  • Connects to the Control Plane
  • Receives test execution instructions and executes test suites on the host machine (virtual or physical)
  • Adds or configures services in BaseRock from the development environment
  • Executes automated test suites as part of the CI/CD pipeline

It can be deployed on:

  • A developer’s local machine for on-demand testing
  • A dedicated worker node (physical or virtual) for continuous integration workflows

See the BaseRock Agent Usage section for more details.

Getting Started

Prerequisites

  • Access to BaseRock AI: This guide assumes you have access to the BaseRock AI SaaS platform, or to a Control Plane deployed in your VPC, that allows you to configure your application. We'll use the SaaS control plane for illustration. If you need access, email sales@baserock.ai.

  • Your Application: You need an environment running your application so that the baserock-agent can target it and execute the functional test cases.

If you do not have a suitable application set up, you can use the sample TODO List application linked in the Appendix; its README describes how to set up the application locally.

  • Admin Access to Source Code Repository: BaseRock installs an app via OAuth in your GitHub, Bitbucket, or GitLab repository, which allows BaseRock to access the source code and learn about your application. This requires someone with admin access.

First Time Setup

1. Login:

For a newly created account, you can log in using the following steps:

  • Sign in with GitHub or GitLab via OAuth in your browser, or log in via email:
  • Enter your name and email address.
Image 04
  • Go to your inbox and find the email sent by BaseRock AI. Note: check the spam folder if it does not show up in the inbox.
Image 05
  • Click on the link under “Click here to confirm your email address”
  • Go back to the BaseRock AI tab and click on REFRESH button
Image 06

You should be able to get into the BaseRock AI application now.

2. Invite Team members

Image 07

3. Setup LLM Provider

BaseRock operates using the client's LLM API key and supports leading providers such as Anthropic and OpenAI, whether used via their APIs directly or through models available on Azure, Amazon Bedrock, and Google Vertex.

Image 08

4. Add Connector

BaseRock supports Git connectors for Bitbucket (Cloud and Native) and GitHub (Cloud). Access to the source code is read-only and follows strict compliance, as BaseRock is SOC 2 Type II certified.

Note: for first-time setup, a repository admin needs to provide the approval.

Image 09

Quick Start

1. Add a Service

Image 10

Adding a service in BaseRock lets users discover all the endpoints available in the respective microservice. Click the “Add Service” button in the top right corner to open the service creation form and provide the details.

Image 11

For monolithic repositories, users can create BaseRock services that logically represent a specific section of the monolith. Each service corresponds to one or more application modules into which the monolith has been structured, and should be named accordingly.

To discover endpoints associated with these modules, users can choose one of two approaches:

  1. Provide a Service Code Path pointing to the relevant module(s), or
  2. Specify routers or controllers in the Endpoints text area, where they can list either individual endpoints or entire routers/controllers.

In the screenshot above, “/module/v1” discovers all the endpoints that belong inside that controller/router, and the exact endpoint named “/user/details/{id}” is also discovered.

Requirement Document

Image 12

The service requirement document is used by BaseRock to gather more context about the service under test. Its content does not need to follow any specific format. Supported file types are .txt and .pdf, with a total size of 10 MB max.

Ideally, the document includes API details, API rules, examples, validation checkpoints, and user flows. None of these are mandatory, however; the document can contain very basic information written in plain English as well.

A sample data in the document might look like this:

Example:

Purpose
This document outlines the requirements for a web-based To-Do List Application that enables users to efficiently manage and organize their daily tasks from any browser.
Features
- Users can register using an email and password via /auth/register endpoint.
- The task title must be between 5 and 250 characters when using POST /create-task.
- Users can set due dates and times for tasks.
- Users can assign priority levels (e.g., Low, Medium, High).
- Users can filter tasks by category, due date, completion status, or priority.
Use Cases
- Data is tied to individual user accounts and securely retrieved upon login.
- Upon creation of a new TODO list item, a corresponding JIRA ticket must be automatically created.
- An email notification must be sent to the task creator upon successful JIRA ticket creation for a new TODO item.
- The email notification must include a direct link to the newly created JIRA ticket.
- All requests for new TODO list items and their corresponding email notifications must be audited for tracking and compliance purposes.

2. Create Test Cases

The test cases depend on:

  1. Source code read from repository
  2. Requirement document uploaded by user (optional)
  3. Prompt given by user (optional)

Test cases can be generated from two places: within a business flow, or within the service created above.

Image 13

When inside an individual service, click the 3 dots on the top right corner and select “Generate Test Cases”. A window opens, as shown above, where you can either provide a prompt to add domain-specific instructions or simply click Generate to produce an optimized number of test cases based on the source code.

When inside the Business Flows section, on the “Use Case” tab, after generating the use cases, click the 3 dots on the top right and select “Map Use Cases To Test Cases”. BaseRock will generate test cases and map them to the respective use cases. (More about use cases in the upcoming section.)

The final results will look something like this:

Image 14

3. Generate Playbooks

For a detailed understanding of playbooks, refer to the next section. For now:

  • Go to the Configurations tab inside the service and fill it in as needed. For example, if the API requires authentication, make sure it is set up in the Configurations tab before proceeding.
  • Click the 3 dots inside the Endpoints tab and select Generate Playbooks. After some time you should see the endpoint-level playbooks populated. (You can learn about the playbook hierarchy later.)

4. Running Tests

Tests can be executed in 2 different ways:

  • A single test case, for quick dry runs to confirm you are on the right track.

Follow the steps:

  1. Go to the generated test case; you'll see a triangular run button
  2. Click it, select the agent, select the environment, and enter the base URL under test
  3. Click Run
  4. You will be navigated to the Test Run page, and your terminal will show the execution logs. Once the execution is over, you'll see the final results in the UI. Refer to the Test Runs section

NOTE: to run via the UI, the user must ensure that the BaseRock agent is actively running on the machine where the execution should take place.

Image 15
  • Multi-test runs, such as a smoke suite, a regression suite, or all tests of a single service. This is achieved by configuring the run_tests.sh file as needed and running it via the CLI or a CI/CD pipeline. Examples are shown in the section Execute test suite.
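
The CI/CD route can be sketched as a single pipeline step. This is only an illustration: the script name regression_suite.sh, and the assumption that the suite script exits with a non-zero status when a test case fails, are not confirmed by this guide.

```shell
#!/bin/sh
# Hypothetical CI step. "regression_suite.sh" stands for a copy of
# run_tests.sh configured for a regression suite; we assume it exits
# non-zero when any test case fails.
set -e
./regression_suite.sh
echo "BaseRock regression suite passed"
```

With set -e, the step aborts (and the pipeline stage fails) as soon as the suite script returns a failure.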

5. Playbooks

Set preconditions and postconditions for testing an endpoint, e.g., authentication before triggering an API and deletion of test data after test execution.

Perform special actions such as capturing the system date/time, generating a random string/number and storing it in a variable for later use, looping a test at a specific interval, or asking BaseRock to use an already existing value in a payload at runtime.

Playbooks are the most powerful feature of BaseRock when used properly, and they enable users to be as creative as they want.

Image 16

In the above example screenshot:

The goal is to test the PUT /todos/${{todo_id}} endpoint. However, to automate this test, a todo item must always exist so that the PUT endpoint has a resource to update.

In automated testing, it is standard practice to execute the full flow from the beginning, even when the objective is to validate an intermediate step (in this case, the PUT endpoint).

Therefore, we use the Playbooks (Setup) section to create a todo item at the start of every test run. This ensures that a valid todo item is always available for the update operation. During this setup step, the system-generated todo_id is captured at runtime and stored for reuse in the subsequent PUT request.

After the PUT operation is tested, a postcondition step executes a DELETE request to remove the created todo item. This cleanup step prevents test data from accumulating in the database across multiple test executions.

Understanding Playbooks

Playbooks are a medium for communicating with BaseRock AI via English prompts. Playbook text areas are available at various stages and for different uses. Let's understand the hierarchy of playbooks and how to use them.

Precedence level (high to low): Test Case Level Playbook > Endpoint Level Playbook > Service Level Playbook

Coverage level (high to low): Service Level Playbook > Endpoint Level Playbook > Test Case Level Playbook

  • Service Level Playbook: Service-level playbooks have the highest coverage, i.e., they apply to the entire service. Any data or instruction that needs to be reused across multiple endpoints or test cases can be defined here; it then cascades automatically, reducing repetition.
Image 17
  • Endpoint Level Playbook: These playbooks take precedence over the service level. Instructions that must override the service level but apply to all test cases of that endpoint go here.

For example:

In the service playbook, say you define a variable in the “Runtime Execution Context Variables” section as: user_name: johndoe123. The service-level playbook can then refer to it as ${user_name} in an instruction. However, if there is an endpoint /login that specifies POST /login with user_name: "johnwick321", then the endpoint-level value (johnwick321) overrides the service-level variable.

Additionally, if the username (or any variable) is dynamically generated at runtime within the endpoint definition (e.g., a random email address), that generated value also takes precedence over the service-level variable.
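
The precedence described above can be sketched as two playbook snippets. The wording is illustrative only; this guide does not prescribe an exact playbook syntax:

```text
# Service Level Playbook (Runtime Execution Context Variables)
user_name: johndoe123

# Endpoint Level Playbook for POST /login
Send the login request with user_name: "johnwick321".

# At runtime, the endpoint-level value (johnwick321) is used for /login,
# while other endpoints referring to ${user_name} still get johndoe123.
```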

  • Test Case Level Playbook

When there are specific test-case-level validations that may depend on the domain, the product/domain expert can populate them explicitly here.

Check the Playbooks Usage section to understand best practices and see some examples.

BaseRock Platform Walkthrough

The sidebar panel on the left navigates to the different sections of the BaseRock portal, which provide access to all the functionality.

Business Flows:

This section lets users generate use cases from a given PRD document and map the corresponding AI-generated test scripts to them, creating a Requirement Traceability Matrix.

Services:

The Services section lists the services that BaseRock has learnt from the source code.

Image 19

This page shows all the services under test in your applications, with infinite scroll and the corresponding versions alongside.

To see more information about a service, click either the name of the service or “View x more endpoints”. Once you click the name of the service, you land on the page shown below.

Image 20

There are 3 tabs within a particular service: Sources, Configurations, and Endpoints.

Sources:

It shows how BaseRock has learnt about the service, and how you can feed more context to BaseRock via a requirement document.

Image 21

Another example of BaseRock’s learning via Github repository is shown above under the sources tab of the service.

Versioning: Users can maintain different versions of service branches to ensure that test cases generated for previous versions stay intact and are not updated without approval.

Image 22

A versioning use case: the service is first set up with the main branch; later, a feature branch containing significant changes needs to be tested, but without making any changes to the test cases previously generated for the main branch.

This highlights the need for version-aware testing to support parallel validation across code branches.

Configurations:

The next tab is Configurations, where a user can define additional context variables and their values (static, or dynamic via instructions) to be used for computations or for providing actions via playbooks. Check the section Best practices for using variables in BaseRock.

Image 23

For example, you can create variables for credentials or for an authentication token that needs to be added to the payload of the API request before sending it.

Open-Questions:

Open questions are how BaseRock bridges the context gap between what it has learnt from the source code and the information still missing to run the end-to-end flows.

Image 24

Open questions can be answered in simple, plain English, either to provide answers directly or to perform actions that fetch the answers dynamically at runtime.

Endpoints:

The third tab within the service is Endpoints, where all APIs, Kafka topics, etc. are shown according to BaseRock's learning, as in the images.

Image 25

Each endpoint has its details, a set of test scripts covering positive, negative, and edge cases, and the execution results of the respective test runs.

The 3 dots in the top right corner of the Endpoints tab give users options to Generate Playbook, Generate Test Suite, Rediscover API Endpoints, Download Test Cases, and Modify Service. This is the main step towards test case generation.

Image 26

The prompt area shown while generating test cases allows users to add their domain expertise and business-specific validations to the test cases.

Check the Test case generation section to learn how to leverage prompts to generate quality, desirable test cases.

Further, inside an endpoint you can verify its schema, from which different flavours of payloads and test cases will be generated.

Image 27

Test Cases:

This is a consolidated section for all the test cases of all the services in BaseRock. Use filters to find the desired tests and customize how the table looks.

Image 28

Test Runs:

This section stores all the test results belonging to any service or suite. The following table contains all the "Test Runs"; when you open a test run, you see all the test cases that were executed in it. A test run's status is failed if even a single test case inside it fails. You can filter the results by the type of test, the service it belongs to, etc.

Image 33

Below are the "Test Case Runs" inside a particular test run:

Image 29

Connectors:

Connectors let BaseRock connect to Git, from which it learns the source code (read-only).

For more details on using connectors, check the Add Connector section.

Test profiles:

Test profiles are a way for users to store environment settings that can be changed while running test cases. For example, a UAT environment will have different configurations than pre-production.

Image 32

Access management:

There are two sets of users in BaseRock:

  • Administrators, who can add service accounts and modify the LLM keys in use
  • Users, regular users of BaseRock with all access except the above

To learn how to add a team member to BaseRock, check the Invite Team Members section.

Settings:

Settings allows Administrators to add the LLM provider to BaseRock without any external help and get started.

Check the Setup LLM Provider section to learn how.

BaseRock Agent Usage

As explained in the BaseRock Agent introduction above: in a nutshell, the BaseRock agent allows the BaseRock control plane to communicate with the developer's machine (physical or virtual) on which test execution takes place.

There are 3 primary usages of BaseRock agent:

  1. Execute single test cases via the UI
  2. Execute batches of test cases, either by manual trigger or through a CI/CD pipeline
  3. Create a service on BaseRock from locally available source code when a Git connector is not used

The BaseRock agent can be downloaded from the left panel of the UI. Once downloaded, make sure the environment is set (run set_env.sh) and the agent is whitelisted (run startup.sh).

Image 30

Execute single test case:

BaseRock allows users to run a single test case right from the UI. Every test case has a “run” button in front of it.

The only condition is that the BaseRock agent must already be running and in Active status, as shown above. To activate the agent, run start_baserock_daemon.sh from the baserock_agent folder you downloaded.

Execute test suite:

To run multiple tests, the user first configures the suite file (named run_tests.sh) inside the BaseRock agent folder, then executes it.

Configurations are as below:

./baserock_agent --run-tests \
--service="service_ABC" \
--env="staging123" \
--service-url="https://app.dev.yourCompany.com/" \
--protocol="rest,graphql,kafka" \
--test-case-id="TC-1,TC-2" \
--tags="regression" \
--category="HAPPY_TEST_CASES or NEGATIVE_TEST_CASES" \
--endpoints="/user/auth,/temp/job,/user/jobs" \
--version="2025.12.11.978" \
--method="GET,POST,PUT,DELETE,PATCH"

Where,

  • Service is the name of the service inside BaseRock containing the test cases (mandatory).
  • Env is a string to organise the tests by the environment they run against, such as staging or pre-prod (mandatory).
  • Service-url is the base URL of your application's endpoints (mandatory).
  • Protocol is the endpoint type, such as REST, Kafka, or GraphQL (mandatory).
  • Test-case-id is used to include specific test cases in this suite.
  • Tags are the string literals you have tagged test cases with.
  • Category is the type of test case that BaseRock has generated.
  • Endpoints let you organise the suite at the endpoint level.
  • Version is your service version.
  • Method lists your API methods.

For multiple values, use double-quoted, comma-separated values as shown above.

Add service from local source:

This option allows users to add a specific version of a service to BaseRock directly from the local machine, without first pushing to the cloud repository.

Run the file add_service.sh after configuring the following inside this file.

./baserock_agent --add-service \
--service=<service-name> \
--tech-stack=<tech-stack> \
--integration-type=<Integration-type> \
--local-src-path=<local-src-path>

Here,

  • Service is the name the service will be stored as; it will be reflected in the BaseRock UI.
  • Tech stack is your microservice's programming language; just mention Java, Python, GoLang, JavaScript, etc.
  • Integration type is not mandatory; it is additional information about the tech stack, for example Springboot for Java, Django for Python, Kafka, etc.
  • Local src path is the path where the source code is saved on the local system, from which BaseRock will learn and discover the endpoints. This is reflected in the BaseRock UI once discovery is complete.
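
A filled-in invocation might look like the following sketch. Every value here is an illustrative placeholder, mirroring the flag template above:

```shell
# Hypothetical example: register a local Java/Springboot service with BaseRock.
# The service name, integration type, and source path are placeholders.
./baserock_agent --add-service \
--service=todo-web-service \
--tech-stack=Java \
--integration-type=Springboot \
--local-src-path=/home/dev/src/todo-web-service
```

Once discovery completes, the service appears in the BaseRock UI under the given name.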

Detailed Sample Examples:

Either use the command line directly, or use run_tests.sh to create multiple .sh files with different configurations for different needs, and run them via the terminal or a CI pipeline.

For example, if you have a few login test cases that must run before every release, create a copy of run_tests.sh, rename it to, say, login_smoke_suite.sh, and use the following configuration for your "smoke suite":

./baserock_agent --run-tests \
--service="login" \
--env="dev" \
--service-url="https://app.dev.yourCompany.com/" \
--protocol="rest" \
--tags="smoke"

Below are some more examples of other configurations, using a petstore scenario.

  1. Run all test cases of the “petstore-backend” service against a QA environment, where the service is hosted at https://qa.example.app/petstore-backend.

Prerequisites:

  • User has access to the BaseRock control plane (By default it’s https://app.baserock.ai)
  • The user has already onboarded a service named as “petstore-backend” resulting in discovery of many endpoints.
  • The user has generated and reviewed playbooks for the endpoints that entail setup, test and teardown steps.
  • The user has already generated and reviewed test suites for some of the endpoints.
  • The user has an instance of the petstore-backend service already running successfully at https://qa.example.app/petstore-backend URL.

Command Line:

./baserock_agent --run-tests \
--service=petstore-backend \
--env=QA \
--service-url=https://qa.example.app/petstore-backend

  2. Run all positive test cases of the “petstore-backend” service against a localhost environment, where the service is hosted at http://localhost:8080.

Prerequisites:

  • User has access to the BaseRock control plane (By default it’s https://app.baserock.ai)
  • The user has already onboarded a service named as “petstore-backend” resulting in discovery of many endpoints.
  • The user has generated and reviewed playbooks for the endpoints that entail setup, test and teardown steps.
  • The user has already generated and reviewed test cases for some of the endpoints.
  • The user has an instance of the petstore-backend service already running locally at http://localhost:8080.

Command Line:

./baserock_agent --run-tests \
--service=petstore-backend \
--env=QA \
--service-url=http://localhost:8080 \
--category=HAPPY_TEST_CASE

Note: category is specified as an additional filter.

  3. Run test cases of a specific set of endpoints (/inventory and /owners) of the “petstore-backend” service against a localhost environment, where the service is hosted at http://localhost:8080.

Prerequisites:

  • User has access to the BaseRock control plane (By default it’s https://app.baserock.ai)
  • The user has already onboarded a service named as “petstore-backend” resulting in discovery of many endpoints.
  • The user has generated and reviewed playbooks for the endpoints that entail setup, test and teardown steps.
  • The user has already generated and reviewed test cases for /inventory and /owners endpoints.
  • The user has an instance of the petstore-backend service already running locally at http://localhost:8080 URL.

Command Line:

./baserock_agent --run-tests \
--service=petstore-backend \
--env=QA \
--service-url=http://localhost:8080 \
--endpoint=/inventory,/owners

Note: endpoint addresses are specified as an additional filter with comma delimiter.

  4. Run test cases of a specific method (“POST”) of a specific endpoint (“/inventory”) for the “petstore-backend” service against a localhost environment, where the service is hosted at http://localhost:8080.

Prerequisites:

  • User has access to the BaseRock control plane (By default it’s https://app.baserock.ai)
  • The user has already onboarded a service named as “petstore-backend” resulting in discovery of many endpoints.
  • The user has generated and reviewed playbooks for the endpoints that entail setup, test and teardown steps.
  • The user has already generated and reviewed test suites for POST /inventory endpoint.
  • The user has an instance of the petstore-backend service already running locally at http://localhost:8080 URL.

Command Line:

./baserock_agent --run-tests \
--service=petstore-backend \
--env=QA \
--service-url=http://localhost:8080 \
--endpoint=/inventory \
--method=POST

Note: endpoint addresses and method are specified as additional filters.

Best Practices

Context variables

  1. Create context variables in camelCase. It's not mandatory, but following a single naming standard reduces the chance of mistakes when referring to variables later on.
  2. Refer to context variables with the ${} syntax, e.g. ${variableName}, everywhere throughout BaseRock, whenever you want to read a variable's value or write anything back to it.
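
As a sketch of both practices together (the variable names and instructions are illustrative, not a syntax this guide prescribes):

```text
# Configurations tab: context variables, named in camelCase
authToken: <obtained by a login step in the service playbook>
defaultUserId: 42

# Playbook instruction referring to them with the ${} syntax
Add the header 'Authorization: Bearer ${authToken}' to every request,
and call GET /user/details/${defaultUserId} during setup.
```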

Test case generation

  1. Always have a clear vision of what type of tests you expect from BaseRock. It's best to jot the expectations down somewhere to refer to while working with BaseRock.
  2. Try to document the following items: functional requirements, API details, request-response examples, validation rules, business use cases, and existing test scenarios or test cases (if any). This information makes BaseRock context-rich, which leads to high-quality test cases. Remember, though, that none of it is mandatory for achieving your testing goals with BaseRock.
  3. Write prompts that are specific to your testing goals while generating test cases. You can mix the examples below into a single prompt if needed. Examples:
  • If you want security and I18N test cases along with regular coverage, write:
1- Along with standard coverage, generate security test cases following OWASP rules, and also internationalisation test cases for the languages Arabic, French, German and Hindi.
2- Do not create positive test cases for security, only negative.
3- Do not create more than 3 test cases for each language during I18N testing.
4- Make sure to involve tests that include PII data in the payload.
  • If you have a quantity criterion, mention the number of test cases to be generated:
Generate only 1 test case for GET methods.
Generate not more than 5 edge cases for DELETE method APIs.
  • You can give negative prompts too:
Ensure none of the test cases have similar payloads and ensure there is no duplication.
Make sure test cases do not focus on optional payload fields.
Don't generate non-functional test cases for the POST method.
  • Do not write vague prompts like "Create good quality test cases". In such cases it's better to leave the prompt area empty and just click the Generate button.

Playbooks usage

  1. Before you jump into BaseRock and start automating, have a mental diagram or a written flow of what is required to run an API: authentication, specific payload values that are always fixed, generation of any random value before every run, etc.
  2. Any precondition that is common to all endpoints MUST be written in the service-level playbook, not in an endpoint-level playbook.
  3. After the playbooks are saved, always check the AI workflows (aka AI adaption) of the respective endpoint, and also at the test case level, to be sure of the sequence of steps that will execute before, during, and after the test.
  4. While generating playbooks, visualise the type of configurations, validations, and post-test instructions you want, and mention them in the prompt area before clicking Generate Playbooks. Examples:
  • Validation instructions can be written as:
Validate that every successful response contains the message "successfully sent" for all POST endpoints of the /user/form router.
  • Looping of instruction steps can be done like this, where instructions can even send external API requests during runtime:
LOOP every 5 seconds up to 30 seconds until it succeeds
GET /file/download/${{fileID}} with headers 
--header 'X-QWERT-UUID: 822ccd6c-ee85-47c7-8081-ab89ad896e6c' 
Verify 200 response code
Verify response contains data.userid.value
END LOOP
  • Dynamic value generations like random text or system date can be done like this:
1. Generate a random email id following this pattern: name_role@gmail.com, and store it in variable testEmail.
2. Store the system date in variable sysDate once the test execution is over and pass it on to the Teardown step.

Test suite and test run

  1. Always create and keep multiple flavours of run_tests.sh files with different configurations, renaming each file accordingly. For example: run_fullRegression.sh, run_serviceA_sanity.sh, run_onlyHappyFlows.sh, run_onlyPUTmethods.sh, run_POST_auth_endpoint.sh
  2. Once playbooks and test cases are generated, always try executing some test cases individually from the UI before running a full suite. Check Execute single test case

Appendix

Try hands-on with sample TODO List application:

  1. https://github.com/BaseRock-AI/todo-web-service-public for the backend
  2. https://github.com/BaseRock-AI/todo-web-app for the frontend
