To force outstanding handlers to run at that point in the task list, add:

- meta: flush_handlers
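For context, a minimal sketch of where flush_handlers sits in a play (the nginx config file and handler name are illustrative):

```yaml
- hosts: webservers
  tasks:
    - name: Update nginx config          # notifies the handler below
      template: src=nginx.conf.j2 dest=/etc/nginx/nginx.conf
      notify: restart nginx
    # Without this, 'restart nginx' would only run at the end of the play
    - meta: flush_handlers
    - name: Check the service responds after the restart
      command: curl -sf http://localhost/
  handlers:
    - name: restart nginx
      service: name=nginx state=restarted
```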
You can configure Ansible to fail on a condition with:

pre_tasks:
  - name: Fail if OS is not CentOS
    fail:
      msg: "This playbook only supports CentOS"
    when: "ansible_distribution != 'CentOS'"
The following shows how to set a dynamic fact, where both the fact's name (key) and its value come from variables:

- set_fact:
    "{{ groups['nginx'][groups['nodejs'].index(inventory_hostname)] }}": "{{ hostvars[inventory_hostname]['ansible_eth0']['ipv4']['address'] }}"

Here we set a fact whose key is the host in the nginx group at the same index as the current host has in the nodejs group, and we assign it the IP address of the current host.
You can print it as follows:

- name: print
  debug:
    msg: "{{ hostvars[groups['nodejs'][groups['nginx'].index(inventory_hostname)]][groups['nginx'][groups['nodejs'].index(inventory_hostname)]] }}"
The index can be accessed as:

- name: Print index
  debug:
    msg: "Index is {{ groups['nginx'].index(inventory_hostname) }}"

This prints the index of the current host in the group "nginx", starting from 0.
First one found of:
$ANSIBLE_CONFIG
./ansible.cfg
~/.ansible.cfg
/etc/ansible/ansible.cfg
Configuration settings can be overridden by environment variables - see constants.py in the source tree for names.
Used on the ansible command line, or in playbooks.
all (or *)
foo.example.com
webservers
webservers:dbservers
webservers:!phoenix
webservers:&staging
Operators can be chained: webservers:dbservers:&staging:!phoenix
Patterns can include variable substitutions: {{foo}}, wildcards: *.example.com or 192.168.1.*, and regular expressions: ~(web|db).*\.example\.com
intro_inventory.html, intro_dynamic_inventory.html
'INI-file' structure; blocks define groups. Hosts are allowed in more than one group. A non-standard SSH port can follow the hostname separated by ':' (but see also ansible_ssh_port below).
Hostname ranges: www[01:50].example.com, db-[a:f].example.com
Per-host variables: foo.example.com foo=bar baz=wibble
[foo:children]: new group foo containing all members of included groups
[foo:vars]: variable definitions for all members of group foo
The inventory file defaults to /etc/ansible/hosts. Override with -i or in the configuration file. The 'file' can also be a dynamic inventory script. If a directory, all contained files are processed.
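Putting those INI features together, a sketch of an inventory file (hostnames and group names invented):

```ini
mail.example.com:2222

[webservers]
www[01:50].example.com
foo.example.com foo=bar baz=wibble

[dbservers]
db-[a:f].example.com

[sitea:children]
webservers
dbservers

[sitea:vars]
ntp_server=ntp.sitea.example.com
```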
YAML; given inventory file at ./hosts:
./group_vars/foo: variable definitions for all members of group foo
./host_vars/foo.example.com: variable definitions for foo.example.com
group_vars and host_vars directories can also exist in the playbook directory. If both paths exist, variables in the playbook directory will be loaded second.
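For instance (a sketch), ./group_vars/webservers might contain ordinary variable definitions:

```yaml
---
http_port: 80
max_clients: 200
```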
ansible_ssh_host
ansible_ssh_port
ansible_ssh_user
ansible_ssh_pass
ansible_sudo_pass
ansible_connection
ansible_ssh_private_key_file
ansible_python_interpreter
ansible_*_interpreter
playbooks_intro.html, playbooks_roles.html
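A host line setting a few of these connection variables might look like (values invented):

```ini
jumpbox ansible_ssh_host=192.0.2.10 ansible_ssh_port=2222 ansible_ssh_user=deploy ansible_python_interpreter=/usr/local/bin/python
```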
Playbooks are a YAML list of one or more plays. Most (all?) keys are optional. Lines can be broken on space with continuation lines indented.
Playbooks consist of a list of one or more ‘plays’ and/or inclusions:
---
- include: playbook.yml
- <play>
- ...
playbooks_intro.html, playbooks_roles.html, playbooks_variables.html, playbooks_conditionals.html, playbooks_acceleration.html, playbooks_delegation.html, playbooks_prompts.html, playbooks_tags.html, Forum posting
Plays consist of play metadata and a sequence of task and handler definitions, and roles.
- hosts: webservers
  remote_user: root
  sudo: yes
  sudo_user: postgres
  su: yes
  su_user: exim
  gather_facts: no
  accelerate: no
  accelerate_port: 5099
  any_errors_fatal: yes
  max_fail_percentage: 30
  connection: local
  serial: 5
  vars:
    http_port: 80
  vars_files:
    - "vars.yml"
    - [ "try-first.yml", "try-second.yml" ]
  vars_prompt:
    - name: "my_password2"
      prompt: "Enter password2"
      default: "secret"
      private: yes
      encrypt: "md5_crypt"
      confirm: yes
      salt: 1234
      salt_size: 8
  tags:
    - stuff
    - nonsense
  pre_tasks:
    - <task>
    - ...
  roles:
    - common
    - { role: common, port: 5000, when: "bar == 'Baz'", tags: [one, two] }
    - { role: common, when: month == 'Jan' }
    - ...
  tasks:
    - include: tasks.yaml
    - include: tasks.yaml foo=bar baz=wibble
    - include: tasks.yaml
      vars:
        foo: aaa
        baz:
          - z
          - y
    - { include: tasks.yaml, foo: zzz, baz: [a, b] }
    - include: tasks.yaml
      when: day == 'Thursday'
    - <task>
    - ...
  post_tasks:
    - <task>
    - ...
  handlers:
    - include: handlers.yml
    - <task>
    - ...
Using encrypt with vars_prompt requires that Passlib is installed.
In addition the source code implies the availability of the following which don’t seem to be mentioned in the documentation: name, user (deprecated), port, accelerate_ipv6, role_names, and vault_password.
playbooks_intro.html, playbooks_roles.html, playbooks_async.html, playbooks_checkmode.html, playbooks_delegation.html, playbooks_environment.html, playbooks_error_handling.html, playbooks_tags.html, ansible-1-5-released, Forum posting, Ansible examples
Each task definition is a hash of keys and values, normally including at least a name and a module invocation:
- name: task
  remote_user: apache
  sudo: yes
  sudo_user: postgres
  sudo_pass: wibble
  su: yes
  su_user: exim
  ignore_errors: True
  delegate_to: 127.0.0.1
  async: 45
  poll: 5
  always_run: no
  run_once: false
  meta: flush_handlers
  no_log: true
  environment: <hash>
  environment:
    var1: val1
    var2: val2
  tags:
    - stuff
    - nonsense
  <module>: src=template.j2 dest=/etc/foo.conf
  action: <module> src=template.j2 dest=/etc/foo.conf
  action: <module>
  args:
    src: template.j2
    dest: /etc/foo.conf
  local_action: <module> /usr/bin/take_out_of_pool {{ inventory_hostname }}
  when: ansible_os_family == "Debian"
  register: result
  failed_when: "'FAILED' in result.stderr"
  changed_when: result.rc != 2
  notify:
    - restart apache
delegate_to: 127.0.0.1 is implied by local_action:
The forms <module>: <args>, action: <module> <args>, and local_action: <module> <args> are mutually-exclusive.
Additional keys when_*, until, retries and delay are documented below under ‘Loops’.
In addition the source code implies the availability of the following which don't seem to be mentioned in the documentation: first_available_file (deprecated), transport, connection, any_errors_fatal.
Directory structure:
playbook.yml
roles/
  common/
    tasks/
      main.yml
    handlers/
      main.yml
    vars/
      main.yml
    meta/
      main.yml
    defaults/
      main.yml
    files/
    templates/
    library/
modules.html, modules_by_category.html
List all installed modules with:
ansible-doc --list
Document a particular module with:
ansible-doc <module>
Show playbook snippet for specified module:
ansible-doc -s <module>
playbooks_roles.html, playbooks_variables.html
Names: letters, digits, underscores; starting with a letter.
{{ var }}
{{ var["key1"]["key2"] }}
{{ var.key1.key2 }}
{{ list[0] }}
YAML requires an item starting with a variable substitution to be quoted.
--extra-vars on the command line
vars component of a playbook
vars_files in a playbook
register: in tasks
/etc/ansible/facts.d/filename.fact on managed machines (sets variables with the ansible_local.filename. prefix)
hostvars (e.g. hostvars['other.example.com'][...])
group_names (groups containing current host)
groups (all groups and hosts in the inventory)
inventory_hostname (current host as in inventory)
inventory_hostname_short (first component of inventory_hostname)
play_hosts (hostnames in scope for current play)
inventory_dir (location of the inventory)
inventory_file (name of the inventory)
Run ansible hostname -m setup to see facts, in particular:
ansible_distribution
ansible_distribution_release
ansible_distribution_version
ansible_fqdn
ansible_hostname
ansible_os_family
ansible_pkg_mgr
ansible_default_ipv4.address
ansible_default_ipv6.address
playbooks_conditionals.html, playbooks_loops.html
Depends on module. Typically includes:
.rc
.stdout
.stdout_lines
.changed
.msg (following failure)
.results (when used in a loop)
See also the failed, changed, etc. filters.
When used in a loop the result element is a list containing all responses from the module.
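For example (a sketch), registering a looped command and then reading individual responses back out of .results:

```yaml
- shell: echo {{ item }}
  with_items:
    - alpha
    - beta
  register: echoes

- debug: msg="First iteration printed {{ echoes.results[0].stdout }}"
```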
ansible_managed: string containing the information below
template_host: node name of the template's machine
template_uid: the owner
template_path: absolute path of the template
template_fullpath: the absolute path of the template
template_run_date: the date that the template was rendered

{{ var | to_nice_json }}
{{ var | to_json }}
{{ var | from_json }}
{{ var | to_nice_yml }}
{{ var | to_yml }}
{{ var | from_yml }}
{{ result | failed }}
{{ result | changed }}
{{ result | success }}
{{ result | skipped }}
{{ var | mandatory }}
{{ var | default(5) }}
{{ list1 | unique }}
{{ list1 | union(list2) }}
{{ list1 | intersect(list2) }}
{{ list1 | difference(list2) }}
{{ list1 | symmetric_difference(list2) }}
{{ ver1 | version_compare(ver2, operator='>=', strict=True) }}
{{ list | random }}
{{ number | random }}
{{ number | random(start=1, step=10) }}
{{ list | join(" ") }}
{{ path | basename }}
{{ path | dirname }}
{{ path | expanduser }}
{{ path | realpath }}
{{ var | b64decode }}
{{ var | b64encode }}
{{ filename | md5 }}
{{ var | bool }}
{{ var | int }}
{{ var | quote }}
{{ var | md5 }}
{{ var | fileglob }}
{{ var | match }}
{{ var | search }}
{{ var | regex }}
{{ var | regex_replace('from', 'to') }}
See also the default Jinja2 filters. In YAML, values starting with { must be quoted.
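A couple of these filters in use (a sketch; mylist and maybe_unset are assumed variables):

```yaml
- debug: msg="{{ mylist | unique | join(', ') }}"
- debug: msg="{{ maybe_unset | default('fallback') }}"
```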
Lookups are evaluated on the control machine.
{{ lookup('file', '/etc/foo.txt') }}
{{ lookup('password', '/tmp/passwordfile length=20 chars=ascii_letters,digits') }}
{{ lookup('env', 'HOME') }}
{{ lookup('pipe', 'date') }}
{{ lookup('redis_kv', 'redis://localhost:6379,somekey') }}
{{ lookup('dnstxt', 'example.com') }}
{{ lookup('template', './some_template.j2') }}
Lookups can be assigned to variables and will be evaluated each time the variable is used.
Lookup plugins also support loop iteration (see below).
when: <condition>, where condition is:
var == "Value", var >= 5, etc.
var, where var coerces to boolean (yes, true, True, TRUE)
var is defined, var is not defined
<condition1> and <condition2> (also or)
Combined with with_items, the when statement is processed for each item.
when can also be applied to includes and roles. Conditional Imports and variable substitution in file and template names can avoid the need for explicit conditionals.
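For example (file and variable values illustrative), a conditional include and a conditional role:

```yaml
tasks:
  - include: debian_tasks.yml
    when: ansible_os_family == "Debian"

roles:
  - { role: common, when: ansible_os_family == "RedHat" }
```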
In addition the source code implies the availability of the following which don’t seem to be mentioned in the documentation: csvfile, etcd, inventory_hostname.
- user: name={{ item }} state=present groups=wheel
  with_items:
    - testuser1
    - testuser2

- name: add several users
  user: name={{ item.name }} state=present groups={{ item.groups }}
  with_items:
    - { name: 'testuser1', groups: 'wheel' }
    - { name: 'testuser2', groups: 'root' }

with_items: somelist

- mysql_user: name={{ item[0] }} priv={{ item[1] }}.*:ALL append_privs=yes password=foo
  with_nested:
    - [ 'alice', 'bob', 'eve' ]
    - [ 'clientdb', 'employeedb', 'providerdb' ]
Given
---
users:
  alice:
    name: Alice Appleworth
    telephone: 123-456-7890
  bob:
    name: Bob Bananarama
    telephone: 987-654-3210

tasks:
  - name: Print phone records
    debug: msg="User {{ item.key }} is {{ item.value.name }} ({{ item.value.telephone }})"
    with_dict: users
- copy: src={{ item }} dest=/etc/fooapp/ owner=root mode=600
  with_fileglob:
    - /playbooks/files/fooapp/*
In a role, relative paths resolve relative to the roles/<rolename>/files directory.
(see example for authorized_key module)
- authorized_key: user=deploy key="{{ item }}"
  with_file:
    - public_keys/doe-jane
    - public_keys/doe-john
See also the file lookup when the content of a file is needed.
Given
---
alpha: [ 'a', 'b', 'c', 'd' ]
numbers: [ 1, 2, 3, 4 ]
- debug: msg="{{ item.0 }} and {{ item.1 }}"
  with_together:
    - alpha
    - numbers
Given
---
users:
  - name: alice
    authorized:
      - /tmp/alice/onekey.pub
      - /tmp/alice/twokey.pub
  - name: bob
    authorized:
      - /tmp/bob/id_rsa.pub

- authorized_key: "user={{ item.0.name }} key='{{ lookup('file', item.1) }}'"
  with_subelements:
    - users
    - authorized
Decimal, hexadecimal (0x3f8) or octal (0600)
- user: name={{ item }} state=present groups=evens
  with_sequence: start=0 end=32 format=testuser%02x

with_sequence: start=4 end=16 stride=2
with_sequence: count=4
- debug: msg={{ item }}
  with_random_choice:
    - "go through the door"
    - "drink from the goblet"
    - "press the red button"
    - "do nothing"
- action: shell /usr/bin/foo
  register: result
  until: result.stdout.find("all systems go") != -1
  retries: 5
  delay: 10
- name: Example of looping over a command result
  shell: /usr/bin/frobnicate {{ item }}
  with_lines: /usr/bin/frobnications_per_host --param {{ inventory_hostname }}
To loop over the results of a remote program, use register: result and then with_items: result.stdout_lines in a subsequent task.
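A sketch of that pattern (assuming a hypothetical /usr/bin/list_things that prints one item per line):

```yaml
- name: Collect items on the remote host
  shell: /usr/bin/list_things
  register: result

- name: Process each line of the output
  command: /usr/bin/process {{ item }}
  with_items: result.stdout_lines
```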
- name: indexed loop demo
  debug: msg="at array position {{ item.0 }} there is a value {{ item.1 }}"
  with_indexed_items: some_list
---
# file: roles/foo/vars/main.yml
packages_base:
  - [ 'foo-package', 'bar-package' ]
packages_apps:
  - [ [ 'one-package', 'two-package' ] ]
  - [ [ 'red-package' ], [ 'blue-package' ] ]

- name: flattened loop demo
  yum: name={{ item }} state=installed
  with_flattened:
    - packages_base
    - packages_apps
- name: template a file
  template: src={{ item }} dest=/etc/myapp/foo.conf
  with_first_found:
    - files:
        - "{{ ansible_distribution }}.conf"
        - default.conf
      paths:
        - search_location_one/somedir/
        - /opt/other_location/somedir/
Both plays and tasks support a tags: attribute.
- template: src=templates/src.j2 dest=/etc/foo.conf
  tags:
    - configuration
Tags can be applied to roles and includes (effectively tagging all included tasks)
roles:
  - { role: webserver, port: 5000, tags: [ 'web', 'foo' ] }

- include: foo.yml tags=web,foo
To select by tag:
ansible-playbook example.yml --tags "configuration,packages"
ansible-playbook example.yml --skip-tags "notification"
Usage: ansible <host-pattern> [options]
Options:
-a MODULE_ARGS, --args=MODULE_ARGS
module arguments
-k, --ask-pass ask for SSH password
--ask-su-pass ask for su password
-K, --ask-sudo-pass ask for sudo password
--ask-vault-pass ask for vault password
-B SECONDS, --background=SECONDS
run asynchronously, failing after X seconds
(default=N/A)
-C, --check don't make any changes; instead, try to predict some
of the changes that may occur
-c CONNECTION, --connection=CONNECTION
connection type to use (default=smart)
-f FORKS, --forks=FORKS
specify number of parallel processes to use
(default=5)
-h, --help show this help message and exit
-i INVENTORY, --inventory-file=INVENTORY
specify inventory host file
(default=/etc/ansible/hosts)
-l SUBSET, --limit=SUBSET
further limit selected hosts to an additional pattern
--list-hosts outputs a list of matching hosts; does not execute
anything else
-m MODULE_NAME, --module-name=MODULE_NAME
module name to execute (default=command)
-M MODULE_PATH, --module-path=MODULE_PATH
specify path(s) to module library
(default=/usr/share/ansible)
-o, --one-line condense output
-P POLL_INTERVAL, --poll=POLL_INTERVAL
set the poll interval if using -B (default=15)
--private-key=PRIVATE_KEY_FILE
use this file to authenticate the connection
-S, --su run operations with su
-R SU_USER, --su-user=SU_USER
run operations with su as this user (default=root)
-s, --sudo run operations with sudo (nopasswd)
-U SUDO_USER, --sudo-user=SUDO_USER
desired sudo user (default=root)
-T TIMEOUT, --timeout=TIMEOUT
override the SSH timeout in seconds (default=10)
-t TREE, --tree=TREE log output to this directory
-u REMOTE_USER, --user=REMOTE_USER
connect as this user (default=jw35)
--vault-password-file=VAULT_PASSWORD_FILE
vault password file
-v, --verbose verbose mode (-vvv for more, -vvvv to enable
connection debugging)
--version show program's version number and exit
Usage: ansible-playbook playbook.yml
Options:
-k, --ask-pass ask for SSH password
--ask-su-pass ask for su password
-K, --ask-sudo-pass ask for sudo password
--ask-vault-pass ask for vault password
-C, --check don't make any changes; instead, try to predict some
of the changes that may occur
-c CONNECTION, --connection=CONNECTION
connection type to use (default=smart)
-D, --diff when changing (small) files and templates, show the
differences in those files; works great with --check
-e EXTRA_VARS, --extra-vars=EXTRA_VARS
set additional variables as key=value or YAML/JSON
-f FORKS, --forks=FORKS
specify number of parallel processes to use
(default=5)
-h, --help show this help message and exit
-i INVENTORY, --inventory-file=INVENTORY
specify inventory host file
(default=/etc/ansible/hosts)
-l SUBSET, --limit=SUBSET
further limit selected hosts to an additional pattern
--list-hosts outputs a list of matching hosts; does not execute
anything else
--list-tasks list all tasks that would be executed
-M MODULE_PATH, --module-path=MODULE_PATH
specify path(s) to module library
(default=/usr/share/ansible)
--private-key=PRIVATE_KEY_FILE
use this file to authenticate the connection
--skip-tags=SKIP_TAGS
only run plays and tasks whose tags do not match these
values
--start-at-task=START_AT
start the playbook at the task matching this name
--step one-step-at-a-time: confirm each task before running
-S, --su run operations with su
-R SU_USER, --su-user=SU_USER
run operations with su as this user (default=root)
-s, --sudo run operations with sudo (nopasswd)
-U SUDO_USER, --sudo-user=SUDO_USER
desired sudo user (default=root)
--syntax-check perform a syntax check on the playbook, but do not
execute it
-t TAGS, --tags=TAGS only run plays and tasks tagged with these values
-T TIMEOUT, --timeout=TIMEOUT
override the SSH timeout in seconds (default=10)
-u REMOTE_USER, --user=REMOTE_USER
connect as this user (default=jw35)
--vault-password-file=VAULT_PASSWORD_FILE
vault password file
-v, --verbose verbose mode (-vvv for more, -vvvv to enable
connection debugging)
--version show program's version number and exit
playbooks_vault.html
Usage: ansible-vault [create|decrypt|edit|encrypt|rekey] [--help] [options] file_name
Options:
-h, --help show this help message and exit
See 'ansible-vault <command> --help' for more information on a specific command.
Usage: ansible-doc [options] [module...]
Show Ansible module documentation
Options:
--version show program's version number and exit
-h, --help show this help message and exit
-M MODULE_PATH, --module-path=MODULE_PATH
Ansible modules/ directory
-l, --list List available modules
-s, --snippet Show playbook snippet for specified module(s)
-v Show version number and exit
Usage: ansible-galaxy [init|info|install|list|remove] [--help] [options] ...
Options:
-h, --help show this help message and exit
See 'ansible-galaxy <command> --help' for more information on a
specific command
Usage: ansible-pull [options] [playbook.yml]
inventory_hostname contains the name of the current node being worked on (that is, what it is defined as in your hosts file), so if you want to skip a task for a single node:

- name: Restart amavis
  service: name=amavis state=restarted
  when: inventory_hostname != "boris"

(Don't restart Amavis for boris; do for all others.)
You could also use:

when: inventory_hostname not in groups['group_name']

if your aim is to run (or skip) a task for some nodes in the specified group.
- name: Check for reboot hint.
  shell: if [ $(readlink -f /vmlinuz) != /boot/vmlinuz-$(uname -r) ]; then echo 'reboot'; else echo 'no'; fi
  ignore_errors: true
  register: reboot_hint

- name: Rebooting ...
  command: shutdown -r now "Ansible kernel update applied"
  async: 0
  poll: 0
  ignore_errors: true
  when: kernelup|changed or reboot_hint.stdout.find("reboot") != -1
  register: rebooting

- name: Wait for thing to reboot...
  pause: seconds=45
  when: rebooting|changed
Often an Ansible script may create a remote node, and often it will have the same IP/name as a previous entity. This confuses SSH, so after creating it:

- name: Fix .ssh/known_hosts. (1)
  local_action: command ssh-keygen -f "~/.ssh/known_hosts" -R hostname
If you're using EC2, for instance, you could do something like:
- name: Fix .ssh/known_hosts.
  local_action: command ssh-keygen -f "~/.ssh/known_hosts" -R {{ item.public_ip }}
  with_items: ec2_info.instances
Where ec2_info is your registered variable from calling the ‘ec2’ module.
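For example (a sketch; the AMI id, keypair and other ec2 module parameters are placeholders):

```yaml
- name: Launch an instance
  local_action:
    module: ec2
    image: ami-123456          # placeholder AMI id
    instance_type: t1.micro
    keypair: mykey             # placeholder keypair name
  register: ec2_info
```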
- name: What's in reboot_hint? debug: var=reboot_hint
which might output something like :
"reboot_hint": {
    "changed": true,
    "cmd": "if [ $(readlink -f /vmlinuz) != /boot/vmlinuz-$(uname -r) ]; then echo 'reboot'; else echo 'no'; fi",
    "delta": "0:00:00.024759",
    "end": "2014-07-29 09:05:06.564505",
    "invocation": {
        "module_args": "if [ $(readlink -f /vmlinuz) != /boot/vmlinuz-$(uname -r) ]; then echo 'reboot'; else echo 'no'; fi",
        "module_name": "shell"
    },
    "rc": 0,
    "start": "2014-07-29 09:05:06.539746",
    "stderr": "",
    "stdout": "reboot",
    "stdout_lines": [
        "reboot"
    ]
}
Which leads on to the next point: registered variables have useful attributes, such as .changed, .rc, .stdout and .stdout_lines (see above).
- name: Do something
  shell: /usr/bin/something | grep -c foo || true
  register: shell_output

So we could:

- name: Catch some fish (there are at least 5)
  shell: /usr/bin/somethingelse
  when: shell_output.stdout > "5"
Perhaps you'll override a variable, or perhaps not; so you can do something like the following in a template:
...
max_allowed_packet = {{ mysql_max_allowed_packet|default('128M') }}
...
And for the annoying hosts that need a larger mysql_max_allowed_packet, just define it in the inventory hosts file like:

[linux_servers]
beech
busy-web-server mysql_max_allowed_packet=256M
A common mistake is to write:

---
- vars_prompt:
  - name: "var1"
    prompt: "Please pass variable"
    private: no
- fail: msg="var1 is not passed or blank"
  when: var1 is undefined or ( var1 is defined and storeid == "" )
when it should be as follows:
---
- hosts: all
  vars_prompt:
    - name: "var1"
      prompt: "Please pass variable"
      private: no
  tasks:
    - fail: msg="var1 is not passed or blank"
      when: var1 is undefined or ( var1 is defined and storeid == "" )
The example referenced is just a task. It is not a valid playbook, because it is missing a hosts declaration and the module call is not under a tasks section.
- name: deploy get url
  win_get_url:
    url: 'http://server_ip/builds/build.zip'
    dest: 'D:\build.zip'

- name: deploy unzip
  win_unzip:
    src: D:\build.zip
    dest: D:\
- name: Get ES cluster health
  uri:
    url: http://{{ inventory_hostname }}:9200/_cluster/health
    return_content: yes
  register: cluster_status

- set_fact:
    es_cluster_health: "{{ cluster_status.content | from_json }}"

- fail: msg="ES cluster not healthy"
  when: "es_cluster_health.status != 'yellow'"
You can compare the status with any string you want; here it is compared with the string "yellow".
If you haven’t installed GoCD yet, you can follow the GoCD Installation Guide to install the GoCD Server and at least one GoCD Agent. This is a good point to stop and learn about the first concept in GoCD.
In the GoCD ecosystem, the server is the one that controls everything. It provides the user interface to users of the system and provides work for the agents to do. The agents are the ones that do any work (run commands, do deployments, etc) that is configured by the users or administrators of the system.
The server does not do any user-specified “work” on its own. It will not run any commands or do deployments. That is the reason you need a GoCD Server and at least one GoCD Agent installed before you proceed.
Once you have them installed and running, you should see a screen similar to this, if you navigate to the home page in a web browser (defaults to: http://localhost:8153):

If you have installed the GoCD Agent properly and click on the “Agents” link in the header, you should see an idle GoCD agent waiting (as shown below). If you do not, head over to the troubleshooting page to figure out why.

Congratulations! You’re on your way to using GoCD. If you now click “Pipelines”, you’ll get back to the “Add pipeline” screen you saw earlier.
Before creating a pipeline, it might help to know what it is and concepts around it.
A pipeline, in GoCD, is a representation of workflow or a part of a workflow. For instance, if you are trying to automatically run tests, build an installer and then deploy an application to a test environment, then those steps can be modeled as a pipeline. GoCD provides different modeling constructs within a pipeline, such as stages, jobs and tasks. We will see these in more detail soon. For the purpose of this part of the guide, you need to know that a pipeline can be configured to run a command or set of commands.
Another equally important concept is that of a material. A material is a cause for a pipeline to “trigger” or to start doing what it is configured to do. Typically, a material is a source control repository (like Git, Subversion, etc) and any new commit to the repository is a cause for the pipeline to trigger. A pipeline needs to have at least one material and can have as many materials of different kinds as you want.
The concept of a pipeline is extremely central to Continuous Delivery. Together with the concepts of stages, jobs and tasks, GoCD provides important modeling blocks which allow you to build up very complex workflows, and get feedback quicker. You'll learn more about GoCD pipelines and deployment pipelines in the upcoming parts of this guide. In case you cannot wait, Martin Fowler has a nice and succinct article here.
Now, back at the “Add pipeline” screen, let’s provide the pipeline a name, without spaces, and ignore the “Pipeline Group” field for now.

Pressing “Next” will take you to step 2, which can be used to configure a material.

You can choose your own material here [1], or use a Git repository available on GitHub for this guide. The URL of that repository is: https://github.com/gocd-contrib/getting-started-repo.git. You can change “Material Type” to “Git” and provide the URL in the “URL” textbox. If you now click on the “Check Connection” button, it should tell you everything is OK.

This step assumes that you have git installed on the GoCD Server and Agent. Like git, any other commands you need for running your scripts need to be installed on the GoCD Agent nodes.
If you had trouble with this step, and it failed, take a look at the troubleshooting page in the documentation. If everything went well, press “Next” to be taken to step 3, which deals with stages, jobs and tasks.

Since a stage name and a job name are populated, and in the interest of quickly achieving our goal of creating and running a pipeline in GoCD, let us delay understanding the (admittedly very important) concepts of stage and job and focus on a task instead. Scrolling down a bit, you’ll see the “Initial Job and Task” section.

Since we don’t want to use “Ant” right now, let’s change the “Task Type” to “More…”. You should see a screen like this:

Change the text of the “Command” text box to “./build” (that is, dot forward-slash build) and press “Finish”. If all went well, you just created your first pipeline! Leaving you in a screen similar to the one shown below.

Helpfully, the pipeline has been paused for you (see the pause button and the text next to it, right next to the pipeline name). This allows you to make more changes to the pipeline before it triggers. Usually, pipeline administrators can take this opportunity to add more stages, jobs, etc. and model their pipelines better. For the purpose of this guide, let’s just un-pause this pipeline and get it to run. Click on the blue “pause” button and then click on the “Pipelines” link in the header.
If you give it a minute, you’ll see your pipeline building (yellow) and then finish successfully (green):


Clicking on the bright green bar will show you information about the stage:

and then clicking on the job will take you to the actual task and show you what it did:

Scrolling up a bit, you can see it print out the environment variables for the task and the details of the agent this task ran on (remember “Concept 1”?).

