Posted by James Gill, 2024

Ansible and z/OS – Messing with Modules


An Ansible module is a way of extending Ansible’s existing capabilities and functionality. Whilst there is already a huge amount of content available to us in the core capabilities and extended collections, we might want to add a module to provide a capability that doesn’t already exist, or that we could otherwise only deliver with a long and complex role.

An Ansible module should be focused on delivering coverage of a single thing – i.e. in the example below, Db2 DDF information. A single module covering everything that we might want to do with Db2, MQ and CICS functionality would be a really bad example and probably be (a) enormous, (b) an everlasting development challenge and (c) a maintenance nightmare! As the Ansible doc says: “follow the UNIX philosophy of doing one thing well.”

Where the scope / ambition of the additional capabilities is larger than a single thing, consider creating a collection instead. More on these in the next blog.

The body of this blog will cover the anatomy of a module and an example, as well as how to validate it and how to use it.


Some Documentation

The module creation process is fairly simple and documented in the Ansible developer guide, under “Ansible module development: getting started”.

There is also a section on getting the structure and documentation correct, “Module format and documentation”.

It’s important to get all of this right to ensure that the module is self-documenting and to the standard expected by the community, as well as getting the functional code correct.


The Anatomy of a Module

We’re going to focus on Python based modules – which make up the vast majority of Ansible modules. Modules can be written in any language that will run on the intended platform, although interpretive languages will inevitably reach more targets than compiled ones.

A Python Ansible module should begin with the Python shebang – this is the pseudo comment that tells Linux which interpreter to use to process the rest of the file.

The shebang should be followed by a comment clarifying the file encoding – e.g.

# -*- coding: utf-8 -*-

Then the copyright comment, which should be the short form – e.g.

# Copyright: Triton Consulting
# GNU General Public License v3.0+ (see COPYING or
#     https://www.gnu.org/licenses/gpl-3.0.txt)

and not the whole GPL license statement.

If you’re going to supply the __future__ import, this needs to come next:

from __future__ import (absolute_import, division, print_function)

This has to go before the first code items (which are the documentation assignments – below), but note that no other imports should be here – they should come after the doc.

This is then followed by the documentation of the feature being implemented, one or more examples of use and what the returned data should look like. These are all required and are processed by ansible-doc to generate the module documentation. The specific documentation sections / variables are:

  • DOCUMENTATION – this gives the name of the module, what it does, what parameters it takes and what requirements / dependencies it may have.
  • EXAMPLES – this shows one or more examples of how to use the module in a play.
  • RETURN – what form the returned dictionary takes

Have a look at the documentation link above, and the example module (db2_get_ddf.py), below.

The documentation is then followed by any Python imports and then the implementation code.
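Pulling that ordering together, the top of a module file looks like the sketch below. The module name, option and author are placeholders rather than the real db2_get_ddf content:

```python
#!/usr/bin/python
# -*- coding: utf-8 -*-

# Copyright: Your Company
# GNU General Public License v3.0+ (see COPYING or
#     https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import (absolute_import, division, print_function)

DOCUMENTATION = r'''
---
module: my_module
short_description: One line summary of what the module does
description:
    - A longer description of what the module does and how.
options:
    name:
        description: An example parameter.
        required: true
        type: str
author:
    - Your Name (@yourhandle)
'''

EXAMPLES = r'''
- name: Use my_module in a play
  my_module:
    name: example
  register: my_result
'''

RETURN = r'''
name:
    description: The original name param that was passed in.
    type: str
    returned: always
'''

# all other imports come after the documentation strings,
# followed by the implementation code
```

ansible-doc reads the three documentation strings, so they have to be valid YAML as well as sitting before any other imports.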


An Example – Gathering Db2 for z/OS DDF Configuration Information

The example below started off as a set of tasks in a role but rapidly became quite complicated. This module allows us to simplify the task to a single activity and make better use of available resources. It works by using the ZOA Utilities (ZOAU) mvscmd to run the TSO command processor (IKJEFT01) and use this to run the Db2 for z/OS TSO command processor (DSN) to execute the “-DIS DDF” command. We then parse the results into fields in the “ddf” dictionary and return them to the caller.
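The parsing side of this can be tried off-host. The sketch below mimics the module’s approach against an illustrative fragment of SYSTSPRT output (the message IDs are real -DIS DDF ones, but the values are made up, and each line carries a leading print control character as it would from IKJEFT01). The module itself uses a match/case statement; plain if/elif here keeps the sketch compatible with older Pythons:

```python
# Illustrative SYSTSPRT fragment; the real text comes back from IKJEFT01 / DSN.
# The first character of each line is the print control character.
sample = """\
 DSNL080I  -DBDG DSNLTDDF DISPLAY DDF REPORT FOLLOWS:
 DSNL081I STATUS=STARTD
 DSNL082I LOCATION           LUNAME            GENERICLU
 DSNL083I DBDG               GB0TDBDG.DBDGLU1  -NONE
 DSNL099I DSNLTDDF DISPLAY DDF REPORT COMPLETE
"""

ddf = {}
capt = False
for line in sample.splitlines():
    # lose the print control character
    tline = line[1:].rstrip()
    if len(tline.split()) > 1:
        msgid, msg = tline.split(' ', 1)
        words = msg.split()
        if msgid == "DSNL080I":
            capt = True          # start of the DDF report
        if capt:
            if msgid == "DSNL081I":
                # STATUS=status
                ddf['status'] = words[0].split('=', 1)[1]
            elif msgid == "DSNL083I":
                # location-name  luname  genericlu
                ddf['location'] = words[0]
                ddf['luname'] = words[1]
                ddf['genericlu'] = words[2]
            elif msgid == "DSNL099I":
                capt = False     # end of the DDF report

print(ddf)
```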

Fiddly bits with doing this were:

  1. No real documented examples for doing the concatenated STEPLIB allocation. The Db2 command processor needs both the Db2 exit library (SDSNEXIT) and the Db2 load library (SDSNLOAD) to function. We ended up looking at some of the ibm_zos_core modules in GitHub to get a clue! (Please feel free to use this code as an example)
  2. There doesn’t seem to be an easy way to pass a set of commands to TSO except by putting them all in a temporary dataset. There’s probably an “opportunity” for someone to add something to ibm_zos_core…

All of the code referenced in this blog, including the complete module code, is available in the GitHub repository that accompanies this post.

The following is snipped a bit to keep the blog manageable. Things to note are:

  • The “result” dictionary initialisation. We return data in the “ddf” member (also a dictionary).
  • The “module = AnsibleModule” assignment, which defines the parameters that the module accepts. The module object is then used to access the supplied parms. Be careful to ensure that the DOCUMENTATION aligns with this.
  • If the module is successful, we return information in “result” by calling the module.exit_json() method.
  • If the module is not successful and we want to flag a failure, we can still return “result”, but with module.fail_json().
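Under the covers, both of these methods print a single JSON document to stdout and exit – that is all the controller sees. A hand-built illustration of the shapes involved (values made up; a real module never builds this output itself, it always goes through exit_json / fail_json):

```python
import json

# roughly what module.exit_json(**result) emits on success
success = dict(changed=False, db2ssid="DBDG", ddf=dict(status="STARTD"))
print(json.dumps(success))

# module.fail_json(msg=..., **result) adds "failed": true alongside the result
failure = dict(failed=True, msg="Error processing DSN command request",
               changed=False, db2ssid="DBDG")
print(json.dumps(failure))
```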

And the rest, as they say, is Python:

#!/usr/bin/python
# -*- coding: utf-8 -*-

# Copyright: Triton Consulting
# GNU General Public License v3.0+ (see COPYING or
#     https://www.gnu.org/licenses/gpl-3.0.txt)

from __future__ import (absolute_import, division, print_function)

DOCUMENTATION = r'''
---
module: db2_get_ddf
short_description: Get the Db2 for z/OS DDF configuration
...snipped...
'''

EXAMPLES = r'''
- name: Get DDF configuration
  db2_get_ddf:
    db2ssid: DBDG
  register: ddf

- name: Show the DDF TCP port
  ansible.builtin.debug:
    msg: "TCPPORT = {{ ddf.tcpport }}"
'''

RETURN = r'''
db2ssid:
    description: The original db2ssid param that was passed in.
    type: str
    returned: always
    sample: 'DBDG'
...snipped...
'''

from zoautil_py import mvscmd, datasets
from zoautil_py.types import DDStatement, DatasetDefinition, FileDefinition
import uuid
from os import environ, remove
from ansible.module_utils.basic import AnsibleModule


def run_module():
    # init the result dict
    result = dict(changed=False)

    # define the module argument(s)
    module = AnsibleModule(
        argument_spec=dict(
            db2ssid=dict(type='str', aliases=['ssid', 'db2'], required=True)
        )
    )
    params = module.params
    db2ssid = params['db2ssid']
    result['db2ssid'] = db2ssid

    # setup DDs ahead of calling IKJEFT01.
    ddstmt = []

    # Db2 load libraries named in the ssid_DB2LIBS environment variable
    db2libs = environ.get("%s_DB2LIBS" % (db2ssid))
    if db2libs is None:
        db2libs = "DSND10.%s.SDSNEXIT:DSND10.SDSNLOAD" % (db2ssid)
    dl_ents = db2libs.split(':')
    steplib = []
    for dlds in dl_ents:
        steplib.append(DatasetDefinition(dlds))
    ddstmt.append(DDStatement("STEPLIB", steplib))

    # create SYSTSIN TSO cmd file. NB uuid used to make the file unique
    cmdfile = "/tmp/db2_get_ddf_cmd_%s.txt" % (uuid.uuid4())
    with open(cmdfile, mode="w", encoding="cp1047") as ip:
        ip.write("DSN S(%s)\n" % (db2ssid))
        ip.write("  -DIS DDF\n")
    systsin = FileDefinition(cmdfile)
    ddstmt.append(DDStatement("SYSTSIN", systsin))

    # make SYSTSPRT a SYSOUT=* allocation
    ddstmt.append(DDStatement("SYSTSPRT", "*"))

    # we have to call IKJEFT01 authorised
    rsp = mvscmd.execute_authorized(pgm="IKJEFT01", dds=ddstmt)

    # tidy up the SYSTSIN file
    try:
        remove(cmdfile)
    except OSError:
        pass

    # parse the response
    r = rsp
    out = []
    ddf = {}
    alias = {}
    capt = False
    if (r.rc == 0):
        for line in r.stdout_response.splitlines():
            # lose the print control character
            tline = line[1:].rstrip()
            out.append(tline)
            if (len(tline.split()) > 1):
                msgid, msg = tline.split(' ', 1)
                words = msg.split()
                if (msgid == "DSNL080I"):
                    capt = True
                if (capt):
                    match msgid:
                        case "DSNL081I":
                            # STATUS=status
                            ddf['status'] = words[0].split('=', 1)[1]
                        case "DSNL083I":
                            # location-name  luname  genericlu
                            ddf['location'] = words[0]
                            ddf['luname'] = words[1]
                            ddf['genericlu'] = words[2]
                        # ...snipped: further DSNL08xI cases (ports, aliases, etc.)...
                        case "DSNL099I":
                            capt = False
        if (len(alias) > 0):
            ddf['aliases'] = alias
        ddf['out'] = out
    else:
        out.append("** rc = %d **" % (r.rc))
        for line in r.stderr_response.splitlines():
            out.append(line)
        for line in r.stdout_response.splitlines():
            out.append(line)
        ddf['out'] = out
        result['ddf'] = ddf
        module.fail_json(msg='Error processing DSN command request', **result)

    result['ddf'] = ddf
    result['changed'] = False
    # exit the module and return the json result
    module.exit_json(**result)


def main():
    run_module()


if __name__ == '__main__':
    main()


Testing The Module

There are two parts to this really:

  • Testing the production of documentation that you’ve created
  • Testing the functionality of the module

In the following examples, we’ll assume that we’ve kept the same directory structure as in the GitHub repository – i.e.

  • Playbook / role home ($HOME/ansible, below)
    • chk_ddf.yml = playbook
    • zpdt.yml = inventory
    • host_vars – host vars directory
      • zpdt.yml = host vars for host named “zpdt” in the inventory
    • library – our module directory
      • db2_get_ddf.py = our module (as above)

Working in the Playbook / role home directory, we can run a test generation of the documentation from the module like this:

ansible-doc -t module db2_get_ddf -M $HOME/ansible/library

This generates the three document sections and pipes them to the “more” command:

>DB2_GET_DDF    (/home/james/ansible/library/db2_get_ddf.py)

Get the Db2 for z/OS DDF configuration details from TSO DSN -DIS DDF and return the
results as JSON. This uses the ZOA Utilities to execute the IKJEFT01 TSO command
processor, and then runs the Db2 DSN command from there. The Db2 load libraries are
named in the ssid_DB2LIBS environment variable as colon separated MVS dataset names. The
data is returned as a JSON / dictionary including the command output.

ADDED IN: version 0.0.1

OPTIONS (= is mandatory):

= db2ssid

The Db2 subsystem ID to issue the command to.
The subsystem must be local to the current host.
aliases: [ssid, db2]
type: str

REQUIREMENTS:  IBM ZOA Utilities v1.2.5.0 and up, IBM Open Enterprise SDK for Python 3.11.5 and up, IBM Db2 for z/OS V12.1 and up

AUTHOR: James Gill (@db2dinosaur)



EXAMPLES:

- name: Get DDF configuration
  db2_get_ddf:
    db2ssid: DBDG
  register: ddf


To check the function, we will need to run the code. In our case, we will need to set up some environment variables (to tell the module what the Db2 load libraries are) and create a test play to call the module.

We used Ansible host_vars to set up a variable called “environment_vars” for this specific host. This variable is used in the test play (see below). As well as our Db2 load libraries (DBDG_DB2LIBS for subsystem DBDG), we also set the other environment variables used by the Python interpreter and ZOAU tooling on our z/OS host. This is in the GitHub repo as “host_vars/zpdt.yml”:

# Environment vars for zpdt
PYZ: "/usr/lpp/IBM/cyp/v3r11/pyz"
ZOAU: "/usr/lpp/IBM/zoautil"
JAVA: "/usr/lpp/java/J11.0_64"

environment_vars:
  PYTHONPATH: "{{ ZOAU }}/lib"
  LIBPATH: "{{ PYZ }}/lib:{{ ZOAU }}/lib:{{ JAVA }}/lib:/lib:/usr/lib"
  PATH: "{{ ZOAU }}/bin:{{ PYZ }}/bin:{{ JAVA }}/bin:/bin:/usr/bin"
  MANPATH: "{{ ZOAU }}/docs/%L"

The test playbook looks like this (“chk_ddf.yml” in the repo):

# Get DDF Configuration
- name: "Get DDF Ports for DBDG"
  hosts: zpdt
  gather_facts: false
  environment: "{{ environment_vars }}"

  tasks:
    - name: "Playbook start timestamp"
      ansible.builtin.debug:
        msg: "{{ lookup('pipe','date') }}"

    - name: "Run -DIS DDF"
      db2_get_ddf:
        db2ssid: DBDG
      register: ddf

    - name: "What did we get?"
      ansible.builtin.debug:
        msg: "{{ ddf | to_nice_json }}"

    - name: "Playbook end timestamp"
      ansible.builtin.debug:
        msg: "{{ lookup('pipe','date') }}"


Note that the environment is set from the host_vars configured “environment_vars”.

Finally, to run this, use the shell script “chk_ddf.sh” in the repo, which looks like this:

# Check the db2_get_ddf module
# Make sure that we have the library hooked:
ANSIBLE_LIBRARY=./library ansible-playbook -i zpdt.yml chk_ddf.yml


The ANSIBLE_LIBRARY var tells Ansible where to look for the module.


What If I’ve Got Lots of Modules?

Then, my friend, you need a collection, which is what we will be discussing in the next Ansible blog.


