Greetings from the Polish Database User Group (PDUG) meeting in Krakow. PDUG is one of the biggest and most vibrant user groups in Europe, and is celebrating its 25th meeting. I am known as the Godfather of the user group, as I agreed to sponsor them as a regular speaker at their inception. There is an impressive list of attendees: over 110 in total, with about 70% focused on Db2 for z/OS. It’s run like a family business. For me it is an opportunity to meet old friends, both customers and vendors. The attendees are so very enthusiastic, appreciative and passionate about Db2.
The first session was a presentation by Jacek Rafalak all about z/OS workflows using z/OSMF when migrating to Db2 13 for z/OS. The vast majority of customers have already migrated, or are planning to migrate, to Db2 13 for z/OS using the classic method. But it is highly likely that the use of z/OSMF workflows will become mandatory for the next release of Db2 for z/OS. Use of z/OSMF is already mandatory for migrating to z/OS Version 3. Basically, the use of z/OSMF is all about simplifying and automating tasks. The Db2 for z/OS product provides sample predefined workflow artefacts, and the process is role-based. There are lots of new default values for Db2 system parameters to reflect best practices and encourage the use of new features. Jacek covered how to prepare for using z/OSMF, how important it is to review the overall process, decide what is actually needed, and build a checklist before starting. JCL parcels are put into dependent steps. Steps can be skipped, modified, added or forced. A very nice capability is being able to export the workflow into a printable format to provide documentation. A workflow can also be imported from CLIST output. Review instructions are provided for the JCL parcels created, and the SMP/E receive and apply is done automatically for you. The z/OSMF method is going to be very productive for customers with a large number of Db2 environments. Use of z/OSMF is the future. I would recommend that all customers invest time and effort now in investigating and getting familiar with the z/OSMF method of migration.
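For readers who have not seen one before, the JCL parcels, dependencies and review instructions described above are carried in an XML workflow definition file that z/OSMF consumes. The fragment below is purely illustrative: DSNTIJTC is the familiar Db2 catalog migration job, but the element names here are abbreviated from IBM's workflow definition schema, which has additional required elements, so check the z/OSMF documentation before building your own.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sketch of a z/OSMF workflow step; not a complete,
     schema-valid definition. Real definitions carry more metadata. -->
<workflow>
  <workflowInfo>
    <workflowID>db2_13_migration_sample</workflowID>
    <workflowDescription>Sample Db2 13 migration step</workflowDescription>
    <workflowVersion>1.0</workflowVersion>
  </workflowInfo>
  <step name="runCatMaint">
    <title>Run the catalog migration job</title>
    <instructions>Review the generated JCL before submitting.</instructions>
    <template>
      <inlineTemplate substitution="false">//DSNTIJTC JOB ...</inlineTemplate>
      <submitAs>JCL</submitAs>
    </template>
  </step>
</workflow>
```

Because each step is declared with its dependencies, z/OSMF can enforce ordering, track progress per role, and allow individual steps to be skipped or overridden, which is exactly the behaviour Jacek demonstrated.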
The next session that got my attention was “A Day in the Life of a Db2 Schema”, presented by Toine Michelse from Broadcom. Toine is a very enthusiastic and passionate speaker on this subject. This session was primarily about mainframe modernisation, and specifically about modernising how we do things and how we interact with Db2 for z/OS to improve the life cycle of a Db2 schema. The main focus was on automating IT processes and developer self-service. There are important technology pieces that can be combined to help and add value, and Toine went through a number of examples: source code management with GitHub to help integrate application components and kick off other mandatory processes (e.g., testing); process orchestration with Jenkins to automate process execution, provide sample reporting and integrate well with GitHub; process orchestration with z/OSMF to automate process execution; and use of open source Ansible for infrastructure maintenance. Some of the technology pieces are mainframe-only, but they can be driven from distributed processes using the Zowe CLI. This session provided a manifesto for developing high-quality mainframe application software through process integration, automated processes which are repeatable, robust and efficient, and quality gates for enforcing standards and adding value.
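To give a flavour of the Ansible piece, the sketch below shows what infrastructure maintenance against z/OS might look like. It assumes the ibm_zos_core collection from Red Hat Ansible Certified Content for IBM Z; the module names are real, but the subsystem name DB2A, the data set names, and parameter details are my own illustrative assumptions and vary by collection version, so treat this as a sketch rather than a working playbook.

```yaml
# Illustrative playbook sketch: check Db2 status, then submit a maintenance job.
# Assumes the ibm.ibm_zos_core collection is installed; DB2A and the
# data set names are hypothetical.
- name: Db2 housekeeping (sketch)
  hosts: zos_host
  collections:
    - ibm.ibm_zos_core
  tasks:
    - name: Check Db2 status with an operator command
      zos_operator:
        cmd: "-DB2A DISPLAY DATABASE(*) LIMIT(*)"

    - name: Submit a maintenance job from a data set
      zos_job_submit:
        src: "HLQ.MAINT.JCL(REORG)"
        location: data_set
```

A playbook like this is exactly the kind of repeatable, self-service process the session advocated: a developer can run it on demand, and Jenkins or GitHub can run it as a quality gate.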
A session that I really appreciated was presented by Nigel Slinger and Wen Jie Zhu from BMC. It was a talk about how to build your own Artificial Intelligence (AI) application to minimize disruptions to business applications. What IT operations want from Machine Learning/AI in real time is to minimize disruptions to business applications: lower mean time to detection, lower mean time to resolution, ease of use, low cost, explainability, and minimal false positives. Ideally what they want is an “easy button” that will “give me the answer before I have a problem”. To work towards a solution, we must start by understanding the normal profile, score Key Performance Indicators (KPIs) in real time, and use multivariate analysis to allow early detection, minimize false positives and identify events. False positives can only be suppressed by using multivariate analysis, ignoring system start-up periods, and learning from normal-situation data; most importantly, abnormal and outage periods must be excluded. Graph database technology should be used to look at previously defined cause-and-effect relationships and help determine the real cause and effect. Key messages: allow experts to teach the model, augment it with tribal knowledge, tooling is required, and the approach works well with graph and vector databases. Retrieval Augmented Generation (RAG) techniques allow Large Language Model (LLM) programs using deep learning to access and retrieve information from external sources such as graph and vector databases. Overall, a very interesting session on building your own AI application to minimize disruptions to business applications.
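To make the scoring idea concrete, here is a minimal sketch (my own illustration, not BMC's implementation) of the approach described above: build a profile of each KPI from normal periods only, with start-up and outage windows already excluded, then flag a live sample only when the combined multivariate score is high.

```python
# Sketch of real-time KPI scoring against a "normal" profile.
from statistics import mean, stdev

def build_profile(history):
    """history: dict of KPI name -> list of values taken from normal
    periods only (start-up and outage windows already excluded)."""
    return {kpi: (mean(vals), stdev(vals)) for kpi, vals in history.items()}

def score(sample, profile):
    """Return per-KPI z-scores and a combined multivariate score."""
    z = {kpi: abs(sample[kpi] - m) / s if s else 0.0
         for kpi, (m, s) in profile.items()}
    return z, sum(z.values()) / len(z)

history = {
    "cpu_pct":    [40, 42, 38, 41, 39, 43],  # hypothetical normal data
    "lock_waits": [5, 6, 4, 5, 7, 5],
}
profile = build_profile(history)
z, combined = score({"cpu_pct": 75, "lock_waits": 30}, profile)
# Alert on the combined score, not any single KPI, so that one noisy
# metric is less likely to raise a false positive.
alert = combined > 3.0
```

Because the alert fires on the combined deviation of several KPIs rather than on any single metric, a lone spike is much less likely to trigger a false positive, which is the behaviour the speakers were after.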
End of a very good day at PDUG.