Jul 09 2010

Five Days in the Labs – Part 2: DB2 pureScale Coupling Facility

By James Gill

We were lucky enough to spend a week in the IBM Boeblingen Labs in Germany, getting some hands-on experience with DB2 9.8 – pureScale – or Data Sharing on mid-range.

The platform that we were working with was a five-node configuration – two coupling facilities (CFs) and three DB2 member nodes – implemented across three partitioned pSeries p550 servers, with 1TB of disk supporting it.

On z/OS, the performance and behaviour of the CFs in a Data Sharing Group (DSG) can have an enormous impact on the overall viability of the solution – both in terms of availability and performance. 

In DB2 for z/OS, the CF configuration follows a client-server model. Members in the DSG request actions from the CF, which manages the structures as well as servicing the client requests.

These requests are delivered to the CF by scheduling them through XES, which queues them for delivery to the CF over XCF. The CF receives each request and may queue it, depending on its current workload. The request is then dispatched, the answer is resolved and the response is returned to the requester through the XCF transport and XES interface.
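
To make the later contrast with pureScale clearer, this flow can be pictured as a round trip through a queue. The toy C sketch below models it with an in-process ring buffer; the names (xes_schedule, cf_dispatch and so on) are ours for illustration only and are not z/OS or DB2 interfaces – the point is simply that every CF interaction is a scheduled request that the CF itself has to dispatch and answer.

    #include <stdio.h>

    /* Toy model of the flow described above: a member schedules a request,
     * it is queued for the CF, the CF dispatches it and a response comes
     * back. All names are illustrative; none of this is a real z/OS API. */

    typedef struct { int member_id; const char *action; } cf_request;
    typedef struct { int ok; const char *detail; } cf_response;

    #define CF_QUEUE_DEPTH 8
    static cf_request queue[CF_QUEUE_DEPTH];
    static int q_head, q_tail;

    /* "XES" side: queue the request for delivery to the CF. */
    static int xes_schedule(cf_request req)
    {
        if ((q_tail + 1) % CF_QUEUE_DEPTH == q_head)
            return -1;                  /* CF busy: the request would wait */
        queue[q_tail] = req;
        q_tail = (q_tail + 1) % CF_QUEUE_DEPTH;
        return 0;
    }

    /* "CF" side: dispatch the next queued request and resolve an answer. */
    static cf_response cf_dispatch(void)
    {
        cf_request req = queue[q_head];
        q_head = (q_head + 1) % CF_QUEUE_DEPTH;
        printf("CF dispatching '%s' for member %d\n", req.action, req.member_id);
        return (cf_response){ .ok = 1, .detail = "done" };
    }

    int main(void)
    {
        cf_request req = { .member_id = 1, .action = "get page" };
        if (xes_schedule(req) == 0) {
            cf_response rsp = cf_dispatch();  /* response flows back to the member */
            printf("member 1 got response: %s\n", rsp.detail);
        }
        return 0;
    }

The contrast with pureScale, described next, is that in this model the CF has work to do for every request.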

The model is different with pureScale, where the CF is used as a remote data cache – the intelligence being retained in the group members. 

On DB2 pureScale, the CF structures are accessed using the InfiniBand (IB) remote direct memory access (RDMA) protocol. This allows a requester on one box to interact directly with a preconfigured data area on another box. Further, the protocol and data access are managed entirely by the IB cards, without having to interrupt the CPUs to complete any of the operations. This is extremely efficient, especially when coupled with the low latency of IB. So whilst the CFs are presented as simple data areas, the performance of the member interactions is limited only by the capacity of the IB network and the horsepower of the IB cards used to access the data areas.
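
As a concrete illustration of what "interact directly with a preconfigured data area on another box" means, the minimal C sketch below posts a one-sided RDMA read using the standard libibverbs API. It assumes a reliable-connection queue pair that has already been connected and a local buffer that has already been registered (that setup is omitted for brevity), and it shows the general RDMA technique only – the actual pureScale CF transport is internal to DB2 and not exposed like this.

    #include <stdint.h>
    #include <infiniband/verbs.h>

    /* One-sided RDMA READ: pull `len` bytes from a remote memory region
     * (remote_addr/rkey) into a locally registered buffer. The remote CPU
     * is not involved - the remote IB card serves the read from memory. */
    static int rdma_read(struct ibv_qp *qp, struct ibv_mr *local_mr,
                         void *local_buf, uint64_t remote_addr,
                         uint32_t rkey, uint32_t len)
    {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)local_buf,   /* where the data lands locally  */
            .length = len,
            .lkey   = local_mr->lkey,         /* key of the local registration */
        };
        struct ibv_send_wr wr = {
            .wr_id      = 1,
            .sg_list    = &sge,
            .num_sge    = 1,
            .opcode     = IBV_WR_RDMA_READ,   /* one-sided read, no remote receive */
            .send_flags = IBV_SEND_SIGNALED,  /* ask for a completion entry        */
        };
        wr.wr.rdma.remote_addr = remote_addr; /* remote data area address */
        wr.wr.rdma.rkey        = rkey;        /* remote registration key  */

        struct ibv_send_wr *bad_wr = NULL;
        if (ibv_post_send(qp, &wr, &bad_wr))
            return -1;

        /* Spin on the send completion queue until the read has finished. */
        struct ibv_wc wc;
        int n;
        while ((n = ibv_poll_cq(qp->send_cq, 1, &wc)) == 0)
            ;
        return (n == 1 && wc.status == IBV_WC_SUCCESS) ? 0 : -1;
    }

Because the transfer is executed by the adapters themselves, the host CPUs only get involved when the completion is consumed, which is the behaviour described above.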

The CF in DB2 pureScale is implemented with three processes:

  • ca-server
  • ca-mgmnt-lwd
  • ca-wdog

These are not currently covered in the DB2 pureScale documentation, but it seems reasonable to assume that one owns the CF structures, one provides restart monitoring and the other provides operational information and management capabilities to the member nodes.

Assessing the performance of the CF is tricky, as the IB network performance is difficult to directly diagnose. As the network stack is not directly involved in the RDMA conversation, tools like netstat do not provide any insight. Further, it does not currently seem possible to detect queuing depths for RDMA requests in the CF IB card.

Having said that, there is a wealth of information available about the CF structures themselves and their usage levels through enhancements to the db2pd tool. The following are example commands that we used to understand the impact of our workload on the CFs (a simple way of capturing their output over time is sketched after the command list):

db2pd -cfinfo
db2pd -db sample -cfinfo gbp
db2pd -db sample -cfinfo lock
db2pd -db sample -cfinfo lockv
db2pd -db sample -cfinfo list
db2pd -db sample -cfinfo sca
db2pd -db sample -cfinfo 128 perf
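
One simple way to see how a workload moves these numbers is to capture the output at intervals and compare the samples afterwards. The sketch below does this in C with popen(); the database name sample, the -cfinfo gbp option and the 60-second interval are just taken from the examples above, and the raw output is appended to a log rather than parsed, since the format is not yet documented.

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    /* Sample a db2pd CF query at a fixed interval and append the raw output
     * to a log file, so CF usage can be compared across a workload run. */
    int main(void)
    {
        const char *cmd = "db2pd -db sample -cfinfo gbp";  /* as in the examples above */
        FILE *log = fopen("cfinfo-gbp.log", "a");
        if (log == NULL)
            return 1;

        for (;;) {
            time_t now = time(NULL);
            fprintf(log, "==== %s", ctime(&now));  /* timestamp each sample */

            FILE *p = popen(cmd, "r");             /* run db2pd, capture its stdout */
            if (p == NULL)
                break;

            char line[512];
            while (fgets(line, sizeof line, p) != NULL)
                fputs(line, log);
            pclose(p);
            fflush(log);

            sleep(60);                             /* illustrative sampling interval */
        }

        fclose(log);
        return 0;
    }

Comparing the logged samples before and after a test run gives a rough view of how the workload changed the CF structure usage.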

We’re looking forward to the documentation update so that some of the information produced will make more sense!

Note that a lot of the basic information returned by these commands is also available in the SYSIBMADM views implemented in DB2 pureScale. More on these in the following blogs.

We were very impressed by the performance and resilience of DB2 pureScale whilst working in the Labs. These are the main focus of the next two blogs.
