Jul 10 2020 | Posted by Damir Wilder

DB2 Replatforming to the Cloud – 6 features to enable

Most DBAs, System Administrators or Engineers reading this will already have been part of projects moving applications and/or databases to the Cloud, or will be planning one in the near future. According to a recent LogicMonitor Cloud 2025* study, 87% of enterprises will accelerate their migration to the cloud in a post-Covid-19 world. The study also revealed that on-premise workloads are set to fall by 13% over the next five years.

This DB2 Replatforming to the Cloud article lists six DB2 features to consider enabling when migrating your DB2 databases. The list is by no means exhaustive; it represents the practices Triton Consulting has considered and implemented in the Cloud migration projects we have advised on and delivered so far.

Convert all tablespaces to Automatic Storage and Reclaimable

If you are still using the good old (but very much obsolete) DMS and/or SMS tablespaces, now is the time to drop that old habit and move on! Also, if you have been using AS (Automatic Storage) tablespaces since before DB2 version 9.7, there is a good chance your TBSPs are non-reclaimable (i.e. the empty space within the TBSPs cannot easily be returned to the disks). To find out if this is the case, use the db2pd tool:
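For example (a minimal sketch, assuming a database called SAMPLE), the RSE column of the db2pd tablespace report shows whether reclaimable storage is enabled for each tablespace:

  db2pd -db SAMPLE -tablespaces

Alternatively, the same information is available through the monitoring function MON_GET_TABLESPACE:

  -- RECLAIMABLE_SPACE_ENABLED = 1 means the tablespace is reclaimable
  SELECT TBSP_NAME, TBSP_TYPE, RECLAIMABLE_SPACE_ENABLED
  FROM TABLE(MON_GET_TABLESPACE(NULL, -2)) AS T;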

Of course, your mileage will vary, so it's worth experimenting for a while to find out which settings are optimal for your environment.
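As for the conversion itself, a minimal sketch might look like this (TS_DATA is a placeholder tablespace name):

  -- Convert a DMS tablespace to automatic storage and move its data
  -- off the old containers:
  ALTER TABLESPACE TS_DATA MANAGED BY AUTOMATIC STORAGE;
  ALTER TABLESPACE TS_DATA REBALANCE;

  -- On a reclaimable tablespace, unused extents can later be
  -- returned to the storage layer:
  ALTER TABLESPACE TS_DATA REDUCE MAX;

Note that converting a tablespace does not make it reclaimable; a non-reclaimable tablespace has to be recreated, for example by moving its tables into a newly created tablespace with ADMIN_MOVE_TABLE.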

Revision of user accounts

Over time, the total number of user accounts with specific sets of access privileges on the database and its objects will increase. Whether those are, or were, database administrators, developers or read-only business user IDs is sometimes hard to tell. People come and go from organisations, and more often than not their digital accounts are not cleaned up properly. Artefacts remain, and sometimes no one is 100% sure what will happen if an obscure user account is dropped, or whether it can be dropped at all (there may be an equally obscure part of the application services still using it, however rarely).

Replatforming your whole IT infrastructure is a good moment to deal with this issue. Plan ahead and set aside some time to test how your applications behave with a reduced set of user accounts on the database. For starters, leave just those user accounts that you know for sure are being used and see if all parts of your application, or applications, work as expected. If any part of the application fails due to insufficient DB access rights, there will be clear error messages in the logs, and the required grants can be reinstated (or the application can be rewritten to use one of the remaining user accounts).
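The system catalog is a good starting point for that audit; for instance (a sketch, with the schema filter adjusted to your environment):

  -- Who holds database-level authorities?
  SELECT GRANTEE, GRANTEETYPE, DBADMAUTH, CONNECTAUTH, DATAACCESSAUTH
  FROM SYSCAT.DBAUTH;

  -- Who holds privileges on which tables?
  SELECT GRANTEE, TABSCHEMA, TABNAME, SELECTAUTH, INSERTAUTH, UPDATEAUTH, DELETEAUTH
  FROM SYSCAT.TABAUTH
  WHERE TABSCHEMA NOT LIKE 'SYS%';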

Enable data compression

The option of compressing the data in our databases has been around for quite a while now. And even better, with the consolidated editions in DB2 v11.5, compression is available in both the Standard and the Advanced edition! Fears that constantly having to compress and decompress the data will bog down your CPU do not hold much water. The thing is, DB2 compression has become so deeply embedded in the database engine that it can operate directly on compressed data. Yes, DB2 can fetch pages of compressed data and check whether any of the rows on those pages match the query criteria without having to decompress the data. How cool is that? On the one hand you don't spend any more CPU resources than you otherwise would, and on the other you can make considerable I/O savings. Official IBM statements say that adaptive compression in DB2 can typically compress your data anywhere from 30% to 70%, and in some extreme cases by more than 80-90%, and that is huge!

So, what are you waiting for? Find all the still-uncompressed tables and/or indexes in your databases and draw up a plan to compress them at some stage of the replatforming process.
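A quick way to find candidates is the system catalog (a sketch; a blank ROWCOMPMODE means no row compression is enabled):

  -- Uncompressed regular tables outside the system schemas:
  SELECT TABSCHEMA, TABNAME
  FROM SYSCAT.TABLES
  WHERE TYPE = 'T'
    AND ROWCOMPMODE = ' '
    AND TABSCHEMA NOT LIKE 'SYS%';

  -- Uncompressed indexes:
  SELECT INDSCHEMA, INDNAME, TABNAME
  FROM SYSCAT.INDEXES
  WHERE COMPRESSION = 'N'
    AND TABSCHEMA NOT LIKE 'SYS%';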

Compressing an uncompressed DB2 table in situ will require some downtime: the table must be reorganised (REORG) in order to activate the compression and build the compression dictionary.
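A sketch of that in-place route (MYSCHEMA.MYTABLE is a placeholder):

  -- Enable adaptive compression, rebuild the table and its
  -- compression dictionary, then refresh the statistics:
  ALTER TABLE MYSCHEMA.MYTABLE COMPRESS YES ADAPTIVE;
  REORG TABLE MYSCHEMA.MYTABLE RESETDICTIONARY;
  RUNSTATS ON TABLE MYSCHEMA.MYTABLE;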

If you are already replatforming, then it makes sense to create all tables in the target database(s) as compressed, before loading them with the data from the source database.

Enable data encryption

Everyone (and their dog) is talking about security, security and then some more security. Data encryption falls squarely into this category, so without further ado: if you too are concerned about your database's security, do enable data encryption at all levels!
In DB2 v11.5, native encryption is included in both the Standard and the Advanced edition.

Data encryption at rest is natively supported in DB2 databases; to quote the IBM Knowledge Center:
DB2 native encryption encrypts your DB2 database, requires no hardware, software, application, or schema changes, and provides transparent and secure key management.
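Setting it up is straightforward; a minimal sketch (the keystore path, password and database name are placeholders):

  # Create a local PKCS#12 keystore with a stashed password
  # (GSKit ships with DB2):
  gsk8capicmd_64 -keydb -create -db /db2/keystore/db2keys.p12 \
      -pw "Str0ngPassw0rd" -type pkcs12 -stash

  # Point the instance at the keystore:
  db2 update dbm cfg using keystore_type pkcs12 \
      keystore_location /db2/keystore/db2keys.p12

  # Create a natively encrypted database:
  db2 create database mydb encrypt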

To encrypt data in transit between clients and DB2 databases, you can use the older approach with the DATA_ENCRYPT authentication type (although some considerations apply there), or the newer and better DB2 database system support for Secure Sockets Layer (SSL).
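On the server side, a minimal SSL sketch might look like this (the key database, certificate label and port are placeholders, and the client needs the matching certificate in its own keystore):

  # Register the server key database, stash file, certificate label
  # and the SSL port in the DBM configuration:
  db2 update dbm cfg using ssl_svr_keydb /db2/ssl/server.kdb \
      ssl_svr_stash /db2/ssl/server.sth \
      ssl_svr_label myservercert ssl_svcename 50001

  # Enable the SSL communication protocol alongside TCPIP and restart:
  db2set DB2COMM=SSL,TCPIP
  db2stop
  db2start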

System Architecture change

If your on-premise architecture is on AIX, then you will have to consider the difference in endianness when you migrate to the Cloud, where the servers are typically based on a Linux architecture.

AIX is a "big endian" architecture, whereas Linux distributions are "little endian" (in the Cloud probably without exception, although some big-endian Linux distros for POWER do still exist). This means you cannot simply take a database backup on-premise, ship it over to the Cloud and restore it there: the backup image will be binary-incompatible with the target (little-endian) system.

In the case of a system architecture change, you will have to take a longer and more complicated route, migrating the data in several steps:

  1. Extract the DDL with db2look on the source system and use it to recreate the (empty) database on the target system
  2. Export the data on the source system
  3. Transfer the data to the target system (typically via SCP)
  4. Load the data into the target system database

(steps 2-4 can be done in a single pass using LOAD FROM CURSOR, provided that the source and target databases can reach each other, as sketched below)
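A sketch of steps 1 and 2-4 (SRCDB, the credentials and MYSCHEMA.MYTABLE are placeholders; the LOAD runs on the target system, with the source database catalogued there):

  # Step 1: extract the DDL (tables, tablespaces, grants) on the source:
  db2look -d SRCDB -e -l -x -o srcdb_ddl.sql

  -- Steps 2-4 in a single pass, connected to the target database:
  DECLARE remote_cur CURSOR DATABASE SRCDB USER db2inst1 USING passw0rd
      FOR SELECT * FROM MYSCHEMA.MYTABLE;
  LOAD FROM remote_cur OF CURSOR INSERT INTO MYSCHEMA.MYTABLE NONRECOVERABLE;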

This certainly looks more complicated than a simple backup/restore approach, and it is, so careful planning is required, followed by thorough testing of the migration procedure, to ensure a successful migration at the end!

If you require support with your DB2 Replatforming project contact Triton’s team of experts, visit our website for details on our modernisation service or download Application Modernisation – The Data Perspective by Iqbal Goralwalla. Triton provide consultancy and services to help organisations plan, implement and reduce operational risks associated with moving DB2 applications to the Cloud.


*Access the LogicMonitor Cloud 2025 study here

