Posted by James Cockayne, 2021

Ransomware and the DB2 Database – Part Two

Prevention is Better than Cure

In part one of this blog we looked at what might happen to a DB2 database attacked by ransomware encryption.  As with catastrophic storage failures, recovering from these simulated attacks would require restoring the database or reinstalling binaries – something any good DBA will have planned for, but not a pleasant experience for any of us.  The situation may be further complicated by the difficulty of finding and closing the vulnerabilities that allowed the systems to be infiltrated in the first place, along with any backdoors the hackers planted for future use.  This may mean the database servers have to be rebuilt from scratch on servers known not to have been compromised.

The good news from our testing is that the HADR standby in our test system remained available for failover.  If the primary DB2 engine has gone down this would have to be a forced takeover, but it provides a lifeline to quickly get a copy of the database available.  Needless to say, for the standby database to be viable you would have to be sure that the standby server had not itself been compromised.  Good security hygiene is required here, and it may make some administration tasks more difficult – for example, the underlying software and hardware stack must be separate from the primary, and the passwords or certificates needed to connect must also be separate and secure.

We are seeing increasing numbers of customers who maintain their own data centres using a cloud-based HADR standby for DR; as that stack is completely separate from on-premises it makes a good safety net.  With up to four HADR nodes possible, it makes sense to run a primary and principal standby pair in on-prem data centres with automatic failover for standard HA/DR requirements, plus a third node on cloud-provided infrastructure for ‘extreme’ DR.  With the third node in SUPERASYNC mode there is no chance of back pressure affecting performance on the primary, and the occasional downtime associated with single-node cloud VMs is not an issue for most customers – if it were, a fourth node in a different cloud availability zone could be used.  Customers who have migrated their DB2 installations to cloud virtual machines (IaaS) can use this architecture too: for multi-cloud customers a different cloud provider gives a good degree of separation, while those who stick to one supplier can configure a separate account with that provider, with the appropriate routing and security.
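As a sketch of how such a three-node topology might be set up (hostnames, ports and the database name here are placeholder assumptions), the primary's HADR configuration could look something like this:

```shell
# Hypothetical hosts: onprem1 (primary), onprem2 (principal standby),
# cloud1 (auxiliary DR standby).  The first entry in HADR_TARGET_LIST
# is the principal standby; auxiliary standbys always run in SUPERASYNC
# mode regardless of the HADR_SYNCMODE setting.
db2 update db cfg for SAMPLE using HADR_LOCAL_HOST onprem1 HADR_LOCAL_SVC 4000
db2 update db cfg for SAMPLE using HADR_TARGET_LIST "onprem2:4001|cloud1:4002"
db2 update db cfg for SAMPLE using HADR_SYNCMODE SYNC   # applies to the principal standby only
```

Because the cloud node is automatically SUPERASYNC, a slow or flaky WAN link to it cannot stall transactions on the primary.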

Not everyone has the luxury of locating a database on the cloud of course, and we should be practising good security techniques regardless of whether the database is on-premises or cloud based.  Here are four dos and four don’ts to help protect your databases wherever they are located*:


Do store usernames, passwords, IP addresses, host names and ports securely.

Hopefully by now no-one is storing their usernames and passwords in a spreadsheet.  Even if your shop doesn’t want to invest in an enterprise-class password manager that logs access, changes passwords on a schedule and so on, there are free secure options such as KeePass.

It’s important to remember that any information that informs people where the databases are and what ports they are listening on is valuable to hackers looking to exploit vulnerabilities.  If you can, moving away from defaults is also a good idea – if you’re looking for a DB2 LUW database the first port you try to connect on is 50000 and the first user you try is db2inst1…
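Moving the instance off the default port is a quick change (the port number below is just an example):

```shell
# Change the listening port from the default 50000 and bounce the instance
db2 update dbm cfg using SVCENAME 60123
db2stop
db2start
```

Remember to update client catalog entries and any firewall rules to match the new port.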


Do encrypt username and password traffic – even on networks regarded as secure.

A technique preferred by ransomware attackers is to compromise a network, then simply observe for a period of time before starting encryption or exfiltration of data.  This allows them to find the valuable systems and monitor them for weaknesses to gain further access.  That network you think is secure because it’s behind a firewall may not be.  Defence in depth is key – at the very least use the server_encrypt authentication type so usernames and passwords are not sent over the network in clear text.
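Switching the authentication type is a one-line instance configuration change:

```shell
# Require clients to encrypt the userid and password on connect
db2 update dbm cfg using AUTHENTICATION SERVER_ENCRYPT
db2stop
db2start
```

Note that SERVER_ENCRYPT protects only the credentials during authentication – the data itself still travels in clear text unless TLS is also configured, as discussed next.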


Do encrypt data in transit.

Building on the idea of encrypting usernames and passwords as they traverse the network, we can also encrypt the traffic carrying the data.  DB2 supports encryption of data in transit between clients and servers, as well as HADR traffic between servers using the TLS protocol.
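As a rough sketch of the server-side TLS setup (the paths, certificate label and port below are example values; the key database itself would be created beforehand with GSKit’s gsk8capicmd_64 tool):

```shell
# Point the instance at its key database, stash file and certificate
db2 update dbm cfg using SSL_SVR_KEYDB /home/db2inst1/keys/server.kdb
db2 update dbm cfg using SSL_SVR_STASH /home/db2inst1/keys/server.sth
db2 update dbm cfg using SSL_SVR_LABEL dbserver_cert
db2 update dbm cfg using SSL_SVCENAME 50443
# Accept both TLS and plain TCP/IP while clients are being migrated
db2set DB2COMM=SSL,TCPIP
db2stop
db2start
```

Once all clients have moved over, DB2COMM can be set to SSL only so unencrypted connections are refused.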


Do encrypt data at rest – on disk and backups.

Securing your database so only authorised users can connect is a very good thing.  But it is somewhat undone if an attacker can get their hands on the data files on disk – or, even more conveniently, a backup file.  In addition to corrupting data on an enterprise’s servers, ransomware attackers are known to steal a copy of the data and demand payment to avoid releasing it.  As the overhead of DB2 native encryption shrinks with the power of new hardware, it is worth considering for any database.
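Enabling native encryption for a new database is straightforward (keystore path and database name are example values):

```shell
# Point the instance at a local PKCS#12 keystore holding the master key
db2 update dbm cfg using KEYSTORE_TYPE PKCS12 KEYSTORE_LOCATION /home/db2inst1/keys/ne.p12
# Create a database with native encryption; backups of an encrypted
# database are then encrypted automatically by default
db2 create database SECDB encrypt
```

Keep the keystore and its password well away from the database backups – an encrypted backup is no protection if the keys sit alongside it.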


Don’t allow application users to log on to the database server.

Even if DB2 is using the operating system to authenticate users, that doesn’t mean those users need permission to open a shell or desktop session on the server – removing shell or remote desktop privileges eliminates a large attack vector from the database server.  As a bonus, accidentally filled filesystems and random performance issues should (hopefully) decrease too.
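On Linux one way to do this is to give application accounts a non-interactive login shell (the account name below is an example); the user can still authenticate to DB2 but cannot open a shell:

```shell
# Replace the login shell of an application account so interactive
# logins are refused while DB2 OS authentication continues to work
sudo usermod -s /usr/sbin/nologin appuser
```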


Don’t use the same password between database servers.

In some cases this is difficult – after all if your database clients are going to automatically redirect between servers in a cluster they are going to need to use the same password.  For users such as the instance owner however the passwords should not need to be identical.  For standalone databases, and certainly between different environments such as prod and test, unique passwords should be used.


Don’t enable any one user admin access to all infrastructure in a HADR cluster.

Following on from the discussion above about keeping your HADR nodes secure so that if one is compromised another remains available: that separation is unlikely to hold if you have admin users who can access the infrastructure of all the nodes and one of those accounts is compromised.


Don’t enable the same user to drop database and delete backups.

Sometimes it is easy to forget the power a single user can wield.  If your instance owner can drop the database and also delete your backups, you have the potential to lose all your data very quickly.  There is generally no need for the instance owner to be able to delete backups directly – storage managers often have features that let a user ‘delete’ files, but in reality it is a logical delete and the files are not actually removed until a retention period has expired.  Cloud object storage can similarly enforce retention periods and remove files automatically, with no direct ability for the user to delete them.
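As one illustration, AWS S3 Object Lock can enforce a default retention period so that even a compromised account with delete permissions cannot remove backup objects early (the bucket name and retention period are assumptions, and Object Lock must have been enabled when the bucket was created):

```shell
# Enforce a 35-day compliance-mode retention on all new objects in the
# backup bucket; objects cannot be deleted or overwritten until it expires
aws s3api put-object-lock-configuration \
  --bucket db2-backup-bucket \
  --object-lock-configuration \
  '{"ObjectLockEnabled":"Enabled","Rule":{"DefaultRetention":{"Mode":"COMPLIANCE","Days":35}}}'
```

Other cloud providers offer equivalent immutability features, such as Azure Blob Storage immutability policies.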


*disclaimer – list is not exhaustive, your mileage may vary, the biggest risk to data security might be in the mirror, etc.

