Dec 15, 2021 | Posted by Damir Wilder

Enabling DB2 LOGARCHIVE directly to S3 in the latest DB2 version

Introduction

This is a follow-up to a previously published article about configuring the DB2REMOTE: option in a local DB2 instance (“local” in this case meaning “in the local data centre, not in AWS”) to enable direct DB2 backups to AWS S3 storage.

In this article, we will go one step further and examine a new option in DB2 v11.5.7 which, for the first time, allows transaction log archiving to be done directly to AWS S3 storage.

 

Background

Without going into too many technical details, we are going to use a DB2 database TESTDB created on a local Linux server within a DB2 instance that has just been upgraded to the required level v11.5.7.

(For more details on the actual setup, please read the previous article.)

The current DB2 instance level:

db2level
DB21085I  This instance or install (instance name, where applicable:
"db2dwtst") uses "64" bits and DB2 code release "SQL11057" with level
identifier "0608010F".
Informational tokens are "DB2 v11.5.7.0", "s2110181747", "DYN2110181747AMD64",
and Fix Pack "0".
Product is installed at "/opt/ibm/db2/V11.5".

 

Configuration

To use AWS S3 storage as the target for our archived DB2 transaction logs (as well as the database backups), we must do a bit of preparation first: install and configure the AWS CLI, then create the local keystore and configure DB2 to use it (again, see the previous article for details on how all this is done).
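
As a quick reminder, that preparation boils down to only a few commands. The sketch below is illustrative only: the keystore path and password are placeholders rather than the actual values used here, so adjust them to your own environment (the previous article walks through each step in detail):

aws configure    # enter your AWS access key, secret key and default region

# Create a local PKCS#12 keystore using the GSKit tool shipped with DB2
gsk8capicmd_64 -keydb -create -db /home/db2dwtst/keystore/db2keystore.p12 -pw "********" -type pkcs12 -stash

# Point the DB2 instance at that keystore (takes effect after an instance restart)
db2 update dbm cfg using KEYSTORE_TYPE PKCS12 KEYSTORE_LOCATION /home/db2dwtst/keystore/db2keystore.p12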

Next, a STORAGE ACCESS ALIAS must be defined in our DB2 instance – this will specify the AWS S3 bucket to be used (and provide the connection details such as the username/password which will be stored in the local keystore in encrypted format).


db2 "CATALOG STORAGE ACCESS ALIAS S3DWRND
     VENDOR S3
     SERVER s3.us-east-2.amazonaws.com
     USER '********************'
     PASSWORD '********************'
     CONTAINER dwrnd.triton.co.uk"

NOTE: This command configures DB2 to use the S3 bucket dwrnd.triton.co.uk, which I had defined beforehand in AWS using my personal AWS account. Obviously, I have omitted the username and the password here for security reasons, so if you want to try this out you will have to use your own AWS account to create (your own) S3 bucket 😊
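
If I remember correctly, the catalogued aliases can be listed (and removed again, should you ever need to redefine one) with the following commands; treat this as a sketch and double-check against the DB2 documentation for your level:

db2 "LIST STORAGE ACCESS"
db2 "UNCATALOG STORAGE ACCESS ALIAS S3DWRND"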

Lastly, we must update one of the LOGARCHMETH database configuration parameters to use the Storage Access Alias that we have just defined (which points to an AWS S3 bucket in this case – the only other available option, for now at least, being a SoftLayer container, according to the DB2 docs).

We will also specify a subdirectory (i.e. the AWS bucket prefix) “TESTDB_LOGS” where the logs will be saved within the AWS bucket, to keep them separated from the other files:


db2 update db cfg for TESTDB using LOGARCHMETH1 'DB2REMOTE://S3DWRND//TESTDB_LOGS'
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
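
A quick way to double-check the change (before anything has actually been archived) is to read the value back from the database configuration; the output should now show the new setting, along these lines:

db2 get db cfg for TESTDB | grep -i "log archive method"
First log archive method                 (LOGARCHMETH1) = DB2REMOTE://S3DWRND//TESTDB_LOGS
Second log archive method                (LOGARCHMETH2) = OFF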

 

The following entry appears in the DB2 diagnostic log, confirming that the configuration change went through OK:


2021-12-11-16.01.26.705399+000 I811850E569           LEVEL: Event
PID     : 11376                TID : 139858219624192 PROC : db2sysc 0
INSTANCE: db2dwtst             NODE : 000            DB   :
APPHDL  : 0-58                 APPID: *LOCAL.db2dwtst.211211160051
AUTHID  : DB2DWTST             HOSTNAME: itchy
EDUID   : 25                   EDUNAME: db2agent (instance) 0
FUNCTION: DB2 UDB, config/install, sqlfLogUpdateCfgParam, probe:20
CHANGE  : CFG DB TESTDB: "Logarchmeth1" From: "DISK:/home/db2dwtst/logarchive/"  To: "DB2REMOTE://S3DWRND//TESTDB_LOGS" 

 

DB2 Restarts

Notice that we didn’t have to stop/restart the DB2 instance at any point during this exercise – this is because the DB2 instance had already been configured to use the local keystore. But if you are doing all of this from scratch, a DB2 instance restart will be required after the keystore configuration, so keep that in mind.
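
For completeness, bouncing the instance after the keystore-related DBM CFG updates is just the usual sequence (assuming no applications need to be drained first):

db2 terminate
db2stop
db2start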

As for the database, it is a similar situation: no restart (deactivation) was required, because our test database TESTDB had already been configured for log archiving before this exercise:


First log archive method  (LOGARCHMETH1) = DISK:/home/db2dwtst/logarchive/
Second log archive method (LOGARCHMETH2) = OFF 

If this weren’t the case, then we would have had to restart the database after reconfiguring it for log archiving!
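
Moreover, if the database had been using circular logging up to that point (LOGARCHMETH1 = OFF), switching it to archive logging would also have put it into BACKUP PENDING state, so an offline backup would be needed before it could be used again. A rough sketch (the backup could just as well go to the S3 alias; the TESTDB_BACKUPS prefix below is only an example):

db2 deactivate db TESTDB
db2 backup db TESTDB to DB2REMOTE://S3DWRND//TESTDB_BACKUPS
db2 activate db TESTDB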

 

Test

At this point, the local database TESTDB should be completely configured to archive the transaction logs directly to the S3 storage.

To check this, we start executing transactional activity on the database so that it begins consuming (and archiving) log space. [That’s precisely what happens here, but let’s not go into the details 😊]
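
Alternatively, if you don’t want to wait for the active log to fill up on its own, DB2 can be asked to close and archive the current log file on demand, which makes for a quick test of the new configuration:

db2 "ARCHIVE LOG FOR DATABASE TESTDB"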

After a while, the following message(s) appear in the DB2 diagnostic log:


2021-12-11-16.26.19.866332+000 E913527E609           LEVEL: Info
PID     : 11376                TID : 139858211235584 PROC : db2sysc 0
INSTANCE: db2dwtst             NODE : 000            DB   : TESTDB
HOSTNAME: itchy
EDUID   : 78                   EDUNAME: db2logmgr (TESTDB) 0
FUNCTION: DB2 UDB, data protection services, sqlpgArchiveLogFile, probe:3175
MESSAGE : ADM1846I  Completed archive for log file "S0000305.LOG" to
"DB2REMOTE://S3DWRND//TESTDB_LOGS/db2dwtst/TESTDB/NODE0000/LOGSTREAM0000/C0000000/"
from "/home/db2dwtst/db2dwtst/NODE0000/SQL00001/LOGSTREAM0000/".

This confirms that the closed DB2 transaction log files are being archived to the (remote) S3 storage.
Just to be completely sure, let’s check the contents of the S3 Bucket (prefix TESTDB_LOGS):


aws s3 ls s3://dwrnd.triton.co.uk/TESTDB_LOGS/db2dwtst/TESTDB/NODE0000/LOGSTREAM0000/C0000000/
... 
2021-12-11 16:26:15    4202496 S0000304.LOG
2021-12-11 16:26:18    4202496 S0000305.LOG
2021-12-11 16:27:16    2121728 S0000306.LOG
...

We can clearly see that our log file(s) are archived in the S3 Bucket!
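
The real payoff comes at recovery time: because LOGARCHMETH1 now points at the storage access alias, a rollforward will retrieve the archived logs straight from the S3 bucket, with no manual copying involved. A minimal sketch of such a recovery (the backup image location below is only an example, not part of this exercise):

db2 restore db TESTDB from /home/db2dwtst/backups
db2 "ROLLFORWARD DATABASE TESTDB TO END OF LOGS AND COMPLETE"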

 

The Ultimate Step

Last but not least, we must now try to convince our clients to upgrade to the latest DB2 level in order to start using this new option.

This is usually not a simple task because of the many levels of bureaucracy involved in the approval process 😬 but in this case it should be plain sailing, given the benefits reaped from the upgrade 😊

 

Conclusion

This is an eagerly awaited and very welcome extension of the DB2 “cloud” capabilities: it nicely complements the existing support for leveraging AWS services and will considerably improve DB2 manageability in the AWS cloud.

 

 

5 thoughts on “Enabling DB2 LOGARCHIVE directly to S3 in the latest DB2 version”

  1. Gopalan Venkatramani says:

    Do we need to open up any FW ports for S3 to be accessed? Also, what directory privileges do we need to keep?

    1. Damir Wilder says:

      Hi,
      As far as I remember all you have to do is install the AWS CLI and configure it to access your AWS account (where the S3 bucket has been created). That should also take care of the S3 bucket access privileges.
      Please check this blog https://www.triton.co.uk/setting-up-db2-remote-access-to-aws-s3/ for more details on how all this is done.

  2. Gopalan Venkatramani says:

    So I’ve a TDE encryption enabled.
    Keystore type (KEYSTORE_TYPE) = KMIP
    Keystore location (KEYSTORE_LOCATION) = /opt/db2inst1/security/config/db2eskm.cfg

    I can ping the s3
    ping s3.us-west-2.amazonaws.com
    PING s3.us-west-2.amazonaws.com (52.218.176.152) 56(84) bytes of data.
    64 bytes from s3-us-west-2.amazonaws.com (52.218.176.152): icmp_seq=1 ttl=244 time=4.78 ms
    64 bytes from s3-us-west-2.amazonaws.com (52.218.176.152): icmp_seq=2 ttl=244 time=2.51 ms

    While running the CATALOG STORAGE command I’m getting the following error.

    aws s3 ls is working fine,

    aws s3 cp file.out s3://stg-db2-arch-logs/stg-db2-arch-logs/

    aws s3 ls s3://stg-db2-arch-logs/stg-db2-arch-logs/
    2023-05-30 16:37:35 0
    2023-05-30 18:46:43 0 .
    2023-05-30 18:47:17 0 file.out

    db2 "CATALOG STORAGE ACCESS ALIAS S3STG
    VENDOR S3
    SERVER s3.us-west-2.amazonaws.com
    USER 'db2arch'
    PASSWORD 'xxxxxxx'
    CONTAINER stg-db2-arch-logs"

    PID     : 15677                TID : 139881464215424 PROC : db2
    INSTANCE: db2inst1             NODE : 000
    HOSTNAME: ip-10-210-128-16.stg01.aws.allegiant.com
    FUNCTION: DB2 UDB, bsu security, sqlexGetMasterKeyFromLabelKMIP, probe:1528
    MESSAGE : ZRC=0x805C0912=-2141452014=SQLEX_KMIP_PARSE_ERROR
              “Failed to parse KMIP response.”

    May I know which/what username and password did you pass during the s3 storage creation ?

    1. DAMIR WILDER says:

      Hi Gopalan,
      You need to use your AWS account credentials (username, password) when setting up the storage access alias in the DB2 catalogue.
      It won’t work with the local OS accounts such as the DB2 instance owner (db2arch) and such.
      Hope this helps!
      Regards, Damir

  3. Gopalan Venkatramani says:

    Damir – Thank you, I found the limitations and I’m working on it. This is a great article. Thanks
