IBM Sterling Connect:Direct: Configuring C:D to Use AWS S3 Storage by Providing an AWS Credentials File as a Volume


This article describes how to configure IBM Connect:Direct for UNIX (CDU) to access S3-compatible object storage.

The setup involves preparing AWS-style configuration files, updating `Initparm.cfg`, and mounting the necessary credential files into the Connect:Direct pod using the Helm chart's `extraVolume` and `extraVolumeMounts` parameters.

AWS Configuration Files

The `credentials` file must be created under the user's `.aws` directory:

Example: credentials File

[default]
aws_access_key_id = <access key ID>
aws_secret_access_key = <secret access key>

[aws-profile02]
aws_access_key_id = <access key ID>
aws_secret_access_key = <secret access key>

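The AWS CLI also writes region and output settings to a companion `config` file in the same directory. The sketch below shows typical contents; the region and output values are placeholders and should match the region of the target buckets:

Example: config File

[default]
region = us-east-1
output = json

[profile aws-profile02]
region = us-east-1
output = json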

Both files can be generated automatically using the AWS CLI:

aws configure --profile aws-profile02
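Running the command prompts interactively for the key pair, default region, and output format; the values shown below are placeholders:

AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: ****************************************
Default region name [None]: us-east-1
Default output format [None]: json

Access to the buckets can then be verified independently of Connect:Direct:

aws s3 ls --profile aws-profile02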

Updating Initparm.cfg

The following S3 I/O Exit configuration must be added to `Initparm.cfg`:

# S3 IO Exit parameters
file.ioexit:\
:name=s3:\
:library=/opt/cdunix/ndm/lib/libcdjnibridge.so:\
:home.dir=/opt/cdunix/ndm/ioexit-plugins/s3:\
:options=-Xmx640m \
-Djava.class.path=/opt/cdunix/ndm/ioexit-plugins/s3/cd-s3-ioexit.jar \
-Ds3.profilePath='/home/cduser/.aws/credentials' \
-Ds3.profileName=default \
com.aricent.ibm.mft.connectdirect.s3ioexit.S3IOExitFactory:

file.ioexit:\
:name=s3qa:\
:library=/opt/cdunix/ndm/lib/libcdjnibridge.so:\
:home.dir=/opt/cdunix/ndm/ioexit-plugins/s3:\
:options=-Xmx640m \
-Djava.class.path=/opt/cdunix/ndm/ioexit-plugins/s3/cd-s3-ioexit.jar \
-Ds3.profilePath='/home/cduser/.aws/credentials' \
-Ds3.profileName=aws-profile02 \
com.aricent.ibm.mft.connectdirect.s3ioexit.S3IOExitFactory:
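Each `file.ioexit` stanza registers a scheme name (here `s3` and `s3qa`) that is used as a file prefix in COPY steps. Below is a minimal sketch of a Connect:Direct process that copies a local file to a bucket through the `s3` scheme; the node name, bucket name, and paths are hypothetical:

s3copy process snode=cd.remote.node
step1  copy
       from ( file=/home/cduser/report.txt pnode )
       to   ( file=s3://my-bucket/inbound/report.txt snode disp=rpl )
pend;

The process can then be submitted from the CDU CLI, for example with `submit file=s3copy.cdp;` inside a `direct` session.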

Mounting the .aws Directory in Kubernetes

The `.aws` directory containing the **credentials** file must be mounted into the Connect:Direct container at:

/home/cduser/.aws

This can be achieved through the Helm chart's `extraVolume` and `extraVolumeMounts` parameters.

Example Helm Values Configuration

# Mount the .aws directory in the container
extraVolumeMounts:
  - name: awsvol
    mountPath: /home/cduser/.aws

# Define the NFS volume in the pod spec
extraVolume:
  - name: awsvol
    nfs:
      server: <NFS server IP>
      path: /srv/nfs/.aws/

Ensure that the `.aws/credentials` file is already placed on the NFS path before deploying the chart.
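With the credentials in place, the release can be installed (or upgraded) and the mount verified from inside the running container. The release name, chart reference, namespace, and pod name below are placeholders:

helm install cd-s3 ibm-connect-direct -f values.yaml -n connect-direct

kubectl exec -it <connect-direct-pod> -n connect-direct -- ls -l /home/cduser/.aws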

See also