IBM Sterling Connect:Direct: Configuring C:D to use AWS S3 Storage by providing an AWS credentials file as a volume
Latest revision as of 18:50, 12 March 2026
This article describes how to configure IBM Connect:Direct for UNIX (CDU) to access S3-compatible object storage.
The setup involves preparing AWS-style configuration files, updating `Initparm.cfg`, and mounting the credentials file into the Connect:Direct pod using the Kubernetes `extraVolume` and `extraVolumeMounts` Helm parameters.
AWS Configuration Files
A credentials file must be created under the user's `.aws` directory:
Example: credentials File
[default]
aws_access_key_id = <profile id>
aws_secret_access_key = <access key>
[aws-profile02]
aws_access_key_id = <profile id>
aws_secret_access_key = <access key>
This file can also be generated with the AWS CLI:
aws configure --profile aws-profile02
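If the AWS CLI is not available on the host preparing the files, the credentials file can also be written directly. A minimal shell sketch; all key IDs and secret values below are placeholders, not real credentials:

```shell
# Create the .aws directory and write a credentials file for the two
# profiles used in this article. Key IDs and secrets are placeholders.
AWS_DIR="${HOME}/.aws"
mkdir -p "${AWS_DIR}"
cat > "${AWS_DIR}/credentials" <<'EOF'
[default]
aws_access_key_id = AKIAEXAMPLEDEFAULT
aws_secret_access_key = example-secret-for-default
[aws-profile02]
aws_access_key_id = AKIAEXAMPLEPROFILE2
aws_secret_access_key = example-secret-for-profile02
EOF
# Credentials must not be readable by other users.
chmod 600 "${AWS_DIR}/credentials"
```

The profile section names here must match the `-Ds3.profileName` values used later in `Initparm.cfg`.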
Updating Initparm.cfg
The following S3 I/O Exit configuration must be added to `Initparm.cfg`:
# S3 IO Exit parameters
file.ioexit:\
:name=s3:\
:library=/opt/cdunix/ndm/lib/libcdjnibridge.so:\
:home.dir=/opt/cdunix/ndm/ioexit-plugins/s3:\
:options=-Xmx640m \
-Djava.class.path=/opt/cdunix/ndm/ioexit-plugins/s3/cd-s3-ioexit.jar \
-Ds3.profilePath='/home/cduser/.aws/credentials' \
-Ds3.profileName=default \
com.aricent.ibm.mft.connectdirect.s3ioexit.S3IOExitFactory:
file.ioexit:\
:name=s3qa:\
:library=/opt/cdunix/ndm/lib/libcdjnibridge.so:\
:home.dir=/opt/cdunix/ndm/ioexit-plugins/s3:\
:options=-Xmx640m \
-Djava.class.path=/opt/cdunix/ndm/ioexit-plugins/s3/cd-s3-ioexit.jar \
-Ds3.profilePath='/home/cduser/.aws/credentials' \
-Ds3.profileName=aws-profile02 \
com.aricent.ibm.mft.connectdirect.s3ioexit.S3IOExitFactory:
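Once these entries are active (`Initparm.cfg` is read at startup, so a Connect:Direct restart is normally required), the exit's `name` acts as a URL scheme in file paths. A hypothetical COPY step is sketched below; the node, bucket, and file names are illustrative only:

```
/* Hypothetical process: the s3:// prefix routes the destination      */
/* file through the "s3" I/O exit defined above. Node and bucket      */
/* names are placeholders.                                            */
s3copy  process snode=cd.remote.node
  step01 copy
    from (file=/data/export/payroll.csv pnode)
    to   (file=s3://my-bucket/payroll.csv snode)
pend;
```

The `s3qa` exit would presumably be addressed the same way with an `s3qa://` prefix, which is how the two credential profiles are kept separate.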
Mounting the .aws Directory in Kubernetes
The `.aws` directory containing the `credentials` file must be mounted on the Connect:Direct container at:
/home/cduser/.aws
This can be achieved via Helm chart configuration using the `extraVolume` and `extraVolumeMounts` parameters.
Example Helm Values Configuration
# Mount the .aws directory in the container
extraVolumeMounts:
- name: awsvol
mountPath: /home/cduser/.aws
# Define the NFS volume in the pod spec
extraVolume:
- name: awsvol
nfs:
server: <NFS server IP>
path: /srv/nfs/.aws/
Ensure that the `.aws/credentials` file is already in place on the NFS export before deploying the chart.
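Staging the file can be scripted. A sketch, with `NFS_EXPORT` standing in for the directory the NFS server exports as `/srv/nfs` (it defaults to a temporary directory here so the sketch can run anywhere):

```shell
# Stage the .aws directory on the NFS export backing the "awsvol"
# volume. NFS_EXPORT is a stand-in for the real export path
# (/srv/nfs in the values file); a temp directory is used if unset.
NFS_EXPORT="${NFS_EXPORT:-$(mktemp -d)}"
mkdir -p "${NFS_EXPORT}/.aws"
if [ -f "${HOME}/.aws/credentials" ]; then
    cp "${HOME}/.aws/credentials" "${NFS_EXPORT}/.aws/credentials"
else
    # Fall back to an empty placeholder so the mount point exists;
    # replace it with real credentials before deploying the chart.
    : > "${NFS_EXPORT}/.aws/credentials"
fi
chmod 600 "${NFS_EXPORT}/.aws/credentials"
echo "staged credentials under ${NFS_EXPORT}/.aws"
```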