IBM Sterling Connect:Direct: Configuring C:D to use AWS S3 Storage (provide an AWS credentials file as a volume)


Revision as of 18:35, 12 March 2026

This article describes how to configure IBM Connect:Direct for UNIX (CDU) to access S3-compatible object storage.

The setup involves preparing AWS-style configuration files, updating `Initparm.cfg`, and mounting the necessary credential files into the Connect:Direct pod using Kubernetes `extraVolume` and `extraVolumeMount` parameters.

== AWS Configuration Files ==

Two files must be created under the user's `.aws` directory: the **config** file and the **credentials** file.

=== Example: config File ===

<syntaxhighlight lang="ini">[default]
output = json
region = us-east-1
</syntaxhighlight>

=== Example: credentials File ===

<syntaxhighlight lang="ini">[default]
aws_access_key_id = <profile id>
aws_secret_access_key = <access key>

[aws-profile02]
aws_access_key_id = <profile id>
aws_secret_access_key = <access key>
</syntaxhighlight>


These files can be generated automatically with the AWS CLI:

<syntaxhighlight lang="bash">aws configure --profile aws-profile02</syntaxhighlight>

You will be prompted to enter the region, access key, and secret key. After creation, the `.aws` directory must later be mounted inside the pod.

== Updating Initparm.cfg ==

The following S3 I/O Exit configuration must be added to `Initparm.cfg`:

<syntaxhighlight lang="text"># S3 IO Exit parameters
file.ioexit:\
  :name=s3:\
  :library=/opt/cdunix/ndm/lib/libcdjnibridge.so:\
  :home.dir=/opt/cdunix/ndm/ioexit-plugins/s3:\
  :options=-Xmx640m \
  -Djava.class.path=/opt/cdunix/ndm/ioexit-plugins/s3/cd-s3-ioexit.jar \
  -Ds3.endPointUrl=<S3 source IP> \
  -Ds3.endPointPort=<port number> \
  -Ds3.endPointSecure=NO \
  -Ds3.profilePath='/home/cduser/.aws/credentials' \
  -Ds3.profileName=<profile name> \
  com.aricent.ibm.mft.connectdirect.s3ioexit.S3IOExitFactory:
</syntaxhighlight>

== Using S3 in Process Files ==

When referencing S3 paths in Connect:Direct processes, use the **s3://** prefix (matching the `:name=s3:` entry in `Initparm.cfg`) to indicate that S3 object storage is being accessed.
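For example, a single-step COPY that sends a local file to a bucket through the S3 I/O exit might look like the following sketch. The process, node, bucket, and file names are placeholders, not taken from this article; the prefix is derived from the `:name=s3:` exit name configured above.

<syntaxhighlight lang="text">S3COPY PROCESS
  SNODE=CD.REMOTE.NODE
STEP01 COPY
  FROM (FILE=/data/outbound/report.txt PNODE)
  TO (FILE=s3://my-bucket/inbound/report.txt SNODE DISP=RPL)
PEND
</syntaxhighlight>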

== Mounting the .aws Directory in Kubernetes ==

The `.aws` directory containing the **config** and **credentials** files must be mounted into the Connect:Direct container at `/home/cduser/.aws`.

This can be achieved via Helm chart configuration using `extraVolume` and `extraVolumeMount`.

=== Example Helm Values Configuration ===

<syntaxhighlight lang="yaml"># Mount the .aws directory in the container
extraVolumeMounts:
  - name: extravol
    mountPath: /home/cduser/.aws

# Define the NFS volume in the pod spec
extraVolume:
  - name: extravol
    nfs:
      server: <NFS server IP>
      path: /srv/nfs/.aws/
</syntaxhighlight>

Ensure that the `.aws/config` and `.aws/credentials` files are already placed on the NFS path **before** deploying the chart.
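The staging step can be scripted. The following is a minimal sketch that writes the two files onto the export path before the chart is deployed; a local directory stands in for the NFS export here so the sketch runs anywhere, and the `helm` release and chart names in the comment are assumptions, not part of this article.

```shell
#!/bin/sh
set -eu

# Stand-in for the NFS export path from the values example (/srv/nfs/.aws);
# point EXPORT_DIR at the real export on your NFS server.
EXPORT_DIR="${EXPORT_DIR:-./srv-nfs/.aws}"
mkdir -p "$EXPORT_DIR"

# Write the config file (placeholder values shown).
printf '%s\n' '[default]' 'output = json' 'region = us-east-1' \
  > "$EXPORT_DIR/config"

# Write the credentials file and restrict its permissions.
printf '%s\n' '[aws-profile02]' \
  'aws_access_key_id = <profile id>' \
  'aws_secret_access_key = <access key>' \
  > "$EXPORT_DIR/credentials"
chmod 600 "$EXPORT_DIR/credentials"

# Only after the files are in place, deploy the chart, e.g.:
# helm install cd-node ibm-connect-direct -f values.yaml
```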

== Summary ==

By preparing AWS configuration files, configuring the S3 I/O Exit in `Initparm.cfg`, and mounting the `.aws` directory via Kubernetes volumes, IBM Connect:Direct can seamlessly interact with S3-compatible object storage systems such as Dell EMC. This setup allows secure and flexible object-based file transfers within Connect:Direct processes.


== See also ==