IBM Maximo: Migration from NFS to S3 Storage
You can configure IBM Maximo and MAS to store attachments in Simple Storage Service (S3) cloud object storage.
This is the best option when migrating from EAM to MAS.
To migrate your environment to S3, you need to:
- Create a bucket in S3 storage to hold your data.
- Set up the Maximo application to use S3.
- Migrate the files from NFS storage to S3.
- Change the URLs in the MAXIMO.docinfo table.
Create a bucket in S3 storage in order to store your data
(Optional) Deploying MinIO as S3 storage
I will use MinIO as my S3 storage.
1) The fastest way to get MinIO running: MinIO: Deploy MinIO as Container
Important:
- S3 Protocol Port: 9000
- MinIO console Port: 9001
2) Access the MinIO console, for example: http://10.1.1.1:9001/login. To log in, use the credentials passed to the podman command:

```
-e "MINIO_ROOT_USER=root" \
-e "MINIO_ROOT_PASSWORD=passw0rd" \
```
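For reference, a minimal podman command to run MinIO with those credentials might look like the following sketch. The image name, host data path, and port mappings are assumptions; adapt them to your host:

```shell
# Run MinIO as a container: S3 API on 9000, web console on 9001.
# /data/minio is an assumed host directory for persistent storage.
podman run -d --name minio \
  -p 9000:9000 \
  -p 9001:9001 \
  -e "MINIO_ROOT_USER=root" \
  -e "MINIO_ROOT_PASSWORD=passw0rd" \
  -v /data/minio:/data \
  quay.io/minio/minio server /data --console-address ":9001"
```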
Create a bucket
1) Create a bucket, for example: maximo-doclinks
2) In the MinIO console, create an access token. The downloaded credentials file looks like this:

```json
{
  "url": "http://10.1.1.1:9001/api/v1/service-account-credentials",
  "accessKey": "VQ...A",
  "secretKey": "K4...e",
  "api": "s3v4",
  "path": "auto"
}
```
Setup Maximo application to use S3
1) Set up the Maximo application to use S3:
- a. Log in to Maximo
- b. Go to the System Properties application
- c. Change the following properties:
Property | Value |
---|---|
mxe.cosaccesskey | The access key ID from the bucket credentials. |
mxe.cosendpointuri | The endpoint address, for example the IBM COS us-geo endpoint: https://s3.us.cloud-object-storage.appdomain.cloud |
mxe.cosbucketname | The name defined for the bucket. |
mxe.cossecretkey | The secret access key from the bucket credentials. |
mxe.attachmentstorage | com.ibm.tivoli.maximo.oslc.provider.COSAttachmentStorage. Once this value is set, traditional doclinks no longer work. To revert, remove this property and restart the server. |
mxe.doclink.doctypes.defpath | cos:doclinks\default |
mxe.doclink.doctypes.topLevelPaths | cos:doclinks |
mxe.doclink.path01 | cos:doclinks=hostname/DOCLINKS |
mxe.doclink.securedAttachment | True |
2) Restart your Maximo Server JVM, or the Manage UI/ALL pod if you are using Maximo Application Suite.
After the restart, upload a file and verify that it appears in your S3 storage console.
Migrate files from NFS storage to S3
On your EAM server, or on a Linux machine that has access to the NFS storage:
1) Install the AWS CLI.
2) Run the command:

```
aws configure
```

and enter the accessKey and secretKey.
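Alternatively, the AWS CLI also reads its credentials from standard environment variables, which avoids the interactive prompt in scripts. A sketch, using the placeholder key values from the MinIO access token above:

```shell
# Export the MinIO access token as standard AWS CLI credentials.
# The values below are placeholders; use your own accessKey/secretKey.
export AWS_ACCESS_KEY_ID="VQ...A"
export AWS_SECRET_ACCESS_KEY="K4...e"
# Some CLI versions require a region even for MinIO; any value works.
export AWS_DEFAULT_REGION="us-east-1"
```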
3) Copy the files from the local machine to the bucket. Note that the endpoint uses the S3 protocol port (9000), not the console port:

```
aws s3 cp /nfs_path s3://maximo-doclinks/ --endpoint-url http://10.1.1.1:9000 --recursive
```

4) Alternatively, use sync, which is recursive by default and skips files already present in the target:

```
aws s3 sync /nfs_path s3://maximo-doclinks/ --endpoint-url http://10.1.1.1:9000
```
Sample script to copy files using the AWS CLI:

```bash
#!/bin/bash
DOCLINKS_SOURCE_PATH=/nfs_path/
S3_TARGET=s3.target.data
S3_BUCKET=maximo-doclinks

echo "Start Time" > result.log
date >> result.log
echo ""
aws s3 cp $DOCLINKS_SOURCE_PATH s3://$S3_TARGET/$S3_BUCKET/ --recursive >> files_copied.log
echo "Finish Time" >> result.log
date >> result.log
echo "Data transferred to S3 Bucket complete"
```
DRAFT: Using rclone

```bash
#!/bin/bash
DOCLINKS_SOURCE_PATH=/nfs_path/
S3_TARGET=s3.target.data   # name of a configured rclone remote
S3_BUCKET=maximo-doclinks

# rclone addresses remotes as remote:path
rclone sync $DOCLINKS_SOURCE_PATH $S3_TARGET:$S3_BUCKET \
  --log-file=rclone-migration.log --log-level INFO \
  --fast-list --ignore-existing --retries 1 \
  --transfers 64 --checkers 128 --delete-before
```
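The script assumes an rclone remote named s3.target.data already exists. As a sketch, a minimal remote definition for MinIO in ~/.config/rclone/rclone.conf might look like this (the keys are placeholders, and the endpoint uses the S3 protocol port 9000, not the console port):

```
[s3.target.data]
type = s3
provider = Minio
access_key_id = VQ...A
secret_access_key = K4...e
endpoint = http://10.1.1.1:9000
```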
DRAFT: Change URL on MAXIMO.docinfo table
```sql
select document, urlname, docinfoid from MAXIMO.docinfo;
```
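After inspecting the current values, the URLs must be rewritten to point at the new storage. The exact statement depends on your environment; as a hypothetical sketch, where 'OLD_NFS_PATH' is a placeholder for your current NFS path prefix (back up the table and test on a copy first):

```sql
-- Hypothetical example: rewrite the stored NFS path prefix.
-- 'OLD_NFS_PATH' is a placeholder; substitute your environment's value.
UPDATE MAXIMO.docinfo
   SET urlname = REPLACE(urlname, 'OLD_NFS_PATH', 'cos:doclinks')
 WHERE urlname LIKE '%OLD_NFS_PATH%';
COMMIT;
```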