Docking in AWS With DOCK 3.8

Revision as of 19:03, 11 May 2021

Dockerhub Tag:

btingle/dockaws:testing

Environment Variable Input

SHRTCACHE

description: The base directory where intermediate input/output is stored. /dev/shm is recommended; the default is /tmp

S3_INPUT_LOCATION

description: Either a directory or a file hosted on s3

If S3_INPUT_LOCATION is a directory:

Input will be taken from the AWS_BATCH_JOB_ARRAY_INDEX'th file listed under the directory

        AWS_BATCH_JOB_ARRAY_INDEX=3
        S3_INPUT_LOCATION=s3://mytestbucket/input

        >>> aws s3 ls $S3_INPUT_LOCATION/ | sort -k4
        >>> 2021-03-12 17:14:26  249815112 H25.db2.tgz
        >>> 2021-03-12 17:14:26  255685176 H26.db2.tgz
        >>> 2021-03-12 11:40:43  307920266 H27.db2.tgz

In this example, s3://mytestbucket/input/H27.db2.tgz would be evaluated
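The directory-style selection above can be sketched as follows. This is a hypothetical reconstruction, not the image's actual entrypoint code: it stands in for the `aws s3 ls` output with a local listing and picks the index'th key, which the worked example implies is 1-based.

```shell
#!/bin/sh
# Sketch of directory-style input selection (hypothetical; the real image
# would run `aws s3 ls $S3_INPUT_LOCATION/` instead of this stand-in).
AWS_BATCH_JOB_ARRAY_INDEX=3

# Stand-in for the `aws s3 ls` output shown above: date, time, size, key
listing="2021-03-12 17:14:26  249815112 H25.db2.tgz
2021-03-12 17:14:26  255685176 H26.db2.tgz
2021-03-12 11:40:43  307920266 H27.db2.tgz"

# Sort by the 4th column (the key) and take the index'th entry.
# The worked example implies 1-based indexing (index 3 -> H27.db2.tgz).
selected=$(printf '%s\n' "$listing" | sort -k4 \
  | awk -v n="$AWS_BATCH_JOB_ARRAY_INDEX" 'NR == n { print $4 }')
echo "$selected"
```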

If S3_INPUT_LOCATION is a file:

Input will be downloaded from the AWS_BATCH_JOB_ARRAY_INDEX'th line in the file

        AWS_BATCH_JOB_ARRAY_INDEX=2
        S3_INPUT_LOCATION=s3://mytestbucket/input/inputlist_batch1.txt
        ### inputlist_batch1.txt
        s3://mytestbucket/input/H26.db2.tgz
        s3://mytestbucket/input/H25.db2.tgz
        s3://mytestbucket/input/H27.db2.tgz
        ### EOF

In this example, the job would evaluate s3://mytestbucket/input/H25.db2.tgz
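The file-style selection can be sketched the same way. Again a hypothetical reconstruction, assuming the list file has already been downloaded from s3; per the worked example, line numbering is 1-based.

```shell
#!/bin/sh
# Sketch of file-style input selection (hypothetical, not the image's
# actual code); assumes the input list has already been fetched from s3.
AWS_BATCH_JOB_ARRAY_INDEX=2

inputlist=$(mktemp)   # stand-in for the downloaded inputlist_batch1.txt
cat > "$inputlist" <<'EOF'
s3://mytestbucket/input/H26.db2.tgz
s3://mytestbucket/input/H25.db2.tgz
s3://mytestbucket/input/H27.db2.tgz
EOF

# Take the index'th line (1-based per the worked example: index 2 -> H25)
selected=$(sed -n "${AWS_BATCH_JOB_ARRAY_INDEX}p" "$inputlist")
echo "$selected"
rm -f "$inputlist"
```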

S3_OUTPUT_LOCATION

description: An s3 directory where output will be stored.

Each job will create a subdirectory within this directory according to its array index for OUTDOCK & test.mol2.gz files.

For example, a job with S3_OUTPUT_LOCATION=s3://mytestbucket/output and AWS_BATCH_JOB_ARRAY_INDEX=5 may produce the following files after the first run:

s3://mytestbucket/output/5/OUTDOCK.0
s3://mytestbucket/output/5/test.mol2.gz.0
s3://mytestbucket/output/5/restart ### potentially, if job is interrupted

S3_DOCKFILES_LOCATION

description: An s3 directory containing dockfiles, e.g. receptor grids, parameters, spheres, etc. INDOCK must be present among these files. The directory is recursively copied to local storage for each job, so keep only the necessities in this directory.

AWS_BATCH_JOB_ID

description: The name of the current job. Automatically supplied when submitting via aws batch.

AWS_BATCH_JOB_ARRAY_INDEX

description: The array index of the current job. Automatically supplied when submitting via aws batch.

Optional: AWS credentials

If you are testing outside of aws batch (or even within aws batch, though passing credentials around like that can be dangerous), you may want to provide aws account credentials for accessing any necessary s3 buckets. You can do this by supplying the following environment variables:

AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_DEFAULT_REGION

If you are submitting within aws batch, you may want to consider adding s3 permissions to your ecsInstanceRole (or whatever role you are using for aws batch instances) in lieu of an explicit access key.
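For local testing, the environment variables above can all be passed to the container with `docker run -e`. A hypothetical invocation, using the image tag from this page; the bucket paths, credential values, and region are placeholders you would replace with your own:

```shell
# Hypothetical local test run of the docking image. All -e values below
# are placeholders except the variable names, which come from this page.
docker run --rm \
  -e SHRTCACHE=/dev/shm \
  -e S3_INPUT_LOCATION=s3://mytestbucket/input \
  -e S3_OUTPUT_LOCATION=s3://mytestbucket/output \
  -e S3_DOCKFILES_LOCATION=s3://mytestbucket/dockfiles \
  -e AWS_BATCH_JOB_ID=localtest \
  -e AWS_BATCH_JOB_ARRAY_INDEX=1 \
  -e AWS_ACCESS_KEY_ID=AKIA_PLACEHOLDER \
  -e AWS_SECRET_ACCESS_KEY=SECRET_PLACEHOLDER \
  -e AWS_DEFAULT_REGION=us-west-1 \
  btingle/dockaws:testing
```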

Restartability & Spot Instances

This image has been configured to take advantage of AWS batch with spot instances. When a job is interrupted by a spot termination notice, it exits gracefully and reports any partial results along with a restart marker in the output. You can also manually interrupt a job by sending signal 10 (SIGUSR1) to the dock process running within the container.
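The graceful-exit convention can be illustrated with a minimal sketch. This is not the image's actual entrypoint, just a stand-in showing how a process can trap SIGUSR1 (signal 10 on x86 Linux) and exit with code 1 so that aws batch re-attempts it:

```shell
#!/bin/sh
# Minimal sketch (not the image's real entrypoint): a worker that catches
# SIGUSR1 and exits with code 1, the convention this image uses for retries.
worker() {
  trap 'echo "interrupted: flushing partial results + restart marker"; exit 1' USR1
  i=0
  while [ "$i" -lt 50 ]; do   # stand-in for the long-running docking loop
    sleep 0.1
    i=$((i + 1))
  done
  exit 0
}

worker &
pid=$!
sleep 0.3
kill -USR1 "$pid"   # simulate a spot interruption notice
wait "$pid"
status=$?
echo "exit status: $status"
```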

Exit Codes

When a job completes successfully, the exit code is 0.

When a failure is detected, or if an interruption notice has been caught, the exit code is 1.

In our aws batch configuration, we tell jobs to re-attempt when they exit with code 1, up to a limit of 3-5 attempts (3 is generally the highest number we observe).
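A retry policy like this can be expressed in the job definition's retry strategy, which aws batch evaluates against the exit code. A hypothetical registration command; the job definition name and container resource values are placeholders, not taken from this page:

```shell
# Hypothetical job definition registration. The retry strategy re-attempts
# only jobs that exit with code 1 (failure/interruption), up to 3 attempts;
# name, vcpus, and memory are placeholder values.
aws batch register-job-definition \
  --job-definition-name dockaws-testing \
  --type container \
  --container-properties '{"image": "btingle/dockaws:testing", "vcpus": 1, "memory": 2048}' \
  --retry-strategy '{"attempts": 3, "evaluateOnExit": [{"onExitCode": "1", "action": "RETRY"}]}'
```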

Misc: Configuring Launch Template with Dockerhub credentials

When running jobs at scale with aws batch, dockerhub will likely complain about the bandwidth consumed by your image pulls. To get around this, you need a dockerhub premium account, and you must supply dockerhub authentication to each instance launched for your jobs. This can be accomplished with a launch template: create a blank template and fill in the "user data" section like so:

MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==DOCKERHUBINIT=="

--==DOCKERHUBINIT==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
docker_email=ben@tingle.org
docker_user=btingle
export AWS_DEFAULT_REGION=us-west-1
docker_auth=$(aws secretsmanager get-secret-value --secret-id dockerhub_pass | grep "SecretString" | cut -d':' -f2 | sed 's/\"//g' | sed 's/,//g' | tail -c +2)

echo ECS_ENGINE_AUTH_TYPE=docker >> /etc/ecs/ecs.config
echo ECS_ENGINE_AUTH_DATA={\"https://index.docker.io/v1/\":{\"auth\":\"${docker_auth}\",\"email\":\"${docker_email}\",\"username\":\"${docker_user}\"}} >> /etc/ecs/ecs.config

--==DOCKERHUBINIT==--

In the above example, we use aws secretsmanager to fetch our dockerhub password. For this to work, the secretsmanager:GetSecretValue permission (scoped to our dockerhub password secret arn) must be added to our ecsInstanceRole. Alternatively, you can hard-code the authentication details into the launch template.
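As a side note, the grep/cut/sed chain in the launch template above can be replaced with the AWS CLI's built-in `--query` filter, which is less fragile against formatting changes in the JSON output. This assumes the secret was stored as a plain string under the same `dockerhub_pass` secret id:

```shell
# Simpler extraction of the secret value, assuming dockerhub_pass was
# stored as a plain string: --query pulls the SecretString field directly.
docker_auth=$(aws secretsmanager get-secret-value \
  --secret-id dockerhub_pass \
  --query SecretString --output text)
```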

For more information on authentication methods, try the following links:

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/private-auth-container-instances.html

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html