AWS CLI Cheatsheet

CLI Command Structure:

aws <command> <subcommand> [options and parameters]

aws <command> wait <subcommand> [options and parameters] (supported by some commands)

Info

Save output to a file using the > redirection operator. For example: aws dynamodb scan --table-name MusicCollection > output.json.

Tip: Use >> to append to a file. Also useful: Load Parameters from a file (see below)
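
For example, to append the results of a later scan to the same file:

aws dynamodb scan --table-name MusicCollection >> output.json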

Set Up

  • Using long-term credentials with an IAM user (not recommended):

    aws configure
    
  • Using short-term credentials with an IAM user:

    aws configure
    aws configure set aws_session_token TOKEN # token generated from previous command
    
  • Using EC2 instance metadata:

    aws configure set role_arn arn:aws:iam::123456789012:role/defaultrole
    aws configure set credential_source Ec2InstanceMetadata
    aws configure set region us-west-2
    aws configure set output json
    
  • Using IAM role:

    aws configure set role_arn arn:aws:iam::123456789012:role/defaultrole
    aws configure set source_profile default
    aws configure set role_session_name session_user1
    aws configure set region us-west-2
    aws configure set output json
    
  • Using an IAM Identity Center user:

    aws configure sso
    

Credentials and config files

The config and credentials can be set in various ways (in order of precedence):

  • Command line options: Such as the --region, --output, and --profile parameters (complete list)
  • Environment variables: Such as AWS_CONFIG_FILE, AWS_SHARED_CREDENTIALS_FILE, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION (complete list)
  • Assume role: Assume the permissions of an IAM role through configuration, web identity or the aws sts assume-role command.
  • .aws folder in the home directory: %UserProfile% on Windows and $HOME or ~ on Unix (config file settings)

More details are at: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
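
For example, credentials and region can be supplied for a single shell session through environment variables (the values below are placeholders):

export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_REGION=us-west-2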

AWS CLI Global Settings

Specifying parameters

Simple parameters like strings and numbers can be passed directly, as in aws ec2 create-key-pair --key-name my-key-pair, where my-key-pair is the parameter value.

Formats for some other types are:

  • Timestamps: aws ec2 describe-spot-price-history --start-time 2014-10-13T19:00:00Z. Acceptable formats are:

    • YYYY-MM-DDThh:mm:ss.sssTZD (UTC)
    • YYYY-MM-DDThh:mm:ss.sssTZD (with offset)
    • YYYY-MM-DD
    • Unix Epoch time
  • List: aws ec2 describe-spot-price-history --instance-types m1.xlarge m1.medium

  • Boolean: A binary flag that turns an option on or off. For example, ec2 describe-spot-price-history has a Boolean --dry-run parameter that, when specified, validates the query with the service without actually running it: aws ec2 describe-spot-price-history --dry-run

  • Blob: Specify a path to a local file that contains the binary data using the fileb:// prefix. This is treated as raw unencoded binary and the path is interpreted as being relative to the current working directory.

    aws kms encrypt \
        --key-id 1234abcd-12ab-34cd-56ef-1234567890ab \
        --plaintext fileb://ExamplePlaintextFile \
        --output text \
        --query CiphertextBlob | base64 --decode > ExampleEncryptedFile
    
  • Streaming blob: Some parameters do not use the fileb:// prefix. These are formatted using the direct file path.

    aws cloudsearchdomain upload-documents \
        --endpoint-url https://doc-my-domain.us-west-1.cloudsearch.amazonaws.com \
        --content-type application/json \
        --documents document-batch.json
    
  • Map: A set of key-value pairs specified in JSON or by using the CLI’s shorthand syntax. aws dynamodb get-item --table-name my-table --key '{"id": {"N":"1"}}'

  • Document: Document types are used to send data without needing to embed JSON inside strings. This allows for sending JSON data without needing to escape values.

Info
If any of the string items contain a space, you must put quotation marks around that item.

Shorthand Syntax

AWS CLI also supports a shorthand syntax that enables a simpler representation of your option parameters than using the full JSON format. It makes it easier for users to input parameters that are flat (non-nested structures). The format is a comma-separated list of key-value pairs.

For example:

aws dynamodb update-table \
    --provisioned-throughput ReadCapacityUnits=15,WriteCapacityUnits=10 \
    --table-name MyDDBTable

This is equivalent to the following example formatted in JSON.

aws dynamodb update-table \
    --provisioned-throughput '{"ReadCapacityUnits":15,"WriteCapacityUnits":10}' \
    --table-name MyDDBTable

Shorthand and its JSON equivalent:

  • --option key1=value1,key2=value2,key3=value3 is equivalent to --option '{"key1":"value1","key2":"value2","key3":"value3"}'
  • --option value1 value2 value3 is equivalent to --option '[value1,value2,value3]'
  • --option Key=My1stTag,Value=Value1 Key=My2ndTag,Value=Value2 Key=My3rdTag,Value=Value3 is equivalent to --option '[{"Key": "My1stTag", "Value": "Value1"}, {"Key": "My2ndTag", "Value": "Value2"}, {"Key": "My3rdTag", "Value": "Value3"}]'

Load Parameters from a file

Use file://complete/path/to/file to provide a file URL to the parameter. Example:

aws ec2 describe-instances --filters file://filter.json
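
A minimal filter.json for this example could look like the following (the filter name and value are illustrative assumptions):

[
    {
        "Name": "instance-type",
        "Values": ["t2.micro"]
    }
]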

AWS CLI skeletons and input files

  • Most AWS CLI commands accept all parameter inputs from a file. A template can be generated using the --generate-cli-skeleton option. After generating the file, modify the parameters as per your requirements.
  • Most AWS CLI commands support accepting all parameter inputs from a file using the --cli-input-json and --cli-input-yaml parameters. Use this parameter and point to the filled-in file from the previous step (see the example below).
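
For example, a sketch of the skeleton workflow for ec2 run-instances (the file name is arbitrary):

aws ec2 run-instances --generate-cli-skeleton > ec2runinst.json
# edit ec2runinst.json to fill in the required parameters, then:
aws ec2 run-instances --cli-input-json file://ec2runinst.json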

AWS CLI output format

The AWS CLI supports the following output formats:

  • json: The output is formatted as JSON.
  • yaml: The output is formatted as YAML.
  • yaml-stream: The output is streamed and formatted as a YAML string. Streaming allows for faster handling of large data types.
  • text: The output is formatted as multiple lines of tab-separated string values. The output can be passed to a text processor, like grep.
  • table: The output is formatted as a table using the characters +|- to form the cell borders. This is more readable than other formats.
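
For example, to render the list of tables as a table instead of the default JSON:

aws dynamodb list-tables --output table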

Pagination

Server-side pagination can be controlled with the following options:

  • --no-paginate: Disables pagination (the AWS CLI paginates by default)
  • --page-size: Change the default page size
  • --max-items: Change the default maximum number of items returned
  • --starting-token: If the total number of items exceeds --max-items, a NextToken is also returned. Pass it to --starting-token in the next CLI call to fetch the next page
aws s3api list-objects \
    --bucket my-bucket \
    --max-items 100 \
    --starting-token eyJNYXJrZXIiOiBudWxsLCAiYm90b190cnVuY2F0ZV9hbW91bnQiOiAxfQ==

AWS CLI Global Options

  • --debug
  • --endpoint-url: Change default URL used by the service
  • --no-verify-ssl
  • --no-paginate: Disable automatic pagination.
  • --output: json/text/table/yaml/yaml-stream
  • --query: JMESPath query to filter the response
  • --profile
  • --region
  • --version
  • --color: on/off/auto
  • --no-sign-request: Do not sign requests. Credentials will not be loaded
  • --ca-bundle: The CA certificate bundle to use when verifying SSL certificates. Overrides config/env settings.
  • --cli-read-timeout: The maximum socket read time in seconds. If the value is set to 0, the socket read will be blocking and not timeout. The default value is 60 seconds.
  • --cli-connect-timeout: The maximum socket connect time in seconds. If the value is set to 0, the socket connect will be blocking and not timeout. The default value is 60 seconds.
  • --cli-binary-format: The formatting style to be used for binary blobs.
  • --no-cli-pager
  • --cli-auto-prompt
  • --no-cli-auto-prompt
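
For example, --query and --output are frequently combined; the sketch below prints only the Region names as plain text:

aws ec2 describe-regions --query "Regions[].RegionName" --output text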

Filtering the output

Server-side filtering

Server-side filtering in the AWS CLI is provided by the AWS service API. The parameter names and functions vary between services. Some common parameter names used for filtering are:

  • --filter such as ses and ce.
  • --filters such as ec2, autoscaling, and rds.
  • Names starting with the word filter, for example --filter-expression for the aws dynamodb scan command.
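
For example, a server-side filter with ec2 (the instance type value is illustrative):

aws ec2 describe-instances --filters Name=instance-type,Values=t2.micro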

Client-side filtering

The --query parameter takes the HTTP response that comes back from the server and filters the results before displaying them. Querying uses JMESPath syntax to create expressions for filtering the output.

For example, suppose the command aws ec2 describe-volumes returns the following output:

  • Output

    {
      "Volumes": [
        {
          "AvailabilityZone": "us-west-2a",
          "Attachments": [
            {
              "AttachTime": "2013-09-17T00:55:03.000Z",
              "InstanceId": "i-a071c394",
              "VolumeId": "vol-e11a5288",
              "State": "attached",
              "DeleteOnTermination": true,
              "Device": "/dev/sda1"
            }
          ],
          "VolumeType": "standard",
          "VolumeId": "vol-e11a5288",
          "State": "in-use",
          "SnapshotId": "snap-f23ec1c8",
          "CreateTime": "2013-09-17T00:55:03.000Z",
          "Size": 30
        },
        {
          "AvailabilityZone": "us-west-2a",
          "Attachments": [
            {
              "AttachTime": "2013-09-18T20:26:16.000Z",
              "InstanceId": "i-4b41a37c",
              "VolumeId": "vol-2e410a47",
              "State": "attached",
              "DeleteOnTermination": true,
              "Device": "/dev/sda1"
            }
          ],
          "VolumeType": "standard",
          "VolumeId": "vol-2e410a47",
          "State": "in-use",
          "SnapshotId": "snap-708e8348",
          "CreateTime": "2013-09-18T20:26:15.000Z",
          "Size": 8
        },
        {
          "AvailabilityZone": "us-west-2a",
          "Attachments": [
            {
              "AttachTime": "2020-11-20T19:54:06.000Z",
              "InstanceId": "i-1jd73kv8",
              "VolumeId": "vol-a1b3c7nd",
              "State": "attached",
              "DeleteOnTermination": true,
              "Device": "/dev/sda1"
            }
          ],
          "VolumeType": "standard",
          "VolumeId": "vol-a1b3c7nd",
          "State": "in-use",
          "SnapshotId": "snap-234087fb",
          "CreateTime": "2020-11-20T19:54:05.000Z",
          "Size": 15
        }
      ]
    }
    

Different ways of filtering it are:

  • To return only the first two volumes

    aws ec2 describe-volumes --query 'Volumes[0:2:1]'

    [
      {
        "AvailabilityZone": "us-west-2a",
        "Attachments": [
          {
            "AttachTime": "2013-09-17T00:55:03.000Z",
            "InstanceId": "i-a071c394",
            "VolumeId": "vol-e11a5288",
            "State": "attached",
            "DeleteOnTermination": true,
            "Device": "/dev/sda1"
          }
        ],
        "VolumeType": "standard",
        "VolumeId": "vol-e11a5288",
        "State": "in-use",
        "SnapshotId": "snap-f23ec1c8",
        "CreateTime": "2013-09-17T00:55:03.000Z",
        "Size": 30
      },
      {
        "AvailabilityZone": "us-west-2a",
        "Attachments": [
          {
            "AttachTime": "2013-09-18T20:26:16.000Z",
            "InstanceId": "i-4b41a37c",
            "VolumeId": "vol-2e410a47",
            "State": "attached",
            "DeleteOnTermination": true,
            "Device": "/dev/sda1"
          }
        ],
        "VolumeType": "standard",
        "VolumeId": "vol-2e410a47",
        "State": "in-use",
        "SnapshotId": "snap-708e8348",
        "CreateTime": "2013-09-18T20:26:15.000Z",
        "Size": 8
      }
    ]
    
  • To show all Attachments information for all volumes

    aws ec2 describe-volumes --query 'Volumes[*].Attachments'

    [
      [
        {
          "AttachTime": "2013-09-17T00:55:03.000Z",
          "InstanceId": "i-a071c394",
          "VolumeId": "vol-e11a5288",
          "State": "attached",
          "DeleteOnTermination": true,
          "Device": "/dev/sda1"
        }
      ],
      [
        {
          "AttachTime": "2013-09-18T20:26:16.000Z",
          "InstanceId": "i-4b41a37c",
          "VolumeId": "vol-2e410a47",
          "State": "attached",
          "DeleteOnTermination": true,
          "Device": "/dev/sda1"
        }
      ],
      [
        {
          "AttachTime": "2020-11-20T19:54:06.000Z",
          "InstanceId": "i-1jd73kv8",
          "VolumeId": "vol-a1b3c7nd",
          "State": "attached",
          "DeleteOnTermination": true,
          "Device": "/dev/sda1"
        }
      ]
    ]
    
  • To list the State for all Volumes (also flatten the result)

    aws ec2 describe-volumes --query 'Volumes[*].Attachments[].State'

    [
      "attached",
      "attached",
      "attached"
    ]
    
  • Filter for the VolumeIds of all Volumes in an attached state

    aws ec2 describe-volumes --query 'Volumes[*].Attachments[?State==`attached`].VolumeId'

    [
      [
        "vol-e11a5288"
      ],
      [
        "vol-2e410a47"
      ],
      [
        "vol-a1b3c7nd"
      ]
    ]
    
  • To show the first InstanceId among all Attachments for all volumes

    aws ec2 describe-volumes --query 'Volumes[*].Attachments[].InstanceId | [0]'

    "i-a071c394"
    

    This pipes the results of a filter to a new list, and then filters the result again.

  • Filter VolumeId and VolumeType in the Volumes list

    aws ec2 describe-volumes --query 'Volumes[].[VolumeId, VolumeType]'

    [
      [
        "vol-e11a5288",
        "standard"
      ],
      [
        "vol-2e410a47",
        "standard"
      ],
      [
        "vol-a1b3c7nd",
        "standard"
      ]
    ]
    

    To add more nesting, example: aws ec2 describe-volumes --query 'Volumes[].[VolumeId, VolumeType, Attachments[].[InstanceId, State]]'

    [
      [
        "vol-e11a5288",
        "standard",
        [
          [
            "i-a071c394",
            "attached"
          ]
        ]
      ],
      [
        "vol-2e410a47",
        "standard",
        [
          [
            "i-4b41a37c",
            "attached"
          ]
        ]
      ],
      [
        "vol-a1b3c7nd",
        "standard",
        [
          [
            "i-1jd73kv8",
            "attached"
          ]
        ]
      ]
    ]
    
  • Filter VolumeType and add label VolumeType for the VolumeType values

    aws ec2 describe-volumes --query 'Volumes[].{VolumeType: VolumeType}'

    [
      {
        "VolumeType": "standard"
      },
      {
        "VolumeType": "standard"
      },
      {
        "VolumeType": "standard"
      }
    ]
    
  • Filter, add labels and sort the output by VolumeId

    aws ec2 describe-volumes --query 'sort_by(Volumes, &VolumeId)[].{VolumeId: VolumeId, VolumeType: VolumeType, InstanceId: Attachments[0].InstanceId, State: Attachments[0].State}'

    [
      {
        "VolumeId": "vol-2e410a47",
        "VolumeType": "standard",
        "InstanceId": "i-4b41a37c",
        "State": "attached"
      },
      {
        "VolumeId": "vol-a1b3c7nd",
        "VolumeType": "standard",
        "InstanceId": "i-1jd73kv8",
        "State": "attached"
      },
      {
        "VolumeId": "vol-e11a5288",
        "VolumeType": "standard",
        "InstanceId": "i-a071c394",
        "State": "attached"
      }
    ]
    

Frequently Used CLI commands

DynamoDB

Get Item

aws dynamodb get-item \
    --table-name MusicCollection \
    --key file://key.json

Contents of key.json:

{
    "Artist": {"S": "Acme Band"},
    "SongTitle": {"S": "Happy Day"}
}
  • For strongly consistent reads, use --consistent-read

Get specific attributes for an item

aws dynamodb get-item \
    --table-name ProductCatalog \
    --key '{"Id": {"N": "102"}}' \
    --projection-expression "Description, RelatedItems[0], ProductReviews.FiveStar"

Batch Get Items

aws dynamodb batch-get-item \
    --request-items file://request-items.json \
    --return-consumed-capacity TOTAL

Contents of request-items.json:

{
    "MusicCollection": {
        "Keys": [
            {
                "Artist": {"S": "No One You Know"},
                "SongTitle": {"S": "Call Me Today"}
            },
            {
                "Artist": {"S": "Acme Band"},
                "SongTitle": {"S": "Happy Day"}
            },
            {
                "Artist": {"S": "No One You Know"},
                "SongTitle": {"S": "Scared of My Shadow"}
            }
        ],
        "ProjectionExpression":"AlbumTitle",
        "ConsistentRead": true
    }
}
  • Gets a maximum of 100 items, up to 16 MB of data
  • Use UnprocessedKeys to get next page of results
  • For eventually consistent reads, set ConsistentRead as false or remove it

Put an item

aws dynamodb put-item \
    --table-name MusicCollection \
    --item file://item.json \
    --return-consumed-capacity TOTAL \
    --return-item-collection-metrics SIZE

Contents of item.json:

{
		"Artist": {"S": "No One You Know"},
    "SongTitle": {"S": "Call Me Today"},
    "AlbumTitle": {"S": "Greatest Hits"}
}
  • Will replace an existing item that has the same primary key

Batch write items

aws dynamodb batch-write-item \
    --request-items file://request-items.json \
    --return-consumed-capacity INDEXES \
    --return-item-collection-metrics SIZE

Contents of request-items.json:

{
    "MusicCollection": [
        {
            "PutRequest": {
                "Item": {
                    "Artist": {"S": "No One You Know"},
                    "SongTitle": {"S": "Call Me Today"},
                    "AlbumTitle": {"S": "Somewhat Famous"}
                }
            }
        },
        {
            "DeleteRequest": {
                "Key": {
							      "Artist": {"S": "No One You Know"},
							      "SongTitle": {"S": "Scared of My Shadow"}
							}
            }
        }
    ]
}
  • A single call can write up to 16 MB of data: up to 25 put or delete requests, with each item up to 400 KB
  • PutRequest will replace an existing item with same primary key

Create a backup

aws dynamodb create-backup \
    --table-name MusicCollection \
    --backup-name MusicCollectionBackup

Delete an item

aws dynamodb delete-item \
    --table-name MusicCollection \
    --key '{"Artist": {"S": "No One You Know"}, "SongTitle": {"S": "Scared of My Shadow"}}'\
    --return-values ALL_OLD \
    --return-consumed-capacity TOTAL \
    --return-item-collection-metrics SIZE

Delete an item conditionally

aws dynamodb delete-item \
    --table-name ProductCatalog \
    --key '{"Id":{"N":"456"}}' \
    --condition-expression "(ProductCategory IN (:cat1, :cat2)) and (#P between :lo and :hi)" \
    --expression-attribute-names '{"#P": "Price"}' \
    --expression-attribute-values file://values.json \
    --return-values ALL_OLD

Contents of values.json:

{
    ":cat1": {"S": "Sporting Goods"},
    ":cat2": {"S": "Gardening Supplies"},
    ":lo": {"N": "500"},
    ":hi": {"N": "600"}
}

View provisioned capacity limits

aws dynamodb describe-limits

Describe a table

aws dynamodb describe-table --table-name MusicCollection

Output:

{
    "Table": {
        "AttributeDefinitions": [
            {
                "AttributeName": "Artist",
                "AttributeType": "S"
            },
            {
                "AttributeName": "SongTitle",
                "AttributeType": "S"
            }
        ],
        "ProvisionedThroughput": {
            "NumberOfDecreasesToday": 0,
            "WriteCapacityUnits": 5,
            "ReadCapacityUnits": 5
        },
        "TableSizeBytes": 0,
        "TableName": "MusicCollection",
        "TableStatus": "ACTIVE",
        "KeySchema": [
            {
                "KeyType": "HASH",
                "AttributeName": "Artist"
            },
            {
                "KeyType": "RANGE",
                "AttributeName": "SongTitle"
            }
        ],
        "ItemCount": 0,
        "CreationDateTime": 1421866952.062
    }
}

List tables

aws dynamodb list-tables

List tags

aws dynamodb list-tags-of-resource \
    --resource-arn arn:aws:dynamodb:us-west-2:123456789012:table/MusicCollection

Query

aws dynamodb query \
    --table-name MusicCollection \
    --projection-expression "SongTitle" \
    --key-condition-expression "Artist = :v1" \
    --expression-attribute-values '{":v1": {"S": "No One You Know"}"' \
    --return-consumed-capacity TOTAL
  • For strongly consistent reads, use --consistent-read
  • Use --scan-index-forward to sort in ascending order and --no-scan-index-forward to sort in descending order (sorted by sort key)

Query with filtering

aws dynamodb query \
    --table-name MusicCollection \
    --key-condition-expression "Artist = :v1" \
    --filter-expression "NOT (AlbumTitle IN (:v2, :v3))" \
    --expression-attribute-values file://values.json \
    --return-consumed-capacity TOTAL

Contents of values.json:

{
    ":v1": {"S": "No One You Know"},
    ":v2": {"S": "Blue Sky Blues"},
    ":v3": {"S": "Greatest Hits"}
}

Query and return item count

aws dynamodb query \
    --table-name MusicCollection \
    --select COUNT \
    --key-condition-expression "Artist = :v1" \
    --expression-attribute-values file://expression-attributes.json
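
Contents of expression-attributes.json (assumed to match the key condition above):

{
    ":v1": {"S": "No One You Know"}
}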

Query an index

aws dynamodb query \
    --table-name MusicCollection \
    --index-name AlbumTitleIndex \
    --key-condition-expression "Artist = :v1" \
    --expression-attribute-values '{":v1": {"S": "No One You Know"}}' \
    --select ALL_PROJECTED_ATTRIBUTES \
    --return-consumed-capacity INDEXES

Scan a table

aws dynamodb scan \
    --table-name MusicCollection \
    --filter-expression "Artist = :a" \
    --projection-expression "#ST, #AT" \
    --expression-attribute-names file://expression-attribute-names.json \
    --expression-attribute-values file://expression-attribute-values.json

Contents of expression-attribute-names.json:

{
    "#ST": "SongTitle",
    "#AT":"AlbumTitle"
}

Contents of expression-attribute-values.json:

{
    ":a": {"S": "No One You Know"}
}

Update an item

aws dynamodb update-item \
    --table-name MusicCollection \
    --key file://key.json \
    --update-expression "SET #Y = :y, #AT = :t" \
    --expression-attribute-names file://expression-attribute-names.json \
    --expression-attribute-values file://expression-attribute-values.json  \
    --return-values ALL_NEW \
    --return-consumed-capacity TOTAL \
    --return-item-collection-metrics SIZE

Contents of key.json:

{
    "Artist": {"S": "Acme Band"},
    "SongTitle": {"S": "Happy Day"}
}

Contents of expression-attribute-names.json:

{
    "#Y":"Year", "#AT":"AlbumTitle"
}
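
Contents of expression-attribute-values.json (the same values used in the conditional update example below):

{
    ":y":{"N": "2015"},
    ":t":{"S": "Louder Than Ever"}
}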

Update an item conditionally

aws dynamodb update-item \
    --table-name MusicCollection \
    --key file://key.json \
    --update-expression "SET #Y = :y, #AT = :t" \
    --expression-attribute-names file://expression-attribute-names.json \
    --expression-attribute-values file://expression-attribute-values.json  \
    --condition-expression "attribute_not_exists(#Y)"

Contents of key.json:

{
    "Artist": {"S": "Acme Band"},
    "SongTitle": {"S": "Happy Day"}
}

Contents of expression-attribute-names.json:

{
    "#Y":"Year",
    "#AT":"AlbumTitle"
}

Contents of expression-attribute-values.json:

{
    ":y":{"N": "2015"},
    ":t":{"S": "Louder Than Ever"}
}

Create a global secondary index

aws dynamodb update-table \
    --table-name MusicCollection \
    --attribute-definitions AttributeName=AlbumTitle,AttributeType=S \
    --global-secondary-index-updates file://gsi-updates.json

Contents of gsi-updates.json:

[
    {
        "Create": {
            "IndexName": "AlbumTitle-index",
            "KeySchema": [
                {
                    "AttributeName": "AlbumTitle",
                    "KeyType": "HASH"
                }
            ],
            "ProvisionedThroughput": {
                "ReadCapacityUnits": 10,
                "WriteCapacityUnits": 10
            },
            "Projection": {
                "ProjectionType": "ALL"
            }
        }
    }
]

Lambda

Add a permission

aws lambda add-permission \
    --function-name my-function \
    --action lambda:InvokeFunction \
    --statement-id sns \
    --principal sns.amazonaws.com

Output:

{
    "Statement":
    {
        "Sid":"sns",
        "Effect":"Allow",
        "Principal":{
            "Service":"sns.amazonaws.com"
        },
        "Action":"lambda:InvokeFunction",
        "Resource":"arn:aws:lambda:us-east-2:123456789012:function:my-function"
    }
}

Create a function

aws lambda create-function \
    --function-name my-function \
    --runtime nodejs18.x \
    --zip-file fileb://my-function.zip \
    --handler my-function.handler \
    --role arn:aws:iam::123456789012:role/service-role/MyTestFunction-role-tges6bf4

Update function code

aws lambda update-function-code \
    --function-name  my-function \
    --zip-file fileb://my-function.zip

Delete a function

aws lambda delete-function \
    --function-name my-function

Get function information

aws lambda get-function \
    --function-name  my-function

Output:

{
    "Concurrency": {
        "ReservedConcurrentExecutions": 100
    },
    "Code": {
        "RepositoryType": "S3",
        "Location": "https://awslambda-us-west-2-tasks.s3.us-west-2.amazonaws.com/snapshots/123456789012/my-function..."
    },
    "Configuration": {
        "TracingConfig": {
            "Mode": "PassThrough"
        },
        "Version": "$LATEST",
        "CodeSha256": "5tT2qgzYUHoqwR616pZ2dpkn/0J1FrzJmlKidWaaCgk=",
        "FunctionName": "my-function",
        "VpcConfig": {
            "SubnetIds": [],
            "VpcId": "",
            "SecurityGroupIds": []
        },
        "MemorySize": 128,
        "RevisionId": "28f0fb31-5c5c-43d3-8955-03e76c5c1075",
        "CodeSize": 304,
        "FunctionArn": "arn:aws:lambda:us-west-2:123456789012:function:my-function",
        "Handler": "index.handler",
        "Role": "arn:aws:iam::123456789012:role/service-role/helloWorldPython-role-uy3l9qyq",
        "Timeout": 3,
        "LastModified": "2019-09-24T18:20:35.054+0000",
        "Runtime": "nodejs10.x",
        "Description": ""
    }
}

Get function configuration

aws lambda get-function-configuration \
    --function-name  my-function:2

Output:

{
    "FunctionName": "my-function",
    "LastModified": "2019-09-26T20:28:40.438+0000",
    "RevisionId": "e52502d4-9320-4688-9cd6-152a6ab7490d",
    "MemorySize": 256,
    "Version": "2",
    "Role": "arn:aws:iam::123456789012:role/service-role/my-function-role-uy3l9qyq",
    "Timeout": 3,
    "Runtime": "nodejs10.x",
    "TracingConfig": {
        "Mode": "PassThrough"
    },
    "CodeSha256": "5tT2qgzYUHaqwR716pZ2dpkn/0J1FrzJmlKidWoaCgk=",
    "Description": "",
    "VpcConfig": {
        "SubnetIds": [],
        "VpcId": "",
        "SecurityGroupIds": []
    },
    "CodeSize": 304,
    "FunctionArn": "arn:aws:lambda:us-west-2:123456789012:function:my-function:2",
    "Handler": "index.handler"
}

Update function configuration

aws lambda update-function-configuration \
    --function-name  my-function \
    --memory-size 256

Invoke a function

aws lambda invoke \
    --function-name my-function \
    --cli-binary-format raw-in-base64-out \
    --payload '{ "name": "Bob" }' \
    response.json

Invoke a function asynchronously

aws lambda invoke \
    --function-name my-function \
    --invocation-type Event \
    --cli-binary-format raw-in-base64-out \
    --payload '{ "name": "Bob" }' \
    response.json

Get reserved concurrent execution limit

aws lambda get-function-concurrency \
    --function-name my-function

Add reserved concurrent execution limit

aws lambda put-function-concurrency \
    --function-name  my-function  \
    --reserved-concurrent-executions 100

Remove reserved concurrent execution limit

aws lambda delete-function-concurrency \
    --function-name  my-function

Get provisioned concurrency configuration

aws lambda get-provisioned-concurrency-config \
    --function-name my-function \
    --qualifier BLUE

Add provisioned concurrency configuration

aws lambda put-provisioned-concurrency-config \
    --function-name my-function \
    --qualifier BLUE \
    --provisioned-concurrent-executions 100

Delete provisioned concurrency configuration

aws lambda delete-provisioned-concurrency-config \
    --function-name my-function \
    --qualifier GREEN

Get Lambda limits and usage

aws lambda get-account-settings

Output:

{
    "AccountLimit": {
       "CodeSizeUnzipped": 262144000,
       "UnreservedConcurrentExecutions": 1000,
       "ConcurrentExecutions": 1000,
       "CodeSizeZipped": 52428800,
       "TotalCodeSize": 80530636800
    },
    "AccountUsage": {
       "FunctionCount": 4,
       "TotalCodeSize": 9426
    }
}

Get policy attached to function

aws lambda get-policy \
    --function-name my-function

List functions

aws lambda list-functions

List layers compatible with a runtime

aws lambda list-layers \
    --compatible-runtime python3.11

List tags

aws lambda list-tags \
    --resource arn:aws:lambda:us-west-2:123456789012:function:my-function

List versions of a function

aws lambda list-versions-by-function \
    --function-name my-function

S3

Create a multipart upload

aws s3api create-multipart-upload --bucket my-bucket --key 'multipart/01'

Output:

{
    "Bucket": "my-bucket",
    "UploadId": "dfRtDYU0WWCCcH43C3WFbkRONycyCpTJJvxu2i5GYkZljF.Yxwh6XG7WfS2vC4to6HiV6Yjlx.cph0gtNBtJ8P3URCSbB7rjxI5iEwVDmgaXZOGgkk5nVTW16HOQ5l0R",
    "Key": "multipart/01"
}

List active multipart uploads

aws s3api list-multipart-uploads --bucket my-bucket

List parts that have been uploaded

aws s3api list-parts --bucket my-bucket --key 'multipart/01' --upload-id dfRtDYU0WWCCcH43C3WFbkRONycyCpTJJvxu2i5GYkZljF.Yxwh6XG7WfS2vC4to6HiV6Yjlx.cph0gtNBtJ8P3URCSbB7rjxI5iEwVDmgaXZOGgkk5nVTW16HOQ5l0R

Upload a part

aws s3api upload-part \
    --bucket my-bucket \
    --key 'multipart/01' \
    --part-number 1 \
    --body part01 \
    --upload-id "dfRtDYU0WWCCcH43C3WFbkRONycyCpTJJvxu2i5GYkZljF.Yxwh6XG7WfS2vC4to6HiV6Yjlx.cph0gtNBtJ8P3URCSbB7rjxI5iEwVDmgaXZOGgkk5nVTW16HOQ5l0R"

Complete a multipart upload

aws s3api complete-multipart-upload \
		--multipart-upload file://mpustruct \
		--bucket my-bucket --key 'multipart/01' \
		--upload-id dfRtDYU0WWCCcH43C3WFbkRONycyCpTJJvxu2i5GYkZljF.Yxwh6XG7WfS2vC4to6HiV6Yjlx.cph0gtNBtJ8P3URCSbB7rjxI5iEwVDmgaXZOGgkk5nVTW16HOQ5l0R
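
A sketch of the mpustruct file referenced above: it lists each uploaded part's PartNumber and the ETag returned by upload-part (the ETag value here is a placeholder):

{
    "Parts": [
        {
            "ETag": "e868e0f4719e394144ef36531ee6824c",
            "PartNumber": 1
        }
    ]
}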

Abort a multipart upload

aws s3api abort-multipart-upload \
    --bucket my-bucket \
    --key multipart/01 \
    --upload-id dfRtDYU0WWCCcH43C3WFbkRONycyCpTJJvxu2i5GYkZljF.Yxwh6XG7WfS2vC4to6HiV6Yjlx.cph0gtNBtJ8P3URCSbB7rjxI5iEwVDmgaXZOGgkk5nVTW16HOQ5l0R

Copy an object from bucket-1 to bucket-2

aws s3api copy-object --copy-source bucket-1/test.txt --key test.txt --bucket bucket-2

Copy a file from S3 to S3

aws s3 cp s3://mybucket/test.txt s3://mybucket/test2.txt

Copy a local file to S3

aws s3 cp test.txt s3://mybucket/test2.txt
  • To add an expiry, use --expires with a timestamp. For example: --expires 2014-10-01T20:30:00Z

Copy an S3 object to a local file

aws s3 cp s3://mybucket/test.txt test2.txt

Copy an S3 object from one bucket to another

aws s3 cp s3://mybucket/test.txt s3://mybucket2/

Upload to an access point

aws s3 cp mydoc.txt s3://arn:aws:s3:us-west-2:123456789012:accesspoint/myaccesspoint/mykey

Download from an access point

aws s3 cp s3://arn:aws:s3:us-west-2:123456789012:accesspoint/myaccesspoint/mykey mydoc.txt

Recursively copy S3 objects to a local directory

aws s3 cp s3://mybucket . --recursive

Recursively copy local files to S3

aws s3 cp myDir s3://mybucket/ \
    --recursive \
    --exclude "*.jpg"
  • Combine --exclude and --include options to copy only objects that match a pattern, excluding all others.
  • --recursive to recursively include all files under a folder/key/prefix
  • --acl acceptable values are private, public-read, public-read-write, authenticated-read, aws-exec-read, bucket-owner-read, bucket-owner-full-control and log-delivery-write
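
For example, to copy only the .jpg files under myDir (a sketch combining the options above):

aws s3 cp myDir s3://mybucket/ --recursive --exclude "*" --include "*.jpg"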

Move a local file to bucket

aws s3 mv test.txt s3://mybucket/test2.txt
  • To move with the original name, use: aws s3 mv s3://mybucket/test.txt s3://mybucket2/

Move an object to local folder

aws s3 mv s3://mybucket/test.txt test2.txt

Move an object to another bucket

aws s3 mv s3://mybucket/test.txt s3://mybucket/test2.txt

Move a file to an access point

aws s3 mv mydoc.txt s3://arn:aws:s3:us-west-2:123456789012:accesspoint/myaccesspoint/mykey

Move all objects in a bucket to local folder

aws s3 mv s3://mybucket . --recursive
  • Combine --exclude and --include options to move only objects that match a pattern, excluding all others.
  • --recursive to recursively include all files under a folder/key/prefix
  • --acl acceptable values are private, public-read, public-read-write, authenticated-read, aws-exec-read, bucket-owner-read, bucket-owner-full-control and log-delivery-write

Grant permissions for an S3 object

aws s3 cp file.txt s3://mybucket/ \
    --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers full=id=79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be

Delete an object

aws s3api delete-object --bucket my-bucket --key test.txt

Delete multiple objects

aws s3api delete-objects --bucket my-bucket --delete file://delete.json

Contents of delete.json:

{
  "Objects": [
    {
      "Key": "test1.txt"
    }
  ],
  "Quiet": false
}

Get object ACL

aws s3api get-object-acl --bucket my-bucket --key index.html

Get object metadata without object

aws s3api head-object --bucket my-bucket --key index.html

Output:

{
    "AcceptRanges": "bytes",
    "ContentType": "text/html",
    "LastModified": "Thu, 16 Apr 2015 18:19:14 GMT",
    "ContentLength": 77,
    "VersionId": "null",
    "ETag": "\"30a6ec7e1a9ad79c203d05a589c8b400\"",
    "Metadata": {}
}

Get object attributes without object

aws s3api get-object-attributes \
    --bucket my-bucket \
    --key doc1.rtf \
    --object-attributes "StorageClass" "ETag" "ObjectSize"

Download a S3 object

aws s3api get-object --bucket text-content --key dir/my_images.tar.bz2 my_images.tar.bz2

Upload a S3 object

aws s3api put-object --bucket text-content --key dir-1/my_images.tar.bz2 --body my_images.tar.bz2

List objects

aws s3api list-objects --bucket text-content --query 'Contents[].{Key: Key, Size: Size}'

List object versions

aws s3api list-object-versions --bucket my-bucket --prefix index.html

List objects from access point

aws s3 ls s3://arn:aws:s3:us-west-2:123456789012:accesspoint/myaccesspoint/

Delete an object

aws s3 rm s3://mybucket/test2.txt

Delete all objects in a bucket

aws s3 rm s3://mybucket --recursive

Delete an object from an access point

aws s3 rm s3://arn:aws:s3:us-west-2:123456789012:accesspoint/myaccesspoint/mykey
  • Combine --exclude and --include options to delete only objects that match a pattern, excluding all others.
  • --recursive to recursively delete all objects under a folder/key/prefix

Filter the contents of an object based on an SQL statement

aws s3api select-object-content \
    --bucket my-bucket \
    --key my-data-file.csv \
    --expression "select * from s3object limit 100" \
    --expression-type 'SQL' \
    --input-serialization '{"CSV": {}, "CompressionType": "NONE"}' \
    --output-serialization '{"CSV": {}}' "output.csv"
  • Supported object formats are CSV, JSON and Parquet
  • GZIP or BZIP2 compressed CSV and JSON files are supported. Columnar compression for Parquet using GZIP or Snappy is supported.
  • Files should have UTF-8 encoding

Sync all local objects to the specified bucket

aws s3 sync . s3://mybucket
  • Use --delete to delete files in the bucket that do not exist in the local folder

Sync all S3 objects from one bucket to another bucket

aws s3 sync s3://mybucket s3://mybucket2

Sync all objects from a bucket to the local directory

aws s3 sync s3://mybucket .

Sync to an S3 access point

aws s3 sync . s3://arn:aws:s3:us-west-2:123456789012:accesspoint/myaccesspoint/
  • Combine --exclude and --include options to copy only objects that match a pattern, excluding all others.

Create a bucket

aws s3 mb s3://mybucket

Verify that a bucket exists and that you have permission to access it

aws s3api head-bucket --bucket my-bucket

List buckets

aws s3api list-buckets --query "Buckets[].Name"

List all buckets owned by user

aws s3 ls

List all prefixes and objects in a bucket

aws s3 ls s3://mybucket \
    --recursive \
    --human-readable \
    --summarize

Get bucket ACL

aws s3api get-bucket-acl --bucket my-bucket

Add permission to a bucket using ACL

aws s3api put-bucket-acl \
		--bucket MyBucket \
		--grant-full-control emailaddress=user1@example.com,emailaddress=user2@example.com \
		--grant-read uri=http://acs.amazonaws.com/groups/global/AllUsers

Add permission to a bucket using ACL

aws s3api put-object-acl \
		--bucket MyBucket \
		--key file.txt \
		--grant-full-control emailaddress=user1@example.com,emailaddress=user2@example.com \
		--grant-read uri=http://acs.amazonaws.com/groups/global/AllUsers

Set the block public access configuration

aws s3api put-public-access-block \
    --bucket my-bucket \
    --public-access-block-configuration "BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true"

Delete a bucket

aws s3 rb s3://mybucket
  • --force will remove all objects and then delete the bucket

SQS

Delete a message

aws sqs delete-message --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --receipt-handle AQEBRXTo...q2doVA==
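
The receipt handle comes from a prior receive-message call, for example:

aws sqs receive-message --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue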

Get Queue attributes

aws sqs get-queue-attributes --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue --attribute-names All

Output:

{
  "Attributes": {
    "ApproximateNumberOfMessagesNotVisible": "0",
    "RedrivePolicy": "{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:80398EXAMPLE:MyDeadLetterQueue\",\"maxReceiveCount\":1000}",
    "MessageRetentionPeriod": "345600",
    "ApproximateNumberOfMessagesDelayed": "0",
    "MaximumMessageSize": "262144",
    "CreatedTimestamp": "1442426968",
    "ApproximateNumberOfMessages": "0",
    "ReceiveMessageWaitTimeSeconds": "0",
    "DelaySeconds": "0",
    "VisibilityTimeout": "30",
    "LastModifiedTimestamp": "1442426968",
    "QueueArn": "arn:aws:sqs:us-east-1:80398EXAMPLE:MyNewQueue"
  }
}

Get queue URL

aws sqs get-queue-url --queue-name MyQueue

List source queues of a dead-letter queue

aws sqs list-dead-letter-source-queues --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyDeadLetterQueue

List all queues

aws sqs list-queues
  • Use --queue-name-prefix to filter by a name starting with a specific value

Delete all messages in a queue

aws sqs purge-queue --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyNewQueue

Send a message

aws sqs send-message \
    --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue \
    --message-body "Information about the largest city in Any Region." \
    --delay-seconds 10 \
    --message-attributes file://send-message.json

Contents of send-message.json:

{
  "City": {
    "DataType": "String",
    "StringValue": "Any City"
  },
  "Greeting": {
    "DataType": "Binary",
    "BinaryValue": "Hello, World!"
  },
  "Population": {
    "DataType": "Number",
    "StringValue": "1250800"
  }
}

Send multiple messages

aws sqs send-message-batch \
    --queue-url https://sqs.us-east-1.amazonaws.com/80398EXAMPLE/MyQueue \
    --entries file://send-message-batch.json

Contents of send-message-batch.json

[
  {
    "Id": "FuelReport-0001-2015-09-16T140731Z",
    "MessageBody": "Fuel report for account 0001 on 2015-09-16 at 02:07:31 PM.",
    "DelaySeconds": 10,
    "MessageAttributes": {
      "SellerName": {
        "DataType": "String",
        "StringValue": "Example Store"
      },
      "City": {
        "DataType": "String",
        "StringValue": "Any City"
      },
      "Region": {
        "DataType": "String",
        "StringValue": "WA"
      },
      "PostalCode": {
        "DataType": "String",
        "StringValue": "99065"
      },
      "PricePerGallon": {
        "DataType": "Number",
        "StringValue": "1.99"
      }
    }
  },
  {
    "Id": "FuelReport-0002-2015-09-16T140930Z",
    "MessageBody": "Fuel report for account 0002 on 2015-09-16 at 02:09:30 PM.",
    "DelaySeconds": 10,
    "MessageAttributes": {
      "SellerName": {
        "DataType": "String",
        "StringValue": "Example Fuels"
      },
      "City": {
        "DataType": "String",
        "StringValue": "North Town"
      },
      "Region": {
        "DataType": "String",
        "StringValue": "WA"
      },
      "PostalCode": {
        "DataType": "String",
        "StringValue": "99123"
      },
      "PricePerGallon": {
        "DataType": "Number",
        "StringValue": "1.87"
      }
    }
  }
]

Start message move task - redrive a DLQ

aws sqs start-message-move-task \
    --source-arn arn:aws:sqs:us-west-2:80398EXAMPLE:MyQueue1 \
    --destination-arn arn:aws:sqs:us-west-2:80398EXAMPLE:MyQueue2 \
    --max-number-of-messages-per-second 50
  • Redrives messages from a DLQ back to its source queue (messages can only be redriven to an SQS queue, not to Lambda, SNS, etc.)
  • Only standard queues support redrive. FIFO queues don’t support redrive
  • Only one active message movement task is supported per queue at any given time

List message move tasks

aws sqs list-message-move-tasks \
    --source-arn arn:aws:sqs:us-west-2:80398EXAMPLE:MyQueue \
    --max-results 2

Cancel message move tasks

aws sqs cancel-message-move-task --task-handle AQEB6nR4...HzlvZQ==

API Gateway

Get API Gateway account settings

aws apigateway get-account

Output:

{
    "cloudwatchRoleArn": "arn:aws:iam::123412341234:role/APIGatewayToCloudWatchLogsRole",
    "throttleSettings": {
        "rateLimit": 500.0,
        "burstLimit": 1000
    }
}

List REST APIs

aws apigateway get-rest-apis

Test invoke the root resource in an API by making a GET request

aws apigateway test-invoke-method --rest-api-id 1234123412 --resource-id avl5sg8fw8 --http-method GET --path-with-query-string '/'

Test invoke a sub-resource in an API by making a GET request with a path parameter value

aws apigateway test-invoke-method --rest-api-id 1234123412 --resource-id 3gapai --http-method GET --path-with-query-string '/pets/1'

Send data to a WebSocket connection

aws apigatewaymanagementapi post-to-connection \
    --connection-id L0SM9cOFvHcCIhw= \
    --data "Hello from API Gateway!" \
    --endpoint-url https://aabbccddee.execute-api.us-west-2.amazonaws.com/prod

Get information about a WebSocket connection

aws apigatewaymanagementapi get-connection \
    --connection-id L0SM9cOFvHcCIhw= \
    --endpoint-url https://aabbccddee.execute-api.us-west-2.amazonaws.com/prod

Delete a WebSocket connection

aws apigatewaymanagementapi delete-connection \
    --connection-id L0SM9cOFvHcCIhw= \
    --endpoint-url https://aabbccddee.execute-api.us-west-2.amazonaws.com/prod

RDS

Describe account attributes

aws rds describe-account-attributes

Execute a batch SQL statement over an array of parameters

aws rds-data batch-execute-statement \
    --resource-arn "arn:aws:rds:us-west-2:123456789012:cluster:mydbcluster" \
    --database "mydb" \
    --secret-arn "arn:aws:secretsmanager:us-west-2:123456789012:secret:mysecret" \
    --sql "insert into mytable values (:id, :val)" \
    --parameter-sets "[[{\"name\": \"id\", \"value\": {\"longValue\": 1}},{\"name\": \"val\", \"value\": {\"stringValue\": \"ValueOne\"}}],
        [{\"name\": \"id\", \"value\": {\"longValue\": 2}},{\"name\": \"val\", \"value\": {\"stringValue\": \"ValueTwo\"}}],
        [{\"name\": \"id\", \"value\": {\"longValue\": 3}},{\"name\": \"val\", \"value\": {\"stringValue\": \"ValueThree\"}}]]"

Begin a transaction

aws rds-data begin-transaction \
    --resource-arn "arn:aws:rds:us-west-2:123456789012:cluster:mydbcluster" \
    --database "mydb" \
    --secret-arn "arn:aws:secretsmanager:us-west-2:123456789012:secret:mysecret"

Output:

{
    "transactionId": "ABC1234567890xyz"
}

Commit a SQL transaction

aws rds-data commit-transaction \
    --resource-arn "arn:aws:rds:us-west-2:123456789012:cluster:mydbcluster" \
    --secret-arn "arn:aws:secretsmanager:us-west-2:123456789012:secret:mysecret" \
    --transaction-id "ABC1234567890xyz"

Execute a SQL statement that is part of a transaction

aws rds-data execute-statement \
    --resource-arn "arn:aws:rds:us-west-2:123456789012:cluster:mydbcluster" \
    --database "mydb" \
    --secret-arn "arn:aws:secretsmanager:us-west-2:123456789012:secret:mysecret" \
    --sql "update mytable set quantity=5 where id=201" \
    --transaction-id "ABC1234567890xyz"

Execute a SQL statement with parameters

aws rds-data execute-statement \
    --resource-arn "arn:aws:rds:us-east-1:123456789012:cluster:mydbcluster" \
    --database "mydb" \
    --secret-arn "arn:aws:secretsmanager:us-east-1:123456789012:secret:mysecret" \
    --sql "insert into mytable values (:id, :val)" \
    --parameters "[{\"name\": \"id\", \"value\": {\"longValue\": 1}},{\"name\": \"val\", \"value\": {\"stringValue\": \"value1\"}}]"

Roll back a SQL transaction

aws rds-data rollback-transaction \
    --resource-arn "arn:aws:rds:us-west-2:123456789012:cluster:mydbcluster" \
    --secret-arn "arn:aws:secretsmanager:us-west-2:123456789012:secret:mysecret" \
    --transaction-id "ABC1234567890xyz"

References