About Boto3

Boto3 is the Amazon Web Services (AWS) Software Development Kit (SDK) for Python. It enables developers to build software that uses AWS services such as EC2, S3, and RDS.
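
As a quick illustration of the basic pattern (a minimal sketch, assuming credentials are already configured on the machine), you create a client for a service and call its API methods:

import boto3

# Create a client for a service; credentials are resolved from the
# environment, ~/.aws/credentials, or an attached IAM role
s3 = boto3.client('s3')

# Call a service API, e.g. list the S3 buckets in the account
for bucket in s3.list_buckets()['Buckets']:
    print(bucket['Name'])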

In this post I will build a portable Python 3.9 + Boto3 Docker environment to test some AWS automation tasks.

Build and run a Docker image with Python 3.9 + Boto3

As I do not want to install Python, Boto3, and the AWS CLI on my local PC, creating a Docker image with all the software ready to go is the best way to get a portable environment.

root@ubt-server:~# vim Dockerfile
# Build from python:3.9.19-alpine3.19
FROM python:3.9.19-alpine3.19
# install boto3 and awscli
RUN pip install --upgrade pip && \
    pip install --upgrade awscli && \
    pip install --upgrade boto3
# set work dir
WORKDIR /work
# run Python
CMD "python"
# build a docker image from the above Dockerfile 
root@ubt-server:~# docker image build -t zack_aws_boto3:v1 .

# ls docker images 
root@ubt-server:~# docker image ls
REPOSITORY               TAG             IMAGE ID       CREATED         SIZE
zack_aws_boto3           v1              07a13f7801ed   1 days ago     998MB
zackpy                   latest          287ba6873741   4 days ago      48.2MB
zackz001/gitops-jekyll   latest          d92894f7be6d   6 days ago      70.9MB
postgres                 15.0            027eba2e8939   19 months ago   377MB

# run docker and mount local python work dir
root@ubt-server:~/pythonwork# docker run -ti -v ${PWD}:/work zack_aws_boto3:v1 bash
root@c04670a43564:/# 
root@c04670a43564:/# cd work && ls

# configure aws in the container
root@c04670a43564:/work# aws configure 
AWS Access Key ID [****************GFNW]: 
AWS Secret Access Key [****************Db7O]: 
Default region name [ap-southeast-2]: 
Default output format [None]: 

# validate AWS credentials by listing EC2 instance IDs
root@c04670a43564:/work# aws ec2 describe-instances --query "Reservations[*].Instances[*].InstanceId" --output json
[
    [
        "i-076226daa5aaf7cf2"
    ]
]
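
The same check can be done from inside Python; a minimal sketch using STS to print the identity the configured credentials resolve to:

import boto3

# Print the account and caller ARN that the configured credentials map to
sts = boto3.client('sts')
identity = sts.get_caller_identity()
print(identity['Account'], identity['Arn'])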

Manage AWS resources with Python Boto3 scripts

Now that the Python and Boto3 environment is ready, here are some AWS tasks I want to achieve with Python scripts:

  • List EC2 instance name, instance ID, and state
root@ubt-server:~/pythonwork# vim app.py
# import boto3 library
import boto3

def list_ec2_instances():
    # Create a session using default AWS profile
    session = boto3.Session()
    # Create an EC2 client
    ec2_client = session.client('ec2')

    # Describe EC2 instances
    response = ec2_client.describe_instances()

    # Iterate over the instances
    for reservation in response['Reservations']:
        for instance in reservation['Instances']:
            # Get the instance ID
            instance_id = instance['InstanceId']
            
            # Get the instance state
            instance_state = instance['State']['Name']
            
            # Get the instance Name tag if it exists
            instance_name = 'No Name'
            if 'Tags' in instance:
                for tag in instance['Tags']:
                    if tag['Key'] == 'Name':
                        instance_name = tag['Value']
                        break
            
            # Print instance ID, Name, and State
            print(f"Instance ID: {instance_id}, Name: {instance_name}, State: {instance_state}")

if __name__ == "__main__":
    list_ec2_instances()

root@c04670a43564:/work# python app.py 
Instance ID: i-076226daa5aaf7cf2, Name: zack-blog, State: stopped
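
Note that describe_instances returns results in pages, so accounts with many instances can be truncated in a single call; a minimal sketch of the same loop using a paginator:

import boto3

ec2_client = boto3.client('ec2')

# Iterate over every page of reservations instead of a single response
paginator = ec2_client.get_paginator('describe_instances')
for page in paginator.paginate():
    for reservation in page['Reservations']:
        for instance in reservation['Instances']:
            print(instance['InstanceId'], instance['State']['Name'])
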
  • Filter EC2 instances without an “owner” tag
# create app-untagged.py
root@ubt-server:~/pythonwork# vim app-untagged.py
import boto3

def get_untagged_ec2_instances():
    ec2_client = boto3.client('ec2')
    response = ec2_client.describe_instances()
    
    untagged_instances = []
    
    for reservation in response['Reservations']:
        for instance in reservation['Instances']:
            has_owner_tag = False
            if 'Tags' in instance:
                for tag in instance['Tags']:
                    if tag['Key'].lower() == 'owner':
                        has_owner_tag = True
                        break
            
            if not has_owner_tag:
                instance_id = instance['InstanceId']
                instance_state = instance['State']['Name']
                untagged_instances.append({'InstanceId': instance_id, 'State': instance_state})
    
    return untagged_instances

untagged_instances = get_untagged_ec2_instances()
print("Untagged Instances:", untagged_instances)

# run the script to list EC2 instances missing the "owner" tag
root@c04670a43564:/work# python app-untagged.py 
Untagged Instances: [{'InstanceId': 'i-076226daa5aaf7cf2', 'State': 'stopped'}]
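
If these instances should also be remediated, a few lines appended to app-untagged.py could add the missing tag; a minimal sketch, where the 'devops-team' owner value is a placeholder rather than anything from this account:

# (appended to app-untagged.py) tag each instance that is missing an owner tag
ec2_client = boto3.client('ec2')
for inst in untagged_instances:
    # 'devops-team' is a placeholder owner value
    ec2_client.create_tags(
        Resources=[inst['InstanceId']],
        Tags=[{'Key': 'owner', 'Value': 'devops-team'}]
    )
    print(f"Tagged {inst['InstanceId']} with owner=devops-team")
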
  • Create a Lambda function to find EBS volume snapshots older than 30 days and delete them

To achieve this, we need to:

  1. Create an IAM role that allows Lambda to manage EBS volume snapshots
  2. Create the Python Lambda function below
  3. Zip the function and upload the package (see the sketch after the zip step)
  4. Create a CloudWatch Events (EventBridge) rule to trigger it every 30 days
# create lambda function to delete snapshots older than 30 days
root@ubt-server:~/pythonwork# vim app-snapshot-older-30days.py
import boto3
from datetime import datetime, timezone, timedelta

def lambda_handler(event, context):
    ec2_client = boto3.client('ec2')
    
    # Get the current time
    now = datetime.now(timezone.utc)
    
    # Define the time threshold
    time_threshold = now - timedelta(days=30)
    
    # Describe snapshots
    snapshots = ec2_client.describe_snapshots(OwnerIds=['self'])['Snapshots']
    
    # Filter snapshots older than 30 days
    old_snapshots = [snap for snap in snapshots if snap['StartTime'] < time_threshold]
    
    # Delete old snapshots
    for snapshot in old_snapshots:
        snapshot_id = snapshot['SnapshotId']
        ec2_client.delete_snapshot(SnapshotId=snapshot_id)
        print(f"Deleted snapshot: {snapshot_id}")
    
    return {
        'statusCode': 200,
        'body': f"Deleted {len(old_snapshots)} snapshots."
    }

# zip Package for the Lambda Function
root@ubt-server:~/pythonwork# zip function.zip app-snapshot-older-30days.py
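
Steps 3 and 4 can also be scripted with Boto3; the sketch below uploads function.zip, creates the function, and wires up a 30-day schedule. The IAM role ARN and the function/rule names are placeholders, not values from this account:

import boto3

lambda_client = boto3.client('lambda')
events_client = boto3.client('events')

# Placeholder role ARN from step 1; it must allow ec2:DescribeSnapshots and ec2:DeleteSnapshot
role_arn = 'arn:aws:iam::123456789012:role/lambda-ebs-snapshot-cleanup'

# Create the Lambda function from the zip package built above
with open('function.zip', 'rb') as f:
    fn = lambda_client.create_function(
        FunctionName='delete-old-snapshots',
        Runtime='python3.9',
        Role=role_arn,
        Handler='app-snapshot-older-30days.lambda_handler',
        Code={'ZipFile': f.read()},
        Timeout=60,
    )

# Create a schedule that fires every 30 days and point it at the function
rule = events_client.put_rule(
    Name='delete-old-snapshots-every-30-days',
    ScheduleExpression='rate(30 days)',
    State='ENABLED',
)
lambda_client.add_permission(
    FunctionName='delete-old-snapshots',
    StatementId='allow-eventbridge-invoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn'],
)
events_client.put_targets(
    Rule='delete-old-snapshots-every-30-days',
    Targets=[{'Id': 'delete-old-snapshots', 'Arn': fn['FunctionArn']}],
)
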
  • Email me when a security group allows inbound SSH (port 22) from everywhere (0.0.0.0/0)

To achieve this, we need to:

  1. Enable AWS CloudTrail
  2. Create a CloudWatch Events rule that captures CloudTrail events for security group changes (see the sketch after this list)
  3. Create the Lambda function below, which sends an email when an inbound rule allows port 22 from everywhere
  4. Allow CloudWatch Events to invoke the Lambda function
  5. Add the Lambda function as a target of the CloudWatch Events rule
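
For step 2, the rule needs an event pattern that matches the CloudTrail-recorded EC2 calls that create or modify security group rules; a minimal Boto3 sketch (the rule name is a placeholder):

import boto3
import json

events_client = boto3.client('events')

# Match CloudTrail-recorded EC2 API calls that create or change security group ingress
event_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ec2.amazonaws.com"],
        "eventName": ["AuthorizeSecurityGroupIngress", "CreateSecurityGroup"]
    }
}

events_client.put_rule(
    Name='sg-open-ssh-alert',   # placeholder rule name
    EventPattern=json.dumps(event_pattern),
    State='ENABLED',
)
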
root@ubt-server:~/pythonwork# vim app-sg-allow-22.py
import boto3
import json

def lambda_handler(event, context):
    # Initialize boto3 clients
    ses_client = boto3.client('ses')
    ec2_client = boto3.client('ec2')
    
    # Email details
    sender = 'zhbsoftboy1@gmail.com'
    recipient = 'zhbsoftboy1@gmail.com'
    subject = 'Security Group Alert: Port 22 Open to everywhere'
    
    # Extract details from the event
    detail = event['detail']
    event_name = detail['eventName']
    security_group_id = None
    ip_permissions = []
    
    if event_name == 'AuthorizeSecurityGroupIngress':
        security_group_id = detail['requestParameters']['groupId']
        ip_permissions = detail['requestParameters'].get('ipPermissions', {}).get('items', [])
    elif event_name == 'CreateSecurityGroup':
        security_group_id = detail['responseElements']['groupId']
        # CreateSecurityGroup requests do not necessarily include any ingress rules
        ip_permissions = detail['requestParameters'].get('ipPermissions', {}).get('items', [])
    
    # Check if port 22 is open to 0.0.0.0/0
    if security_group_id and ip_permissions:
        for permission in ip_permissions:
            if 'ipRanges' in permission:
                for ip_range in permission['ipRanges']['items']:
                    if ip_range['cidrIp'] == '0.0.0.0/0' and permission.get('fromPort') == 22 and permission.get('toPort') == 22:
                        # Compose email body
                        body_text = (f"Security Group ID: {security_group_id} has been modified to allow port 22 from everywhere (0.0.0.0/0).")
                        body_html = f"""<html>
                        <head></head>
                        <body>
                          <h1>Security Group Alert</h1>
                          <p>Security Group ID: <b>{security_group_id}</b> has been modified to allow port 22 from everywhere (0.0.0.0/0).</p>
                        </body>
                        </html>"""
                        
                        # Send email
                        response = ses_client.send_email(
                            Source=sender,
                            Destination={'ToAddresses': [recipient]},
                            Message={
                                'Subject': {'Data': subject},
                                'Body': {
                                    'Text': {'Data': body_text},
                                    'Html': {'Data': body_html}
                                }
                            }
                        )
                        print(f"Email sent! Message ID: {response['MessageId']}")
    
    return {
        'statusCode': 200,
        'body': json.dumps('Lambda function executed successfully!')
    }
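
One caveat: SES only sends from verified identities (and, while the account is in the SES sandbox, only to verified recipients), so the addresses used above must be verified first. That step can also be scripted; a minimal sketch:

import boto3

ses_client = boto3.client('ses')

# Trigger a verification email for the sender/recipient identity used above
ses_client.verify_email_identity(EmailAddress='zhbsoftboy1@gmail.com')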

Conclusion

There are many ways to automate AWS tasks with Python Boto3 scripts. Combined with Lambda functions and event triggers, many resource management tasks can be scheduled and run in a fully scripted way.