How To Interact With AWS S3 In Python Using Boto3

Amazon S3 (Simple Storage Service) is a highly scalable, durable object storage service provided by AWS.
 
S3 consists of buckets and objects. A bucket is like a folder in a file system, and S3 objects are the files we store inside a bucket.
 
Each object consists of a key, a value, and metadata.
 
The key is a unique identifier for the object within its bucket, the value is the actual object data, and the metadata is data about the data (for example, the content type or custom tags).
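As a concrete (hypothetical) illustration, here is how the three parts map onto a single object; these happen to be the same parameter names boto3's put_object accepts:

```python
# Hypothetical example: the three parts of one S3 object.
obj = {
    "Key": "reports/2023/summary.csv",        # key: unique identifier within the bucket
    "Body": b"date,total\n2023-01-01,42\n",   # value: the actual object data
    "Metadata": {"author": "data-team"},      # metadata: data about the data
}
```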
 
To access S3 programmatically we need an aws_access_key_id and an aws_secret_access_key. We get them when creating a new user in AWS through IAM, and we can regenerate the secret access key later through AWS IAM as well.
 

Introduction to Boto3

 
Boto3 is the library we use in Python to interact with S3. Boto3 offers two ways to interact with an AWS service: a client or a resource object. The major difference between the two is that the client is a low-level class that maps directly to the AWS API, while the resource is a high-level service class; it's a wrapper on the boto3 client.
 
First, we need to create the boto3 session. A session holds the AWS credentials and is where we initiate connectivity to AWS services; the aws_session_token is optional and only needed when using temporary credentials.
    import boto3

    # Placeholder credentials; in practice load these from configuration or the environment.
    aws_id = 'YOUR_ACCESS_KEY_ID'
    secret = 'YOUR_SECRET_ACCESS_KEY'
    region = 'us-east-1'

    def aws_session(region_name=region, id=aws_id, secret=secret, session_token=None):
        return boto3.session.Session(aws_access_key_id=id,
                                     aws_secret_access_key=secret,
                                     region_name=region_name,
                                     aws_session_token=session_token)
Now we will create the S3 client, which we will use in the rest of the tutorial.
    session = aws_session()
    s3_client = session.client('s3')

Create/Delete/List Bucket

 
Before we upload any object to S3 we need to create a bucket, using the following lines of code.
    def create_s3_bucket(s3_client, bucket_name, region):
        # Note: the region argument is unused here; without a
        # CreateBucketConfiguration the bucket is created in us-east-1.
        s3_client.create_bucket(Bucket=bucket_name)
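One caveat: a plain create_bucket call only works against us-east-1. For any other region, S3 requires a CreateBucketConfiguration with a LocationConstraint. A sketch of a region-aware variant (the function name is my own, not from the article):

```python
def create_s3_bucket_in_region(s3_client, bucket_name, region=None):
    # S3 rejects an explicit LocationConstraint of "us-east-1" (it is the
    # default), so only send CreateBucketConfiguration for other regions.
    kwargs = {"Bucket": bucket_name}
    if region and region != "us-east-1":
        kwargs["CreateBucketConfiguration"] = {"LocationConstraint": region}
    return s3_client.create_bucket(**kwargs)
```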
Now list all the buckets we have with the following lines of code.
    def list_bucket(s3_client):
        list_of_bucket = s3_client.list_buckets()
        for bucket in list_of_bucket['Buckets']:
            print(f'  {bucket["Name"]}')
If you want to delete a bucket, you only need to specify the bucket name (the bucket must be empty before S3 will delete it).
    def delete_s3_bucket(s3_client, bucket_name):
        s3_client.delete_bucket(Bucket=bucket_name)
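Since S3 refuses to delete a bucket that still contains objects, in practice you often need to empty it first. A sketch, assuming the caller is allowed to delete everything in the bucket (the helper name is my own):

```python
def empty_and_delete_bucket(s3_client, bucket_name):
    # A bucket must be empty before delete_bucket succeeds, so delete the
    # objects page by page (list_objects_v2 returns at most 1000 keys per page).
    paginator = s3_client.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket_name):
        objects = [{"Key": o["Key"]} for o in page.get("Contents", [])]
        if objects:
            s3_client.delete_objects(Bucket=bucket_name,
                                     Delete={"Objects": objects})
    s3_client.delete_bucket(Bucket=bucket_name)
```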

Upload/Delete/Generate Presigned URL for S3 Objects

 
We will upload the object to S3 in multiple parts, so large files upload faster. We also make the object public at upload time by providing an ACL rule; if you don't want your object to be public, remove the ACL entry from ExtraArgs.
    from io import BytesIO
    from boto3.s3.transfer import TransferConfig

    def upload_to_s3_multipart(s3_client, file, file_name, S3_bucket, content):
        config = TransferConfig(multipart_threshold=1024*25, max_concurrency=10,
                                multipart_chunksize=1024*25, use_threads=True)
        obj = BytesIO(file)
        obj.seek(0)
        s3_client.upload_fileobj(obj, S3_bucket, file_name,
                                 ExtraArgs={'ACL': 'public-read', 'ContentType': content},
                                 Config=config)
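For reference, multipart_chunksize controls how many parts a payload is split into: a file of size S uploads in ceil(S / chunksize) parts (the last part may be smaller, and boto3's transfer manager adjusts chunk sizes below S3's 5 MiB part minimum upward automatically). A quick sanity check, with arbitrary sizes:

```python
import math

def multipart_part_count(file_size, chunksize):
    # Number of parts the transfer manager would split the payload into.
    return math.ceil(file_size / chunksize)

# e.g. a 100 KB payload with a 25 KB chunk size -> 4 parts
print(multipart_part_count(100 * 1024, 25 * 1024))
```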
We can delete the object by providing the bucket and key of the object.
    def delete_s3_obj(s3_client, bucket_name, key):
        s3_client.delete_object(Bucket=bucket_name, Key=key)
If you want someone else to access your S3 object, you can generate a presigned URL and share it with them. The presigned URL has an expiration time, after which it stops working, so the user can only access the object through the presigned URL within that window.
    def create_s3_signed_url(s3_client, method, bucket_name, key, expiration):
        return s3_client.generate_presigned_url(ClientMethod=method,
                                                Params={'Bucket': bucket_name,
                                                        'Key': key},
                                                ExpiresIn=expiration)