
S3

S3 provides developers and IT teams with secure, durable, and highly-scalable object storage. Object storage, as opposed to block storage, is a general term that refers to data composed of three things:

  1. the data that you want to store
  2. an expandable amount of metadata
  3. a unique identifier so that the data can be retrieved

This makes it a perfect candidate to host files or directories and a poor candidate to host databases or operating systems. The following table highlights key differences between object and block storage:

|             | Object storage                                                        | Block storage                                                                     |
|-------------|------------------------------------------------------------------------|-----------------------------------------------------------------------------------|
| Performance | Performs best for big content and high stream throughput               | Strong performance with database and transactional data                            |
| Geography   | Data can be stored across multiple regions                             | The greater the distance between storage and application, the higher the latency   |
| Scalability | Can scale infinitely to petabytes and beyond                           | Addressing requirements limit scalability                                           |
| Analytics   | Customizable metadata allows data to be easily organized and retrieved | No metadata                                                                         |

Data uploaded into S3 is spread across multiple devices and facilities. The files uploaded into S3 have an upper bound of 5 TB per file, and the number of files that can be uploaded is virtually limitless. S3 buckets, which contain all files, are named in a universal namespace, so uniqueness is required. All successful uploads will return an HTTP 200 response.


  • Objects (regular files or directories) are stored in S3 with a key, value, version ID, and metadata. They can also contain torrents and subresources for access control lists, which are basically permissions for the object itself.

  • The data consistency model for S3 ensures immediate read access for new objects after the initial PUT request. Because these new objects are being introduced into AWS for the first time, there is no existing copy anywhere to update, so they are available immediately. Since December 2020, the data consistency model also ensures immediate read access for PUTs and DELETEs of already existing objects (see the first sketch after this list).

    Strong read-after-write consistency helps when you need to immediately read an object after a write; for example, when you often read and list immediately after writing objects. High-performance computing workloads also benefit in that when an object is overwritten and then read many times simultaneously, strong read-after-write consistency provides assurance that the latest write is read across all reads. These applications automatically and immediately benefit from strong read-after-write consistency. The strong consistency of S3 also reduces costs by removing the need for extra infrastructure to provide strong consistency.

  • S3 comes with the following main features:

    1. tiered storage and pricing variability
    2. lifecycle management to expire older content
    3. versioning for version control
    4. encryption for privacy
    5. MFA deletes to prevent accidental or malicious removal of content
    6. access control lists & bucket policies to secure the data
  • S3 charges by:

    1. storage size
    2. number of requests
    3. storage management pricing (known as tiers)
    4. data transfer pricing (objects leaving/entering AWS via the internet)
    5. transfer acceleration (an optional speed increase for moving objects via CloudFront)
    6. cross region replication (more HA than offered by default)
  • Bucket policies secure data at the bucket level while access control lists secure data at the more granular object level.

  • By default, all newly created buckets are private

  • S3 can be configured to create access logs, which can be shipped into another bucket in the current account or even a separate account altogether. This makes it easy to monitor who accesses what inside S3.

  • There are three different ways to share S3 buckets across AWS accounts:

    1. For programmatic access only, use IAM & Bucket Policies to share entire buckets
    2. For programmatic access only, use ACLs & Bucket Policies to share objects
    3. For access via the console & the terminal, use cross-account IAM roles
  • S3 is a great candidate for static website hosting. When you enable static website hosting for S3, you need both an index document (e.g., index.html) and an error document (e.g., error.html). Static website hosting creates a website endpoint that can be accessed via the internet (see the second sketch after this list).

  • When you upload new files and have versioning enabled, they will not inherit the properties of the previous version.
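As a minimal sketch of the strong read-after-write consistency described above, the following boto3 snippet writes a brand-new object and reads it back immediately (the bucket and key names are placeholders):

```python
import boto3

s3 = boto3.client("s3")

bucket, key = "my-example-bucket", "reports/2021-01-01.json"  # hypothetical names

# PUT a brand-new object...
s3.put_object(Bucket=bucket, Key=key, Body=b'{"status": "ok"}')

# ...and read it back immediately. Since December 2020, S3 provides strong
# read-after-write consistency, so this GET reflects the latest write.
obj = s3.get_object(Bucket=bucket, Key=key)
print(obj["Body"].read())
```

Similarly, here is a minimal sketch of enabling static website hosting on a bucket, assuming the index and error documents have already been uploaded (again, all names are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Enable static website hosting; index.html and error.html must already exist in the bucket.
s3.put_bucket_website(
    Bucket="my-example-bucket",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
# The site is then served from a region-specific website endpoint such as
# http://my-example-bucket.s3-website-us-east-1.amazonaws.com
```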

S3 Storage Classes:

S3 Standard - 99.99% availability and 11 9s durability. Data in this class is stored redundantly across multiple devices in multiple facilities and is designed to withstand the failure of 2 concurrent data centers.

S3 Infrequent Access (IA) - For data that is needed less often, but when it is needed the data should be available quickly. The storage fee is cheaper, but you are charged for retrieval.

S3 One Zone Infrequently Accessed (an improvement of the legacy RRS / Reduced Redundancy Storage) - For when you want the lower costs of IA, but do not require high availability. This is even cheaper because of the lack of HA.

S3 Intelligent Tiering - Cloud storage that automatically reduces your storage costs on a granular object level by automatically moving data to the most cost-effective access tier based on access frequency, without performance impact, retrieval fees, or operational overhead.

S3 Intelligent-Tiering monitors access patterns and automatically moves objects that have not been accessed to lower-cost access tiers.

S3 Glacier - low-cost storage class for data archiving. This class is for pure storage purposes where retrieval isn’t needed often at all. Retrieval times range from minutes to hours. There are different retrieval methods depending on how acceptable the default retrieval times are for you:

Expedited: 1 - 5 minutes, but this option is the most expensive.
Standard: 3 - 5 hours to restore.
Bulk: 5 - 12 hours. This option has the lowest cost and is good for a large set of data.

S3 Glacier Deep Archive - The lowest-cost S3 storage class, where retrieval can take 12 hours.
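As a rough sketch of how these classes are used in practice, you can pick the storage class at upload time and later restore archived objects with a chosen Glacier retrieval tier (bucket, key, and file names below are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# Upload directly into an infrequent-access tier.
with open("2020-logs.tar.gz", "rb") as f:
    s3.put_object(
        Bucket="my-example-bucket",
        Key="archive/2020-logs.tar.gz",
        Body=f,
        StorageClass="STANDARD_IA",  # or ONEZONE_IA, INTELLIGENT_TIERING, GLACIER, DEEP_ARCHIVE
    )

# Ask S3 to restore an archived object, picking a retrieval tier.
s3.restore_object(
    Bucket="my-example-bucket",
    Key="archive/2020-logs.tar.gz",
    RestoreRequest={
        "Days": 7,  # how long the restored copy stays available
        "GlacierJobParameters": {"Tier": "Bulk"},  # Expedited | Standard | Bulk
    },
)
```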


Encryption

S3 data can be encrypted both in transit and at rest.

Encryption In Transit: When the traffic passing between one endpoint and another is indecipherable. Anyone eavesdropping between server A and server B won’t be able to make sense of the information passing by. Encryption in transit for S3 is always achieved by SSL/TLS.

Encryption At Rest: When the immobile data sitting inside S3 is encrypted. If someone breaks into a server, they still won’t be able to read the encrypted data without the keys. Encryption at rest can be done either on the server side or the client side. The server side is when S3 encrypts your data as it is being written to disk and decrypts it when you access it. The client side is when you personally encrypt the object on your own and then upload it into S3 afterward.

You can encrypt on the AWS supported server-side in the following ways:

  • S3 Managed Keys / SSE - S3 (server side encryption S3) - when Amazon manages the encryption and decryption keys for you automatically. In this scenario, you concede a little control to Amazon in exchange for ease of use.
  • AWS Key Management Service / SSE - KMS - when Amazon and you both manage the encryption and decryption keys together.
  • Server Side Encryption w/ customer provided keys / SSE - C - when you give Amazon your own keys that you manage. In this scenario, you concede ease of use in exchange for more control.
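As a minimal sketch (assuming a hypothetical bucket and, for SSE-KMS, a placeholder key alias), server-side encryption can be requested per object or enforced as a bucket default:

```python
import boto3

s3 = boto3.client("s3")

# SSE-S3: Amazon manages the keys.
s3.put_object(Bucket="my-example-bucket", Key="doc.txt", Body=b"hello",
              ServerSideEncryption="AES256")

# SSE-KMS: encrypt with a KMS key that you and Amazon manage together.
s3.put_object(Bucket="my-example-bucket", Key="doc-kms.txt", Body=b"hello",
              ServerSideEncryption="aws:kms",
              SSEKMSKeyId="alias/my-app-key")  # placeholder key alias

# Optionally enforce default encryption for every new object in the bucket.
s3.put_bucket_encryption(
    Bucket="my-example-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)
```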

Versioning

  • When versioning is enabled, S3 stores all versions of an object including all writes and even deletes.
  • It is a great feature for implicitly backing up content and for easy rollbacks in case of human error.
  • It can be thought of as analogous to Git.
  • Once versioning is enabled on a bucket, it cannot be disabled - only suspended.
  • Versioning integrates w/ lifecycle rules so you can set rules to expire or migrate data based on their version.
  • Versioning also has MFA delete capability to provide an additional layer of security.
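A minimal boto3 sketch of the above, using a placeholder bucket name (the MFA-delete call is commented out because it requires the root account's MFA device):

```python
import boto3

s3 = boto3.client("s3")

# Turn versioning on; it can later be suspended but never fully disabled.
s3.put_bucket_versioning(
    Bucket="my-example-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# MFA delete adds a second factor to version deletions (placeholder serial/code):
# s3.put_bucket_versioning(
#     Bucket="my-example-bucket",
#     MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
#     VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
# )

# List every stored version (and delete marker) under a prefix.
versions = s3.list_object_versions(Bucket="my-example-bucket", Prefix="reports/")
```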

Lifecycle Management

  • Automates the moving of objects between the different storage tiers.
  • Can be used in conjunction with versioning.
  • Lifecycle rules can be applied to both current and previous versions of an object.
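A minimal sketch of a lifecycle rule, with a placeholder bucket name and prefix, that transitions current versions to cheaper tiers and expires old versions:

```python
import boto3

s3 = boto3.client("s3")

# Move current versions to STANDARD_IA after 30 days and Glacier after 90,
# and expire noncurrent (previous) versions after a year.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "NoncurrentVersionExpiration": {"NoncurrentDays": 365},
        }]
    },
)
```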

Cross Region Replication

  • Cross region replication only works if versioning is enabled.
  • When cross region replication is enabled, no pre-existing data is transferred. Only new uploads into the original bucket are replicated. All subsequent updates are replicated.
  • When you replicate the contents of one bucket to another, you can actually change the ownership of the content if you want. You can also change the storage tier of the new bucket with the replicated content.
  • When files are deleted in the original bucket (via a delete marker as versioning prevents true deletions), those deletes are not replicated.
  • Cross Region Replication Overview
  • What is and isn’t replicated such as encrypted objects, deletes, items in glacier, etc.
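A minimal sketch of enabling replication with boto3, assuming versioning is already on for both (placeholder) buckets and that the role ARN shown stands in for an IAM role you have created for S3 to assume:

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",  # placeholder
        "Rules": [{
            "ID": "replicate-everything",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},
            "DeleteMarkerReplication": {"Status": "Disabled"},  # deletes are not replicated
            "Destination": {
                "Bucket": "arn:aws:s3:::destination-bucket",
                "StorageClass": "STANDARD_IA",  # replicas can land in a different tier
            },
        }],
    },
)
```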

Transfer Acceleration

  • Transfer acceleration makes use of the CloudFront network by sending or receiving data at CDN points of presence (called edge locations) rather than slower uploads or downloads at the origin.
  • This is accomplished by uploading to a distinct URL for the edge location instead of the bucket itself. This is then transferred over the AWS network backbone at a much faster speed.
  • You can test transfer acceleration speed directly in comparison to regular uploads.
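A minimal sketch: enable acceleration on a (placeholder) bucket, then create a client that routes uploads through the accelerate endpoint rather than the regional one:

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Turn acceleration on for the bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="my-example-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# This client sends requests to bucketname.s3-accelerate.amazonaws.com,
# entering the AWS backbone at the nearest edge location.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("big-video.mp4", "my-example-bucket", "videos/big-video.mp4")
```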

ElasticSearch

  • If you are using S3 to store log files, ElasticSearch provides full search capabilities for logs and can be used to search through data stored in an S3 bucket.
  • You can integrate your ElasticSearch domain with S3 and Lambda. In this setup, any new logs received by S3 will trigger an event notification to Lambda, which in turn will then run your application code on the new log data. After your code finishes processing, the data will be streamed into your ElasticSearch domain and be available for observation.
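A very rough sketch of the Lambda handler in such a setup, assuming a placeholder ElasticSearch endpoint, a bundled HTTP library, and omitting request signing and error handling:

```python
import urllib.parse

import boto3
import requests  # assumes the Lambda deployment package bundles 'requests'

s3 = boto3.client("s3")
ES_ENDPOINT = "https://search-my-domain.us-east-1.es.amazonaws.com"  # placeholder

def handler(event, context):
    # Invoked by an S3 event notification for each new log object.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode()

        # Index each log line as a document in the ElasticSearch domain.
        for i, line in enumerate(body.splitlines()):
            requests.post(f"{ES_ENDPOINT}/logs/_doc/{key}-{i}", json={"message": line})
```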

Maximizing S3 Read/Write Performance

  • If the request rate for reading and writing objects to S3 is extremely high, you can spread reads and writes across multiple prefixes (for example, date-based prefixes) to improve performance. Earlier versions of the AWS Docs also suggested prefixing object names with hash keys or random strings. In such cases, the partitions used to store the objects will be better distributed and therefore will allow better read/write performance on your objects (a toy sketch of prefix naming follows this list).

  • If your S3 data is receiving a high number of GET requests from users, you should consider using Amazon CloudFront for performance optimization. By integrating CloudFront with S3, you can distribute content via CloudFront’s cache to your users for lower latency and a higher data transfer rate. This also has the added bonus of sending fewer direct requests to S3, which will reduce costs. For example, suppose that you have a few objects that are very popular. CloudFront fetches those objects from S3 and caches them. CloudFront can then serve future requests for the objects from its cache, reducing the total number of GET requests it sends to Amazon S3.

  • More information on how to ensure high performance in S3
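A toy, purely illustrative sketch of prefix naming (the key layout is an assumption, not a prescription): combining a short hash with a date-based path spreads keys across many prefixes.

```python
import hashlib
from datetime import date

def object_key(filename: str) -> str:
    # A short hash plus a date-based path distributes keys across many prefixes.
    prefix = hashlib.md5(filename.encode()).hexdigest()[:4]
    return f"{prefix}/{date.today():%Y/%m/%d}/{filename}"

print(object_key("clickstream-0001.json"))  # e.g. 'a1b2/2021/06/01/clickstream-0001.json'
```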

Server Access Logging

  • Server access logging provides detailed records for the requests that are made to a bucket. Server access logs are useful for many applications. For example, access log information can be useful in security and access audits. It can also help you learn about your customer base and better understand your Amazon S3 bill.
  • By default, logging is disabled. When logging is enabled, logs are saved to a bucket in the same AWS Region as the source bucket.
  • Each access log record provides details about a single access request, such as the requester, bucket name, request time, request action, response status, and an error code, if relevant.
  • It works in the following way:
    • S3 periodically collects access log records of the bucket you want to monitor
    • S3 then consolidates those records into log files
    • S3 finally uploads the log files to your secondary monitoring bucket as log objects
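A minimal sketch of enabling server access logging with boto3, using placeholder bucket names (the target bucket must already allow the S3 log delivery service to write to it):

```python
import boto3

s3 = boto3.client("s3")

# Ship access logs for one bucket into a separate monitoring bucket.
s3.put_bucket_logging(
    Bucket="my-example-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-logging-bucket",
            "TargetPrefix": "access-logs/my-example-bucket/",
        }
    },
)
```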

Multipart Upload

  • Multipart upload allows you to upload a single object as a set of parts. Each part is a contiguous portion of the object’s data. You can upload these object parts independently and in any order.
  • Multipart uploads are recommended for files over 100 MB and are the only way to upload files over 5 GB. Multipart upload achieves this by uploading your data in parallel to boost efficiency.
  • If transmission of any part fails, you can retransmit that part without affecting other parts. After all parts of your object are uploaded, Amazon S3 assembles these parts and creates the object.
  • Possible reasons for why you would want to use Multipart upload:
    • Multipart upload delivers the ability to begin an upload before you know the final object size.
    • Multipart upload delivers improved throughput.
    • Multipart upload delivers the ability to pause and resume object uploads.
    • Multipart upload delivers quick recovery from network issues.
  • You can use an AWS SDK to upload an object in parts. Alternatively, you can perform the same action via the AWS CLI.
  • You can also parallelize downloads from S3 using byte-range fetches. If there’s a failure during the download, the failure is localized just to the specific byte range and not the whole object.
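A minimal sketch using boto3's managed transfers (which switch to multipart automatically above the configured threshold) plus a byte-range GET, with placeholder names:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Use multipart upload for anything over 100 MB and upload parts in parallel.
config = TransferConfig(multipart_threshold=100 * 1024 * 1024,
                        multipart_chunksize=100 * 1024 * 1024,
                        max_concurrency=8)
s3.upload_file("backup.tar", "my-example-bucket", "backups/backup.tar", Config=config)

# Byte-range fetch: download just the first megabyte of the object.
part = s3.get_object(Bucket="my-example-bucket",
                     Key="backups/backup.tar",
                     Range="bytes=0-1048575")
data = part["Body"].read()
```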

Athena

  • Athena is an interactive query service which allows you to interact and query data from S3 using standard SQL commands. This is beneficial for programmatic querying for the average developer. It is serverless, requires no provisioning, and you pay per query and per TB scanned. You basically turn S3 into a SQL supported database by using Athena.

  • Example use cases:

    • Query logs that are dumped into S3 buckets as an alternative or supplement to the ELK stack
    • Setting queries to run business reports based off of the data regularly entering S3
    • Running queries on click-stream data to gain further insight into customer behavior
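A minimal sketch of kicking off a query with boto3; the database, table, and output location are placeholders you would define beforehand (for example via a CREATE EXTERNAL TABLE statement):

```python
import boto3

athena = boto3.client("athena")

# Run standard SQL against data sitting in S3 and write results to a results bucket.
resp = athena.start_query_execution(
    QueryString="""
        SELECT status, COUNT(*) AS hits
        FROM access_logs
        WHERE year = '2021'
        GROUP BY status
        ORDER BY hits DESC
    """,
    QueryExecutionContext={"Database": "web_logs"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results-bucket/"},
)
print(resp["QueryExecutionId"])
```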
