
AWS — Amazon S3 Storage Classes Overview

Posted on October 24, 2022

Categories: AWS


Amazon S3 storage classes are designed to withstand the concurrent loss of data in one or two facilities. Lifecycle management can migrate objects to cheaper classes automatically to reduce costs, as sketched below.
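
Here is a minimal sketch of such a lifecycle rule using boto3, the AWS SDK for Python. The bucket name, prefix, transition schedule, and expiration are all placeholder assumptions, not recommendations:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and prefix; the transition schedule is only an example.
s3.put_bucket_lifecycle_configuration(
    Bucket="example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-logs",
                "Status": "Enabled",
                "Filter": {"Prefix": "logs/"},
                "Transitions": [
                    # Move to Standard-IA after 30 days, Glacier after 90,
                    # and Glacier Deep Archive after 180.
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                    {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},
                ],
                "Expiration": {"Days": 3650},  # delete after roughly 10 years
            }
        ]
    },
)
```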

Amazon S3 offers a variety of storage classes tailored to different use cases. There are seven distinct storage classes in S3.

S3 Standard

  • Ideal for frequently accessed data and for use cases with high-performance requirements.
  • Designed to withstand the simultaneous loss of two facilities. Data is stored redundantly across multiple locations and can survive events that affect an entire Availability Zone (minimum of 3 AZs).
  • Used as the default storage class if none is specified during upload (see the sketch after this list).
  • Provides low latency and high throughput.
  • Designed for 99.999999999% (11 9’s) durability and 99.99% availability.
  • The most expensive of all the storage classes.
  • Use case: storage for files we access often. It suits many workloads, including big data analytics, mobile and gaming applications, dynamic websites, content distribution, and cloud applications.
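
For illustration, a minimal boto3 sketch of an upload: when no StorageClass is supplied, the object is stored in S3 Standard. The bucket, key, and body are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# No StorageClass argument, so the object lands in S3 Standard (the default).
s3.put_object(
    Bucket="example-bucket",
    Key="reports/summary.csv",
    Body=b"col1,col2\n1,2\n",
)

# The equivalent explicit form would add StorageClass="STANDARD".
```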

S3 Intelligent-Tiering

  • Designed to reduce costs by automatically moving data to the lowest-cost access tier, with no operational overhead and no impact on performance.
  • Objects are stored in two access tiers: one more expensive tier optimized for frequent access and a lower-cost tier optimized for infrequent access.
  • It automatically moves objects between the two tiers as access patterns change. Objects that have not been accessed for 30 consecutive days are moved to the infrequent access tier; if an object in the infrequent access tier is accessed, it is immediately moved back to the frequent access tier.
  • Delivers the same high throughput and low latency characteristics as S3 Standard.
  • Designed for 99.999999999% (11 9’s) durability and 99.9% availability.
  • There is a small monthly charge per object for monitoring and auto-tiering. With the S3 Intelligent-Tiering storage class, there are no retrieval fees and no additional tiering fees when objects are moved between access tiers.
  • Use case: the right storage class for long-lived data with unknown, changing, or unpredictable access patterns, where you want storage costs optimized automatically (see the sketch after this list).
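
A minimal sketch, assuming a hypothetical bucket and a local file, of uploading an object directly into Intelligent-Tiering with boto3:

```python
import boto3

s3 = boto3.client("s3")

# Objects written with this storage class are monitored by S3 and moved
# between access tiers automatically; the application does nothing further.
with open("clickstream.parquet", "rb") as f:
    s3.put_object(
        Bucket="example-bucket",
        Key="datasets/clickstream.parquet",
        Body=f,
        StorageClass="INTELLIGENT_TIERING",
    )
```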

S3 Standard-Infrequent Access (S3 Standard-IA)

  • Suited for long-lived, infrequently accessed data: data that is accessed less often but must be available quickly when needed. Objects can be accessed in real time.
  • Designed to withstand the simultaneous loss of two facilities (same as S3 Standard).
  • Data is stored redundantly across multiple, geographically separated Availability Zones (minimum of 3 AZs), so it is resilient to the loss of one AZ.
  • Delivers the same high throughput and low latency characteristics as S3 Standard.
  • Designed for 99.999999999% (11 9’s) durability and 99.9% availability.
  • It offers S3 Standard’s high durability, high throughput, and low latency, with a low per-GB storage price and a per-GB retrieval fee.
  • Although it is less costly than S3 Standard storage, there is a retrieval fee, so it is best suited for infrequently accessed data.
  • Compared to One Zone-IA storage, it provides higher availability and resilience.
  • Appropriate for objects larger than 128 KB (smaller objects are billed as 128 KB) stored for at least 30 days (objects are billed for a minimum of 30 days).
  • Use case: with its low cost and good performance, S3 Standard-IA is the best option for long-term storage, backups, and data stores for disaster recovery files that are accessed rarely but still need high performance when they are needed. It is a good fit for the single or primary copy of data that cannot be recreated. A sketch of moving an existing object into this class follows this list.
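
An existing object can be moved into Standard-IA by copying it over itself with a new storage class. A minimal boto3 sketch with placeholder bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# Copy the object onto itself, changing only the storage class.
# Standard-IA bills a minimum object size of 128 KB and a minimum storage
# duration of 30 days, so it only pays off for objects that fit that profile.
s3.copy_object(
    Bucket="example-bucket",
    Key="backups/2022-10-24.tar.gz",
    CopySource={"Bucket": "example-bucket", "Key": "backups/2022-10-24.tar.gz"},
    StorageClass="STANDARD_IA",
    MetadataDirective="COPY",
)
```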

S3 One Zone-Infrequent Access (S3 One Zone-IA)

  • Suited for long-lived, infrequently accessed data: data that is accessed less often but must be available quickly when needed (similar to the Standard-IA storage class).
  • Unlike other S3 storage classes, which store data in a minimum of three Availability Zones (AZs), S3 One Zone-IA stores data in a single AZ, and it is 20% less costly than S3 Standard-IA.
  • The destruction of an Availability Zone will result in the loss of data stored in this storage class.
  • The best choice for low-cost data storage when the availability and resilience of S3 Standard or S3 Standard-IA are not necessary.
  • Designed for 99.999999999% (11 9’s) durability (same as Standard) and 99.5% availability within a single Availability Zone.
  • Appropriate for objects larger than 128 KB (smaller objects are billed as 128 KB) stored for at least 30 days (objects are billed for a minimum of 30 days).
  • Use case: a suitable option for holding secondary backup copies of on-premises data, or data that is simple to recreate if the AZ goes down. With S3 Cross-Region Replication (CRR), you can also use it as inexpensive storage for data replicated from another AWS Region; a replication sketch follows this list.
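
A minimal sketch of a CRR rule that writes replicas into One Zone-IA in a destination bucket, using boto3. The bucket names and IAM role are placeholders, and versioning is assumed to be already enabled on both buckets:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical source bucket, destination bucket, and replication role.
s3.put_bucket_replication(
    Bucket="example-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/example-replication-role",
        "Rules": [
            {
                "ID": "replicate-to-onezone-ia",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::example-destination-bucket",
                    # Replicas land directly in the cheaper One Zone-IA class.
                    "StorageClass": "ONEZONE_IA",
                },
            }
        ],
    },
)
```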

S3 on Outposts

  • Amazon S3 on Outposts delivers object storage to your on-premises AWS Outposts environment.
  • Designed to store data durably and redundantly across the devices on your Outposts.
  • Supports SSE-S3 and SSE-C encryption.
  • Authentication and authorization through IAM and S3 Access Points.
  • Use AWS DataSync to transfer data to AWS Regions.
  • Supports S3 Lifecycle expiration actions.

S3 Glacier

  • A low-cost design best suited to long-term archiving.
  • Retrieval times may be set, ranging from minutes to hours.
  • Data remains durable even if an entire Availability Zone is destroyed.
  • Designed for 99.999999999% (11 9’s) durability and 99.99% availability across multiple Availability Zones.
  • Fees apply both to the archive (at the Glacier rate) and to the temporarily restored copy (at the RRS rate).
  • The Vault Lock feature enforces compliance through a lockable policy.
  • It has a minimum storage duration of 90 days and can be retrieved in as little as 1–5 minutes using expedited retrieval (see the sketch after this list).
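
A minimal sketch of archiving an object directly into the Glacier storage class with boto3 (placeholder bucket, key, and file names). Objects in this class must be restored before their contents can be read:

```python
import boto3

s3 = boto3.client("s3")

# Write the archive straight into the Glacier storage class.
with open("db-dump.sql.gz", "rb") as f:
    s3.put_object(
        Bucket="example-bucket",
        Key="archives/db-dump.sql.gz",
        Body=f,
        StorageClass="GLACIER",
    )

# A plain GET would fail until the object is restored (see the retrieval
# options below); head_object reports the storage class of the object.
head = s3.head_object(Bucket="example-bucket", Key="archives/db-dump.sql.gz")
print(head["StorageClass"])
```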

S3 Glacier Deep Archive

  • The cheapest storage class, supporting long-term retention and digital preservation for data that will be kept for 7–10 years and may be accessed only once or twice per year.
  • The most affordable storage option in S3. Glacier Deep Archive storage charges are lower than those of the Glacier storage class.
  • Designed for 99.999999999% (11 9’s) durability and 99.99% availability across multiple Availability Zones (same as Glacier).
  • Perfect replacement for magnetic tape libraries.
  • It has a minimum storage duration of 180 days and a default retrieval time of 12 hours.
  • Bulk retrieval, which returns data within 48 hours, can reduce retrieval costs.
  • Use case: intended for customers who must retain data sets for seven to ten years to meet regulatory compliance requirements, including customers in highly regulated industries such as financial services, healthcare, and the public sector. It also has applications for backup and disaster recovery. A sketch of auditing which objects have reached this class follows.
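
A minimal sketch, with a hypothetical bucket and prefix, of listing objects and their storage classes to check what has already reached Deep Archive; list_objects_v2 reports the storage class of each object:

```python
import boto3

s3 = boto3.client("s3")

# Walk a hypothetical compliance prefix and print each object's storage class,
# which makes it easy to audit what has been archived to Deep Archive.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="example-bucket", Prefix="compliance/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["StorageClass"])
```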

S3 Glacier offers three different retrieval options:

  • Expedited Retrieval – lets you quickly retrieve your data when you need to make occasional urgent requests for a subset of your archives. For all but the largest archives (250 MB+), data retrieved with Expedited retrievals is typically made available within 1–5 minutes for the Glacier storage class. It is not available for objects stored in the Glacier Deep Archive storage class.
  • Standard Retrieval – lets you retrieve any of your archives within a few hours. This is the default option when a retrieval request does not specify a retrieval tier. Standard retrievals typically complete within 3–5 hours for objects stored in the Glacier storage class and within 12 hours for objects stored in the Deep Archive storage class.
  • Bulk Retrieval – the most affordable retrieval option, allowing you to retrieve large amounts of data, even petabytes, inexpensively within a day. It is the best choice for planned or non-urgent retrievals. Bulk retrievals typically complete within 5–12 hours for objects stored in the Glacier storage class and within 48 hours for objects stored in the Glacier Deep Archive storage class. A sketch showing how the retrieval tier is chosen follows.
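
The retrieval tier is chosen per restore request. A minimal boto3 sketch with placeholder names; note that Expedited is not valid for Deep Archive objects, which accept only Standard or Bulk:

```python
import boto3

s3 = boto3.client("s3")

# Ask S3 to stage a temporary copy of an archived object for 7 days.
# Tier may be "Expedited", "Standard", or "Bulk" for the Glacier class;
# Deep Archive objects accept only "Standard" or "Bulk".
s3.restore_object(
    Bucket="example-bucket",
    Key="archives/db-dump.sql.gz",
    RestoreRequest={
        "Days": 7,
        "GlacierJobParameters": {"Tier": "Expedited"},
    },
)

# head_object's "Restore" field shows whether the restore is in progress
# or when the temporary copy will expire.
head = s3.head_object(Bucket="example-bucket", Key="archives/db-dump.sql.gz")
print(head.get("Restore"))
```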