chore(aws): give rds its own article and start kms' section

This commit is contained in:
Michele Cereda
2024-06-19 19:09:22 +02:00
parent 4f631ff3da
commit b5c4461e11
6 changed files with 388 additions and 77 deletions


@@ -1,6 +1,7 @@
# Simple Storage Service
1. [TL;DR](#tldr)
1. [Storage tiers](#storage-tiers)
1. [Lifecycle configuration](#lifecycle-configuration)
1. [Further readings](#further-readings)
1. [Sources](#sources)
@@ -8,7 +9,7 @@
## TL;DR
<details>
<summary>Common usage</summary>
<summary>Usage</summary>
```sh
# List all buckets.
@@ -47,11 +48,18 @@ aws s3 cp 'file.txt' 's3://my-bucket/' \
'full=id=79a59df900b949e55d96a1e698fbacedfd6e09d98eacf8f8d5218e7cd47ef2be'
aws s3 cp 'mydoc.txt' 's3://arn:aws:s3:us-west-2:123456789012:accesspoint/myaccesspoint/mykey'
# Handling file streams.
# Handle file streams.
# Useful for piping:
# - setting the source to '-' reads data from stdin
# - setting the destination to '-' writes data to stdout
aws s3 cp - 's3://my-bucket/stream.txt'
aws s3 cp - 's3://my-bucket/stream.txt' --expected-size '54760833024'
aws s3 cp 's3://my-bucket/stream.txt' -
# Directly print the contents of files to stdout.
aws s3 cp --quiet 's3://my-bucket/file.txt' '-'
aws s3 cp --quiet 's3://my-bucket/file.txt' '/dev/stdout'
# Remove objects.
aws s3 rm 's3://my-bucket/prefix-name' --recursive --dryrun
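The stream-handling commands above pair naturally with other tools in a pipeline. A sketch, assuming the hypothetical bucket 'my-bucket' exists and credentials are configured:

```sh
# Stream a compressed archive straight into S3, with no temporary file.
# For large streams, '--expected-size' helps the CLI pick sane multipart part sizes.
tar -czf - 'some-dir/' | aws s3 cp - 's3://my-bucket/some-dir.tar.gz'

# Stream the archive back down and unpack it on the fly.
aws s3 cp 's3://my-bucket/some-dir.tar.gz' - | tar -xzf - -C 'restore-dir/'
```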
@@ -105,20 +113,35 @@ aws s3api list-objects-v2 \
</details>
## Storage tiers
| | Standard | Intelligent-Tiering | Express One Zone | Standard Infrequent Access | One Zone Infrequent Access | Glacier Instant Retrieval | Glacier Flexible Retrieval | Glacier Deep Archive |
| ---------------------- | ------------ | ------------------- | ------------------------- | -------------------------- | -------------------------- | ------------------------- | -------------------------- | -------------------- |
| Retrieval charge | ✗ | ✗ | ✗ | per GB retrieved | per GB retrieved | per GB retrieved | per GB retrieved | per GB retrieved |
| Latency | milliseconds | milliseconds | single-digit milliseconds | milliseconds | milliseconds | milliseconds | minutes to hours | hours |
| Minimum storage charge | ✗ | ✗ | 1 hour | 30 days | 30 days | 90 days | 90 days | 180 days |
| Availability Zones | 3+ | 3+ | 1 | 3+ | 1 | 3+ | 3+ | 3+ |
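Tiers can be chosen per object at upload time. A sketch, with hypothetical bucket and object names:

```sh
# Upload straight into an archive tier instead of Standard.
aws s3 cp 'backup.tar.gz' 's3://my-bucket/' --storage-class 'DEEP_ARCHIVE'

# Check an object's current storage class.
# 'StorageClass' is only returned for objects in non-Standard tiers.
aws s3api head-object --bucket 'my-bucket' --key 'backup.tar.gz' --query 'StorageClass'
```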
## Lifecycle configuration
> Adding, removing or changing lifecycle rules takes a while.<br/>
> Wait a couple of minutes after the operation to make sure all the bucket's properties are synced.
When one has multiple rules in an S3 Lifecycle configuration, an object can become eligible for multiple S3 Lifecycle actions. In such cases, Amazon S3 follows these general rules:
When multiple rules are applied through S3 Lifecycle configurations, objects can become eligible for multiple S3
Lifecycle actions. In such cases:
1. Permanent deletion takes precedence over transition.
1. Transition takes precedence over creation of delete markers.
1. When an object is eligible for both a S3 Glacier Flexible Retrieval and S3 Standard-IA (or S3 One Zone-IA) transition, Amazon S3 chooses the S3 Glacier Flexible Retrieval transition.
1. Permanent deletion takes precedence over transitions.
1. Transitions take precedence over creation of delete markers.
1. When objects are eligible for transition to both S3 Glacier Flexible Retrieval and S3 Standard-IA (or One Zone-IA),
   precedence is given to the S3 Glacier Flexible Retrieval transition.
Propagation delay: When you add an S3 Lifecycle configuration to a bucket, there is usually some lag before a new or updated Lifecycle configuration is fully propagated to all the Amazon S3 systems. Expect a delay of a few minutes before the configuration fully takes effect. This delay can also occur when you delete an S3 Lifecycle configuration.
When adding S3 Lifecycle configurations to buckets, there is usually some lag before a new or updated Lifecycle
configuration is fully propagated to all of S3's systems.<br/>
Expect a delay of a few minutes before any change in configuration fully takes effect. This includes configuration
deletions.
Objects can only go down the tiers, not up. Many other constraints apply, like no transition done for objects <128KiB.<br/>
Objects can only go down the tiers, not up.<br/>
Other constraints apply, e.g. objects smaller than 128 KiB are not transitioned.<br/>
See [General considerations for transitions][lifecycle general considerations for transitions].
Examples: [1][lifecycle configuration examples], [2][s3 lifecycle rules examples]
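A minimal lifecycle configuration sketch tying the above together (bucket, rule ID and prefix are hypothetical):

```json
{
  "Rules": [
    {
      "ID": "archive-then-expire-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```

Apply it with `aws s3api put-bucket-lifecycle-configuration --bucket 'my-bucket' --lifecycle-configuration 'file://lifecycle.json'`.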
@@ -132,6 +155,7 @@ Examples: [1][lifecycle configuration examples], [2][s3 lifecycle rules example
### Sources
- [Amazon S3 Storage Classes]
- [General considerations for transitions][lifecycle general considerations for transitions]
- [Lifecycle configuration examples][lifecycle configuration examples]
- [CLI subcommand reference]
@@ -139,7 +163,8 @@ Examples: [1][lifecycle configuration examples], [2][s3 lifecycle rules example
- [How S3 Intelligent-Tiering works]
<!--
References
Reference
═╬═Time══
-->
<!-- In-article sections -->
@@ -151,6 +176,7 @@ Examples: [1][lifecycle configuration examples], [2][s3 lifecycle rules example
[s3 lifecycle rules examples]: ../../../examples/aws/s3.lifecycle-rules
<!-- Upstream -->
[amazon s3 storage classes]: https://aws.amazon.com/s3/storage-classes/
[cli subcommand reference]: https://docs.aws.amazon.com/cli/latest/reference/s3/
[expiring amazon s3 objects based on last accessed date to decrease costs]: https://aws.amazon.com/blogs/architecture/expiring-amazon-s3-objects-based-on-last-accessed-date-to-decrease-costs/
[find out the size of your amazon s3 buckets]: https://aws.amazon.com/blogs/storage/find-out-the-size-of-your-amazon-s3-buckets/