File: README.md

# Helper script to set up your S3 structure to run the tests for aws-c-s3

To use this script, you must have AWS credentials with permission to create and delete buckets.

To create the S3 buckets and objects that the tests will use:

```sh
pip3 install boto3
export CRT_S3_TEST_BUCKET_NAME=<bucket_name>
python3 test_helper.py init
# change to the build tests directory and run the tests
cd aws-c-s3/build/tests && ctest
```
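
If you want a quick sanity check that the objects exist before running `ctest`, something along these lines works with boto3 (a sketch, assuming the default region and the object names described under the `init` action below):

```python
import os
import boto3

# Assumes the default region (us-west-2) and the object names created by `init`.
bucket = os.environ["CRT_S3_TEST_BUCKET_NAME"]
s3 = boto3.client("s3", region_name="us-west-2")

for key in ("pre-existing-10MB", "pre-existing-1MB", "pre-existing-empty"):
    head = s3.head_object(Bucket=bucket, Key=key)
    print(key, head["ContentLength"])
```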

To clean up the S3 buckets created by the script:

```sh
export CRT_S3_TEST_BUCKET_NAME=<bucket_name>
python3 test_helper.py clean
```

## Actions

### `init` action

* Create `<BUCKET_NAME>` in us-west-2.
* Add a lifecycle rule that automatically cleans up the `upload/` prefix after one day (see the sketch after this list).
* Upload files:
  + `pre-existing-10MB-aes256-c` [SSE-C](https://docs.aws.amazon.com/AmazonS3/latest/userguide/ServerSideEncryptionCustomerKeys.html#sse-c-highlights) encrypted file
  + `pre-existing-10MB-aes256` [SSE-S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/specifying-s3-encryption.html) encrypted file
  + `pre-existing-10MB-kms` [SSE-KMS](https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingKMSEncryption.html) encrypted file
  + `pre-existing-10MB`
  + `pre-existing-1MB`
  + `pre-existing-empty`
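
The steps above map roughly onto the following boto3 calls (a simplified sketch, not the script itself; the lifecycle rule ID, the SSE-C key, and the use of the default `aws/s3` KMS key are illustrative assumptions):

```python
import os
import boto3

bucket = os.environ["CRT_S3_TEST_BUCKET_NAME"]
s3 = boto3.client("s3", region_name="us-west-2")

# Regions other than us-east-1 need an explicit LocationConstraint.
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
)

# Expire anything written under upload/ after one day.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "clean-upload-prefix",  # illustrative rule name
            "Filter": {"Prefix": "upload/"},
            "Status": "Enabled",
            "Expiration": {"Days": 1},
        }]
    },
)

body = os.urandom(10 * 1024 * 1024)  # 10MB of test data

# Plain object; pre-existing-1MB and pre-existing-empty are uploaded the same way.
s3.put_object(Bucket=bucket, Key="pre-existing-10MB", Body=body)

# SSE-S3: S3-managed AES256 keys.
s3.put_object(Bucket=bucket, Key="pre-existing-10MB-aes256",
              Body=body, ServerSideEncryption="AES256")

# SSE-KMS: falls back to the account's default aws/s3 key since no KeyId is given.
s3.put_object(Bucket=bucket, Key="pre-existing-10MB-kms",
              Body=body, ServerSideEncryption="aws:kms")

# SSE-C: customer-provided key (the real tests must use a key they also know).
sse_c_key = os.urandom(32)  # illustrative 256-bit key
s3.put_object(Bucket=bucket, Key="pre-existing-10MB-aes256-c",
              Body=body, SSECustomerAlgorithm="AES256",
              SSECustomerKey=sse_c_key)  # boto3 base64-encodes the key and adds its MD5
```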

* Create `<BUCKET_NAME>-public` in us-west-2.
* Upload files:
  + `pre-existing-1MB`: a 1MB file with public read access.

* Create the directory bucket `<BUCKET_NAME>--usw2-az1--x-s3` in us-west-2.
* Upload files:
  + `pre-existing-10MB`: a 10MB file.

* Create the directory bucket `<BUCKET_NAME>--use1-az4--x-s3` in us-east-1 (directory-bucket creation is sketched below).
* Upload files:
  + `pre-existing-10MB`: a 10MB file.
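
Directory buckets are created a little differently from general purpose buckets. With a recent boto3, the call looks roughly like this (a sketch; the availability-zone suffix in the bucket name must match the `Location` below):

```python
import os
import boto3

base = os.environ["CRT_S3_TEST_BUCKET_NAME"]
s3 = boto3.client("s3", region_name="us-west-2")

# Directory bucket names must end in --<az-id>--x-s3 and are pinned to one AZ.
name = f"{base}--usw2-az1--x-s3"
s3.create_bucket(
    Bucket=name,
    CreateBucketConfiguration={
        "Location": {"Type": "AvailabilityZone", "Name": "usw2-az1"},
        "Bucket": {"Type": "Directory", "DataRedundancy": "SingleAvailabilityZone"},
    },
)

# Objects are uploaded the same way as to a general purpose bucket.
s3.put_object(Bucket=name, Key="pre-existing-10MB", Body=os.urandom(10 * 1024 * 1024))
```

The us-east-1 variant is the same call with `use1-az4` and a client pointed at us-east-1.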

### `clean` action

* Delete the buckets created by the `init` action and every object inside them (sketched below).
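
Since S3 refuses to delete a non-empty bucket, the cleanup has to remove the objects first. A sketch of that pattern for one bucket (the `-public` and directory buckets need the same treatment):

```python
import os
import boto3

bucket = os.environ["CRT_S3_TEST_BUCKET_NAME"]
s3 = boto3.client("s3", region_name="us-west-2")

# Delete every object (including anything left under upload/), then the bucket itself.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    keys = [{"Key": obj["Key"]} for obj in page.get("Contents", [])]
    if keys:
        s3.delete_objects(Bucket=bucket, Delete={"Objects": keys})

s3.delete_bucket(Bucket=bucket)
```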

## BUCKET_NAME

You can specify the bucket name either by passing a `bucket_name` argument to the script or by setting the `CRT_S3_TEST_BUCKET_NAME` environment variable; the `bucket_name` argument takes precedence. If neither is provided, the `init` action creates a bucket with a random name; in that case, set `CRT_S3_TEST_BUCKET_NAME` to the printed-out bucket name before running the tests.
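
That precedence could be expressed roughly as follows (a hypothetical sketch of the resolution logic, not the script's actual code; the random-name format is made up):

```python
import os
import secrets

def resolve_bucket_name(cli_bucket_name=None):
    """Command-line argument wins, then the environment variable, then a random name."""
    if cli_bucket_name:
        return cli_bucket_name
    if os.environ.get("CRT_S3_TEST_BUCKET_NAME"):
        return os.environ["CRT_S3_TEST_BUCKET_NAME"]
    # Random fallback: export CRT_S3_TEST_BUCKET_NAME to this printed value
    # before running ctest.
    name = f"crt-s3-test-{secrets.token_hex(8)}"  # hypothetical naming scheme
    print(f"Generated bucket name: {name}")
    return name
```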

## Notes

* The MRAP tests are not covered by this script and are disabled by default. To run them, create a Multi-Region Access Point over buckets that contain `pre-existing-1MB`, update `g_test_mrap_endpoint` to the URI of the MRAP endpoint, and build with `-DENABLE_MRAP_TESTS=true`.
* To run the tests in `tests/s3_mock_server_tests.c`, start the mock S3 server first (see [here](./../mock_s3_server/)) and build your CMake project with `-DENABLE_MOCK_SERVER_TESTS=true`.
* If you are not using the aws-common-runtime AWS team account, you must set the `CRT_S3_TEST_BUCKET_NAME` environment variable to the bucket you created before running the tests.
* If you see the error "Check your account level S3 settings, public access may be blocked.", follow https://docs.aws.amazon.com/AmazonS3/latest/userguide/configuring-block-public-access-account.html to set `BlockPublicAcls` to false, which allows public reads of the object uploaded with the `public-read` ACL (sketched below).
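
The account-level setting can be changed in the console as the linked guide describes, or via the S3 Control API. A sketch of the latter (the account ID is a placeholder, and `IgnorePublicAcls` must also be off for the ACL to take effect):

```python
import boto3

s3control = boto3.client("s3control", region_name="us-west-2")

# Allow and honor public ACLs so the public-read object in <BUCKET_NAME>-public
# is readable; leave the policy-related protections in place.
s3control.put_public_access_block(
    AccountId="123456789012",  # placeholder: your AWS account ID
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```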

## TODO

* Automate the MRAP creation.
* Make the hard-coded paths and region configurable and pick them up from the tests.