I’ll introduce how to bulk upload files to S3 using aws-cli.
First, install the AWS CLI (here via Homebrew) and check the version:

$ brew install awscli
$ aws --version
aws-cli/1.16.290 Python/3.7.5 Darwin/18.7.0 botocore/1.13.26
Next, register your credentials under a named profile:

$ aws configure --profile=YOUR_PROFILE_NAME
AWS Access Key ID [None]: your_access_key
AWS Secret Access Key [None]: your_secret_access_key
Default region name [None]: ap-northeast-1
Default output format [None]: json
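To confirm the profile was saved correctly, you can print the resolved settings with `aws configure list` (a standard aws-cli subcommand; the values shown below are illustrative and will depend on your environment):

```shell
$ aws configure list --profile YOUR_PROFILE_NAME
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile        YOUR_PROFILE_NAME           manual    --profile
access_key     ****************ABCD shared-credentials-file
secret_key     ****************ABCD shared-credentials-file
    region           ap-northeast-1      config-file    ~/.aws/config
```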
This article includes the --dryrun option in all write-operation commands. Remove the --dryrun option when executing for real.
--dryrun option: This is the most important option available for the object write commands (cp/mv/rm/sync). Adding the --dryrun option outputs a simulation of what the command would do; it is only a simulation, and the command is not actually executed. Unless you can predict the command's results with 100% certainty, we recommend adding this option to check the behavior before running for real. The command examples that follow include the --dryrun option, so be careful when copying and pasting.
📝 Quoted from: --dryrun オプション | 私家版AWS CLI S3チートシート - Qiita
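For example, running a single-file upload with --dryrun prints what would be transferred without touching the bucket. The bucket name below is a placeholder, and the output line is illustrative of the CLI's dryrun format:

```shell
$ aws s3 cp access.log s3://your_bucket/ --profile=YOUR_PROFILE_NAME --dryrun
(dryrun) upload: ./access.log to s3://your_bucket/access.log
```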
The command to list S3 buckets is as follows:
aws s3 ls --profile=YOUR_PROFILE_NAME
To upload a single file, use cp:

aws s3 cp access.log s3://your_bucket/ \
  --profile=YOUR_PROFILE_NAME \
  --dryrun
To bulk upload everything in the current directory, use sync:

aws s3 sync . s3://your_bucket/ \
  --include "*" \
  --profile=YOUR_PROFILE_NAME \
  --dryrun
To upload only files matching a pattern, first exclude everything, then include the pattern:

aws s3 sync . s3://your_bucket/ \
  --exclude "*" \
  --include "*.log" \
  --profile=YOUR_PROFILE_NAME \
  --dryrun
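Note that sync applies --exclude/--include filters in the order they appear, with later filters taking precedence, so several patterns can be chained. A sketch (the extra extension here is an arbitrary example):

```shell
# Upload only *.log and *.gz files; everything else is excluded first.
aws s3 sync . s3://your_bucket/ \
  --exclude "*" \
  --include "*.log" \
  --include "*.gz" \
  --profile=YOUR_PROFILE_NAME \
  --dryrun
```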
That’s all from the field.