AWS CLI #
Setup and Config #
Install AWS CLIv2 | |
---|---|
apt install unzip | Install unzip tool |
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" | Download |
unzip awscliv2.zip | Unzip |
sudo ./aws/install | Run install script |
aws --version | Check version |
Save IAM Access Key Credentials | |
---|---|
aws configure | Start AWS credential setup for the default profile |
aws configure --profile account1 | Set up a separate named profile |
aws configure --profile account2 | Set up a separate named profile |
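Commands can then be run against a specific named profile instead of the default one, for example:
# Use a specific profile for a single command
aws s3 ls --profile account1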
Manually edit Credentials | |
---|---|
~/.aws/credentials | Stores AWS Access Key ID and Secret Access Key |
~/.aws/config | Stores default region |
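For reference, the two files typically look like this (key values are placeholders, profile and region names are only examples):
~/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
[account1]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
~/.aws/config
[default]
region = us-east-1
output = json
[profile account1]
region = eu-central-1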
Troubleshooting #
If AWS CLI sync does not work, it could be caused by a wrong server time.
Command | Description |
---|---|
date | Show time |
date +%T -s "13:11:15" | Change time |
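Setting the time by hand is only a quick fix; on systemd-based systems the clock can also be kept in sync automatically via NTP, for example:
# Enable automatic NTP time synchronization (systemd-based systems)
sudo timedatectl set-ntp true
# Verify clock and sync status
timedatectl status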
AWS CLI with S3 #
Copy and Sync data #
Create & Delete S3 Buckets #
# Create S3 Bucket
aws s3 mb s3://bucketname
# Remove S3 Bucket
aws s3 rb --force s3://bucketname
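To verify the result, all buckets of the account can be listed:
# List all S3 Buckets
aws s3 ls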
Sync to S3 Bucket #
# Uploads only new or changed files
aws s3 sync /source/path s3://bucketname/destination
# Uploads only new or changed files and deletes files from the destination that no longer exist in the source directory
aws s3 sync /source/path s3://bucketname/destination --delete
# Only *.pdf files
aws s3 sync --exclude="*" --include="*.pdf" /source/path s3://bucketname/destination
# Exclude *.pdf files
aws s3 sync --exclude="*.pdf" /source/path s3://bucketname/destination
# Use specific account
aws s3 sync /source/path s3://bucketname/destination --profile account1
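Any of the sync commands above can be combined with the --dryrun flag to preview which files would be transferred or deleted without changing anything:
# Show what would be uploaded/deleted, without actually doing it
aws s3 sync /source/path s3://bucketname/destination --delete --dryrun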
Sync from S3 Bucket #
# Download only new or changed files
aws s3 sync s3://bucket-name/source /destination/path
Copy to & from S3 Bucket #
# Upload files to bucket (including subdirectories)
aws s3 cp --recursive /source s3://bucketname/destination
# Download files from bucket (including subdirectories)
aws s3 cp --recursive s3://bucketname /destination
Note: the aws s3 cp command does not delete files from the destination bucket when they no longer exist in the source directory.
Static Website Hosting #
Create S3 Bucket
aws s3 mb s3://BucketName
Copy index.html file to Bucket:
aws s3 cp /path/index.html s3://BucketName/index.html
Policy: Allow read-only access to every object in the bucket
bucketpolicy.json
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::BucketName/*"
]
}
]
}
Add Policy to S3 Bucket:
aws s3api put-bucket-policy --bucket $BucketName \
--policy file://$pathToPolicy/bucketpolicy.json
Define name of index document:
aws s3 website s3://$BucketName --index-document index.html
Open Website:
http://BucketName.s3-website-us-east-1.amazonaws.com
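To check that the website configuration was applied and the page is reachable (same bucket and region as in the URL above):
# Show the website configuration of the bucket
aws s3api get-bucket-website --bucket $BucketName
# Fetch the page via the website endpoint
curl http://$BucketName.s3-website-us-east-1.amazonaws.com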
AWS s3fs #
Create IAM Permission #
- Create an S3 Bucket with standard settings; “Block public access” should be enabled by default
- Create an IAM Permission to access the S3 Bucket
- Attach the Permission to an IAM User
- Create an “Access Key” for the User
IAM Permission: Replace name_of_your_s3-bucket with the name of your S3 Bucket:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetBucketLocation",
"s3:ListAllMyBuckets"
],
"Resource": "arn:aws:s3:::*"
},
{
"Effect": "Allow",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::name_of_your_s3-bucket",
"arn:aws:s3:::name_of_your_s3-bucket/*"
]
}
]
}
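The console steps above can also be done from the CLI; a sketch, assuming the policy above is saved as s3fs-policy.json and the IAM User is called s3fs-user (both names are just examples):
# Create the IAM User
aws iam create-user --user-name s3fs-user
# Create the policy from the JSON file (note the policy ARN in the output)
aws iam create-policy --policy-name s3fs-access --policy-document file://s3fs-policy.json
# Attach the policy to the user (replace the account ID with your own)
aws iam attach-user-policy --user-name s3fs-user --policy-arn arn:aws:iam::123456789012:policy/s3fs-access
# Create the Access Key for the user
aws iam create-access-key --user-name s3fs-user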
Part 2: Automatically mount S3 Bucket on system startup #
Uncomment the user_allow_other line in /etc/fuse.conf to allow other (non-root) users to access the mount:
vi /etc/fuse.conf
Command | Description |
---|---|
sudo apt install s3fs -y | Install s3fs |
sudo vi /etc/passwd-s3fs | Create the passwd-s3fs credential file in the standard location |
sudo sh -c 'echo your_key:your_private_key > /etc/passwd-s3fs' | Save credentials (the redirect must also run as root) |
sudo chmod 640 /etc/passwd-s3fs | Change permissions of the credential file |
mkdir /mount_path | Create a mountpoint directory for the S3 Bucket |
vi /etc/fstab | Open fstab |
s3fs#name_of_your_s3-bucket /mount_path fuse _netdev,allow_other,url=https://s3.amazonaws.com 0 0 | Add to fstab |
mount -a | Load fstab |
Replace name_of_your_s3-bucket with the actual name of the S3 Bucket.
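After mount -a, the mount can be verified, for example:
# Check that the bucket is mounted
mount | grep s3fs
df -h /mount_path
ls /mount_path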
Part 3: Manually mount S3 Bucket #
Alternatively, the S3 Bucket can also be mounted manually.
Command | Description |
---|---|
sudo apt install s3fs -y | Install s3fs |
echo "your_key:your_private_key" > ${HOME}/.passwd-s3fs | Store credentials in the s3fs credential file |
cat ~/.passwd-s3fs | Check |
chmod 600 ${HOME}/.passwd-s3fs | Change permissions of the credential file |
mkdir /mount_path | Create a directory as mountpoint for the S3 Bucket |
s3fs bucket_name /mount_path -o passwd_file=~/.passwd-s3fs | Mount with path to the credential file |
Use mount | grep mount_path, df -h | grep mount_path, or just ls /mount_path to check whether the bucket is mounted.
Note: mounting fails if the current working directory is the mountpoint directory while trying to mount the S3 Bucket.
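To unmount the bucket again:
# Unmount as root
sudo umount /mount_path
# or unmount as the (non-root) user who mounted it
fusermount -u /mount_path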