
Hi, I need to transfer a file to an EC2 machine via the SSM agent. I have successfully installed the SSM agent on my EC2 instances, and from the UI I am able to start a session via "session-manager" and log in to the shell of that EC2 machine.

Now I am trying to automate this via boto3, using the code below:

import boto3

ssm_client = boto3.client('ssm', region_name='us-west-2')
resp = ssm_client.send_command(
    DocumentName="AWS-RunShellScript",  # one of AWS' preconfigured documents
    Parameters={'commands': ['echo "hello world" >> /tmp/test.txt']},
    InstanceIds=['i-xxxxx'],
)

The above works fine, and I am able to create a file called test.txt on the remote machine, but this is via the echo command. Instead, I need to send a file from my local machine to this remote EC2 machine via the SSM agent, so I did the following.

I modified /etc/ssh/ssh_config with a proxy entry, as below:

# SSH over Session Manager
host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"

Then, in the code above, I tried to start a session with the line below, and that also succeeds:

response = ssm_client.start_session(Target='i-04843lr540028e96a')

Now I am not sure how to use this session response, or how to use this AWS SSM session to send a file.

Environment description:

  • Source: a pod running in an EKS cluster
  • Destination: an EC2 machine (which has the SSM agent running)
  • File to be transferred: an important private key, which will be used by some process on the EC2 machine and will be different for each machine

Solution tried:

  • I can push the file to S3 at the source, and the boto3 SSM library can pull it from S3 and store it on the remote EC2 machine
  • But I don't want to do the above, because I don't want to store the private key in S3. Instead, I want to send the file directly from memory to the remote EC2 machine (see the sketch below)
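For reference, a minimal sketch of that from-memory idea, assuming base64 is available on the instance and accepting that the encoded content will appear in the SSM command history (paths and instance ID are placeholders):

# Encode the key locally (GNU base64; on macOS use base64 | tr -d '\n'),
# then have the instance decode it back to disk.
b64=$(base64 -w0 key.pem)
aws ssm send-command \
  --instance-ids i-xxxxx \
  --document-name AWS-RunShellScript \
  --parameters '{"commands":["echo '"$b64"' | base64 -d > /tmp/key.pem","chmod 600 /tmp/key.pem"]}'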

Basically, I want to achieve scp as described in this AWS document: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-sessions-start.html#sessions-start-ssh
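A related route, noted here as an assumption rather than something the linked page spells out, is SSM port forwarding to the instance's SSH port; scp then runs over the forwarded local port, though an SSH key that the instance trusts is still required (port numbers and key file are placeholders):

# Terminal 1: forward local port 2222 to port 22 on the instance over SSM.
aws ssm start-session --target i-xxxxx \
  --document-name AWS-StartPortForwardingSession \
  --parameters '{"portNumber":["22"],"localPortNumber":["2222"]}'

# Terminal 2: copy over the forwarded port.
scp -P 2222 -i keyfile file.txt ec2-user@localhost:/tmp/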

1 Comment

It might be easier to "pull" the file into the instance. For example, if the file is stored in Amazon S3, then put an aws s3 cp command in the shell script.

6 Answers


The answer given by @Nathan Williams is confusing (scp file.txt ec2-user@i-04843lr540028e96a:).

When using scp, the copy runs over SSH, so you still have to set up a user name and SSH keys to copy a file. That scp command doesn't work unless you share keys and the AWS CLI behind the ProxyCommand can resolve a region and a profile. The complete invocation would be something like:

AWS_REGION=xxx AWS_PROFILE=myprofile scp -i keyfile file.txt ec2-user@i-04843lr540028e96a:

(scp itself has no --region or --profile options; those settings are picked up by the aws ssm start-session ProxyCommand, here via environment variables.)

If you have configured your default profile and region, you don't have to set them at all.

I think what most people are looking for is an easy way to transfer a file to an EC2 instance using only SSM, something like ssm cp file instance-name, which as far as I could research doesn't exist. @Matteo's point is valid: why do I need SSH keys if the whole point of SSM is to get rid of them? Basically, you use SSM as a kind of proxy so you can reach your EC2 machines without having to specify an actual IP address (maybe the instance doesn't have a public IP, or it does but you would have to whitelist your source IP in a security group). You reach port 22 of the EC2 instance over SSM by specifying just the instance ID, but you still authenticate over SSH (key and user). SCP works over SSH, so you still need a key to use it. Again, I think most people were expecting plain ssm cp file instance.
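For what it's worth, the missing ssm cp can be approximated with a small wrapper; this is a hedged sketch, where the user name, key path, and hard-coded defaults are assumptions based on the setup described in the question:

# Hypothetical "ssm cp"-style helper: scp over an inline SSM ProxyCommand.
# An SSH key that the instance already trusts is still required.
ssmcp() {
  local file=$1 instance=$2
  scp -o ProxyCommand="aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters portNumber=%p" \
      -i ~/.ssh/mykey.pem "$file" ec2-user@"$instance":
}

# Usage: ssmcp file.txt i-04843lr540028e96a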


2 Comments

Better formatting of this answer would increase its readability.
How do the --profile and --region settings get to scp? Like the other answers, don't you have to set up an SSM session tunnel first?

If you have SSH over SSM set up, you can just use normal scp, like so:

scp file.txt ec2-user@i-04843lr540028e96a:

(note the trailing colon, which tells scp the destination is remote)

If it isn't working, make sure you have:

  • Session Manager plugin installed locally
  • Your key pair on the instance and locally (you will need to define it in your ssh config, or via the -i switch)
  • SSM agent on the instance (installed by default on Amazon Linux 2)
  • An instance role attached to the instance that allows Session Manager (it needs to be there at boot, so if you just attached it, reboot)

Reference: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-enable-ssh-connections.html
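For reference, a sketch of a matching SSH config stanza; the user name and key path here are assumptions (adjust for your AMI):

# ~/.ssh/config
Host i-* mi-*
    User ec2-user
    IdentityFile ~/.ssh/mykey.pem
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"

With that in place, scp file.txt i-04843lr540028e96a:/tmp/ works without extra switches.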

If you need more detail, give me more info on your setup, and I'll try to help.

7 Comments

The client is a container running on EKS nodes, and the server is an EC2 machine (which has the SSM agent installed). The issue is that I can't go with keys, since I would then need to add a public key to every EC2 machine and also rebuild the container with my private key. So I am looking for a boto3 solution without keys. One way is via S3: I can push a file, but due to security concerns I can't use S3 to publish the data and then consume it in my script. So does this mean I have only two options to send a file via SSM: 1) via scp (with keys), steps as mentioned above, or 2) via S3?
Ah, I thought you meant from your laptop to EC2; EKS to EC2 would need a different solution. What are your other requirements? Is latency an issue? What about retries? What happens on the EC2 instance when the file is copied? Without more detail, it is hard to suggest something. I'm guessing S3 with SNS + SQS to notify the server of a new file would probably be a better solution. Please update your original question with more detail on your full requirements, and I'll try again :)
I updated my question: since it's a private key, I can't save it to S3. The key will be generated by my service, which runs as a pod in EKS, and will be sent to the EC2 machine (note: each machine will get a different key).
And if I use SNS/SQS, I need to add new consumers for this purpose on the EC2 machines, which is also not possible, as these EC2 machines are essentially appliances.
But why do I need an SSH key pair? The whole point of SSM is to get rid of SSH keys.

If you already have SSM set up, why do you need boto3 to send the file?

aws ssm start-session --target i-xxxx \
 --document-name AmazonEKS-ExecuteNonInteractiveCommand \
 --parameters 'command="cat remotefile"' | tail -n +3 | head -n -3  > file

Where i-xxx is your instance ID, remotefile is the name of the remote file, and file is the name it will be given when you fetch it. Obviously, ssm-user will need to be able to read remotefile; if not, you could probably add some sudo magic in there too.
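For example, a hedged sketch of that sudo variant (assuming ssm-user has passwordless sudo, which is the Session Manager default, and a hypothetical root-owned path):

aws ssm start-session --target i-xxxx \
 --document-name AmazonEKS-ExecuteNonInteractiveCommand \
 --parameters 'command="sudo cat /root/remotefile"' | tail -n +3 | head -n -3 > file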

No need to set up SSH keys or S3 buckets or... whatever.

The tail -n +3 | head -n -3 is needed because I can't see a way to persuade SSM not to print a blank line and "Starting session..." or "Exiting session..." on each connection.

That only works up to about a 250 KB file size for me. I don't know whether it was corrupting the binary or not, because my file was bigger than that.

Alternative:

aws ssm start-session --target i-xxxx \
 --document-name AWSFleetManager-GetFileContent \
 --parameters 'Path=remotefile,PrintInteractiveStatements=No' | tail -n +4 | head -n -3 > file

I think this only works for text files. It nuked my tar file; it might just be doing a unix2dos conversion on the way through.

Running the file through base64 first made it happy.
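So a hedged sketch of that base64 round trip, reusing the first command (same offsets; untested beyond the file sizes mentioned above):

aws ssm start-session --target i-xxxx \
 --document-name AmazonEKS-ExecuteNonInteractiveCommand \
 --parameters 'command="base64 remotefile"' | tail -n +3 | head -n -3 | base64 -d > file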

NBs:

  • If you're on OSX, the head command might not work; you can use a text editor instead to remove the last few lines.
  • Both options require you to press a key when the transfer finishes.

1 Comment

It worked for me when transferring a 1.6 MB YAML file from an EC2 instance to a Linux machine.

You can use an S3 bucket as a proxy. The only thing required is to give the EC2 instance permission to access S3. This way you don't have to use the SSH protocol to copy files between machines.

All steps are explained here
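A minimal sketch of that flow from the CLI (bucket name, file names, and instance ID are placeholders; the instance profile needs s3:GetObject on the bucket):

# From the source machine: stage the file in S3.
aws s3 cp file.txt s3://my-bucket/file.txt

# Then have the instance pull it down via SSM.
aws ssm send-command \
  --instance-ids i-xxxxx \
  --document-name AWS-RunShellScript \
  --parameters '{"commands":["aws s3 cp s3://my-bucket/file.txt /tmp/file.txt"]}'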



Even though it's not the OP's requested boto3 approach, here are the vanilla CLI steps:

  1. Use the SSM agent to connect to the instance.
  2. Become ubuntu/ec2-user.
  3. Edit ~/.ssh/authorized_keys and add a line for your personal SSH key. Make sure the permissions are 0600.
  4. On your local machine, edit ~/.ssh/config and add something like
host i-* mi-*
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
  5. Export the required environment variables, e.g. export AWS_PROFILE=your-profile and export AWS_REGION=eu-central-1.
  6. Verify you can log in via SSH by running ssh i-0123456789 -l ubuntu.
  7. tar.gz the content up to make it faster, as the transfer is slow (see the sketch after this list).
  8. SCP the archive off the machine: scp ubuntu@i-0123456789:/home/ubuntu/content.tar.gz ~/Downloads/
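For example, steps 7 and 8 together might look like this (paths are placeholders):

# Bundle the content on the instance, then pull the archive down.
ssh i-0123456789 -l ubuntu "tar czf /home/ubuntu/content.tar.gz content/"
scp ubuntu@i-0123456789:/home/ubuntu/content.tar.gz ~/Downloads/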

1 Comment

My solution automates these steps, so you don't have to hassle around with them: github.com/qoomon/aws-ssm-ssh-proxy-command

What worked for me was to upload the file to S3 first, and then get it from there.

High level plan:

  1. install AWS CLI if not already present

  2. find out the current identity that the CLI will use

  3. locate/create an S3 bucket where you want to upload the file

  4. upload the file to the S3 bucket

  5. download the file from S3 bucket

1. Install AWS CLI

Try running aws from the command line; if you get an error, follow the instructions at https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html

2. Find out current IAM identity

aws sts get-caller-identity

You should get something like this:

{
    "UserId": "AROAXKJ7HLGGCEYWXB3UV:i-08cc8005ac8681f86",
    "Account": "123456789012345",
    "Arn": "arn:aws:sts::123456789012345:assumed-role/awsddcdisprod01-CloudWatchAgent/i-0987654321"
}

Copy the value of the Arn attribute:

arn:aws:sts::123456789012345:assumed-role/awsddcdisprod01-CloudWatchAgent/i-0987654321

3. S3 Bucket

Open the AWS Console, go to the S3 Buckets section, and open the "Permissions" settings for the bucket you want to upload to (or create a new one first). Scroll to the "Bucket policy" section, click "Edit", and paste a JSON like the one below, where in the Principal/AWS key you must enter the ARN from the previous step; also replace <BUCKET-NAME> with the actual bucket name:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "TempUploadFromSSM",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:sts::123456789012345:assumed-role/awsddcdisprod01-CloudWatchAgent/i-0987654321"
      },
      "Action": [
        "s3:ListBucket",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::<BUCKET-NAME>",
        "arn:aws:s3:::<BUCKET-NAME>/temp/*"
      ]
    }
  ]
}

4. Upload file

Back in the SSM session on the EC2 instance, use a command like this:

aws s3 cp logs.zip s3://<BUCKET-NAME>/temp/

(replace <BUCKET-NAME> with the bucket you configured in step 3)

5. Download file

Use any S3 client or the AWS Web Console itself to download the file you just uploaded.
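For example, with the AWS CLI from any machine whose credentials can read the bucket:

aws s3 cp s3://<BUCKET-NAME>/temp/logs.zip .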

6. Bonus step

Celebrate! 🥳
