
I have the following directory in my local computer:

dir1
  |
  |__ randomfile.jpg
  |__ dir2
        |
        |__ file1.txt
        |__ file2.txt
        |__ pict.png

What I want to do is copy all the *.txt files to an Amazon S3 bucket while preserving the subdirectory structure. How can I do that?

At the end of the day, we'd like to find this file and directory structure in S3:

 dir1
      |
      |__ dir2
            |
            |__ file1.txt
            |__ file2.txt

With standard Unix commands I can do this:

find . -name '*.txt' -exec cp --parents \{\} /target \;

But I'm not sure how to do this with the AWS command line.

In reality, we have ~10 TB of files to transfer.


1 Answer


Just use sync:

aws s3 sync src/ s3://mybucket --exclude "*" --include "*.txt"

See the AWS CLI documentation on exclude and include filters for details on how the filter rules are applied.
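To see why this preserves the structure, here is a minimal Python sketch of the filter semantics of `--exclude "*" --include "*.txt"`: everything is excluded, then `*.txt` files are re-included, and each selected file's path relative to the source root becomes its S3 key. The bucket name `mybucket` is illustrative and no AWS calls are made; this only computes which uploads `sync` would plan.

```python
from pathlib import Path


def planned_uploads(src: str, bucket: str = "mybucket") -> list[tuple[str, str]]:
    """Return (local_path, s3_uri) pairs the exclude/include filter would select.

    Mimics `--exclude "*" --include "*.txt"`: only *.txt files survive,
    and the path relative to `src` is kept as the S3 key, so the
    subdirectory structure (e.g. dir2/) is preserved.
    """
    root = Path(src)
    pairs = []
    for path in sorted(root.rglob("*")):
        if path.is_file() and path.match("*.txt"):  # the re-include rule
            key = path.relative_to(root).as_posix()  # e.g. "dir2/file1.txt"
            pairs.append((str(path), f"s3://{bucket}/{key}"))
    return pairs
```

With the directory layout from the question, `randomfile.jpg` and `pict.png` are filtered out and only `dir2/file1.txt` and `dir2/file2.txt` are selected, keyed under their `dir2/` prefix.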


5 Comments

As an alternative to sync (which is intended for repeated syncing), you could use aws s3 cp --recursive ..., which would do the same job.
@JohnRotenstein you could, but exclude and include currently work only with sync, not with cp or mv!
@KaranShah that works with cp; you need `--recursive`.
@Ôrel recursive will copy everything; it cannot exclude the random file, as mentioned in the question.
My point is that exclude and include are supported by cp, but I agree that to keep the structure here you need sync.
