
"ResourceLoader" with AWS S3 works fine with these properties:

cloud:
  aws:
    s3:
        endpoint: s3.amazonaws.com     # custom endpoint support added in Spring Cloud AWS 2.3
    credentials:
        accessKey: XXXXXX
        secretKey: XXXXXX
    region:
        static: us-east-1
    stack:
        auto: false

However, when I bring up a LocalStack container locally and try to use it with these properties (as per the release announcement: https://spring.io/blog/2021/03/17/spring-cloud-aws-2-3-is-now-available):

cloud:
  aws:
    s3:
        endpoint: http://localhost:4566
    credentials:
        accessKey: test
        secretKey: test
    region:
        static: us-east-1
    stack:
        auto: false

I get this exception:

17:12:12.130 [reactor-http-nio-2] ERROR org.springframework.boot.autoconfigure.web.reactive.error.AbstractErrorWebExceptionHandler - [23efd000-1] 500 Server Error for HTTP GET "/getresource/test"
com.amazonaws.SdkClientException: Unable to execute HTTP request: mybucket.localhost
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1207) ~[aws-java-sdk-core-1.11.951.jar:?]
    Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
    |_ checkpoint ⇢ org.springframework.boot.actuate.metrics.web.reactive.server.MetricsWebFilter [DefaultWebFilterChain]
    |_ checkpoint ⇢ HTTP GET "/getresource/test" [ExceptionHandlingWebHandler]
Stack trace:
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleRetryableException(AmazonHttpClient.java:1207) ~[aws-java-sdk-core-1.11.951.jar:?]
Caused by: java.net.UnknownHostException: mybucket.localhost
    at java.net.InetAddress$CachedAddresses.get(InetAddress.java:797) ~[?:?]

I can otherwise view my LocalStack bucket's files fine in an S3 browser.

Here is the docker compose config for my localstack:

version: '3.1'
services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - AWS_DEFAULT_REGION=us-east-1
      - AWS_ACCESS_KEY_ID=test
      - AWS_SECRET_ACCESS_KEY=test
      - EDGE_PORT=4566
      - SERVICES=lambda,s3
    ports:
      - '4566-4583:4566-4583'
    volumes:
      - "${TEMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"

Here is how I am reading a text file:

import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import org.apache.commons.io.IOUtils;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.io.Resource;
import org.springframework.core.io.ResourceLoader;

public class ResourceTransferManager {

    @Autowired
    private ResourceLoader resourceLoader;

    public void resourceLoadingMethod() throws IOException {
        Resource resource = resourceLoader.getResource("s3://mybucket/index.txt");
        try (InputStream inputStream = resource.getInputStream()) {
            System.out.println("File content: " + IOUtils.toString(inputStream, StandardCharsets.UTF_8));
        }
    }
}

  
  • It does start working, though, when this is added to the /etc/hosts file: 127.0.0.1 mybucket.localhost Commented Jun 21, 2021 at 10:06
  • But this is not a feasible solution. If this is happening due to a path-style access issue, is there an application.yml property that can be used to enable it? Commented Jun 21, 2021 at 10:12
  • In the YAML used for Docker, you can create a network alias for your container, like: - <yourbucketname>.s3.localhost.localstack.cloud (see the sketch after these comments). Commented Jun 21, 2022 at 4:47
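
A minimal compose sketch of that alias idea, assuming the Spring app runs as a second service on the same compose network and the bucket is named mybucket (both assumptions, not from the original post). Note that aliases only affect lookups from other containers, not from the host:

services:
  myapp:                      # hypothetical Spring Boot service talking to LocalStack
    build: .
    depends_on:
      - localstack
  localstack:
    image: localstack/localstack:latest
    networks:
      default:
        aliases:
          # containers on this network can now resolve the bucket subdomain
          - mybucket.s3.localhost.localstack.cloud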

3 Answers


By default the S3 client builds request URLs with the bucket name as a subdomain (virtual-hosted-style addressing), and that is what causes this issue. There are a couple of ways to address it:

  1. With LocalStack, do not use the endpoint http://localhost:4566; use the standard-format endpoint instead, i.e. http://s3.localhost.localstack.cloud:4566. That hostname actually resolves through DNS to the localhost IP internally, so this works fine. (The only caveat is that it resolves via public DNS, so you either need an internet connection or you need to add host entries prefixed with the bucket name, for example putting 127.0.0.1 <yourexpectedbucketName>.s3.localhost.localstack.cloud in your hosts file.) Alternatively, if you are using Docker, instead of adding host entries you can create a network alias for your LocalStack container, like <yourexpectedbucketName>.s3.localhost.localstack.cloud.

  2. A better way, extending the first approach: instead of creating an alias for each of your buckets (which may not always be feasible), you can spin up a local DNS container and use a wildcard DNS config there. See the simplified sample at https://gist.github.com/paraspatidar/c29e4adb172a5afc92852a57e621323d (original reference: https://gist.github.com/NAR8789/92da076d0c35b434107fb4f4f198fd12).

  3. The latest LocalStack image seems to enforce a valid AWS region in the URL; in that case you have to use a URL like http://s3.us-east-1.localhost.localstack.cloud:4566 (see the endpoint sketch below).
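
For instance, here is the question's application.yml with the option 1 endpoint swapped in (a sketch; on newer LocalStack images, use the region-qualified host from option 3 instead):

cloud:
  aws:
    s3:
      endpoint: http://s3.localhost.localstack.cloud:4566   # or http://s3.us-east-1.localhost.localstack.cloud:4566
    credentials:
      accessKey: test
      secretKey: test
    region:
      static: us-east-1
    stack:
      auto: false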


2 Comments

The first option should work, I think. However, the property ForcePathStyleAccess is not exposed in spring-cloud-aws (see the sketch after these comments).
The first option works for me! I changed application.properties: aws.dynamodb.endpoint=s3.localhost.localstack.cloud:4566 aws.s3.endpoint=s3.localhost.localstack.cloud:4566
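
Since the path-style flag is not exposed as a property in Spring Cloud AWS 2.3, one workaround sketch is to define your own AmazonS3 bean with path-style access enabled (the bean definition below and whether the auto-configuration backs off for it are assumptions, not from the original post; endpoint and credential values mirror the question's setup):

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder.EndpointConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class LocalstackS3Config {

    // Builds a client that uses path-style URLs (http://localhost:4566/mybucket/key)
    // instead of a bucket subdomain, avoiding the mybucket.localhost lookup entirely.
    @Bean
    public AmazonS3 amazonS3() {
        return AmazonS3ClientBuilder.standard()
                .withEndpointConfiguration(new EndpointConfiguration("http://localhost:4566", "us-east-1"))
                .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials("test", "test")))
                .withPathStyleAccessEnabled(true)
                .build();
    }
}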

I'm using Spring Boot 3 with spring-cloud-aws-starter-s3 3.0.0. If you're using the same library to connect to S3, you can add this to your application.properties:

spring.cloud.aws.s3.path-style-access-enabled=true 
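
For reference, a sketch of the LocalStack-related settings in the 3.x property namespace (the values here mirror the question's setup and are assumptions for your environment):

spring.cloud.aws.s3.path-style-access-enabled=true
spring.cloud.aws.s3.endpoint=http://localhost:4566
spring.cloud.aws.region.static=us-east-1
spring.cloud.aws.credentials.access-key=test
spring.cloud.aws.credentials.secret-key=test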

Comments


My LocalStack logs showed this deprecation warning:

2024-08-30T12:54:45.922 WARN --- [ MainThread] localstack.deprecations : HOSTNAME_EXTERNAL is deprecated (since 2.0.0) and will be removed in upcoming releases of LocalStack! This configuration will be migrated to LOCALSTACK_HOST

I just added the line below to the localstack environment section of my docker-compose file, and it worked for me:

- LOCALSTACK_HOST=localhost
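
In the question's compose file that would look like this (ports, volumes, and the other keys are unchanged and omitted for brevity):

services:
  localstack:
    image: localstack/localstack:latest
    environment:
      - LOCALSTACK_HOST=localhost   # replaces the deprecated HOSTNAME_EXTERNAL setting
      - AWS_DEFAULT_REGION=us-east-1
      - AWS_ACCESS_KEY_ID=test
      - AWS_SECRET_ACCESS_KEY=test
      - SERVICES=lambda,s3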

Comments
