I'm trying to set up VPC peering from my MongoDB Atlas cluster to my Kubernetes EKS cluster on AWS. The peering connection is established successfully, but I get no connection to the cluster from my pods.
The default entry for the IP whitelist is added as well. Once the connection works, I will replace it with a security group.

The peering on AWS is accepted and "DNS resolution from requester VPC to private IP" is enabled.

The route has been added to the public route table of the K8s cluster.

When I connect to a pod and try to establish a connection with the following command:
# mongo "mongodb://x.mongodb.net:27017,y.mongodb.net:27017,z.mongodb.net:27017/test?replicaSet=Cluster0-shard-0" --ssl --authenticationDatabase admin --username JackBauer
I get "CONNECT_ERROR" for every endpoint.
What am I missing?
NOTE: I've since created a new paid cluster, and VPC peering is working perfectly there. Might this feature be limited to paid clusters only?

`--verbose` may give you some more info on the reason for the connection error/SSL issue. The `connected to server x.mongodb.net:27017` message means you have connectivity on the network level. To clarify further, you may run `nc -v x.mongodb.net 27017` in advance. If you get something like `x.mongodb.net (108.x.y.z) 27017 (?) open`, then you do have a connection at least on the network level. As a next step, you may check SSL-related things like key/cert files.
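If `nc` isn't available in the pod image, the same network-level check can be sketched in Python with the standard library. This is a minimal sketch; the `x/y/z.mongodb.net` hostnames are the placeholders from the question and should be replaced with your real Atlas replica-set hosts.

```python
import socket

def check_tcp(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers DNS failures, refusals, and timeouts alike.
        return False

if __name__ == "__main__":
    # Placeholder hostnames from the question; substitute your own.
    for host in ["x.mongodb.net", "y.mongodb.net", "z.mongodb.net"]:
        print(f"{host}:27017 reachable: {check_tcp(host, 27017)}")
```

If this prints `False` for every host, the problem is routing/peering/DNS rather than SSL or authentication, and the route tables and security groups are the place to look.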