
I have a container-based application running Node.js, and my backend is a MongoDB container.

Basically, I am planning to run this in Kubernetes.

I have deployed these as separate containers in my current environment and it works fine. I have a MongoDB container and a Node.js container.

To connect the two I would do

docker run -d --link=mongodb:mongodb -e MONGODB_URL='mongodb://mongodb:27017/user' -p 4000:4000 e922a127d049 

My connection.js, shown below, reads the MONGODB_URL from process.env in my Node.js container and stores it in mongoDbUrl.

const mongoClient = require('mongodb').MongoClient;
const mongoDbUrl = process.env.MONGODB_URL;
//console.log(process.env.MONGODB_URL)
let mongodb;

function connect(callback){
    mongoClient.connect(mongoDbUrl, (err, db) => {
        mongodb = db;
        callback();
    });
}
function get(){
    return mongodb;
}

function close(){
    mongodb.close();
}

module.exports = {
    connect,
    get,
    close
};

To deploy on k8s, I have written yaml files for:

1) web controller
2) web service
3) mongoDB controller
4) mongoDB service

This is my current mongoDB controller

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo
    spec:
      containers:
      - image: mongo:latest
        name: mongo
        ports:
        - name: mongo
          containerPort: 27017
          hostPort: 27017

my mongoDB service

apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongodb
  name: mongodb
spec:
  ports:
    - port: 27017
      targetPort: 27017
  selector:
    name: mongo


my web controller

apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: web
  name: web-controller
spec:
  replicas: 1
  selector:
    name: web
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
      - image: leexha/node_demo:21
        env:
        - name: MONGODB_URL
          value: "mongodb://mongodb:27017/user"
        name: web
        ports:
        - containerPort: 4000
          name: node-server

and my web service

apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    name: web
spec:
  type: NodePort
  ports:
    - port: 4000
      targetPort: 4000
      protocol: TCP
  selector:
    name: web

I was able to deploy all the services and pods on my local kubernetes cluster.

However, when I tried to access the web application over a nodeport, it tells me that there is a connection error to my mongoDB.

TypeError: Cannot read property 'collection' of null
    at /app/app.js:24:17
    at Layer.handle [as handle_request] 

This is my node JS code for app.js

var bodyParser = require('body-parser')
, MongoClient = require('mongodb').MongoClient
, PORT = 4000
, instantMongoCrud = require('express-mongo-crud') // require the module
, express = require('express')
, app = express()
, path = require('path')
, options = { //specify options
    host: `localhost:${PORT}`
}
, db = require('./connection')


// connection to database
db.connect(() => {

    app.use(bodyParser.json()); // add body parser
    app.use(bodyParser.urlencoded({ extended: true }));
    //console.log('Hello ' + process.env.MONGODB_URL)

    // get function 
    app.get('/', function(req, res) {
        db.get().collection('users').find({}).toArray(function(err, data){
            if (err)
                console.log(err)
            else
                res.render('../views/pages/index.ejs',{data:data});
        });
    });

Clearly, this error occurs because my Node.js application is unable to reach the mongoDB service.

At first I thought MONGODB_URL was not set in my container. However, when I checked the Node.js container using

kubectl exec -it web-controller-r269f /bin/bash

and echoed MONGODB_URL, it returned mongodb://mongodb:27017/user, which is correct.

I'm quite unsure what I am doing wrong, as I am pretty sure I have done everything in order and my web deployment should be communicating with the mongoDB service. Any help? Sorry, I am still learning Kubernetes, so please pardon any mistakes.

  • The error is shown in /app/app.js; can you paste its snippet too? Commented May 21, 2019 at 16:55
  • app.get('/', function(req, res) { db.get().collection('users').find({}).toArray(function(err, data){ if (err) console.log(err) else res.render('../views/pages/index.ejs',{data:data}); }); }); Commented May 21, 2019 at 17:01
  • Can you add an err check in your connection.js after mongoClient.connect(mongoDbUrl, (err, db) => {, so it checks for an error and prints it to the console? Commented May 21, 2019 at 17:08
  • Also, can you delete the web-controller pod and start it again? Leave the other pods as they are. It could be an issue of pod start order: your web-controller might have started before the mongo one. Commented May 21, 2019 at 17:09
  • I tried deleting the web-controller pod and restarting it. I wrote a console log, but how do I check the logs in the container for this? Commented May 21, 2019 at 17:40

1 Answer

[Edit]

Sorry, my bad: the connection string mongodb://mongodb:27017 would actually work. I tried a DNS query on that name, and it resolved to the correct IP address even without specifying ".default.svc...".

root@web-controller-mlplb:/app# host mongodb
mongodb.default.svc.cluster.local has address 10.108.119.125

@Anshul Jindal is correct that you have a race condition, where the web pods load before the database pods. You were probably doing kubectl apply -f . Try doing a reset with kubectl delete -f . in the folder containing those yaml files. Then kubectl apply the database manifests first and, after a few seconds, kubectl apply the web manifests. You could also use Init Containers to check that the mongo service is ready before running the pods. Or, you can do that check in your Node.js application.
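For the Init Container route, a minimal sketch of what you could add to the web controller's pod spec; the busybox image and the nslookup loop are my assumptions here, not something from your manifests:

```yaml
# Hypothetical initContainer for the web pod spec: blocks the web
# container from starting until the "mongodb" Service name resolves
# in cluster DNS.
initContainers:
- name: wait-for-mongodb
  image: busybox:1.31
  command: ['sh', '-c',
    'until nslookup mongodb; do echo waiting for mongodb; sleep 2; done']
```

Note that a DNS check only confirms the Service object exists; it does not guarantee mongod is already accepting connections, so an application-level retry (below) is still the more robust option.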

Example of waiting for mongodb service in Node.js

In your connection.js file, you can change the connect function so that, if it fails the first time (i.e. due to the mongodb service/pod not being available yet), it retries every 3 seconds until a connection can be established. This way you don't even have to worry about the load order of applying Kubernetes manifests; you can just kubectl apply -f .

let RECONNECT_INTERVAL = 3000;

function connect(callback){
    mongoClient.connect(mongoDbUrl, (err, db) => {
        if (err) {
            console.log("attempting to reconnect to " + mongoDbUrl);
            setTimeout(connect.bind(this, callback), RECONNECT_INTERVAL);
            return;
        } else {
            mongodb = db;
            callback();
        }
    });
}
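The same retry-until-available pattern can also be factored out into a generic helper, which makes it easy to test without a running database. A minimal self-contained sketch in plain Node.js; the names `retry` and `flakyConnect` are illustrative, not from the original code:

```javascript
// Generic retry helper: calls `operation` (a function taking a
// node-style callback) and, on error, schedules another attempt
// every `intervalMs` until it succeeds.
function retry(operation, intervalMs, done) {
  operation((err, result) => {
    if (err) {
      // Retry later instead of failing outright.
      setTimeout(() => retry(operation, intervalMs, done), intervalMs);
      return;
    }
    done(null, result);
  });
}

// Example: an operation that fails twice before succeeding,
// mimicking a database that is not yet accepting connections.
let attempts = 0;
function flakyConnect(cb) {
  attempts += 1;
  if (attempts < 3) return cb(new Error("ECONNREFUSED"));
  cb(null, "connected");
}

retry(flakyConnect, 10, (err, result) => {
  console.log(result, "after", attempts, "attempts");
  // prints: connected after 3 attempts
});
```

Wrapping `mongoClient.connect` in such a helper gives the same behavior as the connect function above, but keeps the reconnect policy in one reusable place.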

11 Comments

Hi Redgetan, unfortunately it didn't work. I deployed to the default namespace.
I actually removed my web Deployment and my web Service and tried again:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    name: web
  name: web
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: web
    spec:
      containers:
      - image: leexha/node_demo:23
        env:
        - name: MONGODB_URL
          value: "mongodb://mongodb.default.svc.cluster.local:27017/user"
        name: web
        ports:
        - containerPort: 4000
          name: web
@adr if you do kubectl get svc, do you see mongodb in the list? When you ssh into your web pod, does nslookup kubernetes.default or nslookup mongodb.default.svc.cluster.local work? Here is another useful link: kubernetes.io/docs/tasks/debug-application-cluster/… . What does your kubectl logs <pod_name> show?
Hi, yes, I do see mongodb in the list. Also, I couldn't run nslookup in my pod: it says nslookup not found. For the logs, see the following comment.
Error from server (BadRequest): a container name must be specified for pod web-5b5d7fd596-ctn2q, choose one of: [web istio-proxy] or one of the init containers: [istio-init]
ladrian-a01:mongo-node adrianlee$ kubectl logs web-5b5d7fd596-ctn2q -c web
checking database
{ Error [MongoError]: failed to connect to server [mongo:27017] on first connect [MongoError: connect ECONNREFUSED 10.109.232.130:27017] 'failed to connect to server [mongo:27017] on first connect [MongoError: connect ECONNREFUSED 10.109.232.130:27017]' }
