I have n nodes managed by Kubernetes, each of them running several containers. A container may contain a database alongside an application. Since containers should be stateless, all data in the container's database would be lost upon restart or redeployment via Kubernetes. I therefore thought of a central database server (see Fig. 2.6.2, Management) that holds all databases for permanent persistence. The idea is to link each container's database to the related permanently persisted database via master-master replication.
Since each node runs Docker and boots the image into a container, how does the container's database initially synchronize all of the master's content? The data can't be pre-shipped within the image, because the image is built by an external source that only provides the compiled services and knows nothing about the actual database structure or contents.
The only thing I can currently think of is an SQL dump that the container requests on boot-up (via SMB, SSH, ...), but that seems a bit off to me.
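
To make the idea concrete, here is a minimal sketch of what I mean by "requesting a dump on boot-up", assuming a MySQL database and SSH access to the central host; the hostname, database name, and credentials are placeholders, not part of my actual setup:

```python
#!/usr/bin/env python3
"""Boot-up seeding sketch: pull a dump from the central database host
over SSH and stream it into the container-local database, so nothing
database-specific has to be shipped inside the image."""

import subprocess

CENTRAL_HOST = "central-db.example.internal"  # placeholder central DB host
DATABASE = "appdb"                            # placeholder database name

def seed_local_database() -> None:
    # Run mysqldump on the central host via SSH and pipe the dump
    # directly into the local MySQL instance.
    dump = subprocess.Popen(
        ["ssh", CENTRAL_HOST, "mysqldump", "--single-transaction", DATABASE],
        stdout=subprocess.PIPE,
    )
    subprocess.run(
        ["mysql", DATABASE],
        stdin=dump.stdout,
        check=True,
    )
    dump.stdout.close()
    if dump.wait() != 0:
        raise RuntimeError("mysqldump on the central host failed")

if __name__ == "__main__":
    seed_local_database()
```

Something like this could run as an entrypoint step (or an init container) before replication is started, but it feels fragile, which is why I'm asking whether there is a more idiomatic approach.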
