Coming from a SQL background, I always relied heavily on modeling and schema-evolution tools. But since moving to a mixed approach, where any given data element might live in Postgres, Mongo, etcd, Kafka or Redis, and where I might need any number of different fragments of boilerplate code, I've settled on a Jupyter Notebook template, written in Python, that manages schema universally.
I use a Python dict to define all the elements of the model I need, e.g.:
someModelDict["someFieldName"] = {"Name":"Node hash name",
"Description":"Host Node hash name",
"Datatype":str,
"ConstraintType":"FK",
"Constraint":"Nodes.NodeHashName",
"SampleValue":"DSFAGER3455",
"Notes":"Node providing resources to the service - hash of IP Address or Hostname"}
I've written relatively simple Python code to generate sample values and queries for any of the platforms I'm using, as well as boilerplate code for the web frameworks in which I need to work with the models, all based on that simple Python dict plus some additional mapping dicts (which map datatypes, commands, etc. from Python to the target platform or framework).
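To make the mapping-dict idea concrete, here is a minimal sketch of one such generator; the names PG_TYPE_MAP and generate_pg_ddl, and the "Services" table name, are my own illustrative assumptions rather than anything from the actual notebook. It builds on the someModelDict definition above:

# Hypothetical sketch: map Python types to Postgres column types,
# then emit a CREATE TABLE statement from the model dict.
PG_TYPE_MAP = {str: "TEXT", int: "BIGINT", float: "DOUBLE PRECISION", bool: "BOOLEAN"}

def generate_pg_ddl(table_name, model_dict):
    columns = []
    for field_name, spec in model_dict.items():
        col = f"{field_name} {PG_TYPE_MAP[spec['Datatype']]}"
        # Turn an FK constraint like "Nodes.NodeHashName" into a REFERENCES clause
        if spec.get("ConstraintType") == "FK":
            ref_table, ref_col = spec["Constraint"].split(".")
            col += f" REFERENCES {ref_table} ({ref_col})"
        columns.append(col)
    return f"CREATE TABLE {table_name} (\n  " + ",\n  ".join(columns) + "\n);"

print(generate_pg_ddl("Services", someModelDict))

A second mapping dict of the same shape (Python type to Mongo validator type, Redis command, etc.) drives the equivalent generators for the other platforms.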
If you're more of a JavaScript person, then perhaps a simple Node.js front end over JSON files could serve the same function as the Python dicts. Again, by keeping the originating entity for the model simple (JSON or a dict), you make it straightforward to process it with code to do things like check integrity, audit structure, or migrate from one version to another, as in the sketch below.
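As a rough example of such an integrity check, in the same Python style (the helper name validate_model and the required-key list are assumptions of mine, not part of the original template):

REQUIRED_KEYS = {"Name", "Description", "Datatype", "ConstraintType",
                 "Constraint", "SampleValue", "Notes"}

def validate_model(model_dict):
    """Audit each field definition for missing keys and datatype mismatches."""
    problems = []
    for field_name, spec in model_dict.items():
        missing = REQUIRED_KEYS - spec.keys()
        if missing:
            problems.append(f"{field_name}: missing keys {sorted(missing)}")
        # Check that the sample value actually matches the declared datatype
        expected = spec.get("Datatype")
        sample = spec.get("SampleValue")
        if expected and sample is not None and not isinstance(sample, expected):
            problems.append(f"{field_name}: sample value is not a {expected.__name__}")
    return problems

print(validate_model(someModelDict))  # [] means the model passes the audit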
Having the data model originate from one authoritative place is key, but when using multiple platforms, many of them NoSQL, I've found it best to 'roll your own', based on a simple but extensible structure like a dict or JSON.