Then run docker-compose build and done, right? Wait…
The better way: use the directory
While the above would eventually work, it is definitely not the Docker way of doing things. The great thing about Docker is that, besides offering lightweight virtual machines, it also provides access to a huge directory of setup scripts: the DockerHub.
DockerHub hosts official and unofficial Docker images
There you will find all the Docker images the community has agreed to share. It works like GitHub: if your image is meant to be public (that is, open source), hosting is free; if it needs to be private, you will be charged a fee. In our case, we simply need to download an official image, so no registration is required.
No wonder you will find several that set up MongoDB. There is even an official one.
So you can enter the MongoDB docker using the following docker-compose.yml:
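The compose file itself isn't reproduced in this excerpt; a minimal sketch in the v1 compose syntax of the time, with `db` as an assumed service name, would be:

```yaml
# docker-compose.yml: a single service running the official Mongo image
db:
  image: mongo
```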
And type docker-enter. But is it really what you want to do?
Connect Mongo to your Node Docker
Most of the time, what you need is for the database server to remain unchanged while your app server simply connects to it, right?
This is where docker-compose really shines. Go back to your app docker-compose.yml and add the following:
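The snippet isn't reproduced here; assuming your app service is called `web` (v1 compose syntax), the addition is a `db` service using the official image, plus a `links` entry on the app side:

```yaml
web:
  build: .
  volumes:
    - .:/app        # your app's working directory
  ports:
    - "3000:3000"
  links:
    - db            # exposes Mongo's address to the web container
db:
  image: mongo      # the official MongoDB image from DockerHub
```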
You can now run docker-enter and see what is going on:
The official Mongo docker gets downloaded
The Mongo Docker gets pulled because it is not yet present locally. Once that is done, yay, you are inside a virtual machine with NodeJS installed on it and a MongoDB connected to it.
This is great! But before we continue, can’t you think of some improvement we could make on the app Dockerfile?
Yes! You are correct: instead of installing NodeJS manually, why not use the official Node docker, just like we did for the Mongo one?
Let’s edit our Dockerfile:
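The edited Dockerfile isn't shown in this excerpt; a minimal sketch based on the official image (the untagged `node` image is an assumption, pin the version you actually need):

```dockerfile
# Use the official Node image instead of installing NodeJS by hand
FROM node
# Work from the app directory that docker-compose mounts
WORKDIR /app
```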
Now you’ll have to rerun docker-compose build (otherwise the previous version will be used) and then docker-enter. You are getting good! :)
Accessing Mongo from your app
I have told you Mongo is connected to your app, but until now you mostly had to take my word for it. How can you check it?
Try typing printenv:
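The output isn't reproduced here. An abridged, illustrative sample, based on the env vars Docker's legacy links inject for a service named `db` (your exact values will differ):

```shell
$ printenv
HOSTNAME=f2a9d0c81b3e
DB_PORT=tcp://172.17.0.2:27017
DB_PORT_27017_TCP=tcp://172.17.0.2:27017
DB_PORT_27017_TCP_ADDR=172.17.0.2
DB_PORT_27017_TCP_PORT=27017
DB_PORT_27017_TCP_PROTO=tcp
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
...
```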
Ouch, that's a lot of things. Don't worry, we don't need all of it. What interests us is the Mongo IP and port: 172.17.0.2 and 27017 respectively.
How to bring this into NodeJS? You already know how to do this I’m sure:
Inspired from the MongoDB doc, edit your index.js this way:
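The original index.js isn't reproduced here. A minimal sketch: read the address from the link env vars (the names assume a compose service called `db`; `myapp` is an arbitrary database name), then hand the URL to the official driver as shown in the MongoDB getting-started guide:

```javascript
// index.js (sketch): build the connection URL from the env vars that
// Docker injects for a linked service named "db". The localhost
// fallbacks let the script also run outside Docker.
var host = process.env.DB_PORT_27017_TCP_ADDR || '127.0.0.1';
var port = process.env.DB_PORT_27017_TCP_PORT || '27017';
var url = 'mongodb://' + host + ':' + port + '/myapp';
console.log('Connecting to ' + url);

// Then connect with the official driver (npm install mongodb), e.g.:
//   var MongoClient = require('mongodb').MongoClient;
//   MongoClient.connect(url, function (err, db) {
//     if (err) throw err;
//     console.log('Connected correctly to server');
//     db.close();
//   });
```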
And try it:
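Assuming your index.js logs a message on a successful connection, you would see something like:

```shell
$ node index.js
Connected correctly to server
```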
Persist the data
Docker containers are meant to be disposable. Wipe and rebuild them as often as you want! Just like software: when you open your browser, your photo editor, the calculator, or any other program, it starts and then lives until it is closed. The only things that remain after this lifecycle are files.
Well, in the application world, the server is the software and the database is the file: you want it to persist so that you don't have to sign up again in your local environment every time you want to work with it.
There is a way to achieve this: your working directory, that is, your source code, is what remains between Docker executions. Why not store the database content there as well?
Let’s configure it for the above Mongo DB:
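The configuration step isn't shown in this excerpt; a sketch, assuming a folder at the project root to hold Mongo's files (`db-data` is an arbitrary name used for illustration):

```shell
# Create a folder in your working directory to hold MongoDB's data;
# "db-data" is an arbitrary name, pick whatever fits your project.
mkdir -p db-data
```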
Add this folder to your .gitignore, otherwise you will commit all your database content along with your source code
And in docker-compose.yml, add the volumes entry to the db block that matches the location of the MongoDB content:
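The entry isn't reproduced here; a sketch, where the host folder name (`db-data`) is an arbitrary choice and `/data/db` is MongoDB's default data location inside the container:

```yaml
db:
  image: mongo
  volumes:
    # Map a folder from your source tree onto MongoDB's default
    # data location, so the data survives container rebuilds.
    - ./db-data:/data/db
```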
Unfortunately, there is no single solution for all databases, since each database system stores its data in a different location. To find the solution for your DB, simply google where is data stored in <DB Name> or how to make <DB Name> data persist in docker and you will find it.
So we have a server with its persistent database running. Fine, but that's not all I need when I work locally. Most of the time, I also need to run scripts, watch files, open a console, etc. So I need another terminal. Without stopping the server, let's open a new terminal tab and go for:
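The command and its output aren't reproduced here; presumably the docker-enter alias again, failing roughly like this (abridged):

```shell
$ docker-enter
...
ERROR: driver failed programming external connectivity:
Bind for 0.0.0.0:3000 failed: port is already allocated
```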
Ouch, it doesn't work. Why? Well, the message is pretty clear: port 3000 is already allocated to your other Docker instance. That's where our second alias comes into play. Try typing instead:
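The alias itself isn't shown in this excerpt. A hypothetical definition (both the name `docker-console` and the service name `web` are assumptions): the same shell access, but without publishing ports, so it never conflicts with the running server:

```shell
# Hypothetical second alias: open a shell in a fresh container of the
# "web" service, without port mapping, so port 3000 stays free.
alias docker-console='docker-compose run --rm web bash'
```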
In there, you will be able to do everything you need: run your webpack build, open the beautiful rails console, resize your images, etc.
Pro-tip: be aware that this new container does not share its state with the server one. They both start from the state of the last build. That means if you run apt-get install vim in one instance, it won't be available in the other. As mentioned above, the only thing that is shared is the content of the shared folder: your app working directory.
Final word (for now)
From this, you should have enough keys to work locally with Docker. If you run into trouble, please comment on this article and I will definitely help you out. If recurring questions arise, I will edit the article or write a third part. Docker is a huge project with tons of possible ways to use it, so be aware that this content was merely introductory and very much opinionated by my own usage.
Some Docker experts might even consider this nonsense, as this is not what Docker was created for. And so what? I was surprised by how little literature is available on the topic, and that is what led me to write this, after some time practicing on my own.
Finally, let me suggest you a last alias:
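The alias definition isn't shown in this excerpt; a sketch of what docker-clean could look like, using the standard docker CLI filters:

```shell
# docker-clean: remove all stopped containers left behind
# ("ghosts") after runs that didn't exit cleanly.
alias docker-clean='docker rm $(docker ps -aq --filter status=exited)'
```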
Because despite the --rm option, which is supposed to remove the instance after it has run, sometimes you won't exit things cleanly, and some instances remain as ghosts on your laptop; docker-clean will remove those.