Setting up a development environment with docker


An update from 2020:

This post no longer feels relevant. Nowadays I just run something like docker run -v $(pwd):/code --workdir /code node to install my dependencies.
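Spelled out, that amounts to something like this (the exact install command depends on the project; npm install is just an assumption here):

    # throwaway node container with the current directory mounted,
    # so the installed node_modules ends up on the host
    docker run --rm -v $(pwd):/code --workdir /code node npm install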

This summer I finally decided to learn and integrate docker into my development process. I had to use a bunch of different tools, such as postgres, influxdb, go and nodejs, and the last thing I wanted was to install all of that locally.

I won’t go into yet another “docker tutorial”: there are hundreds of them already (if you’re into go, this one looks good, although a bit outdated). Instead I’ll explain some issues I faced trying to move all the dependencies into containers and how I solved them, with a focus on installing vendor libraries within a container and sharing them with the host through a volume.

After reading some (mostly enthusiastic) guides and reviews, I had rather high expectations for a development environment based on docker containers:

  1. Easy deployment with all necessary dependencies bundled together, not conflicting with any other project. This is where docker excels, which is not surprising, as this is its primary use case.
  2. Minimal requirements for the host machine setup. Surely I don’t want to install a compiler or a database on the host, because of 1. This might be more difficult; for example, Intellij IDEA wants a go compiler on the host. But that’s not really a fault of docker itself.
  3. Libraries, such as node_modules, should be downloaded and compiled within a container, but stay accessible from the host to help my IDE with autocomplete. This is related to the previous point and seems obvious to me, but I’ve seen numerous tutorials on the internet that advise installing node_modules on the host and copying it into the docker container - which kind of eliminates all the benefits of using docker for isolated environments.
  4. The ability to debug those libraries by modifying their source. This also seems a natural thing to have, but we’ll see it’s not that easy to achieve with docker if we still want point 3.

While 1 and 2 are solved pretty trivially with docker and docker-compose (and I can live with a go compiler on the host for my IDE, as it does not interfere with running real projects), I quickly ran into the problem of sharing libraries, such as glide dependencies for go and node_modules for nodejs, between the host and the container.

The thing is, if you choose the noble way of managing libraries from within a container, it will break code synchronization with the host in the following scenario:

  • Assume a freshly cloned nodejs project, so there’s no node_modules directory on the host, but the dockerfile conveniently runs npm install (see the sketch after this list). The project directory is mapped as a volume, so the code will be synchronized and updated immediately when you change it on the host.
  • After running docker-compose up, you will unfortunately discover that those dependencies have vanished from the container, because they were overwritten by the volume from the host machine, which did not contain node_modules. Everything is broken.
  • The (non-obvious) solution suggested in some articles and on stackoverflow is to declare another volume in docker-compose specifically for the node_modules directory, like this:
    version: '3'
    services:
      server:
        build: ./
        volumes:
          - .:/app # syncs all your code, but removes node_modules
          - node_modules:/app/node_modules # brings node_modules back
    volumes:
      node_modules:
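
For reference, the dockerfile mentioned in the first step can be as simple as this (a minimal sketch, assuming a standard nodejs project and the same /app directory as in the compose file above):

    FROM node
    WORKDIR /app
    # install dependencies inside the image, so nothing is needed on the host
    COPY package*.json ./
    RUN npm install
    # copy the rest of the code; at runtime it is shadowed by the volume mount anyway
    COPY . .
    CMD ["npm", "start"]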

This looks suspicious, but after rebuilding the project with docker-compose, the dependencies will be present in the container, and your app will (hopefully) work.
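The rebuild itself is just the usual compose cycle, e.g.:

    # rebuild the image and bring the services back up
    docker-compose up --build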

However, this solves the problem only partially: there is still no node_modules directory on the host! I’m surprised this problem is ignored by most docker articles and tutorials, because inspecting and modifying the source code of the libraries is an important part of my programming routine. While writing this post, I had to verify again that the problem really does exist and it’s not just in my head.

Anyway, the workaround I came up with is to manually copy the directory with dependencies from the container to the host, like this:

# get the ID of the (running) container to copy dependencies from;
# replace `server` with the service name from your docker-compose file.
docker-compose ps -q server
# copy node_modules from that container into the current directory.
docker cp CONTAINER_ID:/app/node_modules .
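
The two steps can also be collapsed into a single command (same `server` service name assumed):

    docker cp $(docker-compose ps -q server):/app/node_modules .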

Finally, the IDE is happy, and you can navigate to the library sources.

Still, we’re not done. Every time a dependency changes, this directory needs to be copied again. And what if we need to modify some library? Its files on the host are not synced with the container, so editing them there just wouldn’t work. The obvious hack, which I’m not happy about, is to connect to the container (with something like docker-compose exec server bash) and edit the files from within.


To conclude, while docker indeed helps to solve the most crucial problems of environment isolation on a local machine, it comes with a cost. Out of the 4 expected features, I only managed to get 3 - and let’s not forget all that was just to set up a local environment.
