So, you’ve heard about this new Docker thing everyone is talking about? Having all your apps in containers seems nice, but you just can’t figure out where to start? Start with a blog, so you can tell everyone else how you did it! In this post you’ll see how to get started with Ghost proxied behind nginx (pronounced "engine-ex").

I’m assuming you already have Docker installed. I just used the pre-built Ubuntu+Docker image from DigitalOcean, which got me running in a couple of minutes.

ProTip: Use an SSH key when creating your DigitalOcean droplet. It makes your life so much easier.

So… what now?

Start by checking if everything is up to date:

root@docker:~# apt-get update && apt-get upgrade
root@docker:~# docker version
  Client version: 1.4.0
  Client API version: 1.16
  Go version (client): go1.3.3
  Git commit (client): 4595d4f
  OS/Arch (client): linux/amd64
  Server version: 1.4.0
  Server API version: 1.16
  Go version (server): go1.3.3
  Git commit (server): 4595d4f

Now the first thing you need is a data-only container to hold your data volume (why?):

docker run -v /data --name ghostdata dockerfile/ubuntu echo Data-only container for Ghost

This will create an Ubuntu container named ghostdata with the folder /data as a volume. Once created, it runs the echo command and exits; data-only containers don’t need to be running to be used.

ProTip: Use the same base image for your data volumes and your app containers to leverage Docker’s layer caching (why?).
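Data-only containers also make backups straightforward: run a throwaway container that shares ghostdata’s volumes and tar up /data to a folder on the host. A sketch (the /root/backups path is just an example; use whatever you like):

```shell
# Mount ghostdata's volumes plus a host folder, then archive /data.
# --rm removes the throwaway container once the tar finishes.
docker run --rm --volumes-from ghostdata -v /root/backups:/backup \
  dockerfile/ubuntu tar cvf /backup/ghostdata.tar /data
```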

With that out of the way, let’s create our Ghost config file, /docker/blog/config.js:

// # Ghost Configuration
// Setup your Ghost install for various environments
// Documentation can be found at

var path = require('path');
var config;

config = {
  production: {
    url: '', // set this up or your RSS links will break
    mail: {
      transport: 'SMTP',
      from: '[email protected]',
      options: {
        host: '',
        port: 465,
        auth: {
          user: '[email protected]',
          pass: 'YourPasswordGoesHere'
        }
      }
    },
    database: {
      client: 'sqlite3',
      connection: {
        filename: '/data/blog.db' // use the same path as in your data volume
      },
      debug: false
    },
    server: {
      // Host to be passed to node's `net.Server#listen()`
      host: '',
      // Port to be passed to node's `net.Server#listen()`, for iisnode set this to `process.env.PORT`
      port: '2368'
    }
  }
};

// Export config
module.exports = config;

Ghost recommends using Mailgun, but Zoho has a great service too, with no ads. Adjust your config accordingly.

If you are going to use a custom theme, now is the time to add it to your /docker/blog/content/themes/ folder. It will be copied to the container when you first start it.

Now start your Ghost image, using the volumes from your data-only container:

docker run -d --volumes-from ghostdata --name blog -v /docker/blog:/ghost-override dockerfile/ghost

This will download and run a dockerfile/ghost image, naming it blog, using all volumes from your ghostdata container, and mounting the /docker/blog host folder as /ghost-override in your container.
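If the blog doesn’t come up, the container’s logs are the first place to look:

```shell
# Check that the container is running, then follow Ghost's output.
docker ps
docker logs -f blog    # Ctrl+C stops following; the container keeps running
```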

If you check the image’s Dockerfile, you’ll see that port 2368 is exposed. So you browse to http://your-ip:2368 and… nothing.

That’s because, by default, container exposed ports are only visible to linked containers but not to the host. If you want to have it visible use the -p host:container parameter (say, -p 80:2368) and voilà. There is, however, an (arguably) better option: we’ll use nginx as a reverse proxy (why?).
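If you just want the quick route, the published-port variant of the earlier run command looks like this (remove the old blog container first if you already started it):

```shell
# Same run command as before, but publishing container port 2368 on host port 80.
docker run -d --volumes-from ghostdata --name blog \
  -v /docker/blog:/ghost-override -p 80:2368 dockerfile/ghost
```

We’ll take the nginx route instead, though.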

To do that, create your /docker/nginx/sites-enabled/blog config file:

server {
  listen       80;
  access_log   /var/log/nginx/;
  error_log    /var/log/nginx/;
  location  /  {
    proxy_pass        http://blog:2368;
    proxy_set_header  X-Forwarded-Host  $server_name;
    proxy_set_header  Host              $server_name;
    proxy_set_header  X-Real-IP         $remote_addr;
  }
}

See the proxy_pass line? A nice touch of Docker is that it adds the IPs of all linked containers to the container’s /etc/hosts file, so you don’t have to worry about the containers’ internal IPs and can just refer to them by name.

Now we can start nginx:

docker run -d -p 80:80 --name nginx --link blog:blog -v /docker/nginx/sites-enabled:/etc/nginx/sites-enabled dockerfile/nginx

This will:

  • Download and start the dockerfile/nginx image

  • Name it nginx

  • Bind the container’s port 80 to the host’s port 80

  • Link it with the blog container, using blog as an alias in the hosts file and environment variables

  • Mount the host folder /docker/nginx/sites-enabled as /etc/nginx/sites-enabled in the container
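Since Docker 1.3 you can poke around a running container with docker exec, which is handy both for confirming the link entry and for reloading nginx after you edit the mounted config (assuming the image’s nginx responds to the standard reload signal):

```shell
# The hosts file should contain a line mapping the alias "blog"
# to the blog container's internal IP.
docker exec nginx cat /etc/hosts

# After editing files under /docker/nginx/sites-enabled on the host,
# reload nginx inside the container instead of restarting it.
docker exec nginx nginx -s reload
```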

And there you have it: your Ghost install is proxied behind nginx. Each runs in its own container, and you can update/restart/remove either of them without losing your data. The only way to delete the data is to remove all linked containers that use your data volume, and then the data-only container itself:

root@docker:~# docker stop nginx
root@docker:~# docker rm nginx
root@docker:~# docker stop blog
root@docker:~# docker rm blog
root@docker:~# docker rm ghostdata
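This separation is also what makes upgrades painless. To move to a newer Ghost image, for example, you can throw away just the app container and recreate it; the ghostdata volume keeps your database:

```shell
# Replace the app container; the data volume survives.
docker pull dockerfile/ghost
docker stop blog && docker rm blog
docker run -d --volumes-from ghostdata --name blog \
  -v /docker/blog:/ghost-override dockerfile/ghost
# You may need to restart nginx so the link picks up the new container's IP.
docker restart nginx
```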

ProTip: Before you point your DNS at your droplet, configure CloudFlare to get a nice (and free) CDN with free SSL certificates.