07 Mar 2016, 08:37

Let's Encrypt - setting up automated certificate renewals


Not too long ago, Let’s Encrypt launched a free CA to enable website owners to easily generate their own signed certificates from the command line on their server. It integrates with popular web servers like Apache and NGINX to make the validation step easier. However, the certificates are only valid for 3 months, as opposed to the 1 year that is more typical. They’ll send an email when your certificate is close to expiring, but that’s not always ideal. The good news is that since this is a command line tool, it can easily be written into a cron job to run periodically.

The current release, 0.4.2, seems to work reasonably well for scripted renewals. There are a couple of notes about issues that I ran into, at least with this version. First, here’s the general command that I use for renewals:

./letsencrypt/letsencrypt-auto certonly --no-self-upgrade --apache --force-renew --renew-by-default --agree-tos -nvv --email admin@example.com -d example.com,www.example.com >> /var/log/letsencrypt/renew.log

That’s great when you have a simple site, with sub-domains that all point to one set of content, and one vhost entry (per port). But maybe you have a couple of different subdomains that relate to different sets of content, something like this:

./letsencrypt/letsencrypt-auto certonly --no-self-upgrade --apache --force-renew --renew-by-default --agree-tos -nvv --email admin@example.com -d example.com,www.example.com,app.example.com >> /var/log/letsencrypt/renew.log

Here, app.example.com relates to a different vhost. In this case, you’ll need to break the vhosts into separate files. See the examples below.

Vhost for example.com and www.example.com:

<VirtualHost *:80>
    ServerAdmin admin@example.com
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot "/var/www/example/"
    <Directory "/var/www/example/">
        DirectoryIndex index.html
        RewriteEngine On
        RewriteOptions Inherit
        AllowOverride All
        Order Deny,Allow
        Allow from all
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:443>
    ServerAdmin admin@example.com   
    ServerName example.com
    ServerAlias www.example.com  
    DocumentRoot "/var/www/example/"
    <Directory "/var/www/example/">
        AllowOverride All
        Options -MultiViews +Indexes +FollowSymLinks
        DirectoryIndex index.html index.php index.phtml index.htm
        Order Deny,Allow
        Allow from all
        Require all granted
    </Directory>

    ErrorLog "/var/log/httpd/example.com-error_log"
    CustomLog "/var/log/httpd/example.com-access_log" common

    SSLEngine on

    SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
</VirtualHost>

Vhost for app.example.com:

<VirtualHost *:80>
    ServerAdmin admin@example.com
    ServerName app.example.com
    DocumentRoot "/var/www/example-app/"
    <Directory "/var/www/example-app/">
        DirectoryIndex index.html
        RewriteEngine On
        RewriteOptions Inherit
        AllowOverride All
        Order Deny,Allow
        Allow from all
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:443>
    ServerAdmin admin@example.com   
    ServerName app.example.com
    DocumentRoot "/var/www/example-app/"
    <Directory "/var/www/example-app/">
        AllowOverride All
        Options -MultiViews +Indexes +FollowSymLinks
        DirectoryIndex index.html index.php index.phtml index.htm
        Order Deny,Allow
        Allow from all
        Require all granted
    </Directory>

    ErrorLog "/var/log/httpd/example-app.com-error_log"
    CustomLog "/var/log/httpd/example-app.com-access_log" common

    SSLEngine on

    SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
</VirtualHost>

Then, I created a script called updateCerts.sh that handles a couple of different renewals and, at the end, reloads the Apache configs:

#!/bin/bash

./letsencrypt/letsencrypt-auto certonly --no-self-upgrade --apache --force-renew --renew-by-default --agree-tos -nvv --email admin@example.com -d example.com,www.example.com,app.example.com >> /var/log/letsencrypt/renew.log
./letsencrypt/letsencrypt-auto certonly --no-self-upgrade --apache --force-renew --renew-by-default --agree-tos -nvv --email admin@thing.com -d thing.com,www.thing.com,files.thing.com >> /var/log/letsencrypt/renew.log

apachectl graceful

Then, I added this to the root user’s crontab:

# m h  dom mon dow   command
0 4  *   *   1     /home/user/updateCerts.sh

Cron is supposed to run this script once a week, and we’ll have logs at /var/log/letsencrypt/renew.log.

06 Mar 2016, 10:30

Container Tomcat Graph - part 1

Note: I wrote this post 3 years ago, and somehow forgot to publish it. It may contain incomplete sentences, and may be complete nonsense, I am not going to re-read it. Proceed at your own risk.

For years, I’ve had this problem around cooking at home, which is sort of two-fold. First, I really don’t like finding recipes and coming up with meals to cook. It’s time-consuming, and when I’m digging through recipes, I get indecisive and overwhelmed with the number of choices, wasting hours a week. The other problem I have is that there are meals that I cook that call for me to buy some ingredient that the meal itself doesn’t completely use and that I probably won’t use again.

I decided to build a tool to help me solve these problems. The tool will allow me to enter in recipes, it will store them in a graph database, and then it will traverse the graph to build a menu for the week for me. If there are recipes that call for uncommon ingredients that are somewhat expensive that won’t be fully used, it should try to find other recipes that also use that ingredient, and prioritize surfacing those recipes.

I wanted to write up my process, as I had a lot of difficulty just getting the basics off the ground, especially with how many moving parts there are. Hopefully, if I decide to build another app with a similar stack, it’ll be a little quicker with this guide.

This blog post is going to take us through the very basics of building a Docker image that runs Tomcat. The next post will cover getting that image up to the cloud. From there, future posts will cover using AngularJS as a frontend, and adding in the Titan graph.

Here are the technologies that I’m going to use to build this app:

  • Docker
  • Tomcat 8
  • Gradle
  • Google Container Engine (with Kubernetes)
  • Titan DB
  • AngularJS

As a prerequisite to this, you’ll need to have Docker set up locally. I haven’t written a thing on this, so you’re on your own here. It’s not that hard; the installer does most of the work for you. You’ll also need to have the Google Cloud tools set up.

Create the project

You’ll need to start with a basic directory structure and a handful of files. You can find the full source for this blog post here. I’ll be using the following structure:
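
Roughly like this, reconstructed from the files discussed below:

container-tomcat-graph/
├── build.gradle
├── buildrun.sh
├── Dockerfile
├── tomcat-users.xml
└── src/
    └── main/
        └── webapp/
            ├── index.jsp
            └── WEB-INF/
                └── web.xml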

Tomcat

The very first thing that we’ll do is give a rough outline of our Tomcat app in build.gradle:
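
A minimal sketch of what that looks like, assuming the bmuschko gradle-tomcat-plugin (the post doesn’t name the plugin, and the versions here are illustrative):

buildscript {
    repositories {
        jcenter()
    }
    dependencies {
        classpath 'com.bmuschko:gradle-tomcat-plugin:2.2.4'
    }
}

apply plugin: 'java'
apply plugin: 'war'
apply plugin: 'com.bmuschko.tomcat'

version = '0.1'

repositories {
    jcenter()
}

dependencies {
    // Embedded Tomcat, used by the plugin for local runs
    def tomcatVersion = '8.0.32'
    tomcat "org.apache.tomcat.embed:tomcat-embed-core:${tomcatVersion}",
           "org.apache.tomcat.embed:tomcat-embed-logging-juli:${tomcatVersion}",
           "org.apache.tomcat.embed:tomcat-embed-jasper:${tomcatVersion}"
}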

This uses a Gradle plugin that helps us out with building for Tomcat, as well as grabbing Tomcat dependencies.

Now, let’s define our web.xml file:
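
A minimal sketch:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee
                             http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
         version="3.1">

    <!-- Serve index.jsp as the default page -->
    <welcome-file-list>
        <welcome-file>index.jsp</welcome-file>
    </welcome-file-list>
</web-app>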

All we’re really doing with that is defining the jsp that we’ll use as a place-holder. Speaking of, here’s the index.jsp:
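
Along these lines:

<%@ page contentType="text/html;charset=UTF-8" %>
<html>
<head>
    <title>Hello</title>
</head>
<body>
    <h2>Hello World!</h2>
</body>
</html>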

Just a simple ‘hello world’, so that we can see that things are working.

Docker

Now that that’s done, we need to be able to run that in some sort of environment. I grabbed a Dockerfile from jeanblanchard/docker-tomcat, and modified it to copy my app in.
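
A sketch of the modified Dockerfile (the base image and Tomcat version are assumptions; the war path matches what the Gradle build above produces):

FROM java:8

ENV CATALINA_HOME /opt/tomcat
ENV PATH $CATALINA_HOME/bin:$PATH

# Download and unpack Tomcat (version is an assumption)
RUN curl -fsSL https://archive.apache.org/dist/tomcat/tomcat-8/v8.0.32/bin/apache-tomcat-8.0.32.tar.gz \
        | tar -xz -C /opt \
    && mv /opt/apache-tomcat-8.0.32 $CATALINA_HOME

# Tomcat users (see tomcat-users.xml below)
COPY tomcat-users.xml $CATALINA_HOME/conf/tomcat-users.xml

# Copy our app into the webapps directory
COPY build/libs/container-tomcat-graph-0.1.war $CATALINA_HOME/webapps/

EXPOSE 8080
CMD ["catalina.sh", "run"]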

As you can see, I’m building an entirely new Tomcat image every time I build this. I will probably refactor this into two separate Docker images: one for generic Tomcat stuff, and one that depends on the generic image and pulls my war in.

You’ll note that if you try to build now, it’s going to fail; that’s because you need a tomcat-users.xml file. Here is the one from the jeanblanchard/docker-tomcat repository:
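
It looks something like this (as part 2 will point out, these placeholder credentials are a problem):

<?xml version="1.0" encoding="UTF-8"?>
<tomcat-users>
    <role rolename="manager-gui"/>
    <role rolename="admin-gui"/>
    <!-- Placeholder credentials; see part 2 for why shipping these is a bad idea -->
    <user username="admin" password="admin" roles="manager-gui,admin-gui"/>
</tomcat-users>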

Running Locally

Now that you’ve got that done, I like to create a script to build and run things locally, just to test it all out. Here’s my buildrun.sh script:
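
A sketch (the image name is my own):

#!/bin/bash

# Build the war, then build the Docker image and run it locally
./gradlew war \
    && docker build -t container-tomcat-graph . \
    && docker run --rm -p 8080:8080 container-tomcat-graph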

Try running that with:

./buildrun.sh

Once again, you’ll need to have your environment set up for the above command to work correctly. If it does run correctly, you should be able to visit the app in your browser, at an address that looks something like this: http://192.168.99.100:8080/container-tomcat-graph-0.1/index.jsp (the app name might be different if you used something different from what I did).

To be continued

Check out Part 2 (coming soon) for getting your Docker container deployed.

06 Mar 2016, 10:30

Container Tomcat Graph - part 2

Note: I wrote this post 3 years ago, and somehow forgot to publish it. It may contain incomplete sentences, and may be complete nonsense, I am not going to re-read it. Proceed at your own risk.

Last time, we covered creating a new Tomcat project, and running it locally with Docker. This blog post is going to take us through deploying our docker image to Google Cloud’s Container Engine. In future posts, I’ll cover using AngularJS as a frontend, and adding in the Titan graph.

Here are the technologies that I’m going to use to build this app:

  • Docker
  • Tomcat 8
  • Gradle
  • Google Container Engine (with Kubernetes)
  • Titan DB
  • AngularJS

As a prerequisite to this, you’ll need to have Docker and the Google Cloud tools set up locally, and to have run through part 1 of this series.

Google Container Engine

Now that we’ve got something building, we want to push it off to Google and run it in the Google Cloud. To do that, we will use Container Engine, a product that allows you to run Docker containers on Google Cloud Platform. There’s another tool that we’ll need, called Kubernetes, in order to get this done. Check out the Getting Started docs to enable the API, and install the necessary tools.

Creating a Cluster

When you click the button on the ‘Getting Started’ page, it will prompt you to define a cluster. I’d suggest specifying a size of 1, and a smaller instance type. We can always go back later and change the definition if we need to.

Once you’ve built your cluster in the Cloud Console, you’re going to want to set that as your default cluster in the command line tool (where NAME is the name of the cluster you created):

$ gcloud config set container/cluster NAME

Then authenticate yourself:

$ gcloud container clusters get-credentials NAME

Google Container Registry

Great, now we’ve got a cluster, but how do we get our Docker image to it? Kubernetes pulls images from a registry, so we need to get our image into one. However, we don’t want to push it to the public Docker registry, because this is our app, not a general-purpose image meant to be reused by other people. There’s a solution for this, called Google Container Registry, which is a private Docker registry that you can use with Kubernetes. In your Container Engine console, there’s a section for ‘Container Registry’.

Check out the Google Container Registry docs, where it talks about endpoints. There is a default endpoint, plus some region-specific ones; for this post, I’m going to use the us.gcr.io endpoint, since I want my image hosted in the US. Here is the list of endpoints:

  • us.gcr.io - United States
  • eu.gcr.io - European Union
  • asia.gcr.io - Asia
  • gcr.io - default, try to pick one of the others
  • b.gcr.io - existing Google Storage buckets

Secure Tomcat

Before I get to uploading our container, I need to revisit something from the previous post: the tomcat-users.xml file.

See if you can spot the huge security flaw in that file. Hint: take a look at the user section. Obviously, setting both the username and password to ‘admin’ is very bad, and will result in your container being compromised quite quickly (as mine was). I suggest generating random values for both the name and the password. A better solution would be to disable this altogether; one way to do that is to rename the war file that you upload to ROOT.war.

That said, don’t take my advice as the final word; go research a bit on how to secure Tomcat. Either that, or don’t run the service publicly.

If your service gets compromised, you’ll get a friendly note from Google Cloud Compliance, informing you that your project will be shut off if you don’t fix it. (The tag in GitHub for this post’s sample code is not terribly secure; you may want to modify it yourself.)

Upload Container

Here’s the script that I’m using to tag and push containers:
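
A sketch of the script (PROJECT_ID and the names are placeholders, and the kubectl commands assume the tooling of the time, where kubectl run created a replication controller):

#!/bin/bash

# Build the war and the local Docker image
./gradlew war
docker build -t container-tomcat-graph .

# Tag the image for the US registry endpoint, and push it
docker tag container-tomcat-graph us.gcr.io/PROJECT_ID/container-tomcat-graph
gcloud docker push us.gcr.io/PROJECT_ID/container-tomcat-graph

# Create a pod (plus replication controller) running the image.
# The name must be alphanumeric and no longer than 24 characters.
kubectl run tomcat-graph --image=us.gcr.io/PROJECT_ID/container-tomcat-graph --port=8080

# Put the pod behind a public load balancer
kubectl expose rc tomcat-graph --type=LoadBalancer --port=8080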

There’s a lot going on in this script. I would suggest running it line by line manually, to make sure that everything works correctly.

The first couple of lines should be familiar; we’re basically just building the project, then building a Docker image. Next, we create a tag and endpoint with the Google Cloud tool, which allows us to push the built container image to Google Cloud’s private Container Registry.

It may take a couple of minutes for each of the next commands to complete. Run the Kubernetes tool to create a pod, referencing the image that we just uploaded, and giving it a name. Note that you are limited to a 24-character, alphanumeric name. It may allow you to create a pod and replication controller with an invalid name, but that will cause problems later. Finally, we can add the pod to a load balancer to get public traffic pointing to it.

Testing it out

Here’s what it looks like when you run ‘describe’ shortly after creating the service.

Note that it says “Creating load balancer”, then “Created load balancer”. This threw me initially, but it looks like it logs recent events, as opposed to giving the current status. The important bit is the LoadBalancer Ingress IP, which you can hit with your web browser, using the port you defined in Docker, and exposed in Kubernetes, and the path that you used when you ran this locally.

[Image: a working request in the browser]

It works!

Monitoring resources

[Image: the KubeUI dashboard]

Guess what! Kubernetes for Google Cloud ships with this great set of monitoring utilities built in! This is really helpful for showing you what resources are being used, and what exactly is running on the box. It’s not perfect, and there are still some holes in the information it gives you, but it shows you a lot that isn’t in the Google Cloud Console, and you don’t need to do anything yourself to set it up.

(Hint: you may need to refresh the KubeUI page to get it to load after logging in.)

Conclusion

This was a very simple ‘Hello World’ example, but it does demonstrate how to build a Docker container image, deploy it to Google Cloud, build a Kubernetes cluster, deploy the Docker image to the cluster, and then access it. Wow, looking back, we got a lot done!

Next time, we’ll take a look at defining a simple Graph Database, and adding it to our Tomcat application. Keep an eye out for part 3 in this series!

06 Mar 2016, 10:30

Container Tomcat Graph - part 3

Note: I wrote this post 3 years ago, and somehow forgot to publish it. It may contain incomplete sentences, and may be complete nonsense, I am not going to re-read it. Proceed at your own risk.

So far in this series, we have created a new Tomcat app, added it to a Docker container, and run that container in Google Cloud with Kubernetes. This blog post is going to cover adding in a Titan graph as a data store. We’ll wrap up with using AngularJS as a frontend in the next post.

Here are the technologies that I’m going to use to build this app:

  • Docker
  • Tomcat 8
  • Gradle
  • Google Container Engine (with Kubernetes)
  • Titan DB
  • AngularJS

As a prerequisite to this, you’ll need to have Docker set up locally, and to have run through part 1 and part 2 of this series.

The Graph

We’re looking at building a graph today, so we should probably first talk a little about what the graph will be modeling. I figured that it would be best to keep it simple.

[Image: the friend graph used in this post]

As you can see, there are a couple of parts here to pay attention to. In the above image, the circles represent nodes in the graph, also called vertices. The arrows represent edges, which in this graph are directional.

We also have properties on both the vertices (nodes) and the edges. The vertices hold a first and last name, and each edge carries two pieces of information: its label, which is ‘friend’ for all of them, and a property that describes how the two people know each other.

Now, one thing to note right away is that it’s a bit strange in this case to have a directional edge, since friendship usually runs both ways; the reason for that choice comes up in the data model section below.

Titan DB

For this post, we’re going to use Titan DB as our Graph Database. Titan is an open-source graph database, and implements the Apache TinkerPop 3 stack. Most of what you’re going to care about is a part of TinkerPop called Gremlin. Gremlin is the domain-specific language (DSL) that you’ll be using to traverse the graph. There’s a very helpful Getting Started guide on the TinkerPop site.

I know that this is a fairly poor explanation so far, but honestly, your best bet is to go research this a bit, because it is kind of complicated. Once you wrap your head around the model, things will start to make sense.

There’s only a little bit of Titan-specific code that we’ll be using to get the ball rolling. From there, it’ll be pretty much just Gremlin GraphTraversals.

The first thing that we’ll do is add Titan to our build.gradle file.
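
Something like the following in the dependencies block (the version is an assumption):

dependencies {
    // titan-core is enough for the in-memory backend
    compile 'com.thinkaurelius.titan:titan-core:1.0.0'
}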

init Titan

Next up, we need to initialize the graph. We’re going to run this one in-memory within our application. That may not be how you’d want to run it in production, but it is the quickest way for us to get something up and running to poke around with.
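
A minimal sketch, using Titan’s in-memory storage backend (the holder class is my own wrapper):

import com.thinkaurelius.titan.core.TitanFactory;
import com.thinkaurelius.titan.core.TitanGraph;

public class GraphHolder {

    private static TitanGraph graph;

    // Lazily open a single in-memory graph for the whole application
    public static synchronized TitanGraph getGraph() {
        if (graph == null) {
            graph = TitanFactory.build()
                    .set("storage.backend", "inmemory")
                    .open();
        }
        return graph;
    }
}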

The above is pretty much taken straight from the Titan docs, and molded from several open source examples.

Nodes and Edges

Now that we’ve got that, let’s talk a little about our data model. If you take a look at the image at the top of the post, that’s what I was thinking: people and relationships. People have names, and know other people, along with how they know each other. In this sample, we’re going to use directional edges, and only one edge between two nodes. Titan is flexible on this, but one of my queries would go into a recursive loop if we were to have bi-directional edges.

The Person class is going to represent the nodes in our graph. We’ve got some basic properties, and want to be able to build the object from a vertex (Titan’s name for nodes).
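
A sketch of the class; the fields line up with the JSON response shown at the end of the post, and gson will serialize them directly:

import java.util.ArrayList;
import java.util.List;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Vertex;

public class Person {

    private final String fullName;
    private final String firstName;
    private final String lastName;
    private final List<Person> friends = new ArrayList<>();
    private String knowsFrom; // set only when this person appears in someone's friend list

    public Person(GraphTraversalSource g, Vertex vertex) {
        this.firstName = vertex.value("firstName");
        this.lastName = vertex.value("lastName");
        this.fullName = lastName + "," + firstName;
        buildFriends(g, vertex);
    }

    // Traverse this person's outgoing 'friend' edges, building a Person for
    // each neighbor and capturing the edge's knowsFrom value. Edges are
    // directional, so this recursion can't loop back on itself.
    private void buildFriends(GraphTraversalSource g, Vertex vertex) {
        g.V(vertex).outE("friend").forEachRemaining(edge -> {
            Person friend = new Person(g, edge.inVertex());
            friend.knowsFrom = edge.value("knowsFrom");
            friends.add(friend);
        });
    }
}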

There’s an important method call in the Person constructor, which is buildFriends(). That method does a traversal to find all of the other people that this person knows, and also captures the knowsFrom information stored on those edges. If you’re interested in breaking down this traversal, I would suggest taking a look at the Gremlin docs.

The Friend class is going to represent the edges in our graph. Here we have a basic traversal, and a method to create edges.
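
A sketch (the create method is the important bit; the traversal helper mirrors what Person does):

import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversal;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Edge;
import org.apache.tinkerpop.gremlin.structure.Vertex;

public class Friend {

    public static final String LABEL = "friend";

    // A basic traversal: the outgoing 'friend' edges for a person's vertex
    public static GraphTraversal<Vertex, Edge> friendsOf(GraphTraversalSource g, Vertex person) {
        return g.V(person).outE(LABEL);
    }

    // Create a directed 'friend' edge, recording how the two people know
    // each other as an edge property
    public static Edge create(Vertex from, Vertex to, String knowsFrom) {
        return from.addEdge(LABEL, to, "knowsFrom", knowsFrom);
    }
}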

Servlet

All right, we’ve got our graph initialized, and have thought a little bit about data models. Now we want to query for things. The properties I’ve included should allow us to run some basic queries and find relationships.

We’re building on Tomcat, so I think it makes sense to use a Servlet as our interface for making queries, and returning results.
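
A sketch of the servlet, assuming the GraphHolder, Person, and Friend classes sketched above (only the name query is shown, and the seed data mirrors the example response below):

import java.io.IOException;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.google.gson.Gson;
import com.thinkaurelius.titan.core.TitanGraph;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Vertex;

@WebServlet("/")
public class GraphServlet extends HttpServlet {

    @Override
    public void init() throws ServletException {
        // Seed the in-memory graph; nothing survives a server restart
        TitanGraph graph = GraphHolder.getGraph();
        Vertex john  = graph.addVertex("firstName", "John",  "lastName", "Thompson");
        Vertex jane  = graph.addVertex("firstName", "Jane",  "lastName", "Thompson");
        Vertex jim   = graph.addVertex("firstName", "Jim",   "lastName", "Beam");
        Vertex jenny = graph.addVertex("firstName", "Jenny", "lastName", "Lopez");
        Friend.create(john, jim, "math_class");
        Friend.create(john, jenny, "band");
        Friend.create(john, jane, "sibling");
        Friend.create(jane, jenny, "gym_class");
        graph.tx().commit();
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Expects a full name: ?name=Last,First
        String[] name = req.getParameter("name").split(",");
        GraphTraversalSource g = GraphHolder.getGraph().traversal();

        List<Person> people = new ArrayList<>();
        g.V().has("lastName", name[0]).has("firstName", name[1])
                .forEachRemaining(v -> people.add(new Person(g, v)));

        resp.setContentType("application/json");
        resp.getWriter().write(new Gson().toJson(Collections.singletonMap("person", people)));
    }
}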

There are two main parts to this servlet: init() and doGet(). The init method initializes our graph with some nodes (vertices) and edges. Since we are running in-memory, none of this is persisted between server runs.

The doGet method is our typical Tomcat servlet method, and it is what we are using to accept requests and build responses. As you can see, we accept queries on relationships and names. The supported names are full names.

We can issue a request like this:

GET http://example.com:8080/?name=Thompson,John

And you should receive a response that looks like:

{"person":[{"fullName":"Thompson,John","firstName":"John","lastName":"Thompson","friends":[{"fullName":"Beam,Jim","firstName":"Jim","lastName":"Beam","friends":[],"knowsFrom":"math_class"},{"fullName":"Lopez,Jenny","firstName":"Jenny","lastName":"Lopez","friends":[],"knowsFrom":"band"},{"fullName":"Thompson,Jane","firstName":"Jane","lastName":"Thompson","friends":[{"fullName":"Lopez,Jenny","firstName":"Jenny","lastName":"Lopez","friends":[],"knowsFrom":"gym_class"}],"knowsFrom":"sibling"}]}]}

Or, formatted nicely:
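
{
  "person": [
    {
      "fullName": "Thompson,John",
      "firstName": "John",
      "lastName": "Thompson",
      "friends": [
        {
          "fullName": "Beam,Jim",
          "firstName": "Jim",
          "lastName": "Beam",
          "friends": [],
          "knowsFrom": "math_class"
        },
        {
          "fullName": "Lopez,Jenny",
          "firstName": "Jenny",
          "lastName": "Lopez",
          "friends": [],
          "knowsFrom": "band"
        },
        {
          "fullName": "Thompson,Jane",
          "firstName": "Jane",
          "lastName": "Thompson",
          "friends": [
            {
              "fullName": "Lopez,Jenny",
              "firstName": "Jenny",
              "lastName": "Lopez",
              "friends": [],
              "knowsFrom": "gym_class"
            }
          ],
          "knowsFrom": "sibling"
        }
      ]
    }
  ]
}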

As a note, you’ll also want to add gson to the build.gradle; again, you can check out the full source of the project for that.
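
For reference, the dependency looks something like this (the version is an assumption):

dependencies {
    compile 'com.google.code.gson:gson:2.6.2'
}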

[Image: graph request and response in the browser]

Conclusion

We went super fast through this material, but I’m hoping that you were able to get something out of it.

03 Mar 2016, 09:08

MOD-t Tips


I’m writing a couple of notes to my future self about this printer.

This morning, I was trying to swap out the pink filament with new white filament that I picked up on Amazon. The filament was completely stuck; I couldn’t unload or load or do much of anything with it. I found a great GitHub repo for MOD-t utilities, and tried running the clean nozzle tool. (There’s a config file for Slic3r, a popular modeling tool for 3D printing, some calibration files, old firmware, and a clean nozzle utility.) That bought me a bit of pink filament dripping out of the nozzle. However, after some additional searching, I found this Google Group for the MOD-t, and on it, a really helpful post entitled “Clearing Filament Jam Without Opening MOD-t”. Basically, it said that the jam may be living above the nozzle, and if that’s the case, you can remove the hot end (nozzle), heat up the nut (with a soldering iron or something), and pull the jammed filament out.

I used tweezers to hold the filament inside the nut, and needle nose pliers to hold the tweezers. Then I heated up both the tweezers and the nut, and was able to pull the jammed bit of filament out. After that, I replaced the hot end, loaded the new filament, and was back in business.

Future self - if this happens again, try the above solution!

01 Mar 2016, 08:30

New Matter MOD-t Review - a 3D printer

[Image: 3D printed watch stand]

Back in the beginning of November, I received a giant box on my doorstep. It was a 3D printer that I had backed on Indiegogo a year and a half earlier. The printer is a New Matter MOD-t. They were a bit late shipping, but that was not exactly unexpected; generally, I’m happy when crowdfunded projects ship at all. Either way, it was here, arriving just in time for me to be super busy with other things. We were on our way out to do something, so all I had time to do was pull it out of the box and stick it on a shelf.

[Image: 3D printer unboxing collage]

This past weekend, I finally got around to actually setting the thing up and running it. While the setup was rocky, and the desktop app poor, the overall experience was actually better than expected. I’m very excited to see where New Matter takes this product.

The setup was not easy; the software tools that they provide for Mac don’t work well. I was able to get the firmware updated on the machine, but I spent about two hours trying to get it to connect to WiFi. There is a bug in their installer and desktop app that shows an error and a ‘disconnected’ status, even when the WiFi is connected properly. It took me a while to realize that it had actually been connected the whole time. One problem with this is that it makes it seem impossible to complete the setup, including calibrating the printer and loading the filament. You can install the desktop app without completing the setup, so I did that.

After getting the desktop app installed, I tried getting on WiFi again (still not realizing that I was already connected), and ran into the same bug described above. I decided to skip that, and try to get the test print going. The desktop app has a button called ‘Load Filament’; I tried that, and it asked me whether I wanted to unload or load the filament. I needed to load filament, and it gave instructions for first unloading filament, then a button to press for loading filament. I pressed the button and nothing happened. It took me quite some time to figure out that you need to restart the printer while on that screen in the app before the button becomes active. (Restarting the printer is part of the unload filament process.) Having figured that out, I was able to get the filament loaded and the test print going.

[Image: 3D printer collage]

This is about the time that I figured out that the printer was really connected, and showing up in the New Matter web app. Excellent! I loaded up some STL files from Thingiverse into my New Matter library, and sent them to the printer from the web. I was able to disconnect the printer from my MacBook, and let it run on its own. From here, I was able to basically get what I needed done with little to no issue.

For me, this is where the MOD-t really shines, and I think that New Matter has done a brilliant job. There’s no figuring out printer settings, and no deep dive into how FDM 3D printers work; you just go to the website, hit ‘print’, then press a button on the printer. Simple. The problems that I experienced were all, 100%, on the desktop app side, and those are easily fixable with updates.

There were two little hiccups. First, the top part of the watch stand that I was printing kept failing. I needed to edit the STL file to fix it, but that wasn’t really New Matter’s fault; the file was set up to print with only a single edge on the print bed. I grabbed the Meshmixer app, rotated the part, re-uploaded it, and it printed just fine. The other issue is that the New Matter web app doesn’t seem to handle printing multiple parts too well (or at all). You can add multiple parts to a thing in your library, but it will only print one of them, without giving you a way to select which one. The workaround is just to upload each part separately, so that’s not really an issue.

All in all, I’m very excited about doing more with this thing. I’ve already started printing a case for the PiGrrl 2 project that I’m working on; I’m just waiting for some white filament to arrive for certain parts. The watch stand that I printed is great, and I saved myself the $15 that they go for on Amazon. If you’re in the market for a 3D printer, and want something simple and relatively inexpensive, this is a good choice.

18 Jan 2016, 09:56

Blogging tools - creating Hugo blog posts from Android

I am starting this post on my phone. I realized that one of the roadblocks I have is that the workflow for creating new blog posts from my phone was nonexistent. I had a git client, and a Markdown editor, but Hugo uses special headers, and I also needed to be able to publish on my server.
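
For reference, a Hugo header (front matter) looks something like this (the date and fields here are illustrative):

+++
date = "2016-01-18T09:56:00-05:00"
title = "Blogging tools - creating Hugo blog posts from Android"
+++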

[Image: writing tools on Android]

Here’s what I did:

  1. Built an Android app to generate the header, and add it to my clipboard. Source
  2. Added a script to my server for easy publishing (a sketch follows this list).
  3. Set up JuiceSSH, so that I can log into my server from my phone.
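
The publish script on the server is just a few lines; a sketch, with paths as assumptions:

#!/bin/bash

# Pull the latest posts, rebuild the site, and copy the output to the web root
cd ~/blog || exit 1
git pull
hugo
cp -r public/* /var/www/html/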

[Image: JotterPad preview]

[Image: new header generated by the app]

With those couple things, I can now do the end-to-end process of creating and publishing blog posts from my phone. Here’s how that goes:

  1. In git client, pull from remote
  2. Create new file in repo
  3. Open file in JotterPad
  4. Run my HugoNew app
  5. Paste header into new post file
  6. Save file, and push to git repo
  7. Log into my server and run publish script

It still needs work (it is a lot of steps, and there is no image support yet), but it is a start.

Note - I added the images from my laptop. Here’s the source code for my Android app.

21 Sep 2015, 07:53

Running NGINX in Docker without caching


Lately, I’ve been working on a little web utility that I decided to write in AngularJS. It started out as a couple of JSP pages, and I quickly realized why things like Angular exist. Anyways, if you don’t run AngularJS from a server, it’ll complain when it tries to load internal bits with XHR. Obviously, you don’t want that. The solution is to fire up a server.

What I really wanted out of a server was something incredibly dumb, something that didn’t require me to restart it whenever I updated my frontend code. So, while I could’ve packaged the frontend with my Tomcat app, and run it in my Tomcat Docker image, that would’ve meant lots of killing and starting the Docker image, just for quick little frontend modifications. I figured that I should be able to run a traditional web server on Docker, and found that both Apache and NGINX were available. There was some reason that I didn’t choose Apache, but I don’t recall what it was, so I went with NGINX.

The quickest thing to do is to pull the hello world example from Docker Hub and remap the volume that hosts the HTML content. This actually works fine until you start trying to iterate quickly. What I found was that the default configuration for NGINX (as used by all of the Docker images that I tried) enables caching. So, even though it’s serving the right directory, it may be serving a cached version of your content.

UPDATE: It turns out that the root cause is actually not a caching issue, but a bug involving the kernel utility sendfile and VirtualBox. See the NGINX docs and the VirtualBox bug report. Thanks to u/justaphpguy for pointing this out.

At first, I tried logging into a running container, modifying the config, and using that. It worked, but it wasn’t a great process. Instead, I decided to build my own image, and pull in my own config file.

nginx.conf

Here’s the default nginx.conf file with a couple of minor modifications: two lines have been changed to disable caching, and the autoindex feature has been turned on (so that you don’t need to type in /index.html in the browser).
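
A sketch of the modified file, based on the stock nginx.conf (the two changes are commented):

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    access_log  /var/log/nginx/access.log;

    # Change 1: don't use sendfile, so stale content isn't served
    # (see the VirtualBox note above)
    sendfile        off;

    # Change 2: generate a directory listing when no index file is given
    autoindex       on;

    keepalive_timeout  65;

    include /etc/nginx/conf.d/*.conf;
}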

Dockerfile

The Dockerfile is really simple: it just takes the stock nginx image, and pulls in our new config file.
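
Something like:

FROM nginx

# Overwrite the stock config with our no-caching version
COPY nginx.conf /etc/nginx/nginx.conf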

Building and Running

Here’s a script I wrote to build and run my image. I tend to treat my Docker images as ephemeral, so I don’t really care if they get rebuilt frequently and overwrite some previous version.
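
A sketch (image and container names are my own):

#!/bin/bash

# Usage: ./build_nginx_docker.sh /Users/you/webstuffs
CONTENT_DIR=$1

# Rebuild the image every time; it's cheap, and these images are ephemeral anyway
docker build -t nginx-nocache .

# Replace any previous container, then serve the given directory
docker rm -f nginx-nocache-run 2>/dev/null
docker run --name nginx-nocache-run -d \
    -p 8080:80 \
    -v "$CONTENT_DIR":/usr/share/nginx/html \
    nginx-nocache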

As a note, in the run command, you’ll notice that we map a path that you specify as the volume to serve content from. On a Mac, this must be somewhere under /Users, and won’t work elsewhere.

Conclusion

If you drop all three of those files into one directory, and run something like the following, you should be good to go:

$ ./build_nginx_docker.sh /Users/me/webstuffs

20 Sep 2015, 20:50

Docker Cleanup Commands

Using Docker to create and run containers can make a bit of a mess, depending on how you use it. After a couple of weeks of using Docker for development, I figured that it would be good to learn how to clean up my unused images and containers.

Images

To remove an image called ‘node’

docker rmi node

To remove all untagged images

docker rmi $(docker images -a | grep "^<none>" | awk '{print $3}')

To remove all dangling images (another way to clean up untagged images)

docker rmi $(docker images -qf "dangling=true")

Containers

To list all containers, including stopped ones

docker ps -a

To remove all stopped containers (running containers will refuse and be skipped)

docker rm $(docker ps -aq)

To kill and remove all containers

docker rm $(docker kill $(docker ps -aq))


15 Sep 2015, 09:08

MySQL in Docker with Java Hibernate


Recently, I started working on a new server project at work, and wanted to be able to run a local dev environment with Docker. This has become my normal flow for a couple of server projects because of how easy Docker is to work with, and especially because I don’t need to set up any of the supporting structure on my machine to run the server. I really dislike needing to install things like Tomcat, Apache, or a MySQL server locally on my machine for development. Every time one of those things needs to be installed, I know that it’s one more thing on my machine that I need to maintain, and one more thing that could break and cause me to dump hours into fixing it. With Docker, I don’t need to care about that; I can fire something off with a repeatable, programmatic configuration that can be ephemeral, and disappear when I’m done with it.

My work project has two parts: the Java app, which uses JPA/Hibernate as an ORM layer, and the MySQL database that it talks to. While the Java portion was fairly straightforward, the MySQL part was not.

Outline

  • Source code
  • Basic SQL structure
  • Java code
  • Java Docker scripts
  • MySQL Docker scripts
  • Tying it together

SQL Structure

Here we have a very basic relational structure in MySQL: Users, Tags, and UserTags to tie the two together. If you’re familiar with SQL, this should look familiar.
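
A sketch of that structure (the column definitions are assumptions):

CREATE TABLE Users (
    id   INT          NOT NULL AUTO_INCREMENT,
    name VARCHAR(255) NOT NULL,
    PRIMARY KEY (id)
);

CREATE TABLE Tags (
    id   INT          NOT NULL AUTO_INCREMENT,
    name VARCHAR(255) NOT NULL,
    PRIMARY KEY (id)
);

-- Join table tying users to tags
CREATE TABLE UserTags (
    user_id INT NOT NULL,
    tag_id  INT NOT NULL,
    PRIMARY KEY (user_id, tag_id),
    FOREIGN KEY (user_id) REFERENCES Users(id),
    FOREIGN KEY (tag_id)  REFERENCES Tags(id)
);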

Java code

I have a bit more Java code that I could share here, but this is probably the most important bit: setting up the connection. If you haven’t set things up correctly, establishing the connection will be the first thing to fail. If you’re interested in seeing the actual JPA/Hibernate entities, check out the source code.
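
A sketch of that setup (the persistence unit name, database name, and credentials are placeholders; ‘mysql’ is the hostname that the container link in the run script below provides):

import java.util.HashMap;
import java.util.Map;
import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.Persistence;

public class Database {

    // Build an EntityManager against the Dockerized MySQL instance
    public static EntityManager connect() {
        Map<String, String> props = new HashMap<>();
        props.put("javax.persistence.jdbc.driver", "com.mysql.jdbc.Driver");
        props.put("javax.persistence.jdbc.url", "jdbc:mysql://mysql:3306/testdb");
        props.put("javax.persistence.jdbc.user", "test");
        props.put("javax.persistence.jdbc.password", "test");

        // "app" must match a persistence-unit defined in persistence.xml
        EntityManagerFactory emf = Persistence.createEntityManagerFactory("app", props);
        return emf.createEntityManager();
    }
}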

Java Docker scripts

The Java Dockerfile is dead simple; basically, we’re just copying a jar into a stock java8 Docker image, and running that jar.
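
Something like this (the jar name is a placeholder):

FROM java:8

# Copy in the built jar and run it
COPY build/libs/app.jar /app.jar
CMD ["java", "-jar", "/app.jar"]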

MySQL Docker scripts

Next up is the MySQL Dockerfile. It’s not terrible, but there are a couple of scripts, and something a little non-obvious going on.
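
A sketch (the /entrypoint.sh path matches the official mysql image of the time; the file names are my own):

FROM mysql:5.7

# Swap in the modified entrypoint discussed below
COPY entrypoint.sh /entrypoint.sh

# Schema setup script, run on first boot by the modified entrypoint
COPY setup.sql /setup.sql
ENV SETUP_SQL /setup.sql

ENV MYSQL_ROOT_PASSWORD root

CMD ["mysqld"]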

When I started poking around with MySQL and Docker, I wanted to use the official MySQL base image, as opposed to rolling my own. While I did find a good example run command in the CoreOS documentation, I didn’t find one that used a Dockerfile and a SQL script to set up the database. So, I had to start digging.

One of the first things that I learned was that the mysqld command in the CMD line does not run the system daemon directly. Instead, it is run through the entrypoint wrapper. If you want to do custom things to MySQL during its first run, or at startup, then you’ll want to modify the MySQL entrypoint script. The changes that I made are shown below, with comments inline. Basically, I wanted to add a parameter for my setup SQL script.
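
Rather than reproduce the whole stock script, here is a sketch of just the two additions (excerpted; "${mysql[@]}" is the client invocation that the stock script builds up):

# ... inside the first-run initialization section of the stock script:
if [ -n "$SETUP_SQL" ] && [ -f "$SETUP_SQL" ]; then
    echo "Initializing database from $SETUP_SQL"
    "${mysql[@]}" < "$SETUP_SQL"
fi

# ... and before handing control to mysqld, work around the
# session_variables error described below:
echo "show_compatibility_56 = ON" >> /etc/mysql/my.cnf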

You’ll see in the first addition where I pull in the initialization from the SQL script I passed in. You will also note the second addition, turning on show_compatibility_56 in /etc/mysql/my.cnf. This was the solution to a problem I ran into, where whenever I tried to connect from my Java app as a non-root user, I was given an error like the following:

SELECT command denied to user 'test'@'host' for table 'session_variables'

I ran across the initial solution on StackOverflow, and was able to implement it in this entrypoint script.

Tying it together

Below is my buildrun.sh script, which I use as a one-liner for setting everything up and running it. For the purposes of this post it makes some sense; for my actual implementation, though, I split the Java app from the SQL instance, so that I can iterate on my Java code without needing to constantly set up and tear down a MySQL instance. It’s also set up such that both instances are ephemeral, since I get annoyed when Docker leaves around dozens of 300MB files on my relatively constrained SSD.
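
A sketch of the script (image names are mine, and the named-volume commands assume a Docker version recent enough to have them):

#!/bin/bash

# Build the jar and both images
./gradlew jar
docker build -t myapp-java -f Dockerfile.java .
docker build -t myapp-mysql -f Dockerfile.mysql .

# Create a shared, named volume for the MySQL data directory
docker volume create --name mysql-data

# Run MySQL with its data directory on the shared volume
docker rm -f mysql 2>/dev/null
docker run -d --name mysql -v mysql-data:/var/lib/mysql myapp-mysql

# Run the Java app, linked to the MySQL container as host 'mysql'
docker run --rm --link mysql:mysql myapp-java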

The other issue that is common with running MySQL in Docker is data persistence. Unless you are keeping around the instances that you run, you’re going to lose data between runs if you don’t do something to handle that. Data persistence is obviously an important aspect of a database. There are two options for keeping your data persistent:

  1. Passing in a volume from the host
  2. Creating a shared Docker volume

On my Mac, I had trouble with using a host volume for MySQL; I kept getting permissions errors, and it would fail to run after that. So, I opted for creating the shared Docker volume. To me, using a host volume would still be preferable, since I’d really like to be able to commit the data directory to my repository, so that I can share it with the code, but oh well. (Yes, I know that sounds strange and bad, but in my particular case, it does make some sense.)

The shared Docker volume is created partway through the script, and is explained in the Docker documentation. In the MySQL run command, we mount that volume as the data directory.

Conclusion

This was a bit more involved than I expected, but I think that it was worth spending the time to get it up and running. The workflow is much nicer than needing to do this stuff natively, or constantly deploying to a remote host. When I came into the office on Monday morning, after getting this going over the weekend, the Docker portions worked without issue, and saved me a ton of time. Hopefully this information saves you a bit of time as well.

GitHub repo with source for this whole project.