18 Feb 2019, 20:50

Assembling a watch from parts

I recently started getting into watches. I’ve been interested in them for a long time, but the difference now is that I’ve started focusing on mechanical watches. Something that wormed its way into my brain was the thought of buying a bunch of parts and assembling a watch from scratch.

A couple of months ago, I looked around at watch parts, had no idea what I was looking at, and gave up quickly. Mainly, I didn’t understand how to find a movement, and then match a dial, case, and hands to it. Then, I saw a post on Massdrop that listed a couple of shops to go to for parts, as well as a chronograph movement (Valjoux ETA 7750). I checked out Ofrei, and they had everything that I would need with a base Valjoux 7750 movement.

I built a list of parts:

Total - ~$589 - not too bad, around the same price as the cheapest automatic chronograph I could find. However, the cheapest chronograph with a Valjoux 7750 movement was over $1000 (most were between $1500-4000), so I figure I’m ahead here.

I had some of the required tools, but did need a few others, like a hand press, and some other odds and ends.


I had done a fair amount of research on all the different parts of assembling a watch from parts, and felt that overall I had a pretty good handle on everything. Once I actually started trying to place the chrono hands on the movement, my theory quickly fell apart, and I turned to r/WatchHorology for help. There were a few suggestions that were useful, but this one, by u/hal0eight I thought was the most helpful:

Magnification, HOROTEC hand press set and Rodico will solve your problems.

The trick here is the hands haven’t been fitted before, so they are quite tight. When you put it on the shaft it will want to move around rather than slightly go on.

You’ll need to line it up best you can on the shaft, then put the flat hand press end on top of the hand, move it around so you are applying a little pressure down, but also moving the hand around so it’s reasonably straight on the tip of the shaft.

When you’ve got it, gently push down.

If you need to reseat the hand it should be much easier.


As you can see, I used the Rodico sticky putty to hold the hand onto the hand press, and then was able to simply move the movement into place underneath it. That was a lot easier than trying to place the thing on there with tweezers.

chronos placed

After figuring that out, I was able to move on to place the other small chrono hand and the sub-second hand without issue.

hands placed

Placing the larger hands was much easier, so they went on quickly. At this point, I started playing around with winding the movement to see if the thing worked, and was a bit concerned to find that the sub-second hand was not turning, and nothing on the movement seemed to be moving. I was able to manipulate it a bit to move for a few seconds at a time, but not much beyond that. I’m not sure, but I have a feeling that it was because I had removed the rotor. I think that without the rotor, something is allowed to spin freely that should have resistance on it. Something for me to read up on anyways.

watch in case

I put the movement in the case, and figured out how to cut down the stem to size. This was particularly tricky with a screw-down crown. I ended up making several cuts, and now I think it’s perfect.

on wrist final shot

After that, I put the rotor back on the movement, put the case together, and verified that the watch works correctly and keeps time. The only thing that does not work right now is the minute chrono hand. I’m thinking there may be some residual Rodico in there, so when I’m feeling up to it, I’ll take the thing apart and try to find out. If it’s not that, then I’ll be doing a much deeper dive on the Valjoux 7750 to try to diagnose it (asking reddit for help).

18 Aug 2017, 12:40

My Coffee Roasting Template and Notes


Photo stolen from Red City Roasters, my coffee roasting company!

Here’s a template doc that I use for one of my roast profiles. (This is a lighter roast in terms of flavor profile, tuned to espresso. You may want to go lighter, shorter time, more acidic, for pour-over.)

Times are all relative: I have a timer that I restart at the beginning of each phase of the roast, as outlined in the doc. E.g., if I overshoot one phase, the overall roast time is lengthened. Temperatures are from my thermocouples, which I’ve adjusted based on the understanding that the typical first crack (FC) temp is 392ºF. I use Artisan in addition to this for tracking, but these notes tend to be much more useful for me when dialing in my roasts. If you’re going to make adjustments to timing, I would suggest changing one phase by no more than 30s, and comparing. (+/- 10s in any given phase is a change that you may be able to taste.)

I based my template on the book “Modulating the Flavor Profile of Coffee”, which is short, concise and has lots of great info. I’d strongly recommend giving it a read if you’re into coffee roasting.

08 May 2016, 07:36

Doing My Best


Photo by Ashley St. John

Sixteen months ago, my daughter was born. After a difficult first week, the first month was great, challenging, but overall things were really good. Then the depression crept back in.

I’ve been dealing with depression for almost as long as I can remember. It comes in waves for me: sometimes I’ll be generally feeling good, or ok, for weeks, then, seemingly out of nowhere, the depression knocks me down and sticks around for a week or a few. Life was this cycle of feeling good for a while, getting depressed for a while, then fighting my way out of it and starting the cycle over.

I was generally happy when I would get home from work, but I’d spend the days being miserable, and wanting to quit my job. I don’t have a bad job, and in a lot of ways, it’s a really great job, but this was what was really bothering me at the time.

After Lydia was born, I decided that being depressed was a real hindrance to my ability to be the sort of father that I wanted to be. I wanted to be able to set a good example, and not act irrationally or impulsively because I couldn’t think of an alternative. I wanted to be able to pass on some tools for dealing with life better. So, after years of dealing with this, I made an appointment with a therapist.

I ended up only going to two sessions, because that’s all that I felt like I needed. Those two sessions were so useful, and illuminating, that it has taken me a little over a year to get to a point where I might need to go back for a refresher. The therapist that I met with introduced me to a version of therapy called Cognitive Behavioral Therapy (or CBT). CBT’s model is that our thoughts are what cause our feelings, and if we change the way that we think about things, then we can change the way that we feel. It can be broken down into a few parts: cognitions, goals, and behaviors. (As a note, there’s a great book called “Feeling Good” that goes through all the basics of CBT.)

One of the really basic CBT tools is a chart that helps you to identify specifically the thoughts that are triggering your negative emotions, and then asks you to come up with alternate ways to think about the triggering event or that idea that might be more balanced. This requires paying close attention to how you’re feeling, and working backwards from there, which is tricky initially, but you get the hang of it.

Using this technique, I was able to begin to see what was bothering me so much, and it turned out that if I forced myself to restate what was happening, in a way that might be more balanced and not as distorted, I felt better about the situation. After going through this exercise a number of times, I started to internalize the process and could run through this exercise in my head, faster and faster as things happened. I also was becoming more conscious of how I felt, and why I felt that way.

It has also highlighted a basic thought pattern I had of taking a negative default view of things, which I was then able to shift toward a more positive view. I’ve also started figuring out different strategies for handling difficult situations better, a big part of which is simply setting expectations differently, revising expectations as needed, and changing my approach.

Things aren’t perfect, and probably will never be, but overall, I have been happier in the last year than any other time that I can remember. That’s mostly a result of focusing on the best parts of my life, instead of the few not so great details. At this point, I certainly feel like I have more to give Lydia that will hopefully help her to avoid the trap of depression that I fell into. As a bonus, I can enjoy my time with my family to a much greater extent than before.

07 Mar 2016, 08:37

Let's Encrypt - setting up automated certificate renewals

let's encrypt

Not too long ago, Let’s Encrypt launched a free CA to enable website owners to easily generate their own signed certificates from the command line on their server. It integrates with popular web servers like Apache and NGINX to make the validation step easier. However, the certificates are only valid for 3 months, as opposed to the 1 year that is more typical. They’ll send an email when your certificate is close to expiring, but that’s not always ideal. The good news is that since this is a command line tool, it can easily be written into a cron job to run periodically.

The current release, 0.4.2, seems to work reasonably well for scripted renewals. There are a couple of notes on issues that I ran into, at least with this version. First, here’s the general command that I use for renewals:

./letsencrypt/letsencrypt-auto certonly --no-self-upgrade --apache --force-renew --renew-by-default --agree-tos -nvv --email admin@example.com -d example.com,www.example.com >> /var/log/letsencrypt/renew.log

That’s great when you have a simple site, with sub-domains that all point to one set of content, and one vhost entry (per port). If you have a couple of different subdomains that serve different sets of content, you’d use something like this:

./letsencrypt/letsencrypt-auto certonly --no-self-upgrade --apache --force-renew --renew-by-default --agree-tos -nvv --email admin@example.com -d example.com,www.example.com,app.example.com >> /var/log/letsencrypt/renew.log

Here, app.example.com relates to a different vhost. In this case, you’ll need to break the vhosts into separate files. See the examples below.

Vhost for example.com and www.example.com:

<VirtualHost *:80>
    ServerAdmin admin@example.com
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot "/var/www/example/"
    <Directory "/var/www/example/">
        DirectoryIndex index.html
        RewriteEngine On
        RewriteOptions Inherit
        AllowOverride All
        Order Deny,Allow
        Allow from all
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:443>
    ServerAdmin admin@example.com
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot "/var/www/example/"
    <Directory "/var/www/example/">
        Options -MultiViews +Indexes +FollowSymLinks
        DirectoryIndex index.html index.php index.phtml index.htm
        AllowOverride All
        Order Deny,Allow
        Allow from all
        Require all granted
    </Directory>

    ErrorLog "/var/log/httpd/example.com-error_log"
    CustomLog "/var/log/httpd/example.com-access_log" common

    SSLEngine on

    SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
</VirtualHost>

Vhost for app.example.com:

<VirtualHost *:80>
    ServerAdmin admin@example.com
    ServerName app.example.com
    DocumentRoot "/var/www/example-app/"
    <Directory "/var/www/example-app/">
        DirectoryIndex index.html
        RewriteEngine On
        RewriteOptions Inherit
        AllowOverride All
        Order Deny,Allow
        Allow from all
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:443>
    ServerAdmin admin@example.com
    ServerName app.example.com
    DocumentRoot "/var/www/example-app/"
    <Directory "/var/www/example-app/">
        Options -MultiViews +Indexes +FollowSymLinks
        DirectoryIndex index.html index.php index.phtml index.htm
        AllowOverride All
        Order Deny,Allow
        Allow from all
        Require all granted
    </Directory>

    ErrorLog "/var/log/httpd/example-app.com-error_log"
    CustomLog "/var/log/httpd/example-app.com-access_log" common

    SSLEngine on

    SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
</VirtualHost>

Then, I created a script called updateCerts.sh that handles a couple of different renewals, and at the end, runs a command to reload the Apache configs:

#!/bin/bash

./letsencrypt/letsencrypt-auto certonly --no-self-upgrade --apache --force-renew --renew-by-default --agree-tos -nvv --email admin@example.com -d example.com,www.example.com,app.example.com >> /var/log/letsencrypt/renew.log
./letsencrypt/letsencrypt-auto certonly --no-self-upgrade --apache --force-renew --renew-by-default --agree-tos -nvv --email admin@thing.com -d thing.com,www.thing.com,files.thing.com >> /var/log/letsencrypt/renew.log

apachectl graceful

Then, I added this to the root user’s crontab:

# m h  dom mon dow   command
0 4  *   *   1     /home/user/updateCerts.sh

Cron will run this script at 4 AM every Monday, and we’ll have logs at /var/log/letsencrypt/renew.log.
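To double-check that a renewal actually happened, you can ask openssl how long a certificate has left. Here’s a small sketch of that; the path is an assumption (point it at something like /etc/letsencrypt/live/example.com/cert.pem on a real server), and when no path is given it generates a throwaway 90-day self-signed cert just to demonstrate the check:

```shell
#!/bin/sh
# Print a certificate's expiry date and the days remaining.
# Usage: ./check_cert.sh /etc/letsencrypt/live/example.com/cert.pem
CERT="${1:-/tmp/demo-cert.pem}"

# Demo fallback: make a throwaway 90-day self-signed cert if none was given.
if [ ! -f "$CERT" ]; then
    openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example.com" \
        -days 90 -keyout /tmp/demo-key.pem -out "$CERT" 2>/dev/null
fi

# Pull the notAfter date out of the cert, then compute days remaining.
END=$(openssl x509 -in "$CERT" -noout -enddate | cut -d= -f2)
END_S=$(date -d "$END" +%s 2>/dev/null || date -j -f "%b %e %T %Y %Z" "$END" +%s)
NOW_S=$(date +%s)
echo "expires: $END"
echo "days left: $(( (END_S - NOW_S) / 86400 ))"
```

If the days-left number isn’t resetting to roughly 90 after the cron job runs, something in the renewal is failing and the log is the place to look.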

06 Mar 2016, 10:30

Container Tomcat Graph - part 1

Note: I wrote this post 3 years ago, and somehow forgot to publish it. It may contain incomplete sentences, and may be complete nonsense, I am not going to re-read it. Proceed at your own risk.

For years, I’ve had this problem around cooking at home, which is sort of two-fold. First, I really don’t like finding recipes and coming up with meals to cook. It’s time-consuming, and when I’m digging through recipes, I get indecisive and overwhelmed with the number of choices, wasting hours a week. The other problem I have is that there are meals that I cook that call for me to buy some ingredient that the meal itself doesn’t completely use and that I probably won’t use again.

I decided to build a tool to help me solve these problems. The tool will allow me to enter in recipes, it will store them in a graph database, and then it will traverse the graph to build a menu for the week for me. If there are recipes that call for uncommon ingredients that are somewhat expensive that won’t be fully used, it should try to find other recipes that also use that ingredient, and prioritize surfacing those recipes.

I wanted to write up my process, as I had a lot of difficulty just getting the basics off the ground, especially with how many moving parts there are. Hopefully, if I decide to build another app with a similar stack, it’ll be a little quicker with this guide.

This blog post is going to take us through the very basics of building a Docker image that runs Tomcat. The next post will cover getting that image up to the cloud. From there, future posts will cover using AngularJS as a frontend, and adding in the Titan graph.

Here are the technologies that I’m going to use to build this app:

As a prerequisite to this, you’ll need to have Docker set up locally. I haven’t written a thing on this, so you’re on your own here. It’s not that hard; the installer does most of the work for you. You’ll also need to have the Google Cloud tools set up.

Create the project

You’ll need to start with your basic directory structure, with a handful of basic files. You can find the full source for this blog post here. I’ll be using the following structure:


The very first thing that we’ll do is give a rough outline of our Tomcat app in build.gradle:

This uses a Gradle plugin that helps us out with building for Tomcat, as well as grabbing Tomcat dependencies.

Now, let’s define our web.xml file:

All we’re really doing with that is defining the jsp that we’ll use as a place-holder. Speaking of, here’s the index.jsp:

Just a simple ‘hello world’, so that we can see that things are working.


Now that that’s done, we need to be able to run that in some sort of environment. I grabbed a Dockerfile from jeanblanchard/docker-tomcat, and modified it to copy my app in.

As you can see, I’m building an entirely new Tomcat image every time I build this. I will probably refactor this into two separate Docker images: one for generic Tomcat stuff, and one that depends on the generic Tomcat image and pulls my war in.

You’ll note that if you try to build now, it’s going to fail, that’s because you need a tomcat-users.xml file. Here is the one from the jeanblanchard/docker-tomcat repository:

Running Locally

Now that you’ve got that done, I like to create a script to build and run things locally, just to test it all out. Here’s my buildrun.sh script:

Try running that with:


Once again, you’ll need to have your environment set up for the above command to work correctly. If it does run correctly, you should be able to visit the app in your browser, at an address that looks something like this: (the app name might be different if you used something different than I did).
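For reference, a minimal build-and-run script in this style might look like the following. This is a sketch only: the image name, container name, and 8080 port are assumptions, and it assumes the Dockerfile copies the built war into Tomcat as described above.

```shell
#!/bin/sh
# buildrun.sh (sketch) - build the war, bake it into the Tomcat image, run it.
set -e

# Bail out gracefully on machines without the toolchain or project files.
command -v docker >/dev/null 2>&1 || { echo "docker not found, skipping"; exit 0; }
[ -x ./gradlew ] || { echo "no gradle wrapper here, skipping"; exit 0; }

./gradlew war                        # build the web app's war file

docker build -t tomcat-graph .       # Dockerfile copies the war into Tomcat

# Replace any previous dev container, then run the new image.
docker rm -f tomcat-graph-dev 2>/dev/null || true
docker run -d --name tomcat-graph-dev -p 8080:8080 tomcat-graph

echo "app should be reachable on http://localhost:8080/"
```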

To be continued

Check out Part 2 (coming soon) for getting your Docker container deployed.

06 Mar 2016, 10:30

Container Tomcat Graph - part 2

Note: I wrote this post 3 years ago, and somehow forgot to publish it. It may contain incomplete sentences, and may be complete nonsense, I am not going to re-read it. Proceed at your own risk.

Last time, we covered creating a new Tomcat project, and running it locally with Docker. This blog post is going to take us through deploying our docker image to Google Cloud’s Container Engine. In future posts, I’ll cover using AngularJS as a frontend, and adding in the Titan graph.

Here are the technologies that I’m going to use to build this app:

As a prerequisite to this, you’ll need to have Docker and Google Cloud tools set up locally, and to have run through part 1 of this series.

Google Container Engine

Now that we’ve got something building, we want to push it off to Google and run it in the Google Cloud. To do that, we will use Container Engine, a product that allows you to run Docker containers on Google Cloud Platform. There’s another tool that we’ll need, called Kubernetes, in order to get this done. Check out the Getting Started docs to enable the API and install the necessary tools.

Creating a Cluster

When you click the button in the ‘Getting Started’ page, it will prompt you to define a cluster. I’d suggest specifying a size of 1, and a smaller instance type. We can always go back later and change the definition if we need to.

Once you’ve built your cluster in the Cloud Console, you’re going to want to set that as your default cluster in the command line tool (where NAME is the name of the cluster you created):

$ gcloud config set container/cluster NAME

Then authenticate yourself:

$ gcloud container clusters get-credentials NAME

Google Container Registry

Great, now we’ve got a cluster, but how do we get our Docker image to our cluster? Kubernetes pulls images from a registry, so we need to get our image into a registry. However, we don’t want to push it to the public Docker image registry, because this is our app, not a general purpose image that should be reused by other people. There’s a solution for this, called Google Container Registry, which is a private Docker registry that you can use with Kubernetes. In your Container Engine console, there’s a section for ‘Container Registry’.

Check out the Google Container Registry docs, where it talks about endpoints. There’s a default global endpoint and some region-specific endpoints; for this post, I’m going to use the us.gcr.io endpoint, since I want my image hosted in the US. Here is the list of endpoints:

  • us.gcr.io - United States
  • eu.gcr.io - European Union
  • asia.gcr.io - Asia
  • gcr.io - default, try to pick one of the others
  • b.gcr.io - existing Google Storage buckets

Secure Tomcat

Before I get to uploading our container, I need to revisit something from the previous post, which was the tomcat-users.xml file.

See if you can spot the huge security flaw in that file. Hint: take a look at the user section. Obviously, setting both the username and password to ‘admin’ is very bad, and will result in your container being compromised quite quickly (as was mine). I suggest generating random values for both name and password. A better solution would be to disable this altogether. One thing you can do for that is to rename the ‘war’ file that you upload to ROOT.war.

That said, I would not take my advice as the final word, and go research a bit on how to secure Tomcat. Either that, or don’t run the service publicly.

If your service gets compromised, you’ll get a friendly note from Google Cloud Compliance, informing you that your project will be shut off if you don’t fix it. (The tag in GitHub for the sample code for this post is not terribly secure, you may want to modify it yourself.)

Upload Container

Here’s the script that I’m using to tag and push containers:

There’s a lot going on in this script. I would suggest running it line by line manually, to make sure that everything works correctly.

The first couple of lines should be familiar, we’re basically just building the project, then building a Docker image. Next, we create a tag and endpoint with the Google Cloud tool, which will allow us to then push the built container image to Google Cloud’s private Container Registry.

It may take a couple of minutes for each of the next commands to complete. Run the Kubernetes tool to create a pod, referencing the image that we just uploaded, and giving it a name. Note that you are limited to a 24-character, alphanumeric name. It may allow you to create a pod and replication controller with an invalid name, but that will cause problems later. Finally, we can add the pod to a load balancer to get public traffic pointing to it.
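The sequence described above might be sketched like this. The project ID, image name, and pod name are all placeholders, and the gcloud/kubectl invocations reflect the tooling of this post’s era, which has since changed; treat this as an outline rather than something to paste in verbatim.

```shell
#!/bin/sh
# Sketch of a tag-and-push deploy script; all names are assumptions.
set -e

# Bail out gracefully if the required tools or project files are missing.
for t in docker gcloud kubectl; do
    command -v "$t" >/dev/null 2>&1 || { echo "$t not found, skipping"; exit 0; }
done
[ -x ./gradlew ] || { echo "no gradle wrapper here, skipping"; exit 0; }

./gradlew war                        # build the war
docker build -t tomcat-graph .       # build the Docker image

# Tag for the US-hosted private registry, then push.
docker tag tomcat-graph us.gcr.io/my-project-id/tomcat-graph
gcloud docker -- push us.gcr.io/my-project-id/tomcat-graph

# Create a pod running the image (name: max 24 chars, alphanumeric),
# then put it behind a load balancer for public traffic.
kubectl run tomcat-graph --image=us.gcr.io/my-project-id/tomcat-graph --port=8080
kubectl expose deployment tomcat-graph --type=LoadBalancer --port=8080
```

Running it line by line manually, as suggested above, is the best way to see where each step stands.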

Testing it out

Here’s what it looks like when you run ‘describe’ shortly after creating.

Note that it says “Creating load balancer”, then “Created load balancer”. This threw me initially, but it looks like it logs recent events, as opposed to giving the current status. The important bit is the LoadBalancer Ingress IP, which you can hit with your web browser, using the port you defined in Docker, and exposed in Kubernetes, and the path that you used when you ran this locally.

working request

It works!

Monitoring resources

KubeUI dashboard

Guess what! Kubernetes for Google Cloud ships with this great set of monitoring utilities built in! This is really helpful to show you what resources are being used, and what exactly is running on the box. It’s not perfect, and there are still some holes in the knowledge, but it gives you a lot of information that isn’t in the Google Cloud Console, and you don’t need to do anything yourself to set this up.

(Hint, you may need to refresh the KubeUI page to get it to load after logging in.)


This was a very simple ‘Hello World’ example, but it does demonstrate how to build a Docker container image, deploy it to Google Cloud, build a Kubernetes cluster, deploy the Docker image to the cluster, and then access it. Wow, looking back, we got a lot done!

Next time, we’ll take a look at defining a simple Graph Database, and adding it to our Tomcat application. Keep an eye out for part 3 in this series!

06 Mar 2016, 10:30

Container Tomcat Graph - part 3

Note: I wrote this post 3 years ago, and somehow forgot to publish it. It may contain incomplete sentences, and may be complete nonsense, I am not going to re-read it. Proceed at your own risk.

So far in this series, we have created a new Tomcat app, added it to a Docker container, and run that container in Google Cloud with Kubernetes. This blog post is going to cover adding in a Titan graph as data store. And we’ll wrap up with using AngularJS as a frontend in the next post.

Here are the technologies that I’m going to use to build this app:

As a prerequisite to this, you’ll need to have Docker set up locally, and to have run through part 1 and part 2 of this series.

The Graph

We’re looking at building a graph today, so we should probably first talk a little about what the graph will be modeling. I figured that it would be best to keep it simple.

friend graph

As you can see, there are a couple of parts here to pay attention to. In the above image, the circles represent nodes in the graph, also called vertices. The arrows represent edges, which are in this graph directional.

We also have properties on both the vertices (nodes) and edges. The vertices hold a first and last name, and the edges have two labels, one is ‘friend’ for all of them, and the other is a property that describes how the two people know each other.

Now, one thing to note right away is that it’s a bit strange in this case to have a directional edge.

Titan DB

For this post, we’re going to use Titan DB as our Graph Database. Titan is an open-source graph database, and implements the Apache TinkerPop 3 stack. Most of what you’re going to care about is a part of TinkerPop called Gremlin. Gremlin is the domain-specific language (DSL) that you’ll be using to traverse the graph. There’s a very helpful Getting Started guide on the TinkerPop site.

I know that this is a fairly poor explanation so far, but honestly, your best bet is to go research this a bit, because it is kind of complicated. Once you wrap your head around the model, things will start to make sense.

There’s only a little bit of Titan-specific code that we’ll be using to get the ball rolling. From there, it’ll be pretty much just Gremlin GraphTraversals.

The first thing that we’ll do is add Titan to our build.gradle file.

init Titan

Next up, we need to initialize the graph. We’re going to be running this one in-memory within our application. It may not be how you’d want to run it in production, but it is the quickest way for us to get something up and running to poke around with.

The above is pretty much taken straight from the Titan docs, and molded from several open source examples.

Nodes and Edges

Now that we’ve got that, let’s talk a little about our data model. If you take a look at the image at the top of the post, that’s what I was thinking, people and relationships. People have names, and know other people, along with how they know each other. In this sample, we’re going to be using directional edges, and only one edge between two nodes. Titan is flexible on this, but one of my queries would go into a recursive loop if we were to have bi-directional edges.

The Person class is going to represent the nodes in our graph. We’ve got some basic properties, and want to be able to build the object from a vertex (Titan’s name for nodes).

There’s an important method call in the Person constructor, which is buildFriends(). That method does a traversal to find all of the other people that this person knows, and also captures the knowsFrom information stored on those edges. If you’re interested in breaking down this traversal, I would suggest taking a look at the Gremlin docs.

The Friend class is going to represent the edges in our graph. Here we have a basic traversal, and a method to create edges.


All right, we’ve got our graph initialized, and have thought a little bit about data models. Now we want to query for things. What I’ve included as properties, should allow us to run some basic queries, and find relationships.

We’re building on Tomcat, so I think it makes sense to use a Servlet as our interface for making queries, and returning results.

There are two main parts to this servlet, init(), and doGet(). The init method initializes our graph with some nodes (vertices) and edges. Since we are running in-memory, none of this stuff is persisted between server runs.

The doGet method is our typical Tomcat servlet method, and it is what we are using to accept requests and build responses. As you can see, we accept queries on relationships and names. The supported names are full names.

We can issue a request like this:

GET http://example.com:8080/?name=Thompson,John
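The same request with curl, assuming the app is running locally on port 8080 (the host and port are placeholders; the block just reports when nothing is listening there):

```shell
#!/bin/sh
# Query the servlet for a person by full name; host/port are assumptions.
URL="http://localhost:8080/?name=Thompson,John"
if curl -sf --max-time 2 "$URL"; then
    echo
else
    echo "no server listening at $URL"
fi
```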

And you should receive a response that looks like:


Or, formatted nicely:

As a note, you’ll also want to add gson to the build.gradle; again, you can check out the full source of the project for that.

graph request and response in browser


We went super fast through this material, but I’m hoping that you were able to get something out of it.

03 Mar 2016, 09:08

MOD-t Tips


I’m writing a couple of notes to my future self about this printer.

This morning, I was trying to swap out the pink filament with new white filament that I picked up on Amazon. The filament was completely stuck; I couldn’t unload or load or do much of anything with it. I found a great GitHub repo of MOD-t utilities, and tried running the clean nozzle tool. (There’s a config file for Slic3r, a popular modeling tool for 3D printing, some calibration files, old firmware, and a clean nozzle utility.) That bought me a bit of pink filament dripping out of the nozzle. However, after some additional searching, I found this Google Group for the MOD-t, and on it, there was a really helpful post entitled “Clearing Filament Jam Without Opening MOD-t”. Basically, it said that the jam may be living above the nozzle, and if that’s the case, what you can do is remove the hot end (nozzle), heat up the nut (with a soldering iron or something), and pull the jammed filament out.

I used tweezers to hold the filament inside the nut, and needle nose pliers to hold the tweezers. Then I heated up both the tweezers and the nut, and was able to pull the jammed bit of filament out. After that, I replaced the hot end, loaded the new filament, and was back in business.

Future self - if this happens again, try the above solution!

01 Mar 2016, 08:30

New Matter MOD-t Review - a 3D printer

3d printed watch stand

Back in the beginning of November, I received a giant box on my doorstep. It was a 3D printer that I had backed on Indiegogo a year and a half earlier. The printer is a New Matter MOD-t. They were a bit late shipping, but that was not exactly unexpected, generally, I’m happy when crowdfunded projects ship at all. Either way, it was here, arriving just in time for me to be super busy with other things. We were on our way out to do something, so all I had time to do was pull it out of the box, and stick it on a shelf.

3d printer unboxing collage

This past weekend, I finally got around to getting the thing actually set up, and running. While the setup was rocky, and the desktop app poor, the overall experience is actually better than expected. I’m very excited to see where New Matter takes this product.

The setup was not easy; the software tools that they provide for Mac don’t work well. I was able to get the firmware updated on the machine, but spent about two hours trying to get it to connect to WiFi. There was a bug in their installer and desktop app that shows an error and a ‘disconnected’ status, even when the WiFi is connected properly. It took me a while to realize that it had actually been connected. One of the problems with this is that it makes it impossible to complete the setup installation, including calibrating the thing and loading the filament. You can install the desktop app without completing the setup, so I did that.

After getting the desktop app installed, I tried getting on WiFi again (still didn’t realize that I was connected), and ran into the same bug described above. I decided to skip that, and try to get the test print going. Looking in the desktop app, they have a button called ‘Load Filament’, I tried that, and it asked me if I wanted to unload or load the filament. I needed to load filament, and it then gave instructions for first unloading filament, and then a button to press for loading filament. I pressed the button and nothing happened. It took me quite some time to figure out that you needed to restart the printer while on that screen in the app before that button would become active. (Restarting the printer is part of the unload filament process.) Figuring that bit out, I was able to get the filament loaded and the test print going.

3d printer collage

This is about the time that I figured out that the printer was really connected, and showing up in the New Matter web app. Excellent! I loaded up some STL files from Thingiverse into my New Matter library, and sent them to the printer from the web. I was able to disconnect the printer from my MacBook, and let it run on its own. From here, I was able to basically get what I needed done with little to no issue.

For me, this is where the MOD-t really shines, and I think that New Matter has done a brilliant job. There’s no figuring out printer settings, or doing a deep dive into how FDM 3D printers work; you just go to the website, hit ‘print’, then press a button on the printer. Simple. The problems that I experienced were all, 100%, on the desktop app side, and those are easily fixable with updates.

There were two little hiccups. First, the top part of the watch stand that I was printing kept failing. I needed to edit the STL file to fix it, but it wasn’t really New Matter’s fault. The file was set up to print with only a single edge on the print bed. I grabbed the Meshmixer app, rotated the part, re-uploaded, and it printed just fine. The other issue was that the New Matter web app doesn’t seem to handle printing multiple parts too well (or at all). You can add multiple parts to a thing in your library, but it will only print one of them, without giving you a way to select which one. The workaround is just to upload each part separately, and that’s not really an issue.

All in all, I’m very excited about doing more with this thing. I’ve already started printing a case for the PiGrrl 2 project that I’m working on; I’m just waiting for some white filament to arrive to print certain parts. The watch stand that I printed is great, and I saved myself the $15 that they go for on Amazon. If you’re in the market for a 3D printer and want something simple and relatively inexpensive, this is a good choice.

09 Feb 2016, 08:06

Roasting with Roastmaster and a BlueTherm Duo

I’ve been roasting coffee for 7 or 8 years. In that time, I’ve learned how to roast a great batch of beans that makes excellent, easy-to-drink espresso. This tends to live somewhere between City+ and Full City+, usually right at Full City. Now, this is great, and I love the coffee that I roast, but I’ve realized that I haven’t really nailed the lighter roasts yet; specifically, roasting an excellent batch to City, where the bean is fully developed and has a rounded flavor.

The other thing that’s going on right now is that I’m waiting for a new roaster setup, which is currently being built. The new setup is a BBQ-top 5 lb roaster from Coffee Roasters Club. The new roaster is going to be entirely manual: I’ll completely control both the heat and the time. Controlling the heat precisely is going to be tricky, since it’ll be done on the grill.

With the new setup, I needed some new tools. First, I knew that I would need some way to grab temperature data, and preferably to log it. There’s an app that a friend told me about called “Roastmaster”, which helps you manage just about everything involved with coffee roasting, and has an option to do data logging. I checked out which data loggers were supported, and found that the BlueTherm Duo looked like what I wanted. (I was looking for something bluetooth, with two probes, that could handle the heat.)

My BlueTherm came in the other day, and last night it was time to roast some coffee.

BlueTherm Duo

Getting the BlueTherm set up was as easy as turning it on and plugging one of the thermocouples in. I opened the Settings on my iPad, and paired it with Bluetooth quickly enough. Getting it connected in the Roastmaster app was a little unintuitive, but it worked. Luckily, Roastmaster has excellent documentation, which I would suggest having a look at.

BlueTherm with Roaster

Now it was time to place the thermocouple. If you look at the two thermocouple leads in the BlueTherm photo, you’ll notice that one has an alligator clip, and the other is a probe that you’d stick into something like a steak. The alligator clip is the more useful one here. I clipped it onto the downward-facing vertical part of the chaff tray, next to the drum. I maybe could’ve gotten it underneath the drum; I’ll look up ideal placement next time around. The lead wire is fairly thick, but I was able to run it out through the upper right corner of the roaster door, which allowed me to still close the door. All good.

I plugged things in, and got a roast ready to go, both with my roaster, and in the Roastmaster app. After the thermocouple was all set, and everything ready to go, I fired things up, and let it run.

Roastmaster App

I was getting good readings, and everything was logging correctly. Yay!

In the app, there are buttons to record the first and second cracks. The app itself is fairly complicated, and there’s definitely a learning curve involved. That said, it’s an extremely useful tool, and I think that it will be indispensable to me going forward.

Something that I thought was interesting while I was roasting was that I could see how much heat was lost whenever I opened up the door to check on the progress. It’s funny, but it had never really occurred to me before that opening the door for a couple of seconds would have that much of an impact on the roast, but when I was logging the temperature data, it was clear that it dropped significantly, and took a bit of time to climb back up.

Another interesting finding was that opening the door during the Behmor’s cooling cycle does not cool it down faster than leaving the door closed (though it does make more of a mess). This seems counterintuitive, but I think the machine was designed to get a lot of airflow with the door closed. It’s similar to a PC case that has been designed to maximize airflow through the components: opening up the case door does not make things cooler, it just screws up the airflow.

Here’s the finished product, something right around City+.

Roasted Beans

I pulled a couple shots of espresso this morning with it, and it was OK. I reviewed it with the Angel’s Cup app.

For me, working with the Behmor is still pretty tricky, but I would really like to learn how to improve on that machine before my new equipment comes next month. Going to a fully manual setup is a little daunting, and I’d like to use this last month to really push the Behmor and see what I can get out of it.