
07 Mar 2016, 08:37

Let's Encrypt - setting up automated certificate renewals

let's encrypt

Not too long ago, Let’s Encrypt launched a free CA to enable website owners to easily generate their own signed certificates from the command line on their server. It integrates with popular web servers like Apache and NGINX to make the validation step easier. However, the certificates are only valid for 3 months, as opposed to the 1 year that is more typical. They’ll send an email when your certificate is close to expiring, but that’s not always ideal. The good news is that since this is a command line tool, it can easily be written into a cron job to run periodically.

The current release, 0.4.2, seems to work reasonably well for scripted renewals. There are a couple of notes about issues that I ran into, at least with this version. First, here’s the general command that I use for renewals:

./letsencrypt/letsencrypt-auto certonly --no-self-upgrade --apache --force-renew --renew-by-default --agree-tos -nvv --email admin@example.com -d example.com,www.example.com >> /var/log/letsencrypt/renew.log

That’s great when you have a simple site, with sub-domains that all point to one set of content, and one vhost entry (per port). If you have a couple of different subdomains that relate to different sets of content, you’d use something like this:

./letsencrypt/letsencrypt-auto certonly --no-self-upgrade --apache --force-renew --renew-by-default --agree-tos -nvv --email admin@example.com -d example.com,www.example.com,app.example.com >> /var/log/letsencrypt/renew.log

In the above example, app.example.com relates to a different vhost. In this case, you’ll need to break the vhosts into separate files. See the examples below.

Vhost for example.com and www.example.com:

<VirtualHost *:80>
    ServerAdmin admin@example.com
    ServerName example.com
    ServerAlias www.example.com
    DocumentRoot "/var/www/example/"
    <Directory "/var/www/example/">
        DirectoryIndex index.html
        RewriteEngine On
        RewriteOptions Inherit
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:443>
    ServerAdmin admin@example.com   
    ServerName example.com
    ServerAlias www.example.com  
    DocumentRoot "/var/www/example/"
    <Directory "/var/www/example/">
        AllowOverride All
        Options -MultiViews +Indexes +FollowSymLinks
        DirectoryIndex index.html index.php index.phtml index.htm
        Require all granted
    </Directory>

    ErrorLog "/var/log/httpd/example.com-error_log"
    CustomLog "/var/log/httpd/example.com-access_log" common

    SSLEngine on

    SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
</VirtualHost>

Vhost for app.example.com:

<VirtualHost *:80>
    ServerAdmin admin@example.com
    ServerName app.example.com
    DocumentRoot "/var/www/example-app/"
    <Directory "/var/www/example-app/">
        DirectoryIndex index.html
        RewriteEngine On
        RewriteOptions Inherit
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>

<VirtualHost *:443>
    ServerAdmin admin@example.com   
    ServerName app.example.com
    DocumentRoot "/var/www/example-app/"
    <Directory "/var/www/example-app/">
        AllowOverride All
        Options -MultiViews +Indexes +FollowSymLinks
        DirectoryIndex index.html index.php index.phtml index.htm
        Require all granted
    </Directory>

    ErrorLog "/var/log/httpd/example-app.com-error_log"
    CustomLog "/var/log/httpd/example-app.com-access_log" common

    SSLEngine on

    SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
</VirtualHost>

Then, I created a script called updateCerts.sh, which handles a couple of different renewals and, at the end, runs a command to reload the Apache configs:

#!/bin/bash

./letsencrypt/letsencrypt-auto certonly --no-self-upgrade --apache --force-renew --renew-by-default --agree-tos -nvv --email admin@example.com -d example.com,www.example.com,app.example.com >> /var/log/letsencrypt/renew.log
./letsencrypt/letsencrypt-auto certonly --no-self-upgrade --apache --force-renew --renew-by-default --agree-tos -nvv --email admin@thing.com -d thing.com,www.thing.com,files.thing.com >> /var/log/letsencrypt/renew.log

apachectl graceful

Then, I added this to the root user’s crontab:

# m h  dom mon dow   command
0 4  *   *   1     /home/user/updateCerts.sh

Cron will run this script once a week (Mondays at 4 AM), and we’ll have logs at /var/log/letsencrypt/renew.log.
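Since the whole point is unattended renewal, it’s worth being able to confirm that cron actually did its job. Here’s a sketch of a quick check; the function name and the 14-day window are my own choices, and the path in the usage note is the standard Let’s Encrypt live path:

```shell
#!/bin/bash
# check_cert: print a certificate's expiry date, and return non-zero
# if it expires within the next 14 days (1209600 seconds).
check_cert() {
    local cert="$1"
    # prints a line like: notAfter=Jun  5 07:12:00 2016 GMT
    openssl x509 -enddate -noout -in "$cert"
    if ! openssl x509 -checkend 1209600 -noout -in "$cert" >/dev/null; then
        echo "renewal needed soon: $cert" >&2
        return 1
    fi
}
```

Run it as `check_cert /etc/letsencrypt/live/example.com/fullchain.pem`; if it starts returning non-zero, the cron job isn’t renewing.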

03 Mar 2016, 09:08

MOD-t Tips

mod-t

I’m writing a couple of notes to my future self about this printer.

This morning, I was trying to swap out the pink filament with new white filament that I picked up on Amazon. The filament was completely stuck; I couldn’t unload or load or do much of anything with it. I found a great GitHub repo for MOD-t utilities, and tried running the clean nozzle tool. (There’s a config file for Slic3r, a popular modeling tool for 3D printing. There are some calibration files, old firmware, and a clean nozzle utility.) That bought me a bit of pink filament dripping out of the nozzle. However, after some additional searching, I found this Google Group for the MOD-t, and on it, there was a really helpful post entitled “Clearing Filament Jam Without Opening MOD-t”. Basically, it said that the jam may be living above the nozzle, and if that’s the case, then what you can do is remove the hot end (nozzle), heat up the nut (with a soldering iron or something), and pull the jammed filament out.

I used tweezers to hold the filament inside the nut, and needle nose pliers to hold the tweezers. Then I heated up both the tweezers and the nut, and was able to pull the jammed bit of filament out. After that, I replaced the hot end, loaded the new filament, and was back in business.

Future self - if this happens again, try the above solution!

01 Mar 2016, 08:30

New Matter MOD-t Review - a 3D printer

3d printed watch stand

Back in the beginning of November, I received a giant box on my doorstep. It was a 3D printer that I had backed on Indiegogo a year and a half earlier. The printer is a New Matter MOD-t. They were a bit late shipping, but that was not exactly unexpected; generally, I’m happy when crowdfunded projects ship at all. Either way, it was here, arriving just in time for me to be super busy with other things. We were on our way out to do something, so all I had time to do was pull it out of the box and stick it on a shelf.

3d printer unboxing collage

This past weekend, I finally got around to getting the thing actually set up, and running. While the setup was rocky, and the desktop app poor, the overall experience is actually better than expected. I’m very excited to see where New Matter takes this product.

The setup was not easy; the software tools that they provide for Mac don’t work well. I was able to get the firmware updated on the machine, but spent about two hours trying to get it to connect to WiFi. There was a bug in their installer and desktop app that shows an error and a ‘disconnected’ status, even when the WiFi is connected properly. It took me a while to realize that it had actually been connected. One of the problems with this is that it makes it impossible to complete the setup installation, including calibrating the machine and loading the filament. You can install the desktop app without completing the setup, so I did that.

After getting the desktop app installed, I tried getting on WiFi again (still didn’t realize that I was connected), and ran into the same bug described above. I decided to skip that, and try to get the test print going. Looking in the desktop app, they have a button called ‘Load Filament’, I tried that, and it asked me if I wanted to unload or load the filament. I needed to load filament, and it then gave instructions for first unloading filament, and then a button to press for loading filament. I pressed the button and nothing happened. It took me quite some time to figure out that you needed to restart the printer while on that screen in the app before that button would become active. (Restarting the printer is part of the unload filament process.) Figuring that bit out, I was able to get the filament loaded and the test print going.

3d printer collage

This is about the time that I figured out that the printer was really connected, and showing up in the New Matter web app. Excellent! I loaded up some STL files from Thingiverse into my New Matter library, and sent them to the printer from the web. I was able to disconnect the printer from my MacBook, and let it run on its own. From here, I was able to basically get what I needed done with little to no issue.

For me, this is where the MOD-t really shines, and I think that New Matter has done a brilliant job. There’s no figuring out printer settings, or doing a deep dive into how FDM 3D printers work; you just go to the website, hit ‘print’, then press a button on the printer. Simple. The problems that I experienced were all, 100%, on the desktop app side, which are easily fixable with updates.

There were two little hiccups. First, the top part of the watch stand that I was printing kept failing. I needed to edit the STL file to fix it, but it wasn’t really New Matter’s fault. The file was set up to print with only a single edge on the print bed. I grabbed the Meshmixer app, rotated the part, re-uploaded, and it printed just fine. The other issue was that the New Matter web app doesn’t seem to handle printing multiple parts too well (or at all). You can add multiple parts to a thing in your library, but it will only print one of them, without giving you a way to select which one. The workaround is just to upload each part separately, and that’s not really an issue.

All in all, I’m very excited about doing more with this thing. I’ve already started printing a case for the PiGrrl 2 project that I’m working on, just waiting for some white filament to arrive for printing certain parts. The watch stand that I printed is great, and I saved myself $15 which is what they go for on Amazon. If you’re in the market for a 3D printer, and want something simple and relatively inexpensive, this is a good choice.

18 Jan 2016, 09:56

Blogging tools - creating Hugo blog posts from Android

I am starting this post on my phone. I realized that one of the roadblocks that I have is that the workflow for creating new blog posts from my phone was nonexistent. I had a git client, and a Markdown editor, but Hugo uses special headers, and I also needed to be able to publish on my server.
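For reference, the header that Hugo expects is a block of front matter at the top of each Markdown file, something like the following (fields vary by site; these are just the ones a post like this one would use):

```toml
+++
date = "2016-01-18T09:56:00-08:00"
title = "Blogging tools - creating Hugo blog posts from Android"
tags = ["dev_blog", "android"]
+++
```

This is the block that the app described below generates and copies to the clipboard.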

writing tools on Android

Here’s what I did:

  1. Built an Android app to generate the header, and add it to my clipboard. Source
  2. Added a script to my server for easy publishing.
  3. Set up JuiceSSH, so that I can log into my server from my phone.

jotterpad preview

New header

With those couple things, I can now do the end-to-end process of creating and publishing blog posts from my phone. Here’s how that goes:

  1. In git client, pull from remote
  2. Create new file in repo
  3. Open file in JotterPad
  4. Run my HugoNew app
  5. Paste header into new post file
  6. Save file, and push to git repo
  7. Log into my server and run publish script

It still needs work (no image support yet, and it is a lot of steps), but it is a start.

Note - I added the images from my laptop. Here’s the source code for my Android app.

21 Sep 2015, 07:53

Running NGINX in Docker without caching

docker nginx

Lately, I’ve been working on a little web utility that I decided to write in AngularJS. It started out as a couple of JSP pages, and I quickly realized why things like Angular exist. Anyways, if you don’t serve AngularJS from a server, it’ll complain, since it wants to load internal bits with XHR, which doesn’t work from the local filesystem. The solution is to fire up a server.

What I really wanted out of a server was something that was incredibly dumb, and didn’t require me to restart the server whenever I updated my frontend code. So, while I could’ve packaged it with my Tomcat app, and run it in my Tomcat Docker image, that would’ve required lots of killing and starting the Docker image, just for quick little frontend modifications. I figured that I should be able to use a traditional web server on Docker, and found that both Apache and NGINX were available. There was some reason that I didn’t choose Apache, but I don’t recall what it was, so I went with NGINX.

The quickest thing to do is to pull the hello world example from Docker Hub and remap the volume that hosts the HTML content. This actually works fine until you start trying to quickly iterate. What I found was that the default configuration for NGINX (as used by all of the Docker images that I tried) enables caching. So, even though it’s serving the right directory, it may be serving a cached version.

UPDATE: It turns out that the root cause is actually not a caching issue, but a bug involving the sendfile kernel utility and VirtualBox. See here: NGINX docs, VirtualBox bug report. Thanks to u/justaphpguy for pointing this out.

At first, I tried logging into a Docker image, and modifying the config, and using that. That did work, but it wasn’t a great process. Instead, I decided to build my own image, and pull in my own config file.

nginx.conf

Here’s the default nginx.conf file with a couple of minor modifications. Lines 23-24 have been changed to disable caching, and line 25 turns on the autoindex feature (so that you don’t need to type in /index.html in the browser).
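The embedded file isn’t reproduced in this copy, but the changes amount to something like the following inside the `http` block (a sketch against the stock config; exactly which two directives were changed is an assumption on my part):

```nginx
    sendfile    off;    # stops the stale-content behavior (see the update above)
    tcp_nopush  off;    # assumption: the other caching-related line
    autoindex   on;     # directory listings, so /index.html isn't required in the URL
```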

Dockerfile

The Dockerfile is really simple, it just takes the stock nginx image, and pulls in our new config file.
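That file also isn’t shown in this archived copy, but a Dockerfile matching the description is just two lines:

```dockerfile
# Start from the stock nginx image and overlay our modified config.
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
```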

Building and Running

Here’s a script I wrote to build and run my image. I tend to treat my Docker images as ephemeral, so I don’t really care if they get rebuilt frequently and overwrite some previous version.

As a note, on line 11, you’ll notice that we map a path that you specify as the volume to serve content from. On a Mac, this must be in /Users, and won’t work elsewhere.

Conclusion

If you drop all three of those files into one directory, and run something like the following, you should be good to go:

$ ./build_nginx_docker.sh /Users/me/webstuffs

20 Sep 2015, 20:50

Docker Cleanup Commands

Using Docker, and creating and running containers, can make a bit of a mess, depending on how you use it. After a couple weeks of using Docker for development, I figured that it would be good to learn how to clean up my unused images and containers.

Images

To remove an image called ‘node’

docker rmi node

To remove all untagged images

docker rmi $(docker images -a | grep "^<none>" | awk '{print $3}')

To remove all images

docker rmi $(docker images -q)

Containers

To list active containers

docker ps -a

To remove all stopped containers

docker rm $(docker ps -a -q)

To remove all existing containers

docker rm $(docker ps -aq)

To kill and remove containers

docker rm $(docker kill $(docker ps -aq))


15 Sep 2015, 09:08

MySQL in Docker with Java Hibernate

docker mysql logos

Recently, I started working on a new server project at work, and wanted to be able to run a local dev environment with Docker. This has become my normal flow for a couple of server projects because of how easy Docker is to work with, and especially for the fact that I don’t need to set up any of the supporting structure on my machine to run the server. I really dislike needing to install things like Tomcat, Apache, or a MySQL server locally on my machine for development. Every time one of those things needs to be installed, I know that it’s one more thing on my machine that I need to maintain, and one more thing that could break and cause me to dump hours into fixing. With Docker, I don’t need to care about that, I can fire something off with a repeatable, programmatic configuration that may be ephemeral, and disappear when I’m done with it.

My work project has two parts: the Java app, which uses JPA/Hibernate as an ORM layer, and the MySQL database that it talks to. While the Java portion of this was fairly straightforward, the MySQL part was not.

Outline

  • Source code
  • Basic SQL structure
  • Java code
  • Java Docker scripts
  • MySQL Docker scripts
  • Tying it together

SQL Structure

Here we have a very basic relational structure in MySQL. We have Users, Tags, and UserTags to tie the two together. If you’re familiar with SQL, this should be familiar to you.
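The embedded schema isn’t shown in this copy (the real one is in the linked source), but the shape described is roughly this, with illustrative column names:

```sql
CREATE TABLE Users (
    id   INT AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255) NOT NULL
);

CREATE TABLE Tags (
    id  INT AUTO_INCREMENT PRIMARY KEY,
    tag VARCHAR(255) NOT NULL
);

-- join table tying Users to Tags (many-to-many)
CREATE TABLE UserTags (
    user_id INT NOT NULL,
    tag_id  INT NOT NULL,
    PRIMARY KEY (user_id, tag_id),
    FOREIGN KEY (user_id) REFERENCES Users(id),
    FOREIGN KEY (tag_id)  REFERENCES Tags(id)
);
```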

Java code

I have a bit more Java code that I could share here, but this is probably the most important bit, which is setting up the connection. If you haven’t set things up correctly, establishing the connection will be the first thing to fail. If you’re interested in seeing the actual JPA/Hibernate entities, check out the source code.

Java Docker scripts

The Java Dockerfile is dead simple, basically, we’re just copying a jar to a stock java8 Docker image, and running that jar.
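That embed is also missing from this copy; a Dockerfile along the lines described (the jar name is illustrative) would look like:

```dockerfile
# Stock Java 8 image: copy the built jar in, and run it.
FROM java:8
COPY myapp.jar /opt/myapp.jar
CMD ["java", "-jar", "/opt/myapp.jar"]
```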

MySQL Docker scripts

Next up, is the MySQL Docker file. It’s not terrible, but there are a couple of scripts, and something a little non-obvious going on.

When I started poking around with MySQL and Docker, I wanted to use the official MySQL base, as opposed to rolling my own. While I did find a good example run command in the CoreOS documentation, I didn’t find one that used a Dockerfile, and a SQL script to set up the database. So, I had to start digging.

One of the first things that I learned was that the mysqld command in the CMD line does not run the system daemon directly. Instead, it is run through the entrypoint wrapper. If you want to do custom things to MySQL during its first run, or startup, then you’ll want to modify the MySQL entrypoint script. The one that I used is shown below with my comments inline. Basically, I wanted to add a parameter for my setup SQL script.

You’ll see on line 84 where I added the initialization from the script I passed in. You will also note that I added line 103, turning on show_compatibility_56 in /etc/mysql/my.cnf. This was the solution to a problem that I had run into where whenever I tried to connect from my Java app as a non-root user, I was given an error like the following:

SELECT command denied to user 'test'@'host' for table 'session_variables'

I ran across the initial solution on StackOverflow, and was able to implement it in this entrypoint script.
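For reference, the config change that the modified entrypoint makes boils down to one line in /etc/mysql/my.cnf (show_compatibility_56 exists in MySQL 5.7, where the system variable tables moved to performance_schema):

```ini
[mysqld]
show_compatibility_56 = ON
```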

Tying it together

Below is my buildrun.sh script, which I use as a one-liner for setting everything up and running it. For the purposes of this post, it makes some sense, though for my actual implementation, I split the Java app from the SQL instance, so that I can iterate on my Java code without needing to constantly setup and teardown a MySQL instance. It’s also set up such that both instances are ephemeral, since I get annoyed when Docker leaves around dozens of 300MB files on my relatively constrained SSD.

The other issue that is common with running MySQL in Docker is one of data persistence. Unless you are keeping around the instances that you run, you’re going to lose data in between runs if you don’t do something to handle that. Data persistence is obviously an important aspect of a database. There are two options for keeping your data persistent:

  1. Passing in a volume from host
  2. Creating a shared Docker volume

On my Mac, I had trouble with using a host volume for MySQL, I kept getting permissions errors, and it would fail to run after that. So, I opted for creating the shared Docker volume. To me, using a host volume would still be preferable, since I’d really like to be able to commit the data directory to my repository, so that I can share it with the code, but oh well. (Yes, I know that sounds strange and bad, but in my particular case, it does make some sense.)

The shared Docker volume is created on line 22, and explained in the Docker documentation. In our run command, we call that volume on line 42.

Conclusion

This was a bit more involved than I suspected, but I think that it was worth spending the time to get this up and running. The workflow is much nicer than needing to do this stuff natively, or constantly deploying to a remote host. When I came into the office on Monday morning, after getting this up and running over the weekend, the Docker portions worked without issue, and it saved me a ton of time. Hopefully this information saves you a bit of time as well.

GitHub repo with source for this whole project.

08 Sep 2015, 07:04

Comparison of Asynchronous Data Loading in Java

threads

Recently, I was working on a problem at work that had some blocking calls that I thought I may not want to block on. My first instinct was to throw those requests into a thread-pool, but I also needed to get the values out of them. There are some obvious ways of doing this, but they all involve different trade-offs.

I’m going to compare a few different methods of doing asynchronous work in Java, and try to think about the pros and cons of each. What I will not say is what you should use, since that’s going to be application-specific. This is also not intended to be an exhaustive discussion, since I simply don’t have that kind of time, and I’m sure that there are many ways of doing this. Instead, I will try to focus on the most common methods. This is also going to be about Java, not Android, Android has some fancy-pants thread handling stuff that is not present in Java.

Futures

If you need to have the value returned to the calling thread, you’ll probably need to use Future. This may involve some amount of blocking I/O. Here’s the basic flow for a Future:

  1. create a reference to your future, and give it a type
  2. pass a Callable to a thread-pool, and make the return type of the callable match your future
  3. get the value out of your future

Here’s a code example:

Future<String> future = threadpool.submit((Callable<String>)() -> getString());
// blocks until the callable returns
String myString = future.get();

Here’s the rub: future.get() is a blocking call, so while you’ve run getString() on another thread, you’re still blocking the calling thread.

If you don’t need the value right away, you can avoid calling get(), and periodically check future.isDone(). Here is an example of that:

Future<String> future = threadpool.submit((Callable<String>)() -> getString());
// do some work while the task is running
doSomeWork();
anotherMethodCall();
if (!future.isDone())
    System.out.println("still need a blocking call");
// blocks until the callable returns
String myString = future.get();

There are cases where this is acceptable, or even preferred. For example, if the threadpool represents some resource that you’d like to tightly bound (e.g. a set of 10 threads for doing network requests), and you actually need the response before you can do any more work on the calling thread, then this pattern makes sense.

pros

  • returns your value back to the calling thread
  • fairly straightforward threading model
  • you know exactly when in your flow you will have your value

cons

  • very likely to block the calling thread

Callbacks and Wrapper Classes

A second possible way to handle this problem is to create a callback or wrapper class (or wrapper interface), which provides some known method that can be called from within the thread-pool after the initial work is done.

High-level view of this method:

  1. create a method or runnable to call
  2. add a call to the callback from within the task submitted to the thread-pool

The biggest problem with this method is that you may not be able to, or want to modify the task running in the thread-pool to call your callback. It ties the two together, making the task more tailored for one specific application.

Let’s take a look at how this might work:

// first, define our callback class
static class Callback implements Runnable {
    private String result;

    public void setResult(String s) {
        result = s;
    }

    @Override
    public void run() {
        handleResponse(result);
    }
}

public static void handleResponse(String result) {
    System.out.println("got result: " + result);
}

void doAsyncWork() {
    Callback callback = new Callback();
    threadpool.submit(() -> {
        callback.setResult(getString());
        callback.run();
    });
}

This is quite a bit more verbose than the Future solution, however it does avoid the issue of blocking the caller thread. While this is non-blocking, it is not terribly flexible, as you need to know something about the callback in the task that’s running on the thread-pool. Ideally, you should be able to separate these things out more.

pros

  • non-blocking

cons

  • does not return to the calling thread
  • verbose
  • messy
  • brittle

Observers

Observers are a little more verbose than Callbacks, but can be much more flexible. The key to Observers that makes them more flexible is that they separate out the retrieval of a value from the logic of handling that data. You can also have multiple Observers watching an Observable.

Observers provide more separation, and can often be made more generic with less effort than a callback. However, you still need to create a class and a method that you’ll use for setting the data and kicking off the Observer.

High-level view of this method:

  1. create an Observable class
  2. implement Observer
  3. set the value in the Observable in the task submitted to the thread-pool

Let’s look at an implementation.

protected static class ContentObserver implements Observer {

    @Override
    public void update(Observable o, Object arg) {
        if (!(o instanceof ObservableContent))
            return;

        System.out.println("update result" + ((ObservableContent) o).getContent());
    }
}

protected static class ObservableContent extends Observable {
    private String content = null;

    public ObservableContent() {
    }

    public void setContent(String content) {
        this.content = content;
        setChanged();
        notifyObservers();
    }

    public String getContent() {
        return content;
    }

}

public static void doAsyncWork(ObservableContent content) {
    threadpool.submit(() -> {
        content.setContent(getString());
    });
}

pros

  • non-blocking
  • flexible

cons

  • does not return to the calling thread
  • verbose

RxJava

While RxJava is not a part of the Java language, I think that it is interesting, and seems to be quickly gaining popularity. RxJava takes the observer pattern a step further, and greatly simplifies it.

RxJava is nearly its own DSL (Domain Specific Language), so there’s a lot available with it.

High-level view of this method:

  1. create a RxJava Observable
  2. tell it what to do
  3. tell it what thread to do the work on
  4. give it a Subscriber
  5. tell the Subscriber which thread to work on

It’s easier than it sounds, let’s take a look.

public static void handleResponse(String result) {
    System.out.println("got result: " + result);
}

Observable.just(0)
    .map(i -> request())
    .subscribeOn(Schedulers.from(threadpool))
    .observeOn(Schedulers.io())
    .subscribe(c -> handleResponse(c));

I’m not going to explain how to use RxJava here, but I will say that Dan Lew has a great series on RxJava on his blog.

pros

  • non-blocking
  • able to return to different threads
  • very flexible
  • compact
  • simple

cons

  • does not return to the calling thread
  • adds a dependency
  • requires learning how to use RxJava

Conclusion

I think that I’m going to give RxJava a try next time I need non-blocking asynchronous data loading in Java. It still depends on the application; without the Java 8 lambdas, RxJava is still fairly verbose, similar to Java Observers. There are also considerations around whether or not you want to take dependencies in your application, and which specific dependencies you want to take. If I didn’t want the dependency, I would probably use Java’s built-in Observers. However, if I need that return value on the same thread, it’s still going to be Futures, and blocking the calling thread.

If you have a better way of doing this, especially getting the value back on the same thread without blocking, I’d be interested to hear about it. (Again, for Java, not Android, as Android does not have the same constraints.)

I wrote up a full set of examples of all the above here.

14 Aug 2015, 10:25

Talkray On Chrome

And other Android apps!

There’s a way to pull Android apps into Chrome, and I now have Talkray up and running on my laptop! Here’s hoping that Chrome support gets better.

Talkray on the desktop

There are two versions of the how-to. The first is something that should work, but that I have not tested. The second is what I actually did, though it is more work. What’s more, this should work for many different Android apps.

Here’s the one that should work:

  1. Install the Android version of the Evernote app in Chrome
  2. Install twerk (Android APK packager for Chrome OS)
  3. Run twerk from Chrome App Launcher
  4. Download Talkray apk from somewhere.
  5. Then drag an apk of Talkray into Twerk
  6. Set the package name to com.talkray.client, and then set it to be tablet and landscape
  7. Save it, and load the output into Chrome as an unpacked extension
  8. You may be able to manually edit the generated output to add a different icon

Here’s what I actually did:

  1. Download this: http://archon-runtime.github.io/
  2. Unzip it, load it into Chrome from chrome://extensions as an unpacked extension
  3. Install twerk (Android APK packager for Chrome OS)
  4. Run twerk from Chrome App Launcher
  5. Download Talkray apk from somewhere.
  6. Then drag an apk of Talkray into Twerk
  7. Set the package name to com.talkray.client, and then set it to be tablet and landscape
  8. Save it, and load the output into Chrome as an unpacked extension
  9. You may be able to manually edit the generated output to add a different icon

The major difference is that I manually downloaded and installed the Android runtime. The runtime is supposed to be installed when Evernote is installed, since it’s also an Android app, and requires the runtime. However, I think that the version that I’m using is different than the one provided by Google. I have not had time to test the differences between the two.

14 Aug 2015, 10:03

How to Build a Hugo Site

hugo logo

Image Source

When I decided to build a new website, I knew that I wanted to use a static site generator that could take in Markdown, and spit out HTML. I tried a handful of different options, none of which worked out of the box as advertised. Some were very difficult to install, some would crash when running, some would generate something, but not enough to really get going, and the documentation across the various options was all over the spectrum. Then, I found Hugo.

I used homebrew to install it, but since it’s written in Go, it’s pretty simple to run. The other nice thing that it does for you is to generate a very basic structure, where it is very obvious where the content lives, and you’re one command away from running a dev server.

The following instructions are from the Hugo Quickstart Guide.

To install with homebrew (on a Mac):

brew install hugo

To create a new site:

mkdir new_site
cd new_site
hugo new site .

To create your first page:

hugo new about.md

The above page will live in ./content/about.md, and will be accessible at http://my_site.com/about.

Or, to create a post in a subdirectory:

hugo new post/first.md

The above post will live in ./content/post/first.md, and will be accessible at http://my_site.com/post/first.

Install all the themes:

git clone --recursive https://github.com/spf13/hugoThemes themes

Run the dev server:

hugo server --theme=hyde --buildDrafts

Now, all of your static content has been generated in the ./public directory. You can copy the URL printed out into your browser to view the site.

If all you want to do is rebuild the site with new content (to be published to production):

hugo -t hyde

I don’t like committing the ./public directory to git, so I added the following lines to my .gitignore:

public
public/*

This way, on your production server, you can pull changes from the source, and run the hugo command to generate the new public content.

From there, you can play with trying out different themes, and once you pick a theme, hacking on the HTML/CSS in the theme to customize it to exactly what you want.

Update

Here’s a link to the source for this particular site.