
04 Aug 2015, 08:33

Introducing My New Blog

Hello to whoever is reading this: you’ve landed on my new blog. This will be my primary collection of junk on the web. I’ve never been super happy with having my content live outside of my control, and this effort intends to fix that. That basically means that I will be posting here a lot, and sharing this content out to other outlets.

This is a work-in-progress. I’ve already imported three separate blogs’ worth of content here, and I’m continuing to add the things that I’d like to keep. So, if you find things broken, or a bit off, hopefully I’ll get to fixing it.

On a technical note, I’ve been wanting to try out a static site generator for a while, and landed on Hugo. It’s written in Go, and was the only tool I tried that worked as advertised out of the box. It also happens to be quite speedy.

That’s all for now!

29 Jul 2015, 13:28

Building a docker image from scratch


Docker logo

This post assumes that you have a working docker environment already set up.

Here are the really simple instructions:

  1. create a Dockerfile
  2. open the Dockerfile
  3. add a FROM statement with a base image
  4. add a RUN statement for running whatever setup commands are needed
  5. add any additional RUN statements necessary
  6. add a COPY statement for copying stuff over from the local filesystem
  7. run docker build -t user/image_name:v1 .
  8. run docker run user/image_name:v1
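The Dockerfile itself appears to be missing from this archive. Based on the steps above, it presumably looked something like this (the base image and RUN commands here are illustrative assumptions, not the original post’s exact contents):

```dockerfile
# Step 3: base image
FROM debian:latest

# Steps 4-5: setup commands
RUN apt-get update
RUN apt-get install -y curl

# Step 6: copy a file over from the local filesystem
COPY testfile /testfile
```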



The following is the contents of testfile:

this is a test file

As you’re writing your Dockerfile, it may be helpful to run the commands against a real image as you go, so that you can more easily predict what’s happening. Here is how to do that:

docker pull debian:latest
docker run -t -i debian:latest /bin/bash

If you want to generate a docker image based off of what you did in the interactive session vs a Dockerfile, you can take note of the container id in the console prompt after root. E.g. root@28934273 # indicates that the user is root and the container id is 28934273. (docker commit takes a container id, not an image id.) Then, you can run whatever commands you’d like, exit the session, and run the following:

docker commit -m "ran some commands" -a "My Name" \
28934273 user/debian:v1

  • -m is the commit message
  • -a is the author’s name

Now, you can run that image:

docker run -t -i user/debian:v1

You can use the image you just created as a base image in another of your Dockerfiles, so that you can interactively set up your image initially, and then in the second step, add any CMD statements to actually run your software.
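That second Dockerfile might look something like this (the image tag matches the commit example above; the CMD is an illustrative assumption):

```dockerfile
# Use the interactively-built image as the base
FROM user/debian:v1

# Run your software when the container starts
CMD ["/bin/bash"]
```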



02 Jun 2015, 06:21

Using Cardboard with unsupported VR viewers


Google Project Tango

The Project Tango booth at Google I/O 2015

On Saturday, I was hanging out with some friends at the SFVR Project Tango Hackathon & VR Expo. The Tango team had handed out Tango devices and Durovis Dive 7 VR masks to develop with. I was feeling pretty dead from Google I/O, and the surrounding parties, but Etienne Caron and Dario Laverde were busy doing their best to build something on the Tango using the Cardboard SDK. They both fired up demos, and immediately found an issue. The screen was not configured correctly for the device, or the viewer.

Google Cardboard

Giant Cardboard hanging at Google I/O 2015

It turns out that Cardboard expects a configuration that is typically provided by the manufacturer of the viewer, but the Dive doesn’t come with one. Etienne noticed that there was nothing documented in the Cardboard API that related to screen or viewer configs. He and I started to poke at the debugger, trying to figure out if we could find the place that those values get set. We traced it back to a file on the sdcard, but when we looked at it, we realized that it was in serialized protobuf format. My initial thought was to copy the files that read the protobuf and decode the file, but we realized that there was an easier way, the Cardboard View Profile Generator.

Etienne and I generated config files, and Dario helped us test. Dario was also working on some other Cardboard related issues. Here’s what we did:

  1. Visit the View Profile Generator from Chrome on a computer that’s not the tablet you’re trying to configure.
  2. On the tablet, in Chrome, visit the shortlink shown on the main page.
  3. On the tablet, if your instance doesn’t go full screen all the way (if you can see the nav bars), install GMD Full Screen Immersive Mode from the Play Store.
  4. Install the phone/tablet in the viewer.
  5. Back on the computer, hit ‘Continue’.
  6. Using the tool, you can dynamically configure the view settings. The tablet screen is synced up with the tool, so the changes should appear on the tablet in real time. (It uses Firebase!)
  7. Follow the instructions on each field, and watch the changes on the screen. You can tweak them until you have something that looks good to you. Here’s the config that I generated.
  8. Next, you should be able to generate your profile.
  9. In your Cardboard app, you should be able to scan the QR code in the setup step of Cardboard, or go to Settings.
  10. If you’re on the Tango, you will need to go through one extra step: the camera that attempts to scan the QR code doesn’t work right, so you will need to use a second device.
  11. After scanning from the second device, plug it into a computer with adb installed, and run the following command: adb pull /sdcard/Cardboard/current_device_params ./
  12. Then, plug your tablet in, and push the config that you generated: adb push current_device_params /sdcard/Cardboard/current_device_params
  13. Fire up the Cardboard app on your tablet and check it out!
  14. If it needs tweaking, just repeat steps 6-12.

Here’s the final config file that I generated.

19 May 2014, 15:18

Witness, a simple Android and Java Event Emitter


Source. I found this in an image search for ‘witness’. Had to use it. =)

I’ve been working on a project for Google Glass and Android that requires asynchronous messages to be passed around between threads. In the past, I’ve used Square’s Otto for this, but I wasn’t thrilled with the performance. Additionally, Otto makes some assumptions about which thread to run on, and how many threads to run with, that I wasn’t crazy about. Before continuing, I should point out that Otto is configurable, and some of these basic assumptions can be dealt with in the configs. Regardless, I felt like writing my own.


Witness is a really simple event emitter for Java. The idea here is to build a really simple structure that can be used to implement observers easily. It’s sort of like JavaScript’s Event Emitter, and the code is short enough to read in its entirety.

I’ll start with Reporter because it’s so simple. Essentially, this is just an interface that you implement if you want your class to be able to receive events.

The Witness class maps class types to Reporter instances. This means that for a given data type, Witness will fan out the event to all registered Reporters. It uses an ExecutorService with a pool size of 10 threads to accomplish this as quickly as possible off of the main thread:
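The embedded code is missing from this archive, so here is a minimal sketch of what Reporter and Witness might look like. The structure follows the description above, but the publish method name (`notify`) and the implementation details are assumptions, not the library’s actual code; the real Reporter is a top-level interface, nested here only to keep the sketch in one file.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class Witness {
    // Implement this interface to receive events.
    public interface Reporter {
        void notifyEvent(Object event);
    }

    // Maps an event's class to the Reporters registered for it.
    private static final Map<Class<?>, List<Reporter>> reporters = new ConcurrentHashMap<>();

    // Fan-out happens off the main thread, on a fixed pool of 10 threads.
    private static final ExecutorService pool = Executors.newFixedThreadPool(10);

    public static void register(Class<?> type, Reporter reporter) {
        reporters.computeIfAbsent(type, k -> new CopyOnWriteArrayList<>()).add(reporter);
    }

    public static void remove(Class<?> type, Reporter reporter) {
        List<Reporter> list = reporters.get(type);
        if (list != null) {
            list.remove(reporter);
        }
    }

    // Publish: every Reporter registered for the event's concrete type gets a
    // callback on a background thread.
    public static void notify(Object event) {
        List<Reporter> list = reporters.get(event.getClass());
        if (list == null) {
            return;
        }
        for (Reporter reporter : list) {
            pool.submit(() -> reporter.notifyEvent(event));
        }
    }
}
```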


To receive events for a specific datatype, the receiving class needs to implement the Reporter interface, then register for whatever data types it needs in the following way:

Witness.register(EventTypeOne.class, this);
Witness.register(EventTypeTwo.class, this);

When you’re done listening, unregister with the following:

Witness.remove(EventTypeOne.class, this);
Witness.remove(EventTypeTwo.class, this);

To publish events to listeners:


Handling events in the Reporter:

public void notifyEvent(Object o) {
    if (o instanceof SomeObject) {
        objectHandlingMethod(((SomeObject) o));
    }
}
On Android, it is a good idea to use a Handler, to force the event handling to run on the thread you expect. E.g., if you need code run on the main thread, in an Activity or Service:

public class MyActivity extends Activity implements Reporter {
    private Handler handler = new Handler();

    // ...

    public void notifyEvent(final Object o) {
        handler.post(new Runnable() {
            public void run() {
                if (o instanceof SomeObject) {
                    objectHandlingMethod(((SomeObject) o));
                }
            }
        });
    }
}

Events are published on a background thread, using an ExecutorService, backed by a BlockingQueue. This has a few important implications:

  • Thread safety

    • You need to be careful about making sure that whatever you’re using this for is thread safe
  • UI/Main thread

    • All events will be posted to background threads
  • Out of order

    • Events are handled in parallel, so it is possible for them to come in out of order

Please find this project on GitHub. If I get enough questions about it, I might be willing to take the time to package it and submit it to maven central.


+Dietrich Schulten had the following comment:

It has the effect that events can be delivered on arbitrary threads. The event handler must be threadsafe, must synchronize access to shared or stateful resources, and be careful not to create deadlocks. Otoh if you use a single event queue, you avoid this kind of complexity in the first place. I’d opt for the latter, threadsafe programming is something you want to avoid if you can.

I should note that my usage is all on Android, where I’m explicitly specifying the thread that the events will run on using Handlers. I haven’t used this in a non-Android environment, and I’m not entirely sure how to implement the equivalent behavior for regular Java.
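For what it’s worth, the single-queue behavior Dietrich describes can be approximated in plain Java with a single-threaded executor. This is just a sketch of that idea, not part of Witness:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Delivers events one at a time, in submission order, so handlers never run
// concurrently with each other.
public class SerialDispatcher {
    private final ExecutorService queue = Executors.newSingleThreadExecutor();

    public void dispatch(Runnable event) {
        queue.submit(event);
    }

    // Stop accepting events and wait for the queue to drain.
    public void shutdown() {
        queue.shutdown();
        try {
            queue.awaitTermination(2, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

The trade-off is the one from the comment above: you give up parallel delivery, but event handlers no longer need to be thread safe with respect to each other.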

20 Mar 2014, 18:11

AWS S3 utils for node.js


I just published my first package on npm! It’s a helper for S3 that I wrote. It does three things, it lists your buckets, gets a URL pair with key, and deletes media upon request.

The URL pair is probably the most important, because this allows you to have clients that put things on S3 without those clients having any credentials. They can simply make a request to your server for a URL pair, and then use the signed PUT URL to put the thing in your bucket. The pair also includes a public GET URL, so that anyone can go get it out.

var s3Util = require('s3-utils');
var s3 = new s3Util('your_bucket');
var urlPair = s3.generateUrlPair(success);
// urlPair: {
//     s3_key: "key",
//     s3_put_url: "some_long_private_url",
//     s3_get_url: "some_shorter_public_url"
// }
Deleting media from your S3 bucket:

s3.deleteMedia(key, success);

Or just list your buckets:


I had previously written about using the AWS SDK for node.js here. It includes some information about making sure that you have the correct permissions set up on your S3 bucket, as well as how to PUT a file on S3 using the signed URL.

19 Feb 2014, 20:57

Crashlytics is awesome!


I recently started playing with Crashlytics for an app that I’m working on. I needed better crash reporting than what Google Play was giving me. I had used HockeyApp for work, and I really like that service. My initial thought was to go with HA, but as I started looking around, I noticed that Crashlytics offers a free enterprise level service. No down-side to trying it!

I gave it a shot, they do a nice job with Intellij and Gradle integration for their Android SDK, so setting up my project was quite easy. I tested it out, and again, it was very simple and worked well. The reporting that I got back was quite thorough, more than what anybody else that I’m aware of gives you. It reports not just the stack trace, but the state of the system at the time of your crash. If you’ve ever run into an Android bug, this sort of thing can really come in handy.

But, then I ran into an issue. I had something that was acting funny, so I pinged the Crashlytics support. I was pretty sure that it was an Android bug, but hadn’t had time to really nail down what the exact problem was. After a short back and forth, I let them know that I’d try to dig in a little more when I had time, but that I was busy and it might not be until next week. The following day, I received a long, detailed response, that included Android source code, to explain exactly the condition that I was seeing. I was floored. They had two engineers working on this, figuring out exactly what the problem was, and what to do with it. I don’t think that I could imagine a better customer service experience!

As a note, I have no affiliation with Crashlytics outside of trying out their product for a few days. Their CS rep did not ask me to write this. I was so impressed that I wanted other people to know about it.

01 Feb 2014, 19:45

Demos using AWS with Node.JS and an AngularJS frontend


I recently decided to build some reusable code for a bunch of projects that I’ve got queued up. I wanted some backend components that leveraged a few of the highly scalable Amazon AWS services. This ended up taking me a month to finish, which is way longer than the week that I had intended to spend on it. (It was a month of nights and weekends, not as much free time as I’d hoped for in January.) Oh, and before I forget, here’s the GitHub repo.

This project’s goal is to build small demo utilities that should be a reasonable approximation of what we might see in an application that uses the aws-sdk node.js module. AngularJS will serve as a front-end, with no direct access to the AWS libraries, and will use the node server to handle all requests.

Here’s a temporary Elastic Beanstalk instance running to demonstrate this. It will be taken down in a few weeks, so don’t get too attached to it. I might migrate it to my other server, so hopefully while the URL might change, the service will remain alive.


  1. Data Set
  2. DynamoDB
  3. RDS
  4. S3
  5. SES
  6. SNS
  7. AngularJS
  8. Elastic Beanstalk Deployment

Data Set

Both DynamoDB and RDS are going to use the same basic data set. It will be a very simple set, with two tables that are JOINable. Here’s the schema:

Users: {
    id: 0,
    name: "Steve",
    email: ""
}

Media: {
    id: 0,
    uid: 0,
    url: "",
    type: "image/jpg"
}
The same schema will be used for both Dynamo and RDS, almost. RDS uses an mkey field in the media table, to keep track of the key. Dynamo uses a string id, which should be the key of the media object in S3.

DynamoDB

Using the above schema, we set up a couple Dynamo tables. These can be treated in a similar way to how you would treat any NoSQL database, except that Dynamo’s API is a bit onerous. I’m not sure why they insisted on not using standard JSON, but a converter can be easily written to go back and forth between Dynamo’s JSON format, and the normal JSON that you’ll want to work with. Take a look at how the converter works. Also, check out some other dynamo code here.
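The converter itself isn’t reproduced in this archive. A hypothetical sketch of the idea, handling only strings and numbers (the real helpers in the repo may differ):

```javascript
// Dynamo wraps every value in a type descriptor, e.g. {"S": "Steve"} or {"N": "0"}.

// Normal JSON -> Dynamo JSON
function ConvertFromJson(obj) {
    var out = {};
    Object.keys(obj).forEach(function (key) {
        var v = obj[key];
        out[key] = (typeof v === 'number') ? { "N": String(v) } : { "S": String(v) };
    });
    return out;
}

// Dynamo JSON -> normal JSON, for a single item
function ConvertToJson(item) {
    var out = {};
    Object.keys(item).forEach(function (key) {
        var v = item[key];
        out[key] = (v.N !== undefined) ? Number(v.N) : v.S;
    });
    return out;
}

// Dynamo scan results come back as an array of items
function ArrayConverter(items) {
    return items.map(ConvertToJson);
}
```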

There are just a couple of things going on in the DynamoDB demo. We have a method for getting all the users, adding or updating a user (if the user has the same id), and deleting a user. The getAll method does a scan on the Dynamo table, but only returns 100 results. It’s a good idea to limit your results, and then load more as the user requests.

The addUpdateUser method takes in a user object, generates an id based off of the hash of the email, then does a putItem to Dynamo, which will either create a new entry, or update a current one. Finally, deleteUser runs the Dynamo API method deleteItem.

The following are a few methods that you’ll find in the node.js code. Essentially, the basics are there, and we spit the results out over a socket. The socket will be used throughout most of the examples.


AWS.config.region = "us-east-1";
AWS.config.apiVersions = {
    dynamodb: '2012-08-10',
};
var dynamodb = new AWS.DynamoDB();

Get all the users

var getAll = function (socket) {
    dynamodb.scan({
        "TableName": c.DYN_USERS_TABLE,
        "Limit": 100
    }, function (err, data) {
        if (err) {
            socket.emit(c.DYN_GET_USERS, "error");
        } else {
            var finalData = converter.ArrayConverter(data.Items);
            socket.emit(c.DYN_GET_USERS, finalData);
        }
    });
};

Insert or update a user

var addUpdateUser = function (user, socket) {
    user.id = genIdFromEmail(user.email);
    var userObj = converter.ConvertFromJson(user);
    dynamodb.putItem({
        "TableName": c.DYN_USERS_TABLE,
        "Item": userObj
    }, function (err, data) {
        if (err) {
            socket.emit(c.DYN_UPDATE_USER, "error");
        } else {
            socket.emit(c.DYN_UPDATE_USER, data);
        }
    });
};

Delete a user

var deleteUser = function (userId, socket) {
    var userObj = converter.ConvertFromJson({id: userId});
    dynamodb.deleteItem({
        "TableName": c.DYN_USERS_TABLE,
        "Key": userObj
    }, function (err, data) {
        if (err) {
            socket.emit(c.DYN_DELETE_USER, "error");
        } else {
            socket.emit(c.DYN_DELETE_USER, data);
        }
    });
};

RDS

This one’s pretty simple, RDS gives you an olde fashioned SQL database server. It’s so common that I had to add the ‘e’ to the end of old, to make sure you understand just how common this is. Pick your favorite database server, fire it up, then use whichever node module works best for you. There’s a bit of setup and configuration, which I’ll dive into in the blog post. Here’s the code.

I’m not sure that there’s even much to talk about with this one. This example uses the mysql npm module, and is really, really straightforward. We need to start off by connecting to our DB, but that’s about it. The only thing you’ll need to figure out is the deployment of RDS, and making sure that you’re able to connect to it, but that’s a very standard topic, that I’m not going to cover here since there’s nothing specific to node.js or AngularJS.

The following are a few methods that you’ll find in the node.js code. Essentially, the basics are there, and we spit the results out over a socket. The socket will be used throughout most of the examples.


AWS.config.region = "us-east-1";
AWS.config.apiVersions = {
    rds: '2013-09-09',
};
var rds_conf = {
    host: mysqlHost,
    database: "aws_node_demo",
    user: mysqlUserName,
    password: mysqlPassword
};
var mysql = require('mysql');
var connection = mysql.createConnection(rds_conf);
var rds = new AWS.RDS();
connection.connect(function (err) {
    if (err)
        console.error("couldn't connect", err);
    else
        console.log("mysql connected");
});

Get all the users

var getAll = function(socket){
    var query = this.connection.query('select * from users;',
      function(err, result) {
        if (err){
            socket.emit(c.RDS_GET_USERS, c.ERROR);
        } else {
            socket.emit(c.RDS_GET_USERS, result);
        }
    });
};

Insert or update a user

var addUpdateUser = function(user, socket){
    var query = this.connection.query('INSERT INTO users SET ?',
      user, function(err, result) {
        if (err) {
            socket.emit(c.RDS_UPDATE_USER, c.ERROR);
        } else {
            socket.emit(c.RDS_UPDATE_USER, result);
        }
    });
};

Delete a user

var deleteUser = function(userId, socket){
    var query = this.connection.query('DELETE FROM users WHERE id = ?',
      userId, function(err, result) {
        if (err) {
            socket.emit(c.RDS_DELETE_USER, c.ERROR);
        } else {
            socket.emit(c.RDS_DELETE_USER, result);
        }
    });
};

S3

This one was a little tricky, but basically, we’re just generating a unique random key and using that to keep track of the object. We then generate both GET and PUT URLs on the node.js server, so that the client does not have access to our AWS auth tokens. The client only gets passed the URLs it needs. Check out the code!

The s3_utils.js file is very simple. listBuckets is a method to verify that you’re up and running, and lists out your current s3 buckets. Next up, generateUrlPair is simple, but important. Essentially, what we want is a way for the client to push things up to S3, without having our credentials. To accomplish this, we can generate signed URLs on the server, and pass those back to the client, for the client to use. This was a bit tricky to do, because there are a lot of important details, like making certain that the client uses the same exact content type when it attempts to PUT the object. We’re also making it world readable, so instead of creating a signed GET URL, we’re just calculating the publicly accessible GET URL and returning that. The key for the object is random, so we don’t need to know anything about the object we’re uploading ahead of time. (However, this demo assumes that only images will be uploaded, for simplicity.) Finally, deleteMedia is simple, we just use the S3 API to delete the object.

There are actually two versions of the S3 demo, the DynamoDB version, and the S3 version. For Dynamo, we use the Dynamo media.js file. Similarly, for the RDS version, we use the RDS media.js.

Looking first at the Dynamo version, getAll is not very useful, since we don’t really want to see everyone’s media, I don’t think this even gets called. The methods here are very similar to those in user.js, we leverage the scan, putItem, and deleteItem APIs.

The same is true of the RDS version with respect to our previous RDS example. We’re just making standard SQL calls, just like we did before.

You’ll need to modify the CORS settings on your S3 bucket for this to work. Try the following configuration:

    <?xml version="1.0" encoding="UTF-8"?>
    <CORSConfiguration xmlns="">
        <CORSRule>
            <AllowedOrigin>*</AllowedOrigin>
            <AllowedMethod>GET</AllowedMethod>
            <AllowedMethod>PUT</AllowedMethod>
            <AllowedHeader>*</AllowedHeader>
        </CORSRule>
    </CORSConfiguration>

The following are a few methods that you’ll find in the node.js code. Essentially, the basics are there, and we spit the results out over a socket. The socket will be used throughout most of the examples.


AWS.config.region = "us-east-1";
AWS.config.apiVersions = {
    s3: '2006-03-01',
};
var s3 = new AWS.S3();

Generate signed URL pair

The GET URL is public, since that’s how we want it. We could have easily generated a signed GET URL, and kept the objects in the bucket private.

var generateUrlPair = function (socket) {
    var urlPair = {};
    var key = genRandomKeyString();
    urlPair[c.S3_KEY] = key;
    var putParams = {Bucket: c.S3_BUCKET, Key: key, ACL: "public-read", ContentType: "application/octet-stream"};
    s3.getSignedUrl('putObject', putParams, function (err, url) {
        if (!!err) {
            socket.emit(c.S3_GET_URLPAIR, c.ERROR);
            return;
        }
        urlPair[c.S3_PUT_URL] = url;
        urlPair[c.S3_GET_URL] = "" + qs.escape(key);
        socket.emit(c.S3_GET_URLPAIR, urlPair);
    });
};

Delete Object from bucket

var deleteMedia = function (key, socket) {
    var params = {Bucket: c.S3_BUCKET, Key: key};
    s3.deleteObject(params, function (err, data) {
        if (!!err) {
            socket.emit(c.S3_DELETE, c.ERROR);
            return;
        }
        socket.emit(c.S3_DELETE, data);
    });
};

Client-side send file

var sendFile = function(file, url, getUrl) {
    var xhr = new XMLHttpRequest();
    xhr.file = file; // not necessary if you create scopes like this
    xhr.addEventListener('progress', function(e) {
        var done = e.position || e.loaded, total = e.totalSize || e.total;
        var prcnt = Math.floor(done/total*1000)/10;
        if (prcnt % 5 === 0)
            console.log('xhr progress: ' + prcnt + '%');
    }, false);
    if ( xhr.upload ) {
        xhr.upload.onprogress = function(e) {
            var done = e.position || e.loaded, total = e.totalSize || e.total;
            var prcnt = Math.floor(done/total*1000)/10;
            if (prcnt % 5 === 0)
                console.log('xhr.upload progress: ' + done + ' / ' + total + ' = ' + prcnt + '%');
        };
    }
    xhr.onreadystatechange = function(e) {
        if ( 4 == this.readyState ) {
            console.log(['xhr upload complete', e]);
            // emit the 'file uploaded' event
            $rootScope.$broadcast(Constants.S3_FILE_DONE, getUrl);
        }
    };
    xhr.open('PUT', url, true);
    // the content type must match the one used when signing the PUT URL
    xhr.setRequestHeader('Content-Type', 'application/octet-stream');
    xhr.send(file);
};

SES

SES uses another DynamoDB table to track emails that have been sent. We want to ensure that users have the ability to unsubscribe, and we don’t want people sending them multiple messages. Here’s the schema for the Dynamo table:

 Emails: {
     email: "",
     count: 1
 }
That’s it! We’re just going to check if the email is in that table, and what the count is before doing anything, then update the record after the email has been sent. Take a look at how it works.
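Stripped of the Dynamo calls, the gating logic amounts to something like this (a hypothetical sketch; the limit value and names are made up, and in the demo the counts live in the Emails table rather than a plain object):

```javascript
var MAX_EMAILS = 3; // the post only says "a very low limit"

// Returns true if we may still send to this address.
function canSendTo(emailCounts, address) {
    var count = emailCounts[address] || 0;
    return count < MAX_EMAILS;
}

// Bump the count after a successful send.
function recordSend(emailCounts, address) {
    emailCounts[address] = (emailCounts[address] || 0) + 1;
}
```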

Sending email with SES is fairly simple, however getting it to production requires jumping through a couple extra hoops. Basically, you’ll need to use SNS to keep track of bounces and complaints.

What we’re doing here is for a given user, grab all their media, package it up in some auto-generated HTML, then use the sendEmail API call to actually send the message. We are also keeping track of the number of times we send each user an email. Since this is just a stupid demo that I’m hoping can live on auto-pilot for a bit, I set a very low limit on the number of emails that may be sent to a particular address. Emails also come with a helpful ‘unsubscribe’ link.

The following are a few methods that you’ll find in the node.js code. Essentially, the basics are there, and we spit the results out over a socket. The socket will be used throughout most of the examples.


AWS.config.region = "us-east-1";
AWS.config.apiVersions = {
    sns: '2010-03-31',
    ses: '2010-12-01'
};
var ses = new AWS.SES();

Send Email

var sendEmail = function (user, userMedia, socket) {
    var params = {
        Source: "",
        Destination: {
            ToAddresses: []
        },
        Message: {
            Subject: {
                Data: user.name + "'s media"
            },
            Body: {
                Text: {
                    Data: "please enable HTML to view this message"
                },
                Html: {
                    Data: getHtmlBodyFor(user, userMedia)
                }
            }
        }
    };
    ses.sendEmail(params, function (err, data) {
        if (err) {
            socket.emit(c.SES_SEND_EMAIL, c.ERROR);
        } else {
            socket.emit(c.SES_SEND_EMAIL, c.SUCCESS);
        }
    });
};


We’re also listening for SNS messages to tell us if there’s an email that’s bounced or has a complaint. In the case that we get something, we immediately add an entry to the Emails table with a count of 1000. We will never attempt to send to that email address again.

I have my SES configured to tell SNS to send REST requests to my service, so that I can simply parse out the HTML, and grab the data that I need that way. Some of this is done in app.js, and the rest is handled in bounces.js. In bounces, we first need to verify with SNS that we’re receiving the requests and handling them properly. That’s what confirmSubscription is all about. Then, in handleBounce we deal with any complaints and bounces by unsubscribing the email.
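As a rough sketch of that flow (the field names below follow Amazon's published SNS/SES notification formats, but this is illustrative, not the repo's actual code):

```javascript
// SNS delivers JSON messages via HTTP POST. Two cases matter here:
// SubscriptionConfirmation (visit the SubscribeURL once to confirm), and
// Notification (a bounce or complaint, whose sender we unsubscribe).
function handleSnsMessage(msg, unsubscribe, confirm) {
    if (msg.Type === 'SubscriptionConfirmation') {
        // Confirm the subscription by fetching msg.SubscribeURL
        confirm(msg.SubscribeURL);
    } else if (msg.Type === 'Notification') {
        var body = JSON.parse(msg.Message);
        if (body.notificationType === 'Bounce') {
            body.bounce.bouncedRecipients.forEach(function (r) {
                unsubscribe(r.emailAddress);
            });
        } else if (body.notificationType === 'Complaint') {
            body.complaint.complainedRecipients.forEach(function (r) {
                unsubscribe(r.emailAddress);
            });
        }
    }
}
```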

AngularJS

The AngularJS code for this is pretty straightforward. Essentially, we just have a service for our connection, and to keep track of data coming in from Dynamo and RDS. There are controllers for each of the different views that we have, and they also coordinate with the services. We are also leveraging Angular’s built-in events system, to inform various pieces about when things get updated.

There’s nothing special about the AngularJS code here; we use the socket connection to shuffle data to and from the server, then dump it to the UI with the normal bindings. I do use Angular events, which I will discuss in a separate post.

Elastic Beanstalk Deployment

Here’s the AWS doc on setting up deployment with git integration straight from your project. It’s super simple. What’s not so straightforward, however, is that you need to make sure that the ports are set up correctly. If you can just run your node server on port 80, that’s the easiest thing, but I don’t think that the instance that you get from Amazon will allow you to do that. So, you’ll need to configure your LoadBalancer to forward port 80 to whatever port you’re running on, then open that port in the EC2 Security Group that the Beanstalk environment is running in.

Once again, do use the git command-line deployment tools, as it allows you to deploy in one line after a git commit, using git aws.push.

A couple of other notes about the deployment. First, you’re going to need to make sure that the node.js version is set correctly; AWS Elastic Beanstalk currently supports up to v0.10.21, but runs an earlier version by default. You will also need to add several environment variables from the console. I use the following parameters:


Doing this allowed me to not ever commit sensitive data. To get there, log into your AWS console, then go to Elastic Beanstalk and select your environment. Navigate to ‘Configuration’, then to ‘Software Configuration’. From here you can set the node.js version, and add environment variables. You’ll need to add the custom names above along with the values. If you’re deploying to your own box, you’ll need to at least export the above environment variables:

export AWS_RDS_HOST='hostname'
export AWS_RDS_MYSQL_USERNAME='username'

Again, the GitHub repo.

29 Jan 2014, 18:51

Hacking Crappy Customer Support


The Situation

We had an issue at work the other week. Basically, we were running into some pretty serious problems with one of the SaaS services that we use (I will leave which one to the reader’s imagination). This is a mission critical service for a couple of our offerings, and it’s not particularly cheap, at $200/month for the pro tier (which is what we have). The service is structured in such a way that it’s likely to be mission critical for anybody who uses it. This is all well and good, except that they don’t seem to bother answering support emails. We’ve had emails to them go totally unanswered before. However, until last week, we hadn’t run into an issue important enough that we really, really needed a response.

We found ourselves in a situation where we had taken a dependency on a third-party service, ran into an issue, and were getting no help from the provider. We had guessed at a work-around that turned out to be the right answer for an immediate fix, but we still needed a proper fix for this, else we would need to make larger changes to our apps to better work-around the problem. The vendor was not answering the urgent emails, and provided no phone number for the company at all.

The Hack

I had an idea. The provider has an Enterprise tier, we could contact the sales team, and say that we’re looking to possibly upgrade to Enterprise, but that we had some questions that needed to get answered first. We structured our questions in a way that first asked what we needed to know, and then asked if the Enterprise tier might solve the issue, or if they were working on a fix. This tactic worked. A couple of us separately sent emails to the Enterprise sales team (they, too, do not have a phone number listed) and received responses fairly quickly. We got our questions answered after a couple rounds of emails.


It’s true, they didn’t promise any support, even at the pro level; we had mistakenly assumed that we’d at least be able to get questions answered via email. We should have probably done a bit more research before choosing a provider. The provider may not have been set up to handle that much in terms of support. However, they are not a small company, and could at least offer paid support as needed.


As far as whether or not we stick with them, we’ll see. I’m not too thrilled about paying a company $200/mo and getting the finger from them whenever something of theirs is broken. But, there are other constraints, and you can’t always get everything you want. Sometimes there simply isn’t the time to go back and fix everything that you’d like to.

31 Oct 2013, 23:59



Special Delivery!

We got a special delivery here at the TiKL office today, a brand new Nexus 5. As soon as we opened it, we booted it up, and installed Talkray on it. It is a fantastic device.

Fun day at work!

26 Oct 2013, 16:42

python sweetness: How to lose $172,222 a second for 45 minutes


The actual loss was $465M, as opposed to $172,222. Regardless, it’s a good read.


This is probably the most painful bug report I’ve ever read, describing in glorious technicolor the steps leading to Knight Capital’s $465m trading loss due to a software bug that struck late last year, effectively bankrupting the company.

The tale has all the hallmarks of technical debt in a…