
08 Sep 2015, 07:04

Comparison of Asynchronous Data Loading in Java

threads

Recently, I was working on a problem at work that involved some blocking calls that I didn’t necessarily want to block on. My first instinct was to throw those requests into a thread-pool, but I also needed to get the values back out of them. There are some obvious ways of doing this, but they all involve different trade-offs.

I’m going to compare a few different methods of doing asynchronous work in Java, and try to think about the pros and cons of each. What I will not say is what you should use, since that’s going to be application-specific. This is also not intended to be an exhaustive discussion, since I simply don’t have that kind of time, and I’m sure that there are many ways of doing this. Instead, I will try to focus on the most common methods. Finally, this is going to be about Java, not Android: Android has some fancy-pants thread handling stuff that is not present in Java.

Futures

If you need to have the value returned to the calling thread, you’ll probably need to use a Future. This may involve some amount of blocking. Here’s the basic flow for a Future:

  1. create a reference to your future, and give it a type
  2. pass a Callable to a thread-pool, and make the return type of the callable match your future
  3. get the value out of your future

Here’s a code example:

Future<String> future = threadpool.submit((Callable<String>)() -> getString());
// blocks until the callable returns
String myString = future.get();

Here’s the rub: future.get() is a blocking call, so while you’ve run getString() on another thread, you’re still blocking the calling thread.
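If blocking indefinitely is unacceptable, get() also has a timed variant. Here’s a self-contained sketch; the fixed-size threadpool and the instantly-returning getString() are stand-ins for the example above:

```java
import java.util.concurrent.*;

public class TimedFutureExample {
    // stand-in for the post's getString(); any slow call works here
    static String getString() { return "hello"; }

    static String fetchWithTimeout() throws Exception {
        ExecutorService threadpool = Executors.newFixedThreadPool(2);
        Future<String> future = threadpool.submit((Callable<String>) () -> getString());
        try {
            // wait at most one second instead of blocking indefinitely
            return future.get(1, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // give up on the slow task
            return null;
        } finally {
            threadpool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchWithTimeout());
    }
}
```

This caps how long the caller can be stuck, but it trades blocking for the possibility of not getting a value at all.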

If you don’t need the value right away, you can put off calling get(), and periodically check future.isDone() in the meantime. Here is an example of that:

Future<String> future = threadpool.submit((Callable<String>)() -> getString());
// do some work while the task is running
doSomeWork();
anotherMethodCall();
if (!future.isDone())
    System.out.println("still need a blocking call");
// blocks until the callable returns
String myString = future.get();

There are cases where this is acceptable, or even preferred. For example, if the threadpool represents some resource that you’d like to tightly bound (e.g. a set of 10 threads for doing network requests), and you actually need the response before you can do any more work on the calling thread, then this pattern makes sense.

pros

  • returns your value back to the calling thread
  • fairly straightforward threading model
  • you know exactly when in your flow you will have your value

cons

  • very likely to block the calling thread
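Worth a mention: if you’re on Java 8, CompletableFuture builds on Future and can hand the result to a follow-up stage without blocking the caller. A minimal sketch, with the pool and getString() again as stand-ins:

```java
import java.util.concurrent.*;

public class CompletableFutureExample {
    // stand-in for the slow call from the examples above
    static String getString() { return "hello"; }

    // runs getString() on the pool and transforms the result, all without
    // blocking the caller; the caller can attach more stages or join later
    static CompletableFuture<String> fetchAsync(ExecutorService pool) {
        return CompletableFuture
                .supplyAsync(CompletableFutureExample::getString, pool)
                .thenApply(s -> "got result: " + s);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        fetchAsync(pool).thenAccept(System.out::println); // non-blocking consume
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

The catch is the same as with the callback approaches below: the continuation runs on a pool thread, not the calling thread.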

Callbacks and Wrapper Classes

A second possible way to handle this problem is to create a callback or wrapper class (or wrapper interface) that provides some known method which can be called from within the thread-pool after the initial work is done.

High-level view of this method:

  1. create a method or runnable to call
  2. add a call back to the callback in the task submitted to the thread-pool

The biggest problem with this method is that you may not be able to, or may not want to, modify the task running in the thread-pool to call your callback. Doing so ties the two together, making the task tailored to one specific application.

Let’s take a look at how this might work:

// first, define our callback class
static class Callback implements Runnable {
    private String result;

    public void setResult(String s) {
        result = s;
    }

    @Override
    public void run() {
        handleResponse(result);
    }
}

public static void handleResponse(String result) {
    System.out.println("got result: " + result);
}

void doAsyncWork() {
    Callback callback = new Callback();
    threadpool.submit(() -> {
        callback.setResult(getString());
        callback.run();
    });
}

This is quite a bit more verbose than the Future solution; however, it does avoid blocking the calling thread. While it is non-blocking, it is not terribly flexible, as the task running on the thread-pool needs to know something about the callback. Ideally, you should be able to separate these things out more.
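One way to loosen that coupling is to hand the task a generic java.util.function.Consumer rather than the concrete Callback class, so the task only knows it has some consumer of the result. A sketch (the names here are mine, not from the post’s repo):

```java
import java.util.concurrent.*;
import java.util.function.Consumer;

public class ConsumerCallbackExample {
    // stand-in for the post's getString()
    static String getString() { return "hello"; }

    // the task never sees a concrete callback type, only Consumer<String>
    public static void doAsyncWork(ExecutorService threadpool, Consumer<String> onResult) {
        threadpool.submit(() -> onResult.accept(getString()));
    }

    public static void main(String[] args) throws Exception {
        ExecutorService threadpool = Executors.newFixedThreadPool(2);
        doAsyncWork(threadpool, s -> System.out.println("got result: " + s));
        threadpool.shutdown();
        threadpool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

The same caveat applies: the consumer runs on the pool thread, not the calling thread.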

pros

  • non-blocking

cons

  • does not return to the calling thread
  • verbose
  • messy
  • brittle

Observers

Observers are a little more verbose than Callbacks, but can be much more flexible. The key to Observers that makes them more flexible is that they separate out the retrieval of a value from the logic of handling that data. You can also have multiple Observers watching an Observable.

Observers provide more separation, and may be able to be made more generic with less effort than a callback. However, you still need to create a class and a method that you’ll use for setting the data and kicking off the Observer.

High-level view of this method:

  1. create an Observable class
  2. implement Observer
  3. set the value in the Observable in the task submitted to the thread-pool

Let’s look at an implementation.

protected static class ContentObserver implements Observer {

    @Override
    public void update(Observable o, Object arg) {
        if (!(o instanceof ObservableContent))
            return;

        System.out.println("update result: " + ((ObservableContent) o).getContent());
    }
}

protected static class ObservableContent extends Observable {
    private String content = null;

    public ObservableContent() {
    }

    public void setContent(String content) {
        this.content = content;
        setChanged();
        notifyObservers();
    }

    public String getContent() {
        return content;
    }

}

public static void doAsyncWork(ObservableContent content) {
    threadpool.submit(() -> {
        content.setContent(getString());
    });
}
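Putting the pieces together, a trimmed-down runnable version of the flow above might look like this; I’ve inlined a minimal ObservableContent and used a lambda in place of ContentObserver:

```java
import java.util.*;
import java.util.concurrent.*;

public class ObserverDemo {
    // stand-in for the post's getString()
    static String getString() { return "hello"; }

    public static class ObservableContent extends Observable {
        private String content;

        public void setContent(String content) {
            this.content = content;
            setChanged();
            notifyObservers(); // runs observers' update() on this thread
        }

        public String getContent() { return content; }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService threadpool = Executors.newFixedThreadPool(2);
        ObservableContent content = new ObservableContent();
        // a lambda works since Observer has a single update() method
        content.addObserver((o, arg) ->
                System.out.println("update result: " + ((ObservableContent) o).getContent()));
        threadpool.submit(() -> content.setContent(getString()));
        threadpool.shutdown();
        threadpool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Note that notifyObservers() calls each observer synchronously on whatever thread called setContent(), here a pool thread.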

pros

  • non-blocking
  • flexible

cons

  • does not return to the calling thread
  • verbose

RxJava

While RxJava is not a part of the Java language, I think that it is interesting, and seems to be quickly gaining popularity. RxJava takes the observer pattern a step further, and greatly simplifies it.

RxJava is nearly its own DSL (Domain Specific Language), so there’s a lot available with RxJava.

High-level view of this method:

  1. create a RxJava Observable
  2. tell it what to do
  3. tell it what thread to do the work on
  4. give it a Subscriber
  5. tell the Subscriber which thread to work on

It’s easier than it sounds; let’s take a look.

public static void handleResponse(String result) {
    System.out.println("got result: " + result);
}

Observable.just(0)
    .map(i -> request())
    .subscribeOn(Schedulers.from(threadpool))
    .observeOn(Schedulers.io())
    .subscribe(c -> handleResponse(c));

I’m not going to explain how to use RxJava here, but I will say that Dan Lew has a great series on RxJava on his blog.

pros

  • non-blocking
  • able to return to different threads
  • very flexible
  • compact
  • simple

cons

  • does not return to the calling thread
  • adds a dependency
  • requires learning how to use RxJava

Conclusion

I think that I’m going to give RxJava a try the next time I need non-blocking asynchronous data loading in Java. It still depends on the application: without the Java 8 lambdas, RxJava is fairly verbose, similar to Java’s Observers. There are also considerations around whether or not you want to take dependencies in your application, and which specific dependencies you want to take. If I didn’t want the dependency, I would probably use Java’s built-in Observers. However, if I need that return value on the same thread, it’s still going to be Futures, and blocking the calling thread.

If you have a better way of doing this, especially getting the value back on the same thread without blocking, I’d be interested to hear about it. (Again, for Java, not Android, as Android does not have the same constraints.)

I wrote up a full set of examples of all the above here.

14 Aug 2015, 10:25

Talkray On Chrome

And other Android apps!

There’s a way to pull Android apps into Chrome, and I now have Talkray up and running on my laptop! Here’s hoping that Chrome support gets better.

Talkray on the desktop

There are two versions of the how-to. The first is something that should work, but that I have not tested. The second is what I actually did, though it is more work. What’s more, this should work for many different Android apps.

Here’s the one that should work:

  1. Install the Android version of the Evernote app in Chrome
  2. Install twerk (Android APK packager for Chrome OS)
  3. Run twerk from Chrome App Launcher
  4. Download Talkray apk from somewhere.
  5. Then drag an apk of Talkray into Twerk
  6. Set the package name to com.talkray.client, and then set it to be tablet and landscape
  7. Save it, and load the output into Chrome as an unpacked extension
  8. You may be able to manually edit the generated output to add a different icon

Here’s what I actually did:

  1. Download this: http://archon-runtime.github.io/
  2. Unzip it, load it into Chrome from chrome://extensions as an unpacked extension
  3. Install twerk (Android APK packager for Chrome OS)
  4. Run twerk from Chrome App Launcher
  5. Download Talkray apk from somewhere.
  6. Then drag an apk of Talkray into Twerk
  7. Set the package name to com.talkray.client, and then set it to be tablet and landscape
  8. Save it, and load the output into Chrome as an unpacked extension
  9. You may be able to manually edit the generated output to add a different icon

The major difference is that I manually downloaded and installed the Android runtime. The runtime is supposed to be installed when Evernote is installed, since it’s also an Android app, and requires the runtime. However, I think that the version that I’m using is different than the one provided by Google. I have not had time to test the differences between the two.

14 Aug 2015, 10:03

How to Build a Hugo Site

hugo logo

Image Source

When I decided to build a new website, I knew that I wanted to use a static site generator that could take in Markdown, and spit out HTML. I tried a handful of different options, none of which worked out of the box as advertised. Some were very difficult to install, some would crash when running, some would generate something, but not enough to really get going, and the documentation across the various options was all over the spectrum. Then, I found Hugo.

I used homebrew to install it, but since it’s written in Go, it’s a single binary that is pretty simple to run. The other nice thing that it does for you is generate a very basic structure, where it is very obvious where the content lives, and you’re one command away from running a dev server.

The following instructions are from the Hugo Quickstart Guide.

To install with homebrew (on a Mac):

brew install hugo

To create a new site:

mkdir new_site
cd new_site
hugo new site .

To create your first page:

hugo new about.md

The above page will live in ./content/about.md, and will be accessible at http://my_site.com/about.

Or, to create a post in a subdirectory:

hugo new post/first.md

The above post will live in ./content/post/first.md, and will be accessible at http://my_site.com/post/first.

Install all the themes:

git clone --recursive https://github.com/spf13/hugoThemes themes

Run the dev server:

hugo server --theme=hyde --buildDrafts

Now, all of your static content has been generated in the ./public directory. You can copy the URL printed out into your browser to view the site.

If all you want to do is rebuild the site with new content (to be published to production):

hugo -t hyde

I don’t like committing the ./public directory to git, so I added the following lines to my .gitignore:

public
public/*

This way, on your production server, you can pull changes from the source, and run the hugo command to generate the new public content.

From there, you can play with trying out different themes, and once you pick a theme, hacking on the HTML/CSS in the theme to customize it to exactly what you want.

Update

Here’s a link to the source for this particular site.

04 Aug 2015, 08:33

Introducing My New Blog

Hello to whoever is reading this: you’ve landed on my new blog. This will be my primary collection of junk on the web. I’ve never been super happy with having my content live outside of my control, and this effort intends to fix that issue. That basically means that I will be posting here a lot, and sharing this content out to other outlets.

This is a work-in-progress. I’ve already imported three separate blogs’ worth of content here, and I’m continuing to add the things that I’d like to keep. So, if you find things broken, or a bit off, hopefully I’ll get to fixing it.

On a technical note, I’ve been wanting to try out a static site generator for a while, and landed on Hugo. It’s written in Go, and was the only tool I tried that worked as advertised out of the box. It also happens to be quite speedy.

That’s all for now!

29 Jul 2015, 13:28

Building a docker image from scratch

Docker logo

This post assumes that you have a working docker environment already set up.

Here are the really simple instructions:

  1. create a Dockerfile
  2. open the Dockerfile
  3. add a FROM statement with a base image
  4. add a RUN statement for running whatever setup commands are needed
  5. add any additional RUN statements necessary
  6. add a COPY statement for copying stuff over from the local filesystem
  7. run docker build -t user/image_name:v1 .
  8. run docker run user/image_name:v1

Dockerfile

testfile

The following is the contents of testfile:

this is a test file

As you’re writing your Dockerfile, it may be helpful to run the commands against a real image as you go, so that you can more easily predict what’s happening. Here is how to do that:

docker pull debian:latest
docker run -t -i debian:latest /bin/bash

If you want to generate a docker image based off of what you did in the interactive session instead of a Dockerfile, take note of the container id in the console after root. E.g. root@28934273 # indicates the user is root and the container id is 28934273. Then, you can run whatever commands you’d like, exit the session, and run the following:

docker commit -m "ran some commands" -a "My Name" \
28934273 user/debian:v1

  • -m is the commit message
  • -a is the author’s name

Now, you can run that image:

docker run -t -i user/debian:v1

You can use the image you just created as a base image in another of your Dockerfiles, so that you can interactively set up your image initially, and then in the second step, add any CMD statements to actually run your software.

References

Documentation pulled from:

Code

Clone from GitHub

02 Jun 2015, 06:21

Using Cardboard with unsupported VR viewers

Google Project Tango

The Project Tango booth at Google I/O 2015

On Saturday, I was hanging out with some friends at the SFVR Project Tango Hackathon & VR Expo. The Tango team had handed out Tango devices and Durovis Dive 7 VR masks to develop with. I was feeling pretty dead from Google I/O and the surrounding parties, but Etienne Caron and Dario Laverde were busy doing their best to build something on the Tango using the Cardboard SDK. They both fired up demos, and immediately found an issue: the screen was not configured correctly for the device, or for the viewer.

Google Cardboard

Giant Cardboard hanging at Google I/O 2015

It turns out that Cardboard expects a configuration that is typically provided by the manufacturer of the viewer, but the Dive doesn’t come with one. Etienne noticed that there was nothing documented in the Cardboard API that related to screen or viewer configs. He and I started to poke at the debugger, trying to figure out if we could find the place that those values get set. We traced it back to a file on the sdcard, but when we looked at it, we realized that it was in serialized protobuf format. My initial thought was to copy the files that read the protobuf and decode the file, but we realized that there was an easier way, the Cardboard View Profile Generator.

Etienne and I generated config files, and Dario helped us test. Dario was also working on some other Cardboard related issues. Here’s what we did:

  1. Visit the View Profile Generator from Chrome on a computer that’s not the tablet you’re trying to configure.
  2. On the tablet, in Chrome, visit the shortlink shown on the main page.
  3. On the tablet, if your instance doesn’t go full screen all the way (if you can see the nav bars), install GMD Full Screen Immersive Mode from the Play Store.
  4. Install the phone/tablet in the viewer.
  5. Back on the computer, hit ‘Continue’.
  6. Using the tool, you can dynamically configure the view settings. The tablet screen is synced up with the tool, so the changes should appear on the tablet in real time. (It uses Firebase!) dynamically resizing screen
  7. Follow the instructions on each field, and watch the changes on the screen. You can tweak them until you have something that looks good to you. Here’s the config that I generated.
  8. Next, you should be able to generate your profile. final screen config
  9. In your Cardboard app, you should be able to scan the QR code in the setup step of Cardboard, or go to Settings.
  10. If you’re on the Tango, you will need to go through one extra step: the camera that attempts to scan the QR code doesn’t work right, so you will need to use a second device.
  11. After scanning from the second device, plug it into a computer with adb installed, and run the following command: adb pull /sdcard/Cardboard/current_device_params ./
  12. Then, plug your tablet in, and push the config that you generated: adb push current_device_params /sdcard/Cardboard/current_device_params
  13. Fire up the Cardboard app on your tablet and check it out! Cardboard demo
  14. If it needs tweaking, just repeat steps 6-12.

Here’s the final config file that I generated.

19 May 2014, 15:18

Witness, a simple Android and Java Event Emitter

Source. I found this in an image search for ‘witness’. Had to use it. =)

I’ve been working on a project for Google Glass and Android that requires asynchronous messages to be passed around between threads. In the past, I’ve used Square’s Otto for this, but I wasn’t thrilled with the performance. Additionally, Otto makes some assumptions about which thread to run on, and how many threads to run with, that I wasn’t crazy about. Before continuing, I should point out that Otto is configurable, and some of these basic assumptions can be dealt with in the configs. Regardless, I felt like writing my own.

Code

Witness is a really simple event emitter for Java. The idea here is to build a really simple structure that can be used to implement observers easily. It’s sort of like JavaScript’s Event Emitter, and the code is short enough to read in its entirety.

I’ll start with Reporter because it’s so simple. Essentially, this is just an interface that you implement if you want your class to be able to receive events.

The Witness class maps class types to Reporter instances. This means that for a given data type, Witness will fan out the event to all registered Reporters. It uses an ExecutorService with a pool size of 10 threads to accomplish this as quickly as possible off of the main thread:
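Since the embedded source isn’t reproduced here, the following is my reading of that description as a sketch, not the actual Witness code. Everything beyond the register/remove/notify/notifyEvent names shown in the usage section below is assumed, and I’ve nested Reporter inside Witness only to keep the example self-contained:

```java
import java.util.*;
import java.util.concurrent.*;

public class Witness {
    // one listener list per event class
    private static final Map<Class<?>, List<Reporter>> reporters = new ConcurrentHashMap<>();
    // fixed pool of 10 threads, as described in the post
    private static final ExecutorService pool = Executors.newFixedThreadPool(10);

    // in the real library this is a standalone interface
    public interface Reporter {
        void notifyEvent(Object o);
    }

    public static void register(Class<?> type, Reporter r) {
        reporters.computeIfAbsent(type, k -> new CopyOnWriteArrayList<>()).add(r);
    }

    public static void remove(Class<?> type, Reporter r) {
        List<Reporter> list = reporters.get(type);
        if (list != null) list.remove(r);
    }

    public static void notify(Object event) {
        List<Reporter> list = reporters.get(event.getClass());
        if (list == null) return;
        // fan out to every registered Reporter, off the caller's thread
        for (Reporter r : list) {
            pool.submit(() -> r.notifyEvent(event));
        }
    }
}
```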

Usage

To receive events for a specific datatype, the receiving class needs to implement the Reporter interface, then register for whatever data types it cares about in the following way:

Witness.register(EventTypeOne.class, this);
Witness.register(EventTypeTwo.class, this);

When you’re done listening, unregister with the following:

Witness.remove(EventTypeOne.class, this);
Witness.remove(EventTypeTwo.class, this);

To publish events to listeners:

Witness.notify(event);

Handling events in the Reporter:

@Override
public void notifyEvent(Object o) {
    if (o instanceof SomeObject) {
        objectHandlingMethod(((SomeObject) o));
    }
}

On Android, it is a good idea to use a Handler to force the event handling to run on the thread you expect. E.g., if you need code to run on the main thread, in an Activity or Service:

public class MyActivity extends Activity implements Reporter {
    private Handler handler = new Handler();

    // ...

    @Override
    public void notifyEvent(final Object o) {
        handler.post(new Runnable() {
            @Override
            public void run() {

                if (o instanceof SomeObject) {
                    objectHandlingMethod(((SomeObject) o));
                }

            }
        });
    }
}

Notes

Events are published on a background thread, using an ExecutorService, backed by a BlockingQueue. This has a few important implications:

  • Thread safety

    • You need to be careful about making sure that whatever you’re using this for is thread safe
  • UI/Main thread

    • All events will be posted to background threads
  • Out of order

    • Events are handled in parallel, so it is possible for them to come in out of order

Please find this project on GitHub. If I get enough questions about it, I might be willing to take the time to package it and submit it to maven central.

Update

+Dietrich Schulten had the following comment:

It has the effect that events can be delivered on arbitrary threads. The event handler must be threadsafe, must synchronize access to shared or stateful resources, and be careful not to create deadlocks. Otoh if you use a single event queue, you avoid this kind of complexity in the first place. I’d opt for the latter, threadsafe programming is something you want to avoid if you can.

I should note that my usage is all on Android, where I’m explicitly specifying the thread that the events will run on using Handlers. I haven’t used this in a non-Android environment, and I’m not entirely sure how to implement the equivalent behavior for regular Java.

20 Mar 2014, 18:11

AWS S3 utils for node.js

I just published my first package on npm! It’s a helper for S3 that I wrote. It does three things: it lists your buckets, generates a URL pair with a key, and deletes media upon request.

The URL pair is probably the most important, because it allows you to have clients that put things on S3 without those clients having any credentials. They can simply make a request to your server for a URL pair, and then use the signed PUT URL to put the thing in your bucket; the pair also includes a public GET URL, so that anyone can go get it out.

var s3Util = require('s3-utils');
var s3 = new s3Util('your_bucket');
var urlPair = s3.generateUrlPair(success);
/**
    urlPair: {
        s3_key: "key",
        s3_put_url: "some_long_private_url",
        s3_get_url: "some_shorter_public_url"
    }
*/

Deleting media from your S3 bucket:

s3.deleteMedia(key, success);

Or just list your buckets:

s3.listBuckets();

I had previously written about using the AWS SDK for node.js here. It includes some information about making sure that you have the correct permissions set up on your S3 bucket, as well as how to PUT a file on S3 using the signed URL.

19 Feb 2014, 20:57

Crashlytics is awesome!

I recently started playing with Crashlytics for an app that I’m working on. I needed better crash reporting than what Google Play was giving me. I had used HockeyApp for work, and I really like that service. My initial thought was to go with HA, but as I started looking around, I noticed that Crashlytics offers a free enterprise level service. No down-side to trying it!

I gave it a shot, they do a nice job with Intellij and Gradle integration for their Android SDK, so setting up my project was quite easy. I tested it out, and again, it was very simple and worked well. The reporting that I got back was quite thorough, more than what anybody else that I’m aware of gives you. It reports not just the stack trace, but the state of the system at the time of your crash. If you’ve ever run into an Android bug, this sort of thing can really come in handy.

But, then I ran into an issue. I had something that was acting funny, so I pinged Crashlytics support. I was pretty sure that it was an Android bug, but hadn’t had time to really nail down what the exact problem was. After a short back and forth, I let them know that I’d try to dig in a little more when I had time, but that I was busy and it might not be until the next week. The following day, I received a long, detailed response, which included Android source code, explaining exactly the condition that I was seeing. I was floored. They had two engineers working on this, figuring out exactly what the problem was, and what to do about it. I don’t think that I could imagine a better customer service experience!

As a note, I have no affiliation with Crashlytics outside of trying out their product for a few days. Their CS rep did not ask me to write this. I was so impressed that I wanted other people to know about it.

01 Feb 2014, 19:45

Demos using AWS with Node.JS and an AngularJS frontend

I recently decided to build some reusable code for a bunch of projects that I’ve got queued up. I wanted some backend components that leveraged a few of the highly scalable Amazon AWS services. This ended up taking me a month to finish, which is way longer than the week that I had intended to spend on it. (It was a month of nights and weekends, not as much free time as I’d hoped for in January.) Oh, and before I forget, here’s the GitHub repo.

This project’s goal is to build small demo utilities that should be a reasonable approximation of what we might see in an application that uses the aws-sdk node.js module. AngularJS will serve as a front-end, with no direct access to the AWS libraries, and will use the node server to handle all requests.

Here’s a temporary Elastic Beanstalk instance running to demonstrate this. It will be taken down in a few weeks, so don’t get too attached to it. I might migrate it to my other server, so while the URL might change, the service will hopefully remain alive.

ToC

  1. Data Set
  2. DynamoDB
  3. RDS
  4. S3
  5. SES
  6. SNS
  7. AngularJS
  8. Elastic Beanstalk Deployment

Data Set

Both DynamoDB and RDS are going to use the same basic data set. It will be a very simple set, with two tables that are JOINable. Here’s the schema:

Users: {
    id: 0,
    name: "Steve",
    email: "steve@example.com"
}

Media: {
    id: 0,
    uid: 0,
    url: "http://example.com/image.jpg",
    type: "image/jpg"
}

Almost the same schema will be used for both Dynamo and RDS. RDS uses an mkey field in the media table to keep track of the key, while Dynamo uses a string id, which should be the key of the media object in S3.

DynamoDB

Using the above schema, we set up a couple Dynamo tables. These can be treated in a similar way to how you would treat any NoSQL database, except that Dynamo’s API is a bit onerous. I’m not sure why they insisted on not using standard JSON, but a converter can be easily written to go back and forth between Dynamo’s JSON format, and the normal JSON that you’ll want to work with. Take a look at how the converter works. Also, check out some other dynamo code here.

There are just a couple of things going on in the DynamoDB demo. We have a method for getting all the users, adding or updating a user (if the user has the same id), and deleting a user. The getAll method does a scan on the Dynamo table, but only returns 100 results. It’s a good idea to limit your results, and then load more as the user requests.

The addUpdateUser method takes in a user object, generates an id based off of the hash of the email, then does a putItem to Dynamo, which will either create a new entry, or update a current one. Finally, deleteUser runs the Dynamo API method deleteItem.

The following are a few methods that you’ll find in the node.js code. Essentially, the basics are there, and we spit the results out over a socket.io socket. The socket will be used throughout most of the examples.

Initialize

AWS.config.region = "us-east-1";
AWS.config.apiVersions = {
    dynamodb: '2012-08-10',
};
var dynamodb = new AWS.DynamoDB();

Get all the users

var getAll = function (socket) {
    dynamodb.scan({
        "TableName": c.DYN_USERS_TABLE,
        "Limit": 100
    }, function (err, data) {
    if (err) {
        socket.emit(c.DYN_GET_USERS, "error");
    } else {
        var finalData = converter.ArrayConverter(data.Items);
        socket.emit(c.DYN_GET_USERS, finalData);
    }
    });
};

Insert or update a user

var addUpdateUser = function (user, socket) {
    user.id = genIdFromEmail(user.email);
    var userObj = converter.ConvertFromJson(user);
    dynamodb.putItem({
        "TableName": c.DYN_USERS_TABLE,
        "Item": userObj
    }, function (err, data) {
        if (err) {
            socket.emit(c.DYN_UPDATE_USER, "error");
        } else {
           socket.emit(c.DYN_UPDATE_USER, data);
        }
    });
};

Delete a user

var deleteUser = function (userId, socket) {
    var userObj = converter.ConvertFromJson({id: userId});
    dynamodb.deleteItem({
        "TableName": c.DYN_USERS_TABLE,
        "Key": userObj
    }, function (err, data) {
        if (err) {
            socket.emit(c.DYN_DELETE_USER, "error");
        } else {
            socket.emit(c.DYN_DELETE_USER, data);
        }
    });
};

RDS

This one’s pretty simple, RDS gives you an olde fashioned SQL database server. It’s so common that I had to add the ‘e’ to the end of old, to make sure you understand just how common this is. Pick your favorite database server, fire it up, then use whichever node module works best for you. There’s a bit of setup and configuration, which I’ll dive into in the blog post. Here’s the code.

I’m not sure that there’s even much to talk about with this one. This example uses the mysql npm module, and is really, really straightforward. We need to start off by connecting to our DB, but that’s about it. The only thing you’ll need to figure out is the deployment of RDS, and making sure that you’re able to connect to it, but that’s a very standard topic, that I’m not going to cover here since there’s nothing specific to node.js or AngularJS.

As in the DynamoDB demo, the following are a few methods that you’ll find in the node.js code. The basics are there, and we spit the results out over a socket.io socket.

Initialize

AWS.config.region = "us-east-1";
AWS.config.apiVersions = {
    rds: '2013-09-09',
};
var rds_conf = {
    host: mysqlHost,
    database: "aws_node_demo",
    user: mysqlUserName,
    password: mysqlPassword
};
var mysql = require('mysql');
var connection = mysql.createConnection(rds_conf);
var rds = new AWS.RDS();
connection.connect(function(err){
    if (err)
        console.error("couldn't connect",err);
    else
        console.log("mysql connected");
});

Get all the users

var getAll = function(socket){
    var query = this.connection.query('select * from users;',
      function(err,result){
        if (err){
            socket.emit(c.RDS_GET_USERS, c.ERROR);
        } else {
            socket.emit(c.RDS_GET_USERS, result);
        }
    });
};

Insert or update a user

var addUpdateUser = function(user, socket){
    var query = this.connection.query('INSERT INTO users SET ?',
      user, function(err, result) {
        if (err) {
            socket.emit(c.RDS_UPDATE_USER, c.ERROR);
        } else {
            socket.emit(c.RDS_UPDATE_USER, result);
        }
    });
};

Delete a user

var deleteUser = function(userId, socket){
    var query = this.connection.query('DELETE FROM users WHERE id = ?',
      userId, function(err, result) {
        if (err) {
            socket.emit(c.RDS_DELETE_USER, c.ERROR);
        } else {
            socket.emit(c.RDS_DELETE_USER, result);
        }
    });
};

S3

This one was a little tricky, but basically, we’re just generating a unique random key and using that to keep track of the object. We then generate both GET and PUT URLs on the node.js server, so that the client does not have access to our AWS auth tokens. The client only gets passed the URLs it needs. Check out the code!

The s3_utils.js file is very simple. listBuckets is a sanity check: it lists your current S3 buckets, verifying that you’re up and running. Next up, generateUrlPair is simple but important. We want the client to be able to push objects to S3 without ever holding our credentials, so we generate signed URLs on the server and pass those back for the client to use. The details matter here: for example, the client must use the exact same content type when it PUTs the object as was used to sign the URL. We’re also making each object world-readable, so instead of creating a signed GET URL, we just calculate the publicly accessible GET URL and return that. The key for the object is random, so we don’t need to know anything about the object we’re uploading ahead of time. (For simplicity, this demo assumes that only images will be uploaded.) Finally, deleteMedia is simple: we just use the S3 API to delete the object.

There are actually two versions of the media demo: one backed by DynamoDB and one backed by RDS. For Dynamo, we use the Dynamo media.js file; similarly, for the RDS version, we use the RDS media.js.

Looking first at the Dynamo version: getAll is not very useful, since we don’t really want to see everyone’s media; I don’t think it even gets called. The methods here are very similar to those in user.js, and we leverage the scan, putItem, and deleteItem APIs.
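To make the shape of those calls concrete, here’s a hedged sketch of how the putItem and deleteItem request parameters might be built for a media table. The table name and attribute names are illustrative assumptions, not the repo’s actual schema.

```javascript
// Hypothetical param builders for a Dynamo media table. Table name and
// attribute names are assumptions for illustration, not the repo's schema.
var buildPutItemParams = function (tableName, media) {
    return {
        TableName: tableName,
        Item: {
            id:  { S: media.id },   // partition key
            url: { S: media.url }   // where the object lives in S3
        }
    };
};

var buildDeleteItemParams = function (tableName, mediaId) {
    return {
        TableName: tableName,
        Key: { id: { S: mediaId } }
    };
};
```

These objects would then be passed to dynamodb.putItem and dynamodb.deleteItem, respectively.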

The same is true of the RDS version with respect to our previous RDS example. We’re just making standard SQL calls, just like we did before.

You’ll need to modify the CORS settings on your S3 bucket for this to work. Try the following configuration:

    <?xml version="1.0" encoding="UTF-8"?>
    <CORSConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
        <CORSRule>
            <AllowedOrigin>*</AllowedOrigin>
            <AllowedOrigin>http://localhost:3000</AllowedOrigin>
            <AllowedMethod>GET</AllowedMethod>
            <AllowedMethod>PUT</AllowedMethod>
            <AllowedMethod>POST</AllowedMethod>
            <AllowedMethod>DELETE</AllowedMethod>
            <MaxAgeSeconds>3000</MaxAgeSeconds>
            <AllowedHeader>Content-*</AllowedHeader>
            <AllowedHeader>Authorization</AllowedHeader>
            <AllowedHeader>*</AllowedHeader>
        </CORSRule>
    </CORSConfiguration>

As before, here are the relevant methods from the node.js code; results are emitted over the same socket.io socket.

Initialize

AWS.config.region = "us-east-1";
AWS.config.apiVersions = {
    s3: '2006-03-01',
};
var s3 = new AWS.S3();

Generate signed URL pair

The GET URL is public because that’s what we want here. We could just as easily have generated a signed GET URL and kept the objects in the bucket private.

var generateUrlPair = function (socket) {
    var urlPair = {};
    var key = genRandomKeyString();
    urlPair[c.S3_KEY] = key;
    var putParams = {Bucket: c.S3_BUCKET, Key: key, ACL: "public-read", ContentType: "application/octet-stream" };
    s3.getSignedUrl('putObject', putParams, function (err, url) {
        if (!!err) {
            socket.emit(c.S3_GET_URLPAIR, c.ERROR);
            return;
        }
        urlPair[c.S3_PUT_URL] = url;
        // `qs` is require('querystring'); use the bucket constant so the URL stays in sync
        urlPair[c.S3_GET_URL] = "https://" + c.S3_BUCKET + ".s3.amazonaws.com/" + qs.escape(key);
        socket.emit(c.S3_GET_URLPAIR, urlPair);
    });
};

Delete Object from bucket

var deleteMedia = function(key, socket) {
    var params = {Bucket: c.S3_BUCKET, Key: key};
    s3.deleteObject(params, function(err, data){
        if (!!err) {
            socket.emit(c.S3_DELETE, c.ERROR);
            return;
        }
        socket.emit(c.S3_DELETE, data);
    });
};

Client-side send file

var sendFile = function(file, url, getUrl) {
    var xhr = new XMLHttpRequest();
    xhr.addEventListener('progress', function(e) {
        var done = e.position || e.loaded, total = e.totalSize || e.total;
        var prcnt = Math.floor(done / total * 1000) / 10;
        if (prcnt % 5 === 0)
            console.log('xhr progress: ' + prcnt + '%');
    }, false);
    if (xhr.upload) {
        xhr.upload.onprogress = function(e) {
            var done = e.position || e.loaded, total = e.totalSize || e.total;
            var prcnt = Math.floor(done / total * 1000) / 10;
            if (prcnt % 5 === 0)
                console.log('xhr.upload progress: ' + done + ' / ' + total + ' = ' + prcnt + '%');
        };
    }
    xhr.onreadystatechange = function(e) {
        if (this.readyState === 4) {
            console.log(['xhr upload complete', e]);
            // emit the 'file uploaded' event
            $rootScope.$broadcast(Constants.S3_FILE_DONE, getUrl);
        }
    };
    xhr.open('PUT', url, true);
    // must match the content type the PUT URL was signed with
    xhr.setRequestHeader("Content-Type", "application/octet-stream");
    xhr.send(file);
};

SES

SES uses another DynamoDB table to track emails that have been sent. We want users to be able to unsubscribe, and we don’t want visitors to be able to spam anyone with repeated messages. Here’s the schema for the Dynamo table:

 Emails: {
     email: "steve@example.com",
     count: 1
 }

That’s it! Before doing anything, we check whether the email is in that table and what its count is, then update the record after the email has been sent. Take a look at how it works.
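A hedged sketch of that check; the limit value, the “blocked” sentinel, and the function name are assumptions for illustration:

```javascript
// Hypothetical sketch of the send-limit check. The limit and sentinel
// values are assumptions based on the Emails table schema above.
var EMAIL_LIMIT = 3;        // low cap so the demo can run on auto-pilot
var UNSUBSCRIBED = 1000;    // count used to permanently block an address

// `record` is the row from the Emails table, or undefined if never emailed
var canSendEmail = function (record) {
    if (!record) return true;                        // never emailed before
    if (record.count >= UNSUBSCRIBED) return false;  // bounced or complained
    return record.count < EMAIL_LIMIT;
};
```

After a successful send, the record’s count would be incremented (or created with count 1).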

Sending email with SES is fairly simple, however getting it to production requires jumping through a couple extra hoops. Basically, you’ll need to use SNS to keep track of bounces and complaints.

What we’re doing here: for a given user, grab all their media, package it up in some auto-generated HTML, then use the sendEmail API call to actually send the message. We also keep track of how many times we’ve emailed each user. Since this is just a demo that I’m hoping can live on auto-pilot for a bit, I set a very low limit on the number of emails that may be sent to a particular address. Emails also come with a helpful 'unsubscribe’ link.

As before, here are the relevant methods from the node.js code; results are emitted over the same socket.io socket.

Initialize

AWS.config.region = "us-east-1";
AWS.config.apiVersions = {
    sns: '2010-03-31',
    ses: '2010-12-01'
};
var ses = new AWS.SES();

Send Email

var sendEmail = function (user, userMedia, socket) {
    var params = {
        Source: "me@mydomain.com",
        Destination: {
            ToAddresses: [user.email]
        },
        Message: {
            Subject: {
                Data: user.name + "'s media"
            },
            Body: {
                Text: {
                    Data: "please enable HTML to view this message"
                },
                Html: {
                    Data: getHtmlBodyFor(user, userMedia)
                }
            }
        }
    };
    ses.sendEmail(params, function (err, data) {
        if (err) {
            socket.emit(c.SES_SEND_EMAIL, c.ERROR);
        } else {
            socket.emit(c.SES_SEND_EMAIL, c.SUCCESS);
        }
    });
};

SNS

We’re also listening for SNS messages to tell us if there’s an email that’s bounced or has a complaint. In the case that we get something, we immediately add an entry to the Emails table with a count of 1000. We will never attempt to send to that email address again.

I have my SES configured to tell SNS to POST requests to my service, so I can parse the JSON payload and grab the data I need that way. Some of this is done in app.js, and the rest is handled in bounces.js. In bounces, we first need to confirm to SNS that we’re receiving its requests and handling them properly; that’s what confirmSubscription is for. Then, in handleBounce, we deal with any complaints and bounces by unsubscribing the email.
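As a sketch, the dispatch on incoming SNS messages might look like the following. The function name and return values are hypothetical, but the Type field values are part of the SNS HTTP message format:

```javascript
// Minimal sketch of routing an incoming SNS message by its Type field.
// SNS sends a 'SubscriptionConfirmation' first, then 'Notification' payloads.
var classifySnsMessage = function (body) {
    var msg = JSON.parse(body);
    if (msg.Type === 'SubscriptionConfirmation') {
        // visit msg.SubscribeURL to confirm the subscription
        return 'confirm';
    }
    if (msg.Type === 'Notification') {
        // for SES, msg.Message is itself JSON whose notificationType
        // is 'Bounce' or 'Complaint'
        return 'bounce-or-complaint';
    }
    return 'ignore';
};
```

In the real handler, the 'confirm' branch would fetch the SubscribeURL and the notification branch would write the 1000-count entry to the Emails table.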

AngularJS

The AngularJS code for this is pretty straightforward. Essentially, we just have a service for our socket.io connection, and to keep track of data coming in from Dynamo and RDS. There are controllers for each of the different views that we have, and they also coordinate with the services. We are also leveraging Angular’s built-in events system, to inform various pieces about when things get updated.

There’s nothing special about the AngularJS code here: we use socket.io to shuffle data to and from the server, then dump it into the UI with the normal bindings. I do use Angular events, which I will discuss in a separate post.

Elastic Beanstalk Deployment

Here’s the AWS doc on setting up deployment with git integration straight from your project; it’s super simple. What’s not so straightforward is making sure the ports are set up correctly. Running your node server directly on port 80 would be easiest, but I don’t think the instance you get from Amazon allows it. Instead, configure your load balancer to forward port 80 to whatever port you’re running on, then open that port in the EC2 Security Group that the Beanstalk environment runs in.
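For the port itself, the usual node.js pattern is to read it from the environment, since Beanstalk provides a PORT variable to the container. A tiny sketch (the helper name and the local default are assumptions):

```javascript
// Read the listening port from the environment; fall back to a local
// default when running on your own box. Elastic Beanstalk sets PORT.
var getPort = function (env) {
    return parseInt(env.PORT, 10) || 3000;
};

// typical usage: app.listen(getPort(process.env));
```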

Once again, do use the git command-line deployment tools: they let you deploy in one line after a git commit, using git aws.push.

A couple of other notes about the deployment. First, make sure the node.js version is set correctly; AWS Elastic Beanstalk currently supports up to v0.10.21 but runs an earlier version by default. You will also need to add several environment variables from the console. I use the following:

  • AWS_ACCESS_KEY_ID
  • AWS_SECRET_ACCESS_KEY
  • AWS_RDS_HOST
  • AWS_RDS_MYSQL_USERNAME
  • AWS_RDS_MYSQL_PASSWORD

Doing this means I never have to commit sensitive data. To get there, log into your AWS console, go to Elastic Beanstalk, and select your environment. Navigate to ‘Configuration’, then ‘Software Configuration’. From here you can set the node.js version and add environment variables; add the names above along with their values. If you’re deploying to your own box, you’ll need to at least export those variables:

export AWS_ACCESS_KEY_ID='AKID'
export AWS_SECRET_ACCESS_KEY='SECRET'
export AWS_RDS_HOST='hostname'
export AWS_RDS_MYSQL_USERNAME='username'
export AWS_RDS_MYSQL_PASSWORD='pass'
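A small sketch of how the server side might pull these in and fail fast when one is missing; the function name is hypothetical:

```javascript
// Sketch: read the deployment configuration from environment variables so
// credentials never land in the repo. Throws early if something is missing.
var loadConfig = function (env) {
    var required = [
        'AWS_ACCESS_KEY_ID',
        'AWS_SECRET_ACCESS_KEY',
        'AWS_RDS_HOST',
        'AWS_RDS_MYSQL_USERNAME',
        'AWS_RDS_MYSQL_PASSWORD'
    ];
    var config = {};
    required.forEach(function (name) {
        if (!env[name]) throw new Error('missing env var: ' + name);
        config[name] = env[name];
    });
    return config;
};

// typical usage: var config = loadConfig(process.env);
```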

Again, the GitHub repo.