Thursday, May 29, 2014

The JavaScript technology stack

Context

I've been developing with the JavaScript language since 1999. Back then, I was involved in the development of an administrative console for clusters of calendar servers. Coming from Windows development with the MFC environment, I quickly implemented the MVC pattern in the browser with a set of frames: one frame taking all the visible space to render the interface, another frame kept hidden (with height="0") to exchange data with my service (implemented in C++ as a FastCGI module), and the frameset to save the state.
Later, while working for IBM Rational, I had the chance to discover the Dojo Toolkit in its early days (v0.4). After that experience, even though I extended my expertise to mobile (mainly native Android and cross-platform with Unity), I continued to contribute to Web projects. However, I never had the chance to start a project from scratch, always having to deal with legacy code and constrained schedules... until I joined Ubisoft last February!

The presentation layer and user's input handling

Most of "Web developers" are more hackers than developers: they use PHP on the server to assemble HTML components, they use JavaScript to control the application behaviour in the browser, they control the interface from any place, and they deploy often without the safety net of automated tests... Their flexibility is a real asset for editorial teams but a nightmare for the quality assurance teams!
Here are my recommendations for the development of Web applications:
  • Use the server as little as possible when preparing the views: just detect the browser user agent and the user's preferred language in order to deliver the adequate HTML template;
  • Define all the presentation elements with HTML, CSS, and images. Don't use JavaScript to create HTML fragments on the fly, unless it only clones parts of the HTML template;
  • Use JavaScript to inject data and behaviour: switch from one section to another, submit a request to the server, display data and notifications, etc. (see the sketch after this list).
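To make the last two points concrete, here is a minimal sketch of the approach. The element ids, CSS class names, and the /api/notifications endpoint are hypothetical: the HTML template ships with a hidden list item, and JavaScript only clones it, injects the data, and wires the behaviour.

// Assumes the HTML template contains:
//   <button id="refresh-button">Refresh</button>
//   <ul id="notification-list"></ul>
//   <li id="notification-template" class="hidden"><span class="message"></span></li>
var template = document.getElementById('notification-template');

// Clone a fragment of the template instead of building HTML in JavaScript
function displayNotification(text) {
    var item = template.cloneNode(true);
    item.removeAttribute('id');
    item.classList.remove('hidden');
    item.querySelector('.message').textContent = text;
    document.getElementById('notification-list').appendChild(item);
}

// Behaviour only: submit a request to the server and display the result
document.getElementById('refresh-button').addEventListener('click', function () {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/notifications');
    xhr.onload = function () {
        JSON.parse(xhr.responseText).forEach(function (notification) {
            displayNotification(notification.message);
        });
    };
    xhr.send();
});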
The main advantage of keeping the presentation coded in HTML, CSS, and images is that it can be delegated to designers. Developers can take the materials designers have produced with Dreamweaver, for example, and insert the identifiers required to connect the JavaScript handlers. Or they provide the initial skeleton instrumented with these identifiers, and designers iterate on it freely. There are many tools to optimize the presentation layer:
  • HTML minifiers, which can go as far as dropping optional tags;
  • CSS minifiers and tools to detect unused rules, like uncss;
  • Image optimizers and sprite generators.
Designers can then focus on defining the best interface, regardless of optimization. The only limitation: don't let them introduce or depend on any JavaScript library!
As for choosing a JavaScript library, I recommend considering the following points:
  • A mechanism to define modules and dependencies. The best specification is AMD (for Asynchronous Module Definition), implemented by RequireJS, Dojo, and jQuery, among others. Note that AngularJS has its own dependency injection mechanism.
  • Until ES6 (ECMAScript 6) makes it standard, use a library that provides an implementation of Promise, especially useful for its all() method;
  • A test framework that reports on code coverage. IMHO, without code coverage statistics, it's difficult to judge the quality of the tests, and thus our ability to detect regressions early...
  • Even if you cannot rely on a dispatcher that detects user agents and preferred languages, use the functionality of the has.js library or an equivalent to expose methods for test purposes (see the sketch after this list). Coupled with a smart build system, the exposed methods will be hidden in production.
  • Just minifying the JavaScript code is not sufficient. It's important to have dead code removed (especially code exposed just for test purposes). The Google Closure Compiler should be part of your tool set.
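As an illustration of the has.js approach, here is a minimal sketch of an AMD module, assuming the Dojo loader and a hypothetical 'testing' feature flag that only the test configuration defines:

// counter.js -- hypothetical AMD module exposing a private helper for tests only
define(['dojo/has'], function (has) {
    // 'Private' helper, normally unreachable from outside the module
    function normalize(value) {
        return Math.max(0, Math.floor(value));
    }

    var counter = {
        total: 0,
        add: function (value) {
            this.total += normalize(value);
            return this.total;
        }
    };

    if (has('testing')) {
        // Exposed for unit tests only; with a smart build system leaving the
        // flag undefined, this branch can be dropped from the built code
        counter._normalize = normalize;
    }

    return counter;
});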

The data access layer

For the last 10 years, my back-end services have always been implemented in Java, sometimes with my own REST-compliant library, sometimes with other libraries like Spring, RestEasy, Guice, etc. Java is an easy language to develop with and, with all the available tooling, ramping up new developers is not difficult. Not to mention services like Google App Engine, which host low-profile applications for free.
On the other hand, Java is really verbose and not designed for asynchronicity. It also lacks support for closures. And, as with many programming languages, nothing prevents you from using patterns to clarify the code's behaviour.
At Ubisoft, I've been given the opportunity to host the back-end services on Node.js. The major factor in favor of Node.js is its WebSocket support (we have not yet decided between ws and engine.io). The second factor is related to the nature of the application: 99% of the transactions between the clients and the server are short-lived. Operations that require long computations are handled by services based on Redis and Hadoop. And finally, Node.js scales well.
When I joined the project, the team had a small working environment built in typical Node.js style: no clear definition of dependencies, a lot of nested callbacks to handle asynchronous behaviours, all presentation built with Jade, no commonly adopted patterns to organize the code logic, no unit tests (obviously, as nested callbacks are nearly impossible to test!). I then rebooted the project with a better approach to the separation of concerns:
  • AMD as the format to define modules, with the Dojo loader to bootstrap the application;
  • Express to handle the RESTful entry points;
  • A layered structure to process the requests:
    • Resource: one class per entity, gathering data from the input streams and forwarding them to the service layer;
    • Service: one class per entity, possibly communicating with other services when data should be aggregated for many entities;
    • DAO: one class per entity, controlling data from the file system, from MongoDB, from MySQL or from another service over HTTP.
  • A set of classes modelling each entity; I imposed this layer to serve two needs: 1) enforce some restrictions on entities (attributes can be declared mandatory or read-only, or required to match a regular expression) and 2) support non-destructive partial updates (see the sketch after this list).
  • Unit tests to cover 100% of the implemented logic.
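Here is a minimal sketch of what such a model layer could look like; the Model name, the descriptor format, and the rules are illustrative assumptions, not the actual project code:

// model.js -- hypothetical sketch of an entity class with attribute restrictions
// and non destructive partial updates
define([], function () {
    function Model(descriptor, data) {
        this._descriptor = descriptor;
        this._data = {};
        this.merge(data || {});
        Object.keys(descriptor).forEach(function (name) {
            if (descriptor[name].mandatory && this._data[name] === undefined) {
                throw new Error('Field ' + name + ' is mandatory');
            }
        }, this);
    }

    // Non destructive partial update: only the given fields are touched,
    // read-only overwrites and pattern violations are rejected
    Model.prototype.merge = function (partial) {
        Object.keys(partial).forEach(function (name) {
            var rule = this._descriptor[name] || {};
            if (rule.readOnly && this._data[name] !== undefined) {
                throw new Error('Field ' + name + ' is read-only');
            }
            if (rule.pattern && !rule.pattern.test(partial[name])) {
                throw new Error('Field ' + name + ' does not match ' + rule.pattern);
            }
            this._data[name] = partial[name];
        }, this);
        return this;
    };

    return Model;
});

A Player entity, for example, could then be declared with a descriptor such as { id: { readOnly: true }, pseudo: { mandatory: true } }.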
At this stage, the most complex classes are the base classes for the MongoDB and MySQL DAOs (I judge their complexity by their tests, which require 3 times more code). But with the help of Promises, the code is elegant and compact ;)
/**
 * Select the identified resources, or all resources if no filter nor range is specified
 *
 * @param {Object} filters bag of key/value pairs used to filter the resources to be returned
 * @param {Range} range limit on the number of results returned
 * @param {Object} order bag of key/value pairs used to order the results
 * @return a Promise with the list of resources as the parameter of the onSuccess method
 *
 * @throws error with code 204-NO CONTENT if the selection is empty, as a parameter of the onFailure method of the promise
 */
select: function (filters, range, order) {
    return all([this._select(filters, range, order), this._count(filters)]).then(function (responses) {
        range.total = responses[1];
        responses[0].range = range;
        return responses[0];
    });
},

// Helper forwarding the SELECT request to the MySql connection
_select: function (filters, range, order) {
    var query = this.getSelectQuery(filters, range, order),
        ModelClass = this.ModelClass;

    return this._getConnection().then(function (connection) {
        var dfd = new Deferred();

        connection.query(query, function (err, rows) {
            connection.release();
            if (err) {
                _forwardError(dfd, 500, 'Query to DB failed:' + query, err);
                return;
            }

            var idx, limit = rows.length,
                entities = [];
            if (limit === 0) {
                _forwardError(dfd, 204, 'No entity matches the given criteria', 'Query with no result: ' + query);
                return;
            }
            for (idx = 0; idx < limit; idx += 1) {
                entities.push(new ModelClass(rows[idx]));
            }
            dfd.resolve(entities);
        });

        return dfd.promise;
    });
},

// Helper forwarding the COUNT request to the MySql connection
_count: function (filters) {
    var query = this.getCountQuery(filters);

    return this._getConnection().then(function (connection) {
        var dfd = new Deferred();

        connection.query(query, function (err, rows) {
            connection.release();
            if (err) {
                _forwardError(dfd, 500, 'Query to DB failed:' + query, err);
                return;
            }

            dfd.resolve(rows[0].total);
        });

        return dfd.promise;
    });
},
Code sample: the select() method of the MySqlDao, and its two direct helpers.
A few comments on the code illustrated above:
  • Each helper wraps the callback-based asynchronicity of the MySQL driver into a Promise (via the Deferred class);
  • The main entry point relies on the Promise all() method to convey the request result only when the responses from the two helpers are ready;
  • The method _getConnection() returns a Promise which is resolved with a connection from the MySQL pool (i.e. from mysql.createPool().getConnection());
  • The method _forwardError() is a simple helper logging the error and rejecting the Promise; at the highest level, Express uses the error code as the status of the HTTP response (a sketch of both helpers follows this list);
  • The method _select() converts each result into an instance of the specified model, transparently providing support for field validation and partial updates;
  • Thanks to the given ModelClass, this MySqlDao class acts like Java or C# generics.
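Neither _getConnection() nor _forwardError() is shown in the sample above; here is a minimal sketch of what they could look like, assuming the node-mysql pool API for this._pool and Dojo's Deferred (the logging call is simplified):

// Module-private helper: log the error and reject the Deferred with a code
// that Express can reuse as the HTTP status at the highest level
function _forwardError(dfd, code, message, error) {
    console.error(message, error);
    dfd.reject({ code: code, message: message });
}

// Helper wrapping the callback of the pool's getConnection() into a Promise
_getConnection: function () {
    var dfd = new Deferred();

    this._pool.getConnection(function (err, connection) {
        if (err) {
            _forwardError(dfd, 500, 'Cannot get a connection from the pool', err);
            return;
        }
        dfd.resolve(connection);
    });

    return dfd.promise;
},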

The persistence layer

I'm not a DBA and will never be one. Regarding the piece of code above, one of my colleagues proposed updating the SELECT query to get the record count at the same time. The count would be repeated with each row, but it would save a round trip. In the end, I decided to keep the code as-is because it's consistent with the code of the MongoDB DAO. We'll measure the impact of the improvement later, when the entire application is ready.
While I'm not an expert, I have always had to deal with databases: Oracle 10g, IBM DB2, MySQL, and PostgreSQL for the relational databases, and the Google datastore and MongoDB for the NoSQL ones. My current project relies on MongoDB, where players' session information is stored, and on MySQL, which stores static information. I like working with MongoDB because of its ability to work with documents instead of rows of normalized values. It is very flexible and well aligned with the entities used client-side. And MongoDB is highly scalable.
Once the DAOs have been correctly defined, implemented, and tested, dealing with any database at the service level is transparent. Developers can focus on the business logic while DBAs optimize the database settings and deployments.

The continuous integration

Java is an easy language to deal with. First, there are a lot of very good IDEs, like Eclipse and IntelliJ. Then, there are plenty of test tools to help verify the code behaves as expected; my favorites are JUnit, Mockito, and Cobertura. And finally, Java applications can be remotely debugged, profiled, and even obfuscated.
In the past, I controlled the quality of my JavaScript code with JSUnit and JSCoverage. Now I recommend Intern to run unit tests efficiently with Node.js and functional tests with Selenium. I really like Intern because it's AMD compliant, it produces coverage reports, and above all it lets me organize my tests the way I want! A run of around 1,000 unit tests under Node.js takes around 5 seconds. The functional test suite, with 20 green-path scenarios, takes 20 seconds to run in Firefox and Chrome in parallel.
Here is a small but important point about Intern's flexibility:
  • I want my tests to work on modules totally isolated from one another. To isolate them, I inject mocks in place of the dependencies of the module being tested.
  • Intern's suggested way requires:
    • Removing the module to be tested from the AMD cache, with require.undef([mid]);
    • Replacing references to dependent classes with mock ones, with require({ map: { '*': { [normal-mid]: [mock-mid] } } });
    • Reloading the module to be tested, which will now use the mock classes instead of the original ones;
    • Calling and verifying the behaviour of the module.
  • Currently, I prefer instrumenting the modules with the help of dojo/has so I can access private methods and replace dependent classes with mock ones on the fly. Each test injects the required mocks, and the afterEach test method restores all the original dependent classes (see the sketch after this list).
  • My Intern configuration file contains the definition used by dojo/has to expose test-friendly methods, while my index.html and my app.profile.js (used by the Dojo build system) leave it undefined. So these methods are not accessible from the browser, and not even defined in the built code.
  • With the help of Mockery, I can test everything, up to the classes controlling access to MySQL, as illustrated above.
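To illustrate the dojo/has-based injection and the afterEach restoration, here is a hypothetical Intern object suite; the userService module, its _injectDao() hook, and the mock shape are assumptions made for the example, not the actual project code:

// tests/unit/userService.js -- hypothetical Intern object suite
define([
    'intern!object',
    'intern/chai!assert',
    'app/services/userService'
], function (registerSuite, assert, userService) {
    var originalDao;

    registerSuite({
        name: 'userService',

        beforeEach: function () {
            // Replace the real DAO with a mock for the duration of one test;
            // _injectDao() is only exposed when the 'testing' has() flag is set
            originalDao = userService._injectDao({
                select: function () {
                    return { then: function (callback) { callback([{ id: 42 }]); } };
                }
            });
        },

        afterEach: function () {
            // Restore the original dependency so the tests stay isolated
            userService._injectDao(originalDao);
        },

        'select() forwards the DAO response': function () {
            userService.select({}).then(function (users) {
                assert.strictEqual(users[0].id, 42);
            });
        }
    });
});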
In the Java world, Maven has replaced Ant as the build tool of choice. In the JavaScript world, developers have to rely on many tools:
  • Node.js, npm, and Bower to manage the libraries required server-side (npm) and client-side (Bower);
  • Grunt to run administrative tasks: building CSS files from Stylus ones, running the tests, compiling the code, deploying the built code, etc. (a minimal Gruntfile sketch follows this list);
  • Intern to produce test and coverage reports for CI tools like Jenkins and TeamCity.
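As an example, a minimal Gruntfile covering some of these tasks could look like this (the paths are hypothetical, and only grunt-contrib plugins are used):

// Gruntfile.js -- minimal sketch tying a few of the administrative tasks together
module.exports = function (grunt) {
    grunt.initConfig({
        stylus: {
            compile: {
                files: { 'build/css/app.css': 'src/styles/app.styl' }
            }
        },
        uglify: {
            dist: {
                files: { 'build/js/app.min.js': ['src/js/**/*.js'] }
            }
        },
        watch: {
            styles: {
                files: ['src/styles/**/*.styl'],
                tasks: ['stylus']
            }
        }
    });

    grunt.loadNpmTasks('grunt-contrib-stylus');
    grunt.loadNpmTasks('grunt-contrib-uglify');
    grunt.loadNpmTasks('grunt-contrib-watch');

    grunt.registerTask('default', ['stylus', 'uglify']);
};

Running grunt with no argument then compiles the Stylus sheets and minifies the JavaScript, while grunt watch keeps the CSS up to date during development.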

The development environment

My editor of choice is Brackets, by Adobe. It's a very powerful tool, still actively developed, so it keeps getting better. It has a lot of extensions, like an interactive linter and the Theseus debugger. And debugging or fixing extensions to fit your needs is very easy.
MongoDB consumes as much memory as possible. To avoid cluttering your development environment while keeping your database at hand, I suggest you use Vagrant to configure a virtual machine where MongoDB and your Node.js server run in isolation. Coupling a Vagrantfile with a provisioning script allows all your collaborators to benefit from the same configuration.
When it's time to push to production environments, check whether there is a Grunt extension that can help you send the compiled code via FTP or SSH to a remote machine, to Amazon Web Services, or to Google Compute Engine.
I hope this helps, Dom

Sunday, March 23, 2014

Review of the book: Getting Started with Grunt

Disclaimer: I've been offered a electronic copy of the book in exchange to this review. I have no contractual relationship with Packt Publishing nor the book author Jamie Pillora.

Context

Since early 2000, I've been on the Java technology stack for implementing three-tier applications (Web or mobile clients, J2EE Web server, relational databases). I considered the move from make to Ant a real improvement. My first public projects on GitHub still rely on Ant.

Getting Started with Grunt: The JavaScript Task Runner
I was so happy with Ant that I switched to Maven only around 2010. To me, the main benefit of Maven is its management of dependencies: from the Maven repository, I can simply depend on many Java libraries like Google App Engine, RestEasy, and Mockito. I can also get resources for the Web client, like the Dojo Toolkit. Over the years, I wrote a few plugins to cover the same features I used to get from Ant.

Almost a year ago, I heard about Grunt, the JavaScript build tool. I liked that many plugins were also provided (the officially supported plugins have names prefixed with grunt-contrib-). Because I was happy with my Maven environment, I initially used Grunt only for Stylus and the grunt-contrib-watch plugin.

Recently, I joined the game company Ubisoft to work on a project involving Node.js server-side. My switch to Grunt was immediate ;) I love the extensive set of plugins and the ease of writing my own when needed. The pair npm / Grunt is rock solid: npm manages the dependencies and Grunt runs the tasks.

When Packt Publishing contacted me to review the book Getting Started with Grunt, I saw an opportunity to consolidate my knowledge of Grunt, to compare what I know with someone else's experience. And reviewing a published book is way more difficult and lengthy than working on drafts, a work I did once for the book Google App Engine and GWT ;)

Book content

As revealed by the title, the book targets new users of Grunt, or people evaluating the technology. Compared to my own experience, Grunt power users won't learn much by reading the book.

While the book does not explain how Grunt works internally, it does describe a lot of topics extensively:

  • The transpiling aspect: from CoffeeScript, from Stylus/Sass/Less;
  • The code processing: verification with JSLint and JSHint, minification with Uglify;
  • The code testing: with Mocha and PhantomJS;
  • The deployment: assembling many files into one, sending it over FTP, publishing on Amazon S3;
  • The customization: writing and publishing plugins.

Opinion

This book really targets new Grunt adopters. It helps in understanding the basic tasks. It also describes how to set up a build environment for Web clients. I think exposing more Grunt plugins, like grunt-exec to run non-JavaScript tools, could have made the book a reference book...
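For instance, a hypothetical grunt-exec configuration driving a non-JavaScript tool (here optipng; the paths and task names are illustrative) could look like this:

// Gruntfile.js -- sketch of a grunt-exec task calling an external binary
module.exports = function (grunt) {
    grunt.initConfig({
        exec: {
            optimize_images: {
                cmd: 'optipng -o5 build/img/*.png'
            }
        }
    });

    grunt.loadNpmTasks('grunt-exec');
    grunt.registerTask('images', ['exec:optimize_images']);
};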

The author decided to focus on one type of application: a Web client based on Jade, the HTML template engine. I think describing tasks for the application logic on a Node.js server, like grunt-express or grunt-nodemon, would have interested a wider audience. Grunt is a really versatile tool.

I hope it helps,
A+, Dom

Friday, January 31, 2014

2013 Products I Can't Live Without

Having some spare time before starting a new job at Ubisoft in Montreal, I want to give my old 2009 Products I Can't Live Without post another run.

Update April 27, 2014: like many others, I have decided to stop using the Dropbox service. For now, I use Google Drive and QNAP Qsync.

2013
Blogger
Chrome Canary
Dropbox
Eclipse
Feedly
Firefox
Gimp
Git
GMail
Google App Engine
Google Search
KeePassX
SourceTree
Unity3D
VirtualBox
YouTube

Blogger
This is the tool used to publish this blog. I use WordPress for other projects. WordPress definitely has a larger feature set, not to mention its amazing list of plugins (commercial and free). However, Blogger is just fine for the kind of editing I do here.


Company:Google
Website:blogger.com
Launch Date:August 18, 2007

Blogger is a blog publishing platform formerly known as Pyra Labs before Google acquired it in February 2003. Blogger blogs are mostly hosted internally with the “dot Blogspot” domain but they can also be hosted externally on a user’s own server.

Blogger provides bloggers with a WYSIWYG editor that allows them to drag-and-drop widgets, change fonts and edit page elements. Also, Feedburner’s feed management tools are tightly integrated with Blogger blogs due to Google’s recent acquisition.

Credits: CrunchBase

Chrome Canary
Google offers the Chrome browser in 4 release channels, and Chrome Canary is the most bleeding-edge one. I still use Firefox for most of my Internet browsing, but I now use Chrome for my development-related tasks: Web app debugging on the desktop, remote debugging on a tablet, device emulation (iPad, Nexus, etc.). As the vast majority of mobile browsers are WebKit-based like Chrome, it's definitely a must-have development tool.


Company:Google
Website:google.ca/intl/en/chrome/browser/canary.html

Google Chrome is based on the open source web browser Chromium, which is based on WebKit. It was accidentally announced prematurely on September 1, 2008 and slated for release the following day. It premiered originally on Windows only, with Mac OS and Linux versions released in early 2010.

Credits: CrunchBase

Eclipse
As an early adopter of IntelliJ IDEA by JetBrains, I had to use Eclipse (the company's tool when working at IBM Rational, the cheap option when working at Compuware). I must recognize that it keeps getting better (especially with the refactoring features and the JavaScript support) and it has more plugins than IntelliJ. It is also a platform for OSGi and for rich applications.


Company:FLOSS
Website:eclipse.org

Eclipse is a Java-based Integrated Development Environment (IDE). In addition to supporting Java (standard and J2EE), Eclipse supports other programming languages like C++ and Python thanks to plugins. It also offers extensions for developing on Android, BIRT, databases, etc.


Feedly
When Google Reader disappeared, I tried Feedly. It's not a great tool, but it does the job: I can keep easily reading the continuous stream of news from the Internet ;)


Company:Feedly
Website:feedly.com
Launch Date:June 2008

Credits: CrunchBase

Firefox
Having been a Web application developer for a long time, I adopted Firefox (then known as Firebird) in 2003. With the introduction of the Firebug extension (in 2005), it became my primary browser and it has never lost this status. Its early integration of Google search was also a serious advantage. These days, with the faviconize extension and Firefox's ability to restore the previous session, my browser always starts with: iGoogle, GMail, Google Calendar.


Company:Mozilla
Website:getfirefox.com
Launch Date:November 9, 2004

Firefox 4 Hits 100 Million Downloads After A Month (4/22/11).

Credits: CrunchBase

Gimp
I never had the budget and training for Adobe Photoshop. So I started using Gimp. If you can get past its weird interface (too many windows, IMO), Gimp offers tons of features for Web application developers: adjusting pictures, generating textures, resizing images, etc. And there are plenty of free tutorials on the Web.


Company:FLOSS
Website:gimp.org

My favorite video series on Photoshop, starting with the first episode: You Suck at Photoshop #1: Distort, Warp, & Layer Effects.


Git
As a developer, I always want to put my code into a source control system. It is not just because I am afraid that my laptop will crash, wasting hours of work. It is mainly because I want to keep track of the update history. At work, over the years, I used ClearCase, CVS, and Subversion. For my personal development, I used Subversion a lot and now I use Git.


Company:FLOSS
Website:git-scm.com

GitHub offers free hosting for open-source projects; charges apply for private hosting.


GMail
When I started working, I dealt with many machines and I hated having to start one just to look at a specific inbox. With GMail, my account is available anywhere. When I read Turn Gmail Into Your Personal Nerve Center, I started to use GMail as my knowledge database.


Company:Google
Website:gmail.com
Launch Date:April 1, 2004

Gmail, also known as Google Mail, is a free email service with innovative features like “conversation view” email threads, search-oriented interface and plenty of free storage (almost 7GB). Gmail opened in private beta mode in April 2004 by invite only. At first, invites were hard to come by and were spotted up for sell on auction sites like eBay. The email service is now open to everyone and is part of Google Apps. Paul Buchheit, an early employee at Google, is given credit for building the product.

Another Gmail feature is the organization, tracking and recording users’ contact lists. For instance, if you start typing the letter C into the “To” field Gmail will bring up a list of every email address and contact name starting with the letter. This feature helps when you can’t quite remember a name. Plus, Gmail automatically adds and updates email addresses and names to your contact list when you write emails.

Credits: CrunchBase

Google App Engine
My first job in Montréal, Canada, was with a small company named Steltor (bought a few years later by Oracle). The core business was the development of a distributed calendar system (servers in a cluster, native clients, web client, mobile client, etc.). Since then, I have been used to tracking my work with an electronic calendar. Google Calendar and its ability to mix many agendas is excellent.


Company:Google
Website:developers.google.com/appengine
Launch Date:April 2008

Google App Engine offers a full-stack, hosted, automatically scalable web application platform. The service allows developers to build applications in Python, Java (including other JVM based languages such as JRuby) which can then use other Google services such as the Datastore (built on BigTable) and XMPP. The service allows developers to create complete web application that run entirely on Google’s computing infrastructure and scale automatically as the application’s load changes over time. Google also provides an SDK for local development and site monitoring tools for measuring traffic and machine usage.

Google’s offering competes with Amazon’s Web Services suite, including EC2, S3, SQS, and SimpleDB.

Credits: CrunchBase

Google Search
Google Search is an amazing tool: recently, I was trying to find a solution to a tough technical problem and I found it thanks to Google Search, which pointed me toward a blog post written the same day, just a few hours before, in Europe! Incredible... When I give talks at universities, I often say: "If I ask a question today and you have no clue about the answer, that's fine. If you still have no clue tomorrow, you're in trouble..."


Company:Google
Website:google.com
Launch Date:September 4, 1998

Search is Google’s core product and is what got them an official transitive verb addition to the Merriam Webster for “google”. The product is known for its Internet-crawling Googlebots and its PageRank algorithm influenced heavily by linking.

When users type keywords into the home page search box they are returned with relevant results that they can further refine. Google also has more specific search for blogs, images, news and video. Google will also return search results from your own computer files and emails via Google Desktop.

Credits: CrunchBase

KeePassX
KeePassX is an open source password safe, a multi-platform extension of KeePass. I use it in conjunction with Dropbox so my precious list of account credentials is available on all my devices (desktops, tablets, and phones).


Company:FLOSS
Website:keepassx.org


SourceTree
As a developer, I much prefer using command line tools like Maven, git, and other Ant and Python scripts. Git is a really powerful tool, but I have worked with team members who had some difficulties dealing with branches, cherry-picking, and stashing, for example. The freeware SourceTree offers a neat interface on top of git. I'm more a command line user than a GUI one, however.


Unity3D
For my own company AnotherSocialEconomy.com, I only developed native applications for Android. As a software architect at Electronic Arts, in the (now defunct) Play Mantis franchise lab in Montréal, I worked with Unity developers. I then learned about its cross-platform capabilities, its physics engine, and its powerful C# library. Since then, I've created a few projects with Unity and I think it's a powerful ecosystem.


VirtualBox
Developing software sometimes requires specific configurations. Testing it always requires specific configurations (at least to replay the same test cases every time the source control system, like Git, is updated). There are the famous VMware products (Workstation, Player, ESX) and Microsoft VirtualPC. VirtualBox is an open source product provided by Sun Microsystems (now Oracle), and it has nice features while being powerful.


Company:Oracle
Website:virtualbox.org

VirtualBox is a family of powerful x86 virtualization products for enterprise as well as home use. Not only is VirtualBox an extremely feature rich, high performance product for enterprise customers, it is also the only professional solution that is freely available as Open Source Software under the terms of the GNU General Public License (GPL).

Presently, VirtualBox runs on Windows, Linux, Macintosh and OpenSolaris hosts and supports a large number of guest operating systems including but not limited to Windows (NT 4.0, 2000, XP, Server 2003, Vista), DOS/Windows 3.x, Linux (2.4 and 2.6), Solaris and OpenSolaris, and OpenBSD.


YouTube
YouTube is famous for its fun videos. But it also hosts technical videos.


Company:Youtube
Website:youtube.com
Launch Date:December 11, 2005

YouTube was founded in 2005 by Chad Hurley, Steve Chen and Jawed Karim, who were all early employees of PayPal. YouTube is the leader in online video, sharing original videos worldwide through a Web experience. YouTube allows people to easily upload and share video clips across the Internet through websites, mobile devices, blogs, and email.

Everyone can watch videos on YouTube. People can see first-hand accounts of current events, find videos about their hobbies and interests, and discover the quirky and unusual. As more people capture special moments on video, YouTube is empowering them to become the broadcasters of tomorrow.

In November 2006, within a year of its launch, YouTube was purchased by Google Inc. in one of the most talked-about acquisitions to date.

Credits: CrunchBase

I hope it helps,
A+, Dom

Thursday, January 30, 2014

Singleton uniqueness issue in a C# generic class

I code in C# within the Unity environment (I really like the coroutine mechanism and the yield return instruction).

At one point, I developed a set of base classes to provide a default behavior to entities that must be exchanged over the wire and cached on the device. And I used generic/template classes to be able to generate instances of the final classes from within the methods of the super class. Information saved on the device is signed with a piece of information obtained from the server.

public class LocalCache<T> where T : BaseModel {

    public static string signatureKey { get; set; }

    public T Get(long id) {
        // Get the JSON from PlayerPrefs
        // Verify its signature
        return (T) Activator.CreateInstance(typeof(T), new object[] { JSON });
    }

    public void Save(T entity) {
        // Get the JSON representing the data
        // Sign the JSON
        // Save the signed JSON into PlayerPrefs
    }

    public void Reset(long id) { ... }

    private string SignData(string data) { ... }
}

In Java, there would be only one instance of the signatureKey variable per JVM, whatever the number of class instances. In C#, I was surprised to find that this attribute is not unique!

public class LocalCache<T> where T : BaseModel {

    // Other code as shown above...

    public static void TestDifferentInstanciations() {
        LocalCache<System.Int64>.signatureKey = "Set for Int64";

        UnityEngine.Debug.Log("Int16: " + LocalCache<System.Int16>.signatureKey);
        // => produces: Int16:

        UnityEngine.Debug.Log("Int16: " + LocalCache<System.Int64>.signatureKey);
        // => produces: Int64: Set for Int64

        UnityEngine.Debug.Log("string: " + LocalCache<System.string>.signatureKey);
        // => produces: string:
    }
}

My surprise came from the fact that the concept of a generic class only exists for definitions. At runtime, C# only has constructed types, and the type of LocalCache<Int16> is different from the type of LocalCache<Int64>. There are therefore multiple copies of the signatureKey attribute, exactly one per constructed type...

The solution is to move the singleton into a separate, non-generic class!

internal static class StaticLocalCacheInfo {
    public static string signatureKey { get; set; }
}

public class LocalCache<T> where T : BaseModel {

    public static string signatureKey {
        get { return StaticLocalCacheInfo.signatureKey; }
        set { StaticLocalCacheInfo.signatureKey = value; }
    }

    // Rest of the business logic...

    public static void TestDifferentInstanciations() {
        LocalCache<System.Int64>.signatureKey = "Set for Int64";

        UnityEngine.Debug.Log("Int16: " + LocalCache<System.Int16>.signatureKey);
        // => produces: Int16: Set for Int64

        UnityEngine.Debug.Log("Int16: " + LocalCache<System.Int64>.signatureKey);
        // => produces: Int64: Set for Int64

        UnityEngine.Debug.Log("string: " + LocalCache<System.string>.signatureKey);
        // => produces: string: Set for Int64
    }
}

Note that I got confirmation of this behavior on the Unity forum.

I hope it helps!
A+, Dom

Tuesday, January 28, 2014

Objectify and conflicting OnSave / OnLoad callbacks

Since the beginning, I've been using Google App Engine. I first tried its Python implementation, then I moved to its Java implementation. The move to Java was dictated by the simplicity of the language and the larger set of tooling to code, test, and debug applications.

My first implementation used JDO as the ORM layer, with my own servlet infrastructure implementing a REST interface. Nowadays, I use Objectify in place of JDO, and RestEasy (with Guice and JAX-RS).

In my latest project, I've implemented two sets of entities:

  • One set which tracks a creation date and a version number;
  • One set which extends the first one and tracks the identifier of their owners.

Both base classes implement an @OnSave callback to control their core information. Here is a short version of these classes.

@Index public abstract class AbstractBase<T> {
    @Id private Long id;
    @Unindex protected Date creation;
    @Unindex protected Long version;

    // Accessors
    // ...

    @OnSave protected void prePersist() {
        if (creation == null) {
            creation = new DateTime().toDate();
            version = Long.valueOf(0L);
        }
        version ++;
    }
}

@Index public abstract class AbstractAuthBase<T> extends AbstractBase<T> {
    @Index private Long ownerId;

    // Accessors
    // ...

    @OnSave protected void prePersist() {
        if (ownerId == null || Long.valueOf(0L).equals(ownerId)) {
            throw new ClientErrorException("Field ownerId is missing");
        }
    }
}

When I ran the code above, I faced a weird behavior:

  • The entities of the User class, extending only AbstractBase, had their creation and version fields correctly set when persisted.
  • The entities of the Category class, extending AbstractAuthBase, had neither of these two fields set!

It appears the issue comes from Objectify, which was invoking only the first method! And twice, by the way...

I then looked at the Objectify implementation, precisely at the methods ConcreteEntityMetadata<T>.processLifeCycleCallbacks() and ConcreteEntityMetadata<T>.invokeLifeCycleCallbacks(). In the first method, you can see that the references of the methods annotated with @OnSave and @OnLoad are accumulated in two lists. In the second method, the given list is traversed and each method is applied to the current object.

My issue arose because applying the prePersist() method twice on an instance of the Category class was always calling the override from AbstractAuthBase<T>! I fixed the issue by renaming the callbacks (one to checkOwnerId() and the other to setCreationAndVersion()).

A+, Dom

Monday, February 13, 2012

SoapUI / REST / JSON / variable namespaces

As a developer of large Web applications, I'm used to relying on a few proven test strategies.
When I joined my current development team to work on the project TradeInsight, I was introduced to SoapUI for the functional tests. I like its ability to convert JSON to XML, which then enables writing XPath match assertions.

There's one little caveat related to the conversion and the XPath assertions:
  • When the server response contains a single JSON object, the conversion introduces a namespace into the generated XML.
  • And this namespace depends on the server address.
The following figures show the original JSON response and its conversion to XML with the inferred namespace.
JSON response produced by a REST service.
Transformation into an XML payload with an inferred namespace.

This automatic mapping to a server-dependent namespace does not allow writing server-agnostic assertions if you follow the suggested solution!

The following figures show an XPath match expression as documented in the SoapUI training materials. Sadly, when running it against an XML document with a namespace, this error is reported:

XPathContains assertion failed for path [//startDate/text()] : Exception missing content for xpath [//startDate/text()] in Response.

Simple XPath expression with the corresponding error message.
Corrected XPath expression as suggested, now server dependent :(

A simple solution consists in replacing the specified namespace with the wildcard '*', which matches any namespace. As all elements of the XML document are under the scope of the inferred namespace, it's important to prefix all nodes of the XPath expression with '*:', as illustrated by the following figure.

Use of the '*:' prefix to produce a server-independent assertion.
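For illustration, the two assertion styles could look like this (the namespace URI is hypothetical; only the startDate element comes from the example above):

(: server-dependent assertion, with the namespace inferred by SoapUI :)
declare namespace ns1='http://my-staging-server/api';
//ns1:startDate/text()

(: server-agnostic assertion, every node prefixed with '*:' :)
//*:startDate/text()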
I hope this helps.

A+, Dom

Monday, June 27, 2011

A new chapter

Some 18 months ago, I chose to become a full-time entrepreneur, devoting my time and my financial resources to the development of the AnotherSocialEconomy service. It has been a very instructive experience.

Those who know me know how important it was for me to really be in control of my professional destiny:
  • Develop a tool that offers real value to its group of users.
  • Base decisions on facts and their consequences, without leaving room for political interference (so paralyzing in large companies).
  • Use technology to serve the users first, then to build the services and ease their development. In this respect, the Lean Startup methodology and its Build-Measure-Learn principle provide an excellent framework.
  • Use the Agile methodology (without forgetting the XP tools) to develop a large-scale application.

With my associate Steven, it has been very interesting to define my own objectives, my own work organization, and to drive the development of the company.

But a month ago, mainly because of the lack of any prospect of stable revenue in the near future, I started looking for contracts and permanent positions. It is heartbreaking, for sure, but the tool is in production and I can make it evolve in my spare time. So, applying a Lean principle once more ("Failure to change is a vice!", Hiroshi Okuda, President of Toyota Motor Corp.), I'm diving back into the salaried world.

Finally, after several interviews, my choice fell on the company MEI, which develops a Web platform to help its clients (producers of consumer packaged goods) follow their promotional campaigns. Until two years ago, MEI mainly offered its service to large manufacturers. Now, as a SAAS offering, MEI is developing a new version for small and mid-sized producers, under the TradeInsight brand. The synergy between my experience (Java, JavaScript, cloud, mobile, Agile) and the team is undeniable.

By the way: MEI is still hiring!

The AnotherSocialEconomy adventure continues, in my spare time and in my interactions with the Montreal technical community groups (NewTech, Android, NodeJS, etc.). For more information about possible partnerships, please contact Steven.

A+, Dom

Tuesday, June 7, 2011

OAuth authorization handling in an Android application

Note: This post is part of the series "Lessons learned as an independent developer". Please refer to the introduction (in French) for more information.

Context

AnotherSocialEconomy APIs are based on standards as much as possible: OpenID and OAuth for authentication and authorization, an HTTP-based REST interface (1, 2) for the communication protocol, JSON for the data payload format, etc.

This post is about setting up an Android application which gets authorization tokens from an OAuth provider.

OAuth provider

There are many well-known OAuth providers like Netflix, Twitter, Facebook (coming to OAuth 2.0 soon), Yahoo!, Google, etc. While these providers are convenient, they don't offer much flexibility when some debugging is required.

For this experiment, I'm going to use Google App Engine Java and its OAuth support. For a complete walk-through, refer to Ikai Lan's post: Setting up an OAuth provider on Google App Engine, especially the part which describes how to get the public and secret keys your client application needs to sign communications with the provider.

OAuth client - Work flow

Strategy: Upon creation, the process checks if the authorization tokens have been saved as user preferences.
  • If they are present, they are loaded to be used to sign each future communication with the application on the server.
  • If they are missing, the OAuth work flow is triggered:
    1. With the server application keys, a signed request is sent to get a temporary request token.
    2. With this request token, the URL of the authorization page is obtained and an Intent is created to load the corresponding page in a browser. At this step, the application is stopped.
    3. The user enters his credentials in the browser and grants access rights to the mobile application. The return URL has a custom format: ase://oauthresponse.
    4. The mobile application, which has an Intent registered for that custom URL, is restarted and is given the return URL. A verification code is extracted from this URL.
    5. The verification code is used to issue a signed request asking for the access tokens.
  • The access tokens are saved as part of the user preferences only if she selected a 'Remember me' option.

Figure 1: Authorization work flow

Alternative: If the mobile application offers anonymous services, like browsing the list of registered stores in the case of AnotherSocialEconomy.com, it can be friendlier to delay the authorization verification.

OAuth client - Initiating the authorization process (1, 2, 3)

To simplify the application development, I decided to use oauth-signpost, a library provided by Matthias Käppler, who wanted a slick and simple way to access Netflix services.
Signpost is the easy and intuitive solution for signing HTTP messages on the Java platform in conformance with the OAuth Core 1.0a standard. Signpost follows a modular and flexible design, allowing you to combine it with different HTTP messaging layers.
Note that this library is also good for managing Twitter accounts remotely.

This section is about initiating the authorization process, which happens if the application is not being called back by the server application (with the verification code, see the next section) and if the OAuth token could not be found in the user preferences. This is the path with steps {1, 2, 3} in Figure 1.

if (!justAuthenticated && Preferences.get(Preferences.OAUTH_KEY, "").length() == 0) {
    // Display the pane with the warning message and the sign in button
    setContentView(R.layout.main_noauth);

    // Update the 'Remember me' checkbox with its last saved state, or the default one
    final String saveOAuthKeysPrefs = Preferences.get(Preferences.SAVE_OAUTH_KEYS, Preferences.SAVE_OAUTH_KEYS_DEFAULT);
    ((CheckBox) findViewById(R.id.app_noauth_keepmeconnected)).setChecked(Preferences.SAVE_OAUTH_KEYS_YES.equals(saveOAuthKeysPrefs));

    // Attach the event handler that will initiate the authorization process up to opening the browser with the authorization page
    findViewById(R.id.app_noauth_continue).setOnClickListener(new OnClickListener() {
        @Override
        public void onClick(View v) {
            // Check if the 'Keep me connected' check box state changed and save its new state
            boolean keepMeConnected = ((CheckBox) findViewById(R.id.app_noauth_keepmeconnected)).isChecked();
            if (Preferences.SAVE_OAUTH_KEYS_YES.equals(saveOAuthKeysPrefs) != keepMeConnected) {
                Preferences.set(Preferences.SAVE_OAUTH_KEYS, keepMeConnected ? Preferences.SAVE_OAUTH_KEYS_YES : Preferences.SAVE_OAUTH_KEYS_NO);
            }
                    
            // Set up the OAuth library
            consumer = new CommonsHttpOAuthConsumer("<your_app_public_key>", "<your_app_secret_key>");

            provider = new CommonsHttpOAuthProvider(
                    "https://<your_app_id>.appspot.com/_ah/OAuthGetRequestToken",
                    "https://<your_app_id>.appspot.com/_ah/OAuthAuthorizeToken",
                    "https://<your_app_id>.appspot.com/_ah/OAuthGetAccessToken");
                    
            try {
                // Steps 1 & 2:
                // Get a request token from the application and prepare the URL for the authorization service
                // Note: the response is going to be handled by the application <intent/> registered for that custom return URL
                String requestTokenUrl = provider.retrieveRequestToken(consumer, "ase://oauthresponse");

                // Step 3:
                // Invoke a browser intent where the user will be able to log in
                startActivity(new Intent(Intent.ACTION_VIEW, Uri.parse(requestTokenUrl)));
            }
            catch(Exception ex) {
                Toast.makeText(Dashboard.this, R.string.app_noauth_requesttoken_ex, Toast.LENGTH_LONG).show();
                Log.e("Dashboard no auth", "Cannot initiate communication to get the request token\nException: " + ex.getClass().getName() + "\nMessage: " + ex.getMessage());
            }
        }
    });
}

Figure 2 below illustrates the pane main_noauth displaying the warning message and the action button, and Figure 3 shows the authorization page as provided by Google for applications hosted on App Engine.


Figure 2: Pane displayed if application not yet authorized

Figure 3: Google authorization page

Whatever action the user takes, the application is going to be called with the URL ase://oauthresponse. The next section covers this work flow path.

OAuth client - Processing the authorization (4, 5)

The application is registered with an Intent associated with the scheme ase and the host oauthresponse. The labels themselves are not important, only their uniqueness and their correspondence with the return URL specified at step 2.

<intent-filter>
    <action android:name="android.intent.action.VIEW"/>
    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.BROWSABLE"/>
    <data android:scheme="ase" android:host="oauthresponse"/>
</intent-filter>

The following code snippet implements the steps 4 and 5 as described in Figure 1.

private boolean checkOAuthReturn(Intent intent) {
    boolean returnFromAuth = false;
    Uri uri = intent.getData();

    if (uri != null && uri.toString().startsWith("ase://oauthresponse")) {
        // Step 4:
        // Get the request token from the Authentication log in page
        String code = uri.getQueryParameter("oauth_verifier");
            
        try {
            // Step 5:
            // Get directly the access tokens
            provider.retrieveAccessToken(consumer, code);
            returnFromAuth = true;
                
            // Persist the tokens
            if (Preferences.SAVE_OAUTH_KEYS_YES.equals(Preferences.get(Preferences.SAVE_OAUTH_KEYS, Preferences.SAVE_OAUTH_KEYS_DEFAULT))) {
                Preferences.set(Preferences.OAUTH_KEY, consumer.getToken());
                Preferences.set(Preferences.OAUTH_SECRET, consumer.getTokenSecret());
            }
        }
        catch(Exception ex) {
            Toast.makeText(Dashboard.this, R.string.app_noauth_accesstoken_ex, Toast.LENGTH_LONG).show();
            Log.e("Dashboard no auth", "Cannot complete communication to get the request token\nException: " + ex.getClass().getName() + "\nMessage: " + ex.getMessage());
        }
    }
       
    return returnFromAuth;
}

The Dashboard class definitions are available in a gist on GitHub. This gist also contains a wrapper of the SharedPreferences class, the application manifest with the declaration of the Intent for the custom return URL, and the layout definition of the pane with the warning and the sign-in button.

OAuth Client - The quirks

My Android application is very simple and is configured with the launch mode singleTop. As such, if the system does not destroy the application when the code starts an activity to browse the authentication service URL, the invocation of the ase://oauthresponse URL by the browser should trigger a call to the onNewIntent() method. It never happened during my tests on my phone... Every time, the application was recreated and a call to onCreate() was issued. So both methods delegate to the helper checkOAuthReturn().

@Override
protected void onNewIntent(Intent intent) {
    checkOAuthReturn(intent);
}

In this example, I've decided to select the view associated with the first screen of the application according to the availability of the OAuth access token (read from the user preferences or retrieved dynamically thanks to the verification code coming with the ase://oauthresponse URL). The following snippet illustrates this flow. On some occasions, it may be better to start a separate activity, if the main pane is instrumented to disable the triggers of protected actions. This approach with a separate activity is also better for portability.

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    Preferences.setPreferenceContext(PreferenceManager.getDefaultSharedPreferences(getBaseContext()));

    boolean justAuthenticated = checkOAuthReturn(getIntent());
        
    if (!justAuthenticated && Preferences.get(Preferences.OAUTH_KEY, "").length() == 0) {
        setContentView(R.layout.main_noauth);

        // Instrumentation of the pane to initiate the authorization process on demand
        // ...
    }
    else {
        setContentView(R.layout.main);
    }
}

I hope this helps.
A+, Dom

Thursday, June 2, 2011

Sharing experience in Android application development

Context

Note: This post is the first of a series called "Lessons learned as an independent developer". The level of discussion in this post is general. The following posts will go deeper and will be illustrated with code snippets.

As part of my company AnotherSocialEconomy.com, I have had the opportunity to put several good practices into practice, and I'm going to share some of them here. I will focus on the development around the Android client application, from user identification to the delivery of asynchronous notifications.

AnotherSocialEconomy.com, or ASE, offers a service connecting consumers and retailers:
  • Consumers looking for a product or service just describe their request from one of the application's many entry points: a Web page built for AdWords, the ASE site or an affiliate site, the Facebook application page, a direct message from Twitter, etc.
  • Participating retailers are notified of the requests according to their preferences. Retailers are free to make one or more proposals depending on their availability.
  • As the proposals are composed, the consumers are notified and can decline or confirm them at any time.
  • Confirmations are notified to the retailers, who then reserve the product or service for the consumer.
  • The consumer only has to pay for it and take possession of it.
To put it simply: ASE connects consumers with the retailers that have the products or services they are looking for, by reversing the search process.

The ASE engine is currently coded in Java and is hosted on the Google App Engine infrastructure. In the rest of this post, to keep the discussion general, the ASE engine is referred to as the server application.

User management on the server

From the start, the user service of the server application has relied on OpenID. With OpenID, user identification is delegated to trusted third-party services (Yahoo!, Google, AOL, etc.) without the server application ever seeing the users' passwords, only their OpenID identifiers. This approach also solves several problems:
  • Users don't have to create yet another account for the ASE service.
  • The server is less at risk because no passwords are stored there.
  • Backup management is simpler (again because there are no passwords).
  • If their password is compromised, users can certainly rely on their OpenID provider's support better than on mine :)
Later in the development cycle, notably because a Facebook application had to be developed, the identification mechanisms of Facebook, Twitter, and Microsoft Live were integrated into the server application. Of the three, Twitter's mechanism is the most standardized (OAuth), and it also gives access to the data of the service's user. But all of them have been integrated so as to act like OpenID services.

OpenID is a good identification system for a Web client application. With the security restrictions of browsers (SSL and sandbox), once the user's identity has been confirmed by an OpenID provider, and as long as this identity stays associated with the Web session, sending data to the browsers remains protected.

On the other hand, when the client application is native (on a computer or on a mobile phone), it is not possible to rely on a Web session mode as robust as the browsers'. A malicious application could intercept the session identifier and use it without the user's knowledge. To guard against this attack, it is preferable to use OAuth, which signs every exchange between the client application and the server, making the use of the Web session identifier obsolete.

User authentication on the client

Each Android phone is associated with one user. If the carrier's SIM card is changed, the previous user's data is no longer accessible. Each application has access to its own protected storage space, but the user can reclaim this space at any time. It is therefore not a long-term storage solution.

In the OAuth authentication model, data exchanges are signed by the client application thanks to a token issued by the server application. Thanks to this signature, the server application is assured of the user's identity at each data exchange.

Pour avoir un jeton, le protocole à observer par l'application cliente est relativement simple :
  • Émettre une requête pour recevoir un premier jeton dit d'accès.
  • Ce jeton est utilisé pour initier un appel vers une page d'autorisation.
  • L'application serveur présente alors une page d'identification où l'utilisateur doit, s'il n'est pas déjà authentifié, entrer son identifiant et son mot de passe, puis accepter que l'application cliente accède aux données qui sont gérées par l'application serveur.
  • L'application serveur retourne un second jeton attestant de l'acceptation par l'utilisateur de l'accès aux données. Ce jeton a une durée de vie limitée.
  • Ce second jeton peut être utilisé pour obtenir deux jetons (clé publique et clé secrète) qui permettront à l'application cliente de signer les échanges de données de telle sorte que l'application serveur les associera à l'utilisateur concerné.
  • Souvent ces deux jetons ont une grande durée de vie (pas d'expiration dans le cas de Twitter), et peuvent donc être sauvegardés par l'application cliente pour signer de manière transparente tous les futurs échanges.
  • Il faut cependant tenir compte que l'utilisateur peut révoquer ces deux jetons n'importe quand, ou qu'ils peuvent expirer n'importe quand (à cause d'un changement de stratégie du côté de l'application serveur, par exemple) aussi il faut être prêt à exécuter le processus pour obtenir deux nouveaux jetons à n'importe quel moment.
It is important to note that the authentication tokens must be saved very securely. It is not acceptable to store them in a plain text file located on a memory expansion card, for example. If the risk of someone accessing these tokens is too high, it is better to replay the scenario above to obtain a new set of tokens.
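
As a minimal precaution, the token pair can at least be kept in the application's private storage rather than on the external card. The sketch below relies on Android's per-application SharedPreferences; the file name, the key names, and the helper methods are illustrative only:

// Sketch: keep the token pair in the application-private preferences, never on the SD card
void saveTokens(Context context, String accessToken, String accessTokenSecret) {
    SharedPreferences prefs = context.getSharedPreferences("oauth", Context.MODE_PRIVATE);
    prefs.edit()
         .putString("accessToken", accessToken)
         .putString("accessTokenSecret", accessTokenSecret)
         .commit();
}

String[] restoreTokens(Context context) {
    SharedPreferences prefs = context.getSharedPreferences("oauth", Context.MODE_PRIVATE);
    String token = prefs.getString("accessToken", null);
    String secret = prefs.getString("accessTokenSecret", null);
    if (token == null || secret == null) {
        return null; // Tokens missing or cleared: replay the authorization scenario
    }
    return new String[] { token, secret };
}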

At the time I am writing this article, the Samsung S series devices and the Motorola Xoom tablet have encrypted file systems. As far as I know, even Android 3.1 still does not offer a low-level solution with maximum security...

Receiving asynchronous notifications

While more and more silicon makers put the emphasis on the power of the central processor (Qualcomm) and on the number of cores (NVidia just announced a Tegra with 4 cores), and while the increase in bandwidth (from HSPA+ to LTE, for example) allows faster and faster data exchanges even far from any computer network, the battery capacity of modern mobile phones remains their weak point. In the past, I had Nokia and Sony Ericsson phones that could stay on standby for more than a week. Now I have to plug in my HTC Desire every evening, even with fairly limited browsing!

Under these conditions, keeping an application awake so it can query the server application at regular intervals (the technique known as polling) is out of the question.

Two years ago, while developing an application for the BlackBerry 5 platform, I used the following technique:
  • The client application on the phone listened for a number of system messages (network type change, network loss, etc.) and collected them in an internal database.
  • The server application decided when these statistics should be transmitted, by sending an SMS to each phone.
  • Upon receiving this SMS, the client application opened an HTTP connection to transmit its collected data in a burst.
  • Once the data set had been retrieved from each phone, the server application built coverage reports for the carrier.
Since Android 2.2, there is the AC2DM protocol: Android Cloud to Device Messaging. When a client application configured for AC2DM initializes, it must register with the local AC2DM server and receives a registration identifier in return. It is the client application's responsibility to send this identifier to the server application so that the latter holds the key needed to send asynchronous notifications to this client application, and to it alone.
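
For reference, the registration itself boils down to an intent and a broadcast receiver. This is only a sketch based on the AC2DM documentation; the sender account is a placeholder and must match the role account used by the server application:

// Sketch of the AC2DM registration request
public static void register(Context context) {
    Intent registrationIntent = new Intent("com.google.android.c2dm.intent.REGISTER");
    registrationIntent.putExtra("app", PendingIntent.getBroadcast(context, 0, new Intent(), 0));
    registrationIntent.putExtra("sender", "my-server-role-account@gmail.com");
    context.startService(registrationIntent);
}

// The identifier comes back asynchronously, through a receiver declared in the manifest
// for the com.google.android.c2dm.intent.REGISTRATION action
public class C2DMReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        String registrationId = intent.getStringExtra("registration_id");
        if (registrationId != null) {
            // Send registrationId to the server application: it is the key the server
            // needs to push notifications to this client application, and to it alone
        }
    }
}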

In a way, the AC2DM approach is similar to my SMS activation method. It may even use that technique under the hood ;) The main difference lies in the service aspect: with AC2DM, the client application does not have to stay active to receive notifications; the local notification server activates it when needed.

The consumer application

The client application for consumers must offer several features:
  • receive notifications about pending demands and proposals
  • manage the list of pending demands and proposals
  • create new demands
  • with access to the phone's address book, to be able to copy "friends" on the demands
  • with access to the phone's geolocation system, to make the creation of demands easier
  • modify or cancel pending demands
  • confirm or cancel pending proposals
The main purpose of the client application on mobile phones is to relay demand update notifications, triggered by the arrival of new proposals or by retailers' modifications of existing proposals. In a few "clicks", the user must be able to quickly reach the details of the demand concerned, the details of the proposal, and information about the retailer's store or office. To make this access easier, most of the information is saved on the phone as it is needed. To keep a structure close to the data model produced by the server application, the storage used is the phone's internal database service (SQLite on Android, for example).
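
As an illustration, the local mirror of the Demand entities can be as simple as the following sketch; the table layout and the class name are mine, not the application's actual schema:

// Minimal local mirror of the Demand entities in the phone's SQLite database (illustrative schema)
public class DemandStore extends SQLiteOpenHelper {
    public DemandStore(Context context) {
        super(context, "ase.db", null, 1);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        db.execSQL("CREATE TABLE demands (" +
                "key INTEGER PRIMARY KEY, " +   // key assigned by the server application
                "state TEXT, " +
                "json TEXT)");                  // raw JSON, to stay close to the server data model
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        db.execSQL("DROP TABLE IF EXISTS demands");
        onCreate(db);
    }

    public void saveDemand(long key, String state, String json) {
        ContentValues values = new ContentValues();
        values.put("key", key);
        values.put("state", state);
        values.put("json", json);
        getWritableDatabase().replace("demands", null, values);
    }
}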

The retailer application

The client application for retailers must offer several features:
  • receive notifications about new demands
  • create and manage proposals (possibly with access to the camera to scan bar codes)
  • confirm deliveries
  • manage the list of pending demands and proposals
Because the services offered to consumers are very different from those offered to retailers, they are delivered as two different applications. This reduces the risk of context confusion for users who act both as consumers and as retailers.

To be continued...

In upcoming articles, I will describe in detail the various implementations I made. Several techniques are not obvious, such as the one handling authentication with OAuth, and I imagine this will be useful to many developers ;)

A+, Dom

Saturday, April 16, 2011

Google App Engine, scheduled tasks, and persisting changes into the datastore: the risk of a race condition

This post is about a race condition I accidentally discovered and hopefully fixed. It occurred on App Engine and was caused by tasks I scheduled for immediate execution...

Context

When I started developing in Java for Google App Engine, I decided to give JDO a try, mainly because it is datastore agnostic (*). The operations managing my entities are organized in DAOs with sets of methods like the following.

public Demand update(Demand demand) {
    PersistenceManager pm = getPersistenceManager();
    try {
      return update(pm, demand);
    }
    finally {
        pm.close();
    }
}

public Demand update(PersistenceManager pm, Demand demand) {
    // Check if this instance comes from memcache
    ObjectState state = JDOHelper.getObjectState(demand);
    if (ObjectState.TRANSIENT.equals(state)) {
        // Get a fresh copy from the data store
        ...
        // Merge old copy attributes into the fresh one
        ...
    }
    // Persists the changes
    return pm.makePersistent(demand);
}

I knew that changes are persisted only when the PersistenceManager is closed, so closing it after an update is a safe practice. I nevertheless decided to separate the PersistenceManager instance management from the business logic updating the entity, for clarity.

This decision offers the additional benefit of being able to share a PersistenceManager instance among many operations. The following code snippet illustrates the point: a single PersistenceManager instance is used for two entity loads and one save.

public void processDemandUpdateCommand(Long demandKey, JsonObject command, Long ownerKey) throws ... {
    PersistenceManager pm = getPersistenceManager();
    try {
        // Get the identified demand (can come from the memcache)
        Demand demand = getDemandOperations().getDemand(pm, demandKey, ownerKey);

        // Check if the demand's location is changed
        if (command.contains(Location.POSTAL_CODE) || command.contains(Location.COUNTRY_CODE)) {
            Location location = getLocationOperations().getLocation(pm, command);
            if (!location.getKey().equals(demand.getLocationKey())) {
                command.put(Demand.LOCATION_KEY, location.getKey());
            }
        }

        // Merge the changes
        demand.fromJson(command);

        // Validate the demand attributes
        ...

        // Persist them
        demand = getDemandOperations().updateDemand(pm, demand);

        // Report the demand state to the owner
        ...
    }
    finally {
        pm.close();
    }
}

For my service AnotherSocialEconomy, which connects Consumers to Retailers, the life cycle of a Demand is made of many steps:
  • State open: raw data just submitted by a Consumer;
  • State invalid: one verification step failed, requires an update from the Consumer;
  • State published: verification is OK, and Demand broadcasted to Retailers;
  • State confirmed: Consumer confirmed one Proposal; Retailer reserves the product for pick-up, or delivers it;
  • State closed: Consumer notified the system that the transaction is closed successfully;
  • State cancelled: ...
  • State expired: ...

In my system, some operations take time:
  • Because of some congestion in the environment, which occurs sometimes when sending e-mails.
  • Because some operations require a large data set to be processed–like when a Demand has to be broadcasted to selected Retailers.

Because of this time constraint and the 30-second request limit, I decided to use tasks extensively (tasks can run for up to 10 minutes). As a side effect, my code is now very modular, and easier to maintain and test.

So I updated my code to trigger a validation task once the Demand has been updated with the raw data submitted by the Consumer. The code snippet below shows the task scheduling in the context of the processDemandUpdateCommand() method illustrated above.

public void processDemandUpdateCommand(Long demandKey, JsonObject command, Long ownerKey) throws ... {
    PersistenceManager pm = getPersistenceManager();
    try {
        ...

        // Update the state so the entity is ready for the validation process
        demand.setState(State.OPEN);

        // Persist them
        demand = getDemandOperations().updateDemand(pm, demand);

        // Create a task for that demand validation
        getQueue().add(
            withUrl("/_tasks/validateOpenDemand").
                param(Demand.KEY, demandKey.toString()).
                method(Method.GET)
        );
    }
    finally {
        pm.close();
    }
}

Issue

Until I activated the Always On feature, no issue had been reported for that piece of code: my unit tests worked as expected, my smoke tests were fine, the live site behaved correctly, etc.

Then the issue started to appear randomly: sometimes, updated Demand instances were no longer processed by the validation task! Triggering the task manually from a browser or with curl, however, produced the expected result...

To keep the task idempotent, the state of the Demand instance to be validated is checked: if it is open, the Demand attributes are verified and the state ends up set to invalid or published; otherwise nothing happens. With that approach, Demands that have already been validated are not processed a second time...
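
In code, this guard at the beginning of the validation task looks roughly like the following sketch; it reuses the operation names from the snippets above, and the null owner key is a simplification:

// Guard making the validation task idempotent: only open Demands are processed
Demand demand = getDemandOperations().getDemand(pm, demandKey, null);
if (!State.OPEN.equals(demand.getState())) {
    // Already validated by a previous run (invalid or published): nothing to do
    return;
}
// ... verify the Demand attributes, then set the state to State.INVALID or State.PUBLISHED ...
demand = getDemandOperations().updateDemand(pm, demand);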

What occurred?
  • Without the Always On feature activated, because of the low traffic in my application, the infrastructure delayed the processing of the validation task a bit, and the task was executed once the request processing had finished.
  • Thanks to that soft serialization of the processes, the datastore update triggered by the pm.close() instruction had every chance to complete before the start of the validation task!
  • With the Always On feature activated, the infrastructure was much more likely to hand the validation task to one of the two other application instances... which could happen before the datastore update...
  • Because it started before the datastore update, the validation task found the Demand in the state set by the previous run of the task for this instance: invalid or published. It then exited without reporting any error.

Solutions

The ugly one:
Add a delay before executing the task with the countdownMillis() method.

        // Create a task for that demand validation
        getQueue().add(
            withUrl("/_tasks/validateOpenDemand").
                param(Demand.KEY, demandKey.toString()).
                method(Method.GET).
                countdownMillis(2000)
        );
    }
    finally {
        pm.close();
    }
}

A tricky one:
Use memcache to store a copy of the Demand, which the validation task will use instead of reading it from the datastore. Because there is no guarantee that the entity won't be evicted before the run of the validation task, this is not a solution I can recommend.

The simplest one:
Move the code scheduling the task outside the try...finally... block. The task will then be scheduled only once the updates of the Demand instance have been persisted.

public void processDemandUpdateCommand(Long demandKey, JsonObject command, Long ownerKey) throws ... {
    PersistenceManager pm = getPersistenceManager();
    try {
        ...

        // Update the state so the entity is ready for the validation process
        demand.setState(State.OPEN);

        // Persist them
        demand = getDemandOperations().updateDemand(pm, demand);
    }
    finally {
        pm.close();
    }

    // Create a task for that demand validation
    getQueue().add(
        withUrl("/_tasks/validateOpenDemand").
            param(Demand.KEY, demandKey.toString()).
            method(Method.GET)
    );
}

The most robust one:
Wrap everything within a transaction. When a task is scheduled within a transaction, it is actually enqueued only when the transaction is committed.

Be aware that adopting this solution may require a major refactoring.
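
Here is a sketch of what it can look like with a JDO transaction, assuming the default queue joins the active datastore transaction (the transactional task enqueueing described in the App Engine documentation):

public void processDemandUpdateCommand(Long demandKey, JsonObject command, Long ownerKey) throws ... {
    PersistenceManager pm = getPersistenceManager();
    Transaction transaction = pm.currentTransaction();
    try {
        transaction.begin();

        ...

        // Update the state so the entity is ready for the validation process
        demand.setState(State.OPEN);

        // Persist them
        demand = getDemandOperations().updateDemand(pm, demand);

        // Create a task for that demand validation; it is enqueued only if the transaction commits
        getQueue().add(
            withUrl("/_tasks/validateOpenDemand").
                param(Demand.KEY, demandKey.toString()).
                method(Method.GET)
        );

        transaction.commit();
    }
    finally {
        if (transaction.isActive()) {
            transaction.rollback();
        }
        pm.close();
    }
}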

Conclusion

Now that I understand the issue, I am a bit ashamed of it. In my defense, I should say that the defect was introduced as part of an iteration which came with a series of unit tests. Before the activation of the Always On feature, it stayed undetected, and even afterwards it occurred only rarely.

Anyway, checking the impact of any task scheduled before the related changes have been persisted is now an item on my review checklist.

I hope this helps,
A+, Dom

--
Notes:
* These days, I would start my application with Objectify. This blog post summarizes many of the arguments in favor of Objectify that I also agree with.