Sunday, March 23, 2014

Review of the book: Getting Started with Grunt

Disclaimer: I've been offered an electronic copy of the book in exchange for this review. I have no contractual relationship with Packt Publishing nor with the book's author, Jamie Pillora.

Context

Since early 2000, I've been on the Java technology stack for implementing three-tier applications (Web or mobile clients, J2EE Web server, relational databases). I considered the move from make to ant a real improvement. My first public projects on GitHub still rely on ant.

I was so happy with ant that I switched to maven only around 2010. To me, the main benefit of maven is its dependency management: from the maven repository, I can simply depend on many Java libraries like Google App Engine, RestEasy, and Mockito. I can also get resources for the Web client, like the Dojo Toolkit. Over the years, I wrote a few plugins to cover the same features I used to get from ant.

Almost one year ago, I heard about Grunt, the JavaScript build tool. I liked that many plugins were provided too (the officially supported plugins have a name prefixed with grunt-contrib-). Because I was happy with my maven environment, I started using Grunt only for Stylus and grunt-contrib-watch.

Recently, I joined the game company Ubisoft to work on a project involving Node.js on the server side. My switch to Grunt was immediate ;) I love the extensive set of plugins and the ease of writing my own when needed. The pair npm / Grunt is rock solid: npm manages the dependencies and Grunt runs the tasks.

When Packt Publishing contacted me for a review of the book Getting Started with Grunt, I saw an occasion to consolidate my knowledge of Grunt, to compare what I know with someone else's experience. And reviewing a published book is way more difficult and lengthy than working on drafts, work I did once for the book Google App Engine and GWT ;)

Book content

As revealed by the title, the book targets new users of Grunt, or people evaluating the technology. Judging from my own experience, Grunt power users won't learn much by reading the book.

While the book does not explain how Grunt works internally, it does describe a lot of topics extensively:

  • The transpiling aspect: from CoffeeScript, from Stylus/Sass/Less;
  • The code processing: verification with JSLint and JSHint, minification with Uglify;
  • The code testing: with Mocha and PhantomJS;
  • The deployment: assembling many files into one, sending it over FTP, publishing on Amazon S3;
  • The customization: writing and publishing plugins.

Opinion

This book really targets new Grunt adopters. It helps in understanding the basic tasks, and it also describes how to set up a build environment for Web clients. I think covering more Grunt plugins, like grunt-exec to run non-JavaScript tools, could have established the book as a reference...

The author decided to focus on one type of application: a Web client based on Jade, the HTML template engine. I think describing tasks for the application logic on a Node.js server, like grunt-express or grunt-nodemon, would have interested a wider audience. Grunt is a really versatile tool.

I hope it helps,
A+, Dom

Friday, January 31, 2014

2013 Products I Can't Live Without

Having some spare time before starting a new job at Ubisoft in Montreal, I want to give my old 2009 Products I Can't Live Without post another try.

Update April 27, 2014: like many others, I have decided to stop using the Dropbox service. For now, I use Google Drive and QNAP Qsync.

2013
Blogger
Chrome Canary
Dropbox
Eclipse
Feedly
Firefox
Gimp
Git
GMail
Google App Engine
Google Search
KeePassX
SourceTree
Unity3D
VirtualBox
YouTube

Blogger
This is the tool used to publish this blog. I use WordPress in other projects. WP definitely has a larger feature set, not counting its amazing plugin list (commercial and free). However, Blogger is just fine for the type of editing I do here.


Company: Google
Website: blogger.com
Launch Date: August 18, 2007

Blogger is a blog publishing platform formerly known as Pyra Labs before Google acquired it in February 2003. Blogger blogs are mostly hosted internally with the “dot Blogspot” domain but they can also be hosted externally on a user’s own server.

Blogger provides bloggers with a WYSIWYG editor that allows them to drag-and-drop widgets, change fonts and edit page elements. Also, Feedburner’s feed management tools are tightly integrated with Blogger blogs due to Google’s recent acquisition.

Credits: CrunchBase

Chrome Canary
Google offers the Chrome browser in 4 versions, and Chrome Canary is the most bleeding-edge one. I still use Firefox for most of my Internet browsing, but I now use Chrome for my development-related tasks: Webapp debugging on the desktop, remote debugging on a tablet, device emulation (iPad, Nexus, etc.). As the vast majority of mobile browsers are WebKit-based like Chrome, it's definitely a must-have development tool.


Company: Google
Website: google.ca/intl/en/chrome/browser/canary.html

Google Chrome is based on the open source web browser Chromium, which is based on WebKit. It was accidentally announced prematurely on September 1, 2008 and slated for release the following day. It premiered originally on Windows only, with Mac OS and Linux versions released in early 2010.

Credits: CrunchBase

Eclipse
As an early adopter of IntelliJ IDEA by JetBrains, I had to use Eclipse (the company's tool when working with IBM Rational, the cheap option when working with Compuware). I should recognize that it keeps getting better (especially the refactoring features and the JavaScript support) and it has more plugins than IntelliJ. It is also a platform for OSGi and for Rich Applications.


Company: FLOSS
Website: eclipse.org

Eclipse is a Java-based Integrated Development Environment (IDE). In addition to supporting Java (standard and J2EE), Eclipse supports other programming languages like C++ and Python thanks to plugins. It also offers extensions for developing on Android, BIRT, databases, etc.


Feedly
When Google Reader disappeared, I tried Feedly. It's not a great tool, but it does the job: I can continue to easily read the continuous stream of news from the Internet ;)


Company: Feedly
Website: feedly.com
Launch Date: June 2008

Credits: CrunchBase

Firefox
Having been a Web application developer for a long time, I adopted Firefox (then known as Firebird) in 2003. With the introduction of the Firebug extension (in 2005), it became my primary browser and it has never lost this status. Its early integration of Google search was also a serious advantage. These days, with the faviconize extension and Firefox's ability to start with the previous configuration, my browser always starts with: iGoogle, GMail, Google Calendar.


Company: Mozilla
Website: getfirefox.com
Launch Date: November 9, 2004

Firefox 4 Hits 100 Million Downloads After A Month (4/22/11).

Credits: CrunchBase

Gimp
I never had the budget and training for Adobe Photoshop. So I started using Gimp. If you can get past its weird interface (too many windows, IMO), Gimp offers tons of features for Web application developers: adjusting pictures, generating textures, resizing images, etc. And there are plenty of free tutorials on the Web.


Company: FLOSS
Website: gimp.org

My favorite video series on Photoshop, starting with the first episode: You Suck at Photoshop #1: Distort, Warp, & Layer Effects.


Git
As a developer, I always want to put my code into a source control system. It is not just because I am afraid of my laptop crashing and wasting hours of work. It is mainly because I want to keep track of the update history. At work, over the years, I used ClearCase, CVS, and Subversion. For my personal development, I used Subversion a lot and now I use Git.


Company: FLOSS
Website: git-scm.com

GitHub offers free hosting for open-source projects; charges apply for private hosting.


GMail
When I started working, I dealt with many machines and I hated having to start one just to look at a specific inbox. With GMail, my account is available anywhere. When I read Turn Gmail Into Your Personal Nerve Center, I started to use GMail as my knowledge database.


Company: Google
Website: gmail.com
Launch Date: April 1, 2004

Gmail, also known as Google Mail, is a free email service with innovative features like “conversation view” email threads, search-oriented interface and plenty of free storage (almost 7GB). Gmail opened in private beta mode in April 2004 by invite only. At first, invites were hard to come by and were spotted up for sale on auction sites like eBay. The email service is now open to everyone and is part of Google Apps. Paul Buchheit, an early employee at Google, is given credit for building the product.

Another Gmail feature is the organization, tracking and recording users’ contact lists. For instance, if you start typing the letter C into the “To” field Gmail will bring up a list of every email address and contact name starting with the letter. This feature helps when you can’t quite remember a name. Plus, Gmail automatically adds and updates email addresses and names to your contact list when you write emails.

Credits: CrunchBase

Google App Engine
My first job in Montréal, Canada, was with a small company named Steltor (bought a few years later by Oracle). The core business was the development of a distributed calendar system (servers in cluster, native clients, web client, mobile client, etc.). Since then, I have been used to tracking my work with an electronic calendar. Google Calendar, with its ability to mix many agendas, is excellent.


Company: Google
Website: developers.google.com/appengine
Launch Date: April 2008

Google App Engine offers a full-stack, hosted, automatically scalable web application platform. The service allows developers to build applications in Python, Java (including other JVM based languages such as JRuby) which can then use other Google services such as the Datastore (built on BigTable) and XMPP. The service allows developers to create complete web applications that run entirely on Google’s computing infrastructure and scale automatically as the application’s load changes over time. Google also provides an SDK for local development and site monitoring tools for measuring traffic and machine usage.

Google’s offering competes with Amazon’s Web Services suite, including EC2, S3, SQS, and SimpleDB.

Credits: CrunchBase

Google Search
Google Search is an amazing tool: recently, I was trying to find a solution to a tough technical problem and I found it thanks to Google Search, which pointed toward a blog post written the same day, just a few hours before, in Europe! Incredible... When I give a talk at universities, I often say: “If I ask a question today and you have no clue about the response, that's fine. If you still have no clue tomorrow, you're in trouble...”


Company: Google
Website: google.com
Launch Date: September 4, 1998

Search is Google’s core product and is what got them an official transitive verb addition to the Merriam Webster for “google”. The product is known for its Internet-crawling Googlebots and its PageRank algorithm influenced heavily by linking.

When users type keywords into the home page search box they are returned with relevant results that they can further refine. Google also has more specific search for blogs, images, news and video. Google will also return search results from your own computer files and emails via Google Desktop.

Credits: CrunchBase

KeePassX
KeePassX is an open source password safe, a multi-platform extension of KeePass. I use it in conjunction with Dropbox so my precious list of account credentials is available on all my devices (desktops, tablets, and phones).


Company: FLOSS
Website: keepassx.org


SourceTree
As a developer, I much prefer command line tools like maven and git, along with various ant and Python scripts. Git is a really powerful tool, but I have worked with team members who had difficulties dealing with branches, cherry-picking, and stashing, for example. However, the freeware SourceTree offers a neat interface on top of git, even if I remain more of a command line user than a GUI one.


Unity3D
For my own company AnotherSocialEconomy.com, I only developed native applications for Android. As a software architect at Electronic Arts, in the (now defunct) Play Mantis franchise lab in Montréal, I worked with Unity developers. I then learned about its cross-platform capabilities, its physics engine, and its powerful C# library. Since then, I've created a few projects with Unity and I think it's a powerful ecosystem.


VirtualBox
Developing software sometimes requires specific configurations. Testing them always does (at least to replay the same test cases every time the source control system, like Git, is updated). There are the famous VMWare products (Workstation, Player, ESX) and Microsoft VirtualPC. VirtualBox is an open source product from Sun Microsystems (now Oracle), and it has nice features while being powerful.


Company: Oracle
Website: virtualbox.org

VirtualBox is a family of powerful x86 virtualization products for enterprise as well as home use. Not only is VirtualBox an extremely feature rich, high performance product for enterprise customers, it is also the only professional solution that is freely available as Open Source Software under the terms of the GNU General Public License (GPL).

Presently, VirtualBox runs on Windows, Linux, Macintosh and OpenSolaris hosts and supports a large number of guest operating systems including but not limited to Windows (NT 4.0, 2000, XP, Server 2003, Vista), DOS/Windows 3.x, Linux (2.4 and 2.6), Solaris and OpenSolaris, and OpenBSD.


YouTube
YouTube is famous for its fun videos, but it also hosts technical ones.


Company: YouTube
Website: youtube.com
Launch Date: December 11, 2005

YouTube was founded in 2005 by Chad Hurley, Steve Chen and Jawed Karim, who were all early employees of PayPal. YouTube is the leader in online video, sharing original videos worldwide through a Web experience. YouTube allows people to easily upload and share video clips across the Internet through websites, mobile devices, blogs, and email.

Everyone can watch videos on YouTube. People can see first-hand accounts of current events, find videos about their hobbies and interests, and discover the quirky and unusual. As more people capture special moments on video, YouTube is empowering them to become the broadcasters of tomorrow.

In November 2006, within a year of its launch, YouTube was purchased by Google Inc. in one of the most talked-about acquisitions to date.

Credits: CrunchBase

I hope it helps,
A+, Dom

Thursday, January 30, 2014

Singleton uniqueness issue in a C# generic class

I code in C# within the Unity environment (I really like the co-routine mechanism and the yield return instruction).

At one point, I developed a set of base classes to provide a default behavior to entities that must be exchanged over the wire and cached on the device. I used generic/template classes to be able to create instances of the final classes from within the methods of the superclass. Information saved on the device is signed with a piece of information obtained from the server.

public class LocalCache<T> where T : BaseModel {

    public static string signatureKey { get; set; }

    public T Get(long id) {
        // Get the JSON from PlayerPrefs (the key naming is illustrative)
        string json = PlayerPrefs.GetString(id.ToString());
        // Verify its signature
        // ...
        return (T) Activator.CreateInstance(typeof(T), new object[] { json });
    }

    public void Save(T entity) {
        // Get the JSON representing the data
        // Sign the JSON
        // Save the signed JSON into PlayerPrefs
    }

    public void Reset(long id) { ... }

    private string SignData(string data) { ... }
}

In Java, there would be only one instance of the signatureKey variable per JVM, whatever the number of class instantiations. In C#, I was surprised to find that this field is not unique!

public class LocalCache<T> { // constraint dropped so built-in types can be used below

    // Other code as shown above...

    public static void TestDifferentInstanciations() {
        LocalCache<System.Int64>.signatureKey = "Set for Int64";

        UnityEngine.Debug.Log("Int16: " + LocalCache<System.Int16>.signatureKey);
        // => produces: Int16:

        UnityEngine.Debug.Log("Int16: " + LocalCache<System.Int64>.signatureKey);
        // => produces: Int64: Set for Int64

        UnityEngine.Debug.Log("string: " + LocalCache<System.string>.signatureKey);
        // => produces: string:
    }
}

My surprise comes from the fact that the concept of a generic class only exists for definitions. At runtime, C# only knows constructed types, and the type LocalCache<Int16> is different from the type LocalCache<Int64>. There are therefore multiple copies of the signatureKey field, exactly one per constructed type...
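
For contrast, here is a minimal Java sketch (a hypothetical class, not from my project) showing that, because generics are erased at compile time, a Java generic class keeps a single runtime class and thus a single copy of its static fields:

public class LocalCache<T> {
    public static String signatureKey;

    public static void main(String[] args) {
        LocalCache<Long> a = new LocalCache<Long>();
        LocalCache<String> b = new LocalCache<String>();

        // Type erasure leaves a single LocalCache class at runtime...
        System.out.println(a.getClass() == b.getClass()); // => true

        // ...hence a single static field shared by all parameterizations
        // (Java even forbids qualifying a static member with a type argument)
        LocalCache.signatureKey = "set once";
        System.out.println(LocalCache.signatureKey);      // => set once
    }
}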

The solution is to move the singleton into a separate non-generic class!

internal static class StaticLocalCacheInfo {
    public static string signatureKey { get; set; }
}

public class LocalCache<T> { // constraint dropped so built-in types can be used below

    // The property keeps its original name but delegates to the non-generic holder
    public static string signatureKey {
        get { return StaticLocalCacheInfo.signatureKey; }
        set { StaticLocalCacheInfo.signatureKey = value; }
    }

    // Rest of the business logic...

    public static void TestDifferentInstanciations() {
        LocalCache<System.Int64>.signatureKey = "Set for Int64";

        UnityEngine.Debug.Log("Int16: " + LocalCache<System.Int16>.signatureKey);
        // => produces: Int16: Set for Int64

        UnityEngine.Debug.Log("Int16: " + LocalCache<System.Int64>.signatureKey);
        // => produces: Int64: Set for Int64

        UnityEngine.Debug.Log("string: " + LocalCache<System.string>.signatureKey);
        // => produces: string: Set for Int64
    }
}

Note that I got confirmation of this behavior on the Unity forum.

I hope it helps!
A+, Dom

Tuesday, January 28, 2014

Objectify and conflicting OnSave / OnLoad callbacks

Since the beginning, I've been using Google App Engine. I first tried its Python implementation, then I moved to its Java implementation. The move to Java was dictated by the simplicity of the language and the larger set of tooling to code, test, and debug applications.

My first implementation used JDO as the ORM layer, with my own servlet infrastructure implementing a REST interface. Nowadays, I use Objectify in place of JDO, and RestEasy (with Guice and JAX-RS).

In my latest project, I've implemented two sets of entities:

  • One set tracked with a creation date and a version number;
  • One set which extends the first one and tracks the identifier of its owner.

Both base classes implement an @OnSave callback to control their core information. Here is a short version of these classes.

@Index public abstract class AbstractBase<T> {
    @Id private Long id;
    @Unindex protected Date creation;
    @Unindex protected Long version;

    // Accessors
    // ...

    @OnSave protected void prePersist() {
        if (creation == null) {
            creation = new DateTime().toDate();
            version = Long.valueOf(0L);
        }
        version++;
    }
}

@Index public abstract class AbstractAuthBase<T> extends AbstractBase<T> {
    @Index private Long ownerId;

    // Accessors
    // ...

    @OnSave protected void prePersist() {
        if (ownerId == null || Long.valueOf(0L).equals(ownerId)) {
            throw new ClientErrorException("Field ownerId is missing");
        }
    }
}

When I ran the code above, I faced a weird behavior:

  • The entities of the User class, extending only AbstractBase, had their creation and version fields correctly set when persisted.
  • The entities of the Category class, extending AbstractAuthBase, had neither of these two fields set!

It appears the issue comes from Objectify, which was invoking only one of the two methods, and twice, BTW...

I then looked at the Objectify implementation, precisely at the methods ConcreteEntityMetadata<T>.processLifeCycleCallbacks() and ConcreteEntityMetadata<T>.invokeLifeCycleCallbacks(). In the first method, you can see that the references of the methods annotated with @OnSave and @OnLoad are accumulated in two lists. In the second method, the given list is traversed and each method is applied to the current object.

My issue arose because applying the method prePersist() twice on an instance of the Category class was always invoking the overriding method of AbstractAuthBase<T>! I fixed the issue by renaming the callbacks (one to checkOwnerId() and the other to setCreationAndVersion()).
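
Here is a sketch of the fix: with distinct method names, virtual dispatch no longer hides one callback behind the other, and Objectify invokes each of them as expected.

@Index public abstract class AbstractBase<T> {
    // Same fields and accessors as above...

    @OnSave protected void setCreationAndVersion() {
        if (creation == null) {
            creation = new DateTime().toDate();
            version = Long.valueOf(0L);
        }
        version++;
    }
}

@Index public abstract class AbstractAuthBase<T> extends AbstractBase<T> {
    // Same fields and accessors as above...

    @OnSave protected void checkOwnerId() {
        if (ownerId == null || Long.valueOf(0L).equals(ownerId)) {
            throw new ClientErrorException("Field ownerId is missing");
        }
    }
}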

A+, Dom

Monday, February 13, 2012

SoapUI / REST / JSON / variable namespaces

As a developer of large Web applications, I'm used to relying on a few proven test strategies. When I joined my current development team to work on the project TradeInsight, I was introduced to SoapUI for the functional tests. I like its ability to convert JSON to XML, which then enables writing XPath match assertions.

There's one little caveat related to the conversion and the XPath assertions:
  • When the server response contains a single JSON object, the conversion introduces a namespace into the generated XML.
  • And this namespace depends on the server address.
The following figures show the original JSON response and its conversion to XML with the inferred namespace.
JSON response produced by a REST service.
Transformation into an XML payload with an inferred namespace.

This automatic mapping to a server-dependent namespace does not allow writing server-agnostic assertions if you follow the suggested solution!

The following figures show an XPath match expression as documented in the SoapUI training materials. Sadly, when it runs against XML with a namespace, this error is reported:

XPathContains assertion failed for path [//startDate/text()] : Exception missing content for xpath [//startDate/text()] in Response.

Simple XPath expression with the corresponding error message.
Corrected XPath expression as suggested, now server dependent :(

A simple solution consists in replacing the specified namespace with the meta-character '*', which matches any namespace. As all elements of the XML document are in the scope of the inferred namespace, it's important to prefix all nodes of the XPath expression with '*:' (for example, //*:startDate/text()), as illustrated by the following figure.

Use of the '*:' prefix to produce a server-independent assertion.
I hope this helps.

A+, Dom

Monday, June 27, 2011

A new chapter

Some 18 months ago, I chose to become a full-time entrepreneur, devoting my time and my financial resources to the development of the AnotherSocialEconomy service. It has been a very instructive experience.

Those who know me know how important it was for me to be truly in charge of my professional destiny:
  • Build a tool that offers real value to its group of users.
  • Base decisions on facts and their consequences, without leaving room for political interference (so paralyzing in large companies).
  • Use technology to serve the users first, then to build the services and ease their development. In that regard, the Lean Startup methodology and its Build-Measure-Learn principle offer an excellent framework.
  • Use the Agile methodology (without forgetting the XP tools) to develop a large-scale application.

With my partner Steven, it has been very interesting to define my own goals and my work organization, and to lead the development of the company.

But a month ago, mainly because of the lack of any prospect of stable revenue in the near future, I started looking for contracts and permanent positions. Sure, it's heartbreaking, but the tool is in production and I can make it evolve in my free time. So, applying a Lean principle once more ("Failure to change is a vice!", Hiroshi Okuda, President of Toyota Motor Corp.), I'm diving back into the salaried world.

Finally, after several interviews, I chose the company MEI, which develops a Web platform to help its customers (producers of consumer packaged goods) follow their promotion campaigns. Until two years ago, MEI mainly offered its service to large manufacturers. Now, with a SAAS offering, MEI is developing a new version for small and mid-size producers, under the TradeInsight brand. The synergy between my experience (Java, JavaScript, cloud, mobile, Agile) and the team is undeniable.

By the way: MEI is still hiring!

The AnotherSocialEconomy adventure continues, in my spare time and through my interactions with the groups of the Montreal technical community (NewTech, Android, NodeJS, etc.). For more information about possible partnerships, please get in touch with Steven.

A+, Dom

Tuesday, June 7, 2011

OAuth authorization handling in an Android application

Note: This post is part of the series "Lessons learned as an independent developer". Please refer to the introduction for more information.

Context

AnotherSocialEconomy APIs are based on standards as much as possible: OpenID and OAuth for the authentication and authorization, HTTP-based REST interface (1, 2) for the communication protocol, JSON for the data payload format, etc.

This post is about setting up an Android application which gets authorization tokens from an OAuth provider.

OAuth provider

There are many known OAuth providers like Netflix, Twitter, Facebook (coming to OAuth 2.0 soon), Yahoo!, Google, etc. While these providers are convenient, they don't offer much flexibility when some debugging is required.

For this experiment, I'm going to use Google App Engine Java and its OAuth support. For a complete walk-through, refer to Ikai Lan's post: Setting up an OAuth provider on Google App Engine, especially the part which describes how to get the public and secret keys for your client application to sign communications with the provider.

OAuth client - Work flow

Strategy: Upon creation, the process checks if the authorization tokens have been saved as user preferences.
  • If they are present, they are loaded and used to sign each future communication with the application on the server.
  • If they are missing, the OAuth work flow is triggered:
    1. With the server application keys, a signed request is sent to get a temporary request token.
    2. With this request token, a URL to the authentication page is requested and an Intent is created to load the corresponding page in a browser. At this step, the application is stopped.
    3. The user enters his credentials in the browser and grants access rights to the mobile application. The return URL has a custom format: ase://oauthresponse.
    4. The mobile application, which has an Intent registered for that custom URL, is restarted and given the return URL. A verification code is extracted from this URL.
    5. The verification code is used to issue a signed request asking for the access tokens.
  • The access tokens are saved as part of the user preferences only if the user selected the 'Remember me' option.

Figure 1: Authorization work flow

Alternative: If the mobile application offers anonymous services, like browsing the list of registered stores in the case of AnotherSocialEconomy.com, it can be friendlier to delay the authorization verification.

OAuth client - Initiating the authorization process (1, 2, 3)

To simplify the application development, I have decided to use oauth-signpost, a library provided by Matthias Käppler who wanted a slick and simple way to access Netflix services.
Signpost is the easy and intuitive solution for signing HTTP messages on the Java platform in conformance with the OAuth Core 1.0a standard. Signpost follows a modular and flexible design, allowing you to combine it with different HTTP messaging layers.
Note that this library is also good for managing Twitter accounts remotely.

This section is about initiating the authorization process, which occurs if the application is not called by the application on the server (with the verification code, see next section) and if the OAuth token could not be found in the user preferences. This is the path with the steps {1, 2, 3} in Figure 1.

if (!justAuthenticated && Preferences.get(Preferences.OAUTH_KEY, "").length() == 0) {
    // Display the pane with the warning message and the sign in button
    setContentView(R.layout.main_noauth);

    // Update the 'Remember me' checkbox with its last saved state, or the default one
    final String saveOAuthKeysPrefs = Preferences.get(Preferences.SAVE_OAUTH_KEYS, Preferences.SAVE_OAUTH_KEYS_DEFAULT);
    ((CheckBox) findViewById(R.id.app_noauth_keepmeconnected)).setChecked(Preferences.SAVE_OAUTH_KEYS_YES.equals(saveOAuthKeysPrefs));

    // Attach the event handler that will initiate the authorization process up to opening the browser with the authorization page
    findViewById(R.id.app_noauth_continue).setOnClickListener(new OnClickListener() {
        @Override
        public void onClick(View v) {
            // Check if the 'Keep me connected' check box state changed and save its new state
            boolean keepMeConnected = ((CheckBox) findViewById(R.id.app_noauth_keepmeconnected)).isChecked();
            if (Preferences.SAVE_OAUTH_KEYS_YES.equals(saveOAuthKeysPrefs) != keepMeConnected) {
                Preferences.set(Preferences.SAVE_OAUTH_KEYS, keepMeConnected ? Preferences.SAVE_OAUTH_KEYS_YES : Preferences.SAVE_OAUTH_KEYS_NO);
            }
                    
            // Set up the OAuth library
            consumer = new CommonsHttpOAuthConsumer("<your_app_public_key>", "<your_app_secret_key>");

            provider = new CommonsHttpOAuthProvider(
                    "https://<your_app_id>.appspot.com/_ah/OAuthGetRequestToken",
                    "https://<your_app_id>.appspot.com/_ah/OAuthAuthorizeToken",
                    "https://<your_app_id>.appspot.com/_ah/OAuthGetAccessToken");
                    
            try {
                // Steps 1 & 2:
                // Get a request token from the application and prepare the URL for the authorization service
                // Note: the response is going to be handled by the application <intent/> registered for that custom return URL
                String requestTokenUrl = provider.retrieveRequestToken(consumer, "ase://oauthresponse");

                // Step 3:
                // Invoke a browser intent where the user will be able to log in
                startActivity(new Intent(Intent.ACTION_VIEW, Uri.parse(requestTokenUrl)));
            }
            catch(Exception ex) {
                Toast.makeText(Dashboard.this, R.string.app_noauth_requesttoken_ex, Toast.LENGTH_LONG).show();
                Log.e("Dashboard no auth", "Cannot initiate communication to get the request token\nException: " + ex.getClass().getName() + "\nMessage: " + ex.getMessage());
            }
        }
    });
}

Figure 2 below illustrates the pane main_noauth displaying the warning message and the action button, and figure 3 shows the authorization page as provided by Google for the hosted applications on App Engine.


Figure 2: Pane displayed if application not yet authorized

Figure 3: Google authorization page

Whatever action the user takes, the application is going to be called with the URL ase://oauthresponse. The next section covers this work flow path.

OAuth client - Processing the authorization (4, 5)

The application is registered with an Intent associated with the scheme ase and the host oauthresponse. The labels themselves are not important, only their uniqueness and their correspondence with the return URL specified at Step 2.

<intent-filter>
    <action android:name="android.intent.action.VIEW"/>
    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.BROWSABLE"/>
    <data android:scheme="ase" android:host="oauthresponse"/>
</intent-filter>

The following code snippet implements the steps 4 and 5 as described in Figure 1.

private boolean checkOAuthReturn(Intent intent) {
    boolean returnFromAuth = false;
    Uri uri = intent.getData();

    if (uri != null && uri.toString().startsWith("ase://oauthresponse")) {
        // Step 4:
        // Extract the verification code returned by the authentication page
        String code = uri.getQueryParameter("oauth_verifier");
            
        try {
            // Step 5:
            // Retrieve the access tokens directly
            provider.retrieveAccessToken(consumer, code);
            returnFromAuth = true;
                
            // Persist the tokens
            if (Preferences.SAVE_OAUTH_KEYS_YES.equals(Preferences.get(Preferences.SAVE_OAUTH_KEYS, Preferences.SAVE_OAUTH_KEYS_DEFAULT))) {
                Preferences.set(Preferences.OAUTH_KEY, consumer.getToken());
                Preferences.set(Preferences.OAUTH_SECRET, consumer.getTokenSecret());
            }
        }
        catch(Exception ex) {
            Toast.makeText(Dashboard.this, R.string.app_noauth_accesstoken_ex, Toast.LENGTH_LONG).show();
            Log.e("Dashboard no auth", "Cannot complete communication to get the request token\nException: " + ex.getClass().getName() + "\nMessage: " + ex.getMessage());
        }
    }
       
    return returnFromAuth;
}

The Dashboard class definitions are available in a gist on GitHub. This gist also contains a wrapper of the SharedPreferences class, the application manifest with the declaration of the Intent for the custom return URL, and the layout definition of the pane with the warning and the sign in button.

OAuth Client - The quirks

My Android application is very simple and is configured with the launch mode singleTop. As such, if the system does not destroy the application when the code starts an activity to browse the authentication service URL, the invocation of the ase://oauthresponse URL by the browser should trigger a call to the onNewIntent() method. That never happened during my tests on my phone... Every time, the application was recreated and a call to onCreate() was issued. So both methods delegate to the helper checkOAuthReturn().

@Override
protected void onNewIntent(Intent intent) {
    checkOAuthReturn(intent);
}

In this example, I've decided to select the view associated with the first screen of the application based on the availability of the OAuth access token (read from the user preferences, or retrieved dynamically thanks to the verification code coming with the ase://oauthresponse URL). The following snippet illustrates this flow. On some occasions, it can be better to start a separate activity rather than instrumenting the main pane to disable the triggers of protected actions. The approach with a separate activity is also better for portability.

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    Preferences.setPreferenceContext(PreferenceManager.getDefaultSharedPreferences(getBaseContext()));

    boolean justAuthenticated = checkOAuthReturn(getIntent());
        
    if (!justAuthenticated && Preferences.get(Preferences.OAUTH_KEY, "").length() == 0) {
        setContentView(R.layout.main_noauth);

        // Instrumentation of the pane to initiate the authorization process on demand
        // ...
    }
    else {
        setContentView(R.layout.main);
    }
}

I hope this helps.
A+, Dom

Thursday, June 2, 2011

Sharing experience in Android application development

Context

Note: This article is the first of a series entitled "Lessons learned as an independent developer". The discussion in this article stays at a general level. The following articles will go deeper and will be illustrated with code snippets.

Within my company AnotherSocialEconomy.com, I had the opportunity to put several good practices to work and I'm going to share a few of them here. I will notably focus on the developments around the Android client application, from user identification to the emission of asynchronous notifications.

AnotherSocialEconomy.com, or ASE, offers a service connecting consumers and retailers:
  • Consumers looking for a product or service just have to describe their request from one of the application's many entry points: a Web page made for AdWords, the ASE site or an affiliate's, the Facebook application page, a direct message from Twitter, etc.
  • Participating retailers are notified of the requests according to their preferences. Retailers are free to make one or more proposals depending on their availability.
  • As proposals are composed, consumers are notified and can decline or confirm them at any time.
  • Confirmations are relayed to the retailers, who then reserve the product or service for the consumer.
  • The consumer just has to pay and take possession of it.
To put it simply: ASE connects consumers with the retailers who have the products or services they are looking for, by inverting the search process.

The ASE engine is currently coded in Java and hosted on the Google App Engine infrastructure. In the rest of this article, to keep things general, the ASE engine is referred to as the server application.
User management on the server

From the start, the user service of the server application has relied on OpenID. With OpenID, user identification is delegated to trusted third-party services (Yahoo!, Google, AOL, etc.) without the server application ever seeing the users' passwords, only their OpenID identifiers. This management mode also solves several problems:
  • Users don't have to create an umpteenth account for the ASE service.
  • The server is less at risk because no passwords are stored there.
  • Backup management is simpler (again because there are no passwords).
  • If their password is compromised, users can certainly rely better on the services of their OpenID provider than on mine :)
Later in the development cycle, notably because a Facebook application had to be developed, the identification mechanisms of Facebook, Twitter, and Microsoft Live were integrated into the server application. Of the three, Twitter's mechanism is the most standardized (OAuth), also giving access to the data of the service's user. But all of them were integrated so as to act as OpenID services.

OpenID is a good identification system for a Web client application. With the browsers' security restrictions (SSL and sandbox), once the user's identity has been confirmed by an OpenID provider, and as long as this identity stays associated with the Web session, sending data to the browsers remains protected.

On the other hand, when the client application is native (on a computer or on a mobile phone), it is not possible to rely on a Web session mode as robust as the browsers' one. A malicious application could intercept the session identifier and use it without the user's knowledge. To guard against this attack, it is preferable to use OAuth, which signs each exchange between the client application and the server, making the use of the Web session identifier pointless.

User authentication on the client

Each Android phone is associated with one user. If the carrier's SIM card is changed, the data of the previous user is no longer accessible. Each application has access to its own protected storage space, but the user can reclaim this space at any time. It is therefore not a long-term storage solution.

In the OAuth authentication model, data exchanges are signed by the client application thanks to a token issued by the server application. Thanks to the signature, the server application is assured of the user's identity at each data exchange.

To get a token, the protocol the client application has to follow is relatively simple:
  • Issue a request to receive a first token, the so-called request token.
  • This token is used to initiate a call to an authorization page.
  • The server application then presents an identification page where the user must, if not already authenticated, enter his identifier and password, and then accept that the client application accesses the data managed by the server application.
  • The server application returns a second token attesting that the user accepted the data access. This token has a limited lifetime.
  • This second token can be used to obtain two tokens (a public key and a secret key) which will allow the client application to sign the data exchanges, so the server application can associate them with the corresponding user.
  • Often these two tokens have a long lifetime (no expiration in Twitter's case), so they can be saved by the client application to transparently sign all future exchanges.
  • However, the user can revoke these two tokens at any time, and they can also expire at any time (because of a strategy change on the server application side, for example), so the client must be ready to run the process again to obtain two new tokens at any moment.
It is important to note that the authentication tokens must be saved very securely. It is not acceptable to save them in a simple text file located on a memory expansion card, for example. If the risk of access to these tokens is too high, it is better to replay the scenario above to get a new set of tokens.
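
As a minimal sketch with the oauth-signpost library presented in the post above (the endpoint /API/Demand is hypothetical), signing an exchange with the saved token pair looks like this:

public HttpResponse getDemands(String savedToken, String savedTokenSecret) throws Exception {
    // Set up the signer with the application keys and the saved token pair
    OAuthConsumer consumer = new CommonsHttpOAuthConsumer("<your_app_public_key>", "<your_app_secret_key>");
    consumer.setTokenWithSecret(savedToken, savedTokenSecret);

    HttpGet request = new HttpGet("https://<your_app_id>.appspot.com/API/Demand");
    consumer.sign(request); // Adds the OAuth Authorization header

    return new DefaultHttpClient().execute(request);
}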

At the time I write this article, the hardware of the Samsung S series and the Motorola Xoom tablet have encrypted file systems. To my knowledge, even Android 3.1 still does not offer a low-level solution with maximum security...

Receiving asynchronous notifications

While more and more chip makers emphasize the power of the central processor (Qualcomm) and the number of cores (NVidia just announced a Tegra with 4 cores), and while the increase in bandwidth (from HSPA+ to LTE, for example) allows faster and faster data exchanges even far from any computer network, the energy capacity of modern mobile phones remains their weak point. In the past, I had Nokia and Sony Ericsson phones able to stay on standby for more than a week. Now I have to plug in my HTC Desire phone every night, even with fairly limited browsing!

Under these conditions, keeping an application awake to query the server application at regular intervals (the so-called polling technique) must be avoided.

Two years ago, while developing an application for the BlackBerry 5 platform, I used the following technique:
  • The client application on the phone listened for a number of system messages (network type change, network loss, etc.) and collected them in an internal database.
  • The server application decided when these statistical data should be transmitted, by sending an SMS to each phone.
  • Upon receiving this SMS, the client application opened an HTTP connection to transmit its collected data in a burst.
  • Once the data set had been repatriated from each phone, the server application built coverage reports for the carrier.
Since version 2.2, there is the AC2DM protocol: Android Cloud to Device Messaging. When a client application configured for AC2DM initializes, it must register with the local AC2DM service and receives a registration identifier in return. It is the client application's responsibility to send this identifier to the server application, so the latter has the key to send asynchronous notifications to this client application, and to it only.
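
As a minimal sketch (typical C2DM usage of the era, not code from this post), the registration is done by sending a registration Intent; the sender account below is a placeholder for the role account the server application uses to push the notifications:

// Somewhere in the client application startup code (e.g., in the main Activity)
Intent registrationIntent = new Intent("com.google.android.c2dm.intent.REGISTER");
registrationIntent.putExtra("app", PendingIntent.getBroadcast(this, 0, new Intent(), 0));
registrationIntent.putExtra("sender", "sender-role-account@gmail.com");
startService(registrationIntent);

// The registration identifier comes back asynchronously, through a broadcast
// receiver listening for com.google.android.c2dm.intent.REGISTRATION; it must
// then be forwarded to the server application.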

In some ways, the AC2DM approach is similar to my SMS activation method. It may even use that technique under the hood ;) The main difference lies in the service aspect: with AC2DM, the client application does not have to stay active to receive the notifications; the local notification service will activate it when needed.

The application for consumers

The client application for consumers must offer several features:
  • receive the notifications about pending requests and proposals
  • manage the list of pending requests and proposals
  • create new requests
  • with access to the phone's address book, to be able to put "friends" in copy of the requests
  • with access to the phone's geolocation system, to ease the creation of the requests
  • modify or cancel pending requests
  • confirm or cancel pending proposals
The main purpose of the client application on mobile phones is to relay the request update notifications, in reaction to the reception of new proposals or of proposal updates from the retailers. In a few "clicks", the user must be able to quickly access the details of the request concerned, the details of the proposal, and information about the retailer's store or office. To ease this access, most of the information is saved on the phone as it is needed. To keep a structure close to the data model produced by the server application, the storage used is the mobile's internal database service (SQLite on Android, for example).
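
As an illustration (hypothetical table and class names, not code from this post), such a cache can rely on a plain SQLiteOpenHelper:

public class DemandStore extends SQLiteOpenHelper {

    public DemandStore(Context context) {
        super(context, "ase.db", null, 1);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        // One row per request, mirroring the JSON produced by the server application
        db.execSQL("CREATE TABLE demand (id INTEGER PRIMARY KEY, json TEXT)");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        // Simplistic migration strategy, acceptable for a cache: rebuild from scratch
        db.execSQL("DROP TABLE IF EXISTS demand");
        onCreate(db);
    }
}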

The application for retailers

The client application for retailers must offer several features:
  • receive the notifications of new requests
  • create and manage proposals (possibly with access to the camera to scan bar codes)
  • confirm the deliveries
  • manage the list of pending requests and proposals
Because the services offered to consumers are very different from those offered to retailers, they are provided in two different applications. This reduces the risk of context confusion for users acting both as a consumer and as a retailer.

To be continued...

In the next articles, I will describe in detail the various implementations I realized. Several techniques are not obvious, like the one handling the authorization with OAuth, and I imagine this will be useful to many developers ;)

A+, Dom

Saturday, April 16, 2011

Google App Engine, scheduled tasks, and persisting changes into the datastore: the risk of a race condition

This post is about a race condition I've accidentally discovered and hopefully fixed. It occurred in App Engine and was generated by tasks I created for immediate execution...

Context

When I started developing in Java for Google App Engine, I decided to give JDO a try, mainly because it is datastore agnostic (*). Operations managing my entities are organized in DAOs with sets of methods like the following.

public Demand update(Demand demand) {
    PersistenceManager pm = getPersistenceManager();
    try {
        return update(pm, demand);
    }
    finally {
        pm.close();
    }
}

public Demand update(PersistenceManager pm, Demand demand) {
    // Check if this instance comes from memcache
    ObjectState state = JDOHelper.getObjectState(demand);
    if (ObjectState.TRANSIENT.equals(state)) {
        // Get a fresh copy from the data store
        ...
        // Merge the old copy attributes into the fresh one
        ...
    }
    // Persist the changes
    return pm.makePersistent(demand);
}

I knew that changes are persisted only when the PersistenceManager is closed; closing it after an update is a safe attitude. I decided anyway, for clarity, to separate the management of the PersistenceManager instance from the business logic updating the entity.

This decision offers the additional benefit of being able to share a PersistenceManager instance among many operations. The following code snippet illustrates my point: a unique PersistenceManager instance is used for two entity loads and one save.

public void processDemandUpdateCommand(Long demandKey, JsonObject command, Long ownerKey) throws ... {
    PersistenceManager pm = getPersistenceManager();
    try {
        // Get the identified demand (can come from the memcache)
        Demand demand = getDemandOperations().getDemand(pm, demandKey, ownerKey);

        // Check if the demand's location is changed
        if (command.contains(Location.POSTAL_CODE) || command.contains(Location.COUNTRY_CODE)) {
            Location location = getLocationOperations().getLocation(pm, command);
            if (!location.getKey().equals(demand.getLocationKey())) {
                command.put(Demand.LOCATION_KEY, location.getKey());
            }
        }

        // Merge the changes
        demand.fromJson(command);

        // Validate the demand attributes
        ...

        // Persist them
        demand = getDemandOperations().updateDemand(pm, demand);

        // Report the demand state to the owner
        ...
    }
    finally {
        pm.close();
    }
}

For my service AnotherSocialEconomy which connects Consumers to Retailers, the life cycle for a Demand is made of many steps:
  • State open: raw data just submitted by a Consumer;
  • State invalid: one verification step failed, requires an update from the Consumer;
  • State published: verification is OK, and Demand broadcasted to Retailers;
  • State confirmed: Consumer confirmed one Proposal; Retailer reserves the product for pick-up, or delivers it;
  • State closed: Consumer notified the system that the transaction is closed successfully;
  • State cancelled: ...
  • State expired: ...

In my system, some operations take time:
  • Because of some congestion in the environment, which occurs sometimes when sending e-mails.
  • Because some operations require a large data set to be processed, like when a Demand has to be broadcast to selected Retailers.

Because of this time constraint and the 30-second request limit, I decided to use tasks extensively (tasks can run for up to 10 minutes). As a side benefit, my code is very modular now, easier to maintain and test.

So I updated my code to trigger a validation task once the Demand has been updated with the raw data submitted by the Consumer. The code snippet below shows the task scheduling in the context of the processDemandUpdateCommand() method illustrated above.

public void processDemandUpdateCommand(Long demandKey, JsonObject command, Long ownerKey) throws ... {
    PersistenceManager pm = getPersistenceManager();
    try {
        ...

        // Update the state so the entity is ready for the validation process
        demand.setState(State.OPEN);

        // Persist them
        demand = getDemandOperations().updateDemand(pm, demand);

        // Create a task for that demand validation
        getQueue().add(
            withUrl("/_tasks/validateOpenDemand").
                param(Demand.KEY, demandKey.toString()).
                method(Method.GET)
        );
    }
    finally {
        pm.close();
    }
}

Issue

Until I activated the Always On feature, no issue had been reported for that piece of code: my unit tests worked as expected, my smoke tests were fine, the live site behaved correctly, etc.

Then the issue started to appear randomly: sometimes, updated Demand instances were not processed by the validation task anymore! A manual trigger of this task from a browser or curl, however, had the expected result...

To keep the task idempotent, the state of the Demand instance to be validated is checked: if it is open, the Demand attributes are verified and the state ends up being set to invalid or published. Otherwise, nothing happens. With that approach, Demands already validated are not processed a second time...
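
A minimal sketch of that guard (hypothetical method names, following the DAO pattern shown above):

public void validateOpenDemand(Long demandKey) {
    PersistenceManager pm = getPersistenceManager();
    try {
        Demand demand = getDemandOperations().getDemand(pm, demandKey, null);
        // Idempotency guard: only process Demands still in the open state
        if (!State.OPEN.equals(demand.getState())) {
            return; // Already validated (invalid or published): exit silently
        }
        // attributesAreValid() stands for the actual verification logic
        demand.setState(attributesAreValid(demand) ? State.PUBLISHED : State.INVALID);
        getDemandOperations().updateDemand(pm, demand);
    }
    finally {
        pm.close();
    }
}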

What occurred?
  • Without the Always On feature activated, because of the low traffic of my application, the infrastructure was slightly delaying the processing of the validation task, and the task was executed once the request processing had finished.
  • Thanks to that soft serialization, the datastore update triggered by the instruction pm.close() had every chance to complete before the validation task started!
  • With the Always On feature activated, the infrastructure was much more likely to have one of the two other application instances process the validation task... which could start before the datastore update...
  • As it started before the datastore update, the validation task found the Demand in the state set by the previous run of the task for this instance: invalid or published. It then exited without reporting any error.

Solutions

The ugly one:
Add a delay before executing the task with the countdownMillis() method.

        // Create a task for that demand validation
        getQueue().add(
            withUrl("/_tasks/validateOpenDemand").
                param(Demand.KEY, demandKey.toString()).
                method(Method.GET).
                countdownMillis(2000)
        );
    }
    finally {
        pm.close();
    }
}

A tricky one:
Use memcache to store a copy of the Demand, which the validation task will use instead of reading it from the datastore. Because there's no guarantee that your entity won't be evicted before the validation task runs, this is not a solution I can recommend.

The simplest one:
Move the code scheduling the task outside the try...finally... block. The task will then be scheduled only after the updates of the Demand instance have been persisted.

public void processDemandUpdateCommand(Long demandKey, JsonObject command, Long ownerKey) throws ... {
    PersistenceManager pm = getPersistenceManager();
    try {
        ...

        // Update the state so the entity is ready for the validation process
        demand.setState(State.OPEN);

        // Persist them
        demand = getDemandOperations().updateDemand(pm, demand);
    }
    finally {
        pm.close();
    }

    // Create a task for that demand validation
    getQueue().add(
        withUrl("/_tasks/validateOpenDemand").
            param(Demand.KEY, demandKey.toString()).
            method(Method.GET)
    );
}

The most robust one:
Wrap everything within a transaction. When a task is scheduled within a transaction, it's really enqueued only when the transaction is committed.

Be aware that adopting this solution may require a major refactoring.
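
A minimal sketch of this approach with a JDO transaction (same helpers as above; on App Engine, a task added while a datastore transaction is active is enqueued only if the commit succeeds):

public void processDemandUpdateCommand(Long demandKey, JsonObject command, Long ownerKey) {
    PersistenceManager pm = getPersistenceManager();
    Transaction tx = pm.currentTransaction();
    try {
        tx.begin();

        Demand demand = getDemandOperations().getDemand(pm, demandKey, ownerKey);
        demand.setState(State.OPEN);
        demand = getDemandOperations().updateDemand(pm, demand);

        // Enqueued within the transaction: the task is really queued
        // if and only if the commit below succeeds
        getQueue().add(
            withUrl("/_tasks/validateOpenDemand").
                param(Demand.KEY, demandKey.toString()).
                method(Method.GET)
        );

        tx.commit();
    }
    finally {
        if (tx.isActive()) {
            tx.rollback();
        }
        pm.close();
    }
}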

Conclusion

Now that I understand the issue, I'm a bit ashamed of it. In my defense, I should say the defect was introduced as part of an iteration which came with a series of unit tests. Before the activation of the Always On feature, it stayed undetected; afterwards, it occurred only rarely.

Anyway, verifying the impact of all task scheduling calls made before changes are persisted is now one point on my review checklist.

I hope this helps,
A+, Dom

--
Notes:
* These days, I would start my application with Objectify. This blog post summarizes many arguments in favor of Objectify that I agree with too.

Thursday, April 14, 2011

State of the AnotherSocialEconomy Initiative

When my partner Steven and I started our startup adventure a few years ago, our main goal was to demonstrate our ability to convert an idea into a live project. As we used our experience to build a viable product, we knew it would add value to our resume.

Over the months, the project evolved slowly:
  • The core idea is: help consumers who look for a specific product find the retailer who has it in stock, and help retailers connect with consumers online and drive them in-store. Our motto: the missing link between shopping online and buying offline.
  • The proof-of-concept was made of screenshots, live Twitter accounts, and a piece of Python code connecting those accounts together. This material allowed us to be among the semi-finalist companies of TechCrunch50 in 2009!
  • The first implementation of the engine connected consumers and retailers, each of them interacting with the system through direct messages (DMs) sent from their own Twitter account. At that time, the tool was named Twetailer.
  • Later, we figured out Twitter was too geeky and we added a connector to accept and generate e-mails. Since then, the engine has gained an XMPP (instant messaging) connector, another one for Facebook, and a plan for VOIP (with Twilio).
  • At one point, we were approached to start an experiment for golfers: usually avid golfers have to spend a lot of time on the phone to get three buddies to play with and to book a tee-time. In two months, we created ezToff.com, developed an embeddable widget to ease the creation of a tee-off request, and developed a Web console for the golf course staff. The experiment was shut down because of a lack of traction...
  • Recently, we started another experiment in the used car market, under the name AnotherSocialEconomy. Our market research found that the average time to buy a used car is six weeks in Quebec. Typically, consumers start on the Web, grab listings, and call dealerships one after the other. In the Montreal area, there are 300,000 pre-owned cars bought per month: ⅓ from dealerships, ⅓ from wholesalers, and ⅓ from individuals. Dealerships control 45% of the market value.
  • So far, this experiment has been a partial success: we get demands from consumers and forward proposals from used car sales people. We helped our first customer find a car in only two weeks! But, as the dealership staff is not used to new technologies (sic), we manage the service for them and it's very time consuming...
What's next?

We are very happy with the consumers trusting us. We work hard to continue improving their experience, on our landing pages and in our communications by e-mail. The priority is to have them qualify their demands more upfront.

Our focus right now is more on the retailer side, in order to have sales people in the dealerships interacting with the system by e-mail too. While around 80% of them agree to work with us to serve our users, we still have to prepare the proposal details and reach out to them for approval. For the business to scale, they should prepare and post the proposals themselves.

For now, we need more data to determine trends. This is a prerequisite for used car dealers to adopt our methodology. It is also possible that this will lead to another pivot.

Lessons learned?

The first one is obvious now: nobody can be as committed as the founders! Since I left the company Compuware to become a full time entrepreneur, Steven and I have met many people we expected to work or partner with: a technology company CEO, a former manufacture owner now a real estate agent, a UX designer, a few VCs, a marketer, two successful startup founders, etc. While we sometimes got excellent feedback, none of them joined us.

The second one is related to the product development: two techies are not enough to make a great product! They can talk about their product at length, but they don't know how to convince decision makers. They need the help of a marketing genius!

Another one is related to the importance of the contact network. If you don't know the right people, very few will listen to you. Having a large address book or friends with deep pockets definitely helps a lot.

And the last one: developing a tool for the general public is difficult! Following the Lean Startup process can really help. Check Ash Maurya's blog, for example.

Technologies learned?

I continue to find Google App Engine an awesome environment. The recent addition of the Channel API which allows the back-end logic to push asynchronous notifications into Web consoles really improves the user experience. On the maintenance side, I appreciate the Java Remote API which simplifies the development of maintenance and data extraction tasks.

Web console side, I've started upgrading the code to Dojo 1.6 and its new HTML5 compliant syntax. I don't use the AMD loader yet, but I'm waiting for the one coming with 1.7. I have recently started to use Selenium 2 for my smoke tests and I really like it!

Mobile side, I wish I had more spare time to update my Android application for ezToff and to benefit from the Android Cloud To Device Messaging (C2DM) API. But I'm also thinking of building applications with the awesome dojox.mobile.

To develop our customer base in the used car experiment, we have created two AdWords campaigns: one for each language, both in the Montreal area. Using AdWords and optimizing the campaigns was very instructive. There are many concepts to master: long tail, auto bid, average CPC, conversion rate, landing page quality score, etc. I now understand why so many people choose to become an AdWords certified partner ;)