Showing posts with label Technology.

Thursday, January 6, 2011

Wrapping up 2010, preparing 2011

2010 Summary

2010 was an interesting year for me professionally. Inspired by similar lists online, I present what I did (or at least what I can remember):
  1. Left Compuware and joined my partner Steven at Milstein and Associates inc. at the end of January, to focus 150% of my time on AnotherSocialEconomy (formerly known as Twetailer).
  2. Adapted the Amazon Flexible Payment Service (FPS) library to the App Engine environment—freely available on github.
  3. Refactored the communication layer to be able to send detailed e-mails to customers, in addition to the short ones sent over Twitter and Instant Messaging services.
  4. Built the first Web consoles for Golf players and Golf courses staff, based on Dojo and using the freshly delivered REST API—check ezToff.com.
  5. Built the first Android application for Golf players using their GPS & Address book to ease the tee-off booking process with AnotherSocialEconomy—freely available on github.
  6. Helped prepare pitches to Golf Canada representatives and to Golf staff members and owners.
  7. Developed the AnotherSocialEconomy widget, ready to be embedded in participant websites and to load the AnotherSocialEconomy wizard on demand.
  8. Reviewed the book Google App Engine Java and GWT Application Development.
  9. Continued to develop my open-sourced library offering tools for globalizable generic resource bundles (TMX)—on github too.
  10. Developed a prototype of a Facebook application.
  11. Augmented the AnotherSocialEconomy engine to support used-car dealers: buyers don't buy immediately, but collect car information and offers for a while before committing to one dealer, so the engine workflow was adapted to support this slower path of interaction.
  12. Attended presentations to a few car dealership owners.
  13. Attended meetings with various mentors and potential investors.
  14. Attended meetings of the Montreal NewTech, Android Montreal, and Augmented Reality Montreal communities.

I’m pretty happy with what I have done so far and am looking forward to doing even more.

New technologies

It was also fun to play around with some hot new technologies:
  • Ubuntu 10.04 and 10.10
  • Android 2.2 and push mechanism on my HTC Desire
  • App Engine 1.4.0 and the Channel API
  • Node.js and WebSockets

2011 Goals and Plans

2011 is going to be critical for AnotherSocialEconomy. The application runs and has passed usability tests. The focus is now on business development!
  1. Concentrate on one domain (the used-car market) and get significant traffic in the Montreal area.
  2. Gather customer feedback (from consumers looking for second-hand cars and from used-car dealers), tune the system, and increase traffic. Repeat until 100% satisfaction ;)
  3. Once the system is proven by traffic and testimonials, involve investors and/or partners to 1) expand the business to other areas, 2) target another domain, or 3) expand both geographically and vertically.
  4. Develop data mining tools for retailers.
  5. Develop domain oriented interfaces for consumers (Web/HTML5 for tablets and PC, native apps for iPhone, Android, BlackBerry).
  6. Add more communication channels (voice messages with Twilio, for example).
  7. Offer my services as a consulting Software Architect & Developer, designing and developing highly scalable and highly available applications on Google App Engine, and mobile applications on Android.
A+, Dom

Friday, December 3, 2010

The reviewed book 'Google App Engine Java and GWT Application Development' is out!

I know it's a pity not to post more regularly! It's just that I'm too busy with the developments for AnotherSocialEconomy.com ;)

Here is a bit of news for Google App Engine developers:
Over the summer, I was asked to review the draft of the book Google App Engine Java and GWT Application Development. Even with my experience, I learned a few techniques, like the handling of object relationships (chapter 5). A very good book for beginners and intermediates, and still an interesting book for experts.


Enjoy!
A+, Dom

Note: I have no incentive to sell the book, just the pleasure of sharing a good reference ;)

Monday, September 20, 2010

Securing accounts on the Web

Situation

A few days ago, my partner Steven got his Google account compromised for a short period of time:
  • Tweet #1 at 8:52 PM on Sept. 9: Just received 2 calls from friends wondering if I'm being held at a London hotel. FYI, I'm not.
  • Tweet #2 at 12:23 AM on Sept. 10: Re: Being held in London. My Google account password was changed by an IP address in Nigeria. I've got it back now but with no Contacts.
  • Tweet #3: at 3:04 PM on Sept. 10: Re Stuck in London: I thought I had everything under control last night but needed http://bit.ly/czgYdg Google Security Breach help to fix.
The thieves used his account to send a scam to a few of his friends, asking for money because he was supposedly stuck in London without resources.

Even if Steven's password was not very strong, there's no chance it was discovered after only a few attempts. Yet at no time did Google report that attempts to log into his account were being made from computers with IP addresses in Nigeria! Steven saw the first warning only when he recovered access!

Encountered risk

The goal of these thieves was limited to getting money as quickly as possible. So they reached out to a few of Steven's contacts, ones he contacts only occasionally, and asked for money to be transferred by Western Union. As they kept control of his account, they would have been able to get the transaction MTCN (money transfer control number) via his inbox. Western Union maintains a page listing the Common Scams.

Others could have decided not to change his password but to quietly spy on his incoming message stream (these thieves enabled POP3 and IMAP access), to ask other services for password resets while Steven was not online, and then to steal his identity on many online services.

Because Steven reacted promptly and because his contacts detected the scam, the thieves did not get any benefit from this operation. They are probably trying to get someone else now, maybe someone from his contact list.

How to reduce the exposure

The first protection consists of defining strong passwords. A lot of services offer information about how to produce strong passwords. I would recommend the Microsoft site Strong Passwords | Microsoft Security—I'm confident that they don't provide the online password checker to enhance a grey dictionary ;)

The second protection would be to use a unique and strong password per account. This is probably the most difficult part! I probably use 20 to 30 online services, some regularly, others very rarely. There's no way I can remember so many strong passwords...

My solution: Keepass + DropBox
  • Keepass is an open source password manager. The tool has been ported to many platforms: Windows, Mac, Linux, iPhone, Android, etc.—full list on the download page.
  • DropBox (link with my referral id ;) is an online file sharing system that, thanks to a program installed on each computer/mobile in your network, keeps the corresponding set of files in sync. DropBox is a nice companion to Keepass as it duplicates your password database transparently, reducing the risk of losing the passwords if the original computer is lost.

The combination of the password generator and the Keepass secure edit controls makes the tool especially useful:
  • It's easy to generate a strong password (remember: 16 characters or more ;)
  • You don't have to remember them, as a simple Ctrl+C / Ctrl+V lets you copy them securely into your browser! (The clipboard is automatically flushed after a few seconds.)
In the end, I just have one very strong password (30+ characters) to remember and to change periodically.
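The kind of password generation described above can be sketched in a few lines of Java (an illustrative sketch, not Keepass's actual algorithm; the PasswordSketch class name and character pool are mine):

```java
import java.security.SecureRandom;

public class PasswordSketch {

    // Character pool mixing upper case, lower case, digits, and symbols
    static final String POOL =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!#$%&*+-=?@";

    // Builds a random password by drawing each character from the pool
    // with a cryptographically strong random source
    public static String generate(int length) {
        SecureRandom random = new SecureRandom();
        StringBuilder password = new StringBuilder(length);
        for (int i = 0; i < length; i++) {
            password.append(POOL.charAt(random.nextInt(POOL.length())));
        }
        return password.toString();
    }
}
```

A call like `generate(16)` honors the "16 characters or more" rule mentioned above; the pool of 73 characters gives roughly 6.2 bits of entropy per character.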

Known limitations

Some sites ask users to give a secret answer to a series of predefined questions. If you look at the Apple page below, you'll see that some questions might weaken users' security more than protect it... These days, it's pretty simple to find the answers online!

List of predefined security questions on Apple.com website

Many sites accept only alphanumeric characters or don't accept passwords over 20 characters. Oddly enough, most of the bank websites I use prevent long and complex passwords! I guess they have other tools to detect intrusions...


Last minute update

Today, Google announced on its Online Security blog that it will offer a Two-Step Authentication mechanism to log into Google services. This One-Time Password authentication is simpler than distributing one-time password generator devices, as Amazon does for example, while still providing a strong security enhancement.
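For the curious, such one-time passwords are typically derived with the HOTP algorithm (RFC 4226): an HMAC-SHA-1 over a shared secret and a counter, truncated to 6 digits. Here is a minimal Java sketch of that standard algorithm (the Hotp class is mine, for illustration, not Google's implementation):

```java
import java.nio.ByteBuffer;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class Hotp {

    // Computes a 6-digit HOTP value per RFC 4226: HMAC-SHA-1 + dynamic truncation
    public static int generate(byte[] secret, long counter) throws Exception {
        // The counter is transmitted as an 8-byte big-endian value
        byte[] message = ByteBuffer.allocate(8).putLong(counter).array();
        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] hash = mac.doFinal(message);
        // Dynamic truncation: the low nibble of the last byte picks the offset
        int offset = hash[hash.length - 1] & 0x0f;
        int binary = ((hash[offset] & 0x7f) << 24)
                   | ((hash[offset + 1] & 0xff) << 16)
                   | ((hash[offset + 2] & 0xff) << 8)
                   |  (hash[offset + 3] & 0xff);
        return binary % 1000000; // keep 6 digits
    }
}
```

The time-based variant (TOTP) simply replaces the counter with the current time divided by a 30-second step, which is why the codes displayed on a phone change every half minute.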

I hope it helps.
A+, Dom

Monday, July 19, 2010

How to resize VirtualBox disk images?

-- Update on May 8, 2012 --

Having to resize an image again, I looked at the VirtualBox documentation before following the CloneVDI route a second time. And I was pleasantly surprised that version 4.1 of VBoxManage accepts a --resize parameter to the modifyhd command!

The process can be done in a matter of minutes:
  1. Stop the guest system (Win7 in my case).
  2. Run the following command in the folder of your VDI file
       VBoxManage modifyhd <name>.vdi --resize <size-in-mb>
  3. Start the virtual machine.
  4. Open the disk manager tool (open the Windows menu, type disk man in the search box, and select 'Create and format hard disk partitions').
  5. You should see your drive with the initial partition(s) and new free space.
  6. Right-click the partition to extend and choose 'Extend Volume' in the contextual menu.
  7. And voilà.
No need to copy the VDI to the host machine. A very fast and robust process.
'Extend Volume' option in the contextual menu.


-- Original post on July 19, 2010 --

Quick post to share a wonderful VirtualBox companion: CloneVDI!

Almost two months ago, I switched from Windows XP to GNU/Linux with the Ubuntu 10.04 distribution. Everything is going very well and I do not regret the move.

In my day-to-day job as the architect/developer/tester of the Twetailer project, its engine, and many of its clients, I still need to run programs on the Windows OS, especially the Internet Explorer 7 & 8 series (IE 6 is dead, isn't it?).

To verify my test suites for Internet Explorer, I rely on VirtualBox running the initial Windows XP release. Because the disk size requirement of the initial version is low, I stupidly created a small 4 GB disk image! It then took hours to load and install service packs 2 & 3, plus IE 8, plus the latest .Net framework, plus the security updates. Be careful: 4 GB is way too small to store the system, the virtual memory page file, and the additional content coming with the service packs and other updates!

A few days after the initial setup, I was facing the "Not enough space available" warning :-( Instead of wasting another set of hours on a re-installation, I googled virtualbox increase vdi size, and the first article I found was from the VirtualBox forums. I thought it was a good sign and I started reading... but I quickly became disappointed because the explanations given required many tricks and a lot of setup time. So I jumped to the last pages (page 6 to be precise) to read:
...

The new way:
Run the CloneVDI tool. Enter name of source VDI and desired disk size into dialog box. Click "Proceed" button. Expect to spend maybe 5 minutes for a typical VDI, longer of course with big drives.

The CloneVDI tool has existed since mid September 2009. It was created specifically so as to remove the need to recommend an embarassingly complicated rigmarole for performing what should have been a simple task. So, in late May 2010 it is quite disheartening to see people still joining the site in order to provide uninformed endorsement of the obsolete procedure.
The post was from Don Milne, alias mpack, the creator of the CloneVDI tool.

Then I downloaded CloneVDI from the referenced post, installed Wine with the Ubuntu Software Center, ran CloneVDI, selected my initial image, specified a new name and a new size (now 10 GB), and all the magic transformation occurred in less than 1 minute!

CloneVDI pane: clone a virtual image with an increased size in just a few clicks!

Warning: It seems the cloning only works up to the first snapshot, as my clone did not get the snapshots. This was not an issue for me, but be careful on your side because it might be necessary to merge the snapshots first. As the cloning is very fast, producing a new image per snapshot should work around the issue.

Anyway, I wholeheartedly recommend CloneVDI when it's time to allocate more disk space to a VirtualBox machine!

A+, Dom

Monday, June 7, 2010

The joy of Ubuntu

A few decades ago, most of my work was done on Sun hardware running Solaris. The Université de Rennes I, France, provided the Internet access, and Mosaic, followed some time later by Netscape, was my favorite browser. At that time, I was preparing a PhD, and most of the online discussions were conducted in newsgroups and e-mails.

One week ago, I definitively switched the OS of my Thinkpad T61 from Windows XP to Ubuntu Lucid Lynx (10.04). In some ways, the Ubuntu environment is not that different from the Solaris I used to work on. For example, I was very pleased to use again an efficient Virtual Desktop system and to benefit from the native Compose key mechanism that allows me to type any crazy sequences producing foreign characters.

Having been pushed to the Microsoft side by my various employers, it seems I forgot all the good sides of the Unix environments. Microsoft marketing has been brilliant: like many others, I accepted the PC environment limitations! Do you remember the "cooperative multitasking" of Windows 3.1? Do you remember that changing a registry key transformed your Windows NT Workstation into a much more capable Windows NT Server? Do you remember that Windows 7 is really the first shiny interface with transparency and gadgets? (Sorry: Vis-what? I don't know ;)

To be honest, let's recognize that the Unix environments did not progress much. On the open source side, the Linux distributions provide a very fragmented offering. The success of some shiny distributions like Knoppix and Ubuntu is fairly recent.

As a geek, I would say that the most impressive feature in these distributions is the 3D rendering engine Compiz and its Cube plugin! It's just amazing.





When I decided to switch to Ubuntu, I wondered if I would lose some features. So far, all of them have their counterpart in the Linux environment, sometimes with many additional benefits. I did not even have to install Wine, the Windows compatibility layer! Here are the tools that were important to me and that I carried over:
  • Dropbox: has Linux/MacOS/Windows distributions.
  • Keepass: had to convert the database to the 1.x format; I now use KeepassX, which has Linux/MacOS/Windows distributions.
  • Aptana/Android SDK/App Engine SDK: equivalent Java packages.
  • Firefox/Chrome/Add-ons: have Linux/MacOS/Windows distributions.
  • Skype (for VOIP calls and screen shares)...
  • OpenOffice.
  • VirtualBox.
  • git.

I am so impressed with the system that I've also switched my old Toshiba M40 (one P4 core and an NVidia card), and now my kids have easy access to many free games and educational tools!

A+, Dom

Wednesday, April 21, 2010

Social Software, Cynapse, and Open Platforms

What's a Social Software?

These days, most white-collar workers have to deal with computers on a daily basis. Computers are everywhere: from the front desk, where they let receptionists get to the company directory, up to garage doors checking who's in and who's out. There are so many different types of computers that some of them go unnoticed!

The scope of this post is limited to computers used to collaborate:
  • Make appointments
  • Produce documents
  • Review & comment documents
  • Forward documents
  • Poll users
  • Manage tasks
When it's time to collaborate, the vast majority of computer users rely on e-mail. E-mail is probably the most essential tool now, and without e-mail, many are lost. Do you remember the BlackBerry syndrome of the RIM product early adopters? Yeah, the one that wakes people up in the middle of the night so they can read new e-mails ;)

With tools like Microsoft Outlook, Apple iCal, and Google Calendar, more users rely on calendar tools to organize their time. This is neat because with such tools you can sometimes see up front if the targeted time slot is good for the other attendees. And when an attendance confirmation comes, the meeting information is updated automatically, without requiring a specific triage in the mailbox.

When people have to share pictures, because the ones a modern digital camera generates are so big and very few people know how to reduce their size nicely, more and more of them resort to online hosting services like flickr or Picasa. In addition to processing images for the Web (while keeping them available in high resolution for printing purposes), sharing pictures via links is much lighter for the mail system. It also has the additional benefit that pictures can be removed safely without requiring someone else to purge his/her mailbox.

With review sites like cnet or Epinions.com, more and more users have started to give their opinions online, to share the joy of being an owner of a wonderful gadget or to inform others that such a gadget is crap! Involving readers and customers in sharing their experiences is the key element of the social software movement.

So what's the relation between Cynapse and social software?
  • Cynapse is a central place where people can collaborate online with the most practical tools without relying on e-mails, a place where collaboration and communication do not lose their context.
Why Cynapse?

When my partner Steven and I started to work on !twetailer, we knew that we wanted to collaborate online, within a reliable and protected environment that could handle many document types: user stories, specifications, diagrams, mock-ups, pictures, bookmarks, discussions, etc.

As IBM-ers, we looked among the tremendous IBM products and selected Lotus GreenHouse, which was still in beta. The main flaw we experienced was its disruptive slowness. Another issue for us was the quasi-impossibility of influencing the development path to fix the painful issues. Just 2 guys in a big user community have very little impact...

We then decided to switch to Elgg, a free and open source software that our hosting service offered. Elgg is probably a good tool, but it did not match our needs. The variety of entities was limited, as was our ability to fix the issues. It would probably have been better to host the service ourselves and extend its model, but because that would have distracted us from our project, we looked for a better solution.

At that time, Steven was studying the social software offerings, and Cynapse came out as a good fit for us:
  • The service out-of-the-box was promising and was still under active development.
  • They had an affordable offer: free self hosted service, managed service for a small subscription, managed hardware also for a fee.
  • They had an active user community which was hosted on the tool itself (the “eat your own dog food” principle).
  • Whatever solution we chose, we were free to upgrade/downgrade/exit.

Cynapse is an Indian company and its managers are really great. For practical reasons, we decided to go with the managed service online. When Steven contacted them, asking for a rebate in exchange for us blogging about our experience with the tool and feeding them enhancement requests from a developer's point of view, they accepted and supported our involvement.

Disclosure: Because the tool and the team are great, Milstein & associates is now a business partner and can resell and offer services on top of the tool.

After one full year and two upgrades, we are very happy with the tool and its support. At one point, we developed an offer for schools, available at edu.cyn.in, which has been extremely well adopted by the kids!

In addition to the collaboration aspect, Cynapse, with its offering of various tools in one platform, allows users to develop their online reputation in a controlled environment. As Craig Newmark, the founder of craigslist, mentions in a blog post:
People use social networking tools to figure out who they can trust and rely on for decision making. By the end of this decade, power and influence will shift largely to those people with the best reputations and trust networks, from people with money and nominal power. That is, peer networks will confer legitimacy on people emerging from the grassroots.
The Cynapse environment allows users to highlight their work: statistics about contributions and comments are shared on the main page, readers can rate the published materials, etc. When people are new to social software, Cynapse offers a simple way to identify the people others "trust" and allows good contributors to build their reputation.

Aside from our work on the !twetailer project, because the tool fits our needs and because kids adopted it very well, we have proposed it to traditional companies (the ones that rely on Microsoft Outlook and shared folders for their collaboration). For now, the feedback is positive, but it's too early to advance any adoption rate ;)

Why Open Platforms?

As a developer, I'm a heavy user of open source software (development environment, source control, build system, etc.). I am also a contributor myself, with two open libraries hosted on github:
  • A set of utilities for Web application developers (Java, Python, JavaScript) which offers:
    • Globalization features: from one set of central repositories (TMX format) to programming language dependent resource bundles;
    • JSON related objects: to ease the parsing/generation of JSON structure
  • An adaptation of the Amazon Flexible Payment Service (FPS) library for the Java environment of Google App Engine.
And the content of this blog is offered under the Creative Commons License BY-NC-SA, which allows uses beyond the traditional "fair use" as long as you cite the source ;) I really like open platforms, for the reasons Larry Lessig gave in his speech. Open source is good for innovation for geeks like myself.

For enterprises, I would argue that the key point is data access: at any time, someone can get the data out of an open source project without risking any patent infringement and without risky reverse-engineering. For sure, it won't be free as in "free beer", but they'll be free to get their data back. Some people will argue that closed software often offers a way to export data in a standard format (like a SQL dump for a database) and then to import it somewhere else, but that's only true if no features are dropped during the export and if all the features of the first system can be activated in the second during the import.

To summarize my point, I would say that open source software allows anyone to exit at any time while continuing to control the data.

As a collaboration tool, Cynapse is maybe not the best one, but it offers competitive advantages for the right price (for free if you have the team and expertise to manage it, for a fee if you choose the hassle-free solution of the hosted service on Amazon AWS) while letting users migrate to another platform at any time.

An important point to consider when selecting an open source solution is the quality of its community. The more active the community, the better your chances that the issues you are facing have already been documented, and the better your chances of seeing your (excellent) enhancement requests supported by others. Being able to contribute to an active community (by submitting bug reports, answering others' questions, and submitting patches) also has the side benefit of improving your reputation.

A+, Dom

Sunday, March 21, 2010

Amazon FPS library for the Google App Engine environment

Here is a post on Amazon Flexible Payments System (FPS) and the beauty of the open source model!

Amazon FPS as the payment platform

!twetailer is an application connecting people to people, consumers to retailers, and vice-versa. The business model is freemium-based:
  • Posting demands, receiving proposals, and accepting proposals is available for free to any customer.
  • Listening to demands, proposing products or services, and closing deals is available for free to any registered retailer.
  • At one point, !twetailer is going to offer retailers the ability to ask consumers to pay for the accepted products or services. The payment platform is going to be Amazon FPS1.

Amazon FPS is a really neat platform, the only one to my knowledge that allows organizing money transfers between third parties. With Amazon FPS, !twetailer will be able to convey money from the consumers directly to the retailers, without the money transiting through !twetailer's own account! This is a really safe mechanism.

As a quick introduction to Amazon FPS, I would strongly suggest you listen to the one-hour webcast introduced on the Amazon Payments blog on April 7, 2009: Monetize your innovation with Amazon FPS. If you use the open-source tool VideoLAN VLC, you can load the ASX file directly from Akamai from here.

Amazon and the open-source model

Amazon FPS, like many other Amazon Web Services (AWS), allows third-party applications to use its services through very simple APIs which are HTTP-based! The libraries that developers need are mostly wrappers over HTTP connections, with some specific controllers formatting the requests and signing them (to prevent a man-in-the-middle process from tampering with them).

Because HTTP is an open protocol and because Amazon could probably not develop its libraries for all possible Web servers, Amazon opened the libraries' source and attached a very liberal license2 to them.

This is a very respectable attitude towards their customers and also very well thought out on the business side: if developers can adapt the libraries to their own needs, Amazon won't have to pay for the corresponding development, and it will enlarge the set of applications their platform can serve!
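To give an idea of what "signing the requests" means, here is a simplified Java sketch of HMAC-based request signing (illustrative only: the RequestSigner class and its canonicalization are mine, not the exact Amazon FPS wire format):

```java
import java.util.Base64;
import java.util.Map;
import java.util.TreeMap;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public class RequestSigner {

    // Signs the query parameters with HMAC-SHA256 and returns a Base64 signature.
    // Both the client and Amazon rebuild the same canonical string, so any
    // tampering with a parameter in transit invalidates the signature.
    public static String sign(Map<String, String> parameters, String secretKey) throws Exception {
        // Canonicalize: sort the parameters by name so both sides build the same string
        StringBuilder canonical = new StringBuilder();
        for (Map.Entry<String, String> entry : new TreeMap<String, String>(parameters).entrySet()) {
            canonical.append(entry.getKey()).append('=').append(entry.getValue()).append('&');
        }
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secretKey.getBytes("UTF-8"), "HmacSHA256"));
        byte[] signature = mac.doFinal(canonical.toString().getBytes("UTF-8"));
        return Base64.getEncoder().encodeToString(signature);
    }
}
```

Only the holder of the shared secret can produce a matching signature, which is why the secret key itself never travels with the request.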

Amazon FPS on Google App Engine platform

The !twetailer server-side logic is Java-based, and dropping the freshly compiled Amazon FPS library in war/WEB-INF/lib is simple. However, the Amazon FPS code cannot run as-is because of a few App Engine limitations...

The first one is encountered when the application needs to build the URL that launches the co-branded service, a page that allows consumers to pay for the service or product previously proposed by a retailer.
The static method HttpURLConnection.setFollowRedirects(boolean) controls the VM behavior and is therefore guarded by a JVM permission.
Read the incident report in the Google App Engine discussion group.

Fixing this issue is simple: tune the ability to follow redirection on the connection itself instead of applying the settings globally.
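Concretely, the fix swaps the guarded static call for its instance-level equivalent, which only affects the connection at hand (a minimal sketch; the RedirectFix class name is mine):

```java
import java.net.HttpURLConnection;
import java.net.URL;

public class RedirectFix {

    // Opens a connection configured to follow redirects without touching
    // the JVM-wide setting (HttpURLConnection.setFollowRedirects(boolean)
    // is guarded by a permission on App Engine)
    public static HttpURLConnection open(String address) throws Exception {
        HttpURLConnection connection = (HttpURLConnection) new URL(address).openConnection();
        // Instance-level equivalent of the guarded static call:
        connection.setInstanceFollowRedirects(true);
        return connection;
    }
}
```

Note that `openConnection()` does not contact the server yet, so the redirect policy can be set safely before the request is actually sent.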

The second issue is really major:
The library uses the Jakarta Commons HttpClient component to convey payment requests from the application to the Amazon infrastructure, and many of its underlying calls are blocked in the Google App Engine Java environment.
I asked for advice on the AWS FPS forums. Without a response, I decided to go with my own wrapper of the Google URL Fetch service, mimicking the HttpClient HttpConnectionManager and HttpConnection classes.

Wrappers of Google URL Fetch for Amazon FPS

Following Amazon's lead, I offer the URL Fetch wrappers that allow Amazon FPS to work on the Google App Engine platform.
The code currently available works in the simple scenario !twetailer needs. But it is still under development, and the test suite covering it is not yet complete.

UrlFetchConnectionManager class definition
/*******************************************************************************
 *  Adaptation for the Amazon FPS library to work on the Java platform of
 *  Google App Engine.
 *
 *  Copyright 2010 Dom Derrien
 *  Licensed under the Apache License, Version 2.0
 */

package domderrien.wrapper.UrlFetch;

import org.apache.commons.httpclient.ConnectionPoolTimeoutException;
import org.apache.commons.httpclient.HostConfiguration;
import org.apache.commons.httpclient.HttpConnection;
import org.apache.commons.httpclient.HttpConnectionManager;
import org.apache.commons.httpclient.HttpException;
import org.apache.commons.httpclient.params.HttpConnectionManagerParams;

public class UrlFetchConnectionManager implements HttpConnectionManager {

    private HttpConnectionManagerParams params;
    private HttpConnection connection;

    public void closeIdleConnections(long timeout) {
        throw new RuntimeException("closeIdleConnections(long)");
    }

    public HttpConnection getConnection(HostConfiguration hostConfiguration) {
        throw new RuntimeException("getConnection(HostConfiguration)");
    }

    public HttpConnection getConnection(HostConfiguration hostConfiguration, long timeout) throws HttpException {
        throw new RuntimeException("getConnection(HostConfiguration, long)");
    }

    public HttpConnection getConnectionWithTimeout(HostConfiguration hostConfiguration, long timeout) throws ConnectionPoolTimeoutException {
        // As reported in http://code.google.com/appengine/docs/java/urlfetch/usingjavanet.html#Java_Net_Features_Not_Supported
        // > The app cannot set explicit connection timeouts for the request.
        if (connection != null) {
            releaseConnection(connection);
        }
        connection = new UrlFetchHttpConnection(hostConfiguration);
        return connection;
    }

    public HttpConnectionManagerParams getParams() {
        return params;
    }

    public void releaseConnection(HttpConnection connection) {
        connection.releaseConnection();
    }

    public void setParams(HttpConnectionManagerParams params) {
        // Parameters set in AmazonFPSClient#configureHttpClient:
        // - ConnectionTimeout: 50000 ms
        // - SoTimeout: 50000 ms
        // - StaleCheckingEnabled: true
        // - TcpNoDelay: true
        // - MaxTotalConnections: 100 (as proposed in the default config.properties file)
        // - MaxConnectionsPerHost: 100 (as proposed in the default config.properties file)
        this.params = params;
    }
}

UrlFetchHttpConnection class definition
/******************************************************************************* 
 *  Adaptation for the Amazon FPS library to work on the Java platform of
 *  Google App Engine.
 *  
 *  Copyright 2010 Dom Derrien
 *  Licensed under the Apache License, Version 2.0
 */

package domderrien.wrapper.UrlFetch;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetAddress;
import java.net.MalformedURLException;
import java.net.Socket;
import java.net.SocketException;
import java.net.URL;

import javamocks.io.MockInputStream;
import javamocks.io.MockOutputStream;

import org.apache.commons.httpclient.HostConfiguration;
import org.apache.commons.httpclient.HttpConnection;
import org.apache.commons.httpclient.HttpConnectionManager;
import org.apache.commons.httpclient.HttpStatus;
import org.apache.commons.httpclient.params.HttpConnectionParams;
import org.apache.commons.httpclient.protocol.Protocol;

import com.google.appengine.api.urlfetch.FetchOptions;
import com.google.appengine.api.urlfetch.HTTPHeader;
import com.google.appengine.api.urlfetch.HTTPMethod;
import com.google.appengine.api.urlfetch.HTTPRequest;
import com.google.appengine.api.urlfetch.HTTPResponse;
import com.google.appengine.api.urlfetch.URLFetchService;
import com.google.appengine.api.urlfetch.URLFetchServiceFactory;

public class UrlFetchHttpConnection extends HttpConnection {

    private static URLFetchService urlFS = URLFetchServiceFactory.getURLFetchService();

    private HostConfiguration hostConfiguration;
    private HTTPRequest _request;
    private HTTPResponse _response;
    private MockOutputStream _requestBody = new MockOutputStream();
    private MockInputStream _responseBody = new MockInputStream();

    private HTTPRequest getRequest() throws MalformedURLException {
        if (_request == null) {
            _request = new HTTPRequest(
                new URL(hostConfiguration.getHostURL()),
                HTTPMethod.POST, // AmazonFPSClient#invoke(Class, Map) uses only POST method
                FetchOptions.Builder.disallowTruncate().followRedirects()
            );
        }
        return _request;
    }

    private static final String SEPARATOR = ": ";
    private static final int SEPARATOR_LENGTH = SEPARATOR.length();
    private static final String NEW_LINE = "\r\n";

    private HTTPResponse getResponse() throws MalformedURLException, IOException {
        if (_response == null) {
            // Get the response from the remote service
            _response = urlFS.fetch(getRequest());
            // Rebuild stream of HTTP headers (except the HTTP status retrieved from readLine(String) method)
            StringBuilder buffer = new StringBuilder();
            for (HTTPHeader header: _response.getHeaders()) {
                buffer.append(header.getName()).append(SEPARATOR).append(header.getValue()).append(NEW_LINE);
            }
            buffer.append("Content-Length: ").append(_response.getContent().length).append(NEW_LINE);
            buffer.append(NEW_LINE);
            // Rebuild stream of HTTP content (chunked-encoded)
            buffer.append(Integer.toString(_response.getContent().length, 16)).append(";chunk size").append(NEW_LINE);
            buffer.append(new String(_response.getContent())).append(NEW_LINE);
            buffer.append("0;").append(NEW_LINE);
            _responseBody.resetActualContent(buffer.toString());
        }
        return _response;
    }

    /**
     * Default constructor
     * @param hostConfiguration
     */
    public UrlFetchHttpConnection(HostConfiguration hostConfiguration) {
        super(hostConfiguration);
        this.hostConfiguration = hostConfiguration;
    }

    @Override
    protected void assertNotOpen() throws IllegalStateException {
  throw new RuntimeException("assertNotOpen()");
 }

 @Override
 protected void assertOpen() throws IllegalStateException {
  assert(_response != null);
 }

 @Override
 public void close() {
  // Nothing to do!
 }

 @Override
 public boolean closeIfStale() throws IOException {
  // Safe call, passed to the inherited method
  return super.closeIfStale();
 }

 @Override
 protected void closeSocketAndStreams() {
  throw new RuntimeException("closeSocketAndStreams()");
 }

 @Override
 public void flushRequestOutputStream() throws IOException {
  getRequest().setPayload(_requestBody.getStream().toString().getBytes());
 }

 @Override
 public String getHost() {
  return hostConfiguration.getHost();
 }

 @Override
 public HttpConnectionManager getHttpConnectionManager() {
  throw new RuntimeException("getHttpConnectionManager()");
 }

 @Override
 public InputStream getLastResponseInputStream() {
  throw new RuntimeException("getLastResponseInputStream()");
 }

 @Override
 public InetAddress getLocalAddress() {
  throw new RuntimeException("getLocalAddress()");
 }

 @Override
 public HttpConnectionParams getParams() {
  return new HttpConnectionParams();
 }

 @Override
 public int getPort() {
  return hostConfiguration.getPort();
 }

 @Override
 public Protocol getProtocol() {
  return hostConfiguration.getProtocol();
 }

 @Override
 public String getProxyHost() {
  throw new RuntimeException("getProxyHost()");
 }

 @Override
 public int getProxyPort() {
  throw new RuntimeException("getProxyPort()");
 }

 @Override
 public OutputStream getRequestOutputStream() throws IOException, IllegalStateException {
  return _requestBody;
 }

 @Override
 public InputStream getResponseInputStream() throws IOException {
  return _responseBody;
 }

 @Override
 public int getSendBufferSize() throws SocketException {
  throw new RuntimeException("getSendBufferSize()");
 }

 @Override
 protected Socket getSocket() {
  throw new RuntimeException("getSocket()");
 }

 @Override
 public int getSoTimeout() throws SocketException {
  throw new RuntimeException("getSoTimeout()");
 }

 @Override
 public String getVirtualHost() {
  throw new RuntimeException("getVirtualHost()");
 }

 @Override
 protected boolean isLocked() {
  throw new RuntimeException("isLocked()");
 }

 @Override
 public boolean isOpen() {
  // Safe call, passed to inherited method
  return super.isOpen();
 }

 @Override
 public boolean isProxied() {
  // Safe call, passed to inherited method
  return super.isProxied();
 }

 @Override
 public boolean isResponseAvailable() throws IOException {
  return _response != null;
 }

 @Override
 public boolean isResponseAvailable(int timeout) throws IOException {
  return _response != null;
 }

 @Override
 public boolean isSecure() {
  return hostConfiguration.getPort() == 443;
 }

 @Override
 protected boolean isStale() throws IOException {
  throw new RuntimeException("isStale()");
 }

 @Override
 public boolean isStaleCheckingEnabled() {
  throw new RuntimeException("isStaleCheckingEnabled()");
 }

 @Override
 public boolean isTransparent() {
  // Safe call, passed to the inherited method
  return super.isTransparent();
 }
 
 @Override
 public void open() throws IOException {
  // Nothing to do
 }

 @Override
 public void print(String data, String charset) throws IOException, IllegalStateException {
  // Save the passed HTTP headers for the request
  int idx = data.indexOf(SEPARATOR);
  if (idx != -1) {
   String name = data.substring(0, idx);
   String value = data.substring(idx + SEPARATOR_LENGTH).trim();
   getRequest().addHeader(new HTTPHeader(name, value));
  }
  // Other information is just safely ignored
 }

 @Override
 public void print(String data) throws IOException, IllegalStateException {
  throw new RuntimeException("print(string): " + data);
 }

 @Override
 public void printLine() throws IOException, IllegalStateException {
  throw new RuntimeException("printLine()");
 }

 @Override
 public void printLine(String data, String charset) throws IOException, IllegalStateException {
  throw new RuntimeException("printLine(string, String): " + data + " -- " + charset);
 }

 @Override
 public void printLine(String data) throws IOException, IllegalStateException {
  throw new RuntimeException("printLine(string): " + data);
 }

 @Override
 public String readLine() throws IOException, IllegalStateException {
  throw new RuntimeException("readLine()");
 }

 private boolean waitForHttpStatus = true;
 
 @Override
 public String readLine(String charset) throws IOException, IllegalStateException {
  if (waitForHttpStatus) {
   // Dom Derrien: called only once to get the HTTP status, other information being read from the response output stream
   int responseCode = getResponse().getResponseCode();
   String line = "HTTP/1.1 " + responseCode;
   switch(responseCode) {
    case HttpStatus.SC_OK: line += " OK"; break;
    case HttpStatus.SC_BAD_REQUEST: line += " BAD REQUEST"; break;
    case HttpStatus.SC_UNAUTHORIZED: line += " UNAUTHORIZED"; break;
    case HttpStatus.SC_FORBIDDEN: line += " FORBIDDEN"; break;
    case HttpStatus.SC_NOT_FOUND: line += " NOT FOUND"; break;
    case HttpStatus.SC_INTERNAL_SERVER_ERROR: line += " INTERNAL SERVER ERROR"; break;
    case HttpStatus.SC_SERVICE_UNAVAILABLE: line += " SERVICE UNAVAILABLE"; break;
    default: line = "HTTP/1.1 " + HttpStatus.SC_BAD_REQUEST + " BAD REQUEST";
   }
   waitForHttpStatus = false;
   return line;
  }
  throw new RuntimeException("readLine(String)");
 }

 @Override
 public void releaseConnection() {
  // Do nothing, connection closed automatically...
 }

 @Override
 public void setConnectionTimeout(int timeout) {
  throw new RuntimeException("setConnectionTimeout(int)");
 }

 @Override
 public void setHost(String host) throws IllegalStateException {
  throw new RuntimeException("setHost(String)");
 }

 @Override
 public void setHttpConnectionManager(HttpConnectionManager httpConnectionManager) {
  throw new RuntimeException("setHttpConnectionManager(HttpConnectionManager)");
 }

 @Override
 public void setLastResponseInputStream(InputStream inStream) {
  // Safe call, passed to inherited method
  super.setLastResponseInputStream(inStream);
 }

 @Override
 public void setLocalAddress(InetAddress localAddress) {
  throw new RuntimeException("setLocalAddress(InetAddress)");
 }

 @Override
 protected void setLocked(boolean locked) {
  // Safe call, passed to inherited method
  super.setLocked(locked);
 }

 @Override
 public void setParams(HttpConnectionParams params) {
  throw new RuntimeException("setParams(HttpConnectionParams)");
 }

 @Override
 public void setPort(int port) throws IllegalStateException {
  throw new RuntimeException("setPort(int)");
 }

 @Override
 public void setProtocol(Protocol protocol) {
  throw new RuntimeException("setProtocol(Protocol)");
 }

 @Override
 public void setProxyHost(String host) throws IllegalStateException {
  throw new RuntimeException("setProxyHost(String)");
 }

 @Override
 public void setProxyPort(int port) throws IllegalStateException {
  throw new RuntimeException("setProxyPort(int)");
 }

 @Override
 public void setSendBufferSize(int sendBufferSize) throws SocketException {
  throw new RuntimeException("setSendBufferSize(int)");
 }

 @Override
 public void setSocketTimeout(int timeout) throws SocketException, IllegalStateException {
  // Safe call, passed to inherited method
  super.setSocketTimeout(timeout);
 }

 @Override
 public void setSoTimeout(int timeout) throws SocketException, IllegalStateException {
  throw new RuntimeException("setSoTimeout(int)");
 }

 @Override
 public void setStaleCheckingEnabled(boolean staleCheckEnabled) {
  throw new RuntimeException("setStaleCheckingEnabled(boolean)");
 }

 @Override
 public void setVirtualHost(String host) throws IllegalStateException {
  throw new RuntimeException("setVirtualHost(String)");
 }

 @Override
 public void shutdownOutput() {
  throw new RuntimeException("shutdownOutput()");
 }

 @Override
 public void tunnelCreated() throws IllegalStateException, IOException {
  throw new RuntimeException("tunnelCreated()");
 }

 @Override
 public void write(byte[] data, int offset, int length) throws IOException, IllegalStateException {
  throw new RuntimeException("write(byte[], int, int): " + new String(data) + ", " + offset + ", " + length);
 }

 @Override
 public void write(byte[] data) throws IOException, IllegalStateException {
  throw new RuntimeException("write(byte[]): " + new String(data));
 }

 @Override
 public void writeLine() throws IOException, IllegalStateException {
  // Safe call, new line being inserted automatically by the HTTPRequest renderer
 }

 @Override
 public void writeLine(byte[] data) throws IOException, IllegalStateException {
  throw new RuntimeException("writeLine(byte[]): " + new String(data));
 }
}

Anyone is free to fork it for their own needs. Be careful with the code: I deliver it without warranties! If you report an issue and can document how to reproduce it, I will help you as my workload allows. If you fix an issue on your side, I will be happy to merge the corresponding patches into my main branch.

I hope this helps,
A+, Dom
--
Notes:
  1. At least in the United States of America, until Amazon extends its coverage to companies without a US bank account.
  2. Apache License, Version 2.0, January 2004, which allows users to make modifications while keeping them private.

Friday, November 20, 2009

Unit tests, Mock objects, and App Engine

For my [still a secret] project, which runs on the Google App Engine infrastructure [1], I want to make it as solid as possible from the beginning by applying most of the best practices of the Agile methodology [2].

Update 2009/12/05:
With the release of the App Engine Java SDK 1.2.8 (read the release notes), I had to update my code and this post on two points:
  • Without the specification of the JDO inheritance type, the environment assumes it's superclass-table. This type is not supported by App Engine; only subclass-table and complete-table are. In the Entity class described below, I had to add @Inheritance(strategy = InheritanceStrategy.SUBCLASS_TABLE). Read the documentation about Defining data classes for more information.
  • With the automation of task execution, the MockAppEngineEnvironment class listed below had to be updated to serve an expected value when the Queue runs in the live environment. Read the details in the thread announcing the 1.2.8 SDK prerelease on Google Groups.
Now, all tests pass again ;)

As written in my post from September 18, I had to develop many mock classes to keep reaching the mystical 100% of code coverage (by unit tests) [3]. A good introduction to mock objects is given by Vincent Massol in his book “JUnit in Action” [4]. To summarize, mock objects are especially useful to inject behavior and force the code using them to exercise complex control flows.
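For readers new to the technique, here is a minimal, self-contained sketch (hypothetical names, not taken from the project code) of how a hand-rolled mock injects a failure that the real collaborator would rarely produce on demand:

```java
// Hypothetical example: a mock mail service forces the error path of the
// code under test, something a real SMTP server would rarely do on demand.
interface MailService {
    void send(String recipient, String message);
}

class OrderNotifier {
    private final MailService mailService;

    OrderNotifier(MailService mailService) {
        this.mailService = mailService;
    }

    // Returns false when the notification cannot be delivered
    boolean notifyCustomer(String recipient) {
        try {
            mailService.send(recipient, "Your order has shipped");
            return true;
        }
        catch (RuntimeException ex) {
            return false;
        }
    }
}

public class MockDemo {
    public static void main(String[] args) {
        // The mock simulates an outage to exercise the catch block
        MailService failingMock = new MailService() {
            public void send(String recipient, String message) {
                throw new RuntimeException("Simulated outage");
            }
        };
        System.out.println(new OrderNotifier(failingMock).notifyCustomer("unit@test.net")); // prints "false"
    }
}
```

The tests below apply the same pattern, simply with larger collaborators (BaseOperations, ConsumerOperations, and the PersistenceManager itself).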

Developing applications for Google App Engine is not that complex, because the system is well documented and an Eclipse plug-in eases the first steps.

Use case description

Let's consider a simple class organization implementing a common J2EE pattern:

  • A DTO class for a Consumer;
  • The DAO class getting the Consumer from the persistence layer, and sending it back with updates; and
  • A Controller class routing REST requests; the Controller is an element of the implemented MVC pattern.
Use case illustration

The code for the DTO class is instrumented with JDO annotations [5]:

Consumer DTO class definition
@PersistenceCapable(identityType = IdentityType.APPLICATION, detachable="true")
@Inheritance(strategy = InheritanceStrategy.SUBCLASS_TABLE)
public class Consumer extends Entity {
    @Persistent
    private String address;
 
    @Persistent
    private String displayName;
 
    @Persistent
    private String email;
 
    @Persistent
    private String facebookId;
 
    @Persistent
    private String jabberId;
 
    @Persistent
    private Long locationKey;
 
    @Persistent
    private String twitterId;
 
    /** Default constructor */
    public Consumer() {
        super();
    }
 
    /**
     * Creates a consumer
     * @param in HTTP request parameters
     */
    public Consumer(JsonObject parameters) {
        this();
        fromJson(parameters);
    }
 
    public String getAddress() {
        return address;
    }
    
    public void setAddress(String address) {
        this.address = address;
    }
    
    //...
}

My approach for the DAO class is modular:

  • When the calling code makes just one call, like the ConsumerOperations.delete(String) method deleting the identified Consumer instance, the call can be done without any knowledge of the persistence layer.
  • When many calls to the persistence layer are required, the DAO API lets the caller pass a PersistenceManager instance that can be reused from call to call. Combined with the detachable="true" parameter specified in the JDO annotation of the Consumer class, this saves many cycles.
Excerpt from the ConsumerOperations DAO class definition
/**
 * Persist the given (probably updated) resource
 * @param consumer Resource to update
 * @return Updated resource
 * @see ConsumerOperations#updateConsumer(PersistenceManager, Consumer)
 */
public Consumer updateConsumer(Consumer consumer) {
    PersistenceManager pm = getPersistenceManager();
    try {
        // Persist updated consumer
        return updateConsumer(pm, consumer);
    }
    finally {
        pm.close();
    }
}
 
/**
 * Persist the given (probably updated) resource while leaving the given persistence manager open for future updates
 * @param pm Persistence manager instance to use - left open at the end to allow possible object updates later
 * @param consumer Resource to update
 * @return Updated resource
 */
public Consumer updateConsumer(PersistenceManager pm, Consumer consumer) {
    return pm.makePersistent(consumer);
}

The following piece of the abstract class BaseOperations shows the accessor made available to any controller code to get a handle on a valid PersistenceManager instance.

Excerpt from the abstract BaseOperations DAO class definition
/**
 * Accessor isolated to facilitate tests by IOP
 * @return Persistence manager instance
 */
public PersistenceManager getPersistenceManager() {
    PersistenceManager pm = getPersistenceManagerFactory().getPersistenceManager();
    pm.setDetachAllOnCommit(true);
    pm.setCopyOnAttach(false);
    return pm;
}

To finish the use case setup, here is the part of the controller code that deals with incoming HTTP requests and serves or operates on resources accordingly. This specific piece of code replies to a GET request like:

  • Invocation: http://<host:port>/API/Consumer/43544
  • Response:
    • {key:43544, displayName:"John", address:"75, Queen, Montréal, Qc, Canada", 
      locationKey:3245, location: {id:3245, postalCode:"H3C2N6", countryCode:"CA",
      latitude:43.3, longitude:-73.4}, ...}
Excerpt from the ConsumerRestlet Controller class definition
@Override
protected JsonObject getResource(JsonObject parameters, String resourceId, User loggedUser) throws DataSourceException {
    PersistenceManager pm = getBaseOperations().getPersistenceManager();
    try {
        // Get the consumer instance
        Consumer consumer = getConsumerOperations().getConsumer(pm, Long.valueOf(resourceId));
        JsonObject output = consumer.toJson();
        // Get the related information
        Long locationKey = consumer.getLocationKey();
        if (locationKey != null) {
            Location location = getLocationOperations().getLocation(pm, locationKey);
            output.put(Consumer.LOCATION, location.toJson());
        }
        // Return the complete set of information
        return output;
    }
    finally {
        pm.close();
    }
}

Simple mock

Now, it's time to test! To start slowly, let's deal with the Restlet getResource() method to verify:

  • One and only one PersistenceManager instance is loaded by the function;
  • The PersistenceManager instance is cleanly closed at the end of the process;
  • There's a call issued to get the identified Consumer instance;
  • There's possibly a call issued to get the identified Location instance;
  • The output value has the expected information.

In the corresponding unit test series, we don't want to interfere with the App Engine infrastructure (the following chapter will address that aspect). So we'll rely on a mock for the PersistenceManager class that will be injected into the ConsumerRestlet code. The full source of this class is available on my open source project two-tiers-utils: javax.jdo.MockPersistenceManager.

Custom part of the mock for the PersistenceManager class
public class MockPersistenceManager implements PersistenceManager {
    private boolean closed = false; // To keep track of the "closed" state
    public void close() {
        closed = true;
    }
    public boolean isClosed() {
        return closed;
    }

    // ...
}

Here are the unit tests verifying the different flow paths:

  • When an exception is thrown, for example because the back-end does not serve the data;
  • When the Consumer instance returns without location coordinates;
  • When the Consumer instance is fully documented.
Three tests validating the behavior of the ConsumerRestlet.getResource() method
@Test(expected=IllegalArgumentException.class)
public void testUnexpectedError() throws DataSourceException {
    // Test preparation
    final PersistenceManager pm = new MockPersistenceManager();
    final BaseOperations baseOps = new BaseOperations() {
        boolean askedOnce = false;
        @Override
        public PersistenceManager getPersistenceManager() {
            if (askedOnce) {
                fail("Expects only one call");
            }
            askedOnce = true;
            return pm;
        }
    };
    final Long consumerId = 12345L;
    final ConsumerOperations consumerOps = new ConsumerOperations() {
        @Override
        Consumer getConsumer(PersistenceManager pm, Long id) {
            assertEquals(consumerId, id);
            throw new IllegalArgumentException("Done on purpose!");
        }
    };
    ConsumerRestlet restlet = new ConsumerRestlet() {
        @Override BaseOperations getBaseOperations() { return baseOps; }
        @Override ConsumerOperations getConsumerOperations() { return consumerOps; }
    };
    
    // Test itself
    restlet.getResource(null, consumerId.toString(), null);
}
@Test
public void testGettingOneConsumer() throws DataSourceException {
    // Test preparation
    final PersistenceManager pm = new MockPersistenceManager();
    final BaseOperations baseOps = new BaseOperations() {
        boolean askedOnce = false;
        @Override
        public PersistenceManager getPersistenceManager() {
            if (askedOnce) {
                fail("Expects only one call");
            }
            askedOnce = true;
            return pm;
        }
    };
    final Long consumerId = 12345L;
    final ConsumerOperations consumerOps = new ConsumerOperations() {
        @Override
        Consumer getConsumer(PersistenceManager pm, Long id) {
            assertEquals(consumerId, id);
            Consumer consumer = new Consumer();
            consumer.setId(consumerId);
            return consumer;
        }
    };
    final LocationOperations locationOps = new LocationOperations() {
        @Override
        Location getLocation(PersistenceManager pm, Long id) {
            fail("Call not expected here!");
            return null;
        }
    };
    ConsumerRestlet restlet = new ConsumerRestlet() {
        @Override BaseOperations getBaseOperations() { return baseOps; }
        @Override ConsumerOperations getConsumerOperations() { return consumerOps; }
        @Override LocationOperations getLocationOperations() { return locationOps; }
    };
    
    // Test itself
    JsonObject response = restlet.getResource(null, consumerId.toString(), null);
    
    // Post-test verifications
    assertTrue(pm.isClosed());
    assertNotSame(0, response.size());
    assertTrue(response.containsKey(Consumer.ID));
    assertEquals(consumerId, response.getLong(Consumer.ID));
}
@Test
public void testGettingConsumerWithLocation() throws DataSourceException {
    // Test preparation
    final PersistenceManager pm = new MockPersistenceManager();
    final BaseOperations baseOps = new BaseOperations() {
        boolean askedOnce = false;
        @Override
        public PersistenceManager getPersistenceManager() {
            if (askedOnce) {
                fail("Expects only one call");
            }
            askedOnce = true;
            return pm;
        }
    };
    final Long consumerId = 12345L;
    final Long locationId = 67890L;
    final ConsumerOperations consumerOps = new ConsumerOperations() {
        @Override
        Consumer getConsumer(PersistenceManager pm, Long id) {
            assertEquals(consumerId, id);
            Consumer consumer = new Consumer();
            consumer.setId(consumerId);
            consumer.setLocationKey(locationId);
            return consumer;
        }
    };
    final LocationOperations locationOps = new LocationOperations() {
        @Override
        Location getLocation(PersistenceManager pm, Long id) {
            assertEquals(locationId, id);
            Location location = new Location();
            location.setId(locationId);
            return location;
        }
    };
    ConsumerRestlet restlet = new ConsumerRestlet() {
        @Override BaseOperations getBaseOperations() { return baseOps; }
        @Override ConsumerOperations getConsumerOperations() { return consumerOps; }
        @Override LocationOperations getLocationOperations() { return locationOps; }
    };
    
    // Test itself
    JsonObject response = restlet.getResource(null, consumerId.toString(), null);
    
    // Post-test verifications
    assertTrue(pm.isClosed());
    assertNotSame(0, response.size());
    assertTrue(response.containsKey(Consumer.ID));
    assertEquals(consumerId, response.getLong(Consumer.ID));
    assertTrue(response.containsKey(Consumer.LOCATION_KEY));
    assertEquals(locationId, response.getLong(Consumer.LOCATION_KEY));
    assertTrue(response.containsKey(Consumer.LOCATION));
    assertEquals(locationId, response.getJsonObject(Consumer.LOCATION).getLong(Location.ID));
}

Note that I could instead have overridden just the PersistenceManager class so its Object getObjectById(Object arg0) method returns the expected exception, Consumer, and Location instances. But I would then have stepped beyond the strict scope of a unit test, also testing the behavior of the ConsumerOperations.getConsumer() and LocationOperations.getLocation() methods.

App Engine environment mock

Now, testing the ConsumerOperations class offers a bigger challenge.

As suggested above, I could override many pieces of the PersistenceManager class to be sure to control the flow. But to build a faithful simulation, I would almost need the complete specification of the Google App Engine infrastructure to be sure I mock it correctly. This is especially crucial when processing a Query, because the Google data store has many limitations [6] that traditional databases, like MySQL, don't have (for example, a query cannot combine inequality filters on two different properties).

Because this documentation is only partially available, and because Google continues to update its infrastructure, I looked for a way to use the standalone environment shipped with the App Engine SDK [1]. This has not been easy because I wanted the tests to run independently from the development server itself. I first found some documentation on the Google Code website: Unit Testing With Local Service Implementations, but it was very low level and did not fit with the JDO instrumentation of my DTO classes. Fortunately, I then found the article JDO and unit tests from App Engine Fan, a great community contributor I have mentioned many times in previous posts!

By combining information gathered on the Google Code website and on the App Engine Fan post, I've produced a com.google.apphosting.api.MockAppEngineEnvironment class I can use for my JUnit4 tests.

MockAppEngineEnvironment class definition
package com.google.apphosting.api;
 
// import ...
 
/**
 * Mock for the App Engine Java environment used by the JDO wrapper.
 *
 * This class has been built with information gathered on:
 * - App Engine documentation: http://code.google.com/appengine/docs/java/howto/unittesting.html
 * - App Engine Fan blog: http://blog.appenginefan.com/2009/05/jdo-and-unit-tests.html
 *
 * @author Dom Derrien
 */
public class MockAppEngineEnvironment {
 
    private class ApiProxyEnvironment implements ApiProxy.Environment {
        public String getAppId() {
          return "test";
        }
 
        public String getVersionId() {
          return "1.0";
        }
 
        public String getEmail() {
          throw new UnsupportedOperationException();
        }
 
        public boolean isLoggedIn() {
          throw new UnsupportedOperationException();
        }
 
        public boolean isAdmin() {
          throw new UnsupportedOperationException();
        }
 
        public String getAuthDomain() {
          throw new UnsupportedOperationException();
        }
 
        public String getRequestNamespace() {
          return "";
        }
 
        public Map<String, Object> getAttributes() {
            Map<String, Object> out = new HashMap<String, Object>();

            // Only necessary for tasks that are added when there is no "live" request
            // See: http://groups.google.com/group/google-appengine-java/msg/8f5872b05214...
            out.put("com.google.appengine.server_url_key", "http://localhost:8080");

            return out;
        }
    };
 
    private final ApiProxy.Environment env;
    private PersistenceManagerFactory pmf;
 
    public MockAppEngineEnvironment() {
        env = new ApiProxyEnvironment();
    }
 
    /**
     * Setup the mock environment
     */
    public void setUp() throws Exception {
        // Setup the App Engine services
        ApiProxy.setEnvironmentForCurrentThread(env);
        ApiProxyLocalImpl proxy = new ApiProxyLocalImpl(new File(".")) {};
 
        // Setup the App Engine data store
        proxy.setProperty(LocalDatastoreService.NO_STORAGE_PROPERTY, Boolean.TRUE.toString());
        ApiProxy.setDelegate(proxy);
    }
 
    /**
     * Clean up the mock environment
     */
    public void tearDown() throws Exception {
        // Verify that there's no pending transaction (ie pm.close() has been called)
        Transaction transaction = DatastoreServiceFactory.getDatastoreService().getCurrentTransaction(null);
        boolean transactionPending = transaction != null;
        if (transactionPending) {
            transaction.rollback();
        }
 
        // Clean up the App Engine data store
        ApiProxyLocalImpl proxy = (ApiProxyLocalImpl) ApiProxy.getDelegate();
        if (proxy != null) {
            LocalDatastoreService datastoreService = (LocalDatastoreService) proxy.getService("datastore_v3");
            datastoreService.clearProfiles();
        }
 
        // Clean up the App Engine services
        ApiProxy.setDelegate(null);
        ApiProxy.clearEnvironmentForCurrentThread();
 
        // Report the issue with the transaction still open
        if (transactionPending) {
            throw new IllegalStateException("Found a transaction neither committed nor rolled back. " +
                    "Probably related to a missing PersistenceManager.close() call.");
        }
    }
 
    /**
     * Creates a PersistenceManagerFactory on the fly, with the exact same information
     * stored in the /WEB-INF/META-INF/jdoconfig.xml file.
     */
    public PersistenceManagerFactory getPersistenceManagerFactory() {
        if (pmf == null) {
            Properties newProperties = new Properties();
            newProperties.put("javax.jdo.PersistenceManagerFactoryClass",
                    "org.datanucleus.store.appengine.jdo.DatastoreJDOPersistenceManagerFactory");
            newProperties.put("javax.jdo.option.ConnectionURL", "appengine");
            newProperties.put("javax.jdo.option.NontransactionalRead", "true");
            newProperties.put("javax.jdo.option.NontransactionalWrite", "true");
            newProperties.put("javax.jdo.option.RetainValues", "true");
            newProperties.put("datanucleus.appengine.autoCreateDatastoreTxns", "true");
            pmf = JDOHelper.getPersistenceManagerFactory(newProperties);
        }
        return pmf;
    }
 
    /**
     * Gets an instance of the PersistenceManager class
     */
    public PersistenceManager getPersistenceManager() {
        return getPersistenceManagerFactory().getPersistenceManager();
    }
}

With such a class, the unit test part is easy and I can build complex test cases without worrying about the fidelity of my mock classes! That's really great.

Excerpt of the TestConsumerOperations class
public class TestConsumerOperations {
 
    private MockAppEngineEnvironment mockAppEngineEnvironment;
 
    @Before
    public void setUp() throws Exception {
        mockAppEngineEnvironment = new MockAppEngineEnvironment();
        mockAppEngineEnvironment.setUp();
    }
 
    @After
    public void tearDown() throws Exception {
        mockAppEngineEnvironment.tearDown();
    }
 
    @Test
    public void testCreateVI() throws DataSourceException, UnsupportedEncodingException {
        final String email = "unit@test.net";
        final String name = "Mr Unit Test";
        Consumer newConsumer = new Consumer();
        newConsumer.setDisplayName(name);
        newConsumer.setEmail(email);
        assertNull(newConsumer.getId());
 
        // Verify there's no instance
        Query query = new Query(Consumer.class.getSimpleName());
        assertEquals(0, DatastoreServiceFactory.getDatastoreService().prepare(query).countEntities());
 
        // Create the user once
        ConsumerOperations ops = new ConsumerOperations();
        Consumer createdConsumer = ops.createConsumer(newConsumer);
 
        // Verify there's one instance
        query = new Query(Consumer.class.getSimpleName());
        assertEquals(1, DatastoreServiceFactory.getDatastoreService().prepare(query).countEntities());
 
        assertNotNull(createdConsumer.getId());
        assertEquals(email, createdConsumer.getEmail());
        assertEquals(name, createdConsumer.getDisplayName());
    }
    
    // ...
}
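
For context, the Consumer class exercised above is a persistent entity. Here is a minimal sketch of what such an entity might look like, with field names inferred from the test; in the real project the class carries JDO annotations such as @PersistenceCapable and @Persistent, which are omitted here so the sketch compiles without the JDO jars:

```java
// Minimal sketch of a persistent entity like Consumer (illustrative only).
// In the real project this class is annotated with @PersistenceCapable and
// @Persistent (javax.jdo.annotations) so DataNucleus can enhance and store it.
public class Consumer {
    private Long id;            // key; the datastore assigns it on makePersistent()
    private String displayName;
    private String email;

    public Long getId() { return id; }
    public String getDisplayName() { return displayName; }
    public void setDisplayName(String displayName) { this.displayName = displayName; }
    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }
}
```

Before being persisted, getId() returns null, which is exactly what the first assertion of testCreateVI() verifies.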

Conclusion

As a big fan of TDD, I'm now all set to cover the code of my [still a secret] project efficiently. That does not mean everything is correct, rather that everything I thought about is correctly covered. At the time of this writing, just for the server-side logic, the code I produced is more than 10,000 lines, and the unit tests bring an additional 23,400 lines.

When it's time to refactor a bit or to add new features (plenty of them are lined up in my task list ;), I feel comfortable because I know I can detect most regressions (if not all) within the 3 minutes it takes to run the test suite.

If you want to follow this example, feel free to grab the various mock classes I have added to my two-tiers-utils open-source project. In addition to the mock classes for the App Engine environment, you'll find:

  • Basic mock classes for the servlet (javax.servlet) and javamocks.io packages -- I had to adopt the root javamocks because the JVM class loader does not accept classes created on the fly in the java root package.
  • A mock class for twitter4j.TwitterUser -- I needed a class with a public constructor and an easy way to create a default account.
  • A series of mock classes for David Yu's Project, which I use to let users log in with OpenID credentials. Read the discussion I had with David on ways to test his code -- in fact, the code he produced and that I customized for my own needs and for other security reasons.
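
These mock classes follow the classic pattern described in JUnit in Action: a hand-written implementation of a dependency that records calls and returns canned values, so the code under test runs in isolation. A generic, self-contained illustration of that pattern (the MailService interface and names here are hypothetical, not the actual two-tiers-utils API):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical service interface the production code depends on.
interface MailService {
    void send(String recipient, String message);
}

// Hand-written mock: records every call so the test can assert on them,
// instead of actually sending mail.
class MockMailService implements MailService {
    final List<String> sentTo = new ArrayList<String>();

    @Override
    public void send(String recipient, String message) {
        sentTo.add(recipient); // capture for later verification
    }
}
```

A test then injects MockMailService where the production code expects a MailService, exercises the logic, and asserts on the recorded sentTo list.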

For other details on my library, read my post Internationalization and my two-tiers-utils library.

I hope this helps.

A+, Dom
--
References:

  1. Google App Engine: the homepage and the SDK page.
  2. See my post on Agile: SCRUM is Hype, but XP is More Important... where I mentioned the following techniques: Continuous Integration (CI), Unit testing and code coverage (CQC), and Continuous refactoring.
  3. I know that keeping 100% as the target for code coverage numbers is a bit extreme. I read the article Don't be fooled by the coverage report soon after I started using Cobertura. In addition to reducing the exposure to bugs, 100% coverage gives a very high chance of detecting regressions before pushing updates to the source control system!
  4. Vincent Massol; JUnit in Action; Editions Manning; www.manning.com/massol and Petar Tahchiev, Felipe Leme, Vincent Massol, and Gary Gregory; JUnit in Action, Second Edition; Editions Manning; www.manning.com/tahchiev. I used to ask any new developer joining my team to read at least chapter 7: Testing in isolation with mock objects.
  5. JDO stands for Java Data Objects; it is an attempt to abstract data storage manipulation. The code is instrumented with Java annotations like @Persistent, enhanced at compile time, and dynamically connects to the data source thanks to a few properties files. Look at the App Engine - Using the Datastore with JDO documentation for more information.
  6. For general limitations, check the page Will it play in App Engine. For JDO-related limitations, check the bottom of the page Using JDO with App Engine.

Friday, October 16, 2009

Canadian Wireless Management Forum - my review

Last week, I attended the Canadian Wireless Management Forum in Montreal. Honestly, I was a bit disappointed because none of the presenters were as great as the ones I met last year. Over the day, I was still able to gather bits of information I want to share here ;) Thanks to my company Compuware for letting me spend the day there.

iPhone and other smart phones deployed in enterprises

The main focus of the conference was managing wireless (read: mobile phone) communications in enterprises. Some presenters talked about how to control expenses, from sharing guidelines up to using monitoring tools that suggest policies based on statistical analysis of the monthly bills. Testimonies showed that applying strict control over mobile phone usage and contracts cut costs by 10 to 30%!

At one point, Nicolas Arsenault made a strange point. Here is what I remember from his talk:


Credits: smoothouse

Credits: Josh Bancroft
Deploying an iPhone application for your employees, compared to a BlackBerry application, has a much lower cost because employees already own the device or are more likely to buy it. With employees paying for the phone, and probably paying to upgrade regularly (say, from an iPhone 3G to the latest 3GS), employers can just cover the data plans and spend more resources on application development, which can eventually be offered to the company's customers. Another benefit relates to technical support: phone owners stop bothering the enterprise help desk about their phones and contact the manufacturers or software vendors directly (they are on their own)...

While this approach has evident economic benefits, I disagree with it because:
  • In general, letting employees own their work mobile phones causes problems when they leave the company. Most employees have a non-compete clause in their contract, usually valid for 6 months after the employment contract ends. During that period, they cannot compete with their previous employer. Imagine a salesperson, an account manager, or a consultant who owns his phone number and is referenced in the phone directories of all his previous contacts. When these contacts want to deal with the company, who do you think they'll call: the mobile phone or the company front line? Now that these employees are out, they no longer submit their phone bills (as they did with their expense reports), so no one can detect the issue anymore… And this is without mentioning that the employee's replacement and colleagues have no way to get the mobile phone's contact list to continue business as usual!
  • None of the companies I worked for ever let me use my own computer on their network! For IT departments, this practice would raise too many security concerns. I even know cases where just installing VPN software on your own machine silently installs a bunch of monitoring tools that can mess up your system (thanks to VirtualBox, it's easy to limit these nasty side effects ;). In the old days, when phones were dumb, just able to handle voice calls and exchange text messages (SMS), the risk of phone infection by viruses was pretty negligible. The most widely used phone operating system is Symbian, found on Nokia and Sony Ericsson phones, and J2ME is a common application framework, even on phones running Windows Mobile. While the identified viruses are few and not very damaging (sending SMS to premium services on your behalf, for example), the newest smart platforms present many more threats because the corresponding phones can host tons of applications. In such an environment, how can a company force software upgrades on systems it does not own? How can it force an employee to upgrade to new hardware because the current one is compromised?
  • My last point is more ethical: how far should companies go in shifting the burden onto their employees? An iPhone costs around $800. It's often offered for around $200 with a 3-year contract, which basically costs around $60/month with a simple data plan. If the telecom operators (telcos) subsidize such a phone so aggressively, they surely expect a higher average revenue per user (ARPU). In addition to being tied to the telco for a long period of time (compare 3 years with the 3 to 6 months between technological evolutions: 6 to 12 times longer!), employees have to bear costs the company should assume. Usually, companies try to provide a comfortable work environment to get the most from their employees, and that's fair to me: if the company gives more than the salary, employees are more likely to deliver better work. By encouraging employees to assume the cost of new mobile phones, I see a regression: companies give less while expecting more (being reachable—possibly traceable—outside office hours, for example). IMO, it's yet another example of technological progress that might worsen the condition of vulnerable people...

Mobile payment and Near-field-communication (NFC)

Briefly on this topic, I want to mention the presentations made by Daniel Martin for Atlas Telecom Mobile, David Robinson for Rogers Wireless, and Prakash Hariramani for Visa. They talked about an experiment conducted in downtown Toronto where, thanks to Motorola phones equipped with an NFC emitter and a good number of participating retailers, they were able to offer mobile payment over the mobile phone network. Visa's solution, called PayWave™ (MasterCard's is called PayPass™), was inserted into the Motorola phone extension and allowed consumers to pay for their purchases quite easily.

During the discussion, Mr. Robinson described Rogers Wireless' approach as strictly standards-based. He mentioned the recent u-turn of telcos that had continued to invest in closed and proprietary solutions (like CDMA) and are now moving to standardized ones (like HSPA, which is an upgrade of GSM, on the road to LTE—see the example of Bell and Telus in Canada). Rogers Wireless' approach is then to work with the rest of the major industry players (Orange, Vodafone, etc.) to define a solution for anyone, anywhere. Mr. Robinson talked about the possibility of defining an extended SIM card (SIM stands for Subscriber Identity Module; the card contains a processor which manages information very securely). This new SIM card will have NFC capabilities and will be able to interact with contactless payment terminals. It's possible that these SIM cards will contain additional information, like a driver's license identifier that police officers will be able to read, an insurance number for government agencies, etc. The phone will provide the interface to enable data access, and smart phones with touch screens open the door to various robust verification techniques.

Because more and more people are more attached to their phone than to their wallet, this approach may well be more convenient, more secure, and more compact (no more cash, business cards, credit cards, etc. ;) Who said that implanting the SIM under someone's skin, a SIM that can unlock your phones, cars, houses, computers, etc., is just science fiction?

Telco business model in danger!

The last point of interest to me came from Mario Bouchard, from iBwave.

iBwave offers solutions to improve in-building wireless coverage. Given that 60-80% of mobile usage happens indoors, and that telcos have difficulty boosting power or multiplying outdoor antennas near high-density areas, these places stay mostly uncovered. Mr. Bouchard illustrated his point with a simulation of the poor performance of traditional networks on the McGill campus—really amazing! In conclusion, iBwave sits in a very nice and promising niche ;)

To introduce his company's activities, Mr. Bouchard showed two diagrams which made me think for a while. Let me try to reproduce them.


Mr. Bouchard stated that 15 years ago, innovation came from the network manufacturers: they invented the technology, others created cell phones to connect to the new networks, some operators offered (very expensive) plans, and consumers (locked in with long-term contracts) tried to communicate.



Credits: DigitalAlan
Today, the order has been scrambled:
  • Thanks to rapid technology evolution, designers of mobile devices can embed many types of sensors into communicating machines, and with increasing miniaturization, such machines are pervasive!
  • Because of the reduced delay between big technology evolutions (think about the iPhone, which is just 2 years old), consumers choose their devices carefully and bargain more to get the best price-quality ratio.
  • Network manufacturers now do their best to provide networks that can deliver at the rhythm the devices can consume. When the European community created the Groupe Spécial Mobile (the initial meaning of the GSM acronym, now known as Global System for Mobile communications) in 1982, we had to wait until 1991 to see the first GSM network. GSM is also known as the 2G technology. EDGE (Enhanced Data rates for GSM Evolution, or 2.5G) was introduced in 1999, the first 3G technologies (HSPA/UMTS and EV-DO/CDMA) were delivered in late 2001, and the coming 4G ones (LTE, for Long Term Evolution, and WiMax) are on the test bench.
  • At the end of the line, now there are the telcos:
    • Investments are phenomenal
    • The competition is fierce (not fierce enough in Canada, IMO)
    • Customers are volatile and they always want the latest phone
    • They have to subsidize the phones to lower the barriers to entry
    • They have to provide the best coverage everywhere
    • Customers are quick to leverage social tools to complain about them
    • Customers don't respect the old rules (read: jailbreak their phone)
    • Customers are not the cash cows they used to be...

I don't think the telco future looks very bright. Rather than traditional telecommunication service providers, they are more and more just Internet providers. Personally, I'm fine with communicating over the Internet (VoIP/SIP), streaming on the Internet (Qik.com, Layar.com), and receiving instant messages (IM) instead of text messages (SMS). And look, whenever I have a chance to connect my phone to a wifi network, I'm happy to get better connectivity while saving a few bucks.

Interesting developments to follow, aren't they?

A+, Dom