Thursday, December 23, 2010

Android Draw 9-patch seems broken too

As far as I can tell, the current release of the Android SDK (R08) ships without the Swing Desktop jar, and this causes draw9patch to fail.
~$ draw9patch &
[2] 52452
~$ Exception in thread "AWT-EventQueue-0" java.lang.NoClassDefFoundError:
       org/jdesktop/swingworker/SwingWorker
 at com.android.draw9patch.Application$1.run(Application.java:48)
 at java.awt.event.InvocationEvent.dispatch(InvocationEvent.java:209)
 at java.awt.EventQueue.dispatchEvent(EventQueue.java:461)
 at java.awt.EventDispatchThread.pumpOneEventForHierarchy(EventDispatchThread.java:269)
 at java.awt.EventDispatchThread.pumpEventsForHierarchy(EventDispatchThread.java:190)
 at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:184)
 at java.awt.EventDispatchThread.pumpEvents(EventDispatchThread.java:176)
 at java.awt.EventDispatchThread.run(EventDispatchThread.java:110)

[2]+  Done                    draw9patch

The 'fix' is rather simple: just download the JAR for Swing Desktop from here, and drop the swinglabs-0.8.0.jar file into the [sdk-install-dir]/tools/lib folder.

The Swing Desktop project can be found here.



Update -- Richard just left a comment here and pointed me to his site (Android 9 Patch), which has plenty of 9-patch icons that look really awesome; I thought I'd share the goodness (especially given he's so kind as to share them at no charge).

Go check out Patch 15, it's pretty impressive!

Carrier Billing comes to Android!

Some of you folks may have recently noticed a little remark at the bottom of an email from the Android Market folks:
Finally, we wanted to bring to your attention that Android Market now offers a new form of payment for users on the AT&T network -- Direct Carrier Billing. This payment option lets Android users on the AT&T network purchase applications more easily.
This is an awesome achievement on the part of the Google folks: purchases made by users on the AT&T network are charged directly to their phone bills. In turn, this enables us (the developers) to receive our payments with fewer user clicks and in a more streamlined way, smoothing the path for users to purchase our apps.

And all that, at virtually no additional complexity for us, which is, I'd say, pretty awesome!

I would expect more integrations and more awesomeness in the next few months: stay tuned!

Wednesday, December 8, 2010

Latest update for Android SDK breaks for Ubuntu Karmic

If you have recently updated your Android SDK to R08 (Gingerbread, 2.3) on Linux and are using Ubuntu Karmic, chances are that you will not be able to run the emulator: It will simply die with the following error:


$ ./emulator
./emulator: /lib32/libc.so.6: version `GLIBC_2.11' not found (required by ./emulator)



This is caused by an incompatibility with Karmic's installed GLIBC - the new SDK/Emulator works just fine on Lucid.


The Android team is working on a fix - in the meantime, a workaround is to download the _r07 version of the tools from here, and replace the emulator binary in the tools/ folder under the Android SDK installation folder.


A bit hacky, but works.

Tuesday, November 30, 2010

Using Boost in Ubuntu with Eclipse

Boost is an open source library of extremely useful and carefully designed C++ classes and functions, ranging from graph algorithms to regular expression matching to multi-threading.
Part of the Boost library was also integrated into the C++ standard as the TR1 set of libraries.
You can learn more about Boost here.
Using it with Eclipse in Ubuntu is definitely possible, but not straightforward; so I decided to post this simple tip here, to help folks avoid some of the grief.
The first step in using Boost in your code is to download the header (.hpp) files and (optionally) source code - the latest release is 1.45 and is available from Boost's website.
In theory, you could compile it, and build the libraries' binaries yourself: by all means, go ahead and do it, but there is an easier way, if you can live with a slightly 'older' version.

Ubuntu 9.10 (Karmic Koala) comes with Boost 1.34 pre-installed (this is, at any rate, the version that I had on my system):
$ ls /usr/lib/libboost*
/usr/lib/libboost_date_time-gcc41-1_34_1.so.1.34.1
/usr/lib/libboost_date_time-gcc41-mt-1_34_1.so.1.34.1
/usr/lib/libboost_date_time-gcc42-1_34_1.so.1.34.1
/usr/lib/libboost_date_time-gcc42-mt-1_34_1.so.1.34.1
/usr/lib/libboost_filesystem-gcc41-1_34_1.so.1.34.1
/usr/lib/libboost_filesystem-gcc41-mt-1_34_1.so.1.34.1
/usr/lib/libboost_filesystem-gcc42-1_34_1.so.1.34.1
/usr/lib/libboost_filesystem-gcc42-mt-1_34_1.so.1.34.1
/usr/lib/libboost_iostreams-gcc41-1_34_1.so.1.34.1
/usr/lib/libboost_iostreams-gcc41-mt-1_34_1.so.1.34.1
/usr/lib/libboost_iostreams-gcc42-1_34_1.so.1.34.1
/usr/lib/libboost_iostreams-gcc42-mt-1_34_1.so.1.34.1
/usr/lib/libboost_regex-gcc41-1_34_1.so.1.34.1
/usr/lib/libboost_regex-gcc41-mt-1_34_1.so.1.34.1
/usr/lib/libboost_regex-gcc42-1_34_1.so.1.34.1
/usr/lib/libboost_regex-gcc42-mt-1_34_1.so.1.34.1
/usr/lib/libboost_signals-gcc41-1_34_1.so.1.34.1
/usr/lib/libboost_signals-gcc41-mt-1_34_1.so.1.34.1
/usr/lib/libboost_signals-gcc42-1_34_1.so.1.34.1
/usr/lib/libboost_signals-gcc42-mt-1_34_1.so.1.34.1
/usr/lib/libboost_thread-gcc41-mt-1_34_1.so.1.34.1
/usr/lib/libboost_thread-gcc42-mt-1_34_1.so.1.34.1

However, by using Synaptic, you can actually install the more recent 1.40.0 version:


This will install a bunch of libboost_*.so.1.40.0 files into /usr/lib: in order to match the libraries with the header files (and avoid introducing incomprehensible compilation errors at best, and subtle bugs at worst) you should download the matching boost_1_40_0.tar.bz2 from Boost download archives.

We would be almost there, were it not for the fact that the library sonames do not match what gcc's -l linker option expects.

In gcc, the -L option specifies a 'search directory' (not needed here, as /usr/lib is searched for libraries by default) and -l specifies additional dynamic (.so) or static (.a) libraries to link against: you name the library to include during the link step by omitting the `lib` prefix and the .so or .a extension.
In other words, if your HelloBoost source uses code from Boost Regex (libboost_regex.so), you would specify it like this:
$ gcc -Wall hello_boost.cc -o HelloBoost -lboost_regex

You can see where I'm heading with all this: the last step is to add symbolic links that do away with the soname suffixes of the Boost libraries, and all will be well:
$ ln -s /usr/lib/libboost_regex.so.1.40.0 /usr/lib/libboost_regex.so

Given that there are in total 24 files for the whole of Boost 1.40, I've done a bit of find & replace magic and come up with the following shell script (the other advantage being that, when a later version becomes available for Ubuntu, I will not have to change any of my Eclipse project settings, just the VERSION value in the script):
#!/bin/bash
#
# Simple script to enable Boost with simple -l gcc option
# Use: gcc -Wall hello_world.cc -o HelloWorld -lboost_regex
#
# Created by M. Massenzio, 2010-11-30

VERSION=1.40.0

for libname in libboost_date_time \
            libboost_filesystem \
            libboost_graph_parallel  \
            libboost_graph      \
            libboost_iostreams  \
            libboost_math_c99f  \
            libboost_math_c99l  \
            libboost_math_c99   \
            libboost_math_tr1f  \
            libboost_math_tr1l  \
            libboost_math_tr1   \
            libboost_mpi        \
            libboost_prg_exec_monitor \
            libboost_program_options  \
            libboost_python-py25      \
            libboost_python-py26      \
            libboost_regex            \
            libboost_serialization    \
            libboost_signals          \
            libboost_system           \
            libboost_thread           \
            libboost_unit_test_framework \
            libboost_wave \
            libboost_wserialization 
do
  ln -s /usr/lib/$libname.so.$VERSION /usr/local/lib/$libname.so
done


The final step is to tell Eclipse what to look for when building your binary: right-click on your project's folder (in the C++ Perspective, Project Explorer), then Properties > C/C++ Build > Settings, select GCC C++ Linker > Libraries and add the 'stripped' library names to the option list:


Note for the curious: the additional libraries listed there are for the Google C++ unit testing framework (Google Test), available here - you will have to build it, but that's pretty straightforward and results in the two libraries libgtest.a and libgtest_main.a. Be careful to add those only to your 'Test' configuration, and to exclude your unit tests from the build in your Release/Debug configurations.

Tuesday, November 23, 2010

Using the same model classes in Android, GWT and JPA

These days, it is rather common to have a mobile-enabled web service or application, where you essentially enable your users to access the service both via a browser-based desktop application and from their mobiles whilst on the go.


It's usually the case that the main business logic, as well as the business Model, are shared between the browser components and your mobile app: however, it is not obvious how to re-use code (or even use the same source code) when you are essentially using two completely separate SDKs, and even different JREs!


In fact, if you are using Google Web Toolkit on the front-end, there's not even a JRE involved, as your classes will be compiled to JavaScript.


This short video shows how you can actually use the very same source files in all three layers, so that when you make changes (fix bugs, add functionality) the improvements are immediately reflected everywhere - and, even better, you avoid subtle bugs that may be caused by the code inadvertently going "out of sync" in one of the layers.
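To make the idea concrete, here is a minimal sketch of the kind of model class that can be shared verbatim (the class and field names below are invented purely for illustration): it sticks to plain Java types plus java.io.Serializable, which the GWT compiler emulates, Android handles natively, and JPA can map via an external orm.xml instead of annotations.

package com.example.shared.model;

import java.io.Serializable;
import java.util.Date;

/**
 * A plain POJO: no Android, GWT or JPA imports, so the very same source file
 * compiles unchanged in the Android app, the GWT client and the server.
 */
public class Receipt implements Serializable {
  private static final long serialVersionUID = 1L;

  private Long id;
  private String merchant;
  private int amountCents;
  private Date date;

  public Receipt() { }

  public Long getId() { return id; }
  public void setId(Long id) { this.id = id; }

  public String getMerchant() { return merchant; }
  public void setMerchant(String merchant) { this.merchant = merchant; }

  public int getAmountCents() { return amountCents; }
  public void setAmountCents(int amountCents) { this.amountCents = amountCents; }

  public Date getDate() { return date; }
  public void setDate(Date date) { this.date = date; }
}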


The example is based on my Android Receipts application, which can also be freely downloaded in Android Market.



Please let me know what you think, and whether you are finding the videos useful - or whether you prefer the written word!


Update -- Someone commented that the low resolution of the video (480p) makes it difficult to follow: I understand and, in fact, the original video was at a higher resolution (720x576, 25 fps); unfortunately, when uploading to YouTube the resolution gets dropped during transcoding.
Disappointing, I know, but hardly something I can do anything about.


I would suggest instead that you also see Part II of this tutorial, which further elaborates on the topic and provides some code samples.

Saturday, November 20, 2010

Dump CPU temperature data to a file in Ubuntu

Update: I've figured out that it makes a lot more sense to have a reading of the CPU load to correlate with the temperature reading, so I've added that too using /usr/bin/uptime.

Thursday, November 11, 2010

Changing the value returned by getModuleName

When refactoring the name of a GWT module (which typically involves changing the name of the Module.gwt.xml file - see this post), you also have to change the value returned by getModuleName() in every GWTTestCase:


This is why I usually factor the module name out into a utility class and reference a static constant from every test case: it saves a lot of pain and effort after some heavy refactoring.
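A minimal sketch of what I mean (the class and constant names are mine, purely for illustration):

// Single place to change when the module is renamed
public final class TestConstants {
  public static final String MODULE_NAME = "com.alertavert.gwt.GridBased";
  private TestConstants() { }
}

// Every GWTTestCase then simply references the constant
public class GridBasedViewTest extends GWTTestCase {
  @Override
  public String getModuleName() {
    return TestConstants.MODULE_NAME;
  }
}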

If you do want to watch the video, here are a few tips:

  • if your connection is fast enough, choose 720p as the resolution, or the various menu items and text in general, will be very blurry;
  • this is best viewed full-screen;
  • a description as to how to create a reusable GWT Module can be found here

Change a GWT Module name

This is really a trivial how-to, but I wanted to try out how easy it was to capture Eclipse's window using xvidcap, then upload the video to YouTube and embed it here.


This was all relatively simple (from having the idea to a posted and tested blog entry took around 15 minutes), so I plan to do a few more.

If you do want to see the video, here are a few tips:

  • if your connection is fast enough, choose 720p as the resolution, or the various menu items and text in general, will be very blurry;
  • this is best viewed full-screen;
• do remember, if you follow along and you have unit tests, to change the module name there too (or see this video);

Sunday, November 7, 2010

Implementing a Remote Service in Android - Part I

When I decided that a particular idea of mine could be best implemented as an Android Service, running in the background, I found that there is not much information available on the web, beyond some very basic examples, android.com's API Demo sample, and the AIDL tutorial page.

However, those are rather "sparse" guidelines, and several gaps are left for you to fill in by trial and error: so I decided to post this brief how-to, along with the code itself.

As noted in the Android Developer's guide:

A service doesn't have a visual user interface, but rather runs in the background for an indefinite period of time. For example, a service might play background music as the user attends to other matters, or it might fetch data over the network or calculate something and provide the result to activities that need it. Each service extends the Service base class.

There are actually two ways of implementing a service: one is to just extend the Service class, implement the onBind() method and then implement the Binder interface in your service implementation class (this is the approach described here); the other is to use AIDL (Android Interface Definition Language), the approach followed by the API Demo sample (and further described in Part II of this blog - coming soon).

So, here we go: let's assume that we want to implement a simple service which enables developers to add a simple UI element to their app to allow delighted users to make donations, freeing them from having to implement all the machinery. Our service will take a Developer Key and an Amount value (in $ cents), and will credit the amount to the developer's account (we assume that the user's credentials can be retrieved from the system - how to do that is outside the scope of this blog); the app developer herself will only have to implement a simple UI (or a fancy, complicated one: that's up to her) and connect to our service.

The service class definition is pretty trivial:
public class DonateService extends Service {
  @Override
  public IBinder onBind(Intent intent) {
    return new DonateServiceImpl(getResources());
  }
}

The service implementation itself is rather simple too:
public class DonateServiceImpl extends Binder { ... }
with all the 'action' being in its onTransact() method:
@Override
  protected boolean onTransact(int code, Parcel data, Parcel reply, int flags) {
    if (code != res.getInteger(R.id.SERVICE_CODE)) {
      Log.e(getClass().getSimpleName(), "Transaction code should be " +
          res.getInteger(R.id.SERVICE_CODE) + ";" + " received instead " + code);
      return false;
    }
    Bundle values = data.readBundle();
    String devKey = values.getString(res.getString(R.string.DEV_KEY));
    int amountInCents = values.getInt(res.getString(R.string.AMT));
    Log.i(getClass().getSimpleName(), getUser() + " wants to donate " +
        amountInCents + " to " + devKey);
    if (amountInCents <= 0) {
      Log.e(getClass().getSimpleName(), "Amount should be a positive integer (" +
          amountInCents + " is not).");
      return false;
    }
    Log.d(getClass().getSimpleName(), "Sending request to server");
    // This is where we would implement our HTTPS connection service, most likely to
    // some RESTful service
    
    return true;
  }

And this is pretty much all there is to it, as far as handling a service call and retrieving the data marshalled by the system from the calling Activity is concerned (please do read the description of the Service class, as well as the notes about a service's lifecycle: similar to an Activity, there are onCreate(), onDestroy() etc.).
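If you do want to hook into those lifecycle callbacks, a minimal sketch would look like this (not required for the donation example; it just logs when the service comes and goes):

public class DonateService extends Service {
  @Override
  public void onCreate() {
    super.onCreate();
    Log.d(getClass().getSimpleName(), "Service created");
  }

  @Override
  public IBinder onBind(Intent intent) {
    return new DonateServiceImpl(getResources());
  }

  @Override
  public void onDestroy() {
    Log.d(getClass().getSimpleName(), "Service destroyed");
    super.onDestroy();
  }
}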

If you now try to 'test' it by using the Android Instrumentation framework from a 'sibling' test project (you create one typically at the same time as you create a new Android project in Eclipse), your code should look something like this:
public void testServiceRuns() {
    Resources myRes = getContext().getResources();
    assertNotNull("The test case resources are null", myRes);
    
    Intent i = new Intent(ACTION);
    IBinder binder = bindService(i);
    assertNotNull(binder);
    
    Bundle values = new Bundle();
    values.putString(myRes.getString(R.string.DEV_KEY), "12345ABCDE");
    values.putInt(myRes.getString(R.string.AMT), 99);
    Parcel data = Parcel.obtain();
    data.writeBundle(values);
    
    try {
      int serviceCode = myRes.getInteger(R.id.SERVICE_CODE);
      assertTrue(binder.transact(serviceCode, data, null, 0));
      Log.i("test", "Service executed successfully");
    } catch (Exception ex) {
      Log.e("test", "Could not transact service: " + ex.getLocalizedMessage(), ex);
      fail(ex.getLocalizedMessage());
    };
  }

Funnily enough, this will work even if you mistype the name of the service's implementation class in the Android Manifest (AndroidManifest.xml):
<application android:label="@string/service_label" 
                     android:icon="@drawable/app_icon">
   <service android:exported="true" 
            android:name=".DonateService"
            android:process=":remote">
      <intent-filter>
         <action android:name="@string/ACTION_DONATE" />
      </intent-filter>
   </service>
</application>

(try changing .DonateService to .blah and this will still work) -- all this to say: the mechanics of Android unit testing are different from a remote invocation, and your tests may succeed even though your service is unavailable to other applications.

Notice in particular android:process=":remote" - this makes the service run in its own process, separate from the client's; the <intent-filter> is what tells the Android application manager to consider your service as a possible target when another process invokes startService(Intent) or bindService(Intent).

Finally, your test case should extend ServiceTestCase<DonateService> - but make sure you change the constructor from Eclipse's auto-generated one to something like this:

public DonateServiceBinderTest(String name) {
    super(DonateService.class);
    setName(name);
  }
as explained elsewhere.

The <intent-filter> is the trick that does all the magic here: when matched against the Intent's action, it will cause your service to be invoked.
Which brings us to the next topic: how do we invoke the service from a separately developed, completely independent application?

I have created a very simple Activity (download service_example_usage-0.2_beta from our site) that does exactly that.

In the following, I will largely ignore the niceties of building an Android UI, as these are widely explained elsewhere and are largely outside the scope of this post, and will focus instead on the specifics of calling a Service.

As a general rule, when invoking a Service, you must do it outside of your primary UI thread: for starters, you have no idea how long it will take to complete and, secondly, you want to give users the ability to terminate it, should it take too long.

Hence:

Rule #1 - execute a service invocation from within a Runnable that executes outside your main UI thread, and set up the service call, so that it has a callback handler;

Which neatly leads to

Rule #2 - to handle the service outcome where this needs to influence changes in the UI (and it most likely will, even for the trivial task of notifying the user that the call succeeded/failed, whatever...) use a Handler, or your app will crash unceremoniously.

Let us dissect the code in SimpleApplication.java (activateService(final int amt)) one step at a time:

1. Visually notify the user that we're engaging in something that will take time to complete, and we aren't quite sure how long:
LinearLayout panel = (LinearLayout) findViewById(R.id.ProgressPanel);
    panel.setVisibility(View.VISIBLE);

the progress bar has been defined in the layout/main.xml file as 'indeterminate' and as a spinning wheel:
<ProgressBar android:id="@+id/ProgressBar" android:layout_height="wrap_content" 
                android:layout_marginLeft="15px" android:layout_width="wrap_content"
                android:indeterminate="true" android:indeterminateBehavior="repeat" 
                android:visibility="visible"/>

2. Create an intent, whose action will match the service's action's intent-filter (see note below[*] regarding sharing common strings between the service and its intended users):
Intent i = new Intent(getResources().getString(R.string.ACTION_DONATE));

3. Using this Activity's Context, bind to a service that can handle this Intent, by providing an implementation of the ServiceConnection interface:
boolean isConnected = bindService(i, new ServiceConnection() {
     @Override
     public void onServiceConnected(ComponentName name, IBinder service) {
       Bundle values = new Bundle();
       values.putString(getResources().getString(R.string.DEV_KEY),
           getResources().getString(R.string.DEVELOPER_KEY_VALUE));
       values.putInt(getResources().getString(R.string.AMT), amt);
       Parcel data = Parcel.obtain();
       data.writeBundle(values);
       boolean res = false;
       try {
         res = service.transact(serviceCode, data, null, 0);
       } catch (RemoteException ex) {
         Log.e("onServiceConnected", "Remote exception when calling service", ex);
         res = false;
       }
       Message msg = Message.obtain(h, serviceCode, amt, (res ? 1 : -1));
       msg.sendToTarget();
     }

     @Override
     public void onServiceDisconnected(ComponentName name) {
     }
   }, Context.BIND_AUTO_CREATE);

4. Wrap the above into a Runnable, and then kick the Thread alive:
Thread serviceThread = new Thread(new Runnable() {
     @Override
     public void run() { //... }
   });
   serviceThread.start();

5. To enable this newly created thread to 'callback' your activity and carry out tasks inside the UI thread, you need to implement a Handler class, create a Message to wrap your returned results and then configure your handler as the message's target:
// outside the Runnable:
  final Handler h = new ServiceCompleteHandler(result, panel, ctx);
// this will be invoked inside the UI thread when called
// at the end of onServiceConnected, in the service activation thread
  Message msg = Message.obtain(h, serviceCode, amt, (res ? 1 : -1));
  msg.sendToTarget();

Your Handler needs to override the default handleMessage() method (that does nothing otherwise) and will be run by the System inside your UI thread (and will thus have access to your Activity's widgets, without causing a RuntimeException):
public static class ServiceCompleteHandler extends Handler {
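  // NOTE: result (a TextView), panel (the progress LinearLayout) and the
  // Context used for getResources() below are presumably fields set by the
  // constructor shown earlier -- new ServiceCompleteHandler(result, panel, ctx) --
  // which is omitted from this excerpt.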
  @Override
  public void handleMessage(Message msg) {
    int amt = msg.arg1;
    boolean outcome = msg.arg2 > 0;
    serviceComplete(amt, outcome);
  }

  void serviceComplete(int amt, boolean outcome) {
    panel.setVisibility(View.INVISIBLE);
    Rect r = new Rect(0, 0, 48, 48);
    Drawable icon;
    if (outcome) {
      icon = getResources().getDrawable(R.drawable.accepted_48);
    } else {
      icon = getResources().getDrawable(R.drawable.cancel_48);
    }
    icon.setBounds(r);
    result.setCompoundDrawables(icon, null, null, null);
    int dollars = amt / 100;
    int cents = amt % 100;
    result.setText(getResources().getString(outcome ? 
        R.string.thanks : R.string.sorry) + dollars +
        "." + cents + "\n" + getResources().getString(R.string.promo));
    result.setVisibility(View.VISIBLE);
  }
}

And this is pretty much all there is to it: sure, it's a bit more convoluted than just invoking a method on a class, but we're talking inter-process communication (IPC) and remote method invocation (RMI) here, and they ain't pretty in any of the other frameworks either (think J2EE or, God forbid, CORBA).

In a subsequent post, I will show how to use AIDL to make invoking the service a more painless experience for the clients, but at the cost of having a slightly more convoluted development process on the service implementation's side.
---
[*] A note about re-using common strings (see the file res/values/donate_service_strings.xml): it is good practice not to use hard-coded strings in your service calls and data maps, and even better to keep them in a commonly shared file (instead of duplicating them in each application's strings.xml file).

Ideally, if you publish a service, you should make such a file freely available to your service users (e.g. downloadable from your API docs site's pages); equally, if you use a service, either use the available strings file (hopefully made available by your provider -- if they don't provide one, question your desire to use their service...) or partition those service-specific strings into their own dedicated file.

Eclipse users: while developing your service, you will certainly want to test it out with a simple app (very much like the one I describe here). The obvious way to heed the above advice would then be to add the 'strings' file as a "link" in Eclipse's Package Explorer, so that any changes made during service development would be automatically picked up by the 'testing' app.

This won't work (Android's plug-in does not pick up linked .xml files in the res/... tree when generating the R.java file). A simple workaround is to create a soft link to that file in the res/values/ directory:
$ ln -s ../DonateService/res/values/donate_service_strings.xml res/values/service_defs.xml
(you can give whatever name you wish to the link).
In the downloadable code, I've added a physical copy of the file to each package, to avoid people having trouble when re-using it: if you do use both projects to follow along and/or make changes, I recommend removing one copy and making a link as suggested here.

Sunday, June 6, 2010

Unit testing Android Activity


This blog is now hosted on codetrips.com (WordPress) 



This post is best read there (with any updates, as I will add to it)

------------------
As I mentioned earlier, testing on Android is not for the faint-hearted (or the man in a hurry) - documentation is very thin on the ground (although I've recently seen a few testing-related articles appear in the Developer documentation for the latest SDK - I haven't checked them out yet, though) and the API is cumbersome at the best of times (and outright misleading at the worst).

The "master class" to test an Activity class MyActivity, is ActivityInstrumentationTestCase2<MyActivity> that instruments and initializes it.

Its documentation is marginally better than that for android.test package:

A framework for writing Android test cases and suites.

but, hey, we haven't set the bar too high here.

Here are a few 'top tips' on how to avoid some of the grief, discovered whilst developing unit tests for my AndroidReceipts application:

Do not use the 'default' auto-generated constructor: take a String name argument and call setName()
In Eclipse, you can right-click on a class in the Project Explorer and then select New > Other... to create a new JUnit TestCase: in the ensuing dialog box, if you then select ActivityInstrumentationTestCase2 as the 'super' class, the plugin will helpfully auto-generate the full class with the following constructor:
// DON'T DO THIS - it won't compile
public MyActivityTest(String name) {
  super(name);
}
Here, Eclipse will (correctly) complain that there is no such constructor as ActivityInstrumentationTestCase2(String name), and you will quickly figure out that a (possible) super call may be something like this:
// DON'T DO THIS - it won't work
public MyActivityTest(String name) {
  super(PKG_NAME, MyActivity.class);
}
This will, sadly, cause the test runner (android.test.InstrumentationTestRunner) to happily ignore your test class:

[2010-06-06 22:33:20 - PolarisTest] Test run failed: Test run incomplete. Expected 51 tests, received 4

the trick is to add a call to setName(name) so that the test runner will find your tests (name, incidentally, is the name of the method being run).
// Do this instead:
public ScanActivityTest(String name) {
  // NOTE -- API Level 8 have deprecated this constructor, and replaced with one that simply takes the Class<T> argument
  super(PKG_NAME, MyActivity.class);
  setName(name);
}
If you are using the SDK 2.2 (API Level 8) version, there appears to be a new constructor that only takes the Class of the Activity under test (while the constructor shown above is deprecated): I have not tried it out, and targeting API Level 8 devices, at the moment, rather severely restricts your target market.
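For reference only (untested by me, as noted above), the newer form would presumably look like this:

// API Level 8+: single-argument constructor; the target package
// is derived automatically from the Activity class under test.
public MyActivityTest() {
  super(MyActivity.class);
}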

Beware of Eclipse (ADT) missing a change in project source
The typical cycle is to write some code, run the tests, make changes, run the tests again.

This generally works but, from time to time (I've been unable to discern a pattern), ADT misses a change in your source code and just re-runs the same tests as before.
As this typically happens when you make changes to the 'main' project (as opposed to the 'test' one), I suspect it occurs when you do not save the modified source file: this in turn does not trigger a re-build of the APK, a change which would otherwise have been picked up by the deployer of the 'test' project.

Be that as it may, keep an eye on the Console view and check that a new version of the APK for the 'main' or 'test' (or both) projects gets installed on the emulator.

If you 'remove' the app manually, you also must 'clean' the project
By the same token, at times it turns out that the only way to get ADT out of its own hole is to go into the emulator's Settings > Applications > Manage Applications and remove the installed application for either or both projects.

This will make the test runner deeply unhappy, and it will manifest its unhappiness by refusing to deploy the APKs and giving out an

[2010-06-06 19:28:31 - PolarisTest] Application already deployed. No need to reinstall.

The quickest way to 'fix' this is to go into Project > Clean... and clean one or both of the projects.

The default tearDown() calls onPause() (but not onStop(), let alone onDestroy())
Well, I was surprised to find that out - it would have been reasonable to expect the test fixture to run through the whole Activity lifecycle and shut it down "gracefully".

In case you have some 'session management' (and who doesn't these days, in a serious Android app? you want to preserve state so that the user can come back to your app and find it exactly the way she left it) this may cause some surprising results when testing.

Considering that, in the Android process management system, there is no guarantee of a 'graceful shutdown' (essentially, the scheduler wants to feel free to kill your process without having to wait for your app to get itself sorted out -- and quite rightly so: we don't want a "Windows experience" where a poorly-paid and even less-trained programmer can bring the whole system to its knees by sheer incompetence), this is just as well: in fact, the more I look into it, the more I find myself doing state management in the onPause / onCreate / onRestart lifecycle methods, and essentially ignoring onStop and onDestroy (the latter in particular - I sometimes wonder why it's there at all).

Whatever - words to the wise: your Activity's onStop/onDestroy won't be called, unless you call them yourself.
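If your test really needs the remaining callbacks to fire, one way to 'do it yourself' is a sketch along these lines (using the Instrumentation helpers; adjust to your own fixture):

@Override
protected void tearDown() throws Exception {
  Activity activity = getActivity();
  // The default tearDown() stops at onPause(), so drive the rest ourselves
  getInstrumentation().callActivityOnStop(activity);
  getInstrumentation().callActivityOnDestroy(activity);
  super.tearDown();
}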

Unit tests run concurrently but, apparently, @UiThreadTest prevents this for the UI thread
I must confess I'm not entirely clear about the full implications of using the @UiThreadTest annotation (apart from ensuring that the tests will run sequentially in the UI thread, thus avoiding predictable chaos if they were all allowed to access UI resources concurrently), but one fundamental consequence is that sendKeys cannot be called from within a test annotated with @UiThreadTest.

Despite the documentation being conspicuously silent about this minor detail, it turns out that it must not be run in the UI thread; all we know about this method is that it "Sends a series of key events through instrumentation and waits for idle." - whatever that means, the bottom line is that it does not return, and your test will eventually fail with a timeout exception.

From my limited experimentation, the only workaround is either to use it only in tests that are annotated with something such as @SmallTest or similar (but not @UiThreadTest), or to make it run in a separate thread (just create a Runnable that will execute once you are sure the UI elements have been initialized - critically, that the layout has been 'inflated').
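For example, a test using sendKeys could look like this minimal sketch (not annotated with @UiThreadTest; the R.id.amount widget id is invented purely for illustration):

@SmallTest
public void testTypeAmount() {
  // Move focus from the UI thread, then let things settle...
  getActivity().runOnUiThread(new Runnable() {
    @Override
    public void run() {
      getActivity().findViewById(R.id.amount).requestFocus();
    }
  });
  getInstrumentation().waitForIdleSync();
  // ...and only then send the key events from the instrumentation thread
  sendKeys(KeyEvent.KEYCODE_1, KeyEvent.KEYCODE_0, KeyEvent.KEYCODE_0);
}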

And, on this topic...

Activity.findViewById(id) - only use after you have 'inflated' the View (typically, by calling the setContentView() on an R.layout.my_layout resource)

Nothing much to add here, really, just be careful about how you sequence your asserts, if you need to verify conditions on UI elements (typically, Widgets) as the findViewById will invariably return null until the view is 'inflated' from the XML (if this is, indeed, the way you build your View).

On a similar note, I've found Window.getCurrentFocus() (and, similarly, Activity.getCurrentFocus()) pretty much useless: naively, I thought that, once the layout had been inflated, the 'focused window' would be the main container (or some random widget therein: that would have worked for my tests); in fact, this call almost invariably returns null (unless, I presume, you sendKeys to move the focus where you want it to be: this is rather cumbersome, in my opinion, and makes the tests brittle and too tightly coupled with the UI layout, which, in my book anyway, is A Bad Thing).

So, here is what I do instead:
 @UiThreadTest
 public void testOnDisplayReceipts() {
   Receipt r = new Receipt();
   instance.accept(r);
   instance.onDisplayReceipts();
   assertNotNull(instance.mGallery);

   // verifies that the Gallery view has been 'inflated'
   View gallery = instance.findViewById(R.id.gallery_layout);
   assertNotNull(gallery);
 }

Ok, it's not pretty, I'll give you that, but it works (and is rather independent of what I do in my Gallery view, what widgets are there and how they are arranged).


Beware of super.tearDown()
Worthy of featuring in one of Bloch's Java Puzzlers: what does this code do when run with an InstrumentationTestRunner?
You may also want to know that the test passes and that, upon exiting from testOnCreateSQLiteDatabase, the stub field contains a valid reference to the SQLiteDatabase just created and opened in its package-visible db field.
public class ReceiptsDbOpenHelperTest extends ActivityInstrumentationTestCase2 {

  public ReceiptsDbOpenHelperTest(String name) {
    super("com.google.android.applications.receiptscan", ScanActivity.class);
    setName(name);
  }

  public static final String DB_NAME = "test_db";
  ReceiptsDbOpenHelperStub stub;

  protected void setUp() throws Exception {
    super.setUp();
  }

  protected void tearDown() throws Exception {
    super.tearDown();
    if (stub != null) {
      String path = stub.db.getPath();
      Log.d("test", "Cleaning up " + path);
      if (path != null) {
        File dbFile = new File(path);
        boolean wasDeleted = dbFile.delete();
        Log.d("test", "Database was " + (wasDeleted ? "" : "not ") + "deleted");
      }
    }
  }

  /**
   * Ignore the details, but this does "work as intended," opens the database and returns a reference
   * to it in the db variable.
   * The {@code stub} is a class derived from {@link ReceiptsDbOpenHelper} and simply gives us access to
   * some protected / private fields
   */
  public void testOnCreateSQLiteDatabase() {
    stub = new ReceiptsDbOpenHelperStub(getActivity(), DB_NAME, null, 1);
    SQLiteDatabase db = stub.getReadableDatabase();
    assertNotNull(db);
    assertTrue(stub.wasCreated);
    assertEquals(stub.db, db);
  }
}

Well, you'll be surprised to know that our tearDown() does absolutely nothing: after the call to super.tearDown(), stub is null. (One can check this with a debugger session; at least, that's what I did: upon entering ReceiptsDbOpenHelperTest.tearDown(), stub is a perfectly valid reference; just after the call to super.tearDown(), it's null.) Apparently, a call to super.tearDown() on an ActivityInstrumentationTestCase2<MyActivity> wipes out all the 'context-related' instance variables.

Yes, I was too.

The fix is, obviously, trivial: move the call to super.tearDown() to the bottom of your tearDown() (luckily, this is not a constructor, so there's no reason why not to).
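For completeness, the reordered tearDown() would be a sketch along these lines (same cleanup as above, with super.tearDown() moved to the end):

protected void tearDown() throws Exception {
  // Do our own cleanup first, while the instance fields are still valid...
  if (stub != null && stub.db != null) {
    String path = stub.db.getPath();
    Log.d("test", "Cleaning up " + path);
    if (path != null) {
      File dbFile = new File(path);
      boolean wasDeleted = dbFile.delete();
      Log.d("test", "Database was " + (wasDeleted ? "" : "not ") + "deleted");
    }
  }
  // ...and only then let super.tearDown() wipe the test case's context
  super.tearDown();
}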

Summary
There seems to be light at the end of the tunnel: some "official" documentation is starting to appear on Android.com, the SDK is (slowly) moving to be (marginally) more user-friendly, and writing (and running) unit tests is no longer as painful as it used to be in version 1.0.

However, one would have wished that it hadn't taken until release 2.2 of the SDK (and version 8 of the API) to get where we are now: for one thing, it's rather likely that Android will be the first (or main) programming platform that many kids take up when starting to explore computing and software development. Whilst programming for Android is fun, productive and gives an immediate sense of accomplishment, I am concerned that, by giving testing (and unit testing in particular) such a back seat, the wrong lesson may be learnt by young computer scientists: namely, that testing (and coding for testability) is something that can be done another day, when we get to the next release...

It isn't - writing unit tests is vital to write bug-free, solid and portable code; it also encourages the design of clean APIs: there's nothing like writing a few tests to figure out that one's just written a cumbersome API that needs fixing: and the sooner one finds out, the better!

Sunday, May 2, 2010

Setting up a shared repository for Mercurial

 

Mercurial -- Source Versioning Control


(this is also available as a Google Doc)

Use of Mercurial over SSH

Please review the section named Using the Secure Shell (ssh) protocol in Mercurial's "Definitive Guide" -- you may also find it useful to review this blog entry (but there are differences with my 'paranoid' settings...).

I have configured my SSH port away from the standard sshd port (22) and I'm using Dynamic DNS with my broadband connection (see http://dyndns.org for more info); further, I recommend only allowing SSH connections from a (limited) number of trusted IPs (or ranges) by editing /etc/hosts.allow. If you can't get this to work, figure out which IP address you are connecting from (http://www.formyip.com) and tweak your hosts.allow accordingly:

ssh -p 666 -l [user] yourdomain.dyndns.org

At this stage you should be prompted for your password; that is to be expected.
If you can't connect, make sure you can DNS-lookup the server (nslookup yourdomain.dyndns.org must return something meaningful -- this is achieved via inadyn, so if you get an IP address, but still can't connect, it is -marginally- possible that inadyn is down and the IP was changed by your ISP).

Creating a private/public key pair

On the local Linux[1] box do the following:

ssh-keygen -t rsa

[1] if you are using Windoze, you're too far down the evolution chain for me to waste any time educating you, but you could do worse than Google "puttygen key pairs" or some such thing.

this will generate a private/public key pair in your ~/.ssh directory, one of which should be named something like id_rsa.pub -- copy that one (not the id_rsa private key!) on the remote machine's ~/.ssh directory (or in /tmp, it doesn't really matter).

You must then add this public key to your ~/.ssh/authorized_keys file (if it does not exist, you will need to touch it; remember to chmod it to 600 -- only YOU must be able to read/write it):

touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
cat /tmp/id_rsa.pub >> ~/.ssh/authorized_keys

Log out from the remote machine, then re-issue the ssh command above: you should be logged in without the remote SSH server asking for a password.

Setting up your username

It's always good practice to ensure that your changes can easily be tracked back to you: as an essential part of this, setting up a meaningful username for HG is critical: you can use the HGUSER environment variable (export HGUSER="Marco Massenzio <m.massenzio@gmail.com>") or set it up in a more 'permanent' way by editing the Mercurial configuration file (~/.hgrc):

# This is a Mercurial configuration file.
[ui]
username = Marco Massenzio <m.massenzio@gmail.com>

Pulling a changelist from the remote repository

The directory where we place hg repositories is /usr/local/hgrepo; each subdirectory will be a project's repository -- a simple test repo is 'hello' (/usr/local/hgrepo/hello) and you should be able to clone it onto your machine by issuing the following (from a local shell -- do not ssh into the remote server):

hg clone ssh://user@yourdomain.dyndns.org:666//usr/local/hgrepo/hello

A couple of things worthy of note:

  1. note the // after 666 (//usr): this tells Mercurial to start at the root of the filesystem (/) instead of your home directory (~/)
  2. replace the username in user@ with your actual username -- do not send the password in cleartext, you should in fact not be asked for a password (if you are, then something went wrong with the private/public key pair setup above)
  3. you should now have a hello directory, containing a few files on your local machine; if something went wrong, you should see an error message.

Making changes to the files in the (local) repository

Follow the usual HG practices -- edit the file with your favourite editor, save it, and once you are happy with it, commit it to the (local) repository:

hg ci -m "Did something astounding"

Pushing changes to the (remote) repository

Once you are happy with the changes, and think it's time for the other developers to see them, push them back to the remote repository:

hg push -v ssh://user@yourdomain.dyndns.org:666//usr/local/hgrepo/hello

By design, doing so will not update the working copy of the remote repository -- see the book for why -- so you should run an hg update command in the remote directory: the simplest way I found is to SSH into the remote machine and then run an hg update on the repo:

ssh -p 666 -l user yourdomain.dyndns.org
cd /usr/local/hgrepo/hello
hg status
# this should be an empty line, no editing of files should happen on the remote machine
hg glog -l 6
# see the 'graphic log' extension section in the hg manual
# this will show a list of the last 6 CLs, the 'active' one marked with an @
hg update
# this will make 'head' the 'active' revision

if no conflicts arise then this should update head; if there are conflicts, you will have to run a merge command:
hg merge
# I discourage doing merges on the remote machine, however: merges are best done using a visual
# editor, such as tkdiff, or similar.

possibly resolving the merge conflicts as they arise.

Pulling changes from the (remote) repository

Before starting any work, you should pull the latest version of the repository from the remote server:

hg pull -v ssh://user@yourdomain.dyndns.org:666//usr/local/hgrepo/hello

By design, doing so will not update your working copy to the 'tip' (or head) of the local repository -- see the book for why -- so you should run an hg update command in the local directory:

hg up

if no conflicts arise then this should update head; if there are conflicts, you will have to run a merge command:

hg merge


Pulling / Pushing changes in Eclipse

Use the Mercurial Eclipse plugin [follow instructions on the site to install -- update site: http://www.vectrace.com/eclipse-update/] then use the right-click context sensitive Team menu with the Project folder selected, and choose accordingly.

The Repository URL to use is:
ssh://user@yourdomain.dyndns.org:666//usr/local/hgrepo/hello
 
but do not enter username/password in the dialog box, SSH will pick up the private key and proceed to conduct the handshake (if the pub/priv keys setup does not work for any reason, Eclipse will still be able to connect to the repository, but you will be asked for the password a couple of times, or possibly more).

Personally, I'm not using the Eclipse plugin (the initial delay in starting up at Eclipse launch is most irritating) and only use hg's command-line interface.



Putting some Google Goodness in your Code

Adding Protocol Buffers and GTest to your C++ project in Eclipse

(this post is also available as a Google Doc)

Use of Protocol Buffers

Download the package from the Google Code website (we are currently using version 2.2.0) and unpack the tar.gz file in a directory (say, /usr/share/protobuf_2.2.0) and then follow the steps outlined in the INSTALL.TXT file.

NOTE -- you must run the last step as root or installation will fail:

sudo make install

(optionally, run sudo make clean).

Once installed, you will have a bunch of files in your /usr/local/include and /usr/local/lib directories, these matter!

Using Eclipse, you can create a new C++ Project, but you then have to set up the include directories and add the libprotobuf.a library -- otherwise your code will not compile/link.
(This is not explained anywhere in the protobuf documentation.)

Add the include directory

Right-click on the Project's folder, then Properties > C/C++ Build > Settings: add /usr/local/include to the Directories for the C++ Compiler options:






Add the libraries directory

In the Directories for the C++ Linker options, add /usr/local/lib; however, this does not seem to work: if one then adds libprotobuf.a in the box above (Libraries), the linker will complain that it cannot find the file (the Libraries box feeds -l, which expects just the library name, without the lib prefix or extension). Not sure whether this is a bug or "intended behaviour".


Select instead "Miscellaneous" and add into the "Other objects" dialog the full path to libprotobuf.a: /usr/local/lib/libprotobuf.a



Accept (OK) the settings, build the project, profit!


Adding gUnit Tests


This is a very similar procedure to the above: download Google Test from the Google Code website, build and install it someplace on your disk, then add /usr/local/lib/libgtest.a and /usr/local/lib/libgtest_main.a to the Miscellaneous section in the C++ Linker settings.

NOTE -- to run the unit tests, your code must NOT have a main() function defined (simply rename it to something else).

Then you can add a prime_test.cc file in your project and run it simply by right-clicking the Project's folder and choosing Run As > Local C/C++ Application; libgtest_main will provide the main() to run the tests.

/*
 * p3_unittest.cc
 *
 * Unit test for Problem 3
 *
 * See p3.cc
 *
 * Created on: 21-Dec-2008
 * Author: Marco Massenzio (m.massenzio@gmail.com)
 */

#include <iostream>
#include <gtest/gtest.h>

#include <set>


#include "../common/euler.h"

using namespace euler;

TEST(PrimeTest, IsThree) {
  ASSERT_TRUE(isPrime(3));
}

TEST(PrimeTest, IsTen) {
  ASSERT_FALSE(isPrime(10));
}



where isPrime() is defined in euler.h/euler.cc as follows:

namespace euler {
// returns true if n is prime
bool isPrime(const long n);
}

Tuesday, April 20, 2010

JMock on Android

I am developing a very simple application for Android (more about this in a later post) and wanted to add some unit tests: these have always been notoriously difficult to add and use in the Android SDK, but recently the task has been made marginally less cumbersome by the addition of a 'parallel' Test project alongside the one under development in Eclipse.

The process is documented here, so I won't repeat it and I'll assume you have happily created a Test project alongside your Android project.

An excellent intro to testing on Android can also be found in Diego Torres Milano's presentation to DroidCon 2009, and in other entries in his blog.

However, what turned out to be not-so-straightforward was adding the ability to use jMock as a mocking framework: I kept getting NoClassDefFoundErrors on the Mockery class (and others) when launching the unit tests, despite having added them to the classpath.

It turns out that if you add the three required jars for jMock (jmock-2.5.1.jar, hamcrest-core-1.1.jar and hamcrest-library-1.1.jar) as a "User Library" in the Project's build path (the sensible thing to do, in Eclipse) they will not be added to the classpath: neither in the Project nor the Test Project (even if you mark that User Library to be 'exported' as suggested here).

The only way to make it all work, is to add the individual JARs to the Test Project's build path, all three of them.

However (and here's the twist), given that both hamcrest-core and hamcrest-library contain a LICENCE.txt file (and a MANIFEST), the APK builder will complain about duplicate files and refuse to import one (or both) of the JARs: the only solution (as suggested here) is to unpack the JARs into a single directory (one or both of the LICENCE and MANIFEST files will be overwritten -- not sure whether this causes a breach of the licensing terms?) and then re-pack them into a single hamcrest-all-1.1.jar file, and include that one in the build path.
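Once the combined JAR is on the test project's build path, a plain, interface-based jMock test runs as usual; here is a minimal sketch (the listener interface is invented purely for illustration):

import org.jmock.Expectations;
import org.jmock.Mockery;

import android.test.AndroidTestCase;

public class ReceiptListenerTest extends AndroidTestCase {

  // A made-up collaborator, just to show the Mockery API in action
  public interface OnReceiptSavedListener {
    void onSaved();
  }

  private final Mockery context = new Mockery();

  public void testListenerIsNotified() {
    final OnReceiptSavedListener listener = context.mock(OnReceiptSavedListener.class);
    context.checking(new Expectations() {{
      one(listener).onSaved();
    }});

    // In a real test this would be triggered by the class under test
    listener.onSaved();

    context.assertIsSatisfied();
  }
}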

Given that I wasted a couple of hours trying to figure out what was preventing Android from accepting jMock, I thought I'd post this quick one to help out others.

Sunday, January 24, 2010

Writing a GWT Module

The documentation available from http://code.google.com/gwt and the other blogs I've researched is pretty scant on detail about creating and re-using your own GWT module, so I've decided to draft a few guidelines here, highlighting the most likely pitfalls along the path.

The following assumes you already have a certain familiarity with GWT, the use of Eclipse and writing Java: if you are a complete newbie, I encourage you to go through the GWT Tutorials, before coming back here.

The problem statement for this entry is one that ought to be familiar to most folk who have been doing GWT development for some time: you realise that a few of the classes and widgets you have created would be of use in several projects, but are unhappy with "copy & paste programming," a most awful practice indeed.

This recently happened to me: I have developed a "grid-based view" UI element (essentially a wrapper around GWT's Grid widget, with enough functionality to elevate it from humble Widget to fully-fledged View status). Without wanting to go into too much detail (perhaps a subject for another blog post), usage of the View is based around a View interface:
package com.alertavert.gwt.client.ui.views;

public interface View extends AlertableOnActivationView {
  /**
   * Each view is expected to be able to render itself.
   * This method can be called at any time on the view, even if it's not
   * visible/enabled
   */
  public void render();

  /**
   * The main client area (Panel) where this view is being rendered
   *
   * @return a {@link Panel} that contains this view's {@link Widget}s
   */
  public Panel getMainPanel();

  /**
   * The main framework will call this method to set the main client area where the
   * view has to render itself.
   * There is no guarantee about when this method will be called, or even that it will
   * be called at all.
   * Always check for a main panel that is {@code null} and, preferably, use the
   * {@link #getMainPanel()} to get at it.
   *
   * @param panel the main client area for the view to render itself
   */
  public void setMainPanel(Panel panel);

  /**
   * A user-friendly name for the view, could be used also to uniquely identify it
   *
   * @return a user-friendly name for the view
   */
  public String getName();
}

and an abstract GridBasedView class, that implements the Template Pattern:

// All rights reserved Infinite Bandwidth ltd (c) 2009
package com.alertavert.gwt.client.ui.views;

/**
* Abstracts away from the concrete implementations the common details of a 'grid-based' view:
* essentially a UI element based primarily on a grid-spaced table.
*
* Other elements can be placed either before/after the grid is actually rendered (by using
* {@link #preRender()} and {@link #postRender()}) and the grid itself can be moved around the
* {@link #mainPanel} by manipulating the Panel itself.
*
* All elements of the grid are customisable (to a point) by passing the appropriate labels for
* headings and row titles ({@link #getHeadings()} and {@link #getRowTitles()} must be implemented
* by the concrete classes).
*
* @author Marco Massenzio (m.mmassenzio@gmail.com)
*/
public abstract class GridBasedView implements View, ToggleTextListener {

protected Grid grid;
Panel mainPanel;

/**
* Flag to indicate that individual rows can be deleted by user. The renderer will add a 'delete'
* icon and will handle row deletion
*/
private boolean canDeleteRow = false;

/**
* Flag to indicate that individual columns can be deleted by user. The renderer will add a
* 'delete' icon and will handle column deletion TODO(mmassenzio) implement functionality
*/
private boolean canDeleteColumns = false;

/*
* Flag to indicate whether the row titles can be edited by user
*/
private boolean canEditRowTitle = false;

/*
* Flag to indicate that individual rows can be added by user. The renderer will add a 'new row'
* icon and will handle row addition
*/
private boolean canAddRow = false;

// A bunch of Setters & Getters follows here
// ...


/**
* Constructs a View based on a Table displayed in the center of the main panel ({@link
* #getMainPanel())}).
* The grid will have (at least initially) (row, col) cells (the number or
* rows can be changed calling {@link #resizeRows(int))}, if you need to change the number of
* columns too, use the methods in {@link Grid}).
*
* @param rows the number of rows
* @param columns the number of columns
*/
public GridBasedView(int rows, int columns) {
grid = new Grid(rows, columns);
}

/**
* Renders this View -- follows the Template pattern calling a number of abstract methods that
* will be customised by the actual, concrete implementations.
*/
public void render() {
if ((!initialized) && (mainPanel != null)) {
renderFirst();
}
}

/**
* Sets the Table's header, given the titles {@code headings}. Titles are set for all the columns,
* starting from the second: in other words, the first column (number 0,
* containing the rows' titles) will not be set.
*
* The "titles' heading" will be set according to {@link #getRowTitles()}'s first returned
* element.
*
* +---------------------+-------+-------+-------+-------+
* | getRowTitles.get(0) | h[0] | h[1] | h[2] | h[3] |
* +---------------------+-------+-------+-------+-------+
* | getRowTitles.get(1) | ...
* +---------------------+-------+-------+-------+-------+
*
* @param headings an ordered List of headings (text only) for this grid
* @see GridBasedView#getRowTitles() getRowTitles
* @see GridBasedView#renderFirstCol(List) renderFirstCol
*/
protected void renderHeader(List headings) {
int col = 0;
if (headings != null) {
for (String heading : headings) {
if (++col < grid.getColumnCount())
grid.setText(0, col, heading);
else
break;
}
}
grid.getRowFormatter().setStylePrimaryName(0, Styles.GRIDBASED_HEADER);
}

protected void setHeadingsClickable() {
grid.getRowFormatter().addStyleName(0, Styles.CLICKABLE);
grid.getCellFormatter().addStyleName(0, 0, Styles.NOT_CLICKABLE);
}

protected void setRowsSelectable() {
for (int row = 1; row < grid.getRowCount(); ++row) {
grid.getCellFormatter().addStyleName(row, 0, Styles.CLICKABLE);
}
}

/**
* Renders the first column which is assumed to contain the titles for each row. The grid will be
* resized to match the number of titles passed in: if you want to display more rows than there
* are titles, or need to have intervening empty row titles, just pass {@code null} as that
* particular element of the List.
*
* The first 'title' passed in ({@code rowTitles.get(0)}) will be used as the 'heading' for the
* titles themselves.
*
* @param rowTitles titles for the rows, one for each row; the collection must be non-null itself
* however, the actual titles can and will be represented as empty strings ({@code  })
* @throws IllegalArgumentException if rowTitles is null
* @see GridBasedView#renderHeader(List) renderHeader
*/
protected void renderFirstCol(List rowTitles) {
// lots of widget-manipulation here....
}

/**
* Template pattern - renders the view for the first time (before the view gets
* {@link #initialized}) by calling a sequence of methods that are either {@code abstract} or
* {@code Override}n by the derived classes.
*
* Before actually rendering the grid, {@link preRender()} is called, and after the grid is
* rendered, {@link postRender()} is called, thus giving derived classes the opportunity to add
* other custom widgets to the View (and/or customise the {@link #grid}'s appearance itself).
*/
protected void renderFirst() {
mainPanel.clear();
grid.setStylePrimaryName(Styles.GRIDBASED);
preRender();
renderTable();
mainPanel.add(grid);
postRender();
initialized = true;
}

/**
* Renders the main table ({@link #grid}) that constitutes the central component of this View
*/
protected void renderTable() {
renderHeader(getHeadings());
renderFirstCol(getRowTitles());
fillCells();
}

protected void fillCells() {
String stylePrefix = Styles.GRIDBASED_CELL;
String styleOddRow = stylePrefix + "_odd";
String styleEvenRow = stylePrefix + "_even";

for (int row = 1; row < grid.getRowCount(); ++row) {
// assign a default CSS Style here, can be overridden in the concrete fillCell()
if ((row % 2) == 0)
grid.getRowFormatter().setStylePrimaryName(row, styleEvenRow);
else
grid.getRowFormatter().setStylePrimaryName(row, styleOddRow);
for (int col = 1; col < grid.getColumnCount(); ++col) {
fillCell(row, col);
}
}
}

// =============== Template Methods ============
protected abstract List getRowTitles();

protected abstract List getHeadings();

protected abstract void preRender();

protected abstract void postRender();

protected abstract void fillCell(int row, int column);

// ... a few other utility methods here to manipulate rows and cells
}

Whatever the merits of the above, let's focus on how we can turn it into a module, so that other projects will be able to re-use it.
The first step is, obviously, to "extract" all the relevant classes and sever any links to any project-specific dependency: that in itself is a good exercise, as it ensures that we have not made any undue assumption about the specific usage of our library module.

The end result is as shown in the 'snapshot' from the Package Explorer here on the left: the module resides in package com.alertavert.gwt and is called GridBased (the GridBasedDemo module is simply a very basic GWT app to demonstrate how to use the GridBasedView; more on that later).

GridBased.gwt.xml itself is very simple:
<module>
  <inherits name="com.google.gwt.user.User"/>
  <inherits name="com.google.gwt.user.theme.standard.Standard"/>

  <stylesheet src="gridbased.css"/>

  <!--
    There is no EntryPoint defined here, as this is inherited by
    the external apps that will provide their own entry point.
    To run the demo app, use GridBasedDemo.gwt.xml
  -->
</module>
So far, so good, and so simple.
However, this is where the interplay between the GWT plugin's inadequacy in handling modules and the fact that Eclipse really has no idea of the concept of turning Java into JavaScript makes matters a tad more complicated.

In fact, GWT's documentation simply indicates that you ought to package both source (.java) and binaries (.class) in the same JAR, and make it available to the GWT compiler (in other words, add it to the project's classpath).

If you do that, using the module above is trivial to the point of banality; this is a simple definition for a 'demo' project (totally separate from the one above) that uses my GridBased module (notice the "Other module inherits" entry)
<?xml version="1.0" encoding="UTF-8"?>
<module rename-to='demo_grid_based'>
<!-- Inherit the core Web Toolkit stuff. -->
<inherits name='com.google.gwt.user.User'/>

<!-- Inherit the default GWT style sheet. -->
<inherits name='com.google.gwt.user.theme.standard.Standard'/>

<!-- Other module inherits -->
<inherits name='com.alertavert.gwt.GridBased' />

<!-- Specify the app entry point class. -->
<entry-point class='com.alertavert.gwt.demo.client.Demo_grid_based'/>

<!-- Specify the paths for translatable code -->
<source path='client'/>

</module>
Assuming that you have GridBased.jar in your filesystem, all you have to do in Eclipse is to right-click on the Project's folder in the Package Navigator, select Add External Libraries, and then point to the newly-minted jar:





So, I hear you asking, how do you go about creating the JAR file in the first place?

One option, obviously, would be to use Eclipse's built-in jar builder (File > Export... > Jar file) and select the necessary and desired .class files, checking the "Export Java source files and resources" option.

However, I found that option did not work too well for me, and it was rather cumbersome to remove the unit test class files from the generated JAR.

So I resorted to this simple shell script to hand-craft my jar, adding the necessary .java and .class files, whilst removing all unit tests:

#!/bin/bash

BASEDIR=/home/marco/workspace-gwt-modules/grid_based
VERSION=0.1

cd $BASEDIR
rm -rf dist
mkdir dist
cp -R src/com/ dist/ && \
cp -R war/WEB-INF/classes/com/* dist/com/ && \
find ./dist|grep -e [a-zA-Z]*Test\.class|xargs rm && \
find ./dist|grep -e Testable.*\.class|xargs rm

cd dist && jar cvf gridbased_${VERSION}.jar com/

Even if you don't get all the nuances here, it's pretty clear that I'm gathering all the source (.java) files from src/ and all the binaries (.class) from war/WEB-INF/classes (the default place for any GWT application created using Google's plugin) into the dist/ folder, removing all test classes and utilities, and then mashing them all up using jar cvf ... into gridbased_0.1.jar.
(I use ClassNameTestable for those classes that are not quite mocks, but extend some 'genuine' class so that they can expose private/protected members for inspection and manipulation during testing).

And that's pretty much about it: if you then want to use a GridBasedView-derived class to display, for example, a grid where odd-numbered rows have your own custom style, you can do so:
// Copyright AlertAvert.com (c) 2010. All rights reserved.
// Created 6 Jan 2010, by M. Massenzio (m.massenzio@gmail.com)
package com.alertavert.gwt.demo.client;

import com.alertavert.gwt.client.SimpleGridBasedView;

public class AlternateRowsSimpleGrid extends SimpleGridBasedView {

  public AlternateRowsSimpleGrid(int rows, int columns) {
    super(rows, columns);
  }

  @Override
  public void fillCell(int row, int column) {
    super.fillCell(row, column);
    if ((row % 2 == 1) && (column == 1)) {
      getGrid().getRowFormatter().setStylePrimaryName(row, "odd_row");
    }
  }
}
where your derived class inherits from a class that was defined inside the GridBased module:
public class SimpleGridBasedView extends GridBasedView {
  // implements all the abstract methods defined in GridBasedView
}
and having the .class files inside the JAR allows Eclipse to do all its magic (including code-completion, snippets and javadoc).