Ken's Digital Den

Taking couch potatoing to the next level

I recently bought a TV.  It has some “smart TV” features, including the ability to play videos from a network drive.  That seemed fancy at the time and I was pumped to try it out.  However, once I got the thing unboxed and set up, I realized it needed to use the DLNA protocol.  Luckily, I had a Buffalo NAS that happened to support that, and I found that I could in fact watch videos from the share.  Yay!

I then found myself in the situation of wanting to watch videos from my Android tablet.  There are a lot of DLNA/UPnP media players in the Google Play store.  I tried several of them.  Most were overkill, full of wonky features I didn’t want or need.  I got to thinking and decided to write my own app that:

1. Crawls all DLNA/UPnP media shares on the LAN (UPnP supports discovery, so this is pretty easy; see the sketch after this list).
2. Makes guesses about what the title is based on the filename, and then downloads cover images from the internet.
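
Discovery (feature #1) boils down to a single UDP multicast request.  Here is a minimal SSDP M-SEARCH sketch in plain Java; this is not the actual code from my app, and the search target and timeout are just reasonable defaults:

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.SocketTimeoutException;
import java.nio.charset.StandardCharsets;

public class SsdpDiscovery {
    public static void main(String[] args) throws Exception {
        // SSDP M-SEARCH request asking UPnP media servers to identify themselves.
        String search = "M-SEARCH * HTTP/1.1\r\n"
                + "HOST: 239.255.255.250:1900\r\n"
                + "MAN: \"ssdp:discover\"\r\n"
                + "MX: 3\r\n"
                + "ST: urn:schemas-upnp-org:device:MediaServer:1\r\n"
                + "\r\n";
        byte[] payload = search.getBytes(StandardCharsets.US_ASCII);

        DatagramSocket socket = new DatagramSocket();
        socket.setSoTimeout(5000);
        socket.send(new DatagramPacket(payload, payload.length,
                InetAddress.getByName("239.255.255.250"), 1900));

        // Each responding device sends back an HTTP-like datagram whose
        // LOCATION header points at its device description document.
        byte[] buf = new byte[2048];
        try {
            while (true) {
                DatagramPacket response = new DatagramPacket(buf, buf.length);
                socket.receive(response);
                System.out.println(new String(response.getData(), 0,
                        response.getLength(), StandardCharsets.US_ASCII));
            }
        } catch (SocketTimeoutException e) {
            // No more responses; the discovery window has closed.
        } finally {
            socket.close();
        }
    }
}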

The reason for feature #2 is that I got too lazy to sort through lots of directories and read poorly formatted names.  I just wanted to use as little brain power as possible: see the thing on the screen, touch it, and watch it.  In fact, the UI that comes on my “smart TV” makes me navigate through several folders, all of them empty save the last one, just to get to my media.  Such a pain…I wonder if the UI designers of these TVs actually use the features.

In any case, if you’re just too lazy to sort through a bunch of directories and just want to play your movies, try out my app!

On the composition of small Java programs

Typically when writing Java code, we’re integrating into larger systems such as application servers or cloud engine containers, or we use dependency injection frameworks that essentially define a large part of how an application is structured.  However, there are times when this is overkill and starting fresh is best.  Inspired in part by the ease of use of the Dropwizard web framework, I have distilled a few elements that, taken together, make for a nice starting point for a modern Java-based program.  With very little code, an annotation-validated, YAML-based configuration class is neatly associated with a Guava Service.

It is useful for a program to have a configuration that lives outside of the code and can be changed.  There are various approaches, but the YAML format lends itself to readability and simplicity.  Jackson and Hibernate Validator can turn YAML files into Java configuration object instances quite nicely.  Legal values can be expressed via annotations, and the program code can work with idiomatic Java getters without dealing with the particulars of where and how the file is loaded.  This can cut out a surprising amount of mundane validation checking.

The relationship between the bootstrap class and the configuration is best explicitly defined; this helps to make sense of the context of the configuration.  Too much subclassing can lead to tightly coupled code, but in some cases it works well to cleanly define such relationships between specific classes.  Once we have a configuration, then comes the creation of the Java classes that make up the application logic.  Google’s Guava library has a ServiceManager/Service abstraction that provides a natural place to express this startup code.  The ServiceManager is created and passed a set of services to manage, and the Java program defines one or more Services that are the top-level application classes.  Assuming a generic type T that represents the configuration, the following abstract class will parse a YAML file as a Java POJO and pass it to an abstract method that creates the Guava services, which are then loaded by the ServiceManager:

import java.io.File;

import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.dataformat.yaml.YAMLFactory;
import com.google.common.util.concurrent.Service;
import com.google.common.util.concurrent.ServiceManager;

public abstract class AbstractBootstrap<T> {

    public void run(String[] args, String name, Class<T> configType) throws Exception {
        if (args.length != 1) {
            System.err.println("Usage: " + name + " <configuration.yml>");
            System.exit(1);
        }

        // Parse the YAML file into an instance of the configuration class.
        ObjectMapper mapper = new ObjectMapper(new YAMLFactory());
        T config = mapper.readValue(new File(args[0]), configType);

        final ServiceManager sm = new ServiceManager(getServices(config));

        sm.startAsync();

        // Stop the services on JVM shutdown, waiting for them to finish
        // before releasing any global resources.
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                sm.stopAsync();
                sm.awaitStopped();
                shutdown();
            }
        });
    }

    /**
     * Handle any global resource deallocation.  Called after all services
     * have stopped.
     */
    protected void shutdown() {
    }

    /**
     * Create the top-level application services from the parsed configuration.
     *
     * @param config the configuration instance parsed from the YAML file
     * @return the services for the ServiceManager to manage
     * @throws Exception if the services cannot be created
     */
    protected abstract Iterable<? extends Service> getServices(T config) throws Exception;
}

The shutdown method is added for the unfortunate case in which resources need to be explicitly managed at a global level.

Now in the actual application, this abstract class is subclassed and the services are created with the configuration. Given a simple configuration file:

mySetting: 1

And corresponding class:

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;

public final class Config {
    private final int mySetting;

    @JsonCreator
    public Config(@JsonProperty("mySetting") int mySetting) {
        this.mySetting = mySetting;
    }

    public int getMySetting() {
        return mySetting;
    }
}

We can then write the concrete class that will start the application. For my example I’m just printing messages to stdout, but more powerful service options await in the Guava library:

import java.util.Collections;

import com.google.common.util.concurrent.AbstractIdleService;
import com.google.common.util.concurrent.Service;

public class BootStrap extends AbstractBootstrap<Config> {

    public static void main(String[] args) throws Exception {
        BootStrap bs = new BootStrap();
        bs.run(args, "Example", Config.class);
    }

    @Override
    protected Iterable<? extends Service> getServices(Config config) throws Exception {
        return Collections.singleton(new MyService());
    }

    private static class MyService extends AbstractIdleService {

        @Override
        protected void startUp() throws Exception {
            System.out.println("Hello!");
        }

        @Override
        protected void shutDown() throws Exception {
            System.out.println("Goodbye!");
        }
    }
}

From here it’s simply a matter of choosing the Guava Service implementations that match the needs of your program.  By using this Guava facility, lifecycle state is handled for you, and shutdown hooks can be added cleanly at the service level.  Jackson and Hibernate Validator give you clean ways of expressing legal configuration values, and since Jackson does all deserialization via the constructor, our configuration class remains immutable.
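
The example above parses the configuration but never actually runs the validator.  Here is a minimal sketch of how that could be wired in; the helper class name is my own, and the @Min(0) constraint on mySetting is just an illustration, not part of the Config class above:

import java.util.Set;

import javax.validation.ConstraintViolation;
import javax.validation.Validation;
import javax.validation.Validator;

public final class ConfigValidation {

    // Could be called from AbstractBootstrap.run() right after
    // mapper.readValue(...), before the services are created.
    public static <T> void validate(T config) {
        Validator validator = Validation.buildDefaultValidatorFactory().getValidator();
        Set<ConstraintViolation<T>> violations = validator.validate(config);
        if (!violations.isEmpty()) {
            for (ConstraintViolation<T> violation : violations) {
                System.err.println(violation.getPropertyPath() + " " + violation.getMessage());
            }
            System.exit(1);
        }
    }
}

Constraining a value is then just a matter of adding an annotation such as @Min(0) above the mySetting declaration in Config.  Note that Hibernate Validator 5 may also need an EL implementation on the classpath for message interpolation.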

Using Guava Services, Jackson, and Hibernate Validator provides an easily readable yet powerful beginning to a Java program.

Here are the Maven dependencies I used to create my example:


<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-core</artifactId>
    <version>2.3.0</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.core</groupId>
    <artifactId>jackson-databind</artifactId>
    <version>2.3.0</version>
</dependency>
<dependency>
    <groupId>com.fasterxml.jackson.dataformat</groupId>
    <artifactId>jackson-dataformat-yaml</artifactId>
    <version>2.1.3</version>
</dependency>
<dependency>
    <groupId>org.hibernate</groupId>
    <artifactId>hibernate-validator</artifactId>
    <version>5.0.2.Final</version>
</dependency>
<dependency>
    <groupId>com.google.guava</groupId>
    <artifactId>guava</artifactId>
    <version>16.0</version>
</dependency>

Strategies for Multiple Android Products with Eclipse and Git

Problem: What’s the cleanest, easiest way of sharing code between multiple “products” within a given Android app?

Background: An Android application may have free and paid versions.  Both have some common code and other files, but each also has specific code, and some files must differ between the two versions, such as the Android manifest.  When bugs are fixed or features are added in one version of the product, it would be nice for those changes to be applied to the others where appropriate.  This is similar to the “one product, multiple versions” workflow, but different in that specific well-known files will always differ, and the merging between the concurrent branches (at least for me) is far more frequent.

Requirements:

  • Simple to migrate code from one product to another.
  • Easy to isolate code which should be shared and code that belongs to a specific product.
  • Relies on SCM.
  • Unlikely to let unwanted changes slip between versions.

The Simplest Thing That Could Work:

Create two isolated projects, one for the free version and one for the paid.  When code sharing opportunities present themselves, just copy the files or sections of files across.  Do not rely on SCM or build conventions.  This works, but as projects grow it becomes more difficult to keep things straight and not overwrite changes unintentionally.

My Current Approach:

I’m working on my first free/paid application and I’ve decided to represent the two products (free, paid) as branches in a git repository.  In this case, I’m using master for free.  Since the Android Market requires unique product IDs for each product, I use a Java package namespace that corresponds to the product ID for code that belongs to only one product, and a general Java package name for common code.  To migrate code from one version to another, I make careful commits touching only the shared Java namespace and use git cherry-pick (a sketch of the workflow follows below).  For now I’m using the command line rather than EGit.  I find it’s best to first get familiar with a particular git feature via the command-line tools; then any troubles encountered when using the GUI provided by EGit can be better understood.
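
A typical session might look like the following; the branch name paid, the file path, and the commit hash are all hypothetical:

# On master (the free product), commit only the shared-namespace change.
$ git add src/com/example/common/Player.java
$ git commit -m "Fix seek bug in shared player code"

# Bring just that commit over to the paid product.
$ git checkout paid
$ git cherry-pick a1b2c3d
$ git checkout master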

This is so far working out acceptably, but it’s not perfect.  It can be hard to remember to make commits that touch only the in-common code, as cherry-picking requires.  Also, depending on how modular the code is, there can be a lot of refactoring to keep the code bases in sync.

Another approach I may take in the future is to move all common code into separate Android library projects, and only have product-specific code live in the actual project associated with the product.  This would reduce the amount of cherry-picking required.  On the downside, it seems that the ADT Eclipse plugin has a proprietary way of managing code in library projects, and changes to library code do not automatically propagate to downstream projects until the entire workspace is rebuilt.  I’m hoping there is something I can do to fix this, because the ADT Eclipse library feature is not very useful with this restriction.

Google searches didn’t turn up much for me, but I’d love to know what techniques others are using to solve this problem…

Using PojoSR for unit testing OSGi bundles

Soon after the initial release of the Apache Felix httplite bundle, a sudden fear gripped me: that someone might actually use the bundle and find defects!  While writing httplite I’d written informal tests but had nothing proper in place.  For many of the tests I needed to execute code in an OSGi context and could not rely on plain old JUnit tests.  I could have gone down the road of mocking things but, being lazy, I wanted to do as little as possible.

I looked into using a full OSGi framework context via Pax Exam, but found that Java 1.4 (JUnit 3) is not supported.  Also, the documentation seems a bit overly complex for the “just get it working” user, such as me.  I had also written framework-launching code in the past but wanted to write and maintain as little as possible and be as Maven-like as possible.  Karl on the Felix development list suggested I check out PojoSR.

I opted to use PojoSR and it’s working quite nicely.  To add unit tests to an OSGi bundle with PojoSR, you need to:

  1. Add the PojoSR dependency to your project.
  2. Write some code that initializes the PojoSR registry and activates your bundle.

For item #1, add this dependency:

<dependency>
    <groupId>com.googlecode.pojosr</groupId>
    <artifactId>de.kalpatec.pojosr.framework</artifactId>
    <version>0.1.6</version>
    <type>bundle</type>
    <scope>test</scope>
</dependency>

Note: if you get an “org.osgi.vendor.framework property not set” error when executing tests, check whether you have Felix 4.1 or earlier as a dependency.  If so, remove it or update to 4.2 to resolve the error.

For #2, I opted to write an abstract JUnit test case that initializes PojoSR (a sketch follows below).  This probably will not work for adapting existing TestCases, but luckily the code necessary is small, so it should not be difficult to add to specific test cases as needed.
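
Here is a minimal sketch of such a base class, assuming JUnit 3 and PojoSR 0.1.6; it is not the exact class from httplite, and the class name is my own:

import java.util.HashMap;
import java.util.Map;

import junit.framework.TestCase;

import org.osgi.framework.BundleContext;

import de.kalpatec.pojosr.framework.PojoServiceRegistryFactoryImpl;
import de.kalpatec.pojosr.framework.launch.ClasspathScanner;
import de.kalpatec.pojosr.framework.launch.PojoServiceRegistry;
import de.kalpatec.pojosr.framework.launch.PojoServiceRegistryFactory;

public abstract class AbstractPojoSRTestCase extends TestCase {

    protected BundleContext context;

    protected void setUp() throws Exception {
        // Scan the test classpath for bundles (including the bundle under
        // test) and start them all in a lightweight service registry.
        Map config = new HashMap();
        config.put(PojoServiceRegistryFactory.BUNDLE_DESCRIPTORS,
                new ClasspathScanner().scanForBundles());

        PojoServiceRegistry registry =
                new PojoServiceRegistryFactoryImpl().newPojoServiceRegistry(config);
        context = registry.getBundleContext();
    }
}

Concrete test cases extend this class and use context to interact with the running bundle.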

After making these changes, Maven and Eclipse were able to execute my tests.  Writing new test cases is easy: I simply extend my abstract PojoSR test case and already know that my bundle is running, with access to the BundleContext if needed.

While I’m sure there are defects lurking in httplite, at least now I have some baseline tests and the ability to capture problems in tests going forward!

Upgrading from Oneiric to Lucid

Upon getting my new ThinkPad I was faced with the question of what to install.  I’d been pretty happy with 10.04, so I thought I’d continue with that.  I’d read here and there that 11.10 was better than 11.04, but there still seemed to be a lot of unhappy campers.  I myself had mindlessly upgraded to 11.04 only to realize that I should have stayed with 10.04.  So, given all this, I wrote a fresh copy of Lucid to a USB stick and got crankin’.

However, some time into installing I came to realize that the kernel version that ships with 10.04 is too old for the chipset in my x220.  I had no networking capability, which made it quite a struggle to look for updates.  I played around with “sideloading” kernel images from later releases, but after a few hours of toiling I thought to myself, “how bad could Oneiric be?”  I considered other distros but came to the conclusion that the time to learn and tune Unity would be less than the time to learn some whole other world of stuff.  And besides, I like Ubuntu.  I like the guys that put it together.  Over the years they’ve saved me a lot of time and hassle.  So, off to Oneiric it was.

…3 weeks pass…

Oneiric was almost good enough, and if I hadn’t had such a good experience with 10.04 I might have just been happy.  After tweaking and tuning, finding how-to guides on smoothing out the edges, and generally just trying to learn the Unity way, I had a pretty decent setup.  Today, however, a straw broke, and that straw consisted of:

  • The 200–400 ms it takes for the application (task?) switcher to render after pressing alt-tab.
  • Being presented with photos and music when trying to launch an application via Dash.
  • Being distracted by little jumping icons off to the left of the dock that I tried to hide.
  • Finding an install guide for 10.04 on the x220.

Dash is aesthetically very pleasing.  But for someone who uses the computer to get work done, it’s maybe a little too slick.  I do not want to see family photos when I’m trying to launch a tool.  I don’t want to see my music collection in a global system view.  I want to run my tools and get stuff done.  GNOME Do does a great job of this.  Dash showing me my media collection is just a rub, but the half second it takes to switch applications was, in the end, the deal breaker.

Half a work day later, I’m back on 10.04 with a backported Natty kernel that seems to be fine with my Sandy Bridge rig.  And now that I’m here, I feel right at home 🙂

Specify package-export version using bnd wrap

Create a property file specifying the package pattern and the version:

$ echo "Export-Package: *;version=3.1" > bnd.properties

Now pass the property file to bnd:

$ bnd wrap -properties bnd.properties myjar.jar
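
To confirm that the version made it into the manifest, you can inspect the wrapped jar; note that the output filename varies across bnd versions (older releases write myjar.bar next to the input):

$ unzip -p myjar.bar META-INF/MANIFEST.MF | grep Export-Package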

Java 7 (OpenJDK/IcedTea) for the BUG 2.0

NOTE: this post was originally at http://community.buglabs.net/kgilmer/posts/251-Java-7-OpenJDK-IcedTea-for-the-BUG-2-; however, that webserver seems only intermittently available, so I’m moving the post here.

For anyone interested in Java 7, I have built OpenJDK 7 (via IcedTea) for the BUG 2.0 (Angstrom, armv7, Zero VM).  The binaries are available here.  They are not provided as Angstrom opkgs, so the installation process is manual; this is because the build was done natively on a BUG rather than with the OpenEmbedded cross-compiling build system.  You’ll also need to install libcups2 via opkg if you want to replace the Java 6 JRE to run BUG applications.  Here are my installation steps, assuming the j2re-image tarball is decompressed in /home/root:


$ mkdir -p /usr/lib/jvm/java-7-openjdk/jre
$ mv ~/j2re-image/* /usr/lib/jvm/java-7-openjdk/jre
$ rm /usr/bin/java
$ ln -s /usr/lib/jvm/java-7-openjdk/jre/bin/java /usr/bin/java
$ opkg install libcups2

Now if you restart Felix (via Knapsack if you are running a 3.0 release), you’ll see java.runtime.version = 1.7.0_147-icedtea-b147 reported at http://bug/support.html.  This is provided on an experimental basis, and I would not be surprised if problems pop up!

An interesting benefit of IcedTea 7 (other than ARM compatibility via Zero) is the alternative VM implementations CACAO and JamVM.  The binary build I have provided unfortunately does not include these, but I’m going to try to build them.  Hopefully I can get Zero, Shark, CACAO, and JamVM all happily living inside the BUG!  🙂

Special thanks to xranby@irc.oftc.net#openjdk for his invaluable assistance!

Fresh from the oven: Knapsack for Apache Felix

Both for my various weekend projects and on the BUG, I’ve been unhappy with the bundled launcher that comes with Felix, and Equinox is no better.  Typically people use the Eclipse tooling to create and configure framework instances, but when setting up for production or working with embedded targets this is often not possible.  So I set out to make something that would let me easily get a framework up and running from the terminal.  Luckily, Felix has excellent embedding support and documentation, so it was pretty easy to get started.

What is it?  Essentially an alternative launcher for the Felix OSGi framework (3.2.2) that uses the OS’s native shell for framework management and creates all of the configuration files automatically, letting the user just download, run, and start adding their application bundles, with interactive feedback.

There is much more detail on the design and usage on the project page and in the readme.

However, here is the summary of how to go from Zero to OSGi in 30 seconds*:

$ mkdir foo && cd foo
$ wget https://leafcutter.ci.cloudbees.com/job/knapsack/lastSuccessfulBuild/artifact/knapsack.jar
$ java -jar knapsack.jar &

* Depending, of course, on your copy-paste skillz.