Google Glass, a Developer's Paradise: Thoughts and Experiences
My last article presented the consumer experience typical of Google Glass Explorers. Many of the lucky early adopters of this wearable computing technology are software developers, so there is a developer experience to share as well.
Google Glass presents a playground for software developers. It is a new paradigm that invites new and creative uses: a new kind of smart device with a different screen, different input mechanisms, and different challenges, such as battery life constrained by the weight limits that keep the device comfortable to wear.
At first, there was the Mirror API. Open to developers in any language that supports RESTful Internet transactions (Perl, Python, PHP, C#, Ruby, Java, JavaScript, Node.js; the list includes almost every language you can imagine), it provides access to Google Glass for all kinds of developers.
Using the Mirror API, a developer can interact with a Google Glass owner's timeline: insert new cards with formatted HTML, including images and tables. Cards can be made interactive, with menus and child cards, and can even register callbacks to respond to events, including location changes that are sent about once every ten minutes.
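As a sketch of what this can look like in practice, here is roughly how one might insert an HTML card with a built-in DELETE menu item using the Mirror API's Java client library. The transport, JSON factory, and already-authorized OAuth 2.0 credential are assumed to be set up elsewhere, and the class and method names here are illustrative:

import java.io.IOException;
import java.util.Collections;

import com.google.api.client.auth.oauth2.Credential;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.json.JsonFactory;
import com.google.api.services.mirror.Mirror;
import com.google.api.services.mirror.model.MenuItem;
import com.google.api.services.mirror.model.TimelineItem;

public class TimelineSketch {
    // Inserts one HTML card, with a built-in DELETE menu item, into the
    // timeline of the user represented by the authorized credential.
    public static TimelineItem insertCard(HttpTransport transport,
            JsonFactory jsonFactory, Credential credential) throws IOException {
        Mirror mirror = new Mirror.Builder(transport, jsonFactory, credential)
                .setApplicationName("Glassware Sketch")
                .build();

        TimelineItem card = new TimelineItem()
                .setHtml("<article><section>Hello from the timeline"
                        + "</section></article>")
                .setMenuItems(Collections.singletonList(
                        new MenuItem().setAction("DELETE")));

        return mirror.timeline().insert(card).execute();
    }
}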
If this feels as if we are going backward in time, like programming HyperCard or, for those who can remember it, Gopher, well, it kind of is. Yet at the same time we are moving into the future with a heads-up display: a computer that can be worn comfortably and has access to all the power of the Internet. Programming for Glass with the Mirror API is a dichotomy.
It is fun, and I have built Glassware that tracks Bitcoin price updates on fanciful timeline cards, as well as a location-sensitive finder for nearby medical facilities. Other developers have imagined interactive games of Battleship, recipe tools, and news alert systems.
But the real fun has just begun. In August 2013, Google started publicly promoting APK development for Glass using the bare bones of the GDK, which now has scant but actual documentation. This requires Java and familiarity with Android, although there have been reports of developers writing PhoneGap-style APKs that run on Glass.
As of August 2013, the GDK documentation includes four reference projects, which I will review here: Compass, Level, Stopwatch, and Waveform. Each uses a clean interface that belies a sophisticated interaction with the sensor and input capabilities of this extraordinary device.
Compass provides a 360-degree view of a compass and direction headings, almost like your head is inside a compass bubble. As your head swivels, the headings rotate by using the rotation vector sensor to update a custom view. If you tap the touchpad, the familiar Google Glass voice (American, female) reads aloud the heading, such as "337 degrees, North, Northwest".
The heart of the sample is the code that acquires the heading from the Glass sensors.
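A minimal sketch of that approach, assuming a standard Android SensorEventListener registered for the rotation vector sensor; updateHeading() is a hypothetical helper that redraws the custom view:

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

public class CompassSketch implements SensorEventListener {
    private final SensorManager mSensorManager;
    private final float[] mRotationMatrix = new float[16];
    private final float[] mOrientation = new float[3];

    public CompassSketch(SensorManager sensorManager) {
        mSensorManager = sensorManager;
    }

    // Call from onResume() to start receiving rotation vector updates.
    public void start() {
        mSensorManager.registerListener(this,
                mSensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR),
                SensorManager.SENSOR_DELAY_UI);
    }

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ROTATION_VECTOR) {
            // Convert the rotation vector into a compass heading in degrees.
            SensorManager.getRotationMatrixFromVector(mRotationMatrix, event.values);
            SensorManager.getOrientation(mRotationMatrix, mOrientation);
            float heading = (float) Math.toDegrees(mOrientation[0]);
            updateHeading((heading + 360f) % 360f);  // normalize to 0-360
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {}

    // Hypothetical helper that invalidates the custom compass view.
    private void updateHeading(float degrees) { /* redraw */ }
}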
Level is a program that shows a flat blue line onscreen that responds to the tilt of your head. No matter how quickly you tilt your head, the line remains flat by changing its angle to match.
The Level project has code examples for using the Gravity sensor on Google Glass.
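A sketch of what such code might look like, assuming a listener registered for Sensor.TYPE_GRAVITY; the sign convention and the updateLine() helper are illustrative:

// Inside the Level activity, registered for Sensor.TYPE_GRAVITY updates:
private final SensorEventListener mGravityListener = new SensorEventListener() {
    @Override
    public void onSensorChanged(SensorEvent event) {
        // The x and y components of gravity give the head's roll angle;
        // drawing the line at the opposite angle keeps it visually level.
        float roll = (float) Math.toDegrees(
                Math.atan2(event.values[0], event.values[1]));
        updateLine(-roll);  // hypothetical helper that redraws the line view
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) {}
};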
Stopwatch uses a chronometer and the touchpad to show a tally of elapsed time between presses, with a stark white-on-black design.
Like Compass, it is another example that uses the touchpad.
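A minimal sketch of tap handling that toggles the chronometer, assuming the activity holds an android.widget.Chronometer in mChronometer and a boolean mRunning flag (both names illustrative):

@Override
public boolean onKeyDown(int keyCode, KeyEvent event) {
    // A tap on the Glass touchpad arrives as an ENTER key press.
    if (keyCode == KeyEvent.KEYCODE_ENTER) {
        if (mRunning) {
            mChronometer.stop();
        } else {
            // Restart the tally so it shows time elapsed since this press.
            mChronometer.setBase(SystemClock.elapsedRealtime());
            mChronometer.start();
        }
        mRunning = !mRunning;
        return true;
    }
    return super.onKeyDown(keyCode, event);
}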
Finally, Waveform provides the most stunning visual display yet. In tests on my device, it crunches through more than 4,000 calculations roughly ten times every second to create three overlapping graphs of varying brightness, visually indicating the decibel level of input from the user and surroundings. These calculations are based on input from the Google Glass microphone, recorded with the MediaRecorder AudioSource.
A numeric reading of the decibel level appears in the lower right of the display.
The recording itself is done with Android's standard audio APIs.
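A minimal sketch of that recording loop, assuming an AudioRecord created from MediaRecorder.AudioSource.MIC and a decibel level derived from the root-mean-square of each buffer; the sample rate, the mRecording flag, and updateWaveform() are illustrative:

// Run on a background thread; mRecording and updateWaveform() are illustrative.
private void recordLoop() {
    int bufferSize = AudioRecord.getMinBufferSize(44100,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
    AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC,
            44100, AudioFormat.CHANNEL_IN_MONO,
            AudioFormat.ENCODING_PCM_16BIT, bufferSize);
    short[] buffer = new short[bufferSize];
    record.startRecording();
    while (mRecording) {
        int read = record.read(buffer, 0, buffer.length);
        // Root-mean-square of the samples, converted to decibels.
        double sumOfSquares = 0;
        for (int i = 0; i < read; i++) {
            sumOfSquares += (double) buffer[i] * buffer[i];
        }
        double rms = Math.sqrt(sumOfSquares / Math.max(read, 1));
        double decibels = 20 * Math.log10(Math.max(rms, 1));
        updateWaveform(buffer, read, decibels);  // redraw the three graphs
    }
    record.stop();
    record.release();
}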
These four projects inspired me to put one of my favorite retro arcade games onto Glass: Snake, a game in which you eat apples and grow with each meal until you hit a wall or yourself.
I modified a typical Android version of the game to work for Glass. In doing so, I learned some valuable lessons.
The game uses sounds and small sprites like any typical Android APK.
It conforms to the input options of the device, notably the touchpad, which passes taps and forward/backward swipes along to the APK. Taps register like an ENTER key press, and both forward and backward swipes look like a TAB key press to the Android application.
Handling this portion of the game control input therefore reduces to ordinary Android key handling.
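A minimal sketch of that mapping, with a hypothetical mSnakeView and its turnClockwise() and turnCounterclockwise() methods standing in for the real game logic:

// Inside the game activity; mSnakeView and its turn methods are illustrative.
@Override
public boolean onKeyDown(int keyCode, KeyEvent event) {
    switch (keyCode) {
        case KeyEvent.KEYCODE_ENTER:
            // A tap on the touchpad arrives as an ENTER key press.
            mSnakeView.turnClockwise();
            return true;
        case KeyEvent.KEYCODE_TAB:
            // Forward and backward swipes both arrive as TAB key presses.
            mSnakeView.turnCounterclockwise();
            return true;
        default:
            return super.onKeyDown(keyCode, event);
    }
}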
The game does not provide haptic feedback in the strict sense, but through the bone-conducting speaker it does provide audio feedback to the player. When making turns, when eating apples, and when a game ends or starts, the Google Glass voice speaks to the player.
The speech is orchestrated inside the event of a meal. For simplicity, the string constant "yummy" can be hard-coded rather than pulled from an R file, at the expense of internationalization ease.
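A minimal sketch of that orchestration with Android's TextToSpeech, where onAppleEaten() is a hypothetical hook called by the game loop:

// Inside the game activity; onAppleEaten() is a hypothetical hook
// called by the game loop when the snake reaches an apple.
private TextToSpeech mSpeech;

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    mSpeech = new TextToSpeech(this, new TextToSpeech.OnInitListener() {
        @Override
        public void onInit(int status) {
            // The engine is ready; requests made before this are dropped.
        }
    });
}

private void onAppleEaten() {
    // Hard-coded string instead of an R.string resource, as noted above.
    mSpeech.speak("yummy", TextToSpeech.QUEUE_FLUSH, null);
}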
I also borrowed from the Google examples to include a clean interface, hiding the normal top row of an Android device that might indicate a Bluetooth connection, WiFi, and the time. I did this through the Manifest.
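One standard way to do that is to give the game's activity a fullscreen theme in AndroidManifest.xml; in the sketch below, the activity name is a placeholder and the specific theme is an assumption, not necessarily the exact attribute from my Manifest:

<activity
    android:name=".SnakeActivity"
    android:theme="@android:style/Theme.NoTitleBar.Fullscreen">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>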
With Snake in my Glass, so to speak, I conducted a tournament with dozens of players competing for the top score (the eventual winner ate 26 apples) and learned more about Glass and how real people will adapt to this technology.
Although many first-time users had trouble registering swipes predictably, they improved with practice. The experience taught me that taps are more effective than swipes when you need only a single gesture.
Writing APKs for Google Glass is a great experience; there are many sensors with which to creatively capture user input. The usability for end users as of August 2013 is a little more suspect. Unless you "hack" your device (to use the parlance from Google I/O 2013) and add a sideloaded launcher, you have to launch each APK with an arcane multistep process.
First, you must ensure that the Glass screen is on, that a setting called Debug Mode is enabled, and that the device is connected to a computer via the Android Debug Bridge (ADB). Then you run a command like this to sideload the APK onto your device, using Waveform as an example:
adb install -r apk-waveform-sample.apk
Finally, you can launch the APK with this command:
adb shell am start -n com.google.glass.samples.waveform/.WaveformActivity
Clearly this process will improve in the future, but for now developers have all the tools they need to start working on native Android software for Google Glass. It is incredibly exciting to see these sample applications and inspired derivatives such as my Glass Snake game.
As more developers build native applications for Glass, an incredible piece of hardware, the possibilities are limitless.
Summary
This article summarizes the developer experience of Glass Explorer Mark Scheel with the Google Glass product in August 2013. It reviews four Google example projects and a game that Mark was able to port to Glass using the GDK.