The future of Google Glass: Visualizing beyond limitations

Dave Senior


For us at Playground, Google Glass is exciting. We are constantly trying to dissect the human-technology relationship, and Glass represents information technology at its most intimate.  The Explorer edition Glass and its Mirror API are an amazing techno-social experiment, but it is an experiment with limitations.  We wanted to visualize what Glass may do as the platform matures past today’s limits.

Heads Up. The Future is here

It is no surprise that Google Glass has ignited a lot of curiosity, commentary and criticism over the last few months.  People everywhere are fascinated by yet another form factor for consuming information.  Only a few years after the iPhone sparked the smartphone revolution, we are all glued to our pocket-sized screens.  Google began to view this behaviour as a problem: interrupting your relationship with the real world to check your phone is intrusive. Maybe it was finally time for a heads up display—a device with a certain type of inevitability, an indicator that we are finally catching up to that future we were promised in science fiction.

A few weeks ago, Google Glass Explorer edition headsets were shipped to press, developers and enthusiasts with a great deal of fanfare.  Reactions have been mixed: a few really like it, a few really don’t, but it is inspiring a lot of conversation about how we interact with information and how we interact with the world.

The overall impression is that this is an immature product and a yet-to-be-understood platform.

Glassware
“Glassware” is just a RESTful Google messaging platform. It is the only way to design software for Glass.

In the video we visualize a few use cases for a heads up display. Most of these examples would not be possible on the current Google Glass Mirror API platform.  The platform itself is a RESTful API meant for pushing small messages of data.  Video, pictures, audio and text linked from HTML/CSS-formatted messages are sent via a web service request to the Google Glass Mirror API.  The API is not for developing true native applications.
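For instance, pushing a card amounts to POSTing a small JSON body to the Mirror API's timeline endpoint. The Python sketch below only builds the request body (authentication and the actual HTTP call are omitted); the field names follow the Mirror API's timeline item format, while the card content is invented for illustration.

```python
import json

# Endpoint the body below would be POSTed to (with an OAuth token).
MIRROR_TIMELINE_URL = "https://www.googleapis.com/mirror/v1/timeline"

def build_timeline_card(html, speakable_text=None):
    """Build the JSON body for a single HTML-formatted timeline card."""
    card = {
        "html": html,
        "notification": {"level": "DEFAULT"},  # buzz the wearer on arrival
    }
    if speakable_text:
        card["speakableText"] = speakable_text  # read aloud on request
    return card

card = build_timeline_card(
    "<article><section><p>Hello from a web service!</p></section></article>"
)
body = json.dumps(card)
```

The platform's constraint is visible right in the payload: it is a small, self-contained card, not an application.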

Why so Limiting?

Google Glass API
There just isn’t much to the Google Glass Mirror API right now.

Today’s Google Glass is still an experiment.  This is Google’s first attempt at the form factor, hardware, software and platform. Battery life likely played a huge role in the guidelines for applications. Native processing is a huge drain on battery. Applications that process video feeds or photos of your environment would strain Glass’s processor.  Maintaining a two-way data connection for long periods of time will quickly burn through battery as well.  The current battery likely would not even be able to support LTE/4G and GPS radios.

That’s Okay, Context is everything

The current platform may be immature, but that doesn’t mean you can’t accomplish some cool things.  Being tethered to the phone is a good thing: the phone can be a whole new source of sensors for Google Glass.  Information beamed directly to the eye should be context-sensitive, and sensors power context.  The first truly innovative apps for the Mirror API will go beyond posting images from Glass to a web service: they will display information in context.  Google Now is Google’s context platform, and it will likely power some of Glass’s future context-sensitive services.  It uses information from a user’s GPS, email, calendar and search history to deliver relevant information.  Mobile apps that are aware of relevant interest and location information could trigger their web service to send a notification to Glass. This could be great for things like reminding someone that their favourite show is on or bringing up more information about a nearby landmark. Context will power the best HUD experiences.
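As a rough illustration, here is the kind of check a companion phone app might run before asking its web service to push a card to Glass. All names, coordinates and thresholds here are hypothetical; the point is that a notification fires only when location and interest line up.

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    # Equirectangular approximation: fine at city scale.
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return math.hypot(x, y) * 6_371_000

def should_notify(user_pos, interests, landmark, radius_m=200):
    """Fire only if the user is near the landmark AND cares about its topic."""
    near = distance_m(*user_pos, *landmark["pos"]) <= radius_m
    relevant = landmark["topic"] in interests
    return near and relevant

cn_tower = {"pos": (43.6426, -79.3871), "topic": "architecture"}
print(should_notify((43.6431, -79.3865), {"architecture"}, cn_tower))  # True
```

In practice this check would run on the phone (which has the GPS fix), with the web service doing the Mirror API push.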

Context also helps Google’s software predict your next action, stepping in to increase accuracy on voice commands or clumsy finger interactions.  Similar to the way Google predicts your search queries, Google will predict your actions.

Everything is possible today

All of our examples are actually possible right now.  Smartphones (batteries not included) have enough raw processing power to run this software today.  All that’s missing are batteries ten times more efficient and a robust native hardware API for Glass.   Well, it’s coming. Sooner than we think.

Workout: Path tracking

Workout and path tracking is something we can do today on our phones.  With a direct connection to the phone’s GPS, it would be trivial to stream location and position data to a Glass application for visualization.
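A sketch of what that app's aggregation over streamed fixes could look like. The haversine distance and the pace formula are standard; the data shape (a list of GPS fixes with timestamps) is an assumption.

```python
import math

EARTH_R = 6_371_000  # metres

def haversine_m(p1, p2):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1 = map(math.radians, p1)
    lat2, lon2 = map(math.radians, p2)
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_R * math.asin(math.sqrt(a))

def track_stats(fixes):
    """fixes: list of ((lat, lon), unix_seconds). Returns (metres, min/km)."""
    dist = sum(haversine_m(a[0], b[0]) for a, b in zip(fixes, fixes[1:]))
    elapsed = fixes[-1][1] - fixes[0][1]
    pace = (elapsed / 60) / (dist / 1000) if dist else float("inf")
    return dist, pace

# Roughly 1 km due north in 5 minutes:
fixes = [((43.6500, -79.3800), 0), ((43.6590, -79.3800), 300)]
dist, pace = track_stats(fixes)
```

The numbers themselves would then be rendered on the HUD; the heavy lifting (the GPS fix) stays on the phone.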

Comparison Shopping

In-store comparison shopping has always been an interesting retail use case. Today’s experiences are powered by photo recognition and technologies like Amazon’s A9 Visual Search.  Using the current Mirror API, you could send a picture from Glass to a web service, have it processed by SnapTell’s API, and generate a Glass timeline event from the response. In the future, when Wallet applications are more robust, ordering directly via voice command will be an obvious addition.
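A sketch of that pipeline. Since SnapTell's actual API shape isn't covered here, `recognize` is a stub standing in for the real service call, and the product data is invented; the card loosely follows the Mirror API's timeline item format.

```python
def recognize(photo_bytes):
    """Stand-in for an image-recognition call (e.g. SnapTell).

    A real implementation would POST the photo and parse the response;
    the return value here is fabricated for illustration.
    """
    return {"title": "The Pragmatic Programmer", "online_price": 29.99}

def comparison_card(photo_bytes, in_store_price):
    """Turn a shelf photo into a Mirror-API-style price comparison card."""
    product = recognize(photo_bytes)
    saving = in_store_price - product["online_price"]
    verdict = "Buy online" if saving > 0 else "Buy in store"
    return {
        "html": (f"<article><p>{product['title']}</p>"
                 f"<p>Online: ${product['online_price']:.2f} "
                 f"({verdict})</p></article>"),
        "notification": {"level": "DEFAULT"},
    }

card = comparison_card(b"...jpeg bytes...", in_store_price=39.99)
```

The whole round trip fits the push model: photo up, one small card back.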

Advertising

Glass can easily make use of markers like QR codes to trigger experiences. In this case, Glass is used to scan an interactive ad. This process is far less cumbersome than the current QR/image recognition applications on smartphones. Google doesn’t support advertising today, but in the future rich ad formats will be standardized and implemented in advertising products.

Wallet: Secure payment experiences

Tools like Google Wallet could become far more intimate and secure: pairing with Google Glass enables two-factor authentication.  These types of experiences are possible today.
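One way such pairing could work (a generic sketch, not Google Wallet's actual protocol): the paired Glass derives a short-lived confirmation code, TOTP-style per RFC 6238, that the wearer approves at checkout.

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step=30, digits=6):
    """Time-based one-time password (RFC 6238, HMAC-SHA1)."""
    counter = struct.pack(">Q", unix_time // step)      # 30-second window
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                             # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 10 ** digits:0{digits}d}"
```

Because the code changes every 30 seconds and never crosses the network in reusable form, a glanceable display on Glass is a natural second factor.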

Smart Shopping

Context-sensitive wallet applications could be used in places like the grocery store. Using GPS, Glass could understand that you were issuing commands within a specific store and allow you to interact directly with that store’s API. Using image recognition technology, items could be added to your cart and checkout could happen instantly.

Taxi: Location services

Using a native or mobile app, it would be easy to call a cab to your precise location. Taxi apps reveal a trickier set of issues with voice-powered interfaces, however. Who owns a command? “Glass, Uber me a cab to my current location” seems like a nice solution, but tools that allow us to choose default preferences may become necessary as the platform becomes more robust.
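Those default preferences could be modelled as a small command router: an explicitly named provider wins, otherwise the user's saved default handles the category. Everything here (the class, provider names, utterances) is hypothetical.

```python
class CommandRouter:
    """Route a spoken request to a provider, honouring user defaults."""

    def __init__(self, defaults=None):
        self.providers = {}              # category -> {name: handler}
        self.defaults = defaults or {}   # category -> default provider name

    def register(self, category, name, handler):
        self.providers.setdefault(category, {})[name] = handler

    def dispatch(self, category, utterance):
        candidates = self.providers.get(category, {})
        # An explicit mention ("Uber me a cab") beats the saved default.
        for name, handler in candidates.items():
            if name.lower() in utterance.lower():
                return handler(utterance)
        default = self.defaults.get(category)
        if default in candidates:
            return candidates[default](utterance)
        raise LookupError(f"no provider for {category!r}")

router = CommandRouter(defaults={"taxi": "CityCab"})
router.register("taxi", "Uber", lambda u: "Uber dispatched")
router.register("taxi", "CityCab", lambda u: "CityCab dispatched")
```

So "Glass, Uber me a cab" reaches Uber, while a generic "get me a cab" falls through to the user's chosen default.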

Sports Updates

Receiving updates about sports can happen right now using the Mirror API.  Today, a Glass-enabled application can be set to push score results to your Glass timeline.  A native app could be used to add a game subscription.  Given the robustness of Google Now’s interest cards, I imagine a Google Now implementation of this feature is coming soon from Google.

Guitar Training

Using native audio processing, Glass could be an interesting music education tool. In the App Store there are many educational tools that use sound wave analysis.
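To make the idea concrete, here is the simplest possible pitch estimate: counting rising zero crossings of a clean signal. A real trainer would use sturdier methods (autocorrelation or an FFT); this sketch just shows the shape of the analysis.

```python
import math

def estimate_freq(samples, sample_rate):
    """Estimate the fundamental of a clean tone via rising zero crossings."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    duration = len(samples) / sample_rate
    return crossings / duration  # rising crossings per second ~ Hz

# One second of a synthesized A4 (440 Hz) string tone:
rate = 44_100
a4 = [math.sin(2 * math.pi * 440 * n / rate) for n in range(rate)]
print(round(estimate_freq(a4, rate)))  # close to 440
```

Compare the estimate against the target note's frequency and the HUD can show "sharp" or "flat" in real time.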

Emergency Services

The technology to support two-way video streaming to emergency services is already here today.  Using modern web protocols like WebSockets, emergency responders can stream real-time information to people at the scene.  The barriers to implementing these technologies are usually bureaucratic. Public agencies cannot be bullish risk takers, but adding a software platform to emergency services may be an initiative undertaken by smart governments in the near future.

Presentation Aids

Notes in presentations are considered clumsy by some.  Interactions between multiple devices and screens can be tricky for even the most experienced presenter.  With very simple software, Glass could be synced with your presentation software to show relevant information.

Gaming: Device Awareness and Syncing

The heads-up display is a concept manifested most often in video games.  Games use HUDs to display information of all types.  Why not put that in an actual heads up display to make the gaming experience that much more immersive?  Video games powered by web services, and consoles with robust Bluetooth and Wi-Fi connectivity, can easily stream information wirelessly to a second display.

TV Alerts: Context-aware notifications

Contextually aware alerts seem like the sweet spot for interesting Mirror API applications today.  In the gaming example, the player is interrupted by a notification that Game of Thrones is about to start.  Synced to his console and the TV of the future, he issues a command to switch to TV so he can watch his favourite show.  Something like this could be done today: a web service could send an event to Glass at a set time, and if that web service could interact with a smart cable tuner, it could set it to the appropriate channel.
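The scheduling half of that could be as simple as a periodic check against a programme guide. The guide format and the tuner command shape below are hypothetical.

```python
from datetime import datetime, timedelta

def due_alerts(guide, now, lead=timedelta(minutes=5)):
    """Return a Glass card plus tuner command for shows starting soon."""
    alerts = []
    for show in guide:
        delta = show["starts"] - now
        if timedelta(0) <= delta <= lead:
            alerts.append({
                "html": f"<article><p>{show['title']} starts soon</p></article>",
                "tuner_command": {"set_channel": show["channel"]},
            })
    return alerts

guide = [{"title": "Game of Thrones",
          "starts": datetime(2013, 6, 2, 21, 0),
          "channel": 702}]
alerts = due_alerts(guide, now=datetime(2013, 6, 2, 20, 57))
```

The web service pushes the card via the Mirror API; the voice command "switch to TV" just triggers the already-prepared `tuner_command`.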

Will it be a success?

Although many are predicting an early demise, I think the idea of a heads-up display is here to stay. Is the current manifestation perfect? Absolutely not. But this is the first serious attempt at making this concept a reality. It will take a great open software platform, better hardware, beautiful design and tremendous real-world value for it to outweigh our resistance to change.

Imagine if we had Glass

We are extremely excited about Glass.  It is truly a courageous undertaking and we are fortunate that companies like Google are able and willing to experiment like this.  Such experimentation is already paying dividends: software is getting better, services are getting more robust and innovation is getting faster.  As far as Playground is concerned, if we had Glass we would be thrilled to join this experimentation. We already aim to design software that is more contextual and intimate and here, we think, is the ideal platform.