I got to try on Google Glass the other day. Sitting in my friend's kitchen, after a lot of fiddling we were (just) able to get it working, and I found myself with a rosy display sitting slightly off-centre of my field of vision.
It didn't work too well. It got hot quite quickly; I was never quite able to convince it to take a photo properly; it kept trying to email his twin brother. That said, it was a developer kit, pre-sale and all that, so it can't be expected to work flawlessly just yet. No, this isn't a finished model; it's a demonstration of ability more than anything else.
This is one of the boldest steps towards widely available reality augmentation yet taken. This is Google showing us what it can do right now, so it can hint at what it wants to do later.
It's one thing to carry a phone around in your pocket, but to have an interactive display that is visible only to you, subjectively, is an exciting (and frightening) move. The biggest problem the Glass faces, other than the obvious hardware and software challenges any new tech has to confront, will be a mixture of aesthetics and social accessibility. At the moment, I fear it looks a little too…Star Trek [link]. What needs to happen is for the Glass to become less obvious, less intrusive. There will always be people who revel in obviously wearing their new toy, but widespread Augmented Reality will only arrive when the hardware that facilitates it can blend properly with existing aesthetic and social conventions.
In a word: Google Glass needs to look like a pair of glasses.
This is the direction technology is heading: it will become less and less obvious, more and more commonplace, and harder to distinguish from its user. Makes you wonder where all this is going, doesn't it?