Seeing The World Through (Google) Glass

So last week I had the great opportunity to test-drive the truly futuristic device that is Google Glass. Similar space-age accessories have appeared in countless science fiction films and novels, but let me tell you, Google Glass is very real, very functional, and may be coming very soon.

The Basics

Let’s get this out of the way: right now, Google Glass is in “open beta,” meaning it’s essentially being marketed to developers. Technically, anyone can purchase one here, but at $1,500 I can’t say it’s worth it. Don’t be alarmed, though: the price will most likely drop to around $400-$500 once Glass officially launches for consumers.

Glass is basically a small computer in the shape of eyeglass frames. It can be worn with or without lenses (frames can be customized; they slip right on and off). It doesn’t have the full capabilities of your smartphone, however: it can’t call or text people unless it’s tethered to your mobile device via Bluetooth.

So you’ll probably still need to carry your phone around. More importantly, Glass only comes with Wi-Fi at the moment; it can’t receive a cellular signal (so no 3G or LTE). This means Glass uses your smartphone’s data plan (via Bluetooth) whenever it needs to ping the web. Glass and your smartphone work together.

It Feels Magical

The first time you put Glass on, you will feel like a cyborg. The best way I can describe it is that it feels like a “natural” extension of your body. You don’t have to fish around in your purse or pocket to access it, as you would your phone; all you have to do is look slightly up and to the right. To navigate, you use the touchpad on the right side of the frame. Currently, three gestures are supported: swiping down, swiping forward, and tapping, which respectively close, scroll through, and select what’s on the screen.
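For the developers out there: Glass apps are Android apps, and as far as I can tell from the Glass Development Kit (GDK) docs, those touchpad gestures are exposed through a GestureDetector class. Here’s a minimal sketch of what handling them might look like; I’m reconstructing this from the docs, not from code I ran on the demo unit.

```java
// Minimal sketch of handling Glass's touchpad gestures, assuming the
// Glass Development Kit (GDK) and its GestureDetector class -- this is
// my reconstruction from the GDK docs, not code from the demo unit.
import android.app.Activity;
import android.os.Bundle;
import android.view.MotionEvent;

import com.google.android.glass.touchpad.Gesture;
import com.google.android.glass.touchpad.GestureDetector;

public class GlassDemoActivity extends Activity {
    private GestureDetector detector;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        detector = new GestureDetector(this)
                .setBaseListener(new GestureDetector.BaseListener() {
                    @Override
                    public boolean onGesture(Gesture gesture) {
                        switch (gesture) {
                            case TAP:         // select the current item
                                return true;
                            case SWIPE_RIGHT: // scroll forward
                                return true;
                            case SWIPE_DOWN:  // close / go back
                                return true;
                            default:
                                return false;
                        }
                    }
                });
    }

    // Glass delivers touchpad input through onGenericMotionEvent,
    // so forward those events to the detector.
    @Override
    public boolean onGenericMotionEvent(MotionEvent event) {
        return detector.onMotionEvent(event);
    }
}
```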

The screen is small enough that you actually don’t notice it at all if you’re looking straight ahead. It almost entirely disappears from your sight, unless you look slightly up. Although it definitely takes some getting used to, it does not feel intrusive at all.

On the main screen, you can use the voice command “Okay Glass” to bring up a list of commands, which you can then dictate to the built-in microphone. Commands include taking a picture, recording a video, getting directions, and performing a Google search. I had limited time with the device, but I could already see the potential in the near-instant speed at which I could do any of these things.

I am highly interested in the photo and video capabilities. Currently, I have to physically whip my phone out of my pocket, start the camera app, and tap a button to take a picture. And while recording a video, the phone’s screen is basically covering my view. Having Glass mounted on your face not only frees up your hands but also frees up your line of sight. It doesn’t seem like a big deal, but it’s actually phenomenal when you first experience it. Since the invention of photography, humans have captured moments by holding a rather large camera up to their face and snapping a photo. With Glass, it becomes second nature: you simply say the command.
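If you’re curious how an app would hook into that camera, my understanding from the GDK docs is that a Glass app can fire the same implicit image-capture intent as any Android app. A rough sketch, where the request code is just a placeholder I made up:

```java
// Rough sketch: launching the built-in camera from a Glass (Android)
// app via the standard image-capture intent. TAKE_PICTURE_REQUEST is
// an arbitrary request code chosen for illustration.
import android.app.Activity;
import android.content.Intent;
import android.provider.MediaStore;

public class CameraHelper {
    private static final int TAKE_PICTURE_REQUEST = 1;

    public static void takePicture(Activity activity) {
        Intent intent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        // The result (the captured photo) is delivered back to the
        // activity's onActivityResult callback.
        activity.startActivityForResult(intent, TAKE_PICTURE_REQUEST);
    }
}
```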

The idea of getting instant directions is also helpful. I was in Washington, D.C. this past weekend and had to look up walking directions to museums and monuments on my smartphone. Instead of looking down at my phone, I could’ve been looking up at the sidewalk full of people and the street signs that could help direct me, all while getting directions spoken into my ear by Google Glass.


Color options for Google Glass. Also available in gray (not in photograph).

Put My Money Where My Mouth Is

So it seems that all I have for Glass is praise. Why didn’t I walk out with a fresh pair when I visited Google’s Chelsea Market location last week? Mostly because of the price. I can’t stomach spending $1,500 on a piece of technology that I merely want. That’s the rub: no one actually needs this thing. It’s purely for convenience, akin to a Bluetooth headset. There is huge potential for Glass in professional settings; think of people who need instant information, like surgeons, soldiers, and firefighters. They could really use such a tool. But for the layperson, it’s just a convenience.

Will I get one at $400? You betcha, but that’s just because I’m a technophile.

The Biggest Ethical Question About Self-Driving Cars

I’ve done a fair amount of reading about self-driving cars, the potential they have to change our world, and all the good that can come from them. But to this day, this article by Patrick Lin is the one I keep thinking about. He writes about self-driving cars from an ethicist’s perspective, and it really got me thinking. Here’s a brief summary of the scenario he portrays:

A self-driving car is racing down a crowded highway (adhering to the speed limit, of course) when, all of a sudden, an accident unfolds in front of it. Immediately ahead of the car are a tiny smart car and a big pickup truck. The car’s algorithm runs through thousands, even millions, of possible scenarios in the milliseconds it has to react. It concludes that it has only two choices: hit the smart car or hit the pickup truck.

To entertain this possibility, you must accept that the car’s computation really did lead to only these two choices. In reality, there is probably very little chance of this occurring. Nonetheless, the ethical dilemma is evident.

One rationale for choosing to hit the pickup truck is that the truck’s massive size makes it far better equipped to take a hit than the frail body of the smart car, minimizing harm for all drivers involved. But in this scenario, the self-driving vehicle is, in effect, targeting large vehicles. Is it fair to truck drivers that they essentially become the targets of self-driving vehicles? And what of the computer scientists who programmed the vehicle? Can they be held liable for programming it to target large trucks?
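To make concrete how a reasonable-sounding rule produces that targeting behavior, here’s a toy sketch of my own (not from Lin’s article, and nothing like a real autonomous-driving system; every number is made up): score each option by expected harm, where heavier targets absorb a hit better, and pick the minimum.

```java
// Toy illustration only: a harm-minimizing chooser that ends up
// systematically picking the larger vehicle. All numbers are made up.
import java.util.Comparator;
import java.util.List;

class CollisionChoice {
    final String target;
    final double targetMassKg; // heavier vehicles absorb impacts better
    final int occupants;

    CollisionChoice(String target, double targetMassKg, int occupants) {
        this.target = target;
        this.targetMassKg = targetMassKg;
        this.occupants = occupants;
    }

    // Crude "expected harm" score: more occupants means more people at
    // risk, and a lighter target means its occupants take more of the hit.
    double expectedHarm() {
        return occupants * (1000.0 / targetMassKg);
    }
}

public class DilemmaSketch {
    public static void main(String[] args) {
        List<CollisionChoice> choices = List.of(
                new CollisionChoice("smart car", 750.0, 1),
                new CollisionChoice("pickup truck", 2500.0, 1));

        // Minimizing expected harm always steers toward the heavier
        // vehicle -- exactly the "targeting" problem described above.
        CollisionChoice pick = choices.stream()
                .min(Comparator.comparingDouble(CollisionChoice::expectedHarm))
                .orElseThrow();

        System.out.println("Algorithm chooses to hit the " + pick.target);
    }
}
```

With these made-up numbers, the smart car scores 1 × (1000 / 750) ≈ 1.33 and the truck scores 1 × (1000 / 2500) = 0.4, so the “safest” choice is always the truck. The bias isn’t malice in the code; it falls straight out of the objective function.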

There is, of course, no right or wrong answer; we are in the gray area of autonomous machine ethics. I am definitely not qualified to answer the questions posed above, but I do think about them every now and then. Regardless of what people may think, I believe we won’t come to a conclusion until a similar situation actually unfolds and the courts are forced to set a precedent for this scenario.

There are surely hundreds of differing opinions out there. If you have any thoughts or ideas on how this might play out, feel free to leave a comment.

iOS 7 Wish List

We all know it’s coming. Every year Apple refreshes its mobile software, iOS, with a big update that tends to include many “revolutionary” features. Rumor has it that iOS 7 will be announced sometime in June alongside the iPhone 5s and perhaps the “cheaper” iPhone. There’s been a lot of talk about iOS 7, primarily because of CEO Tim Cook’s executive shakeup last year that put Jony Ive in charge of iOS design. If you didn’t know, Jony Ive is THE hardware designer at Apple; he designed everything from the iPod to the MacBook to the iPhone.

While I’m confident that any design changes in iOS 7 will be nothing short of spectacular (see flat design), that’s not the focus of this post. Instead, I’m more concerned with the practical changes. Last year, iOS 6 failed to deliver any genuinely useful new features. Here are the top five improvements and features I’d like to see in iOS 7:

1. Easy-access toggles for common features

The iPhone, as a full-featured smartphone, has many connectivity features that may need to be switched on or off depending on your needs. Unfortunately, there’s currently no easy way to do this. If I want to turn on Bluetooth, I have to go into the Settings app and flip it on from there. Maybe I want my phone to use LTE instead of my school’s terrible Wi-Fi; again, I’d have to dig into Settings. The ability to quickly toggle these popular features (along with Airplane Mode and Location Services) would be a welcome addition. With the iPhone 5, Apple increased the amount of screen space it has to play with, so I think it’s very likely we’ll see toggle switches added to Notification Center in iOS 7.

2. A “close all” button for background apps

I don’t know about you, but I tend to close my apps entirely rather than leave them running in the background. I don’t understand why Apple has never added a “close all” button to the app switcher. I would like to see one in iOS 7, and I think there’s a good chance we will.

3. Live app icons

You may have noticed that the default iOS Calendar app icon shows the actual date. However, it’s the only Apple app that does this. Why not extend it to other apps? Why does my Clock app always show 10:15, and why is the weather always 73 degrees? I understand Apple has some sort of vendetta against widgets on the iOS home screen, but live icons would be a great way to incorporate real-time information into the current setup.

4. Change default apps

I know it’s in Apple’s nature to be a control freak about the OS environment, but I think it’s time to let users choose their own default apps. No, I don’t want Safari to open every time I tap a URL; I prefer the Google Chrome app. And I’d like to use Sunrise as my default calendar app, not the one provided. APPLE, IF YOU’RE LISTENING, DO THIS!

5. Smarter, faster Siri

To be honest, I barely use Siri, but that’s probably because it sucks. If Apple improved it, I would definitely use it more often. Every now and then I use Siri to text people, but I hate doing so because it can’t distinguish between two separate sentences. That’s an understandable problem, but sometimes Siri doesn’t work at all: it just freezes in the middle of a request, that purple beam spinning itself into oblivion. Needless to say, get on this, Apple.