Lately, a lot of folks have been getting into the “AR excitement,” especially since the iPhone SDK became available (suddenly all kinds of folks _have_ to have an AR demo on the iPhone, even though the camera sucks and you can’t legally distribute an app that uses video, because the SDK doesn’t support it and it’s not “legal” to reverse engineer unsupported APIs). “Sekai Camera” has gotten a ton of press, for example, as did “Enkin” before it (a mockup of an Android app running on a Mac, before any Android phone had even been released). Various companies have “point and know” kinds of technology, where the pitch is “using GPS and orientation information, combined with our vast, wonderful backend database, you can point your phone at things and learn what they are.”
The problem, of course, is that these are really hard problems, and all of these systems only kinda-sorta work, even in their restricted demo modes. Can I really point at that doggy in the window (as the Google folks suggest you’ll be able to someday)? Certainly not now, and most likely not any time soon. Could I point at the shop? Perhaps. At the items in the display case? Not likely.
The issue is that most of these so-called AR applications are more alluring than real. One huge problem is that the amount of information needed to deliver on the hype is mind-boggling; it’s a scale of information that will never be available in a closed system, yet a closed system is just what most of these demos are pushing.
And, like the VR hype before this, and the AI hype before that, the worry is that (since none of these systems will do what they purport to do) the overhype will kill the potential industry and possible market. There are companies tackling more modest problems, but they don’t get the PR and can’t create web memes because they aren’t as flashy. That’s a shame.
Because I’d hate to see AR creep back into the lab with its tail between its legs. None of us who’ve been working on AR for decades want it to be the next “Big AI”.