
While popping into Ori's blog, I saw this post: Unveiling Tonchidot: A Cool Parallel World - on the iPhone

Other coverage seems to indicate that it's not real, and given the lack of detail (and the state of the art in mobile AR) it's hard to imagine it can do what the video seems to imply it does. Of course, if it's just grabbing information based on location, and then using interesting layout algorithms, it could really work. For example, given that the iPhone's location could be pretty accurate, you could do a simple heuristic-based search to associate a "geo-tagged" piece of data (e.g., put this note at this intersection) with a part of the image stream (e.g., put it over this bit of the view of the world, like a sign). It would likely be very fragile, but given that the labels seem to reposition a lot in the video, perhaps that's what it's doing.
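To make that concrete, here's a minimal sketch of the kind of location-plus-heading heuristic I have in mind. To be clear, this is my own illustration, not anything Tonchidot has described: the `place_labels` and `bearing_deg` functions, the field-of-view value, and the sample coordinates are all assumptions. It just maps a note's compass bearing (relative to where the phone is pointing) to a horizontal screen position, with no computer vision at all.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def place_labels(device_lat, device_lon, heading_deg, notes,
                 fov_deg=60.0, screen_width=320):
    """Place each geo-tagged note (lat, lon, text) that falls inside the
    camera's horizontal field of view; purely location/heading based."""
    placed = []
    for lat, lon, text in notes:
        # Bearing to the note, relative to where the phone is pointing, wrapped to [-180, 180).
        rel = (bearing_deg(device_lat, device_lon, lat, lon) - heading_deg + 540.0) % 360.0 - 180.0
        if abs(rel) <= fov_deg / 2.0:
            # Linear mapping of relative bearing to a pixel column on screen.
            x_px = int((rel / fov_deg + 0.5) * screen_width)
            placed.append((text, x_px))
    return placed

# Example: a note pinned just north of the device, camera facing north,
# lands in the middle of the screen.
print(place_labels(35.6595, 139.7005, 0.0,
                   [(35.6605, 139.7005, "sale at this shop")]))
```

With this kind of scheme, a few meters of GPS drift or ten degrees of compass error would make the labels swim around on screen, which may be exactly the kind of jitter the video shows.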

Of course, like many of these demos (think about that one on Android, called Enkin), it's amazing how they seem to be "inventing" the same idea over and over.

And not addressing any of the "real" problems that will come up. Consider what happens when thousands of people leave hundreds of notes in the mall the video appears to be shot in. How do you filter that? How do people author content? Deal with privacy, security and permissions? Of course, as we all know, we must take baby steps, but I think these questions are fascinating to consider. Especially when the commentators refer to this as evoking something like Vinge's "Rainbows End," where these concerns are central.
