TF2 / Oculus Rift

I was wondering how VR mode is handled in TF2. Normally, the mouse controls both the cursor and the view, because the cursor sits in the middle of the screen. So I was wondering whether, in VR mode, the view and the cursor are (at least partially) independent.

I found this entry on the Oculus Rift blog, which is not very interesting except for the fact that you earn a free hat if you pre-ordered the headset.

I also found this entry, which is far more interesting. It includes a video in which we can see the gameplay: the cursor can now be moved without moving the scene shown on the screen.
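A minimal sketch of how such a decoupled cursor might work. This is my own guess at the mechanism, not Valve's actual implementation: the head drives the view, the mouse drives the cursor, and the view is only dragged along when the cursor pushes past a deadzone around the center (the class name, the deadzone behavior, and the 1-D yaw-only simplification are all assumptions).

```python
import math

class DecoupledAim:
    """Hypothetical sketch: HMD yaw sets the view, the mouse sets the
    cursor, and the two are coupled only through a deadzone.
    Simplified to yaw (degrees) only."""

    def __init__(self, deadzone_deg=20.0):
        self.view_yaw = 0.0    # driven by head tracking
        self.cursor_yaw = 0.0  # driven by the mouse, in world terms
        self.deadzone = deadzone_deg

    def on_head_move(self, yaw_deg):
        # Head movement rotates the view; the cursor stays where it was.
        self.view_yaw = yaw_deg

    def on_mouse_move(self, delta_deg):
        # Mouse movement moves only the cursor -- unless it leaves the
        # deadzone, in which case it drags the view along with it.
        self.cursor_yaw += delta_deg
        offset = self.cursor_yaw - self.view_yaw
        if abs(offset) > self.deadzone:
            self.view_yaw = self.cursor_yaw - math.copysign(self.deadzone, offset)

    def cursor_offset(self):
        """Cursor position relative to the center of the view."""
        return self.cursor_yaw - self.view_yaw
```

With this scheme, small aiming motions never disturb the view, which is exactly the independence the video seems to show.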

So, now the question is: will this new technology bring deeper changes to the gameplay, or will it just add the small modifications necessary for a good user experience in VR mode?

I think we will only see minor changes until HMDs are widely adopted, because if one side (HMD or non-HMD players) gets a significant advantage, the game will no longer be balanced and players will switch to another one. But maybe I am wrong.


From GDC, a talk about “Why VR is Hard”

Michael Abrash gave a talk at GDC on “Why VR is Hard” in which he discusses head-worn displays for VR and AR.  I’ve mentioned Michael before, and there is lots of good stuff in this transcript of his talk that you should read!

Idea for AR games

This is an article for those who want to do AR gaming and need some inspiration. The basics of Block by Block seem interesting, based on what we have seen in class with Argon.

Some AR experiences for kids

For those interested in AR for kids, check out this mini-review of 5 AR things for kids.  (I won’t comment on each of them, they aren’t all great, but they should give you ideas).

I will say, I like the idea of Zooburst:  not sure how I feel about the particular design choices or business model, but we’ve talked about building AR pop-up books for kids for many years, so it’s nice to see someone do it!

Project thoughts

A few folks have asked me to suggest project ideas.  First, let me pull some text from the project proposal page from the last time I ran this course, where I said:

You should consider these projects as mini research projects.  Simply implementing something that you think might be fun (like a game or 3D environment), or re-implementing something someone else has done, is not acceptable. You should have a question you are asking, hypothesis you are testing or new technology idea you are exploring.  In short, you should be doing something that could (in theory) be submitted to an academic conference.  Of course, I do not expect all projects to be of a level that would result in a top-tier conference publication;  however, the basic approach should be the same.  We will spend time in class talking about what this might mean;  I am also happy to discuss ideas with students individually.

This is a good way to think about it.  To make it clearer, I added this line to the project proposal page, as a 6th point to include on your proposal:

Include an annotated bibliography of related projects or papers.  (By annotated, I mean include a sentence or two about each project or paper, saying how it relates to what you are thinking about)

So, what does this mean in practice, and what might some projects be?   Most of the ideas I will suggest leverage Argon, because that’s what I’ve been thinking about lately.

  • Implement a game or application of AR or VR.  The focus should be specifically those parts that are “interesting” and unique about the idea.  This would be what you’d want for research, or if you were trying to convince someone of the validity of the concept.  Consider an outdoor MR/AR game like Google’s Ingress;  what would it mean to make it “really” AR, or to build a desktop “VR” companion game to let folks join in who aren’t out moving around?  How does the game change?  What is the new kind of “fun”?
  • Mobile Facebook games.  Is there anything “interesting” about “Farmville + mobile AR”?  Mobile MR?
  • A 3D AR/VR presentation program (“Prezi + AR”).  I’ve talked to students over the past few years about something like this, and really like the idea.  But, if you really consider what would be useful to someone giving real presentations, it’s not as obvious how to structure it.
  • Take any of the 3D UI interaction metaphors in Bowman, and think about them in the context of tablet/phone-based AR.  Some make less sense (e.g., techniques for travel) but some are clearly interesting and different.
  • 3D Visualization.  Both Hafez and I are very interested in ideas about Vis and what it means to do Vis in AR.  We have a bunch of ideas on what that might mean, so having a group explore one of the ideas is appealing.
  • Create a mirror world of part of campus, to support collaborative social experiences.  Watch this video of something we did with Unity years ago, pre-Argon.  It would be much easier now, with Argon: [Embedded YouTube video]

Obviously, some of these are much more “research publication ready” than others (and some, like the 3D Vis concepts, are ones that we’d really want to work closely with you on, because they are research we’re already thinking about).

Read Michael Abrash’s blog

I mentioned Michael’s Valve blog in class; I think it would be very useful for you to read the articles on it.

I would suggest going back and reading all the entries (there’s only a dozen or so), as they have some nice practical insights into what it would take to create real AR/VR experiences and devices.

Does Valve have a major role to play for HMDs?

Valve is trying to port Team Fortress 2, an online game, to 3D glasses. What you should know about TF2 is that Valve uses the game as a laboratory. For example, it was the first of the company’s games to implement micro-transactions (paying real money for virtual items).

So, if the experiment is a success, then in an ideal world the SDK and the Source engine (the graphics engine) will be updated, and most of the companies that use those tools will be able to offer more immersive games.

I think video games are a good way to decrease the price of a technology and make it more popular. A recent (or not so recent) example is the PS3, which made Blu-ray the winner of the HD storage format war.

4D Art Park

An interactive 4D amusement park in Korea.

The environment features holograms, 360-degree sound, augmented reality, and avatars, among other things.

Check it out at the link below; there are quite a few interesting videos on the site.

[Embedded YouTube video]

Primesense sensor at CES 2013

Here’s the link to the page with the video (also embedded below) for the PrimeSense sensor I mentioned in class.  I personally think the “story” (if you can call it that) in the video is pretty pathetic, but watch it and consider each use of the tech.  Some are good, some bad.  What I like are the ones that let you do things at a distance where no other interaction would be easy (e.g., interacting with the vacuum robot), or when your hands are otherwise occupied or messy/sterile (e.g., in the kitchen, the doctor).   I personally despise the opening “use gesture to control a presentation” (if he cared about emphasizing the importance of that pie chart, he would have scripted it, not trusted his success to a could-go-badly live “interactive demo”).  And some are silly:  would she really shop in front of her date in a public place, putting her personal info (even minimal) up for all to see?   Please consider all the proposed uses seriously when you watch.

[Embedded YouTube video]

Microsoft IllumiRoom’s Proof-of-Concept

I came across this interesting proof-of-concept by Microsoft, shown during CES 2013:

[Embedded YouTube video]

In class today we discussed (or at least my conclusion of the discussion was) that 3D TVs do not really create a virtual environment, because we never really interact with them. But I guess it depends on what you are doing with it. Watching a movie might be a passive experience, while playing a game can be considered an active experience in which we do look around. In my opinion, the only difference between a CAVE and a 3D TV is probably the size of the display: instead of covering three walls, it covers one-sixth of a wall. Does “virtual environment” actually mean that when we move physically, the world around us responds, as opposed to using a controller to move around? But then simulators might be considered equivalent to controllers.
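The distinction above can be sketched in code. This is purely illustrative (the names and the simplification to a 3-D position are my own, not from any real engine): in one case physical movement maps directly to the viewpoint, in the other the viewpoint moves while the body stays put.

```python
class Camera:
    """A bare-bones viewpoint: just a 3-D position."""
    def __init__(self):
        self.position = [0.0, 0.0, 0.0]

def update_head_tracked(camera, head_position):
    """CAVE-style: the tracked head pose *is* the viewpoint, so
    physically moving your body moves the view."""
    camera.position = list(head_position)

def update_with_controller(camera, stick_delta):
    """3D-TV / gamepad style: the viewpoint integrates controller
    input while the body stays still."""
    camera.position = [p + d for p, d in zip(camera.position, stick_delta)]
```

On this view, the question is whether “virtual environment” requires the first update rule, or whether the second one counts too.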

The definition of Virtual Reality seems to be really broad.

This proof-of-concept is an interesting attempt at taking things to another level. It simulates a CAVE in your living room, and may add to the virtual-reality “feel” of the TV displays we have.

I would like to know what everyone else thinks, and maybe clear up any misconceptions I might have presented above.

Welcome to CS7497: Virtual Environments

For the Spring 2013 session of this class, we will use this blog and the Georgia Tech t-square server together.  Any content that should remain private (e.g., grades, material that should not be publicly available on the internet, etc) will be put on t-square.  Please look at the pages (linked across the top of this blog) for additional details on the schedule, syllabus, etc.