Displaying posts published in

April 2013

AR Content Placement Using 3D Models

The goal of this project is to allow users to place virtual content in the real world using an AR application built on Argon2.  Users of the app can place virtual content on Georgia Tech campus buildings.  The virtual content is placed by pushing it from the device out till it […]

smARtcar

SmARt Car is an augmented reality application meant to be integrated into a car in order to ease the driver’s interaction with the tools usually found there. It comprises several tools (GPS, tachometer, …) gathered on the windshield of the car. The utility of smARt Car is twofold: providing […]

Arienteering: AR-based Orienteering

This project aims to use augmented reality techniques to create a new kind of orienteering. We implemented a system on Argon to support the whole process of the outdoor game. The system provides the necessary information and some useful tools to guide the players from beginning to end. Users can navigate themselves and […]

Augmented Reality Marauder’s Map of Georgia Tech

“The Marauder’s Map” is a magical map in J. K. Rowling’s “Harry Potter and the Prisoner of Azkaban”. When used by wizards who “solemnly swear that I am up to no good”, it shows all moving objects within the boundary of the “Hogwarts School of Witchcraft and Wizardry”. In this paper, we […]

Geobased Memory Book

We feel closest to someone when we share both time and physical space with them. So we tried to leverage augmented reality in the context of social applications by creating an interactive 3D photo gallery. The application allows users to share pictures with their friends and family by uploading them at a particular geo-location. Friends can view these […]

AR Bejeweled

Shane del Solar and Suraj Saripalli. An AR version of Bejeweled, a puzzle game in which a single player moves multicolored jewels on a grid. The jewels are moved by swapping them with a neighboring jewel. A string of three or more jewels in the horizontal or vertical direction makes those jewels disappear, and the game then adds […]
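The match rule described in this excerpt (a run of three or more identical jewels in a row or column disappears) can be sketched as a small match-detection routine. This is an illustrative sketch only, not the project’s actual code; the grid representation and function name are assumptions.

```python
# Toy match-3 detection: grid cells hold jewel "colors"; positions in any
# horizontal or vertical run of 3+ identical jewels are returned so the
# game can remove them and refill the board.

def find_matches(grid):
    """Return the set of (row, col) positions belonging to horizontal
    or vertical runs of three or more identical jewels."""
    rows, cols = len(grid), len(grid[0])
    matched = set()
    # Scan each row for horizontal runs.
    for r in range(rows):
        run = 1
        for c in range(1, cols + 1):
            if c < cols and grid[r][c] == grid[r][c - 1]:
                run += 1
            else:
                if run >= 3:
                    matched.update((r, c - k) for k in range(1, run + 1))
                run = 1
    # Scan each column for vertical runs.
    for c in range(cols):
        run = 1
        for r in range(1, rows + 1):
            if r < rows and grid[r][c] == grid[r - 1][c]:
                run += 1
            else:
                if run >= 3:
                    matched.update((r - k, c) for k in range(1, run + 1))
                run = 1
    return matched
```

After a swap, a game loop would call this, clear the matched cells, drop jewels down, and repeat until no matches remain.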

ARt Explore

Final Project post by: Andy Pruett, Anshul Bhatnagar and Mukul Sati. Overview and Lessons Learnt: Our project aimed at using AR to enhance the experience of viewing and interacting with large artwork such as murals and graffiti. Using natural image tracking for images of these sizes presents substantial challenges that we sought to solve, […]

Summaries for week 13

Moving Objects In Space: Exploiting Proprioception In Virtual-Environment Interaction One of the aims in virtual environments is to provide users with natural ways to interact with the interface. However, such techniques are seldom applied despite their importance: first because virtual objects are hard to manipulate, but also because of ill-adapted metaphors. Several […]

Week 13 Summaries

OmniTouch: Wearable Multitouch Interaction Everywhere This paper by Harrison et al. is a more recent, robust (and real) implementation of the SixthSense technology by Pranav Mistry et al. The group implemented the system using depth-sensing technology similar to the Kinect. They also mention the work done on SixthSense and Interactive Dirt, […]

Week 13 summary

OmniTouch: Wearable Multitouch Interaction Everywhere, by Chris Harrison, Hrvoje Benko and Andrew D. Wilson. OmniTouch is an interactive system that allows multitouch input and projected display on numerous surfaces. The system consists of a depth camera mounted on the shoulder along with a pico projector. The depth camera can detect multiple surfaces and the user’s hand […]

week 13 summaries

OmniTouch: Wearable Multitouch Interaction Everywhere The paper discusses a novel wearable interactive system introduced by Microsoft. The system leverages ubiquitous access to information, and its manipulation and sharing, especially through mobile and cloud devices. However, the most prevalent interfaces have limited screen real estate and limited modes of interaction. OmniTouch is an interactive […]

[Summaries Week 13] Exploiting Proprioception, OmniTouch

Moving Objects In Space: Exploiting Proprioception In Virtual-Environment Interaction This paper aims to alleviate the difficulty inherent in interacting in VEs through the ingenious use of one’s awareness of one’s own body (proprioception) as an aid for orientation and spatial-memory tasks. The authors expound the problem that motivates their study, stating that the precise manipulation […]

week 13 summaries

OmniTouch: Wearable Multitouch Interaction Everywhere This paper deals with the shoulder-worn OmniTouch system developed by the authors. The system makes it possible to turn everyday surfaces into a graphical, interactive, multitouch input. It comprises three components: a short-range PrimeSense depth camera and a Microvision ShowWX+ laser pico-projector, mounted on a form-fitting metal frame worn on the shoulders. […]

Week 13 Summaries

Moving Objects In Space: Exploiting Proprioception In Virtual-Environment Interaction Manipulation in immersive virtual environments is difficult partly because users must do without the haptic contact with real objects they rely on in the real world to orient themselves and their manipulanda. The paper describes proprioception, a person’s sense of the position and orientation of his […]

Summaries Week 13

Moving Objects In Space: Exploiting Proprioception In Virtual-Environment Interaction Working in a virtual environment with lasers and other pointing and manipulation tools is not as convincing as was imagined. Pointing methods are tiring and hard to use because of the lack of haptic feedback. So the authors propose to use the user himself […]

Week 13 Summaries

Moving Objects In Space: Exploiting Proprioception In Virtual-Environment Interaction This paper describes manipulation techniques in virtual environments that exploit the concept of proprioception, a person’s sense of the position and orientation of his body and limbs. There are three forms of body-relative interaction, namely direct manipulation, physical mnemonics and gestural action. In general […]

OmniTouch: Wearable Multitouch Interaction Everywhere OmniTouch is a system that enables graphical, interactive multi-touch input on everyday surfaces. In other words, it offers on-the-go interactive capabilities with no calibration. OmniTouch has three main components: a custom short-range PrimeSense depth camera, a Microvision ShowWX+ laser pico-projector, and the depth camera and projector are tethered […]
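The summaries above describe how OmniTouch senses touch with a depth camera. The core click-detection idea (a fingertip counts as "touching" when its sensed depth nearly matches the depth of the surface behind it) can be sketched as follows. The numbers and threshold here are illustrative assumptions; the real system does considerably more (finger segmentation, tracking, surface estimation).

```python
# Toy sketch of depth-based hover-vs-click classification: a fingertip
# is "touching" a surface when its depth reading is within a small
# threshold of the surface depth behind it.

def is_touching(finger_depth_mm, surface_depth_mm, threshold_mm=10.0):
    """Classify hover vs. click from depth-camera readings (millimeters)."""
    return abs(surface_depth_mm - finger_depth_mm) <= threshold_mm

# A hovering finger sits clearly in front of the surface;
# a clicking finger's depth nearly matches the surface depth.
print(is_touching(480.0, 520.0))  # hover -> False
print(is_touching(516.0, 520.0))  # click -> True
```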

Week 13 Summary

Moving Objects In Space This paper attempts to solve the problem of the sense of touch in immersive virtual environments. This is important because users cannot feel the virtual world even though they can see and hear it. To overcome these challenges, the authors use proprioception to build three kinds of interaction, which are […]

Week 13 Summaries

OmniTouch: The paper introduces a novel device from Microsoft. It is a shoulder-worn system that aims to be used as an input/output device enabling multi-touch operations for input. It utilizes depth-sensing and projection technologies to enable the multi-touch operations. Interfaces from the system can be projected on any surface ranging from table […]

Week 13 Summaries

Moving Objects in Space: Exploiting Proprioception In Virtual-Environment Interaction: This paper looks into the challenge of manipulating objects in virtual environments.  The main hindrance in this space is the lack of haptic contact with real objects.  The authors of the paper propose using proprioception as a way to help users deal with the lack […]

Shane’s Week 12 Summaries

OmniTouch: The OmniTouch is a shoulder-mounted device intended to be used as an input/output device.  The goal is to be able to utilize different surfaces in a user’s environment as a touch-screen device, much like using your hand instead of a smartphone.  It consists of a few different parts.  It has a […]

Summaries :

Moving Objects in Space: Exploiting Proprioception In Virtual-Environment Interaction In virtual environments, precise manipulation of objects in 3D worlds is hard for three main reasons. First, in VEs there is hardly any haptic feedback, so it is hard and very tiring for the user to get precise results. Moreover, there is also a limitation of the input […]

Ruge’s week 13 summaries

OmniTouch: Wearable Multitouch Interaction Everywhere The OmniTouch paper discussed a prototype technology that combines depth-sensing cameras with small-form-factor projectors to create small usable interfaces in the real world. The first component, the depth sensor, was used to reliably determine the location of the user’s hands, fingers, pointers, and surfaces in the near environment. […]

[week 13 discussions-proprioception]

Main paper: Moving Objects in Space: Exploiting Proprioception in Virtual-Environment Interaction http://www.cs.unc.edu/~mine/papers/minecows.pdf This paper describes a VE interface that utilizes proprioception. Because of the lack of haptic contact, manipulating objects in immersive virtual environments is difficult. By investigating body-relative interaction, the authors present a set of gestural actions that map to grabbing, […]

Moving Objects in Space: Exploiting Proprioception In Virtual-Environment Interaction Mine, Brooks, and Sequin This paper describes a great solution for handling 3DUI in immersive virtual environments. The methods leverage our innate proprioceptive sense. This is our human ability to sense the relative location of our body parts. It is the phenomenon that makes it possible […]

Project Report 2 : Control the map

Students: Olivier Richard, Aurelien Bonnafont. What has changed: Our initial project was to use face recognition to allow a user to “kill” an opponent. We do not think that we are going to implement this feature because we do not have access to the screen image and […]

week 12 summaries

Exploring 3D Navigation: Combining Speed-coupled Flying with Orbiting This paper deals with the different navigation techniques that allow the user to see different views of the scene and interact with the environment. The authors built a taxonomy to categorize the different navigation techniques and expand the structure to design new ones. This taxonomy is divided […]

Week 12 Summaries

Exploring 3D Navigation: Combining Speed-coupled Flying with Orbiting The introduction of this paper drives home the importance of the research. As the authors state, virtual environments generally comprise worlds larger than can be viewed from a single vantage point. In order to experience the […]

Week 12 Summaries: Users

This week, we read two papers explaining the issues of designing 3DUI experiences. In “Exploring 3D Navigation”, the researchers present results of user studies testing some novel interaction methods on desktop computers using mouse and keyboard input devices, and comparing the results for different tasks of selection, travel control, navigation and inspection for large virtual […]

Week 12 Summaries

A Survey of Design Issues in Spatial Input The paper addresses the issues designers should consider and lists a set of design principles for designing interfaces for virtual environments. 3D interaction space is very different from the 2D space we use in everyday computing. The authors combine information from their experiences and tests with […]

Week 12 Summaries

Exploring 3D Navigation: Combining Speed-coupled Flying with Orbiting This paper first presents a task-based taxonomy of navigation techniques for 3D virtual environments that categorizes existing techniques. Inspired by this taxonomy, the authors propose several new techniques. The authors use the taxonomy to give a more disciplined exploration of the design space of navigation in […]

[Week 12 Summary] Exploring 3D Navigation: Combining Speed-coupled Flying with Orbiting and A Survey of Design Issues in Spatial Input

Combining Speed-coupled Flying with Orbiting In their paper, the authors employ a task-based approach to construct a taxonomy of 3D-navigation techniques and then proceed to describe a couple of navigation techniques in this context, also proposing new techniques with the aid of the taxonomy. Primarily employing a task-based approach to drive the classification process, the […]

Week 12 summaries

Exploring 3D Navigation: Combining Speed-coupled Flying with Orbiting The authors start the paper by developing their taxonomy for navigation techniques. The taxonomy is divided into three groups: Task Selection, Travel Control, and User Interface. The authors give definitions for terms in each group and then provide four different navigation techniques. The first technique […]

Week 12 Discussions

Here are the three papers of the discussion: A Survey of Design Issues in Spatial Input http://delivery.acm.org/10.1145/200000/192501/p213-hinckley.pdf?ip=128.61.69.11&acc=ACTIVE%20SERVICE&key=C2716FEBFA981EF16C75A17C38D3955A3C0C269F20590017&CFID=187123085&CFTOKEN=56661494&__acm__=1364965022_4af4fe5334e1e3cbf3c59c7301ae70ee This paper goes through the different issues and solutions that can be found and designed to make users more comfortable with spatial input.   “Put-That-There”: Voice and Gesture  at the Graphics Interface http://delivery.acm.org/10.1145/810000/807503/p262-bolt.pdf?ip=128.61.69.11&acc=ACTIVE%20SERVICE&key=C2716FEBFA981EF16C75A17C38D3955A3C0C269F20590017&CFID=187123085&CFTOKEN=56661494&__acm__=1364980007_bd27c61ae3279b5bdc5e4d50948a722d I chose this paper because […]

Week 12 Summaries

A Survey of Design Issues in Spatial Input: This paper explores design principles and issues designers need to account for when designing interfaces that will be used with spatial input.  The paper goes into two major areas for spatial input, human perception and ergonomic concerns.  Human perception of 3D space is key in designing interfaces […]

Week 12 Summary

Exploring 3D Navigation: Combining Speed-coupled Flying with Orbiting This paper describes a task-based taxonomy of navigation techniques for 3D virtual environments. Most virtual environments encompass more space than can be viewed from a single vantage point, so it is very important for the user to navigate efficiently within the environment in order to obtain different views […]

week 12 summary [Hitesh]

Exploring 3D navigation: Navigating and interacting in a 3D world is a lot more challenging than rendering one. Also, the usability and effectiveness of a virtual environment largely depend on the user’s ability to get around and interact with the information within it. The paper discusses previous research efforts to construct a taxonomy of navigation techniques. […]

Week 12 Summary

A Survey of Design Issues in Spatial Input This paper surveys design issues for free-space 3D interfaces. The issues are described using examples from 3D interface instances. The issues they study are: 1. Users’ difficulty understanding 3D space. In this part they mainly argue that “using a spatial reference is […]

Summaries Week 12

Exploring 3D Navigation: Combining Speed-coupled Flying with Orbiting This paper mainly contains two parts: the taxonomy of navigation techniques in 3D virtual environments, and the Speed-coupled Flying with Orbiting technique. Several new techniques have grown out of this taxonomy, including Object Manipulation, Ghost Copy, Inverse Fog/Scaling, Ephemeral World Compression, Possession, Rubberneck […]

Week 12 Summaries

Exploring 3D Navigation: Combining Speed-coupled Flying with Orbiting The navigation technique used inside a 3D virtual environment is one of the important factors in the success or failure of the system. The paper discusses the existing navigation techniques and provides detailed information about one particular technique called ‘Speed-coupled Flying with Orbiting’. Previous work in […]

Week 12 summary

Exploring 3D Navigation: Combining Speed-coupled Flying with Orbiting, by Desney S. Tan, George G. Robertson, and Mary Czerwinski. This paper is a study of existing navigation techniques for 3D virtual environments. The authors have studied and developed a taxonomy of navigational approaches to enable future systematic studies. They categorize 3D VE interactions into task selection, travel control and […]

[week 12 summaries]

Exploring 3D Navigation: Combining Speed-coupled Flying with Orbiting This paper mainly contains two parts: a taxonomy of navigation techniques for 3D virtual environments and a detailed discussion of the Speed-coupled Flying with Orbiting technique. The taxonomy is developed for better exploration and design. Firstly, based on the task category, i.e., the user’s goal in conducting 3D navigation, […]

Ruge’s Week 12 Summaries

Exploring 3D Navigation: Combining Speed-coupled Flying with Orbiting This paper is broken into two separate tasks. The first is to provide a basic taxonomy of current navigation and interface techniques used for selection, navigation, and control of orientation and position within a virtual environment. The second task was, using this new taxonomy, to discuss a […]

Week 12 Discussions

Main Paper: Exploring 3D navigation: Combining speed-coupled flying with orbiting. http://doi.acm.org/10.1145/365024.365307 The paper proposes a new technique to navigate through 3D workspaces. It makes use of both position and height to determine the velocity of motion. It also puts forth a technique for orientation, but that technique has its limitations. Paper 1: Rapid controlled […]
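The speed coupling mentioned here (travel velocity derived from the camera’s position and height) can be sketched in a few lines: the higher the viewpoint, the faster the flight, so users move slowly near detail and quickly when zoomed out. The constants, clamping, and function name below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of height-coupled travel speed for 3D navigation:
# speed grows linearly with camera height above the ground plane,
# clamped to a maximum so flight stays controllable.

def flying_speed(height, base_speed=1.0, gain=0.5, max_speed=50.0):
    """Travel speed (units/s) that scales with camera height (units)."""
    speed = base_speed + gain * max(height, 0.0)
    return min(speed, max_speed)

# Near the ground the user inches along; high above it they fly fast.
print(flying_speed(0.0))     # ground level -> base speed
print(flying_speed(1000.0))  # very high -> clamped to max_speed
```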

Week 12 – Summaries

Exploring 3D Navigation: Combining Speed-coupled Flying with Orbiting Virtual environments encompass more than can be displayed from one point of view. For a good user experience, it is necessary to take care of navigation so that the user can move intuitively through the environment. The design of navigation can be treated as a task-based model […]

TF2 / Oculus Rift

I was wondering how VR mode is handled in TF2. Normally, the mouse directs both your cursor and the view, because your cursor is in the middle of the screen. So I was wondering if in VR mode the view and the cursor are independent (at least partially). I found this entry on the […]

AR Content Placement using 3D models and orthographic projections

Matt Ruge, Paul Plunkett. PROGRESS REPORT 2. ABSTRACT: We feel that augmented reality would be a much more usable and popular tool if resources didn’t require an investment of time, expertise, and money.  Our goal is to provide an interface that would allow simple augmented reality interfaces to be created in as little as […]

Bejeweled Part II

Progress Report. Summary of Proposed Project (unchanged): We will build a competitive AR version of Bejeweled.  Bejeweled is a puzzle game where a single player moves multicolored jewels on a grid. The jewels are moved by swapping them with a neighboring jewel.  A string of three or more jewels in the horizontal or vertical […]

Project Report 2: smARtcar

Overview: This time our focus was getting information about interesting places on the route the user is currently taking and displaying this information in Argon. We managed to get some interesting locations around the current location by using the GeoNames API. We created virtual billboards of information and positioned them around […]
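A lookup like the one described in this report, fetching points of interest near the user’s current position, might be sketched as below. The `findNearbyWikipedia` service and its `lat`/`lng`/`radius`/`username` parameters come from the public GeoNames web services; which GeoNames service the project actually called is an assumption, and the coordinates are illustrative.

```python
# Sketch of building a GeoNames nearby-points-of-interest query URL.
from urllib.parse import urlencode

def nearby_wikipedia_url(lat, lng, radius_km=1, username="demo"):
    """Build a GeoNames findNearbyWikipedia (JSON) query URL."""
    params = urlencode({
        "lat": lat, "lng": lng,
        "radius": radius_km,   # search radius in kilometers
        "username": username,  # GeoNames account name
    })
    return "http://api.geonames.org/findNearbyWikipediaJSON?" + params

# Example: points of interest near the Georgia Tech campus.
print(nearby_wikipedia_url(33.7756, -84.3963))
```

Each returned entry carries a title, summary, and coordinates, which is the kind of data one could render as geo-positioned billboards in Argon.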

Progress Report II – AR for Participation with Outdoor Art

artExplore Project Progress Report II: Summary Our Argon2 application called artExplore is developing towards our goal of supporting interactions with art or images of very large size in an outdoor setting. In this document we describe the evolving vision for the app, and report our proof-of-concept milestones currently working and in progress. In addition, we […]