Augmented reality on mobile devices has grown in popularity in recent years, partly because of the proliferation of smartphones and tablets equipped with high-quality cameras, and partly because of advances in computer vision algorithms that make such technologies practical on embedded systems.
To date, such augmented reality applications have largely been limited to a single user receiving additional information about a physical object or interacting with a virtual agent. Researchers at MIT's Media Lab have taken augmented reality to the next level by developing a multi-user collaboration tool that lets users augment reality and share it with others, essentially turning the real world into a digital canvas for all to share.
The project, known as Second Surface, is described as:
…a novel multi-user Augmented reality system that fosters a real-time interaction for user-generated contents on top of the physical environment. This interaction takes place in the physical surroundings of everyday objects such as trees or houses. The system allows users to place three dimensional drawings, texts, and photos relative to such objects and share this expression with any other person who uses the same software at the same spot.
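The core idea described above — anchoring user-generated content to a physical spot and serving it to anyone standing at that same spot — can be sketched roughly as follows. Everything here is a hypothetical illustration (the names `AnchoredContent` and `content_near`, and the use of GPS-style coordinates as a stand-in for whatever anchoring method the system actually uses), not the Second Surface implementation itself.

```python
import math
from dataclasses import dataclass

# Hypothetical sketch: content anchored to a real-world position and
# shared with any user near the same spot. Coordinates stand in for
# whatever anchoring mechanism the real system uses.

@dataclass
class AnchoredContent:
    kind: str     # "drawing", "text", or "photo"
    payload: str  # the content itself (or a reference to it)
    lat: float    # latitude of the physical anchor
    lon: float    # longitude of the physical anchor

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate ground distance in meters (equirectangular)."""
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    return 6371000 * math.hypot(dlat, dlon)

def content_near(store, lat, lon, radius_m=30):
    """Return all shared content anchored within radius_m of a user."""
    return [c for c in store
            if distance_m(c.lat, c.lon, lat, lon) <= radius_m]

# One user leaves a note on a tree; another user arriving at the
# same spot (a few meters away) sees it.
store = [AnchoredContent("text", "Hello from the tree!", 42.3601, -71.0942)]
visible = content_near(store, 42.3601, -71.0943)
```

The point of the sketch is simply that shared augmented reality reduces to a lookup keyed by physical location: every user running the same software at the same spot gets the same set of anchored content back.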
If you still find it difficult to picture how this works, or why I believe it will be a game-changing technology for augmented reality on mobile devices once it reaches the general public, check out the following explanatory video.
Now, imagine combining this technology with Google Glass and free-form gesture recognition. How awesome would that be?