
Tai-Chi Chuan Project

Tai Chi Chuan (TCC) is a famous Chinese martial art and is well known as a healthy form of exercise. The traditional ways to learn TCC are reading TCC manuals, watching demonstration videos, or being instructed by a coach. However, many people who have tried to learn TCC find it harder than they expected at the beginning. In this project, we apply new technologies, such as head-mounted displays (HMDs) and wearable sensors, to help beginners learn TCC more easily. For example, by wearing the HMD, the beginner can see eight virtual coaches standing around him, so that no matter which direction he turns during the natural movements of TCC, there is always a virtual coach in front of him, moving at his pace …
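As an illustration of the eight-coach layout, the sketch below (an assumption for illustration, not the project's actual code) places eight coaches at 45-degree intervals around the learner and picks the coach currently facing him:

```python
# Illustrative sketch: eight virtual coaches around the learner, one per
# 45-degree sector, so that one coach is always roughly in front.
import math

def coach_positions(center_x, center_z, radius=2.0):
    """Eight coach positions on a circle around the learner (x, z plane)."""
    return [(center_x + radius * math.sin(math.radians(a)),
             center_z + radius * math.cos(math.radians(a)))
            for a in range(0, 360, 45)]

def coach_in_front(learner_yaw_deg):
    """Index (0..7) of the coach closest to the learner's facing direction."""
    return round((learner_yaw_deg % 360) / 45.0) % 8

positions = coach_positions(0.0, 0.0)
idx = coach_in_front(100.0)            # learner facing roughly east
print(idx, positions[idx])
```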

Read more »

Dun-Huang Project (Tabletop Version)

This project presents a tabletop system that contains rich media content about two Dun-Huang caves: Cave 61 and Cave 254. These are two of the most important caves, containing rich historical information and research data about Dun-Huang. We aim to build an intuitive cave browsing system that allows users to tour the caves in detail and learn their important historical content. Users can operate the system with tangible objects and body gestures.

Read more »

Dun-Huang Project (Wearable Version)

Based on the concepts of augmented reality and virtual touring, we developed an interactive virtual touring system by integrating image pattern recognition and mobile computing techniques. Our aim is to enable users to take realistic virtual tours (e.g., of the Mogao Caves) by manipulating handheld devices or head-mounted displays. During the tour, users can view multimedia content and physically move in the virtual environment. Our interactive system is integrated with a video see-through head-mounted display (HMD). With the HMD, users can view the artifacts in the physical world or “transfer” themselves to the virtual environment. To enable users to explore the virtual caves, we implemented several movement mechanisms, including jumping, gliding, and walking.

Read more »

Quartic Smiles

This work was created in the hope of enabling viewers to break through the limitations of three-dimensional space while freely wandering through the beautiful campus of National Taiwan University, so that through chance encounters, bridges of friendship are built with smiles and laughter. Through the interactive artwork Smiling Four-Directional Link, we hope that strangers are linked together by a smile. The four directions also symbolize the four links between people and people, people and time, people and space, and people and the world. An observer becomes a focal point, and through this artwork a linking line is formed with another observer. The different linking lines then form a surface and finally enter the Internet community to become a three-dimensional cube, letting smiles diffuse through this artwork in all four directions.

Read more »

I am: The Interactive Wall Driven by Human Attention

This 2×4-meter interactive wall consists of a 4×11 array of small screens presenting the continuous metamorphosis of “portraits”. When a participant is attracted by the brilliant variation of the portraits and sits on a chair equipped with a pressure sensor, his or her face is captured by the system, activating a series of interactive activities driven by human attention. The portrait being viewed then transforms into the participant’s portrait captured earlier by the system, engaging the participant in the work.

Read more »

Collaborative Driver Assistance System

This research focuses on novel collaborative driver assistance systems that integrate sensing, communication, and advanced data analytics from the perspective of user-centric design. Furthermore, we built a driving simulator for users to experience future technologies, including aggressive driving behavior prediction, a “transparent car” for see-through views, and a “giraffe view” for reduced blind spots. These technologies aim to optimize not only driving safety but also driving efficiency through V2V communication.

Read more »

Immersive VR project

With recent advances in wearable I/O devices, designers of immersive VR systems are able to provide users with many different ways to explore virtual space. Although HMDs have become quite popular recently, moving around in a virtual space is not as easy as looking around in it, mainly because position tracking is more complicated than orientation tracking with state-of-the-art technologies. Our goal is to provide the user with a first-person perspective and the experience of moving around in 3D space like a superhuman: jumping high, gliding off, flying with a rope, teleporting, etc., even without position tracking technologies.
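As a minimal sketch of what orientation-only locomotion can look like, the example below (an illustrative assumption, not the project's implementation) casts the head's gaze ray onto the virtual ground plane and teleports the viewpoint there:

```python
# Illustrative sketch: gaze-directed teleportation using only head orientation.
import numpy as np

def gaze_direction(yaw_deg, pitch_deg):
    """Unit forward vector from head yaw/pitch (degrees), y-up convention."""
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    return np.array([np.cos(pitch) * np.sin(yaw),
                     np.sin(pitch),
                     np.cos(pitch) * np.cos(yaw)])

def teleport_target(eye_pos, yaw_deg, pitch_deg, max_dist=20.0):
    """Intersect the gaze ray with the ground plane y=0; None if looking up or too far."""
    d = gaze_direction(yaw_deg, pitch_deg)
    if d[1] >= 0:                       # looking level or upward: no ground hit
        return None
    t = -eye_pos[1] / d[1]              # ray parameter where the ray reaches y=0
    return eye_pos + t * d if t <= max_dist else None

# Example: eye height 1.7 m, looking 30 degrees downward toward +z.
print(teleport_target(np.array([0.0, 1.7, 0.0]), yaw_deg=0.0, pitch_deg=-30.0))
```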

Read more »

i-m-Top: Interactive Multi-resolution Tabletop Display

We are highly interested in the interactive tabletop research area and have conducted several related research works: (1) Interactive multi-resolution tabletop: in this work, we developed an innovative tabletop display system, called i-m-Top (interactive multi-resolution tabletop), featuring not only multi-touch but also a multi-resolution display that accommodates the multi-resolution characteristics of human vision. (2) To move or not to move: a comparison between steerable versus fixed focus region paradigms in multi-resolution tabletop display.

Read more »

Beyond the Surfaces

This project presents a programmable infrared (IR) technique that utilizes invisible, programmable markers to support interaction beyond the surface of a diffused-illumination (DI) multi-touch system. We combine an IR projector and a standard color projector to simultaneously project visible content and invisible markers. Mobile devices outfitted with IR cameras can compute their 3D positions based on the markers perceived. Markers are selectively turned off to support multi-touch and direct on-surface tangible input.
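As a rough illustration of how a device could recover its 3D pose from the detected markers, here is a minimal sketch assuming OpenCV's solvePnP with hypothetical marker coordinates and camera intrinsics (the project's actual pipeline may differ):

```python
# Minimal sketch: recovering a mobile device's 3D pose from detected IR markers.
# Assumes marker detection has already matched 2D image points to known 3D
# marker positions on the tabletop; all coordinates below are hypothetical.
import cv2
import numpy as np

# Known 3D positions of four projected IR markers on the table surface (meters).
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.3, 0.0, 0.0],
                          [0.3, 0.2, 0.0],
                          [0.0, 0.2, 0.0]], dtype=np.float64)

# Their detected 2D locations in the device's IR camera image (pixels).
image_points = np.array([[412.0, 288.0],
                         [655.0, 290.0],
                         [652.0, 455.0],
                         [409.0, 452.0]], dtype=np.float64)

# Hypothetical camera intrinsics (focal length and principal point).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
if ok:
    R, _ = cv2.Rodrigues(rvec)
    device_position = (-R.T @ tvec).ravel()  # device position in table coordinates
    print("Estimated device position (m):", device_position)
```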

Read more »

Turning Rust into Gold: Mau-Kung Ting Multimedia System

In this work, we combine a digital simulation of the de-weathering process with a human interactive interface that uses breathing-based biofeedback to drive the simulation. An invaluable artifact, the Mau-Kung Ting from the National Palace Museum (NPM), was chosen for this implementation. Most of the project members are from our lab, the Visual Computing and Internet Graphics groups of MSRA, and the National Palace Museum.

Read more »

Win-Win Asleep: A Social Persuasion System that Alleviates Insomnia

Insomnia is an important disorder affecting personal health. It is important to identify medical and psychological causes before deciding on a treatment for insomnia. Attention to sleep hygiene is an important first-line treatment strategy and should be tried before any pharmacological approach is considered. To rule out causes of insomnia and support sleep hygiene, we need a tool that conveniently and accurately records personal lifestyle.

Read more »

On building a decompressive environment for rehabilitation of breast cancer patients

In this research, we investigate both interactive multimedia technology and persuasive mobile computing for post-surgical rehabilitation and for encouraging healthy behavior in patients’ daily lives. A novel interactive multimedia-enhanced space called “i-m-Space” (Interactive Multimedia-enhanced Space) has been designed and implemented, whose program includes “Instruction of Abdominal Breathing”…

Read more »

Face Recognition Based on Facial Trait Code

We propose the Facial Trait Code (FTC) to encode human facial images. A given face is encoded at a set of prescribed facial traits to yield an n-ary facial trait code, with each symbol in the codeword corresponding to the closest Distinctive Trait Pattern (DTP). To handle the most rigorous face recognition scenario, in which only one facial image per individual is available for enrollment and faces vary due to illumination, expression, pose, or misalignment, we also propose the Probabilistic Facial Trait Code (PFTC)…
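To make the encoding step concrete, here is a minimal sketch under simplifying assumptions (fixed trait patches, raw-pixel features, and hypothetical DTP prototypes; the actual FTC uses learned traits and patterns):

```python
# Minimal sketch of facial trait coding: each trait is a fixed patch of the
# aligned face image, and each symbol is the index of the nearest DTP prototype.
import numpy as np

def extract_trait_features(face, trait_boxes):
    """Crop each trait patch and flatten it into a feature vector."""
    feats = []
    for (top, left, h, w) in trait_boxes:
        patch = face[top:top + h, left:left + w]
        feats.append(patch.astype(np.float64).ravel())
    return feats

def encode_face(face, trait_boxes, dtp_prototypes):
    """Return the n-ary codeword: for each trait, the index of the closest DTP."""
    codeword = []
    for feat, prototypes in zip(extract_trait_features(face, trait_boxes),
                                dtp_prototypes):
        dists = np.linalg.norm(prototypes - feat, axis=1)
        codeword.append(int(np.argmin(dists)))  # symbol = nearest DTP index
    return codeword

# Toy usage with random data standing in for a real aligned face and DTPs.
rng = np.random.default_rng(0)
face = rng.integers(0, 256, size=(64, 64))
trait_boxes = [(8, 8, 16, 16), (8, 40, 16, 16), (40, 24, 16, 16)]  # hypothetical traits
dtp_prototypes = [rng.normal(128, 40, size=(10, 16 * 16)) for _ in trait_boxes]
print("codeword:", encode_face(face, trait_boxes, dtp_prototypes))
```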

Read more »

e-Fovea: Large-Scale and High-Resolution Monitoring System

Large-scale and high-resolution monitoring systems are ideal for many visual surveillance applications. However, existing approaches either have insufficient resolution and low frame rates, or have high complexity and cost. We take inspiration from the human visual system and propose a multi-resolution design, e-Fovea, which combines multi-resolution camera input with multi-resolution steerable projector output to support large-scale, high-resolution visual monitoring.

Read more »

Object Clustering in a Specific Scene

In this approach, we classify the objects in a surveillance video into three categories: pedestrians, scooters, and vehicles. A period of video is used to construct a model specific to the scene. Due to perspective, the same object yields different measurements at different positions, such as object size, moving direction, and moving velocity. On the other hand, different objects at the same position also yield different measurements, such as aspect ratio and object size.
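A minimal sketch of how such a scene-specific, position-dependent model might be used for classification is shown below (the grid layout, feature set, and statistics are hypothetical):

```python
# Sketch: per-cell statistics of (size, aspect ratio, speed) learned from a
# period of video, used to assign the closest class at the object's position.
import numpy as np

GRID = (8, 8)  # image divided into 8x8 cells

def cell_of(cx, cy, width, height):
    """Map an object's image position to a grid cell of the scene model."""
    return (min(int(cy / height * GRID[0]), GRID[0] - 1),
            min(int(cx / width * GRID[1]), GRID[1] - 1))

def classify(obj, scene_model, width, height):
    """Assign the class whose per-cell statistics are closest to the object."""
    row, col = cell_of(obj["cx"], obj["cy"], width, height)
    feat = np.array([obj["size"], obj["aspect"], obj["speed"]])
    best, best_dist = None, np.inf
    for label, stats in scene_model[(row, col)].items():
        dist = np.linalg.norm((feat - stats["mean"]) / stats["std"])  # normalized distance
        if dist < best_dist:
            best, best_dist = label, dist
    return best

# Toy model: every cell shares the same hypothetical per-class statistics.
toy_model = {(r, c): {
    "pedestrian": {"mean": np.array([800.0, 0.4, 1.5]), "std": np.array([200.0, 0.1, 0.5])},
    "vehicle":    {"mean": np.array([6000.0, 2.0, 8.0]), "std": np.array([1500.0, 0.4, 3.0])},
} for r in range(GRID[0]) for c in range(GRID[1])}

obj = {"cx": 320, "cy": 240, "size": 850, "aspect": 0.45, "speed": 1.2}
print(classify(obj, toy_model, width=640, height=480))  # -> "pedestrian"
```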

Read more »

Target Tracking across Multiple Cameras

We have developed an adaptive learning method for tracking targets across multiple cameras with disjoint fields of view. There are usually two visual cues employed for tracking targets across cameras: the spatial-temporal cue and the appearance cue. To learn the relationships among cameras, traditional methods use either hand-labeled correspondences or a batch-learning procedure…
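As a rough sketch of how the two cues can be fused to score a candidate correspondence (the transit-time model, histogram similarity, and weighting are hypothetical stand-ins, not the relationships our method learns):

```python
# Sketch: fuse the spatial-temporal cue and the appearance cue into one score.
import numpy as np

def spatial_temporal_likelihood(dt, mean_transit=12.0, std_transit=3.0):
    """Likelihood that a target reappears after dt seconds (Gaussian model)."""
    return np.exp(-0.5 * ((dt - mean_transit) / std_transit) ** 2)

def appearance_similarity(hist_a, hist_b):
    """Color-histogram similarity (Bhattacharyya coefficient)."""
    return float(np.sum(np.sqrt(hist_a * hist_b)))

def match_score(dt, hist_a, hist_b, w=0.5):
    """Weighted fusion of both cues into a single matching score."""
    return w * spatial_temporal_likelihood(dt) + (1 - w) * appearance_similarity(hist_a, hist_b)

# Toy usage: two normalized color histograms and a 13-second transit time.
h1 = np.full(16, 1 / 16)
h2 = np.full(16, 1 / 16)
print("score:", match_score(13.0, h1, h2))
```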

Read more »

Tubular Interactive Multi-Resolution Display System

We propose i-m-Tube, a tubular interface that allows multiple users to interact with multimedia content through multi-touch and a multi-resolution display. The tubular surface of i-m-Tube makes it well suited for displaying panoramic image content such as the Chinese scroll painting “Along the River During the Ch’ing Ming Festival”, which is regarded as a national treasure and widely known for its extraordinary width and rich details.

Read more »

TUIC

We present TUIC, a technology that enables tangible interaction on capacitive multi-touch devices, such as the iPad, iPhone, and 3M’s multi-touch displays, without requiring any hardware modifications. TUIC simulates finger touches on capacitive displays using passive materials and active modulation circuits embedded inside tangible objects, and can be used simultaneously with multi-touch gestures. TUIC consists of three approaches to sensing and tracking objects…
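The listing stops before detailing the three approaches; purely as an illustrative sketch (an assumption, not TUIC's actual scheme), the example below identifies a passive tangible from the geometry of the simultaneous touch points its conductive feet produce:

```python
# Illustrative sketch: match the pairwise distances of simultaneous touch
# points against registered tag signatures. All signatures are hypothetical.
import itertools
import numpy as np

# Registered tags: sorted pairwise distances (in mm) between their touch feet.
TAG_SIGNATURES = {
    "tag_A": np.array([20.0, 20.0, 28.3]),   # right-angle layout
    "tag_B": np.array([25.0, 25.0, 25.0]),   # equilateral layout
}

def signature(points):
    """Sorted pairwise distances of the detected touch points."""
    d = [np.linalg.norm(np.subtract(p, q))
         for p, q in itertools.combinations(points, 2)]
    return np.sort(d)

def identify(points, tol=2.0):
    """Return the tag whose signature matches the touches within tolerance."""
    sig = signature(points)
    for name, ref in TAG_SIGNATURES.items():
        if len(ref) == len(sig) and np.all(np.abs(ref - sig) < tol):
            return name
    return None

print(identify([(0, 0), (20, 0), (0, 20)]))  # -> "tag_A"
```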

Read more »

Magic Crystal Ball

Magic Crystal Ball is a spherical display system that allows users to see a virtual object or scene appearing inside a transparent sphere and to manipulate the displayed content with bare-handed interactions. Magic Crystal Ball lets users perform touch and hover interactions with their bare hands. The user can wave a hand above the ball, and computer-generated clouds then blow up from the bottom of the ball, quickly surrounding the displayed content.

Read more »

Panorama-Based Interacting with the Physical Environment

In this project, we present an intuitive user interface for interacting with the physical environment. A panorama of the environment is displayed on a handheld device equipped with an orientation sensor, which aligns the panoramic view with the real world. Users can interact with objects in the environment by selecting the corresponding items on the display.
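A minimal sketch of the alignment step, assuming an equirectangular panorama and a yaw/pitch reading from the orientation sensor (the resolution and reading below are hypothetical):

```python
# Sketch: map device yaw/pitch to the panorama pixel facing the user.
PANO_W, PANO_H = 8192, 4096  # equirectangular panorama resolution

def orientation_to_pixel(yaw_deg, pitch_deg):
    """Map device yaw (0..360, clockwise from north) and pitch (-90..90)
    to the panorama pixel that should sit at the center of the screen."""
    u = (yaw_deg % 360.0) / 360.0        # horizontal fraction of the panorama
    v = (90.0 - pitch_deg) / 180.0       # vertical fraction (top = +90 pitch)
    return int(u * PANO_W) % PANO_W, min(int(v * PANO_H), PANO_H - 1)

# Example: device pointing east (yaw 90) and slightly upward (pitch 10).
print(orientation_to_pixel(90.0, 10.0))  # -> (2048, 1820)
```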

Read more »