The VAMPIRE project studies mobile assistance technologies as one of its main application scenarios. In such applications, the system assists the user in performing certain tasks or provides him or her with additional memorised information relevant to a particular situation. Future real-world applications might include industrial assembly, remote teaching, and prosthetic memory devices. Questions answered by such assistants are, for instance, "Where have I put my keys?" or "How do I construct this assembly?" Such use cases and the general approach of constructing visual active memory processes led to the development of "mobile augmented reality assistant systems".
In the mobile assistant scenario of VAMPIRE, the user wears a mobile device that, by means of augmented reality, integrates him or her into the processing loop and thus closes the perception-action cycle. The user can intuitively direct the focus of the system, as it follows his or her own. This tight coupling of system and user allows direct interaction based on visual feedback and facilitates visual learning capabilities.
A typical view through the memory spectacles
The system must not only recognise and memorise the current constellation of objects, but also be aware of the current contextual situation and of its own spatial position. In conjunction with capabilities to anticipate the user's intentions, the system can selectively present the information the user is interested in, leading to context-aware scene augmentation. But as we move towards real assistant technologies that can aid the user in performing tasks, additional functionality is required. Action recognition observes what the user is doing based on the trajectories of manipulated objects. The learning capabilities demand human-machine interaction, as the user has to teach the system new objects and even new situations. Thus, several interaction modalities are incorporated that allow the user to reference spatial positions and objects in the scene, direct the system's attention, and retrieve memorised knowledge. The modalities applied in the scenario range from speech recognition for object labelling to head and pointing gestures. Special visualisation techniques are used to redisplay visual information in augmented reality.
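The idea of recognising actions from the trajectories of manipulated objects can be sketched roughly as follows. This is an illustrative toy, not the VAMPIRE implementation: the action labels, the metre units, and the displacement threshold are all assumptions made for the example.

```python
# Toy sketch of trajectory-based action recognition: classify a
# manipulated object's motion by its net vertical displacement.
# Labels and the 10 cm threshold are assumptions for illustration.

def classify_action(trajectory, threshold=0.10):
    """Classify a trajectory of (x, y, z) object positions in metres.

    Returns 'pick up' if the object ends noticeably higher than it
    started, 'put down' if noticeably lower, otherwise 'move'.
    """
    if len(trajectory) < 2:
        return "unknown"
    dz = trajectory[-1][2] - trajectory[0][2]
    if dz > threshold:
        return "pick up"
    if dz < -threshold:
        return "put down"
    return "move"

# Example: a cup lifted 25 cm off the table.
print(classify_action([(0.0, 0.0, 0.0), (0.0, 0.0, 0.1), (0.0, 0.0, 0.25)]))
```

A real system would of course use richer features (velocity profiles, object identity, relations to other objects) and a learned classifier, but the principle of mapping object trajectories to action hypotheses is the same.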
In VAMPIRE we study all these aspects in non-artificial environments (such as offices and kitchen setups), which pose great challenges for all vision processing: object learning, object recognition, and visual tracking.
Computer vision research as carried out in VAMPIRE is shifting more and more from algorithmic solutions to the construction of active systems by building integrated demonstrators as described above. The technical composition and functional cooperation of so many capabilities demand a suitable system integration solution. An integration framework, the XCF software development kit, was therefore developed that combines ideas from data- and event-driven architectures, enabling researchers to easily build highly reactive distributed information systems as needed, e.g. in the VAMPIRE mobile augmented reality scenario.
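The data- and event-driven style that such a framework provides can be sketched with a minimal publish/subscribe bus: components publish documents to named streams, and registered callbacks fire on each event. This is a generic illustration of the pattern, not the XCF API; all names here are invented for the example.

```python
# Minimal publish/subscribe sketch of an event-driven integration style.
# Not the XCF API -- stream names and document shapes are assumptions.

from collections import defaultdict


class EventBus:
    """Routes published documents to all callbacks subscribed to a stream."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, stream, callback):
        self._subscribers[stream].append(callback)

    def publish(self, stream, document):
        for callback in self._subscribers[stream]:
            callback(document)


bus = EventBus()
seen = []

# E.g. a visualisation component listening for object detections
# announced by a recognition component.
bus.subscribe("objects", lambda doc: seen.append(doc))
bus.publish("objects", {"label": "cup", "position": (0.4, 0.1, 0.0)})
print(seen)
```

Because producers and consumers are decoupled through named streams, new components (e.g. a tracker or a memory process) can be attached without modifying existing ones, which is what makes such architectures suitable for highly reactive distributed systems.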