Abstract
In everyday life, it is useful for mobile devices such as cell phones and PDAs to have an understanding of their user's surrounding context. Presentation output planning is one area where such context can be used to adapt information to a user's current situation. This paper outlines the architecture of a context-aware output planning module, as well as the design and implementation of three output generation strategies: user-defined, symmetric multimodal, and context-based output planning. These strategies are responsible for selecting the best-suited modalities (e.g. speech, gesture, text) for presenting information to a user situated in a public environment such as a shopping mall. A central point of this paper is the identification of context factors relevant to presentation planning on mobile devices with finite resources, where output may need to be private and/or public. We show via a working demonstrator the extent to which such factors can, with readily available technology, be incorporated into a system. The paper also outlines the set of reactions a system might take when given context information about the user and the environment.
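The abstract gives no implementation details, so purely as an illustrative sketch, context-based modality selection of the kind described above might look like the following. The context factors (ambient noise, privacy, display availability, hands-free use), the scoring weights, and all names are assumptions for illustration, not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical context factors; the actual factors used by the paper's
# planner are not enumerated in the abstract.
@dataclass
class Context:
    ambient_noise: float      # 0.0 (quiet) .. 1.0 (very noisy)
    privacy_required: bool    # e.g. user is in a public shopping mall
    screen_available: bool    # device display is usable (not in a pocket)
    hands_free: bool          # user cannot easily look at or touch the device

MODALITIES = ("speech", "text", "gesture")

def select_modalities(ctx: Context) -> list[str]:
    """Rank output modalities for the current context (illustrative only)."""
    scores = {m: 1.0 for m in MODALITIES}

    # Speech output is a poor fit in noisy or privacy-sensitive public settings.
    if ctx.ambient_noise > 0.6 or ctx.privacy_required:
        scores["speech"] -= 0.8
    # Text and on-screen gesture/graphics need a usable display.
    if not ctx.screen_available:
        scores["text"] -= 1.0
        scores["gesture"] -= 1.0
    # Hands-free situations favour audio output.
    if ctx.hands_free:
        scores["speech"] += 0.5

    ranked = sorted(MODALITIES, key=lambda m: scores[m], reverse=True)
    return [m for m in ranked if scores[m] > 0.0]

if __name__ == "__main__":
    mall = Context(ambient_noise=0.8, privacy_required=True,
                   screen_available=True, hands_free=False)
    print(select_modalities(mall))  # -> ['text', 'gesture', 'speech'] (speech ranked last)
```

A real planner would also need to weigh device resources (battery, bandwidth) and user preferences, which the paper's user-defined strategy suggests are part of the decision, but those inputs are omitted here for brevity.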
Original language | English |
---|---|
Title of host publication | Proceedings of the AISB Symposium on Multimodal Output Generation (MOG) |
Editors | Mariët Theune, Ielka van der Sluis, Yulia Bachvarova, Elisabeth André
Place of Publication | UK |
Publisher | The Society for the Study of Artificial Intelligence and Simulation of Behaviour |
Pages | 46-49 |
Number of pages | 4 |
ISBN (Print) | 1902956699 |
Publication status | Published - 2008 |
Event | AISB 2008 Convention - Communication, Interaction and Social Intelligence - Aberdeen, Scotland, UK Duration: 1 Apr 2008 → 4 Apr 2008 |
Conference
Conference | AISB 2008 Convention - Communication, Interaction and Social Intelligence |
---|---|
City | Aberdeen, Scotland, UK |
Period | 1/04/08 → 4/04/08 |