Abstract
Future multimodal mobile platforms are expected to require high interactivity in their applications and user interfaces. Until now, mobile devices have been designed to remain, in the interaction sense, in a stand-by state until the user actively wakes them; the motivation for this approach has been battery conservation.
Imaging is a versatile sensing modality that can enable context recognition, unobtrusively predicting the user's interaction needs and directing computational resources accordingly. However, vision-based always-on functionalities have been impractical in battery-powered devices, since their computational power and energy requirements make extended use unattainable.
Vision-based applications can benefit from the addition of interactive stages that, properly designed, reduce the complexity of the methods by utilizing user feedback and collaboration, resulting in a system that balances computational throughput and energy efficiency.
The usability of user interfaces rests critically on their latency; an always-on sensing platform, however, must balance responsiveness against power consumption. When designing highly interactive vision-based interfaces, reactiveness can be improved by reducing the number of operations the application processor must execute, offloading the most expensive tasks to accelerators or dedicated processors.
In this context, this thesis investigates and surveys enablers and solutions for vision-based interactivity on mobile devices. It explores the development of new user interaction methods by analyzing and comparing means of achieving interactivity, high performance, low latency and energy efficiency. The techniques investigated, ranging from mobile GPGPU and dedicated sensor processing to reconfigurable image processors, provide insight into designing future mobile platforms.
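The offloading principle above can be made concrete with mobile GPGPU, one of the techniques the thesis surveys. The following is a minimal illustrative sketch, not taken from the thesis: a C host program that assumes an OpenCL-capable GPU and pushes a per-pixel RGBA-to-luminance conversion, a typical first stage of a vision pipeline, off the application processor. The kernel name, VGA frame size, and fixed-point BT.601 weights are assumptions made for this example.

```c
/* Illustrative sketch only: offloads a per-pixel RGBA-to-luminance
 * conversion (a typical first stage of a vision pipeline) from the
 * application processor to an OpenCL-capable GPU. Error checking and
 * resource release are omitted for brevity. */
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

/* OpenCL C kernel: fixed-point BT.601 luma from an RGBA pixel. */
static const char *kernel_src =
    "__kernel void lum(__global const uchar4 *in, __global uchar *out) {\n"
    "    size_t i = get_global_id(0);\n"
    "    uchar4 p = in[i];\n"
    "    out[i] = (uchar)((77 * p.x + 150 * p.y + 29 * p.z) >> 8);\n"
    "}\n";

int main(void) {
    enum { N = 640 * 480 };                 /* assumed VGA preview frame */
    cl_uchar4 *pixels = malloc(N * sizeof(cl_uchar4));
    cl_uchar  *luma   = malloc(N);
    for (int i = 0; i < N; ++i) {           /* synthetic test frame */
        pixels[i].s[0] = (cl_uchar)(i & 255);
        pixels[i].s[1] = 128;
        pixels[i].s[2] = 64;
        pixels[i].s[3] = 255;
    }

    cl_platform_id plat;
    cl_device_id dev;
    clGetPlatformIDs(1, &plat, NULL);
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Move the frame to device memory and run one work-item per pixel. */
    cl_mem d_in  = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                  N * sizeof(cl_uchar4), pixels, NULL);
    cl_mem d_out = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, N, NULL, NULL);
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "lum", NULL);
    clSetKernelArg(k, 0, sizeof(d_in), &d_in);
    clSetKernelArg(k, 1, sizeof(d_out), &d_out);

    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, d_out, CL_TRUE, 0, N, luma, 0, NULL, NULL);

    printf("first luma value: %u\n", (unsigned)luma[0]);
    return 0;
}
```

On a real handset the same pattern would leave the application processor free (or asleep) while the GPU, ISP, or a dedicated sensor processor handles the per-pixel work.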
Original language | English |
---|---|
Qualification | Doctor Degree |
Awarding Institution | |
Supervisors/Advisors | |
Award date | 15 Dec 2014 |
Publisher | |
Print ISBNs | 978-952-62-0671-4 |
Electronic ISBNs | 978-952-62-0672-1 |
Publication status | Published - Dec 2014 |
MoE publication type | G4 Doctoral dissertation (monograph) |
Keywords
- computer vision
- energy-efficiency
- mobile device
- user interface