In this project we aim to create a gesture-based interaction with a 2D display. The interaction will rely on the multimodal head-pose and hand-tracking capabilities of the IVCAM (Intel RealSense) camera. By combining the head's position and orientation with the hand's location and gesture, we obtain a head-to-hand direction that tells us which on-screen element the user wishes to interact with. The content the user will interact with consists of large icons; a solution should also be proposed for selecting finer details.
Project Goal:
A working application based on the Intel RealSense camera and the RealSense SDK.
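The core geometric idea above, casting a ray from the tracked head through the tracked hand and intersecting it with the screen plane to find the targeted icon, can be sketched as follows. This is a minimal illustration, not RealSense SDK code: the coordinate frame (screen plane at z = 0, user at positive z), the icon layout, and the selection radius are all assumptions for the example.

```python
# Hypothetical sketch of target selection from head and hand positions.
# Assumptions: camera-frame coordinates in metres, screen plane at z = 0,
# the user (head and hand) at z > 0. Icon positions and the selection
# radius are illustrative, not taken from the project specification.

def screen_intersection(head, hand):
    """Intersect the head->hand ray with the screen plane z = 0.

    head, hand: (x, y, z) tuples. Returns the (x, y) hit point on the
    screen, or None if the ray is parallel to the screen or points
    away from it.
    """
    hx, hy, hz = head
    px, py, pz = hand
    dz = pz - hz
    if abs(dz) < 1e-9:          # ray parallel to the screen plane
        return None
    t = -hz / dz                # solve hz + t * dz == 0
    if t <= 0:                  # intersection behind the user
        return None
    return (hx + t * (px - hx), hy + t * (py - hy))

def nearest_icon(point, icons, radius=0.12):
    """Pick the icon whose centre is closest to the hit point, if it
    lies within `radius`. Large icons (a generous radius) make the
    selection tolerant of head/hand tracking noise."""
    if point is None:
        return None
    best, best_dist = None, radius
    for name, (cx, cy) in icons.items():
        dist = ((point[0] - cx) ** 2 + (point[1] - cy) ** 2) ** 0.5
        if dist < best_dist:
            best, best_dist = name, dist
    return best
```

For finer details than large icons, the same hit point could drive a magnified sub-region or a snapping cursor, since the ray itself is too noisy for pixel-level pointing.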