Algorithm Overview

The tracking algorithm takes an array of records, each containing the following (a hypothetical record layout is sketched after the list):

  1. Gesture (including the side of the face it is on)
  2. Face
  3. Timestamp
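
A minimal sketch of what one such record might look like, assuming a Python wrapper; the type and field names (GestureObservation, side, face_box) are illustrative, not the library's actual API:

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class GestureObservation:
        """One entry in the array passed to the tracker (hypothetical layout)."""
        gesture: str                         # e.g. "open_palm", "fist"
        side: str                            # side of the face the gesture is on: "left" or "right"
        face_box: Tuple[int, int, int, int]  # face location as (x, y, w, h)
        timestamp: float                     # capture time, in seconds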

Each gesture is tracked in a queue; an incoming gesture is assigned to a queue based on its location relative to the face and on the location of the face itself.
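
A sketch of how that routing might look, assuming the queue key combines a face identifier with the reported side of the face (face_id is a hypothetical parameter, and the real keying may differ):

    from collections import defaultdict, deque

    # One queue per (face, side) bucket.
    queues = defaultdict(deque)

    def enqueue(observation, face_id):
        """Route an observation to the queue for its face and side of the face."""
        queues[(face_id, observation.side)].append(observation)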

Gestures more than 3 seconds older than the newest gesture are discarded from the queue. The library supports hooking callback functions into the gesture tracking: if one gesture is recognized and held for 500 ms, and a different gesture is then recognized and held for 500 ms, all within a 3-second span, the matching callback (if one is registered) is executed.
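
A rough illustration of the pruning and callback rules, continuing the sketch above; the constants, the held_runs helper, and the callbacks registry are illustrative names rather than the library's real API:

    WINDOW_S = 3.0   # observations more than this much older than the newest are dropped
    HOLD_S = 0.5     # a gesture must be held this long to count as recognized

    callbacks = {}   # e.g. {("open_palm", "fist"): some_function}

    def prune(queue):
        """Discard observations more than WINDOW_S older than the newest one."""
        if not queue:
            return
        newest = queue[-1].timestamp
        while queue and newest - queue[0].timestamp > WINDOW_S:
            queue.popleft()

    def held_runs(queue):
        """Return (gesture, start_time) for each run held for at least HOLD_S."""
        runs = []
        current, run_start, recorded = None, None, False
        for obs in queue:
            if obs.gesture != current:
                current, run_start, recorded = obs.gesture, obs.timestamp, False
            elif not recorded and obs.timestamp - run_start >= HOLD_S:
                runs.append((current, run_start))
                recorded = True
        return runs

    def check_callbacks(queue):
        """Fire a callback when two different held gestures complete within WINDOW_S."""
        runs = held_runs(queue)
        for (g1, t1), (g2, t2) in zip(runs, runs[1:]):
            if g1 != g2 and (t2 + HOLD_S) - t1 <= WINDOW_S:
                cb = callbacks.get((g1, g2))
                if cb is not None:
                    cb()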

Limitations

  • Does not handle body crossover
  • Does not provide spatial feedback

Future Work

  • Spatial feedback would be a great feature to add
    • Could use the highest point in the contour (this is currently returned with the gesture)
    • Could also use the center of mass of the contour, but this is skin dependent and could behave unexpectedly if contours intersect (both options are sketched below)
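
A sketch of the two candidate approaches using OpenCV, assuming contour is a contour array as returned by cv2.findContours; neither is implemented in the library today:

    import cv2

    def highest_point(contour):
        """Topmost point of the contour (smallest y in image coordinates)."""
        idx = contour[:, :, 1].argmin()
        return tuple(contour[idx][0])

    def center_of_mass(contour):
        """Centroid of the contour computed from its image moments."""
        m = cv2.moments(contour)
        if m["m00"] == 0:
            return None  # degenerate contour with no area
        return (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"]))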