Unity3d ARKit Face Tracking and adding configurable actions post to detecting expressions

Unity3d ARKit Face Tracking with configurable actions that run when specific blend shapes, or an expression as a whole, are detected. For specific blend shape detection, we register one area of the face together with lower and upper bounds; if the detected blend shape coefficient falls within those bounds, an action is executed using the method name and delay specified in the action configuration, as sketched in the example below.
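The following is a minimal sketch of how such an action configuration could look in Unity C#. It assumes the blend shape coefficients arrive each frame as a dictionary of blend shape names to values in the 0..1 range (as ARKit reports them); the class, field, and method names here (BlendShapeAction, ExpressionActionRunner, EvaluateBlendShapes, OnSmileDetected) are illustrative and not taken from the project.

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Hypothetical configuration entry for one blend shape action:
// which area of the face to watch, the bounds, and what to call when it matches.
[Serializable]
public class BlendShapeAction
{
    public string blendShapeName;                 // e.g. "mouthSmile_L"
    [Range(0f, 1f)] public float lowerBound = 0.5f;
    [Range(0f, 1f)] public float upperBound = 1f;
    public string methodName;                     // method to invoke on this component
    public float delaySeconds = 0f;               // delay before the method runs
}

public class ExpressionActionRunner : MonoBehaviour
{
    public List<BlendShapeAction> actions = new List<BlendShapeAction>();

    // Call this with the latest ARKit blend shape coefficients
    // (blend shape name -> coefficient in the 0..1 range).
    public void EvaluateBlendShapes(Dictionary<string, float> coefficients)
    {
        foreach (var action in actions)
        {
            float value;
            if (!coefficients.TryGetValue(action.blendShapeName, out value))
                continue;

            // If the detected coefficient is within the configured bounds,
            // invoke the configured method after the configured delay.
            if (value >= action.lowerBound && value <= action.upperBound)
            {
                Invoke(action.methodName, action.delaySeconds);
            }
        }
    }

    // Example target method referenced by an action's methodName.
    private void OnSmileDetected()
    {
        Debug.Log("Smile detected - change the environment, log to a database, call a web service, etc.");
    }
}
```

In practice you would likely debounce the check (for example, only re-invoke once the coefficient has dropped back below the lower bound) so a held expression does not queue the same method every frame.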

Why should we care about actions when detecting facial expressions? There are many possible use cases, but the core idea is that you can execute a method whenever an expression is captured. This is useful for triggering other routines such as changing the environment, recording the data in a database, calling a web service, and so on.
