
VISUAL TRACKING AND RECOGNITION OF THE HUMAN HAND
Combining Top-down Synthesis with Bottom-up Analysis
Most computer-vision-based hand gesture recognition systems are either confined to a fixed set of static gestures or only able to track 2D global hand motion. To recognize natural hand gestures, such as those in American Sign Language, we need to track articulated hand motion in real time. The task is challenging because of the high degrees of freedom of the hand, self-occlusion, variable viewpoints, and lighting. This book focuses on the automatic recovery of 3D hand motion from one or more views.
The problem of hand tracking is formulated as Bayesian filtering in an analysis-by-synthesis framework. We propose an Eigen Dynamic Analysis model and a new feature called the likelihood edge.
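To make the analysis-by-synthesis idea concrete, the sketch below shows a generic Bayesian-filtering (particle-filter) update in which each pose hypothesis is rendered and scored against the observed image. It is only an illustration of the general framework, not the book's Eigen Dynamic Analysis model or likelihood-edge feature; the renderer `render_hand_edges`, the edge-map comparison, and the random-walk motion model are all assumptions made for this example.

```python
import numpy as np

def analysis_by_synthesis_step(particles, weights, observed_edges,
                               render_hand_edges, motion_noise=0.05):
    """One generic Bayesian-filtering (particle filter) update for hand tracking.

    particles:          (N, D) hypothesized hand poses (D = degrees of freedom)
    weights:            (N,)   importance weights carried over from the last frame
    observed_edges:     edge map extracted from the current image (bottom-up analysis)
    render_hand_edges:  hypothetical function pose -> synthesized edge map (top-down synthesis)
    """
    n, d = particles.shape

    # Prediction: diffuse each pose hypothesis with a simple random-walk model
    # (a stand-in for whatever learned dynamic model is actually used).
    particles = particles + motion_noise * np.random.randn(n, d)

    # Analysis: score each hypothesis by comparing its synthesized edges
    # with the observed edges; closer agreement means higher likelihood.
    for i in range(n):
        synthesized = render_hand_edges(particles[i])
        distance = np.abs(synthesized - observed_edges).mean()
        weights[i] *= np.exp(-distance)
    weights /= weights.sum()

    # Resampling: concentrate particles on high-likelihood poses.
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)
```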
To automatically initialize tracking and recover from loss of track, we propose a bottom-up posture recognition algorithm that collectively matches the local features in a single image against those in an image database. Through quantitative and visual experimental results, we demonstrate the effectiveness of our approach and point out its limitations.
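As a rough illustration of matching local features in a single image against a posture database, the sketch below uses plain nearest-neighbor scoring over stored descriptor sets. This is an assumed, simplified scheme for exposition only; the actual collective matching algorithm in the book is not reproduced here, and the descriptor format and database layout are hypothetical.

```python
import numpy as np

def recognize_posture(query_features, database):
    """Match local features from one image against a posture database.

    query_features: (M, K) array of local descriptors from the query image
    database:       list of (posture_label, (Ni, K) descriptor array) pairs
    Returns the posture label whose stored features best explain the query,
    using a simple nearest-neighbor score per database entry.
    """
    scores = {}
    for label, feats in database:
        # Distance from every query descriptor to its nearest neighbor in this entry.
        dists = np.linalg.norm(query_features[:, None, :] - feats[None, :, :], axis=2)
        score = -dists.min(axis=1).sum()  # smaller total distance -> higher score
        scores[label] = max(scores.get(label, -np.inf), score)
    return max(scores, key=scores.get)
```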