
Visual Servoing by Means of Structured Light
Visual-based Robot Task Optimisation from Controlled Light Pattern Projection
Visual servoing is a popular technique for performing robotic tasks such as relative positioning or target tracking. Classic visual servoing assumes that visual features can be extracted from camera images of the target object or scene. However, what happens when the target object is uniform? Can a robot position itself with respect to a white wall, or avoid a non-textured object, using just visual feedback? What happens when the scenario is complex or a priori unknown, such as a region of the sea floor? In cases like these, the assumptions of visual servoing fail. This book studies the use of structured light to provide visual features. A rigorous comparison of the most relevant structured light patterns is provided, and structured light is then integrated into a visual servoing framework. The book shows that a suitable design of the structured light pattern can provide robust and unambiguous visual features. Furthermore, the control law can be optimised to obtain desirable properties such as decoupling and robustness against image noise and calibration errors.
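
For readers unfamiliar with the control law the description refers to, the following is a minimal sketch of the classic image-based visual servoing law v = -λ L̂⁺(s − s*), where s is the current feature vector, s* its desired value, and L̂ an estimate of the interaction matrix. The names and the point-feature example are generic placeholders for illustration; they are not the book's optimised, structured-light-specific formulation.

```python
# Minimal sketch of a classic image-based visual servoing (IBVS) control law.
# The feature vector `s`, desired features `s_star`, and interaction-matrix
# estimate `L_hat` are assumed to be provided (e.g. image coordinates of
# points produced by a projected structured light pattern).
import numpy as np

def ibvs_velocity(s: np.ndarray, s_star: np.ndarray,
                  L_hat: np.ndarray, gain: float = 0.5) -> np.ndarray:
    """Return the camera velocity twist v = -gain * pinv(L_hat) @ (s - s_star)."""
    error = s - s_star                       # feature error in the image
    return -gain * np.linalg.pinv(L_hat) @ error

# Example dimensions: four point features (x, y each) give s with 8 entries
# and an 8x6 interaction matrix; the result is the 6-DOF twist
# (vx, vy, vz, wx, wy, wz) sent to the robot/camera.
```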