Visual Perception for Humanoid Robots

Author: David Israel González Aguirre

Publisher: Springer

Published: 2018-09-01

Total Pages: 220

ISBN-10: 3319978411

This book provides an overview of model-based environmental visual perception for humanoid robots. The visual perception of a humanoid robot creates a bidirectional bridge connecting sensor signals with internal representations of environmental objects. The objective of such perception systems is to answer two fundamental questions: what is it, and where is it? To answer them through this sensor-to-representation bridge, coordinated processes extract and exploit cues that match the robot's mental representations to physical entities. These include sensor and actuator modeling, calibration, filtering, and feature extraction for state estimation. The book discusses the following topics in depth:

• Active Sensing: Robust probabilistic methods for optimal, high-dynamic-range image acquisition that work with inexpensive cameras, enabling dependable sensing in the arbitrary environmental conditions encountered in human-centric spaces. The book quantitatively shows the importance of equipping robots with dependable visual sensing. (A toy exposure-fusion sketch follows this description.)

• Feature Extraction & Recognition: Parameter-free edge extraction methods based on structural graphs that represent geometric primitives effectively and efficiently. Eccentricity-based segmentation provides excellent recognition even on noisy, low-resolution images. Stereoscopic vision, the Euclidean metric, and graph-shape descriptors are shown to be powerful mechanisms for difficult recognition tasks.

• Global Self-Localization & Depth Uncertainty Learning: Simultaneous feature matching for global localization and 6D self-pose estimation, addressed by a novel geometric and probabilistic concept based on the intersection of Gaussian spheres. The path from intuition to the closed-form optimal solution for the robot's location is described, including a supervised learning method for depth uncertainty modeling based on extensive ground-truth training data from a motion capture system.

The methods and experiments are presented in self-contained chapters with comparisons to the state of the art. The algorithms were implemented and empirically evaluated on two humanoid robots, ARMAR III-A and III-B. The robustness, performance, and derived results received an award at the IEEE Conference on Humanoid Robots, and the contributions have been used for numerous visual manipulation tasks demonstrated at venues such as ICRA, CeBIT, IAS, and Automatica.
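The active-sensing theme above can be illustrated with a deliberately simple sketch: fusing a bracketed exposure stack into a single relative radiance map using per-pixel confidence weights. This is only a generic illustration under stated assumptions, not the probabilistic method developed in the book; the function names (`hat_weight`, `fuse_exposures`), the linear response model, and the hat-shaped weighting are assumptions introduced here.

```python
# Minimal sketch of exposure fusion for high-dynamic-range sensing.
# Assumptions (not from the book): a linear camera response, a simple
# hat-shaped confidence weight, and the function/parameter names used here.
import numpy as np

def hat_weight(pixels):
    """Down-weight under- and over-exposed pixels (values in [0, 1])."""
    return 1.0 - np.abs(2.0 * pixels - 1.0)

def fuse_exposures(images, exposure_times):
    """Fuse a bracketed exposure stack into a relative radiance map.

    images: list of float arrays in [0, 1], all the same shape.
    exposure_times: list of exposure times in seconds.
    """
    numerator = np.zeros_like(images[0], dtype=np.float64)
    denominator = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = hat_weight(img)
        numerator += w * img / t        # radiance estimate from this exposure
        denominator += w
    return numerator / np.maximum(denominator, 1e-6)

# Usage: three synthetic exposures of the same scene.
rng = np.random.default_rng(0)
radiance = rng.uniform(0.0, 4.0, size=(4, 4))
times = [0.01, 0.04, 0.16]
stack = [np.clip(radiance * t / 0.16, 0.0, 1.0) for t in times]
print(fuse_exposures(stack, times))
```

Down-weighting saturated and under-exposed pixels is what lets several cheap, low-dynamic-range shots approximate a high-dynamic-range measurement.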

Visual Perception for Manipulation and Imitation in Humanoid Robots

Author: Pedram Azad

Publisher: Springer Science & Business Media

Published: 2009-11-19

Total Pages: 273

ISBN-10: 3642042295

Dealing with visual perception in robots and its applications to manipulation and imitation, this monograph focuses on stereo-based methods and systems for object recognition and 6-DoF pose estimation, as well as for markerless human motion capture.
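As a minimal illustration of the stereo geometry that such stereo-based recognition and pose-estimation systems build on, the sketch below triangulates a single correspondence from a rectified image pair. The calibration values and the `triangulate_rectified` helper are assumptions for illustration, not code from the monograph.

```python
# Minimal sketch of the stereo geometry underlying such systems:
# recovering a 3D point from a correspondence in a rectified image pair.
# The camera parameters and function names below are illustrative assumptions.
import numpy as np

def triangulate_rectified(u_left, v_left, u_right, fx, fy, cx, cy, baseline):
    """Return the 3D point (in the left camera frame) for one correspondence."""
    disparity = u_left - u_right          # pixels; positive for a finite-depth point
    if disparity <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    z = fx * baseline / disparity         # depth along the optical axis
    x = (u_left - cx) * z / fx
    y = (v_left - cy) * z / fy
    return np.array([x, y, z])

# Usage with made-up calibration values (fx, fy in pixels, baseline in metres).
point = triangulate_rectified(u_left=412.0, v_left=240.0, u_right=380.0,
                              fx=525.0, fy=525.0, cx=320.0, cy=240.0,
                              baseline=0.09)
print(point)   # roughly [0.26, 0.0, 1.48]
```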

Visual Perception and Robotic Manipulation

Author: Geoffrey Taylor

Publisher: Springer

Published: 2008-08-18

Total Pages: 231

ISBN-10: 3540334556

This book moves toward the realization of domestic robots by presenting an integrated view of computer vision and robotics, covering fundamental topics including optimal sensor design, visual servoing, 3D object modelling and recognition, and multi-cue tracking, with an emphasis on robustness throughout. Covering both theory and implementation, and supported by experimental results and comprehensive multimedia material (video clips, VRML data, C++ code, and lecture slides), the book is a practical reference for roboticists and a valuable teaching resource.
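Since visual servoing is one of the topics covered, a compact sketch of the classical image-based visual servoing law, v = -λ L⁺ e, may help fix ideas. This is the generic textbook formulation with assumed feature values and depths, not the implementation shipped with the book's multimedia material.

```python
# A compact sketch of a classical image-based visual servoing (IBVS) law,
# v = -lambda * pinv(L) * e, for point features in normalized image coordinates.
# This is a generic textbook formulation; the feature values and depths below
# are illustrative assumptions.
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix of a normalized image point at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera twist (vx, vy, vz, wx, wy, wz) driving features toward their goals."""
    error = (features - desired).reshape(-1)
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -gain * np.linalg.pinv(L) @ error

# Usage: four tracked points, slightly offset from their desired positions.
current = np.array([[0.12, 0.10], [-0.11, 0.10], [-0.10, -0.12], [0.11, -0.11]])
desired = np.array([[0.10, 0.10], [-0.10, 0.10], [-0.10, -0.10], [0.10, -0.10]])
depths = [1.2, 1.2, 1.3, 1.3]
print(ibvs_velocity(current, desired, depths))
```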

Fundamentals Of Robotics: Linking Perception To Action

Author: Xie Ming

Publisher: World Scientific Publishing Company

Published: 2003-04-11

Total Pages: 716

ISBN-10: 9813102349

Tomorrow's robots, including humanoid robots, will perform tasks such as tutoring children, working as tour guides, driving humans to and from work, and doing the family shopping. Tomorrow's robots will enhance lives in ways we never dreamed possible. No time to attend the decisive meeting on Asian strategy? Let your robot go for you and make the decisions. Not feeling well enough to go to the clinic? Let Dr Robot come to you, make a diagnosis, and get you the necessary medicine for treatment. No time to coach the soccer team this week? Let the robot do it for you.

Tomorrow's robots will be the most exciting and revolutionary things to happen to the world since the invention of the automobile. They will change the way we work, play, think, and live. Because of this, robotics is today one of the most dynamic fields of scientific research, and it is taught in almost every university in the world. Most mechanical engineering departments offer courses at both the undergraduate and graduate levels, and increasingly many computer and electrical engineering departments do so as well.

This book guides the curious beginner from yesterday to tomorrow. It covers the practical knowledge needed to understand, develop, and use robots as versatile equipment for automating a variety of industrial processes or tasks, and it also discusses the possibilities we can look forward to once we are capable of creating a vision-guided, learning machine.

Active Vision for Scene Understanding

Author: Markus Grotz

Publisher: KIT Scientific Publishing

Published: 2021-12-21

Total Pages: 202

ISBN-10: 3731511010

Visual perception is one of the most important sources of information for both humans and robots. A particular challenge is the acquisition and interpretation of complex, unstructured scenes. This work contributes active vision methods for humanoid robots: a semantic model of the scene is built and then extended by successively changing the robot's viewpoint in order to explore the scene's interaction possibilities.
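The general active-vision loop described above (choose a new view, extend the scene model, repeat) can be caricatured with a greedy next-best-view sketch. The voxel grid, visibility test, and candidate views below are assumptions for illustration, not the method developed in this dissertation.

```python
# A toy sketch of the general active-vision idea: among candidate viewpoints,
# greedily pick the one expected to reveal the most still-unobserved voxels.
# The grid, visibility model, and candidate views are illustrative assumptions.
import numpy as np

def visible_voxels(view_position, voxel_centers, max_range=2.0):
    """Indices of voxels within sensing range of a candidate view (toy model)."""
    distances = np.linalg.norm(voxel_centers - view_position, axis=1)
    return set(np.nonzero(distances <= max_range)[0])

def next_best_view(candidate_views, voxel_centers, observed):
    """Return the candidate view index with the largest expected gain."""
    gains = [len(visible_voxels(v, voxel_centers) - observed)
             for v in candidate_views]
    return int(np.argmax(gains)), max(gains)

# Usage: a small voxel grid, two candidate views, and some voxels already seen.
voxels = np.array([[x, y, 0.0] for x in range(4) for y in range(4)], dtype=float)
views = [np.array([0.0, 0.0, 1.0]), np.array([3.0, 3.0, 1.0])]
observed = {0, 1, 4, 5}
best, gain = next_best_view(views, voxels, observed)
print(f"view {best} uncovers {gain} new voxels")
```

A real system would replace the range test with ray casting against the current scene model and would trade off information gain against the cost of moving the robot.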

Modelling Human Motion

Author: Nicoletta Noceti

Publisher: Springer Nature

Published: 2020-07-09

Total Pages: 351

ISBN-10: 3030467325

The new frontiers of robotics research foresee future scenarios where artificial agents will leave the laboratory to progressively take part in the activities of our daily life. This will require robots to have very sophisticated perceptual and action skills in many intelligence-demanding applications, with particular reference to the ability to seamlessly interact with humans. It will be crucial for the next generation of robots to understand their human partners and, at the same time, to be intuitively understood by them. In this context, a deep understanding of human motion is essential for robotics applications, where the ability to detect, represent and recognize human dynamics, together with the capability to generate appropriate movements in response, sets the scene for higher-level tasks.

This book provides a comprehensive overview of this challenging research field, closing the loop between perception and action, and between human studies and robotics. It is organized in three main parts. The first part focuses on human motion perception, with contributions analyzing the neural substrates of human action understanding, how perception is influenced by motor control, and how it develops over time and is exploited in social contexts. The second part considers motion perception from the computational perspective, presenting cutting-edge solutions from the Computer Vision and Machine Learning research fields that address higher-level perceptual tasks. Finally, the third part takes into account the implications for robotics, with chapters on how motor control is achieved in the latest generation of artificial agents and how such technologies have been exploited to favor human-robot interaction.

The book considers the complete human-robot cycle, from an examination of how humans perceive motion and act in the world, to models for motion perception and control in artificial agents. In this respect, it provides insights into the perception and action loop in humans and machines, joining together aspects that are often addressed in independent investigations. As a consequence, the book positions itself at the intersection of such different disciplines as Robotics, Neuroscience, Cognitive Science, Psychology, Computer Vision, and Machine Learning. By bridging these research domains, it offers a common reference point for researchers interested in human motion for different applications and from different standpoints, spanning Neuroscience, Human Motor Control, Robotics, Human-Robot Interaction, Computer Vision and Machine Learning.

The chapter 'The Importance of the Affective Component of Movement in Action Understanding' is available open access under a CC BY 4.0 license at link.springer.com.

Robot Learning by Visual Observation

Author: Aleksandar Vakanski

Publisher: John Wiley & Sons

Published: 2017-01-13

Total Pages: 208

ISBN-10: 1119091780

This book presents programming by demonstration for robot learning from observations, with a focus on the trajectory level of task abstraction. It:

• Discusses methods for optimizing task reproduction, such as reformulating task planning as a constrained optimization problem

• Focuses on regression approaches, such as Gaussian mixture regression, spline regression, and locally weighted regression (see the sketch after this list)

• Concentrates on the use of vision sensors for capturing motions and actions during task demonstration by a human task expert
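Of the regression approaches listed above, locally weighted regression is the easiest to sketch; the snippet below reproduces a demonstrated 1-D trajectory from noisy samples. The kernel bandwidth, data, and function names are illustrative assumptions, not code from the book.

```python
# A minimal sketch of locally weighted regression (one of the regression
# families the book covers) used to reproduce a demonstrated 1-D trajectory.
# The bandwidth, data, and function names are illustrative assumptions.
import numpy as np

def lwr_predict(t_query, t_demo, x_demo, bandwidth=0.05):
    """Predict the trajectory value at t_query from demonstrated samples."""
    # Gaussian kernel weights centred on the query time.
    w = np.exp(-0.5 * ((t_demo - t_query) / bandwidth) ** 2)
    # Local linear model x = a*t + b; scaling rows by sqrt(w) makes the
    # least-squares solve minimize the w-weighted squared error.
    A = np.stack([t_demo, np.ones_like(t_demo)], axis=1)
    W = np.diag(np.sqrt(w))
    coeffs, *_ = np.linalg.lstsq(W @ A, W @ x_demo, rcond=None)
    return coeffs[0] * t_query + coeffs[1]

# Usage: a noisy demonstrated trajectory, reproduced on a regular time grid.
rng = np.random.default_rng(1)
t_demo = np.linspace(0.0, 1.0, 50)
x_demo = np.sin(2.0 * np.pi * t_demo) + 0.05 * rng.standard_normal(50)
reproduction = [lwr_predict(t, t_demo, x_demo) for t in np.linspace(0.0, 1.0, 11)]
print(np.round(reproduction, 2))
```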