Results 1 - 10 of 62
GraspIt! -- A Versatile Simulator for Robotic Grasping, 2004
"... Research in robotic grasping has flourished in the last 25 years. A recent survey by Bicchi [1] covered over 140 papers, and many more than that have been published. Stemming from our desire to implement some of the work in grasp analysis for particular hand designs, we created an interactive graspi ..."
Abstract
-
Cited by 179 (20 self)
- Add to MetaCart
Research in robotic grasping has flourished in the last 25 years. A recent survey by Bicchi [1] covered over 140 papers, and many more than that have been published. Stemming from our desire to implement some of the work in grasp analysis for particular hand designs, we created an interactive grasping simulator that can import a wide variety of hand and object models and can evaluate the grasps formed by these hands. This system, dubbed “GraspIt!,” has since expanded in scope to the point where we feel it could serve as a useful tool for other researchers in the field. To that end, we are making the system publicly available (GraspIt! is available for download for a variety of platforms from …).
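The grasp evaluation GraspIt! popularized is typically a wrench-space quality metric. As a rough illustration of that idea, the sketch below computes the classic Ferrari-Canny epsilon quality for a set of frictional point contacts; the contact layout, friction coefficient, and 8-edge cone discretization are all illustrative assumptions, and this is not GraspIt!'s actual interface.

```python
# A minimal sketch of a wrench-space grasp metric (Ferrari-Canny epsilon).
# Contact layout, friction value, and cone discretization are assumptions.
import numpy as np
from scipy.spatial import ConvexHull

def contact_wrenches(points, normals, mu=0.5, cone_edges=8):
    """Discretize each friction cone into `cone_edges` unit forces and
    return the 6-D wrenches (force, torque about the origin) they exert."""
    wrenches = []
    for p, n in zip(points, normals):
        n = n / np.linalg.norm(n)
        t1 = np.cross(n, [1.0, 0.0, 0.0])        # tangent basis at the
        if np.linalg.norm(t1) < 1e-6:            # contact (handle n ~ x)
            t1 = np.cross(n, [0.0, 1.0, 0.0])
        t1 /= np.linalg.norm(t1)
        t2 = np.cross(n, t1)
        for k in range(cone_edges):
            a = 2.0 * np.pi * k / cone_edges
            f = n + mu * (np.cos(a) * t1 + np.sin(a) * t2)
            f /= np.linalg.norm(f)
            wrenches.append(np.concatenate([f, np.cross(p, f)]))
    return np.array(wrenches)

def epsilon_quality(points, normals, mu=0.5):
    """Radius of the largest origin-centered ball inside the convex hull
    of the contact wrenches; > 0 certifies force closure."""
    hull = ConvexHull(contact_wrenches(points, normals, mu))
    # Each hull facet satisfies a.x + b <= 0 with |a| = 1, so -b is its
    # distance from the origin.
    dists = -hull.equations[:, -1]
    return dists.min() if (dists > 0).all() else 0.0

# Three frictional contacts spaced 120 degrees apart around the equator
# of a unit sphere, normals pointing into the object: force closure.
th = np.deg2rad([0.0, 120.0, 240.0])
pts = np.stack([np.cos(th), np.sin(th), np.zeros(3)], axis=1)
print(epsilon_quality(pts, -pts))
```

A positive epsilon certifies force closure; larger values mean the grasp resists larger disturbance wrenches per unit contact force.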
Robotic Grasping of Novel Objects using Vision
"... We consider the problem of grasping novel objects, specifically ones that are being seen for the first time through vision. Grasping a previously unknown object, one for which a 3-d model is not available, is a challenging problem. Further, even if given a model, one still has to decide where to gra ..."
Abstract
-
Cited by 176 (18 self)
- Add to MetaCart
We consider the problem of grasping novel objects, specifically ones that are being seen for the first time through vision. Grasping a previously unknown object, one for which a 3-d model is not available, is a challenging problem. Further, even if given a model, one still has to decide where to grasp the object. We present a learning algorithm that neither requires, nor tries to build, a 3-d model of the object. Given two (or more) images of an object, our algorithm attempts to identify a few points in each image corresponding to good locations at which to grasp the object. This sparse set of points is then triangulated to obtain a 3-d location at which to attempt a grasp. This is in contrast to standard dense stereo, which tries to triangulate every single point in an image (and often fails to return a good 3-d model). Our algorithm for identifying grasp locations from an image is trained via supervised learning, using synthetic images for the training set. We demonstrate this approach on two robotic manipulation platforms. Our algorithm successfully grasps a wide variety of objects, such as plates, tape-rolls, jugs, cellphones, keys, screwdrivers, staplers, a thick coil of wire, a strangely shaped power horn, and others, none of which were seen in the training set. We also apply our method to the task of unloading items from dishwashers.
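The geometric step in this pipeline, going from per-image grasp-point predictions to a 3-d grasp location, is standard two-view triangulation. Here is a minimal sketch: the DLT triangulation itself is textbook, while the camera matrices and pixel predictions below are made-up placeholders, not the paper's setup.

```python
# A hedged sketch of triangulating a predicted grasp point from two views.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Least-squares 3-D point from two pixel observations (u, v),
    given 3x4 camera projection matrices P1 and P2 (linear DLT)."""
    rows = []
    for P, (u, v) in ((P1, uv1), (P2, uv2)):
        rows.append(u * P[2] - P[0])   # each observation contributes two
        rows.append(v * P[2] - P[1])   # linear constraints on X
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    X = Vt[-1]
    return X[:3] / X[3]                # dehomogenize

# Two toy cameras looking down the z-axis, 10 cm apart (placeholders).
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), [[0.0], [0.0], [0.0]]])
P2 = K @ np.hstack([np.eye(3), [[-0.1], [0.0], [0.0]]])
X_true = np.array([0.05, 0.02, 1.0, 1.0])
uv1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
uv2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, uv1, uv2))   # ~ [0.05, 0.02, 1.0]
```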
Robotic grasping of novel objects. In Neural Information Processing Systems, 2006
"... We consider the problem of grasping novel objects, specifically ones that are being seen for the first time through vision. We present a learning algorithm that neither requires, nor tries to build, a 3-d model of the object. Instead it predicts, directly as a function of the images, a point at whic ..."
Abstract
-
Cited by 67 (23 self)
- Add to MetaCart
(Show Context)
We consider the problem of grasping novel objects, specifically ones that are being seen for the first time through vision. We present a learning algorithm that neither requires, nor tries to build, a 3-d model of the object. Instead it predicts, directly as a function of the images, a point at which to grasp the object. Our algorithm is trained via supervised learning, using synthetic images for the training set. We demonstrate on a robotic manipulation platform that this approach successfully grasps a wide variety of objects, such as wine glasses, duct tape, markers, a translucent box, jugs, knife-cutters, cellphones, keys, screwdrivers, staplers, toothbrushes, a thick coil of wire, a strangely shaped power horn, and others, none of which were seen in the training set.
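The learning step can be pictured as a per-patch classifier trained on synthetic data. The sketch below stands in for the paper's image filter responses with random synthetic features and a plain logistic classifier; the features and data are invented for illustration and do not reproduce the authors' actual pipeline.

```python
# A minimal sketch of per-patch grasp-point classification on synthetic
# training data; feature design here is an assumption, not the paper's.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training set: 500 patches x 10 "filter responses", label 1
# for grasp-point patches (here drawn from a shifted distribution).
X_pos = rng.normal(1.0, 1.0, size=(250, 10))
X_neg = rng.normal(0.0, 1.0, size=(250, 10))
X = np.vstack([X_pos, X_neg])
y = np.r_[np.ones(250), np.zeros(250)]

clf = LogisticRegression(max_iter=1000).fit(X, y)

# At test time, score every patch of a new image and take the argmax,
# mirroring "predict directly as a function of the images".
test_patches = rng.normal(0.5, 1.0, size=(100, 10))
best = np.argmax(clf.predict_proba(test_patches)[:, 1])
print("best patch index:", best)
```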
Navigation among movable obstacles: Real-time reasoning in complex environments. Int. J. Humanoid Robotics, 2005
"... In this paper, we address the problem of Navigation Among Movable Obstacles (NAMO): a practical extension to navigation for humanoids and other dexterous mobile robots. The robot is permitted to reconfigure the environment by moving obstacles and clearing free space for a path. Simpler problems have ..."
Abstract
-
Cited by 58 (15 self)
- Add to MetaCart
In this paper, we address the problem of Navigation Among Movable Obstacles (NAMO): a practical extension to navigation for humanoids and other dexterous mobile robots. The robot is permitted to reconfigure the environment by moving obstacles and clearing free space for a path. Even simpler versions of this problem have been shown to be PSPACE-hard. For real-world scenarios with large numbers of movable obstacles, complete motion planning techniques are largely intractable. This paper presents a resolution-complete planner for a subclass of NAMO problems. Our planner takes advantage of the navigational structure through state-space decomposition and heuristic search. The planning complexity is reduced to the difficulty of the specific navigation task, rather than the dimensionality of the multi-object domain. We demonstrate real-time results for spaces that contain large numbers of movable obstacles. We also present a practical framework for single-agent search that can be used in algorithmic reasoning about this domain.
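A heavily simplified way to see the free-space reasoning: on a grid, test whether the goal is already reachable, and if not, test which single movable obstacle to clear so the robot's free-space component merges with the goal's. The grid, symbols, and greedy one-obstacle strategy below are toy assumptions; the actual planner performs resolution-complete heuristic search over a much richer state space.

```python
# A toy sketch of the NAMO idea: connect free-space components by
# (virtually) clearing one movable obstacle at a time.
from collections import deque

FREE, MOVABLE = ".", "M"

def free_component(grid, start, treat_free=frozenset()):
    """BFS over FREE cells (plus any cells listed in treat_free)."""
    seen, q = {start}, deque([start])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (r + dr, c + dc)
            if (0 <= n[0] < len(grid) and 0 <= n[1] < len(grid[0])
                    and n not in seen
                    and (grid[n[0]][n[1]] == FREE or n in treat_free)):
                seen.add(n)
                q.append(n)
    return seen

grid = ["..M..",
        "#####"]          # '#' cells are immovable walls
start, goal = (0, 0), (0, 4)

if goal in free_component(grid, start):
    print("plain navigation suffices")
else:
    # Greedily test each movable obstacle: the first one whose removal
    # merges the two components is the obstacle to manipulate.
    movables = [(r, c) for r, row in enumerate(grid)
                for c, ch in enumerate(row) if ch == MOVABLE]
    for m in movables:
        if goal in free_component(grid, start, treat_free={m}):
            print("move obstacle at", m, "then walk to", goal)
            break
```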
Learning grasp strategies with partial shape information. In AAAI, 2008
"... Abstract We consider the problem of grasping novel objects in cluttered environments. If a full 3-d model of the scene were available, one could use the model to estimate the stability and robustness of different grasps (formalized as form/force-closure, etc); in practice, however, a robot facing a ..."
Abstract
-
Cited by 54 (10 self)
- Add to MetaCart
(Show Context)
We consider the problem of grasping novel objects in cluttered environments. If a full 3-d model of the scene were available, one could use the model to estimate the stability and robustness of different grasps (formalized as form/force-closure, etc.); in practice, however, a robot facing a novel object will usually be able to perceive only the front (visible) faces of the object. In this paper, we propose an approach to grasping that estimates the stability of different grasps, given only noisy estimates of the shape of the visible portions of an object, such as that obtained from a depth sensor. By combining this with a kinematic description of a robot arm and hand, our algorithm is able to compute a specific positioning of the robot's fingers so as to grasp an object. We test our algorithm on two robots (with very different arms/manipulators, including one with a multi-fingered hand). We report results on the task of grasping objects of significantly different shapes and appearances than ones in the training set, both in highly cluttered and in uncluttered environments. We also apply our algorithm to the problem of unloading items from a dishwasher.
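One concrete ingredient such a stability estimate can use, sketched here under assumptions the paper does not spell out, is an antipodal test on sensed surface patches: two fingertip contacts are promising when their (noisy) normals oppose each other within the friction cone of the axis joining them.

```python
# A minimal sketch of an antipodal-grasp test on partial depth data.
# Contact points, normals, and the scoring rule are illustrative.
import numpy as np

def antipodal_score(p1, n1, p2, n2, mu=0.5):
    """Return a score in [0, 1]: 0 if either contact violates its
    friction cone, larger when the contacts are closer to antipodal."""
    axis = p2 - p1
    axis = axis / np.linalg.norm(axis)
    half_angle = np.arctan(mu)          # friction cone half-angle
    # Finger 1 pushes along +axis, finger 2 along -axis; each pushing
    # direction must lie within the cone around the inward normal.
    a1 = np.arccos(np.clip(np.dot(-n1, axis), -1.0, 1.0))
    a2 = np.arccos(np.clip(np.dot(n2, axis), -1.0, 1.0))
    if a1 > half_angle or a2 > half_angle:
        return 0.0
    return 1.0 - max(a1, a2) / half_angle

# Two sensed surface patches on opposite sides of a box-like object.
p1, n1 = np.array([0.0, -0.03, 0.1]), np.array([0.0, -1.0, 0.0])
p2, n2 = np.array([0.0, 0.03, 0.1]), np.array([0.0, 1.0, 0.0])
print(antipodal_score(p1, n1, p2, n2))   # 1.0: perfectly antipodal
```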
The Columbia grasp database. In IEEE Int. Conf. on Robotics and Automation, 2009
"... Abstract — Collecting grasp data for learning and benchmarking purposes is very expensive. It would be helpful to have a standard database of graspable objects, along with a set of stable grasps for each object, but no such database exists. In this work we show how to automate the construction of a ..."
Abstract
-
Cited by 40 (8 self)
- Add to MetaCart
(Show Context)
Collecting grasp data for learning and benchmarking purposes is very expensive. It would be helpful to have a standard database of graspable objects, along with a set of stable grasps for each object, but no such database exists. In this work we show how to automate the construction of a database consisting of several hands, thousands of objects, and hundreds of thousands of grasps. Using this database, we demonstrate a novel grasp planning algorithm that exploits geometric similarity between a 3D model and the objects in the database to synthesize form closure grasps. Our contributions are this algorithm, and the database itself, which we are releasing to the community as a tool for both grasp planning and benchmarking.
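The retrieval idea can be sketched with a deliberately crude stand-in for the database's real shape matching: describe each model by a histogram of point distances from its centroid, find the nearest database model, and transfer its precomputed grasps. The descriptor, models, and grasp labels below are all placeholders.

```python
# A schematic sketch of grasp transfer by shape similarity; the descriptor
# and the fake database entries are assumptions, not the actual system.
import numpy as np

def shape_descriptor(points, bins=16):
    """Coarse global descriptor: normalized histogram of distances
    from the centroid, scaled to [0, 1]."""
    d = np.linalg.norm(points - points.mean(axis=0), axis=1)
    hist, _ = np.histogram(d / d.max(), bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

rng = np.random.default_rng(1)
# Fake "database": point clouds paired with precomputed grasp labels.
database = [
    (rng.normal(size=(500, 3)), ["grasp A", "grasp B"]),   # blob-like
    (rng.uniform(-1, 1, size=(500, 3)), ["grasp C"]),      # box-like
]
query = rng.uniform(-1, 1, size=(400, 3))                  # box-like query

q = shape_descriptor(query)
dists = [np.linalg.norm(q - shape_descriptor(pts)) for pts, _ in database]
_, grasps = database[int(np.argmin(dists))]
print("transfer grasps:", grasps)   # expected: ['grasp C']
```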
Grasp planning via decomposition trees. In IEEE Int. Conf. on Robotics and Automation, 2007
"... Abstract-Planning realizable and stable grasps on 3D objects is crucial for many robotics applications, but grasp planners often ignore the relative sizes of the robotic hand and the object being grasped or do not account for physical joint and positioning limitations. We present a grasp planner th ..."
Abstract
-
Cited by 39 (3 self)
- Add to MetaCart
(Show Context)
Planning realizable and stable grasps on 3D objects is crucial for many robotics applications, but grasp planners often ignore the relative sizes of the robotic hand and the object being grasped, or do not account for physical joint and positioning limitations. We present a grasp planner that can consider the full range of parameters of a real hand and an arbitrary object, including physical and material properties as well as environmental obstacles and forces, and produce an output grasp that can be immediately executed. We do this by decomposing a 3D model into a superquadric 'decomposition tree', which we use to prune the intractably large space of possible grasps into a subspace that is likely to contain many good grasps. This subspace can be sampled and evaluated in GraspIt!, our 3D grasping simulator, to find a set of highly stable grasps, all of which are physically realizable. We show grasp results on various models using a Barrett hand.
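To make the pruning concrete, the sketch below swaps the superquadric decomposition for plain axis-aligned boxes (purely an illustrative simplification) and proposes one pregrasp per box face, shrinking the grasp space to a handful of candidates that a simulator like GraspIt! could then rank.

```python
# A simplified sketch of primitive-based grasp-space pruning; boxes stand
# in for superquadrics, and the pregrasp recipe is an assumption.
import numpy as np

FACE_NORMALS = np.array([[1, 0, 0], [-1, 0, 0],
                         [0, 1, 0], [0, -1, 0],
                         [0, 0, 1], [0, 0, -1]], dtype=float)

def candidate_grasps(boxes, standoff=0.05):
    """For each (center, half_extents) box, propose one pregrasp per
    face: a position offset along the face normal, approaching inward."""
    grasps = []
    for center, half in boxes:
        for n in FACE_NORMALS:
            position = center + n * (half @ np.abs(n) + standoff)
            grasps.append({"position": position, "approach": -n})
    return grasps

# A mug-like object approximated as a body box plus a handle box.
boxes = [(np.array([0.0, 0.0, 0.05]), np.array([0.04, 0.04, 0.05])),
         (np.array([0.06, 0.0, 0.05]), np.array([0.02, 0.01, 0.03]))]
for g in candidate_grasps(boxes)[:3]:
    print(g)   # 12 candidates total instead of a continuous grasp space
```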
Learning to grasp novel objects using vision. In 10th International Symposium on Experimental Robotics (ISER), 2006
"... Summary. We consider the problem of grasping novel objects, specifically, ones that are being seen for the first time through vision. We present a learning algorithm which predicts, as a function of the images, the position at which to grasp the object. This is done without building or requiring a 3 ..."
Abstract
-
Cited by 34 (12 self)
- Add to MetaCart
(Show Context)
We consider the problem of grasping novel objects, specifically, ones that are being seen for the first time through vision. We present a learning algorithm which predicts, as a function of the images, the position at which to grasp the object. This is done without building or requiring a 3-d model of the object. Our algorithm is trained via supervised learning, using synthetic images for the training set. Using our robotic arm, we successfully demonstrate this approach by grasping a variety of differently shaped objects, such as duct tape, markers, mugs, pens, wine glasses, knife-cutters, jugs, keys, toothbrushes, books, and others, including many object types not seen in the training set.
An Overview of 3D Object Grasp Synthesis Algorithms, 2011
"... This overview presents computational algorithms for generating 3D object grasps with autonomous multi-fingered robotic hands. Robotic grasping has been an active research subject for decades, and a great deal of effort has been spent on grasp synthesis algorithms. Existing papers focus on reviewing ..."
Abstract
-
Cited by 31 (3 self)
- Add to MetaCart
This overview presents computational algorithms for generating 3D object grasps with autonomous multi-fingered robotic hands. Robotic grasping has been an active research subject for decades, and a great deal of effort has been spent on grasp synthesis algorithms. Existing surveys review the mechanics of grasping and finger-object contact interactions [7] or robot hand design and control [1]. Robot grasp synthesis algorithms have been reviewed in [63], but since then important progress has been made in applying learning techniques to the grasping problem. This overview covers both analytical and empirical grasp synthesis approaches.
Efficient grasping from RGBD images: Learning using a new rectangle representation. In IEEE Int. Conf. on Robotics and Automation, 2011
"... Abstract — Given an image and an aligned depth map of an object, our goal is to estimate the full 7-dimensional gripper configuration—its 3D location, 3D orientation and the gripper opening width. Recently, learning algorithms have been successfully applied to grasp novel objects—ones not seen by th ..."
Abstract
-
Cited by 25 (10 self)
- Add to MetaCart
(Show Context)
Given an image and an aligned depth map of an object, our goal is to estimate the full 7-dimensional gripper configuration: its 3D location, 3D orientation, and the gripper opening width. Recently, learning algorithms have been successfully applied to grasp novel objects, ones not seen by the robot before. While these approaches use low-dimensional representations such as a ‘grasping point’ or a ‘pair of points’ that are perhaps easier to learn, they only partly represent the gripper configuration and hence are sub-optimal. We propose to learn a new ‘grasping rectangle’ representation: an oriented rectangle in the image plane. It takes into account the location, the orientation, as well as the gripper opening width. However, inference with such a representation is computationally expensive. In this work, we present a two-step process in which the first step prunes the search space efficiently using certain features that are fast to compute. For the remaining few cases, the second step uses advanced features to accurately select a good grasp. In our extensive experiments, we show that our robot successfully uses our algorithm to pick up a variety of novel objects.
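The two-step inference can be sketched as a score cascade: a cheap first-stage score prunes most candidate rectangles, and a costlier second-stage score ranks only the survivors. Both scoring functions and the "features" below are invented placeholders; only the cascade structure mirrors the paper.

```python
# A skeletal sketch of two-stage grasping-rectangle search; real features
# would come from the RGB-D image, so these scores are stand-ins.
import numpy as np

rng = np.random.default_rng(2)

# Candidate rectangles: (x, y, theta, width) sampled over the image.
candidates = np.column_stack([
    rng.uniform(0, 640, 2000),          # x (pixels)
    rng.uniform(0, 480, 2000),          # y (pixels)
    rng.uniform(0, np.pi, 2000),        # orientation
    rng.uniform(20, 80, 2000),          # gripper opening (pixels)
])

def cheap_score(rects):
    # Stage 1: fast-to-compute features (placeholder linear score).
    return rects @ np.array([0.001, 0.001, 0.2, 0.01])

def expensive_score(rects):
    # Stage 2: "advanced" features, assumed much slower to compute.
    return cheap_score(rects) + 0.05 * np.sin(rects[:, 2] * 3)

s1 = cheap_score(candidates)
keep = candidates[s1 >= np.quantile(s1, 0.95)]   # prune 95% cheaply
best = keep[np.argmax(expensive_score(keep))]    # rank only survivors
print("best rectangle (x, y, theta, width):", best)
```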