
EE106b - Spring 2016 Project Ideas

EE106b Project Ideas


Your EE106b Staff
March 1, 2016

1 Guidelines
You'll have to submit proposals for your project by March 11th at 11:59pm. We've listed
some new ideas for your projects, but feel free to come up with your own. These are organized
by topic, but the projects do not have to be limited to these areas, as long as they are
relevant to the coursework. Projects from EE106a can be found here.

2 Grasping
1. Part-based grasp analysis
One hypothesis about how humans quickly grasp objects is that they identify affordances:
parts that are easy to grasp, such as handles or cylindrical and spherical parts. In this
project, the goal is to segment 3D models and analyze grasp quality for each segment.
A stretch goal of the project is to find similarities between parts on different objects
and to use these correspondences to transfer grasps between objects.
2. Robust grasping with Dex-Net on the Baxter
In class we've talked about force closure, which assumes that the object shape, contact
locations, finger forces, etc. are known exactly. In reality these quantities are not known
exactly due to noise in sensing and imprecision in robot control. One way to address
this shortcoming is to measure grasps using a robust quality metric: an expectation of
analytic grasp quality metrics under uncertainty. The goal of this project is to evaluate
the performance of robust quality metrics computed using the Dexterity Network (Dex-
Net) versus the analytic metrics talked about in class on the Baxter.
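
As a concrete starting point, a robust metric of this kind can be estimated by Monte Carlo sampling. The sketch below is illustrative only: `analytic_quality` is a hypothetical stand-in for a real analytic metric (e.g. a Ferrari-Canny score), and contact uncertainty is modeled as isotropic Gaussian noise on the contact locations.

```python
import numpy as np

def analytic_quality(contacts):
    # Hypothetical stand-in for an analytic metric (e.g. Ferrari-Canny):
    # here, simply the distance between the two contact points.
    return np.linalg.norm(contacts[0] - contacts[1])

def robust_quality(contacts, sigma=0.005, n_samples=1000, seed=0):
    """Monte Carlo estimate of E[quality] under Gaussian noise
    on the contact locations (std. dev. sigma, in meters)."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        noisy = [c + rng.normal(0.0, sigma, size=c.shape) for c in contacts]
        total += analytic_quality(noisy)
    return total / n_samples

contacts = [np.array([0.0, 0.0, 0.0]), np.array([0.1, 0.0, 0.0])]
print(robust_quality(contacts))
```

The same sampling loop works for any analytic metric and any noise model; in the project, the noise model would come from calibration of the Kinect and the Baxter's arms.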

3. Relation between two-finger grasps and multi-finger grasps (grasp axis hy-
pothesis)
As we know from class, it is possible to construct force closure grasps with two soft
fingers with friction on 3D objects. This raises the question of whether two-finger
grasps can be used to construct grasps with additional fingers. The goal of this project
is to investigate this hypothesis by constructing two-finger force closure grasps, then
splitting one of the fingers into two or more new fingers and shifting the new fingers
along the surface of the object to improve stability.
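
One way to get started is with the classical two-finger antipodal condition: the line connecting the two contacts must lie inside both friction cones, which with soft-finger contacts gives force closure. A minimal sketch of the check, assuming inward-pointing unit normals:

```python
import numpy as np

def antipodal_force_closure(p1, n1, p2, n2, mu):
    """Check whether the line between two contacts lies inside both
    friction cones. n1, n2 are inward-pointing unit normals; mu is
    the friction coefficient (cone half-angle = atan(mu))."""
    d = (p2 - p1) / np.linalg.norm(p2 - p1)
    half_angle = np.arctan(mu)
    ang1 = np.arccos(np.clip(np.dot(d, n1), -1.0, 1.0))
    ang2 = np.arccos(np.clip(np.dot(-d, n2), -1.0, 1.0))
    return bool(ang1 <= half_angle and ang2 <= half_angle)

# Contacts on opposite faces of a box: antipodal, so the check passes.
p1, n1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
p2, n2 = np.array([0.1, 0.0, 0.0]), np.array([-1.0, 0.0, 0.0])
print(antipodal_force_closure(p1, n1, p2, n2, mu=0.5))  # True
```

Grasps passing this test would be the seeds for the finger-splitting step, which could then re-evaluate quality with a full wrench-space metric as fingers slide along the surface.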
4. Caging on the Baxter
Another way of characterizing grasps is by caging: whether or not a hand configuration
prevents the object from escaping. Some differences between caging and wrench-space
analysis are that cages can take into account the full hand (palm, fingers, etc.) and
can be used to transport the object without immobilizing it. Recent theory from Prof.
Goldberg's group shows how to prove cages for any 2-dimensional object, but all
experiments to date are in simulation. The goal of this project is to evaluate the
performance of caging grasps computed offline on the Baxter robot.

5. Using Regression to Predict Grasp Quality on the Baxter
Recent research from Prof. Goldberg's group suggests that regression techniques can
be used to predict quality metrics from the local object surface around contact points
in simulation. The goal of this project is to extend current methods to predict grasp
quality directly from point clouds taken with the Kinect on the Baxter.
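
As a toy illustration of the regression setup, the sketch below fits a ridge regressor from hand-made feature vectors to quality scores. The features here are synthetic placeholders; in the actual project they would be computed from depth patches or point-cloud statistics around the contact points.

```python
import numpy as np

def fit_quality_regressor(X, y, lam=1e-3):
    """Ridge regression: features X (n x d) -> quality scores y (n,)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def predict_quality(w, X):
    return X @ w

# Synthetic demo: quality is a noisy linear function of two features
# (in the real project, features would describe local surface geometry).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 0.7 * X[:, 0] - 0.2 * X[:, 1] + 0.01 * rng.normal(size=200)
w = fit_quality_regressor(X, y)
```

A linear model is only a baseline; the interesting part of the project is finding features (and possibly nonlinear models) that predict quality well from noisy Kinect point clouds.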

6. Grasping in Clutter with Dex-Net
The Dexterity Network (Dex-Net) is a system and dataset of over 10,000 objects to
study the scaling effects of computing grasps for a large number of objects. In other
words, can a robot quickly know how to grasp new objects as its base of previous
knowledge grows? Current research suggests this effect holds for grasping; however,
the grasps are evaluated using force closure when the object is in isolation. The goal of
this project is to study how to use grasps from Dex-Net to pick up objects in clutter,
which involves a characterization of the clutter objects in the environment. A stretch
goal of the project is an implementation on the Baxter.
7. Active Perception with the TakTile Hand
Back in the day, Prof. Bajcsy introduced the concept of Active Perception, which
brought together concepts from learning, psychology, and robotics. The essence of
Active Perception is that perception (i.e. sensing and estimation) is directly coupled
with control, and can be integrated to explore new spaces and learn things about our
world. One application is understanding what an object is, based on touch. Using the
TakTile Hand (which is the same hand you used in Lab 2, but with force sensors along
the fingers), perform experiments to collect data about different objects and see what
we can learn about them.

3 Nonholonomic Vehicles
1. Implement and test parallel parking
In nonholonomic systems, we are constrained in our degrees of freedom. For instance,
in a car, we cannot directly move laterally. This makes maneuvers like parallel parking
rather difficult, and sometimes impossible. This project would consist of analyzing
the feasibility of a maneuver as well as planning and executing it.
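
The lateral-motion constraint can be seen directly in the kinematic bicycle model, a common simplification for car-like robots: the velocity always points along the heading, so no choice of inputs produces pure sideways motion. A minimal sketch (wheelbase and step size are arbitrary illustrative values):

```python
import math

def step(x, y, theta, v, phi, L=2.5, dt=0.1):
    """One Euler step of the kinematic bicycle model.
    v: forward speed, phi: steering angle, L: wheelbase (m).
    The velocity is always along the heading theta, so no
    combination of (v, phi) moves the car purely sideways."""
    x_new = x + v * math.cos(theta) * dt
    y_new = y + v * math.sin(theta) * dt
    theta_new = theta + (v / L) * math.tan(phi) * dt
    return x_new, y_new, theta_new

# Driving straight at 1 m/s for one step moves only along x.
print(step(0.0, 0.0, 0.0, v=1.0, phi=0.0))  # (0.1, 0.0, 0.0)
```

Parallel parking then amounts to composing forward and reverse arcs of this model so that the net displacement is lateral, which is exactly the kind of maneuver the feasibility analysis would have to certify.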

2. Motion Planning with Constrained Dynamics
In the MLS textbook, a number of different methods for planning trajectories with
constrained dynamics are presented. A number of other motion planning algorithms
have been developed more recently to derive trajectories more efficiently.
Implementing and testing these different motion planning techniques (particularly in
dynamic environments) would merge sensing and estimation techniques (for the robot
itself as well as its surroundings) and algorithmic planning.
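
Before the more advanced planners, a discrete search on an occupancy grid makes a useful baseline that is easy to implement and test. A minimal breadth-first-search sketch (the grid representation is a hypothetical stand-in for whatever map the sensing pipeline produces):

```python
from collections import deque

def bfs_plan(grid, start, goal):
    """Shortest path on a 4-connected occupancy grid via BFS.
    grid[r][c] == 1 marks an obstacle. Returns the list of cells
    from start to goal, or None if no path exists."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and nxt not in prev:
                prev[nxt] = cell
                queue.append(nxt)
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(bfs_plan(grid, (0, 0), (2, 0)))
```

The project would replace this with planners that respect the nonholonomic dynamics (e.g. sampling-based methods) and replan as the dynamic obstacles move.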
3. Projects on Safety Systems for Vehicles
Recently there has been a great deal of attention on autonomous vehicles and what are
called active safety systems, where the vehicle intervenes when a collision is imminent.
A potential project is considering a dangerous scenario and planning a safety maneuver.
This could possibly be validated on the car simulator, if you are interested.


4 Robot Coordination
1. Multiple robots working to achieve one goal
Most of what we've been considering is how to deal with controlling and planning for
one robot or one type of hand. Some tasks, however, require coordination between
multiple robots in order to be completed. This could consist of performing grasping
with different types of hands, or formation design and control with multiple mobile
robots.
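
One classical building block for multi-robot formation control is the linear consensus protocol, in which each robot repeatedly moves toward the average of its neighbors. A toy sketch on a fixed ring network, with 1-D positions for simplicity:

```python
import numpy as np

def consensus_step(x, L, eps=0.2):
    """One step of the linear consensus protocol x <- x - eps * (L @ x),
    where L is the graph Laplacian of the communication network."""
    return x - eps * (L @ x)

# Four robots on a ring network (each talks to its two neighbors).
L = np.array([[ 2., -1.,  0., -1.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [-1.,  0., -1.,  2.]])
x = np.array([0.0, 1.0, 4.0, 9.0])
for _ in range(100):
    x = consensus_step(x, L)
print(x)  # all entries near the initial average, 3.5
```

Formation control replaces "converge to a common point" with "converge to prescribed offsets from neighbors," but the same Laplacian-based update is the core of many such schemes.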
2. Active Perception with multiple agents
Similar to the project described above, what if we want to explore an area or learn
about an object using robots with different skill sets? For instance, suppose we have
a mobile robot that can easily explore and move around an object, and we have a
hand that can feel the texture of an object. Can we design a framework that takes
advantage of each robot's capabilities to explore more effectively?
3. Lifting application
In class, we've been talking a lot about how to plan a stable grasp, which will later
be extended to lifting objects. Using a robotic arm (like the UR5) or Baxter, can we
implement lifting of arbitrary objects? What if the object is deformable or moving?
4. Interacting with Humans
Suppose you have a robot that is working in the same space as a human. How can
you effectively plan around the human, given safety constraints as well as preferences?
For instance, if we are handing a mug to a human, we'd like to hand it to them in a
convenient location (e.g. put it some place they can reach without hitting them) and
in a usable way (e.g. they probably don't want the object upside down, and might
prefer to grasp the handle). This can be considered in spaces other than grasping,
like planning for nonholonomic vehicles where there is also a human driver.
