Abstract
Building to order means responding to individual customer requests, not simply producing large numbers of goods for stock and encouraging their sale by promotion and discounting. The paint shop is one of the biggest bottlenecks in production today: adapting state-of-the-art robotized painting lines to new variants is time-consuming and leaves the line in a non-productive state. To overcome the limitations of state-of-the-art robotized painting systems, which are economically not viable at small lot sizes, the Flexpaint project (2000-2002) developed a new "what you see is what you paint" approach. This approach scans unknown parts being transported to the painting cabin, reconstructs their geometry and painting-relevant features, and automatically plans the painting strokes and executable, collision-free trajectories. Finally, robot code is automatically generated and executed by the painting robots. The Flexpaint system establishes a new paradigm of agile manufacturing. Applying 3D sensing technology in a step preceding the task-planning process proved the fitness of the concept for small-volume, high-variant painting in experiments. This paper presents an extension of the "what you see is what you paint" approach by means of dynamic vision and 3D object recognition that is able to complement missing data of scanned parts with CAD information of recognized and localized parts.
Keywords: automatic robot programming, industrial robotics, 3D computer vision.
1. Introduction
Production on demand, mass customisation, rapid reaction to market changes, and quick time-to-market of new products and variants at small batch sizes are needed, at low cost and high quality. As investments in automatic painting lines are considerable and as the painting line is often the bottleneck in production, it is imperative to prevent non-productive times and to maximize the use of the expensive equipment. The aforementioned shrinking volumes and increasing numbers of variants challenge the state of the art. Highly flexible, scalable, and user-friendly production equipment is needed, including robotic systems for painting, a common process in production. The presented work
The part surface/shape is categorized into generic surface fractions that can be reached optimally by the paint fan, with the gun held in a defined orientation relative to the surface and the paint stroke. Cavities, ribs, and customer-specific features are recognized in parallel, as they need to be handled differently by the paint process and thus by the planning tool.
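Such a categorization can be sketched as a small rule set over reconstructed surface patches. The class names, thresholds, and the depth/width cue below are illustrative assumptions, not the actual Flexpaint feature extractor:

```python
import numpy as np

# Hypothetical feature classes used by the paint planner (illustrative).
FREEFORM, CAVITY, RIB = "freeform", "cavity", "rib"

def classify_patch(points, depth_width_ratio):
    """Label a surface patch for the paint planner (illustrative rule set).

    points            -- (N, 3) array of points sampled from the patch
    depth_width_ratio -- depth of the patch relative to its opening width

    Deep, narrow patches are treated as cavities, strongly elongated
    patches as ribs, and everything else as generic free-form surface
    that the paint fan can reach at a fixed gun orientation.
    """
    pts = np.asarray(points, dtype=float)
    extents = pts.max(axis=0) - pts.min(axis=0)
    if depth_width_ratio > 1.0:
        return CAVITY
    # rib heuristic: longest extent dominates the second-longest one
    sorted_ext = np.sort(extents)[::-1]
    if sorted_ext[1] > 0 and sorted_ext[0] / sorted_ext[1] > 5.0:
        return RIB
    return FREEFORM
```

In a real system, such rules would operate on segmented range data and be tuned per paint process.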
2.3. Collision-free paint path generation
The idea of the employed paint-process planning is to link elementary geometries to a process model. This link is established with a flexibility that allows the precise painting strategy mapped to the geometries to vary from customer to customer. Scheduling of the individual strokes follows specific criteria, such as cycle time. Next, the sensory-retrieved pose of the part is employed by the AMROSE collision-avoidance software to plan collision-free robot paths for the paint trajectories planned by the INROPA paint planner. The reconstructed shape of
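The geometry-to-process link can be pictured as a parameter lookup combined with a cycle-time-driven stroke ordering. The class names and parameter values below are invented for illustration and do not reflect the actual INROPA process database:

```python
# Illustrative link between recognized geometry classes and paint-process
# parameters; values are assumptions, not real process data.
PROCESS_MODEL = {
    "freeform": {"overlap_mm": 60, "speed_mm_s": 400, "passes": 1},
    "cavity":   {"overlap_mm": 30, "speed_mm_s": 200, "passes": 2},
    "rib":      {"overlap_mm": 40, "speed_mm_s": 300, "passes": 1},
}

def schedule_strokes(features):
    """Attach process parameters to each feature and order the strokes
    by estimated duration -- one possible cycle-time criterion.

    features -- list of (geometry_class, stroke_length_mm) tuples
    """
    strokes = []
    for cls, length in features:
        params = PROCESS_MODEL[cls]
        duration = params["passes"] * length / params["speed_mm_s"]
        strokes.append({"class": cls, "length_mm": length,
                        "duration_s": duration, **params})
    return sorted(strokes, key=lambda s: s["duration_s"])
```

Swapping the lookup table is what gives the customer-specific flexibility described above.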
Fig. 6. Examples of parts hanging on a skid in an industrial setup. The scenario is challenging since the complexity of several parts leads to large occlusions and the parts are oscillating.
Finally, the generic program is parsed, converted to the robot-specific program, and executed.
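This last step can be sketched as a small translator from a generic trajectory format to a vendor dialect. Both the generic `MOVE`/`SPRAY` format and the emitted syntax are schematic assumptions, not actual ABB RAPID or KUKA KRL code:

```python
# Minimal sketch of parsing a generic program and emitting vendor-flavoured
# robot code; the instruction set and target syntax are invented.
def to_robot_program(generic_lines, dialect="rapid"):
    out = []
    for line in generic_lines:
        op, *args = line.split()
        if op == "MOVE":
            x, y, z = args
            if dialect == "rapid":   # RAPID-like syntax (schematic)
                out.append(f"MoveL [[{x},{y},{z}], ...], v400, z10, tool_gun;")
            else:                    # KRL-like syntax (schematic)
                out.append(f"LIN {{X {x}, Y {y}, Z {z}}}")
        elif op == "SPRAY":
            out.append("SetDO do_gun, 1;" if dialect == "rapid"
                       else "$OUT[1]=TRUE")
    return out
```

A production system would additionally carry tool orientations, speeds, and gun-trigger timing per stroke.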
2.4. Experimental evaluation of sensor-based robot painting
Sensor data acquisition includes reconstruction,
sensor-data fusion, and extraction of process-relevant
features. Results of the individual steps are visualized
in the following.
As can be seen in Figure 3, all painting-process-critical features of the gear box (cavities and other parts of the object that are hardly reachable) have been detected despite imperfect sensor data. A remaining challenge is the handling of regions of the part that are completely invisible to the sensors because of (i) occlusion or (ii) surface properties (as at the front of the gear box).
Automatic generation of the programs for controlling the robot and the cell takes between 60 and 300 seconds, depending on the complexity and number of the objects and on the (density of the) structure of the environment.
Generally, the recognition task is treated as a matching task between two surfaces. The proposed 3D object recognition scheme is based on spin images, which do not impose a parametric representation on the data and are therefore able to represent surfaces of general shape.
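A minimal sketch of computing a spin image at one oriented point, following the construction of Johnson and Hebert [12]; the bin size and image width are arbitrary example values:

```python
import numpy as np

def spin_image(points, p, n, bin_size=0.01, image_width=16):
    """Spin image at oriented point (p, n), after Johnson & Hebert [12].

    Each surface point x is mapped to cylindrical coordinates:
      alpha = sqrt(||x - p||^2 - (n . (x - p))^2)   (radial distance)
      beta  = n . (x - p)                           (signed axial distance)
    and accumulated into a 2D histogram that is pose-invariant.
    """
    pts = np.asarray(points, float) - np.asarray(p, float)
    n = np.asarray(n, float) / np.linalg.norm(n)
    beta = pts @ n
    alpha = np.sqrt(np.maximum((pts ** 2).sum(axis=1) - beta ** 2, 0.0))
    img = np.zeros((image_width, image_width))
    i = ((image_width / 2) - beta / bin_size).astype(int)   # rows: axial
    j = (alpha / bin_size).astype(int)                      # cols: radial
    mask = (i >= 0) & (i < image_width) & (j >= 0) & (j < image_width)
    np.add.at(img, (i[mask], j[mask]), 1)
    return img
```

Because only distances relative to (p, n) enter the histogram, the descriptor is independent of the object's pose in the scene.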
Correspondences between model and scene points are established by comparing their spin images; a loss function of the correlation coefficient is used as the measure of similarity.
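The similarity measure can be sketched as follows. The variance-stabilized form (arctanh of the correlation, penalized by the number of overlapping bins) follows the spin-image literature [12], while the value of λ and the overlap rule below are illustrative assumptions:

```python
import numpy as np

def spin_correlation(P, Q):
    """Linear correlation coefficient between two spin images."""
    p = np.asarray(P, float).ravel()
    q = np.asarray(Q, float).ravel()
    if p.std() == 0 or q.std() == 0:
        return 0.0
    return float(np.corrcoef(p, q)[0, 1])

def similarity(P, Q, lam=3.0):
    """Similarity in the style of [12]: variance-stabilized correlation
    minus a penalty that decreases with the number of occupied bins N."""
    r = np.clip(spin_correlation(P, Q), -0.999999, 0.999999)
    N = np.count_nonzero(np.asarray(P) + np.asarray(Q))
    if N <= 3:
        return float("-inf")  # too little overlap to trust the match
    return np.arctanh(r) ** 2 - lam / (N - 3)
```

The penalty term favours matches supported by many occupied bins over accidental correlations on sparse data.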
Finding correspondences using the correlation coefficient is computationally expensive, and therefore a different way of managing the information conveyed … In the experiments, the frame and the hooks are black and shiny, absorbing most of the laser light. As a result of this physical effect, the frame appears only in fragments in the range data. The hooks have a very slender geometry and barely yield a dense point cloud. In our application, the main purpose of the object recognition algorithm is to robustly label 3D scene point data with the identification numbers of the 3D models kept in a database. In a subsequent step, the corresponding models are matched against the labeled scene data to retrieve the position and orientation of the CAD models relative to the world coordinate frame, which is essential for the robot application. Figure 10 depicts the recognition result for each of the test objects; position and orientation have been estimated in the subsequent matching step. The robustness of the object recognition algorithm depends strongly on the quality of the sensor data. Much care has been taken to generate a smooth surface model and to suppress most of the outliers, since the surface normals determine the quality of the underlying spin-image feature [12].
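The pose-retrieval step that matches a labeled CAD model against the scene reduces, at its core, to least-squares rigid alignment. The sketch below assumes known point correspondences; in practice an ICP-style loop would re-estimate the correspondences and call this repeatedly:

```python
import numpy as np

def rigid_transform(model_pts, scene_pts):
    """Least-squares rigid transform (R, t) aligning model points to
    corresponding scene points (Horn/Kabsch method via SVD)."""
    A = np.asarray(model_pts, float)
    B = np.asarray(scene_pts, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    # cross-covariance of the centered point sets
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    # guard against a reflection solution
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t
```

The returned R and t map model coordinates into the world coordinate frame needed by the robot application.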
4. Conclusion
References
[1] Flexpaint. [Online]. Available: www.flexpaint.org
[2] Autere, Resource allocation between path planning algorithms using meta A*, in ISRA, 1998.
[3] C. Robertson, R. B. Fisher, N. Werghi, and A. Ashbrook, Finding machined artifacts in complex range data surfaces, in Proc. ACDM2000, 2000.
[4] R. J. Campbell and P. J. Flynn, Eigenshapes for 3D object recognition in range data, pp. 505-510. [Online]. Available: citeseer.ist.psu.edu/137290.html
[5] O. Camps, C. Huang, and T. Kanungo, Hierarchical
organization of appearance-based parts and relations for object recognition, 1998. [Online]. Available: citeseer.ist.psu.edu/camps98hierarchical.html
[6] E. Freund, D. Rokossa, and J. Rossmann, Process-oriented approach to an efficient off-line programming of industrial robots, in IECON '98: Proceedings of the 24th Annual Conference of the IEEE Industrial Electronics Society, 1998.
[7] P. Hertling, L. Hog, L. Larsen, J. Perram, and H. Petersen, Task curve planning for painting robots - part I: Process modeling and calibration, IEEE Transactions on Robotics and Automation, vol. 12, no. 2, pp. 324-330, April 1996.
[8] R. Hoffman and A. K. Jain, Segmentation and classification of range images, IEEE Trans. Pattern Anal. Mach. Intell., vol. 9, no. 5, pp. 608-620, 1987.
[9] A. Hoover, G. Jean-Baptiste, X. Jiang, P. J. Flynn, H. Bunke, D. B. Goldgof, K. K. Bowyer, D. W. Eggert, A. W. Fitzgibbon, and R. B. Fisher, An experimental comparison of range image segmentation algorithms, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 7, pp. 673-689, 1996. [Online]. Available: citeseer.csail.mit.edu/hoover96experimental.html
[10] N. Jacobsen, K. Ahrentsen, R. Larsen, and L. Overgaard, Automatic robot welding in complex ship structures, in 9th Int. Conf. on Computer Application in Shipbuilding, 1997, pp. 410-430.
[11] R. J. Campbell and P. J. Flynn, A survey of free-form object representation and recognition techniques, Comput. Vis. Image Underst., vol. 81, no. 2, pp. 166-210, 2001.
[12] A. Johnson and M. Hebert, Using spin images for efficient object recognition in cluttered 3D scenes, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, no. 5, pp. 433-449, May 1999.
[13] T. Kadir and M. Brady, Scale, saliency and image description, International Journal of Computer Vision, vol. 45, no. 2, pp. 83-105, 2001.
[14] K. K. Gupta and A. P. del Pobil, Practical motion planning in robotics: Current approaches and future directions, 1998.
[15] K. Kwok, C. Louks, and B. Driessen, Rapid 3-D digitizing and tool path generation for complex shapes, in IEEE International Conference on Robotics and Automation, 1998, pp. 2789-2794.
[16] D. Marshall, G. Lukacs, and R. Martin, Robust segmentation of primitives from range data in the presence of geometric degeneracy, IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 3, pp. 304-314, 2001.
Fig. 10. Visualization of recognition result: reconstructed sensor data (green shaded) and recognized, matched CAD model (grey wire frame).
[17] H. Murase and S. K. Nayar, Visual learning and recognition of 3-D objects from appearance, Int. J. Comput. Vision, vol. 14, no. 1, pp. 5-24, 1995.
[18] M. Olsen and H. Petersen, A new method for estimating parameters of a dynamic robot model, IEEE Transactions on Robotics and Automation, vol. 17, no. 1, pp. 95-100, 2001.
[19] A. Pichler, M. Vincze, H. Andersen, O. Madsen, and K. Haeusler, A method for automatic spray painting of unknown parts, in IEEE Intl. Conf. on Robotics and Automation, 2002.
[20] R. B. Fisher, A. W. Fitzgibbon, M. Waite, M. Orr, and E. Trucco, Recognition of complex 3-D objects from range data, in CIAP93, 1993, pp. 509-606.
[21] X. Sheng and M. Krömker, Surface reconstruction and extrapolation from multiple range images for automatic turbine blades repair, in IEEE IECON Conference, 1998, pp. 1315-1320.
[22] W. Tse and Y. Chen, A robotic system for rapid prototyping, in IEEE International Conference on Robotics and
Automation, 1997, pp. 1815-1820.
[23] A. Bauer, C. Eberst, H. Nöhmeyer, J. Minichberger, A. Pichler, and G. Umgeher, Self-programming robotized cells for flexible paint-jobs, in International Conference on Mechatronics and Robotics, Aachen, Germany, 2004.