
Industrial Robotics

Programming, Simulation and Applications


Edited by

Kin-Huat Low

Published by the plV pro literatur Verlag Robert Mayer-Scholz
Mammendorf, Germany

Abstracting and non-profit use of the material is permitted with credit to the source. Statements and opinions expressed in the chapters are those of the individual contributors and not necessarily those of the editors or publisher. No responsibility is accepted for the accuracy of information contained in the published articles. The publisher assumes no responsibility or liability for any damage or injury to persons or property arising out of the use of any materials, instructions, methods or ideas contained herein. After this work has been published by Advanced Robotic Systems International, authors have the right to republish it, in whole or in part, in any publication of which they are an author or editor, and to make other personal use of the work.

© 2007 Advanced Robotic Systems International

www.ars-journal.com

Additional copies can be obtained from:

publication@ars-journal.com

First published February 2007

Typeface Palatino Linotype 10/11/12 pt

Printed in Croatia

A catalog record for this book is available from the German Library.

Industrial Robotics: Programming, Simulation and Applications / Edited by Low Kin Huat.
p. cm.
ISBN 3-86611-286-6
1. Manipulators. 2. Calibration. 3. Design. I. Title.

Contents

Preface   IX

1. A Design Framework for Sensor Integration in Robotic Applications   1
D. I. Kosmopoulos

2. Joint Torque Sensory in Robotics   23
F. Aghili

3. Design and Validation of a Universal 6D Seam Tracking System in Robotic Welding Based on Laser Scanning   47
M. Fridenfalk, G. Bolmsjö

4. Designing Distributed, Component-based Systems for Industrial Robotic Applications   61
M. Amoretti, S. Caselli, M. Reggiani

5. Designing and Building Controllers for 3D Visual Servoing Applications under a Modular Scheme   81
M. Bachiller, C. Cerrada, J. A. Cerrada

6. Robot Calibration: Modeling, Measurement and Applications   107
J. M. S. T. Motta

7. Accuracy and Calibration Issues of Industrial Manipulators   131
M. Abderrahim, A. Khamis, S. Garrido, L. Moreno

8. Calibration of Serial Manipulators: Theory and Applications   147
I. Fassi, G. Legnani, D. Tosi, A. Omodei

9. Calibration of Parallel Kinematic Machines: Theory and Applications   171
G. Legnani, D. Tosi, R. Adamini, I. Fassi

10. Interfacing the Gryphon Robot with the World Wide Web   195
R. T. F. Ah King, S. Mootien, H. C. S. Rughooputh

11. Internet-Based Service Robotic System Using CORBA as Communication Architecture   215
S. Jia, K. Takase

12. Path Coordination Planning and Control in Robotic Material Handling and Processing   231
X. F. Zha, X. Chen

13. Optimal Motion Planning for Manipulator Arms Using Nonlinear Programming   255
J.-K. Park

14. Video Applications for Robot Arms   273
V. Niola, C. Rossi

15. Kinematic Control of a Robot-Positioner System for Arc Welding Application   293
A. P. Pashkevich, A. B. Dolgui

16. Control and Scheduling Software for Flexible Manufacturing Cells   315
A. Ferrolho, M. Crisóstomo

17. Adaptive Robot Based Reworking System   341
B. Kuhlenkötter, C. Krewet, T. Schüppstuhl

18. Off-line Programming Industrial Robots Based in the Information Extracted From Neutral Files Generated by the Commercial CAD Tools   349
V. Bottazzi, J. Fonseca

19. A Teaching-Free Robot System Utilizing Three-Dimensional CAD Product Data   365
Y. Nagao, H. Ohta, F. Honda

20. A Non-Contact Method for Detecting On-Line Industrial Robot Position Errors   373
G. C. Smith

21. Study on Three Dimensional Dual Peg-in-hole in Robot Automatic Assembly   393
Y. Q. Fei, J. F. Wan, X. F. Zhao

22. Computer Vision and Fuzzy Rules Applied to a Dispensing Application in an Industrial Desktop Robot   417
C. P. Santos, M. J. Ferreira

23. Virtual and Mixed Reality in Telerobotics: A Survey   437
C. S. Tzafestas

24. Simulation and Experimental Tests of Robot Using Feature-Based and Position-Based Visual Servoing Approaches   471
M. H. Korayem, F. Heidari, H. Aliakbarpour

25. Sheet Incremental Forming: Advantages of Robotised Cells vs. CNC Machines   493
M. Callegari, D. Amodio, E. Ceretti, C. Giardini

26. Machining with Flexible Manipulators: Critical Issues and Solutions   515
J. Wang, H. Zhang, Z. Pan

27. Welding Robot Applications in Shipbuilding Industry: Off-Line Programming, Virtual Reality Simulation, and Open Architecture   537
C.-S. Kim, K.-S. Hong, Y.-S. Han

28. Intelligent Robotic Handling of Fabrics Towards Sewing   559
P. Koustoumpardis, P. Zacharia, N. Aspragathos

29. Complex Carton Packaging with Dexterous Robot Hands   583
V. N. Dubey, J. S. Dai

30. Arc Welding Robot Automation Systems   595
B.-S. Ryuh, G. R. Pennock

31. Robot Automation Systems for Deburring   609
B.-S. Ryuh, G. R. Pennock

32. Robot-based Gonioreflectometer   623
A. Höpe, D. Hünerhoff, K.-O. Hauer

33. Serpentine Robots for Industrial Inspection and Surveillance   633
G. Granosik, J. Borenstein, M. G. Hansen

34. Using Parallel Platforms as Climbing Robots   663
O. Reinoso, R. Aracil, R. Saltaren

35. Robotics for Improving Quality, Safety and Productivity in Intensive Agriculture: Challenges and Opportunities   677
G. Belforte, P. Gay, D. R. Aimonino

Preface

About 86 years ago, in 1921, Karel Capek introduced the word "robot" in his play Rossum's Universal Robots. The term "robotics" was then coined and first used by Isaac Asimov in a short story published in 1942. Subsequently, after World War II, automatic machines were designed to increase productivity, and machine-tool manufacturers made numerically-controlled (NC) machines to enable manufacturers to produce better products. In the end, a marriage between the NC capability of machine tools and the manipulators created a simple robot. The first such robot was sold in 1961 by Unimation, the first robot company, established by Engelberger. This happened 40 years after the word "robot" was introduced. Today, after another 40 years, robotic arms are widely used in industrial manufacturing. There is no doubt that robots have increased in capability and performance through improved mechanisms, controllers, software development, sensing, drive systems, and materials.

The ARS presents the state-of-the-art developments in this book, "Industrial Robotics: Programming, Simulation and Applications," which collects 35 chapters on recent robotics topics in different applications. The articles are contributed by research scientists, computer scientists, professors, and engineers from 18 countries in different regions of the world. The book contains a good representation of today's advanced robotics, maintaining a good balance between practical and scientific research focus. The works presented in this book can be divided into several groups:

Sensor Issues: Chapters 1 and 2

Welding Technology: Chapters 3, 15, 27, and 30

Robotics System Planning, Networking, Scheduling, Programming, and Reworking: Chapters 4 and 16-19

Computer Vision, Visual Study, and Applications: Chapters 5, 14, 22, and 24

Robotics Calibration: Chapters 6-9

Internet-based Robotics: Chapters 10 and 11

Motion Study and Path Planning: Chapters 12 and 13

Assembly for Various Industrial Applications: Chapters 28, 29, and 35

Cover Issues of Automation Assembly and Robotics Cells: Chapters 20, 21, 25, and 31

Special Topics (Tele-robotics, Flexible Robots, Gonioreflectometer, Surveillance and Climbing Robots): Chapters 23, 26, and 32-34


This book is dedicated to robotics researchers, practicing engineers, and R&D managers who wish to enhance and broaden their knowledge and expertise on the fundamentals, technologies and applications of industrial robots. Providing an overview of contemporary research with advanced robotic systems, the above articles provide up-to-date advances and solutions to some theoretical and application problems encountered in various areas of industrial robotics. By no means, however, can we cover the whole area of industrial robots in this single book. The editor is indebted to all authors for their valuable contributions to this book. Special thanks go to the Editors in Chief of the International Journal of Advanced Robotic Systems for their effort in making this book possible.

Kin-Huat Low (K. H. Low)
School of Mechanical and Aerospace Engineering
Nanyang Technological University
Singapore 639798
mkhlow@ntu.edu.sg


A Design Framework for Sensor Integration in Robotic Applications

Dimitrios I. Kosmopoulos

National Centre for Scientific Research "Demokritos", Institute of Informatics and Telecommunications, 153 10 Agia Paraskevi, Greece

1. Introduction

The benefits of an open robot controller architecture have not been ignored by the robot manufacturing industry, and in recent years the closed proprietary architectures have been partially modified to allow access for the integration of external sensors. However, application integrators are still not able to exploit this openness to its full extent, and installations that employ sensors remain the exception in most industrial plants, mainly due to the development difficulties. What the developers of robotic solutions need is a generic method for rapid development and deployment of sensors and robots that enables quick responses to market needs. This need is mainly dictated by the fact that industry requires many such applications sharing common functionality. By sharing and reusing design patterns it is possible to reduce costs significantly. Therefore, a careful domain analysis that identifies the similarities and differences between the various types of applications, and that provides the infrastructure for the development of new ones, is of obvious importance.

A major hindrance in the development of sensor-guided robotic applications is the complexity of the domain, which requires knowledge from the disciplines of robot vision, sensor-based control, parallel processing and real-time systems. A framework defines how the available scientific knowledge can be used to create industrial systems by embodying the theory of the domain; it clarifies the core principles and methods and the algorithmic alternatives. Therefore new algorithms may be easily integrated through the addition or substitution of modular components.

A design framework should provide the means for the integration of the entailed components and give an overview of their collaboration. Technological advances may then be easily adopted, provided that the same cooperation rules are applied.

In the past many academic and commercial frameworks have been presented, covering parts of the disciplines entailed in sensor-guided robotic applications. However, the available frameworks that can be used for sensor integration in robotic applications were either limited in scope or were not developed with that general goal in mind. From the point of view of the developer of sensor-guided robots, the available frameworks treated the general problem in a rather fragmented manner. This work presents a design framework that defines the basic components that comprise sensor-guided robotic systems. It also shows how these components interact to implement the most common industrial requirements, and provides some application examples.


The rest of this work is structured as follows: the following section defines the context of this research; section 3 summarizes the contribution of this work; section 4 identifies the commonalities and differences between various industrial applications; section 5 describes the user and design requirements; section 6 describes the basic components, while section 7 shows how these components interact to implement the basic use cases; section 8 demonstrates the validity of the approach by presenting two applications: a sunroof fitting robot and a depalletizing robot. The chapter ends with section 9, which presents the conclusions and future work.

2. Related work

The literature regarding integration of robotic systems is quite rich. On the one hand there are many middleware systems, which are quite general, treat sensor data processing and control issues separately, and do not address the issue of their interaction in typical use cases. On the other hand, some methodologies handle the issue of the cooperation between sensors and actuators but are limited to certain sensor types, control schemes or application-specific tasks. Thus methods that bridge this gap are needed.

The middleware systems that have been proposed implement the interfaces to the underlying operating system and hardware, and they provide programming interfaces that are more familiar to control application builders than the real-time operating system primitives. Some of the most interesting of these frameworks are the OSACA (Sperling & Lutz 1996) and the Orca (Orca 2006). The scope of these frameworks is to provide the infrastructure and not to specify what the application will functionally be able to do, what data it will accept and generate, and how the end users and the components will interact. They can be used in combination with our framework, which specifies the functionality and the behavior for sensor-guided applications. Some other popular middleware platforms for mobile robots are the Player/Stage/Gazebo (Vaughan et al. 2003), the CARMEN (Montemerlo et al. 2003), the MIRO (Utz et al. 2002), and the CLARAty (Nesnas et al. 2006).

Many other attempts have been made to implement open control objects through the introduction of class or component libraries. Such libraries providing motion control functionality or the related components are the Orca (Orca 2006), the SMART (Anderson 1993), and the START (Mazer et al. 1998). The integration of sensors for robot guidance is addressed by libraries such as the Servomatic (Toyama et al. 1996) and XVision (Hager & Toyama 1998), but these libraries are limited to certain sensors, control schemes or application-specific tasks and are thus inappropriate for general use.

For object detection in sensory data a large number of frameworks have been presented by researchers and industry. Examples of such frameworks are the Image Understanding Environment (ECV-Net IUE 2000) and the vxl (VXL 2006). A large number of libraries are also available, such as the Open CV (Intel 2006), the Halcon (MVTec 2006), etc. Such libraries or frameworks may be used in combination with the proposed methodology.

3. Contribution

The proposed design framework capitalizes on the increased "openness" of modern robot controllers by extending the capabilities provided by the manufacturer through intelligent handling and fusion of acquired information. It provides a modular and systematic way to encompass different types of sensory input; obviously, this approach can be fully profitable when the control system architecture allows for this information exchange to take place. The presented work proposes how to develop software that will undertake the robot control at the end-effector level by exploiting the sensory data. More specifically, it presents:

- A unified approach for implementing sensor-guided robot applications, considering the restrictions posed by the controller manufacturers and the capabilities of the available control and vision methods.

- A generalization of the available approaches to enable the integration of the popular sensors into any open controller using a wide variety of control schemes.

- Patterns of use, including open loop and closed loop control schemes.

- Concrete examples of the development of industrial applications facilitated by the presented framework.

The proposed design is independent of the organizational level (procedure management, user interface, knowledge base) and execution level (actuators, sensors) of the application, and of the remaining layers of the robot controller (integration platform, operating system and hardware).

4. Domain description

A unified approach for the integration of industrial sensors in robotic systems through a common object-oriented framework is reasonable because the necessary abstractions are present for the sensors, for the robotic actuators and for their interactions. This will be clarified through a domain description, which will lead to the requirements.

4.1 Sensors

The sensors that are frequently used for industrial robotic tasks may be categorized into contact and non-contact ones. Typical contact sensors used in industry are force-torque sensors and tactile sensors. Popular non-contact sensors are cameras, laser sensors and ultrasound sensors, which give similar output, and inductive, capacitive or Hall sensors for detecting the presence of an object. The processing of the sensory data aims at the identification of features, which in this context normally means the parts of the image with local properties (e.g., points, edges etc.), from which useful information about the environment can be extracted. The use of sensors in most cases presupposes that the inverse sensor model is available, which allows calculation of the environment state from the sensory data. The sensors can be treated in a unified manner in the design, due to the fact that the sensory data can be described as vectors or 2-d arrays; what differs is the type of the sensory data (bits, bytes, double precision numbers etc.). Examples of vector (one-dimensional) images are the outputs of force sensors and laser sensors; examples of two-dimensional images are the outputs of monochrome cameras, tactile sensors and arrays of "binary" sensors; an example of an image with higher dimensions is the color image, which consists of three monochrome channels (e.g., red, green and blue).
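Since the chapter describes these data structures only abstractly, the following minimal C++ sketch (with illustrative names of our own, not taken from any cited framework) shows how a single class template can realize the unified treatment just described, holding one- or two-dimensional sensory data of any element type:

    #include <cstddef>
    #include <vector>

    // Generic sensor-data container: a laser scan is a 1 x N array of
    // doubles, a monochrome image an H x W array of unsigned char, a
    // color image three such channels. Illustrative sketch only.
    template <typename T>
    class SensorArray {
    public:
        SensorArray(std::size_t rows, std::size_t cols)
            : rows_(rows), cols_(cols), data_(rows * cols) {}

        T& at(std::size_t r, std::size_t c) { return data_[r * cols_ + c]; }
        std::size_t rows() const { return rows_; }
        std::size_t cols() const { return cols_; }

    private:
        std::size_t rows_, cols_;
        std::vector<T> data_;  // contiguous row-major storage
    };

    // Example instantiations for the sensor types mentioned above:
    using ForceSample = SensorArray<double>;         // e.g., 1 x 6 force-torque reading
    using MonoImage   = SensorArray<unsigned char>;  // e.g., 480 x 640 camera frame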

4.2 Robot controllers

The robot controller is the module that determines the robot movement, that is, the pose, velocity and acceleration of the individual joints or the end-effector. This is performed through motion planning and motion control. Motion planning includes the trajectory definition considering its requirements and restrictions, as well as the manipulator dynamic features. The robot operates in the joint space, but the motion is normally programmed in the Cartesian space, which is easier for humans to comprehend. Therefore the controller has to solve the inverse kinematic problem (the calculation of the joint states from the end-effector state) to achieve a desired state in the Cartesian space for the end-effector; then the controller must calculate the proper current values to drive the joint motors in order to produce the desired torques at the sampling moments. The industrial robot controllers are designed to provide good solutions to this problem within certain subspaces. In closed architectures the user is able to program the robot movement by giving the target states and eventually by defining the movement type, e.g., linear, circular etc. The controller then decides how to reach these states by dividing the trajectory into smaller parts defined by interpolation points. The time to perform the movement through a pair of interpolation points is defined by the interpolation frequency; in order to reach the interpolation points, closed loop control at the joint level is performed with a much higher frequency. Generally the control at the joint level is not open to the programmer of industrial robots.


Fig. 1. Closed loop control scheme at the end-effector level. The end-effector's state x is measured by the sensing devices and the corresponding measurement x̂ is given as feedback; then x̂ is compared to the desired state x_d, the difference is fed to the regulator, and the regulation output is sent to the robot controller to move the manipulator. The joint-level loop is also displayed.


Fig. 2. The cycles of robot and sensor and their synchronization for compensating for a target movement. The total delay from the initial target movement to the new robot pose that compensates for this movement depends mainly on the robot and sensor cycles (adapted from (Coste-Maniere & Turro 1997)).


For tasks requiring low accuracy, a "blind" trajectory execution with closed loop control at the joint level is acceptable. However, for tasks that require high accuracy, closed-loop control at the end-effector level is necessary, during which the next interpolation points have to be calculated in each interpolation cycle using external input; this need is satisfied by the open architecture robot controllers, which, unlike the typical closed architecture controllers, permit the user or an external system to define the interpolation points precisely and oblige the end-effector to pass strictly through them. However, this restriction may produce oscillations at the end-effector level, depending partially on the coarseness of the trajectory. These oscillations may be avoided if an additional regulator is used. The control scheme presented in Fig. 1 is then applied.
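As a minimal sketch of such a regulator (the chapter does not prescribe a specific control law, so the PID form below and its gains are placeholders), each degree of freedom can be regulated independently before the corrected interpolation point is sent to the controller:

    // Discrete PID regulator for one degree of freedom at the
    // end-effector level (cf. Fig. 1). Gains and period are placeholders.
    struct PidRegulator {
        double kp, ki, kd;        // proportional, integral, derivative gains
        double dt;                // interpolation period in seconds
        double integral  = 0.0;
        double prevError = 0.0;

        // error = desired state - measured state for this DOF
        double correction(double error) {
            integral += error * dt;
            const double derivative = (error - prevError) / dt;
            prevError = error;
            return kp * error + ki * integral + kd * derivative;
        }
    };

One such instance per Cartesian degree of freedom would then damp the oscillations described above, in line with the per-DOF control laws mentioned in sections 6 and 8.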

4.3 Use of sensors by robots

The sensors can be used for robot guidance at the end-effector level to compensate for the environmental uncertainties. Open and closed loop configurations are then applied. The open loop configurations are the simplest, since the desired robot state is sent to the robot controller and executed by the manipulator without concern about the actual end-effector state. In the closed loop configurations the current end-effector state is compared to the desired one in each interpolation cycle and corrected accordingly in the next cycle.

To identify the commonalities and the differences between the various closed loop control schemes, we present the various schemes that may be applied. The schemes identified in (Hutchinson et al. 1996) for visual sensors are generally applicable to the other sensor types as well, if their type permits it. For the extraction of the desired robot state, the "Endpoint Open Loop" scheme assumes perception of only the target's state using non-contact sensors. The "Endpoint Closed Loop" scheme assumes that both the end-effector and target state are perceived, and can be applied using contact and non-contact sensors. According to another categorization, the closed loop control schemes are divided into the "Position-based" schemes, which require extraction of the position of the perceived entities (end-effector and/or target), and the "Image-based" schemes, which extract the desired robot state by exploiting the image features directly.

In order to implement the aforementioned schemes, various methods may be employed according to the application needs. For position-based tasks the physical state of all the observed objects (target, end-effector) is calculated; this is done through optimization methods that use the 3D measurements (or image measurements) in combination with object and sensor models and "fit" the model to the measurements. In image-based systems generally no object or sensor model is used, but only an equation that provides the current state from the image measurements, provided that a Jacobian matrix is known (for more details the reader may refer to (Hutchinson et al. 1996)). The Jacobian matrix may be calculated online or (more frequently) offline through a training procedure. In many applications a single sensor is not adequate for state calculation, and in these cases tools such as Kalman or particle filters may be employed. From the measurement input the system state is calculated, assuming that the system dynamics are linear around the small region where the task is to be executed (for more details the reader may refer to (Valavanis & Saridis 1992)). The filters also provide an estimate of the next system state and can be used for iterative solution of optimization problems in position-based applications.

Another common issue for the closed-loop control systems is that, ideally, at every moment that the robot has to move to reach a desired state, this state has to be based on sensory data that depict the current environmental situation. However, this is usually not possible, due to differences in the sensor and robot cycles and due to the time required for processing and transmission of the sensory data (see Fig. 2). Therefore synchronization schemes are necessary, which may use prediction for the calculation of the currently required state based on previous data. For open loop systems this problem does not exist, because the robot moves only after reading and processing the sensory data.
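A minimal sketch of one such synchronization scheme follows, assuming first-order extrapolation as the prediction (the chapter leaves the prediction method open): each processed measurement is timestamped, and the robot-side loop extrapolates it to the instant the next interpolation point is needed.

    #include <chrono>

    // Latest sensor-derived state with timestamp; predictAt() bridges
    // the slower sensor cycle by linear extrapolation (an assumption).
    struct TimedState {
        double pose[6];      // x, y, z, a, b, c
        double velocity[6];  // estimated rates
        std::chrono::steady_clock::time_point stamp;
    };

    void predictAt(const TimedState& s,
                   std::chrono::steady_clock::time_point now,
                   double out[6]) {
        const double dt = std::chrono::duration<double>(now - s.stamp).count();
        for (int i = 0; i < 6; ++i)
            out[i] = s.pose[i] + s.velocity[i] * dt;  // first-order prediction
    }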

4.4 Application layered architecture

The architecture of a typical robotic application is illustrated in Fig. 3. The left part is adapted from (Valavanis & Saridis 1992), where the levels of organization, processing and execution are presented; intelligence increases from bottom to top. The organization level includes the procedure manager that defines the robotic tasks' execution sequence (usually by using a robot programming language), the knowledge base that includes the system's knowledge about the environment, and the user interface. The processing level includes the controller, which defines the robot motion by using the processed input from internal and external sensors. The execution level includes the actuators, which perform the requested tasks, and the sensors, which provide raw sensory data to the system.

The right part of Fig. 3 presents the layered architecture of the open robot controllers as presented in OSACA (Sperling & Lutz 1996). The upper layers use the services provided by the lower ones. The hardware layer includes the processing units that are used for the software execution. The operating system controls and synchronizes the use of the resources and acts as the intermediate layer between the hardware and the application software. The communication layer implements the information exchange between the subsystems. The configuration layer allows the installation and parameterization of the desired subsystems. The control objects are the subsystems that provide the continuous processing functionality in the application. They are interfaced to the software and hardware system layers through the software and hardware API layers, which should provide plug and play capabilities.


Fig. 3. Typical layered architecture of robotic applications. The organizational level includes the Procedure Manager, the Knowledge Base and the User Interface; the execution level includes the Sensors and the Actuators; the processing level includes the robot controller, which may be analyzed into the open architecture layers.

5. Requirements

5.1 Functional and non-functional requirements

A generic framework has to consider some functional and non-functional requirements derived from the above description of the sensors, the robot controllers and the control schemes using sensors. These requirements, as seen from the user perspective, are briefly presented in the following. As regards the basic functional requirements, the framework should provide the infrastructure and patterns of use for modelling and interfacing of the sensors, should allow sensor integration with industrial robotic controllers in open loop and closed loop control schemes, should support multiple sensors of the same or different type, should allow object recognition from the sensory data, and should support virtual sensory data and manipulators. The basic non-functional requirements are related to the software quality factors: the composed systems must be characterized by reliability, efficiency, integrity, usability, maintainability, flexibility, testability, reusability, portability and interoperability.

5.2 Use cases

The tasks that the user requires to be executed (use cases) are more or less common for the typical industrial applications and have to be covered by the framework. These are:

- Manual movement (offline). The user moves the robot manually using a user interface module by specifying the target state.

- State reading (offline-online). Some sections of the robot trajectory can be executed without strict precision requirements. In these cases the system reads the desired state from the organization level. The consecutive states, from which the robot trajectory is composed, may be defined at the organizational level through a robotic language (where high-level tasks are defined) or may be pre-recorded during state teaching.

- State teaching (offline). Trajectory sections (set-points) are recorded while the user moves the robot manually.

- Parameterization (offline). During parameterization the data needed for sensor-based control are manually defined and stored. These data are essential for the operation of the system and they are retrieved each time a sensor-guided task has to be executed. For each task the system must be parameterized separately. The parameterization may regard the end-effector, the sensor, the regulator and the processing.

- Training (offline). Some special system parameters are automatically calculated, e.g., inverse Jacobian matrices resulting from feature measurements while the robot executes movements at predetermined steps for each degree of freedom.

- Sensor-guided movement (online). The system executes a task in real time. The task includes movement with and without sensory feedback at the end-effector level. When the robot moves without sensory feedback, it reads from a database the trajectory that has been stored during teaching. When sensory feedback is required, the movement is calculated online using the parameters that were fixed during parameterization; the parameters are loaded from a database.

5.3 Design requirements and fulfillment

There are several requirements in designing a reusable framework for sensor integration in industrial robotic systems, and they are dictated by the user requirements presented in the previous section. The most important design requirements are presented next, followed by the decisions for their fulfillment.

The composing elements have to be modular and decoupled to cover the requirements for reusability, reliability, testability, usability, flexibility, integrity and maintainability. The semantic decoupling is achieved by grouping functionalities according to general needs and not according to application-specific requirements. The interconnection decoupling is achieved by specifying the control objects layer (capsules, described in the next section) in a modular way. Each capsule has well-specified responsibilities and attributes and communicates with the other ones only through ports using the related protocols (described in the next section). By forcing capsules to communicate solely through ports, it is possible to fully decouple their internal implementations from any direct knowledge about the environment and to make them highly reusable. Virtual sensors and robots can also be used keeping the same protocols. The separation of objects is achieved through the use of the object-oriented C++ language. Interface objects are used wherever possible for component decoupling based on the different roles of the elements.


Fig. 4. The decomposition of the processing layer into control objects (capsules) and their interconnection with modules of the organization and the execution layer (black and white boxes indicate client and server ports respectively). It is composed of the AM (which communicates with the manufacturer's controller), the SAI, the SC and the SM capsules. The capsules' activation and states are controlled by the ProcedureManager. The system parameters are read from the ParameterDB, and the predefined states are provided by the StateServer. Interaction with the user is handled by the user interface, which communicates with the processing layer through the ParameterDB.

The framework has to be independent of sensor-specific and industrial robot controller-specific implementations in order to allow the integration of every sensor with any industrial controller type. This is achieved by common communication rules (protocols) for all sensors and controllers.


The framework has to be independent of specific control schemes to ensure wide applicability. This is achieved by defining separate components handling the control scheme-specific aspects of the design (the AM and SC capsules, as will be mentioned in the next section). The framework has to be independent of specific vision algorithms and recognition patterns (features) as well. It should allow the organizational layer to change the algorithms used for a task at run time without disturbing the execution at the control object layer. The use of the "strategy" pattern (see e.g., (Douglass 2002)) allows for substitution of algorithms at run-time from higher architectural layers without altering the control flow of the lower layers; it also facilitates the easy incorporation of existing algorithms, code or frameworks. In this manner the fulfillment of the requirements for reliability, efficiency, integrity, usability, maintainability, flexibility, testability, and reusability can be achieved.

The data structures used for handling sensory information are variable, mainly due to the type of the acquired data. Therefore the related structures should be flexible enough to incorporate data generated by a wide variety of sensors. The use of class templates, which are available in C++, provides a solution to this problem.
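As an illustration of the "strategy" pattern in this setting (the class names below are ours, not the framework's), a sensor-processing component can hold a replaceable feature-extraction algorithm behind a stable interface, so that higher layers can substitute algorithms at run time without altering the control flow:

    #include <memory>
    #include <vector>

    struct FeatureVector { std::vector<double> values; };

    // Stable interface: the control flow depends only on this abstraction.
    class FeatureExtractor {
    public:
        virtual ~FeatureExtractor() = default;
        virtual FeatureVector extract(const unsigned char* image,
                                      int rows, int cols) = 0;
    };

    // One concrete strategy, e.g., corners found as intersections of edges.
    class EdgeCornerExtractor : public FeatureExtractor {
    public:
        FeatureVector extract(const unsigned char*, int, int) override {
            return {};  // real algorithm omitted in this sketch
        }
    };

    class SensorManager {
    public:
        // Invoked from the organizational layer to swap algorithms at run time.
        void setExtractor(std::unique_ptr<FeatureExtractor> e) {
            extractor_ = std::move(e);
        }
        FeatureVector measure(const unsigned char* img, int r, int c) {
            return extractor_->extract(img, r, c);  // assumes an extractor is set
        }
    private:
        std::unique_ptr<FeatureExtractor> extractor_;
    };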

A balance has to be achieved between flexibility and usability. The framework should make as few assumptions as possible, but on the other hand, the fewer assumptions made, the more effort is required to build a system. A good trade-off between flexibility and usability is provided if we build a library of components, which is enriched as more applications are implemented. The implementation of the design decisions at the control object layer is demonstrated in the following subsections. At the class level the design does not differ significantly from popular approaches and thus will not be analyzed here.

6. Control objects

This work focuses on the control objects layer, which offers its services independently of the organization and execution levels and of its underlying layers of the open controller architecture. The control objects are usually associated with a particular sensor or actuator type, or they can simply provide services based on input from other control objects. The sequence of the provided services is defined by the organization level (procedure manager, user interface) and is influenced by the environment.

For the modeling of the system structure we use the UML notation for real-time systems that is proposed in (Selic & Rumbaugh 1998), where the notions of capsules, ports and protocols are defined. Capsules are complex, physical, possibly distributed components that interact with their surroundings through one or more signal-based boundary objects called ports. A port is a physical part of the implementation of a capsule that mediates the interaction of the capsule with the outside world; it is an object that implements a specific interface. Each port of a capsule plays a particular role in a collaboration that the capsule has with other objects. To capture the complex semantics of these interactions, ports are associated with a protocol that defines the valid flow of information (signals) between connected ports of capsules. In the presented design the capsules are the high-level system components (control objects), the ports are the interface classes and the protocols are the communication schemes between the components. These are presented in Fig. 4 and explained in the following.


1   /* Initialization */
2   {COMMUNICATION WITH DB AND INITIALIZATION OF LOCAL PARAMETERS}
3
4   /* Main loop */
5   while {CONDITION}
6   {
7      {RECEIVING INPUT}
8      {PROCESSING INPUT}
9      {SENDING OUTPUT}
10  }
11
12  /* Exit */
13  {STORING TO DB AND MEMORY DEALLOCATION}

Table 1. Execution scheme of a system process with the initialization phase, the main loop and the exit phase.
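Rendered as C++, the scheme of Table 1 corresponds to a skeleton like the following; all type names are hypothetical stubs, since the chapter does not specify them:

    #include <string>

    // Hypothetical stubs standing in for framework types.
    struct Parameters { /* thresholds, gains, ... */ };
    struct Input      { /* data from other capsules */ };
    struct Output     { /* results for other capsules / organization */ };
    struct ParameterDB {
        Parameters load(const std::string& taskId) { return {}; }
        void store(const std::string& taskId, const Parameters&) {}
    };
    struct Ports {
        bool   active() const { return false; }  // false once deactivated
        Input  receive()      { return {}; }     // line 7 of Table 1
        void   send(const Output&) {}            // line 9 of Table 1
    };
    Output process(const Input&, const Parameters&) { return {}; }  // line 8

    // One capsule process following Table 1.
    void capsuleProcess(ParameterDB& db, Ports& ports, const std::string& task) {
        Parameters p = db.load(task);        // lines 1-2: initialization from the DB
        while (ports.active()) {             // lines 4-10: main loop
            Input in = ports.receive();
            ports.send(process(in, p));
        }
        db.store(task, p);                   // lines 12-13: store results, clean up
    }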

Generally, the role of the Controller that was presented in Fig. 3 is to receive the sensory data from the sensors, to calculate the desired state and to communicate the next state to the actuator. It may be divided into the Manufacturer's Controller (MC), which includes the software and hardware provided by the robot vendor for motion planning and motion control, and the sensor integration modules. These modules are displayed as capsules in Fig. 4, which also presents their communication with the organization and execution layer modules; they are the Actuator Manager (AM), the Sensor Manager (SM), the State Calculator (SC) and the Sensor-Actuator Interface (SAI).

Before we proceed to the description of the processing-layer capsules, it is necessary to describe briefly the organizational level. The role of the ProcedureManager is to issue high-level commands using events coming from the processing layer. It activates and defines the states of all the processing layer capsules. The StateServer is the module of the organization level that maintains the pre-defined robot states, which may include pose, velocity and acceleration. The states may be retrieved from a database or may come as input after online calculation. The ParameterDB stores all the parameters and attributes that are useful for the execution of the capsules in the processing layer. The parameters' update is performed online and offline. The ParameterDB may also act as a link between the User Interface and the processing layer.

We begin the description of the processing layer capsules with the AM. Its role is to calculate the next robot state and to forward it to the MC for execution. The AM receives at each robot cycle the current end-effector state from the MC and the desired state from the sensory modules, and then it calculates the next state (e.g., using a control law for state regulation, which helps to avoid unwanted end-effector oscillations); the next state is then sent to the MC. Different control laws can be applied to each DOF. The robot interpolation cycle and the sensor cycle may differ significantly (multi-rate system). Therefore, for closed loop sensor-guided tasks the robot needs to be coupled with the sensors by using the intermediate capsule SAI; the SAI receives asynchronously the desired state from the sensory modules whenever it is made available by the SC, and forwards the current state - calculated using sensory input - synchronously to the AM at each robot interpolation cycle.


The role of the SM is to operate at the image level and to extract the image features from the sensory data. On the contrary, the role of the SC is to operate at the workspace level and to use the image-level measurements of the SM to calculate the environmental state (e.g., physical pose) of the end-effector or the target objects. The SM sends data requests to the sensors and receives from them the sensory data; after image processing it outputs the feature measurements to the SC. The SC receives the feature measurements from the SM along with the current actuator state from the AM through the SAI, and outputs to the AM, through the SAI, the desired state of the end-effector.

Each of the aforementioned capsules operates within a loop in a system process or thread. The scheme of each of these system processes is displayed in Table 1. During initialization the initial local parameters are loaded to memory from the ParameterDB and the appropriate memory blocks are initialized (2). The main loop (5-10) performs the processing work. At the beginning of the loop the data from other processes and the parameter database (when appropriate) are read (7). Then the data are processed (8) and the results are sent to the other system capsules or to the organization level (9). The integration platform may implement synchronous or asynchronous communication from either or both sides. The main loop commands may vary according to the capsule state. During the exit procedure, which is executed when the task is finished, the memory is released and the data may be stored into the ParameterDB.

The implementation of the design framework uses the services of a communication layer (Fig. 3), which has to be deterministic to cover the needs of a real-time industrial system. The respective communication is performed through the ports presented in Fig. 4 and follows the producer-consumer concept. A method for communication of modules that reside in different processes is the "proxy" pattern. In this pattern a pair of proxy objects, each of them residing in a different thread/process, is used. The local calls to the proxy objects are mapped to inter-process messages (message destinations may be located either at compile time or at run-time through broker objects). The synchronization protocols are distinguished into synchronous and asynchronous. The "message queuing" pattern can be used for implementing the asynchronous port communication protocol, e.g., through queued messages of unit length. A synchronous protocol can be implemented via the "rendezvous" pattern: the basic behavioral model is that as each capsule becomes ready to rendezvous, it registers with a Rendezvous class and then blocks until the Rendezvous class releases it to run. A detailed description of the use of the proxy, message queuing and rendezvous patterns for inter-process communication and synchronization may be found in (Douglass 2002). In the applications that we created using the proposed framework the ports were implemented as proxy objects.
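As a concrete illustration of the asynchronous protocol with queued messages of unit length, the state exchange between the SC and the AM via the SAI could be sketched as below (class and method names are ours); the zero-vector-on-empty behavior anticipates the sensor-guided movement use case of section 7.5:

    #include <array>
    #include <mutex>

    // Asynchronous port with a unit-length message queue: the producer
    // (SC side) overwrites the latest state, the consumer (AM side)
    // polls once per interpolation cycle and receives a zero vector
    // when no fresh measurement is available.
    class StatePort {
    public:
        using State = std::array<double, 6>;

        void publish(const State& s) {        // called at the sensor rate
            std::lock_guard<std::mutex> lock(m_);
            latest_ = s;
            fresh_  = true;
        }

        State poll() {                        // called at the robot interpolation rate
            std::lock_guard<std::mutex> lock(m_);
            if (!fresh_) return State{};      // zero vector: nothing new
            fresh_ = false;
            return latest_;
        }

    private:
        std::mutex m_;
        State latest_{};
        bool  fresh_ = false;
    };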

7. Use cases implementation

In the following we show how to implement the use cases by employing the control objects defined in section 6. At the beginning we briefly describe the tasks that do not require sensory input, namely the manual movement, the state reading and the teaching. Then we describe the general pre- and post-conditions. Finally we discuss the interactions for the tasks that require sensory input, which are the parameterization, the training and the sensor-guided movement.

12

Industrial Robotics - Programming, Simulation and Applications

7.1 Operations without sensory feedback

During manual movement the user defines through a user interface module the desired robot state, which results in robot movement. Normally the robot manufacturers provide user interface units through which the user is able to move the robot. The user input is then passed to the StateServer and the robot reaches the new state after several interpolation cycles calculated in the MC; for such tasks no third-party software modules are necessary.

During state reading the desired trajectory is read from the organizational level and executed without sensory input. The desired states come from the StateServer. The state reading task is usually executed before and after tasks that need sensory data; the robot moves without external sensing until a state is reached (e.g., a nominal pose) at which a task that requires sensory input has to be executed, e.g., "blind" movement from a "home" robot position to a nominal position where a grasping task is to be executed. The trajectory has been previously recorded during the teaching task described next.

The teaching task is performed in order to record a trajectory (interpolation nodes) that will be used later for robot guidance without sensory input (state reading). It is executed as follows: while the user moves the robot along the desired trajectory, each intermediate state is received synchronously from the robot controller and sent to the StateServer for storing.


Fig. 5. The general pre- and post-condition pattern for task execution.

7.2 Pre- and post-conditions

We define a common pattern for the conditions that must be satisfied before the invocation of a use case (task), as well as for what happens at the end of its execution; this is displayed in Fig. 5. Before the task invocation the ProcedureManager activates the capsules that participate in this specific task and sets their states (1, 3, 5, 7). Through these activation messages the ProcedureManager also sends to the capsules the unique id number of the task that is to be executed. This id identifies the type of the task as well as the environmental or (and) system status parameters as they were (or will be) defined during parameterization or training. Using this id each capsule is able to load the task-specific parameters from the parameter database, e.g., threshold values for image processing, control variables for regulation etc. (messages 2, 4, 6, 8). This communication was previously mentioned in line 2 of Table 1. After the task execution the participating capsules are deactivated or eventually set to an idle state (messages 9, 10, 11, 12). Before deactivation the new parameters or results are stored into the ParameterDB (line 13 of Table 1).

These pre- and post-conditions apply to the tasks presented in the following, but we will omit their presentation for simplicity.

7.3 Parameterization

During the offline parameterization task the user manually changes the parameters used by one or more capsules and is able to evaluate the impact of this change. The parameters are changed in the ParameterDB through the user interface, and then they are read asynchronously by the active capsules and applied. If appropriate, the parameters may be saved by the user to be used in future tasks. The interactions for the parameterization of an SM capsule are given in the following:

The ProcedureManager instructs the SM to measure (1). The measurement is executed (2) and the results are sent to the ParameterDB (3), from where they are accessed (asynchronously) by the User Interface (4). The steps (2-5) are repeated until the user changes an SM parameter, e.g., a region of interest (6). The new parameters are read by the SM (7), and new measurements are performed (8) and written to the ParameterDB (9). The new measurements are read asynchronously by the User Interface (10). The steps 7-10 last for a sensor cycle and are performed until the user decides to store permanently the content of the ParameterDB (11).


Fig. 6. A parameterization example for the SM capsule. The user changes the SM parameters and the SM reads them through the ParameterDB and applies them in processing. Finally the parameters are saved by the user.

7.4 Training

A sub use case of parameterization with particular interest is training, which is an automated calculation of some sensor-related system parameters. An example of training is the calculation of a feature Jacobian, which provides the system state from the image features; another example is the calculation of the camera pose relative to the robot end-effector (calibration). In these training tasks the end-effector moves stepwise along a predefined training trajectory for each degree of freedom of the task space. During each training step the sensor subsystem measures the features on the images that are acquired synchronously by the cameras. When the training movement is completed, the required parameters are calculated based on the feature measurements; they may be task-specific (e.g., a feature Jacobian matrix) or system-specific (e.g., the camera pose with respect to the robotic hand). Finally the calculated parameters are stored into the database.


Fig. 7. The capsule interaction during training. When a task requiring sensory input is to be trained the end-effector moves stepwise along a predefined training trajectory for each degree of freedom of the task space and the sensor subsystem measures the image features. When the training movement is completed the parameters are calculated and then stored into the database.

The capsules that participate in training are the AM, SC and SM, which communicate with the StateServer and the ParameterDB. The StateServer holds the states defining the training movement. The training interaction using a single SM instance is presented in detail in Fig. 7 and is as follows:

1. The ProcedureManager commands AM to read the desired training state from the StateServer.

2. AM requests synchronously from SC the state vector that defines the next training End Effector State (EES).

3. After receiving the requested state the next EES is calculated using a control law.

4. AM sets the next EES to the RobotServer to move the robot. The current EES is returned.


Steps 3-4 may be repeated until the distance between the current and the desired training state becomes smaller than a predefined threshold, because high positioning accuracy is required; another scenario is that a timeout occurs without achieving the desired positioning accuracy, in which case an exception is raised that either stops training or warns the user. Assuming the first case we have:

5. AM sends to the SC the currently achieved state, which acts as a trigger to activate the sensors and measure. AM waits until the measurement execution.

6. SC requests from SM to measure and remains blocked waiting for response.

7. SM executes the measurement. After measuring, the SC receives the measurements and becomes unblocked.

8. SC stores the measurement vector. Then the AM becomes unblocked.

Interaction steps 2 to 8 are repeated for each training step for all degrees of freedom of the task space. After loop completion:

9. SC calculates the required parameters.

10. The parameters are sent to ParameterDB where they are stored.

11. The ProcedureManager is informed about the task termination.

After step (11) training is finished. The same procedure has to be repeated for all tasks that need training. The steps (2-4) are executed in the robot interpolation cycle, while the duration of step 7 is decided mainly by the duration of the sensor cycle.
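To make the training computation concrete, the sketch below estimates a locally constant feature Jacobian by finite differences: the robot steps along each task-space degree of freedom, and the resulting feature displacement fills the corresponding Jacobian column. The moveAndMeasure callback is a placeholder for steps 2-8 of the interaction above, not a real framework call, and the step size is illustrative.

    #include <vector>

    // Estimate an m x n feature Jacobian during training. A call
    // moveAndMeasure(j, delta) moves DOF j by delta and returns the m
    // feature measurements; moveAndMeasure(-1, 0.0) only measures.
    std::vector<std::vector<double>> trainJacobian(
        int m, int n, double step,
        std::vector<double> (*moveAndMeasure)(int dof, double delta)) {

        const std::vector<double> f0 = moveAndMeasure(-1, 0.0);  // nominal features
        std::vector<std::vector<double>> J(m, std::vector<double>(n));

        for (int j = 0; j < n; ++j) {               // each task-space DOF
            const std::vector<double> fj = moveAndMeasure(j, step);
            for (int i = 0; i < m; ++i)
                J[i][j] = (fj[i] - f0[i]) / step;   // column j of J
            moveAndMeasure(j, -step);               // back to the nominal pose
        }
        return J;  // stored in the ParameterDB (step 10)
    }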

7.5 Sensor-guided movement

During sensor-guided movement the robot moves based on the desired states that are calculated after processing the sensor input using the pre-defined parameters. This movement may be performed using closed-loop control at the end-effector level, or open loop schemes such as the "look-then-move" or "scan-then-move" schemes that will be presented in the following.

In the closed-loop case the actuator manager requests from the sensor-actuator interface the measured EES. Due to the difference between the robot and the camera sampling rates the measurement may not be available; in that case a zero state vector is received; otherwise from the state vector (desired state) a new vector is calculated (using a regulator, for the reasons explained in section 4.2) and sent to the robot controller. The procedure is repeated until the difference between the desired state and the current one becomes very small. The task execution is performed in real time using the parameters defined during parameterization and training (Fig. 8). The participating subsystems are the AM, SAI, SC and SM, together with the StateServer and the ParameterDB. The RobotServer, AM and StateServer operate at the robot interpolation rate, while the SC and SM operate at the sensor sampling rate. The SAI communicates asynchronously with the AM and the SC. Therefore the relative order of the signals between subsystems operating at different rates is only indicative, e.g., step 12 is not certain to precede 13. More specifically the interaction is as follows:

1. The ProcedureManager commands the AM to execute the task using sensory feedback.

2. AM requests an EES vector from SAI. The SAI has no available state error vector, because no sensor data have been acquired yet, and thus returns a zero vector.

3. SAI requests the error from the SC asynchronously, triggering the sensor-related part of the system to measure.

4. AM calculates from the error and the current state the desired EES using a regulator.

5. SC requests a measurement from SM synchronously, so that SC will be able to calculate the state error.


6. AM sends to the robot the correction vector.

7. The call is executed similarly to (2) and zero error is returned since there is no measurement available yet.

8. Similarly to (4) the next EES is calculated using zero error.

9. The next state is set to the robot.

10. Similar to (7) since no measurement available.

11. Similar to (8).

12. Data acquisition and processing are finished and SM returns to SC the feature measurement vector.

13. Similar to (9).

14. SC calculates the EES based on the new measurements.

15. SC finishes the calculations, calls SAI asynchronously and sets the current error vector.

16. AM requests from SAI the calculated error vector and SAI returns it.

17. AM calculates the desired EES from the input error using a regulator.

18. AM sends to the robot the state calculated in (17), which will move the robot towards the desired EES.

The steps 2-18 are repeated until a threshold is reached. Then:

19. AM informs the ProcedureManager about the task termination.


Fig. 8. The capsule interactions during sensor-guided movement for a closed loop system. The actuator manager requests from the SAI the measured error of the EES, the SAI returns the current state error or zero (due to the different camera and robot interpolation rate) and the regulated pose is sent to the robot controller. The procedure is repeated until the state error becomes very small.
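Combining the StatePort and PidRegulator sketches introduced earlier (all names ours, not the actual implementation), one AM interpolation cycle of this interaction might read:

    #include <array>
    #include <cmath>

    // One AM cycle: poll the SAI port for the state error, regulate each
    // DOF, and command the next end-effector state; returns true when the
    // task may terminate. Relies on the earlier StatePort/PidRegulator sketches.
    bool actuatorCycle(StatePort& sai, std::array<PidRegulator, 6>& reg,
                       std::array<double, 6>& nextState, double threshold) {
        const StatePort::State error = sai.poll();  // zero vector if nothing new

        double norm = 0.0;
        bool fresh = false;
        for (int i = 0; i < 6; ++i) {
            if (error[i] != 0.0) fresh = true;      // crude freshness test
            nextState[i] += reg[i].correction(error[i]);
            norm += error[i] * error[i];
        }
        // sendToRobot(nextState) would be issued here through the MC interface.

        // Terminate only on a genuinely measured, sufficiently small error.
        return fresh && std::sqrt(norm) < threshold;
    }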


The task of sensor-guided movement in open-loop configurations is much simpler than the respective closed-loop task. We examine the "look-then-move" and the "scan-then-move" cases. In the former the sensors acquire a local view of the target, while in the latter the sensors acquire a global view of the workspace before executing the task.

In the "look-then-move" scheme, when the robot is at a nominal pose (without moving), the sensors acquire the data, these data are processed, the new state is extracted, and then the end-effector moves to reach the required pose. Here there is no need for an AM capsule, because there is no need to apply a regulator at the end-effector level. In Fig. 9 the general interactions at the control object layer are described. The role of the StateServer here is to provide the current end-effector state to the SC and to communicate the states calculated from the sensory data to the other modules of the organizational level and to the robot controller. The StateServer (after receiving a command at the organizational level) sends to the SC the current end-effector state (1), and then the SC requests synchronously from the SM to acquire sensory data and provide image measurements (2). When the SM is ready with data processing (3), the SC receives the measurements and continues with the state calculation (4). The calculated state is then sent to the StateServer (5), which forwards it to the robot controller.

In the "scan-then-move" approach the robot moves initially to a predefined set of poses, where the sensor is activated and thus a part of the workspace is scanned. The acquired images are processed for feature extraction and registered for each pose using the end-effector pose (alternatively, the images may be processed after the execution of the scan movement). From the extracted image features the end-effector states are calculated in order to perform the task. The StateServer (after receiving a command from the execution layer) sends the current state to the SC. The SC requests synchronously from the SM to acquire data and measure (2); when this is done (3), the measurements are registered (4) and the ProcedureManager is informed about the end of the registration for the current end-effector pose. Steps 1-4 are repeated for all predefined poses; then the next end-effector states are calculated (6) and sent to the StateServer, which forwards them appropriately to the robot controller.

forwards them appropriately to the robot controller. Fig. 9. Sensor-guided movement for a. look-then-move scheme
forwards them appropriately to the robot controller. Fig. 9. Sensor-guided movement for a. look-then-move scheme

Fig. 9. Sensor-guided movement for a. look-then-move scheme b. the scan-then-move execution scheme.

18

Industrial Robotics - Programming, Simulation and Applications

8. Applications

The applicability of the framework has been verified through the implementation of industrial applications. The UML language has been used for modeling and the C/C++ languages have been used for coding. The first application presented here is the sunroof placement fitting, which aims to fit a sunroof onto a car body on the automobile production line (Kosmopoulos et al. 2002). We used the 6-DOF manipulator K-125 of KUKA with the KRC-1 controller, which permits the employment of external software for control at the end-effector level. The task of fitting the sunroof on the car body was performed using four CCD cameras, monitoring the four corners of the sunroof opening. The corners were identified as the intersection points of the monitored edges (Fig. 10). The general scheme introduced in Fig. 4 has been used. For this application four instances of the SM capsule have been employed and one of the AM, SAI and SC. As regards the SM capsule, the acquired data were of type char due to the monochrome camera images and thus image objects of type unsigned char were defined. In the SC capsule we extracted the desired robot state through an image-based, endpoint open loop approach. For the system state calculation we used an extended Kalman filter. The system state was given by the vector:

W k =[x, x’, y, y’, z, z’, a, a’, b, b’, c, c’] T k

which refers to the end-effector pose error and the corresponding velocities with respect to a nominal state, which is achieved when the sunroof is correctly placed on the target. For the calculation of the end-effector pose we have used an over-determined system through

measuring the image coordinates x i , y i of the four corner points. The measurement vector f k

is given by the following equation:

f k = [x 1 , y 1 , x 2 , y 2 , x 3 , y 3 , x 4 , y 4 ]

The system Jacobian has been assumed to be locally constant and has been calculated through training. We have also used classes for coordinate transformations from the target coordinate system to the end-effector, the robot base and other coordintate systems. For the AM capsule we have implemented PID control laws for each degree of freedom. The SAI was programmed to send the calculated state to the AM as soon it was requested. A zero vector was sent until the next calculated state was available from SC. All the use cases described in section 5.2 have been addressed. The closed loop

communication sequences described in sections 7 for a single SM capsule in this application were executed in parallel for the four SM capsules.

A second industrial application implemented using our generic framework, addresses the

depalletizing (or robotic bin picking, or pick and place problem) in the context of which a number of boxes of arbitrary dimensions, texture and type must be automatically located, grasped and transferred from a pallet (a rectangular platform), on which they reside, to a specific point defined by the user (Katsoulas & Kosmopoulos 2006). We used the KR 15/2 KUKA manipulator, a square vacuum-gripper, and a time-of-flight laser range sensor (model SICK LMS200), which provides 3D points detected by a reflected laser beam. The sensor is mounted on the gripper and the latter is seamlessly attached to the robot's hand (Figure 10).

A Design Framework for Sensor Integration in Robotic Applications

19

Due to the fact that only a line is acquired in each sensor cycle, the robot moves the sensor over the pile while acquiring data so that the whole pile can be viewed. The “scan then move” process is utilized for registering one dimensional scan lines obtained by the laser sensor during the robot hand movement, into a 2.5D range image. Finally the object states are extracted from the constructed image and the boxes lying on the upper layer, which is visible to the sensor, are grasped one by one and placed to a new position. For the next layer the same procedure is executed until the pallet is empty. The general scheme presented in Figure 4 has been implemented omitting the AM (due to the fact that no closed-loop control functionality is needed), and the SAI (because the robot moves only when the data acquisition is finished and no additional synchronisation is needed); their connections are also omitted. The communication with the manufacturer’s robot controller is achieved through the organizational level (StateServer). The processing layer of the application consists of one instance of the SM and the SC capsules; the SM controls the sensor for data acquisition and the SC calculates the state of the target objects, in other words their pose in the world coordinate system (target localization). Sensor-guided movement follows the “scan then move” pattern, described in figure 9b. Firstly, while the robot is at the initial scan position the StateServer sends the current robot state to the SC thus triggering a message from SC to SM to acquire data. The SM acquires an image (scan line), which is returned to SC. The SC receives as input the range image corresponding to the current configuration of the object on the pallet; using the current robot state the coordinates of the image elements are calculated in the world coordinate system. These steps are repeated for all predefined end-effector poses and then the scan lines are combined to a single image. This range image, which holds information about the environment, is processed and from it the target object states (poses) are finally extracted and sent to the organizational level through StateServer.

and sent to the organizational level through StateServer . Fig. 10. (a) The sunroof fitting robot

Fig. 10. (a) The sunroof fitting robot using four cameras in a closed loop control scheme and (b), (c) the depalletizing system during scanning and grasping

9. Conclusions and future work

In this chapter we have introduced a design framework for developing software for integrating sensors to robotic applications, provided that open controller architecture is available. The design is general enough and enables the integration of the popular sensors to any open controller using a wide variety of control schemes. By keeping some simple

20

Industrial Robotics - Programming, Simulation and Applications

interfacing rules it is possible to replace modules, e.g., for implementing control laws for the actuator DOFs or for processing the sensor data without much effort. This enables using the freely available or commercial libraries-components maximising the benefit of reusing well – tested modules. It is also possible to integrate multiple instances of the related processes-capsules provided that the interfaces will be the same. The need for a synchronisation module between sensors and actuator in closed loop control schemes has been underlined. The most common use cases for industrial applications have been presented and the related interaction patterns have been demonstrated. They include some typical open loop and closed loop control schemes. Furthermore, two concrete examples of the development of industrial applications facilitated by the presented framework were demonstrated. Some more applications, not presented here due to confidentiality issues, ranging from robotic measurement to grasping have already employed this approach. Although the development of design solutions and frameworks for robots is a challenging task we have managed to identify the similarities and differences for the sub-domain of sensor-guided industrial manipulators. The development time for the related applications has been significantly reduced to approximately one third of the initial development time due to reusability of design and components. A good trade-off has been found between over-generalizing and the application-specific design by building concrete examples based on the general principles. The presented work targets industry-specific systems, where the environment is structured to a significant level. This means that the decisions/use cases taken at the organization level are limited. More complex behaviors e.g., concerning mobile robots moving in unstructured environments are not addressed here. Anyone who deals with the development of robotic applications can benefit from this work, especially those that seek to couple the mechanical flexibility of industrial robots, with the flexibility to “build” various diverse applications with common characteristics. In the future the integration in our scheme of a sophisticated configuration tool, such as glue code (code level) and configuration tool in textual or graphical mode (integration and application level), may facilitate the flexibility and the rapid deployment of sensor- specific industrial applications, thus making our framework highly reconfigurable in a dynamic manner. Another objective is to provide functionality through a black box framework from an open integrated environment for the development and testing of the control objects. This environment will support the use of many sensors and actuators and will include their models for simulation purposes employing a variety of vision and control algorithms.

10. References

Anderson, R. J. (1993). SMART: A Modular Architecture for Robots and Teleoperation, IEEE International Conference on Robotics and Automation, pp. 416-421, Atlanta, Georgia, May 1993 Corke, P. I. (1993). “Video rate robot visual servoing”, In:Visual Servoing, K. Hashimoto (Ed.), vol. 7, Robotics and Automated Systems, pp. 257-283, World Scientific, ISBN 9810213646, Singapore

A Design Framework for Sensor Integration in Robotic Applications

21

Coste-Manière, E., Turro, N. (1997). The MAESTRO language and its environment:

Specification, validation and control of robotic missions. IEEE International Conference on Intelligent Robots and Systems, Vol. 2, pp. 836-841, Grenoble, France, September 1997. Douglass, B.P. (2002). Real-Time Design Patterns: Robust Scalable Architecture for Real-Time Systems, Addison Wesley, ISBN 0201699567. Hager, G., Toyama, K. (1998). XVision: A portable Substrate for Real-Time Vision Applications, Computer Vision and Image Understanding, Vol. 65, No 1, January 1998,

pp.14-26, ISSN 1077-3142 Hutchinson, S., Hager, G., Corke, P. (1996). A tutorial introduction to visual servo control, IEEE Transactions on Robotics and Automation, Vol. 12, No 5, May 1996, pp. 651-670, ISSN 0882-4967. Intel (2006). Open source computer vision library, http://www.intel.com/research/ mrl/research/opencv/ Katsoulas, D.K., Kosmopoulos, D.I, (2006). Box-like Superquadric Recovery in Range Images by Fusing Region and Boundary based Information, Int. Conf. on Pattern Recognition, Hong Kong, to appear. Kosmopoulos, D.I, Varvarigou, T.A, Emiris D.M., Kostas, A., (2002). MD-SIR: A methodology for developing sensor-guided industry robots, Robotics Computer Integrated Manufacturing, Vol. 18, No 5-6, Oct-Dec 2002, pp. 403-419, ISSN 0736-

5845.

Mazer, E., Boismain, G., Bonnet des Tuves, J.M., Douillard, Y., Geoffroy, S., Doubourdieu, J.M., Tounsi M., Verdot, F., (1998). START: An application builder for industrial robotics, Proceedings of IEEE International Conference on Robotics and Automation, Vol.

2, pp. 1154-1159, May 1998, Leuven, Belgium. MVTec (2006). http://www.mvtec.com/ Selic, B., Rumbaugh, J. (1998). “Using UML for modeling complex real-time systems”, White Paper. Sperling, W., Lutz, P., (1996). Enabling open control systems – an introduction to the OSACA system platform, Robotics and Manufacturing, Vol. 6., ISRAM 97, New York, ASME Press, ISBN 0-7918-0047-4. Orca (2006). Orca: components for robotics http://orca-robotics.sourceforge.net/index.html Toyama, K., Hager, G., Wang, J. (1996). “Servomatic: a modular system for robust positioning using stereo visual servoing”, In Proceedings International Conference on Robotics and Automation, Vol.3, pp. 2636-2643, April 1996, Minneapolis, USA. Valavanis, Κ.P., Saridis, G.N. (1992). Intelligent Robotics Systems: Theory, Design and Applications, ISBN 0792392507, Boston. ECV-Net –IUE (2000). http://www-sop.inria.fr/robotvis/projects/iue/main.html VXL (2006). The vxl homepage, http://vxl.sourceforge.net Vaughan, R. T., Gerkey, B., and Howard, A. (2003). On device abstractions for portable, reusable robot code, In Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 2421-2427,October 2003,Las Vegas, USA. Montemerlo, M., N. Roy and S. Thurn (2003). Perspectives on standardization in mobile robot programming: The Carnegie Mellon Navigation (CARMEN) toolkit. In Proceedings of the IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 2436-2441, October 2003, Las Vegas, USA.

22

Industrial Robotics - Programming, Simulation and Applications

Utz, H., Sablatnog, S., Enderle, S., and Kraetzschmar, G. (2002). MIRO - middleware for mobile robot applications, IEEE Transactions on Robotics and Automation, Vol. 18, No 4, August 2002, pp. 493-497, ISSN 0882-4967. Nesnas, I. A.D., Simmons, R., Gaines, D., Kunz, C., Diaz-Calderon, A., Estlin, T., Madison, R., Guineau, J., McHenry, M, Hsiang Shu, I., Apfelbaum, D., (2006). CLARAty:

Challenges and Steps Toward Reusable Robotic Software, International Journal of Advanced Robotic Systems, Vol. 3, No. 1, March 2006, pp. 023-030, ISSN 1729-8806.

2

Joint Torque Sensory in Robotics

Farhad Aghili

Canadian Space Agency Canada

1. Introduction

Joint Torque sensory Feedback (JTF) can substantially improve the performance of a robotic system. JTF makes it possible to achieve dynamic control of a robotic manipulator without the need for modeling its link dynamics. Moreover, it has been proved that JTF can achieve a precise torque tracking in a manipulator joint by compensating the eect of joint friction and actuator nonlinearities. Despite these advantages, accurate joint torque measurement encounters several practical challenges. Since much of the torque/force reaction of the link load on the joints appears in the form of nontorsional components, e.g. overhung force and bending moment, the torque sensing device has to not only bear but remain insensitive to these force/moment components. In addition, it is desirable to design the structure of the sensor such that it generates a large strain for a given load torque and therefore has a high sensitivity. However, this is in conict with the high-stiness requirement for minimizing the joint angle error introduced by the sensor. The main objectives of this chapter are twofold: Firstly, in Sections 2 and 3, we describe the technical challenges and practical solutions to accurate measurement of joint torques in the presence of the non-torsional components in a robot joint. For a torque sensing device, dierent mechanical structures will be examined and their properties, such as sensitivity and stiness in dierent directions and decoupling, will be explored. This allows a systematic design of a sensor structure which is suitable for torque measurement in a robot joint. Finally, the state-of-the-art and design of a torque sensor for robots will be presented in Section 4. The design achieves the conicting requirements of high stiness for all six force and torque components, high sensitivity for the one driving torque of interest, yet very low sensitivity for the other ve force/torque components. These properties, combined with its donut shape and small size make this sensor an ideal choice for direct drive robotic applications. Experimental data validates the basic design ideas underlying the sensor’s geometry, the nite element model utilized in its optimization, and the advertised performance. The second objective is to describe the application of joint torque sensory feedback (JTF)in robot control. The main advantages claimed for JTF are (i) it can simplify the complexity of the system dynamics by eliminating its link dynamics; and (ii) the control system is inherently robust with respect to both external force disturbance and parameter variation. First, assuming both actuator and torque sensor are ideal, we describe robot control with JTF in Section 5. Then, development of an adaptive JTF is presented in Section 6that requires only the incorporation of uncalibrated joint torque signals, i.e., the gains and osets of multiple sensors are unknown. Also, all physical parameters of the joints including inertia of the rotors, link twist angles, and friction parameters are assumed unknown to the

24

Industrial Robotics - Programming, Simulation and Applications

controller. Finally, in Section 7, JTF is modi ed to cope with actuator’s nite bandwidth dynamics actuator, i.e., no ideal actuator. An optimal JTF is designed in the frequency domain that minimizes the control system’s sensitivity to load torque disturbances and load dynamics. Experimental results demonstrate that the JTF remarkably improves the disturbance attenuation and load decoupling properties of a joint servo controller.

2. Sensing Joint Torque

The benets of using joint torque sensory feedback to improve the performance of robotic system have been recognized in the robotics community. For example, joint torque feedback can be used to compensate the nonlinearities and modeling uncertainties of manipulator dynamics (Hashimoto, 1989a; Kosuge et al., 1990;Aghili et al., 2001) or simply those of actuators (Asada and Lim, 1985; deSilva et al., 1987; Zhangand Furusho, 1998; Luh et al., 1983). Moreover, in geared systems, the implementation of model based controllers is dicult without measuring the actual output torque, since the eciency of gears depends greatly on the torque, and to a lesser extend, on the joint velocity, and yet this data is typically not made available by gear manufacturers. Thus there is a need for torque sensors that can be integrated simply between the actuator (and possibly gear) and the load. The sensor research described in this section was motivated by the lack of suitable sensors needed for our joint servo system. Accurate joint torque measurement encounters several design challenges. In the design of robot manipulators, it is desirable that much of the torque/force reaction of the link load on the joints appears in the form of non-torsional components, because actuation then takes less eort. SCARA robot arm designs, for instance, prevent gravity torques from acting on the joint motors (Newman and Patel, 1991). However, since torque sensors are directly attached to the motor’s distal links, they have to bear those potentially large non-torsional components. The rst challenge is to measure torque with minimal inuence from simultaneous and potentially large non-torsional components. In the following, we shall call the one axis of motor torque of interest the torsion. The other two torque and three force components, we shall call for simplicity the non-torsional components. The second challenge relates to the sensor stiness. High torsion stiness is important because any de ection adds positioning errors that cannot be compensated by the joint servo controller. To increase the signal-to-noise ratio and sensitivity of the sensor, it is desirable to design a structure that generates a large strain for a given load torque. However, the sensitivity introduces a torsion compliance that must be minimized. Thus there are two conicting requirements: High stiness and high sensitivity for torsion. A new solution to these two challenges will be described here and is distinct from existing designs. Some aspects of this research have been previously reported in (Aghili et al., 2002a). There is a large literature on the systematic design of six degree-of-freedom (dof) force/torque sensors (Hirose and Yoneda, 1990;Svinin and Uchiyama, 1995;Uchiyama et al., 1991). It is important to note that the design criteria for one and six dof sensors are very dierent. For instance, isotropy (uniform sensitivity)is a desirable property of a six degree- of-freedom force/torque sensor, hence its elastic structure tends to be fragile and compliant in all directions. In contrast, the elastic sensitivity of a torque sensor has to be maximized only around its torsional axis. Various techniques have been proposed to instrument geared motors for torque sensing (Hashimoto, 1989b; Luh et al., 1983; Pfeer et al., 1989; Vischer and Khatib, 1995), while little attention has been paid to nd an adequate structure for joint torque sensing (Asada

Joint Torque Sensory in Robotics

25

and Lim, 1985; Jacobsen et al., 1991; deSilva et al., 1987). Hashimoto et al. (Hashimoto, 1989b) utilized the elasticity of the ex-spline in a harmonic drive to measure the joint torque. This technique has the advantage of using the existing structural exibility of the robots. However, eliminating the error caused by rotation of the wave generator is dicult, it requires a nontrivial gear modi cation, and this approach cannot be used in direct drive systems. Many researchers (Pfeer et al., 1989; Wu, 1985;Luh et al., 1983;deSilva et al., 1987; Asada and Lim, 1985; Vischer and Khatib, 1995) choose not to place the sensor directly at the joint shaft to avoid the detrimental eects of the support forces and moments. Pfeer et al. (Pfeer et al., 1989) replaced standard attachment bolts in the PUMA 500 joints with a exure instrumented with strain gauges. Wu (Wu, 1985) used a shaft with a thin hollow circular section that is supported by two bearings. Strain gauges are mounted on the thin section. Luh et al. (Luh et al., 1983) cemented strain gauges on the connecting output shaft which is mounted to the ex-spline of the harmonic drive for each joint of the Stanford manipulator. Visher (Vischer and Khatib, 1995) integrated a torque sensor with the gear transmission, while Asada et al. (Asada and Lim, 1985) integrated strain gauges in the hub supporting the robot of a direct-drive motor. The strain gauges are cemented on three beams connecting the outer ring, mounted to the motor rotor, and the inner ring, which is coupled to the motor shaft. Asada et al. (Asada and Lim, 1985) cemented strain gauges inside the rotor of a direct- drive motor for torque measurement. Since these sensors are not mounted directly on the joints of a manipulator, the entire set of forces and moments are supported by the bearing set rather than the sensor structure. However, these sensors are not ideal because they can not account for the friction in the joint bearings. Moreover, the mechanical joints are complicated and sometimes bulky. In commercial torque sensors non-torsional components are not permitted or are highly restricted. Furthermore, they usually come in bulky packages, are built for shaft mounting and thus are not suitable for integration in a robot joint.

and thus are not suitable for integration in a robot joint. Fig. 1. Di ff erent

Fig. 1. Dierent structure for torque sensor.

Fig. 1 illustrates conventional geometries (a, b, c, and d) and the proposed hollow hexaform design (e) for torque sensors. Solid (a) and hollow (b) cylinders have been used extensively for joint torque sensing (Pfeer et al., 1989; Wu, 1985; Luh et al., 1983; deSilva et al., 1987; Wu and Paul, 1980) but are sensitive to non-torsional components (Wu and Paul, 1980). For this reason, they are usually mounted before the joint bearings so that the bearing can support the non- torsional components. In addition to requiring a bearing support structure, the main drawback of this method is that joint friction can not be observed by the sensor. Hub-sprocket designs Fig.1(c) have been used for geared joints (Hirzinger et al., 2001; Vischer and Khatib, 1995) as well as direct-drive joints (Asada and Lim, 1985; Tani et al., 1983). Although a better rejection

26

Industrial Robotics - Programming, Simulation and Applications

to non-torsional components has been reported for this type of sensor (Vischer and Khatib, 1995), the structure is not adequate for a modular robot joint. This is because of the drastic change in the physical size between the input (inner ring) and output (outer ring) of the sensor. Rather this type of sensor should be integrated with the gear or with the rotor of a direct-drive motor, and hence it suers from the same drawbacks as type (a) and (b) sensors. The hollow cruciform design Fig. reg: quad is used in commercially available torque sensors (Lebow, 1997). In this design, strain is induced mainly by bending of the wing elements. In order to achieve good sensitivity, the wing and sensor height is large, and as a result, the stiness is low and non-torsional torques must be kept small. The proposed hollow hexaform sensor Fig. 1(e) is similar in its basic geometry to the hollow cruciform sensor (d) with only four wings. However, there are fundamental functional dierences. Due to the increased number of wing pairs, and the shorter height, strain is induced primarily in torsion, resulting in a much stier sensor, and improved sensitivity. In addition, this design can be optimized to support non-torsional torques, making it suitable for direct drive robotic applications.

3. Sensor Design

In this section we describe how the new hollow-hexaform sensor achieves

(i)

high sensitivity to torsion,

(ii)

low sensitivity to non-torsional components, and

(iii)

high stiness in all axes of forces and moment.

3.1 Design for Decoupling In general, torque measurements are derived from strain measurements at several locations on an elastic sensor body. Assuming a linearly elastic material, there is a linear relationship between the applied forces and torques and the resultant strains described by

(1)

where is the vector of m measured strains, is the generalized force/moment vector acting at the center of the sensor body where the z-axis and joint axis are identical, and is the sensitivity matrix whose elements c ij denote the sensitivity of the ith strain gauge to the jth component of the generalized force/moment. This matrix permits the reconstruction of the torsion moment from the output signal with the gain vector. Unlike in 6-axis force/torque sensors, it is desired to reconstruct only the torsion moment n z from the measured strains . However, the sensitivity matrix underlies the mechanical coupling transmitted through the force/moment sensor structure. Therefore, the sensor output should be decoupled from the non-torsional components of forces and moments. We show that one can use the additive properties of the Wheatstone bridge to achieve the decoupling without the need for any subsequent arithmetic. The resulting advantage is a reduction of instrumentation and the number of wires by completing the bridge wiring inside the sensor, and simplication of tedious calibration. The question arises concerning the condition for which there exists such a mapping. It is necessary to consider each component of force to be a linear function of all strain gauge sensors in order to correct for the coupling. Let v out and v ex represent the output voltage and the excitation voltage of a half-bridge conguration, and GF denote the gauge factor

the output voltage and the excitation voltage of a half-bridge con fi guration, and GF denote
the output voltage and the excitation voltage of a half-bridge con fi guration, and GF denote
the output voltage and the excitation voltage of a half-bridge con fi guration, and GF denote
the output voltage and the excitation voltage of a half-bridge con fi guration, and GF denote
the output voltage and the excitation voltage of a half-bridge con fi guration, and GF denote

Joint Torque Sensory in Robotics

27

(Omega, 1995) of the strain gauges. Then, assuming every strain gauge pair constitutes a half-bridge circuit, the overall voltage output is given by where

is the gain of the strain gauges andcircuit, the overa ll voltage output is given by where represents the gain signs corresponding to

represents the gain

is the gain of the strain gauges and represents the gain signs corresponding to the negative

signs corresponding to the negative and positive branches of a Wheatstone bridge circuit. Substituting from (1) into the latter equation, we obtain

Substituting from (1) into the latter equation, we obtain (2) where w i is the inner
Substituting from (1) into the latter equation, we obtain (2) where w i is the inner

(2)

where w i is the inner product of vectors t and the ith column of C. It is evident from (2) that, in the most general case, the sensor output is the superposition of weighted components of the generalized forces and moments transmitted through the sensor unless all weights related to the exogenous forces and moments are zero. That is, the decoupling is achieved if and w 6 0. In this case, the sensor output is solely proportional to the torsion torque i.e.,

(3)

where α = κw 6 represents the overall sensitivity of the sensor. That vector t is orthogonal to all columns of C matrix except the last one underlines a condition on the structure of a torque sensor by which the senor exhibits the decoupling. As a results, such as sensor is not sensitive to the supporting forces and moments transmitted through the structure of the sensor.

moment s transmitted through the structure of the sensor. Fig. 2. Basic torque sensor structure. A:
moment s transmitted through the structure of the sensor. Fig. 2. Basic torque sensor structure. A:
moment s transmitted through the structure of the sensor. Fig. 2. Basic torque sensor structure. A:

Fig. 2. Basic torque sensor structure. A: solid discs; B: elastic element; C: strain gauge.

For the candidate geometry in Fig. 2andwith four strain gauges located on the numbered locations, the parametric form of the sensitivity matrix can be derived from the symmetry of the structure with respect to the external forces and torques as

with respect to the external forces and torques as The j th column of the matrix

The jth column of the matrix represents the strain sensitivities in the four site locations with respect to the jth load case, e.g., the third column represents the sensitivity due to f 3 .The identical elements in the matrix can be implied from the symmetric conguration of the structure with respect to dierent load cases. For instance, the symmetric condition of the

28

Industrial Robotics - Programming, Simulation and Applications

strain gauges with respect to the axial load, f 3 , implies identical elements of the third column. Now one can readily verify that and and hence the structure satis es the condition for mechanical decoupling. There are two main reasons in practice that violate the above assumption of exact symmetry among the measured strains. First, strain gauges exhibit variations in their gauge factor. Second, the strain gauges will be placed on areas with high strain gradients. This makes the gauge outputs sensitive to placement errors. This can also be modeled as a gauge gain error. As a consequence, exact cancelation of the non-torsional components may not be achieved with the theoretical gain vector. By virtue of the linear mapping (1), the non-torsional components produce no output, if all elements of the sensitivity matrix except that for the last column are suciently small. This implies that the strain sensitivity to the non-torsional components has to be held to a minimum by mechanical design. This condition in conjunction with the decoupling property of the sensitivity matrix actually determines the capability of the sensor to reject the eect of non-torsional force/torque to the output and to provide a high delity output signal.

the output and to provide a high fi delity output signal. 3.2 Maximum sensitivity To increase
the output and to provide a high fi delity output signal. 3.2 Maximum sensitivity To increase

3.2 Maximum sensitivity To increase the signal-to-noise (S/N) ratio and the resolution of the sensor, it is desirable to design the elastic component to provide large output signals, i.e., large mechanical gain α. Therefore one of the design criteria is to increase the torsional sensitivity, subject to not exceeding the allowable strain. In the absence of non-torsional components, the maximum attainable strain sensitivity depends solely on the material properties as the strain due to the maximum load should be close to the maximum allowable material strain or stress. However, non-torsional components produce strains which add to the strain caused by torsion. To ensure that the allowable maximum material strain is not exceeded, we consider the worst-case scenario where the generalized force/torque vector has its maximum force and moment. Then, in order to exploit maximum torsion sensitivity, c 16 , the other sensitivity components, i.e. , must be minimized by proper geometry design. This design requirement is consistent with a decoupling property of the sensor. It is interesting to note that cylinders are mainly used in the design of commercial torque sensors. By elementary strength of material analysis, one can show that bending moments produce twice the stress than the same magnitude torsion moment. This is why shear and thrust forces and bending moments must be kept small in these sensors.

and bending moments must be kept small in these sensors. 3.3 High torsional and bending stiffness

3.3 High torsional and bending stiffness Torsional deection degrades the position accuracy of the joint angle controller. Moreover, to increase the signal-to-noise ratio requires maximizing the sensor sensitivity. However, highly stisensors tend to be less sensitive ones. Therefore, one of the critical design challenges is to maximize the stiness while maintaining high sensitivity. We propose η, called structure eciency, which is the product of sensitivity and stiness as a performance index to capture these contradictory requirements,

(torsional sensitivity) (torsional stiness)

(torsional sensitivity) (torsional sti ff ness) (4) where δ is the torsional de fl ection. As

(4)

where δ is the torsional deection. As mentioned earlier, the gain of the strain gauge, κ, is

Joint Torque Sensory in Robotics

29

Joint Torque Sensory in Robotics 29 independent of sensor structure. Moreover, is a dimensionless variable which

independent of sensor structure. Moreover, is a dimensionless variable which captures the ratio of the local and global strains. These facts suggest that η is a decisive factor in the sensor design and should be maximized. Moreover, since it is dimensionless, the index provides a useful basis for comparison of dierent size torque sensors. The index is maximized in elastic structures that produce high strain concentration in torsion. In theory, there is no limit on the strain concentration in an elastic body. However, the high strain concentration takes place in a very small area, which might be smaller than the physical size of available strain gauges. Moreover, since strain gauges average the strain eld over their area, the detected strain can be signi cantly lower than the calculated value. Therefore, it is important to generate high strain over a suciently large area. This objective seems dicult to formulate analytically, but can be inspected by nite element methods. Introducing a torque sensor in a robot joint adds exibility. Although torsional exibility can, in principle, be compensated via sophisticated controllers, deection in the other axes is more problematic. Consequently, another design criterion dictates high stiness in non-torsional directions. Fortunately, the requirements for low deection and low strain sensitivity for non- torsional components are consistent. The structure shown in Fig. 2 exhibits high bending stiness around the x-axis. However, its poor stiness around the y-axis is a drawback. This problem can be simply solved by adding more wing pairs as shown in Fig. 1E. This improves the uniformity of the bending stiness along dierent axes as well as the body stiness. In general, all performance aspects of the sensor improve with the number of wing pairs, but since we will want to machine the sensor from one solid piece of metal, the limit is imposed by manufacturability. For this reason, we consider six wings in our design.

3.4 Practical shape considerations

Addition of a torque sensor to a robot joint must not require the redesign of the joint and should result in a minimal change in the manipulator’s kinematics, in particular the link oset. Hence, a shape with a small width is desirable. Minimizing the eects of thermal stresses is a design factor that cannot be ignored. Motors are a source of heat that ows from the motor to the attached link through the sensor body. Therefore, it is desirable to have an axisymmetric design that constrains the heat to ow in the axial direction, where no position constraint usually exists. The common hub-sprocket designs are prone to thermal stresses because of the temperature dierence between the hub and the wheel. Since the sensor is specically designed for a direct-drive motor with hollow shaft, ange mounting is preferred. Finally, the body should be designed for ease of manufacture. It should be a monolithic structure, that is, the body should be machined from a solid piece of metal. This

decreases the hysteresis and increase the strength and repeatability of the sensor. The hollow hexaform geometry shown in Fig. 1E satis es these requirements.

3.5 Material properties and overloading

So far only geometric properties of the elastic body were considered. Nevertheless, the stiness and sensitivity characteristics of the torque sensor are also determined by the material properties. The maximum allowable strain for foil strain gauges is typically 3 %, which is at least one order of magnitude higher than that of industrial metals , making the materials the limiting factor for sensitivity. Furthermore, the stiness depends linearly on Young’s modulus E of the material. By virtue of Hook’s law,

Furthermore, the sti ff ness depends linearly on Young’s modulus E of the material. By virtue
Furthermore, the sti ff ness depends linearly on Young’s modulus E of the material. By virtue

30

Industrial Robotics - Programming, Simulation and Applications

one can conclude that high sensitivity and stiness are achievable simultaneously only by use of a high-strength material. Because a linear response is desired from the sensor, the chosen sensor material must have a linear strain-stress relationship. Steel is the best available industrial material that has good linearity properties within a large stress range. Moreover, due to the oscillatory nature of the loading, steel can work with in nite fatigue life as the allowable strains are determined based on the endurance limit. The endurance limit or fatigue limit is the maximum stress under which mechanical failure will not occur, independent of the number of load cycles. Only ferrous metals and alloys have an endurance limit. The sensor is designed for a nominal torque 300 Nm that is based on the endurance limit of mild steel, which is twice as much as the yield point. Hence the safety factor in torque overloading is two. Remarkably, FEM results demonstrated that the stress induced by bending moment is very low for the proposed structure. As a result the structure can resist bending moments as high as 2000 Nm, which is almost an order of magnitude higher than the nominal torque.

3.6 Thermal deviation

The gauge resistance and gauge factor of all known strain sensitive materials vary with temperature. The change in resistance with temperature for a mounted strain gauge is a function of the dierence in the thermal expansion coecient between the gauge and the sensor body and of the thermal coecient of resistance of the gauge alloy. Self-temperature compensating gauges can be achieved for speci c materials by processing the strain sensitive alloy such that it has thermal characteristics that compensate for the eects of the mismatch in thermal expansion coecients between the gauge and the body of the sensor (Omega, 1995). The manufacturer of the strain gauge (OMEGA (Omega, 1995)) claims that their products accurately compensate the eect of temperature if the yare chosen according to speci c coecient of thermal expansion of material on which the gauges are mounted.

4. Design and Analysis

4.1 FEM Analysis

Once we had determined the basic hollow hexaform shape of the sensor, we used the FEM capabilities of IDEAS (Structural Dynamics Research Corp.) to optimize the sensor dimensions and to determine the size and placement of the strain gauges. Strain concentration is the design key to simultaneously achieve high torsional sensitivity and high stiness. For maximum sensitivity, strain gauges should be located where maximum induced strains due to the torsion load occur. Since the strain eld is averaged over the area covered by the strain gauges, it is very important rst to determine the loci of the peak strain, and second to ensure the creation of a suciently large strain eld. FEM is ideally suited to solve this problem. The sensor body is modeled by solid elements as shown in Fig. 3A. Since the body is symmetrical in geometry and applied boundary conditions, it suces to analyze only one half, provided that adequate position constraints are imposed on the nodes of the cutting plane. To simplify the FEM, small geometric features of the body are suppressed. Several load cases were investigated, corresponding to axial and shear forces as well as bending and torsion moments. In our application, the maximum forces and moments are 1000 N and 300 Nm,

Joint Torque Sensory in Robotics

31

respectively. A preliminary stress analysis showed that the axial and shear forces have negligible elastic eects because they produce a uniform strain/stress eld in the elastic body, resulting in a very weak maximum strain. In fact, the bending moment is the critical non-torsional component, and consequently two dierent load cases corresponding to the external torsion and bending torques are established for FEM. It is important to note that in robotic applications the maximum angular de ection due to external torques (which is ampli ed by the robot links)is a more restrictive constraint than linear de ection due to the forces. It has been investigated that the worst-case strain due to the bending load happens when its axis lies perpendicular to one of the wings, and consequently that axis is chosen for the bending.

wings, and consequently that axis is chosen for the bending. Fig. 3. FEM analysis. Moreover, as

Fig. 3. FEM analysis.

Moreover, as mentioned earlier, maximum torsional sensitivity requires minimum bending sensitivity. In particular, the radial displacement of the disc’s outer diameter due to the torsion, principal strain at the strain gauge seats due to both load cases, and maximum von Mises stresses/strains due to a combination of all load cases are selected as the design benchmarks. The described design criteria can be checked with the FEM results to modify the geometry of the sensor iteratively. In the shape optimization process we chose as design variables the wing thickness, the distance between two disks, and the inner hole diameter. The 95 mm outer diameter was selected to match our particular motor. The selected dimensions were varied to maximize the structure effciency (4) subject to keeping the maximum von Mises stresses σ within the allowable limits , considering fatigue. That is

within the allowable limits , considering fatigue. That is The IDEAS solver uses the Steepest Decent
within the allowable limits , considering fatigue. That is The IDEAS solver uses the Steepest Decent

The IDEAS solver uses the Steepest Decent (the gradient method) method with penalty function for finding the local minimum - the penalty function adds simply the weighted constraint equation into the objective function. The post-processing stage was guided by the following design benchmarks: the tangential

32

Industrial Robotics - Programming, Simulation and Applications

and axial displacement of the disc’s outer diameter; the principal strain in the axial direction and parallel to the gauge axes, due to both load cases, and the maximum von Mises stress/strain due to a combination of all load cases. Hence, the performance index can be obtained from Figures 3(c) and 3(e), while the the constraint condition is given by Fig. 3(b). These design criteria were checked with the FEM results to modify the geometry of the sensor iteratively. The FEM results of the elastic body’s final design are shown on Fig. 3. The worst-case von Mises stress, i.e. the combination of the two load cases, is shown in Fig. 3(b) where its maximum occurs at 150 MPa. This is close to the endurance limit of mild steel with a reasonable factor of safety. Figs 3(c) and 3(d) illustrate the tangential and axial displacement elds by which the torsional and bending stinesses are carried out; k z =2.7 ×10 5 Nm/rad and k x =4.9 ×10 6 Nm/rad, respectively. The axisymmetric pattern in the gure con rms the correctness of the imposed boundary conditions. Figs. 3(e) and 3(f) show the strain contour in the axial direction in which the strain gauges are oriented, for the 1 st and 2 nd load cases, respectively. The FEM results demonstrate that the strain sensitivity in torsion is seven times higher than in bending, while the bending stiness is 18 times higher than the torsional stiness

4.2 Experimental Characterization The torque sensor was machined from a solid steel rod (Fig. 4). Foil strain gauges (SG- 3/350-LY41 from Omega (Omega, 1995)) were cemented at the locations determined by FEM. The strain gauge bridge is excited by a precisely regulated 8.0 V DC voltage. Instrumentation ampli ers built into the sensor boost the signal level of the Wheatstone bridge output before A/D conversion. We took advantage of the hollow motor shaft, which is common in direct-drive motors, to locate the electronic circuit board beside the sensor. The local signal conditioning provides a stronger output signal and improves the S/Nratio. Moreover, since the electronic circuit is totally enclosed by the motor’s hollow shaft, it is well shielded from the powerful magnetic noise created by the motor.

from the powerful magn etic noise created by the motor. Fig. 4. The torque sensor prototype.

Fig. 4. The torque sensor prototype.

4.2.1 Static Test In order to characterize the linearity and sensitivity of the sensor, static torsional and bending torques were applied in the experimental apparatus illustrated in Fig. 5. One side of the sensor is axed to a bracket, while two aluminum bars were attached radially and axially to the other side. The ends of the bar were connected to a mechanical lever via ropes

Joint Torque Sensory in Robotics

33

in which load cells (MLP-50 from Transducer Techniques (Techniques, n.d.)) were installed. The lever varied the tension in the cord gradually between zero and maximum. During loading and unloading, the reference load cell output and the torque sensor output (device under test) were recorded.

the torque sensor output (device under test) were recorded. Fig. 5. The static test setup. (a)

Fig. 5. The static test setup.

under test) were recorded. Fig. 5. The static test setup. (a) Sensor output versus applied torque

(a) Sensor output versus applied torque Fig. 6. Characteristics of the torque sensor.

The force transducer signal is scaled to torque and then is plotted versus the torque sensor output voltage in Fig. 6(a) for 2, 000 sample points. The slope of the line indicates the sensor calibration coecient of α =30mV/Nm.The experimental data shows that all collective deviations from linearity are less than 0.2 % full scale. Low sensitivity to the other axes is one of the key characteristic of a good joint torque sensor. The cross sensitivity measurements were performed by utilizing the static tesbed setup. Forces and moments are applied on dierent axes by the system of pullies and weights shown in Fig. 5. The bending moment is applied via an axial bar rmly connected to the sensor, while the torsion torque is applied by the radial arm. The direction of the force is set by a pulley, as shown in Fig. 5. Theoretically, the sensor output should not be responsive to the bending moment or the forces at all. However, in practice, due to inaccurate placement of the strain gauges and/or dierences in the Gauge Factors of the strain gauges, exact decoupling may not be achieved. In the course of the experiment, it becomes evident that, with the exception of the torsion moment, the bending moment dominates the sensitivity of the sensor. The experimental result indicates that the ratio of the sensor readings with respect to the bending and torsion -the cross sensitivity -is only 0.6%. This conrms that the sensor eectively decouples the eect of the non-torsional components on the measured torque signal.

(b) Dynamic response

34

Industrial Robotics - Programming, Simulation and Applications

4.2.2 Dynamic Test Dynamic testing serves mainly to validate the FEM results on which the stress analysis is based. The experiment is arranged to extract the stiness of the sensor prototype. Again, the sensor is held rigidly by a bracket while a steel disk is anged to the other side. The disk is massive, with an inertia of I zz =0.24 kgm 2 , and the whole system behaves like a second order system. To detect all the vibration modes corresponding to all compliance directions, the cross sensitivity is deliberately increased by electrically by-passing the strain of all strain gauge pairs except one. Therefore, the torque sensor no longer has the decoupling property, and its output is the summation of all torque/force components weighted by their corresponding gains. The system is excited impulsively by a hammer, and a data acquisition system records the subsequent vibration with a sampling rate of 3.2 kHz. Since the torsion gain is highest, the sensor signal is dominated by torsion vibration which decays due to the structural damping of the sensor material. Nevertheless, the sensor’s modes are shown clearly in the frequency domain. To this end, a 0.5 second interval of the signal is taken via a Hamming window and then its spectrum is found by FFT. Fig. 6(b) reveals the modal frequencies associated with the bending and torsion compliances occurred at 150 Hz and 980 Hz. Due to the low damping, the modal frequencies are almost the same as the natural frequencies. The corresponding torsion stiness is calculated to be k z =2.4 ×10 5 Nm/rad, which results in a high torsional stiness. The bending stiness can be found in the same fashion. However, it should be noted that the relative inertia is half of the inertia for disks, i.e. I xx =0.5I zz . The bending stiness is calculated to be twenty times higher than the torsion stiness, k x =4.8 ×10 6 Nm/rad. A comparison with the FEM predictions reveals an acceptable 20% error.

5. Joint Torque Sensory Feedback

5.1 Motivation Model-based robot control schemes have been proposed in the past for achieving precise motion tracking, e.g., resolved acceleration control (Luh et al., 1980), or the computed torque method (An et al., 1988). These approaches depend on dynamics modeling of the manipulator’s load and link dynamics; they potentially perform poorly when the model is not accurate (An et al., 1988). Adaptive control of manipulators was proposed to deal with parametric uncertainties of the manipulator model (Slotine and Li, 1987). However, these controllers cannot deal with robotic systems with a unknown or variable payloads unless a more complicated adaptive versions is used (Yong-Duan et al., 1989). Robust control (Slotine, 1985) and robust adaptive control of manipulators (Reed and Ionnou, 1989) have been developed that can maintain the stability with respect to uncertainties, including bounded disturbance, time-varying parameters, as well as unmodeled dynamics. However, the performance may be compromised in these control approaches. Alternatively, the dynamic control of manipulators can be performed by using joint- torque sensory feedback without the need for modelling link dynamics (Hashimoto, 1989b; Kosuge et al., 1990;Stokic and Vukobratovic, 1993; Aghili et al., 2001; Aghili and Namvar, 2006). Such a controller can achieve robust performance against variation of link dynamics and rejection of external force disturbances at the expense of using additional sensors. Kosuge et al. (Kosuge et al., 1990) demonstrated experimentally the eectiveness of using joint-torque measurements to compensate for the nonlinear link dynamics of a

Joint Torque Sensory in Robotics

35

SCARA-type direct-drive robot. Hashimoto (Hashimoto, 1989b) applied this technique to

a harmonic-drive actuator where the deformation of ex-splines are used to measure joint-torques.

A survey of JTF can be found in (Stokic and Vukobratovic, 1993). In summary, the main

advantages claimed for JTF are:

(i)

Joint sensory feedback can simplify the complexity of a manipulator system dynamics by eliminating the link dynamics;

(ii)

The control system is inherently robust w.r.t. parameter variation, e.g., a manipulator grasping a payload with mass variation; and

(iii)

The control system can completely reject external force disturbances applied

on the links.

It should be pointed out that essentially the joint torque measurement is possible if there is a

small, but nonzero, joint deformation. A complete model of elastic-joint manipulators, which accounted for the dynamics of both the rotors of the drive system and the link system, were developed in (Murphy et al., 1990). These models were used by many researchers for development and analysis of non-adaptive (Spong, 1987; Tomei, 1991) and adaptive (Lozano and Brogliato, 1992) controllers for exible joint manipulators. These controllers aimed at achieving high performance by taking nite stiness of the joints into consideration; however, this complicates the equations of motion by increasing the order of the system. Tomei (Tomei, 1991) showed that even a simple PD controller, similar to that

used for rigid manipulators, suces to globally stabilize the elastic-joint manipulators about

a reference point. The dynamics model developed in (Kosuge et al., 1990) for manipulators

with JTF is a special case of the exible joint model where the stiness approaches innity. Unlike the other control approaches, the JTF controller (Kosuge et al., 1990) requires only the rotor dynamics, i.e., a half-model. However, since the joint deformation is not compensated for by the controller, a suciently high joint stiness is required so that the overall joint deformation can be assumed negligible.

5.2 Robot Dynamics with JTF

In robotics application, joint torque sensing can be used to eliminate completely the eect of

link dynamics if the manipulator’s joint are endowed with joint torque sensor device. The system dynamics are determined completely by the gyroscopic motion of the rotors of the joint actuators.

are determined completely by the gyroscopic motion of the rotors of the joint actuators. Fig. 7.

Fig. 7. The ith joint.

36

Industrial Robotics - Programming, Simulation and Applications

Fig. 7 depicts the ith motor axis and joint axis of a robot with n revolute-joints where each joint is driven by a geared motor with gear ratio n i and assuming that the motor shaft is cut right at its junction to the link. In the following the subscript i denotes quantities related to the ith link or joint. Assume that ω zi be the z-component of the absolute angular velocity of the rotor ω i ; θ i and denote the rotor angle and the rotor inertia along the z-axis. Also, and denote the torques of the motor, the joint, and the bearing friction, respectively, all of them seen after the gearbox. Let us assume the followings: (i) The principal axes of all rotor inertias are parallel to their own joint axes; (ii) all rotors are statically balanced, i.e., their center of mass coincident with rotation axis; and (iii) the rotors are axisymmetric, i.e., their inertias about x and y axes are identical. Also as in (Kosuge et al., 1990), we assume that the deformation of the joint torque sensors are negligible. Moreover, let coordinate system be attached to ith joint according to the Denavit-Hartenberg convention. Since the rotors are axisymmetric, the equation of motion of each rotor can be simply described by

(5)


In the above equation, we need to obtain an expression for the absolute angular velocity ω_zi in terms of joint-angle quantities. Let z_j represent a unit vector in the direction of the jth joint axis. Then, since the deformations of the torque sensors are assumed negligible, one can compute the link velocities by a technique similar to Luh et al. (Luh et al., 1980):

ω_i = ω_{i−1} + z_i θ̇_i      (6)

ω_zi = n_i θ̇_i + z_i^T ω_{i−1}      (7)

Equations (6) and (7) can be combined in the vectorial form

ω_z = (N + D(θ)) θ̇

where N = diag{n_i} and D(θ) is a lower-triangular matrix whose elements are

d_ij = z_i^T z_j for j < i, and d_ij = 0 otherwise.      (8)

It is worth pointing out that the unit vectors z_i constitute the columns of the manipulator's rotation Jacobian, J_rot = [z_1 · · · z_n] (Sciavicco and Siciliano, 2000). Hence, the D matrix can be constructed from the Jacobian by

D = tri(J_rot^T J_rot)

where the matrix function tri(·) returns only the strictly lower-triangular elements of the input matrix, while the rest of the elements of the returned matrix are zero. Defining T(θ) = N + D(θ) and J = diag{J_i}, and using (8) in (5), we get

J T(θ) θ̈ + J Ṫ(θ) θ̇ = τ_net      (9)

where τ_net = τ_M − τ_J − b is the net torque acting on the rotors.

Example 1. The coupling matrix D of a general 3-DOF robot is

D = [ 0                             0      0
      ca_1                          0      0
      ca_1 ca_2 − sa_1 sa_2 cθ_2    ca_2   0 ]

where a_1 and a_2 are the twist angles, sa_i and ca_i represent sin(a_i) and cos(a_i), and cθ_2 = cos(θ_2).
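To make the tri(·) construction concrete, the following minimal Python sketch builds the joint axes of this 3-DOF example under the Denavit-Hartenberg convention, forms D = tri(J_rot^T J_rot), and checks it against the closed-form entries above; it also confirms that T = N + D is nonsingular with det T = Π n_i, which follows because D is strictly lower-triangular. The numeric twist angles, joint angles, and gear ratios are arbitrary illustrations, not values from the chapter.

    import numpy as np

    def rot_z(t):
        c, s = np.cos(t), np.sin(t)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def rot_x(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

    def joint_axes(theta, twist):
        # z-axis of each joint expressed in the base frame (DH convention)
        axes, R = [], np.eye(3)
        for th, al in zip(theta, twist):
            axes.append(R[:, 2].copy())
            R = R @ rot_z(th) @ rot_x(al)
        return np.column_stack(axes)          # columns are z_1 ... z_n

    theta = np.array([0.7, -0.4, 1.1])        # joint angles (rad), arbitrary
    twist = np.array([0.3, -0.5, 0.0])        # twist angles a_1, a_2 (a_3 unused)
    n_gear = np.array([120.0, 120.0, 80.0])   # gear ratios, arbitrary

    J_rot = joint_axes(theta, twist)
    D = np.tril(J_rot.T @ J_rot, k=-1)        # tri(.): strictly lower-triangular part

    # compare against the closed-form entries of Example 1
    ca1, sa1 = np.cos(twist[0]), np.sin(twist[0])
    ca2, sa2 = np.cos(twist[1]), np.sin(twist[1])
    assert np.isclose(D[1, 0], ca1)
    assert np.isclose(D[2, 1], ca2)
    assert np.isclose(D[2, 0], ca1*ca2 - sa1*sa2*np.cos(theta[1]))

    T = np.diag(n_gear) + D                   # T = N + D
    assert np.isclose(np.linalg.det(T), n_gear.prod())
    print("D =\n", D)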


Now, one can readily show that the following inverse-dynamics control scheme

τ_M = τ_J + b + J Ṫ(θ) θ̇ + J T(θ) ( θ̈_d + G_D (θ̇_d − θ̇) + G_P (θ_d − θ) )      (10)

where G_D > 0 and G_P > 0 are the feedback gains, can achieve dynamic tracking of the desired trajectory θ_d.
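A quick way to see why (10) yields dynamic tracking is to substitute it into (9): the measured joint torque τ_J and the friction b cancel, leaving the linear error dynamics ë + G_D ė + G_P e = 0 with e = θ_d − θ. The Python sketch below simulates this closed loop for the 3-DOF example under assumed inertias, gear ratios, twists, friction, and link load torque; all numeric values are hypothetical, and the link-side torque is cancelled exactly because it is measured.

    import numpy as np

    a1, a2 = 0.3, -0.5                          # twist angles (arbitrary)
    N = np.diag([120.0, 120.0, 80.0])           # gear ratios
    J = np.diag([0.8, 0.6, 0.4])                # rotor inertias seen after the gearbox

    def T_of(th):
        D = np.zeros((3, 3))
        D[1, 0] = np.cos(a1)
        D[2, 1] = np.cos(a2)
        D[2, 0] = np.cos(a1)*np.cos(a2) - np.sin(a1)*np.sin(a2)*np.cos(th[1])
        return N + D

    def Tdot_of(th, dth):
        Td = np.zeros((3, 3))                   # only d_31 depends on the configuration
        Td[2, 0] = np.sin(a1)*np.sin(a2)*np.sin(th[1])*dth[1]
        return Td

    GP, GD = 100.0*np.eye(3), 20.0*np.eye(3)
    th, dth, dt = np.array([0.2, -0.1, 0.3]), np.zeros(3), 1e-3
    for k in range(5000):
        t = k*dt
        thd   = np.array([np.sin(t), 0.5*np.sin(2*t), 0.3*t])
        dthd  = np.array([np.cos(t), np.cos(2*t), 0.3])
        ddthd = np.array([-np.sin(t), -2.0*np.sin(2*t), 0.0])
        tau_J = np.array([5*np.sin(3*t), 2.0, -3*np.cos(t)])  # measured load torque
        b = 0.5*dth                                           # bearing friction
        T, Td = T_of(th), Tdot_of(th, dth)
        # inverse-dynamics JTF control law (10)
        tau_M = tau_J + b + J @ Td @ dth \
                + J @ T @ (ddthd + GD @ (dthd - dth) + GP @ (thd - th))
        # rotor dynamics (9): J T ddth + J Td dth = tau_M - tau_J - b
        ddth = np.linalg.solve(J @ T, tau_M - tau_J - b - J @ Td @ dth)
        dth = dth + dt*ddth
        th = th + dt*dth
    print("final tracking error:", np.linalg.norm(th - thd))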

6. Adaptive Joint Torque Feedback with Uncalibrated Torque Sensor

In this section, we present the development of an adaptive JTF controller that takes uncalibrated joint-torque signals and then identifies the sensor calibration parameters as well as all joint parameters, including the rotor inertia, the link twist angles, and the joint-friction parameters (Aghili and Namvar, 2006). We show that asymptotic stability of the proposed adaptive controller can be ensured if an upper bound on the estimated sensor gain is respected. Subsequently, the parameter adaptation law is modified according to the projection parameter adaptation to meet the stability condition.

6.1 Adaptive Control

Note that dynamics formulation (9) is not adequate for adaptive control because JT is not a symmetric matrix. This problem can be fixed by changing the torque coordinate. Pre-multiplying (9) by T(θ)^T leads to

M_R(θ) θ̈ + C_R(θ, θ̇) θ̇ = T^T ( τ_M − τ_J − b )      (11)

where M_R = T^T J T and C_R = T^T J Ṫ.

Remark 1. Since J is positive-definite and T is a nonsingular matrix, the symmetric inertia matrix M_R is positive-definite. Also, it can be readily verified that the matrix Ṁ_R − 2C_R is skew-symmetric. We assume that all joints are equipped with torque sensing devices and that vector τ_S represents the sensor outputs. Recall from (3) that the output signals are linearly related to the actual joint torques. Hence, we can say

τ_J = α τ_S + β      (12)

where α = diag{α_i} and β = [β_1, …, β_n]^T are the sensor gains and offsets, respectively. Substituting (12) in (11) gives

M_R(θ) θ̈ + C_R(θ, θ̇) θ̇ + T^T ( α τ_S + β + b(θ̇) ) = T^T τ_M      (13)

where the friction b(θ̇) can be captured by a friction model reported in the literature (de Wit et al., 1995).
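The two structural facts in Remark 1, positive-definiteness of M_R and skew-symmetry of Ṁ_R − 2C_R, are exactly what the Lyapunov argument below relies on, and they are easy to check numerically. The Python sketch that follows does so for the 3-DOF example, reusing the T(θ) and Ṫ(θ, θ̇) functions of the earlier simulation with random rotor states; all numeric values are illustrative.

    import numpy as np

    a1, a2 = 0.3, -0.5
    N = np.diag([120.0, 120.0, 80.0])
    J = np.diag([0.8, 0.6, 0.4])

    def T_of(th):
        D = np.zeros((3, 3))
        D[1, 0] = np.cos(a1)
        D[2, 1] = np.cos(a2)
        D[2, 0] = np.cos(a1)*np.cos(a2) - np.sin(a1)*np.sin(a2)*np.cos(th[1])
        return N + D

    def Tdot_of(th, dth):
        Td = np.zeros((3, 3))
        Td[2, 0] = np.sin(a1)*np.sin(a2)*np.sin(th[1])*dth[1]
        return Td

    rng = np.random.default_rng(0)
    for _ in range(100):
        th, dth = rng.normal(size=3), rng.normal(size=3)
        T, Td = T_of(th), Tdot_of(th, dth)
        M_R = T.T @ J @ T                     # symmetric, positive-definite
        C_R = T.T @ J @ Td
        Mdot = Td.T @ J @ T + T.T @ J @ Td    # d/dt (T^T J T) along the trajectory
        S = Mdot - 2.0*C_R                    # should be skew-symmetric
        assert np.all(np.linalg.eigvalsh(M_R) > 0)
        assert np.allclose(S + S.T, 0.0)
    print("Remark 1 verified on 100 random states")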

The control objective here is to find a control law τ_M such that the rotor positions θ track the desired trajectory θ_d(t), while the parameters J, D, α, β and the friction coefficients are considered unknown. Moreover, note that the manipulator's twist angles are the only parameters relevant to matrix D. Hence, T(θ, ρ_t) can be parameterized in terms of the relevant kinematics parameters ρ_t; note that the kinematics parameters also appear independently in ρ_s, in combination with the dynamics and calibration parameters. Assume that the dynamics equation (13) is rewritten in the linearly parameterized form

Y(θ, θ̇, θ̈, τ_S) ρ_s = T^T(θ, ρ_t) τ_M      (14)

where vector ρ_s contains the combined dynamics, kinematics and calibration parameters.


Now, denoting ρ̂_s and ρ̂_t as the estimated variables, and T̂ = T(θ, ρ̂_t), we propose the following control law

τ_M = T̂^{−T} ( Y(θ, θ̇, θ̇_r, θ̈_r, τ_S) ρ̂_s − L s )      (15)

where

s = (θ̇ − θ̇_d) + Λ (θ − θ_d),   θ̇_r = θ̇_d − Λ (θ − θ_d),   θ̈_r = θ̈_d − Λ (θ̇ − θ̇_d)      (16)

and L > 0 and Λ > 0 are the gains. The matrix inversion in (15) should not be a problem, because det T̂ = Π_{i=1}^{n} n_i ≠ 0, meaning that T̂ is always nonsingular regardless of the value of ρ̂_t. Consider the parameter update law

ρ̂̇_s = −Γ Y^T s      (17)

where ρ̃_s = ρ̂_s − ρ_s denotes the parameter estimation error and Γ > 0 is the gain matrix. In the following, the stability of the control error will be investigated.
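Before stating the stability result, the structure of (15)-(17) can be illustrated on a single-joint version of (13), where T reduces to a scalar (taken as 1 here) and the regressor has four columns multiplying the unknowns {rotor inertia, viscous friction coefficient, α, β}. The Python sketch below is only a schematic of this control and update loop under assumed numeric values; it is not the multi-DOF regressor of the chapter, and the gravity-like load torque is a hypothetical stand-in for the link dynamics.

    import numpy as np

    # true plant parameters (unknown to the controller)
    J_true, c_true, alpha, beta = 1.2, 0.4, 1.5, 0.3
    lam, L = 4.0, 8.0                      # gains Lambda and L of (15)-(16)
    Gam = np.diag([2.0, 2.0, 2.0, 1.0])    # adaptation gain Gamma of (17)

    rho = np.zeros(4)                      # estimates of [J, c, alpha, beta]
    th, dth, dt = 0.0, 0.0, 1e-3
    for k in range(20000):
        t = k*dt
        thd, dthd, ddthd = np.sin(t), np.cos(t), -np.sin(t)
        e, de = th - thd, dth - dthd
        s = de + lam*e                     # sliding variable (16)
        ddth_r = ddthd - lam*de            # reference acceleration
        tau_link = 2.0*np.sin(th)          # load torque tau_J (hypothetical)
        tau_S = (tau_link - beta)/alpha    # raw, uncalibrated sensor reading, cf. (12)
        Y = np.array([ddth_r, dth, tau_S, 1.0])
        tau_M = Y @ rho - L*s              # control law (15) with scalar T = 1
        rho = rho - dt*(Gam @ (Y*s))       # update law (17)
        ddth = (tau_M - c_true*dth - tau_link)/J_true
        dth = dth + dt*ddth
        th = th + dt*dth
    print("tracking error:", th - thd, " alpha estimate:", rho[2])

As is typical in adaptive control, only the tracking error is guaranteed to vanish; the parameter estimates, including the sensor-gain estimate printed above, need not converge to their true values unless the trajectory is persistently exciting.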

Theorem 1. The control law (15) together with the parameter update law (17) ensure the following properties: s ∈ L_2 ∩ L_∞ and the tracking and estimation errors remain bounded, where L_∞ and L_2 stand for the spaces of bounded and square integrable signals, respectively.

Proof: Substituting the control signal (15) in the system dynamics (13) yields the error dynamics described by

M_R(θ) ṡ + C_R(θ, θ̇) s + L s = Y ρ̃_s      (18)

where ρ̃_s = ρ̂_s − ρ_s. Here we have used the linear-in-parameters property that, for any dimensionally compatible vector x,

M_R(θ) ẋ + C_R(θ, θ̇) x + T^T ( α τ_S + β + b(θ̇) ) = Y(θ, θ̇, x, ẋ, τ_S) ρ_s.

Introducing the positive storage function

V = ½ s^T M_R s + ½ ρ̃_s^T Γ^{−1} ρ̃_s

and differentiating it along the trajectories of the error dynamics and using Remark 1 yields

V̇ = −s^T L s ≤ 0.      (19)

From this, a standard argument (Ioannou and Sun, 1996) proves the theorem.


6.2 Asymptotic Stability

Asymptotic stability of the control error, or of s, can be ensured provided that s is uniformly continuous, i.e., ṡ is bounded. The latter can be inferred only if the control input τ_M is bounded. The condition for boundedness of τ_M is obtained in the following. For simplicity of presentation, we assume that the kinematics parameter vector ρ_t is known, and we then derive a simple condition for boundedness of τ_M; extending the results to the case where ρ_t is unknown is straightforward. Following the Lagrangian approach in (Luca and Lucibello, 1998), the joint torque can be computed from

τ_J = M_L(θ) θ̈ + h(θ, θ̇) + φ      (20)

where M_L(θ) is the link inertia matrix, h(θ, θ̇) contains all the nonlinear terms, and vector φ captures the effect of all external force disturbances.


Eliminating the joint accelerations from equations (13) and (20), and solving the resulting equation in terms of τ_S, yields

(21)

Substituting τ_S from (21) into (15) gives an expression for the control input that is independent of τ_S. That is

(22)

where α̂_N denotes the normalized estimate of the torque-sensor gain. Since all the variables entering the right-hand side of (22) are bounded, it is bounded too. Hence, we can say that τ_M is bounded if and only if the matrix multiplying τ_M in (22) is invertible, i.e.

(23)

Therefore, the control input τ_M remains bounded, and the control error asymptotically converges to zero, provided that the part {θ, α̂} of the system states does not enter the region where (23) fails. In this case, ṡ is bounded and hence s converges asymptotically to zero, which, in turn, implies convergence of the tracking error to zero.

In the following, we derive a sufficient condition on the sensor-gain estimate to achieve a bounded control input.

Remark 2. Using norm properties, we can obtain a conservative condition satisfying (23) as follows

(24)

Therefore, the joint torques remain bounded provided that the over-estimation of the sensor gains does not exceed a specific bound. Let us choose a constant ᾱ so that this bound always holds. Then, by virtue of Remark 2, one can show that (24) is satisfied if

α̂_i ≤ ᾱ,   i = 1, …, n.      (25)

Based on the projection adaptation law (Ioannou and Sun, 1996), it is possible to modify the parameter update law (17) to ensure that inequality (25) holds. Let α̂_i and ψ_i be the ith elements of α̂ and −Γ Y^T s, respectively. Then, consider the following projection parameter adaptation law for α̂_i:

α̂̇_i = ψ_i   if α̂_i < ᾱ, or if α̂_i = ᾱ and ψ_i ≤ 0;   α̂̇_i = 0 otherwise.

A standard argument (Ioannou and Sun, 1996) shows that the modified parameter update law guarantees (25) while keeping (19) negative semi-definite, i.e., V̇ ≤ 0.
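As a concrete reading of this projection rule, the short Python sketch below applies it elementwise inside one Euler integration step; the bound ᾱ, the current estimates, and the update direction ψ are placeholder values standing in for the quantities defined above.

    import numpy as np

    def project_gain_update(alpha_hat, psi, alpha_bar):
        """Projection adaptation: follow psi while alpha_hat_i is below
        alpha_bar; freeze any component that would cross the bound upward."""
        free = (alpha_hat < alpha_bar) | (psi <= 0.0)
        return np.where(free, psi, 0.0)

    # one illustrative update step
    alpha_hat = np.array([0.9, 1.0, 0.7])     # current sensor-gain estimates
    psi = np.array([0.5, 0.2, -0.1])          # ith elements of -Gamma Y^T s
    alpha_bar, dt = 1.0, 1e-3
    alpha_hat += dt * project_gain_update(alpha_hat, psi, alpha_bar)
    print(alpha_hat)   # the second component stays frozen at the bound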


7. Joint Torque Feedback in the Presence of Actuator Dynamics

Ideally, positive joint torque feedback decouples the load dynamics exactly and provides infinite stiffness against external torque disturbances. This would be possible if an ideal actuator could reproduce the same torque as measured by the torque sensor, so that the load torque could be trivially compensated. In practice, however, due to the actuator's finite-bandwidth dynamics, the feedback system may not respond fast enough to the measured load torque. As a result, a complete compensation of the disturbance torque cannot be achieved in the presence of actuator dynamics and/or delay. This section presents an optimal filter for positive joint torque feedback control which takes the actuator's dynamics into account and minimizes the servo-controller's sensitivity to load-torque disturbances (Aghili et al., 2001). We show that finding the optimal torque feedback is equivalent to the model-matching problem, which has an exact solution. In the case of a dynamic load, the optimal torque feedback minimizes the effect of load perturbations on the nominal rotor dynamics. A single-variable case is considered herein, yet the analytic solution can be readily extended to the multi-variable case.

7.1 Optimal JTF Design


Fig. 8. Joint torque feedback through filter Q(s).

Fig. 8 illustrates the general block diagram of a JTF system, where the transfer function H(s) represents the actuator dynamics and Q(s) is a filter. The external disturbance τ_J is measured via a torque sensor, and the torque signal is fed back for compensation through the filter Q(s). Let u(s) be the compensated control input under positive joint torque feedback in the Laplace domain. Similar to (10), we can define the compensated control input u as

τ_M(s) = u(s) + Q(s) τ_J(s)      (26)

where s is the Laplace variable. The disturbance sensitivity function

χ(s) = τ_net(s) / τ_J(s) = H(s) Q(s) − 1      (27)

shows how the disturbance τ_J(s) is transmitted to the net torque τ_net(s) acting on the rotor. Note that for an ideal actuator, where H(s) = 1, we can achieve complete decoupling of the disturbance, i.e., χ(s) = 0, by trivially choosing Q(s) = 1. Also, let G(s) represent the compliance of the position feedback controller, i.e., the transfer function from a torque disturbance to the rotor position in the absence of torque feedback.
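To get a feel for (27), the Python sketch below evaluates |χ(jω)| = |H(jω)Q(jω) − 1| on a frequency grid for a hypothetical first-order actuator H(s) = 1/(0.01s + 1): with the naive choice Q = 1 the decoupling degrades as ω approaches the actuator bandwidth, whereas a realizable approximate inverse Q(s) = (0.01s + 1)/(0.001s + 1) keeps the sensitivity small over a decade more of bandwidth. Both the actuator model and the filter are illustrative choices, not the chapter's optimal design.

    import numpy as np

    w = np.logspace(0, 4, 400)                     # frequency grid, rad/s
    s = 1j * w
    H = 1.0 / (0.01*s + 1.0)                       # first-order actuator model
    Q_naive = np.ones_like(s)                      # Q(s) = 1
    Q_inv = (0.01*s + 1.0) / (0.001*s + 1.0)       # realizable approximate inverse

    for label, Q in [("Q = 1", Q_naive), ("approximate inverse", Q_inv)]:
        chi = np.abs(H*Q - 1.0)                    # disturbance sensitivity (27)
        ok = w[chi <= 0.1]                         # band where decoupling is within 10%
        print(f"{label}: |chi| <= 0.1 up to about {ok.max():.0f} rad/s")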

Then, one can show that the addition of the JTF loop changes the overall compliance of the motion servo to

(1 − H(s) Q(s)) G(s),

which is equivalent to a weighted disturbance sensitivity function (Aghili et al., 2001) if the weighting function is chosen as W(s) = G(s). Now, the problem is to find a stable and realizable filter Q(s) ∈ RH_∞ (the class of stable, rational H_∞ functions) such that the maximum weighted sensitivity of the system is minimized, that is