Journal of Automation, Mobile Robotics & Intelligent Systems
pISSN 1897-8649 (PRINT) / eISSN 2080-2145 (ONLINE)
VOLUME 5, N° 3, 2011
www.jamris.org
Publisher: Industrial Research Institute for Automation and Measurements PIAP



JOURNAL of AUTOMATION, MOBILE ROBOTICS & INTELLIGENT SYSTEMS

Editor-in-Chief:
Janusz Kacprzyk (Systems Research Institute, Polish Academy of Sciences; PIAP, Poland)

Co-Editors:
Dimitar Filev (Research & Advanced Engineering, Ford Motor Company, USA)
Kaoru Hirota (Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, Japan)
Witold Pedrycz (ECERF, University of Alberta, Canada)
Roman Szewczyk (PIAP, Warsaw University of Technology, Poland)

Executive Editor:
Anna Ładan aladan@piap.pl (PIAP, Poland)

Associate Editors:
Mariusz Andrzejczak (PIAP, Poland)
Katarzyna Rzeplińska-Rykała (PIAP, Poland)

Webmaster:
Tomasz Kobyliński tkobylinski@piap.pl

Editorial Office: Industrial Research Institute for Automation and Measurements PIAP Al. Jerozolimskie 202, 02-486 Warsaw, POLAND Tel. +48-22-8740109, office@jamris.org

Copyright and reprint permissions: Executive Editor

Editorial Board: Chairman: Janusz Kacprzyk (Polish Academy of Sciences; PIAP, Poland) Plamen Angelov (Lancaster University, UK) Zenn Bien (Korea Advanced Institute of Science and Technology, Korea) Adam Borkowski (Polish Academy of Sciences, Poland) Wolfgang Borutzky (Fachhochschule Bonn-Rhein-Sieg, Germany) Oscar Castillo (Tijuana Institute of Technology, Mexico) Chin Chen Chang (Feng Chia University, Taiwan) Jorge Manuel Miranda Dias (University of Coimbra, Portugal) Bogdan Gabryś (Bournemouth University, UK) Jan Jabłkowski (PIAP, Poland) Stanisław Kaczanowski (PIAP, Poland) Tadeusz Kaczorek (Warsaw University of Technology, Poland) Marian P. Kaźmierkowski (Warsaw University of Technology, Poland) Józef Korbicz (University of Zielona Góra, Poland) Krzysztof Kozłowski (Poznań University of Technology, Poland) Eckart Kramer (Fachhochschule Eberswalde, Germany) Andrew Kusiak (University of Iowa, USA) Mark Last (Ben–Gurion University of the Negev, Israel) Anthony Maciejewski (Colorado State University, USA) Krzysztof Malinowski (Warsaw University of Technology, Poland)

Andrzej Masłowski (PIAP, Poland) Tadeusz Missala (PIAP, Poland) Fazel Naghdy (University of Wollongong, Australia) Zbigniew Nahorski (Polish Academy of Science, Poland) Antoni Niederliński (Silesian University of Technology, Poland) Witold Pedrycz (University of Alberta, Canada) Duc Truong Pham (Cardiff University, UK) Lech Polkowski (Polish-Japanese Institute of Information Technology, Poland) Alain Pruski (University of Metz, France) Leszek Rutkowski (Częstochowa University of Technology, Poland) Klaus Schilling (Julius-Maximilians-University Würzburg, Germany) Ryszard Tadeusiewicz (AGH University of Science and Technology in Kraków, Poland)

Stanisław Tarasiewicz (University of Laval, Canada) Piotr Tatjewski (Warsaw University of Technology, Poland) Władysław Torbicz (Polish Academy of Sciences, Poland) Leszek Trybus (Rzeszów University of Technology, Poland) René Wamkeue (University of Québec, Canada) Janusz Zalewski (Florida Gulf Coast University, USA) Marek Zaremba (University of Québec, Canada) Teresa Zielińska (Warsaw University of Technology, Poland)

Publisher: Industrial Research Institute for Automation and Measurements PIAP

If in doubt about the proper edition of contributions, please contact the Executive Editor. Articles are reviewed, excluding advertisements and descriptions of products. The Editor does not take responsibility for the contents of advertisements, inserts, etc. The Editor reserves the right to make relevant revisions, abbreviations and adjustments to the articles.

All rights reserved ©



JOURNAL of AUTOMATION, MOBILE ROBOTICS & INTELLIGENT SYSTEMS VOLUME 5, N° 3, 2011

Special Issue on: Design, modelling and control of robots Editors: Cezary Zieliński, Krzysztof Tchoń

CONTENTS

3   Editorial
    C. Zieliński, K. Tchoń

9   Functional characteristics of a new special gripper with flexible fingers
    K. Mianowski

14  The servo drive with friction wheels
    Ł. Frącczak, L. Podsędkowski, M. Zawierucha

21  Mathematical model of a multi-link surgical manipulator joint with an antiseptic coating
    R. Leniowski, L. Leniowska

27  Modelling and experimental research of nonholonomic ball gear
    B. Krysiak, D. Pazderski, K. Kozłowski

33  Motion planning of the underactuated manipulators with friction in constrained state space
    A. Ratajczak, M. Janiak

41  Different kinematic controllers stabilizing nonholonomic mobile manipulators of (nh, nh) type about desired configuration
    A. Mazur, J. Płaskonka

49  Motion planning of wheeled mobile robots subject to slipping
    K. Zadarnowska, A. Oleksy

59  Real-time obstacle avoidance using harmonic potential functions
    P. Szulczyński, D. Pazderski, K. Kozłowski

67  Terrain map building for a walking robot equipped with an active 2D range sensor
    P. Łabęcki, D. Rosiński, P. Skrzypczyński

79  A mobile robot navigation with use of CUDA parallel architecture
    B. Siemiątkowska, J. Szklarski, M. Gnatowski, A. Borkowski, P. Węclewski


SPECIAL ISSUE
Design, Modelling and Control of Robots
Guest Editors: Cezary Zieliński and Krzysztof Tchoń




Editorial

The idea of publishing an issue of the Journal of Automation, Mobile Robotics and Intelligent Systems devoted to the design, modelling and control of robots emerged in the discussions during the 11th National Conference on Robotics, organized by the Department of Fundamental Cybernetics and Robotics, Institute of Computer Engineering, Control and Robotics, Wrocław University of Technology, in Karpacz, Poland, from 9 to 12 September 2010. This conference showed that the Polish robotics community has produced significant results in this field, and thus the Program Committee decided that it would be beneficial if those achievements were put together within one publication, so that they could complement each other. Hence a selected group of authors working on design, modelling and control methods in robotics was invited to submit papers describing their research results. It should be underscored that the papers contained in this issue of JAMRIS are by no means a simple translation into English of the conference papers. Here the reported results have been described more comprehensively and in a wider context, and subjected to the regular JAMRIS review procedure. Gratitude should be expressed to all of the reviewers, who provided in-depth comments enabling many clarifications and an overall improvement of the contents of the papers.

Ten papers compose this issue. These papers have common features that enable us to treat them as a single, unified thematic group. All of them pertain to certain aspects of designing a robotic system; hence, if our purpose is the creation of a complex robotic system, we need to take into account all of those aspects. The first and foremost is the idea standing behind the mechanical structure of the system. Then a certain model of that system has to be established. The purpose of that is the elaboration of the control algorithm and the computer simulation of its performance, unveiling the properties of the proposed system. Next, the motion planning has to be taken into account. Finally, the experiments proving the correctness of the proposed approach have to be executed. The collected papers address all of those subjects. Dexterous grasping of both rigid and floppy objects is at the center of attention of robotics researchers. Here, the research is concentrated both on novel gripper designs and on the control of those devices, supplemented by grasp planning. The design of a multi-fingered hand is addressed by the paper entitled Functional characteristics of a new special gripper with flexible fingers, authored by Krzysztof Mianowski. A major problem in grasping is establishing contact between the fingers of the gripper and the acquired object. The location and the dimensions of the latter are usually known only approximately, thus some form of compliance has to be included in the mechanism. A control algorithm itself is not sufficient if fast motions are to be attained. To some extent, the design of the mechanism must be such that compliance will be due to its elasticity; however, the precision of positioning should not be sacrificed. A novel passive compliance mechanism that provides an estimate of the applied force and compliance is presented in this paper. Motion of manipulators in constrained spaces requires miniaturization of their drives. The paper entitled The servo drive with friction wheels is the work of Łukasz Frącczak, Leszek Podsędkowski and Marcin Zawierucha that focuses on motion transmission utilising friction in laparoscopic devices. This solution leads to the miniaturization of the servo drive, which is of utmost importance in the case of laparoscopic instruments operating in very constrained spaces. The paper presents the general idea of friction motion transmission, the proposed design of the mechanism, including proprioceptive sensing, and the control method.




Operation within the human body requires the manipulator to be coated with an antiseptic skin. This type of coating adversely influences the motion capabilities of the manipulator. At the focus of attention of the paper entitled Mathematical model of a multi-link surgical manipulator joint with an antiseptic coating, by Ryszard Leniowski and Lucyna Leniowska, are the model of the manipulator joint and the practical aspects of its control, taking into account both the extremely high gear ratio used and the friction and damping introduced by the antiseptic coating. The model of the coating is of paramount importance from the point of view of the control algorithm. Non-holonomic devices usually employ a reduced number of actuators in comparison with the number of dimensions of the space that they act in. Non-holonomic mobile platforms have been studied in depth, but nonholonomic manipulators are a novelty. A non-holonomic gear utilising friction transmission is the topic of the paper entitled Modelling and experimental research of nonholonomic ball gear, contributed by Bartłomiej Krysiak, Dariusz Pazderski and Krzysztof Kozłowski. Modelling and experimental results provide information on the maximum driving torque and gear efficiency that can be provided by such a device, as well as on the slip that can occur due to the application of excessive torques. The use of passivity in robot control is perceived as an advantage that can lead to a considerable reduction of energy requirements. This excites the interest in all aspects of the utilisation of passive joints. Motion planning taking into account friction in the passive joints of a manipulator is the subject of the paper entitled Motion planning of the underactuated manipulators with friction in constrained state space, produced by Adam Ratajczak and Mariusz Janiak. The imbalanced Jacobian algorithm, derived from the endogenous configuration space approach, is utilised in the motion planning task for an underactuated manipulator with passive joints, both with friction and frictionless. The simulation experiments demonstrate the effectiveness of the proposed approach. Control of mobile platforms and manipulators treated separately has received considerable attention. Control of those devices in conjunction is not that well studied, especially if both subsystems are nonholonomic. Superiority of kinematic feedback control over kinematic open-loop control of a non-holonomic manipulator mounted on top of a non-holonomic platform has been proved in the paper entitled Different kinematic controllers stabilizing nonholonomic mobile manipulators of (nh, nh) type about desired configuration, provided by Alicja Mazur and Joanna Płaskonka. Simulation experiments have underscored the advantages of the proposed feedback control method. The non-holonomic platform and the non-holonomic manipulator are controlled by separate kinematic controllers. The choice of the kinematic controller for one subsystem affects the behavior of the other subsystem due to high dynamic interactions. Nevertheless, the kinematic feedback is able to cope with such a disturbance. The majority of papers devoted to the control of non-holonomic mobile platforms assumes that the constraints imposed on the motion of their wheels are not violated.
However, the paper entitled Motion planning of wheeled mobile robots subject to slipping, created by Katarzyna Zadarnowska and Adam Oleksy, assumes a more realistic stance and takes into account that slip is a natural phenomenon in the motion of those devices. Slip is modelled as a small perturbation of the ideal constraints. The motion planning algorithm, derived from the endogenous configuration space approach, uses the linearization of the control system representation of the system's kinematics and dynamics along the desired trajectory. Simulation experiments demonstrate that the proposed approach is viable. The paper entitled Real-time obstacle avoidance using harmonic potential functions, authored by Paweł Szulczyński, Dariusz Pazderski and Krzysztof Kozłowski, deals with mobile platform motion planning by means of harmonic functions. The method uses an analytical description of the solution to the Laplace equation. The algorithm takes into account elliptically shaped static obstacles. Simulations affirm that the method ensures collision avoidance and convergence to the goal. The superiority of the proposed method in relation to other potential field methods is due to the reduced curvature of the resulting trajectory.




The above mentioned path planning algorithms require knowledge about the terrain that is to be traversed, i.e., its map is needed. The paper entitled Terrain map building for a walking robot equipped with an active 2D range sensor, written by Przemysław Łabęcki, Dawid Rosiński and Piotr Skrzypczyński, addresses this subject. Perception of rough terrain and its mapping is at the focus of attention of this work. Two walking robots equipped with inexpensive optical range sensors providing 2D data are the devices utilised in the experiments: a structured light sensor and a laser scanner are used. Although the laser scanner is superior in delivering accurate measurements, the structured light sensor is much cheaper. The discussed online mapping algorithm includes a provision for removing map artifacts that result from qualitative errors in range measurements. If a robot is to act in a changing environment, it should be capable both of quick perception and of fast planning of its activities. The time of processing the sensor data is crucial. The paper entitled A mobile robot navigation with use of CUDA parallel architecture, written by Barbara Siemiątkowska, Jacek Szklarski, Michał Gnatowski, Adam Borkowski and Piotr Węclewski, delves into the problem of parallelization of 3D laser scanner image processing for the purpose of map building. The proposed path planning method utilises the structural similarity between a cellular neural network and a graphics processing unit in order to find a collision-free path within the map. The experiments confirmed that, due to the employed parallelization, the path planning problem can be solved quickly enough for the mobile robot controller to use it in real time. All of the enumerated topics are at the forefront of the currently ongoing research into robot design, modelling and control. Each of the papers gives a valuable insight into a particular problem, providing its formulation and deriving a solution. This selection of papers reveals the wide scope and diversity of contemporary robotics.

Guest Editors:

Cezary Zieliński
Warsaw University of Technology
c.zielinski@elka.pw.edu.pl

Krzysztof Tchoń
Wrocław University of Technology
tchon@ict.pwr.wroc.pl





FUNCTIONAL CHARACTERISTICS OF A NEW SPECIAL GRIPPER WITH FLEXIBLE FINGERS

Krzysztof Mianowski

Abstract: Functional characteristics of an originally proposed design of a multifingered gripper with flexible fingers are presented in the paper. The design work concentrated on ensuring that the hand can make contact with a variety of objects of different shapes, made of flexible materials of different properties, as well as with fragile objects requiring particular caution when manipulated. Additionally, it was assumed that the gripper should ensure a satisfactory reliability level, the required high functionality and easy maintenance. The gripper design ensures beneficial variability of the characteristics of the forces acting upon the object irrespective of the opening distance of the fingers, i.e., of the manipulated object size.

Keywords: multifingered gripper, flexible fingers, service robotics, force-feedback manipulation.

1. Introduction
Modern robotics takes up new challenges to ensure that robots are capable of executing more and more difficult tasks and that manipulators can perform new prehension functions. Nowadays, service robotics is one of the most interesting fields of robot applications, involving the desire to substitute for a human being in uncomplicated manipulation processes that require precise object identification, gentle grasping and precise object handling for manipulation. There arises the need for the application of special grippers provided with sensing capability. To ensure that the gripper can execute manipulation tasks typically performed by a human being, it seems reasonable to use one with an anthropomorphic shape. A human hand has 5 fingers and up to 23 degrees of freedom, while the whole hand with the arm and forearm has 30 degrees of freedom [5]. According to the available literature, the minimal number of fingers necessary for successful performance of prehension, i.e., the process consisting in grasping the object with simultaneous leaning against the palm surface, is equal to three (four-point grasp) [1], [5]. Each human finger has 3 to 4 degrees of freedom, therefore requiring proper control. Additionally, the control of the fingers should ensure their proper cooperation, corresponding to the object shape. Therefore, on the one hand a solution to the grasping problem should reveal an adequate level of generality, and on the other hand, a general approach should be realisable using a number of specific, relatively simple methods. Preliminary analysis of the grasping process shows that the application of a three-finger grasp is most profitable. When manipulating different objects in the course of a service task, a human being uses two arms, and his/her hands are most frequently engaged.

For such a task, typical of human work, to be executed by a robot, appropriate adaptation of the manipulator (robot) kinematic structure, of the grasping devices, and of the working environment is required, in a way ensuring that the robot not only can replace a human being in specified manual operations but also can execute the task faster, with better accuracy and more precision. Therefore, the system should reveal a high level of adaptability and be provided with a number of sensing devices of different types, allowing for gathering from the environment the information that is necessary for performing the manipulation. Modern 3D CAD systems allow for a general approach to the design process, including the determination of basic characteristics of the designed systems and the integration of subsystems. Therefore, high qualities of the design (like high functionality and aesthetics) can be achieved as early as at the design stage. At the initial stage of the development of the multifingered gripper design, it was assumed that the gripper would consist of a thumb (with an additional revolution about the longitudinal axis in its starting position) and two fingers with redundant DOFs pressing against the object. The fingers were actuated by means of flexible elements that ensured automatic control of the pressure exerted by particular phalanges upon the object. However, the initial tests proved that the above hand-like design with a smaller number of fingers substantially reduces the grasping capabilities of the gripper. It turned out to be more profitable to apply one supporting finger and two thumb-type fingers having the possibility of revolution about the finger longitudinal axis. The driving systems were situated in the gripper body, while the phalanges were driven through connecting rods provided with additional systems of elastic clutches. By adding flexibility to the fingers, we substantially improve the gripper functionality, raising its level of anthropomorphism. The fingers will be covered with a special flexible glove with embedded contact pressure force sensors and other measurement gauges. The gripper was designed for grasping an object by means of hooking, clamping, grasping and supporting, stabilized by the other hand/gripper (as in typical two-handed manipulation). Additionally, the design was supplied with the following means:
- force sensors, pressure sensors, contact sensors,
- a six-axis wrist force/moment sensor for identification of body forces,
- image processing methods for identification of the geometry of the manipulated object, to achieve optimal prehension and grasping processes as well as optimal control of the manipulation task execution.




To ensure the highest performance of the system, in the developed design it was assumed that all mechanical, electric and electronic elements (the necessary driving systems, gear and transmission systems, joints, sensors, etc.) will be integrated with the mechanical system of the gripper. The mechanical part was designed using the ProEngineer system, and then the initial simulations were performed using the ADAMS package, allowing for estimation of the basic characteristics and their optimisation according to the assumed criteria. A general scheme of the gripper with the properties described above, performing a grasping prehension, is shown in Fig. 1.

Fig. 1. Basic scheme of the gripper performing a grasping prehension.

To simplify the control process when manipulating the object, it was assumed that the performance of one finger is subject to kinematic control and that it plays the leading role, while the other two fingers are situated on the opposite side of the manipulated object and stabilize the object through flexible shape adaptation to the remaining surfaces of the object. The finger presented in Fig. 1 on the left side of the object plays the superior role. It is a simple plane mechanism (2D) with 3 DOFs and it consists of three phalanges. In the course of the grasping process, the finger has to execute the task of supporting the gripped object at three points situated in the plane of its motion, grasping it at the same time in a way preventing it from slipping out. Hereinafter, this finger will be considered as the so-called leading one. The two other fingers are similar mechanisms, but they have an additional degree of freedom, i.e., rotation (about the vertical axes in Fig. 1). They are also capable of grasping the object, but in contrast to the leading finger their working planes may be turned and adapted for cooperation with the object, to allow for grasping it from three directions. In a general case, the grasping task formulated in the above way should ensure that each phalange contacts the object, which would require a separate drive for each phalange, as well as measurement of the contact force acting upon the object at many points. Since it was planned that the driving systems, provided with additional sensors to measure contact forces, would be situated in the gripper base, a design of a single finger following the above concept with transmission systems might turn out to be extremely complex.


Additionally, the determination of the current configuration of the object-gripper system might be difficult; namely, for such a system with rigid links the rigid problem of dynamics (kinetostatics) is statically indeterminate. When attempting to control such a gripper, one may be confronted with obstacles resulting in damage to the manipulated object or in gripper failure.

2. The assumed structural solutions of a gripper with flexible fingers
From both the design and kinematics viewpoints, a gripper of the grasping type is a complex mechanism comprising a few fingers (small manipulators), whose smart cooperation is necessary for grasping and holding a manipulated object. The gripper task, after ensuring that the arm configuration is proper, consists in safe prehension and firm grasping of the object along the whole trajectory, and then precise and safe releasing of the object at the target. A proper grasping of the object, which frequently depends on many factors [4], is of crucial importance in the process. A precise control of the cooperation between the fingers requires the prior determination of accurate kinematic and dynamic models of each finger and updated information on the parameters of particular parts of the kinematic chain of the gripper. In particular, the measurements of forces are affected by non-linear friction components, mostly of undetermined nature, while the force measurements performed by means of surface sensors suffer from large errors due to noise and other influences in the signal processing channels. Therefore, in the design process special attention was focused on choosing a mechanical system structure that allowed for minimisation of the measurement errors arising from its work. In the mechanical part of the gripper each given kinematic task may be executed by mechanisms of different types, like cams, levers, rods, toothed mechanisms or mixed (combined) ones [5], [6]. Therefore some types of simple mechanisms were chosen that allow for minimisation of both mechanical noise and mechanical hysteresis. In the design process of the gripper, the following phases were considered:
- choice of the mechanism type;
- selection of a structure within that type;
- determination of geometrical parameters.
Special attention was focused on minimisation of the undesirable effects mentioned above. That required ball bearings to be introduced into the design, which additionally complicated the solution. Since a gripper, especially one of the grasping type, is a very complicated mechanism, for simplification purposes it was assumed that the leading finger would be the simplest possible open serial chain, ensuring the possibility of reducing the number of drives to the minimum. The link mechanism with three degrees of freedom turned out to be the most profitable. The initial concept of a finger of the multifingered gripper was developed on the assumption that the gripper would be used to grasp cylindrical objects of sizes comparable to those of the objects manipulated by a human of average capabilities. The average mass of the object was assumed as 0.5 kg, with the size fitting easily into a sphere (cylinder) of up to 80 mm in diameter.



Fig. 2a presents the basic kinematical scheme of the "leading" finger shown in Fig. 1.

Fig. 2. Kinematical scheme of the "leading" finger.

To simplify the control process and reduce the number of drives, a parallelogram mechanism was introduced into the kinematical system of the gripper, ensuring that the orientation of the second (middle) phalange relative to the gripper body remains the same (see Fig. 2b). The connecting rods of the drives are equipped with springs assuring elasticity of the structure; this was realized by special clutches with springs. By measuring the elastic deformations of the springs we can measure the internal forces of the connecting rods. The parallelogram reduces the number of finger drives by one, improving the possibility of loading the second phalange by adapting the load-carrying characteristics of the finger to the object size. In the same way it reduces the total necessary driving power and improves the control process via decoupling of the model of the dynamical behaviour of the finger. Additionally, it raises the stiffness and load-carrying capacity of the finger and allows for reducing the total mass of the gripper, fostering as well the introduction of flexibility that eliminates the effects of statical indeterminacy and additionally allows for direct measurement of the forces acting upon a finger. In a similar way, the structure of the rotational finger was developed. The simplified structure of this finger is shown in Fig. 3; its transmission system is a little more complicated.

Fig. 3. Kinematic structure of the rotational finger.

According to the above three-finger gripper concept, the design of the "leading" finger was developed (see Fig. 4). The finger comprises 3 phalanges: the first one has the form of a parallelogram, the orientation of the second one remains constant, and the third one is provided with a separate drive and connected in series with the second phalange. The orientation of the second phalange remains the same and it moves according to the end of the first phalange; both the first and the second phalange are driven by the same actuator. The drive of the third phalange is transmitted through a series connection of two parallelogram mechanisms situated inside the first and second phalanges, respectively. In the proposed design MAXON miniature DC electric motors were applied, supplied with planetary gears and additional external sensors of rotation.

Fig. 4. 3D "leading" finger design together with drive systems, elastic clutches and measurement sensors (transducers) developed within the ProEngineer software package.

Between each motor and the corresponding drive transmission system an elastic clutch has been introduced, ensuring on the one hand compliance of the crocodile-clip type, and on the other hand direct measurement of the forces acting upon the manipulated object. The angle of elastic deflection of the coupling corresponds to the magnitude of the external loading moment acting upon the jaw. A main advantage of the proposed design consists in the reduction of both the number of variables determining the object size and the number of drives. To reduce the mass and size of the finger and to ensure that its functionality is high enough, it was assumed that the drive systems would be mounted in the gripper body. That required the application of transmission mechanisms. To solve the problem, plane lever mechanisms were applied, owing to which the crank motion was transmitted to the corresponding phalange. The design was developed using the ProEngineer software package v. 4.0. It should be emphasized that, since the parallelogram mechanisms were applied, in the proposed design the mechanical moment is measured as the relative angle of the elastic clutch, which is in fact directly the moment transmitted from the gripper jaw. The other fingers were designed in a similar way, supplied with an additional revolution; however, in their mechanisms of revolution no elastic clutches were introduced. This means that there is no elasticity there and it is impossible to measure the external torque in the direction of this rotation.

3. Gripper prototype
According to the developed design, the mechanisms of both the leading finger and the revolute, thumb-type one, supplied with their drives, were built. Elements of the phalanges were made of typical duralumin. In particular, the phalanges of the fingers and the structural elements of both the driving system and the transmission system were equipped with miniature ball bearings. The gripper driving system comprised miniature motors supplied with toothed gears, produced by the MAXON Co. A very high efficiency of drive transmission was ensured both from the motors to the jaws and back from the external torque produced at each phalange, which is measured by the elastic deformations of the special clutches; thus the system offers very good backdriveability.
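Since the clutch deflection is proportional to the transmitted moment, the torque read-out reduces, for a linear torsional spring, to a single multiplication. The short Python sketch below illustrates this relation; the stiffness value and the deflection reading are hypothetical placeholders, not parameters of the actual clutch.

```python
import math

# Hypothetical torsional stiffness of the elastic clutch [N*m/rad];
# in the real gripper this value would come from spring design or calibration.
CLUTCH_STIFFNESS = 2.5

def jaw_moment_from_deflection(deflection_deg: float,
                               stiffness: float = CLUTCH_STIFFNESS) -> float:
    """Estimate the external moment acting on the jaw from the measured
    elastic deflection of the clutch, assuming a linear torsional spring."""
    return stiffness * math.radians(deflection_deg)

# Example: a clutch deflection of 4 degrees
print(f"{jaw_moment_from_deflection(4.0):.3f} N*m")
```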


Fig. 5 shows an assembled prototype of the gripper to be tested on the experimental stand.

Fig. 5. Prototype of the developed gripper.

4. Functional characteristics of the gripper
The functional characteristics of the gripper describe the sizes and masses of the objects which the gripper can grasp and manipulate. Those characteristics are developed taking into consideration the driving system and the assumed size limits of the manipulated objects. In the developed gripper design those characteristics depend on the corresponding characteristics of the particular fingers. In the operation of a single finger the most important role is played by the second phalange, which maintains a constant orientation relative to the base. During the revolution of the first phalange (parallelogram), that phalange translates at a constant orientation outwards/inwards, opening/closing the gripper. It exerts pressure upon the object in the direction perpendicular to the internal surface of the phalange, while the clamping force is produced by the moment from the first phalange motor. The magnitude of that moment is limited by the maximum moment of the drive unit and, due to the elastic deformation of the clutch, assumes values between zero and the maximal value of the motor moment, depending on the phalange loading and the current value of the control signal.


The magnitude of the force generated by the driving system depends on the angle of the phalange opening. Fig. 6a presents the characteristics of the second phalange extension versus the angle of revolution of the first phalange. It can be clearly seen that the phalange opens the gripper to a size of about 80 mm, which, during operation of the opposing fingers, ensures that the gripper can grasp objects of sizes corresponding to a spherical object of about 150 mm in diameter. Fig. 6b and Fig. 6c present the characteristics of the clamping forces of the gripper fingers in the course of manipulation. The maximal value of the force varies in an almost parabolic way (more precisely, it is inverse-cosine) depending on the object size, and for the maximal value of the finger extension it reaches the magnitude of 14 N. In the case when the phalanges are covered with a rubber glove, whose coefficient of friction is about 0.35, the value of the additional load carried due to friction reaches about 5 N. It is worthwhile to note that the characteristics vary depending on the gripper opening and are approximately coincident with the characteristics of mass versus the object size. It is also worthwhile to note that some manipulated objects of complicated shapes can be grasped in a proper way despite their relatively too high mass (weight). To determine the external load acting upon the robot end-effector, an additional sensor of six force/moment components has been mounted between the arm end flange and the gripper. The developed gripper was mounted on the Irb-6 robot located at the Laboratory of Robotics at the Institute of Automatics of the Warsaw University of Technology. Initial tests have proved that the design assumptions were correct and that the developed gripper works properly. Now the gripper is being used in the project performed under the Polish Ministry grant.
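As a quick numerical check of the figures quoted above, the following sketch computes the additional friction-borne load from the maximum clamping force and the glove friction coefficient given in the text (a simple Coulomb friction model is assumed).

```python
MU_RUBBER = 0.35     # friction coefficient of the rubber glove (value from the text)
F_CLAMP_MAX = 14.0   # maximum clamping force at full finger extension [N]

def friction_load(clamp_force: float, mu: float = MU_RUBBER) -> float:
    """Additional load carried by friction at the phalange-object contact,
    using the Coulomb model F_friction = mu * F_normal."""
    return mu * clamp_force

print(f"{friction_load(F_CLAMP_MAX):.1f} N")  # about 5 N, as quoted above
```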

Fig. 6. Functional characteristics of the gripper: a) extension of the second phalange versus the angle of revolution of the first phalange, b) maximum clamping force of the second phalange versus the angle of revolution of the first phalange, c) maximum clamping force of the first phalange versus the magnitude of the gripper opening.

5. Final remarks
The design was made using the software package ProEngineer v. 4.0. It should be emphasized that a 3D system was applied, which allowed various activities to be performed at the same time, namely developing the design concept and making the analyses and calculations within the scope of material strength and dynamics, as they became necessary. First of all, it was possible to reach an optimal arrangement of particular elements, as a result of the minimisation of the space occupied by the considered assembly, as early as at the design stage.


In particular, the driving motors with gears and their equipment, like limit and synchronisation microswitches, wiring and electric connectors, were integrated with the gripper body in an aesthetic way, resulting in a compact mechatronic design. Animation of the cooperation between particular sub-assemblies allowed for making optimal use of the space available for the mechanisms, as well as for the integration of all parts and placing them in one aesthetic casing.

ACKNOWLEDGMENTS
The presented research is part of the project carried out under grant No. 514128733 of the Ministry of Science and Higher Education, "Problems of active sensing, interpretation of sensor gathered information and manipulation in service robots", at the Institute of Automatics, Warsaw University of Technology.

AUTHOR
Krzysztof Mianowski - Warsaw University of Technology, Institute of Aeronautics and Applied Mechanics, 24 Nowowiejska Str., 00-665 Warsaw. E-mail: kmianowski@meil.pw.edu.pl.

References
[1] Berns K., Asfour T., Dillmann R., "ARMAR - An Anthropomorphic Arm for Humanoid Robot". In: Proc. Int. Conf. on Robotics and Automation, Ann Arbor, Mich., USA, 1999, pp. 702-707.
[2] Kato I., Development of WASEDA Robot. The Study of Biomechanisms at Kato Laboratory, 2nd edition, 1987.
[3] Mianowski K., Nazarczuk K., Słomkowski T., "Dynamic model for the selection of servomotors in serial-parallel manipulator". Int. Conf. CIM, Zeszyty Naukowe Polit. Śląskiej, seria Mechanika, z. 108, Gliwice, 1992, pp. 261-268.
[4] Mianowski K., "Analiza właściwości chwytaka specjalnego przeznaczonego dla robota usługowego". Mat. Kraj. Konf. TMM, Zielona Góra, 2006, pp. 185-190. (in Polish)
[5] Morecki A., Basics of Robotics: Theory and Components of Robots and Manipulators. Springer, Wien - New York, 1999.
[6] Olędzki A., Basics of TMM. WNT, Warsaw, 1987. (in Polish)
[7] Szynkiewicz W., "Planning of two-handed manipulation". In: Postępy robotyki, ed. K. Tchoń, WKŁ, Warszawa, 2005, pp. 187-196. (in Polish)
[8] Zieliński C., Szynkiewicz W., Mianowski K., Nazarczuk K., "Mechatronic design of open-structure multi-robot controllers". Mechatronics, 11(8), 2001, pp. 987-1000.
[9] Zieliński C., Szynkiewicz W., Mianowski K., Rydzewski A., Winiarski T., "Efektory robota usługowego do dwuręcznej manipulacji z czuciem". In: Proc. of KKR 2006. (in Polish)


THE SERVO DRIVE WITH FRICTION WHEELS

Łukasz Frącczak, Leszek Robert Podsędkowski, Marcin Zawierucha

Abstract: This paper presents a servo drive with force transmission by friction in two DOF. The first section describes some of the problems encountered with surgery robots. The second section presents the operating principle of the friction drive, which is a solution to the mentioned problems. The next sections present the motion control system and the sensors used therein, and the final section deals with the mechanical solution of the servo drive.

Keywords: servo drive, friction wheels, surgery robot.

1. Introduction
Recently, technological development in the field of laparoscopic surgery has become an important issue, owing to increasing safety as well as patient satisfaction and comfort. For this reason surgery robots have been introduced into clinical practice. They made it possible to avoid sternotomy, which shortened the recovery time after the performed surgery and minimized the risk of post-operative complications. At present the surgery robots "Zeus" [1] and "Da Vinci" [2] are used in clinical practice; however, now only "Da Vinci" is still produced. In Poland, work has been undertaken to design another surgery robot, called "Robin Heart" [3,4,5]. The work was funded by the KBN and the Cardiac Surgery Development Foundation in Zabrze. Its prototype was designed and made at the Cardiac Surgery Development Foundation in Zabrze and at the Institute of Machine Tools and Production Engineering at the Technical University of Łódź. At present there are some drawbacks in the use of surgery robots. One of the most important drawbacks during a cardiac operation is the collision of the robot's arms. This is especially disturbing when three or four robot arms are used. The working spaces of these arms interfere with one another. If the surgeon does not pay enough attention to the arm positions, he often leads them into collision. The working space of the robot's arm depends on the kinematics, which is responsible for the spherical motion of the laparoscope, and on the linear drive, which is responsible for the motion in and out of the patient's body. The robot's arm responsible for the spherical motion of the laparoscope is optimally designed to move the laparoscope with sufficient accuracy. Therefore, the only solution left is the replacement of the large linear drive with a smaller one. For this purpose a linear drive was developed which is much smaller than the recently used ones. This linear drive also allows rotation around the laparoscope axis. A different approach to realising this movement was used


in the design of the servo drive. The solution is based on the use of the laparoscope camera sleeve, or a laparoscope tool sleeve (hereinafter the laparoscope slideway), as a slideway. Then the only possible solution is a drive which uses friction, because the laparoscope slideway is smooth. Currently, several mechanisms described in the literature use a friction drive with two DOF. The most common use of the friction drive is to move the driven shaft linearly and around the shaft axis. Such kinematics is used in a force simulation mechanism [6], a gimbal mechanism [7,8] and the ball screw spline model (BNS) [9]. In the BNS, to ensure accuracy, the shaft has an incision, which is unacceptable in the laparoscope slideway. Interesting solutions are the rolling ring drives [10] and the Rohlix (Zero-Max) drive [11]; however, these drives convert only rotary motion into linear motion. In the literature the friction drive is also applied in drives with more than two DOF. One example is the mecanum wheel robot (MWR) [12], which can move in any direction on the plane and is therefore called a holonomic mobile robot.

2. The operating principle
The operating principle of the mechanism is based on the kinematic and force interaction between the laparoscope slideway and the wheels positioned obliquely with respect to the laparoscope slideway axis (Fig. 1).

Fig. 1. Idea of the friction drive mechanism.

The drive is transferred through the modules, which are coaxial with the laparoscope slideway and rotate around its axis. In these modules there are friction wheels which transfer the drive to the laparoscope slideway. The velocity distribution at the points of contact of the friction wheels with the laparoscope slideway is shown in Fig. 2. This figure is based on the assumption that there is point contact between the wheels and the slideway and that there is no slippage. The sought velocities are V_o and V_t. It has to be assumed that V_B1 = V_B2. Then, after resolving the velocities at the contact points along each axis, we get the following equations:


$V_o = V_{u1} - V_{w1}\cos\alpha_1 = V_{u2} - V_{w2}\cos\alpha_2$   (1)

$V_t = V_{w1}\sin\alpha_1 = V_{w2}\sin\alpha_2$   (2)

Taking into consideration the kinematic and geometric dependences, the above equations can be rewritten as follows:

$\omega_0 r_t = \omega_1 r_t - \omega_{w1} r_w \cos\alpha_1 = \omega_2 r_t - \omega_{w2} r_w \cos\alpha_2$   (3)

$V_t = \omega_{w1} r_w \sin\alpha_1 = \omega_{w2} r_w \sin\alpha_2$   (4)

where: $\omega_1$, $\omega_2$, $\omega_{w1}$, $\omega_{w2}$ - rotational speeds of module 1 and module 2 and of the friction wheels in modules 1 and 2, respectively; $V_{u1}$, $V_{u2}$ - drift speeds at modules 1 and 2; $V_{w1}$, $V_{w2}$ - velocities of the points on the friction wheel surfaces resulting from the wheel rotation in modules 1 and 2; $V_{B1}$, $V_{B2}$ - velocities being the sums of $V_{ui}$ and $V_{wi}$ at modules 1 and 2; $V_o$ - circumferential velocity of the laparoscope slideway surface resulting from its rotation; $V_t$ - linear speed of the laparoscope slideway; $r_t$ - radius of the laparoscope slideway; $r_w$ - radius of the friction wheels.

Determining $\omega_{w1}$ from (4) and substituting it into (3) gives:

$\omega_{w1} = \dfrac{r_t(\omega_1 - \omega_2)}{r_w \sin\alpha_1(\mathrm{ctg}\,\alpha_1 - \mathrm{ctg}\,\alpha_2)}$   (5)

Substituting $\omega_{w1}$ into (4) yields $V_t$:

$V_t = \dfrac{r_t(\omega_1 - \omega_2)}{\mathrm{ctg}\,\alpha_1 - \mathrm{ctg}\,\alpha_2}$   (6)

and substituting $\omega_{w1}$ into (3) yields $\omega_t$:

$\omega_t = \dfrac{\omega_2\,\mathrm{ctg}\,\alpha_1 - \omega_1\,\mathrm{ctg}\,\alpha_2}{\mathrm{ctg}\,\alpha_1 - \mathrm{ctg}\,\alpha_2}$   (7)

Assuming that $\alpha_1 = -\alpha_2 = \alpha$, the above equations give:

$V_t = \dfrac{r_t(\omega_1 - \omega_2)}{2\,\mathrm{ctg}\,\alpha}$   (8)

$\omega_t = \dfrac{\omega_2 + \omega_1}{2}$   (9)

The consequence of these equations is the independent rotary and linear control of the laparoscope slideway. The most characteristic cases of motion control are those in which the movement is in one direction only. To rotate the laparoscope slideway, the motor shafts must rotate in the same direction with the same rotary speed: from equation (8) there is then no linear motion, and from equation (9) $\omega_t$ is different from zero. In turn, to move the laparoscope slideway linearly, the motor shafts must rotate in opposite directions with the same rotary speed: then, from equation (9), there is no rotary motion, and from equation (8) $V_t$ is different from zero. Combining different rotary speeds and directions of the motor shafts results in simultaneous movement of the laparoscope slideway in both directions (rotary and linear).
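To make the kinematic relations concrete, the sketch below evaluates equations (8) and (9): given the module speeds, it returns the linear and rotary speeds of the laparoscope slideway. The slideway radius and wheel inclination angle used here are illustrative assumptions, not the parameters of the actual prototype.

```python
import math

def slideway_speeds(w1: float, w2: float,
                    r_t: float = 0.005, alpha_deg: float = 30.0):
    """Linear speed V_t [m/s] and rotary speed w_t [rad/s] of the laparoscope
    slideway for module speeds w1, w2 [rad/s], assuming alpha1 = -alpha2 = alpha
    (equations (8) and (9))."""
    ctg = 1.0 / math.tan(math.radians(alpha_deg))
    v_t = r_t * (w1 - w2) / (2.0 * ctg)  # eq. (8)
    w_t = (w1 + w2) / 2.0                # eq. (9)
    return v_t, w_t

# Same direction, same speed -> pure rotation; opposite directions -> pure translation.
print(slideway_speeds(10.0, 10.0))    # (0.0, 10.0)
print(slideway_speeds(10.0, -10.0))   # (approx. 0.029, 0.0)
```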

Fig. 2. The velocity distribution at the points of contact of the friction wheels with the laparoscope slideway.

3. The control system
The friction contact in the servo drive causes problems for the control system. These are related to the accuracy of the friction contact and to the possibility of slip between the friction wheels and the laparoscope slideway. During operation of a surgery robot, uncontrolled slip between the driving parts and the laparoscope slideway is not allowed, because the health or life of the patient is at stake. The prevention of slip cannot be achieved by the construction of the robot arm alone; therefore it must be ensured by the automatic control of the movement. It is necessary to design an appropriate response of the control system which does not allow losing control of the laparoscope slideway. For this purpose several measuring systems, which track the movements of both the motor shafts and the laparoscope slideway, have to cooperate. The Automatic Control System (ACS) is therefore realized by two loops. One (interior) loop is realized by the motor drive controllers with measurements of the rotation of the motor shafts; the second is realized by tracking the movement of the laparoscope slideway. The second loop is the superior one and controls the interior loop; furthermore, the external loop contains the slip control algorithm. The block diagram is shown in Fig. 3. The main task of the control system is to control the rotation of the motor shafts so that the displacement of the laparoscope slideway is consistent with the set value.


The servo control system has to convert the set value. The conversion of the set value is necessary for the proper control of the motor shaft velocities and for achieving the set value at the right time. The algorithm computing the motor shaft velocities contains minimal and maximal restrictions resulting from the minimal and maximal motor shaft velocities. The motor shaft velocities are defined by the equations:

$\omega_1 = \omega_t + V_t\,\dfrac{\mathrm{ctg}\,\alpha}{r_t}$   (10)

$\omega_2 = \omega_t - V_t\,\dfrac{\mathrm{ctg}\,\alpha}{r_t}$   (11)
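Conversely, the set values of the slideway motion can be converted into motor shaft speed commands according to equations (10) and (11); the sketch below also clamps the commands to assumed minimum and maximum shaft speeds, as required above. The limit values and the geometry are placeholders, not the data of the real drive.

```python
import math

W_MIN, W_MAX = -300.0, 300.0  # assumed motor shaft speed limits [rad/s]

def motor_commands(w_t_set: float, v_t_set: float,
                   r_t: float = 0.005, alpha_deg: float = 30.0):
    """Motor shaft speed commands for the desired rotary speed w_t_set [rad/s]
    and linear speed v_t_set [m/s] of the slideway (equations (10) and (11)),
    saturated to the admissible range."""
    ctg = 1.0 / math.tan(math.radians(alpha_deg))
    w1 = w_t_set + v_t_set * ctg / r_t  # eq. (10)
    w2 = w_t_set - v_t_set * ctg / r_t  # eq. (11)
    return (max(W_MIN, min(W_MAX, w1)),
            max(W_MIN, min(W_MAX, w2)))

print(motor_commands(w_t_set=5.0, v_t_set=0.01))
```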

Controllers with PID settings are responsible for the motor shaft velocity control. The controller settings are selected in a way that does not permit any slip between the friction wheels and the laparoscope slideway. The PID gains are determined experimentally by means of the appropriate software provided by the manufacturer of the drivers and motors used in the servo drive. Subsequently, the drive mechanism moves the laparoscope slideway. The laparoscope slideway movement is detected by two measuring systems (descriptions of the measuring systems can be found in the next section of this paper). Each of these measuring systems requires appropriate signal processing because of the necessary data fusion. In the process of data fusion the algorithm combines the advantages of each of the measuring systems, so that the measurement is obtained with high resolution, repeatability and accuracy.


The slip estimator is an important module for the safety of the friction force transmission. The slip estimator determines the friction contact nonlinearities by means of the angle drift of the friction wheels. The angle drift coefficients are determined from the theory of the road wheel [13]:

$\delta_{ww1} = \dfrac{F_{ww1}}{k_{ww1}}$   (12)

$\delta_{ww2} = \dfrac{F_{ww2}}{k_{ww2}}$   (13)

where the forces are as follows:

$F_{ww1} = \dfrac{M_1}{r_t \sin\alpha}$   (14)

$F_{ww2} = \dfrac{M_2}{r_t \sin\alpha}$   (15)

where: $\delta_{ww1}$, $\delta_{ww2}$ - the angle drift coefficients of the friction wheels in modules 1 and 2, respectively; $F_{ww1}$, $F_{ww2}$ - the axial forces of the friction wheels in modules 1 and 2, respectively. The state observer is concerned with detecting the slip. The algorithm responsible for this task is based on fuzzy logic. This algorithm must be implemented outside of the ACS, because of the patient's health. The response of the ACS when slip is detected cannot be based only on quantity regulation, as it is important to know the cause of the slip and to respond to it adequately.

Fig. 3. The block diagram of the control system. L, φ - linear and rotational displacement of the laparoscope slideway; L_z, φ_z - linear and rotational input quantities; L_r, φ_r - linear and rotational feedback quantities; ω_z1, ω_z2 - input quantities of motor shafts 1 and 2; ω_11, ω_21 - rotary speed errors of motor shafts 1 and 2; ω_12, ω_22 - rotary speeds of motor shafts 1 and 2; I_1, I_2, M_1, M_2 - currents and torques of motors 1 and 2; P_i, P_j - linear and rotational slip quantities.


This involves a change of the input quantity and an appropriate reaction of the surgery robot. Obviously, the surgery robot control system can modify the input quantity and the parameters of the PID regulator of the servo drive. The reasons for the slippage may be different, and the detection of these reasons is based on different variables too. The algorithm responsible for detecting the slip reasons is based on: the angle drift coefficient, measurement of the motor current, measurement of the motor shaft velocity, measurement of the laparoscope slideway displacement, force measurement on the laparoscope tool, and measurement of the robot master displacement. Depending on these measured values, it is possible to detect slippage and to recognize the reasons for its occurrence. The most important task of the slippage algorithm is to check whether slip occurs. This task is realized by checking whether the measured motor shaft velocities, corrected by the angle drift coefficients, are equal to the laparoscope slideway velocity. If they are equal, the servo drive works correctly; on the other hand, if the difference is larger than the limit values, it indicates slippage. The equation responsible for this algorithm is as follows:

$P_{min} < \omega_1(1 + \delta_{ww1}) - \dfrac{d\varphi_z}{dt} - \dfrac{dL_z}{dt}\cdot\dfrac{r_w}{r_t}\tan\alpha < P_{max}$   (16)

where $P_{min}$ and $P_{max}$ are the limit slip estimation values.
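A minimal sketch of the slip check of equation (16) is given below, with the angle drift coefficient obtained from equations (12) and (14); all numerical constants (stiffness, geometry, thresholds) are illustrative assumptions rather than identified parameters of the drive.

```python
import math

def angle_drift(torque: float, r_t: float, alpha_deg: float, k_ww: float) -> float:
    """Angle drift coefficient of a friction wheel, eqs. (12) and (14):
    axial wheel force F = M / (r_t * sin(alpha)), drift = F / k."""
    f_ww = torque / (r_t * math.sin(math.radians(alpha_deg)))
    return f_ww / k_ww

def slip_detected(w1: float, m1: float, dphi_z: float, dl_z: float,
                  r_t: float = 0.005, r_w: float = 0.01, alpha_deg: float = 30.0,
                  k_ww: float = 500.0, p_min: float = -0.5, p_max: float = 0.5) -> bool:
    """Evaluate the slip condition of eq. (16); True when the residual
    leaves the admissible band (p_min, p_max)."""
    d_ww1 = angle_drift(m1, r_t, alpha_deg, k_ww)
    residual = (w1 * (1.0 + d_ww1)
                - dphi_z
                - dl_z * (r_w / r_t) * math.tan(math.radians(alpha_deg)))
    return not (p_min < residual < p_max)

# Commanded pure rotation at 10 rad/s, small motor torque, no linear motion:
print(slip_detected(w1=10.0, m1=0.05, dphi_z=10.0, dl_z=0.0))  # False (within band)
```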


The next step is to identify the cause of the slippage. If the slip is caused by a change of the friction coefficient due to body fluids that get into the drive mechanism, it is sufficient to compare the change of the motor shaft velocity with the laparoscope slideway velocity and to check the motor current. At the time of the slip the motor shaft velocity increases and the motor current decreases. An important element in determining this kind of slip is to check whether the slippage occurs on both modules, because this kind of slip usually appears only on one of the modules (the one nearest to the patient's body). If the slip occurs only on one module, then surely we are dealing with slip caused by a change of the friction coefficient. The ACS reaction to this slip is a change of the input value to regain control of the laparoscope slideway displacement. The next move is the imposition of a limitation on the motor current value, because of the friction force limitation with the new friction coefficient. The further strategy is up to the surgeons, who can work with a slower servo drive mechanism or stop working for the time needed to replace the drive mechanism. Another cause of slip is a high resistance force on the laparoscope slideway. The control system tries to track the input value and raises the motor current, which results in a high force on the friction wheels and in breaking of the friction contact. To identify this kind of slip, the laparoscope tool forces are measured. If the laparoscope tool forces decrease when the slippage is detected and the laparoscope slideway does not move, it means that the slip is caused by a high resistance force on the laparoscope slideway, for instance from a collision of arms. Then the motor current should decrease until there is no slip. After resolving this situation, the servo drive can work without the motor current modification. The recognition of the specific causes of slips and other emergency situations is a very complex process. In addition, the properly chosen strategies for the surgery robot's reaction to the identified emergencies complicate the control of the surgery robot and of the servo drive mechanism. However, a well-designed control system greatly enhances the safety of using the surgery robot.

4. The sensory system
A very important issue concerning the servo drive, in addition to the drive mechanism and the control system, is the selection of the sensors. Recently a number of sensors with high accuracy have become available on the market, but most of them are useless in the designed servo drive, because they would immobilize one of the moving directions of the servo drive. Therefore, it is necessary to create a new measuring system. A significant limitation, in addition to the requirements of the drive mechanism kinematics, is the need to measure displacement without contact with the laparoscope slideway. If there is contact between the measuring element and the laparoscope slideway, the measuring element must be sterile, because the servo drive is designed for the surgery robot.

4.1. Triangulation laser

One of the solutions is the use of a triangulation laser. As is known, a triangulation laser measures the distance to an object; if we use two lasers, one measuring the distance to a spiral surface and the second measuring the distance to a plane surface, we can measure both movement directions of the drive mechanism (Fig. 4). The laser which measures the distance to the plane surface then informs about the linear displacement, and the second laser informs about the rotary displacement.

Fig. 4. Measurement with triangulation lasers.



The most significant drawback of this method is the point of discontinuity of the spiral, where no measurement is possible. This discontinuity spans about 5 degrees. This drawback can be limited by a software restriction in the ACS, which will prevent reaching this point. On the other hand, this restriction prevents a full rotation and limits the kinematics of the laparoscope slideway. The second problem of this measurement method is the accuracy. The triangulation laser used in the servo drive has a 200 mm range with 0.1 mm resolution. The accuracy of this laser is ±0.2 mm. This is worse than the minimal requirement of the servo drive, which is 0.1 mm.

4.2. The optical sensors used in laser mice
An interesting approach to the measurement of the laparoscope slideway displacement is the use of the laser image processing sensors which are used in laser computer mice. The operating principle of these sensors is based on the comparison of two consecutive pictures taken by a CMOS (complementary metal oxide semiconductor) matrix. The controller built into the CMOS microchip tracks the movement of all points on the consecutive pictures and transmits this movement information to the computer through the microcontroller. The resolution that can be obtained by these sensors is determined by the DPI (dots per inch) value. For example, if the mouse sensor has a resolution of 4000 DPI, this is equivalent to a resolution of 0.00635 mm. This resolution is higher than the resolution of the triangulation laser sensor. The problems with using this sensor are its low reliability and its high sensitivity to changes of the measured surface. Another problem is the "image flow": the "image flow" occurs when the sensor detects a small displacement although it does not move.
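The resolution and measurement principles described above translate into simple conversions, sketched below in Python. The mouse-sensor count size follows directly from the quoted 4000 DPI; the conversion of the two triangulation-laser distances into linear and rotary displacement is only a schematic illustration that assumes a hypothetical spiral pitch, which is not specified in the text.

```python
INCH_MM = 25.4

def mouse_count_mm(dpi: int = 4000) -> float:
    """Displacement represented by a single count of the mouse sensor."""
    return INCH_MM / dpi  # 4000 DPI -> 0.00635 mm per count

def laser_readings_to_pose(d_plane_mm: float, d_spiral_mm: float,
                           spiral_pitch_mm_per_rev: float = 10.0):
    """Schematic conversion of the two triangulation-laser distances:
    the plane-target laser gives the linear position directly, while the
    spiral-target reading, corrected by the linear part, is proportional
    to the rotation angle (the spiral pitch is an assumed value)."""
    linear_mm = d_plane_mm
    rotation_deg = 360.0 * (d_spiral_mm - d_plane_mm) / spiral_pitch_mm_per_rev
    return linear_mm, rotation_deg

print(f"{mouse_count_mm():.5f} mm per count")
print(laser_readings_to_pose(d_plane_mm=50.0, d_spiral_mm=52.5))
```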


5. Construction of the drive mechanism
In the design of the drive mechanism the following assumptions were taken:
- the angles between the friction wheels and the laparoscope slideway are 30 and −30 degrees,
- the friction wheels have to be sterile,
- the drive mechanism has to be quickly mountable on the Robin Heart surgery robot,
- the drive mechanism has to be small.
The construction of the drive mechanism is shown in Fig. 7. The drive mechanism consists of two frames connected by screws. Between these frames there is a pillow which separates the sterile parts of the drive mechanism. In the non-sterile frame there are motors, encoders and elements which provide a stable connection to the Robin Heart surgery robot. In the second frame there are two modules with the friction wheels. The modules are mounted to the main frame by ceramic ball bearings. Both modules can rotate independently. Each of these modules contains a set of three friction wheels. In both modules the friction wheels are placed between two frames, external and internal. The internal frame can rotate and move linearly relative to the external frame. Thanks to this solution the friction wheels can be pressed against the laparoscope slideway. The force pressing the friction wheels against the laparoscope slideway is provided by springs (Fig. 6). The springs are tensioned by a regulation screw. Between the screw and the springs there are a thrust bearing and a separator.


Fig. 5. The operation principle of the mouse sensors [14].
4.3. The summary of the sensory system
If the measuring systems described above work independently, they will not be able to provide sufficient accuracy and reliability of the measurement. Therefore, they should work together in the servo drive. Then the two measurement systems will support each other and guarantee sufficient accuracy and reliability of the measurement. In practice such a system requires an effective data fusion algorithm. The results of the data fusion will be presented in subsequent papers.
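The paper defers its actual data-fusion algorithm to later publications. Purely as an illustrative sketch of what such a fusion could look like, the snippet below combines the two displacement readings by inverse-variance weighting; the variance values are assumptions taken from the accuracy figures quoted earlier, not identified sensor parameters.

```python
# Illustrative sketch only - not the authors' fusion algorithm. Weights are
# derived from assumed sensor variances (laser: +-0.2 mm, mouse: 0.00635 mm step).
def fuse_displacement(laser_mm: float, mouse_mm: float,
                      laser_var: float = 0.2 ** 2,
                      mouse_var: float = 0.00635 ** 2) -> float:
    """Combine two displacement estimates (mm) by inverse-variance weighting."""
    w_laser = 1.0 / laser_var
    w_mouse = 1.0 / mouse_var
    return (w_laser * laser_mm + w_mouse * mouse_mm) / (w_laser + w_mouse)

# Example: two readings that disagree slightly
print(fuse_displacement(laser_mm=12.3, mouse_mm=12.25))
```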


Fig. 6. Construction of one module.
The friction wheels are composed of a shaft and a sleeve. The sleeve is mounted on the shaft by bearings. The shaft is spherically ended and can rotate with respect to the laparoscope slideway. With this solution we can control the force pressing the friction wheels against the laparoscope slideway.



6. The summary
The two-DOF servo drive with friction wheels has been designed despite many problems. The servo drive will be used in the Robin Heart surgery robot and will replace the linear drive of the laparoscope camera or of the automatic laparoscope tool. The designed servo drive is smaller than the linear drive that is currently in use. The comparison of the two structures is shown in Fig. 8. The current research on the sensory system will determine the accuracy and reliability of the measurement of the laparoscope slideway displacement. One of the important aspects of the control system is the control of the influence of the friction contact on the proper operation of the drive mechanism. The next problem is the appropriate design of the state observer, which has to monitor the slip of the friction wheels. The results of this study will be included in the next publications.

ACKNOWLEDGMENTS
This work was supported by the AGH University of Science and Technology under Grant No. 11.11.120.612.

AUTHORS
Łukasz Frącczak* - Politechnika Łódzka, Wydział Mechaniczny, Instytut Obrabiarek i TBM; 90-924 Łódź, ul. Stefanowskiego 1/15, tel. +48 42 631 22 03. E-mail: l.fracczak@gmail.com.
Leszek Podsędkowski - Instytut Obrabiarek i TBM; 90-924 Łódź, ul. Stefanowskiego 1/15, tel. +48 42 631 22 03. E-mail: lpodsedk@p.lodz.pl.
Marcin Zawierucha - Politechnika Łódzka, Wydział Mechaniczny, Instytut Obrabiarek i TBM; 90-924 Łódź, ul. Stefanowskiego 1/15, tel. +48 42 631 22 03. E-mail: marcin.zawierucha@p.lodz.pl.
* Corresponding author

Fig. 7. The drive mechanism construction.
Fig. 8. Comparison of the linear drive: a) designed servo drive, b) the recently used linear drive.

References
[1] Bochenek A., Cisowski M., "Doświadczenia własne (robot Zeus firmy Computer Motion) i perspektywy stosowania robotów w kardiochirurgii", Zabrze. In: Konf. Roboty Kardiochirurgiczne, 2000. (in Polish)
[2] Cichoń R., "Doświadczenia własne (robot da Vinci firmy Intuitive Surgical) i perspektywy stosowania robotów w kardiochirurgii", Zabrze. In: Konf. Roboty Kardiochirurgiczne, 2000. (in Polish)
[3] Jezierski E., Granosik G., "Możliwości realizacji zadań projektu przez Instytut Automatyki Politechniki Łódzkiej", Zabrze. In: Konf. Roboty Kardiochirurgiczne, 2000. (in Polish)
[4] Nawrat Z., Kostka P., "Robot kardiochirurgiczny - polski projekt; wstępne założenia projektowe", dokument wewnętrzny Fundacji Rozwoju Kardiochirurgii. (in Polish)
[5] Podsedkowski L. et al., "Działalność IOiTBM oraz propozycja konstrukcji możliwości wykonawcze robota kardiochirurgicznego", Zabrze. In: Konf. Roboty Kardiochirurgiczne, 2000. (in Polish)
[6] Ikuta K., Takeichi M., Namiki T., "Virtual endoscope system with force sensation". In: IEEE Int. Conf. on Robotics and Automation, Detroit, Michigan, 1999.
[7] Suzuki T. et al., "Compact Forceps Manipulator for Laparoscopic Surgery", Advanced Biomedical Engineering and Sciences, Tokyo.
[8] Suzuki T. et al., "Mechanical error analysis of compact forceps manipulator for laparoscopic surgery", Advanced Biomedical Engineering and Science, Tokyo, 2006.
[9] Mohd Salih J. E. et al., "Designing Omni-Directional Mobile Robot with Mecanum Wheel", American Journal of Applied Sciences, 2006, pp. 1831-1835.
[10] THK. Ball Screw/Spline BNS, https://tech.thk.com/en/products/thk_cat_main.php?id=891
[11] ZeroMax. Rohlix, http://www.zeromax.com/linearactuatorsc4len.html
[12] Amacoil. Rolling ring drives, http://www.amacoil.com/rollingring.html
[13] Prochowski L., "Pojazdy samochodowe. Mechanika ruchu", Warszawa, WKŁ, 2005. (in Polish)
[14] Chmarra M. K. et al., "TrEndo, a device for tracking minimally invasive surgical instruments in training setups", 2006, p. A126.
[15] Frącczak Ł., Podsędkowski L., "Serwomechanizm z ciernym przeniesieniem napędu w dwóch stopniach swobody", Prace Naukowe Politechniki Warszawskiej - Elektronika, no. 175, pp. 253-263. (in Polish)


MATHEMATICAL MODEL OF A MULTI-LINK SURGICAL MANIPULATOR JOINT WITH AN ANTISEPTIC COATING Ryszard Leniowski, Lucyna Leniowska

Abstract: We present a synthesis of the mathematical model of a joint of a new-generation multi-link surgical micromanipulator. The design of such a device involves finding solutions for several difficult problems which do not appear in classical 'large' robots. A prototype in development has six links, and it is driven by brushless servomotors with planetary and worm gears, for which the total transmission ratio is above 10000:1. Most importantly, the whole construction is covered by an antiseptic coating which substantially changes the manipulator's dynamics. Because of the complicated form of the drive model, with three-stage planetary gearings and coating interactions, control of such a system is significantly different from control of a typical industrial robot. This paper presents a synthesis of the model of the joint, which can be used to design the control system. In this work the derived model is used to examine the joint's properties and characteristics. Results are presented graphically and discussed. Keywords: surgical manipulators, mathematical modeling, coating interactions.

1. Introduction
The design of new-generation multi-link surgical micromanipulators involves finding solutions to several difficult problems which do not appear in classical "big" robots. They concern the robot construction and its assembly as well as the control system design. An example of such a construction is the model of a multi-link surgical manipulator which was designed within the scope of the project of the Ministry of Science and Higher Education No. 2376/B/T02/2010/38. The manipulator prototype contains 6 links with a diameter of 8-10 mm and a module length of about 130 mm. It is driven by brushless servomotors with planetary and worm gears, for which the total transmission ratio is above 10000:1. An essential feature of this manipulator is an antiseptic coating which covers the whole construction. The manipulator in question belongs to the group of miniature robots (its working space has a cubic capacity of less than 1 dm³). The vital feature distinguishing miniature robot constructions from their macro counterparts is the substantially lower efficiency of the whole driving path. For the best currently available devices, the efficiency is about 45-50% (for motors) and 50-55% (for multi-stage planetary gearing). For comparison, the efficiency of "macro" robots is approximately 90-93% (motors) and 78-86% (gearing). However, the construction of a miniature harmonic drive gearing with a big reduction ratio

and a diameter not greater than 5 mm is still not available. The low efficiency of the driving path (with reference to the generated torque) is caused by the high friction occurring in the motors and gearings. The other feature which differentiates the described multi-link robot from similar well-known constructions is the aforementioned antiseptic coating. It covers the outer surface of the manipulator and in this way insulates all parts of the construction from the human body, providing high sterility. During the robot's movement (bending and twisting), the coating generates complex dynamic interactions acting on the joints. For those reasons the model of the joint dynamics is more complex than the corresponding one for "macro" robot constructions. In consequence it influences the robot precision, and the design and implementation of the control system are also more complicated. This paper presents the synthesis of a mathematical model of the multi-link surgical manipulator joint with an antiseptic coating. The main aim of the work is to investigate, with the use of simulation methods, some characteristics and properties of the joint in question. In this way it might be possible to answer the following question: is it feasible for the position accuracy of the manipulator, in spite of the complex drive dynamics in the miniature construction, to stay below the assumed level of 0.5 mm? To do this, it is important to build an appropriate model, with special attention paid to the induction of vibration. These data will help in fitting the best method for model parameter identification and allow designing an accurate and efficient control system. Additionally, the joint model is also a part of a virtual simulator, which is currently being developed for investigation and training purposes. In the first approach, it will be used for designing the control system.

Fig. 1. View of the joint.
The joint model (Fig. 1) consists of the following elements: the planetary and worm gears, the brushless motor (BLDC) and the antiseptic coating. Separation of components in the model enables a better configuration of the structure, so that the described prototype can be characterized by high-tech parameters. Currently, the prototype


design of the medical robot is based on sub-assemblies from the Faulhaber company [3]. Obviously, they can be replaced by better ones if such become available in the future.

2. Planetary gearing model
Considering the disturbing torques acting on the joint in question, it can be noticed that the biggest contribution comes from the multi-stage planetary gearing. This is why understanding its model is so important for designing the control system. In recent years (2007-2009) some very accurate models of planetary gearings have been worked out [4]. They have the form of a matrix differential equation of second order with dimensions 6N × 6N, N = (3 + p), where p is the number of planets. Unfortunately, in the cited paper the description of friction is omitted because of its complexity. For p = 4, the inertia and stiffness matrices have dimensions (42 × 42) and they generate a number of resonance frequencies. On the basis of [4] it can be noticed that they consist of three groups of frequencies. Regarding only the low and middle resonance frequencies, the initial model can be replaced with a reduced model containing three spinning masses, as follows:


where
$\frac{dz}{dt} = v - \frac{|v|}{g_v}\,\sigma_0 z, \qquad g_v = a_0 + a_1 e^{-(v/v_0)^2}$   (3)

Parameters $\sigma_0$, $\sigma_1$ and $\sigma_2$ denote the fibre stiffness, the fibre damping and the viscous coefficient, respectively. Coefficients $a_0$ and $a_1$ model the shape of the function $g_v$, and $v$ is the relative velocity of the rubbed surfaces. The non-linear differential equation (3), together with the function $g(v)$, establishes a "system with memory" which permits better modelling of real processes. Equations (2) and (3) will be complemented in Section 4 by the model of the coating interaction.
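To make the role of equations (2) and (3) concrete, the sketch below integrates the LuGre friction state z with a simple explicit Euler step and evaluates the resulting friction force. All parameter values are illustrative assumptions, not identified values from the prototype.

```python
import math

# Illustrative LuGre friction evaluation following eqs. (2)-(3); parameter
# values below are assumptions for demonstration, not identified prototype data.
SIGMA0 = 1e3   # fibre stiffness
SIGMA1 = 5.0   # fibre damping
SIGMA2 = 0.1   # viscous coefficient
A0, A1 = 0.3, 0.2   # shape coefficients of g(v)
V0 = 0.05           # Stribeck-like velocity scale

def g(v: float) -> float:
    return A0 + A1 * math.exp(-(v / V0) ** 2)

def lugre_step(z: float, v: float, dt: float):
    """Advance the internal state z by one Euler step and return (z_new, friction force)."""
    dz = v - abs(v) / g(v) * SIGMA0 * z                  # eq. (3)
    z_new = z + dz * dt
    force = SIGMA0 * z_new + SIGMA1 * dz + SIGMA2 * v    # eq. (2)
    return z_new, force

z = 0.0
for k in range(1000):
    v = 0.02 * math.sin(2 * math.pi * k / 500)           # assumed velocity profile
    z, f = lugre_step(z, v, dt=1e-4)
```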

3. Simplified model of joint drive
Four out of the group of six joints are driven by miniature brushless DC motors. The three-phase model of the electrical part of the motor can be replaced by a transfer function similar to that of a DC motor, Fig. 2. R and L are the motor parameters and ω denotes the instantaneous angular velocity.

Fig. 2. Simplified engine schema.

$J_m \ddot{q}_m + F_m(\dot{q}_m) = T_m - h_1 T_{g1}$
$T_{g1} = k_{11}(h_1 q_m - q_{g1}) + k_{31}(h_1 q_m - q_{g1})^3$
$J_{g1} \ddot{q}_{g1} + F_{g1}(\dot{q}_{g1}) = T_{g1} - h_2 T_{g2}$
$T_{g2} = k_{12}(h_2 q_{g1} - q_{g2}) + k_{32}(h_2 q_{g1} - q_{g2})^3$
$J_{g2} \ddot{q}_{g2} + F_{g2}(\dot{q}_{g2}) = T_{g2} - T_l$   (1)

where: Jm is the motor inertia; Jg1 and Jg2 – the inertias of the rotating gears; Tg1 and Tg2 – the non-linear torque spring interactions; Fm, Fg1, Fg2 – the friction torques; Tm – the driving torque; Tl – the arm loading torque; h1, h2 – the two-stage gear ratios. Because eq. (1) includes the nonlinear torque spring interactions, the presented model is very close to real conditions. An essential component of eq. (1) is the friction torques. Friction, a phenomenon existing in all mechanical devices having any moving parts, can be observed as reaction forces/torques acting on the contact surfaces of two bodies. The mathematical models of friction currently in use can be classified into two categories. The first contains the so-called classical models of friction, which are limited to static characteristics. The second category includes the modern models, which also try to take into account the dynamics of this complex phenomenon. Such dynamical models, based on differential equations, are usually the results of empirical research, not rarely originating from loosely connected branches of science (e.g. mechanics and geophysics). Considering the newer models of friction, the simplicity and realism of the "LuGre" model [5] makes it worthy of attention. It describes the friction force as the interaction of fibres fastened between the rubbed surfaces and treats the fibres as viscoelastic. The friction torque reaction (Fm, Fg1, Fg2) contains three components:
$F = \sigma_0 z + \sigma_1 \dot{z} + \sigma_2 v$   (2)
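As an illustration of how the reduced model (1) can be simulated, the sketch below assembles its right-hand side for a standard ODE solver. Inertias, stiffnesses and ratios are placeholder values, and the friction torques are simplified to viscous terms rather than the full LuGre model of (2)-(3).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder parameters for the three-mass reduced gear model (1); values are
# illustrative assumptions only. Friction is simplified to viscous damping here.
Jm, Jg1, Jg2 = 1e-6, 5e-6, 2e-5          # inertias [kg m^2]
h1, h2 = 1 / 16, 1 / 16                  # stage ratios
k11, k31 = 50.0, 5e3                     # linear / cubic stiffness, stage 1
k12, k32 = 80.0, 8e3                     # linear / cubic stiffness, stage 2
bm, bg1, bg2 = 1e-5, 2e-5, 5e-5          # viscous friction coefficients

def rhs(t, x, Tm=1e-3, Tl=0.0):
    qm, wm, qg1, wg1, qg2, wg2 = x
    Tg1 = k11 * (h1 * qm - qg1) + k31 * (h1 * qm - qg1) ** 3
    Tg2 = k12 * (h2 * qg1 - qg2) + k32 * (h2 * qg1 - qg2) ** 3
    return [wm,  (Tm - bm * wm - h1 * Tg1) / Jm,
            wg1, (Tg1 - bg1 * wg1 - h2 * Tg2) / Jg1,
            wg2, (Tg2 - bg2 * wg2 - Tl) / Jg2]

sol = solve_ivp(rhs, (0.0, 0.05), np.zeros(6), max_step=1e-5)
```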


An inseparable element of a professional servomechanism is a high-efficiency PWM power amplifier with a PI current feedback controller. Besides its linearizing properties, it allows the servomechanism to work in a torque control mode. The power amplifier transfer function, binding the phase voltage v0 with the control voltage vc, equals
$\frac{v_0}{v_c} = \frac{K_{PWM}}{1 + sT_{PWM}}$   (4)

The amplification of the PWM system is dimensionless and does not exceed the value of 20. The time constant TPWM of the amplifier depends on the working frequency of the generator which modulates the current signal. In the designed servomechanism it comes to 20 kHz, which means that the time constant is 50 µs. The value of the time constant of the power amplifier is insignificant in comparison with the other time parameters, thus it can be omitted. Because of that, a linear model of the considered block with amplification coefficient KPWM can be assumed. The motor-amplifier system can be interpreted in particular cases as a voltage-controlled torque generator. For Ki ≠ 0 the decisive influence on the simplified transfer function comes from the gain introduced to the system by the amplifier and the current PI controller:

$I \approx \frac{k_t}{k_i}\left(V_c - \frac{k_b}{K_{PWM} K_p}\,\omega\right)$   (5)

Because the denominator of the second component in brackets has a big value, the generated current depends very little on the rotating frequency. Thus, the dynamics contributed by the electrical part of the motor can be ignored.



4. Effect of the antiseptic coating
The multi-link surgical manipulator is covered by the antiseptic coating, which covers the outer surface of the manipulator and in this way insulates all parts of the construction from the human body, providing high sterility. Despite its low thickness, during fast robot movements (bending and twisting) the coating generates complex dynamic interactions acting on the joints. The analytical model of the coating can be described by Love's equations [6]. They have great theoretical potential; however, they are not useful enough in practice. Instead, it seems that a better solution is to apply a simple FEM model, because its boundary conditions are easy to configure and no restrictions are imposed on the external forces.
4.1. The coating model
The applied model has concentrated masses of mass-spring-damping type and a rectangular net topology with a sixteen-link neighbourhood (Table 1). It ensures realistic behaviour for all kinds of coating deformations.
Table 1. Kinds of interactions for the perpendicular net.


$F^s_{ij} = k_{ij}\left(\|x_i - x_j\| - L^0_{ij}\right)\frac{x_i - x_j}{\|x_i - x_j\|}$   (6)

where $x_i$, $x_j$ are the spatial coordinates of the point masses and $\|x_i - x_j\|$ is the current length of a link. The force components are proportional to the spring coefficient $k_{ij}$, and their directions agree with the links, scaled by the current link length. This is expressed by the quotient in equation (6).

In a similar way we calculate the friction forces, which attenuate the motion proportionally to the mass velocities (7) with coefficient $d_{ij}$. Introduction of the scalar product $(v_i - v_j,\, x_i - x_j)$ allows describing in a very compact way the projection of the friction forces onto the directions of the spring interactions:
$F^d_{ij} = d_{ij}\,(v_i - v_j,\, x_i - x_j)\,\frac{x_i - x_j}{\|x_i - x_j\|}$   (7)

In general, both coefficients $k_{ij}$ and $d_{ij}$ may depend on the node coordinates. This allows including nonlinear phenomena in the model. Surrounding and contact interactions are the components of the external forces. The force of such an interaction on a node depends on the size of the coating surface (Fig. 3).


Fig. 3. Normalized vector of the medium reaction and the normal to the surface in the node.



Interactions of type A occur between masses in direct contact, only in the main directions. These interactions are responsible for stretching deformations. Interactions of type B occur between masses in direct contact, only in the diagonal directions. These interactions are responsible for shearing deformations. Interactions of type C occur between a mass and its far neighbour in the main directions. These interactions are responsible for bending deformations. Interactions of type D occur between far neighbours in the diagonal directions and improve the bending properties of the surface. To obtain the three-dimensional coating equations, we assume the initial (rest) lengths of the links are $L^0_{ij}$. The value of the spring force (three components) between each two points of the net is given by equation (6) above.

The normal vector $N_{norm}$ to the surface close to the nearest node area is equal to the normalized vector calculated as the average of the four normal vectors determined for the triangles adjacent to the node. Finally, the force vector $F^w_{ij}$ acting on the node can be described as:
$F^w_{ij} = k_w\, P_{pow}\, r_w\, N_{norm}$   (8)

where $P_{pow}$ is the shell surface in the node surrounding and $r_w$ is the pressure [Pa] generated by the medium on the surface perpendicular to its direction. The gravity interactions are equal to $F^g_{ij} = g\,m_{ij}$, where $g = [0, 0, -9.81]^T$.
$\ddot{x}_{ij} = m_{ij}^{-1}\sum_{i=1}^{k}\left(F^w_{ij} + F^g_{ij} - F^s_{ij} + F^d_{ij}\right),\quad k = 12 \text{ or } 16$   (9)

Thus the equations of dynamics for the nodes of the surface (9) are applied to obtain the accelerations, using the Verlet algorithm described in subsection 5.1.
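The following sketch shows how the force balance (6)-(9) can be evaluated for a single free node of the coating net. The spring and damping coefficients, rest length, mass and pressure force are assumed values chosen for illustration only.

```python
import numpy as np

# Sketch of the node force balance (6)-(9) for the mass-spring-damping coating
# net. Spring/damping coefficients, rest length and pressure are assumed values.
def spring_force(xi, xj, k_ij=200.0, L0=1e-3):
    d = xi - xj
    length = np.linalg.norm(d)
    return k_ij * (length - L0) * d / length                     # eq. (6)

def damping_force(xi, xj, vi, vj, d_ij=0.5):
    d = xi - xj
    return d_ij * np.dot(vi - vj, d) * d / np.linalg.norm(d)     # eq. (7)

def node_acceleration(x_node, v_node, neighbours, m=1e-5,
                      pressure_force=np.zeros(3), g=np.array([0.0, 0.0, -9.81])):
    """Acceleration of one free node from its neighbour links, pressure and gravity, eq. (9)."""
    total = pressure_force + m * g
    for xj, vj in neighbours:
        total += -spring_force(x_node, xj) + damping_force(x_node, xj, v_node, vj)
    return total / m
```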


4.2. The free and bounded nodes
In the model of the robot coating, a classification into free and bounded nodes has been introduced. Bounded nodes are fixed to some points of an arm and they are used to model clamps. Examples of such nodes are the nodes located on the first circumference of a cylinder (Fig. 4). The rest of the coating node structure will follow their movements as far as the spring and interaction forces permit.

Fig. 4. The free (grey) and bounded (white) nodes on the robot arm surface.

It is possible to act on any free surface node, or on a group of nodes, by a vector of forces with different values and directions.

4.3. Coating reaction to joint movement
Fig. 5. The coating deformations.
Difficulties in deriving the coating reaction to the joint bending are caused by two phenomena. Firstly, the majority of the coating free nodes are in permanent contact with the arm surface and only a small part remains free in the joint surroundings. This may cause a small but significant longitudinal movement. Secondly, while the joint is bending, the covering coating is subject to two different deformations: on one side it is stretched above the joint and on the other side it is compressed, as shown in Fig. 5. The reaction to compression is difficult to derive because transversal dislocations of the coating are highly probable. It can be assumed that the "improbable" transverse deformations (pointed to by the vertical arrow in Fig. 5) will introduce a stochastic distribution of the stretching moment with a slight amplitude. The stretched fragment acts on the joint with a moment distributed over the radius, which is proportional to the local elongation of the coating. The interactions are the strongest in the plane of movement and the weakest on the joint axis (Fig. 6). The angle $\beta \in (-90°, +90°)$, $q_i$ is the joint angle and $R$ denotes the robot arm radius.
$F_k \approx k\,\Delta L_m = kR\sin(q_i)\cos(\beta) = 2kR\sin(q_i)$   (8)
An elongation of the coating inflicts all four kinds of interactions for a rectangular net described in Table 1.

Fig. 6. The local elongation of coating.
In the fourth joint (the rotation around the longitudinal axis) the coating reactions correspond to the twist of the joint versus its rotation angle. It has been assumed that this reaction is connected with two layers of free nodes and that the reaction force is derived from equation (10) in a reconfigured form. In this case there are two kinds of interactions responsible for the reaction forces: type A and type B (in the diagonal directions).

5. Simulation investigations
Fig. 7. Block diagram of the joint model.
The joint models together with the coating model constitute the resulting simulation structure, which glues together the systems described in subsections 2-4. The block diagram of the hinge joint is shown in Fig. 7. The dynamical interactions TL occurring in the joints are the sums of the torques generated by the coating and arm



reactions, which are derived from closed models. Because of this, the influence of the reaction of the elastic coating can be precisely evaluated and the effects of using the multi-stage planetary gearing can be observed.
5.1. Numerical stability
Because of the big range of the coefficients describing the gears and the parameters of all the motors (the coefficients of stiffness, friction, inertia etc.), the equations of motion are ill-conditioned. For the assumed tolerance of calculations, which was set at 0.01 (1%), and despite the use of a scaling technique for the system variables, numerical stability was achieved for a step of 0.1 [µs] with the Bogacki-Shampine integration method of the third order. Thus the simulation is very time-consuming. The coating contains several thousand nodes, coupled with each other, which may collide with the robot construction. Simulation of this system behaviour requires fast and stable numerical methods. In this work the Verlet algorithm of second order [7] has been applied. It is based on performing several operations in three steps:
$x_i(t+\Delta) = x_i(t) + v_i(t)\Delta + \frac{a_i(t)\Delta^2}{2}$
$v_i\!\left(t+\tfrac{\Delta}{2}\right) = v_i(t) + \frac{a_i(t)\Delta}{2},\quad$ update $a_i(t+\Delta)$
$v_i(t+\Delta) = v_i\!\left(t+\tfrac{\Delta}{2}\right) + \frac{a_i(t+\Delta)\Delta}{2}$   (9)
These steps are as follows:
- calculation of the current position $x_i(t+\Delta)$ and the velocity $v_i(t+\Delta/2)$ at the mid-point,
- calculation of the acceleration $a_i(t+\Delta)$ from equation (9),
- updating the velocity.
For $\Delta = 10^{-4}$ [s] this method allows simulating a system with 80×36 nodes in real time.
5.2. Results of investigations
The aim of the performed simulations was to examine the properties of the surgical manipulator joint during its typical movements. Start-stop and forward-reverse movements with different characteristics have been tested. The results of the conducted tests are shown below. In Fig. 8 the time plots of the position, the velocity and the torque interactions between the gear and the motor are presented. One can observe the difference between the angular position of the shaft for the motor and for the joint. Its maximum value is smaller than 2×10⁻³ rad, which means that the position error is about 0.05-0.08 mm for joints of different lengths. Considering all joints, it can be stated that the total error of the angular position is less than assumed and does not exceed 0.3 mm. The compensation of the phase shift between the motor and the joint angular position, as well as the correction of the angular position amplitude, must be done by the related servomechanism. A phase shift between the angular position of the motor and the joint is visible in Fig. 8 and it does not exceed 0.27 rad. It can be seen from Fig. 9 that for the considered joint some high-frequency vibrations appear in the reversal phase of work. They do not influence the changes of the joint position, but they will probably lead to quicker fatigue.

Fig. 8. Time plot of angular positions of the robot joint: motor (gray); one-stage gear (——); joint shaft (------).
Fig. 9. Time plot of the joint angular velocity: motor (——); one-stage gear (------); joint shaft (light gray).
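The three-step scheme (9) is the standard velocity-Verlet update. A minimal sketch is given below; the acceleration function is a placeholder standing in for the node dynamics of Section 4, and the numbers are illustrative.

```python
import numpy as np

# Minimal velocity-Verlet step following the three-step scheme (9); the
# acceleration function is a placeholder standing in for the node dynamics.
def verlet_step(x, v, a, accel_fn, dt):
    """Advance positions/velocities of all nodes by one step of size dt."""
    x_new = x + v * dt + 0.5 * a * dt ** 2        # step 1: positions
    v_half = v + 0.5 * a * dt                     # step 2: mid-point velocities
    a_new = accel_fn(x_new)                       # update accelerations
    v_new = v_half + 0.5 * a_new * dt             # step 3: final velocities
    return x_new, v_new, a_new

# Example with a toy linear restoring acceleration (assumption, for illustration)
accel = lambda x: -50.0 * x
x = np.random.rand(80 * 36, 3) * 1e-3
v = np.zeros_like(x)
a = accel(x)
for _ in range(1000):
    x, v, a = verlet_step(x, v, a, accel, dt=1e-4)
```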

Fig. 10. Time plots of the joint reaction on the motor: without coating (——); with coating (------).
The torques of the joint reactions on the motor remain within the acceptable range and they do not contain high-frequency components with substantial amplitude, Fig. 10. The effect of the elastic coating on the joint can be seen in the zoom (Fig. 11). The observed reactions do not exceed 2% of the motor torque value, which ensures high precision of movements.

Fig. 11. Zoom of the time plots of the joint reaction on the engine: without coating (——); with coating (------).


The performed investigations have crucial importance for the realisation of the next steps essential for the construction and control of the manipulator prototype. The supporting software is still in the development phase and is subject to change. However, the current architecture consists of two parts: a set of s-functions for the Matlab/Simulink environment, and a component library based on the signals & slots technology. For visualization and graphic animation of the components the OpenGL library is used, with some parts of the application written in the Cg language. As both the front-end and the back-end of the manipulator software are important parts of the whole project and are still being completed, they will be dealt with in a separate paper.

6. Conclusion
The obtained mathematical model of a multi-link medical robot joint with an antiseptic coating allows analysing the dynamical properties of the considered object with very good reliability. Its structure reflects the real features of the designed device. The parameter values were evaluated using technical data available from the producers of the sub-assemblies. The model takes the form of a three-level cascade structure with blocks containing nonlinear differential equations. The associated model of the elastic, antiseptic coating is an FEM model of the "mass-spring-damping" type, with a rectangular net topology and a sixteen-neighbour system. This model reproduces, with satisfactory realism, all kinds of deformations which influence the working joint. The designed surgical manipulator is a unique construction: it contains 6 links with a diameter of 8-10 mm and a module length of about 130 mm. It is driven by brushless DC servomotors with planetary and worm gears, for which the total transmission ratio is above 10000:1. An essential feature of this manipulator is the antiseptic coating which covers the whole construction. Because of the complicated form of the drive model, with three-stage planetary gearings and coating interactions, the control of such a system is significantly different from typical industrial robot control. The results of the performed simulations show that the value of the phase shift of the shaft angle position versus the joint axis lies in the range of ±2×10⁻³ rad and the corresponding velocity amplitude reduction level is about 15%. This means that, to achieve a high precision of the planned manipulator movements, a compensation of the rotation angle of the motor shaft should be introduced in the process of designing the control. Additionally, the performed tests confirm the presence of high-frequency vibrations in the considered system, which can be seen in Fig. 9. They have an adverse influence on the joint operation and will complicate the control system. Because the manipulator in question is intended for minimally invasive surgical operations, where the precision of movement plays a crucial role, information about the main properties of the model dynamics is essential for the design of the control system. The appropriate choice of the identification and control algorithms for the derived model allows suppressing the adverse phenomena and achieving the required precision of movements. The performed investigations will be put into practice using a specially constructed laboratory stand with the medical robot prototype, which is now under construction.


ACKNOWLEDGMENTS
The research was supported by the Ministry of Science and Higher Education in Poland under project No. 2376/B/T02/2010/38.

AUTHORS
Ryszard Leniowski* - Department of Computer and Control Engineering, Rzeszów University of Technology, Wincentego Pola 2, 35-959 Rzeszów. E-mail: lery@prz-rzeszow.pl.
Lucyna Leniowska - Institute of Technology, University of Rzeszów, Rejtana 16A, 35-959 Rzeszów. E-mail: lleniow@univ.rzeszow.pl.
* Corresponding author

References
[1] J. Cieślik, R. Leniowski, L. Leniowska, "Roboty medyczne nowej generacji - prototyp wieloczłonowego robota chirurgicznego". In: Proc. of KKR'11, Karpacz 2010. (in Polish)
[2] R. Pajda, L. Leniowska, R. Leniowski, J. Cieślik, "Projekt wieloczłonowego manipulatora chirurgicznego nowej generacji". In: Proc. of KKR'11, Karpacz 2010. (in Polish)
[3] Faulhaber GmbH, Brushless DC-Servomotors. Technical information, 2010.
[4] T. Eritenel, R. Parker, "Vibration Modes of Helical Planetary Gears". In: Proceedings of the IDETC/CIE 2009, San Diego, USA, 2009.
[5] H. Olsson, K.J. Åström, "Observer-based friction compensation". In: Proceedings of the 35th IEEE Conference on Decision and Control, Kobe, Japan, 1996, pp. 4345-4350.
[6] F. Axisa, P. Trompette, "Modelling of Mechanical Systems: Structural Elements", vol. 2, Elsevier, 2005.
[7] E. Hairer, C. Lubich, G. Wanner, "Geometric numerical integration illustrated by the Störmer-Verlet method", Acta Numerica, Cambridge University Press, 2003, pp. 399-450.



MODELLING AND EXPERIMENTAL RESEARCH OF NONHOLONOMIC BALL GEAR Bartłomiej Krysiak, Dariusz Pazderski, Krzysztof Kozłowski

Abstract: The paper considers the kinematic and dynamic model of a ball gear which can be used in a nonholonomic manipulator. The phase constraints are described and the constraint forces and torques are discussed. In order to describe the mechanical properties of the gear, experimental research was conducted. Based on the experimental results, the maximum driving torque, the gear efficiency and the resultant slip are estimated. Keywords: nonholonomic constraint, nonholonomic manipulator, kinematics and dynamics.

1. Introduction
Mechanical systems with phase constraints are an important class of objects in robotics. Among them, one can distinguish systems with kinematic (i.e. first order) constraints which have their roots in pure rolling between bodies in contact or in the angular momentum preservation principle. If the constraints are not integrable, i.e. they do not reduce the size of the configuration space, one calls them nonholonomic constraints. Taking into account robotic applications, the most important class of nonholonomic systems is constituted by the majority of wheeled vehicles. Their kinematic structure makes it possible to assume that the motion is not affected by the slip phenomenon. A less well-known group of nonholonomic robots are nonholonomic manipulators. Their main advantage, at least from a theoretical point of view, is the ability to obtain a lightweight mechanical structure as a result of a decreased number of independent actuators with respect to the dimension of the manipulator configuration. One of the most familiar propositions of a nonholonomic manipulator was given by Sørdalen and others [5]. It is based on ball gears which take advantage of phase constraints in order to ensure transmission of the torques between joints. The manipulator has been designed and realized in practice in two kinematic versions [1]. In the robotics literature some continuous and discontinuous control algorithms have been proposed [1-4] to solve regulation and trajectory-following tasks. However, not many works have been devoted to the analysis of the fundamental part of the manipulator, namely the nonholonomic gear. This paper can be treated as an attempt to fill this gap. It presents a detailed description of the nonholonomic ball gear, taking into account the structure built in the Chair of Control and Systems Engineering, Poznan University of Technology. The idea of the mechanism is inspired by the proposition presented in [5]. The paper is focused on the theoretical description of the gear model, including kinematics and dynamics, as well as some practical features of the gear. In order to verify the fundamental properties of the gear, experimental research has been conducted. The results of this research are given in the paper, covering the maximum value of the transmitted torque and the overall efficiency of the mechanism.

2. General description of the ball gear
In Figs. 1 and 2 the considered mechanism of the nonholonomic gear is presented. It consists of: the ball, three active rollers (input W0 and output W1, W2) which are responsible for torque transmission, and two passive rollers W3 and W4, which support the ball and exert the proper pressure on it needed for torque transmission.

Fig. 1. Visualization of the 3D model of the gear structure.
Fig. 2. Picture of the built gear mechanism.
The areas of contact between the input and output rollers and the ball are the source of the main phase constraints observed for the gear. Rollers W0 and W3 are placed at the poles of the ball. It is assumed that a point contact without slipping exists between each roller and the ball. The direction of the ball rotation is determined by the direction of motion of the input roller W0. The ball is driven by roller W0, which transmits torque to the output rollers W1 and W2. The axes of rollers W1 and W2 lie in the plane of the ball equator. The contacts between the rollers W1 and W2 and the ball create two constraint


planes. The intersection of those planes determines the rotation axis of the ball. As a result, the number of degrees of freedom of the ball is reduced to one. Consequently, the ball cannot rotate around the axis through its north and south poles. Taking into account that the system of associated output rollers can change its orientation with respect to that axis, one can consider the angular velocity of the output rollers as a function of the angle between the input and output rollers.

3. Analytical model of the gear
In this paper it is assumed that R ∈ SO(3) describes any rotation matrix, while
$R_z(\gamma) \triangleq \begin{bmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{bmatrix} \in SO(3)$   (1)
is the matrix denoting rotation around the z axis by an angle γ. In order to describe the vector product of two vectors $\alpha = [\alpha_1\ \alpha_2\ \alpha_3]^T \in \mathbb{R}^3$ and $\beta \in \mathbb{R}^3$ the following notation is used:
$\alpha \times \beta = S(\alpha)\,\beta,$   (2)
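As a small numerical companion to this notation, the sketch below defines the rotation about the z axis and the skew-symmetric operator used for cross products, and verifies property (2); it simply mirrors the definitions in the text.

```python
import numpy as np

# Helper definitions mirroring eqs. (1)-(3): rotation about z and the
# skew-symmetric matrix operator used for cross products.
def Rz(gamma: float) -> np.ndarray:
    c, s = np.cos(gamma), np.sin(gamma)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def S(a: np.ndarray) -> np.ndarray:
    """Skew-symmetric matrix such that S(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

a, b = np.array([1.0, 2.0, 3.0]), np.array([-1.0, 0.5, 2.0])
assert np.allclose(S(a) @ b, np.cross(a, b))   # property (2)
```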


where
$S(\alpha) \triangleq \begin{bmatrix} 0 & -\alpha_3 & \alpha_2 \\ \alpha_3 & 0 & -\alpha_1 \\ -\alpha_2 & \alpha_1 & 0 \end{bmatrix} \in so(3)$   (3)
is a skew-symmetric matrix. Unit vectors codirectional with the axes of any 3D frame are referred to as: $e_1 \triangleq [1\ 0\ 0]^T$, $e_2 \triangleq [0\ 1\ 0]^T$, $e_3 \triangleq [0\ 0\ 1]^T$. Taking into account the gear model, we consider only those components which are relevant for torque transmission, namely the ball and the active rollers. The passive rollers which support the ball are not taken into account in this analysis in order to simplify the description. However, these components can be relatively easily included in the model based on the presented modeling framework.

3.1. Kinematic description
Fig. 3. Geometry of the gear.
Let us consider three reference frames with origins placed at the center of the ball. The first frame $OX_gY_gZ_g$ does not change its orientation with respect to the inertial one. The axes of the second frame $OX_lY_lZ_l$ contain the points $P_0$, $P_1$ and $P_2$ – this frame rotates in the same way as the system of associated rollers W1 and W2. The third frame is fixed to the ball and is not shown in Fig. 3 for clarity reasons. It is assumed that $\rho$ denotes the radius of the ball, while $r_{w0}$, $r_{w1}$ and $r_{w2}$ denote the radii of rollers W0, W1 and W2, respectively. The orientation of the ball with respect to the inertial frame is described by the rotation matrix $R \in SO(3)$, while its angular velocity defined in this frame is denoted as $\omega = [\omega_x\ \omega_y\ \omega_z]^T$. Then, the time derivative of the rotation matrix can be calculated as $\dot{R} = S(\omega)R$. Furthermore, we assume that the value of the angular velocity of the $i$th roller determined in its local frame is described by $\omega_{wi}$. At the contact point $P_0$ the linear velocity on the ball surface determined in the inertial frame can be written as follows:
$\dot{p}_0 = S(\omega)\, e_3\, \rho = [\omega_y\rho\ \ -\omega_x\rho\ \ 0]^T,$   (4)
while the linear velocity on the roller W0 surface at the same point equals:
$\dot{p}_0 = -S(e_2)\, e_3\, \omega_{w0} r_{w0} = -e_1\, \omega_{w0} r_{w0}.$   (5)
Making a similar analysis with respect to rollers W1, W2 and the ball at points $P_1$ and $P_2$, one obtains:
$\dot{p}_1 = \dot{\theta} S(e_3) R_z(\theta) e_1 \rho - R_z(\theta) S(e_2) e_1 \omega_{w1} r_{w1} = [-\dot{\theta}\rho\sin\theta\ \ \ \dot{\theta}\rho\cos\theta\ \ \ \omega_{w1} r_{w1}]^T,$   (6)
$\dot{p}_1 = S(\omega) R_z(\theta) e_1 \rho = [-\omega_z\rho\sin\theta\ \ \ \omega_z\rho\cos\theta\ \ \ \omega_x\rho\sin\theta - \omega_y\rho\cos\theta]^T,$   (7)
$\dot{p}_2 = \dot{\theta} S(e_3) R_z\!\left(\theta+\tfrac{\pi}{2}\right) e_1 \rho - R_z\!\left(\theta+\tfrac{\pi}{2}\right) S(e_2) e_1 \omega_{w2} r_{w2} = [-\dot{\theta}\rho\cos\theta\ \ -\dot{\theta}\rho\sin\theta\ \ \ \omega_{w2} r_{w2}]^T$   (8)
and
$\dot{p}_2 = S(\omega) R_z\!\left(\theta+\tfrac{\pi}{2}\right) e_1 \rho = [-\omega_z\rho\cos\theta\ \ -\omega_z\rho\sin\theta\ \ \ \omega_x\rho\cos\theta + \omega_y\rho\sin\theta]^T.$   (9)
Next, assuming pure rolling of the rollers on the ball and making use of (4)-(9), we get the following equations describing the first order phase constraints:
$\omega_{w0} r_{w0} + \omega_y \rho = 0,$   (10)
$-\rho(\omega_x\sin\theta - \omega_y\cos\theta) + \omega_{w1} r_{w1} = 0,$   (11)
$-\rho(\omega_x\cos\theta + \omega_y\sin\theta) + \omega_{w2} r_{w2} = 0,$   (12)
$\omega_x = 0,$   (13)
$\omega_z - \dot{\theta} = 0.$   (14)
Next, defining the velocity vector $\xi = [\xi_1\ \xi_2\ \ldots\ \xi_7]^T \triangleq [\omega_x\ \omega_y\ \omega_z\ \dot{\theta}\ \omega_{w0}\ \omega_{w1}\ \omega_{w2}]^T$ allows one to present the



above constraints in the following Pfaffian form:
$A(\theta)\,\xi = 0,$   (15)

where $A(\theta) \in \mathbb{R}^{5\times 7}$ is the constraint matrix defined as:
$A(\theta) \triangleq \begin{bmatrix} 0 & \rho & 0 & 0 & r_{w0} & 0 & 0 \\ -\rho s_\theta & \rho c_\theta & 0 & 0 & 0 & r_{w1} & 0 \\ -\rho c_\theta & -\rho s_\theta & 0 & 0 & 0 & 0 & r_{w2} \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & -1 & 0 & 0 & 0 \end{bmatrix}$   (16)
with $s_\theta$ and $c_\theta$ denoting $\sin\theta$ and $\cos\theta$, respectively. Taking into account the dimension of the vector $\xi$ together with the number of independent phase constraints, we conclude that the considered mechanical system has only two degrees of freedom. Consequently, this implies that one can find two independent vector fields $g_1$ and $g_2$ such that the rows of matrix $A$ can be seen as their annihilators. For example, these vector fields can be written as follows:
Fig. 4. Forces and torques in the gear


$g_1 \triangleq [0\ \ 0\ \ -1\ \ 1\ \ 0\ \ 0\ \ 0]^T,\qquad g_2(\theta) \triangleq \left[0\ \ -\tfrac{r_{w0}}{\rho}\ \ 0\ \ 0\ \ 1\ \ \tfrac{r_{w0}}{r_{w1}}\cos\theta\ \ -\tfrac{r_{w0}}{r_{w2}}\sin\theta\right]^T.$   (17)
Next, using (17) we can define the following affine system which describes the kinematics of the nonholonomic gear:
$\xi = G(\theta)\,u,$   (18)
where $G(\theta) \triangleq [g_1\ \ g_2(\theta)] \in \mathbb{R}^{7\times 2}$ and $u \triangleq [u_1\ u_2]^T \in \mathbb{R}^2$ is the kinematic input vector in the form of quasi-velocities (or, alternatively, auxiliary velocities).

3.2. Dynamic description
The dynamic model of the ball gear will be derived based on the Lagrange formalism, assuming that the potential energy of the system does not change. Then, one can consider the following Lagrangian $L \triangleq \tfrac{1}{2}\xi^T J \xi$, where
$J \triangleq \mathrm{diag}\!\left(\tfrac{2}{5}m\rho^2,\ \tfrac{2}{5}m\rho^2,\ \tfrac{2}{5}m\rho^2,\ J_l,\ J_{w0},\ J_{w1},\ J_{w2}\right) \in \mathbb{R}^{7\times 7}$   (19)
is a positive definite symmetric mass matrix, with $m$ denoting the mass of the ball, $J_l$ being the inertia of the output roller system determined around the $Z_g$ axis, while $J_{wi}$ determines the inertia of the $i$th roller around its axis of rotation. The control input is considered as the torques applied to roller W0 and to the system of rollers W1 and W2, and it is described by $\tau \triangleq [\tau_1\ \tau_2]^T \in \mathbb{R}^2$. Additionally, external torques (disturbances) denoted by $\tau_z \triangleq [\tau_{z1}\ \tau_{z2}\ \tau_{z3}\ \tau_{z4}]^T \in \mathbb{R}^4$ are considered. These torques can be seen mainly as a result of the torque transmission to the components of the mechanism external to the gear (for example links of a manipulator, interaction with an environment, etc.), as well as some internal friction effects (including rolling friction on the ball surface and friction which appears in the bearings of the rollers).
Calculating the inertia generalized forces as $\tfrac{d}{dt}\!\left(\tfrac{\partial L}{\partial \dot{\xi}}\right)^T = J\dot{\xi}$, considering the input $\tau$, the disturbances $\tau_z$ and the phase constraint interactions, and referring to the d'Alembert principle and the Lagrange equation, one can derive the following dynamics of the gear:
$J\dot{\xi} = B\tau + B_z\tau_z + Q_w,$   (20)
where
$B \triangleq \begin{bmatrix} 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \end{bmatrix}^T,\qquad B_z \triangleq \begin{bmatrix} 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}^T$   (21)
are input matrices (related to the control input and the disturbances, respectively),

$Q_w = [Q_{w1}\ Q_{w2}\ \ldots\ Q_{w7}]^T = A^T(\theta)\,\lambda$   (22)
denotes the generalized forces of the phase constraints, with $\lambda \in \mathbb{R}^5$ being the vector of Lagrange multipliers. Taking advantage of the kinematics (18) allows one to present the dynamics equation (20) in the following reduced form:
$\bar{M}\dot{u} + \bar{C}u = \bar{B}\tau + \bar{B}_z\tau_z,$   (23)
where $\bar{M} = G^T J G$, $\bar{C} = G^T J \dot{G}$, $\bar{B} = G^T B$ and $\bar{B}_z = G^T B_z$. The vector of Lagrange multipliers can be calculated from the following relationship:
$\lambda = -\left(A J^{-1} A^T\right)^{-1}\left(\dot{A}\xi + A J^{-1}(B\tau + B_z\tau_z)\right).$   (24)
Analysing the term (22), one can find the physical interpretation of the individual components of the vector $\lambda$. Then, fulfilment of the constraints implies that:
$Q_{w5} = r_{w0}\lambda_1,\quad Q_{w6} = r_{w1}\lambda_2,\quad Q_{w7} = r_{w2}\lambda_3,$   (25)
$Q_{w3} = -Q_{w4} = \lambda_5$   (26)
and
$Q_{w1} = -\rho\,(\sin\theta\cdot\lambda_2 + \cos\theta\cdot\lambda_3) + \lambda_4.$   (27)
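As a numerical illustration of the reduced dynamics (23), the sketch below assembles the terms M̄ and C̄ from G(θ) and J. The geometric and inertial values are placeholders, and the time derivative of G is obtained by a simple finite difference rather than an analytical expression.

```python
import numpy as np

# Assembling the reduced dynamics terms of eq. (23) numerically from G(theta)
# and J; dimensions follow the text, parameter values are placeholders.
rho, rw0, rw1, rw2 = 0.025, 0.010, 0.010, 0.010
m, Jl, Jw0, Jw1, Jw2 = 0.20, 1e-4, 2e-6, 2e-6, 2e-6
J = np.diag([0.4 * m * rho**2] * 3 + [Jl, Jw0, Jw1, Jw2])   # 2/5 m rho^2 terms, eq. (19)

def G(theta):
    g1 = np.array([0, 0, -1, 1, 0, 0, 0], dtype=float)
    g2 = np.array([0, -rw0 / rho, 0, 0, 1,
                   rw0 / rw1 * np.cos(theta), -rw0 / rw2 * np.sin(theta)])
    return np.column_stack((g1, g2))

def reduced_terms(theta, dtheta):
    Gt = G(theta)
    # dG/dt = (dG/dtheta) * dtheta, obtained here by a central finite difference
    eps = 1e-6
    Gdot = (G(theta + eps) - G(theta - eps)) / (2 * eps) * dtheta
    M_bar = Gt.T @ J @ Gt
    C_bar = Gt.T @ J @ Gdot
    return M_bar, C_bar

print(reduced_terms(np.pi / 4, 0.5))
```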

Next, considering Fig. 4 and taking into account (25), (26) allows one to make the following statements:


– The multipliers λ1, λ2 and λ3 determine the lateral components of the interaction forces between rollers W0, W1, W2 and the ball – cf. equation (25).
– The multiplier λ4 determines the constraint torque coming from the lack of ball rotation around the Xg axis – cf. equation (27); it is generated by the lateral component of the force interaction between the roller W0 and the ball.
– The multiplier λ5 denotes the constraint torque which results from the lack of relative rotation of the ball with respect to the system of output rollers W1 and W2 around the Zg axis – cf. equation (26); it is caused by the lateral components of the resultant forces between rollers W1, W2 and the ball (the values of these components can be denoted by λ5′/ρ and λ5″/ρ, with λ5′ + λ5″ = λ5).
It is worth noting that the constraint forces and torques coming from the interactions between the components of the gear are physically limited in magnitude by the permissible friction forces. Considering Fig. 4 we can find the following inequalities:
$\sqrt{\lambda_1^2 + \frac{\lambda_4^2}{\rho^2}} \le F_{w0\,max},\qquad \sqrt{\lambda_2^2 + \frac{(\lambda_5')^2}{\rho^2}} \le F_{w1\,max},\qquad \sqrt{\lambda_3^2 + \frac{(\lambda_5'')^2}{\rho^2}} \le F_{w2\,max},$   (28)

Fig. 5. Model of experimental setup – the gear coupled with drive and brake

where $F_{wi\,max} = \mu_i N_i$ denotes the maximum friction force resulting from the contact force $N_i$ between the $i$th roller and the ball, and $\mu_i$ is the Coulomb friction coefficient, with $i = 0, 1, 2$.
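A short sketch of how the friction-cone conditions (28) could be checked for a given set of multipliers is given below; the friction coefficients and contact forces are assumed values (the 80 N and 40 N pressure forces quoted in the experimental section are used only as rough orders of magnitude).

```python
import math

# Checking the friction-limit conditions (28) for given Lagrange multipliers;
# friction coefficients and contact forces are assumed values.
def within_friction_limits(lmbda, rho, mu=(0.4, 0.4, 0.4), N=(80.0, 40.0, 40.0)):
    l1, l2, l3, l4, l5p, l5pp = lmbda
    limits = [mu[i] * N[i] for i in range(3)]            # F_wi_max = mu_i * N_i
    checks = [math.hypot(l1, l4 / rho) <= limits[0],
              math.hypot(l2, l5p / rho) <= limits[1],
              math.hypot(l3, l5pp / rho) <= limits[2]]
    return all(checks)

print(within_friction_limits((5.0, 3.0, 3.0, 0.1, 0.05, 0.05), rho=0.025))
```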

4. Experimental research on the nonholonomic gear
The mathematical model of the nonholonomic gear presented in Section 3 is relatively complicated. The main difficulty is the determination of the friction coefficients, due to the many mechanical elements. In practice it is hard to define these parameters because they depend on many variables, such as the pressure force, the kind of material, and the wear of the material. Taking into account the practical aspects of applying the nonholonomic gear, we are interested in knowing the maximum transmitted driving torque and the efficiency of the transmission. Concerning the method of drive transmission, the occurrence of a skid between the ball and a roller is also a very important issue. Its appearance means that the constraints described by equation (15) are violated. These problems were examined experimentally.
4.1. Experiment description
The experimental workstation consists of one joint of a manipulator using the nonholonomic gear and an electrical driving and braking system – see Figs. 5 and 6. The gear is driven by motor¹ MN. A planetary gear integrated with this motor transfers the drive torque τN = τ2 to the W0 roller with the use of a toothed belt. The output roller W2, which receives the torque, is coupled with an identical motor MH, which is a braking motor. This motor works in generator mode and provides the resistance torque τz4 = −τH. In order to determine the characteristics of the resistances occurring in the motor, planetary gears, driving toothed belt and transmission shaft (indicated in Fig. 7), tests of the neutral gear (i.e. not loaded on the input or output side) were carried out separately for the driving system and for the braking system.

Fig. 6. The real experimental setup

Fig. 7. Block schema of the driving and braking system.
The driving system consists of an electrical motor, a planetary gear and a transmission system with a toothed belt. The braking system consists of a second electrical motor, a planetary gear, and a transmission system with a transmission shaft and an angle toothed gear. The resistance torque characteristics were determined by linear regression from the current and angular velocity measurements (τPN for the driving system and τPH for the braking system):
$\tau_{PN} = (0.0037\,\omega_N + 0.42)\,k_i\,n,\qquad \tau_{PH} = (0.0018\,\omega_H + 0.18)\,k_i\,n,$   (29)

where ki = 0.09 Nm/A is the motor electromechanical constant assumed on the basis of catalog data², n = 4.5 is the ratio of the reducing gear integrated with the motors, and ωN = ωw0, ωH = ωw2 are the angular velocities expressed in [rad/s]. The angular velocities of the rollers were measured with the pulse angular position sensors integrated with the motors. The estimation of the torques τN and τH was based on the assumed torque resistance characteristics and the motor currents. In the experiments roller W0 exerted a pressure force of around 80 N on the ball and roller W2 exerted a pressure force of around 40 N. These forces were calculated based on the known construction of the holdfast springs used. Considering the uncertainty of the Young's modulus and the manufacturing imprecision of the dimensions of the springs used, it is worth noting that these forces are determined with considerable inaccuracy. The radii of rollers W0 and W2 are identical and equal rw0 = rw2 = 10 mm. For identification of the relative skid during torque transmission from roller W0 to roller W2 we consider the kinematic equations (17) and (18). Then we obtain

$\omega_{w0}^{*} = -\frac{r_{w2}}{r_{w0}\sin\theta}\,\omega_{w2},$   (30)

where $\omega_{w0}^{*}$ is interpreted as the velocity $\omega_{w2}$ transformed to the input side of the nonholonomic gear. The relative skid is defined as

$\sigma = \frac{\omega_{w0}^{*} - \omega_{w0}}{\omega_{w0}}\cdot 100\ [\%].$   (31)

Substituting equation (30) into (31) we have³

$\sigma = \frac{\omega_{w0} + \frac{r_{w2}}{r_{w0}\sin\theta}\,\omega_{w2}}{\omega_{w0}}\cdot 100\ [\%].$   (32)

Fig. 8. Gear skid as a function of the load for the angle θ = −π/2.

Further, taking into account that the radii rw0 and rw2 are the same, relation (32) can be simplified as follows:
$\sigma = \frac{\omega_{w0} + \sin^{-1}(\theta)\,\omega_{w2}}{\omega_{w0}}\cdot 100\ [\%].$   (33)

It is worth noting that this method of skid estimation does not allow one to determine whether the skid appears between the input roller W0 and the ball or between the ball and the output roller W2. Instead, we obtain only a measure of the resulting slip, which is the most important one from a practical point of view. The gear efficiency is considered with respect to power and torque transmission and is described using the following two factors:

$\eta_P \triangleq \frac{P_N}{P_H}\cdot 100\ [\%],\qquad \eta_\tau \triangleq \frac{\tau_N}{\tau_H}\cdot 100\ [\%],$   (34)

where $P_N \triangleq \omega_N\tau_N$ and $P_H \triangleq \omega_H\tau_H$ denote the input and output mechanical power, respectively.
4.2. Results of experiments
The experiments on the nonholonomic gear were conducted for a fixed value of the angle θ with a constant angular velocity ωw0 = 23 rad/s (the velocity was stabilized by an industrial PID motor controller). The first experiment was performed for the joint angle θ = −π/2. From Fig. 8 one can observe that the braking torque range is limited to τHmax = 0.13 Nm. Exceeding this value of torque, the output roller starts to slide on the ball and the gear does not transmit the torque anymore. In the range of normal operation of the gear, an increase of the braking torque load causes an increase of the relative skid between the rollers and the ball. For the value τHmax the relative skid σ was equal to 5%. Figure 9 presents the power efficiency ηP and the torque efficiency ητ as functions of τH. With the increase of the torque τH the efficiencies ηP and ητ also increase. Increasing the braking torque by 0.11 Nm causes an increase of the efficiency factor by 13%. This probably arises from the increase of the roller tension force applied to the ball, caused by the greater gear load. Very small bearing backlash and bearing resistance forces may cause a partial transformation of the rotation moment into the holdfast force, which grows while the gear torque load increases. It can be observed that the efficiency ηP changes in a similar way as the efficiency ητ, but as the load increases up to τHmax a divergence between these two quantities appears. For the value τHmax the coefficient ηP is 4% smaller than ητ. This results from the growth of the skid for larger values of τH.
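The quantities used in the discussion above follow directly from (33) and (34). The sketch below evaluates the relative skid and the two efficiency factors from hypothetical measurement samples; the numbers are made up and only chosen to be of the same order as those reported in the experiments.

```python
import math

# Evaluating relative skid (33) and efficiency factors (34) from measured
# angular velocities and torques; the sample numbers below are made up.
def relative_skid(w_w0: float, w_w2: float, theta: float) -> float:
    return (w_w0 + w_w2 / math.sin(theta)) / w_w0 * 100.0      # eq. (33), rw0 = rw2

def efficiencies(w_N, tau_N, w_H, tau_H):
    eta_P = (w_N * tau_N) / (w_H * tau_H) * 100.0              # eq. (34), power
    eta_tau = tau_N / tau_H * 100.0                            # eq. (34), torque
    return eta_P, eta_tau

sigma = relative_skid(w_w0=23.0, w_w2=21.8, theta=-math.pi / 2)   # about 5 %
print(sigma, efficiencies(w_N=23.0, tau_N=0.10, w_H=21.8, tau_H=0.12))
```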

Fig. 9. Gear efficiency as a function of the load for the angle θ = −π/2.
The second experiment was carried out for a different configuration of the gear, namely θ = π/4. In this case the torque τH is restricted to τHmax = 0.083 Nm – cf. Fig. 10. Similarly to the first experiment, with the growth of τH the relative skid also increases, up to the maximum value σ = 6.2%. From the analysis of the efficiency graph (see Fig. 11) it also follows empirically that with the growth of τH the efficiencies ηP and ητ grow. A rise of the torque τH by 0.08 Nm causes about a 20% efficiency increase. This relation is similar to the one observed in the first experiment and we conclude that it is caused by the same reason as in the experiment taken for θ = −π/2. The change of the efficiency ηP in comparison to the efficiency ητ as a function of the torque τH shows that ηP has a bigger increase, which also results from the growth of the relative skid σ as a function of τH.


Fig. 10. Gear skid as a function of the load for the angle θ = π/4.
Fig. 11. Gear efficiency as a function of the load for the angle θ = π/4.

From the comparison of the experimental results for different values of the angle θ we can say that the relative skid was roughly 1% bigger for the experiment with the angle θ = π/4. Considering the small difference of the relative skid in both experiments, it can be stated that changes of the angle θ do not significantly influence the skid between the rollers and the ball. This leads to the conclusion that the efficiency of the gear is not related to the value of the angle θ. These results confirm the symmetry of the gear. The same conclusion can be obtained from the comparison of the efficiencies obtained for the two different angles. The difference between the efficiencies ηP for the angle θ = −π/2 and for the angle θ = π/4 is at the level of 0.3%, and the difference between the efficiencies ητ for the angle θ = −π/2 and for the angle θ = π/4 is at the level of 1.5%.

5. Conclusions
This article presents the model analysis and experimental tests of a nonholonomic ball gear. The nonholonomic gear mechanism built in the Chair of Control and Systems Engineering was introduced. Mathematical modeling was considered at the level of kinematics and dynamics, pointing out the physical interpretation of the generalized constraint forces. The experimental research described here made it possible to verify the properties of the gear in practice, as well as to prepare some modifications of the mechanism in order to improve it. The results show some limitations of the examined gear with regard to the maximum values of transmitted torque and power. In spite of them, one can expect that the properties of the gear should be satisfactory enough to find application in the construction of a small planar nonholonomic manipulator.

Acknowledgment
This work was supported by the Ministry of Science and Higher Education grant No. N514 299735.

AUTHORS
Bartłomiej Krysiak∗ – Chair of Control and Systems Engineering, Poznań University of Technology, ul. Piotrowo 3a, 60-965 Poznań, POLAND, e-mail: bartlomiej.krysiak@put.poznan.pl, www: http://control.put.poznan.pl/pl/users/bartlomiejkrysiak
Dariusz Pazderski – Chair of Control and Systems Engineering, Poznań University of Technology, ul. Piotrowo 3a, 60-965 Poznań, POLAND, e-mail: dariusz.pazderski@put.poznan.pl, www: http://etacar.put.poznan.pl/dariusz.pazderski
Krzysztof Kozłowski – Chair of Control and Systems Engineering, Poznań University of Technology, ul. Piotrowo 3a, 60-965 Poznań, POLAND, e-mail: krzysztof.kozlowski@put.poznan.pl, www: http://control.put.poznan.pl


∗ Corresponding author

References
[1] W. Chung. Nonholonomic manipulators. Springer-Verlag, Berlin Heidelberg, 2004.
[2] A. Mazur and Ł. Nocek. Trajectory tracking for rigid 3-pendulum with nonholonomic gears. In Proc. of the Fifth International Workshop on Robot Motion and Control, pages 187-192, June 2005.
[3] Y. Nakamura, W. Chung, and O. J. Sørdalen. Design and control of the nonholonomic manipulator. IEEE Transactions on Robotics and Automation, 17(1):48-59, February 2001.
[4] D. Pazderski, K. Kozłowski, and K. Krysiak. Non-smooth stabilizer for three link nonholonomic manipulator using polar-like coordinate representation. In Lecture Notes in Control and Computer Sciences, pages 35-44. Springer-Verlag, Berlin Heidelberg, 2009.
[5] O. J. Sørdalen, Y. Nakamura, and W. Chung. Design of a nonholonomic manipulator. In Proc. of the IEEE Int. Conf. on Robotics and Automation, pages 8-13, August 1994.

Notes
¹ In the tests a DC motor GR42x20 made by Dunkermotoren was used.
² The value of the constant ki was additionally confirmed by tests with the use of a dynamometer sensor DFM22-5.0 made by Megatron.
³ The lack of the sign "–" in the numerator of expression (31) comes from the sign convention adopted for the velocities ωw2 and ωw0.



MOTION PLANNING OF THE UNDERACTUATED MANIPULATORS WITH FRICTION IN CONSTRAINED STATE SPACE Adam Ratajczak, Mariusz Janiak

Abstract: This paper addresses the constrained motion planning problem for manipulators with passive joints and friction. Constraints are imposed on the system state space vector. The dynamics of underactuated manipulators are described by a control-affine system with a drift term. In order to solve the constrained motion planning problem, the imbalanced Jacobian algorithm derived from the endogenous configuration space approach is used. The state space constraints are included in the system representation of the manipulator dynamics. The extended system is subject to regularisation because of Jacobian singularities; then the unconstrained motion planning problem is solved for the regularised system. The solution of the motion planning problem for this system is equivalent to the solution of the constrained motion planning problem for the original system. The performance of the imbalanced Jacobian algorithm has been demonstrated with a series of simulations for two kinds of manipulators, with and without friction. Keywords: Constrained Motion Planning, Underactuated Manipulators, Endogenous Configuration Space.

1. Introduction
A typical example of an underactuated robotic system is a manipulator with one or more passive joints. In a classic manipulator every joint has its own actuator, so the number of degrees of freedom (d.o.f.) is equal to the number of control inputs. By contrast, in the underactuated case the dimension of the state space exceeds the dimension of the control space. A general review of underactuated manipulators and dedicated control strategies has been presented in [1] and [2]. Underactuated systems can be divided into models with gravity terms and models where the potential terms are neglected. Popular systems belonging to the first group are two double-pendulum manipulators: the Acrobot [3] with a free first joint and the Pendubot [4] without an actuator in the second joint. Robot models in which the gravity terms are excluded are, e.g., the PR manipulator in [5] or the RRR manipulator in [6]. Unfortunately, in many publications the motion equations of underactuated robots are modelled as frictionless systems; a few examples of underactuated systems with friction can be found in [7–9]. For the purpose of this paper we consider two types of manipulators: one with gravity present and one with gravity neglected. For both models we consider two cases, frictionless and with friction. A mechanical system whose motion is subject to nonintegrable position and velocity constraints is called nonholonomic, and such constraints are first-order nonholonomic constraints.

When the constraints assume the Pfaffian form, the nonholonomic system is represented by a smooth driftless control system. Examples of such systems are robotic systems like wheeled mobile platforms or certain systems of dexterous manipulation. In contrast to these systems, the motion of manipulators with passive joints is subject to second-order nonholonomic constraints [10]. In this case the system is represented by a control-affine system with a non-trivial drift term, which makes the motion planning problem more complicated. In order to solve the unconstrained motion planning problem for underactuated manipulators, the endogenous configuration space approach [11] has been applied in [10, 12]. With this approach it is possible to generalise the Jacobian inverse kinematics algorithms, known for classic stationary manipulators, to other robotic systems. The fundamental concept of this method is that the endogenous configuration space includes all admissible controls of the robotic system. Originally this method has been widely applied to mobile manipulators [11, 13]; however, it can easily be adapted to manipulators with passive joints because of the similarity between the kinematics equations of nonholonomic mobile platforms and the manipulator dynamics model. A different solution of the unconstrained motion planning problem has been presented in [14], where the continuation method has been combined with a predictive control scheme. This paper deals with the constrained motion planning problem for underactuated manipulators with friction and a bounded state. Such a problem for frictionless underactuated manipulators has been solved in [15], where the imbalanced Jacobian algorithm [13] derived from the endogenous configuration space approach has been utilised. An alternative approach to solving the motion planning problem in the presence of constraints is offered by optimal control theory, as proposed e.g. in [16], and in particular by the Nonlinear Model Predictive Control approach [17, 18]. A comparison of these two methods applied to the constrained motion planning problem for nonholonomic mobile robots has been presented in [19]. This paper extends the approach presented in [15] to underactuated manipulators with friction. Following [15], the constrained motion planning problem is replaced by an unconstrained one, addressed in an extended control system representation. State variable bounds are incorporated into the manipulator motion equations through an extra state variable driven by the plus function depending on the violation of the constraints. Then, the extended system is subject to a regularizing perturbation in order to eliminate the singularities of the Jacobian introduced by the additional state variable and the associated plus function,


in the region where the constraints are satisfied. The unconstrained motion planning problem in the regularised system is solved by the imbalanced Jacobian algorithm [13, 20], derived from the endogenous configuration space approach. A feature of the imbalanced algorithm is that it applies the inverse Jacobian operator of the regularised system to the error produced in the extended system. A solution of the unconstrained motion planning problem for the regularised system is also a solution of the constrained motion planning problem for the basic system. The presented algorithm has been used to solve the constrained motion planning problem for two underactuated manipulators: an inverted pendulum mounted on a movable cart, and a planar double pendulum. The algorithm performance has been illustrated with computer simulations. The paper is organised as follows. Section 2 describes the derivation of the underactuated manipulator dynamics equations. The motion planning algorithm is presented in Section 3. The performance of the algorithm is demonstrated with computer simulations in Section 4. The paper concludes with Section 5.

2. Underactuated Manipulator with Friction
The dynamics of a classic manipulator is defined as
$$M(q)\ddot{q} + N(q,\dot{q}) = F, \qquad (1)$$
where q ∈ R^n is the vector of joint coordinates. The matrix M(q) is the symmetric and positive definite inertia matrix, and the vector N(q, q̇) collects centrifugal, Coriolis and possibly gravitational terms. The vector F denotes control forces and other external forces. Let us assume that only m < n joints are actuated. Moreover, we split the vector q into two parts: the active joint positions q_a ∈ R^m and the passive joint positions q_b ∈ R^{n−m}. Then we can rewrite the dynamics (1) as
$$\begin{pmatrix} M_{aa}(q) & M_{ab}(q) \\ M_{ab}^T(q) & M_{bb}(q) \end{pmatrix}\begin{pmatrix} \ddot{q}_a \\ \ddot{q}_b \end{pmatrix} + \begin{pmatrix} N_a(q,\dot{q}) \\ N_b(q,\dot{q}) \end{pmatrix} = \begin{pmatrix} \tau + F_a \\ F_b \end{pmatrix}, \qquad (2)$$
where M_aa, M_ab, M_bb, N_a and N_b are the corresponding blocks of the matrix M(q) and the vector N(q, q̇) from (1). The vector τ ∈ R^m is the control vector, and F_a and F_b denote the external forces acting on the active and passive joints, respectively.

2.1. Partial feedback linearization
Now, we perform a partial feedback linearization [21] of the system (2). By substituting q̈_b from the second equation of (2) into the first equation of (2) and introducing a new input vector u ∈ R^m, we arrive at the feedback controller
$$\tau = \big(M_{aa}(q) - M_{ab}(q)M_{bb}^{-1}(q)M_{ab}^T(q)\big)u + N_a(q,\dot{q}) - F_a - M_{ab}(q)M_{bb}^{-1}(q)\big(N_b(q,\dot{q}) - F_b\big), \qquad (3)$$
which transforms (2) into
$$\begin{cases} \ddot{q}_a = u, \\ \ddot{q}_b = -M_{bb}^{-1}(q)\big(M_{ab}^T(q)\,u + N_b(q,\dot{q}) - F_b\big). \end{cases} \qquad (4)$$
Finally, we take the new state space vector x = (x_1, x_2, x_3, x_4) = (q_a, q̇_a, q_b, q̇_b) ∈ R^{2n}, and the equations (4) take the form
$$\dot{x} = f(x) + G(x)\,u, \qquad (5)$$
where
$$f(x) = \big(x_2,\; 0,\; x_4,\; -M_{bb}^{-1}(x)(N_b(x) - F_b)\big)^T, \qquad G(x) = \begin{pmatrix} 0 \\ I_m \\ 0 \\ -M_{bb}^{-1}(x)M_{ab}^T(x) \end{pmatrix}.$$
Now, let us define the underactuated manipulator with friction. We consider a viscous friction model
$$F_f = \mu\,\frac{dq}{dt},$$
where F_f is the friction force and μ is the friction coefficient. Following [7], we take F_a = 0 and F_b = −μ q̇_b. We assume that the friction appears only in the underactuated joints, because the friction in the actuated joints can be compensated by the partial feedback linearization (3).
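To make the construction (3)–(5) concrete, the following minimal Python sketch assembles the drift and control vector fields of the partially feedback-linearized system for the single-active/single-passive joint case used later in the simulations. The block-valued callables Mab, Mbb and Nb are placeholders for a concrete model (they are not defined in the paper), and μ is the viscous friction coefficient in the passive joint, following the notation adopted above.

```python
import numpy as np

def affine_system(Mab, Mbb, Nb, mu):
    """Build f(x), G(x) of eq. (5) for a manipulator with one active and one passive joint.

    Mab, Mbb, Nb are callables returning the relevant blocks of M(q) and N(q, dq);
    mu is the viscous friction coefficient (F_b = -mu * dq_b).
    State x = (q_a, dq_a, q_b, dq_b).
    """
    def f(x):
        qa, dqa, qb, dqb = x
        Fb = -mu * dqb                                   # viscous friction in the passive joint
        qbdd_drift = -(Nb(qa, dqa, qb, dqb) - Fb) / Mbb(qa, qb)
        return np.array([dqa, 0.0, dqb, qbdd_drift])     # drift vector field f(x)

    def G(x):
        qa, dqa, qb, dqb = x
        return np.array([0.0, 1.0, 0.0, -Mab(qa, qb) / Mbb(qa, qb)])  # control vector field G(x)

    return f, G
```

The two returned callables can be passed directly to a numerical integrator, which is how the end-point map of Section 3 can be evaluated in practice.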

3. Motion Planning Algorithm


Given the control system representation (5), the following constrained motion planning problem will be addressed: starting from an initial state x_0 = x(0) ∈ R^{2n}, find a control function u(·) ∈ L_m^2[0, T] such that for the prescribed state x_d ∈ R^{2n} there holds x(T) = x_d, while the instantaneous values of the state variables are bounded,
$$x_i^{lb} \le x_i(t) \le x_i^{ub}, \qquad (6)$$
for every t ∈ [0, T] and for some, perhaps not all, 1 ≤ i ≤ 2n, where x^{ub} and x^{lb} denote the state upper and lower bounds, respectively. In order to solve this problem we use the so-called imbalanced Jacobian algorithm devised in [13, 20], originating from the endogenous configuration space approach [11].

3.1. Endogenous configuration space
In accordance with [11], the admissible controls u(·) of the system (5), acting on the time interval [0, T], constitute the endogenous configuration space X = L_m^2[0, T] of the underactuated manipulator. This space is an infinite dimensional Hilbert space with the inner product

$$\langle u_1(\cdot),\, u_2(\cdot)\rangle_R = \int_0^T u_1^T(t)\,R(t)\,u_2(t)\,dt,$$

where R(t) is a positive definite weight matrix. To every endogenous configuration u(·) ∈ X there corresponds a manipulator state space trajectory x(t) = φ_{x_0,t}(u(·)). We shall assume that this trajectory is well defined for every t ∈ [0, T]; then we can define the end-point map K_{x_0,T}: X → R^{2n} of the system (5), transforming the endogenous configuration space into the manipulator state space,
$$K_{x_0,T}\big(u(\cdot)\big) = x(T) = \varphi_{x_0,T}\big(u(\cdot)\big). \qquad (7)$$
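As an illustration, the end-point map (7) can be evaluated numerically by integrating the system (5) under a given control. The sketch below does this with SciPy for the scalar-control case used later in the simulations; the function names (end_point_map, u_of_t) and the tolerances are illustrative assumptions, not part of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def end_point_map(f, G, u_of_t, x0, T):
    """Numerically evaluate K_{x0,T}(u) = x(T) for dx/dt = f(x) + G(x) u(t)."""
    def rhs(t, x):
        return f(x) + G(x) * u_of_t(t)     # scalar control times the control vector field
    sol = solve_ivp(rhs, (0.0, T), x0, rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]                    # state reached at time T
```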


The derivative of the end-point map,
$$J_{x_0,T}\big(u(\cdot)\big)v(\cdot) = D K_{x_0,T}\big(u(\cdot)\big)v(\cdot) = \frac{d}{d\alpha}\Big|_{\alpha=0} K_{x_0,T}\big(u(\cdot)+\alpha v(\cdot)\big) = \int_0^T \Phi(T,t)\,B(t)\,v(t)\,dt, \qquad (8)$$
will be referred to as the system Jacobian, where
$$A(t) = \frac{\partial}{\partial x}\big(f(x) + G(x(t))\,u(t)\big), \qquad B(t) = G\big(x(t)\big)$$
are the matrices of the linear approximation of the system (5) along the configuration u(·). The fundamental matrix Φ(t, s) fulfils the evolution equation ∂Φ(t,s)/∂t = A(t)Φ(t,s) with the initial condition Φ(s, s) = I_{2n}. For a fixed u(·), the Jacobian transforms the endogenous configuration space into the manipulator state space, J_{x_0,T}(u(·)): X → R^{2n}. Configurations at which this map is surjective are called regular, otherwise they are singular. Equivalently, u(·) is regular if and only if the Gram matrix
$$G_{x_0,T}\big(u(\cdot)\big) = \int_0^T \Phi(T,t)\,B(t)\,B^T(t)\,\Phi^T(T,t)\,dt \qquad (9)$$
is of full rank 2n.

3.2. Extension and regularization
One way of dealing with the constrained motion planning problem consists in replacing the constrained problem by an unconstrained one, addressed in an extended control system representation. To include the state constraints (6) into the basic system, we shall describe them using a smooth approximation of the plus function x_+ = max{0, x} [22],
$$x_+ \approx p(x,\beta) = x + \frac{1}{\beta}\log\big(1 + e^{-\beta x}\big).$$
The function p(x, β) approaches x_+ when β increases to +∞. The constraints will be satisfied when the functions p(x_i(t) − x_i^{ub}, β) and p(−x_i(t) + x_i^{lb}, β) vanish for every t ∈ [0, T] and any β > 0. This also holds whenever the sum of the integrals of those functions over [0, T] is zero. The constraints can therefore be included into the system (5) by adding an extra state variable as follows:
$$\begin{cases} \dot{x} = f(x) + G(x)u, \\ \dot{x}_{2n+1} = \sum_{i=1}^{2n}\Big(p\big(x_i(t) - x_i^{ub},\beta\big) + p\big(-x_i(t) + x_i^{lb},\beta\big)\Big). \end{cases} \qquad (10)$$
Let x̄ = (x, x_{2n+1}) denote the extended state variable, and let E_{x̄_0,T}(u(·)) denote the end-point map of (10). Now, the solution of the unconstrained motion planning problem x̄(T) = (x_d, 0) for the extended system is also the solution of the constrained motion planning problem for the basic system (5). Unfortunately, if the constraints are satisfied, the Jacobian of the extended system becomes singular. In order to overcome this difficulty, the system (10) will be subject to regularization. The new state variable is perturbed by a regularization function r(x) that prevents the Jacobian of the extended system from becoming singular; usually we apply r(x) = x^T x. As a result, we obtain the regularized system
$$\begin{cases} \dot{x} = f(x) + G(x)u, \\ \dot{x}_{2n+1} = \sum_{i=1}^{2n}\Big(p\big(x_i(t) - x_i^{ub},\beta\big) + p\big(-x_i(t) + x_i^{lb},\beta\big)\Big) + r(x). \end{cases} \qquad (11)$$
Set again x̄ = (x, x_{2n+1}) and let R_{x̄_0,T}(u(·)) denote the end-point map of (11). Then, the motion planning problem for the regularized system consists in determining the control function u(·) that drives x̄(T) to
$$\bar{x}_d\big(u(\cdot)\big) = \Big(x_d,\ \int_0^T r(x)\,dt\Big).$$

3.3. Imbalanced Jacobian algorithm
The unconstrained motion planning problem in the regularised system will be solved using the imbalanced Jacobian algorithm [13]. In accordance with the endogenous configuration space approach [11], we choose a smooth curve u_θ(·) ∈ X, parametrised by θ ∈ R and starting from an arbitrarily chosen point u_0(·) in the endogenous configuration space. Along this curve we define the output error
$$e(\theta) = R_{\bar{x}_0,T}\big(u_\theta(\cdot)\big) - \bar{x}_d(\theta) = E_{\bar{x}_0,T}\big(u_\theta(\cdot)\big) - \bar{x}_d, \qquad (12)$$
where x̄_d(θ) = x̄_d(u_θ(·)) and x̄_d = (x_d, 0) denote the desirable outputs of the regularized and the extended system, respectively. The curve u_θ(·) is requested to decrease the error exponentially with a rate γ > 0, so
$$\frac{de(\theta)}{d\theta} = -\gamma\, e(\theta). \qquad (13)$$
Differentiation of the error (12) with respect to θ and using (13) yields
$$\frac{de(\theta)}{d\theta} = \bar{J}_{\bar{x}_0,T}\big(u_\theta(\cdot)\big)\frac{du_\theta(\cdot)}{d\theta} - \frac{d\bar{x}_d(\theta)}{d\theta} = -\gamma\, e(\theta), \qquad (14)$$
where J̄_{x̄_0,T}(u_θ(·)) stands for the Jacobian of the regularised system. The basic idea of the imbalanced Jacobian algorithm is to omit the term dx̄_d(θ)/dθ in (14) and to compute u_θ(·) from the following equation:
$$\bar{J}_{\bar{x}_0,T}\big(u_\theta(\cdot)\big)\frac{du_\theta(\cdot)}{d\theta} = -\gamma\, e(\theta). \qquad (15)$$
We use the fact that the error in the regularised system coincides with the error in the extended system, see equation (12). Consequently, this means that we operate on the error defined in the extended system, but we use the Jacobian defined for the regularised system R_{x̄_0,T}(u(·)). Let J̄^{#P}_{x̄_0,T}(u(·)) be a pseudo-inverse of this Jacobian, so that
$$\Big(\bar{J}^{\#P}_{\bar{x}_0,T}\big(u(\cdot)\big)\,\eta\Big)(t) = B^T(t)\,\Phi^T(T,t)\,G^{-1}_{\bar{x}_0,T}\big(u(\cdot)\big)\,\eta.$$


Then, applying this inverse to (15) leads to a dynamic system
$$\frac{du_\theta(\cdot)}{d\theta} = -\gamma\, \bar{J}^{\#P}_{\bar{x}_0,T}\big(u_\theta(\cdot)\big)\, e(\theta). \qquad (16)$$
Assume that u_θ(·) is the solution of (16). After its substitution into (14) we get
$$\frac{de(\theta)}{d\theta} = -\gamma\, e(\theta) + \pi(\theta), \qquad (17)$$
where π(θ) = −dx̄_d(θ)/dθ represents a perturbation. The system (16) will define the imbalanced Jacobian algorithm if the perturbation π(θ) is such that the error (17) tends to 0 along with θ. If this is the case, the solution of the unconstrained motion planning problem in the extended system can be computed as the limit u_d(t) = lim_{θ→+∞} u_θ(t) of the trajectory of (16). This is also the solution of the constrained motion planning problem for the basic system. Convergence of the imbalanced algorithm results from a well known property of stable linear systems [13, 20, 23].
For practical and computational reasons it is useful to use a finite dimensional representation of the endogenous configuration. We assume that the control functions in (11) are represented by truncated orthogonal series, i.e. u(t) = P(t)λ, where P(t) is a block matrix comprising basic orthogonal functions in the Hilbert control space L_m^2[0, T] and λ ∈ R^s denotes the control parameters. The more terms of the series are included, the closer we approach "the true" control function. It follows that the endogenous configuration u(·) is represented by a point λ ∈ R^s. Now, the Jacobian of the regularised system is defined by the matrix
$$\bar{J}_{\bar{x}_0,T}(\lambda) = \int_0^T \Phi_\lambda(T,t)\,B_\lambda(t)\,P(t)\,dt, \qquad (18)$$
where the matrices Φ_λ(T,t) and B_λ(t) are the finite dimensional equivalents of Φ(T,t) and B(t). In the finite dimensional case the Jacobian pseudo-inverse of (18) takes the standard form
$$\bar{J}^{\#P}_{\bar{x}_0,T}(\lambda) = \bar{J}^T_{\bar{x}_0,T}(\lambda)\Big(\bar{J}_{\bar{x}_0,T}(\lambda)\,\bar{J}^T_{\bar{x}_0,T}(\lambda)\Big)^{-1}.$$
Consequently, we apply the fixed step size Euler method to (16), so the discrete version of the imbalanced Jacobian algorithm is defined by the following dynamic system:
$$\lambda(\theta+1) = \lambda(\theta) - \gamma\, \bar{J}^{\#P}_{\bar{x}_0,T}\big(\lambda(\theta)\big)\, e(\theta). \qquad (19)$$
In order to guarantee the numerical stability of the algorithm and a low value of the discretization error, the parameter γ should be sufficiently small. The algorithm (19) will be applied to solve a constrained motion planning problem for two different underactuated manipulators with friction.
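A minimal Python sketch of the discrete update (19) is given below. It assumes two user-supplied routines, extended_endpoint(lmbda) and regularised_endpoint(lmbda), which integrate the extended system (10) and the regularised system (11) for the parametrised control u(t) = P(t)λ and return x̄(T); both names are placeholders and are not part of the paper. The Jacobian of the regularised system is approximated here by finite differences rather than by evaluating (18) directly.

```python
import numpy as np

def imbalanced_jacobian(extended_endpoint, regularised_endpoint, lmbda0, xbar_d,
                        gamma=0.1, max_iter=200, tol=1e-4, eps=1e-6):
    """Discrete imbalanced Jacobian iteration, cf. eq. (19).

    The error comes from the extended system (10), while the Jacobian is that of
    the regularised system (11), approximated here by finite differences.
    """
    lmbda = np.asarray(lmbda0, dtype=float)
    for _ in range(max_iter):
        e = extended_endpoint(lmbda) - xbar_d            # error (12) in the extended system
        if np.linalg.norm(e) < tol:
            break
        r0 = regularised_endpoint(lmbda)
        J = np.column_stack([                            # Jacobian of the regularised system
            (regularised_endpoint(lmbda + eps * dl) - r0) / eps
            for dl in np.eye(lmbda.size)
        ])
        J_pinv = J.T @ np.linalg.inv(J @ J.T)            # right pseudo-inverse, see below (18)
        lmbda = lmbda - gamma * (J_pinv @ e)             # update (19)
    return lmbda
```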

4. Simulations
As a testbed we choose two underactuated manipulators with friction: the PR̄ manipulator (Fig. 1) and the RR̄ manipulator (Fig. 2); the bar over a joint symbol denotes the passive joint. Both models have only one underactuated joint, but there is no difficulty in applying the presented algorithm to more complicated models. The simulations are performed in the Matlab environment.


Fig. 1. The PR̄ manipulator, where y1 = q1 + l2 cos(q2), y2 = l2 sin(q2).

Fig. 2. The RR̄ manipulator, where y1 = l1 cos(q1) + l2 cos(q1 + q2), y2 = l1 sin(q1) + l2 sin(q1 + q2).

4.1. PR̄ manipulator
The first model is the PR̄ manipulator, in which only the first, prismatic joint is actuated. The second, revolute joint has no actuator, but a friction force acts in it. This underactuated manipulator is a planar robot moving in the presence of gravity; the model corresponds to the popular cart–pole system.

Dynamics
The dynamics of the PR̄ manipulator is defined as
$$\begin{pmatrix} m_1 + m_2 & -m_2 l_2 \sin(q_2) \\ -m_2 l_2 \sin(q_2) & m_2 l_2^2 \end{pmatrix}\ddot{q} + \begin{pmatrix} -m_2 l_2 \cos(q_2)\,\dot{q}_2^2 \\ m_2 l_2 g \cos(q_2) \end{pmatrix} = \begin{pmatrix} \tau \\ -\mu\,\dot{q}_2 \end{pmatrix}, \qquad (20)$$
where m_1 and m_2 denote the masses of the first and second link, respectively, and l_2 is the length of the pendulum. τ is the control force in the first joint, μ stands for the friction coefficient, and g denotes the gravitational acceleration. After the partial feedback linearization (3) and the change of the state space vector to x = (x_1, x_2, x_3, x_4) = (q_1, q̇_1, q_2, q̇_2), the equation (20) takes the form
$$\dot{x} = \underbrace{\begin{pmatrix} x_2 \\ 0 \\ x_4 \\ -\frac{g}{l_2}\cos(x_3) - \frac{\mu\, x_4}{l_2^2 m_2} \end{pmatrix}}_{f(x)} + \underbrace{\begin{pmatrix} 0 \\ 1 \\ 0 \\ \frac{1}{l_2}\sin(x_3) \end{pmatrix}}_{G(x)} u.$$
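The state-space form above translates directly into code. The short Python sketch below defines the drift and control vector fields of the PR̄ (cart–pole) model, using the parameter values quoted in the simulation results and the friction symbol μ adopted in this rewrite of the equations; it is an illustration, not the authors' Matlab implementation.

```python
import numpy as np

# PR-bar (cart-pole) parameters from the simulation section
l2, m1, m2, g = 0.2, 3.0, 0.1, 9.81
mu = 0.02                      # friction coefficient (use 0.0 for the frictionless case)

def f(x):
    """Drift vector field of the partially feedback-linearized PR-bar model."""
    x1, x2, x3, x4 = x
    return np.array([x2,
                     0.0,
                     x4,
                     -g / l2 * np.cos(x3) - mu * x4 / (l2**2 * m2)])

def G(x):
    """Control vector field multiplying the new input u."""
    x1, x2, x3, x4 = x
    return np.array([0.0, 1.0, 0.0, np.sin(x3) / l2])
```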



Fig. 3. x1 trajectory (left side – frictionless, right side – with friction).

Fig. 4. Task–space path (top – frictionless, bottom – with friction).

Fig. 5. Control function u.

Simulation results
In the simulations we take the pendulum length l2 = 0.2, the first link mass m1 = 3, the second link mass m2 = 0.1, and the gravitational acceleration g = 9.81. We perform the swing-up manoeuvre, so the initial point is x0 = (0, 0, −π/2, 0) and the desired point is xd = (0, 0, π/2, 0). We choose the representation of the control function as a truncated Fourier series, so the matrix P(t) collects the basis functions {1, sin(iωt), cos(iωt)} for i = 1, ..., 4. The time horizon is T = 1, the convergence coefficient γ = 0.15, and the initial control parameter vector λ = (1, 0_{1×8}). We simulate two cases: first without friction, μ = 0, and second with the friction coefficient μ = 0.02. For both cases we set the limits for the first state variable to −0.6 < x1 < 0.6. Fig. 3 presents the trajectory of the state variable x1; the limits are marked with dotted lines. Fig. 4 shows the task-space path, with the posture of the manipulator sampled every 0.2 time units. As one can expect, the control function in the frictionless case has a smaller amplitude (Fig. 5). The algorithm convergence for the two values of the friction coefficient is demonstrated in Fig. 6.
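For concreteness, a possible implementation of the truncated Fourier parametrisation u(t) = P(t)λ described above is sketched below. The base frequency ω = 2π/T is an assumption (the paper does not state it), and the function names are illustrative.

```python
import numpy as np

def fourier_basis(t, T, n_harmonics=4):
    """Row vector P(t) built from the basis {1, sin(i*w*t), cos(i*w*t)}, i = 1..n_harmonics."""
    w = 2.0 * np.pi / T                       # assumed base frequency
    terms = [1.0]
    for i in range(1, n_harmonics + 1):
        terms += [np.sin(i * w * t), np.cos(i * w * t)]
    return np.array(terms)

def control(t, lam, T):
    """Control value u(t) = P(t) @ lambda for a given parameter vector lambda."""
    return fourier_basis(t, T) @ lam

# example: the initial parameter vector lambda = (1, 0_{1x8}) used in the simulations
lam0 = np.zeros(9); lam0[0] = 1.0
u0 = control(0.3, lam0, T=1.0)                # constant control equal to 1 for the initial lambda
```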

Fig. 6. Algorithm convergence.

When we consider the swing-up manoeuvre for the PR̄ robot, the energy needed to perform the motion is greater for the manipulator with friction than for the robot without friction. This fact can be observed in Figs. 3–5: the model with friction needs more oscillations and a higher amplitude than the frictionless manipulator. The algorithm also takes more steps to converge for the model with friction (Fig. 6). In this case the slower convergence arises from the fact that the


Fig. 7. x2 trajectory (left side – frictionless, right side – with friction).

state variable x1 is closer to the selected limits than in the model without friction (see Fig. 3).

Fig. 8. Task–space path (left side – frictionless, right side – with friction).

4.2. RR̄ manipulator
As the second model we choose the double pendulum, an RR̄ planar manipulator with an actuated first joint. We assume that the model moves without gravity. Similarly to the previous system, the friction force appears in the second, passive joint.

Dynamics
The dynamics of the RR̄ manipulator takes the form
$$\begin{pmatrix} M_{11} & M_{12} \\ M_{12} & M_{22} \end{pmatrix}\ddot{q} + \begin{pmatrix} -\frac{1}{2} m_2 l_1 l_2 \sin(q_2)\,(2\dot{q}_1\dot{q}_2 + \dot{q}_2^2) \\ \frac{1}{2} m_2 l_1 l_2 \sin(q_2)\,\dot{q}_1^2 \end{pmatrix} = \begin{pmatrix} \tau \\ -\mu\,\dot{q}_2 \end{pmatrix},$$
where
$$M_{11} = \frac{1}{3} l_1^2 m_1 + m_2\Big(l_1^2 + \frac{1}{3} l_2^2 + l_1 l_2 \cos(q_2)\Big), \quad M_{12} = l_2 m_2\Big(\frac{1}{2} l_1 \cos(q_2) + \frac{1}{3} l_2\Big), \quad M_{22} = \frac{1}{3} l_2^2 m_2.$$
The symbols m_1, m_2, l_1 and l_2 stand for the masses and lengths of the first and second link, respectively. Again, τ is the control moment and μ denotes the friction coefficient. After the same transformation as for the PR̄ model we obtain the dynamic system
$$\dot{x} = \underbrace{\begin{pmatrix} x_2 \\ 0 \\ x_4 \\ -\frac{3 l_1 x_2^2 \sin(x_3)}{2 l_2} - \frac{3\mu\, x_4}{l_2^2 m_2} \end{pmatrix}}_{f(x)} + \underbrace{\begin{pmatrix} 0 \\ 1 \\ 0 \\ -\frac{3 l_1 \cos(x_3) + 2 l_2}{2 l_2} \end{pmatrix}}_{G(x)} u.$$
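Mirroring the PR̄ sketch given earlier, the drift and control vector fields of the RR̄ model can be coded as below; the parameter values are those quoted in the simulation results, μ is the friction symbol adopted here, and the fourth components follow the reconstructed equation above (a sketch under these assumptions, not the authors' implementation).

```python
import numpy as np

# RR-bar (double pendulum) parameters from the simulation section
l1, l2, m1, m2 = 0.5, 0.5, 1.0, 0.5
mu = 0.15                      # friction coefficient (use 0.0 for the frictionless case)

def f(x):
    """Drift vector field of the partially feedback-linearized RR-bar model."""
    x1, x2, x3, x4 = x
    return np.array([x2,
                     0.0,
                     x4,
                     -3.0 * l1 * x2**2 * np.sin(x3) / (2.0 * l2)
                     - 3.0 * mu * x4 / (l2**2 * m2)])

def G(x):
    """Control vector field multiplying the new input u."""
    x1, x2, x3, x4 = x
    return np.array([0.0, 1.0, 0.0, -(3.0 * l1 * np.cos(x3) + 2.0 * l2) / (2.0 * l2)])
```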

Simulation results
The parameters of the second testbed are chosen as l1 = 0.5, l2 = 0.5, m1 = 1, and m2 = 0.5. We take the initial point x0 = (0, 0, π/4, 0) and we want to drive the system to the desired point xd = (0, 0, −π/4, 0). Identically to the simulation of the PR̄ manipulator, we choose the representation of the control function as a truncated Fourier series and take P(t) = (1, sin(ωt), cos(ωt), sin(2ωt), cos(2ωt), sin(3ωt), cos(3ωt), sin(4ωt), cos(4ωt)). The time horizon for the RR̄ manipulator is T = 2, the decay rate γ = 0.5, and the initial control vector λ = (1, 0_{1×8}). Similarly to the previous model, we simulate two cases: first frictionless, μ = 0, and second with the friction coefficient μ = 0.15. Contrary to the PR̄ manipulator, here we



limit the velocity in the actuated joint to −7.5 < x2 < 7.5. The trajectory of the state x2 with the marked limits is presented in Fig. 7, and Fig. 8 shows the task-space path; the manipulator posture is sampled every 0.4 time units. Here, likewise to the PR̄ manipulator, the control function for the nonzero friction coefficient is larger than for the frictionless case (Fig. 9). The comparison of the algorithm convergence for the two cases is presented in Fig. 10.

Fig. 9. Control function u.

Fig. 10. Algorithm convergence.

The velocities x2 of the RR̄ manipulator (Fig. 7) have comparable amplitudes for both friction cases. However, for the model with friction the path of the end-effector is much longer (Fig. 8) and the velocity profile has more oscillations (Fig. 7). Fig. 10 shows that the algorithm needs an almost equal number of steps to solve the problem with and without friction.

5. Conclusions
In this paper we have presented the imbalanced Jacobian algorithm applied to the constrained motion planning problem for underactuated manipulators with friction. This algorithm, derived from the endogenous configuration space approach, was originally dedicated to mobile manipulators. Following the standard procedure, the state


constraints are included in the robot's dynamics. Next, the extended system is regularized in order to avoid the algorithm singularity. Finally, using the motion planning Jacobian algorithm for the extended system, one obtains the solution of the constrained problem for the original system. The efficiency of the algorithm has been presented in simulations for two kinds of underactuated manipulators, with and without friction. For the PR̄ manipulator we have bounded the position of the first joint, x1, because in real robots the range of a prismatic joint is limited. Also for the second, RR̄ manipulator, the limitation of the velocity x2 has a practical reason: in real manipulators the velocity of a revolute link depends on the actuator used. For both manipulators and for both friction cases the presented algorithm has solved the constrained motion planning problem correctly. In all cases the desired points have been reached with the assumed accuracy and the state constraints have been kept. The presented results could be extended in several ways. There is no problem in constraining more than one state variable, or even the whole state vector. Using a similar way of reasoning, we can also limit the control functions. It should be taken into account that the motion could be impossible to perform with badly chosen bounds; additionally, too narrow limits result in slower algorithm convergence. Also, the underactuated models could be more complicated: firstly, the difference between the dimension of the control space and the state space could be greater than one; secondly, there are many other underactuated models which can be considered, e.g. the dynamics of wheeled mobile robots, surface and underwater vehicles, or flying systems. Finally, we point out some advantages of our approach compared to other methods for underactuated manipulators. First of all, the analogy of the system (5) to other underactuated systems leads to the conclusion that the presented algorithm could be applied to other models which take the form of control-affine systems with or without drift. Compared with methods using a linear algorithm based on a linearization around an operating point, the presented approach uses a linearization along the trajectory, so the area of application is larger. The method presented in [1] consists of two phases: in the first phase the actuated joints are positioned, and then, using specific methods, the passive joints converge to the desired position. Our algorithm is one-phase, and the motion and control forces are continuous and smooth over the time horizon. Another method [6] relies on a segmented trajectory: for specific models two kinds of reference trajectories are constructed, and using such an approach one can move the underactuated manipulator in a translational or rotational way. The disadvantage of this algorithm, compared to ours, is that the manipulator must have at least 3 d.o.f.; in the presented method there is no restriction on the system dimension.

ACKNOWLEDGEMENTS
The first author was supported by funds for Polish science in 2010–2012 as a research project; the research of the second author was supported by a statutory grant from the Wrocław University of Technology.


AUTHORS
Adam Ratajczak∗ – Institute of Computer Engineering, Control and Robotics, Wrocław University of Technology, 50-372 Wrocław, Janiszewskiego 11/17, e-mail: adam.ratajczak@pwr.wroc.pl
Mariusz Janiak – Institute of Computer Engineering, Control and Robotics, Wrocław University of Technology, 50-372 Wrocław, Janiszewskiego 11/17, e-mail: mariusz.janiak@pwr.wroc.pl

∗ Corresponding author

References
[1] A. De Luca, S. Iannitti, R. Mattone, and G. Oriolo, "Control problems in underactuated manipulators," in Proc. IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2001), Como, Italy, pp. 855–861, July 2001.
[2] M. W. Spong, "Underactuated mechanical systems," in Control Problems in Robotics and Automation (B. Siciliano and K. Valavanis, eds.), Springer-Verlag, 1997.
[3] M. W. Spong, "The swing up control problem for the Acrobot," IEEE Control Systems Magazine, vol. 15, no. 1, pp. 49–55, 1995.
[4] M. W. Spong and D. J. Block, "The Pendubot: A mechatronic system for control research and education," in Proc. IEEE Conference on Decision and Control (CDC 1995), New Orleans, USA, pp. 555–557, Dec. 1995.
[5] A. De Luca, S. Iannitti, and G. Oriolo, "Stabilization of a PR planar underactuated robot," in Proc. IEEE International Conference on Robotics and Automation (ICRA 2001), Seoul, Korea, pp. 2090–2095, May 2001.
[6] H. Arai, K. Tanie, and N. Shiroma, "Nonholonomic control of a three-DOF planar underactuated manipulator," IEEE Transactions on Robotics and Automation, vol. 14, pp. 681–695, Oct. 1998.
[7] T. Suzuki, W. Miyoshi, and Y. Nakamura, "Control of 2R underactuated manipulator with friction," in Proc. IEEE Conference on Decision and Control (CDC 1998), Tampa, USA, pp. 2007–2012, Dec. 1998.
[8] A. Sørensen and A. S. Shiriaev, "Friction compensation in the Furuta pendulum for stabilizing rotational modes," in Proc. IEEE Conference on Decision and Control (CDC 2001), Orlando, USA, pp. 3772–3777, Dec. 2001.
[9] M. Li and B. Ma, "Control of underactuated manipulators with uncertain static friction," in Chinese Control and Decision Conference (2010 CCDC), Xuzhou, China, pp. 676–679, May 2010.
[10] A. Ratajczak, "Układy robotyczne z pasywnym stopniem swobody," Master's thesis, Politechnika Wrocławska, Wrocław, Polska, 2007.
[11] K. Tchoń and J. Jakubiak, "Endogenous configuration space approach to mobile manipulators: a derivation and performance assessment of Jacobian inverse kinematics algorithms," Int. J. Contr., vol. 76, no. 14, pp. 1387–1419, 2003.
[12] A. Ratajczak and K. Tchoń, "Control of underactuated robotic manipulators: an endogenous configuration space approach," in Proc. IEEE International Conference on Methods and Models in Automation and Robotics (MMAR 2007), Szczecin, Poland, pp. 985–990, Aug. 2007.
[13] M. Janiak, Jakobianowe algorytmy kinematyki odwrotnej manipulatorów mobilnych z ograniczeniami na sterowanie, stan i zachowanie. PhD thesis, Politechnika Wrocławska, Wrocław, Polska, 2009.
[14] S. Jung and J. T. Wen, "Nonlinear model predictive control for the swing-up of a rotary inverted pendulum," Trans. ASME, vol. 126, pp. 666–673, 2004.
[15] A. Ratajczak and M. Janiak, "Motion planning of an underactuated manipulators with state space constraints," Scientific Papers of Warsaw University of Technology, vol. 175, no. 2, pp. 495–504, 2010 (in Polish).
[16] M. Galicki, "Inverse kinematics solution to mobile manipulators," Int. J. Robotics Res., vol. 22, no. 12, pp. 1041–1064, 2003.
[17] F. Allgöwer, R. Findeisen, Z. Nagy, M. Diehl, H. G. Bock, and J. Schlöder, "Efficient nonlinear model predictive control for large scale constrained systems," in Proc. 6th Int. MMAR Conference, Międzyzdroje, pp. 43–52, 2000.
[18] M. Diehl, H. J. Ferreau, and N. Haverbeke, "Efficient numerical methods for nonlinear MPC and moving horizon estimation," in Nonlinear Model Predictive Control (L. Magni, D. M. Raimondo, and F. Allgöwer, eds.), pp. 419–432, Berlin: Springer-Verlag, 2009.
[19] M. Janiak and K. Tchoń, "Constrained robot motion planning: imbalanced Jacobian algorithm vs. optimal control approach," in Proc. 15th Int. MMAR Conference, Międzyzdroje, pp. 25–30, 2010.
[20] M. Janiak and K. Tchoń, "Towards constrained motion planning of mobile manipulators," in Proc. IEEE Int. Conf. on Robotics and Automation, Anchorage, Alaska, pp. 4990–4995, 2010.
[21] M. W. Spong, "Partial feedback linearization of underactuated mechanical systems," in Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (IROS 1994), Munich, Germany, pp. 314–321, Sept. 1994.
[22] C. Chen and O. L. Mangasarian, "Smoothing methods for convex inequalities and linear complementarity problems," Mathematical Programming, vol. 71, pp. 51–69, 1995.
[23] K. S. Narendra and A. M. Annaswamy, Stable Adaptive Systems. Englewood Cliffs, New Jersey: Prentice Hall, 1989.



DIFFERENT KINEMATIC CONTROLLERS STABILIZING NONHOLONOMIC MOBILE MANIPULATORS OF (NH, NH) TYPE ABOUT DESIRED CONFIGURATION
Alicja Mazur, Joanna Płaskonka

Abstract: In the paper, the problem of selecting a kinematic control algorithm for a nonholonomic manipulator is considered. For a nonholonomic manipulating arm, two kinematic algorithms have been compared: the Astolfi algorithm (working in closed loop) and the Nakamura, Chung and Sørdalen algorithm (working in open loop). Simulation results have shown that the influence of the dynamics on the behaviour of a mobile manipulator of (nh, nh) type is significant. It means that only kinematic control algorithms using feedback in the control loop are sufficiently robust to be applied in practice. Then, for the nonholonomic wheeled mobile platform, the Astolfi algorithm, which belongs to the discontinuous class of kinematic algorithms, has also been compared to a discontinuous algorithm proposed by Zhang & Hirschorn.
Keywords: nonholonomic systems, point stabilization, discontinuous control.

1. Introduction
In the paper, we will discuss a possible choice of control algorithms preserving point stabilization for mobile manipulators. By a mobile manipulator we mean a robotic system consisting of a rigid manipulator mounted on a mobile platform. The stabilization problem of nonholonomic systems has attracted a lot of attention in the literature in recent years. It has been shown that it cannot be solved using a smooth and static feedback law [2]; thus, a discontinuous or time-varying feedback law should be designed. If we take into account the type of mobility of the components of a mobile manipulator, we obtain 4 possible configurations: type (h, h) – both the platform and the manipulator are holonomic, type (h, nh) – a holonomic platform with a nonholonomic manipulator, type (nh, h) – a nonholonomic platform with a holonomic manipulator, and type (nh, nh) – both the platform and the manipulator are nonholonomic. The notion of a "doubly nonholonomic" manipulator was introduced in [7] for the type (nh, nh). In the sequel, we will restrict our considerations to mobile manipulators of (nh, nh) type, and as the object of simulations we will take a vertical 3-pendulum mounted on a mobile platform of the (2, 0) class. The problem of designing a control law for rigid robotic manipulators received much attention in the late 1980s and 1990s. There were many works on control algorithms for manipulators based on different levels of knowledge about their dynamics, which assumed that each degree of freedom was controlled independently by an actuator. Such a system can be regarded as a fully actuated mechanical system.

Recently, a new approach to the problem of robot drives has been proposed. In [5] Nakamura, Chung and Sørdalen proposed a new nonholonomic mechanical gear which is able to transmit velocities from the inputs to many passive joints, see Fig. 2. The usage of nonholonomic gears makes the nonholonomic manipulator an example of an underactuated system (at the kinematic level). In [5] the prototype of the nonholonomic manipulator was introduced and discussed. The nonholonomic constraints of the gear were due to the rolling contact without slipping between the balls of the gear and special supporting wheels in the robot joints. The authors presented a prototype with 4 joints; a big advantage of such a construction is the possibility to drive many rotational joints (not necessarily lying in a plane) with only two input engines.

2. Mathematical model of the doubly nonholonomic mobile manipulator
In the paper, we will consider a mobile manipulator that consists of two subsystems, namely the mobile platform (often called in the literature "a mobile robot"), playing the role of the transportation part, and the rigid manipulating arm equipped with specific nonholonomic drives. A schematic presentation of such an object is given in Fig. 1.



Fig. 1. Object of considerations: a 3-pendulum mounted on the unicycle.

Each of the subsystems has specific nonholonomic constraints, which determine its motion. Such constraints have to be included in the description of the whole system, i.e. of the doubly nonholonomic mobile manipulator.



2.1. Nonholonomic constraints for the mobile platform
The behaviour of the mobile platform can be described using generalized coordinates qm ∈ R^n and generalized velocities q̇m ∈ R^n. In our further considerations we assume that no slipping occurs in the motion of the platform, i.e. its motion is pure rolling. It means that the momentary velocity between each wheel and the surface is equal to zero. The constraints introduced by this assumption are nonholonomic, and they can be expressed in the so-called Pfaffian form
$$A_1(q_m)\,\dot{q}_m = 0, \qquad (1)$$
where A1(qm) is a constraint matrix for the platform of (l × n) size. From equation (1) we can conclude that the generalized velocities of the platform q̇m always lie in the null space of the A1 matrix. It means that it is always possible to find a vector of auxiliary velocities u ∈ R^m, m = n − l, which defines the kinematics
$$\dot{q}_m = G_1(q_m)\,u. \qquad (2)$$
The matrix G1(qm) of n × m size is a full rank matrix which fulfills the equality A1(qm)G1(qm) = 0. In the sequel, we restrict our considerations to the nonholonomic mobile platform of the (2, 0) class, presented in Fig. 1. For such a platform, the nonholonomic constraints have the form
$$\dot{q}_m = \begin{pmatrix} \dot{x} \\ \dot{y} \\ \dot{\theta} \end{pmatrix} = \begin{pmatrix} v\cos\theta \\ v\sin\theta \\ w \end{pmatrix} = \begin{pmatrix} \cos\theta & 0 \\ \sin\theta & 0 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} v \\ w \end{pmatrix} = G_1 u, \qquad (3)$$
where qm = (x, y, θ)^T describes the posture of the platform: (x, y) are the Cartesian coordinates of the platform's mass center relative to the inertial base frame X0Y0, and the angle θ describes the orientation of the platform (the angle between the local axis Xp and the global axis X0), see Fig. 1. The symbols v and w denote the linear and angular velocities of the platform, respectively.
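As a small illustration of the constraint-consistent motion (3), the Python sketch below maps the auxiliary velocities u = (v, w) of the (2, 0) platform to its generalized velocities; the function names are illustrative and not taken from the paper.

```python
import numpy as np

def G1(qm):
    """Kinematics matrix of the (2, 0) platform, eq. (3): dq_m = G1(q_m) u."""
    x, y, theta = qm
    return np.array([[np.cos(theta), 0.0],
                     [np.sin(theta), 0.0],
                     [0.0,           1.0]])

def platform_velocity(qm, v, w):
    """Generalized velocity of the platform for auxiliary velocities u = (v, w)."""
    return G1(qm) @ np.array([v, w])
```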

2.2. Nonholonomic constraints for the manipulator
In [5], a new approach to the control problem for manipulators has been presented. The authors have developed a prototype of a new nonholonomic gear which can transmit velocities to many passive joints. A more complex and detailed scheme of such a gear has been introduced in [5]; in this paper only the basic scheme is presented, see Fig. 2. The basic components of the gear presented in Fig. 2 are a ball and three wheels – an input wheel IW and two output wheels OW1 and OW2. The velocity constraints of the ball are only due to the point contact with the wheels. The input wheel IW is mounted in the first joint, and the output wheels are mounted in the next joint. The wheel IW rotates around the fixed axis αI with an angular velocity ρ̇ = η2, which plays the role of a control input. The rotating input wheel IW makes the ball rotate. The wheel OW1 rotates around an axis αO, which forms with the axis of the input wheel the joint angle θ1. The nonholonomic constraints in the gear appear due to the assumption of rolling contact without slippage between the elements of the gear (balls and wheels) in the robot joints. The nonholonomic constraints (the kinematics) of the 3-pendulum can be expressed as the relationship q̇r = G2(qr)η, or, in more detail,
$$\dot{q}_r = \begin{pmatrix} \dot{\theta}_1 \\ \dot{\theta}_2 \\ \dot{\theta}_3 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & a_2 \sin\theta_1 \\ 0 & a_3 \sin\theta_2 \cos\theta_1 \end{pmatrix}\begin{pmatrix} \eta_1 \\ \eta_2 \end{pmatrix} = G_2\,\eta, \qquad (4)$$
where θi is the position of the i-th joint of the 3-pendulum, and the positive coefficients a2 and a3 depend on the gear ratios between the output and input wheels in the second and third joint. The angular velocity of the first joint θ̇1 = η1 and the angular velocity of the input wheel IW in the first joint ρ̇ = η2 play the role of control inputs to the nonholonomic constraints given by equation (4).

Fig. 2. Nonholonomic gear: a) schematic of the gear, b) the gear seen from above.

2.3. Dynamics of the doubly nonholonomic mobile manipulator
Because of the nonholonomy of the constraints (3) and (4) appearing in the motion of the doubly nonholonomic mobile manipulator, the d'Alembert principle has to be used to obtain its dynamic model:
$$Q(q)\ddot{q} + C(q,\dot{q})\dot{q} + D(q) = A_{11}(q_m)\lambda_1 + A_{21}(q_r)\lambda_2 + B\tau, \qquad (5)$$


where: Q(q) – the inertia matrix of the mobile manipulator, C(q, q̇) – the matrix of centrifugal and Coriolis forces acting on the mobile manipulator, D(q) – the vector of gravity, A_{i1} – the matrix of nonholonomic constraints for the i-th subsystem, λ_i – the vector of Lagrange multipliers for the i-th subsystem, B – the input matrix, τ – the vector of controls. The matrices A11(qm) and A21(qr) are the Pfaffian matrices for the platform and the manipulator, respectively, whereas B is the so-called input matrix:
$$A_{11} = \begin{pmatrix} A_1^T(q_m) \\ 0 \end{pmatrix}, \qquad A_{21} = \begin{pmatrix} 0 \\ A_2^T(q_r) \end{pmatrix}, \qquad B = \begin{pmatrix} B_1 & 0 \\ 0 & B_2 \end{pmatrix}.$$
The submatrices B1 and B2 describe which coordinates of the subsystems are directly driven by actuators. The equations of constraints (3) and (4) can be rewritten as the kinematics in the block form
$$\dot{q} = \begin{pmatrix} \dot{q}_m \\ \dot{q}_r \end{pmatrix} = \begin{pmatrix} G_1 & 0 \\ 0 & G_2 \end{pmatrix}\begin{pmatrix} u \\ \eta \end{pmatrix} = G\zeta, \qquad \zeta = \begin{pmatrix} u \\ \eta \end{pmatrix}, \qquad (6)$$
where ζ is the vector of auxiliary velocities of both subsystems of the mobile manipulator. After substituting equation (6) into the dynamics (5) we get
$$Q^*\dot{\zeta} + C^*\zeta + D^* = B^*\tau \qquad (7)$$
with the elements defined in the following way: Q* = GᵀQG, C* = Gᵀ(QĠ + CG), D* = GᵀD, B* = GᵀB. Equation (7) describes the dynamics of the doubly nonholonomic mobile manipulator expressed in the auxiliary coordinates.
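The auxiliary-coordinate dynamics (7) can be assembled mechanically from the block kinematics matrix G and the original model matrices. The following Python sketch mirrors the definitions Q* = GᵀQG, C* = Gᵀ(QĠ + CG), D* = GᵀD, B* = GᵀB, with all matrices passed in as NumPy arrays; it is only a sketch of the projection step, not the authors' implementation.

```python
import numpy as np

def auxiliary_dynamics(Q, C, D, B, G, Gdot):
    """Project the constrained dynamics (5) onto the auxiliary velocities, eq. (7)."""
    Q_star = G.T @ Q @ G                 # Q* = G^T Q G
    C_star = G.T @ (Q @ Gdot + C @ G)    # C* = G^T (Q dG/dt + C G)
    D_star = G.T @ D                     # D* = G^T D
    B_star = G.T @ B                     # B* = G^T B
    return Q_star, C_star, D_star, B_star
```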

3. Control problem statement
In this paper we are looking for a control algorithm preserving the proper cooperation between the mobile platform and its onboard manipulating arm. We assume that the desired task for the mobile manipulator can be decomposed into two independent parts for the two subsystems:
– the end-effector of the robotic arm has to reach a desired point in the configuration space,
– the task of the platform is to converge asymptotically to a desired trajectory qmd(t) lying in the plane (only for checking the influence of the dynamics on the behaviour of the manipulator's joints) or to reach a desired configuration in such a way that the position tracking errors go to 0 (main task).
The goal of the paper is to address the following control problem for doubly nonholonomic mobile manipulators:

Design a control law τ = (τ_m^T, τ_r^T)^T such that a mobile manipulator with fully known dynamics will realize tasks defined separately for each subsystem, and the tracking errors will converge asymptotically to zero.

For developing the control law for the mobile manipulator, it is important to note that a complete model of the nonholonomic system has the structure of two cascaded equations: the kinematics (nonholonomic constraints) and the dynamics. For this reason the structure of the controller has to be divided into two independent parts connected in a cascaded way. Therefore the backstepping-like procedure for the design of the control law should be evoked [3]:
1) kinematic controller ζr(t) – represents an embedded control input which ensures the realizability of the tasks defined for both groups of nonholonomic constraints. The kinematic controller can be treated as a solution to the kinematics (6) if the dynamics were not present. Such a controller generates a 'velocity profile', which has to be executed in practice. The kinematic control algorithm can be designed with or without feedback from the output.
2) dynamic controller τ – as a consequence of the cascaded structure of the model, the system's auxiliary velocities ζ cannot be commanded directly, as is assumed in the design of the kinematic control, but they must be realized as the output of the dynamics (7) driven by τ. The dynamic input τ regulates the real velocities ζ toward the reference control ζr and, therefore, provides the control input necessary to achieve the desired tasks for both subsystems.

4. Algorithms of point stabilization for nonholonomic manipulator
As mentioned earlier, we consider a manipulating arm equipped with the special nonholonomic gears designed by Nakamura, Chung and Sørdalen. From the control point of view, it is very convenient to transform the kinematics of the nonholonomic manipulator into a generic form, namely the so-called "chained form" [4]. Such an approach makes it possible to use all the existing control laws developed for chained systems. The kinematics of the 3-pendulum, given earlier by equation (4), should now be expressed as follows:
$$\dot{\theta}_1 = \eta_1, \qquad \dot{\theta}_2 = a_2 \sin\theta_1\,\eta_2, \qquad \dot{\theta}_3 = a_3 \sin\theta_2 \cos\theta_1\,\eta_2, \qquad \dot{\phi} = \cos\theta_1 \cos\theta_2\,\eta_2, \qquad (8)$$
where φ is the orientation of the wheel OW in the second joint of the 3-pendulum. It is mentioned in [5] that it is inconvenient to transform the kinematics of the nonholonomic manipulator into the chained form if the variable φ is not added to the set of state variables of the manipulator. The transformation of coordinates z = h(φ, qr) and the feedback signal ν = F(φ, qr) introduced in [5] are local (they are valid only for angles θi ∈ (−π/2, π/2), i = 1, 2) and they can have




the following form z1 z2 z3 z4

= = = =

φ, tan θ1 a2 a3 cos 3θ , 2 a3 tan θ2 , θ3 ,

(9)


and matrix has a form  p˜2 p˜3  k˜ 0  ˜  A= 0 k  . .  .. ..

p˜4 . . . p˜n−1 0 ... 0 0 ... 0 .. .. .. . . . ˜ 0 0 0 ... k

with new defined inputs to the system ν1 = cos θ1 cos θ2 η2 , 2 θ1 sin θ2 η2 ν2 = a2 a3 cos2 θ1η1cos3 θ2 + 3a2 sin . cos θ1 cos4 θ2 4.1. Astolfi control algorithm The control law introduced by Astolfi in [1] belongs to the algorithms, which realize the point stabilization of nonholonomic system using discontinuous static feedback from the state. This algorithm is dedicated to the chained systems with only two control inputs, similar to other algorithms, e.g. [6]. The chained form is a special generic form, which many mechanical systems could be transformed to. Into such form can be transformed two classes of single wheeled mobile platforms, namely (2, 0) (unicycle) and (1, 1) (kinematic car) and e.g. mobile platforms with trailers. The equations of the chained system can be expressed as a driftless control system as follows x˙ = g1 (x)u1 + g2 u2 , with vector fields      g1 =    

1 0 x2 x3 .. .

     ,   

(10)

 0 1   0   g2 =  0  .    ..  . 

xn−1

Let us assume, that the control signal u1 is constant, and ˜ Now the system containing only the denote it as u1 = k. state variables x2 , . . . , xn can be rewritten in the matrix form      0 0 0 ... 0 0 x˙ 2 x2  x˙ 3   k˜ 0 0 . . . 0 0   x3        x˙ 4   0 k˜ 0 . . . 0 0   x4   =     ..   . . . . . .   ..   .   .. .. .. .. .. ..   .  x˙ n xn 0 0 . . . k˜ 0 0  (11) 1 0     +  0  u2 .  ..  . 0 We consider the following signal as the control u2 (12)

Then subsystem (11) can be expressed as a linear system x˙ ob = A xob , where xob = (x2 x3 . . . xn )T , 44

Articles

 p˜n 0   0  . ..  . 

2011

(13)

0

System (11) is a controllable linear system, which can be stabilized by using linear feedback control given by equation (12). The closed-loop system (11)-(12) must be asymptotically stable. It means that coefficients p˜i have to be selected in such the way that all eigenvalues of the matrix A have negative real part. In other words, the characteristic polynomial of the A matrix, i.e. pA (λ) = λn−1 − p˜2 λn−2 − p˜3 k˜ λn−3 − . . . ˜ n−3 λ − p˜n (k) ˜ n−2 −˜ pn−1 (k) must be Hurwitz polynomial. If an absolute value of k˜ decreases, the eigenvalues of A matrix are constant for properly scaled p˜i coefficients. Now we assume, that k˜ = −kx1 and the form of control input u2 is the same as previously. Then the chained system can be written as follows x˙ 1 = −kx1 , x˙ 2 = p˜2 x2 + p˜3 x3 + . . . + p˜n−1 xn−1 + p˜n xn , x˙ 3 = −kx1 x2 , .. . x˙ n = −kx1 xn−1 .

0

u2 = p˜2 x2 + p˜3 x3 + . . . + p˜n−1 xn−1 + p˜n xn .

N◦ 3

Now we introduce a change of coordinates (change of basis)   x2 (t)    x3 (t)  ξ2   x1 (t)   ξ3       ..     . . (14) ξ =  ...  =   xn−1(t)      ξn−1    n−3   x1 (t)  ξn  xn (t)  xn−2 (t) 1 Jacobi matrix for such a transformation, given below,   1 0 0 ... 0 0 1   0 ... 0 0  0    x1 (t)   1  0  0 . . . 0 0   x21 (t)   J= . . . . . . ... ... ... ...      1  0  0 0 . . . 0   n−3 x1 (t)     1 0 0 0 ... 0 n−2 x1 (t) is always nonsingular because x1 (t) = x1 (0)e−kt and x1 (0) 6= 0 from assumption. The time derivative of ith variable ξi (i = 3, 4, . . . , n) equals to xi x˙ i x1i−1 − xi (i − 2) xi−3 x˙ 1 1 ˙ξi = d = 2 dt i−2 i−2 x1 x1 xi−1 xi = −k i−3 + (i − 2)k i−2 . x1 x1


Journal of Automation, Mobile Robotics & Intelligent Systems

If coefficients p˜i have the form p˜i =

pi , xi−2 1

∀i = 2, . . . , n,

ξ˙ = Λ ξ,

... ... ... ... .. .

pn−1 0 0 0 .. .

pn 0 0 0 .. .

. . . (n − 3)k 0 ... −k (n − 2)k

       . (15)    

The control law for the system (10) proposed by Astolfi is given below   −kx1 xn−1 xn  u =p x + p x3 + . . . + p n−1 n−3 + pn n−2 2 2 3 x1 x1 x1 (16) This control algorithm is well-conditioned only if the assumption x1 6= 0 holds. The above considerations can be formulated as the following theorem. Theorem 1 (Astolfi) Let us consider a system given by (10) with initial condition x(0), such that x1 (0) 6= 0. Then discontinuous control law equation (16) for x1 6= 0, T u= 0 0 for x1 = 0, converts the chained system to the origin, if k > 0 and σ(Λ) ⊂ C − , where σ(Λ) is a spectrum of the matrix Λ (Λ is described by equation (15)) and C − is open complex half-plane. 4.2. Polynomial algorithm proposed by Nakamura, Chung and Sørdalen Kinematics of the 3-pendulum expressed as the chained system can be written in the form z˙1 z˙2 z˙3 z˙4

= = = =

ν1 , ν2 , z2 ν1 , z3 ν1 .

2011

or, in more detail, by  1 2 1 3 T 2T 3T   T2 T3 T4  b0 2 b0 6 b0 12   3

4

5



c0

      c1  +     c2

b20 T6 b20 T24 b20 T60      1 0 0 z2 (T ) z2 (0)             z3 (0)  =  z3 (T )  . b T 1 0 + 0           2 T2 2 z (T ) z (0) b0 2 b0 T 1 4 4 If z1 (0) 6= z1 (T ), then matrix M is nonsingular, and trajectories of individual state variables (joints of nonholonomic manipulator) could be computed analytically. It should be emphasized that the presented control algorithm realizes an open-loop control (i.e. without including any information about real state of the system and of the environment).

5. Stabilization control algorithms for nonholonomic mobile platform There are different control schemes, which can be implemented to solve stabilization problem for nonholonomic mobile robot of (2, 0) class. Two discontinuous approaches are compared – Astolfi algorithm for nonholonomic systems which can be transformed into chained form and Zhang & Hirschorn algorithm dedicated to mobile robots of (2, 0) class. 5.1. Astolfi algorithm Nonholonomic mobile platform can be transformed into chained form z˙1 = u1 , z˙2 = u2 , z˙3 = z2 u1

(17)

Then input signals proposed by Nakamura, Chung and Sørdalen in [5] are equal to ν1r b0 ηr = = (18) ν2r c0 + c1 t + c2 t2 with coefficient b0 equal to b0 =

N◦ 3

simplicity of their practical realization. The control law leading the system (17) from the initial state z(0) to the desired final state z(T ) over the bounded time horizon T , is defined by equation (18), whereas the coefficients ci of the second input polynomial can be computed explicite (after integration of state equations of the chained system) as follows M c + N z(0) = z(T ) (19)

then the system (11) can be expressed as

with matrix Λ equal to  p2 p3 p4  −k k 0   0 −k 2k   Λ =  0 0 −k  .. .. ..  . . .   0 0 0 0 0 0

VOLUME 5,

z1 (T ) − z1 (0) . T

Polynomial inputs are often used due to their smoothness, smoothness of obtained trajectories of the system, and

by using global diffeomorphism z1 = θ, z2 = −x cos θ − y sin θ, z3 = −x sin θ + y cos θ and static feedback equal to u1 = ω, u2 = −v − ωz3 . The discontinuous control law proposed by Astolfi for such a system has the following form −kz1 u1r ur = = . (20) p2 z2 + p3 zz31 u2r Articles

45


Journal of Automation, Mobile Robotics & Intelligent Systems

5.2. Zhang & Hirschorn algorithm The algorithm proposed by Zhang & Hirschorn in [8] ensures stabilization of the robot and smoothness of the (x, y) trajectory. It is dedicated to the wheeled mobile robots of (2, 0) class. The idea of this algorithm is based on two observations: the mobile robot can achieve a pure rotation by letting v = 0 and ω 6= 0, and for any point (x0 , y0 ) in the XY -plane, there is a circle which passes both (x0 , y0 ) and the origin, and is centered on the Y axis. Thus, stability can be achieved by first making a pure rotation until the orientation angle θ(t) is almost tangent to circle C, and then driving the robot asymptotically to the origin along that circle in such a way that orientation angle is controlled to make the tangent of a trajectory approach to the tangent of the circle. Equation (21) represents the family of circles C in the XY -plane C = {(x, y) | x2 + (y − r)2 = r2 }.

(21)

The angle of the tangent to C at (x, y) can be defined as 2 tan−1 xy , (x, y) 6= (0, 0), θd (x, y) = 0, (x, y) = (0, 0). In Fig. 3, the example of circle C with marked angle θd (x, y) has been shown.

VOLUME 5,

Nâ—Ś 3

2011

6. Dynamic control algorithm To compare the influence of the kinematic control algorithms, we choose a simple dynamic control algorithm, namely exact linearization control algorithm n o Ď„ = (B ∗ )−1 Q∗ (Μ˙r − Km eÎś ) + C ∗ Îś + D∗ , eÎś = Îś − Îśr . The above control law seems to have the least influence on the behavior of the whole object – it is due to the full compensation of Coriolis, centrifugal and gravity forces, however some disturbances may appear in discontinuity points of Îśr and Μ˙r .

7. Simulation results Simulations have been made for the aforementioned object, namely the mobile manipulator, which consists of the nonholonomic 3-pendulum located on the mobile platform of (2, 0) class (unicycle). The simulations cover two problems considered in the paper: disturbing influence of the dynamic on the solution obtained in pure kinematic algorithm, and behavior of the discontinuous control law applied to the mobile platform coming from (2, 0) class. 7.1. The influence of the dynamics on behavior of the nonholonomic manipulator The influence of the dynamic control algorithm on trajectories of the nonholonomic manipulator during convergence to the desired configuration (point stabilization problem) has been considered and simulated. The desired configuration was equal to θ(0) = [−10â—Ś , −10â—Ś , −10â—Ś ], T = 10 [s]. θ(T ) = [10â—Ś , 10â—Ś , 10â—Ś ],

Fig. 3. The example of circle C and the angle θd (x, y). Following discontinuous control law designed by Zhang & Hirschorn enables asymptotic stabilization of a mobile platform at the origin: −k1 sign(θ − θd ), |θ − θd | > , ωr (x, y, θ) = − k 1 (θ − θd ), |θ − θd | ≤ ,  |θ − θd | > ,  0, −k2 (x2 + y 2 ), x ≼ 0, |θ − θd | ≤ , vr (x, y, θ) =  k2 (x2 + y 2 ), x < 0, |θ − θd | ≤ , with k1 > 0, k2 > 0 and > 0 to be sufficiently small. Let G = {(x, y) : |x| ≤ M, |y| ≤ M }, where M is positive number which defines the region in which robot should stay. Practically M could be equal to max{|x0 |, |y0 |}. Imposing the condition 2k2 (1 + )M ≤ k1 on k1 and k2 guarantees the requirement that once |θ(T ) − θd (T )| ≤ is met for some T > 0 then |θ(t) − θd (t)| ≤ for t > T . 46

Articles

The trajectories of the respective joints of the 3-pendulum during convergence to the desired position using Astolfi algorithm have been presented in Fig. 4. The regulation parameters in the Astolfi kinematic control algorithm have been chosen in such a way that all eigenvalues of the matrix Λ were equal to −2. The next algorithm proved by simulations was algorithm given by Nakamura, Chung and Sørdalen. The trajectories of respective joints of the 3-pendulum during convergence to the desired position using this algorithm have been presented in Fig. 5. During the regulation process, the mobile platform was in persistent motion – it has tracked a circle with radius 5 m. Such a behavior is aimed to increase the influence of the dynamics on the solution of kinematic controllers, because dynamic interactions between the platform and the manipulator in the mobile manipulator are very big. 7.2. Behavior of the discontinuous control law applied to the mobile platform The second group of simulation research is aimed to check the way the mobile platform goes to the desired configuration (point stabilization) qmd = 0. Initial configuration of the platform was equal to (x0 , y0 , θ0 ) = (1 m, 1 m, 1 rad). The plots of the mobile platform’s trajectories obtained using Astolfi and Zhang & Hirschorn algorithm have been shown in Fig. 6. The regulation parameters in the Astolfi kinematic control algorithm were


Journal of Automation, Mobile Robotics & Intelligent Systems

10

1

10

0.8 Y [m]

−5

Y [m]

0.5

0

−10

0

5 time [s]

10

−15 0

(a)

0.6 0.4 0.2

−10 −20 0

2011

1

5 0

Nâ—Ś 3

VOLUME 5,

5 time [s]

10

−0.5 −2

−1

X [m]

0

0 0

1

(a)

(b)

0.5 X [m]

1

(b)

10

Fig. 6. Trajectories of the mobile platform during convergence to the origin: (a) – Astolfi kinematic controller for the platform, (b) – Zhang & Hirschorn kinematic controller for the platform.

Fig. 4. Influence of the dynamic control algorithm with gain Km = 200 on behavior of the system. Dotted line – hypothetical trajectories obtained by the Astolfi kinematic algorithm, continuous line – real trajectories (kinematic and dynamic control) of the joints: (a) angle θ1, (b) angle θ2, (c) angle θ3.

Fig. 7. Orientation of the mobile platform during convergence to the origin: (a) – Astolfi kinematic controller for the platform, (b) – Zhang & Hirschorn kinematic controller for the platform.


Fig. 5. Influence of the dynamic control algorithm with gain Km = 200 on behavior of the system. Dotted line – hypothetical trajectories obtained by Nakamura, Chung & Sørdalen kinematic algorithm, continuous line – real trajectories (kinematic and dynamic control) of the joints: (a) angle θ1 , (b) angle θ2 , (c) angle θ3 .

chosen in such a way that all eigenvalues of the matrix Λ were equal to −2. For the Zhang & Hirschorn algorithm the parameters were equal to (k1, k2, ε) = (100, 40, 0.001). The trajectories (x(t), y(t)) generated by the Zhang & Hirschorn algorithm are smooth because they are part of the circle C – Fig. 6b. It is a specific feature of this algorithm that the shape of the (x(t), y(t)) trajectories is imposed. However, θ(t) is switched, so it is non-smooth, see Fig. 7b. The Astolfi algorithm does not influence the shape of the robot's trajectory in the way the Zhang & Hirschorn algorithm does. It can be observed in Fig. 6a that turns in the motion of the robot in the XY-plane may appear, but for the considered mobile platform θ(t) is smooth (Fig. 7a). If we introduce the measure of smoothness as the number of turns (changes) of the state coordinates x and y, then we can conclude that the Zhang & Hirschorn algorithm (no turns) is smoother than the Astolfi algorithm (one turn).

8. Conclusions

The control process of a nonholonomic system needs two simultaneously working controllers: a kinematic control algorithm (hypothetical case) solving the constraint equations and a dynamic control algorithm, which realizes the kinematic solution in practice. In view of its function, the kinematic controller is often called a task solver or motion planner. Many kinematic controllers of different types, with or without feedback from the state or the robot environment, have been presented in the literature. In this paper we have proposed to use different types of algorithms stabilizing the system at the desired configuration: algorithms dedicated to chained systems (the Astolfi algorithm; the polynomial algorithm given by Nakamura, Chung and Sørdalen) and an algorithm dedicated only to the mobile platform of the (2, 0) class. In the simulation research we wanted to check how useful different types of kinematic control algorithms are in practical applications. The results presented in Fig. 5 show that kinematic controllers working in the open loop are not robust to disturbances coming from the dynamic level (the second step in the cascade structure). Such controllers produce large errors in the trajectories converging to the desired configurations. The farther the manipulator's joint is from its desired position, the bigger the position error, see Fig. 5.

In turn, for the kinematic control algorithm with a closed feedback loop, see Fig. 4, there are differences between the trajectories coming from the kinematic control alone and from the kinematic and dynamic controllers working simultaneously, but the manipulator achieves the desired configuration. It means that only kinematic control laws using the feedback signal should be used in practical applications. Such algorithms for point stabilization of the nonholonomic manipulator can preserve robustness to disturbances, especially those generated by the dynamic level of control. It is worth mentioning that the choice of the kinematic controller for one subsystem of the doubly nonholonomic manipulator affects the behavior of the second subsystem due to strong dynamic interactions between them. During the realization of the stabilization process for nonholonomic mobile robots, the trajectories of a robot in the XY-plane may have different properties. The Astolfi algorithm does not ensure that the trajectory is differentiable, although it is easy to implement for a large variety of nonholonomic systems (e.g. a wheeled mobile platform, a manipulator with nonholonomic gears, etc.). Using the Zhang & Hirschorn controller enables the state of the robot to converge asymptotically to the target configuration, and the resulting (x(t), y(t)) trajectories are smooth.

AUTHORS Alicja Mazur – Institute of Computer Engineering, Control and Robotics, Wrocław University of Technology, ul. Janiszewskiego 11/17, 50–372 Wrocław, e-mail: alicja.mazur@pwr.wroc.pl Joanna Płaskonka – Institute of Computer Engineering, Control and Robotics, Wrocław University of Technology, ul. Janiszewskiego 11/17, 50–372 Wrocław, e-mail: joanna.plaskonka@pwr.wroc.pl


References
[1] A. Astolfi, "Exponential stabilization of a car-like vehicle", In: IEEE International Conference on Robotics and Automation. Proceedings, Nagoya, Japan, 1995, p. 1391–1396.
[2] R. W. Brockett, "Asymptotic stability and feedback stabilization", In: Differential Geometric Control Theory (R. W. Brockett, R. S. Millman and H. J. Sussmann, eds.), Birkhauser, Boston, 1983, p. 181–191.
[3] M. Krstić, I. Kanellakopoulos, P. Kokotović, "Nonlinear and adaptive control design", Wiley, New York, 1995.
[4] R. Murray, S. S. Sastry, "Nonholonomic motion planning: steering using sinusoids", IEEE Transactions on Automatic Control, 1993, Vol. 38, p. 700–716.
[5] Y. Nakamura, W. Chung, O. J. Sørdalen, "Design and control of the nonholonomic manipulator", IEEE Transactions on Robotics and Automation, 2001, Vol. 17, No. 1, p. 48–59.
[6] D. Pazderski, K. Kozłowski, B. Krysiak, "Nonsmooth stabilizer for three link nonholonomic manipulator using polar-like coordinates", In: Lecture Notes in Control and Information Sciences, No. 396, p. 35–44, Springer, London, 2009.
[7] K. Tchoń, J. Jakubiak, K. Zadarnowska, "Doubly nonholonomic mobile manipulators", In: IEEE International Conference on Robotics and Automation. Proceedings, New Orleans, USA, 2004, p. 4590–4595.
[8] M. Zhang, R. M. Hirschorn, "Discontinuous Feedback Stabilization of Nonholonomic Wheeled Mobile Robots", Dynamics and Control, 1997, Vol. 7, No. 2, p. 155–169.



MOTION PLANNING OF WHEELED MOBILE ROBOTS SUBJECT TO SLIPPING

Katarzyna Zadarnowska, Adam Oleksy

Abstract: Most dynamic models of wheeled mobile robots assume that the wheels undergo rolling without slipping; thus, in general, wheeled mobile robots are considered as nonholonomic systems. This paper deals with the problem of motion planning of mobile robots whose nonholonomic constraints have been violated during the motion. The slipping phase is studied in detail. A static model of the interaction forces between the wheels and the ground is adopted by means of the singular perturbation approach [2]. A novel control-theoretic framework for mobile robots subject to slipping is defined: both the kinematics and the dynamics of a mobile robot are modeled as a control system with outputs, the performance of a locally controllable system is required to be nontrivial, and the Jacobian of the mobile robot is defined in terms of the linear approximation to the system [36]. A novelty of the methodology consists in exploiting the analogy between stationary and mobile robots and deriving performance characteristics from local controllability. In this paper we address the problem of motion planning by means of the Jacobian pseudo inverse algorithm. The effectiveness of the Jacobian pseudo inverse motion planning algorithm is demonstrated with reference to a differential-drive robot (Pioneer 2DX) subject to slipping.
Keywords: dynamics, kinematics, motion planning, slipping effect, wheeled mobile robot.

1. Introduction Wheeled mobile robots are most often assumed to be capable of rolling without slipping, and modeled as nonholonomic systems. Numerous literature sources deal with mobile robots subject to nonholonomic constraints (namely pure rolling and nonslipping condition) [1, 5, 14, 36, 40]. Such an assumption is, however, far from realistic in practice. In fact, due to various effects such as slipping, sliding, compliance of the wheels, the ideal constraints are never strictly satisfied. Since friction is the major mechanism for generating forces acting on the vehicle’s wheels, the problem of modeling and predicting tire friction has always been an area of intense research in the automotive industry. However, accurate tire–ground friction models are difficult to obtain symbolically, the most common tire friction models used in the literature are static [22, 30]. The dynamic friction models attempt to capture the transient behaviour of the tire–road contact forces under time–varying velocity conditions [7, 13]. The modeling of friction forces exerted at the wheels has been used for various studies [3, 29, 35, 39, 41]. A trajectory tracking problem for wheeled mobile robots subject to some drift and slipping effects, when the velocities become significant and therefore do not

satisfy the ideal constraints perfectly, was undertaken by d'Andrea-Novel and coauthors [2] by means of the singular perturbation approach. A robust controller that handles uncertain soil parameters for trajectory tracking of four-wheel differentially driven, skid-steering vehicles has been presented in [8]. A traction control strategy minimizing slip in rough terrain has been presented in [6, 18, 24]. In this paper we are concerned with the problem of motion planning of wheeled mobile robots for which the nonholonomic constraints of pure rolling and nonslipping have been violated during the motion. The problem can be stated as follows: given a desirable location of the mobile robot in the taskspace, find a configuration in which the robot reaches the desirable position and orientation in a prescribed time horizon. The inverse kinematic algorithm based on the inverse of the Jacobian is the most popular method [4, 27]. Limitations of the methods presented in the abovementioned literature have been discussed in [42]; the inverse Jacobian method is not applicable to mobile robots. As an alternative, the endogenous configuration space approach [36], which applies to mobile robots, has been proposed. This alternative has been inspired by the continuation method paradigm in motion planning of nonholonomic systems [34], and by the existing theory of stationary manipulators. As a starting point the methodology assumes a control system representation of the mobile robot and postulates that the performance of a locally controllable system should be non-trivial. The end point map of the system plays the role of the kinematics and dynamics of the mobile robot. Control actions exerted on the system are regarded as the endogenous configurations of the mobile robot. The Jacobian of the mobile robot is defined in terms of the linear approximation to the control system. As far as the Jacobian equation is concerned, a distinction between singular and regular configurations has been made. The motion planning problem for mobile robots is formulated as a control problem with a prescribed control time horizon. The endogenous configuration space approach applies to any mobile robot. Inverse kinematics algorithms based on the pseudo inverse of the Jacobian for mobile robots have been examined in [20, 37]. The literature dealing with the extension of the endogenous configuration space approach to the motion planning of nonholonomic mobile robots whose control system representation includes not only the kinematics but also the dynamics of the robotic system is rather modest [32]. The motion planning problem of mobile robots subject to slipping effects examined using traditional methods has been presented in [2, 9, 19, 31, 33].


Assuming the control theoretic point of view, our main objective will be to show that the motion planning problem for a mobile robot subject to slipping effects can be solved within the endogenous configuration space approach [36]. We shall begin with a description of the robot dynamics. To this aim we shall assume a linear dependence of the traction forces on the slip of the wheels, and then incorporate the obtained traction forces into the singular perturbation model introduced in [2], see also [12, 15]. This procedure will provide a control system representation of the robot motion, comprising a slow and a fast subsystem. The motion planning problem will be solved by means of a Jacobian inversion of the task map. To this aim we shall employ the Jacobian pseudo inverse. We show the functionality of the Jacobian pseudo inverse motion planning algorithm elaborated within the endogenous configuration space approach. To our best knowledge, an application of the endogenous configuration space methodology to the motion planning problem for mobile robots subject to slipping have not been tackled yet. We believe that the Jacobian motion planning algorithm designed by combining the singular perturbation modeling and the endogenous configuration space approach is a specific contribution of this paper. By design, the motion planning algorithm proposed in this paper applies to any control system representation having controllable linear approximation. The proposed motion planning method provides the open–loop control functions. Nevertheless, there exist some modifications of the method, which take into consideration the uncertainties of the model (e.g. Iterative Learning Control Strategy or Nonlinear Model Predictive Control Algorithm) [21, 26]. The paper is composed as follows. Section 2 presents an analysis of robotic systems subject to slipping effects. The analysis, patterned on [2], makes use of an explicit modeling of the dissipative nature of the interaction forces applied to the system by the external world. Section 2 also summarizes basic concepts of the endogenous configuration space approach; it introduces the Jacobian pseudo inverse motion planning algorithm. Section 3 presents the results related to the application of the Jacobian pseudo inverse motion planning algorithm to the Pioneer 2DX mobile robot moving with slipping. The paper is concluded with section 4.


2. Basic concepts

Let us consider a class of wheeled mobile robots whose nonholonomic constraints are not satisfied. Let q ∈ Rⁿ denote the generalized coordinates of the mobile robot. We shall assume that the l (l < n) velocity constraints A(q)q̇ = 0 imposed on the robot motion can be violated. Intuitively, we expect that the violation of the constraints, measured by the norm ||A(q)q̇||, is small. Using the canonical decomposition Rⁿ = Ker A(q) ⊕ Im Aᵀ(q), we introduce quasi-velocities η ∈ Rᵐ, µ ∈ Rˡ, m + l = n, such that the mobile robot kinematics are represented as

q̇ = G(q)η + Aᵀ(q)εµ,   (1)

where the columns g1(q), …, gm(q) of the matrix G(q) span the null space Ker A(q), so A(q)G(q) = 0, η ∈ Rᵐ is the vector of auxiliary velocities and µ ∈ Rˡ is the vector of slip velocities. The quasi-velocity µ has been scaled by a small parameter ε representing the violation of the constraints. Observe that A(q)q̇ = A(q)Aᵀ(q)εµ, so the ideal case corresponds to ε = 0. The introduction of the quasi-velocities and the scaling parameter has been originally proposed in [2] as a key element of the singular perturbation approach. The Lagrange equations of the mobile robot dynamics assume the following form [2]

Q(q)q̈ + C(q, q̇) = F(q) + B(q)u,   (2)

with Q(q) and C(q, q̇) denoting, respectively, the inertia matrix and the vector of centrifugal, Coriolis, frictional and gravity forces. B(q) stands for the control matrix, the vector F(q) denotes the interaction forces exerted on the system by the external world, and u represents the control functions. Having rewritten the equations of motion (2), with q̈ expressed from (1) as

q̈ = [G(q) Aᵀ(q)] (η̇, εµ̇)ᵀ + Ġ(q)η + Ȧᵀ(q)εµ,

where Ġ(q) = (∂G(q)/∂q)(G(q)η + Aᵀ(q)εµ) and Ȧᵀ(q) = (∂Aᵀ(q)/∂q)(G(q)η + Aᵀ(q)εµ), and having added an output function characterizing the task of the mobile robot, we obtain an affine control system representation of the kinematics and the dynamics of the mobile robot

q̇ = G(q)η + Aᵀ(q)εµ,
(η̇, εµ̇)ᵀ = P(q, η, εµ) + R(q)u,   (3)
y = k(q, q̇).

The terms appearing in eq. (3) are defined in the following way:

P(q, η, εµ) = [P1(q, η, εµ); P2(q, η, εµ)] = [G(q) Aᵀ(q)]⁻¹ ( −(Ġ(q)η + Ȧᵀ(q)εµ) − Q⁻¹(q)C(q, G(q)η + Aᵀ(q)εµ) ) + H⁻¹(q) (0, A(q)F)ᵀ,

R(q) = [R1(q); R2(q)] = [G(q) Aᵀ(q)]⁻¹ Q⁻¹(q)B(q).

The matrix

H(q) = [ Gᵀ(q)Q(q)G(q)   Gᵀ(q)Q(q)Aᵀ(q) ;  A(q)Q(q)G(q)   A(q)Q(q)Aᵀ(q) ]

is symmetric and positive definite, while the output function y = k(q, q̇) may describe the position coordinates or velocities of the mobile robot in its motion plane. If ε is sufficiently small, the state variables of the control system (3) can be divided into slow (q, η) and fast µ, corresponding to the slow and the fast dynamics. Furthermore, it is easy to show that after taking ε = 0 the control system (3) reduces to the kinematics and dynamics model of the mobile robot subject to the nonholonomic constraints. In this case, the third line represents the traction forces (∈ Im Aᵀ(q)) that enforce satisfaction of the nonholonomic constraints, see [2].



Since the admissible control functions of the mobile robot (3) belong to the Hilbert space, they will be assumed Lebesgue square integrable over the time interval [0, T ], u(·) ∈ L2m [0, T ]. They have sense of forces or torques, and constitute the dynamic endogenous configuration space X ∼ = L2m [0, T ], [42]. X is a Hilbert space. Let z = (q, η, µ) ∈ Rn+m+l denotes the state vector of eq.(3), and suppose that for a given initial state z0 and every control u(·) there exists a state trajectory z(t) = ϕz0 ,t (u(·)) = (q(t), η(t), µ(t)) and an output trajectory y(t) = k(q(t)) of system (3). In accordance with the endogenous configuration space approach the task map of the mobile robot will be identified with the end point map of the control system (3) Tz0 ,T (u(·)) = y(T ) = k(ϕz0 ,T (u(·)))

(4)

which computes the system output at T when driven by u(·). Given the system (3), we shall study the following motion planning problem for the mobile robot: find a control function u(·) such that, at a given time instant T, the system output reaches a desirable point yd of the task space, so y(T) = yd. The motion planning problem thus becomes equivalent to the inversion of the task map: find a control function u(·) such that Tz0,T(u(·)) = yd. This inversion can be achieved using a Jacobian algorithm [36]. To this aim, we define the Jacobian of (4) as the derivative

Jz0,T(u(·)) v(·) = D Tz0,T(u(·)) v(·),

v(·) ∈ X. The Jacobian map transforms tangent vectors to the dynamic endogenous configuration space into R³, and describes how an infinitesimal change in the input force is transmitted into a change of the position and orientation of the mobile robot at T. In order to compute the Jacobian map at a given configuration u(·) ∈ X we introduce the variational system associated with (3),

ξ̇ = A(t)ξ + B(t)v,   ζ = C(t)ξ,   (5)

as the linear approximation to (3) along z(t), initialized at ξ0 = 0, where

A(t) = [ ∂(G(q(t))η(t) + Aᵀ(q(t))εµ(t))/∂q    G(q(t))    Aᵀ(q(t))ε ;
         ∂(P(q(t), η(t), εµ(t)) + R(q(t))u(t))/∂q    ∂P(q(t), η(t), εµ(t))/∂η    ∂P(q(t), η(t), εµ(t))/∂µ ],

B(t) = [ 0 ;  R(q(t)) ],    C(t) = ∂k(z(t))/∂z.

ξ ∈ Rⁿ⁺ᵐ⁺ˡ denotes the state of the variational system; its evolution is described by eq. (5). Taking into account (3) and (5), we can compute the Jacobian map as the output trajectory ζ(T) of the variational system (5) [36]:

Jz0,T(u(·)) v(·) = C(T) ∫₀ᵀ Φ(T, t) B(t) v(t) dt,   (6)


where Φ(t, s) denotes the fundamental matrix of system (5), which satisfies the evolution equation

∂Φ(t, s)/∂t = A(t)Φ(t, s),   Φ(s, s) = I_{n+m+l}.
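The Jacobian (6) can be evaluated numerically by integrating the variational system (5) along the nominal trajectory, without forming Φ explicitly. The sketch below is a minimal illustration under the assumption that the time-varying matrices A(t), B(t) and the output matrix C(T) are available as user-supplied callables, and that the control variations are restricted to a finite set of basis functions (for instance the truncated Fourier basis introduced later in eq. (10)); none of these inputs are fixed by the paper itself.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal sketch: build the Jacobian of (6) column by column by integrating
# the variational system (5) for each basis direction v of the control space.
# A_t, B_t return A(t), B(t); C_T is C(T); v_basis is a list of callables t -> v(t).

def jacobian_columns(A_t, B_t, C_T, v_basis, T, n_state):
    def var_sys(t, xi, v):
        return A_t(t) @ xi + B_t(t) @ v(t)   # xi' = A(t) xi + B(t) v(t), xi(0) = 0

    cols = []
    for v in v_basis:
        sol = solve_ivp(var_sys, (0.0, T), np.zeros(n_state),
                        args=(v,), rtol=1e-8, atol=1e-10)
        xi_T = sol.y[:, -1]              # xi(T)
        cols.append(C_T @ xi_T)          # zeta(T) = C(T) xi(T), cf. eq. (6)
    return np.column_stack(cols)         # finite-dimensional Jacobian matrix
```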

Observe that the Jacobian (6) corresponds to the compliance map introduced in [42]. Given the end point map (4) of the mobile robot and a desirable point yd in the taskspace, the motion planning problem consists in defining an endogenous configuration ud(·) ∈ X such that Tz0,T(ud(·)) = yd. Usually, this problem is solved numerically, by means of a Jacobian pseudo inverse motion planning algorithm. Let uϑ(·) ∈ X, ϑ ∈ R, denote a smooth curve in the dynamic endogenous configuration space, passing through an initial configuration u0(·). The taskspace error e(ϑ) = Tz0,T(uϑ(·)) − yd along this curve, describing the difference between the actual and the desirable mobile robot locations at T, should decrease along with ϑ in a prescribed way, e.g. exponentially. By requiring that the error decrease exponentially, de(ϑ)/dϑ = −γe(ϑ), with decay rate γ > 0, we derive the Ważewski-Davidenko equation

Jz0,T(uϑ(·)) duϑ(·)/dϑ = −γ e(ϑ).   (7)

Finally, using the Jacobian pseudo inverse operator

(J#z0,T(u(·)) ζ)(t) = Bᵀ(t) Φᵀ(T, t) Cᵀ(T) M⁻¹z0,T(u(·)) ζ,   (8)

where

Mz0,T(u(·)) = C(T) ∫₀ᵀ Φ(T, t) B(t) Bᵀ(t) Φᵀ(T, t) dt Cᵀ(T)

is the mobility matrix of the endogenous configuration u(·), we arrive at the dynamic system defining the Jacobian pseudo inverse algorithm

duϑ(t)/dϑ = −γ ( J#z0,T(uϑ(·)) e(ϑ) )(t).   (9)

Solution of the motion planning problem is obtained as the limit at +∞ of the trajectory of (9), u(t) = lim_{ϑ→+∞} uϑ(t). Once the inverse operator in (8) is chosen, the solution of eq. (9) is unique. Since the dynamic endogenous configuration space X is infinite-dimensional, in order to carry out effective computations we shall use a finite parameterization of the control functions u(·) in (3) by truncated orthogonal expansions (Fourier series)

u_ci(t) = Σ_{j=0}^{k} ( c_{i,2j−1} sin(jωt) + c_{i,2j} cos(jωt) ),   i = 1, 2, …, m,   ω = 2π/T,   c_{i,−1} = 0.   (10)

The subscript c in (10) means that the control functions are parameterized. After the parameterization, the dynamic endogenous configuration u ∈ X is represented by a vector c ∈ Rˢ, s = m(2k + 1), and the Jacobian (6) becomes a Jacobian matrix Jz0,T(c). A discretization of the motion planning algorithm (9) results in changing the dynamic endogenous configuration c ∈ Rˢ iteratively, with iterations indexed by an integer ϑ, i.e.

c_{ϑ+1} = c_ϑ − γ J#z0,T(c_ϑ) e_ϑ,   ϑ = 1, 2, …,   (11)


where J#z0,T(c) = Jᵀz0,T(c) ( Jz0,T(c) Jᵀz0,T(c) )⁻¹ and e_ϑ = Tz0,T(u_{cϑ}(·)) − yd. The discrete Jacobian pseudo inverse algorithm (11), which relies on the Euler approximation of the differential equation (9), is sensitive to its parameters. In particular, by increasing γ we may lose convergence; on the other hand, by decreasing it excessively we may slow down the convergence, i.e. the computational complexity of the algorithm increases. Additionally, the final solution depends strongly on the initial controls. The proposed motion planning algorithm is local with respect to the initial controls. This is a common feature of Newton-like algorithms.
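A minimal sketch of the parameterized controls (10) and of the discrete iteration (11) is given below. The end point map end_point_map(c) — a simulation of system (3) under the controls u_c returning y(T) — must be supplied by the user; here the Jacobian is approximated by finite differences purely for illustration, whereas the paper computes it from the variational system (6). The coefficient ordering inside c and the defaults γ = 0.75, tolerance 10⁻⁶ and 500 iterations follow the case study, but the ordering itself is an assumption.

```python
import numpy as np

def u_c(t, c, m, k, T):
    """Evaluate the truncated Fourier controls (10); c has length m*(2k+1)."""
    w = 2.0 * np.pi / T
    c = np.asarray(c).reshape(m, 2 * k + 1)      # assumed per-channel ordering
    u = np.empty(m)
    for i in range(m):
        val = c[i, 0]                             # j = 0 term (cosine only)
        for j in range(1, k + 1):
            val += c[i, 2 * j - 1] * np.sin(j * w * t) + c[i, 2 * j] * np.cos(j * w * t)
        u[i] = val
    return u

def jacobian_fd(end_point_map, c, h=1e-6):
    """Finite-difference stand-in for the Jacobian matrix J_{z0,T}(c)."""
    y0 = end_point_map(c)
    J = np.zeros((y0.size, c.size))
    for j in range(c.size):
        dc = np.zeros_like(c)
        dc[j] = h
        J[:, j] = (end_point_map(c + dc) - y0) / h
    return J

def plan_motion(end_point_map, y_d, c0, gamma=0.75, tol=1e-6, max_iter=500):
    """Iteration (11): c <- c - gamma * J^#(c) e, with J^# = J^T (J J^T)^-1."""
    c = np.array(c0, dtype=float)
    for _ in range(max_iter):
        e = end_point_map(c) - y_d
        if np.linalg.norm(e) < tol:
            break
        J = jacobian_fd(end_point_map, c)
        J_pinv = J.T @ np.linalg.inv(J @ J.T)
        c = c - gamma * (J_pinv @ e)
    return c
```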

3. Case study

As an example, let us consider the Pioneer 2DX – a differential-drive mobile robot (a mobile robot of this class has been studied in [2]; Fig. 1).

Fig. 1. a – Mobile robot Pioneer 2DX, b – desirable positions of the robot.

The robot is equipped with two independently actuated, identical wheels (of radius r) mounted on a common axle (of length 2l) (Fig. 2). Let us assume that, as a result of deformation at the contact point, the wheels may slip, both longitudinally and laterally. The generalized platform coordinates are chosen as q = (x, y, lθ, rϕ1, rϕ2)ᵀ, where x, y are the position coordinates of the center of the wheel axle, θ denotes the orientation of the platform, and ϕ1, ϕ2 are the revolution angles of the wheels. The meaning of the geometric parameters of the robot is explained in Fig. 2. The mathematical model

Fig. 2. Mobile robot.

of the kinematics and dynamics of the mobile robot is represented by the control system (3), with

A(q) = [ sin θ  −cos θ   0    0    0 ;
         cos θ   sin θ   1   −1    0 ;
         cos θ   sin θ  −1    0   −1 ],

G(q) = [ ½cos θ  ½cos θ ;  ½sin θ  ½sin θ ;  ½  −½ ;  1  0 ;  0  1 ],   B = [ 0 0 ; 0 0 ; 0 0 ; 1/r 0 ; 0 1/r ],   (12)

thus

(η̇, εµ̇)ᵀ = [G(q) Aᵀ(q)]⁻¹ ( −Ġ(q)η − Ȧᵀ(q)εµ + Q⁻¹(q)B(q)u ) + H⁻¹(q) (0, A(q)F)ᵀ.

The actuation of the wheels is achieved by two motors that generate two torques defining the inputs u = (u1, u2) to the system. The output function describes the position coordinates and the orientation angle of the mobile robot in its motion plane (y(q) = (x, y, θ) ∈ SE(2) ≅ R² × S¹). Assuming that the center of mass of the robot is located at the middle point of the axle of the wheels, we get the following form of the inertia matrix Q = diag{m, m, Iθ/l², Iϕ/r², Iϕ/r²}, where m is the mass of the robot, Iθ – the moment of inertia of the robot around the vertical axis, and Iϕ – the moment of inertia of each wheel.

Now let us derive a model for the generalized interaction force F. As in [2], we shall adopt a "pseudo-slipping" model [29], according to which the lateral and longitudinal forces applied by the ground to the wheels are proportional, respectively, to the slip angle and the slip coefficient. Let v denote the velocity of the center of the wheel, which rotates with the angular velocity ϕ̇. Let vx and vy be, respectively, the longitudinal and lateral components of v. The longitudinal slipping velocity of the wheel is equal to the difference vx − rϕ̇, and the longitudinal slip s is given by s = (vx − rϕ̇)/||v||. Eventually, the longitudinal traction force applied by the ground is characterized as

fx = −Cs = −(C/||v||)(vx − rϕ̇),

where C is a slip stiffness coefficient, depending on the nature of the wheel and the ground. The force fx opposes the longitudinal slip. The slip angle δ, defined as the angle between the plane of the wheel and the velocity of its center, is given by δ ≃ sin δ = vy/||v||. The lateral traction force applied by the ground, opposed to the lateral velocity component vy, takes the form

fy = −Dδ = −(D/||v||) vy,

with D denoting a cornering stiffness coefficient depending on the nature of the wheel and the ground. Finally, we get the following form of the interaction force acting on the wheel

f = [fx; fy] = −(1/||v||) [ C  0 ;  0  D ] [ vx − rϕ̇ ;  vy ].   (13)
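The sketch below collects the model ingredients above: the constraint and kinematics matrices of (12) (their exact entries are a reconstruction from the extracted text, verified only by the null-space condition A(q)G(q) = 0) and the pseudo-slipping wheel force (13). The numerical values for r, C, D and the saturation δ are the ones quoted later in the case study.

```python
import numpy as np

def A_of_q(theta):
    """Constraint matrix of (12): A(q) qdot = 0 in the ideal (no-slip) case."""
    s, c = np.sin(theta), np.cos(theta)
    return np.array([[s, -c, 0.0, 0.0, 0.0],      # lateral non-slip
                     [c,  s, 1.0, -1.0, 0.0],     # rolling of wheel 1
                     [c,  s, -1.0, 0.0, -1.0]])   # rolling of wheel 2

def G_of_q(theta):
    """Columns span Ker A(q); quasi-velocities eta are the wheel linear velocities."""
    s, c = np.sin(theta), np.cos(theta)
    return np.array([[c / 2, c / 2],
                     [s / 2, s / 2],
                     [0.5, -0.5],
                     [1.0, 0.0],
                     [0.0, 1.0]])

def B_of_q(r=0.0825):
    """Control matrix: wheel torques act on the scaled wheel angles r*phi_i."""
    return np.array([[0.0, 0.0], [0.0, 0.0], [0.0, 0.0],
                     [1.0 / r, 0.0], [0.0, 1.0 / r]])

def wheel_force(vx, vy, r_phidot, C=1.0, D=0.4, delta=1e-6):
    """Pseudo-slipping force (13): f = -(1/||v||) diag(C, D) [vx - r*phidot, vy]."""
    v_norm = max(np.hypot(vx, vy), delta)   # saturation for small ||v||, as described later
    fx = -C / v_norm * (vx - r_phidot)      # longitudinal traction force
    fy = -D / v_norm * vy                   # lateral traction force
    return np.array([fx, fy])

if __name__ == "__main__":
    theta = 0.3
    assert np.allclose(A_of_q(theta) @ G_of_q(theta), 0.0)   # sanity check: G spans Ker A
    print(wheel_force(vx=0.5, vy=0.05, r_phidot=0.45))
```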



Fig. 3. Long distance, point B: robot paths and posture trajectories, control functions u = (u1, u2), and slips (vs1, vs2, vs3) = (vx − rϕ̇1, vx − rϕ̇2, vy), a – ε = 10⁻³, b – ε = 10⁻¹.

From the definition (12) of A(q) we deduce that for the wheel i

[ vix − rϕ̇i ;  viy ] = Li(q) A(q) q̇,   (14)

where Li(q) is a 2 × 3 matrix, i = 1, 2. Specifically, L1(q) = [ 0 1 0 ; −1 0 0 ], while L2(q) = [ 0 0 1 ; −1 0 0 ]. The generalized interaction force of the wheel i is computed using the principle of virtual work Fiᵀ q̇ = fiᵀ vi, implying that Fi = Aᵀ(q) Liᵀ(q) fi. Using (13) and (14), we get

Fi = −Aᵀ(q) Liᵀ(q) (1/||vi||) [ Ci  0 ;  0  Di ] Li(q) A(q) q̇.   (15)

Finally,

F = −Aᵀ(q) Σ_{i=1}^{k} (1/||vi||) Liᵀ(q) [ Ci  0 ;  0  Di ] Li(q) A(q) q̇.   (16)

In computations we shall additionally assume that Di and Ci have the same value for both wheels.

In order to avoid numerical problems that may appear for small values of ||vi||, we slightly modify the model by introducing a saturation. In particular, if ||vi|| < δ then ||vi|| is replaced by δ, where δ is a small positive constant. For the needs of computer simulations we assume the following real geometric and dynamic parameters of the mobile platform Pioneer 2DX: l = 0.163 m, r = 0.0825 m, m = 8.67 kg, Iθ = 0.256 kgm², Iϕ = 0.02 kgm² [17]. Additionally, we assume the saturation coefficient δ = 10⁻⁶, the cornering stiffness coefficient Di = 0.4 and the slip stiffness coefficient Ci = 1 for i = 1, 2 [2]. The control functions take the form (10) with k = 1, so c ∈ R⁶. The state space of the system (3) is 10-dimensional, z = (q, η, µ)ᵀ ∈ R¹⁰. The following motion planning problem will be examined: for the given initial state z0 = 0 of the robot and the desirable positions (A, B) in the plane (see Fig. 1b), find an endogenous configuration c guaranteeing that the positions are reached in time T = 5 s at the final robot orientation θd = 0. A collection of final positions of the platform consists of 3 points distributed around a circle of radius r1 = 2 m (short distance) or r2 = 5 m (long distance). The initial controls of the mobile platform are fixed as (c10, c20) = (1, 1), the remaining initial values of the coefficients being zero. The error decay rate is γ = 0.75. The motion planning problem is regarded as solved when the taskspace error norm ||e|| = ||yd − k(q(T))|| drops below 10⁻⁶ within ≤ 500 iterations. Every solution of the motion planning problem is accompanied by the computation of the platform trajectory length d, the


control energy e = ∫₀ᵀ u(t)ᵀ u(t) dt, and the number of iterations i necessary to find the solution. The results are summarized in Tab. 1 (ε1 = 10⁻¹ and ε2 = 10⁻³) and in Figs. 3–4.

Fig. 4. Short distance, point A: robot paths and posture trajectories, control functions u = (u1, u2), and slips (vs1, vs2, vs3) = (vx − rϕ̇1, vx − rϕ̇2, vy), a – ε = 10⁻³, b – ε = 10⁻¹.


Tab. 1. Motion planning of the mobile robot Pioneer 2DX.

              position A            position B
          i    d [m]    e        i    d [m]    e
r1  ε1   24    2.06    1.57     25    2.22    2.43
r1  ε2   24    2.19    1.05     25    2.80    1.78
r2  ε1   24    5.15    7.51     26    5.62   14.01
r2  ε2   24    5.41    3.58     25    6.76    5.71


As the next step, ε has been assumed to be equal to 1. Now the traction properties in the tire–road contact zone are illustrated by the changes of the parameters Ci, Di. Naturally, the bigger Ci and Di are, the better traction properties we get. Let us additionally assume Di = 2Ci (the lateral slip is twice larger than the longitudinal slip) and Ci equal, respectively, Ci1 = 10⁰, Ci2 = 10², Ci3 = 10⁴ and Ci4 = 10⁶. The results are summarized in Tab. 2 and in Figs. 5–8. It is worth observing that in most cases the energy lost due to slipping and skidding effects increases along with increasing ε. Point B is the most difficult final point for the algorithm: acceptable trajectories are obtained with large energy expenditure. Finally, the closer to the ideal case we are, the smaller the number of iterations i of the band-limited Jacobian pseudoinverse algorithm needed, and the longer the distance the platform covers while executing the motion planning task (Figs. 3a and 4a). The band-limited Jacobian pseudoinverse algorithm copes well with long distances (see Fig. 9). Preliminary results (see Tabs. 1 and 2) confirm that substituting the role of the perturbation parameter ε by the parameters Ci and Di expressing the traction properties of the wheels (violation of the nonholonomic constraints) reduces considerably the computational complexity of the Jacobian pseudoinverse algorithm.


Fig. 5. Long distance, point B: robot paths and posture trajectories, control functions u = (u1, u2), and slips (vs1, vs2, vs3) = (vx − rϕ̇1, vx − rϕ̇2, vy), Ci = 10⁰.



Fig. 6. Long distance, point B: robot paths and posture trajectories, control functions u = (u1, u2), and slips (vs1, vs2, vs3) = (vx − rϕ̇1, vx − rϕ̇2, vy), a – Ci = 10⁶, b – Ci = 10².


Fig. 7. Short distance, point A: robot paths and posture trajectories, control functions u = (u1, u2), and slips (vs1, vs2, vs3) = (vx − rϕ̇1, vx − rϕ̇2, vy), a – Ci = 10⁶, b – Ci = 10².


1.4

Tab. 2. Motion planning of the mobile robot Pioneer 2DX.

               position A                position B
           i    d [m]     e          i    d [m]     e
r1  Ci1   17   2.1480   1.1809      15   2.3371   1.4171
r1  Ci2   14   2.2156   0.7733      15   2.8208   0.9656
r1  Ci3   14   2.2357   0.7293      15   2.9319   0.9206
r1  Ci4   14   2.2359   0.7289      15   2.9332   0.9201
r2  Ci1   14   5.1471   2.0313      19   6.6976   5.5604
r2  Ci2   14   5.4266   1.4108      17   6.7147   1.9244
r2  Ci3   14   5.4926   1.3584      17   7.1136   1.9099
r2  Ci4   14   5.4934   1.3579      17   7.1190   1.9099


4.5



t [s]

Fig. 8. Short distance, point A: robot paths and posture trajectories, control functions u = (u1, u2), and slips (vs1, vs2, vs3) = (vx − rϕ̇1, vx − rϕ̇2, vy), Ci = 10⁰.

t [s]

Fig. 9. Long distance r = 20 m, T = 10 s, point B: robot paths and posture trajectories, control functions u = (u1, u2), and slips (vs1, vs2, vs3) = (vx − rϕ̇1, vx − rϕ̇2, vy), Ci = 10⁶, i = 18, d = 26.7597 m, e = 3.6421.


4. Conclusions and future works

The main objective of this paper has been to propose a novel approach to motion planning for wheeled mobile robots subject to slipping effects. We have focused on mobile robots whose ideal pure rolling and non-slipping constraints have been violated during the motion. Following [2], this transgression has been modeled as a small perturbation of the ideal constraints. To describe the kinematics and the dynamics of the robot, the endogenous configuration space approach has been adopted. The basic concepts for mobile robots subject to slipping have been defined by correspondence to nonholonomic mobile robots. Thus, the proposed approach is a combination of the singular perturbation modeling and the endogenous configuration space approach. As a result, we have obtained a control system representation of the robot kinematics and dynamics, defined a task map as the end point map of this system, and reduced the motion planning problem to the inversion of the task map, achieved by means of the Jacobian pseudo inverse operator. The motion planning algorithm devised in this paper relies on the linearization of the control system along a trajectory and applies to any system whose linearization is controllable. The motion planning problem has been solved by means of the Jacobian pseudo inverse algorithm. As an illustration of the theory developed in the paper, the motion planning problem for the mobile robot Pioneer 2DX subject to slipping has been solved. Our computer experiments have confirmed that this algorithm is able to solve efficiently the motion planning problem of the mobile robot subject to slipping; it is computationally rather expensive, but has good convergence properties. The motion planning algorithm derived within the endogenous configuration space approach is computable and useful in application to mobile robots whose nonholonomic constraints are violated. Future research will be conducted towards experimental verification of this approach. Current work is directed towards applying more adequate models of the interaction forces between the wheels and the ground [7]. The construction of a hybrid model of a mobile robot whose motion consists of slipping and non-slipping phases would also be of interest.



Acknowledgments This research has been supported by the Wrocław University of Technology under a statutory grant.

AUTHORS Katarzyna Zadarnowska∗ – Institute of Computer Engineering, Control and Robotics; Wrocław University of Technology; ul. Janiszewskiego 11/17, 50-372 Wrocław, Poland, e-mail: Katarzyna.Zadarnowska@pwr.wroc.pl Adam Oleksy – Institute of Computer Engineering, Control and Robotics; Wrocław University of Technology; ul. Janiszewskiego 11/17, 50-372 Wrocław, Poland, e-mail: Adam.Oleksy@pwr.wroc.pl ∗

Corresponding author

References [1] B. d’Andrea-Novel, G. Bastin and G. Campion, ”Dynamic feedback linearization of nonholonomic wheeled mobile robot”, in: Proc. of the IEEE Conf. on Robotics and Automation, Nice, 1992, pp. 2527– 2532. [2] B. d’Andrea-Novel G. Campion and G. Bastin, ”Control of wheeled mobile robots not satisfying ideal velocity constraints: a singular perturbation approach”, Int. J. of Robust and Nonlinear Control, 5, 1995, pp. 243–267.


[10] G. S. Chirikjian and A. B. Kyatkin, ”Engineering Applications of Noncommutative Harmonic Analysis”, CRC Press, Boca Raton, 2001. [11] B. J. Choi and S. V. Sreenivasan, ”Motion Planning of A Wheeled Mobile Robot with Slip-Free Motion Capability”, in: Proc. of the IEEE Conf. on Robotics and Automation, Leuven, Belgium, 1998, pp. 3727– 3732. [12] H. Chou, and B. d’Andrea-Novel, ”Global vehicle control using differential braking torques and active suspension forces”, Vehicle System Dynamics, 43(4), 2005, pp. 261 – 284. [13] P. R. Dahl, ”Solid Friction Damping of Mechanical Vibrations”, AIAA Journal, 14(12), 1976, pp. 1675– 1682. [14] W. E. Dixon, D. M. Dawson, E. Zergeroglu and A. Behal ”Nonlinear Control of Wheeled Mobile Robots”, Springer-Verlag, 2001. [15] M. Ellouze, and B. d’Andrea-Novel, ”Control of Unicycle–Type Robots in the Presence of Sliding Effects with Only Absolute Longitudinal and Yaw Velocities Measurements”, European J. Control, 6, 2000, pp. 567–584. [16] J. F. Gardner and S. A. Velinsky, ”Kinematics of mobile manipulators and implications for design”, J. Robot. Syst., 17(6), 2000, pp. 309–320.

[3] R. Balakrishna and A. Ghosal, ”Modeling of Slip for Wheeled Mobile Robots”, IEEE Trans. on Robotics and Automation, 11(1), 1995, pp. 126–132.

[17] J. Giergiel et al., ”Symboliczne generowanie równań kinematyki mobilnego robota Pioneer – 2DX”, Przegląd Mechaniczny, 19-20/2000, 2000, pp. 26– 31.

[4] B. Bayle, J.-Y. Fourquet and M. Renaud, ”Manipulability of wheeled mobile manipulators: application to motion generation”, Int. J. Robot. Res., 22(7–8), 2003, pp. 565–581.

[18] K. Iagnemma and S. Dubovsky, ”Traction Control of Wheeled Robotic Vehicles in Rough Terrain with Application to Planetary Rovers”, Int. J. of Robotics Research, 23(10–11), 2004, pp. 1029–1040.

[5] A. M. Bloch, M. Reyhanoglu and N. H. McClamroch, ”Control and stabilization of nonholonomic dynamic systems”, IEEE Trans. on Automatic Control, AC-37, 1992, pp. 1746–1757.

[19] G. Ishigami, K. Nagatani and K. Yoshida, ”Path Following Control with Slip Compensation on Loose Soil for Exploration Rover”, in: Proc. of the IEEE/RSJ Int. Conf. on Intelligent Robotics and Systems, Beijing, China, 2006, pp. 5552–5557.

[6] D. Caltabiano, D. Ciancitto and G. Muscato, ”Experimental Results on a Traction Control Algorithm for Mobile Robots in Volcano Environment”, in: Proc. of the IEEE Int. Conf. on Robotics and Automation, New Orlean, LA, 2004, pp. 4375–4380.

[20] J. Jakubiak and K. Tchoń, ”Motion planning in endogenous configuration space”, in: 3rd International Workshop on Robot Motion and Control, 2002, pp. 231–236.

[7] C. Canudas de Wit, P. Tsiotras, E. Velenis, M. Basset and G. Gissinger, ”Dynamic Friction Models for Road/Tire Longitudinal Interaction”, Vehicle System Dynamics, 39(3), 2003, pp. 189–226.

[21] S. Jung, J.T. Wen, ”Nonlinear Model Predictive Control for the Swing–Up of a Rotary Inverted Pendulum”, Journal of Dyn. Systems, Measurement, and Control, 126, 2004, pp. 666–673.

[8] L. Caracciolo, A. de Luca and S. Iannitti, ”Trajectory Tracking Control of a Four–Wheel Differentially Driven Mobile Robot”, in: Proc. of the IEEE Conf. on Robotics and Automation, Detroit, MI, 1999, pp. 2632–2638.

[22] U. Kiencke and L. Nielsen, Automotive Control Systems, Springer, 2000.

[9] P. Cheng, E. Frazzoli, V. Kumar, ”Motion Planning for the Roller Racer with a Sticking/Slipping Switching Model”, in: Proc. of the IEEE Conf. on Robotics and Automation, Orlando, FL, 2006, pp. 1637–1642.

[24] P. Lamon and R. Siegwart, ”Wheel Torque Control in Rough Terrain – Modeling and Simulation”, in: Proc. of the IEEE Int. Conf. on Robotics and Automation, Barcelona, Spain, 2005, pp. 867–872.

[23] K. Kozłowski and D. Pazderski, ”Modeling and control of a 4–wheel skid–steering mobile robot”, Int. J. Appl. Math. Comput. Sci., 14(4), 2004, pp. 477–496.




[25] R. Lenain, B. Thuilot, C. Cariou and P. Martinet, ”Model Predictive Control for Vehicle Guidance in Presence of Sliding: Application to Farm Vehicles Path Tracking”, in: Proc. of the IEEE Int. Conf. on Robotics and Automation, Barcelona, Spain, 2005, pp. 885–890. [26] F. Lizarralde, J. T. Wen, and L. Hsu, ”A New Model Predictive Control Strategy for Affine Nonlinear Control Systems”, in: Proc. 1999 American Control Conf., San Diego, CA, 1999, pp. 4263–4267. [27] A. de Luca, G. Oriolo and P.R. Giordano, ”Kinematic modelling and redundancy resolution for nonholonomic mobile manipulators”, in; Proc. of the IEEE Int. Conf. on Robotics and Automation, Orlando, FL, 2006, pp. 1867–1873. [28] J.E. Marsden and T.J.R. Hughes, Mathematical Foundations of Elasticity, Prentice-Hall, Englewood Cliffs, N.J., 1983. [29] N. Matsumoto and M. Tomizuka, ”Vehicle lateral velocity and yaw rate control with two independent control inputs”, Transaction of the ASME, J. of Dynamics Systems, Measurement, and Control, 114, 1992, pp. 606–613. [30] H. B. Pacejka and R. S. Sharp, ”Shear Force Developments by Pneumatic Tires in Steady–State Conditions: A Review of Modeling Aspects”, Vehicle Systems Dynamics, 20, 1991, pp. 121–176. [31] S.-T. Peng, J.-J. Sheu and C.-C. Chang, ”A Control Scheme for Automatic Path Tracking of Vehicles Subject to Wheel Slip Constraint”, in: Proc. of the American Control Conf., Boston, MA, 2004, pp. 804– 809. [32] A. Ratajczak, J. Karpińska and K. Tchoń, ”Task priority motion planning of underactuated systems: an endogenous configuration space approach”, Robotica, 2009 (in press). [33] N. Sidek and N. Sarkar, ”Dynamic Modeling and Control of Nonholonomic Mobile Robot with Lateral Slip”, in 3rd Int. Conf. on Systems, 2008, pp. 35–40.


[34] H.J. Sussmann, ”A continuation method for nonholonomic path finding problems”, in: Proc. 32nd IEEE CDC, San Antonio, TX, 1993, pp. 2718–2723. [35] H.-S. Tan and Y.-K. Chin, ”Vehicle traction control: variable structure control approach”, Trans. ASME J. Dyn. Syst., Meas., and Cont., 113, 1991, pp. 223–230. [36] K. Tchoń and J. Jakubiak, ”Endogenous configuration space approach to mobile manipulators: A derivation and performance assessment of Jacobian inverse kinematics algorithms”, Int. J. Control, 76(14), 2003, pp. 1387–1419. [37] K. Tchoń and Ł. Małek, ”On Dynamic Properties of Singularity Robust Jacobian Inverse Kinematics”, IEEE Trans. on Automatic Control, 54(6), 2009, pp. 1402–1406. [38] K. Tchoń and K. Zadarnowska, ”Kinematic dexterity of mobile manipulators: an endogenous configuration space approach”, Robotica, 21, 2003, pp. 521–530. [39] Y. Yokokohji, S. Chaen and T. Yoshikawa, ”Evaluation of Traversability of Wheeled Mobile Robots on Uneven Terrains by Fractal Terrain Model”, in: Proc. of the IEEE Int. Conf. on Robotics and Automation, New Orlean, LA, 2004, pp. 2183–2188. [40] G. Walsh, D. Tilbury, S. Sastry, R. Murray and J. P. Laumond, ”Stabilization of trajectories for systems with nonholonomic constraints”, in: Proc. of Int. Conf. on Robotics and Automation, Nice, 1992, pp. 1999–2004. [41] R. L. Williams, B. E. Carter, P. Gallina and G. Rosati, ”Dynamic Model with Slip for Wheeled Omnidirectional Robots”, IEEE Trans. on Robotics and Automation, 18(3), 2002, pp. 285–293. [42] K. Zadarnowska, K. Tchoń, ”A control theory framework for performance evaluation of mobile manipulators”, Robotica, vol. 25, 2007, pp. 703–715.



REAL-TIME OBSTACLE AVOIDANCE USING HARMONIC POTENTIAL FUNCTIONS

Paweł Szulczyński, Dariusz Pazderski, Krzysztof Kozłowski

Abstract: The paper presents a solution of the motion planning and control problem for a mobile robot in a two-dimensional environment with a static elliptical obstacle, based on a hydrodynamic description. The theoretical background refers to the solution of the Laplace equation using complex algebra. The method of designing a complex potential with respect to a stationary elliptical obstacle and a stationary goal is formally shown. Next, the motion planning problem is extended assuming that the goal is moving. Then the motion planning results are used to design a closed-loop control algorithm based on the decoupling technique. Theoretical considerations are supported by numerical simulations illustrating example results of motion planning and control.
Keywords: motion planning, obstacle avoidance, mobile robot, harmonic function.

1. Introduction The issue of motion planning and control in a constrained space can be considered as fundamental theoretical and practical problem in mobile robotics [6, 12]. The well-known paradigm for solving this problem introduced in a robotics literature by Khatib [9] is based on potential functions. A detailed analysis of this method with respect to existence of local minima was carried out by Koditschek [11]. Next, Koditschek and Rimon introduced so called navigation function ensuring one global minimum for star obstacles [17]. So far many control solutions based on the potential function approach has been reported also with respect to nonholonomic systems. Some works concentrate on obstacle avoidance for single robot [3], others consider multi-robotic systems [14]. In order to meet requirements arising from nonholonomic constraints a new navigation functions have been developed [18]. An alternative method of motion planning based on potential functions may take advantage of Poisson equation. This equation can be used to model fluid flow dynamics, potential of the gravitational and electrostatic field, as well as a temperature distribution in solid bodies, etc. In the homogeneous case Poisson equation becomes Laplace equation, and potential function which solves it is called harmonic function. Fundamental advantage of the harmonic function which is required in the motion planning methods is lack of local minima. Majority of works devoted to this method in robotics use the discrete approach [1, 5]. It gives possibility to describe quite complicated environments with obstacles and to consider additional constraints. For example in [13] curvature of paths were optimized in order to satisfy phase constrained subjected to nonholonomic system. So called panel method described in [10] can be seen

as a modification of the discrete method; it assumes a representation of the environment as a set of primitive segments. The high computational complexity required to solve the Laplace equation numerically can be seen as a serious disadvantage of using the discrete domain. Recently, an efficient implementation of numerical solvers using FPGAs has been proposed [8]. Another approach to using the Laplace equation for motion planning and control is defined in a continuous domain. In [7] Feder and Slotine outlined some possible solutions for a formal description of two-dimensional environments with stationary and non-stationary planar obstacles. This problem was also considered by Waydo and Murray in [20]. Recently, this method has been presented in [19], where a control algorithm for a nonholonomic robot of class (2,0) is considered. This paper is mainly inspired by the idea presented in [20]. It extends previous results to the static and dynamic case (with a moving goal) of motion planning in an environment with a single elliptical obstacle. It takes advantage of a continuous description of the environment using analytical functions. Moreover, the application of harmonic functions to control a two-wheeled nonholonomic robot of class (2,0) using a linearization technique is discussed. Theoretical considerations are supported by numerical simulations in order to show the effectiveness of the proposed motion planning and control strategy.

2. Recalling of harmonic functions theory

At the beginning, a formal definition of a harmonic function will be given.
Definition 1. A function u ∈ C²(Ω), where Ω is an open subset of Rⁿ, is called a harmonic function on Ω (written u ∈ H(Ω)) if it solves the Laplace equation Δu = 0, where Δ is the Laplace operator.
An important property of harmonic functions is the principle of superposition, which follows from the linearity of Laplace's equation. That is, if H1 and H2 are harmonic, then any linear combination of them is also harmonic. Key properties of harmonic functions can be derived from the mean-value property as well as the maximum (minimum) principle [15]. These principles indicate that a harmonic function attains its extremes only on the boundary of Ω, so it does not have local maxima (minima) inside Ω. Describing the vector field of a flow in the two-dimensional space, one can consider a stream function ψ : R² → R and a potential function ϕ : R² → R. These functions are orthogonal and satisfy the Cauchy-Riemann


equations, namely:

vx = ∂ψ/∂y = ∂ϕ/∂x,   vy = −∂ψ/∂x = ∂ϕ/∂y,   (1)

where vx and vy are the components of the vector field v = [vx vy]ᵀ defining the fluid flow. In the two-dimensional space one can refer to the Gauss plane and conveniently apply complex algebra. Then, the coordinates of any point can be represented as z ≜ x + iy ∈ C, where x = Re{z}, y = Im{z} ∈ R and i ≜ √−1 denotes the imaginary unit. Consequently, one can define the complex potential according to the following theorem:

Theorem 1. [20] Assuming that the potential ϕ and the stream ψ : R² → R are at least twice differentiable functions, then

w(z) ≜ ϕ(x, y) + iψ(x, y) ∈ C   (2)

describes the complex potential of a two-dimensional flow such that w(z) ∈ H and ψ, ϕ ∈ H.

Taking into account the complex potential (2) and (1), the following components of the vector field can be derived:

vx = ∂Im{w}/∂y = ∂Re{w}/∂x   (3)

and

vy = −∂Im{w}/∂x = ∂Re{w}/∂y.   (4)
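As a numerical illustration of (3)–(4), the sketch below evaluates the velocity field of a complex potential through its complex derivative: for a holomorphic w one has dw/dz = vx − i vy, which is equivalent to (3)–(4). The sink potential f(z) = −ν log(z − zr) introduced later in eq. (12) is used as the example; the values of ν and of the goal zr are illustrative assumptions.

```python
import numpy as np

def velocity(w, z, h=1e-6):
    """Return (vx, vy) at point z for a complex potential w, via dw/dz = vx - i*vy."""
    dwdz = (w(z + h) - w(z - h)) / (2 * h)   # central difference along the real axis
    return dwdz.real, -dwdz.imag

def sink(z, z_r=1.0 + 1.0j, nu=1.0):
    """Complex potential of a sink placed at the goal z_r, cf. eq. (12)."""
    return -nu * np.log(z - z_r)

if __name__ == "__main__":
    vx, vy = velocity(sink, 0.0 + 0.5j)
    print(vx, vy)   # the resulting field points towards the goal z_r
```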

3. Motion planning

3.1. Problem description
Let us consider a motion planning task defined in the two-dimensional Cartesian workspace Q ⊂ R², with respect to a point robot (i.e. with zero area). It is assumed that in the workspace one elliptical obstacle O ⊂ Q is present (cf. Fig. 1). The non-colliding (free) space is defined as Q̃ ≜ Q \ O and the configuration of the robot is described by p ≜ [x y]ᵀ ∈ Q. We assume that the goal position is governed by some reference trajectory pr(t) = [xr(t) yr(t)]ᵀ which satisfies:
A1: the non-colliding condition, namely ∀t ≥ 0, pr(t) ∈ Q̃,
A2: the time-differentiability condition, namely ∀t > 0, ||ṗr|| < ∞.

Fig. 1. Illustration of the problem of motion planning.

Fig. 2. Elliptical obstacle defined on the Gauss plane.

The path planning problem is formulated as follows:
Problem 1. For any reference trajectory pr satisfying assumptions A1 and A2, find a trajectory p(t) such that for every p(0) ∈ Q̃ the following requirements are satisfied:
– trajectory p converges to some neighborhood of the point pr, namely lim_{t→∞} ||p(t) − pr(t)|| ≤ ε, where ε ≥ 0 is some assumed constant (which can be made arbitrarily small),
– trajectory p does not intersect the obstacle (i.e. the obstacle is avoided): ∀t ≥ 0, p(t) ∈ Q̃.
Here we analyze two cases:
– the static case, for which the reference trajectory becomes constant, namely ṗr ≡ 0 – then one can expect asymptotic convergence of the trajectory p to the point pr (i.e. ε = 0),
– the dynamic case, for which the reference trajectory pr is time-varying – then one can consider convergence of the trajectory p(t) to some neighborhood of the trajectory pr(t) with radius ε > 0, which can be made arbitrarily small.

3.2. General description of the goal-obstacle
Following [20] we assume that the goal is represented by the source Z with complex potential defined by some holomorphic complex function f : C → C. Next, the complex potential of the obstacle O in the presence of the source Z is denoted by f(wo(z)), with wo(z) being the base complex potential of the obstacle. Consequently, we can define the following resultant complex potential of the goal–obstacle structure:

w(z) ≜ f(z) + f(wo(z)).   (5)


A boundary condition can be seen as the problem of determining a zero stream on the edge of the obstacle, according to the following lemma:

Lemma 1. If the base complex potential wo of the obstacle O satisfies the relation

∀z ∈ ∂O:  wo(z) = z*,   (6)

where (·)* is the operator of complex conjugation, then the edge of the obstacle ∂O coincides with a zero streamline of the flow.

Proof. Taking into account that wo(z) = z* and considering the identity f(z*) = f*(z), it follows that

∀z ∈ ∂O:  w(z) = f(z) + f*(z)  ⇒  Im{w(z)} = 0,   (7)

which directly proves Lemma 1.

3.3. Elliptical obstacle description
We consider an obstacle O in the form of an ellipse with the origin ox, oy ∈ R, rotated by the angle αo, and with major and


Journal of Automation, Mobile Robotics & Intelligent Systems

minor semi-axes rox , roy > 0 (cf. Fig. 2). Its equation on the Gauss plane may be presented by the following constraint 2 2 λ (z) − rox roy = 0, (8) where λ (z) , kroy · < {(z − o) exp (−iαo )} +

(9)

2

+irox · = {(z − o) exp (−iαo )}k , with o , ox + ioy . In order to find complex potential for the obstacle one can use lemma 1. Then, for every z satisfying constraint (8) (i.e. ∀z ∈ ∂O) one should expect that wo (z) = z ∗ . This requirement can be satisfied assuming that wo is chosen as follows wo ,

2 2 rox roy (z ∗ − o∗ ) + o∗ . λ (z)

(10)

Consequently, complex potential for the considered obstacle can be written as ! 2 2 rox roy (z ∗ − o∗ ) f (wo (z)) = f + o∗ . (11) λ (z) The result (11) is an extension of considerations presented in [20] where only circular obstacles are taken into account. 3.4. The static case In this subsection it is assumed that pr = const describes static goal and its coordinates are represented with complex number as zr = xr + iyr ∈ C. It is required that the point zr is a unique attractor for all trajectories z(t) with initial condition included in the collision-free ˜ This attractor can be treated as space, namely p (0) ∈ Q. a source (or more precisely as a sink) with the following complex potential f (z) = −ν log (z − zr ) ,

(12)

where ν > 0 is a design parameter determining velocity of the streamline (trajectory). Taking into account the elliptical obstacle in the space, considering (11) and lemma 1 we can design the total complex potential of the structure goal – obstacle: w (z) = −ν [log (z − zr ) + 2 2 ∗ ∗ i r r (z −o ) log ox oyλ(z) + o∗ − zr∗ . (13)


where e = [ex ey]^T ≜ [x − xr  y − yr]^T describes the position errors, and

$a = r_{ox}^2 r_{oy}^2 (y - o_y) + \lambda(z)(o_y - y_r)$,

$b = r_{ox}^2 r_{oy}^2 (x - o_x) + \lambda(z)(o_x - x_r)$,

$l_x = \frac{\partial \lambda(z)}{\partial x}(o_y e_x - o_x e_y + x_r y - y_r x) - r_{ox}^2 r_{oy}^2 (y - o_y) - \lambda(z)(o_y - y_r)$,

$l_y = \frac{\partial \lambda(z)}{\partial y}(o_y e_x - o_x e_y + x_r y - y_r x) + r_{ox}^2 r_{oy}^2 (x - o_x) + \lambda(z)(o_x - x_r)$.
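For readers who wish to experiment with the static-case field, the following minimal sketch (ours, not the authors' code) evaluates (14)–(15) for given goal and obstacle parameters; the partial derivatives of λ(z) are approximated by central differences rather than by the closed-form expressions, and ν is passed as a constant. Note that the e/‖e‖² term is singular at the goal, which motivates the scaling discussed next.

```python
import numpy as np

def lam(x, y, ox, oy, rox, roy, alpha):
    """lambda(z) from Eq. (9) for z = x + iy and o = ox + i*oy."""
    w = (complex(x, y) - complex(ox, oy)) * np.exp(-1j * alpha)
    return abs(roy * w.real + 1j * rox * w.imag) ** 2

def velocity(x, y, xr, yr, ox, oy, rox, roy, alpha, nu, h=1e-6):
    """Static-case field (14)-(15); d(lambda)/dx and d(lambda)/dy by central differences."""
    ex, ey = x - xr, y - yr
    lz = lam(x, y, ox, oy, rox, roy, alpha)
    dlx = (lam(x + h, y, ox, oy, rox, roy, alpha)
           - lam(x - h, y, ox, oy, rox, roy, alpha)) / (2 * h)
    dly = (lam(x, y + h, ox, oy, rox, roy, alpha)
           - lam(x, y - h, ox, oy, rox, roy, alpha)) / (2 * h)
    r2 = rox ** 2 * roy ** 2
    a = r2 * (y - oy) + lz * (oy - yr)
    b = r2 * (x - ox) + lz * (ox - xr)
    lx = dlx * (oy * ex - ox * ey + xr * y - yr * x) - r2 * (y - oy) - lz * (oy - yr)
    ly = dly * (oy * ex - ox * ey + xr * y - yr * x) + r2 * (x - ox) + lz * (ox - xr)
    e2 = ex ** 2 + ey ** 2              # singular at the goal (e2 = 0), cf. Eq. (16)
    vx = -nu * (ex / e2 - r2 * ly / (a ** 2 + b ** 2))
    vy = -nu * (ey / e2 + r2 * lx / (a ** 2 + b ** 2))
    return vx, vy
```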


It is worth noting that the solution (14)–(15) is not well determined at the goal. This property results directly from the description of the liquid dynamics. It should also be emphasized that the goal is reached in finite time, the time depending on the value of the parameter ν. This problem was solved in [19] by applying a discontinuous approximation of the solution in the neighborhood of zr. In this paper an alternative solution is proposed. It preserves the continuity of the solution by assuming that ν is the scalar function

$\nu = \nu(e) \triangleq k \|e\|^2$,   (16)

where k > 0 determines the convergence rate of the trajectory p to the point pr. In such a case it is possible to prove that |vx|, |vy| ∈ L∞ and lim_{‖e‖→0} vx = 0, lim_{‖e‖→0} vy = 0.

3.5. The dynamic case

Now we extend the result given in Subsection 3.4, assuming that the goal is moving in the collision-free space and that its coordinates are described by the time-varying reference trajectory pr(t). In such a case we can write zr ≜ zr(t) = xr(t) + i yr(t). Then, the goal motion with respect to the inertial frame can be modeled by the potential of the following homogeneous flow:

$\tilde{w}_c(z) = (\dot{x}_r(t) - i\,\dot{y}_r(t))\, z$.   (17)

Furthermore, the velocity of the stationary obstacle with respect to the moving reference frame associated with the goal is equal to −ṗr(t). Referring to the analysis given in [20] for a movable obstacle, in the considered case the additional stream resulting from the relative movement between the goal point and the obstacle takes the form

$\tilde{w}_o(z) = -(\dot{x}_r(t) + i\,\dot{y}_r(t)) \left( \frac{r_{ox}^2 r_{oy}^2 (z^* - o^*)}{\lambda(z)} + o^* - z_r^* \right)$.   (18)

Hence, taking into account (13), (17) and (18), the following resultant complex potential can be considered:

$w(z) = \tilde{w}(z) + \tilde{w}_c(z) + \tilde{w}_o(z)$,   (19)

where w̃(z) is defined by (13). Then, determining the velocity of p according to relations (3) and (4), we have

$v_x = \tilde{v}_x + \tilde{v}_{cx} + \dot{x}_r(t), \qquad v_y = \tilde{v}_y + \tilde{v}_{cy} + \dot{y}_r(t)$,   (20)

where

$\tilde{v}_{cx} = \frac{-r_{ox}^2 r_{oy}^2}{\lambda(z)^2} \left[ \dot{x}_r(t)\left( \lambda(z) - \frac{\partial \lambda(z)}{\partial y}(y - o_y) \right) + \dot{y}_r(t)\, \frac{\partial \lambda(z)}{\partial y}(x - o_x) \right]$,   (21)

$\tilde{v}_{cy} = \frac{-r_{ox}^2 r_{oy}^2}{\lambda(z)^2} \left[ \dot{y}_r(t)\left( \lambda(z) - \frac{\partial \lambda(z)}{\partial x}(x - o_x) \right) + \dot{x}_r(t)\, \frac{\partial \lambda(z)}{\partial x}(y - o_y) \right]$,   (22)

and ṽx and ṽy are defined by (14) and (15), respectively.

Remark 1. From a theoretical point of view, in the dynamic case the reference trajectory pr(t) is the unique attractor for the trajectory p(t) only if the coefficient ν (related to the static part of the flow) is constant. However, after introducing the scaling (16) in order to avoid the singularity of the solution at p = pr, one can no longer guarantee that the minimum stays at pr. As a result, asymptotic convergence is in general not achieved in the dynamic case.
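A corresponding sketch for the dynamic case, again our own and reusing lam() and velocity() from the previous listing, assembles (20) from the static part (with the scaled ν of (16) supplied by the caller) and the correction terms (21)–(22):

```python
def velocity_dynamic(x, y, xr, yr, xr_dot, yr_dot,
                     ox, oy, rox, roy, alpha, nu, h=1e-6):
    """Total field (20) = static part (14)-(15) + corrections (21)-(22) + goal velocity."""
    vx_s, vy_s = velocity(x, y, xr, yr, ox, oy, rox, roy, alpha, nu, h)
    lz = lam(x, y, ox, oy, rox, roy, alpha)
    dlx = (lam(x + h, y, ox, oy, rox, roy, alpha)
           - lam(x - h, y, ox, oy, rox, roy, alpha)) / (2 * h)
    dly = (lam(x, y + h, ox, oy, rox, roy, alpha)
           - lam(x, y - h, ox, oy, rox, roy, alpha)) / (2 * h)
    c = -(rox ** 2 * roy ** 2) / lz ** 2
    vcx = c * (xr_dot * (lz - dly * (y - oy)) + yr_dot * dly * (x - ox))  # Eq. (21)
    vcy = c * (yr_dot * (lz - dlx * (x - ox)) + xr_dot * dlx * (y - oy))  # Eq. (22)
    return vx_s + vcx + xr_dot, vy_s + vcy + yr_dot                      # Eq. (20)
```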




3.6. Simulation results

To illustrate the theoretical considerations, numerical simulations have been conducted. Firstly, the static case is considered, in which the coordinates of the goal point and the ellipse parameters were chosen as follows: xr = −0.5 m, yr = −0.8 m, ox = 0 m, oy = 0.3 m, rox = 0.3 m, roy = 0.1 m, αo = −π/6 rad, while the gain coefficient is k = 0.5. Secondly, in the dynamic case the following circular reference trajectory is chosen:

$p_r(t) = R \begin{bmatrix} \cos \mu t \\ \sin \mu t \end{bmatrix} + \begin{bmatrix} x_{r0} \\ y_{r0} \end{bmatrix}$   (23)

with R = 0.3 m, μ = 0.5 rad/s, and xr0 = yr0 = −4.5 m or xr0 = yr0 = −1.5 m. In Figs. 3 and 4 the potential graph, flow lines and paths obtained for the static case are illustrated, assuming different initial conditions p(0). Analyzing Fig. 3 one can confirm that the function φ, being a harmonic function over the domain Q̃, has one global minimum at the goal. As a result, almost all trajectories p(t), which can be interpreted as fluid flow, invariably tend to the goal. The one remaining issue is related to the existence of saddle (stagnation) points. These points are placed at the intersections of the boundary of the ellipse with the line determined by the initial position p(0) and the goal. However, since saddle points are not stable equilibrium points, one can relatively easily introduce some disturbance to ensure that p(t) does not get stuck at them. Considering the flow paths depicted in Fig. 4, one can easily confirm that the elliptical obstacle is avoided – this results from the fact that the maxima of the (harmonic) potential function occur only on the obstacle boundary.
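The dynamic-case setup can be reproduced, for instance, by a simple explicit Euler integration of the flow along the circular reference (23); the listing below is only an illustration (the initial condition and the time step are our own choices) and reuses velocity_dynamic() from the sketch above.

```python
import numpy as np

R, mu, k = 0.3, 0.5, 0.5
ox, oy, rox, roy, alpha = 0.0, 0.3, 0.3, 0.1, -np.pi / 6
xr0 = yr0 = -1.5

p = np.array([0.6, 0.5])            # example initial condition p(0) (our choice)
dt, T = 0.01, 40.0
for i in range(int(T / dt)):
    t = i * dt
    xr = xr0 + R * np.cos(mu * t)   # circular reference trajectory, Eq. (23)
    yr = yr0 + R * np.sin(mu * t)
    xr_dot, yr_dot = -R * mu * np.sin(mu * t), R * mu * np.cos(mu * t)
    e = p - np.array([xr, yr])
    nu = k * float(e @ e)           # scaling of nu, Eq. (16)
    vx, vy = velocity_dynamic(p[0], p[1], xr, yr, xr_dot, yr_dot,
                              ox, oy, rox, roy, alpha, nu)
    p = p + dt * np.array([vx, vy]) # explicit Euler step
```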

Fig. 3. Static case – potential and stream.

Figs. 5 and 6 present the paths p obtained for the dynamic case, taking into account different initial points p(0) and different placements of the obstacle. From them one can conclude that in both cases the obstacle is avoided. It can be noticed that if the reference point approaches the boundary of the obstacle relatively closely, the tracking error e increases. This means that the task of collision avoidance has priority. This conclusion is confirmed by the time plots of the tracking error shown in Fig. 7 – one can easily notice that the signal ‖e(t)‖ is bounded but does not converge to zero. Moreover, from Fig. 8 it follows that the velocity of the point p remains bounded at each time instant.


Fig. 4. Static case – paths obtained for different initial conditions.

Fig. 5. Dynamic case – paths obtained for different initial conditions with circular reference trajectory.

Fig. 6. Dynamic case – paths obtained for different initial conditions with circular reference trajectory.

4. Control algorithm

The motion planning methods for the static and dynamic cases can be directly used for designing a closed-loop control algorithm for a mobile robot operated in real time. In such an application one should be aware of some phase constraints which may be intrinsic to many types of mobile robots. It is well known that most kinematic structures of wheeled vehicles are subject to nonholonomic constraints. The trajectories generated based on the fluid description are smooth; however, their curvatures may locally exceed some upper bound which results from the vehicle kinematics. In this paper we address the control problem for a two-wheeled nonholonomic robot which belongs to class (2,0). Such a system can track any smooth trajectory with bounded curvature (i.e. it cannot instantaneously change its orientation). As a result, fluid trajectories can be seen as feasible trajectories for this system.

Fig. 7. Dynamic case (for the trajectory indicated in Fig. 5 by the dotted line) – tracking error: ex (black), ey (gray).

Fig. 8. Dynamic case (for the trajectory indicated in Fig. 5 by the dotted line) – velocity: ṽx (black), ṽy (gray).

Fig. 9. Geometry of the two-wheeled mobile robot.

4.1. Robot model and obstacle description

The kinematic model of the robot illustrated in Fig. 9 can be defined as follows:

$\dot{q}_C = G(q_C)\,\omega$,   (24)

where qC ≜ [θ xC yC]^T ∈ S¹ × ℝ² determines the robot configuration, composed of the orientation and the coordinates of the point located centrally on the wheel drive axle, ω ≜ [ωL ωR]^T ∈ ℝ² is the input determining the angular velocities of the left and right wheel, while

$G(q_C) \triangleq \begin{bmatrix} -\frac{r}{d} & \frac{r}{d} \\ \frac{r}{2}\cos\theta & \frac{r}{2}\cos\theta \\ \frac{r}{2}\sin\theta & \frac{r}{2}\sin\theta \end{bmatrix} \in \mathbb{R}^{3\times 2}$   (25)

is the input matrix, with r and d denoting the radius of the wheels and the distance between them, respectively. Moreover, for practical reasons, we assume that the robot has a non-zero area and that its boundary can be inscribed in a rectangle of dimensions a × b – cf. Fig. 10. We call it a rectangular mobile robot and denote the space occupied by it as OR.

Fig. 10. Rectangular robot and the radius of the external circle.

4.2. Control development

We consider a control problem which is directly based on the motion planning problem defined in Subsection 3.1. Basically, it can be defined as follows:

Problem 2. Find a bounded input ω for the rectangular robot with kinematics (24) such that some point P fixed in the robot frame follows a reference non-colliding trajectory pr satisfying assumptions A1 and A2, while the elliptical obstacle O is avoided, namely

$\forall t \geq 0 \quad O \cap O_R = \emptyset$.   (26)

In this section we propose to solve the given problem using the simplest method, namely the decoupling technique, by taking advantage of the following linearizing output function (see also [4]):

$p = h(q_C) \triangleq \begin{bmatrix} x_C \\ y_C \end{bmatrix} + l \begin{bmatrix} \cos\theta \\ \sin\theta \end{bmatrix}$,   (27)

with l ≠ 0 being a non-zero parameter. In order to extend the obstacle avoidance result, originally formulated for a particle (i.e. a point robot), to the rectangular robot, one can increase the size of the obstacle [2]. Taking into account definition (27) and assuming that the orientation θ of the robot can be arbitrary, one can consider a circle with the center placed at P and radius ρ > 0 which bounds the maximum space occupied by the robot. The radius ρ can be calculated from Fig. 10 using the law of cosines:

$\rho \geq \sqrt{\, l^2 + \tfrac{1}{4}\left(a^2 + b^2\right) + l\,\sqrt{a^2 + b^2}\,\cos\varphi \,}$,   (28)

where φ ≜ arctan(b/a).
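As a quick numerical check of (28), one may evaluate the bound for the robot dimensions used later in Subsection 4.3 (a = b = 0.075 m); the output offset l below is only an assumed example value, not taken from the paper.

```python
import numpy as np

a, b, l = 0.075, 0.075, 0.05        # a, b from Subsection 4.3; l is an assumed example
phi = np.arctan2(b, a)               # phi = arctan(b/a)
rho = np.sqrt(l ** 2 + (a ** 2 + b ** 2) / 4.0
              + l * np.sqrt(a ** 2 + b ** 2) * np.cos(phi))
print(rho)   # lower bound on the radius of the circle enclosing the robot, Eq. (28)
```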


Remark 2. As a result of the choice of decoupling outputs, stabilization of the robot orientation is not considered in the given algorithm. The robot orientation is regarded as an auxiliary variable which is subordinated to the main control objective, i.e. tracking of the reference trajectory by the output p defined by (27). However, this choice allows one to quite effectively relax the control difficulties arising from the nonholonomic constraints.

Fig. 11. Illustration of the non-zero robot area and the resulting increase of the obstacle size.

Next, in order to take into account the space occupied by the robot, we introduce a virtual elliptical obstacle O* such that

$\forall \xi \in \partial O \quad \mathrm{dist}(\xi, \partial O^*) \geq \rho$,   (29)

where dist denotes the Euclidean distance of a point from the boundary of a plane figure. This problem is illustrated in Fig. 11. Then, we can formulate the following proposition for the control law:

Proposition 1. The control law defined as

$\omega = \Lambda^{-1}(\theta)\,[v_x\ v_y]^T$,   (30)

where

$\Lambda(\theta) \triangleq \frac{r}{2}\begin{bmatrix} 2\frac{l}{d}\sin\theta + \cos\theta & -2\frac{l}{d}\sin\theta + \cos\theta \\ -2\frac{l}{d}\cos\theta + \sin\theta & 2\frac{l}{d}\cos\theta + \sin\theta \end{bmatrix}$   (31)

is an invertible matrix for l ≠ 0, and vx and vy are the velocity components calculated from (20), solves Problem 2, assuming that the parameters of the obstacle and of the rectangular robot satisfy requirement (29) with (28).

Fig. 12. Static case – time plots of error convergence: k = 0.5 (in black), k = 5 (in grey).

Proof. Taking the time derivative of (27) one has

$\dot{p} = \begin{bmatrix} \dot{x}_C \\ \dot{y}_C \end{bmatrix} + l\,\dot{\theta}\begin{bmatrix} -\sin\theta \\ \cos\theta \end{bmatrix} = \begin{bmatrix} \frac{r}{2}\cos\theta & \frac{r}{2}\cos\theta \\ \frac{r}{2}\sin\theta & \frac{r}{2}\sin\theta \end{bmatrix}\omega + l\begin{bmatrix} -\sin\theta \\ \cos\theta \end{bmatrix}\begin{bmatrix} -\frac{r}{d} & \frac{r}{d} \end{bmatrix}\omega$   (32)

$= \Lambda(\theta)\,\omega$,

(33)

where Λ(θ) is defined by (31). Next, considering the global input transformation η ≜ Λ(θ) ω ∈ ℝ², one can obtain the following linear system:

$\dot{p} = \eta$.

(34)

Hence, one can consider (34) as a description of the particle kinematics and assume that

$\eta \triangleq [v_x\ v_y]^T$,   (35)

where vx and vy determine the velocity of the flow calculated based on (20), which completes the proof.
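A minimal sketch of the decoupling law (30)–(31) is given below; it is not the authors' implementation, and r, d, l as well as the source of (vx, vy) are placeholders to be supplied by the planner of Section 3.

```python
import numpy as np

def control(theta, vx, vy, r, d, l):
    """omega = Lambda(theta)^{-1} [vx, vy]^T, Eq. (30); Lambda from Eq. (31)."""
    s, c = np.sin(theta), np.cos(theta)
    Lam = (r / 2.0) * np.array([
        [ 2.0 * l / d * s + c, -2.0 * l / d * s + c],
        [-2.0 * l / d * c + s,  2.0 * l / d * c + s],
    ])                                               # invertible for l != 0
    return np.linalg.solve(Lam, np.array([vx, vy]))  # [omega_L, omega_R]
```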


Remark 3. Considering practical implementation, one should take into account input saturation (resulting from drive limitations or an assumed upper bound). In the static case one can easily guarantee, by proper gain scheduling (i.e. by changing the value of the coefficient k), that saturation will not occur; moreover, this method guarantees that the shape of the robot path is preserved. In the dynamic case one should be aware that the maximum velocity of the robot must be high enough to meet the requirements coming from the reference trajectory and from the flow near the obstacle. In that case one can no longer expect that scaling the gain k makes it possible to enforce an arbitrary upper bound on the input signal ω.

4.3. Simulation results

Fig. 13. Static case – time plots of control input for k = 0.5: ωL (in black), ωR (in grey).

The control law developed in the previous subsection has been verified using numerical simulations. The geometrical parameters of the robot correspond to the MiniTracker 3 robot, namely it has been assumed that a = b = 0.075 m. The parameters of the obstacle O have been selected as in the static case considered in Subsection 3.6; however, in order to meet requirement (29), the virtual obstacle has been appropriately enlarged. The structure of the controller is depicted in Fig. 15. In the static case the initial and desired coordinates were selected as θ(0) = π rad, xC(0) = 0.6 m, yC(0) = 0.5 m, xr = −0.5 m, yr = −0.8 m.



Fig. 17. Dynamic case – time plots of control input for k = 0.5: ωL (in black), ωR (in grey).

Fig. 14. Illustration of the robot path and the obstacle – the stroboscopic view.

Fig. 15. Control scheme diagram.

Fig. 18. Illustration of the robot path and the obstacle (k = 0.5) – the stroboscopic view.

From Fig. 12 one can conclude that the error tends to zero asymptotically, with a convergence rate dependent on the selection of the gain k. The control input, composed of the angular velocities of the wheels, is illustrated in Fig. 13. One can easily notice that the velocity signals are saturated at the assumed value ωmax = 10 rad/s – this is achieved using dynamic scaling of the gain k. The robot's path shown in Fig. 14 allows one to conclude that the control tasks are satisfied: the goal is achieved and the obstacle is avoided in such a way that the area of the robot does not intersect the real obstacle, shown as the smaller white ellipse.

Fig. 16. Dynamic case – time plots of error convergence: k = 0.5 (in black), k = 5 (in grey).

For the dynamic case the circular trajectory with center [−4.5 −4.5]^T m, similar to that described in Subsection 3.6, has been considered. In Fig. 16 the error convergence is presented for different values of the gain k. It can be seen that the error is bounded but does not converge to zero. The bound on the errors in the steady state can be reduced by using a higher value of the gain k. From Fig. 18 one can confirm that the path p is deformed as a result of the influence of the dynamic part of the flow. The control input remains bounded and is saturated using gain scheduling – cf. Fig. 17.

5. Summary

In this work a method of motion planning using harmonic functions, taking advantage of an analytical description of the solution to the Laplace equation, is considered. It takes into account an elliptical obstacle described in a two-dimensional environment with a static or dynamic goal. The discussed method ensures collision avoidance and convergence to the goal. The control algorithm proposed for the two-wheeled robot is verified using numerical simulations. The results allow one to expect proper performance of the algorithm in practical applications. Some advantages of path planning based on harmonic functions over the more classic approach based on simple potential functions are related to the relatively low curvature of the resulting trajectories. As a result, this method can be effectively used for some classes of nonholonomic systems for which this issue becomes critical. Moreover, the results achieved by using harmonic functions may also have an impact on nonholonomic motion planning (cf. for example the dipole-like functions considered in [16]). Additionally, the analytical description in the continuous domain gives the possibility to improve the real-time properties of the planning algorithm and to design closed-loop control algorithms. Furthermore, it is worth pointing out possible modifications and extensions of the considered motion planning/control method. Taking advantage of the smoothness of the trajectories generated by harmonic functions, one can design a control which takes into account stabilization of the configuration of the robot's platform (position and orientation).


Further research directions may also include a generalization of the description to the case of moving obstacles, analytical descriptions of obstacles with different shapes, and taking into account the presence of many obstacles.

Acknowledgment

This work was supported by the Ministry of Science and Higher Education, grant No. N514 023 32/3262.

AUTHORS
Paweł Szulczyński* – Chair of Control and Systems Engineering, Poznań University of Technology, ul. Piotrowo 3a, 60-965 Poznań, POLAND, e-mail: pawel.szulczynski@put.poznan.pl, www: http://etacar.put.poznan.pl/pawel.szulczynski/
Dariusz Pazderski – Chair of Control and Systems Engineering, Poznań University of Technology, ul. Piotrowo 3a, 60-965 Poznań, POLAND, e-mail: dariusz.pazderski@put.poznan.pl, www: http://etacar.put.poznan.pl/dariusz.pazderski/
Krzysztof Kozłowski – Chair of Control and Systems Engineering, Poznań University of Technology, ul. Piotrowo 3a, 60-965 Poznań, POLAND, e-mail: krzysztof.kozlowski@put.poznan.pl, www: http://control.put.poznan.pl
* Corresponding author

References
[1] S. Akishita, S. Kawamura, and K. Hayashi. Laplace potential for moving obstacle avoidance and approach of a mobile robot. In Japan-USA Symposium on Flexible Automation, A Pacific Rim Conference, pages 139–142, 1990.
[2] R. C. Arkin. Principles of Robot Motion Theory, Algorithms and Implementation. MIT Press, Boston, 2005.
[3] A. Behal, W. Dixon, D. M. Dawson, and B. Xian. Lyapunov-Based Control of Robotic Systems. CRC Press, 2010.
[4] G. Campion, G. Bastin, and B. D'Andrea-Novel. Structural properties and classification of kinematic and dynamic models of wheeled mobile robots. IEEE Transactions on Robotics and Automation, 12(1):47–62, 1996.
[5] C. I. Connolly, J. B. Burns, and R. Weiss. Path planning using Laplace's equation. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 2102–2106, 1990.
[6] I. Dulęba. Algorithms of Motion Planning for Nonholonomic Robots. Publishing House of Wrocław University of Technology, 1998.
[7] H. J. S. Feder and J. J. E. Slotine. Real-time path planning using harmonic potentials in dynamic environments. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 874–881, April 1997.


[8] B. Girau and A. Boumaza. Embedded harmonic control for dynamic trajectory planning on FPGA. In Proceedings of the 25th IASTED International Multi-Conference: Artificial Intelligence and Applications, pages 244–249, Anaheim, CA, USA, 2007.
[9] O. Khatib. Real-time obstacle avoidance for manipulators and mobile robots. International Journal of Robotics Research, 5:90–98, 1986.
[10] J. Kim and P. Khosla. Real-time obstacle avoidance using harmonic potential functions. IEEE Transactions on Robotics and Automation, pages 338–349, 1992.
[11] D. Koditschek. Exact robot navigation by means of potential functions: Some topological considerations. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 1–6, 1987.
[12] S. M. LaValle. Planning Algorithms. Cambridge University Press, Cambridge, U.K., 2006. Available at http://planning.cs.uiuc.edu/.
[13] C. Louste and A. Liégeois. Path planning for non-holonomic vehicles: a potential viscous fluid field method. Robotica, 20:291–298, 2002.
[14] S. Mastellone, D. M. Stipanović, C. R. Graunke, K. A. Intlekofer, and M. W. Spong. Formation control and collision avoidance for multi-agent non-holonomic systems: Theory and experiments. The International Journal of Robotics Research, 27(1):107–126, 2008.
[15] L. M. Milne-Thomson. Theoretical Hydrodynamics. Dover Publications, 1996.
[16] D. Panagou, H. G. Tanner, and K. J. Kyriakopoulos. Dipole-like fields for stabilization of systems with Pfaffian constraints. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 4499–4504, Anchorage, Alaska, USA, 2010.
[17] E. Rimon and D. E. Koditschek. The construction of analytic diffeomorphisms for exact robot navigation on star worlds. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 21–26, 1989.
[18] G. P. Roussos, D. V. Dimarogonas, and K. J. Kyriakopoulos. 3D navigation and collision avoidance for a non-holonomic vehicle. In Proceedings of the American Control Conference, pages 3512–3517, Seattle, WA, USA, 2008.
[19] R. Soukieh, I. Shames, and B. Fidan. Obstacle avoidance of non-holonomic unicycle robots based on fluid mechanical modeling. In Proceedings of the European Control Conference, pages 3269–3274, Budapest, Hungary, 2009.
[20] S. Waydo and R. M. Murray. Vehicle motion planning using stream functions. In Proceedings of the IEEE International Conference on Robotics and Automation, pages 2484–2491, 2003.



TERRAIN MAP BUILDING FOR A WALKING ROBOT EQUIPPED WITH AN ACTIVE 2D RANGE SENSOR

Przemysław Łabęcki, Dawid Rosiński, Piotr Skrzypczyński

Abstract: This paper deals with problems of rough terrain perception and mapping for walking robots equipped with inexpensive optical range sensors providing 2D data only. Two different sensing modalities are considered: the structured light sensor, and the Hokuyo URG-04LX laser scanner. Measurement uncertainty in both sensors is taken into account, and different geometric configurations of these sensors on the walking robot are analysed, yielding the configurations that are best for the task of terrain perception. Then, application of the acquired range data in local terrain mapping is presented. The mapping algorithm as well as novel methods for removing map artifacts that result from qualitative errors in range measurements are detailed. Experimental results are provided. Keywords: walking robot, laser scanner, structured light, sensor model, map building.

1. Introduction

Nowadays, walking robots are increasingly employed in environments where the classic 2D maps are insufficient. A perceived terrain map is required by the robot's control system in order to avoid obstacles and to select safe footholds. However, in a walking robot the exteroceptive sensors have to be compact and light-weight. Although vision is a popular sensing modality in walking machines [9], active laser sensors offer more reliable terrain perception than most of the vision-based approaches, consuming usually a fraction of the computing power required by passive vision. Although laser scanning is now a matured robotics technology, the available 3D laser scanners are bulky, power consuming and expensive. On the other hand, the recently introduced technology of the time-of-flight (ToF) 3D cameras is still in its infancy, and does not offer enough reliability [20].


In order to overcome the above-mentioned limitations we experiment with active, laser-based 2D range sensors for terrain perception in walking robots. Distance measurements in laser range sensors are accomplished either by triangulation, or by the time-of-flight principle, which in turn can be implemented in several ways [1, 23]. A triangulation-based range sensor may be constructed by adding a laser stripe projector to the on-board camera, which is present in the robot because of the requirements of the teleoperation interface [13], as it was shown on our small Ragno hexapod (Fig. 1a). This makes the range sensor lightweight and very cheap, but the structured light ranging principle has important drawbacks related to the ambient light [17]. Therefore, we experiment also with the Hokuyo URG-04LX 2D laser scanner as the terrain sensor on our bigger hexapod robot Messor (Fig. 1b). The URG series are new generation miniature laser scanners that, unlike their older counterparts (e.g. the popular Sick LMS sensors), can be used on a middle-sized walking robot like Messor [3]. The scanner is tilted down, so the laser beam plane sweeps the ground ahead of the robot, enabling it to sense the terrain profile [15]. Both systems, the structured light sensor and the tilted URG-04LX scanner, provide only 2D data about a terrain stripe located in front of the robot. To obtain a terrain map the sparse range data have to be registered in a map using an estimate of the robot motion. The aim of this paper is to discuss the issues related to terrain perception with 2D active sensors, and to show that sparse 2D range data can be efficiently used to build a local terrain map, which serves the purpose of safe feet placement. For this task a grid-type elevation map may be applied, which is easy to update in real-time, and can be directly used to select proper footholds [2]. However, an appropriate map updating algorithm is required, which takes into account the uncertainty of range measurements. In this paper the grid-based elevation map approach to the terrain mapping problem is employed for both sensing modalities under study. Basically, the same mapping algorithm is used; however, it turned out that depending on the sensing modality this algorithm has to be augmented with additional procedures that remove the map artifacts or spurious range data. The remaining part of the paper is organized as follows. In Section 2 we discuss related work in terrain mapping from range data. Sections 3 and 4 describe in a unified manner the structured light sensor and the laser scanner, respectively. Section 5 describes our map building method. Section 6 reports experimental results and provides a comparison between mapping results achieved with the two sensing systems, and Section 7 concludes the paper.

Fig. 1. Walking robots: Ragno (a) and Messor (b).

2. Related work

Triangulation with laser light has been used in many different robotic systems before, addressing different ap-




plication areas, from assistive technology for vision impaired users [6], through affordable range sensing for commercial vehicles [17], to obstacle avoidance for planetary rovers [16]. However, this sensing modality was not used previously in a walking robot for terrain perception in any published work of which we are aware. Laser scanners are widely used in mobile robotics, and were the subject of many analysis in the literature. However, the Hokuyo URG-04LX is a relatively recent development. As for now, a more detailed characterization of this laser scanner is given in [8] and [18]. Both papers analyze the dependency between the accuracy of distance measurements and the observed surface properties, distance to the target, and laser beam incidence angle. Moreover, [18] notices also the existence of mixed measurements produced by this sensor. The qualitative errors known as mixed measurements or “mixed pixels” are caused by the interaction of the laser beam with particular objects in the environment. They occur when the laser beam hits simultaneously two objects at different distances or two surfaces having different reflective properties [24]. The elevation map approach was used for the first time in [10], but it is used also on other robots working in rough terrain, e.g. the Lauron III [4], and the more recent Lauron IV [21] that builds a 2.5D height map holding both height and credibility values. This approach is similar to the one presented in this paper, but in [21] a ToF camera is used, that enables direct 3D perception, while we show that an elevation map can be built efficiently with sparse range data from the 2D range sensors. Sparse range data can be also used to estimate specific properties (shapes, dimensions) of a complex environment during walking or climbing, like in [11], but such an approach does not yield a terrain map that can be used for more general motion planning. Recent research in terrain modelling resulted in methods that do not assume a fixed discretization of the space, such as the work of Plagemann et al. [19], which applies Gaussian process regression to the problem of uneven terrain mapping. While this approach is promising, enabling to fill-in large discontinuities that can appear in a map constructed with 3D data from a long-range sensor, it is computation-intensive, and cannot work in real-time on the embedded computer of our robot. In contrast, the map building system we present here is conceived as a source of information for the foothold selection module, and optimized for that purpose. A relatively small local map is built using a short-range 2D sensor specially configured to yield dense data from a limited area located in front of the robot. Thus, missing data can be filled-in by a much simpler method, while our goal is an artifact-free map that can be constructed on-board in real time.

3. Terrain perception with the structured light sensor

3.1. Structured light sensor

The motivation for using a structured light sensor on the small walking robot Ragno [25] stems primarily from the fact that we wanted to use the on-board camera for teleoperation, but still be able to use it at the same time for ground profile acquisition supporting the foothold selection module [2]. The Ragno robot was equipped with the


VOLUME 5,

N◦ 3

2011

webcam-class colour camera Phillips NPC 1300/NC and a visible red light laser stripe emitter, forming together a structured light sensor. The laser stripe projector is an integrated unit obtained, together with the spherical lens and the electronics, from a dismantled commercial laser level tool. The principle of operation of the triangulation-based range sensor is shown in Fig. 2. The laser beam is passed through a cylindrical lens, which defocuses it in the horizontal plane, but leaves it unaffected in the vertical direction. Thus, a light sheet is created, which intersects objects in front of the robot. If the laser sheet emitter is tilted down by the angle β, the light plane intersects the ground ahead of the robot, enabling it to sense the terrain profile. laser stripe projector

u

b

b tc zc g

xc camera

c

v

yc

Fig. 2. Geometry of the structured light sensor and camera image. A camera, which optical center is located at the distance b from the emitter perceives the stripe of laser light reflected by the illuminated surface. For the sake of simplicity we assume that the emitter is located within the plane of the camera CMOS sensor. The field of view of the camera is defined by the horizontal viewing angle 2γc and the vertical viewing angle 2τc . The relation between the location of points in the image u, v and the location of scene points xc , yc , zc with regard to (w.r.t.) the camera coordinates is given by the equations: zc =

b 2v Rver ) tan τc

(1 − + cot β 2u xc = zc 1 − tan γc , Rhor cot β yc = −b 1 − 2v (1 − Rver ) tan τc + cot β

(1) ! ,

where Rver and Rhor are the vertical and horizontal size of the considered image (in pixels), respectively. Equations (1) define a transformation from the image coordinate frame [u v]T to the 3D coordinates [xc yc zc ]T . Parameters of this transformation are given by the vector [b β τc γc ]T . These parameters should be known as precisely, as possible, so they are obtained by calibration. The main calibration procedure consists of two stages: the first one is a standard camera calibration procedure, while the second one is calibration of the geometric parameters of the emitter-camera system [14]. The latter calibration procedure provides the b and β parameters. When the actual focal length f and center of the image are known, a symmetrical region of interest window is defined, with the height Rver and width Rhor . Using these parameters the


Journal of Automation, Mobile Robotics & Intelligent Systems

viewing angles τc and γc are computed: Rver Rhor τc = arctan , γc = arctan , 2f 2f

a

a

15

40

sZ [mm]

sX [mm]

50

10 5

30 20 10

0 2000

0 2000

1500 1000 zC[mm]

500

0

-1000

-500

0 xC[mm]

500

1000

1500 1000 zC[mm] 500 0 -1000

-500

0

500

zc

zc yc

yc

b

60

20

2011

c

b

(2)

3.2. Measurement uncertainty of the structured light sensors Main causes of errors in the measurements of the scene points location by means of the structured light sensor as well as sensitivity of the measurements to the parameters of the camera-projector system are analysed in [13]. Details of the error propagation procedure are given in [14]. 25

N◦ 3

a

where f is given in pixels. Detection of the laser stripe on the RGB camera image is accomplished by thresholding of the colour image, and then by elimination of the remained artifacts using some morphological operations performed on the binary image. This method was introduced in [12], and its slightly modified version used in our system is detailed in [13].

30

VOLUME 5,

1000

xC[mm]

Fig. 3. Uncertainty depending on the location of a point w.r.t. the sensor coordinates. Here we focus on the spatial resolution of the structured light sensor, which is important for the proper choice of the geometric configuration of the sensor mounted on the robot. This resolution depends on the resolution of the camera, and on the distance between the camera and the measured point [13, 14]. Figure 3 shows the analytically obtained dependence between the location of the measured point and the spatial uncertainty of the measurement for an example sensor configuration: VGA image resolution, b = 100 mm, β = 70◦ , τc = 19.1◦ , γc = 24.8◦ . Because the yc and zc coordinates are coupled – they both depend only from v (1) – the uncertainty is shown only w.r.t. the xc zc plane, while the camera is assumed to be at the xc = 0, zc = 0 coordinates. The uncertainty along the xc axis grows only slightly with the distance from the sensor, but depends on the lateral distance from the camera optical axis (Fig. 3a). In contrary, the uncertainty along the zc axis grows hyperbolically with the growing distance from the sensor (Fig. 3b). 3.3. Sensor configuration Unfortunately, in a triangulation-based range sensor the parameters required for high resolution and low uncertainty of the measurements conflict with the requirements as to the field of view and operational characteristics. Thus, the final sensor design must be a compromise. In order to reduce the uncertainty the spatial resolution of the measurements should be improved by decreasing the distance along the zc axis between the measured point and the camera. To implement this we can decrease the β angle, or tilt the whole sensor by some α angle (Fig. 4a and b).

z1

z2 < z1

Fig. 4. Tilting the sensor towards the ground. Because the robot uses the obtained terrain map mainly for foothold selection, it makes sense to configure the sensor in such a way that a relatively small area immediately ahead of the robot is measured with good resolution. However, a constraint is the field of view along the xc axis. The robot should perceive a stripe of terrain wide enough to put all feet on the known ground (Fig. 4c). An advantage of a sensor that is tilted towards the ground is reduction of mutual occlusions between the observed obstacles on an uneven terrain. Figure 5 presents results of a simulation of a robot equipped with the structured light sensor moving through a random stepfield consisting of boxes of various height. Multiple occlusions between the boxes cause many missing distance measurements (Fig. 5a). Decreasing the β parameter by 10◦ reduces the number of occlusions, and reduces the uncertainty of range measurements, which is coded by the grayscale level of the simulated laser stripe (Fig. 5b). Tilting the sensor by α=10◦ (instead of decreasing the β parameter) reduces the occlusions and uncertainty of the measurements even a little more (Fig. 5c). However, both of those configurations make the observed area of the ground more narrow. Considering all these constraints, we chose the final set of parameters for the structured light sensor prototype mounted on the Ragno robot: τc = 28.14◦ , γc = 38.8◦ , b = 120 mm, β = 45◦ , and α = 10◦ . When the Ragno robot assumes its neutral posture the camera is located 17.4 cm above the ground, which results in the maximum measured range of 29.4 cm (on the ground plane).

4. Terrain perception with the laser scanner 4.1. URG-04LX scanner In most of the laser scanners the distance is measured either by directly determining the time of flight of an emitted laser pulse traveling to a target and then reflected back (ToF principle), or indirectly, by determining the phase-shift between an amplitude modulated continuous wave and its reflection. The Hokuyo URG-04LX 2D laser scanner exploits the phase-shift-based ranging technique. The range to the target object r is proportional to the measured phase shift φd : r=

c 1 φd φd = λAM , 4πfAM 2 2π

(3)

where fAM is the modulation frequency, λAM is the wavelength of the modulation, and c is the speed of light. Since φd is determined modulo 2π, the range measurement is unambiguous only within the distance ru = 12 λAM from the sensor. The URG sensor uses two different modulation frequencies in order to overcome this ambiguity. Thus, it achieves both good distance measurement resolution of Articles

69


Journal of Automation, Mobile Robotics & Intelligent Systems

VOLUME 5,

N◦ 3

2011

Fig. 5. Simulation of three configurations of the structured light system. 1 mm and a satisfactory maximum range of 4.095 m [7]. The scanner produces 683 measurements per revolution within the angular range of 240◦ . Complete scans are taken with a frequency of 10 Hz. The sensor weighs only 160 g and has a size of 50 × 50 × 70 mm. The URG-04LX specification defines its distance measurement accuracy for a white sheet of paper as 10 mm between 20 mm and 1000 mm, and 1% of the measured distance between 1 m and 4 m. As a range sensor based on the phase-shift principle the URG-04LX is characterized by a coupling between the accuracy of the measured distance and the received signal intensity. This coupling introduces important systematic errors into the distance measurements, which cannot be calibrated as in [1], because the URG scanner (at least with the standard software) does not allow for direct signal intensity measurements. Both [8] and [18] propose calibration models for reducing the systematic errors. However, the linear model of [18] results in large residual distance errors whenever it is used in terrain mapping to correct distances smaller than 1000 mm at incidence angles of about 30◦ . Also the model given in [8] is not applicable, as it tries to describe the behaviour of the range measurements up to 4000 mm with one non-linear function, which makes its precision insufficient. 20

range measurement error [mm]

10

0

-10

raw measurements

-20

calibration function after calibration

-30

-40 0

100

200

300

400

500

600

measured distance [mm]

700

800

900

Fig. 6. Calibration of the URG-04LX measurements. Therefore we have established our own calibration model, which is intended to capture behaviour of the URG sensor in the application of terrain profile acquisition. The experiment was performed with a grey paper target (R=G=B=128) for the incidence angle of 30◦ , and the measured distances from 50 to 1000 mm. The results of range measurements are shown in Fig. 6 using a solid line, with the third order polynomial calibration curve overlaid 70

Articles

using a dashed line. The resulting calibration is given as: r = rm + ∆rm , ∆rm = 38.6 + 0.18rm + 2 3 + 0.3 × 10−3 rm − 0.2 × 10−7 rm ,

(4)

where rm and r are the raw and the corrected range measurement, respectively. The calibration procedure given by (4) is valid only for the interval of distances from 50 mm to 1000 mm, but it is precise enough for the specific application of terrain perception. For instance, the systematic error of the corrected measurements is kept within the boundaries specified by the sensor’s manufacturer, as shown in Fig. 6 by the dotted line that fits within the slanted area. 4.2. Sensor configuration The mechanical design of the robot platform gives a possibility for two different variants of the sensing system [3]. The first one has the URG scanner attached between the two aluminium plates constituting the chassis (Fig. 8a), while in the second variant the scanner is mounted on the upper deck plate (Fig. 8b). There are few sensor-specific requirements which should be taken into account when configuring the system. It was found in [18] that some targets produce specular reflections, but for most matted-surface targets these errors are insignificant if the incidence angle is smaller than 30◦ . Considering this result we configured our URG sensor in such way that the parts of terrain most interesting for the placement of robot feet, i.e. nearly-flat, horizontal surfaces are perceived with small incidence angle. Another concern is the non-linearity of the measurement errors in function of the distance. The sensing system configuration should ensure that target surfaces of heights from −10 to 15 cm (with regard to the local ground level), which can be climbed by the Messor robot will appear at distances falling into the most favorable range interval from 200 to 900 mm, which ensures lowest measurement errors (cf. Fig. 6). Considering the above-mentioned requirements, we simulated three different configurations of the terrain profile sensing system. The first one (Fig. 7a) with the scanner mounted at the height of dz = 16 cm and the pitch angle α = 15◦ , the second one (Fig. 7b) with the scanner mounted at the height of dz = 26 cm and the angle α = 20◦ , and the third one (Fig. 7c), which has the scanner mounted also on the upper deck of the robot, but at the angle of 35◦ . The simulations present a walking robot equipped with the URG-04LX sensor in the three mentioned configurations moving through a randomly generated terrain.


Journal of Automation, Mobile Robotics & Intelligent Systems

VOLUME 5,

N◦ 3

2011

16.0

range measurement error [mm]

14.0 12.0

a

b

c

10.0 8.0 6.0 4.0 2.0 0.0 -2.0 -4.0

dz=16cm, a=15o

dz=26cm, a=20o

dz=26cm, a=35o

Fig. 7. Simulation of three configurations of the Hokuyo-based sensing system. a

c

Zs Ym Ys

8

2.

=6

x

hmax=15 [cm]

r ma

m [c ]

hmin=-10 [cm]

Zm

a=35o

]

b

m [c

dz=26 [cm]

2 9.

=1 r min

assignment of the coordinate systems

ymin=15.7 [cm]

ymax=51.4 [cm]

Fig. 8. Two variants of the terrain sensing system design (a,b), and geometry of the sensing system on the Messor robot (c). The systematic error of range measurements is coded by the grayscale level of the simulated laser footprints on the observed objects. This error is computed according to the model given by (4). As it can be seen from Fig. 7a the first configuration of the sensor causes multiple occlusions between the obstacles, and as a result many missing distance measurements. Putting the sensor on the upper deck of the robot increases the field of view and reduces the number of occlusions, but increases the range measurement errors, as the observed obstacles are located too far (Fig. 7b). Tilting the sensor by α=35◦ further reduces the number of occlusions, and alse reduces the uncertainty of range measurements (Fig. 7c), as in this configuration the ground is closer to the sensor along the line of sight. Although the increased pitch angle makes the observed area of the ground more narrow and reduces the lookahead distance, these drawbacks are less profound in our application: the observed ground stripe is still wide enough to put the robot’s feet on it, while the look-ahead distance is not so important because the robot uses the terrain map for foothold selection rather than for dynamic obstacle avoidance. Thus, it makes sense to configure the scanner in such a way that a small area immediately in front of the robot is perceived with high accuracy and without much occlusions. Therefore we finally attached the URG-04LX scanner to the upper deck of our robot at the height of dz = 26 cm, and the pitch angle of α = 35◦ (Fig. 8c).

5. Map building method 5.1. Local terrain map concept In the elevation grid map each cell holds a value that represents the height of the object at that cell [10]. However, a classic elevation map provides no means to com-

pensate such undesirable effects as missing data and range measurement artifacts. Therefore, we developed a map updating method, which is inspired by the algorithm of Ye and Borenstein [26], originally conceived for a wheeledd vehicle with the Sick LMS 200 scanner. The terrain map consists of two grids of the same size: an elevation grid and a certainty grid. The elevation grid holds values that estimate the height of the terrain, while each cell in the certainty map holds a value that describes the accuracy of the corresponding cell’s estimate of the elevation. A sensorcentered local grid of the size 100 × 100 cells is used. The cell size varies depending on the sensor and the walking robot used. For the Ragno robot with the structured light sensor the cell size is 5×5 mm, while for the bigger Messor robot cells of the size 10 × 10 mm or 15 × 15 mm are used, depending on the expected self-localization uncertainty. The local map that integrates the sparse range measurements moves with the robot always covering its surroundings. As in [26] it is assumed that an estimate of the 6-d.o.f. robot pose is available. For a legged robot such an estimate is not easy to obtain. The sparse data from the 2D range sensors we use cannot be employed for that purpose, because there is a lack of any significant overlapping between the new measurements and the recently perceived part of the terrain. For small local maps centered in the robot coordinates, and used for foothold selection the proprioceptive sensing exploiting the feet contacts and the Inertial Measurement Unit (IMU) is enough [5], while for exploration of more extended areas a vision-based SLAM (Simultaneous Localization and Mapping) procedure can be used [22]. 5.2. Map updating algorithm Regardless of the sensing modality used (i.e., structured light or 2D scanner) the range measurements are converted to 3D-points ps in the sensor coordinate frame, then transformed to the map coordinates by using the robot pose estimate, and finally projected onto the 2D grid map: pm = Tm s ps , Tm s

=

(5)

00 Rot(Xm , ϕr )Rot(Ym0 , ψr )Rot(Xm , α),

where ps =[xs ys zs ]T and pm =[xm ym zm ]T are coordinates of the measured point in the sensor and the map frame, respectively. The homogeneous matrix Tm s describes the transformation from the sensor frame to the map frame. This transformation consists of three rotations: the pitch ϕr and roll ψr angles of the robot trunk, and the α angle Articles

71


Journal of Automation, Mobile Robotics & Intelligent Systems

that represents the constant pitch angle of the sensor w.r.t. the robot frame. These rotations are shown in Fig. 9a,b, and c respectively, for a robot with the laser scanner, but they are the same for the structured light sensor.

a

b

m

'm

' m r

m

'' m

m

''m r

c

d

m

m

N◦ 3

2011

For the structured light sensor this prediction is slightly more complicated:

∂zm

∆zmax = d(k,k+1) tan γ +

∆b

+ ∂b

∂zm

∂zm

+

∆β

+

∆τc

+ ∂β ∂τc

∂zm ∂zm

+

∆b

+

∆v

+ ∂b ∂v

∂zm

∂zm

+

∆ψr

+

∆ϕr

+ ∂ψr ∂ϕr

∂zm

+

∆α

, ∂α

(8)

s

m

m

m s

d

newmax

old

d

Fig. 9. Kinematic transformations of the measurements (a, b, c) and maximum change of elevation (d). Once the measured points are available in the map coordinates, a plausibility assessment is accomplished in order to check if a newly obtained measurement is consistent with the existing elevation map and the constraints imposed by the movement of the robot. To this end a prediction of the maximum instantaneous change of elevation is used (Fig. 9d). For every two consecutive measurements rk and rk+1 , the change of elevation ∆zmax in a given cell of the map is computed taking into account the range measurement uncertainty, and the uncertainty of robot pose estimate. For the laser scanner sensor this prediction is given by: ∆zmax

VOLUME 5,

∂zm ∂zm

= d(k,k+1) tan γ +

∆r +

∆ψr

+ ∂r ∂ψr

∂zm

∂zm ∆ϕr

+

∆α

, (6) +

∂ϕr ∂α

because the term giving the dependence between the uncertainty of the measured elevation and the range measurement uncertainty is replaced by four elements describing how the uncertainty of the perceived elevation depends on the errors in parameters of the camera-projector system (1). The values of ∆b, ∆β, and ∆τc are determined during the calibration procedure [14]. The ∆v value is an estimate of the error in the vertical location of the laser stripe image on the camera matrix. The maximum elevation change given by (6) is valid only if the robot is moving along a straight line. However, the walking robot often changes its orientation (the yaw angle θr ) during motion because it has to put its feet at proper footholds. To enable computation of the ∆zmax while changing the orientation, the idea given in [26] is extended to include also the turning motion. We assume that the robot observes an obstacle of constant elevation, and turns by an angle of θ. Hence, the distance between the two points being observed by the sensor, p and p0 can be computed from the intersection of the straight lines at p0 , provided that the location of p is known (Fig. 10). Because we are interested only in the distance increment, without loss of generality we can let xp = 0, and compute the distance as ∆x = xp0 = (r + ds ) tan(θ/2)sgn(θ). Then, the instantaneous horizontal translation of the sensor is computed: d(k,k+1) = ∆x tan(θ), and used in (6) or (8). (x p' )

x

(x p )

where zm is the measured elevation of the observed point computed from (5), d(k,k+1) is a horizontal translation of the robot from k to k +1 time stamp, and γ angle is the total rotation of the trunk w.r.t. the Xm axis, which is obtained from the elements of the Tm s matrix: γ = atan2

cos ϕr sin α + sin ϕ cos α cos ψr − sin ϕr sin α + cos ϕ cos α cos ψr

.

(7) The values of ∆ϕr , ∆ψr and ∆α are maximum errors of the respective angles, while ∆r is the laser scanner range measurement error, which is computed upon the sensor model (4). The value of ∆α is 1◦ (a constant), but the values of ∆ψr and ∆ϕr depend on the accuracy of the IMU sensor used in the robot. 72

Articles

Fig. 10. Determination of the elevation change while turning. A cell in the elevation map is denoted as h[i,j] , and a cell in the certainty map as c[i,j] . Whenever a new measurement


Journal of Automation, Mobile Robotics & Intelligent Systems

VOLUME 5,

2011

URG-04LX

is available cells in the certainty map are updated at first:  [i,j] [i,j]    c(k) + a if |hm(k+1) − h(k) | ≤ |∆zmax(k) | [i,j]

N◦ 3

obsta

situation 1 situation 2

cle

[i,j]

c(k+1) =

or c(k) = 0

[i,j]

c(k) otherwise,

(9) where a is the increment of the certainty value, and hm is the elevation of the measured point, computed as hm = zm + dz + href taking into account the current height of the robot dz (it is computed upon the angles measured in joints), and the reference elevation href at which the robot is located (obtained from the already created part of the map). Next, cells in the elevation map are updated: ( [i,j] hm(k+1) if hm(k+1) > h(k) [i,j] h(k+1) = (10) [i,j] h(k) otherwise. 5.3. Removal of artifacts and mixed measurements The main advantage of the elevation map variant proposed by Ye and Borenstein [26] is the built-in mechanism for filtering of the artifacts. This mechanism is known as the Certainty Assisted Spatial (CAS) filter, and it employs the constraints on the spatial continuity of both the elevation map and the certainty map in order to remove corrupted values from the elevation map. Another useful mechanism is the weighted median filter that enables filling-in of the unseen cells with values interpolated from the neighboring cells of known elevation. We implemented the CAS filter in our mapping system and tested it sucessfully using the data from the structured light sensor [13]. However, during tests of the mapping system with the URG-04LX scanner on various obstacles the CAS filter failed to remove most of the elevation map artifacts due to mixed measurements. Different spatial characteristics of the mixed range measurements in the ToF-based LMS 200 used in [26], and the phase-shift-based URG scanner are a possible explanation of this behaviour. Because of the range–intensity coupling phenomenon [1] in the phase-shift-based sensor the mixed measurements can appear not only between the two surfaces being observed, but also behind the farther object, or in front of the closer object. This may be a reason for false updates in the certainty map, and henceforth the erratic behaviour of the CAS filter, which unnecessarily deletes the elevation values in some cells of the map on the basis of their low certainty and a lack of spatial continuity in the elevation map and the certainty map. For that reason the CAS filter algorithm rule for removing the artifacts is not used with the URG-04LX sensor. However, the weighted median filter proposed as the mechanism that fills-in the missing data is still used. The output hwm of the filter is assigned to each cell in the elevation map that is unknown, i.e., has the value of c[i,j] =0. [i,j] hwm if c[i,j] = 0 [i,j] h = (11) h[i,j] otherwise. This mechanism enables to fill-in small portions of the elevation map that are invisible to the sensor due to occlusions. Because the CAS filter is unable to remove the artifacts due to mixed measurements, the erroneous range data

Fig. 11. Typical causes of mixed pixels in terrain profile acquisition. should be eliminated at the pre-processing stage to avoid erratic behaviour of the map-building procedure. The methods for detection of mixed measurements known from the literature require the intensity information [1], or they are based on accumulation of the range evidence in some form of local environment representation [24]. Unfortunately, the intensity output is not available, and it is not possible to accumulate the range measurements, because the scanner beam sweeps the terrain in front of the robot, and most of the points are measured only once. a

b

c

ys [mm]

  

xs [mm]

xs [mm]

xs [mm]

Fig. 12. Removal of mixed measurements by using the clustering approach. Hence, the spatial and temporal dependencies between the laser scanner measurements in the tilted configuration are exploited to solve the mixed measurements problem. On the basis of our experimental observations it is assumed that typically the mixed measurements arise in two situations (Fig. 11): 1) when a beam hits a very thin object or a sharp edge that is vertical or nearly-parallel to the horizontal direction of the beam, 2) when the laser beam plane aimed downward hits a sharp edge that is roughly perpendicular to the direction of the beams.

a

b

Fig. 13. Example results of the clustering approach to mixed measurements removal. The first situation results in a single mixed measurement or a small group of points that are spatially isolated. Therefore, such mixed measurements can be removed reliably by clustering the measurements and analyzing discontinuities in the scanned sequence. We use the Range Clustering algorithm proposed in our previous work [23], Articles


which is computationally efficient and based on the underlying physics of the laser range measurements. Figure 12a shows a single scan taken by the tilted URG-04LX scanner (which is located at the origin of the coordinate system). The range measurements belonging to this scan are then clustered, and the separated groups are indicated by different shades of gray in Fig. 12b. The same sequence of measurements is shown in Fig. 12c with the small, isolated groups removed (marked in dark gray). Results of using this procedure are shown in Fig. 13, where the artifacts visible in the elevation map (Fig. 13a) on the sides of the box disappear when the mixed measurements are removed from the input data (Fig. 13b).
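A minimal C++ sketch of the scan segmentation and small-cluster rejection described above; the distance threshold and the minimum cluster size are illustrative assumptions, and the code does not reproduce the exact Range Clustering algorithm of [23].

    #include <cmath>
    #include <vector>

    struct Point2D { double x, y; };   // scan point in the scanner frame (assumed representation)

    // Split one scan into clusters at large jumps between consecutive points,
    // then drop clusters that are too small to be a real surface patch.
    std::vector<Point2D> removeIsolatedGroups(const std::vector<Point2D>& scan,
                                              double breakDist = 0.05,   // [m], assumed threshold
                                              std::size_t minSize = 3)   // assumed minimum cluster size
    {
        std::vector<std::vector<Point2D>> clusters;
        for (const Point2D& p : scan) {
            if (!clusters.empty()) {
                const Point2D& q = clusters.back().back();
                if (std::hypot(p.x - q.x, p.y - q.y) < breakDist) {
                    clusters.back().push_back(p);
                    continue;
                }
            }
            clusters.push_back({p});   // a discontinuity starts a new cluster
        }
        std::vector<Point2D> filtered;
        for (const auto& c : clusters)
            if (c.size() >= minSize)   // small, isolated groups are treated as mixed pixels
                filtered.insert(filtered.end(), c.begin(), c.end());
        return filtered;
    }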


Fig. 14. Removal of mixed measurements by analysing the consecutive scans.

In the second situation described above the group of mixed pixels might be large and appear as an extension of the actual structures detected by the sensor. To eliminate these spurious measurements we check the spatial continuity between three neighboring scans: s(k-1), s(k), and s(k+1). First, we convert the range measurements to 2D points in the scanner coordinates, and extract groups of co-linear points from the first scan s(k-1), looking for structures that might be edges roughly perpendicular to the scanning direction (Fig. 14a). If such a structure is found, we define an angular sector of measurements that contains this structure. Then, we check the scattering of points along the vertical direction in the second scan s(k) (Fig. 14b) by computing the standard deviation:

\sigma^{y}_{s(k,j)} = \sqrt{\frac{\sum_{i=p_l}^{p_r}\left(\bar{y}_{s(k,j)} - y^{i}_{s(k,j)}\right)^{2}}{p}},    (12)

where y^{i}_{s(k,j)} is the y_s coordinate of the i-th point in the j-th sector of the k-th scan, \bar{y}_{s(k,j)} is the mean y_s value in the considered angular sector, p_l and p_r are the limits of this sector, and p is the total number of points within these limits. If the points in the sector of scan s(k) that was identified as containing an edge have a standard deviation of their y_s values much bigger than the same parameter computed for the scan s(k+1), next in the sequence (Fig. 14c), they are treated as mixed pixels and removed. This method effectively removes mixed measurements that appear behind obstacles having sharp horizontal edges, which is visible in the example given in Fig. 15. The artifacts caused by a group of mixed measurements (Fig. 15a) are removed without destroying the shape of the mapped object (Fig. 15b).
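A minimal C++ sketch of the sector-scatter test of Eq. (12); the comparison factor used to declare a sector as mixed is an illustrative assumption, as the article does not give a numerical threshold.

    #include <cmath>
    #include <vector>

    // Standard deviation of the y_s coordinates inside one angular sector of a scan,
    // following Eq. (12); the sector is given as a range of point indices [pl, pr].
    double sectorStdDevY(const std::vector<double>& ys, std::size_t pl, std::size_t pr) {
        const double p = static_cast<double>(pr - pl + 1);
        double mean = 0.0;
        for (std::size_t i = pl; i <= pr; ++i) mean += ys[i];
        mean /= p;
        double sum = 0.0;
        for (std::size_t i = pl; i <= pr; ++i) sum += (mean - ys[i]) * (mean - ys[i]);
        return std::sqrt(sum / p);
    }

    // A sector of scan s(k) is flagged as mixed if its scatter greatly exceeds that of
    // the next scan s(k+1); the factor 3.0 is an assumed value used only for illustration.
    bool isMixedSector(const std::vector<double>& ysCurrent, const std::vector<double>& ysNext,
                       std::size_t pl, std::size_t pr, double factor = 3.0) {
        return sectorStdDevY(ysCurrent, pl, pr) > factor * sectorStdDevY(ysNext, pl, pr);
    }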

6. Experimental results

6.1. Controlled environment experiments

Preliminary tests of the mapping system were performed on a simple test bed, constructed in order to make


these tests independent from the robot pose estimation errors. The test bed consists of a rail with a measuring scale on its side, and a sliding cart (of the same height as the robot) that has the range sensor mounted on it and can be positioned manually anywhere along the rail. Two versions of the cart were built, one equipped with the complete structured light sensor designed for the Ragno robot (Fig. 16a), and another one with the URG-04LX scanner attached in the same configuration as on the Messor robot (Fig. 16b).

Fig. 15. Example results of the mixed measurements removal using scan sequence analysis.


Fig. 16. Simple cart/rail system for tests of the range sensors.

An example of the results obtained with the structured light sensor is shown in Fig. 17. As can be seen in this figure, the elevation map closely resembles the shape of the boxes used in the experiment (cf. Fig. 16a). Note that the CAS filter has successfully filled in the missing data while preserving the shapes.

Fig. 17. Elevation map obtained with the structured light sensor.

Similar objects were used to test the mapping software with the Hokuyo URG-04LX (Fig. 18a). As can be seen in Fig. 18b, the obtained elevation map contains many artifacts caused by mixed measurements. However, if the pre-processing algorithms proposed in Section 5.3



are applied, the shapes of the mapped objects look very much like the boxes and cylinder used in the experiment (Fig. 18c).


Fig. 18. Elevation maps obtained with the URG-04LX sensor.

In order to compare qualitatively and quantitatively the quality of the elevation maps obtained using the two investigated terrain sensors, an experiment in a controlled environment was performed. The same scene with several objects of different types was scanned with both sensors moved on the cart/rail system. The measurements were taken every 10 mm along the rail. The main object in the scene, consisting of two wooden boxes, was measured by hand to provide the ground truth that can be compared with the maps (Fig. 19).

Fig. 19. Boxes measured by the structured light sensor and the URG-04LX sensor.

Figure 20a shows the elevation map obtained using the structured light sensor, while the elevation map of the same scene produced from the laser scanner range data is depicted in Fig. 20b. Comparing the maps qualitatively, one can see that the URG scanner provides a much wider field of view, being able to yield range data also describing the smaller objects on the sides of the rail. The field of view of the structured light sensor is narrow, which is a side effect of the tilted sensor configuration. The spatial resolution of the structured light measurements depends on the area in which the measured object is located in the perceived image - note that the rail, which appears in the very bottom part of the image, is practically indistinguishable in the elevation map in Fig. 20a, whereas it is clearly visible in the map obtained from the laser scanner data (Fig. 20b). On the other hand, the surfaces of the boxes in the elevation map produced from the structured light data are smoother. The values in the neighboring cells of the map built from the laser scanner data differ much more, which is caused by different measurement errors for surfaces of slightly different brightness.

Fig. 20. Comparison of the elevation maps obtained with both sensing modalities.

Quantitative results of the comparison are presented in Tab. 1. The dimensions given in the table are those shown in Fig. 19. The measurement errors are given as relative values. The elevation map built using the URG-04LX sensor data is more accurate with regard to all but one of the measured dimensions.

Tab. 1. Comparison of the maps obtained using the structured light system and the URG scanner.

Dim.  Ground truth [mm]  Structured light result [mm]  error [%]  URG-04LX result [mm]  error [%]
a     103                120                           16.5       110                   6.8
b     115                126.6                         10.0       120.8                 5.0
c     154                160                           3.9        150                   -2.6
d     78                 81.3                          4.2        91.8                  17.7

It was also tested how the sensors behave under different lighting conditions. All the experiments presented in this article were performed indoors, in the lab, avoiding direct sunlight and light spots from incandescent bulbs. Under these conditions the structured light system works reliably (Fig. 21a); however, if the light is stronger, the laser stripe becomes hardly recognizable in the images (Fig. 21b). This


makes the structured light system unusable outdoors, where no control over the ambient light is possible. In contrast, the URG-04LX scanner shows almost no dependency between the quality of the range measurements and the level of ambient light (Fig. 21c), which is in accordance with the results provided in [8].


Fig. 21. Comparison of the structured light system performance under different lighting conditions (panels: darkened room, natural sunlight).

6.2. Terrain mapping experiments

The experiments in a controlled environment (discussed also in [13] and [15]) ensured us that both the structured light sensor and the laser scanner yield range measurements that are precise enough to be used in mapping applications.

The structured light sensor was used on the small Ragno robot, which does not have an on-board PC, and its autonomy is therefore quite limited. However, the preliminary terrain mapping experiments with the Ragno robot, performed on a rocky terrain mockup (Fig. 22a), provided evidence that the map updating algorithm works properly on irregular obstacles and is robust to small errors in the robot localization. Because the robot used only its proprioceptive sensors (accelerometers) for positioning, the covered distance was quite short and limited to the nearly flat part of the mockup. Nevertheless, the robot built an elevation map that correctly represents all obstacles in the field of view and is largely free from artifacts (Fig. 22b).

Fig. 22. Uneven terrain experiment and 3D view of the obtained map.

Fig. 23. Elevation and certainty maps obtained on the rocky terrain mockup.

First experiments aimed at testing the Hokuyo URG-04LX on the rocky terrain mockup were performed using the cart/rail setup, in order to separate the adverse effects of robot positioning errors from the errors caused by spurious range measurements (Fig. 23a). Using directly the mapping algorithm from [26], the system built an elevation map that reasonably represents the objects in the scanner's field of view, but contains a number of artifacts (Fig. 23b). These artifacts are mostly caused by mixed measurements behind obstacles and on their sides. There are also many cells that have no valid elevation value set. The elevation map built using our modified method, which relies on range data pre-processing to eliminate spurious measurements and applies the weighted median filter to fill in the unobserved cells, is almost free from artifacts (Fig. 23c). The certainty map (Fig. 23d) contains regions of very low certainty behind



larger obstacles. These areas were never observed by the sensor due to occlusions, and they are too large to be filled in by the median filter.


Fig. 24. Messor on a terrain mockup and elevation maps obtained in the experiment.

Mapping experiments with the Messor robot were also performed on the rocky terrain mockup of 2 × 2 m size. A ceiling-mounted camera was used to obtain an estimate of the horizontal translation and the yaw angle θr (robot orientation), while the ϕr (pitch) and ψr (roll) angles were provided by the inertial measurement unit of the robot. The colored circles visible on the robot (Fig. 24a) form a landmark used by the vision system. The elevation maps obtained in this experiment confirm the results from the cart/rail experiments. The elevation map built with the CAS filter has many peak-like artifacts behind the observed objects (Fig. 24b). These artifacts are not present when the range data pre-processing is applied (Fig. 24c). Although this elevation map correctly represents all encountered obstacles, it is slightly distorted due to the imprecise pose estimates obtained by the robot. Unfortunately, at the moment of preparing this paper there was no precise ground truth available for the rocky terrain mockup. Obtaining such a ground truth requires scanning the whole mockup with a precisely moved sensor (e.g., mounted on a manipulator), which has not been possible so far.

7. Conclusions

This work presents a modified version of the Ye and Borenstein mapping algorithm that contains new artifact removal procedures and is appropriate for a walking robot with a 2D sensor providing sparse range data. The mapping precision of the proposed method is shown in controlled environment experiments with a simple cart/rail test bed, while its potential for real applications is demonstrated with the two walking robots, Ragno and Messor.

Experimental results proved that both 2D range sensors under study are precise enough for terrain mapping, but the reliability and much wider field of view of the laser scanner suggest that the URG-04LX should be the sensor of choice for our Messor robot. On the other hand, the low-cost structured light sensor is a reasonable choice for small educational robots, like the Ragno. Developing the terrain perception and mapping system with the Hokuyo URG scanner we achieved our main goal: the terrain map is built on-line, on the on-board PC (a low-power Intel Atom machine) of the Messor robot, which is an enabling factor for the autonomy of this robot. The results of using the obtained elevation maps for foothold selection were already shown elsewhere [2].

AUTHORS
Przemysław Łabęcki – Poznań University of Technology, Institute of Control and Information Engineering, ul. Piotrowo 3A, 60-965 Poznań, Poland, e-mail: pl@cie.put.poznan.pl
Dawid Rosiński – Poznań University of Technology, Institute of Control and Information Engineering, ul. Piotrowo 3A, 60-965 Poznań, Poland, e-mail: dawid.rosinski@gmail.com
Piotr Skrzypczyński* – Poznań University of Technology, Institute of Control and Information Engineering, ul. Piotrowo 3A, 60-965 Poznań, Poland, e-mail: piotr.skrzypczynski@put.poznan.pl
* Corresponding author

References
[1] M. D. Adams, Sensor modelling, design and data processing for autonomous navigation, Singapore: World Scientific, 1999.
[2] D. Belter, P. Łabęcki, P. Skrzypczyński, Map-based adaptive foothold planning for unstructured terrain walking, Proc. IEEE Int. Conf. on Robotics and Automation, Anchorage, 2010, pp. 5256–5261.
[3] D. Belter, K. Walas, Messor – versatile walking robot for search and rescue missions, Journal of Automation, Mobile Robotics and Intelligent Systems, 5(2), 2011, pp. 28–34.
[4] B. Gaßmann, L. Frommberger, R. Dillmann, K. Berns, Real-time 3D map building for local navigation of a walking robot in unstructured terrain, Proc. IEEE Int. Conf. on Intelligent Robots and Systems, Las Vegas, 2003, pp. 2185–2190.
[5] B. Gaßmann, J. M. Zöllner, R. Dillmann, Navigation of walking robots: localisation by odometry, Climbing and Walking Robots VIII, Berlin: Springer, 2005, pp. 953–960.
[6] D. Ilstrup, G. H. Elkaim, Low cost, low power structured light based obstacle detection, Proc. IEEE/ION Position, Location and Navigation Symp., Monterey, 2008, pp. 771–778.
[7] H. Kawata, A. Ohya, S. Yuta, W. Santosh, T. Mori, Development of ultra-small lightweight optical range sensor system, Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Edmonton, 2005, pp. 1078–1083.
[8] L. Kneip, F. Tache, G. Caprari, R. Siegwart, Characterization of the compact Hokuyo URG-04LX 2D laser range scanner, Proc. IEEE Int. Conf. on Robotics and Automation, Kobe, 2009, pp. 1447–1454.
[9] J. Z. Kolter, Y. Kim, A. Y. Ng, Stereo vision and terrain modeling for quadruped robots, Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Kobe, 2009, pp. 1557–1564.
[10] E. Krotkov, R. Hoffman, Terrain mapping for a walking planetary rover, IEEE Trans. on Robotics and Automation, 10(6), 1994, pp. 728–739.
[11] V.-G. Loc, S. Roh, I.-M. Koo, D.-T. Tran, H.-M. Kim, H. Moon, H.-R. Choi, Sensing and gait planning of quadruped walking and climbing robot for traversing in complex environment, Robotics and Autonomous Systems, 58(5), 2010, pp. 666–675.
[12] P. Łabęcki, A. Kasiński, An active vision system for a walking robot, Pomiary Automatyka Kontrola, 55(9), 2009, pp. 731–736 (in Polish).
[13] P. Łabęcki, A. Łopatowski, P. Skrzypczyński, Terrain perception for a walking robot with a low-cost structured light sensor, Proc. European Conf. on Mobile Robots, Dubrovnik, 2009, pp. 199–204.
[14] P. Łabęcki, Analysis and optimization of a structured light sensor configuration for terrain map building, Studies in Automation and Information Technology, 34, 2009, pp. 37–56 (in Polish).
[15] P. Łabęcki, D. Rosiński, P. Skrzypczyński, Terrain perception and mapping in a walking robot with a compact 2D laser scanner, in: Emerging Trends in Mobile Robotics (H. Fujimoto et al., eds.), Singapore: World Scientific, 2010, pp. 981–988.
[16] L. Matthies, T. Balch, B. Wilcox, Fast optical hazard detection for planetary rovers using multiple spot laser triangulation, Proc. IEEE Int. Conf. on Robotics and Automation, Albuquerque, 1997, pp. 859–866.
[17] C. Mertz, J. Kozar, J. Miller, C. Thorpe, Eye-safe laser line striper for outside use, Proc. IEEE Intelligent Vehicle Symp., 2002, vol. 2, pp. 507–512.
[18] Y. Okubo, C. Ye, J. Borenstein, Characterization of the Hokuyo URG-04LX laser rangefinder for mobile robot obstacle negotiation, Unmanned Systems Technology XI, Proc. SPIE 7332, 2009 (on-line).
[19] C. Plagemann, S. Mischke, S. Prentice, K. Kersting, N. Roy, W. Burgard, Learning predictive terrain models for legged robot locomotion, Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Nice, 2008, pp. 3545–3552.
[20] J. Poppinga, A. Birk, K. Pathak, A characterization of 3D sensors for response robots, in: RoboCup 2009: Robot Soccer World Cup XIII (J. Baltes et al., eds.), LNAI Vol. 5949, Berlin: Springer, 2010.
[21] A. Roennau, T. Kerscher, M. Ziegenmeyer, J. M. Zöllner, R. Dillmann, Six-legged walking in rough terrain based on foot point planning, in: Mobile Robotics: Solutions and Challenges (O. Tosun et al., eds.), Singapore: World Scientific, 2009, pp. 591–598.
[22] A. Schmidt, A. Kasiński, The visual SLAM system for a hexapod robot, in: Computer Vision and Graphics (L. Bolc et al., eds.), LNCS Vol. 6375, Berlin: Springer, 2010, pp. 260–267.
[23] P. Skrzypczyński, Perception uncertainty management in a mobile robot navigation system, Poznań: Wyd. Politechniki Poznańskiej, 2007 (in Polish).
[24] P. Skrzypczyński, On qualitative uncertainty in range measurements from 2D laser scanners, Journal of Automation, Mobile Robotics and Intelligent Systems, 2(2), 2008, pp. 35–42.
[25] K. Walas, D. Belter, A. Kasiński, Control and environment sensing system for a six-legged robot, Journal of Automation, Mobile Robotics and Intelligent Systems, 2(3), 2008, pp. 26–31.
[26] C. Ye, J. Borenstein, A novel filter for terrain mapping with laser rangefinders, IEEE Trans. on Robotics and Automation, 20(5), 2004, pp. 913–921.



A MOBILE ROBOT NAVIGATION WITH USE OF CUDA PARALLEL ARCHITECTURE

Barbara Siemiątkowska, Jacek Szklarski, Michał Gnatowski, Adam Borkowski, Piotr Węclewski

Abstract: In this article we present a navigation system of a mobile robot based on parallel calculations. It is assumed that the robot is equipped with a 3D laser range scanner. The system is essentially based on a dual grid-object map, where labels are attached to detected objects (such maps can be used in navigation based on semantic information). We use a classical SMPA (Sense - Model - Plan - Act) architecture for navigation; however, some steps concerning object detection, planning and localization are parallelized in order to speed up the entire process. The CUDA (Compute Unified Device Architecture) technology allows us to execute our algorithms on many processing units with the use of an inexpensive graphics card, which makes it possible to apply the proposed navigation system in real time.

Keywords: navigation, neural networks, parallel computing.

1. Introduction

As robots move out of the laboratory and act in complex real-world scenarios, both the control architectures and the perception must become more powerful. For example, a service robot collaborating with a human needs to classify three-dimensional objects, know their functionalities and the relationships between them. It also has to plan a path, avoid collisions and calculate low-level control. All operations have to be done in real time. The oldest classical control architecture is called SMPA. In this approach the navigation system is decomposed into a series of units (Fig. 1). It consists of the following, repeated steps:
• Perception - the robot senses its environment,
• Modeling - a map of the environment is built,
• Planning - the actions of the robot are planned,
• Task execution - the robot performs the planned action.

Fig. 1. A decomposition of a mobile robot control system (Sensors, Perception, Modeling, Planning, Task execution, Motor Controls, Actuators).

This architecture is natural and easy to implement; however, one has to take care of sensor fusion, world modeling and path planning before executing an action. Traditionally an algorithm is implemented as a serial stream of instructions, and a long reaction time to environmental changes can become an issue [1,2]. In 1986 Rodney Brooks introduced the subsumption architecture [3,4,5]. It consists of layered behaviors with simple interfaces between them - units can take the inputs and/or outputs of other units. Each behavior has its own control program that is capable of reacting to environmental changes in real time. This architecture supports parallel computation and avoids centralized control. When this approach is used, it is difficult to obtain smooth behavior of the robot, and the method fails in the case of more complex tasks. The hybrid architecture [6,7] integrates the advantages of SMPA and subsumption and avoids their disadvantages. The whole framework of the decision-making system is based on planning (the SMPA architecture), and behavioral models are applied in dynamic situations. This third approach is efficient, but it can still fail when the robot acts in a complex real-world environment. In our approach we propose to use a hybrid architecture; however, the main modules (mapping, localization and collision-free path planning) are divided into independent units so that each processing element can execute its part of the algorithm simultaneously with the others. Parallel programming allows multiple processes to be executed concurrently using separate threads. It can help reduce runtimes while still producing the same results as a serial run. There are some restrictions on using parallelism, and not every algorithm can be parallelized. Parallel computing [8] can be classified according to the level at which the hardware supports parallelism. We can distinguish single multi-processor computers, and clusters and grids of computers. Recently, platforms using graphics processing units (GPUs) have become very popular. In our approach we propose to use a Cellular Neural Network implemented using the CUDA technology. The article consists of the following parts: in Section 2 the cellular neural network paradigm and the implementation of CNN using the CUDA technology are presented. In Section 3 the architecture of the system is described. In Section 4 the experimental results are shown. The article finishes with the conclusions.

2. CNN Implementation Using CUDA Technology

The Cellular Neural Network (CNN), also known as the Nonlinear Neural Network, was invented by Leon O. Chua and Lin Young in 1990 [9]. A CNN is an array of analog dynamic


processors called cells. A standard CNN architecture consists of N × M cells, denoted C(i, j). A typical example of a CNN is presented in Fig. 2.

Fig. 2. The CNN architecture.

All CNNs are characterized by the fact that a cell's state x_{i,j} is a function of its own input y_{i,j} and the outputs of its neighbors only (and is discrete in the time domain). In other words:

x_{i,j}^{t+1} = f(x_{i,j}^{t}, N^{t}),    (1)

where N^{t} denotes the states at time t of the neighborhood of x_{i,j} (the exact definition of the neighborhood depends on the particular implementation). The structure of a CNN is similar to the human retina, and CNNs are widely used in image processing. Using graphics processing units is a natural way of implementing CNNs. The most popular architectures are CUDA (Compute Unified Device Architecture) and FireStream. The former is a system dedicated to NVidia cards, while the latter to ATI-AMD cards. It is also worth mentioning the OpenCL libraries, which are developed by the above companies, IBM (cluster computing technology) and others. Merging the simplest technologies of parallel computing (thread emulation in operating systems, computing on multi-core computers, interfaces to operations on graphics cards and computing on cluster systems) gives a versatile and scalable solution, but it is more demanding for the programmer. In our experiments we decided to use NVidia CUDA, due to its popularity and the wide range of useful tools, documentation and examples.

While writing a sequential program, a programmer does not need to know the hardware he or she works with. In applications where the key point is short execution time, in particular when implementing parallel computing, the structure of the hardware often becomes important. The graphics card GPU (Graphics Processing Unit) consists of multiprocessors (SMs, streaming multiprocessors) and a steering processor. In each cycle every SM processes threads grouped in so-called "warps". The number of threads in each warp (the warp size) differs between graphics cards. An SM may also be classified as SIMT (Single Instruction, Multiple Threads), because a number of threads compute at the same time. The graphics card is also equipped with fast RAM called global memory. Additionally, every SM has a local memory. In computer games this technology is used in graphics rendering and in physics calculations like fire, smoke or water diffusion. The CUDA technology is not only used in graphical applications, but also in computational biology, cryptography and other fields. An example of a system that can exploit such hardware is BOINC, a non-commercial system for grid computing, which became useful as a platform for distributed applications in mathematics, medicine, molecular biology, astrophysics and climatology. CUDA provides simple access to GPGPU (General-Purpose Computation on Graphics Processing Units) and allows parallel computing on NVidia GPUs. GPUs have numerous cores that operate in parallel to run time-consuming graphics operations, and for such workloads they offer much more processing power than CPUs.

The flowchart of a CUDA program consists of the following stages:
• Blocks and threads are configured,
• Global memory is allocated,
• The data is copied to the GPU,
• The GPU kernel is called,
• Output data is copied back to the CPU.

Fig. 3 presents the GPU architecture.

Fig. 3. GPU architecture (from NVidia.com).
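As a concrete illustration of the stages listed above, a minimal CUDA C++ host-side sketch follows; the kernel, array size and block size are illustrative assumptions and do not come from the navigation system described in this article.

    #include <cuda_runtime.h>
    #include <cstdio>

    // Illustrative kernel: doubles every element of the input array.
    __global__ void doubleElements(const float* in, float* out, int n) {
        int i = blockDim.x * blockIdx.x + threadIdx.x;
        if (i < n) out[i] = 2.0f * in[i];
    }

    int main() {
        const int n = 1 << 16;
        const int blockSize = 256;                           // blocks and threads are configured
        const int nBlocks = (n + blockSize - 1) / blockSize;

        float *hIn = new float[n], *hOut = new float[n];
        for (int i = 0; i < n; ++i) hIn[i] = float(i);

        float *dIn = nullptr, *dOut = nullptr;
        cudaMalloc(&dIn, n * sizeof(float));                 // global memory is allocated
        cudaMalloc(&dOut, n * sizeof(float));
        cudaMemcpy(dIn, hIn, n * sizeof(float), cudaMemcpyHostToDevice);   // data is copied to the GPU

        doubleElements<<<nBlocks, blockSize>>>(dIn, dOut, n);              // the GPU kernel is called

        cudaMemcpy(hOut, dOut, n * sizeof(float), cudaMemcpyDeviceToHost); // output is copied back to the CPU
        std::printf("out[10] = %f\n", hOut[10]);

        cudaFree(dIn); cudaFree(dOut);
        delete[] hIn; delete[] hOut;
        return 0;
    }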



3. The System Architecture

The architecture of the navigation system is presented in Fig. 4. Gray units denote time-consuming processes which we try to parallelize. In this section the steps of the algorithm are described.

Fig. 4. The architecture of the navigation system.

3.1. Sensors

The decision-making system is essentially based on observations of the distance to the nearest obstacles. At the present stage of the research, the mobile robot Elektron [11] is equipped with a 2D laser range finder Sick LMS 200, which measures this distance in a 2D plane for -90° ≤ θ ≤ 90° with a resolution of 0.5° (for θ = 0 the laser ray points forward, for θ = -90° towards the robot's left, etc.). Moreover, the laser is mounted on a support which rotates around the horizontal axis. Therefore, the range finder can move up and down with the angle ϕ, and finally it provides a matrix M with numbers representing the distance to an obstacle. Such a matrix can be depicted as a gray-scale image. Fig. 6 presents the matrix M obtained for the point cloud shown in Fig. 5. The robot is also equipped with odometry, which gives information about the robot's displacement.

3.2. Processing

The matrix M is used for:
1) Localization,
2) Constructing a map of the environment used for navigation,
3) Recognition of objects of certain classes.

The recognized objects can be placed onto the map of the environment, enabling semantic navigation. Such navigation makes it possible to give the robot commands in a human-understandable form, e.g., "find the nearest wastebasket".

The localization problem is solved using a particle filter. The probability density of the estimated robot position is represented by a set of "particles", each encoding a single possible state (pose) of the vehicle. The particles are iteratively propagated using the control input (motion model). On the basis of the measurement model a weight is attached to each particle. The number of particles depends on the uncertainty of the odometry. Tab. 1 presents the dependency between the number of particles and the accuracy of determining the robot displacement (the accuracy of the odometry is assumed to be 10%). The experiments were performed with the use of the robot Elektron in a real office environment.

Tab. 1. Dependency between the accuracy of displacement and the number of particles.

particles   accuracy of displacement [%]
1000        7
10000       2
100000      1
1000000     0.2
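A minimal CUDA C++ sketch of the per-particle propagation and weighting described above; the Particle layout, the noise-free motion model and the stand-in likelihood function are illustrative assumptions, not the implementation evaluated in Tab. 1.

    #include <cuda_runtime.h>
    #include <math.h>

    struct Particle { float x, y, theta, w; };

    // Stand-in for the measurement model: a real system would compare the scan
    // against the map; here only the first range reading is scored (assumption).
    __device__ float scanLikelihood(const Particle& p, const float* ranges, int nRays) {
        float expected = 2.0f;                 // assumed expected range [m]
        float d = (nRays > 0) ? ranges[0] - expected : 0.0f;
        return expf(-0.5f * d * d);
    }

    // One thread per particle: propagate with the odometry increment, then weight.
    __global__ void propagateAndWeight(Particle* p, int n, float dTrans, float dRot,
                                       const float* ranges, int nRays) {
        int i = blockDim.x * blockIdx.x + threadIdx.x;
        if (i >= n) return;
        p[i].theta += dRot;                              // motion model (noise omitted for brevity)
        p[i].x += dTrans * cosf(p[i].theta);
        p[i].y += dTrans * sinf(p[i].theta);
        p[i].w = scanLikelihood(p[i], ranges, nRays);    // measurement model assigns the weight
    }

    // Host-side launch, e.g.:
    //   propagateAndWeight<<<(n + 255) / 256, 256>>>(dParticles, n, dTrans, dRot, dRanges, nRays);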

For more than 1000000 particles the accuracy does not change. Due to its nature, the particle filter algorithm can easily be parallelized.

The problem of object recognition is a much more complicated task, and there are no general solutions. Our approach to this subject is based on the idea that the matrix M can be represented as an RGB image which possesses useful geometric information. Each row and column of M corresponds to certain θ and ϕ angles, so each entry M_{i,j} describes a point in a 3D (θ, ϕ, R) space (R is the distance to obstacles). The robot is at the center of this coordinate system. Let N_{k,l} denote the vector from the point M_{i,j} to the point M_{i+k,j+l}. By taking the sum of the cross products:

V = N_{1,0} × N_{0,1} + N_{0,1} × N_{-1,0} + N_{-1,0} × N_{0,-1} + N_{0,-1} × N_{1,0},    (2)

one obtains the vector V = [n_x, n_y, n_z] which is (approximately) normal to the surface spanned on M_{i,j} and its neighbors. V is normalized:

|V| = 1,    (3)

so that

|n_x| ≤ 1, |n_y| ≤ 1, |n_z| ≤ 1.    (4)

The final image is constructed by assigning red/green/blue (RGB) color values according to the x, y, z components of V:

R = 255·|n_x|,  G = 255·|n_y|,  B = 255·|n_z|.    (5)

Such images have interesting properties, which makes them useful as an input to an object recognition algorithm [12-14]. The process of converting the matrix M into the RGB image can be efficiently executed on a typical SIMD (Single Instruction, Multiple Data) architecture. Therefore it is a perfect task to be parallelized, so the time cost can be


significantly reduced. In particular, the relatively inexpensive CUDA technology fits very well for that purpose. The object recognition procedure is based on Haar-like features and utilizes the concept of integral images. It is very fast and, even for a sequential single-processor implementation, takes only a short amount of time (approx. 0.1 s for typical parameters). However, each Haar classifier can detect objects of only a single class. Therefore, a parallel approach is necessary to achieve short times for large databases of objects. Although at present we do not use any parallel programs to achieve this goal, it will be done in future research.


3.3. Path planning

Having a map of the environment, together with some detected objects, the robot must find its way. The path planning algorithm is realized with the use of CNNs. For our CNN planning (described in detail in, e.g., [14]), each x_{i,j} corresponds to a cell in the grid-based map. Then, after initialization and the appropriate diffusion process, a steady state is achieved. The robot, being at a place represented by a cell x_{k,l}, then moves towards the increasing gradient of x, and reaches its goal where the maximum of x is found. Due to its nature, such a CNN can easily be parallelized, also with the use of the CUDA technology. This algorithm has the following advantages: the time of collision-free path planning is comparable to reactive behavior, and the goal can be given in natural language, so we can ask the robot to go to a specified object, for example a chair. We can also ask the robot to go to an object far from another object, or to avoid some places. In comparison to the well-known potential field method it does not suffer from the local minima problem.
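A minimal single-threaded C++ sketch of the diffusion-and-gradient idea described above (in a CUDA version all cells would be updated in parallel); the grid encoding, the averaging coefficient and the iteration count are illustrative assumptions and do not reproduce the CNN planner of [14].

    #include <utility>
    #include <vector>

    // occ: 0 = free, 1 = obstacle; the goal cell is kept at maximum activity.
    std::vector<double> diffuse(const std::vector<int>& occ, int rows, int cols,
                                int goalR, int goalC, int iters = 500) {
        std::vector<double> x(rows * cols, 0.0), next(rows * cols, 0.0);
        auto id = [cols](int r, int c) { return r * cols + c; };
        for (int it = 0; it < iters; ++it) {
            x[id(goalR, goalC)] = 1.0;                             // seed the goal
            for (int r = 1; r + 1 < rows; ++r)
                for (int c = 1; c + 1 < cols; ++c) {
                    if (occ[id(r, c)]) { next[id(r, c)] = 0.0; continue; }   // obstacles block diffusion
                    next[id(r, c)] = 0.25 * (x[id(r - 1, c)] + x[id(r + 1, c)] +
                                             x[id(r, c - 1)] + x[id(r, c + 1)]);
                }
            std::swap(x, next);
        }
        return x;   // the robot then follows the increasing gradient of x towards the goal
    }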

4. Experiments

Fig. 5. Point cloud.

Fig. 6. Matrix M, representing distance to obstacles in meters, depicted as a gray-scale image.

The main goal of the experiments was to compare serial and parallel implementations of selected units. We implemented the mobile robot localization method using particle filters and the algorithm which computes the normal vectors. The experiments were performed using an Intel Centrino (2×2.0 GHz) processor (serial implementation) and an NVidia GeForce 9300MG (parallel implementation).

Fig. 8. Timing results (particle filter).

Fig. 8 presents a log-log plot of the overall execution time for the CPU and GPU implementations for different numbers of particles. The speedup factors (CPU_time / GPU_time) increase with problem size. We can notice that the speedup factor is greater than 1 if the number of particles exceeds 10^4.

Fig. 7. The RGB image constructed from M using the procedure described in the text. The black rectangle denotes the region which has been recognized as "base of an office chair" by the Haar classifier.


Fig. 9. Timing results (normal vectors).



Fig. 9 presents the time of computing normal vectors for 361×215 point clouds using the GPU and the CPU. The speedup factors (CPU_time / GPU_time) increase with problem size. The speedup factor is greater than 1 if the array size exceeds 10×10. Fig. 10 presents the pseudo-code of the algorithm implemented using the GPU. Finally, we must stress that at the present stage of our research the slowest part is still the process of gathering data about the distance to obstacles. However, in the next stage we shall start experiments with hardware stereoscopic cameras giving such matrices at the rate of 20 FPS. In this case, parallel calculations will lead to a significant performance improvement.

Fig. 9. A pseudo-code showing how to calculate the matrix with normal vectors from the distance matrix M. The function NormVect is executed on GPU multiprocessors using nBlocks blocks, each running blockSize threads.

    __global__ NormVect(Matrix<Point3D> in) {
        index ← blockDim.x * blockIdx.x + threadIdx.x
        col ← index % width(in)
        row ← index / width(in)
        /* calculate the sum of cross products of vectors from in[col][row] to its neighbors: */
        out[col][row] ← vector(from in[col][row] to in[col+1][row]) × vector(from in[col][row] to in[col][row-1])
                      + vector(from in[col][row] to in[col][row+1]) × vector(from in[col][row] to in[col-1][row])
                      + ...
        return out;
    }

    CalcPointCloud(Matrix M, Vector ϕ, Vector θ) {
        Matrix<Point3D> r;
        for each element i,j in M:
            r[i][j].x ← M[i][j] * sin θ[j] * cos ϕ[i]
            r[i][j].y ← M[i][j] * sin ϕ[i]
            r[i][j].z ← M[i][j] * cos θ[j] * sin ϕ[i]
        return r
    }

    CUDACalcNormalVectors(Matrix M, Vector ϕ, Vector θ) {
        /* M[i][j] is the distance for angles ϕ[i] and θ[j] */
        Matrix<Point3D> cloud ← CalcPointCloud(M, ϕ, θ)
        copyToCUDADevice(cloud)
        allocateOnCUDADevice(Matrix<Vector> out)
        blockSize ← 16
        N ← width(M) * height(M)
        nBlocks ← N/blockSize + (N % blockSize == 0 ? 0 : 1)
        NormVect <<< nBlocks, blockSize >>> (cloud)
        result ← copyFromCUDADevice(out)
    }

Fig. 10. The pseudo-code for calculating normal vectors with use of CUDA.

5. Conclusions

In this article we presented a navigation system of a mobile robot equipped with a 3D laser range finder. A considerable amount of processing time can be saved by parallelizing some stages of the navigation system. This is accomplished by breaking problems into small independent parts which are executed simultaneously. The main goal of the experiments was to compare serial and parallel implementations of selected units of the navigation system. The project is at an early stage of development, but the results presented in Figs. 8 and 9 are promising and show that for the tested algorithms and typical data sizes, CUDA allows us to perform the task significantly faster than on a regular CPU. For example, as can be seen in Fig. 9, a typical image (of size 300 × 200) based on normal vectors is created in less than 0.001 s with the use of CUDA, and in about 1 s otherwise. This is a crucial difference for a system working in real time.

AUTHORS
Barbara Siemiątkowska* - Warsaw University of Technology, Department of Mechatronics; Institute of Fundamental Technological Research PAS, Warsaw, Poland. E-mail: bsiem@ippt.gov.pl
Jacek Szklarski - Institute of Fundamental Technological Research PAS, Warsaw, Poland. E-mail: jszklar@ippt.gov.pl
Michał Gnatowski - Institute of Fundamental Technological Research PAS, Warsaw, Poland. E-mail: mignat@ippt.gov.pl
Adam Borkowski - Institute of Fundamental Technological Research PAS, Warsaw, Poland. E-mail: abork@ippt.gov.pl
Piotr Węclewski - Warsaw University of Technology, Department of Mechatronics, Warsaw, Poland. E-mail: piotr.weclewski@stud.mchtr.pw.edu.pl
* Corresponding author

References
[1] J. Nilsson, "Artificial Intelligence: A New Synthesis", Machine Industry Press, 1999.
[2] G. Saridis, "Toward the realization of intelligent controls", Proceedings of the IEEE, vol. 67, 1979, pp. 1115-1133.
[3] R.A. Brooks, "A robust layered control system for a mobile robot", IEEE Journal of Robotics and Automation, vol. 2, no. 1, 1986, pp. 14-23.
[4] R.A. Brooks, "New approaches to robotics", Science, vol. 253, 1991, pp. 1227-1232.
[5] M.J. Mataric, "Integration of representation into goal-driven behavior-based robots", IEEE Trans. on Robotics and Automation, 8(3), 1992, pp. 304-312.
[6] K.H. Low, W.K. Leow, M.H. Ang, Jr., "A hybrid mobile robot architecture with integrated planning and control", in: Proc. 1st International Joint Conference on Autonomous Agents and MultiAgent Systems (AAMAS-02), 2002, pp. 219-226.
[7] N.J. Nilsson, "Teleo-reactive programs for agent control", Journal of Artificial Intelligence Research, vol. 1, 1994, pp. 139-158.
[8] S.H. Seyed, "Parallel processing and parallel algorithms: theory and computation", Springer, 2000.
[9] L. Chua, L. Young, "Cellular Neural Network", IEEE Transactions on Circuits and Systems, 1990, pp. 500-505.
[10] M. Harris, "Optimizing Parallel Reduction in CUDA", NVIDIA Developer Technology, http://developer.download.nvidia.com
[11] R. Chojecki, M. Olszewski, "A Mobile Robot for Laboratory Purposes and Its Applications", PAK, vol. 55, no. 3, 2009, pp. 190-193.
[12] B. Siemiątkowska, J. Szklarski, M. Gnatowski, A. Zychewicz, "Budowa hybrydowej semantyczno-rastrowej reprezentacji otoczenia robota mobilnego na podstawie wskazań dalmierza laserowego 3D", PAK, no. 3, 2010, pp. 278-282 (in Polish).
[13] M. Gnatowski, B. Siemiątkowska, J. Szklarski, "Extraction of semantic information from 3D laser range finder", in: 18th Symposium on Robot Design, Dynamics, and Control, Springer, 2010, pp. 383-389.
[14] A. Borkowski, B. Siemiątkowska, J. Szklarski, "Towards Semantic Navigation in Mobile Robotics", in: G. Engles, C. Lewerenz, W. Schafer, A. Schurr, B. Westfechtel (Eds.), Graph Transformations and Model Driven Engineering - Essays Dedicated to Manfred Nagl, LNCS 5765, Springer.

