JAMRIS 2017 vol 11 no 4


VOLUME 11 N°4 2017 www.jamris.org pISSN 1897-8649 (PRINT) / eISSN 2080-2145 (ONLINE)

Indexed in SCOPUS


JOURNAL OF AUTOMATION, MOBILE ROBOTICS & INTELLIGENT SYSTEMS

Editor-in-Chief:
Janusz Kacprzyk (Polish Academy of Sciences, PIAP, Poland)

Advisory Board:
Dimitar Filev (Research & Advanced Engineering, Ford Motor Company, USA)
Kaoru Hirota (Japan Society for the Promotion of Science, Beijing Office)
Witold Pedrycz (ECERF, University of Alberta, Canada)

Co-Editors:
Roman Szewczyk (PIAP, Warsaw University of Technology)
Oscar Castillo (Tijuana Institute of Technology, Mexico)
Marek Zaremba (University of Quebec, Canada)

Executive Editor:
Anna Ładan, aladan@piap.pl

Associate Editor:
Maciej Trojnacki (PIAP, Poland)

Statistical Editor:
Małgorzata Kaliczynska (PIAP, Poland)

Typesetting:
Ewa Markowska, PIAP

Webmaster:
Piotr Ryszawa, PIAP

Editorial Office:
Industrial Research Institute for Automation and Measurements PIAP
Al. Jerozolimskie 202, 02-486 Warsaw, POLAND
Tel. +48-22-8740109, office@jamris.org

Copyright and reprint permissions: Executive Editor

The reference version of the journal is the e-version. Printed in 300 copies.

The title receives financial support from the Minister of Science and Higher Education of Poland under agreement 857/P-DUN/2016 for the tasks: 1) implementing procedures to safeguard the originality of scientific publications, and 2) the creation of English-language versions of publications.

Editorial Board: Chairman - Janusz Kacprzyk (Polish Academy of Sciences, PIAP, Poland) Plamen Angelov (Lancaster University, UK) Adam Borkowski (Polish Academy of Sciences, Poland) Wolfgang Borutzky (Fachhochschule Bonn-Rhein-Sieg, Germany) Bice Cavallo (University of Naples Federico II, Napoli, Italy) Chin Chen Chang (Feng Chia University, Taiwan) Jorge Manuel Miranda Dias (University of Coimbra, Portugal) Andries Engelbrecht (University of Pretoria, Republic of South Africa) Pablo Estévez (University of Chile) Bogdan Gabrys (Bournemouth University, UK) Fernando Gomide (University of Campinas, São Paulo, Brazil) Aboul Ella Hassanien (Cairo University, Egypt) Joachim Hertzberg (Osnabrück University, Germany) Evangelos V. Hristoforou (National Technical University of Athens, Greece) Ryszard Jachowicz (Warsaw University of Technology, Poland) Tadeusz Kaczorek (Bialystok University of Technology, Poland) Nikola Kasabov (Auckland University of Technology, New Zealand) Marian P. Kazmierkowski (Warsaw University of Technology, Poland) Laszlo T. Kóczy (Szechenyi Istvan University, Gyor and Budapest University of Technology and Economics, Hungary) Józef Korbicz (University of Zielona Góra, Poland) Krzysztof Kozłowski (Poznan University of Technology, Poland) Eckart Kramer (Fachhochschule Eberswalde, Germany) Rudolf Kruse (Otto-von-Guericke-Universität, Magdeburg, Germany) Ching-Teng Lin (National Chiao-Tung University, Taiwan) Piotr Kulczycki (AGH University of Science and Technology, Cracow, Poland) Andrew Kusiak (University of Iowa, USA)

Mark Last (Ben-Gurion University, Israel) Anthony Maciejewski (Colorado State University, USA) Krzysztof Malinowski (Warsaw University of Technology, Poland) Andrzej Masłowski (Warsaw University of Technology, Poland) Patricia Melin (Tijuana Institute of Technology, Mexico) Fazel Naghdy (University of Wollongong, Australia) Zbigniew Nahorski (Polish Academy of Sciences, Poland) Nadia Nedjah (State University of Rio de Janeiro, Brazil) Dmitry A. Novikov (Institute of Control Sciences, Russian Academy of Sciences, Moscow, Russia) Duc Truong Pham (Birmingham University, UK) Lech Polkowski (Polish-Japanese Institute of Information Technology, Poland) Alain Pruski (University of Metz, France) Rita Ribeiro (UNINOVA, Instituto de Desenvolvimento de Novas Tecnologias, Caparica, Portugal) Imre Rudas (Óbuda University, Hungary) Leszek Rutkowski (Czestochowa University of Technology, Poland) Alessandro Saffiotti (Örebro University, Sweden) Klaus Schilling (Julius-Maximilians-University Wuerzburg, Germany) Vassil Sgurev (Bulgarian Academy of Sciences, Department of Intelligent Systems, Bulgaria) Helena Szczerbicka (Leibniz Universität, Hannover, Germany) Ryszard Tadeusiewicz (AGH University of Science and Technology in Cracow, Poland) Stanisław Tarasiewicz (University of Laval, Canada) Piotr Tatjewski (Warsaw University of Technology, Poland) Rene Wamkeue (University of Quebec, Canada) Janusz Zalewski (Florida Gulf Coast University, USA) Teresa Zielinska (Warsaw University of Technology, Poland)

Publisher: Industrial Research Institute for Automation and Measurements PIAP

If in doubt about the proper edition of contributions, please contact the Executive Editor. Articles are reviewed, excluding advertisements and descriptions of products. All rights reserved ©



JOURNAL OF AUTOMATION, MOBILE ROBOTICS & INTELLIGENT SYSTEMS VOLUME 11, N° 4, 2017 DOI: 10.14313/JAMRIS_4-2017

CONTENTS

3   Toward Emotion Recognition Embodied in Social Robots: Implementation of Laban Movement Analysis into NAO Robot
    Krzysztof Arent, Małgorzata Gakis, Janusz Sobecki, Remigiusz Szczepanowski
    DOI: 10.14313/JAMRIS_4-2017/31

7   Comparison of Keypoint Detection Methods for Indoor and Outdoor Scene Recognition
    Urszula Libal, Łukasz Łoziuk
    DOI: 10.14313/JAMRIS_4-2017/32

15  Study of Postural Adjustments for Humanoidal Helpmates
    Jessica Villalobos, Teresa Zielińska
    DOI: 10.14313/JAMRIS_4-2017/33

26  Design and Development of a Semi-active Suspension System for a Quarter Car Model using PI Controller
    Hudyjaya Siswoyo, Nazim Mir-Nasiri, Md. Hazrat Ali
    DOI: 10.14313/JAMRIS_4-2017/34

34  Integration of Navigation, Vision, and Arm Manipulation towards Elevator Operation for Laboratory Transportation System using Mobile Robots
    Ali A. Abdulla, Mohammed M. Ali, Norbert Stoll, Kerstin Thurow
    DOI: 10.14313/JAMRIS_4-2017/35

51  Mobile Robot Transportation for Multiple Labware with Hybrid Pose Correction in Life Science Laboratories
    Mohammed M. Ali, Ali A. Abdulla, Norbert Stoll, Kerstin Thurow
    DOI: 10.14313/JAMRIS_4-2017/36

65  Analysis of the Effect of Soft Soil's Parameters Change on Planetary Vehicles' Dynamic Response
    Hassan Shibly
    DOI: 10.14313/JAMRIS_4-2017/37

73  Using Functions from Fuzzy Classes of k-valued Logic for Decision Making Based on the Results of Rating Evaluation
    Olga M. Poleshchuk
    DOI: 10.14313/JAMRIS_4-2017/38


Toward Emotion Recognition Embodied in Social Robots: Implementation of Laban Movement Analysis into NAO Robot

Submitted: 17th July 2017; accepted: 5th December 2017

Krzysztof Arent, Małgorzata Gakis, Janusz Sobecki, Remigiusz Szczepanowski
DOI: 10.14313/JAMRIS_4-2017/31

Abstract: This research note focuses on human recognition of emotions embodied in a moving humanoid NAO robot. Emotional movements emulated on NAO, intended to induce joy or sadness, were presented to participants whose facial expressions were recorded and analysed with a Noldus FaceReader. Our preliminary results indicate reliable emotion recognition using the Laban choreographic approach in modelling the robot's affective gestures.

Keywords: social robot, Laban movement analysis, artificial emotions, FaceReader

1. Introduction

Recently, a number of robotics applications have become more and more widely used in real-life interactions. In the past, robots were mainly employed in manufacturing, but nowadays robots are used in education, health care, entertainment, communication or collaborative team-work [3]. At present, the paradigm of embodied cognition is also widely explored in robotics to design and implement effective HRI (Human-Robot Interaction). According to this theoretical framework, cognitive and affective processes are mediated by perceptual and motoric states of the body [3]. Some authors [8] also emphasise that non-verbal affective behaviours implemented in a humanoid robot such as NAO [9] are treated as important cues for human observers to recognise the affective internal states of the robot and judge its personality. For instance, novel robotics software designed for NAO [8] already implements dispositional or affective features of this robot, tailoring its traits, moods, emotions or even attitudes toward a human subject. A similar approach is considered in terms of developing child-robot interaction [2]. These authors focus on providing the humanoid NAO robot with the capacity to express emotions by its body postures and head position in order to convey emotions effectively [2]. Another approach to enhance the HRI capacity of NAO combines its communicative behaviour with non-verbal gestures through hand and head movements as well as gaze orienting [6]. Some studies also identified a fixed set of non-verbal gestures of NAO that were used for enhancing its presentation and turn-management capabilities in conversational interactions. In fact, the set of emotional expressions for NAO has become a standard [2], successively employed in several applications, for instance in designing a more complicated model of emotion expression in humanoid

robots [10]. This model of the robot's emotion involves components of arousal and valence, which are affected by the ongoing emotional states of the partner in a social interaction game with the NAO robot, which expresses emotional states through its voice, posture, whole-body poses, eye colours and gestures. There are also several studies that aim at endowing the robot with the ability to recognise and interpret human affective gestures, see for instance [1, 5]. Some research shows that quantitative movement parameters can be matched to emotional states embodied in the agent (human or robot) with Laban Movement Analysis (LMA) [1]. The work based on LMA [5] uses a parallel real-time framework in the robotic system for recognition of emotions based on video clips of human movements. In particular, the authors argue that LMA can serve as a tool for implementing a common emotional language for expressing and interpreting movements in HRI, and in that way resembles the coding principles between human action and perception. In fact, several works on recognition of expressive gestures in robots [1, 5] rely on LMA to a large degree. Little is known about using LMA for recognition of human affective states triggered by expressive gestures of the robot [2, 6, 8, 10]. Some authors [10] suggest using professional software (e.g. Noldus FaceReader [7]) to monitor the basic emotions of a human triggered by the robot's movements, but none of the mentioned works have employed this measurement technique. To the best of our knowledge, there is no ready implementation of a complete and comprehensive analysis of emotional expression emulated on the NAO robot or a similar humanoid robot in real-life interactions. Thus, the idea of our research was to examine the effectiveness of the Noldus FaceReader [7] as a recognition system for human emotions triggered by the NAO robot's affective gestures designed with Laban movement analysis.

2. Laban Movement Analysis

2.1. Laban Effort and Shape Components of Movements

To establish affective gestures, the LMA model provides five major components describing movements, the so-called Body, Effort, Shape, Space and Relationship parameters [13]. In terms of designing affective states based on the robot's gestures, the most relevant parameters of the LMA analysis are the Effort and Shape parameters [13]. The Laban Effort parameter reveals the dynamics and expressiveness of the movement. The Shape



parameter refers to changes of the body shape during the movement [13]. In our study, we focused on the Effort component and its relevant movement attributes: (i) space, which reveals the approach to the surroundings; (ii) weight, which is an attitude to the movement impact; (iii) time, which expresses a need or lack of urgency; and (iv) flow, which refers to the amount of control and bodily tension [13]. Each quality can be expressed in two extreme polarities: the space quality can be direct or indirect (straight or flexible movements), weight can be strong or light (powerful or weightless movements), time can be quick or sustained (quick or lingering movements), while the flow parameter may be bound or free (controlled or uncontrolled movements) [13]. For example, the difference between the movements of reaching for something and punching someone is based not only on arm organisation, but on the weight, flow and time qualities as well [1]. These parameters can be translated into dynamic characteristics including curvature, acceleration and velocity [1].

2.2. The Laban Effort Graph for Generating the Robot's Affective Gestures

To design expressive movements, one can use a Laban Effort graph [5] (see Figure 1). Depending on the affective state to be generated in the robot, there is a need to focus on setting specific movement parameters of Effort. For instance, if the NAO robot's movement is intended to express a state of joy, then according to the Laban Effort graph for joy the robot's gesture should engage the specific Effort qualities denoted in red in the graph: space – indirect, weight – light, flow – free, time – sudden. This means that the robot should perform a gesture not directed at any particular point in space (space – indirect), with movements that are delicate (weight – light), uncontrolled and vivid (flow – free), performed in a quick way (time – sudden). In the case of sadness, the robot's gesture should involve the following combination of movement qualities: space – direct, weight – light, flow – bound, time – sudden [5]. Therefore, the robot should perform a gesture which indicates a specific point in space (space – direct), with movements that are delicate (weight – light), limited (flow – bound) and quick (time – sudden). As shown in Figure 2, we generated the affective states (joy and sadness) of the NAO robot according to such LMA requirements.

Figure 1. States of joy and sadness presented on Laban Effort graphs [5]
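To make the mapping from Laban Effort qualities to executable motion settings concrete, the short Python sketch below encodes the joy and sadness profiles described above as simple numeric gesture parameters. The parameter names and value ranges are our own illustrative assumptions and are not part of the original implementation.

# Illustrative mapping of Laban Effort qualities to gesture parameters.
# The qualities follow the joy/sadness profiles from the Laban Effort graphs;
# the numeric scales below are assumptions made only for this sketch.
EFFORT_PROFILES = {
    "joy":     {"space": "indirect", "weight": "light", "time": "sudden", "flow": "free"},
    "sadness": {"space": "direct",   "weight": "light", "time": "sudden", "flow": "bound"},
}

def effort_to_motion_parameters(emotion):
    """Translate an Effort profile into rough motion parameters on 0..1 scales."""
    q = EFFORT_PROFILES[emotion]
    return {
        # sudden movements -> high velocity, sustained -> low velocity
        "speed": 0.9 if q["time"] == "sudden" else 0.3,
        # light weight -> soft joint stiffness, strong -> high stiffness
        "stiffness": 0.3 if q["weight"] == "light" else 0.8,
        # free flow -> large, loosely constrained amplitude
        "amplitude": 0.8 if q["flow"] == "free" else 0.4,
        # indirect space -> the gesture wanders instead of pointing at a target
        "directed": q["space"] == "direct",
    }

if __name__ == "__main__":
    for emotion in ("joy", "sadness"):
        print(emotion, effort_to_motion_parameters(emotion))

Such a table can then be used to parameterise keyframe trajectories for the robot's joints; the actual gestures used in this study were designed by hand following the graphs in Figure 1.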


3. Methodology

3.1. Participants

Eleven undergraduate students took part in the study (1 female and 10 males), with an average age of 23.5 (SD = 1.2), from the Faculty of Electronics, Wrocław University of Technology. All completed informed consent forms before the experiment. The study was approved by the local Ethics Committee at the Institute of Psychology, University of Zielona Góra.

3.2. FaceReader

Noldus FaceReader [7] is software for facial analysis that can detect six basic human emotions: joy (happiness), sadness, anger, surprise, fear and disgust, as well as a neutral state. FaceReader can also determine components of facial expression such as contempt, gaze orientation, eye and mouth openness or closeness, and the position of the eyebrows. In addition, FaceReader can detect the valence of the emotion (whether it is positive or negative), gender, age, ethnicity, and the presence of glasses and facial hair (beard and moustache). Facial expression analysis (FEA) is a highly demanding task because of the multidimensional nature of the underlying mathematical space [11]. Image and video data of a human face are difficult to analyse as raw bitmaps, so it is necessary to transform such data into a set of features beforehand. In addition, the FEA analysis is obscured by several factors [11]: the location of the face in the image, the size and orientation of the face, the lighting of the face, and individual differences in facial expression. FEA with FaceReader proceeds in three steps [7]: (i) face finding, which determines the face position using the Viola-Jones method [12]; (ii) face modelling, which uses the Active Appearance Model (AAM) [4] to synthesise an artificial face model with 500 key points and the surrounding facial texture; AAM then uses a database of annotated images to calculate image modifications; (iii) face classification, which is performed by a neural network trained with over 10,000 manually annotated samples. The accuracy of the neural network in recognising the six basic emotions can reach 90% or higher.

3.3. NAO Robot

NAO is an autonomous, programmable humanoid robot developed by Aldebaran Robotics [9]. It is 57 cm tall and weighs 5.4 kg. The rich capabilities of NAO in terms of HRI motivate researchers to use this robot as a research platform in various applications. Examples of the emotional movements (gestures) used in our study, modelled with Laban movement analysis, are shown in Figure 2. We developed two emotional gestures presenting joy and sadness. Note that specific stages of the relevant body motions of NAO are distributed along the time axis to express the values of the Effort (space, weight, time and flow) and Shape parameters (horizontal, vertical and sagittal). The development of the joy gesture was inspired by [8].
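As an illustration of the face-finding step only (not of FaceReader itself, which is closed software), the Python/OpenCV sketch below runs a Viola-Jones cascade detector on the frames of a recorded video. The video file name is a placeholder, and the cascade path assumes the bundled data of the opencv-python package.

import cv2

# Viola-Jones face detection on a recorded video, illustrating step (i) of the
# FEA pipeline described above. The video path is a placeholder.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

capture = cv2.VideoCapture("participant_recording.avi")
frame_index = 0
while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Each detection is a bounding box (x, y, width, height) on which a
    # subsequent model-fitting and classification stage would operate.
    print(frame_index, faces)
    frame_index += 1
capture.release()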



Figure 2. Emotional expressions in time (t from 0 to 5 s): joy and sadness inspired by LMA

3.4. Experimental Scenario


Each participant was seated in a chair in front of the NAO robot (at a distance of about 1 metre). The participant's facial expressions were recorded with a camera located on a table. The participant observed the NAO robot, which was either in a neutral position or performed gestures intended to express joy (3 s) and sadness (5 s), separated by a neutral position of NAO (4 s). When the robot was in the neutral state (no movements), it spoke numbers aloud to alert the participant to the emotion changes emulated by the NAO movements. The facial emotional reactions of each individual were analysed with the Noldus FaceReader [7] in an off-line mode on the basis of the recorded video material.


4. Results

To examine the accuracy of recognition of the emotions emulated by the NAO movements, we used the individual FEA data from the FaceReader. For each participant, we evaluated the accuracy of facial emotion recognition as a function of time. First, we started our analysis by visually inspecting the recognition of joy embodied within the robot's gestures. For eight individuals, we found that the facial expression changes were indicative of joy. The results from two representative individuals (participants 4 and 5) are presented in Figure 3(a). As can be seen, there was a distinct emotional response in the case of the joy movements emulated by the robot's scenario. To quantify the correspondence between the accuracy of joy recognition and the NAO's affective gesture, we calculated Pearson correlation coefficients. The correlation coefficients were calculated for a 7-s time window. The start and finish time lags of this window were established with respect to the negative time lag for the pre-stimulus (1 s) and the positive lag for the post-stimulus (3 s) (see Figure 3(b)). The correlation analysis showed a strong positive correlation for one participant, r = 0.73, p < 0.0001. Moderate correlations between the facial expression changes and the affective gesture produced by NAO were observed for three subjects, who yielded the correlation values r = 0.49, p = 0.0001, r = 0.38, p < 0.0001 and r = 0.43, p < 0.0001. In the case of four participants the correlation values were low but significant, r = 0.25, r = 0.17, r = 0.20, r = 0.27 (for all cases p < 0.05). The remaining participants showed no association between the affective gestures and the facial response, or presented some artefacts. The same methodology was repeated for sadness recognition. In the case of sadness, the correspondence between emotion recognition accuracy and the affective robotic gesture was rather low. The visual inspection of the stimulus-response plots indicated that only two participants responded in a distinctive manner to sadness expressed by the NAO movements. This finding was confirmed by a positive correlation coefficient of 0.32, p < 0.0001, for participant 1. We also observed a significant negative, moderate correlation for participant 5 (see Figure 3(b)), r = -0.48, p < 0.0001, suggesting that there is a time lag between the stimulus and the facial expression response for this emotion. Note that the time window for analysing sadness was expanded to 11 s.

Figure 3. Facial expression recognition of joy and sadness; results for participants 4 (a) and 5 (b)
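The windowed correlation analysis described above can be reproduced with a few lines of Python. The sketch below assumes that the FaceReader joy intensity has been exported as a time series with a known sampling rate; the variable names, sampling rate and synthetic data are illustrative assumptions only.

import numpy as np
from scipy.stats import pearsonr

def window_correlation(joy_intensity, stimulus_profile, fps, onset_s,
                       pre_s=1.0, post_s=3.0, gesture_s=3.0):
    """Pearson correlation between a facial-expression intensity trace and the
    stimulus profile inside the window [onset - pre, onset + gesture + post]."""
    start = int((onset_s - pre_s) * fps)
    stop = int((onset_s + gesture_s + post_s) * fps)
    window = slice(max(start, 0), min(stop, len(joy_intensity)))
    r, p = pearsonr(joy_intensity[window], stimulus_profile[window])
    return r, p

# Example with synthetic data: 30 fps, a 3-s joy gesture starting at t = 10 s,
# giving the 7-s analysis window (1 s pre + 3 s gesture + 3 s post).
fps = 30
t = np.arange(0, 20, 1 / fps)
stimulus = ((t >= 10) & (t <= 13)).astype(float)        # stimulus on/off profile
joy = 0.6 * stimulus + 0.1 * np.random.rand(t.size)     # noisy facial response
print(window_correlation(joy, stimulus, fps, onset_s=10.0))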

5. General Discussion

Our study provides compelling evidence that affective states embodied in the humanoid NAO robot on the basis of Laban movement analysis can be effectively recognised by humans. We presented participants with two emotional gestures embodied in NAO, happiness and sadness. As indicated by the FaceReader analysis, happiness expressed through NAO's body was a readable emotion. A similar conclusion cannot be drawn for sadness, as this emotional gesture was less pronounced. Our research indicates that future social robots should be equipped with a facial expression recognition system similar to FaceReader. This enables a new social robot design that can effectively carry out objective measurements of basic human emotions in a real-time processing mode from the video stream of a camera (built into the robot or external to it) [10].

AUTHORS

Krzysztof Arent* – Department of Cybernetics and Robotics, Electronics Faculty, Wroclaw University of Science and Technology, ul. Wybrzeże Wyspiańskiego 27, 50-370 Wroclaw, Poland, e-mail: krzysztof.arent@pwr.edu.pl, www: www.pwr.edu.pl.
Małgorzata Gakis – SWPS University of Social Sciences and Humanities, Wrocław Faculty of Psychology, ul. Ostrowskiego 30b, 53-238 Wrocław, Poland, e-mail: mgakis@st.swps.edu.pl, www: http://english.swps.pl/wroclaw/.
Janusz Sobecki – Department of Computer Science, Faculty of Computer Science and Management, Wroclaw University of Science and Technology, ul. Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland, e-mail: janusz.sobecki@pwr.edu.pl, www: www.pwr.edu.pl.
Remigiusz Szczepanowski – Institute of Psychology, Faculty of Education, Psychology and Sociology, University of Zielona Góra, al. Wojska Polskiego 69, 65-762 Zielona Góra, Poland, e-mail: rszczepanowski@uz.zgora.pl, www: http://psychologia.wpps.uz.zgora.pl/.

*Corresponding author

ACKNOWLEDGEMENTS

We kindly thank Prof. J. Grobelny and Dr. R. Michalski from Wroclaw University of Science and Technology for making available the necessary facilities to carry out this research.

References
[1] E. I. Barakova and T. Lourens, "Expressing and interpreting emotional movements in social games with robots", Personal and Ubiquitous Computing, vol. 14, no. 5, 2010, 457–467.
[2] A. Beck, L. Cañamero, A. Hiolle, L. Damiano, P. Cosi, F. Tesser, and G. Sommavilla, "Interpretation of emotional body language displayed by a humanoid robot: A case study with children", International Journal of Social Robotics, vol. 5, no. 3, 2013, 325–334.
[3] C. Breazeal, K. Dautenhahn, and T. Kanda, Handbook of Robotics, chapter Social Robots, Springer, 2016.
[4] T. Cootes and C. Taylor, "Statistical models of appearance for computer vision", online technical report, Wolfson Image Analysis Unit, Imaging Science and Biomedical Engineering, University of Manchester, 2000.
[5] T. Lourens, R. van Berkel, and E. Barakova, "Communicating emotions and mental states to robots in a real time parallel framework using Laban movement analysis", Robotics and Autonomous Systems, vol. 58, no. 12, 2010, 1256–1265 (Intelligent Robotics and Neuroscience).
[6] R. Meena, K. Jokinen, and G. Wilcock, "Integration of gestures and speech in human-robot interaction". In: 2012 IEEE 3rd International Conference on Cognitive Infocommunications (CogInfoCom), 2012, 673–678.
[7] Noldus, FaceReader version 6. Reference Manual, Noldus Information Technology, 2014.
[8] S. Park, L. Moshkina, and R. C. Arkin, Intelligent Autonomous Systems 11, chapter Recognizing Nonverbal Affective Behavior in Humanoid Robots, IOS Press Ebooks, 2010.
[9] SoftBank Robotics, NAO Documentation, http://doc.aldebaran.com/2-1/home_nao.html.
[10] M. Tielman, M. Neerincx, J.-J. Meyer, and R. Looije, "Adaptive emotional expression in robot-child interaction". In: Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, New York, NY, USA, 2014, 407–414.
[11] H. Van Kuilenburg, M. Den Uyl, M. Israël, and P. Ivan, "Advances in face and gesture analysis", Measuring Behavior 2008, 2008, 371.
[12] P. Viola and M. J. Jones, "Robust real-time face detection", International Journal of Computer Vision, vol. 57, no. 2, 2004, 137–154.
[13] L. Zhao, Synthesis and Acquisition of Laban Movement Analysis Qualitative Parameters for Communicative Gestures, PhD thesis, University of Pennsylvania, Philadelphia, PA, USA, 2001. AAI3015399.


Comparison of Keypoint Detection Methods for Indoor and Outdoor Scene Recognition

Submitted: 2nd October 2017; accepted: 12th December 2017

Urszula Libal, Łukasz Łoziuk
DOI: 10.14313/JAMRIS_4-2017/32

Abstract: We describe an experimental study, based on several million video scenes, of seven keypoint detection algorithms: BRISK, FAST, GFTT, HARRIS, MSER, ORB and STAR. It was observed that the probability distributions of selected keypoints are drastically different between indoor and outdoor environments for all algorithms analyzed. This paper presents a simple method for distinguishing between indoor and outdoor environments in a video sequence. The proposed method is based on the central location of keypoints in video frames. This has led to a universally effective indoor/outdoor environment recognition method, and may prove to be a crucial step in the design of robotic control algorithms based on computer vision, especially for autonomous mobile robots.

Keywords: machine vision, keypoint, UAVs, scene recognition

1. Introduction

Correct determination of the sensed environment by a multitasking robot is crucial to the success of its control algorithms. For various kinds of environments, different sets of sensors and devices can be used (e.g., an IR scanner for indoor and GPS for outdoor) to support autonomous control and other methods of determining the trajectory of the mobile robot. Most of the models and algorithms that work well in outdoor environments work poorly inside buildings [8]. The simplest method for distinguishing a robot's environment is to divide it into indoor and outdoor areas. It is clear that the weather conditions and the nature of the obstacles encountered by the robot on its path can be very different between indoor and outdoor environments [19]. Striving to create autonomous mobile robots forces us to supply them with control algorithms broad enough to cope with a variety of different environments. The division of environments into indoor [3], [4] and outdoor [1] does not exhaust all the possibilities. However, it may be used as an initial recognition step, allowing the robot to switch between two dedicated control algorithms, better adapted to open and closed spaces, respectively. This paper focuses on demonstrating the differences between the probabilities of keypoint locations for indoor and outdoor areas on the basis of characteristic points identified by the BRISK, FAST, GFTT, HARRIS, MSER, ORB and STAR algorithms for video sequences. The difference in the central position of characteristic

points (on video frames) between the indoor and the outdoor environment provides a strong basis for the development of a simple and effective method for distinguishing the environment surrounding the robot (based on video footage). For land-based robots, many methods have been proposed for the detection and avoidance of obstacles. However, for aerial robots, such as unmanned aerial vehicles (UAVs), there are still many challenges left to be solved. For UAVs, avoiding obstacles is more difficult, because they operate in 3D space, whereas for land-based robots, whose movement can be simplified to the 2D plane, obstacle detection is simpler. Since UAVs have a limited carrying capacity, they cannot be equipped with as many sensors as land-based robots for obstacle detection, for example laser scanners [2]. Computer vision systems provide a good solution to this problem, because video cameras are lightweight and energy efficient. In contrast to scanning sensors such as lidar and sonar, cameras offer higher resolution and noise immunity. Land-based robots can also disengage the drive and stop for a long analysis of the environment. UAVs, however, cannot stay in a fixed position for a long time, because doing so adversely affects their flight time. In contrast to land-based robots, aerial robots operate in several different environments. In urban areas, the following types of neighborhoods, shown in Figs. 1–4, may be encountered:

Indoors, inside buildings – Characterized by enclosed, often rectangular spaces containing obstacles, including mobile ones, which can appear from all directions. There may also be mirrors and windows, adding to the complexity of object recognition systems.

Fig. 1. Indoors

Streets – Characterized by having many moving obstacles. However, the street plane can serve



as a reference to identify moving objects in the image, and thanks to such an assumption, many automotive obstacle detection algorithms can be used.

Fig. 2. Street

Urban canyons – Obstacles are usually located on the sides, e.g. buildings, trees, and the occasional obstacle in front. A robot must also avoid colliding with power lines and streetlights.


Fig. 3. Urban canyon

The space above the city – Compared to other environments, there are relatively few obstacles and they are usually static, e.g. radio antennas, tall buildings, chimneys.

Fig. 4. The space above the city

There are many specialized obstacle-detection algorithms used to find barriers in certain environments. The effectiveness of the algorithms strongly depends on the area where the mobile robot works. In each of the environments, different types of obstacles occur, so it is necessary to distinguish between the different environments and use dedicated control algorithms adapted to that specific environment.

2. Motivation

Proper scene recognition, whether it involves distinguishing if the robot is located inside or outside a building [19] or a more sophisticated analysis of the environment, is necessary whenever the loss of the robot could prove costly. Correct characterization of the environment allows for the use of dedicated control algorithms. The main objective is thus to avoid colliding with obstacles and damaging the robot or the obstacle. It is not only about avoiding the destruction of the robot (e.g., a UAV), but also about eliminating collisions with people, which is important because of the possibility of severe injury by such a flying robot. In some papers a completely different approach is presented, allowing the destruction of the robot. One example is training robots to cross a highway collision-free [18]. In this process, a certain number of robots must be discarded (damaged, or run over by vehicles), so that the other robots can acquire the knowledge needed to learn.

3. Data

The analyzed data is a collection of more than 3,000,000 scenes taken from various movies. The videos have resolutions ranging from VGA (640x480) to Full HD (1920x1080). The video frame sequences were shot using various cameras, with photodetector matrices based on either CCD or CMOS technologies. The videos were recorded at between 20 and 30 frames per second. They can be divided into two categories: recordings captured on premises (indoor) and those captured in open spaces (outdoor). The scenes are characterized by different lighting and camera movement dynamics. Video sequences shot in closed areas depict interior spaces such as museums or apartments. It should be noted that most of the movies filmed in closed areas are characterized by camera movements through areas such as corridors, where the orientation of the camera is close to the typical orientation of the human eye when walking through a room. Movies shot in open areas come from cameras mounted on drones (UAVs). The movies were recorded during drone flights over fields, roads, and mountains. We made sure that no parts of the drone entered the frame. All videos were recorded by cameras with lenses pointed in the direction of the drone flight. Of the 3,000,000 video frames, 1,200,000 depict indoor areas, whilst 1,800,000 depict outdoor areas. The diversity of the video material serves as a basis for constructing a simple and rapid method for determining and distinguishing between indoor and outdoor environments captured by cameras.

4. Keypoint Selection Algorithms

The following seven keypoint selection methods were tested: BRISK, FAST, GFTT, HARRIS, MSER, ORB and STAR. The keypoint pixels detected by each of the listed algorithms are shown on exemplary frames in Figures 5 and 6.
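All seven detectors are available through OpenCV's Python bindings (STAR via the opencv-contrib xfeatures2d module). The sketch below shows one plausible way to instantiate them and run them frame by frame; it is an illustration only and is not the authors' released software [17].

import cv2

# Instantiate the seven detectors compared in this paper. HARRIS is obtained
# from the GFTT detector with the Harris response enabled; STAR requires the
# opencv-contrib package (cv2.xfeatures2d).
detectors = {
    "BRISK": cv2.BRISK_create(),
    "FAST": cv2.FastFeatureDetector_create(),
    "GFTT": cv2.GFTTDetector_create(),
    "HARRIS": cv2.GFTTDetector_create(useHarrisDetector=True),
    "MSER": cv2.MSER_create(),
    "ORB": cv2.ORB_create(),
    "STAR": cv2.xfeatures2d.StarDetector_create(),
}

def keypoints_per_frame(video_path):
    """Detect keypoints with every method on each frame of a video file."""
    capture = cv2.VideoCapture(video_path)
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # keypoint.pt holds the (x, y) pixel position of a detected keypoint
        yield {name: det.detect(gray, None) for name, det in detectors.items()}
    capture.release()

# Hypothetical usage on one of the outdoor recordings:
# for kps in keypoints_per_frame("example_flight.mp4"):
#     print({name: len(points) for name, points in kps.items()})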


Fig. 5. Keypoints on an exemplary frame for an indoor environment: (a) original, (b) BRISK, (c) FAST, (d) GFTT, (e) HARRIS, (f) MSER, (g) ORB, (h) STAR

Fig. 6. Keypoints on an exemplary frame for an outdoor environment: (a) original, (b) BRISK, (c) FAST, (d) GFTT, (e) HARRIS, (f) MSER, (g) ORB, (h) STAR

The original pictures are presented in Figure 5a (an indoor scene: a covered sports hall for basketball) and in Figure 6a (an outdoor scene: a landscape with fields in the foreground and mountains in the background). All algorithms compared in this paper were implemented using the OpenCV [6] library. The software [17] contains all of the previously mentioned methods for keypoint selection in video sequences. The program associated with this paper is freely available via GitHub [17]. A short description of each method is given below.

1) BRISK. The Binary Robust Invariant Scalable Keypoints method was proposed by Leutenegger et al. [9]. The algorithm is a modification of the BRIEF [5] (Binary Robust Independent Elementary Features) method, which uses binary strings for corner detection and can be computed by performing simple intensity difference tests. The output of every test is either zero or one, and the result is appended to the end of the string. In contrast to the BRIEF method, BRISK relies on a circular sampling pattern from which it computes brightness comparisons to form a binary descriptor string. The BRIEF method is not invariant to large in-plane rotations; the BRISK method, however, handles simple in-plane rotations very well.

2) FAST. Features from Accelerated Segment Test is another corner detection method, published by Rosten and Drummond [11]. It is the most computationally efficient algorithm among corner detectors, leading to extremely fast execution times.

3) GFTT. The Good Features To Track detector was presented by Shi and Tomasi [10]. This method is based on the calculation of the eigenvalues and eigenvectors of the deformation matrix. If both eigenvalues are small, then the feature does not vary much in any direction and is designated a flat region (bad feature). If one eigenvalue is much larger than the other, then the feature varies mainly in one direction, i.e., an edge (bad feature). If both eigenvalues are large, then the feature varies significantly in both directions, i.e., a corner (good feature).

4) HARRIS. The HARRIS feature detector took its name from the surname of one of its originators, Harris and Stephens [12]. This algorithm is the oldest of those analyzed in this paper, having first been published in 1988. HARRIS identifies similar regions among images by selecting and thresholding autocorrelated patches. A high positive value of the response function indicates a corner region, a negative value an edge region, and a small value a flat region. The detection method is similar to GFTT, but the value of the response function is used instead of the eigenvalues.

5) MSER. The Maximally Stable Extremal Regions method finds blobs in images, and was introduced by Matas et al. [13, 14]. This algorithm is invariant to affine transformations of the image intensities.

6) ORB. The Oriented FAST and Rotated BRIEF is a fast feature detector, proposed by Rublee et al. [15]. It is based on both the visual descriptor BRIEF [5] (Binary Robust Independent Elementary Features) and the FAST [11] (Features from Accelerated Segment Test) keypoint detectors. The ORB algorithm is robust to in-plane rotations and it uses a nearest neighbor search instead of random sampling.

7) STAR. The CenSurE (Center Surround Extrema) feature detector, published by Agrawal, Konolige and Blas [16], has been implemented in the OpenCV library [6], where it was given a new name, STAR, and some minor changes were applied, e.g., circular shapes were replaced with approximations. It is designed as a multiscale detector with masks of different sizes.

5. Keypoint Distribution

5.1. Location

The location of the central keypoints is measured with the median. For every set of keypoints

kp = {(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)},     (1)

detected on a fixed frame, the medians of the horizontal and vertical coordinates are calculated as

M_x(kp) = median(x_1, ..., x_n),     (2)
M_y(kp) = median(y_1, ..., y_n).     (3)

The median is used instead of the mean value because the distributions for all analyzed algorithms are skewed, or have more than one mode (see Figures 7 and 8). In such situations, the mean value does not point to a central location and so the median is more accurate. For one type of environment, the vertical coordinates M_y of the central keypoints take values on the same level for all algorithms. However, the vertical median M_y values are significantly different for indoor and outdoor video sequences: a higher relative pixel position for outdoor and a lower one for indoor. High relative pixel positions are obtained for pixels placed in the lower section of the frame. For every frame, the pixel in the upper left corner is the origin, (0,0). A relative scale is used, because videos with a wide range of resolutions were analyzed.
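A direct implementation of these per-frame statistics, together with the standard deviations used later in Section 5.2, could look as follows in Python with NumPy. It operates on keypoint objects exposing a .pt attribute (as OpenCV detectors return), and the function name and relative-coordinate normalisation are our own illustrative choices.

import numpy as np

def keypoint_statistics(keypoints, frame_width, frame_height):
    """Median (M_x, M_y) and dispersion (S_x, S_y) of keypoint positions,
    in the relative scale where the top-left pixel is (0, 0)."""
    xy = np.array([kp.pt for kp in keypoints], dtype=float)
    x = xy[:, 0] / frame_width
    y = xy[:, 1] / frame_height
    m_x, m_y = np.median(x), np.median(y)
    # Dispersion measured around the median, as in Eqs. (4)-(5) of Section 5.2.
    s_x = np.sqrt(np.mean((x - m_x) ** 2))
    s_y = np.sqrt(np.mean((y - m_y) ** 2))
    return m_x, m_y, s_x, s_y

Averaging these per-frame values over a whole video is how the entries of Tables 1 and 2 are described.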

Tab. 1. Average characteristics per frame for indoor scenes

INDOOR
Algorithm | Keypoint Center M_x | Keypoint Center M_y | Keypoint St Dev S_x | Keypoint St Dev S_y | Execution Time [s] | No. of Keypoints
BRISK     | 0.410 | 0.691 | 0.299 | 0.153 | 0.466 | 435.2
FAST      | 0.445 | 0.666 | 0.331 | 0.189 | 0.003 | 5824.8
GFTT      | 0.446 | 0.692 | 0.308 | 0.164 | 0.019 | 904.0
HARRIS    | 0.405 | 0.695 | 0.292 | 0.139 | 0.018 | 507.9
MSER      | 0.407 | 0.691 | 0.330 | 0.186 | 0.104 | 285.5
ORB       | 0.424 | 0.670 | 0.256 | 0.122 | 0.009 | 498.8
STAR      | 0.411 | 0.691 | 0.270 | 0.132 | 0.026 | 256.6

Tab. 2. Average characteristics per frame for outdoor scenes

OUTDOOR
Algorithm | Keypoint Center M_x | Keypoint Center M_y | Keypoint St Dev S_x | Keypoint St Dev S_y | Execution Time [s] | No. of Keypoints
BRISK     | 0.464 | 0.776 | 0.415 | 0.102 | 0.526 | 4053.2
FAST      | 0.474 | 0.771 | 0.437 | 0.105 | 0.022 | 51705.7
GFTT      | 0.448 | 0.769 | 0.398 | 0.095 | 0.079 | 1000.0
HARRIS    | 0.451 | 0.770 | 0.396 | 0.095 | 0.072 | 932.2
MSER      | 0.494 | 0.786 | 0.443 | 0.110 | 0.352 | 1497.1
ORB       | 0.399 | 0.770 | 0.353 | 0.081 | 0.045 | 499.3
STAR      | 0.449 | 0.783 | 0.379 | 0.093 | 0.137 | 1517.1


Fig. 7. Marginal histograms of keypoint position (x, y) for indoor scenes. Average keypoint occurrence per frame for every relative pixel position (x, y): x – blue line, y – red line. Panels: (a) BRISK, (b) FAST, (c) GFTT, (d) HARRIS, (e) MSER, (f) ORB, (g) STAR

Fig. 8. Marginal histograms of keypoint position (x, y) for outdoor scenes. Average keypoint occurrence per frame for every relative pixel position (x, y): x – blue line, y – red line. Panels: (a) BRISK, (b) FAST, (c) GFTT, (d) HARRIS, (e) MSER, (f) ORB, (g) STAR

5.2. Dispersion

The dispersion of the keypoints kp = {(x_i, y_i)} on a frame is measured by the standard deviation about the median:

S_x(kp) = sqrt( (1/n) * sum_{i=1..n} (x_i - M_x)^2 ),     (4)
S_y(kp) = sqrt( (1/n) * sum_{i=1..n} (y_i - M_y)^2 ).     (5)

The standard deviations (horizontal S_x and vertical S_y) show the dispersion from the central points given by the medians (M_x, M_y). The lowest deviation is observed for the y-coordinate for outdoor video sequences. This means that the y-coordinates of keypoints for outdoor scenes are highly concentrated around the median M_y. The concentration of keypoints in the lower part of the video frame is characteristic of outdoor landscapes. This attribute is shown in Figures 6b–6h and also in Figure 8.

6. Algorithm Performance

6.1. Number of Keypoints per Frame

The average numbers of keypoints for all executed algorithms are presented in Table 1 (for indoor) and Table 2 (for outdoor). The largest number of keypoints was detected by the FAST algorithm, for both environment types. This can also be observed in Figures 5c and 6c. The large number of points selected by the FAST algorithm coincides with the shortest average computation time.

6.2. Execution Time per Frame

As mentioned earlier, the FAST algorithm needed the shortest average time to calculate keypoints. The



second fastest algorithm is the ORB (Oriented FAST and Rotated BRIEF) algorithm. HARRIS and GFTT perform calculations in a comparably short time. The BRISK algorithm proved to be the slowest procedure. All execution times are shown in Tables 1 and 2.
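Per-frame execution times of the kind reported in Tables 1 and 2 can be measured with a simple timing loop. The sketch below (Python) assumes the detectors dictionary from the earlier example; it only illustrates the measurement and does not reproduce the published numbers.

import time
import cv2

def average_detection_time(detector, frames):
    """Mean wall-clock time (in seconds) a detector needs per frame."""
    total = 0.0
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        start = time.perf_counter()
        detector.detect(gray, None)
        total += time.perf_counter() - start
    return total / max(len(frames), 1)

# Hypothetical usage, with `frames` a list of BGR frames read from a video:
# for name, det in detectors.items():
#     print(name, average_detection_time(det, frames))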

7. Classification Problem for Scene Recognition

Scene recognition can be defined as a two-class classification problem, with class 1 (indoor) and class 2 (outdoor). The significant differences in the keypoint distributions shown in this paper should enable the discrimination of both of these classes. The central keypoint locations are placed in the lower sections of frames for videos taken outside, and in the central sections for videos taken inside. Most of the selected keypoints for outdoor landscapes occur in the bottom half of the frame. For indoor scenes, keypoints can be found in the upper part of the frame as well. This property brings hope that the classes (indoor and outdoor) are distinguishable. We performed the classification based on the following decision rule:

Ψ(kp) = I (indoor),   if |M_y(kp) − M_y^I| ≤ |M_y(kp) − M_y^O|,
        O (outdoor),  if |M_y(kp) − M_y^I| > |M_y(kp) − M_y^O|.     (6)

A frame represented by a set of keypoints kp is classified to class I (indoor) when the median vertical pixel position M_y(kp) of the keypoints detected in the frame is closer to the median vertical pixel position M_y^I of keypoints for indoor frames (see Table 1). It is classified to class O (outdoor) when the median M_y(kp) is closer to the median M_y^O calculated for outdoor frames (see Table 2). Without loss of generality, we present the rule for the fixed case M_y^I < M_y^O as follows:

Ψ(kp) = I (indoor),   if M_y(kp) ≤ λ,
        O (outdoor),  if M_y(kp) > λ,     (7)

where the threshold distinguishing between the two classes is given by

λ = (M_y^I + M_y^O) / 2.     (8)
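The threshold rule (7)–(8) is trivial to implement. The following Python sketch uses the indoor and outdoor reference medians of Tables 1 and 2 for the ORB detector, as an example of our own choosing.

# Reference vertical medians M_y for ORB, taken from Tables 1 and 2.
M_Y_INDOOR = 0.670
M_Y_OUTDOOR = 0.770

def classify_frame(m_y_frame, m_y_indoor=M_Y_INDOOR, m_y_outdoor=M_Y_OUTDOOR):
    """Decision rule (7): compare the frame's vertical keypoint median with
    the threshold lambda = (M_y^I + M_y^O) / 2 from Eq. (8)."""
    threshold = (m_y_indoor + m_y_outdoor) / 2.0
    return "indoor" if m_y_frame <= threshold else "outdoor"

print(classify_frame(0.62))   # closer to the indoor median -> 'indoor'
print(classify_frame(0.78))   # keypoints low in the frame  -> 'outdoor'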

The proposed classification method, based on the central keypoint position, was compared to the Naive Bayes classifier [20, 21], since the distributions in both classes are known (estimated by histograms, to be precise). Table 3 contains the results for the two fastest keypoint selection methods, FAST and ORB, and one with a medium computation time, STAR. The proposed classifier has surprisingly good performance, attributed to both the simplicity of the rule and the reduction of a high-dimensional problem to one dimension (i.e., only the vertical components). For the algorithms FAST and ORB, the results are comparable to the Naive Bayes classifier. The decision areas for both classifiers in those cases were similar, as indicated by the respective thresholds λ (see Figure 9). For the STAR algorithm, the results differ between the two classification methods, because for the Naive Bayes classifier four decision areas were created, whereas for the central keypoint method only two are created (see the border points λ in Figure 10).

Fig. 9. Example of threshold λ for vertical components y of keypoints (x, y) detected with the ORB algorithm

Fig. 10. Example of threshold λ for vertical components y of keypoints (x, y) detected with the STAR algorithm
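The performance figures in Table 3 are standard precision, recall and F-score values. For completeness, a minimal sketch of how such scores can be computed from parallel lists of true and predicted labels is given below; it is illustrative only and does not reproduce the published experiment.

def precision_recall_f1(y_true, y_pred, positive="indoor"):
    """Precision, recall and F-score for one class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example with three indoor and three outdoor frames:
truth = ["indoor", "indoor", "indoor", "outdoor", "outdoor", "outdoor"]
predicted = ["indoor", "indoor", "outdoor", "outdoor", "outdoor", "indoor"]
print(precision_recall_f1(truth, predicted))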

8. Conclusions

The results presented in Figures 7 and 8 cover the full collection of 3 million scenes, divided into two groups: indoor and outdoor. For closed areas, all of the investigated algorithms (BRISK, FAST, GFTT, HARRIS, MSER, ORB and STAR) produced quite similar distributions of keypoint locations in an average video frame (see Figure 7). The same was observed for open areas (see Figure 8). This means that the distribution of keypoints for each environment (indoor and outdoor) is practically independent of the algorithm that has been used. The method for distinguishing between indoor and outdoor environments, on the basis of characteristic points obtained from video sequences, may therefore be considered objective and reliable, regardless of the algorithm used for determining the keypoints.



Tab. 3. Classification performance of the proposed central keypoint method in comparison to the Naive Bayes classifier

            Central Keypoint Method            Naive Bayes
Algorithm   precision  recall   F-score        precision  recall   F-score
FAST        0.8954     0.7837   0.8358         0.9086     0.7715   0.8345
ORB         0.7918     0.8588   0.8240         0.7647     0.8916   0.8233
STAR        0.6765     0.7367   0.7053         0.8327     0.6243   0.7136

Statistical analysis showed that the probability density function (estimated by the histograms in Figures 7 and 8) of keypoints for an average video frame captured inside buildings has a maximum in the middle of the image. Characteristic points appear in random places over the whole picture, but most often they occur in the middle of the image. This is connected to the fact that the person filming usually directs the optical center of the camera towards a large number of characteristic points, e.g., at the end of a corridor or at objects such as sculptures, paintings, faces, etc. The probability density function of the location of the average characteristic points in open areas has a maximum at the bottom of the image. This is connected to the fact that for open spaces, most of the upper part of the frame is the sky. The upper part of video frames usually has a small number of characteristic points, in contrast to the lower part. The bottom part of the image contains the land and buildings, and the number of characteristic points for all algorithms is significantly higher there, assuming a horizontal orientation of the camera. For all of the algorithms analyzed, the vertical median M_y calculated for outdoor environments is higher than the M_y for indoor environments, as would be expected. In contrast, there is no significant difference in the horizontal median M_x calculated by the different algorithms for determining the characteristic points. Therefore, the distinction between indoor and outdoor environments should be based on the vertical position of the characteristic point centers (M_x, M_y): in the middle of the video frame for indoor, at the bottom of the video frame for outdoor; this can be explained by the camera positioning on the horizon line. In addition to the distinction between scenes, an important aspect is also the calculation speed. In real-life applications, such a distinction has to be done in real time, and therefore a comparison of the average computation time needed to determine the keypoints with the different algorithms was performed. The FAST algorithm was shown to be the fastest, and gives the highest number of keypoints (see Tables 1–2). The second fastest algorithm was ORB, which is a modification of the FAST method. In contrast, the slowest methods include the MSER and BRISK algorithms.

9. Future Work

Probability distributions for indoor and outdoor environments describing keypoint locations, such as the medians M_x, M_y (central pixel position) and the standard deviations S_x, S_y, were calculated for every frame. This leads to outlying central keypoint positions if the number of keypoints is noticeably smaller than for the average frame. To avoid this problem, one can analyze not only one frame in every step, but a subsequence of frames. This approach is often applied in moving object analyses (see, e.g., Foresti and Micheloni [7]). The results of this paper are limited to the application of the average occurrence for distinguishing between indoor and outdoor scenes. It would be very interesting to continue this line of research and investigate more kinds of scenes, e.g., those mentioned in the introduction.

Software

Open source software [17], together with a graphical user interface, written in Python, is attached to this paper. The software was created and used by the authors for keypoint selection in video sequences.

REFERENCES
[1] S. Srinivasan, L. Kanal, "Qualitative landmark recognition using visual cues", Pattern Recognition Letters, vol. 18, no. 11-13, 1997, 1405–1414. DOI: 10.1016/S0167-8655(97)00142-6.
[2] M. Bosse, R. Zlot, "Keypoint design and evaluation for place recognition in 2D lidar maps", Robotics and Autonomous Systems, vol. 57, no. 12, 2009, 1211–1224. DOI: 10.1016/j.robot.2009.07.009.
[3] P. Espinace, T. Kollar, N. Roy, A. Soto, "Indoor scene recognition by a mobile robot through adaptive object detection", Robotics and Autonomous Systems, vol. 61, no. 9, 2013, 932–947. DOI: 10.1016/j.robot.2013.05.002.
[4] A. Vailaya, A.K. Jain, H. Zhang, "On image classification: city images vs. landscapes", Pattern Recognition, vol. 31, no. 12, 1998, 1921–1935. DOI: 10.1016/S0031-3203(98)00079-X.
[5] M. Calonder, V. Lepetit, C. Strecha, P. Fua, "BRIEF: Binary Robust Independent Elementary Features". In: European Conference on Computer Vision (ECCV), Heraklion, Crete, Greece, September 5-11, Proceedings – Part IV, 778–792, 2010. DOI: 10.1007/978-3-642-15561-1_56.
[6] Itseez, OpenCV Library, http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_feature_detectors.html, accessed: 2017-04-30.
[7] G.L. Foresti, C. Micheloni, "Real-time video surveillance by an active camera", Ottavo Convegno Associazione Italiana Intelligenza Artificiale (AI*IA) – Workshop sulla Percezione e Visione nelle Macchine, Universita di Siena, 2002. DOI: 10.1.1.19.4723.
[8] A. Quattoni, A. Torralba, "Recognizing indoor scenes". In: IEEE Conference on Computer Vision and Pattern Recognition, IEEE Computer Society, Los Alamitos, CA, USA, 413–420, 2009. DOI: 10.1109/CVPRW.2009.5206537.
[9] S. Leutenegger, M. Chli, R.Y. Siegwart, "BRISK: Binary Robust Invariant Scalable Keypoints". In: IEEE International Conference on Computer Vision (ICCV), 2548–2555, 2011. DOI: 10.1109/ICCV.2011.6126542.
[10] J. Shi, C. Tomasi, "Good features to track", IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 1994, 593–600. DOI: 10.1109/CVPR.1994.323794.
[11] E. Rosten, T. Drummond, "Fusing Points and Lines for High Performance Tracking", IEEE International Conference on Computer Vision (ICCV), 17-20 October 2005, Beijing, China, 1508–1515, 2005. DOI: 10.1.1.451.4631.
[12] C. Harris, M. Stephens, "A Combined Corner and Edge Detector". In: Alvey Vision Conference (AVC), Manchester, UK, September, 1-6, 1988. DOI: 10.5244/C.2.23.
[13] J. Matas, O. Chum, M. Urban, T. Pajdla, "Robust wide-baseline stereo from maximally stable extremal regions", Image and Vision Computing, vol. 22, no. 10, 2004, 761–767. DOI: 10.1016/j.imavis.2004.02.006.
[14] K. Mikolajczyk, T. Tuytelaars, C. Schmid, A. Zisserman, J. Matas, F. Schaffalitzky, T. Kadir, L. Van Gool, "A Comparison of Affine Region Detectors", International Journal of Computer Vision, vol. 65, no. 1-2, 2005, 43–72. DOI: 10.1007/s11263-005-3848-x.
[15] E. Rublee, V. Rabaud, K. Konolige, G. Bradski, "ORB: An efficient alternative to SIFT or SURF", IEEE International Conference on Computer Vision, IEEE Computer Society, Los Alamitos, CA, USA, 2564–2571, 2011. DOI: 10.1109/ICCV.2011.6126544.
[16] M. Agrawal, K. Konolige, M.R. Blas, "CenSurE: Center Surround Extremas for Realtime Feature Detection and Matching", European Conference on Computer Vision (ECCV), Marseille, France, October 12-18, 2008, Proceedings – Part IV, 102–115, 2008. DOI: 10.1007/978-3-540-88693-8_8.
[17] Ł. Łoziuk, Video Analysis Algorithms – Software, https://github.com/bitcoinsoftware/videoAlgorithmsAnalysis, accessed: 2017-08-30.
[18] A. Lawniczak, B. Di Stefano, J. Ernst, "Stochastic Model of Cognitive Agents Learning to Cross a Highway", Stochastic Models, Statistics and Their Applications, Springer Proceedings in Mathematics and Statistics, 122:319–326, 2015. DOI: 10.1007/978-3-319-13881-7_35.
[19] D. Lopez De Luise, G. Barrera, S. Franklin, "Robot Localization Using Consciousness", Journal of Pattern Recognition Research, 6(1), 2011, 96–119. DOI: 10.13176/11.257.
[20] R.O. Duda, P.E. Hart, D.G. Stork, Pattern Classification, 2nd Edition, Wiley-Interscience, 2000.
[21] C.M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006. DOI: 10.1117/1.2819119.


Study of Postural Adjustments for Humanoidal Helpmates

Submitted: 6th September 2017; accepted: 4th January 2018

Jessica Villalobos, Teresa Zielińska
DOI: 10.14313/JAMRIS_4-2017/33

Abstract: Humanoid robots and humans look alike, and therefore are expected to adjust their posture in a similar way. We analysed a set of human static postures that should be considered for humanoids acting as caretakers. A dynamic situation was studied to learn how humanoids can react in a dynamical way. Human data were obtained with a professional motion capture system and anthropometric tables. The static postures were studied using a segmented human body model, but for motion analysis the single and double pendulums with moving masses were also employed. For robot motion synthesis we need to know the relation between the posture and the postural stability. We have shown that the positions of the mass centres of the pendulum segments (which match the human body point masses) are crucial for postural stabilization. The Zero-Moment Point criterion was applied for the dynamic case. The static analysis demonstrates that there are some common features of the postures. The dynamic analysis indicated that both pendulums are good models of human body motion, and are useful for humanoid motion synthesis. In humanoids, it is easier to apply results represented by inverted pendulums than postural models represented by stick diagrams. This is because humanoids and humans do not obviously share the same mass distribution and sizes (proportions) of all body segments. Moreover, our descriptions indicate where to locate the supporting leg/legs in single and double support, which, in general, is missing in the inverted pendulum models discussed in the literature. The paper's aim is to deepen the knowledge about the adjustment of human postures for the purpose of robotics.

Keywords: centre of mass, zero moment point, anthropometric data, postural stabilization

1. Introduction

Postural stability is crucial for humanoid motion synthesis. Humanoids should move in a similar way to human beings. The actual research objective is to deliver methods for autonomous and human-like postural adjustment. Having humanoids that can perform more actions like a person means they can be used as human helpmates or caretakers.

Stabilization of humanoids has been studied in many works, some examples are [10, 13, 24]. This problem is treated from different perspectives, some works use neurobiological inspiration [4,14], other researchers are applying the pure theoretical approach [7, 11, 19, 22, 23], or the human postural data are considered [3, 18]. It can be noticed that inverted pendulum models are often used as simpli ied descriptions of human body dynamics [1, 7, 8, 12, 15, 19] allowing the investigation of postural stability measures. Nowadays it is more common to get inspiration from nature, one of such examples can be found in [17] where the model of human sensing was used for postural control synthesis that allowed the robot’s motion response to external stimuli to be comparable to those of the human being. Our objective was to investigate the postural adjustment during different activities, this was achieved by observing and recording a set of human postures and analysing the obtained data. To obtain information on how stable postures are achieved, our analysis included two situations: i) static cases for describing typical static postures that can be used (repeatedly) by assistive robots, ii) a dynamic case for investigating how the motion of body parts helps recover the lost stability. For investigating the postural stability, different models of the human body were used. In [17] the single inverted pendulum was applied in order to study the upright stance, while a model with 8 links and point masses was used in [9] to analyse the stability of humanoids during walking with both, rigid and compliant feet. In [16] it was demonstrated that the kinematics and ground reaction forces for single and double inverted pendulums are similar. In [20] the inverted pendulum models were used to represent the gait stance phase, an enhanced model was also proposed. To evaluate the dynamic stability, the ZeroMoment Point (ZMP) criterion is used. In [21] it was demonstrated that during double support the centre of pressure (CoP) and the ZMP are equivalent if both feet are on the same planar surface. Different methods have been used to observe and record the human body adjustments, in [2] photographic techniques and motion capture systems were applied. As mentioned in [5] the new methods include the use of 3D cameras, force platforms, and more affordable devices like the Microsoft Kinect™ and the Nintendo Wii Balance Board™. The recording technology that is used depends on the aim of study and the 15



required accuracy of the results.

2. Postural Stability

The ZMP criterion was originally defined for the single support phase and is used to investigate the postural stability during motion. The definition that we present in this section is based on [25]. The ZMP is the point where the resultant reaction force vector must be applied in order to produce the moment equilibrating the moments due to body motion dynamics. In other words, the ZMP is the point that makes Eq. 1 true.

$$M_x = 0, \qquad M_y = 0 \tag{1}$$

If the computed ZMP is located outside the footprint area of the supporting leg, the resultant reaction force vector is actually applied at the foot edge, which causes the body to rotate and leads to its falling. The ZMP criterion is used to verify if the body is in dynamic equilibrium. There are two support phases that need to be considered - single and double. For each of them, the support polygon is different: i) during single support (when only one foot is on the ground) it is the ground area covered by the supporting foot, ii) for double support (when both feet are in contact with the ground) it is composed of the contact area of both feet and the ground area between them.
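The check described above can be illustrated with a short sketch (not part of the original paper): given the corners of the support polygon, the computed ZMP is tested for containment. The counter-clockwise vertex ordering and the convexity of the polygon are our assumptions.

import numpy as np

def zmp_inside_support_polygon(zmp_xy, vertices_xy):
    # vertices_xy: corners of the support polygon on the ground plane,
    # given in counter-clockwise order (convex polygon assumed)
    p = np.asarray(zmp_xy, dtype=float)
    v = np.asarray(vertices_xy, dtype=float)
    for i in range(len(v)):
        a = v[i]
        b = v[(i + 1) % len(v)]
        # the cross product of the edge and the vector to p must not be negative
        cross = (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])
        if cross < 0.0:
            return False
    return True

In single support the vertices would be the outline of the supporting foot; in double support, the convex hull of both feet and the ground area between them.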

Fig. 1. Segmented model of the human body. The stars mark the location of each CoM

3. Human Body Models

For robot motion synthesis it is crucial to know the correlation between the posture and the postural stability [12]. The anthropometric data used for the postural stability analysis include: i) the mass of each body segment (expressed in the literature as a percentage of the total mass of a person), ii) the length of each body segment (expressed as a percentage of the height of a person), iii) the location of each centre of mass (CoM, expressed as the ratio of each segment length with respect to the proximal end of the segment). The segmented model was used for both the static and dynamic analyses. It divides the body into 11 segments and takes into account the anthropometric data. The model is shown in Fig. 1. The segmented model was used to evaluate the location of the masses in more compact models, which were the inverted pendulums. Tab. 1 shows the anthropometric data as percentages of the length and mass of each body segment; those values were used to specify the segmented model. The information about the position of the CoMs was considered when evaluating the postural stability for the set of still postures. For this study we obtained the position of the overall CoM or of the CoMs of the upper and lower sections of the body. The position of a CoM that combines two or more partial masses is expressed by Eq. 2, where k is the overall number of partial masses that are being combined and mi is one of

them. The equation shows how to obtain the x coordinate, but for 3D cases similar formulas are applied for the remaining coordinates.

$$x_{CoM} = \frac{\sum_{i=1}^{k} m_i \cdot x_i}{\sum_{i=1}^{k} m_i} \tag{2}$$
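As an illustration only (not the Matlab code used by the authors), Eq. 2 can be applied to all three axes at once; the array shapes below are our assumption.

import numpy as np

def combined_com(masses, partial_coms):
    # masses: (k,) masses of the combined segments
    # partial_coms: (k, 3) x, y, z coordinates of each partial CoM
    m = np.asarray(masses, dtype=float)
    p = np.asarray(partial_coms, dtype=float)
    return (m[:, None] * p).sum(axis=0) / m.sum()   # Eq. 2 applied per axis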

The single and double inverted pendulum models with moving masses were used for the dynamic analysis. The length of the single inverted pendulum was equal to the height of the person minus the distance between the ankle and the ground (0.048 of the total body height for women and 0.043 for men [6]). The position of the pivot point of the pendulum depends on the gait phase: during single support it is the same as for the ankle of the supporting leg, while for double support it is located between both ankles. In Fig. 2, we can see the single pendulum obtained from the segmented model shown in Fig. 1. The double inverted pendulum has the same total length and pivot point as the single one. To build this pendulum, it was decided to divide the body at the waist. This means that, with respect to the height of a person, the lower segment length is equal to 0.584 for women and to 0.587 for men, while the upper segment length is equal to 0.368 for women and to 0.370 for men. Fig. 3 shows the double pendulum model at


Tab. 1. Used anthropometric data

Segment             | Segment Weight / Total Body Weight | Centre of Mass / Segment Length | Segment Length / Height (Woman) | Segment Length / Height (Man)
Forearm and hand    | 0.022 | 0.682 | 0.152   | 0.145
Upper arm           | 0.028 | 0.436 | 0.193   | 0.189
Foot and shank      | 0.061 | 0.606 | 0.234   | 0.242
Thigh               | 0.100 | 0.433 | 0.242   | 0.245
Abdomen and pelvis  | 0.281 | 0.270 | 0.108   | 0.100
Thorax              | 0.216 | 0.820 | 0.193   | 0.189
Head and neck       | 0.081 | 1.000 | 0.0714* | 0.0743*

The data in the columns "Segment Weight/Total Body Weight" (segment weight divided by the total body weight) and "Centre of Mass/Segment Length" (location of the mass centre position measured from the segment proximal end, and normalized to the segment length) were obtained from [26]. The data shown in both columns of "Segment Length/Height" (segment length normalized to the body height) were obtained from [6], with the exception of the normalized values marked with *, which were obtained directly by us. Note: the anthropometric data taken from the literature were consistent with those of the tested persons.

Fig. 2. Single pendulum model used during the dynamic analysis. The star marks the location of the overall CoM

the same time instant as shown in Fig. 1. The locations of the CoMs of both pendulums are obtained by using the data of the segmented model.
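A minimal sketch of how the pendulum geometry can be derived from the ratios quoted above; the function name and interface are ours.

def pendulum_segment_lengths(body_height, female=True):
    # ankle-to-ground, lower-segment and upper-segment ratios from Section 3 and [6]
    ankle = 0.048 if female else 0.043
    lower = 0.584 if female else 0.587
    upper = 0.368 if female else 0.370
    total = body_height * (1.0 - ankle)        # single inverted pendulum length
    return total, body_height * lower, body_height * upper   # lower + upper = total

For example, a 1.70 m tall woman gives a single pendulum of about 1.62 m, split at the waist into roughly 0.99 m (lower) and 0.63 m (upper).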

4. Static Analyses

For the static analysis we focused on 6 typical postures. For each of them, a person was asked to pose in the same way they would when getting ready

Fig. 3. Double pendulum model used during the dynamic analysis. The stars mark the location of the upper and lower CoMs

to perform an action, and to keep both soles in contact with the floor. In every case, a picture was taken in order to draw a segmented model. The postural adjustment is personal; moreover, for the same person it can differ from case to case. In our work we were not aiming to repeat many trials to produce the data, as the personal average is characteristic. For our studies we only needed a reliable example of the human



behaviour; therefore, the participant was a healthy woman without motion disorders and with normal body build. The aim was to investigate the position of the overall CoM (and its projection onto the floor) depending on the postures. This indicates how the total CoM location should be adjusted in a humanoid robot. To compensate for the visual distortion in the picture, we measured the tested person and scaled the length of each body segment with respect to the upper arm. This means that the pictures were only used to evaluate the relative position of the body segments. The pictures were taken from a point that nominally minimised the inaccuracy in the position evaluation. Since they were taken with a normal automatic photo camera, grid lines were used in order to adjust the position with respect to the floor. The person was asked to hold a posture and not to exert any force on the objects that were part of the scenario. The body segments and their ends were marked, as shown in Fig. 4.

Fig. 5. Segmented model for a person prepared to push a light object. The black dots mark the location of each segment CoM

Fig. 4. Markers used for the static analysis

To decide which postures to study, we considered some scenarios that can be useful for humanoids that take care of people. These are:
- Starting posture for pushing an object: here, there are two options; in Fig. 5 we can see the posture when a person is prepared to push a light object, while in Fig. 6 we can observe how it is when the object is heavy. In both cases, the two hands are used.
- Starting posture for pulling an object: as before, we have two options; in Fig. 7 we can see the posture when a person is prepared to pull an object by using both hands, and in Fig. 8 we can observe how it is when only one hand is used.
- Starting posture for collecting an object from a

Fig. 6. Segmented model for a person prepared to push a heavy object. The black dots mark the location of each segment CoM

height: Fig. 9 shows the posture of a person who is trying to take an object from the top of a storage unit with both hands.
- Starting posture for passing an object: Fig. 10 shows the posture of a person taking an object when it is close to her, and Fig. 11 shows the posture used



Fig. 7. Segmented model for a person prepared to pull an object by using both hands. The black dots mark the location of each segment CoM

Fig. 8. Segmented model for a person prepared to pull an object by using one hand. The black dots mark the location of each segment CoM

when the object is farther. Here the action is considered to be done by using only one hand.
- Starting posture for collecting an object from the floor: in Fig. 12 we can see the posture of a person taking an object from the floor with both hands. In this case adjusting the length of the "thorax" and the


Fig. 9. Segmented model for a person prepared to collect an object from a height. The black dots mark the location of each segment CoM

Fig. 10. Segmented model of a person prepared to take an object that is close. The black dots mark the location of each segment CoM

"abdomen and pelvis" segments was needed because the position of the markers did not allow the sizes to be depicted properly. To obtain these values, the length of each segment was measured directly as a straight line connecting its ends while the back was bent.
- Starting posture for opening a cupboard's doors: Fig. 13 shows the posture of a person before opening doors that are at a certain height.



Fig. 11. Segmented model for a person prepared to take an object that is far. The black dots mark the location of each segment CoM

Fig. 13. Segmented model for a person prepared to open the doors of a cupboard. The black dots mark the location of each segment CoM

Fig. 12. Segmented model of a person prepared to collect an object from the floor. The black dots mark the location of each segment CoM

For those postures we evaluated the location of the overall CoM projection onto the floor and obtained the reaction forces on both feet. We made two assumptions: i) when both feet are at the same x position, the overall ground reaction force is divided equally between them, and in this case it is possible to obtain the location of the application point for the leg-end force vectors; ii) when the feet are at different x positions, the application points of the reaction force vectors are at the ankles (such a simplification allows the value of the reaction forces to be computed). All the results shown in Tab. 2 were computed using Matlab. When the ankles are together, the application points of the reaction forces (and the overall CoM projection) are located within the soles. In the cases

where the ankles are at different positions, the projection of the overall CoM is located between both feet; this means that it stays within the support polygon, which allows the posture to be stable. The postures in Fig. 9 and Fig. 13 are similar; this shows that a small number of postures can be used to represent many actions. This is an important observation, because it is not possible to study all the postures a person can perform. In Tab. 2, for 5 out of 9 postures the smaller force is at the front of the overall CoM (for pushing, this is FL and for the other ones it is FR). In most of these postures both arms are in front of the body (Fig. 5, Fig. 7, Fig. 9 and Fig. 13). Only for the two postures in which the ankles are apart (Fig. 6 and Fig. 10) is the reaction force acting on the frontal foot greater. But for a general conclusion, more situations need to be studied.

5. Dynamic Analysis

One recorded data set was analysed; however, several recordings were done for general confirmation of the repeatability of the reaction (the human responses to the push are not identical but hold similar features). The recordings were done by using a motion capture system and the "Plug-in Gait" protocol to place the markers on the person. The communication protocol is served automatically by the VICON system and the user does not have access to it. A VICON system with assisting software was used. The person was a healthy woman without motion



Tab. 2. Data obtained from static analysis

Starting posture for                       | xCoM (cm) | FLR (N)  | FRR (N)  | FL (N)   | FR (N)
Pushing a light object (Fig. 5)            | 29.3840   | 220.7250 | 220.7250 | 175.2557 | 266.1944
Pushing a heavy object (Fig. 6)            | 29.1433   | 108.2870 | 333.1630 | 246.3291 | 195.1209
Pulling with both hands (Fig. 7)           | 12.5190   | 220.7250 | 220.7250 | 361.5476 | 79.9025
Pulling with one hand (Fig. 8)             | 11.9603   | 220.7250 | 220.7250 | 173.9313 | 267.5187
Collecting object from a height (Fig. 9)   | 19.6618   | 220.7250 | 220.7250 | 273.2576 | 168.1925
Taking a close object (Fig. 10)            | 17.9394   | 336.3817 | 105.0683 | 241.4732 | 199.9769
Taking an object that is far (Fig. 11)     | 11.5548   | 220.7250 | 220.7250 | 164.2194 | 277.2306
Collecting object from the floor (Fig. 12) | 16.5266   | 220.7250 | 220.7250 | 307.6907 | 133.7594
Opening a cupboard's doors (Fig. 13)       | 13.5135   | 220.7250 | 220.7250 | 273.2576 | 168.1925

The location of the overall CoM projection on the x-axis (xCoM) is shown in the figure of each segmented model. When the feet are apart, the CoM projection is marked with an "x", and when they are together it is at the same position as the reaction forces acting on the left and right foot - FLR and FRR, respectively. FL and FR represent the forces due to gravity acting on the left and right of xCoM, respectively.

disorders and with normal body build. The action consists of four main parts: i) the person moves one step forward onto a force plate; ii) when both feet are on the force plate, the person is suddenly pushed to the left; iii) in order to find a stable posture, the person moves her arms and legs; iv) finally, the person goes back to the force plate and then to her initial position. The push situation is not a case which can be repeated with an identical result, which is also difficult for typical gait since the leg-end trajectories are not identical for every step. Our aim was not to conclude about the statistically relevant pattern of this response but to analyse the motion dynamics using the inverted pendulum approach. To read the recordings, the 3D Motion Kinematic & Kinetic Analyzer (Mokka) software was used. These data were combined with anthropometric data because the markers did not always indicate both ends of each segment. Both pendulum models were obtained by using Eq. 2 for the x coordinate and similar formulas for the remaining ones. To compute the ZMP, Eq. 3 was used; here F represents the total reaction force, the subscript s is for the support (ankle joint or pivot point of the pendulum), and N represents the number of partial masses that compose the model for which the ZMP is computed.

$$P_x = \frac{\sum_{i=1}^{N} F_{zi}(x_i - x_s) - \sum_{i=1}^{N} F_{xi}(z_i - z_s)}{\sum_{i=1}^{N} F_{zi}} + x_s, \qquad P_y = \frac{\sum_{i=1}^{N} F_{zi}(y_i - y_s) - \sum_{i=1}^{N} F_{yi}(z_i - z_s)}{\sum_{i=1}^{N} F_{zi}} + y_s \tag{3}$$

To compute the reaction forces it was necessary to obtain the acceleration of each body segment at every time instant. For this, Eq. 4 was used to compute the velocities (using the actual and future CoM coordinates p), then these results were used in Eq. 5 to obtain the accelerations. Both the velocities and accelerations were computed for each axis.

$$v_{i+1} = \frac{p_{i+1} - p_i}{\Delta t} \tag{4}$$

$$a_{i+1} = \frac{v_{i+1} - v_i}{\Delta t} \tag{5}$$

Finally, knowing the values of the acceleration and the mass of each segment, it was possible to compute the reaction forces using Eq. 6, where the gravity constant is g = 9.81 m/s².

$$F_x = \sum_{i=1}^{N} m_i a_{xi}, \qquad F_y = \sum_{i=1}^{N} m_i a_{yi}, \qquad F_z = \sum_{i=1}^{N} m_i (a_{zi} + g) \tag{6}$$

To investigate how the motion of the double pendulum masses is related, we used Eq. 7, where n represents the number of samples used (the total of the recorded instants of time). By removing the summation from the numerator of the correlation equation we obtained a motion correlation measure for each instant of time.

$$r_i = \frac{(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2 \sum_{i=1}^{n}(y_i - \bar{y})^2}} \tag{7}$$
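A compact sketch of Eqs. 3-6 (our own illustration, not the software used by the authors; the array shapes are assumptions):

import numpy as np

def zmp_trajectory(com, masses, support, dt, g=9.81):
    # com: (T, N, 3) CoM trajectories of the N point masses, support: (T, 3) pivot point
    masses = np.asarray(masses, dtype=float)
    vel = np.diff(com, axis=0) / dt                      # Eq. 4
    acc = np.diff(vel, axis=0) / dt                      # Eq. 5
    p = com[:acc.shape[0]]
    s = support[:acc.shape[0]]
    fx = masses * acc[..., 0]                            # per-mass force terms (cf. Eq. 6)
    fy = masses * acc[..., 1]
    fz = masses * (acc[..., 2] + g)
    den = fz.sum(axis=1)
    px = (np.sum(fz * (p[..., 0] - s[:, None, 0]), axis=1)
          - np.sum(fx * (p[..., 2] - s[:, None, 2]), axis=1)) / den + s[:, 0]   # Eq. 3
    py = (np.sum(fz * (p[..., 1] - s[:, None, 1]), axis=1)
          - np.sum(fy * (p[..., 2] - s[:, None, 2]), axis=1)) / den + s[:, 1]
    return np.stack([px, py], axis=1)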

5.1. Obtained Results


Fig. 14 shows the first method we used to obtain the ZMP trajectories using the segmented model; for double support the projection of the CoP was used as an approximation of the ZMP trajectory. The dashed lines connect the ZMP trajectories that were computed for single and double support phases. To obtain the location of the CoP, we defined the overall force in the z direction to be linearly divided between both feet depending on the location of the overall CoM. In this case, again, the reaction forces were assumed to be located at the ankles. This is shown in Eq. 8, where the ratio was used to obtain the value of the vertical component of the reaction force on the right foot (FRRz); here dL_CoM is the distance along the xy plane from the left ankle to the overall



CoM and dLR is the distance along the xy plane between both ankles. Finally, with both results, we evaluated the vertical component of the reaction force on the left foot (FLRz).

$$ratio = \frac{d_{L\_CoM}}{d_{LR}}, \qquad F_{RRz} = ratio \cdot F_z, \qquad F_{LRz} = F_z - F_{RRz} \tag{8}$$

Using Eq. 9 with the previously obtained data and the reaction force coordinates, it is possible to compute the location of the CoP.

$$CoP_x = \frac{x_{RR} \cdot F_{RRz} + x_{LR} \cdot F_{LRz}}{F_z}, \qquad CoP_y = \frac{y_{RR} \cdot F_{RRz} + y_{LR} \cdot F_{LRz}}{F_z} \tag{9}$$
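A minimal sketch of Eqs. 8-9, assuming (as in the text) that the force application points are the ankles; the function and variable names are ours:

import numpy as np

def cop_from_force_split(com_xy, left_ankle_xy, right_ankle_xy, fz):
    d_l_com = np.linalg.norm(np.subtract(com_xy, left_ankle_xy))       # left ankle -> CoM (xy plane)
    d_lr = np.linalg.norm(np.subtract(right_ankle_xy, left_ankle_xy))  # distance between ankles
    ratio = d_l_com / d_lr                                             # Eq. 8
    f_rr_z = ratio * fz                                                # vertical reaction, right foot
    f_lr_z = fz - f_rr_z                                               # vertical reaction, left foot
    cop = (np.asarray(right_ankle_xy) * f_rr_z
           + np.asarray(left_ankle_xy) * f_lr_z) / fz                  # Eq. 9
    return cop, f_lr_z, f_rr_z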

In both plots (Fig. 14 and Fig. 15), the support phases are represented by three colours: i) green for double support, ii) blue for right single support, iii) and red for left single support. The footprints are shown, and the black line represents the trajectory of the projection of the overall CoM. The arrows indicate the ZMP and the CoM displacement. The second method is shown in Fig. 15, where the ZMP position in the double support phase is approximated by connecting the ends of the trajectories obtained for consecutive single support phases. It can be noticed that in the second case there is a good coincidence between the ZMP and the CoM projection, and that the first method of ZMP approximation in double support gives a significant discrepancy between the ZMP and the CoM projection. This indicates that the second method of ZMP approximation is more accurate.

Fig. 15. ZMP results for the segmented model - using lines connecting the values for single support phases to approximate the ones for double support

Fig. 16. ZMP trajectories for single and double pendulums

Fig. 17 (the different colours distinguish each support phase) shows the normalization of the distance from the beginning of each segment (pivot point or waist) to its CoM (drod_CoM) with respect to its segment length (lrod). For this, Eq. 10 is used and both values are in mm. We can observe that the motion trend of both CoMs is similar, especially in the interval between 6 and 10 s, which is approximately the time when the person is pushed and goes back to the force plate.

Fig. 14. ZMP results for the segmented model - using the CoP to obtain it for double support

In Fig. 16 the ZMP trajectories for both pendulums are shown, in red for the single one and in blue for the double one. One can notice that the trajectories obtained with both pendulums are similar to those in Fig. 15, and also that they are similar to each other. This brings the conclusion that both pendulums are good representations of the human body; this observation is consistent with [16], which stated that both models have similar features.

$$normalization = \frac{d_{rod\_CoM}}{l_{rod}} \tag{10}$$

Fig. 18 (the colours indicate the support phases) presents the correlation measure obtained by using the results from Fig. 17 in Eq. 7. Here, positive values mean that both CoMs move simultaneously upward or downward, and the greater the value, the more similar the displacement. It is possible to see that, overall, the correlation is positive. We divided this plot into some stages: i) hands up (when the person's arms are



Fig. 17. Position of the double pendulum CoMs normalized to the length of each rod. The first plot is for the lower segment and the second plot for the upper one

Fig. 18. Correlation measure at each instant of time between the upper and lower mass position (normalised data were considered as they are shown in Fig. 17). For better explanation, small sketches of the human posture are displayed

stretched to the sides more than during normal walking), ii) hands down (when the person's arms are close to her body, similar to their position during normal walking), iii) balancing (after the person was pushed and before she is able to start going back to the original position), iv) correcting step (from the time when the person started moving back to her original position until the first foot was placed again on the force plate). During the "balancing" and "correcting step" phases, the correlation measure increases, but

it starts decreasing when normal standing is approached again. These values are coherent with the behaviours shown in Fig. 17. We can also observe that the correlation tends to be greater during sudden motions (see Fig. 18 in the interval between 6 and 8 s); this happens in single support phases. During double support phases, the lowest correlation measure is present when the person is in a standing position with small or no visible motion, and the measure is almost constant; but in the final double support, we can notice that the



correlation measure increases due to the fact that the person's body sways backward.
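For illustration (our sketch, not the authors' code), the per-instant correlation measure of Eq. 7 can be computed from the two normalized CoM position signals as follows:

import numpy as np

def instantaneous_correlation(x, y):
    x = np.asarray(x, dtype=float)   # normalized lower-CoM positions over time
    y = np.asarray(y, dtype=float)   # normalized upper-CoM positions over time
    num = (x - x.mean()) * (y - y.mean())                                   # one term per instant (Eq. 7)
    den = np.sqrt(np.sum((x - x.mean()) ** 2) * np.sum((y - y.mean()) ** 2))
    return num / den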

6. Conclusions

Knowledge of the CoM position is relevant for transferring the postural adjustment patterns to humanoids. The dynamic case confirmed that similar results can be obtained with the single and double pendulums. However, the latter one has an advantage, because it helps to understand how the motion of the upper and lower body influences the position of the resultant CoMs. Such a model can help to adjust the postural stability in humanoids by deciding when and how to move the arms or legs in order to make each CoM reach a specific location. The analyses of the static cases indicated that the total reaction force vector should be applied at the same location as the projection of the overall CoM in order to maintain the postural equilibrium. We also observed that the projection of the overall CoM is always within the support polygon. The results also explain how the posture of a humanoid can be adjusted depending on what it is expected to do. The postural stability is assured when the projection of the overall CoM is located within the support polygon; this can be achieved by adjusting the posture and checking whether the ratio of the feet reaction forces is similar to the one obtained for human beings. By computing the ZMP it was possible to see that in this case the CoP was not a good approximation for double support, due to the fact that the location of the force acting on each foot was assumed. But, when that happens, it was shown that using straight lines to connect the ZMP values for single support phases is also a good approximation. Here we could also demonstrate how different technologies can be used to study human body motion; in this work we used photographic techniques and a motion capture system. With both inverted pendulums, we could also verify that the results obtained by using them are similar, which proves both are good representations of the human body. As future work, we suggest: i) to use the spring-loaded inverted pendulum model for non-typical cases and to see if other considerations are necessary in order to use it for different motion situations, ii) to study the energy that people require to perform non-typical actions (i.e. to find a stable posture after being pushed). Combining all the information can help answer in what manner we can define the human motion synergies for the purpose of robotics.

ACKNOWLEDGEMENTS

This work was supported by EMARO+ (European Master on Advanced Robotics Plus, which is supported by the European Commission).


AUTHORS

Jessica Villalobos∗ – Warsaw University of Technology, Nowowiejska 24, Warsaw, 00-665, e-mail: jessica.villalobos9@gmail.com. Teresa Zielińska – Warsaw University of Technology, Nowowiejska 24, Warsaw, 00-665, e-mail: teresaz@meil.pw.edu.pl. ∗

Corresponding author

REFERENCES

[1] A. Akash, S. Chandra, A. Abha, and G. Nandi, "Modeling a bipedal humanoid robot using inverted pendulum towards push recovery". In: Communication, Information & Computing Technology (ICCICT), 2012 International Conference on, 2012, 1–6.
[2] T. P. Andriacchi and E. J. Alexander, "Studies of human locomotion: past, present and future", Journal of Biomechanics, vol. 33, no. 10, 2000, 1217–1224.
[3] Y.-W. Chao, J. Yang, B. Price, S. Cohen, and J. Deng, "Forecasting human dynamics from static images", arXiv preprint arXiv:1704.03432, 2017.
[4] R. Chiba, K. Takakusaki, J. Ota, A. Yozu, and N. Haga, "Human upright posture control models based on multisensory inputs; in fast and slow dynamics", Neuroscience Research, vol. 104, 2016, 96–104.
[5] R. A. Clark, Y.-H. Pua, K. Fortin, C. Ritchie, K. E. Webster, L. Denehy, and A. L. Bryant, "Validity of the Microsoft Kinect for assessment of postural control", Gait & Posture, vol. 36, no. 3, 2012, 372–377.
[6] R. Contini, "Body segment parameters. II", Artificial Limbs, vol. 16, no. 1, 1972, 1–19.
[7] S. Dafarra, F. Romano, and F. Nori, "Torque-controlled stepping-strategy push recovery: Design and implementation on the iCub humanoid robot". In: Humanoid Robots (Humanoids), 2016 IEEE-RAS 16th International Conference on, 2016, 152–157.
[8] A. Denisov, R. Iakovlev, I. Mamaev, and N. Pavliuk, "Analysis of balance control methods based on inverted pendulum for legged robots". In: MATEC Web of Conferences, vol. 113, 2017, 02004.
[9] A. González de Alba and T. Zielinska, "Postural equilibrium criteria concerning feet properties for biped robots", Journal of Automation Mobile Robotics and Intelligent Systems, vol. 6, 2012, 22–27.
[10] I. Ha, Y. Tamura, and H. Asama, "Gait pattern generation and stabilization for humanoid robot based on coupled oscillators". In: 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2011, 3207–3212.



[11] S. M. Kasaei, N. Lau, A. Pereira, and E. Shahri, “A reliable model-based walking engine with push recovery capability”. In: Autonomous Robot Systems and Competitions (ICARSC), 2017 IEEE International Conference, 2017, 122–127. [12] Y. Lee, K. Lee, S.-S. Kwon, J. Jeong, C. O’Sullivan, M. S. Park, and J. Lee, “Push-recovery stability of biped locomotion”, ACM Transactions on Graphics (TOG), vol. 34, no. 6, 2015, 180:1–180:9. [13] Z. Li, B. Vanderborght, N. G. Tsagarakis, L. Colasanto, and D. G. Caldwell, “Stabilization for the compliant humanoid robot coman exploiting intrinsic and controlled compliance”. In: Robotics and Automation (ICRA), 2012 IEEE International Conference on, 2012, 2000–2006.


[14] V. Lippi and T. Mergner, "Human-derived disturbance estimation and compensation (DEC) method lends itself to a modular sensorimotor control in a humanoid robot", Frontiers in Neurorobotics, vol. 11, 2017, 49:1–49:22.
[15] N. Maalouf, I. H. Elhajj, D. Asmar, and E. Shammas, "Model-free human-like humanoid push recovery". In: Robotics and Biomimetics (ROBIO), 2015 IEEE International Conference on, 2015, 1560–1565.
[16] M. McGrath, D. Howard, and R. Baker, "The strengths and weaknesses of inverted pendulum models of human walking", Gait & Posture, vol. 41, no. 2, 2015, 389–394.
[17] T. Mergner, F. Huethe, C. Maurer, and C. Ament, "Human equilibrium control principles implemented into a biped humanoid robot", Romansy 16, Robot Design, Dynamics, and Control, 2006, 271–278.
[18] M. Pijnappels, M. F. Bobbert, and J. H. van Dieën, "Push-off reactions in recovery after tripping discriminate young subjects, older non-fallers and older fallers", Gait & Posture, vol. 21, no. 4, 2005, 388–394.
[19] J. Pratt, J. Carff, S. Drakunov, and A. Goswami, "Capture point: A step toward humanoid push recovery". In: Humanoid Robots, 2006 6th IEEE-RAS International Conference on, 2006, 200–207.
[20] S. Sakka, C. Hayot, and P. Lacouture, "A generalized 3D inverted pendulum model to represent human normal walking". In: Humanoid Robots (Humanoids), 2010 10th IEEE-RAS International Conference on, 2010, 486–491.
[21] P. Sardain and G. Bessonnet, "Forces acting on a biped robot. Center of pressure - zero moment point", IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans, vol. 34, no. 5, 2004, 630–637.
[22] C. H. Soo and J. M. Donelan, "Coordination of push-off and collision determine the mechanical work of step-to-step transitions when isolated from human walking", Gait & Posture, vol. 35, no. 2, 2012, 292–297.

[23] B. Stephens, "Humanoid push recovery". In: Humanoid Robots, 2007 7th IEEE-RAS International Conference on, 2007, 589–595.
[24] T. Takenaka, "The control system for the Honda humanoid robot", Age and Ageing, vol. 35, no. suppl 2, 2006, ii24–ii26.
[25] M. Vukobratović and B. Borovac, "Zero-moment point - thirty five years of its life", International Journal of Humanoid Robotics, vol. 1, no. 1, 2004, 157–173.
[26] D. A. Winter, Biomechanics and Motor Control of Human Movement, John Wiley & Sons: Hoboken, New Jersey, 2009, 82–95.



Design and Development of a Semi-active Suspension System for a Quarter Car Model using PI Controller Submitted: 21st November 2017; accepted 8th January 2018

Hudyjaya Siswoyo, Nazim Mir-Nasiri, Md. Hazrat Ali

DOI: 10.14313/JAMRIS_4-2017/34

Abstract: This paper presents the design and development of a semi-active suspension system for a vehicle. The main idea is to develop a system that is able to damp the vibration of the vehicle body while crossing bumps on the road. The system is modeled for a single wheel assembly, and a laboratory prototype of the complete system has been manufactured. It is used to physically simulate the spring-mass-damper system of a vehicle and to observe the frequency response to external disturbances. The developed low-cost smart experimental equipment consists of a motor with an offset mass which works as an oscillator to induce vibration, and a spring-mass-damper system in which the variable damper is realized as a pneumatic cylinder that allows the damping constant (c) to be varied. A Proportional-Integral (PI) controller is used to control the damping properties of the semi-active suspension system automatically. The system is designed in contrast to most of the available suspension systems on the market, which have only passive damping properties. The results of this research demonstrate the efficiency of the developed variable damper-based control system for the vehicle suspension system.

Keywords: semi-active suspension, control, damper, road profile, vibration

1. Introduction


Active suspension systems and their control strategies are discussed in detail by many researchers. Most of the researchers have discussed the design and simulation results based on software tools. A real prototype is not widely available on the market. However, by reviewing various methods and products available for experimental purposes, they can be summarized as two types: a) hands-on experimental apparatus and b) virtual lab apparatus. The virtual lab apparatus is a good method in terms of cost saving and efficiency, but with this method people are not exposed to real-world experience. In many works, most of the tunable parameters are tuned manually through the control box available together with the set of apparatus. However, for data gathering, it requires another unit based on a PC-aided data acquisition module, which is very expensive. In the developed system, the input and output data are recorded and displayed on a PC through a Universal Serial Bus (USB) connection

which is widely available in most PCs nowadays. The key research works can be highlighted as follows. A modelling and simulation based study was carried out for a one-quarter vehicle model to reduce vibration. An emphasis has been placed upon the interrelations between computer-aided simulation and other elements of the development process [1]. Another paper presents a mathematical model of the passive and active suspension systems for a quarter car model using a PID controller. Current automobile suspension systems use passive components only, utilizing spring and damping coefficients with fixed rates. The performance of the proposed system is evaluated using Matlab Simulink [2]. In fact, the H∞ controller is responsible for minimizing the infinity norm of two subsystems. The first one is from car body travel to road disturbance, and the second one is from suspension deflection to road disturbance. These two control targets are improved by a logical control input that is determined by the H∞ control approach. In addition, a sensitivity analysis is done to show that the active suspension system is able to work when the sprung mass changes based on the number of passengers [3]. In a similar work, a suspension system has been modelled as a two-degree-of-freedom quarter-car model to represent passive and active suspension systems. A fuzzy logic controller for an active vehicle suspension system is designed and simulated using MATLAB, and the results are compared with a passive suspension system [4]. In another research, an optimal preview control of a vehicle suspension system traveling on a rough road is studied, and a three-dimensional seven degree-of-freedom car-riding model and several descriptions of the road surface roughness heights, including haversine (hole/bump) and stochastic filtered white noise models, are used in the analysis. In this study, contact-less sensors are affixed to the vehicle front bumper to measure the road surface height at some distance in front of the car. The suspension systems are optimized with respect to ride comfort and road holding preferences, including accelerations of the sprung mass, tire deflection, suspension rattle space and control force [5]. Another similar work presents a non-linear design method using LQR theory to simulate and observe a vehicle's active suspension response and effect in a vehicle [6]. A review paper presents the advantages and disadvantages associated with the suspension systems of vehicles of conventional, active and semi-



active systems, based on the elements of controlled characteristics of both elastic elements and damping. It was suggested to apply and investigate advanced signal processing methods in vehicle vibration research [7]. A thesis report [8] presents two new adaptive vehicle suspension control methods, which significantly improve the performance of mechatronic suspension systems by adjusting the controller parametrization to the current driving state. The first concept is an adaptive switching controller structure, which dynamically interpolates between differently tuned linear quadratic regulators. The second control approach (adaptive reference model based suspension control) emulates the dynamic behavior of a passive suspension system, which is optimally tuned to improve ride comfort for the current driving state while keeping constraints on the dynamic wheel load and the suspension deflection [8]. In another work, the Linear Quadratic Regulator (LQR) technique is applied to the active suspension system for a quarter car model. A comparison between passive and active suspension systems is performed based on simulation by selecting different types of road profiles [9]. For the tracking control problem of a vehicle suspension system, a robust design method of adaptive sliding mode control is derived and designed so that the practical system can track the state of the reference model. The influence of parameter uncertainties and external disturbances on the system performance can be reduced and the system robustness can be improved [10]. In another study, two active vibration controllers are proposed for hydraulic or electromagnetic suspension systems, which only require position measurements. Some numerical simulation results are provided to show the efficiency, effectiveness and robust performance of the feedforward and feedback linearization control scheme proposed for a nonlinear quarter-vehicle active suspension system [11]. In a different research work, a design approach of robust active vibration control schemes for vehicle suspension systems using differential flatness, sliding modes and Generalized Proportional-Integral control techniques is discussed in order to attenuate undesirable vibrations induced by irregular road disturbances [12]. Some other works are done on fuzzy control systems, active vibration rejection techniques, four degree of freedom models, state derivative feedback systems and permanent magnet based active suspension systems [13–21]. Based on the above literature, it can be said that the works done by the researchers are mainly software based design and simulation. The semi-active suspension system based on damping fluid control is a new area of research. This paper describes the complete design, development and analysis of the semi-active suspension system through varying the damping constant (c). By changing the value of the damping constant, the suspension system can be adjusted in real time.

2. Modelling and Design of the System

The conceptual design suggested the following structural components for the apparatus:


– Frame
– Slider crank mechanism
– Valve control mechanism

2.1. Frame

The frame is designed to rigidly hold all the fixed parts, as shown in Figure 1. The frame is also designed to support the dynamic load that occurs during the vibration of the spring-mass-damper system. The whole frame is constructed using Flex-link aluminum profile.

Fig. 1. Designed frame with slider bars

Figure 2 shows the semi-active suspension system with its particular elements.

Fig. 2. Vibrating system with particular elements

2.2. Slider Crank Mechanism

1 - Motor Shaft
2 - Crank
3 - Connecting Rod
4 - Slider

Fig. 3. Slider crank mechanism diagram

The slider crank mechanism is an arrangement of mechanical parts that is designed to convert rotary motion to straight-line motion. As shown in Fig. 3, when the motor shaft is turning, the crank moves in rotational motion while the connecting rod pushes and pulls the slider. The end of the connecting rod is




connected to the slider. Its motion is restricted by the slider guide along a single line. The slider displacement versus time is a sinusoidal wave with a peak-to-peak amplitude equal to twice the crank length and with the frequency of the crank. The crank speed is equal to the motor speed. This mechanism functions as an exciter to the spring-mass-damper system. As the motor rotates, the slider moves up and down, physically simulating a car going over a bumpy road. In addition, two slider bars are attached to guide the spring-mass-damper to move in a single line and to avoid side motion, as highlighted in Fig. 4. The pneumatic cylinder is shown in Fig. 5.


(2)

(3)

(4)

To calculate the spring constant k, the mass is set to m = 3.2 kg, and 1.4 kg is the weight of the damper cylinder. The cylinder is a part of the unsprung mass. By setting the natural frequency to be half of the motor's speed, we derive the value of k as follows.
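The printed derivation was lost in this copy; as a hedged illustration of the procedure only, using m = 3.2 kg and a natural frequency of half the motor's maximum speed (300 rpm):

import math

m = 3.2                               # kg, sprung mass stated in Section 2.3
f_n = 0.5 * 300.0 / 60.0              # Hz, half of the motor's maximum speed (300 rpm)
omega_n = 2.0 * math.pi * f_n         # about 15.7 rad/s
k = m * omega_n ** 2                  # about 790 N/m (Fig. 11 later quotes k = 750 N/m)
print(round(omega_n, 1), round(k))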

2.4. Valve Control Mechanism

The variable damper, shown in Fig. 6, consists of a pneumatic cylinder and a pneumatic valve. The pneumatic valve is designed to automatically control the damping factor (c) of the dynamic system. To achieve that, an actuator (stepper motor) is attached to turn the valve. The angular position of the valve knob is measured by a potentiometer that is attached as a feedback sensor. The transmission from the stepper motor to the valve knob is performed by a belt-pulley transmission mechanism. A timing belt is used to achieve the required accuracy.

Fig. 4. Designed slider crank mechanism

Fig. 5. Pneumatic cylinder

2.3. Selection of Parameters of the Dynamic System

The selection of the mass and spring parameters is based on the parameters of the exciter motor. The calculation below shows how we determine the amount of mass and the spring stiffness for the apparatus.

Motor maximum torque = 1.8 Nm
Crank length = 0.03 m
T = rF, F = mg, so T = rmg and m = T/(rg) = 6.12 kg

The rule of thumb is that the total mass lifted by the motor should not exceed more than 75% of the maximum mass it is capable of lifting. Hence, applying Eqns. 1 and 2 we get,


(1)

Fig. 6. Designed valve control mechanism

Fig. 7. CAD assembly of the system



The spring-mass-damper system is mounted on the frame and the slider is inserted into the slider bars to linearly guide the movement of the system. The CAD assembly of the system is shown in Fig. 7. The developed system is shown in Figs. 8 and 9. The damper is connected in parallel with the spring, as sketched in Fig. 8. The frequency ωn is chosen to be half of the motor's maximum speed (300 rpm). The maximum speed of the motor is 5 rev/s = 300 rpm. The system is forced at its natural frequency. This is to make sure that the system is able to reach the natural frequency range in order to test the effectiveness of the suspension system.


Fig. 10. Free body diagram of the spring-mass-damper system

(3)

Here y is the kinematic excitation which is applied by the slider crank mechanism. The force produced by the damper is calculated using Eqn. 5.

(4) (5)

Applying Newton’s second law, Eqns. 6 and 7 were derived.

(6)

(7)

Fig. 8. Developed system: front view

Taking natural frequency into consideration, the following Eqns. 8 and 9 can be obtained.


(8)

(9)

In order to convert the equations from the time domain to the frequency domain, the Laplace transform is applied to both sides. (10) (11)

Fig. 9. Developed system: enlarged view

2.5. Mathematical Model of the Spring-Mass-Damper System

From the free body diagram shown in Fig. 10, the force produced by the spring is calculated using Eqn. 3.

(12)

The sinusoidal transfer function can also be derived as in Eqn. 13.




(13)


Figure 11 shows the specific relationship between the damping ratio and the oscillation amplitude. It shows the values of damping ratio required to achieve the desired oscillation amplitude over a range of frequencies (taking k = 750 N/m and m = 3.2 kg).
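Since the displayed Eqns. 13-21 were lost in this copy, the sketch below uses the standard base-excitation transmissibility of a spring-mass-damper system only to illustrate how a chart like Fig. 11 can be generated; it is not the authors' code, and the chosen damping ratios are assumptions.

import numpy as np

k, m = 750.0, 3.2                               # values quoted for Fig. 11
omega_n = np.sqrt(k / m)                        # natural frequency in rad/s
freq_hz = np.linspace(0.1, 10.0, 500)           # excitation frequency sweep
r = 2.0 * np.pi * freq_hz / omega_n             # frequency ratio omega / omega_n
for zeta in (0.1, 0.3, 0.5, 0.7):               # assumed damping ratios
    amp = np.sqrt((1.0 + (2.0 * zeta * r) ** 2) /
                  ((1.0 - r ** 2) ** 2 + (2.0 * zeta * r) ** 2))
    print(zeta, round(20.0 * np.log10(amp.max()), 2))   # peak transmissibility in dB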

2.6. Control of the Semi-active Suspension Apparatus

Equations 14 to 20 are simplified equations. Eqn. 21 is used to plot the frequency response graph and observe the overall system performance. (14) (15) (16) (17) (18) (19)

Fig. 12. Control implementation

Figure 12 shows the control system implementation flow chart. As shown in this figure, the system consists of two controllers: a main controller and a slave controller. The main controller is responsible for controlling the exciter motor, reading data from the sensors, controlling the valve, and communicating with the PC. The slave controller is responsible for controlling the servo system of the electronically controlled valve. It reads the data sent by the main controller and adjusts

(20)

(21)

Fig. 11. Damping ratio vs excitation frequency for different amplitudes


Fig. 13. Control system's block diagram

the opening of the valve with the help of feedback from the potentiometer. The system can be configured to run in manual mode or automatic mode. In the manual mode, the user is able to control the damping ratio manually, and the system measures and displays a plot of the oscillation amplitude of the spring-mass-damper system over the applied frequency range. In automatic mode, the system adjusts the damping ratio automatically in order to achieve the desired oscillation amplitude over the applied frequency range. To achieve the desired oscillation amplitude, a closed loop system with a Proportional-Integral (PI) controller is proposed. It controls the position of the valve in relation to the damping ratio. The diagram in Fig. 13 shows the configuration of the closed loop control system.
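A minimal sketch of such a PI loop (our illustration, not the firmware): measure_amplitude() and set_valve_position() are hypothetical helpers, and the gains and the 0-100% valve range are assumptions.

def pi_valve_control(desired_amp, measure_amplitude, set_valve_position,
                     kp=1.0, ki=0.1, dt=0.05, steps=1000):
    integral = 0.0
    for _ in range(steps):
        error = desired_amp - measure_amplitude()     # amplitude error
        integral += error * dt
        command = kp * error + ki * integral          # PI control law
        command = max(0.0, min(100.0, command))       # saturate to the valve range (assumed 0-100 %)
        set_valve_position(command)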



2.7. System Data Communication

In this research, two different types of hardware communication were established.

2.7.1. Microcontroller-to-Microcontroller Communication

The slave controller reads the data sent by the main controller as the input for the valve opening. To transfer data from one to the other, the two controllers were connected for effective communication. A synchronous serial communication is developed to transfer the 10-bit data from the main controller to the slave controller. The communication port consists of three signals, which are TRIGGER, CLOCK, and DATA. Figure 14 shows the main data communication circuit diagram for the complete system. The data transfer procedure is as follows (a sketch of it is given at the end of this section):
i) The main controller sends a pulse to the TRIGGER pin.
ii) The slave controller waits for a falling edge pulse of the CLOCK.
iii) The main controller sets the data on the DATA pin and generates a falling edge pulse on the CLOCK.
iv) The slave controller reads the DATA pin and saves the value into memory.
v) Steps (ii–iv) are repeated until 10 bits of data are transferred.

2.7.2. Microcontroller-to-Computer Communication

In order to make the whole system controllable and observable from the GUI software, it is necessary to establish communication between the computer and the main controller. After doing some research regarding the communication between


the computer and the hardware, it was decided to use USB communication. Universal Serial Bus (USB) is a serial bus standard to interface devices. The USB port was designed to allow peripherals to be connected using a single standardized interface socket and to improve plug-and-play capabilities by allowing devices to be connected and disconnected without rebooting the computer (hot swapping). Other convenient features include powering low-consumption devices without the need for an external power supply and allowing some devices to be used without requiring individual device drivers to be installed. USB is intended to replace the legacy serial and parallel ports for data communication.
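Returning to the synchronous transfer of Section 2.7.1, the following rough sketch (our illustration, not the firmware) shows the sender and receiver sides; write_pin(), read_pin(), pulse() and wait_for_falling_edge() are hypothetical GPIO helpers.

def send_word(word, write_pin, pulse, pulse_clock_falling):
    pulse("TRIGGER")                              # step i: main controller pulses TRIGGER
    for bit in range(10):                         # 10-bit payload
        write_pin("DATA", (word >> bit) & 1)      # step iii: set DATA ...
        pulse_clock_falling()                     # ... and generate a falling edge on CLOCK

def receive_word(read_pin, wait_for_falling_edge):
    word = 0
    for bit in range(10):
        wait_for_falling_edge("CLOCK")            # step ii: slave waits for the falling edge
        word |= read_pin("DATA") << bit           # step iv: read DATA and store the bit
    return word                                   # step v: after 10 bits the word is complete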

3. Experimental Results and Discussion

The functional results of the built semi-active suspension system are shown in the figures below. The figures show the outcomes for various distinct operational conditions. Figure 15 shows the manual mode operation with the valve fully open. Figures 16 and 17 show the manual mode operation with the valve 50% and 90% closed, respectively. These results demonstrate a very high sensitivity of the system to the variable value of the damping factor (c) of the pneumatic damper. Figures 18 and 19 show the system operating in automatic mode with a desired amplitude of 1.5 dB and 1.8 dB, respectively. The last two figures prove the ability of the system to automatically control and maintain the desired peak amplitude value of the mass-spring-damper system for the artificially injected external frequency disturbances.

Fig. 14. Circuit connection with the controller





Fig. 15. Bode plot of manual mode operation (valve is opened fully)

Fig. 18. Bode plot of automatic mode operation with desired amplitude at 1.5 dB

Fig. 16. Bode plot of manual mode operation (valve is 50% closed)

Fig. 19. Bode plot of automatic mode operation with desired amplitude at 1.8dB

Fig. 17. Bode plot of manual mode operation (valve is 90% closed)

The results can be analyzed as follows. When the valve is fully opened, the vehicle's suspension system is uncontrolled and passengers feel the maximum level of jerking and discomfort. Whereas, when it is opened 50%, passengers feel less oscillation than in the first case. Similarly, when the valve is closed 90% and opened 10%, passengers will feel oscillation but with less jerking and more comfort. These are the cases of manual control of the suspension system. Now, if we look at the results in Figs. 18 and 19, we can see the expected results produced by the developed system. The developed system follows the commands of the control input variables by adjusting the damping factor (c).

4. Conclusions


The paper demonstrates the design and experimental development of a semi-active suspension system that simulates a quarter car suspension system. The

main achievement of the work is the ability of the designed and developed system to adjust the amplitude of the system vibration regardless of external disturbances. If it is implemented in cars, it will give safety and comfort to the car passengers when crossing bumps and rough roads. The desired result is achieved by real-time tuning of the pneumatic damping coefficient for sudden external disturbances. The paper presented the mathematical modelling of the system in terms of damping factor variation, and it described the complete construction and control of the suspension system. The experimental results obtained from the apparatus (Figs. 15–19) show the efficiency of the developed system and its compliance with the derived mathematical models.

AUTHORS Hudyjaya Siswoyo – Swinburne University Sarawak, Malaysia

Nazim Mir-Nasiri, Md. Hazrat Ali* – School of Engineering, Nazarbayev University, Kazakhstan *Corresponding author: md.ali@nu.edu.kz

REFERENCES

[1] Vladimir Popovic, Branko Vasic, Milos Petrovic, Sasa Mitic, "System Approach to Vehicle Suspension System Control in CAE Environment", Strojniški vestnik – Journal of Mechanical Engineering, vol. 57, 2011, no. 2, 100–109. DOI: 10.5545/sv-jme.2009.018.
[2] Abd El-Nasser S. Ahmed, Ahmed S. Ali, Nouby M. Ghazaly, G. T. Abd el-Jaber, "PID controller of active suspension system for a quarter car model", International Journal of Advances in Engineering & Technology, Dec. 2015, vol. 8, no. 6, 899–909.
[3] J. Marzbanrad, N. Zahabi, "H∞ Active Control of a Vehicle Suspension System Excited by Harmonic and Random Roads", Mechanics and Mechanical Engineering, vol. 21, 2017, no. 1, 171–180.
[4] Zheng Yinhuan, "Research on fuzzy logic control of vehicle suspension system". In: 2010 International Conference on Mechanic Automation and Control Engineering, Wuhan, 2010, 307–310.
[5] J. Marzbanrad, G. Ahmadi, Y. Hojjat, H. Zohoor, "Optimal active control of vehicle suspension system including time delay and preview for rough roads", Journal of Vibration and Control, vol. 8, 2002, no. 7, 967–991. DOI: 10.1177/107754602029586.
[6] R. F. Harrison, S. S. Banks, A new non-linear design method for active vehicle suspension systems. Research report, ACSE research report 700, Department of Control and Systems Engineering, University of Sheffield, UK, 1998.
[7] Rafał Burdzik, Łukasz Konieczny, Błażej Adamczyk, "Automatic Control Systems and Control of Vibrations in Vehicles CaR". In: International Conference on Transport Systems Telematics TST 2014: Telematics – Support for Transport, chapter 13, 120–129. DOI: 10.1007/978-3-662-45317-9_13.
[8] Guido P. A. Koch, Adaptive Control of Mechatronic Vehicle Suspension Systems, PhD thesis report, 2011, University of Technology in Muenchen, Germany.
[9] Abdolvahab Agharkakli, Ghobad Shafiei Sabet, Armin Barouz, "Simulation and Analysis of Passive and Active Suspension System Using Quarter Car Model for Different Road Profile", International Journal of Engineering Trends and Technology, vol. 3, 2012, no. 5, 636–644.
[10] J. Fei, M. Xin, "Robust adaptive sliding mode controller for semi-active vehicle suspension system", International Journal of Innovative Computing, Information and Control, vol. 8, no. 1(B), January 2012, 691–700.
[11] Francisco Beltran-Carbajal, Esteban Chavez-Conde, Gerardo Silva Navarro, Benjamin Vazquez Gonzalez, Antonio Favela Contreras, "Control of Nonlinear Active Vehicle Suspension Systems Using Disturbance Observers", Vibration Analysis and Control – New Trends and Development, 131–150. DOI: 10.5772/25131.
[12] Esteban Chavez-Conde, Francisco Beltran-Carbajal, Antonio Valderrábano González, Antonio Favela Contreras, "Active vibration control of vehicle suspension systems using sliding modes, differential flatness and generalized proportional-integral control", Rev. Fac. Ing. Univ. Antioquia, Universidad de Antioquia, Medellín, Colombia, no. 61, 2011, 104–113.
[13] G. Koch, T. Kloiber, "Driving State Adaptive Control of an Active Vehicle Suspension System". In: IEEE Transactions on Control Systems Technology, vol. 22, no. 1, Jan. 2014, 44–57.
[14] X. Wei, J. Li, X. Liu, "LQR control scheme for active vehicle suspension systems based on modal decomposition". In: 25th Chinese Control and Decision Conference (CCDC), Guiyang, 2013, 3296–3301. DOI: 10.1109/CCDC.2013.6561516.
[15] D. Zhaoxiang, L. Fei, "Electromagnetic Active Vehicle Suspension System". In: Third International Conference on Measuring Technology and Mechatronics Automation, Shanghai, 2011, 15–18. DOI: 10.1109/ICMTMA.2011.291.
[16] L. Yan, L. Shaojun, "Preview Control of an Active Vehicle Suspension System Based on a Four-Degree-of-Freedom Half-Car Model". In: 2nd International Conference on Intelligent Computation Technology and Automation, Changsha, Hunan, 2009, 826–830.
[17] T. J. Gordon, C. Marsh, Q. H. Wu, "A learning automaton methodology for control system design in active vehicle suspensions". In: International Conference on Control, Coventry, vol. 1, UK, 1994, 326–331. DOI: 10.1049/cp:19940153.
[18] M. Sever, H. Yazici, "Active control of vehicle suspension system having driver model via L2 gain state derivative feedback controller". In: 4th International Conference on Electrical and Electronic Engineering (ICEEE), Ankara, Turkey, 2017, 215–222.
[19] S. Wen, M. Z. Q. Chen, Z. Zeng, X. Yu, T. Huang, "Fuzzy Control for Uncertain Vehicle Active Suspension Systems via Dynamic Sliding-Mode Approach". In: IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 47, no. 1, Jan. 2017, 24–32. DOI: 10.1109/TSMC.2016.2564930.
[20] A. Tiwari, M. Lathkar, P. D. Shendge, S. B. Phadke, "Skyhook control for active suspension system of heavy duty vehicles using inertial delay control". In: IEEE 1st International Conference on Power Electronics, Intelligent Control and Energy Systems (ICPEICES), Delhi, 2016, 1–6. DOI: 10.1109/ICPEICES.2016.7853289.
[21] Y. Shen, Q. Lu and Y. Ye, "Double-Stator Air-Core Tubular Permanent Magnet Linear Motor for Vehicle Active Suspension Systems", 2016 IEEE Vehicle Power and Propulsion Conference (VPPC), Hangzhou, 2016, 1–6. DOI: 10.1109/VPPC.2016.7791667.
[22] Y. Xia, Y. Xu, F. Pu, M. Fu, "Active disturbance rejection control for active suspension system of tracked vehicles". In: 2016 IEEE International Conference on Industrial Technology (ICIT), Taipei, 2016, 1760–1764. DOI: 10.1109/ICIT.2016.7475030.




Integration of Navigation, Vision, and Arm Manipulation Towards Elevator Operation for Laboratory Transportation System Using Mobile Robots Submitted: 2nd August 2017; accepted: 16th January 2018

Ali A. Abdulla, Mohammed M. Ali, Norbert Stoll, Kerstin Thurow

DOI: 10.14313/JAMRIS_4-2017/35

Abstract: In automated environments, mobile robots play an important role in performing different tasks such as object transportation and material handling. In this paper, a new method for a glassy elevator handling system based on H20 mobile robots is presented to connect distributed life science laboratories on multiple floors. Various labware and tube racks have to be transported to different workstations. Locating the elevator door, entry button detection, internal button recognition, robot arm manipulation, current floor estimation, and elevator door status checking are the main operations needed to realize a successful elevator handling system. The H20 mobile robot has dual arms, where each arm consists of 6 revolute joints and a gripper. The gripper has two degrees of freedom. Different sensors have been employed with the robot to handle these operations, such as an Intel RealSense F200 vision sensor for entry and internal button detection with position estimation. A pressure sensor is used for current floor estimation inside the elevator. Also, an ultrasonic proximity distance sensor is utilized for checking the elevator door status. Different strategies, including HSL color representation, adaptive binary thresholding, optical character recognition, and FIR smoothing filtering, have been employed for the elevator operations. For the pressing operation, a hand camera base and a new elevator finger model are designed. The elevator finger is designed in a way to fit the arm gripper, which is also used to manipulate the labware containers. The kinematic solution is utilized for controlling the arms' joints. A server/client socket architecture with a TCP/IP command protocol is used for data exchange between the Multi-Floor System and the H20 robot arms. Many experiments were conducted in life science laboratories to validate the developed systems. Experimental results prove an efficient performance with a high success rate under different lighting conditions.

Keywords: mobile robot, multi-floor, elevator handler, floor estimation, labware transportation system, kinematic analysis, robotic arm control, object detection and localization, Intel RealSense F200 sensor

1. Introduction


In recent years, the development of transportation systems based on mobile robots has progressed rapidly to meet requirements such as high precision, routine task execution, transportation in hazardous

areas, and low-cost manufacturing. For automated life science laboratories, transportation tasks based on mobile robots require highly precise movement, the handling of lab equipment, integration with automation islands, and the scheduling of the robot's activity in accordance with the main laboratory control schedule. Most of the earlier developed transportation systems based on mobile robots work only on a single floor of restricted size [1]–[3]. In this work, H20 mobile robots (Dr. Robot, Canada) are used for labware transportation on multiple floors of a life science environment. The H20 robot is a wireless networked autonomous humanoid mobile robot. It has a PC tablet, dual arms, and an indoor GPS navigation system (see Figure 1).

Fig. 1. H20 mobile robots

Since the H20 mobile robot does not have the ability to climb stairs, an elevator operation management system is an essential aspect of moving between different floors in a multi-floor transportation system. The elevator handling system must include many operations, such as locating the elevator door and its status (open/closed), detecting and pressing the entry and destination floor buttons, and estimating the current floor. The target button position can be estimated using a suitable visual sensor with a proper detection technique. The detection process includes the extraction of specific features from the image related to the target. In general, these features are divided into two categories. The first is the appearance features of objects, such as color, form, and size. Color detection can be performed using different color systems such as RGB, HSL, HSV, and YCbCr. The second category is the local texture, shape, and edges of the target itself.



The required features have to be extracted from the image and matched with the features in the database related to the target. To find the target position, stereo vision systems and 3D cameras are considered the most appropriate visual sensors, but 3D cameras are preferable because they directly provide the depth data without the complicated processing of image pairs required in stereo vision. Many methods have been used to handle elevator operations. Each has achieved some degree of success, albeit with limitations. A template matching technique has been utilized to detect elevator buttons, which achieved a success rate of up to 85% for the entry button and 63% for the floor buttons inside the elevator [4]. The disadvantage of this method is the excessive time required for button detection (4.5 s and 4.3 s, respectively). An artificial neural network has also been proposed to achieve the best matching by discarding weak candidates for entry and internal button detection [5]. However, this method has not been validated in a real elevator environment. Multiple symbol methods have been adopted for both external and internal button panel detection, while a combination of image processing techniques was utilized to recognize external and internal elevator buttons [6]. These methods achieved satisfactory performance for button recognition but did not provide real coordinates, which are significant for guiding the robot's arm in button pressing operations. For current floor estimation, many techniques can be utilized, such as Radio-Frequency Identification (RFID) [7], the Received Signal Strength (RSS) of a wireless network [8], and a height measurement system [9]. Each approach can realize floor estimation but with some disadvantages and limitations. The RFID technique requires high costs since RFID antennas have to be installed on each floor [7]. Floor estimation based on RSS has the same disadvantage of being very expensive, since it would require at least 4 wireless network sources on each floor to achieve an acceptable success rate [8]. On the other hand, the height measurement system, which depends on a pressure sensor for floor estimation, has a low cost: it requires only one sensor attached to the robot for all floors [9]. However, frequent recalibrations have to be performed to overcome the problem of wide daily variations in atmospheric pressure. The design of the elevator finger is one of the important issues which plays a main role in achieving the button pressing operation. Some factors have to be taken into consideration to design a suitable finger model, like the structure of the end effector with the gripper and the characteristics of the buttons, such as their shape and material.


For the pressing operation of elevator buttons, the robotic arms have to be controlled reliably. This requires the position estimation of the target button with respect to the arm base, followed by the use of an accurate kinematic model to move the arm end effector to the target along a safe path. Kinematic analysis describes the motion of the arm links without considering forces. There are two types of kinematic models: forward kinematics (FK) and inverse kinematics (IK). The FK model is a mathematical model used to calculate the end-effector pose relative to the arm base according to the given joint angles. On the other hand, the IK model is a mathematical model used to calculate the values of the joint angles according to the given end-effector pose with respect to the arm base. The IK model is an important issue in enabling the arm end effector to reach the desired position accurately. Generally, the IK problem can be solved using two approaches: analytic [10]–[12] and numeric [13]–[15]. Also, the arm joint limits and the reachable workspace have to be taken into consideration to control the robotic arm. Normally, the analytical approach is preferable because all the exact solutions can be found and it is computationally faster than the numerical approach. In this paper, an elevator operation management system for H20 mobile robots is presented. In this system, passive landmarks with the stargazer sensor module are employed to localize the robot in front of the elevator and inside it. The Intel RealSense F200 vision sensor is utilized for the detection and position calculation of entry and internal buttons. Several image processing techniques, such as HSL color representation, adaptive binary filtering, and shape and color filters, are used for entry button detection. In addition to the previous techniques, Optical Character Recognition (OCR) is employed to identify the destination floor label for internal button recognition.




For the pressing operation, a new finger design for the H20 mobile robot is presented, and the kinematic solution is employed to control the arm joint movements. A robust current floor estimation based on height measurement is utilized. As the height system hardware, the LPS25HB pressure sensor with the STM32L053 microcontroller is configured and programmed. An FIR smoothing filter with an adaptive calibration stage is utilized to overcome the problems of oscillation and wide daily variations in atmospheric pressure. Finally, the ultrasonic distance sensor is utilized for checking the elevator door status. This paper is organized as follows: the architecture of the system is described in Section 2, while Section 3 explains the selection of the vision sensor. The working process, the kinematic solution, and the finger design are detailed in Sections 4–6. Section 7 describes the outside-elevator procedure, while Section 8 explains the inside procedure. The complete button pressing operation is explained in Section 9. The final section summarizes the results.

2. System Architecture


In recent years, researchers at the Center for Life Science Automation in Rostock (Germany) have developed a Hierarchical Workflow Management System (HWMS) to manage the entire process of establishing a fully automated laboratory [16]. The HWMS has a middle control layer, the Transportation and Assistance Control System (TACS), which includes the Robot Remote Centre (RRC). The RRC is the layer between the Robot Board Computer (RBC) and the TACS. The RRC is a GUI developed to manage transportation tasks as follows: a transportation task is received and forwarded to the appropriate robot with the highest battery charge value. Finally, the transportation results are reported back to the TACS.

Fig. 2. Complete structure for mobile robot transportation system


The Multi-Floor System (MFS) is the core component of the RBC level [17]. It has been developed to execute transportation tasks between multiple locations in automated life science laboratories. To execute a transportation task, the MFS realizes the functions of a multiple floor navigation system together with a Robot Arm Kinematic Module (RAKM) [18], an Elevator Handling System (EHS) [19], and a Collision Avoidance System (CAS) [20]. A multiple floor navigation system including mapping, indoor localization, path planning, an Internal Management Automated Door Controlling System (IMADCS), a communication system, and an Internal Battery Charging Management System (IBCMS) has been reported [17]. The MFS has been integrated with the Elevator Handling System (EHS) to interact with the elevator environment and to obtain the necessary information regarding button positions, the current floor number, and the elevator door status. For a grasping/placing/pressing operation, the MFS guides the robot to the destination position and sends a grasp, place, or press order to the RAKM with the required information. The robot motion center provides the way to control the robot hardware by receiving the robot sensor readings and executing the required movements. The client-server connection architecture module (asynchronous socket) is used to control the interaction of the MFS with these sub-systems over Ethernet.




A TCP/IP command protocol based on a server-client structure is used to guarantee reliability and expandability, so any kind of device can be added to the communication network conveniently with a new IP address. Figure 2 shows the full structure of the mobile robot transportation system.
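The command exchange over such a socket can be illustrated with a small sketch. The actual implementation is in C#; the snippet below is a simplified Python sketch, and the command string "PRESS_BUTTON" is an assumed example rather than the real protocol vocabulary.

```python
# Minimal sketch (not the authors' C# implementation) of the TCP/IP command
# protocol idea: each sub-system runs a small server that accepts plain-text
# commands and returns a status line, so a new device only needs a new IP.
import socket
import threading

HOST, PORT = "0.0.0.0", 5005                      # assumed port for the sub-system

def handle_client(conn: socket.socket) -> None:
    with conn:
        data = conn.recv(1024).decode().strip()   # e.g. "PRESS_BUTTON 0.00 0.38 0.20"
        cmd, *args = data.split()
        if cmd == "PRESS_BUTTON":
            # ... forward the x, y, z target to the arm kinematic module ...
            conn.sendall(b"OK\n")
        else:
            conn.sendall(b"ERROR unknown command\n")

def serve_forever() -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        while True:
            conn, _addr = srv.accept()
            threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

def send_command(ip: str, command: str) -> str:
    """Client side: send one command line and wait for the reply."""
    with socket.create_connection((ip, PORT), timeout=5.0) as sock:
        sock.sendall((command + "\n").encode())
        return sock.recv(1024).decode().strip()
```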



3. Selection of Suitable Visual Sensor

Multiple H20 mobile robots are used to perform labware transportation between different workstations. Labware manipulation requires an accurate process to achieve the grasping and placing tasks in a safe way [21]. Two Kinect V2 sensors are used with the H20 mobile robot, as shown in Figure 3.


Fig. 3. a) H20 robot in a grasping task; b) the H20 robot in front of the elevator door

An upper Kinect is installed on a holder with a suitable tilt angle to provide a clear view of the workstation, which has multiple positions for labware containers, as shown in Figure 3.a. The upper Kinect is used to identify and localize the required labware and holder for the manipulation task [22]. Polarization and intensity filters are fixed on the Kinect camera to decrease the effects of sunlight and glossy light on the identification process. The upper Kinect cannot be used for button detection because it does not provide a clear view of the panel: its position on the holder and its tilt angle make the view direction of the camera nearly parallel to the button panel (see Figure 3.b). On the other hand, the lower Kinect is used for obstacle avoidance and human-robot interaction [23]. It is not possible to detect the elevator button using the lower Kinect because the button is not within its field of view, as shown in Figure 3.b. This would require increasing the distance between the robot and the button panel to detect the button; but, in this case, the button would be outside the arm workspace. Furthermore, it is not feasible to move the robot closer to the panel after the button detection and localization step. This is related to the lack of accurate feedback of the robot pose after the movement step [24], where there are always errors in robot positioning after reaching the required location [18]. To cope with all these issues, a 3D hand camera is used to perform the task of pressing the elevator buttons instead of the Kinect sensors. The Intel RealSense F200 camera is fixed on the H20 arm, where an appropriate camera base has been designed for this reason, as shown in Figure 4.

Fig. 4. 3D hand camera with its base

The Intel RealSense F200 camera was developed by Intel to work over short distances based on coded light technology to extract the depth information. It has a full HD color camera (1080p at 30 fps), a working range of 20–120 cm for depth sensing, gesture tracking at 20–60 cm, face recognition at 25–100 cm, and a microphone array for speech recognition. Figure 5 shows the components of the RealSense F200 camera, which works under the Windows 8 or 10 operating system. Since this camera extracts the depth information at a short range (20–120 cm), it has a limited view. This feature can be considered a challenge because the camera has to be located at a suitable position in front of the button panel. Therefore, a high robot positional accuracy is required to bring the button within the camera FOV.

Fig. 5. RealSense F200 camera components

4. Working Process

The essential processes for the mobile robot to use the elevator are as follows. The elevator door has to be located, starting by finding the way to the elevator entrance based on a navigation system developed earlier [17]. Then the elevator entry button should be identified by finding the button landmark and its center point in image coordinates and extracting the position information. This information is subsequently fed to the next stage of pressing the entry button. The button position is sent to the arm's kinematic module, which performs the required calculation to press the button and informs the MFS if the pressing operation is not possible. After this, the robot must enter the elevator and move to a predefined position after the door has opened. Afterward, the robot localizes itself based on recognizing a landmark inside the elevator. Next, the control panel, which includes the floor number buttons, has to be found and the destination floor must be chosen.




Then, the robot moves towards the control panel according to the ceiling landmark, finds the required X, Y, Z coordinates of the target button, and feeds this information to the next stage. The required calculation is then performed to press the destination floor button, using the kinematic module. After pressing the floor button, the elevator moves to the required floor and its door eventually opens. The reached destination floor is checked and the destination map is loaded. Then, the robot can finally leave the elevator to complete the transportation task on the required floor. The object detection process requires separating the image into regions to extract useful information which leads to finding the target. The required button can be detected according to its specific global or local features, distinguishing it from the other objects in the view. Specific object features like color, shape, or texture can be used to identify the target in the view. In case the target does not have adequate distinct features for detection, attached marks can be used. The position of the mark or label can be considered as a reference to localize the target. This position has to be transformed to find the button position relative to the camera. Then, another transformation step has to be performed to find the button position relative to the robot finger which will press it. Afterward, the kinematic model calculates the required joint values to guide the arm finger to the button. The design of the finger is one of the most important issues which has to be taken into consideration to guarantee a successful task. Figure 6 shows the architecture of the pressing system for elevator buttons.
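The transformation chain just described (mark position in the camera frame, then button position, then finger frame) can be sketched as follows; the offsets and the camera-to-finger transform used here are illustrative assumptions, not the calibrated values of the real system.

```python
# Illustrative sketch of the coordinate chain: the detected mark position
# (camera frame) is shifted by an assumed mark-to-button offset, then mapped
# into the finger/end-effector frame before the kinematic module is called.
import numpy as np

def homogeneous(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# assumed fixed offset between the colored mark and the button center (meters)
MARK_TO_BUTTON = np.array([0.0, -0.03, 0.0])

# assumed rigid transform from the camera frame to the finger-tip frame,
# obtained once from the hand-camera base design (identity rotation here)
T_FINGER_CAM = homogeneous(np.eye(3), np.array([0.02, 0.05, -0.10]))

def button_in_finger_frame(mark_cam_xyz: np.ndarray) -> np.ndarray:
    """Map a mark position measured by the 3D camera into the finger frame."""
    button_cam = mark_cam_xyz + MARK_TO_BUTTON       # mark -> button (camera frame)
    button_h = np.append(button_cam, 1.0)            # homogeneous coordinates
    return (T_FINGER_CAM @ button_h)[:3]             # camera -> finger frame

print(button_in_finger_frame(np.array([-0.03, 0.01, 0.20])))
```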


Fig. 6. Architecture of the button pressing system

The multi-floor transportation with mobile robots is very important to connect workstations on different floors, which in turn increases productivity and saves human resources. This also includes the collective work of stationary robots and mobile robots to transport the labware to multiple stations to perform different analytical tests [25]. Figure 7 shows an example of a workstation which combines H20 mobile robots and an Orca stationary robot. The H20 mobile robot uses the elevator to move to other floors to deliver the labware to the required workstations.

Fig. 7. Combined workstation of H20 robots

5. Kinematic Analysis of H20 Arms

The H20 mobile robot has dual arms, each with 6 DOF and a 2-DOF gripper. Figure 8 shows the joint structure of the arms, where the values of d3, d5, and de are 0.236 m, 0.232 m, and 0.069 m, respectively [18]. The Denavit-Hartenberg (D-H) representation is used to describe the translation and rotation relationships between the arm links. According to the D-H notation, there are four parameters to analyze the robotic arm: the link length (ai-1), the link twist (αi-1), the link offset (di), and the joint angle (θi), where (i) refers to the link number [26]. By following the D-H rules, the homogeneous transformations between adjacent links are defined. The D-H parameters and the limit of each joint are described in Table 1 [18].

Fig. 8. H20 arms structure [27]

Table 1. D-H parameters and joint limits for the left (L) and right (R) arms

θi | α(i-1) (L) | α(i-1) (R) | a(i-1) (LR) | di (m) (L) | di (m) (R) | Joint limit (LR)
θ1 | 0° | 0° | 0 | 0 | 0 | -20°~192°
θ2 | -90° | 90° | 0 | 0 | 0 | -200°~-85°
θ3 | 90° | -90° | 0 | -0.236 | 0.236 | -129°~0°
θ4 | -90° | 90° | 0 | 0 | 0 | -195°~15°
θ5 | 90° | -90° | 0 | -0.232 | 0.232 | 0°~180°
θ6 | -90° | 90° | 0 | 0 | 0 | -60°~85°
The analytic IK solution of the H20 arms has been found using the reverse decoupling mechanism method [18], [28], [29]. The strategy of this method is to view the kinematic chain of the manipulator in reverse order while decoupling the position and orientation. In other words, the arm can be viewed in reverse order so that the pose of the arm base can be described relative to the end effector.



The kinematic solutions have been validated and simulated using MATLAB with the Robotics Toolbox [18]. Also, a selection algorithm is used to choose the suitable solution among multiple solutions. Moreover, the IK solutions for three cases of singularity have been found [18]. A singularity is the case when some joint axes are aligned with each other, which eliminates one or more degrees of freedom.
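For illustration, a forward-kinematics sketch in the modified D-H convention is given below. It uses the reconstructed left-arm parameters of Table 1 and is not the authors' validated MATLAB/Robotics Toolbox implementation, so the parameter tuples should be checked against [18] before reuse.

```python
# Minimal forward-kinematics sketch with the modified D-H convention of
# Table 1 (alpha_{i-1}, a_{i-1}, d_i, theta_i), assumed left-arm parameters.
import numpy as np

def dh_transform(alpha: float, a: float, d: float, theta: float) -> np.ndarray:
    """Homogeneous transform from frame i-1 to frame i (modified D-H)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct,      -st,      0.0,   a],
        [st * ca,  ct * ca, -sa,  -sa * d],
        [st * sa,  ct * sa,  ca,   ca * d],
        [0.0,      0.0,      0.0,  1.0],
    ])

# (alpha_{i-1}, a_{i-1}, d_i) for the left arm, angles in radians
LEFT_ARM_DH = [
    (0.0,         0.0,  0.0),
    (-np.pi / 2,  0.0,  0.0),
    (np.pi / 2,   0.0, -0.236),
    (-np.pi / 2,  0.0,  0.0),
    (np.pi / 2,   0.0, -0.232),
    (-np.pi / 2,  0.0,  0.0),
]

def forward_kinematics(joint_angles) -> np.ndarray:
    """Pose of the wrist frame relative to the shoulder for given joint angles."""
    T = np.eye(4)
    for (alpha, a, d), theta in zip(LEFT_ARM_DH, joint_angles):
        T = T @ dh_transform(alpha, a, d, theta)
    return T

print(forward_kinematics(np.zeros(6))[:3, 3])   # wrist position at zero configuration
```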

6. Finger Design

For labware transportation with the mobile robots, suitable grippers with labware containers have been designed [21]. Since the labware contains chemical and/or biological components, any kind of spilling has to be avoided. Therefore, a specific design of the grippers and labware containers is required to guarantee secure grasping and placing operations. Figure 9.a shows the design of the grippers and Figure 9.b shows the bottom design of the labware containers [21]. Regarding the button pressing operation, the arm gripper cannot be used directly for this task.


Fig. 9. Design of a) arm gripper, b) labware container bottom, c) button finger bottom

This is related to the arm workspace and the robot position in front of the button panel. The robot has to be at a specific distance from the panel to press the button and to enter the elevator. Therefore, an elevator finger with a specific length has been designed, as shown in Figure 9.c. The bottom design of the labware container and the elevator finger is the same, to fit the gripper design. Figure 10 shows the 3D model of the elevator finger and the labware container and how the gripper grasps them. For labware transportation and button pressing purposes, a holder has been installed on the H20 body. This holder has two placing positions, for the left and right arms, to be used for the secure transportation of labware and the elevator finger. The elevator finger has to be placed on the right position of the holder to be manipulated by the right arm for pressing the required button. Figure 11 shows the holder and how the elevator finger sits on it. The finger has a spring in the middle and a rubber end tip for reliable pressing tasks.


Fig. 10. 3D model of a) gripper with elevator finger, b) gripper with labware container [21]


Fig. 11. a) Holder for elevator finger and labware container, b) the elevator finger sits on the holder

7. Elevator Entry Process

This section covers the robot requirements before entering the elevator. This includes finding the elevator entrance door, the recognition of the entry button, the determination of its real position, the detection of the elevator door's status, and finally going inside the elevator. All these aspects are explained in detail in the following subsections.

7.1. Movement to Elevator Area

The first step in elevator handling is the determination of the elevator zone. Then, the required analysis is performed and processes are activated to reach the target destination. This stage is performed based on predefined waypoints on the map of the elevator area. It depends on the reflective artificial landmarks installed near the elevator to read the exact position and direction of the robot. Based on this information, the robot moves until it reaches the best position, as shown in Figure 12.a. This movement is the basis for the next two stages, which are entry button detection and elevator entrance door status recognition.

Fig. 12. Robot position outside the elevator: a) entry position movement idea, b) higher accuracy robot positioning



The H20 mobile robot arms have a limited workspace, and thus it is necessary to control the robot until it reaches a desired predefined position with high accuracy, so that the arm is able to work within its workspace. The arm should move in a vertical straight path to the panel to press the button, and its workspace increases or decreases according to the robot's position near the elevator area. The robot must locate itself in a specific position that allows it to press the entry button and to enter the elevator after changing its orientation when the door opens. Therefore, a hybrid method is used to achieve a higher positional accuracy, which utilizes a correction function based on the SGM (see Figure 12.b) and a motor encoder correction based on an ultrasonic distance sensor, as follows. First, the movement core is utilized until the predefined position is achieved; the movement repeatability reaches 5 cm in the x-axis and 3.5 cm in the y-axis [17]. Then a correction procedure is performed to reach the predefined position with a repeatability of 3 cm in the x-axis and 2 cm in the y-axis, based on the last movement direction. Finally, the ultrasonic distance sensor with the motor encoder is employed to ensure that the robot reaches the exact distance to the door within an accuracy range of 1–2 cm.
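The staged approach can be sketched as follows; the robot interface calls and the stand-off distance are assumptions for illustration, not the real H20 movement core API.

```python
# Rough sketch of the hybrid positioning procedure: a coarse move to the
# waypoint, a landmark-based correction, and a final encoder-driven approach
# until the ultrasonic sensor reports the desired door distance.
TARGET_DOOR_DISTANCE_M = 0.60     # assumed stand-off distance to the elevator door
TOLERANCE_M = 0.02

def approach_entry_position(robot):
    robot.move_to_waypoint("elevator_entry")      # coarse move (~5 cm / 3.5 cm repeatability)
    robot.correct_pose_with_landmark()            # SGM-based correction (~3 cm / 2 cm)
    while True:
        distance = robot.read_ultrasonic()        # measured distance to the glass door
        error = distance - TARGET_DOOR_DISTANCE_M
        if abs(error) <= TOLERANCE_M:             # within the 1-2 cm accuracy range
            break
        robot.drive_straight(error)               # encoder-based fine motion
```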


7.2. External Button Detection

The glass elevator used has a passenger cabin with two sides made of glass panels and metal frames. The transparent surfaces add a big challenge for entry button detection, especially in sunny weather. The entry button detection technique uses computer vision and a depth sensor; an Intel F200 sensor acquires the RGB image and the depth information. Since the entry button and its panel are made from the same reflective material, the button itself is difficult to detect. Therefore, a landmark with a specific shape and color has been fixed close to the entry button to enable an easy and reliable recognition process by the mobile robot, as shown in Figure 13.

Fig. 13. Elevator entry button and its colored landmark

The elevator entry button detection process starts by initializing the F200 camera with the required frame rate, image resolution, and depth resolution. Then an image is captured from the F200 RGB sensor and a filtering step is applied to remove the unwanted band of colors. The edges of the detected object are extracted to find the shape of the button landmark. Afterward, the position of the mark center point is calibrated to be related to the button center point. Various filters related to shape recognition, such as size and distortion, can be applied in order to increase the success rate. Finally, the mapping between the RGB and depth pixels is performed to derive the real and accurate position coordinates of the landmark related to the 3D camera. Two kinds of color representation filters are used to remove the background and retain the required color range for the button: RGB and HSL filters. The working environment has varying lighting and sunlight conditions, which can easily affect the detection process with the RGB color system; thus, the success rate may be reduced significantly. This problem has been solved by using the HSL color representation, which is more stable against dynamic lighting conditions. The flowchart of button detection is shown in Figure 14.

Fig. 14. HSL-based entry button detection
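A minimal sketch of this HSL-based detection step is shown below, assuming OpenCV (which names the representation HLS) and a depth image already aligned to the color image; the hue band, size limits, and distortion bounds are illustrative, not the tuned values of the EHS.

```python
# Sketch of the landmark detection chain: HSL/HLS color filtering, shape and
# size filters on the contours, then a depth lookup at the landmark center.
import cv2
import numpy as np

def detect_button_landmark(bgr: np.ndarray, depth: np.ndarray):
    hls = cv2.cvtColor(bgr, cv2.COLOR_BGR2HLS)
    # keep only the expected hue/lightness/saturation band of the landmark
    mask = cv2.inRange(hls, (0, 60, 80), (15, 200, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if not 200 < area < 5000:                 # size filter
            continue
        x, y, w, h = cv2.boundingRect(cnt)
        if not 0.7 < w / float(h) < 1.3:          # distortion/shape filter
            continue
        cx, cy = x + w // 2, y + h // 2           # landmark center in pixels
        z = float(depth[cy, cx])                  # depth value mapped to the RGB pixel
        return (cx, cy, z)
    return None                                   # report failure to the MFS
```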




Fig. 15. Entry button detection based on F200



Table 2. Entry button detection in sunny weather at different distances

Distance to Entry Button | Cartesian Coordinates | Mean | STDEV | Tolerance | Unit
13 cm | X | -3.955 | 0.004 | ± 0.014 | cm
13 cm | Y | 0.869 | 0.014 | ± 0.014 | cm
13 cm | Z | 13.100 | 0.00 | 0 | cm
20 cm | X | -3.665 | 0.008 | ± 0.0215 | cm
20 cm | Y | 1.290 | 0.011 | ± 0.021 | cm
20 cm | Z | 20.000 | 0.00 | 0 | cm
30 cm | X | -4.434 | 0.025 | ± 0.0315 | cm
30 cm | Y | 1.775 | 0.057 | ± 0.066 | cm
30 cm | Z | 29.978 | 0.042 | ± 0.05 | cm
40 cm | X | -3.337 | 0.037 | ± 0.046 | cm
40 cm | Y | 2.156 | 0.003 | ± 0.003 | cm
40 cm | Z | 39.937 | 0.049 | ± 0.05 | cm



An experiment was performed to examine the performance of the F200 vision sensor with the HSL filter method for entry button detection. The F200 camera was installed on a camera stand in front of the external button, as shown in Figure 15.a. The experiment was repeated 100 times for each distance (13, 20, 30, and 40 cm) and under different lighting conditions. The experimental results are shown in the chart in Figure 15.b, and it is clear that the entry button detection based on the F200 camera gives stable depth detection even at 13 cm under different lighting conditions, and entry button recognition was 100% successful in normal weather. Sunny weather reduced the success rate of the detection operation to 96% when the distance was 40 cm. The standard deviations (STDEV), means, and tolerances of the entry button detection in sunny and cloudy weather are shown in Table 2 and Table 3, where tolerance is defined as the possible limits of variation in the value of the positional error, found by calculating the difference between the maximum and minimum readings. The calculated tolerance values cannot be improved since they depend on the F200 hardware. In these tables, the extracted entry button position is referenced to the origin point of the utilized camera; thus, the x and y values can be positive or negative depending on the entry button position relative to the camera.


Table 3. Entry button detection in cloudy weather at different distances

Distance to Entry Button | Cartesian Coordinates | Mean | STDEV | Tolerance | Unit
13 cm | X | 0.870 | 0.019 | ± 0.0305 | cm
13 cm | Y | 0.632 | 0.014 | ± 0.016 | cm
13 cm | Z | 12.974 | 0.044 | ± 0.05 | cm
20 cm | X | 1.074 | 0.024 | ± 0.063 | cm
20 cm | Y | 1.072 | 0.018 | ± 0.021 | cm
20 cm | Z | 20.000 | 0.00 | 0 | cm
30 cm | X | 0.515 | 0.034 | ± 0.0945 | cm
30 cm | Y | 1.500 | 0.011 | ± 0.066 | cm
30 cm | Z | 30.101 | 0.010 | ± 0.05 | cm
40 cm | X | -1.107 | 0.00 | ± 0.0015 | cm
40 cm | Y | 1.901 | 0.001 | ± 0.002 | cm
40 cm | Z | 39.798 | 0.014 | ± 0.05 | cm

Another possibility to avoid the sunlight effects on the color detection process is local feature matching algorithms [30]–[32]. These algorithms are relatively independent of changes in scale, illumination, and orientation. The speeded-up robust features (SURF) algorithm can be considered an efficient object recognition method with a fast scale- and rotation-invariant detector and descriptor [33]. The process starts with an offline step by capturing an image of the target, which is saved in the database as a matching reference. Then, the matching is performed by extracting local features from the reference image and identifying them in the current image. A specific textured mark has been fixed on the panel of the entry button. This mark is considered as a reference to localize the button for the pressing operation. The reference mark is recognized and localized using the SURF algorithm with the Intel F200 camera. Then, this information is used to guide the finger to the target and press the button, as shown in Figure 16.
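A sketch of such a SURF matching step is given below; it assumes an opencv-contrib build with the non-free xfeatures2d module, and "reference.png" stands for the textured mark stored offline as the matching reference.

```python
# Illustrative SURF matching step: keypoints of the stored reference mark are
# matched against the current frame and filtered with Lowe's ratio test.
import cv2

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
matcher = cv2.BFMatcher(cv2.NORM_L2)

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
ref_kp, ref_des = surf.detectAndCompute(ref, None)

def locate_mark(gray_frame):
    """Return matched keypoint coordinates of the mark in the current frame."""
    kp, des = surf.detectAndCompute(gray_frame, None)
    if des is None:
        return []
    matches = matcher.knnMatch(ref_des, des, k=2)
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
            good.append(pair[0])                      # Lowe ratio test
    return [kp[m.trainIdx].pt for m in good]          # mark keypoints in the frame
```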


Fig. 16. SURF algorithm with F200 camera for button detection and localization

7.3. Elevator Opening Detection

Once the entry button has been pressed, the robot has to enter the elevator. The mobile robot has to monitor the status of the entrance door and check whether there is free space in the elevator when the door has opened. The H20 mobile robot is equipped with many sensor modules, including IR distance approximation and ultrasonic detection. Since the elevator door consists of a metal frame with glass panels, the IR sensor module cannot be used because the IR beam is not reflected from glass surfaces. Therefore, the ultrasonic distance sensor has been chosen as a range finder to detect the door's status. A new method was established for checking the elevator door status based on the ultrasonic distance sensor. The data from the ultrasonic distance sensor determine whether the elevator door is open or closed and, in addition, whether the elevator has free space or not.
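The decision logic can be summarized with a small sketch; the two distance thresholds are assumed example values rather than the calibrated ones.

```python
# Minimal sketch of the door-status check: the ultrasonic range reading is
# compared with two assumed thresholds to decide whether the door is open and
# whether the cabin offers enough free space for the robot to enter.
DOOR_CLOSED_MAX_M = 0.5    # assumed: a closed door reflects the pulse at short range
FREE_SPACE_MIN_M = 1.2     # assumed: minimum clear depth inside the cabin

def door_status(ultrasonic_range_m: float) -> str:
    if ultrasonic_range_m <= DOOR_CLOSED_MAX_M:
        return "closed"
    if ultrasonic_range_m < FREE_SPACE_MIN_M:
        return "open_no_space"      # door open but cabin blocked
    return "open_free"              # safe to enter
```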


8. Destination Floor Transition

It is important for the mobile robot to localize itself inside the elevator to reach the control panel. Two methods can be used for localization inside the elevator. The first is based on the ceiling light as a natural landmark, as shown in Figure 17.a. This landmark has unique features and can be extracted easily using the vision sensor. However, using the ceiling light as a landmark is not reliable, since it may break at any time. The second method is based on an artificial passive landmark and the SGM to localize the robot inside the elevator, as shown in Figure 17.b. After performing many experiments, localization inside the elevator based on artificial landmarks was adopted due to its stability. Next, the buttons on the control panel with the destination floor have to be detected, the current floor has to be read, and the elevator has to be left when the door is opened. The movement strategy inside the elevator, internal button detection, and current floor reading methods are explained in detail in this section.

Fig. 17. a) Ceiling light natural landmark and b) artificial landmark inside the elevator

8.1. Robot Movement Inside Elevator

The arm's limited workspace and the small size of the cabin add further challenges for the robot movement inside the elevator. To overcome these difficulties, the movement core utilizes the installed landmark with a correction procedure to reach a predefined position accurately. Then, the robot rotates towards the control panel. Finally, the ultrasonic range finder with the motor encoder is employed to reach the exact position in front of the panel, conforming to the arm's limited workspace. These procedures (clarified in Figure 18) are the basis for the destination floor button pressing operation. After completion, the movement core returns the robot back towards the starting position so that it can leave the elevator safely when the destination floor is reached.

Fig. 18. Robot behavior inside the elevator

8.2. Internal Button Detection


Fig. 19. Buttons inside the elevator

As a first stage to detect the destination floor button, the movement core is used to specify the destination floor. At this level, the movement core depends on the transportation task status (Grasp Position Done, Place Position Done, and Charge Position Done) to determine the current destination. When the robot enters the elevator, the movement core checks the current destination floor based on the current intermediate goal, as clarified in Table 4. For example, if the grasping operation has been completed, the placing operation floor has to be requested. Then the elevator is directed to go to the required destination floor.



Fig. 20. Internal button detection flowchart


Fig. 21. Elevator handler GUI and Internal button operation


Fig. 22. Internal button detection stability for buttons B1–B4, B_A, and B_E



Table 4. Destination floor selection strategy

Grasp Station Done | Place Station Done | Charge Station Done | Entry Button | Internal Button
not yet | not yet | not yet | Current Floor | Grasp Station Floor
Done | not yet | not yet | Current Floor | Place Station Floor
Done | Done | not yet | Current Floor | Charge Station Floor
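The selection rule of Table 4 reduces to a simple mapping from task status to destination floor, as in the following sketch; the floor arguments are placeholders supplied by the transportation task.

```python
# Sketch of the destination-selection rule of Table 4: the transportation task
# status decides which floor button has to be pressed inside the elevator.
def select_destination_floor(grasp_done: bool, place_done: bool,
                             grasp_floor: int, place_floor: int,
                             charge_floor: int) -> int:
    if not grasp_done:
        return grasp_floor           # still have to pick the labware up
    if not place_done:
        return place_floor           # labware grasped, deliver it next
    return charge_floor              # task finished, go back to the charge station
```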

At this level of the elevator handling system, Optical Character Recognition (OCR) is embedded within a new method to find the internal buttons (see Figure 19). In this method, it is important to recognize each button label separately, and thus the developed method applies a combination of filters. These include grayscale conversion, which makes the captured image suitable for the subsequent stages; contrast stretching, to improve the contrast in the image by stretching the intensity range; and an adaptive threshold, to choose the best threshold under different light conditions for binary image conversion. Next, the search among the button candidates uses specific width and height features, takes the inverse value of the pixels, and flips the candidate images horizontally (as the F200 image stream is mirrored) to make them suitable for the OCR stage. Each extracted candidate is passed to the OCR engine for comparison with the required destination and, finally, based on the position of the matching candidate in the image and the depth information, the real coordinates are extracted and translated into the robot arm reference frame. Figure 20 demonstrates the internal button detection flowchart.
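A compact sketch of this recognition chain is shown below; it assumes OpenCV and pytesseract as a stand-in OCR engine, and the size limits are illustrative.

```python
# Sketch of the internal-button recognition chain: grayscale, contrast stretch,
# adaptive threshold, size-based candidate extraction, horizontal flip, OCR.
import cv2
import numpy as np
import pytesseract

def find_destination_button(bgr: np.ndarray, wanted_label: str):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    gray = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)       # stretch contrast
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 10)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)
        if not (20 < w < 120 and 20 < h < 120):                     # candidate size filter
            continue
        candidate = cv2.flip(binary[y:y + h, x:x + w], 1)           # undo the mirrored stream
        text = pytesseract.image_to_string(
            candidate, config="--psm 10").strip()                   # single-character mode
        if text == wanted_label:
            return (x + w // 2, y + h // 2)                         # button label center
    return None
```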

Fig. 23. Internal buttons detection success rate comparison (B_1: 100%, B_2: 100%, B_3: 100%, B_4: 98%, B_A: 99%, B_E: 100%)

An experiment was conducted to verify the internal button detection method. The Intel RealSense F200 (RGB-D) sensor was positioned facing the internal control panel, and the experiment was repeated a hundred times for each button. The F200 captured an image, and the processed image with the related information (extracted coordinates, real button coordinates, detection time, etc.) is demonstrated in Figure 21. The extracted real button coordinates were measured with the F200 as the reference position. The real button coordinates are reported as a chart for each button separately, as clarified in Figure 22. In Figure 22, the red circles indicate the false negative cases while the green circles represent false positive cases. The false positives appeared when the system detected the original number on the button instead of the installed landmark. Therefore, the numbers on the buttons were covered and the detection experiments were repeated. The new results of 600 experiments show that the false positive errors were removed and the success rate reaches 99.5%, as shown in Figure 23.

8.3. Current Floor Estimation

The floor estimation technique is applied to inform the robot about its current position inside the elevator. When the destination matches the current floor and the ultrasonic distance sensor recognizes the door status as open, the robot can leave the elevator. As a first method, a computer vision approach was used to read the current floor number indicator in the glassy elevator environment. As explained later in this section, this approach has many limitations, and thus a current floor estimation method based on a height measurement system is utilized instead.

8.3.1. Floor Reader Based on Computer Vision

This system recognizes the current floor number, which is installed in the elevator shaft for each floor, based on computer vision, as shown in Figure 24.

Fig. 24. Floor number installed on the elevator door

As a first step, the image is converted to grayscale to make it suitable for the adaptive binary threshold filter. Secondly, the objects inside the processed binary image are collected to choose the best candidate based on size. Finally, the OCR engine is applied to the candidate object to recognize the floor number. When the destination matches the current floor number, the robot leaves the elevator. This approach has been validated in normal weather conditions and its functionality was proven with a success rate reaching 99% [34]. However, this approach has many limitations. Since the floor number mark is installed on a glassy surface, the robot may not be able to read the number at some times of the day because of sunlight reflecting off the number mark.



The F200 camera is installed on the robot's arm; thus, the arm should be lifted all the time so that it can read the number mark, which consumes a lot of power. The robot must also detect the floor number before the door is opened, and in certain situations, such as a human forming an obstacle between the robot camera and the floor number indicator, the robot will fail to detect the floor number mark. Incorrect current floor estimation can make the robot lose its way to the destination. Thus, a current floor estimation method based on a height measurement system is used instead.

8.3.2. Floor Estimation System Based on Height Measurement

As a hardware platform for the height measurement system, the LPS25HB pressure sensor and the STM32L053 microcontroller were configured and programmed to detect the current floor position. Several challenges had to be solved to use the pressure sensor as a floor estimation system. Firstly, a soldering drift, defined as the difference between the accuracy of the sensor before and after soldering, appeared when the pressure sensor was attached to the STM32L053 microcontroller. A one-point calibration technique was used to solve the soldering drift problem by comparing the pressure sensor readings after attachment with a precision barometer; the difference was calculated and added to each pressure sensor reading. Secondly, the absolute digital barometer (pressure sensor) readings on the same floor of the building keep changing during the day due to varying weather conditions. The oscillation of the output signal and the wide variations in readings during the week would reduce the utility of this technique for floor detection. Two methods were applied to deal with the variations in pressure: a smoothing filter with a finite impulse response (FIR) structure was used to solve the problem of small variations in the pressure sensor readings, and an adaptive calibration method was used to calibrate the sensor readings for the robot's current floor before entering the elevator, in order to overcome the wide variation in daily pressure readings [19]. This method proves its efficiency with a 100% success rate.
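The combination of FIR smoothing, adaptive one-point calibration, and height conversion can be sketched as follows; the storey height, filter length, and the barometric approximation Δh ≈ -ΔP/(ρg) are assumptions for illustration, not the parameters of the deployed firmware.

```python
# Sketch of the height-based floor estimation: a short moving-average FIR
# smooths the pressure signal, an adaptive one-point calibration is taken on
# the known current floor before entering the elevator, and the relative
# height follows from the barometric approximation dh ~= -dP / (rho * g).
from collections import deque

RHO_G = 1.2 * 9.81          # air density [kg/m^3] * gravity [m/s^2]
FLOOR_HEIGHT_M = 3.5        # assumed storey height of the building
FIR_TAPS = 16               # assumed moving-average length

class FloorEstimator:
    def __init__(self, known_floor: int, calib_pressure_pa: float):
        self.known_floor = known_floor            # adaptive calibration reference
        self.calib_pressure = calib_pressure_pa
        self.window = deque(maxlen=FIR_TAPS)

    def update(self, pressure_pa: float) -> int:
        self.window.append(pressure_pa)
        smoothed = sum(self.window) / len(self.window)          # FIR smoothing
        height = -(smoothed - self.calib_pressure) / RHO_G      # relative height [m]
        return self.known_floor + round(height / FLOOR_HEIGHT_M)
```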

9. Elevator Handling System and Robot Arm Kinematic Module Sockets

The EHS is a stand-alone program created to handle the elevator operations for multi-floor navigation. This system has the ability to detect the elevator environment through the following process. The exact position of the elevator entry button is first extracted. Then, the destination key is selected, and the current floor number is read. All this information is sent back over the socket to the MFS.


The button pressing operation has been realized using three main software platforms: the MFS, the RAKM, and the button detection and localization module using the F200 camera. These three systems are connected with each other through asynchronous sockets. The three platforms exchange orders and information using two client-server communication models, as shown in Figure 25. The MFS receives the X, Y, Z position information from the button detection and localization module. Then, the MFS sends this information as an order to the RAKM to press the button. Afterward, the RAKM sends the performance status back to the MFS. Additionally, the RAKM socket is also used for transportation tasks to send grasping and placing orders. The RAKM informs the MFS when the requested operation (grasp, place, or press button) has been performed. The MFS waits for a predefined time; if the RAKM fails to complete the task, it informs the RRC and stops the current transportation task to save time. Figure 26 shows the data flow between the MFS, the RAKM, and the elevator handler.

Fig. 25. Client-server models for button pressing operation

Fig. 26. Data flow between the MFS, elevator handler System, and RAKM

10. The Button Operation (Complete Scenario)

The entry and internal button detection and current floor reading methods have been developed as the main operations of the elevator handling system. In the EHS, the F200 camera is utilized as an RGB-D sensor with HSL filtering for stable color detection. The MFS with the EHS and the arm's kinematic module are developed to perform the detection and pressing operation for the required button. The button pressing procedure can be explained as follows. The movement core unit is used to reach the accurate elevator button detection position. The MFS then sends an initial movement request via a socket to the arm's kinematic module.





Fig. 27. Complete scenario


Afterward, the EHS sends an initial request via a socket to the F200 camera after the arm reaches its initial position. The arm's initial movement is essential to place the camera with the arm finger in a close position in front of the required button. The destination button label is detected and its real coordinate position is extracted. A checking process related to the button detection is performed; the detection request is re-sent to the EHS if it fails to detect the button, for up to 3 attempts. After the button detection step, the extracted position is sent back to the MFS. This position information corresponds to the position of the button label relative to the camera. Therefore, two calibration steps are required. The first is to find the button position relative to the camera according to the position difference between the button and its label. The second is to find the position of the button relative to the elevator finger according to the position difference between the camera and the finger. Finally, a pressing order with the required x, y, and z coordinates is sent to the kinematic module to move the arm to the target. For the pressing operation, the inverse kinematic solution is utilized to calculate the required angles of the arm joints based on the destination button position. The calculated joint angles are then sent via a socket to the arm's servo motors. The flowchart of this scenario is shown in Figure 27.

In the case of failure at any of these stages, the MFS rearranges the transportation task to complete it or to return to the charge station, and informs the RRC level about the failure. To press the required button, the arm should first grasp the elevator finger from the H20 holder and then move to the initial position so as to be close to the button. The initial position of the arm movement depends on the position of the button relative to the arm shoulder. The elevator entry button and the internal buttons inside the elevator (see Figure 28) have a fixed height relative to the arm shoulder. According to the height of each button and the range of positioning error in front of the button panel, the initial position of the finger can be estimated. This step is essential to provide a clear view for the 3D camera to detect the required button. It also brings the finger close to the target so that only a small arm movement is needed to press the button.

Fig. 28. Positions of buttons inside the elevator



The process of entering the elevator has to be organized carefully by the robot. There are an IR transmitter and receiver on both sides of the elevator door. After pressing the entry button, the elevator door stays open for 15 seconds provided that nobody blocks the IR signals by entering the elevator; this time is quite enough for the robot to enter the elevator. In case someone enters the elevator, the door stays open for just 3 seconds, which is very short and can cause the door to collide with the H20 robot body. Another problem is related to the interference of the IR signals of the F200 camera with the IR signals of the elevator door. This has the same effect as blocking the IR signals when someone enters the elevator. To cope with this issue, the left arm of the H20 robot is configured to be in front of the body, as shown in Figure 29. The left arm keeps blocking the elevator signals, which in turn keeps the door open and avoids a collision of the door with the robot during its movement.

Fig. 29. Left arm configuration to keep the elevator door open

An experiment was performed to validate the elevator entry button detection with the RAKM in a real transportation task. In this experiment, the robot speed was 0.2 m/s while the angular speed was 0.34 rad/s. The H20 mobile robot was employed for the transportation task. The MFS started to execute a multi-floor transportation task between the second and third floors of the celisca building. When the MFS reached the predefined elevator entry position (the position that allows the robot to enter the elevator directly after the door opens), the MFS requested that the EHS initialize the F200 camera and ordered the RAKM to control the robot arm. The RAKM starts its working process by grasping the finger as a new end effector for button pressing and then controlling the arm to move to the initial detection position. After completing the entry button detection method and extracting the button's real x, y, and z coordinates, the RAKM presses the button. This experiment was repeated ten times. Each time, the communication sockets succeeded in transporting the MFS orders to the EHS and the RAKM; the EHS detected the entry button position and sent the x, y, and z coordinates to the MFS, which added the calibration values and sent them to the RAKM for the pressing operation. The robot succeeded in reaching the elevator entry button position with a repeatability range of ±1.25 cm for the x-axis and ±1 cm for the y-axis. The steps of the pressing scenario are shown in Figure 30.


Fig. 30. Complete operation scenario: a) starting step (arm at rest configuration), b) RAKM picks the finger, c) finger attached to the robot arm, d) arm reaches the initial position and the button is detected, e) entry button is pressed, f) elevator door is opened



In the pressing operation of the entry button, the robotic arm first moves from the rest configuration to the manipulation point of the elevator finger to grasp it. The elevator finger is placed on the robot holder. Then the arm moves with the finger to the initial position according to the required button. At the rest configuration, the position of the end effector relative to the arm shoulder is (X = 0.566 m, Y = 0 m, Z = 0 m). This position information is given in the shoulder coordinates shown in Figure 8, where X represents the arm length in the rest configuration.

Fig. 31. End effector position versus time

Fig. 32. End effector path in the XY plane

Table 5. Joint values in degrees for button pressing

Configuration | J1 | J2 | J3 | J4 | J5 | J6
Rest | -90° | -90° | 90° | – | – | –
Finger grasping | 37° | -146° | -98° | -85° | 57° | –
Initial position | -3° | -92° | -91° | -75° | -18° | 27°

Figure 31 shows a chart of the changes in the end-effector position during the arm movement from the rest configuration to the initial position for the entry button. In this example, the position of the finger manipulation point relative to the arm shoulder is (X = 0.312 m, Y = 0.302 m, Z = 0.005 m). On the other hand, the initial position of the end effector for the entry button relative to the arm shoulder is (X = 0.00 m, Y = 0.38 m, Z = 0.20 m). The approximate time required to reach the manipulation point of the finger is about 16 seconds, while the complete time required to reach the initial position for the entry button is 34 seconds.


Figure 32 shows the path of the end effector in the XY plane for the same example. Furthermore, Table 5 shows the changes of the arm joint values in degrees from the rest configuration to the finger grasping point and then to the initial position. The full time required for pressing the entry button is about 43 seconds. The work has been developed using Microsoft Visual Studio 2015 with the C# programming language. The project runs on a Windows 10 platform on the H20 tablet.

11. Conclusion

A new approach to handle glassy elevator operations with a mobile robot is presented to enable multi-floor transportation in life science laboratories. Passive landmarks with the stargazer sensor module are utilized to localize the robot in front of the elevator. The Intel RealSense F200 vision sensor is used for entry and internal button detection and localization. This sensor is fixed on the robot arm to reduce the effect of sunlight on the entry button detection and to compensate for the weakness of the H20 arm joints. A landmark has been installed around the entry button with a specific shape and color so as to enable an easy and applicable recognition process. The Optical Character Recognition (OCR) algorithm is used to recognize the numbers of the elevator internal buttons. The LPS25HB pressure sensor and the STM32L053 microcontroller were configured and programmed to work as a hardware platform for a robust current floor estimation approach. For the button pressing operation, a kinematic solution is employed to control the arm joint movements and a special finger model is designed. An IEEE 802.11g communication network with a server-client structure and a TCP/IP command protocol is used to establish a reliable and extendable communication network. Three sets of experiments have been performed to validate the presented systems. The experimental results confirm that the presented elevator operation handling system has an efficient performance, which meets the requirements of life science transportation tasks based on mobile robots.

ACKNOWLEDGMENTS

The authors would like to thank the Ministry of Higher Education and Scientific Research in Iraq for the scholarship provided by Mosul University (Ph.D. stipend A. A. Abdulla), the German Academic Exchange Service (Ph.D. stipend M. M. Ali) and the German Federal Ministry of Education and Research for the financial support (FKZ: 03Z1KN11, 03Z1KI1). We also wish to thank DrRobot Company (Canada) for the technical support of the H20 mobile robots, and Mr. Lars Woinar for his contributions in the 3D modelling and printing designs.

AUTHORS

Ali A. Abdulla* – Center for Life Science Automation (celisca), University of Rostock, Rostock 18119, Germany; College of Engineering, University of Mosul, Mosul, Iraq. E-mail: Ali.abdulla@celisca.de.



Mohammed M. Ali* – Center for Life Science Automation (celisca), University of Rostock, Rostock 18119, Germany. E-mail: mohammed.myasar.ali@celisca.de.

Norbert Stoll – Institute of Automation, University of Rostock, Rostock 18119, Germany. E-mail: Norbert.Stoll@uni-rostock.de.

Kerstin Thurow – Center for Life Science Automation (celisca), University of Rostock, Rostock 18119, Germany. E-mail: Kerstin.Thurow@celisca.de.

Underlined authors made equal contributions to this paper.

* Corresponding authors




Mobile Robot Transportation for Multiple Labware with Hybrid Pose Correction in Life Science Laboratories

Submitted: 2nd August 2017; accepted: 8th January 2018

Mohammed M. Ali, Ali A. Abdulla, Norbert Stoll, Kerstin Thurow

DOI: 10.14313/JAMRIS_4-2017/36

Abstract: In automated working environments, mobile robots can be used for different purposes such as material handling, domestic services, and object transportation. This work presents a transportation process for multiple labware with hybrid pose correction in life science laboratories using H20 mobile robots. Multiple labware and tube racks, which contain chemical and biological components, have to be transported safely between laboratories on different floors of a life science environment. Therefore, an accurate approach for labware transportation is required. The H20 robot has dual arms, each consisting of 6 revolute joints (6 DOF). The problem of robot positioning errors in front of the workstation is described. The navigation strategy with its related systems is presented for a multi-floor mobile robot transportation environment. A Stargazer module is used as a stable and low-cost mapping and localization sensor with artificial landmarks. An error management system to overcome incorrect Stargazer readings is presented. Different strategies of pose correction for mobile robots are described. The H20 robot is equipped with sonar sensors and a Kinect V2 for labware manipulation and position correction. The Kinect V2 sensor with the SURF algorithm (Speeded-Up Robust Features) is used to recognize and localize the target. The communication between the transportation platforms is realized using client-server models.

Keywords: robot position correction, multiple labware transportation, mobile robot localization, motor encoder, localization error handler, Kinect V2, grasping and placing operation, multi-floor

1. Introduction

Mobile robots are widely used to perform different tasks in automation fields such as product transportation [1], domestic services [2], teleoperation [3], or material handling [4]. In this work, a labware transportation system using mobile robots (H20 robot, Dr. Robot, Canada) in a life science environment is presented. The H20 robot is a wireless networked autonomous humanoid mobile robot. It has a PC tablet, dual arms, and an indoor GPS navigation system (see Fig. 1). The labware, which is shown in Fig. 2, contains chemical and/or biological components.

Dealing with such objects requires accurate and secure manipulation and transportation, because any kind of spilling has to be avoided. Several technical achievements have been developed at the Center for Life Science Automation (celisca, University of Rostock) to improve the H20 transportation system [5], [6]. Different automation islands in different laboratories and on different floors can be connected using stationary and mobile robots. This connection increases productivity and saves human resources by ensuring 24/7 operation and by reducing the routine work of the employees. It requires several prerequisites such as robot navigation control, object recognition with position estimation, and arm control. The navigation system includes mapping, robot localization, and path planning. For object manipulation, the robotic arm has to be guided to the target. The target pose can be acquired visually using a suitable sensor with a proper recognition algorithm. Then, the kinematic model is used to calculate the joint angles that guide the arm end effector accurately to the desired object [6]. For indoor maneuvering, the Stargazer sensor with ceiling landmarks (Hagisonic, Korea) is used with the H20 mobile robots for moving between adjacent labs. This guidance system inevitably causes positioning and orientation errors in front of the automated islands. The inaccuracy of the robot pose has two causes. The first is strong lighting and/or sunlight, which blinds the Stargazer and affects the identification of the ceiling landmarks. The second is the accumulated error of the odometry system, which uses encoders mounted on the robot wheels to provide feedback on the robot motion. Several factors create accumulated errors, such as unequal wheel diameters, wheel slippage, wheel misalignment, and finite encoder resolution. According to the experimental results and previous studies, robot rotation is the greatest source of odometry errors [7], [8]. Related to robot transportation, Hui et al. presented a single-floor transportation system based on the H20 mobile robot [9]. In this system, mapping and localization were based completely on the Stargazer sensor module. Two hybrid methods were proposed for path planning from a single source to a single destination point. To handle a complex building structure with laboratories distributed over different floors, a multiple-floor transportation system has been developed [5]. In the mobile robot multi-floor transportation system, the robot onboard computer is developed to
realize the functions of mapping, indoor localization, path planning, an automated door control system, a communication system, a battery charging management system, and an elevator handling system [10], [11].

Fig. 1. H20 mobile robot in front of the workstation

Fig. 2. Different labware and tube racks


The multi-floor environment adds further challenges for map building, since the map must represent positions in X, Y coordinates together with floor numbers. In the developed mapping method [5], the SGM is used as a HEX reader in 'alone' working mode. The landmark ID is utilized to define the current floor, and the information extracted from the IDs is used to build the relative map. Two kinds of maps are employed: a relative (metric) map and a path map. The relative map is used as a global map of the multi-floor environment with a unique reference point. The path map, on the other hand, is used to realize an obstacle-free set of paths between a starting position and a destination position; it relies on the relative map to specify the positions of its waypoints. A localization method based on the relative map is used to find the mobile robot's position inside the multi-floor environment [5]. A new static path method with dynamic goal selection was designed to realize obstacle-free paths that direct the robot to the required goal. This method optimizes the planning speed as well as
the number of paths used to reach the destination. However, the developed method cannot deal with unexpected dynamic obstacles. Thus, another path planning method was developed using a Floyd searching algorithm. This method is used, due to its efficiency and simplicity, to dynamically plan the path from any point to an intermediate destination [12]. The Floyd method is employed when dynamic obstacle avoidance is integrated with the multi-floor system or when the static paths become unavailable for any other reason. A smart management system selects between these two methods so as to achieve fast and flexible path generation. To cope with the problem of robot positioning errors, an intelligent procedure to manipulate the required object and to correct the robot pose in front of the workstation is required. This is crucial to guarantee secure and successful grasping and placing of the labware. The robot has to be close enough to the workstation to ensure that the required target is within the reachable workspace of the arms. This is particularly important because the robot has unstable and weak arms. The optimal distance between the robot center and the manipulation point of the workstation is 45 cm. The closer the robot is to this value, the better the success rates for object manipulation. Thus, the distance should be within ±2 cm of the optimal value (43–47 cm) to obtain a sufficient success rate for labware manipulation. In order to correct the robot position and to manipulate the required target, sensors for distance feedback are required. Visual, IR, and sonar sensors can be very useful for such tasks [13–18]. Visual sensors such as 3D cameras are suitable and preferable since they provide position information in the working space. When visual sensors are used for position correction, the target reference in the image has to be identified and localized. Several features can be extracted from the captured image to find the target; color, shape, and texture can be considered the most important sources for object identification. In order to use specific local textures, feature matching algorithms can be employed: the local features are extracted and matched with the features stored in the database for the object of interest. SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), and FAST (Features from Accelerated Segment Test) are the most common algorithms for such purposes [19], [20], [21]. These algorithms are largely invariant to changes in scale, illumination, and orientation. Katsuki et al. attached marks to the target objects so that a robot system could handle them [22]. Zickler et al. used humanoid robots to detect and localize multiple objects on a kitchen desk [23]. Anh et al. proposed an object tracking method based on SURF [24]. Some researchers use the Kinect as a visual sensor to provide position feedback of the scene. The Kinect sensor is preferable since it directly provides depth data without any additional image
processing, as is needed in the case of stereo vision. Chung et al. used the Kinect sensor to help humans transport objects with a service mobile robot [1]. Ramisa et al. used the Kinect for cloth manipulation based on the depth frames [25]. According to the previously mentioned research, target detection and localization are essential to guide the mobile robot in achieving the required tasks. The target position can be used as a reference for arm manipulation and robot position correction tasks. In this work, five H20 mobile robots are used to maneuver between the laboratories for transporting multiple labware. Several concepts and challenges are taken into consideration to realize an efficient performance. The information fed back from multiple sensors improves the accuracy of labware manipulation and transportation. Sonar sensors are used for robot distance and orientation correction. Also, the Kinect V2 with the Speeded-Up Robust Features (SURF) algorithm is used to recognize and localize different objects for manipulation and position correction purposes. In this paper, a position error management system is developed. The Stargazer sensor module is first used to reach the destination position. The main limitation of the Stargazer sensor is the complex building structure (transparent and reflective surfaces), which directly affects its performance. Thus, a fine correction method is utilized to realize a stable performance when the error is less than 10 cm. Finally, robot localization using the Kinect sensor with the magnetic encoders is used to improve the robot positioning accuracy in front of the workstation. This paper is organized as follows: in Section 2, the parts of the multi-floor transportation system are presented. The localization and the error management of the landmark reader are given in Sections 3 and 4, respectively. Section 5 describes the manipulation of multiple labware, followed by the strategies for robot position correction. Finally, the results are concluded and discussed.

2. Multi-Floor Transportation System

The multi-floor system was developed to execute transportation tasks in a multiple-floor environment. It includes mapping, indoor localization, path planning, an automated door management system, arm control with multiple labware manipulation, elevator handling, and collision avoidance, as shown in Fig. 3.

Fig. 3. Main parts of multi-floor transportation system

Multiple labware transportation requires robot maneuvering between different automated islands, laboratories, and floors. It also requires
cooperation between different stationary robots and mobile robots. To cope with these issues, an appropriate management system has been developed. The hierarchical workflow management system (HWMS) controls the workflow by scheduling and distributing the transportation tasks [26]. The workflow management system sends the plan to the mobile robot transportation system, as shown in Fig. 4. The plan includes the information on the starting station, the end station, and the required labware to be transported. The labware transportation system includes three main parts: robot management, the multi-floor system, and the grasping/placing system. The grasping/placing system is separated into two parts: object identification and localization, and arm control. The object identification and localization software, together with the visual sensor, is used to recognize the target and estimate its pose. The pose information is sent to the arm kinematic control and to the navigation system.
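To make the exchanged information concrete, the following minimal C# sketch models such a transportation plan (start station, end station, required labware). All type and member names here are hypothetical illustrations, not the actual HWMS data structures.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical representation of the plan sent by the workflow
// management system to the mobile robot transportation system.
public class TransportationPlan
{
    public string StartStation { get; set; }   // grasping station
    public string EndStation { get; set; }     // placing station
    public List<string> LabwareIds { get; set; } = new List<string>();
}

public static class PlanDemo
{
    public static void Main()
    {
        var plan = new TransportationPlan
        {
            StartStation = "Lab1-Workstation",
            EndStation = "Lab3-Workstation",
            LabwareIds = new List<string> { "Labware-07", "Labware-12" }
        };
        Console.WriteLine($"Transport {plan.LabwareIds.Count} labware " +
                          $"from {plan.StartStation} to {plan.EndStation}");
    }
}
```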

Fig. 4. Structure of mobile robot transportation system

3. Localization Sensor

Localization is a key point for mobile robots and can be defined as estimating the absolute or relative position. Many indoor localization approaches can be utilized for mobile robots, and each method has its advantages, disadvantages, and limitations. For example, dead reckoning methods have the advantage of being simple and cheap and require a relatively short time for robot indoor localization [27]. However, the positional error accumulates over time, and thus they are unsuitable. RFID reader and IC tag methods are robust but unsuitable for large environments due to the expensive installation of IC tags [28]. Image-based vision methods give the robot accurate information about its environment [29] but fail to work properly at low light levels and in certain complex situations; in addition, the required processing time is not satisfactory. Methods using multiple sensors may be efficient and stable [30], [31], but the sensors could affect each other if they are employed over large areas. In comparison with other existing indoor localization techniques, methods using artificial landmarks are relatively insensitive to lighting conditions. These methods are easy to install and maintain and can cover large areas. Artificial landmarks have the advantage, compared with natural landmarks, of allowing a flexible and robust navigation system to be built. Passive landmarks are preferred over active landmarks due to their low cost and the ease of installation and maintenance (no wires
are required). They do not need a power supply and can cover a large area. Thus, passive artificial landmarks are utilized with a Stargazer sensor module (SGM) for indoor localization in this application. The SGM can recognize 4,096 landmarks, and each landmark covers a circle of 1.6–6.5 m in diameter depending on the ceiling height; the system can therefore cover an area of 4,096 times the single-landmark range. The SGM is a low-cost localization sensor for large indoor environments which is accurate, robust, and reliable [32]. Fig. 5 shows artificial passive landmarks installed on the ceiling of a life science laboratory. The SGM works in two modes, mapping mode and alone mode. In mapping mode, the SGM requires the configuration of the map size, the reference landmark, and the landmark type. Map building is then easily achieved by moving the SGM around the building to collect information on the relationships between landmarks. The information acquired from the ceiling landmarks gives the robot the ability to localize itself on the map with respect to the landmark reference position. This working mode cannot build more than one map, since it uses only x and y positions. In this application, the SGM is used as a HEX reader in 'alone' working mode. The landmark ID is utilized to define the current floor, and the information extracted from the IDs is used to build the map. Some restrictions prevent the Stargazer sensor from being accurate enough for multiple labware transportation. These restrictions are related to its unstable behavior in special conditions, such as navigation near transparent or reflective surfaces and/or with a low robot battery voltage. Thus, an error handling management system is utilized to deal with incorrect landmark readings. In addition, the fine correction method is developed to overcome shifting errors of the Stargazer module readings (less than 10 cm), which are usually caused by increased robot speed and wheel slipping. Finally, a robot positioning approach based on the target position, using the Kinect vision sensor and the motor encoders, is employed to handle the positioning errors near the labware station.
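As an illustration of how a HEX landmark ID read in 'alone' mode could be resolved to a floor and a position, the sketch below uses a simple lookup table standing in for the relative map. The IDs, coordinates, and type names are invented for the example and are not the real map data.

```csharp
using System;
using System.Collections.Generic;

// Illustrative relative-map entry: the pose of one ceiling landmark
// in the global multi-floor map (values are examples, not real map data).
public struct LandmarkEntry
{
    public int Floor;
    public double X;   // meters, relative to the global reference point
    public double Y;
    public LandmarkEntry(int floor, double x, double y) { Floor = floor; X = x; Y = y; }
}

public static class RelativeMap
{
    // Hypothetical map: HEX landmark ID -> entry.
    private static readonly Dictionary<int, LandmarkEntry> Map =
        new Dictionary<int, LandmarkEntry>
        {
            { 0x3A2, new LandmarkEntry(1,  4.0, 7.5) },
            { 0x3B1, new LandmarkEntry(2, 12.0, 3.0) },
        };

    // Returns false for a non-stored ID, which the error handling
    // system described in Section 4 treats as a wrong reading.
    public static bool TryLocate(int landmarkId, out LandmarkEntry entry)
        => Map.TryGetValue(landmarkId, out entry);
}
```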


Fig. 6. Vision landmark reader error handling

Fig. 7. Reflective ceiling landmark on the glassy wall

Fig. 5. StarGazer localization sensor

4. Landmarks Reader Error Management


The SGM reading collects errors while the robot is moving in a complex environment. These errors result from, for example, reflections or direct strong light and sunlight. The error handling for the SGM was developed to overcome these incorrect readings and to adapt the system to multiple labware transportation tasks. Fig. 6 shows the system, which mainly consists of the error handling core responsible for analyzing the input error and choosing the actions required to handle it. Two scenarios can be performed when an error is detected. Firstly, the SGM can repeat the ID reading ten times to eliminate a wrong ID. Secondly, the robot can move backwards or forwards until the right ID is recognized. The system detects a wrong ID reading if the SGM reads a non-stored landmark ID or if the calculated distance to the next position is larger than the normal distance, for example if the reading for the next position is 10 m while the specified distance between waypoints is 3 m on average. The error handling core continuously monitors these two expected input errors. If an error is detected, the first scenario is to keep reading until the correct ID is received. Usually, this works well by suspending the robot's movement until the right reading is received, but it may take a long time or even fail, especially in the glassy elevator environment where many light reflections occur. Fig. 7 shows a reflection of a landmark on the elevator's glass walls. These delays may significantly
affect the whole time required to execute transportation tasks. This problem was resolved by limiting the number of repeated attempts to ten successive wrong readings. The second scenario described above starts immediately when the first has failed: the error handling system moves the robot in the backward (BW) or forward (FW) direction, with rotation (ROT), until the correct ID is received. This action is repeated at most five times.



Each time, the first scenario is performed again until the right ID is identified. Fig. 8 shows the implemented method for correction by movement, while the error handling core scenarios are shown in Fig. 9.
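A minimal sketch of the two error-handling scenarios is given below, assuming placeholder delegates for the actual SGM reading and H20 drive commands; the free-space checks and the rotation step of Fig. 8 are omitted for brevity.

```csharp
using System;
using System.Threading;

public static class LandmarkErrorHandling
{
    // Placeholder delegates standing in for the real SGM and drive interfaces;
    // replace these with the actual H20 calls.
    public static Func<int> ReadLandmarkId = () => 0x3A2;
    public static Func<int, bool> IsValidId = id => id == 0x3A2;
    public static Action<double> MoveRobot = distanceM => { };   // +FW / -BW, meters

    // Scenario 1: re-read the ID up to ten times (Fig. 9).
    public static bool RetryReading()
    {
        for (int i = 0; i < 10; i++)
        {
            if (IsValidId(ReadLandmarkId())) return true;
            Thread.Sleep(100);                         // wait 100 ms between readings
        }
        return false;
    }

    // Scenario 2: move BW/FW by 10 cm up to five times,
    // repeating scenario 1 after each movement (Fig. 8).
    public static bool CorrectByMovement()
    {
        for (int attempt = 0; attempt < 5; attempt++)
        {
            if (RetryReading()) return true;
            MoveRobot(attempt % 2 == 0 ? -0.10 : 0.10);
        }
        return RetryReading();
    }
}
```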

5. Multiple Labware Manipulation

The success of multiple labware manipulation depends significantly on three main aspects: the accuracy of the robot positioning, the accuracy of the arm control, and the accuracy of the recognition and position estimation of the required labware. According to the workspace of the H20 arms and the workstation structure, each arm can manipulate two labware containers located alongside each other, as shown in Fig. 10. The workstation has a length of 110 cm and consists of 8 locations for labware containers. This requires two robot positions to manipulate all locations; the shift distance between these two positions is 29 cm. The required labware has to be identified and localized wherever it is located, and the robot has to change its position for this purpose.

Fig. 8. ID reading correction by movement

Fig. 10. Workstation structure for multiple labware

5.1. Visual Sensor for Labware Manipulation

Fig. 9.a. Error management (ID is out of relative map)

The position of the target is used to guide the robotic arm. This requires arm control based on a kinematic model to calculate the required joint values. The kinematic model of the H20 arms has been developed and applied to guide them to the target [33], [34]. Different visual sensors can be used for such tasks, such as stereo vision and 3D cameras. The Kinect sensor V2 is considered an optimal solution because it directly provides depth information. Also, its low price (≈150 €) makes it very attractive for such applications. Kinect V2 uses time-of-flight technology to provide the depth data: it indirectly measures the time required for pulses of laser light to travel from the projector to a target surface and back to the sensor.

Y N

Iteration =10

Y

End

Fig. 9.b. Error management (ID is in relative map)

Fig. 11. Kinect holder fixed on the H20 body


The Kinect sensor V2 has been fixed on the H20 body using a holder with a suitable height and tilt angle to provide a clear view of the automated island, as shown in Fig. 11. The distance between the Kinect on its holder and the workstation has to be configured carefully, since the minimum depth range of the Kinect V2 is 50 cm. The Kinect holder must not obstruct the head movement or the Stargazer field of view (see Fig. 11). A 12 V battery with a current-voltage stabilizer is installed on the H20 body to supply the Kinect with the required power, as shown in Fig. 12.



Fig. 14. Label for each position of robot manipulation

Fig. 12. Battery and stabilizer for Kinect sensor

5.2. Grasping and Placing Operations

For grasping and placing tasks with mobile robots, different grippers and labware containers have been designed [34], [35]. The designs have to be selected very carefully to guarantee secure manipulation. Fig. 13 shows the final design of the gripper and how it grasps the labware container.

Fig. 15. Label for each holder on the workstation

To grasp the required labware, it is better to track it directly to avoid any manipulation mistake. Since the labware have transparent or white lids (see Fig. 2) to protect against cross-contamination, the labware themselves cannot be identified. Therefore, a specific label has been attached to each labware lid for the identification process, as shown in Fig. 16. The label contains the labware information with a particular number for classification purposes. The time required for the grasping operation is about 69 seconds, while 59 seconds are required for the placing operation [35]. Fig. 17 shows the grasping operation for the labware and how the robotic arm places it on the H20 holder for the transportation task.

Fig. 13. Design of gripper and labware container


Different strategies have been applied to perform multiple labware manipulation [35]. According to the holders' appearance shown in Fig. 14, it is complicated to differentiate them, and the existence of a labware on a holder further complicates its identification for the grasping task. To cope with this issue, two labels are used as a reference for each robot position, as shown in Fig. 14. Each label is recognized and localized using the Kinect V2 with the SURF algorithm [36]. Each label position is used as a visual reference for four holders to achieve the grasping and placing operations. A recognized label is marked by drawing a polygon around it with a cross that specifies its center point, whose position is then obtained. The recognition process starts with an offline step in which the target image is saved in the database as a matching reference. Since the Kinect sensor provides the depth data directly, it is simple to find the position of any point in the view. The workflow management system sends the order (grasp/place) with the holder number to the mobile robot transportation system. The limitation of this strategy is that the holders' positions relative to each other and to the labels have to be identical for all workstations. For this reason, a specific label has been attached to each holder, as shown in Fig. 15.
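The recognition step relies on matching the SURF descriptors of the stored reference label against those extracted from the current view. The sketch below shows a brute-force matcher with Lowe's ratio test, assuming the descriptors are already provided as float vectors by an external SURF extractor; it illustrates the matching principle rather than the implementation used on the robot.

```csharp
using System;
using System.Collections.Generic;

public static class DescriptorMatching
{
    // Squared Euclidean distance between two descriptors of equal length.
    private static double Dist2(float[] a, float[] b)
    {
        double s = 0;
        for (int i = 0; i < a.Length; i++) { double d = a[i] - b[i]; s += d * d; }
        return s;
    }

    // Returns index pairs (reference, scene) that pass the ratio test.
    public static List<(int Ref, int Scene)> Match(
        IReadOnlyList<float[]> refDesc, IReadOnlyList<float[]> sceneDesc,
        double ratio = 0.7)
    {
        var matches = new List<(int, int)>();
        for (int r = 0; r < refDesc.Count; r++)
        {
            double best = double.MaxValue, second = double.MaxValue;
            int bestIdx = -1;
            for (int s = 0; s < sceneDesc.Count; s++)
            {
                double d = Dist2(refDesc[r], sceneDesc[s]);
                if (d < best) { second = best; best = d; bestIdx = s; }
                else if (d < second) { second = d; }
            }
            // Accept only clearly unambiguous matches.
            if (bestIdx >= 0 && Math.Sqrt(best) < ratio * Math.Sqrt(second))
                matches.Add((r, bestIdx));
        }
        return matches;
    }
}
```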

Fig. 16. Labware label on lid for identification process

Fig. 17. Grasping operation for the required labware

It is possible to use just numbers, characters, or barcodes for labware identification and manipulation, but the use of labels is more helpful. The labware/holder information together with a background picture in the label provides adequate features. These labels
are not used only for identification but also for position estimation. They can be recognized under strong lighting conditions and even when they are partially occluded by some object, as shown in Fig. 18. This can be considered one of the advantages of this method: it is still possible to grasp the required labware even if the related label is only partially visible to the visual sensor.

Fig. 18. Label recognition with partial occlusion

5.3. Problem Description of H20 Arms

The H20 robot has unstable arms with weak joints, and the joint compliance causes positional errors, especially when dealing with a wide workstation. Gravity together with the payload increases the elasticity of each joint [37]. There are other sources of imprecision, such as the resolution of the DC servo motors and their control system and the imprecision of the mechanical linkages. Friction, temperature, and manufacturing tolerances also play a role in the arm positional errors. The accuracy of the H20 arms has been checked according to the grasping configuration shown in Fig. 19. The arm end effector was moved to a height of Y = 180 mm at different distances (Z values) between the shoulder and the end effector. Table 1 shows the error in the Y-axis (in millimeters) at each Z value. It is clear that the Y-error increases with increasing Z value, which is related to the unstable and weak joints and the other reasons mentioned above. It is also important to note that these error values were obtained without handling any extra weight; the arm positional errors increase further when dealing with labware, whose weight ranges between 200 g and 800 g.


Dealing with such robotic arms requires additional effort and processing to decrease the errors. Using a hand camera can be one solution, as shown in Fig. 20. The Intel RealSense F200 camera, a 3D camera, can be used for labware grasping: specific marks or barcodes can be identified and localized to guide the robotic arm and to correct the end-effector position. This methodology requires the installation of a hand camera for each arm. Moreover, the hand camera cannot be used for the placing task, because the holder label cannot be identified, for two reasons: first, the holder label is parallel to the viewing direction of the hand camera; second, the grasped labware is located in front of the hand camera and blocks its view during the placing task. The other methodology which can be used to decrease the arm positional error is to track the end effector during the movement, as shown in Fig. 21. This approach requires fixing a label on each hand for position tracking. Real-time tracking of the hand during the approach process is computationally intensive; it requires more data processing and time, which burdens the CPU and memory. To cope with all these issues, the robot position has to be corrected. Moving the robot closer to the workstation decreases the positioning errors of the arm end effector and improves the success rate of the grasping and placing operations. Distance correction is especially necessary for placing tasks, because the labware weight increases the positional errors.

Fig. 20. Intel RealSense F200 camera for grasping task

Fig. 19. Grasping order and shoulder coordinates

Table 1. Position error of end effector in Y-axis

Z value (mm):   350  380  400  420  440  450
Y-error (mm):    15   20   25   35   45   50

Fig. 21. End effector tracking in grasping task

Related to the robot orientation, sometimes the robot does not stand straight enough in front of the workstation. In this case, the orientation angle of the target relative to the robot can be calculated; this angle leads the robotic arm to manipulate the target in the right way. For the calculation of the orientation angle, the coplanar POSIT algorithm (Pose from Orthography and Scaling with ITerations) can be used [38]. To use this algorithm, the target has
to be known in advance. The positions of the target corner points relative to its center have to be calculated according to the real physical coordinates, and these corner points also have to be found in the image coordinates, as shown in Fig. 22. However, the H20 arms are not stable and accurate enough to be used with this algorithm, and the H20 arm does not have a spherical wrist, which would simplify this kind of manipulation. Therefore, the more direct way to deal with this problem is to correct the robot orientation in front of the workstation.

Fig. 22. Orientation angle for labware manipulation

6. Position Correction Strategies of Robot

6.1. Fine Position Correction Method

In laboratory automation, a high transportation speed over large areas is important to minimize the time required for the overall laboratory operation. Moving at high speed with the H20 mobile robots, however, adds challenges related to movement inaccuracy. The fine correction function is used to increase the robot's position accuracy at higher speed during multiple labware transportation tasks. The robot linear velocity was increased by 20% to reach 0.2 m/s, while the rotation velocity was adapted to the required rotation angle to balance speed and accuracy. For example, if the required rotation angle is more than 30 degrees, the highest rotation speed of 0.34 rad/s is used; for a smaller angle, the angular speed is decreased to achieve the highest accuracy. The fine positioning method was developed to overcome this problem and to achieve a higher positional accuracy. This method uses two techniques. Firstly, the robot's speed is controlled in order to minimize movement errors caused by wheel slip and to give more time for the motor encoders to be read and updated. Secondly, a position correction is added in the X or Y direction (depending on the latest movement), as shown in Fig. 23. The H20 robot has a differential drive, and thus it is not easy to correct its
position in the right/left direction. Thus, the fine positioning method records the direction of the last movement and uses this direction to correct the robot's position until it reaches the motor accuracy limit of 1 cm. Experiments were conducted to determine the repeatability of the robot positioning at different grasping, placing, and charging stations before and after employing the fine position correction method. These tests were performed 50 times in the multiple-floor environment: each time, the robot moves from the charging station towards the grasping station, then to the placing station, and finally returns to the charging station. In these experiments, a 100% success rate was achieved. Fig. 24 shows the repeatability tests of the mobile robot's position at the important stations on the transportation path; groups (a) and (b) represent the repeatability without and with the fine function, respectively. Comparing groups (a) and (b) in Fig. 24, it can be seen that the precision and repeatability at the required positions have been improved after using the fine correction method. In the grasping position (see Fig. 24, Test 1), the repeatability range has been improved from 5 cm to 1.5 cm in the x-axis. To improve the positioning accuracy along the transportation path, calibration processes have been performed at the important stations. Table 2 reports the repeatability and standard deviation of the robot positioning. Table 3 shows an example of the consumed time for mobile robot transportation between two stations with and without the fine correction method; the time was recorded over 50 transportation runs in the multiple-floor environment.
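A minimal sketch of the last-direction correction idea described above is given below; the localization and drive calls are placeholders for the real H20 interfaces, and the 1 cm threshold reflects the motor accuracy limit mentioned in the text.

```csharp
using System;

public enum Axis { X, Y }

public static class FinePositioning
{
    // Placeholders for the localization and drive interfaces;
    // replace with the actual H20 calls.
    public static Func<Axis, double> GetPositionCm = axis => 0.0;
    public static Action<Axis, double> MoveAlongCm = (axis, delta) => { };

    private const double MotorAccuracyCm = 1.0;   // motor accuracy limit from the text

    // Correct the position only along the axis of the last movement,
    // because the differential drive cannot translate sideways.
    public static void Correct(Axis lastMovementAxis, double targetCm)
    {
        double error = targetCm - GetPositionCm(lastMovementAxis);
        while (Math.Abs(error) > MotorAccuracyCm)
        {
            MoveAlongCm(lastMovementAxis, error);
            error = targetCm - GetPositionCm(lastMovementAxis);
        }
    }
}
```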

Test 1: repeatability at grasping position.

Test 2: repeatability at placing position.

Test 3: repeatability at robot charging position.

Test 4: repeatability at elevator position.

Fig. 23. Fine position correction based on last direction


Fig. 24. Contrast experiment for fine method



Table 2. Comparison of repeatability results in cm

Point            Axis   Without Fine Method      With Fine Method
                        S.D    Repeatability     S.D    Repeatability
Grasping point   X      0.95   ±2.5              0.38   ±0.75
                 Y      0.92   ±2.25             0.43   ±1.1
Placing point    X      0.67   ±1.5              0.58   ±1
                 Y      0.65   ±1.25             0.68   ±1.75
Charging point   X      0.95   ±2.5              0.59   ±1.5
                 Y      0.58   ±1.5              0.82   ±1.75
Elevator point   X      1.05   ±2                0.53   ±1
                 Y      0.51   ±1.3              0.64   ±1

Table 3. Consumed time for mobile robot (in minutes)

                      Min     Max     Av
Without Fine Method   13:22   17:20   15:32
With Fine Method      10:46   12:44   11:26

Fig. 26. a: wheels platform, b: magnetic encoder [39]

6.2. Position Correction Based on Sensors

In this section, two kinds of sensors are used to correct the robot position: sonar sensors and the Kinect sensor. The information from these sensors is used as feedback to correct the robot position and orientation in front of the workstation. Fig. 25 shows the system architecture of the robot pose correction.

Fig. 25. System architecture of robot pose correction

The H20 robot has a non-holonomic wheeled mobile platform with driving and castor wheels (see Fig. 26.a). The driving wheels are driven by motors, so the robot can move forward, move backward, or rotate around itself. Two EMG49 motors are used to drive the mobile robot. Each unit contains a 24 V DC motor, a magnetic encoder with 980 pulses per rotation (consisting of a rotary magnet disk and a Hall effect sensor, as shown in Fig. 26.b), and a 49:1 gearbox. The H20 main onboard controller is connected to a Sabertooth dual motor driver board, which provides the motors with the required voltage and direction to turn them on and off. Movement based on the encoders starts by converting the required distance into a number of encoder pulses. The wheel circumference and the number of encoder pulses per rotation are taken into account to calculate the number of encoder pulses, as shown in Eq. (1):

P = (D / Wc) × Wp    (1)

where P is the required number of encoder pulses, D is the distance in m, Wc is the wheel circumference, and Wp is the number of encoder pulses per wheel rotation. The navigation system sends the required distance with the movement direction (FW/BW) to the motion and power control, which calculates the required encoder pulses.
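A small illustration of Eq. (1) is given below. The 980 pulses per wheel rotation correspond to the encoder described above, while the wheel circumference used in the example is an assumed value.

```csharp
using System;

public static class EncoderMath
{
    // Eq. (1): required pulses = (distance / wheel circumference) * pulses per rotation.
    public static long DistanceToPulses(double distanceM, double wheelCircumferenceM,
                                        int pulsesPerRotation)
        => (long)Math.Round(distanceM / wheelCircumferenceM * pulsesPerRotation);

    public static void Main()
    {
        const int pulsesPerRotation = 980;       // magnetic encoder, per wheel rotation
        const double wheelCircumference = 0.55;  // meters, example value only
        Console.WriteLine(DistanceToPulses(0.29, wheelCircumference, pulsesPerRotation));
        // -> number of pulses needed for the 29 cm shift distance
    }
}
```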

A two-stage robot position correction method based on the target location is performed, as shown in Fig. 27. The first stage corrects the robot position in the right/left direction. The workstation has 8 locations for labware containers, and each arm of the H20 robot can manipulate two locations alongside each other (see Fig. 10); therefore, two robot positions in the right/left direction are required to manipulate all locations. The shift distance (SD) between these two positions is 29 cm. When the navigation system receives the SD value, a special procedure is performed to correct the robot position based on the motor encoders. It starts with storing the current robot Orientation Angle (OA) and then moving the robot

Fig. 27. Position correction directions


backward by a specific Moving Distance (MD). Then, the robot rotates 90 degrees to the right or left, according to the required shift direction. Thereafter, the robot travels the SD value, rotates towards the station, and moves forward by the MD value. The final step is to correct the robot orientation to the stored OA value using the Stargazer sensor. The second stage corrects the distance between the workstation and the robot to obtain a sufficient success rate for labware manipulation; the desired distance range between the robot center and the manipulation point of the workstation is 43–47 cm.
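The first correction stage can be summarized in the following sketch, with hypothetical drive primitives for the H20 platform; the moving distance MD used here is only an example value, since its exact magnitude is not specified above.

```csharp
using System;

public static class ShiftCorrection
{
    // Hypothetical drive primitives; replace with the actual H20 motion commands.
    public static Action<double> MoveForwardM = d => { };    // meters (+FW / -BW)
    public static Action<double> RotateDeg = a => { };       // degrees (+left / -right)
    public static Func<double> GetOrientationDeg = () => 0;  // Stargazer orientation angle
    public static Action<double> SetOrientationDeg = a => { };

    // First correction stage: shift the robot sideways by SD (29 cm in the paper)
    // using the encoder-based sequence described above. MD is an example value.
    public static void ShiftSideways(double shiftDistanceM, bool toRight,
                                     double movingDistanceM = 0.30)
    {
        double oa = GetOrientationDeg();         // 1) store current orientation angle OA
        MoveForwardM(-movingDistanceM);          // 2) back away from the station by MD
        RotateDeg(toRight ? -90 : 90);           // 3) turn towards the shift direction
        MoveForwardM(shiftDistanceM);            // 4) travel the shift distance SD
        RotateDeg(toRight ? 90 : -90);           // 5) turn back towards the station
        MoveForwardM(movingDistanceM);           // 6) approach the station again by MD
        SetOrientationDeg(oa);                   // 7) restore OA via the Stargazer
    }
}
```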

6.2.1. Sonar Sensors

Sonar sensors can be used for different mobile robot applications, such as collision avoidance and distance measurement. The distance is calculated precisely from the time interval between the instant the sonar signal is sent and the instant the echo is received. The front base of the H20 robot has 3 built-in DUR5200 sonar sensors: one in the middle and the other two on the left and right sides. The DUR5200 sonar sensor can report range information from 4 cm to 255 cm, since the controller board uses only one byte to represent the distance; ranges below 4 cm or above 255 cm are reported as 4 cm and 255 cm, respectively. These sonar sensors can be used to correct the robot distance to the workstation, as shown in Fig. 28. The robot orientation can also be corrected by rotating the robot left/right until the readings from the two sensors on the left and right sides are equal. However, this strategy of robot

pose correction is not reliable enough, for several reasons. A flat surface is always required to reflect the sonar signal, and at some stations it is not possible to install such a surface due to the environment structure. Obstacles in front of the sonar sensors lead to wrong estimates of the robot distance and orientation. In addition, this strategy lacks positioning feedback in the X-axis (see Fig. 28). Therefore, using the Kinect sensor to provide the feedback information is more reliable for robot position correction.

Fig. 28. Robot pose correction based on sonar sensors
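A minimal sketch of this sonar-based strategy is given below, with placeholder functions for the DUR5200 readings and the drive commands; the rotation step size and the tolerance are assumptions.

```csharp
using System;

public static class SonarCorrection
{
    // Placeholder DUR5200 readings in cm (valid range 4–255) and drive commands.
    public static Func<int> ReadLeft = () => 45, ReadMiddle = () => 45, ReadRight = () => 45;
    public static Action<double> MoveForwardM = d => { };   // meters (+FW / -BW)
    public static Action<double> RotateDeg = a => { };      // degrees (+left / -right)

    // Rotate until the left and right readings agree within a tolerance.
    public static void Straighten(int toleranceCm = 1)
    {
        while (Math.Abs(ReadLeft() - ReadRight()) > toleranceCm)
            RotateDeg(ReadLeft() > ReadRight() ? -2 : 2);    // small corrective turns
    }

    // Move until the middle reading lies in the desired 43–47 cm window.
    public static void CorrectDistance(int minCm = 43, int maxCm = 47, int targetCm = 45)
    {
        int d = ReadMiddle();
        if (d < minCm || d > maxCm)
            MoveForwardM((d - targetCm) / 100.0);            // signed correction in meters
    }
}
```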


6.2.2. Kinect Sensor V2

Object detection and position estimation using the Kinect V2 can be considered an efficient strategy to correct the robot position, especially in front of a wide workstation.
The required label for grasping/placing has to be recognized first. Then, the position of the label center point in the image coordinates is derived; it can be calculated from the positions of the label corners, as shown in Fig. 29. In order to find the position of this center point relative to the Kinect sensor, mapping steps have to be performed. Since the RGB frame and the depth frame are not identical, the point of interest in the RGB frame has to be mapped to its related point in the depth frame; then, another mapping step is performed from the depth frame to the Kinect space coordinates. The result of these mapping steps is the real position of the label center point relative to the Kinect on its holder [36].

Fig. 29. Target recognition for position correction The next important step is to apply the extrinsic calibration. The position information related to the Kinect camera has to be transformed to be related to the robot center point as shown in Fig. 30. The calibration from Kinect space to robot space includes the transformation in translation and orientation. This belongs to the difference in the position and the tilt angle (t) between the Kinect and robot space [35]. According to the distance between the Kinect on holder and workstation, the position precision which can be obtained from Kinect is about ±1 mm.

Fig. 30. Position calculation and extrinsic calibration The position of target related to the robot center is used to correct the robot position. The position correction is applied in two direction: left/right (L/R) and FW/BW. The correction in FW/BW direction is very helpful to solve the problems of unstable and weak arms. On the other hand, the correction in left/ right direction helps to deal with wide workstation. If the grasping operation is performed with a particular arm (right or left), the placing operation for the grasped labware has to be achieved using the same arm. For this case, the left/right correction is very required. The correction information is sent from the Kinect platform to the navigation platform through client-server model. Using this communication mod-



Fig. 31. Client-server model for system integration

Table 4. Tests results of distance correction

Distance (cm)              Errors   Number of   Corr.
Before corr.  After corr.           times       status
42            45           -3       1           Yes
43            43           -2       1           No
44            44           -1       2           No
45            45            0       5           No
46            46           +1       2           No
47            47           +2       4           No
48            45           +3       5           Yes
49            45           +4       3           Yes
50            45           +5       4           Yes
52            45           +7       1           Yes
54            45           +9       1           Yes
60            45           +15      1           Yes

Using this communication model, these parts can exchange orders and information with each other, as shown in Fig. 31. A client-server connection architecture module (asynchronous sockets) controls the interaction of the navigation system with the other sub-systems over Ethernet. A TCP/IP command protocol based on the server-client structure is used to guarantee reliability and expandability, so that any kind of device can conveniently be added to the communication network with a new IP. Table 4 shows the distance correction results of 30 tests based on the Kinect V2 sensor; it lists the robot distance values before and after the correction procedure. The optimal distance between the robot center and the manipulation point of the workstation is 45 cm; however, a distance within 43–47 cm (±2 cm) is sufficient to obtain a good success rate for labware manipulation, so the robot does not need to correct its distance if it is already within this range. Table 5 shows the overall success rate of the grasping and placing operations with and without distance correction. It can be clearly seen that the success rate improves to 97% for the grasping and placing tasks. The remaining 3% of errors are due to the instability of the robot arms with their weak joints; a low robot battery voltage also affects the manipulation performance.
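The following sketch shows how such a correction command could be sent over the TCP/IP link using .NET sockets; the text message format, the IP address, and the port number are assumptions for illustration, not the actual protocol.

```csharp
using System;
using System.Net.Sockets;
using System.Text;

public static class CorrectionClient
{
    // Send a single correction command to the navigation server.
    // The "CORRECT;fwBw;lr" text format and the port number are illustrative only.
    public static void SendCorrection(string host, int port, double fwBwCm, double lrCm)
    {
        using (var client = new TcpClient(host, port))
        using (NetworkStream stream = client.GetStream())
        {
            string command = $"CORRECT;{fwBwCm:F1};{lrCm:F1}\n";
            byte[] data = Encoding.ASCII.GetBytes(command);
            stream.Write(data, 0, data.Length);
        }
    }

    public static void Main()
        => SendCorrection("192.168.0.20", 5000, fwBwCm: 3.0, lrCm: -1.5);
}
```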

Table 5. Success rate of operations

Dis. Corr.   Attempts   Succ. Grasp   Succ. Place
No           50         92%           90%
Yes          30         97%           97%

Fig. 32. Operation processes of the position correction in front of (Grasping/Placing) station


The required time (in seconds) for the correction procedure can be calculated according to the following equation:

correction time (s) = 3 + 1.5 + |distance error (cm)| / (8.3 cm/s)    (2)

where 3 seconds are required as a delay time to make sure that the Kinect is not trembling and the robot is stable when it reaches the workstation, 1.5 seconds are required for the socket communication, target recognition, position calculation, and sending the order to the navigation system, and 8.3 cm/s is the linear speed of the robot during the distance correction procedure. Fig. 32 shows the flowchart of the position correction with the communication process between the multi-floor system (MFS) and the labware manipulation system (LMS). This project has been developed using Microsoft Visual Studio 2015 with the C# programming language and runs on a Windows 10 platform on the H20 tablet.
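Eq. (2) translates a measured distance error directly into an expected correction time; a direct transcription in C#:

```csharp
using System;

public static class CorrectionTime
{
    // Eq. (2): 3 s settling delay + 1.5 s communication/recognition + travel time.
    public static double Seconds(double distanceErrorCm)
        => 3.0 + 1.5 + Math.Abs(distanceErrorCm) / 8.3;

    public static void Main()
        => Console.WriteLine(Seconds(15));  // e.g. the 60 cm -> 45 cm case: about 6.3 s
}
```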

7. Conclusion

In this paper, a new system for the transportation of multiple labware with mobile robots in life science laboratories is presented. To realize the required accuracy, the Stargazer sensor is used as a low-cost and reliable localization module. Since the Stargazer sensor module behaves unstably under direct sunlight and near reflective surfaces, a robot position error management system and a correction function have been developed. A hybrid approach for robot pose correction in life science laboratories has been presented, and the problem statement with the proposed methodologies has been discussed. The hybrid strategy relies on the fine correction method and on the Kinect sensor V2 combined with the motor encoders. The Kinect sensor can be considered a powerful 3D camera which provides position information quickly; the Kinect V2 provides a high-resolution image, a wide field of view, and accurate position data directly, which makes it very desirable for such tasks. A client-server model has been used to integrate and connect the identification and localization system with the navigation system. Two experiments were performed to validate the efficiency of the system and the new positioning strategy. The experimental results show that the proposed correction strategy performs efficiently and meets the requirements for successful transportation of multiple labware with mobile robots in life science laboratories.

ACKNOWLEDGEMENTS


This work was funded by the German Federal Ministry of Education and Research (FKZ: 03Z1KN11, 03Z1KI1). The study is supported by the German Academic Exchange Service – DAAD (Ph.D. stipend M. M. Ali) and the Ministry of Higher Education and Scientific Research in Iraq for the scholarship provided by Mosul University (Ph.D. stipend A. A. Abdulla). The authors would also like to thank the Canadian DrRobot Company for the technical support of the H20 mobile robots.


AUTHORS

Mohammed M. Ali* – Center for Life Science Automation (celisca), University of Rostock, Rostock 18119, Germany. E-mail: mohammed.myasar.ali@celisca.de.

Ali A. Abdulla* – 1) Center for Life Science Automation (celisca), University of Rostock, Rostock 18119, Germany; 2) College of Engineering, University of Mosul, Mosul, Iraq. E-mail: Ali.abdulla@celisca.de.

Norbert Stoll – Institute of Automation, University of Rostock, Rostock 18119, Germany. E-mail: Norbert.Stoll@uni-rostock.de.

Kerstin Thurow – Center for Life Science Automation (celisca), University of Rostock, Rostock 18119, Germany. E-mail: Kerstin.Thurow@celisca.de.

*Corresponding authors

REFERENCES

[1] H. Chung, C. Hou, Y. Chen, C. Chao, "An intelligent service robot for transporting object." In: IEEE International Symposium on Industrial Electronics (ISIE), Taipei, Taiwan, 2013, 1–6. DOI: 10.1109/ISIE.2013.6563645.
[2] M. Ciocarlie, K. Hsiao, E. G. Jones, S. Chitta, R. B. Rusu, I. A. Şucan, "Towards reliable grasping and manipulation in household environments." In: 12th International Symposium on Experimental Robotics (ISER), Springer Berlin Heidelberg, 2014, 241–252. DOI: 10.1007/978-3-642-28572-1_17.
[3] R. O'Flaherty, P. Vieira, M. X. Grey, P. Oh, A. Bobick, M. Egerstedt, M. Stilman, "Humanoid robot teleoperation for tasks with power tools." In: IEEE International Conference on Technologies for Practical Robot Applications, Woburn, MA, 2013, 1–6. DOI: 10.1109/TePRA.2013.6556362.
[4] T. J. Tsay, M. S. Hsu, R. X. Lin, "Development of a mobile robot for visually guided handling of material." In: IEEE International Conference on Robotics and Automation (ICRA), Taipei, Taiwan, 2003, 3397–3402.
[5] A. A. Abdulla, H. Liu, N. Stoll, K. Thurow, "A New Robust Method for Mobile Robot Multifloor Navigation in Distributed Life Science Laboratories," J. Control Sci. Eng., vol. 2016, Jul. 2016. DOI: 10.1155/2016/3589395.
[6] M. M. Ali, H. Liu, R. Stoll, K. Thurow, "Arm Grasping for Mobile Robot Transportation using Kinect sensor and Kinematic Analysis." In: IEEE International Conference on Instrumentation and Measurement Technology (I2MTC), Pisa, Italy, 2015, 516–521. DOI: 10.1109/I2MTC.2015.7151321.
[7] J. Borenstein, H. R. Everett, L. Feng, "Where am I? Sensors and methods for mobile robot positioning," University of Michigan, USA, vol. 119, no. 120, 1996.


Journal of Automation, Mobile Robotics & Intelligent Systems

[8]

[9]

[10]

[11]

[12]

[13]

[14]

[15]

[16]

[17]

ing,” University of Michigan, USA, vol. 119, no. 120, 1996. Borenstein, “The CLAPPER: A dual-drive mobile robot with internal correction of dead-reckoning errors.” In: IEEE International Conference on Robotics and Automation, San Diego, CA, 1994, 3085–3090. DOI: 10.1109/ROBOT.1994.351095. H. Liu, N. Stoll, S. Junginger, K. Thurow, “Mobile robotic transportation in laboratory automation: Multi-robot control, robot-door integration and robot-human interaction.” In: IEEE International Conference on Robotics and Biomimetics (ROBIO), Bali, Indonesia, 2014, 1033–1038. A. A. Abdulla, H. Liu, N. Stoll, K. Thurow, “A Robust Method for Elevator Operation in Semioutdoor Environment for Mobile Robot Transportation System in Life Science Laboratories.” In: IEEE International Conference on Intelligent Engineering Systems (INES), Budapest, Hungary, 2016, 45–50. A. A. Abdulla, H. Liu, N. Stoll, K. Thurow, “An automated elevator management and multi-floor estimation for indoor mobile robot transportation based on a pressure sensor.” In: IEEE International Conference on Mechatronics (MEHATRONIKA), Prague, Czech Republic, 2016, 1–7. A. A. Abdulla, H. Liu, N. Stoll, K. Thurow, “A Backbone-Floyd Hybrid Path Planning Method for Mobile Robot Transportation in Multi-Floor Life Science Laboratories.” In: IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), BadenBaden, Germany, 2016, 406–411. DOI: 0.1109/ MFI.2016.7849522. D. Chwa, “Robust Distance-Based Tracking Control of Wheeled Mobile Robots Using Vision Sensors in the Presence of Kinematic Disturbances”, IEEE Trans. Ind. Electron., vol. 63, no. 10, 6172–6183, Oct. 2016. DOI: 10.1109/ TIE.2016.2590378. G. R. Yu, P. Y. Liu, Y. K. Leu, “Design and implementation of a wheel mobile robot with infrared-based algorithms.” In: IEEE International Conference on Advanced Robotics and Intelligent Systems (ARIS), Taipei, Taiwan, 2016, 1–6. DOI: 10.1109/ARIS.2016.7886625. L. D’Alfonso, A. Grano, P. Muraca, P. Pugliese, “Mobile robot localization in an unknown environment using sonar sensors and an incidence angle based sensors switching policy — Experimental results.” In: 10th IEEE International Conference on Control and Automation (ICCA), Hangzhou, China, 2013, 1526-1531. DOI: 10.1109/ ICCA.2013.6565163. C.-K. Joo, Y.-C. Kim, M.-H. Choi, Y.-J. Ryoo, “Self localization for intelligent mobile robot using multiple infrared range scanning system.” In: International Conference on Control, Automation and Systems, Seoul, South Korea, 2007, 606–609. P. Zingaretti, E. Frontoni, “Vision and sonar sensor fusion for mobile robot localization in aliased

VOLUME 11,

[18]

[19] [20]

[21] [22]

[23]

[24]

[25]

[26]

[27]

[28]

N° 4

2017

environments.” In: IEEE/ASME International Conference on Mechatronics and Embedded Systems and Applications, Beijing, China, 2006, 1–6. DOI: 10.1109/MESA.2006.296971. Q. Chen, H. Xie, P. Woo, “Vision-based fast objects recognition and distances calculation of robots.” In: 31st Annual Conference of IEEE Industrial Electronics Society (IECON), Raleigh, NC, USA, 2005, 363–368. G. Lowe, “Object Recognition from Local ScaleInvariant Features”. In: IEEE International Conference on Computer Vision, Corfu, Greece, 1999, 1150–1157. DOI: 10.1109/ICCV.1999.790410. H. Bay, A. Ess, T. Tuytelaars, L. V. Gool, “SURF: Speeded Up Robust Features”, Journal of Computer Vision and Image Understanding (CVIU), vol. 110, no. 3, 2008, 346–359. DOI: 10.1016/j. cviu.2007.09.014. E. Rosten, T. Drummond, “Machine Learning for High-Speed Corner Detection.” In: Computer Vision –ECCV 2006, chapter 34, Springer, 430– 443. DOI: 10.1007/11744023_34. R. Katsuki, J. Ota, Y. Tamura, T. Mizuta, T. Kito, T. Arai, T. Ueyama, T. Nishiyama, “Handling of Objects with Marks by a Robot.” In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Las Vegas, Nevada, 2003, vol. 1, 130–135. DOI: 10.1109/IROS.2003.1250617. S. Zickler, M. M. Veloso, “Detection and Localization of Multiple Objects.” In: IEEE-RAS International Conference on Humanoid Robots, Genoa, Italy, 2006, 20–25. DOI: 10.1109/ ICHR.2006.321358. L. T. Anh, J. B. Song, “Object Tracking and Visual Servoing using Features Computed from Local Feature Descriptor.” In: International Conference on Control Automation and Systems (ICCAS), Gyeonggi, South Korea, 2010, 1044–1048. A. Ramisa, G. Alenya, F. Moreno-Noguer, C. Torras, “Using Depth and Appearance Features for Informed Robot Grasping of Highly Wrinkled Clothes.” In: IEEE International Conference on Robotics and Automation (ICRA), St. Paul, Minnesota, USA, 2012, 1703–1708. DOI: 10.1109/ ICRA.2012.6225045. X. Gu, S. Neubert, N. Stoll, K. Thurow, “Intelligent Scheduling Method for Life Science Automation Systems.” In: IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), Baden-Baden, Germany, 2016, 156–161. DOI: 10.1109/MFI.2016.7849482. S. Jang, K. Ahn, J. Lee, Y. Kang, “A study on integration of particle filter and dead reckoning for efficient localization of automated guided vehicles.” In: IEEE International Symposium on Robotics and Intelligent Sensors (IRIS), Langkawi, Malaysia, 2015, 81–86. Y. Takahashi, Y. Ii, M. Jian, W. Jun, Y. Maeda, M. Takeda, R. Nakamura, H. Miyoshi, H.Takeuchi, Y. Yamashita, H. Sano, A. Masuda, “Mobile robot self localization based on multi-antenna-RFID Articles

63


Journal of Automation, Mobile Robotics & Intelligent Systems

[29]

[30]

[31]

[32]

[33]

[34]

[35]

[36]

[37]

64

reader and IC tag textile”. In: IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), Tokyo, Japan, 2013, 106–112. DOI: 10.1109/ ARSO.2013.6705514. S. J. Lee, J. Lim, G. Tewolde, J. Kwon, “Autonomous tour guide robot by using ultrasonic range sensors and QR code recognition in indoor environment”. In: IEEE International Conference on Electro/Information Technology (EIT), Milwaukee, WI, USA, 2014, 410–415. DOI: 10.1109/ EIT.2014.6871799. T. Lee, W. Bahn, B. Jang, H.-J. Song, and D. D. Cho, “A new localization method for mobile robot by data fusion of vision sensor data and motion sensor data.” In: IEEE International Conference on Robotics and Biomimetics (ROBIO), Guangzhou, China, 2012, 723–728. DOI: 10.1109/ROBIO.2012.6491053. X. Li, Q. Wang, X. Zhang, “Application of Electronic Compass and Vision-Based Camera in Robot Navigation and Map Building.” In: IEEE International Conference on Mobile Ad-hoc and Sensor Networks (MSN), Dalian, China, 2013, 546–549. DOI: 10.1109/MSN.2013.101. I. Ul-Haque, E. Prassler, “Experimental Evaluation of a Low-cost Mobile Robot Localization Technique for Large Indoor Public Environments.” In: 41st International Symposium on Robotics (ISR) and 6th German Conference on Robotics (ROBOTIK), Munich, Germany, 2010, 1–7. M. M. Ali, H. Liu, N. Stoll, and K. Thurow, “Kinematic Analysis OF 6-DOF Arms for H20 Mobile Robots and Labware Manipulation for Transportation in Life Science Labs”, Journal of Automation, Mobile Robotics & Intelligent Systems, vol. 10, no. 4, 40–52, 2016. DOI: 10.14313/JAMRIS_4-2016/30. M. M. Ali, H. Liu, N. Stoll, K. Thurow, “Intelligent Arm Manipulation System in Life Science Labs Using H20 Mobile Robot and Kinect Sensor.” In: IEEE International Conference on Intelligent Systems (IS’16), Sofia, Bulgaria, 2016, 382–387. DOI: 10.1109/IS.2016.7737449. M. M. Ali, H. Liu, N. Stoll, K. Thurow, “Multiple Lab Ware Manipulation in Life Science Laboratories using Mobile Robots.” In: IEEE International Conference on Mechatronics (MECHATRONIKA), Prague, Czech Republic, 2016, 415–421. M. M. Ali, H. Liu, N. Stoll, K. Thurow, “An Identification and Localization Approach of Different Labware for Mobile Robot Transportation in Life Science laboratories.” In: IEEE International Symposium on Computational Intelligence and Informatics (CINTI), Budapest, Hungary, 2016, 353–358. DOI: 10.1109/CINTI.2016.7846432. K. L. Conrad, P. S. Shiakolas, T. C. Yih, “Robotic calibration issues: Accuracy, repeatability and calibration.” In: Proceedings of the 8th Mediterranean Conference on Control and Automation (MED2000), Rio, Patras, Greece, 2000.

Articles

VOLUME 11,

N° 4

2017

[38] D. Oberkampf, D. F. DeMenthon, L. S. Davis, “Iterative Pose Estimation Using Coplanar Feature Points”, Computer Vision and Image Understanding, vol. 63, no. 3, 1996, 495–511. DOI: 10.1006/ cviu.1996.0037. [39] Understanding Resolution in optical and magnetic Encoders.[Online].Available:http://www. elektronikpraxis.vogel.de/e.



Analysis of the Effect of Soft Soil’s Parameters Change on Planetary Vehicles’ Dynamic Response Submitted: 25th September 2017; accepted: 16th January 2018

Hassan Shibly

DOI: 10.14313/JAMRIS_4-2017/37

Abstract: The mobility of a planetary vehicle is subject to numerous constraints imposed by the type of terrain. Navigation is difficult through uneven and rocky terrain, and becomes worse when abrupt changes of ground level cause a fall to a lower ground level. This article examines the effect of the change of soil parameters caused by repetitive falls on the vehicle's dynamic behavior. After each free fall of the vehicle there is a collision of the vehicle's wheel with the ground. If the ground consists of soft soil, the soil compactness increases after each collision, and this increase changes the soil parameters. These changes modify the parameters of the vehicle's dynamic model. The dynamic model is a quarter-vehicle model with a single rigid wheel that falls on soft soil. Simplified forms of the pressure-sinkage models of Bekker and Reece for the sinkage of a rigid body into soft soil are incorporated in the numerical solution of the governing equations of motion. The dynamic interaction of a rigid wheel with soft soil has three stages: the sinkage stage, the wheel dwell stage, and the wheel pull-out stage. A comparison of the simulation results obtained with constant soil parameters and with the parameter changes incorporated in the dynamic model shows that the differences in the dynamic response are not significant and can be neglected. When the soil parameters are kept constant there is a gradual change in the dynamic mechanical quantities, while the changes between the second fall and the successive falls are small.

Keywords: rigid wheel-soil sinkage, dynamic response of rover, sinkage by free fall, soft soil parameters change, work of normal force.

1. Introduction

Expanding the exploration area of a planetary mission requires increasing the planetary vehicle's speed. Planetary mission planners carefully select the route of planetary vehicles on the surface of a planet; even so, the vehicles are expected to face extremely complicated and challenging terrains. Motion at high speed may encounter an abrupt change of ground level, which can cause the vehicle to fall onto a lower, soft ground level. As a result, planetary vehicles require a design that enhances their capability to navigate various types of terrain

and to be able to recover from unexpected falls. The study and simulation of the dynamic response of the vehicle for a specific type of terrain provide the designers with adequate information to adjust their design to such cases. The dynamic response of planetary vehicles after a fall on soft soil has not been investigated sufficiently, although such a situation is expected in any planetary exploration mission as well as for off-road vehicles. The distinctive feature of such a case is the dynamic interaction between a rigid wheel and soft soil during the penetration of the wheel until its maximum sinkage. This study examines, by simulation, the dynamic response of a planetary vehicle (rover) during multiple falls on soft soil initiated by an abrupt change of the ground level. The analysis of falling on soft soil, which leads to the sinkage of the rover's rigid wheel into the soil, requires the use of pressure-sinkage relations. Many researchers have investigated the pressure-sinkage relationship of a rigid body that penetrates into soft soil under a normal load. The majority of them used rigid flat plates as the rigid body. Experiments were performed by loading the plate and measuring force and sinkage into the soil, assuming terrain that is homogeneous in the vertical direction of the sinkage; this is called the Bevameter technique. One of the earliest reported models for the pressure-sinkage relationship used in terramechanics is [1], [2]. The model, Equation (1), is a fundamental empirical formula developed to estimate the pressure-sinkage relationship of a rigid body that sinks into soil under a uniform pressure:

p = k z^n (1)

where z is the soil sinkage, k the soil deformation modulus, n a constant, and p the loading pressure. To measure the penetration mechanics of soil under vertical loads, loaded plates were used (Bevameter tests). For a homogeneous soil, the pressure-sinkage relationship of Equation (2) was proposed by [3] and [4]: Bekker introduced an empirical model by replacing k with (kc/b + kφ), as shown in Equation (2):

p = (kc/b + kφ) z^n (2)

where p is the uniform load pressure applied on the flat plate measured at sinkage z, n is the soil sinkage exponent, obtained experimentally, which defines the curvature of the pressure-sinkage curve of a soil under normal load, kc [kN/m^(n+1)] is the cohesion module, kφ [kN/m^(n+2)] is the friction module of the soil, and b is the smallest width of the loaded flat plate. Examples of soil parameters are given in Table 1. To demonstrate the effect of the soil exponent n on the pressure-sinkage curve, five curves were plotted in Figure 1 based on the empirical values of Table 1.

Table 1. Bekker pressure-sinkage model parameters for several terrains [5]

Terrain Type | Content of Moisture | Soil Exponent n | Cohesion Module kc [kN/m^(n+1)] | Friction Module kφ [kN/m^(n+2)] | Cohesion c | Friction Angle φ [deg]
Dry Sand | 0.0% | 1.10 | 0.99 | 1528.40 | 1.04 | 28.0
Lete Sand | 0.0% | 0.79 | 102.00 | 5301.00 | 1.30 | 31.1
Loam | 38.0% | 0.50 | 13.19 | 692.00 | 4.14 | 13.0
Clay | 46.0% | 0.73 | 41.60 | 2471.00 | 0.691 | 33.3
Heavy Clay | 25.0% | 0.13 | 12.70 | 1555.59 | 68.95 | 34.0
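For concreteness, the short Python sketch below evaluates the Bekker relation of Equation (2) with the dry-sand row of Table 1; the plate width b and the sinkage values are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch of the Bekker pressure-sinkage relation, Equation (2):
# p = (kc / b + kphi) * z**n, with parameters from the dry-sand row of Table 1.

def bekker_pressure(z, n, kc, kphi, b):
    """Pressure [kPa] under a plate of width b [m] at sinkage z [m]."""
    return (kc / b + kphi) * z ** n

# Dry sand parameters from Table 1; b = 0.1 m is an assumed plate width.
n, kc, kphi = 1.10, 0.99, 1528.40
b = 0.1

for z in (0.01, 0.02, 0.05, 0.10):
    print(f"z = {z:.2f} m -> p = {bekker_pressure(z, n, kc, kphi, b):.1f} kPa")
```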


Based on experimental results, Reece [7] proposed a new, non-dimensional model for the pressure-sinkage relationship, as shown in Equation (4):

p = (c k'c + γ b k'φ) (z/b)^n (4)

where c is the soil cohesion, γ the unit weight density of the soil, k'c and k'φ are dimensionless constants of the cohesion and friction modules, and n is the soil exponent. Wong [4] notes that the term containing k'c is negligible for cohesionless dry sand, and the term containing k'φ is negligible for frictionless terrain such as clay. The conversion between the soil parameters of the three pressure-sinkage models is:

k = kc/b + kφ = (c k'c + γ b k'φ)/b^n (5)

Meirion-Griffith and Spenko [8] modified the pressure-sinkage models for small wheels with diameters ranging from 0.1–0.3 m and loads up to 450 N. The proposed model, which accounts for the wheel diameter, is given by Equation (6). (6)

where d is the wheel diameter and m is a fitted diameter-exponent constant; for dry sand m = 0.39 [8]. The soil pressure-sinkage relation for repetitive loading and unloading was described by [9], as shown in Equation (7).

(7)

Fig. 1. Pressure-sinkage curves for sand, loam, and clay soils

Upadhyaya et al. [6] proposed a modified form of the Bekker model, Equation (2), by normalizing the sinkage by the plate width, as shown in Equation (3):

p = (k1 + k2 b) (z/b)^n (3)


where k1 [kPa] and k2 [kPa/m] are the soil sinkage constants, which are independent of the plate dimension. To obtain the soil sinkage constants a set of experiments has to be carried out using two plates of different sizes [6]. In order to minimize the influence of soil variation in the test, a large difference between the plate sizes is needed. The measured pressure-sinkage data sets were analyzed theoretically and graphically to obtain the best fit on a logarithmic scale. The values of the constants were obtained from the straight-line best fit.

Fig. 2. Soil response to repetitive loading-unloading [9]

The line segment from zero to point A describes the first continuous loading. At point A the maximum sinkage is zA, and the unloading process starts toward point B. At point B the pressure is zero while the residual sinkage is zr. The second continuous reloading goes from point B back toward point A and the maximum sinkage zA. From point A toward point C the new sinkage continues to follow the original pressure-sinkage curve. For further loading and unloading processes this behavior repeats itself. During elastic reloading or unloading, the line on the soil response of the repetitive loading-unloading curve can be considered as the soil stiffness to loading, and experiments showed that a good approximation is that the pressure is a linear function of the total sinkage measured from the uncompacted soil surface, as given in Equation (8):

p = kA (z − zr), with kA = ko + kuA zA (8)

The parameters ko [kN/m3] and kuA [kN/m2] are soil-specific parameters; kA is the slope of the loading-unloading curve and depends on the sinkage zA. A graphical description of the relation between the soil stiffness kA and the initial unloading sinkage is shown in Figure 3.
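The following Python sketch illustrates one possible reading of the loading-unloading behavior of Figure 2 and Equation (8): the virgin curve is Bekker's relation, and reloading is linear with slope kA = ko + kuA·zA. Both this linear form and all numerical values are assumptions made for illustration, not data from the paper.

```python
# Sketch of the loading / unloading / reloading behavior of Figure 2 and Eq. (8).
# The virgin curve follows a Bekker-type relation; unloading and reloading are
# taken as linear with slope kA = ko + kuA * zA (assumed form). All numbers are
# placeholders for illustration only.

def virgin_pressure(z, k=1538.3, n=1.1):
    return k * z ** n                          # first (continuous) loading

def reload_pressure(z, zA, ko=2000.0, kuA=50000.0, k=1538.3, n=1.1):
    """Pressure during reloading after a previous maximum sinkage zA."""
    kA = ko + kuA * zA                         # slope of the unloading/reloading line
    zr = zA - virgin_pressure(zA, k, n) / kA   # residual sinkage where p returns to 0
    if z <= zr:
        return 0.0
    if z <= zA:
        return kA * (z - zr)                   # elastic reloading branch, Eq. (8)
    return virgin_pressure(z, k, n)            # beyond zA the virgin curve is followed

zA = 0.05
for z in (0.02, 0.04, 0.05, 0.07):
    print(f"z = {z:.2f} m -> p = {reload_pressure(z, zA):.1f} kPa")
```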


Fig. 3. Relationship between soil stiffness and initial unloading sinkage

Fig. 4. Soil pressure-sinkage behavior under the loading-unloading process

It can be noticed from Equation (8) that a higher sinkage zA at the end of the first loading results in more soil compaction; therefore the elastic rebound (e1 = zA − zr) during unloading is reduced, and the elastic rebound moves the total sinkage back to zA. A second loading of the soil starts with elastic reloading, in which the sinkage increases up to zA and then continues to follow the original pressure-sinkage curve for pressures larger than pA. The first loading-unloading fall produces a plastic deformation p1 and an elastic deformation e1, so that the first maximum sinkage is z1 = zA = p1 + e1; a second loading by a second fall over the same location produces an elastic deformation from point B to A equal to e1. The second maximum sinkage consists of plastic and elastic deformation, so that z2 = p2 + e2, as described in Figure 4. It can be seen that the Wong model, shown in Figure 2, is not the best choice to use because of its piecewise behavior, which does not follow a monotonic sinkage. Earlier works [10], [11] on the wheel-terrain rolling resistance for the multi-pass case assumed that the pressure-sinkage behavior remains the same for all passes. Experimental works [12], [13], and [14] showed variation in the soil reaction forces under a consecutive pass by the rear wheels as a result of variations in the soil compaction and density. Therefore the terramechanics expressions have to be modified to include the effect of soil compaction under repetitive loading and unloading. Extensive experimental work was done by [14] to test multiple wheel passages. Holm tested multiple passes of wheels over the same patch considering slip and tire deflection. The study shows that the soil properties change after each pass and that the variations are strongly dependent on the wheel slip; therefore a driven wheel produces a stronger effect on the soil properties than a towed wheel. Loading and unloading of the same soil spot in the multi-passage case is analogous to multiple falls of the wheel on the same soil spot, so the results for the wheel multi-passage are used in this analysis. Similarly, each fall of the wheel experiences new soil properties compared to the previous fall.

Fig. 5. Terrain properties variation for multiple passages [14] (sd: slip ratio)

Soil properties such as cohesion and density increase after each passage; the largest increase occurs between the first and second pass, while for the successive passes the increase in these properties becomes smaller and smaller, as shown by Holm's experimental results in Figure 5. Senatore and Sandu [15] proposed a number of fitted relations which express the soil properties as functions of the number of previous passages and the slip ratio. The proposed relations for density and cohesion are shown in Equations (9) and (10).

(9)

(10)

where the index n denotes the value at the current passage, the index o the value for untouched soil, np is the number of previous passages, ip the slip ratio at the previous passage, γ the soil density, c the soil cohesion, K the soil shear displacement modulus, and k1, k2, and k3 are dimensionless fitting constants. Example values are shown in Table 2.




Table 2. Example of soil parameters for multipass simulation [15]

n [-] | c [N/m2] | φ [deg] | kc [kN/m^(n+1)] | kφ [kN/m^(n+2)] | k1 [-] | k2 [-] | k3 [-]
1 | 220 | 33.1 | 1400 | 820 | 0.1178 | 0.1672 | 0.0348

Rewriting Equations (9)–(10) for the case of zero slip ratio (ip = 0) gives the simplified forms:

γn = γo (1 + k3 np) (11)

cn = co (1 + k3 np) (12)

In this case the relative change in the two soil properties is the same and equals k3·np; its accuracy is determined by the accuracy of the fitting coefficient k3. The relative changes in percent for a fitting coefficient of 0.0348 and various numbers of passages are given in Table 3.

Table 3. Relative change in the soil's properties

np (number of previous passages) | 1 | 2 | 3 | 4 | 5
Relative change [%] | 3.48 | 6.96 | 10.44 | 13.92 | 17.40
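As a quick check of Equations (11)–(12) and Table 3, the sketch below applies the zero-slip multi-pass update; the initial density value is an assumed placeholder, while c and k3 are taken from Table 2.

```python
# Sketch of the zero-slip multi-pass update of Equations (11)-(12):
# gamma_n = gamma_o * (1 + k3 * np),  c_n = c_o * (1 + k3 * np),
# which reproduces the relative changes of Table 3 for k3 = 0.0348.

K3 = 0.0348                  # fitting constant from Table 2

def updated_parameters(gamma_o, c_o, n_passes, k3=K3):
    factor = 1.0 + k3 * n_passes
    return gamma_o * factor, c_o * factor

gamma_o = 15.0               # assumed initial unit weight density [kN/m3]
c_o = 220.0                  # initial cohesion [N/m2] from Table 2
for n_p in range(1, 6):
    g, c = updated_parameters(gamma_o, c_o, n_p)
    print(f"np = {n_p}: relative change = {100 * K3 * n_p:.2f} %, "
          f"gamma = {g:.2f}, c = {c:.2f}")
```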

Previous experimental work on the pressure-sinkage of sand using three plates with different diameters was done by [16] to investigate the evolution of the sand bearing capacity with density. The results were presented graphically, showing the dependence of Bekker's coefficients kc, kφ, and n on the sand density. The value of kc determined is often negative for dry granular soil [17]. Based on these results, a curve fitting is performed here to find an analytical dependence of the two coefficients on the sand density. The fittings are given in Equations (13) and (14).

(13)

(14)


2. Rigid Wheel-Soft Soil Analysis

Previous research works by Shibly et al. [18] and Reece [7] showed that the stress distribution around a rigid wheel during penetration into soft soil can be substituted, with very good accuracy, by a triangular distribution for the two stress zones depicted in Figure 7. The linear equivalent stress distribution Sn of the normal stress p acting on the rigid wheel during sinkage is a triangle whose two sides are defined by:

(15)

where the indices 1 and 2 refer to the right and left sides of the maximum stress location, the vertex of the triangle. The equivalent distribution of the normal stresses is an isosceles triangle in which the maximum stress is located at θ = 0° and spreads equally to both sides, so that the magnitudes of both angles are equal. The resultant of the normal stress acting on the rigid wheel is determined by integrating the equivalent stress distribution of Equation (15) around the wheel contact, taking into account the symmetry of the stress distribution (θ1 = −θ2), as shown in Equation (16).

(16)

Substituting the stress distribution of Equation (15) gives the vertical force Fz as

(17)

The trigonometric parenthetical expression in Equation (17) can, for 0° ≤ θ ≤ 45°, be approximated by fitting a straight line with a slope of 0.98 (Shibly [19]). Using this fitting and the geometry of the case, the following relations are obtained:

(18)

where f = 1.4286. A more simplified form of the normal force is:

(19)

Combining Equations (4) and (6) gives the normal stress:

(20)

where c is the soil cohesion, γ the unit weight density of the soil, k'c and k'φ are dimensionless constants, and n is the soil exponent. As recommended by [4], the term containing k'c is negligible for cohesionless dry sand, and the term containing k'φ is negligible for frictionless terrain such as clay. After considerable simplification the normal force can be obtained as:

(21)

where the geometrical constant is:

(22)

Fig. 6. Curve fitting of Bekker's coefficients dependency on soil density based on the experimental data of [16]

The normal force Fz that acts on the rigid wheel resists the wheel penetration into the soil. This force is a function of the sinkage zm and the soil exponent n, and it is highly nonlinear. For specific wheel-soil parameters, the sinkage coefficient kz shown in Equation (23) is a function of the soil density and of the parameter variations.

(23)

The coefficient kz can be considered as the soil stiffness modulus in the vertical direction. As a result, the soil resistance force takes its final form as:

(24)

3. Dynamic Model Analysis

A four-wheel rover is composed of a platform connected to four wheels by a mechanical suspension. The mechanical suspension has stiffness and low damping. In order not to increase the nonlinearity and the complexity of the interaction with the soil, a simplified linear quarter-rover model is used. The quarter-rover model has two lumped masses: one quarter of the rover platform is the sprung mass ms, and the rigid wheel is the unsprung mass mus. The two masses are connected by a vertical, purely linear spring with high stiffness ks and a vertical, purely linear damper with a low damping coefficient cs; a schematic drawing is shown in Figure 9.


Fig. 9. Dynamic model of quarter rover


Figure 7. a) Free body diagram of rigid wheel on soft soil, b) Equivalent triangular distribution of normal stresses

The dynamic response of the rover caused by its fall on soft soil begins with the wheel touching the soil; the sinkage phase of the wheel then lasts until the wheel reaches its maximum sinkage. The wheel remains at the maximum sinkage and at rest until it is pulled out by the sprung mass, provided the latter has enough energy; this is the dwell phase. The pull-out phase is when the wheel leaves the ground and rises to a certain height. A second fall starts when the sprung mass reaches zero velocity and moves down towards the soil for a second touch on the same spot. Newton's second law gives the dynamic equations of motion of the quarter rover, Equations (25) and (26). The initial conditions at the instant of first soil contact are zero initial positions and initial velocities equal to the final velocities of the free fall.

(25)

Fig. 8. The normal force coefficient at five falls, for constant soil parameters and for varying soil parameters

(26)

The state-space representation of the dynamic equations is given in Equation (27).

(27)

Or, in a generic state-space representation:

(28)

where:

(29)

and where the unit conversion factors are:

(30)

4. Soil Parameters Modification

The repetitive fall and pull-out of the wheel increases the compactness of the soil and changes the soil's parameters. The number of falls on the same spot is used to calculate the new values of the soil weight density γn and cohesion cn. The new soil parameters are calculated using Equations (11)–(14). The relations given in Equation (30) are used to obtain the dimensionless soil sinkage coefficients. Then kz is determined based on the new soil parameter values, while the Wong model for repetitive passage is incorporated in the simulation program.

(31)

The equations of motion are solved numerically, and the simulation results are shown as displacement versus time (Figures 10 and 11), normal force versus time (Figures 12 and 13), and normal force versus sinkage (Figures 14 and 15). Figures 10 and 11 depict the displacements of the rover body and wheel at successive falls for a particular set of soil and dynamic model parameters. The displacement of the sprung mass is plotted twice in the same figure: one plot by itself and a second plot shifted onto the unsprung mass displacement for comparison purposes. Point 1 in the figure represents the instant at which the wheel touches the soil; this is the start of the system interaction. At this instant the initial values of motion of the two masses are the free-fall velocities, with zero initial displacement of the wheel and a body displacement equal to the unstretched length of the suspension spring. At point 2 the wheel reaches its maximum sinkage and dwells for a very short period of time with zero velocity. Point 3 is the start of the wheel pull-out from the soil, which continues until point 4.

Fig. 10. The displacements of the sprung mass zs and the wheel zus over time for constant soil parameters

Fig. 11. The displacements of the sprung mass zs and the wheel zus over time considering the soil parameters' change

Fig. 12. The normal force acting on the wheel during sinkage for constant soil parameters

Fig. 13. The normal force acting on the wheel during sinkage considering the soil parameters' change

Fig. 14. The normal force acting on the wheel during sinkage as a function of the sinkage for constant soil parameters

Fig. 15. The normal force acting on the wheel during sinkage as a function of the sinkage considering the soil parameters' change
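Since Equations (25)–(31) are not reproduced here, the following Python sketch only illustrates the kind of numerical experiment described in this section: a quarter model with a linear suspension dropped onto soil whose resistance is assumed to follow Fz = kz·z^n, in the spirit of Equation (24). The state equations and every parameter value are illustrative assumptions, not the paper's model or data.

```python
# Minimal sketch of a quarter-rover touchdown on soft soil. The soil reaction
# is taken as Fz = kz * z**n (assumed, in the spirit of Eq. (24)) and applied
# only while the wheel keeps penetrating; the suspension is a linear
# spring-damper. All values below are illustrative assumptions.
import numpy as np

ms, mus = 20.0, 5.0        # sprung / unsprung mass [kg] (assumed)
ks, cs = 4000.0, 40.0      # suspension stiffness [N/m] and damping [N*s/m] (assumed)
kz, n_soil = 2.0e4, 0.5    # soil stiffness coefficient and exponent (assumed)
g = 3.72                   # gravity [m/s^2] (Mars value, assumed)
v0 = -1.5                  # touchdown velocity after the free fall [m/s] (assumed)

def soil_force(z_w, v_w):
    """Upward soil reaction; non-zero only while the wheel keeps sinking."""
    return kz * (-z_w) ** n_soil if (z_w < 0.0 and v_w < 0.0) else 0.0

def derivatives(state):
    z_s, v_s, z_w, v_w = state
    f_susp = ks * (z_s - z_w) + cs * (v_s - v_w)     # suspension force
    a_s = -g - f_susp / ms                           # sprung-mass acceleration
    a_w = -g + f_susp / mus + soil_force(z_w, v_w) / mus
    return np.array([v_s, a_s, v_w, a_w])

# Explicit Euler integration of the first touchdown transient.
state = np.array([0.0, v0, 0.0, v0])
dt, steps = 2.0e-5, 20000
max_sinkage = 0.0
for _ in range(steps):
    state = state + dt * derivatives(state)
    max_sinkage = max(max_sinkage, -state[2])

f_max = kz * max_sinkage ** n_soil
work = 0.5 * f_max * max_sinkage   # triangle-area estimate in the spirit of Eq. (32)
print(f"max sinkage ~ {1000 * max_sinkage:.1f} mm, dissipated work ~ {work:.1f} J")
```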

5. Normal Force Work Estimation

The work w of the normal force during sinkage is determined by finding the area under the curves in Figures 14 and 15. A numerical integration is required to find the area under a nonlinear normal-force curve. Fortunately, the shape of the area under the curve resembles a right triangle, so it can be approximated by the area of a triangle whose one side is the maximum normal force at the maximum sinkage and whose base is the maximum sinkage.

(32)

It can be noticed that the work of the normal force is a function of the maximum sinkage and of the system-soil parameters, Equation (32), and that it dissipates the mechanical energy of the system.

6. Results and Discussion

The dynamic interaction between the rigid wheel (unsprung mass) and the soft soil has three stages for each fall. For the first fall, the first stage starts at point 1 and ends at point 2, as depicted in Figure 10, the second stage starts at point 2 and ends at point 3, and the third stage starts at point 3 and ends at point 4. During the first stage the wheel penetrates the soft soil until it reaches the maximum sinkage with a maximum normal force. In the second stage the wheel dwells while the sprung mass continues to vibrate. In the third stage the wheel leaves the soil and the two masses vibrate together. During the last two stages the normal force is zero.

The interaction with the soft soil has a merit of "stiffness": the soft soil behaves as a nonlinear spring, as shown in Equation (24), which makes any fall ("collision") softer, while the sinkage in the soil is deeper than for a harder soil. A deeper sinkage decreases the ability of the wheel to pull out of the soil. In contrast, falling on a harder soil results in a smaller sinkage and increases the ability of the sprung mass to pull out the wheel (unsprung mass). The normal force coefficient kz (soil stiffness) remains constant during all falls when the soil parameters are kept constant, whereas when the soil parameters increase with additional falls the coefficient kz increases rapidly, as shown in Figure 8. This increase is caused by the growth of soil compactness with each additional fall, which increases the soil weight density and cohesion; the increase in these soil properties increases the normal force coefficient. The simulations were carried out for two cases: in the first case the soil parameters were kept constant during the whole time period, while in the second case the soil cohesion and density change as a result of multiple falls of the wheel on the same spot of the soft soil. The multi-fall case is treated as a multi-passage wheel case, and the previously proposed relations for wheel multi-passages are used. The dynamic displacements of the wheel (unsprung mass) and the normal force during sinkage for multiple falls on the soft soil are shown for the two cases: in the first case the soil parameters were kept unchanged for all falls (Figures 10 and 12, respectively), while in the second case the soil cohesion and density change as a result of the multiple falls (Figures 11 and 13, respectively). Figures 14 and 15 show the normal force during sinkage as a function of the sinkage for the two aforementioned cases, where the areas under the curves give the work done by the normal forces. Comparing the simulation results for the dynamic displacements, the normal forces, and the work of the normal forces in the two cases, it can be noticed that in the first case the changes are gradual along the whole period, while in the second case the major changes occur between the first and the second fall, and only small, monotonic changes occur between the second fall and the subsequent falls. This behavior is expected, because the first fall makes the soil more compact; as a result the soil parameters change, the sinkage is much smaller than the first time, and it is harder to penetrate into the soil, while during the successive falls by the same mass the increase in soil compactness is smaller, resulting in smaller changes of the soil properties. The normal force during sinkage into the soft soil acts over a very short time and its shape resembles an impulsive force during a collision; therefore, in future work the sinkage stage will be modeled as a collision of two bodies, a hard body and a soft body. The work of the normal force during sinkage dissipates the mechanical energy of the system. The dissipated energy in the first fall is the same in both cases; in the first case there is a gradual reduction of energy, and in the second case the energy reduction in the successive falls is small. For a soil with a soil exponent value of n = 0.5 the approximated work of the normal force has the form of the work of a linear spring. The simulation results show that keeping the soil parameters unchanged under repetitive falls of a non-rotating wheel (zero slip) has a negligible effect on the dynamic behavior of the rover. For a non-rotating wheel with zero slip, the second terms in Equations (9) and (10) vanish; in this case there is no contribution of shear stress, which leads to less compaction of the soil under the wheel. The existence of these terms in both equations would add up to 16.7% of the original values to the soil density and cohesion.

AUTHOR

Hassan Shibly – Central Connecticut State University, New Britain, CT 06050, USA. E-mail: hshibly@ccsu.edu.

REFERENCES


[1] E. Bernstein, "Probleme zur experimentellen Motorpflugmechanik", Der Motorwagen, Heft 16, 1913.
[2] B. P. Goriatchkin, "Theory and development of agricultural machinery", 1938.
[3] M. G. Bekker, Theory of Land Locomotion – The Mechanics of Vehicle Mobility, University of Michigan Press, Ann Arbor, 1956.
[4] J. Wong, Theory of Ground Vehicles, New York: J. Wiley, 1993.
[5] J. Y. Wong, "On the study of wheel-soil interaction", Journal of Terramechanics, vol. 21, no. 2, 1984, 117–131. DOI: 10.1016/0022-4898(84)90017-X.
[6] S. K. Upadhyaya, D. Wulfsohn, J. Mehlschau, "An instrumented device to obtain traction related parameters", Journal of Terramechanics, vol. 30, 1993, 1–20. DOI: 10.1016/0022-4898(93)90027-U.


[7] A. Reece, "Principles of soil-vehicle mechanics". In: Proceedings of the Institution of Mechanical Engineers, 1965.
[8] G. Meirion-Griffith, M. Spenko, "A modified pressure-sinkage model for small rigid wheels on deformable terrains", Journal of Terramechanics, vol. 48, no. 2, 2011, 149–155. DOI: 10.1016/j.jterra.2011.01.001.
[9] J. Y. Wong, "An introduction to terramechanics", Journal of Terramechanics, vol. 21, no. 1, 1984, 5–17. DOI: 10.1016/0022-4898(84)90004-1.
[10] M. G. Bekker, Off the Road Locomotion, Ann Arbor, Michigan: The University of Michigan Press, 1960.
[11] M. G. Bekker, Theory of Land Locomotion, Ann Arbor, Michigan: The University of Michigan Press, 1965.
[12] A. Reece, "Problems of soil vehicle mechanics", ATAC, Warren, MI, USA, 1964.
[13] R. A. Liston, L. A. Martin, "Multipass behavior of a rigid wheel". In: Proc. Second Int. Conf. on Terrain-Vehicle Systems, Quebec City, Que., Toronto Univ. Press, 1966.
[14] I. C. Holm, "Multi-pass behaviour of pneumatic tires", Journal of Terramechanics, vol. 6, no. 3, 1969, 347–371. DOI: 10.1016/0022-4898(69)90128-1.
[15] C. Senatore, C. Sandu, "Off-road tire modeling and the multi-pass effect for vehicle dynamics simulation", Journal of Terramechanics, vol. 48, no. 4, 2011, 265–276. DOI: 10.1016/j.jterra.2011.06.006.
[16] S. Shaaban, "Evolution of the bearing capacity of dry sand with its density", Journal of Terramechanics, vol. 20, nos. 3–4, 1983, 129–138. DOI: 10.1016/0022-4898(83)90044-7.
[17] D. Dewhirst, "A load-sinkage equation for lunar soil", AIAA Journal, vol. 2, no. 4, 1963, 761–762.
[18] O. Onafeko, A. R. Reece, "Soil stresses and deformation beneath rigid wheels", Journal of Terramechanics, vol. 4, no. 1, 1967, 59–80. DOI: 10.1016/0022-4898(67)90104-8.
[19] I. Shmulevich, U. Mussel, D. Wolf, "The effect of velocity on rigid wheel performance", Journal of Terramechanics, vol. 35, no. 3, 1998, 189–207. DOI: 10.1016/S0022-4898(98)00022-6.
[20] M. Grahn, "Prediction of sinkage and rolling resistance for off-the-road vehicles considering penetration velocity", Journal of Terramechanics, vol. 28, no. 4, 1991, 339–347. DOI: 10.1016/0022-4898(91)90015-X.
[21] H. Shibly, "Dynamic Modeling of Planetary Vehicle's Fall on Soft Soil", Journal of Automation, Mobile Robotics & Intelligent Systems, vol. 10, no. 3, 2016, 21–27. DOI: 10.14313/JAMRIS_3-2016/20.
[22] H. Shibly, K. Iagnemma, S. Dubowsky, "An equivalent soil mechanics formulation for rigid wheels in deformable terrain, with application to planetary exploration rovers", Journal of Terramechanics, vol. 42, no. 1, 2005, 1–13. DOI: 10.1016/j.jterra.2004.05.002.



Using Functions from Fuzzy Classes of k-valued Logic for Decision Making Based on the Results of Rating Evaluation Submitted: 11th November 2017; accepted: 4th December 2017

Olga M. Poleshchuk

DOI: 10.14313/JAMRIS_4-2017/38

Abstract: In this paper we present an approach based on logic functions with fuzzy conditions for constructing a decision support system for the rating evaluation of objects. This approach provides an effective and efficient way of separating rating marks into clusters, each associated with a control effect directed at the successful functioning of the objects in the future.

Keywords: rating marks, linguistic statements, fuzzy logic functions, decision support.

1. Introduction

Ratings are widely used in diverse areas of human activity (education, engineering, economics, ecology, etc.) and allow available, up-to-date information to be condensed into a neutral integral index that can be used in decision making. A number of difficulties in acquiring a rating estimate are discussed in detail in [1]. These difficulties are connected with the heterogeneity of the characteristics, unstable final results caused by different scales, and the recognition of results required for decision making. Thresholds separating the range of values into intervals are used for the recognition of rating estimates; a control effect is applied when a rating estimate falls into a certain interval. The task of acquiring the threshold values is resolved experimentally or based on experts' opinion, which is not always possible. A posteriori statistical information may be missing, which may lead to significant difficulties and mistakes. Besides that, rating estimates always contain uncertainty zones, which complicates the selection of control effects. Usually these zones are located near threshold values or in "mean-value zones" that are hard to recognize because of the ambiguity of the situation considered. The use of linguistic statements makes it possible to define rating estimates with heterogeneous characteristics and prevents incorrect arithmetic operations, common in traditional rating estimation models [2], [3]. But the problem of recognizing rating estimates for the purpose of acquiring control effects has its flaws, as discussed above. The reason for the flaws is the lack of a formalized approach able to reduce experts' mistakes coming from incomplete

or illegible information. This article proposes an approach for the recognition of rating estimates and for decision-making support based on fuzzy logic functions developed and adapted for fuzzy conditions and goals [4]–[5].

2. Construction of Functions from Fuzzy Classes of k-valued Logic

Consider characteristics Xj, j = 1, …, m, with corresponding values Xj^l, l = 1, …, mj, j = 1, …, m, characterizing their state. Assume these characteristics depend on a characteristic Y with value range Yl, l = 1, …, k, if Y is associated with some information aggregation operator allowing the value of Y to be computed from Xj, j = 1, …, m. An information aggregation operator OY is a function defined on the set of all possible values Xj, j = 1, …, m, and taking values in the set Yl, l = 1, …, k:

OY : X1^l1 × X2^l2 × … × Xm^lm → Yl.

Historically, the first approach to selecting an information aggregation operator is geometric, based on the representation of the operator as a surface in (m + 1)-dimensional space [5]. A flaw of this approach is the necessity of knowing the value of the aggregation operator for at least (m + 1) combinations of the characteristics Xj, j = 1, …, m, and the inability to use additional expert information. A logical approach to selecting an information aggregation operator is applicable when some conditions on the operator OY can be imposed. If characteristic Y has k values, we can introduce the information aggregation operator as a k-valued logic function. If the number of dependent characteristics Xj equals m, the information aggregation operator can be represented as a k-valued logic function of m variables. Suppose that an expert can formulate fuzzy conditions on the behavior of an unknown function, such as: «The function slightly decreases when the first variable strongly increases», «When arguments 3 and 5 simultaneously increase, the function value strongly increases», etc. In this case we can speak of fuzzy classes of k-valued logic [5], or of k-valued logic functions of m variables with fuzzy conditions. A fuzzy condition formulated by an expert describes the membership of the function in a certain fuzzy class (for example, slightly increasing or slightly decreasing) based on the values of the function at points i and i + 1, 0 ≤ i ≤ k − 1. The value µS(p, q) is a degree of mem-




bership in a certain fuzzy class given that f(i) = p, f(i + 1) = q, 0 ≤ p ≤ k − 1, 0 ≤ q ≤ k − 1. All detailed explanations are given below. According to [5], the fuzzy condition S is represented by a fuzzy binary relation S between sets X and Y, which is a fuzzy set S: ∀(x, y) ∈ X × Y, µS(x, y) ∈ [0, 1], where X = {x} and Y = {y} are non-fuzzy sets. If the sets X, Y are finite, then a fuzzy binary relation S may be represented in matrix form, whose rows and columns are associated with the elements of the sets, and at the intersection of the i-th row and j-th column stands the element µS(xi, yj), i.e.

RS =
[ µS(x1, y1)  µS(x1, y2)  ...  µS(x1, ym) ]
[ µS(x2, y1)  µS(x2, y2)  ...  µS(x2, ym) ]
[    ...          ...     ...      ...    ]
[ µS(xn, y1)  µS(xn, y2)  ...  µS(xn, ym) ]

A fuzzy binary relation on a set X is a fuzzy set on X × X: ∀(x, y) ∈ X × X, µ(x, y) ∈ [0, 1]. Consider a fuzzy condition S on the behavior of a function f of one variable. The fuzzy relation corresponding to S describes the membership of the function in a certain fuzzy class (for example, "slightly increasing" or "slightly decreasing") based on the function values at points i and i + 1, 0 ≤ i ≤ k − 1. The value µS(p, q) is the degree of membership in a certain fuzzy class given that f(i) = p, f(i + 1) = q, 0 ≤ p ≤ k − 1, 0 ≤ q ≤ k − 1. The fuzzy relation matrix RS corresponding to the fuzzy condition S looks like:

RS = (µS(p, q)).
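As an illustration of such a relation matrix, the sketch below tabulates RS = (µS(p, q)) for a 3-valued function and a "slightly increases" condition; the membership degrees are illustrative choices, not values taken from the paper.

```python
# Sketch: a fuzzy relation matrix R_S = (mu_S(p, q)) for a k-valued function
# (k = 3) under a "slightly increases" condition. Entry (p, q) is the degree to
# which the step f(i) = p -> f(i+1) = q agrees with the condition. The degrees
# below are illustrative choices, not values from the paper.
K = 3

def mu_slightly_increases(p, q):
    if q == p + 1:
        return 1.0      # an increase by one level fits the condition best
    if q == p:
        return 0.9      # staying constant is almost acceptable
    if q > p + 1:
        return 0.8      # a stronger jump fits only partially
    return 0.0          # any decrease contradicts the condition

R_S = [[mu_slightly_increases(p, q) for q in range(K)] for p in range(K)]
for row in R_S:
    print(row)
```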

Let the function behavior depend on several fuzzy conditions Sr, r = 1, …, s. Each of these conditions has a corresponding fuzzy relation matrix Sr, r = 1, …, s. The matrix generalizing all these conditions is obtained by applying a T-norm element-wise:

S^s = T_(r=1..s) Sr.

The triangular norm (T-norm) is a real-valued function T : [0, 1] × [0, 1] → [0, 1] satisfying these conditions:
1) T(0, 0) = 0, T(µA, 1) = T(1, µA) = µA (boundedness);
2) T(µA, µB) ≤ T(µC, µD) if µA ≤ µC and µB ≤ µD (monotonicity);
3) T(µA, µB) = T(µB, µA) (commutativity);
4) T(µA, T(µB, µC)) = T(T(µA, µB), µC) (associativity).
If relation matrix S s has at least one zero row, then the set of conditions S r ,r = 1 ,s is contradictory because f satisfies it with the zero value. Next, an algorithm of subset selection is suggested for such contradictory sets. All pairs of conditions are tested for inconsistency. Inconsistent pairs found are removed from the consideration. In the next step all triplets of conditions are tested for inconsistency. Inconsistent Articles

VOLUME 11,

N° 4

2017

triplets found are removed from the consideration. This operation is repeated until step l ,1 ≤ l ≤ s , in which all subsystems consisting of l + 1 fuzzy conditions, are inconsistent. Then, the consistent subsystems are all subsystems found in step l − 1 and consisting of l fuzzy conditions. Hereby, any number of fuzzy conditions for one variable may be easily reduced to one fuzzy condition for this variable. Consider now k-valued logic functions with m variables and imposed on fuzzy conditions S. Assume m = 2 and |S| = 2, for simplicity. Let S1 to be first fuzzy condition being determined by the first variable, and the second condition S 2 – by the second variable. Compute relation matrices S1 and S 2 . The satisfaction of conditions S means that conditions S1 and S 2 are satisfied simultaneously. This means that, for (i1 ,i2 ), as the x1 and x2 values (i1 ,i2 ∈{0 ,1 ,...,k − 1}) we should use the values resulting from the use of a Т – norm of the (i1 + 1) -th row of relation matrix S 1 and the (i2 + 1) -th row of matrix S 2 , they are the row values of matix S . This matrix has all properties of a relation matrix required. Assume that the function behavior satisfies some fuzzy condition S and furthermore some initial condition (for example f (0) = 0 ) . Fuzzy relation S , corresponding to a fuzzy condition S, becomes the basis for the fuzzy relation S , formalizing both function behavior conditions. According to [4, 5], there holds ,

with the initial value f (0) = 0 . The first row of the relation matrix S , corresponding to value 0, equals (1, 0, ..., 0) – as the initial condition. The elements of S are acquired by the multiplication of the previous row of the relation matrix S on columns of the matrix S but instead of the multiplication a triangle norm is used. A triangle norm is a real function of two variables, K : [0 ,1] × [0 ,1] → [0 ,1], satisfying the following conditions: 1) K (1 ,1) = 1 ,K (µ A ,0) = K (0 , µ A ) = µ A (limited);

2) nicity);

if

(monoto-

3) K (µ A , µ B ) = K (µ B , µ A ) (commutativity);

(

) (

)

4) K µ A ,K (µ B , µ C ) = K K (µ A , µ B ) , µ C (associativity).

Suppose, the initial condition is formulated for f ( k − 1) instead of f (0) . Let it be f ( k − 1) = q , (1≤ q ≤ k-1), defining the k-th row of the relation matrix S . The row consists of zeros except for 1 in column q. The other rows of matrix S are

,

.

Assume that the initial condition is set for some intermediate value f l ∗ − 1 , 1 < l ∗ − 1 < k . Let it be f l ∗ − 1 = q,(1 ≤ q ≤ k − 1), defining the l ∗ -th row of

(

)

(

)(

)


Journal of Automation, Mobile Robotics & Intelligent Systems

VOLUME 11,

matrix S . This row consists of zeros except for 1 in column q. A common way of obtaining the matrix S is defined by the formula: for

Suppose some function of one variable has one fuzzy condition and t initial conditions. Therefore that if there are several initial conditions, then Si is computed for each i-th condition separately (1 < i ≤ t), and the and resulting matrix S 1 is given as follows t

S 1 = ΤSi i =1

As a result of the formalization of all conditions of k-valued logic functions a fuzzy relation matrix is produced. Functions, defined by forming fuzzy relation, applied to a real task. The posterior information obtained is compared with prior information from experts. Functions with disagreements are discarded. The remaining functions are left. If no functions is left, then we require an additional specification about function conditions from experts and we should retry the computations.

3. Rating Evaluation and Functions from Fuzzy Classes of k -valued Logic

Consider N objects with some quality characteristics (characteristic features) evaluated, X j , j = 1 ,m , with corresponding values X lj , l = 1 ,m j , j = 1 ,m . Assume that the characteristics X j , j = 1 ,m have a significant impact on the characteristic Y (with values Yl ,l = 1 ,k ) that implies a successful functioning of objects in the future. As a result of a rating evaluation of objects with respect to X j , j = 1 ,m, control effects on Y occur. To define rating estimates of objects, results of scoring are formalized using complete orthogonal semantic spaces [1]. A linguistic variable is a quintuple

{X ,T ( X ) ,U ,V ,S } ,

2017

possible to formulate the valid requirements for the membership functions µ l ( x ) ,l = 1 ,m, of their termsets T ( X ) = X l ,l = 1 ,m [8]:

{

}

 1.  For each X l ,l = 1 ,m, there is U l ≠ Ø, where U l = {x ∈U : µ l ( x ) = 1} is a point or an interval.

 2. If U l = {x ∈U : µ l ( x ) = 1}, then µ l ( x ) ,l = 1 ,m, does not decreaseto the left of U l and does not increase to the right of U l .

for

.

N° 4

{

}

where X – is a name of a variable; T ( X ) = X i ,i = 1 ,m – a term-set of variable X, i.e. a set of terms or names of linguistic values of variable X (each of these values is a fuzzy variable with a value from a universal set U); V – is a syntactical rule that gives names of the values of a linguistic variable X; S – is a semantic rule that gives to each fuzzy variable with a name from T(X) a corresponding fuzzy subset of a universal set U [7]. A semantic scope is a linguistic variable with a fixed term-set [7]. A theoretical analysis of the properties of semantic scopes aimed at adequacy improvement of the expert assessment models and their usefulness for practical tasks solution has made it

3. µ l ( x ) ,l = 1 ,m have maximally two points of discontinuity of the first type. 4. For each

, there holds

.

The semantic scope the membership functions of which meet the requirements mentioned has been termed the Full Orthogonal Semantic Scope (FOSS) [8]. Based on results from [2], let us construct m FOSS’s, X j , j = 1 ,m, with their corresponding term-sets X lj , l = 1 ,m j , j = 1 ,m . Let be a membership function of fuzzy number X lj , corresponding to the l-th term of the j-th FOSS, l = 1 ,m j , j = 1 ,m . A fuzzy number [9] A is a fuzzy set with the membership function µ A ( x ) : R → [0 ,1] . Let X nj and µ nj ( x ) ≡ anj 1 ,anj 2 ,anjL ,anjR ,n = 1 ,N , j = 1 ,m, be an estimate of a characteristic X j of the n-th object. Fuzzy value X nj with its membership function µ nj ( x ) is equal to one of the fuzzy values X lj , l = 1 ,m j , j = 1 ,m . The first two parameters in brackets are the abscissas of the apexes of the trapezoid upper bases that are graphs of the corresponding membership functions; while the last two parameters are the lengths of the left and right trapezoid wings, respectively. We denote the weight coefficients of the evaluated k characteristics by ω j , j = 1 ,k , ∑ ω j = 1. A fuzzy rating

(

)

j=1

point of the n-th object [1, 9], n = 1 ,N , within the characteristics X j , j = 1 ,m, is determined as a fuzzy number A n = ω1 ⊗ X 1n ⊕ ... ⊕ ω k ⊗ X mn

with its membership function

.

The defuzzification of the fuzzy values A n ,n = 1 ,N , , using the method of gravity center [10] gives us the crisp values An ,n = 1 ,N ,B1 ,Bm. Value An ,n = 1 ,N is called a rating point of appearance of the quality characteristics X j , j = 1 ,m, of the n-th object, n = 1 ,N .

A normed rating point of the n-th object, n = 1 ,N , is calculated as follows En =

An − B1 ,n = 1 ,N . Bm − B1

and the range of values of E n ,n = 1 ,N is the unit interval, [0, 1]. Articles

75


Journal of Automation, Mobile Robotics & Intelligent Systems

To generate control effects as a result of a rating estimation we will use fuzzy logic functions. Suppose that we have m variables X j , j = 1 ,m , and the desired function takes k values (corresponding to the values of s). Our aim is to construct a function with m variables from fuzzy classes of a k-valued logic. The constructed function make it possible to split estimates on k clusters corresponding to the values of Y. A control effect aiming at a successful functioning of the object in future is set for each cluster. The behavior of the desired function is restricted by initial and fuzzy conditions. The construction of such a function is demonstrated in the next section.

4. Decision Making Aimed at Guaranteeing a Commercial Success of Software

Twenty software products designed for retail sales automation, banking, insurance and intercompany accounting were selected for the research. Developed products were used by consumers on a trial mode basis. As input characteristics of software products three qualities were selected: X1 – modifiability, X 2 – learning curve, and X 3 – functionality. The modifiability is a characteristic feature of software simplifying a modification of a product, including the modularity, scalability and structuring. A low learning curve makes it possible to reduce efforts on learning and understanding the software and documentation and includes: the informativeness, structuring and readability. The functionality provides a set of functions defined in a product description and satisfying customers’ needs. As an output characteristic feature, the success of product was used – Y, including its popularity, sales and experts’ recognition. All characteristics mentioned above were compared with respect to three linguistic values: «low», «middle», «high», with their corresponding scores 0, 1, 2, respectively. As a result of a trial usage of software products and their rating points, a recommendation system meant for the improvement of the product’s success is presented below. The experts evaluation results are shown in Table 1. Table 1. Evaluation results of the software products

76

n

X1

X2

X3

1

0

1

0

2

0

0

0

3

1

0

2

4

1

1

1

5

2

0

2

6

2

1

1

7

0

1

1

8

1

0

1

9

1

2

0

10

1

2

0

11

0

0

1

12

1

1

0

Articles

VOLUME 11,

N° 4

2017

The data given in Table 1 were formalized using FOSS [2]. The membership functions of linguistic variables «low», «middle», «high» are shown in Table 2. If the membership functions are trapezoid, then the membership function is defined by four parameters. The first two parameters are abscissas of the apexes of the trapezoid upper bases that are graphs of the corresponding membership function while the last two parameters are the lengths of the left and right trapezoid wings, correspondingly. If the membership function is triangular, then it is clearly defined by three parameters. The first parameter is the abscissa of the vertex of the triangle, and the remaining two parameters are the lengths of the left and right wings, respectively. Table 2. Software products’ evaluations given as the trapezoid fuzzy numbers n

n     X1                    X2                        X3
1     (0,0.15,0,0.3)        (0.375,0.425,0.25,0.35)   (0,0.125,0,0.25)
2     (0,0.15,0,0.3)        (0,0.125,0,0.25)          (0,0.125,0,0.25)
3     (0.45,0.55,0.3,0.3)   (0,0.125,0,0.25)          (0.85,1,0.3,0)
4     (0.45,0.55,0.3,0.3)   (0.375,0.425,0.25,0.35)   (0.375,0.55,0.25,0.3)
5     (0.85,1,0.3,0)        (0,0.125,0,0.25)          (0.85,1,0.3,0)
6     (0.85,1,0.3,0)        (0.375,0.425,0.25,0.35)   (0.375,0.55,0.25,0.3)
7     (0,0.15,0,0.3)        (0.375,0.425,0.25,0.35)   (0.375,0.55,0.25,0.3)
8     (0.45,0.55,0.3,0.3)   (0,0.125,0,0.25)          (0.375,0.55,0.25,0.3)
9     (0.45,0.55,0.3,0.3)   (0.775,1,0.35,0)          (0,0.125,0,0.25)
10    (0.45,0.55,0.3,0.3)   (0.775,1,0.35,0)          (0,0.125,0,0.25)
11    (0,0.15,0,0.3)        (0,0.125,0,0.25)          (0.375,0.55,0.25,0.3)
12    (0.45,0.55,0.3,0.3)   (0.375,0.425,0.25,0.35)   (0,0.125,0,0.25)
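To make this parameterization concrete, the following minimal sketch (Python; not part of the original paper, and the function name is an illustrative assumption) evaluates the membership degree of a trapezoidal fuzzy number given as (a1, a2, left, right); a triangular membership function is the degenerate case a1 = a2.

def trapezoid_mf(x, a1, a2, left, right):
    # (a1, a2, left, right): [a1, a2] is the upper base (membership degree 1),
    # `left` and `right` are the lengths of the left and right wings.
    if a1 <= x <= a2:
        return 1.0
    if left > 0 and a1 - left < x < a1:       # rising left slope
        return (x - (a1 - left)) / left
    if right > 0 and a2 < x < a2 + right:     # falling right slope
        return ((a2 + right) - x) / right
    return 0.0

# «middle» value of X1 from Table 2 is (0.45, 0.55, 0.3, 0.3):
print(trapezoid_mf(0.5, 0.45, 0.55, 0.3, 0.3))   # 1.0
print(trapezoid_mf(0.3, 0.45, 0.55, 0.3, 0.3))   # 0.5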

The rating points have been calculated and are shown in Table 3. According to the experts, the weight coefficients ωj, j = 1, 2, 3, are all equal to 1/3.



Table 3. Rating points and rating of software products

n     Rating points     Rating
1     0.248             11
2     0                 12
3     0.746             2
4     0.542             6
5     0.816             1
6     0.676             3
7     0.457             8
8     0.433             9
9     0.613             4, 5
10    0.613             4, 5
11    0.329             10
12    0.462             7
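As an illustration (not part of the original paper), the Rating column of Table 3 can be reproduced from the rating points by ordering the products by descending rating point and letting products with equal points share the tied positions; a short Python sketch with the data copied from Table 3:

# Rating points from Table 3 (product number -> rating point).
points = {1: 0.248, 2: 0.0, 3: 0.746, 4: 0.542, 5: 0.816, 6: 0.676,
          7: 0.457, 8: 0.433, 9: 0.613, 10: 0.613, 11: 0.329, 12: 0.462}

order = sorted(points, key=points.get, reverse=True)
rating = {}
for n in order:
    # positions (1-based) of all products tied with product n
    tied = [pos + 1 for pos, m in enumerate(order) if points[m] == points[n]]
    rating[n] = ", ".join(str(t) for t in tied)

print(rating[5], rating[9], rating[1])   # 1   4, 5   11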

The evaluation results obtained and the ratings are then used to produce control recommendations aimed at achieving the success of the product. The rating points are usually split into several intervals determined in interaction with the experts; when a rating point falls into a given interval, the corresponding control effect is applied. We assume here that there are three intervals, corresponding to the values of the output characteristic, and three developed control effects. If the rating point falls into the first interval [0, x], the product's success Y is low and a severe refinement is necessary. If the rating point of the software product in question falls into the medium interval [x, y], the product's success Y is medium and the product requires minor work. If the rating point falls into the last interval [y, 1], the product's success is high and the product is ready for the market. A logic function with fuzzy conditions will be used to generate the control effects. The function F, depending on the variables X1, X2, X3, takes one of three values – «product success is low», «product success is medium», «product success is high». These values correspond to 0, 1 and 2, and the corresponding control effects are «product requires severe refinement», «product requires minor refinement» and «product is ready for the market». The linguistic values «low», «middle», «high» of X1, X2, X3 likewise correspond to 0, 1 and 2. The experts set the initial conditions F(X1 = 2) = 2, F(X2 = 2) = 2, F(X3 = 2) = 2, and the fuzzy condition «slightly-increase» on the behavior of the function for each variable. These conditions are formalized using fuzzy relations; the matrices of these fuzzy relations are shown in Tables 4–6.

Table 4. Fuzzy relation matrix describing «slightly-increase» of logic function F on variable X1

      0     1     2
0     0.9   1     0.9
1     0     0.9   1
2     0     0     0.9

Table 5. Fuzzy relation matrix describing «slightly-increase» of logic function F on variable X2

      0     1     2
0     0.7   1     0.7
1     0     0.7   1
2     0     0     0.7

Table 6. Fuzzy relation matrix describing «slightly-increase» of logic function F on variable X3

      0     1     2
0     0.8   1     0.8
1     0     0.8   1
2     0     0     0.8
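The three matrices above share one pattern, which suggests the following reading (an assumption, since the construction rule is not spelled out in the text): confidence 1 for a one-step increase of the function value, a reduced confidence c (0.9, 0.7 or 0.8, depending on the variable) for keeping the value or increasing it by two steps, and 0 for any decrease. A short Python sketch under that assumption reproduces Tables 4–6:

def slightly_increase(c):
    # Rows are the current value i, columns the next value j, for i, j in {0, 1, 2}.
    return [[1.0 if j == i + 1 else (c if j >= i else 0.0) for j in range(3)]
            for i in range(3)]

print(slightly_increase(0.9))   # Table 4 (X1)
print(slightly_increase(0.7))   # Table 5 (X2)
print(slightly_increase(0.8))   # Table 6 (X3)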

As a result, the functions describing the behavior conditions, following the fuzzy relation matrices from Tables 4–6, are obtained as shown in Tables 7–9.

Table 7. Fuzzy relation matrix describing logic function F values on variable X1

      0     1     2
0     1     0.9   0.9
1     0.9   0.9   0.9
2     0     0     1

Table 8. Fuzzy relation matrix describing logic function F values on variable X2

      0     1     2
0     1     0.7   0.7
1     0.7   0.7   0.7
2     0     0     1

Table 9. Fuzzy relation matrix describing logic function F values on variable X3

      0     1     2
0     1     0.8   0.8
1     0.8   0.8   0.8
2     0     0     1

The element at the intersection of the (i + 1)-th row and the (j + 1)-th column of the matrices in Tables 7–9 is the level of confidence that the function F takes the value j when the corresponding variable X1, X2 or X3 equals i, i = 0, …, 2, j = 0, …, 2. Taking all the conditions into account, an equation with a matrix of 27 rows and 3 columns is obtained. The elements of this matrix are the levels of confidence that F takes a certain value for given values of the variables X1, X2, X3. For example, the level of confidence that F equals 1 when X1 = 0, X2 = 1, X3 = 0 is obtained as the minimum of the element at the intersection of the first row and the second column of the matrix in Table 7 (X1 = 0), the element at the intersection of the second row and the second column of the matrix in Table 8 (X2 = 1), and the element at the intersection of the first row and the second column of the matrix in Table 9 (X3 = 0), i.e. min(0.9, 0.7, 0.8) = 0.7. After all the computations, a fuzzy relation describing the fuzzy logic function F is obtained; the entries of the matrix representing this fuzzy relation are shown in Table 10.
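This combination rule can be checked with a short sketch (Python; not part of the original paper). The matrices of Tables 7–9 are encoded so that T[i][j] is the level of confidence that F = j when the corresponding variable equals i, and each row of the 27 × 3 matrix of Table 10 is the element-wise minimum over the three matrices:

import itertools

T7 = [[1.0, 0.9, 0.9], [0.9, 0.9, 0.9], [0.0, 0.0, 1.0]]  # Table 7 (X1)
T8 = [[1.0, 0.7, 0.7], [0.7, 0.7, 0.7], [0.0, 0.0, 1.0]]  # Table 8 (X2)
T9 = [[1.0, 0.8, 0.8], [0.8, 0.8, 0.8], [0.0, 0.0, 1.0]]  # Table 9 (X3)

relation = {
    (x1, x2, x3): [min(T7[x1][j], T8[x2][j], T9[x3][j]) for j in range(3)]
    for x1, x2, x3 in itertools.product(range(3), repeat=3)
}

print(relation[(0, 1, 0)])   # [0.7, 0.7, 0.7] -- row 010 of Table 10
print(relation[(1, 0, 0)])   # [0.9, 0.7, 0.7] -- row 100 of Table 10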




Table 10. Fuzzy relation describing function F

Variables     0     1     2
000           1     0.7   0.7
001           0.8   0.7   0.7
002           0     0     0.7
010           0.7   0.7   0.7
011           0.7   0.7   0.7
012           0     0     0.7
020           0     0     0.8
021           0     0     0.8
022           0     0     0.9
100           0.9   0.7   0.7
101           0.8   0.7   0.7
102           0     0     0.7
110           0.7   0.7   0.7
111           0.7   0.7   0.7
112           0     0     0.7
120           0     0     0.8
121           0     0     0.8
122           0     0     0.9
200           0     0     0.7
201           0     0     0.7
202           0     0     0.7
210           0     0     0.7
211           0     0     0.7
212           0     0     0.7
220           0     0     0.8
221           0     0     0.8
222           0     0     1


Table 11. The 3-valued logic function F of 3 variables

Variables     Function value
000           0
001           0
002           2
010           0
011           1
012           2
020           2
021           2
022           2
100           0
101           0
102           2
110           1
111           1
112           2
120           2
121           2
122           2
200           2
201           2
202           2
210           2
211           2
212           2
220           2
221           2
222           2

The interaction with the experts has made it possible to construct the fuzzy function of the 3-valued logic with 3 variables; its values are shown in Table 11. Using the function F we can conclude that the software products No. 1, 2, 8 and 11, whose rating points fall into the first interval [0, 0.45], require severe improvements; the software products No. 4, 7 and 12, whose rating points fall into the second interval [0.45, 0.55], require slight improvements; and the software products No. 3, 5, 6, 9 and 10, whose rating points fall into the third interval [0.55, 1], are ready for the market. Thus, the function F has made it possible to split the rating points into three intervals and to assign to each interval a control effect focused on the product's future success. These results match the experts' opinion based on their experience and knowledge.
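A minimal sketch (Python; not part of the original paper) of the resulting decision rule, with the rating points copied from Table 3 and the three intervals and control effects stated above; treating the intervals as closed on the left is an assumption, and no product lies exactly on a boundary:

points = {1: 0.248, 2: 0.0, 3: 0.746, 4: 0.542, 5: 0.816, 6: 0.676,
          7: 0.457, 8: 0.433, 9: 0.613, 10: 0.613, 11: 0.329, 12: 0.462}

def control_effect(r):
    # Map a rating point to the control effect of its interval.
    if r < 0.45:
        return "product requires severe refinement"    # [0, 0.45]
    if r < 0.55:
        return "product requires minor refinement"     # [0.45, 0.55]
    return "product is ready for the market"           # [0.55, 1]

for n, r in sorted(points.items()):
    print(n, control_effect(r))
# Products 1, 2, 8, 11 -> severe refinement; 4, 7, 12 -> minor refinement;
# 3, 5, 6, 9, 10 -> ready for the market, in line with the conclusions above.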


5. Concluding remarks

Rating evaluation is applied in a vast number of human activities and is used to form control effects aimed at an effective and efficient functioning of the objects under consideration. Difficulties in the selection of a control effect come from incomplete information or possible errors in the experts' decisions. Furthermore, indefinite zones of the rating score values exist and make decision making ambiguous. This paper proposed an approach to decision making on rating points using functions from fuzzy classes of the k-valued logic. The derivation of such functions is performed using initial conditions and fuzzy conditions on their behavior. The functions obtained make it possible to cluster the rating points and to assign to each cluster a control effect aimed at a more successful functioning of the object. An example of a practical application confirms the effectiveness and efficiency of the developed approach.



AUTHOR

Olga M. Poleshchuk – Department of Mathematics, Bauman Moscow State Technical University, str. Baumanskaya 2-ya, 5, Moscow 105005, Russian Federation. E-mails: poleshchuk@mgul.ac.ru, olga.m.pol@yandex.ru.

REFERENCES

[1] Poleshchuk O., "The determination of students' fuzzy rating points and qualification levels", International Journal of Industrial and Systems Engineering, 2011, vol. 9, no. 1, 13–20.
[2] Poleshchuk O., Komarov E., "The determination of rating points of objects with qualitative characteristics and their usage in decision making problems", International Journal of Computational and Mathematical Sciences, 2009, vol. 3, no. 7, 360–364.
[3] Poleshchuk O., Komarov E., "The determination of rating points of objects and groups of objects with qualitative characteristics". In: Annual Conference of the North American Fuzzy Information Processing Society, 2009, p. 5156416.
[4] Ryjov A., "Fuzzy data bases: description of objects and retrieval of information". In: Proceedings of the First European Congress in Intelligent Technologies, 1993, vol. 3, 1557–1562.
[5] Rogozhin S., Ryjov A., "Fuzzy classes in k-valued logic". In: V National Conference "Neurocomputers and Applications", 1999, 460–463.
[6] Darwish A., Poleshchuk O., "New models for monitoring and clustering of the state of plant species based on semantic spaces", Journal of Intelligent and Fuzzy Systems, 2014, vol. 26, no. 3, 1089–1094.
[7] Zadeh L.A., "The concept of a linguistic variable and its application to approximate reasoning", Parts 1, 2 and 3, Information Sciences, 1975, vol. 8, 199–249, 301–357; 1976, vol. 9, 43–80.
[8] Ryjov A., "The concept of a full orthogonal semantic scope and the measuring of semantic uncertainty". In: Fifth International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems, 1994, 33–34.
[9] Poleshchuk O., Komarov E., "Expert Fuzzy Information Processing", Studies in Fuzziness and Soft Computing, 2011, 1–239.
[10] Dubois D., Prade H., "Fuzzy real algebra: some results", Fuzzy Sets and Systems, 1979, vol. 2, no. 4, 327–348.
[11] Yager R., Filev D.P., "On the issue of defuzzification and selection based on a fuzzy set", Fuzzy Sets and Systems, 1993, vol. 55, no. 3, 255–272.


