JAMRIS 2016 Vol 10 No 4


VOLUME 10 N°4 2016 www.jamris.org pISSN 1897-8649 (PRINT) / eISSN 2080-2145 (ONLINE)

Indexed in SCOPUS


JOURNAL OF AUTOMATION, MOBILE ROBOTICS & INTELLIGENT SYSTEMS

Editor-in-Chief:
Janusz Kacprzyk (Polish Academy of Sciences, PIAP, Poland)

Advisory Board:
Dimitar Filev (Research & Advanced Engineering, Ford Motor Company, USA)
Kaoru Hirota (Japan Society for the Promotion of Science, Beijing Office)
Jan Jabłkowski (PIAP, Poland)
Witold Pedrycz (ECERF, University of Alberta, Canada)

Co-Editors:
Roman Szewczyk (PIAP, Warsaw University of Technology)
Oscar Castillo (Tijuana Institute of Technology, Mexico)
Marek Zaremba (University of Quebec, Canada)

Executive Editor:
Anna Ładan (aladan@piap.pl)

Associate Editors:
Jacek Salach (Warsaw University of Technology, Poland)
Maciej Trojnacki (PIAP, Poland)

Statistical Editor:
Małgorzata Kaliczynska (PIAP, Poland)

Typesetting:
Ewa Markowska, PIAP

Webmaster:
Piotr Ryszawa, PIAP

Editorial Office:
Industrial Research Institute for Automation and Measurements PIAP
Al. Jerozolimskie 202, 02-486 Warsaw, POLAND
Tel. +48-22-8740109, office@jamris.org

Copyright and reprint permissions: Executive Editor

The reference version of the journal is the e-version. Printed in 300 copies.

Editorial Board:
Chairman - Janusz Kacprzyk (Polish Academy of Sciences, PIAP, Poland)
Plamen Angelov (Lancaster University, UK)
Adam Borkowski (Polish Academy of Sciences, Poland)
Wolfgang Borutzky (Fachhochschule Bonn-Rhein-Sieg, Germany)
Bice Cavallo (University of Naples, Italy)
Chin Chen Chang (Feng Chia University, Taiwan)
Jorge Manuel Miranda Dias (University of Coimbra, Portugal)
Andries Engelbrecht (University of Pretoria, Republic of South Africa)
Pablo Estévez (University of Chile)
Bogdan Gabrys (Bournemouth University, UK)
Fernando Gomide (University of Campinas, São Paulo, Brazil)
Aboul Ella Hassanien (Cairo University, Egypt)
Joachim Hertzberg (Osnabrück University, Germany)
Evangelos V. Hristoforou (National Technical University of Athens, Greece)
Ryszard Jachowicz (Warsaw University of Technology, Poland)
Tadeusz Kaczorek (Bialystok University of Technology, Poland)
Nikola Kasabov (Auckland University of Technology, New Zealand)
Marian P. Kazmierkowski (Warsaw University of Technology, Poland)
Laszlo T. Kóczy (Szechenyi Istvan University, Gyor and Budapest University of Technology and Economics, Hungary)
Józef Korbicz (University of Zielona Góra, Poland)
Krzysztof Kozłowski (Poznan University of Technology, Poland)
Eckart Kramer (Fachhochschule Eberswalde, Germany)
Rudolf Kruse (Otto-von-Guericke-Universität, Magdeburg, Germany)
Piotr Kulczycki (AGH University of Science and Technology, Cracow, Poland)
Ching-Teng Lin (National Chiao-Tung University, Taiwan)
Andrew Kusiak (University of Iowa, USA)
Mark Last (Ben-Gurion University, Israel)
Anthony Maciejewski (Colorado State University, USA)
Krzysztof Malinowski (Warsaw University of Technology, Poland)
Andrzej Masłowski (Warsaw University of Technology, Poland)
Patricia Melin (Tijuana Institute of Technology, Mexico)
Fazel Naghdy (University of Wollongong, Australia)
Zbigniew Nahorski (Polish Academy of Sciences, Poland)
Nadia Nedjah (State University of Rio de Janeiro, Brazil)
Duc Truong Pham (Birmingham University, UK)
Lech Polkowski (Polish-Japanese Institute of Information Technology, Poland)
Alain Pruski (University of Metz, France)
Rita Ribeiro (UNINOVA, Instituto de Desenvolvimento de Novas Tecnologias, Caparica, Portugal)
Imre Rudas (Óbuda University, Hungary)
Leszek Rutkowski (Czestochowa University of Technology, Poland)
Alessandro Saffiotti (Örebro University, Sweden)
Klaus Schilling (Julius-Maximilians-University Wuerzburg, Germany)
Vassil Sgurev (Bulgarian Academy of Sciences, Department of Intelligent Systems, Bulgaria)
Helena Szczerbicka (Leibniz Universität, Hannover, Germany)
Ryszard Tadeusiewicz (AGH University of Science and Technology in Cracow, Poland)
Stanisław Tarasiewicz (University of Laval, Canada)
Piotr Tatjewski (Warsaw University of Technology, Poland)
Rene Wamkeue (University of Quebec, Canada)
Janusz Zalewski (Florida Gulf Coast University, USA)
Teresa Zielinska (Warsaw University of Technology, Poland)

Publisher: Industrial Research Institute for Automation and Measurements PIAP

If in doubt about the proper edition of contributions, please contact the Executive Editor. Articles are reviewed, excluding advertisements and descriptions of products. All rights reserved ©



JOURNAL OF AUTOMATION, MOBILE ROBOTICS & INTELLIGENT SYSTEMS VOLUME 10, N° 4, 2016 DOI: 10.14313/JAMRIS_4-2016

CONTENTS

3	Control Based on Brain-Computer Interface Technology for Video-Gaming with Virtual Reality Techniques
	Szczepan Paszkiel
	DOI: 10.14313/JAMRIS_4-2016/26

8	Adaptive Neuro-Sliding Mode Control of PUMA 560 Robot Manipulator
	Ali Medjebouri, Lamine Mehennaoui
	DOI: 10.14313/JAMRIS_4-2016/27

17	Designing Social Robots for Interaction at Work: Socio-Cognitive Factors Underlying Intention to Work with Social Robots
	Nuno Piçarra, Jean-Christophe Giger
	DOI: 10.14313/JAMRIS_4-2016/28

27	A New Approach for Handling Element Accessibility Problems Faced by Persons with a Wheelchair
	Ali Saidi sief, Alain Pruski, Abdelhak Bennia
	DOI: 10.14313/JAMRIS_4-2016/29

40	Kinematic Analysis of 6-DOF Arms for H20 Mobile Robots and Labware Manipulation for Transportation in Life Science Labs
	Mohammed M. Ali, Hui Liu, Norbert Stoll, Kerstin Thurow
	DOI: 10.14313/JAMRIS_4-2016/30

53	Analysis, Modelling and Planning the Communal Sewerage Systems
	DOI: 10.14313/JAMRIS_4-2016/31

52	Group Decision Making Problem by General Convexity or Concavity and Aggregation Process
	DOI: 10.14313/JAMRIS_4-2016/32



Submitted: 5th May 2016; accepted: 17th October 2016

Control Based on Brain-Computer Interface Technology for Video-Gaming with Virtual Reality Techniques

Szczepan Paszkiel

DOI: 10.14313/JAMRIS_4-2016/26

Abstract: The paper describes the possibilities of applying brain-computer interface technology in neurogaming. To that end, a number of experiments were conducted in the Laboratory of Neuroinformatics and Decision Systems of the Opole University of Technology using the Emotiv EPOC+ NeuroHeadset. Moreover, the paper discusses the possibility of using brain-computer interface (BCI) technology in combination with virtual reality for controlling avatars/objects in video games.

Keywords: neurogaming, brain-computer interfaces, virtual reality

1. Introduction

A positive impact of neurogaming on the human body has been confirmed in practice through its use in exercising the brain's neuroplastic abilities for given purposes. Brain activity changes depending on what a given individual is doing during a specific time interval. Therefore, we currently observe dynamic growth of the brain-fitness industry, i.e. the field offering brain exercises for healthy individuals. The development of this industry would be impossible were it not for the globally increasing interest in, and the associated growing number of, BCI-based devices. In practice, BCI technology is based on three paradigms: SCP, i.e. slow cortical potentials [2]; the evoked potential P300; and ERD/ERS, i.e. event-related desynchronization/synchronization [1]. Brain-computer interfaces, according to their correlation with an external event (or the lack of it), may be divided into asynchronous, based on spontaneous brain activity not related to an external event, and synchronous, related to the occurrence of an external event. The conducted literature review indicates that the development of BCI technology drives the creation of a constantly increasing number of software solutions for signal analysis and identification [7], as well as prototype products for showing changes of the brain's electrical activity using LEDs [6].

2. Brain-Computer Technology in Video Gaming Using EEG

Non-invasive brain-computer technology is gaining more and more practical applications in different domains of life, including neurogaming [4]. The main advantage of this technology is the possibility of affecting game action using brain signals alone, without using them to trigger the effector muscles of a given limb, as is the case with standard controllers. As shown by the literature review, one of the first neurogames was NeuroRacer, which aimed to develop cognitive abilities. Studies conducted with the game led to the conclusion that players significantly improved their working memory and cognitive skills [3]. Another important positive aspect of the game was the enhancement of multitasking capacity for mental operations, the decline of which is particularly observed in the elderly.

The performed analyses indicate that neurogaming offers a relatively low level of competitiveness, due to the large delays between the signal transmitted from the surface of the human head and the workstation, and due to measurement artifacts that are hard to eliminate in practical applications. This requires proper signal filtering. One of the tools used is the FFT (Fast Fourier Transform), a transformation process which yields a transform as its result. The Fourier transform determined for a discrete signal, i.e. a signal given by a specific number of samples, is called the Discrete Fourier Transform (DFT). If x_0, ..., x_{N-1} are complex numbers, the DFT can be expressed in the following form:

X_k = \sum_{n=0}^{N-1} x_n e^{-2\pi i k n / N}, \quad k = 0, \ldots, N-1 \qquad (1)

One of the components of the Fourier series is the harmonic component, which represents the signal in the spectral domain. Using the DFT, the signal samples a_0, a_1, a_2, a_3, ..., a_{N-1}, with a_i ∈ R, are transformed into the series of harmonics A_0, A_1, A_2, A_3, ..., A_{N-1}, with A_i ∈ C, according to equation (2), where i is the imaginary unit, k the harmonic number, n the signal sample number, a_n the sample value and N the number of samples:

A_k = \sum_{n=0}^{N-1} a_n e^{-2\pi i k n / N} \qquad (2)

When conducting FFT analysis, one should remember that biological signals are never pure sinusoids, but rather superpositions of many components. FFT analysis enables quick and accurate identification of signal components. Fig. 1 presents an example of FFT analysis conducted using Emotiv TestBench. Data were taken from the O2 electrode for a maximum signal amplitude in the range 80 to –60 dB. Figure 1 presents the signal in the gain-frequency plane, while the right-hand side shows the classification of the individual brainwave rhythms. High activity of alpha waves in the range from 7 to 13 Hz is clearly visible, which corresponds to the idle/relaxed state of the test subject. By comparison, the beta-wave activity is in this case very low, which corresponds to no information processing in the given time window. Thus it can be concluded that FFT analysis provides correct signal filtration and identification of the activity of given brainwave ranges on specific electrodes for neurogaming purposes.

Fig. 1. FFT analysis of a signal using the Emotiv TestBench application

The dynamic development of games also implies a need for an increasing number of signals recognised by brain-computer interfaces. The studies conducted at the Laboratory of Neuroinformatics and Decision Systems of the Opole University of Technology used the 14 measurement electrodes + 2 reference electrodes of the Emotiv EPOC+ NeuroHeadset device presented in Figure 2. The device does not require gel or conductive paste; however, saline solution helps ensure its correct operation by saturating the felt pads at the skin-electrode contact [5]. Another important factor in ensuring correct operation is that the device must be properly placed on the head of the test subject, as presented in Figure 2. Device speed and accuracy are ensured by sampling 2048 times per second and by sample filtration; once the sampling process is complete, 256 samples per channel are obtained with a maximum resolution of 16 bits. The device performs FFT filtration. The frequency range the device responds to is 0.16 to 43 Hz, which allows, inter alia, reading the states of meditation, idleness, boredom and concentration. The electrodes are located on the head of the controlling person according to the international 10-20 system, as shown in Figure 3. It should be noted that CMS and DRL are the reference electrode and the grounding electrode, respectively. Following the standard of the International Federation of Clinical Neurophysiology, the other electrodes carry even numerical identifiers for the right hemisphere and odd ones for the left hemisphere, as well as letter identifiers derived from the brain lobes over which they are placed.

Fig. 2. Emotiv EPOC+ NeuroHeadset and visualisation of the placement of the device on the head of the controlling person

Fig. 3. Location of electrodes on the head of the subject in accordance with the 10-20 standard for the Emotiv EPOC+ NeuroHeadset, with identifiers
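The transformation in equation (2) can be checked numerically. The sketch below is illustrative only: it uses a synthetic alpha-band test signal rather than headset data, implements the DFT sum directly, compares it with NumPy's FFT, and locates the dominant rhythm at a sampling rate of 256 samples per second.

```python
import numpy as np

def dft(a):
    """Direct implementation of Eq. (2): A_k = sum_n a_n * exp(-2*pi*i*k*n/N)."""
    N = len(a)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    return np.exp(-2j * np.pi * k * n / N) @ a

# Synthetic "EEG-like" signal: a strong 10 Hz alpha component plus a weak
# 20 Hz beta component, one second of data at 256 samples per second.
fs = 256
t = np.arange(fs) / fs
x = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.1 * np.sin(2 * np.pi * 20 * t)

A = dft(x)
assert np.allclose(A, np.fft.fft(x))          # matches the library FFT

freqs = np.fft.fftfreq(len(x), 1 / fs)
peak = abs(freqs[np.argmax(np.abs(A))])
print(peak)   # 10.0 -> the dominant rhythm falls in the alpha band (7-13 Hz)
```

The direct sum is O(N^2); `np.fft.fft` computes the same harmonics in O(N log N), which is what makes real-time spectral views such as the TestBench display practical.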



3. Virtual Reality

Virtual reality originated in the 1960s; recently, however, its development has become much more dynamic due to the appearance of new technological solutions, such as goggles that can display pictures in both 2D and 3D mode. Such goggles have two miniature screens, each of them showing the proper part of the image. From a practical perspective, virtual reality can be defined as a combination of special equipment and software. Software solutions play the role of supporting hardware accelerators in transforming the environment into an image, which implies a large amount of mathematical computation, while hardware solutions support immersion, i.e. a deeper dive into the generated virtual reality. Currently, the visualisation of virtual reality is based mainly on implementations used in devices produced by Oculus. Unfortunately, as experiments so far have shown, this technology does not yet allow experiencing virtual reality for longer periods of time, as it is fatiguing to the human body, particularly the eyeballs.

4. Concept of Combining BCI with VR for Video Gaming Purposes

In terms of controlling avatars in virtual reality, there is still a problem with BCI today, as device-user calibration is a very time-consuming process that must be performed before controlling can commence. In order to identify control signals after correctly placing the device on the head of the test subject, it is necessary to run the EPOC Control Panel application to verify the connection status, and thus the proper contact at the skin-electrode interface, for all 16 electrodes presented in the visualisation (Figure 4, left-hand side). The EPOC Control Panel should also be used for cognitive training, the results of which can be used in neurogaming applications. Cognitive training involves archiving brain activity over a given time in correlation with a specific event, and then comparing the archived patterns with observed events. For the performed tests, it was necessary to identify brain activity in the relaxed state and during increased mental effort aimed at movement in a specific direction (north, south, east, west), shown on the right-hand side of Figure 4 and marked by a cube in 3D space.

Fig. 4. Head outline in the main window of EPOC Control Panel and cognitive training performed in the Cognitiv Suite tab

Unfortunately, present-day use of BCI for neurogaming requires the aforementioned concept of controlling with additional controllers. This is mainly due to the difficulty of extracting, from electroencephalographic measurements, information for controlling game action based on several different mental states. However, it is worth noting that for such neurogaming tools as Spirit Mountain (Figure 5) by Emotiv Inc., the application of BCI technology proved to work very well in practice, which was confirmed by tests conducted in the Laboratory of Neuroinformatics and Decision Systems of the Opole University of Technology. This is because the number of handled actions is rather low. In the virtual world modelled in this application, it is possible to move objects by thinking; moreover, one can move objects relative to each other by focusing one's attention. The game is based on world exploration using internal imaginations based on induced potentials, additionally supported by the gyroscope that is part of the Emotiv EPOC+ NeuroHeadset equipment. The manufacturer thereby aims to improve the sense of first-person gameplay and to eliminate mechanical components, such as the computer mouse, from the controlling process. In order to test neurogaming applications based on the BCI technology used by the Emotiv EPOC+ NeuroHeadset, it was necessary to correlate buttons with specific human mental states, as presented in Figure 5. The conducted tests indicate that controlling virtual objects with the brain is a complex and difficult process. Additionally, there is a delay in information transfer, which negatively affects the controlling process.

Fig. 5. Main window of the EmoKey application and an example of the virtual environment Spirit Mountain created using the Unity engine

Cerebral Constructor was another neurogaming application tested with the Emotiv EPOC+ NeuroHeadset for the purposes of this experiment. In this case, the controller allowed identification of seven brain-activity states associated with moving up, down, left and right, lifting, pulling and lowering, the latter used for rotating a given object. The performed tests show that, from the practical perspective, there are only three commands: one for rotation and two for moving. When applying BCI technologies to games other than neurogaming ones, problems arise, mainly related to the lack of direct correlation between the products.

Another important problem in virtual reality operation is the lack of model cooperation between the human body and the technology. For the sense of sight there is the identified issue of lag: the delay between head movements registered by the VR goggles and the image generated on the workstation display. Another problem relates to the anatomical labyrinth and its erroneous identification of orientation relative to gravity, as compared to the orientation calculated by the algorithm operating in a given application. A further issue is that the VR goggles create the image at a fixed distance from the participant, quite unlike actual reality, where we focus our sight on objects located at different distances from us. When combining BCI-based devices such as the Emotiv EPOC+ NeuroHeadset with VR goggles, it is worth keeping in mind that artifacts may occur if both devices are placed close to each other. VR goggles cause small changes in the operation of the Emotiv EPOC+ NeuroHeadset, which ultimately allows controlling the character in the virtual reality. There are currently simulator prototypes using a combination of these devices; it is possible to move in these simulators by generating internal events that to some extent (in a general sense) can be seen as thoughts.

As indicated by the literature review, in the future it should be possible to use electromyography (EMG), i.e. diagnostics of the electrical activity of muscles and peripheral nerves, to determine body behaviour at a given moment and to convert this information into movement of the avatar. Moreover, the proposed hybrid solution can be supplemented with EOG, i.e. electrooculography, which records the resting potential around the eyeballs in order to determine the current looking direction in the virtual reality surrounding the subject. There are also attempts to combine BCI-based devices with augmented reality (AR). This technology allows us to observe the surrounding world combined with elements produced by virtual reality. Augmented reality is based on the combination of two worlds, real-time interaction, and freedom of movement in three dimensions. It is worth noting the increasing number of AR-based classes taught at the Massachusetts Institute of Technology (MIT), just one of such higher-education institutions: students use their smartphones and GPS devices to gradually explore a campus area that has been enhanced with information to assist learning. Augmented reality can be used in: education, where it allows gaining information from objects used by students via immediate verification and electronic data-based feedback; medicine, giving access to data on the internal organ structure of the examined person; marketing; and robotics, by identifying the objects that make up the environment in which a robot moves, and by supporting the generation of potentials in the brain of a person controlling the robot using BCI in a feedback loop [8].
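The correlation of buttons with mental states performed via EmoKey, described above, can be sketched as a simple rule table. The sketch is purely illustrative: the state names, thresholds and keys below are hypothetical and do not reproduce Emotiv's actual rule format.

```python
# Hypothetical EmoKey-style rule table: each rule maps a detected mental
# state (with a confidence threshold) to a keystroke sent to the game.
rules = [
    {"state": "push",    "min_power": 0.6, "key": "w"},   # move forward
    {"state": "pull",    "min_power": 0.6, "key": "s"},   # move backward
    {"state": "neutral", "min_power": 0.0, "key": None},  # no action
]

def keys_for(detections):
    """detections: dict mapping state name -> detection power in [0, 1]."""
    pressed = []
    for rule in rules:
        power = detections.get(rule["state"], 0.0)
        if rule["key"] is not None and power >= rule["min_power"]:
            pressed.append(rule["key"])
    return pressed

print(keys_for({"push": 0.8, "pull": 0.1}))   # ['w']
```

The threshold per rule reflects the practical observation made above: with noisy classifications, a key should fire only when the detected state is sufficiently confident, otherwise spurious actions dominate the game.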

5. Summary

It is worth noting that neurogaming is currently widely used in treating mental disorders such as attention deficit hyperactivity disorder (ADHD) and post-traumatic stress disorder (PTSD). Increased worldwide interest in neurogaming has resulted in the organisation of a periodic conference held in San Francisco, USA, where topics related to the ones mentioned above are discussed. Neurogaming, like other practical applications of brain-computer interface technology, raises ethical controversies; there are dilemmas concerning the potential taking of control over the human mind by a machine or another individual [9]. From the perspective of brain fitness, however, it is becoming a promising tool, which was confirmed by the tests conducted for the purposes of this paper. There are already many practical implementations of technology based on augmented reality, among other things in the entertainment industry for the construction of urban games. BCI technologies, which have been developing rapidly for several years, are an excellent example of a technology that aligns well with virtual reality; they may be an interesting tool, among other things, for the implementation of control processes, including the control of avatars. Controlling by means of the human mind without the use of evoked potentials is difficult to implement in everyday conditions, as evidenced by the author's studies. In practice, it is easier to control using evoked potentials, and the accuracy is then higher; without them, the BCI classification takes longer, so the game is slower.



Author

Szczepan Paszkiel – Department of Electrical, Control & Computer Engineering, Institute of Control & Computer Engineering, Opole University of Technology, Opole, 45-316, Poland. E-mail: s.paszkiel@po.opole.pl.


References

[1] Aloise F., Schettini F., Aricò P., Salinari S., Babiloni F., Cincotti F., "A comparison of classification techniques for a gaze-independent P300-based brain-computer interface", Journal of Neural Engineering, vol. 9, no. 4, 2012, 045012. DOI: 10.1088/1741-2560/9/4/045012.
[2] Amiri S., Rabbi A., Azinfar L., Fazel-Rezai R., "A review of P300, SSVEP, and hybrid P300/SSVEP brain-computer interface systems". In: Brain Computer Interface Systems – Recent Progress and Future Prospects, 2013. DOI: 10.5772/56135.
[3] Basar E., Basar-Eroglu C., Karakas S., Schurmann M., "Are cognitive processes manifested in event-related gamma, alpha, theta and delta oscillations in the EEG?", Neuroscience Letters, vol. 259, no. 3, 1999, 165–168. DOI: 10.1016/S0304-3940(98)00934-3.
[4] Blankertz B., Dornhege G., Krauledat M., Muller K. R., Curio G., "The non-invasive Berlin Brain-Computer Interface: fast acquisition of effective performance in untrained subjects", NeuroImage, vol. 37, no. 2, 2007, 539–550.
[5] Gargiulo G., Calvo R. A., Bifulco P., Cesarelli M., Jin C., Mohamed A., Schaik A., "A new EEG recording system for passive dry electrodes", Clinical Neurophysiology, vol. 121, no. 5, 2010, 686–693. DOI: 10.1016/j.clinph.2009.12.025.
[6] Paszkiel S., Hunek W., Shylenko A., "Project and simulation of a portable proprietary device for measuring bioelectrical signals from the brain for verification states of consciousness with visualization on LEDs". In: Szewczyk R., Zieliński C., Kaliczyńska M. (eds.), Challenges in Automation, Robotics and Measurement Techniques, Advances in Intelligent Systems and Computing, vol. 440, Springer, Switzerland, 2016, 25–36. DOI: 10.1007/978-3-319-29357-8.
[7] Paszkiel S., "… przetwarzania danych EEG z wykorzystaniem analizy czynnikowej i pseudoinwersji Moore-Penrose" (in Polish), Informatyka, Automatyka, Pomiary w Gospodarce …, Lublin, no. 4, 2014, 62–64.
[8] Paszkiel S., "Augmented reality of technological environment in correlation with brain computer interfaces for control processes". In: Recent Advances in Automation, Robotics and Measuring Techniques, Advances in Intelligent Systems and Computing, vol. 267, Springer, Switzerland, 2014, 197–203. DOI: 10.1007/978-3-319-05353-0_20.
[9] Wolpaw J. R., Winter Wolpaw E., "Brain-computer interfaces: something new under the sun". In: Brain-Computer Interfaces: Principles and Practice, Oxford University Press, New York, 2012. DOI: 10.1093/acprof:oso/9780195388855.003.0001.



Adaptive Neuro-Sliding Mode Control of PUMA 560 Robot Manipulator

Submitted: 28th June 2016; accepted: 7th October 2016

Ali Medjebouri, Lamine Mehennaoui

DOI: 10.14313/JAMRIS_4-2016/27

Abstract: Classical sliding mode control (SMC) is a robust control scheme widely used for dealing with the uncertainties and disturbances of nonlinear systems. However, the major drawback of conventional SMC in real applications is the chattering phenomenon, which involves extremely high control activity due to the switched control input. To overcome this handicap, a practical design method that combines an adaptive neural network with sliding mode control principles is proposed in this paper. The controller design is divided into two phases. First, the chattering phenomenon is removed by replacing the sign function included in the switched control with a continuous smooth function, based on the Lyapunov stability theorem. Then, an adaptive linear neural network, whose role is to estimate online the equivalent control in the neighborhood of the sliding manifold, is developed for the case when the controlled plant is poorly modeled. Simulation results clearly show the satisfactory, chattering-free tracking performance of the proposed controller when applied to the control of the joint angular positions of a 6-DOF PUMA 560 robot arm.

Keywords: PUMA 560, position control, NN, SMC, robustness, chattering

1. Introduction

Robotic manipulators are highly nonlinear systems with strongly coupled dynamics. Moreover, uncertainties caused by imprecision in the link parameters, payload variations and unmodeled dynamics, such as nonlinear friction and external disturbances, make the motion control of rigid manipulators a very complicated task [1, 2]. Since sliding mode control is a powerful tool for dealing with nonlinear systems, it has been widely used in the robotic systems control field during the past decades. The sliding mode control (SMC) design principle is based on the use of a high-frequency switching control (corrective control) to drive and maintain the system states on a particular surface in the error space, named the sliding surface; this surface defines the desired closed-loop behavior. After the sliding surface is reached, the controller enters the sliding phase and applies an equivalent control law to keep the system states on this surface. The closed-loop response then becomes totally insensitive to external disturbances and model uncertainties.

However, conventional SMC has some serious structural disadvantages that limit its implementation in real applications. The first drawback is the so-called chattering phenomenon, due to the switched control term, which may excite high-frequency unmodeled dynamics and cause harmful effects in the controlled system (e.g. system instability, wear of the mechanism and actuators in mechanical systems) [3–5]. The second disadvantage is the difficulty of calculating the equivalent control when the system is very hard to model, or when the system is subject to a wide range of parameter variations or external disturbances [2, 6]. The most commonly used solution for chattering reduction is the boundary-layer approach, where a continuous approximation of the switched control is used instead of the sign function around the sliding surface. Nevertheless, the boundary-layer thickness introduces a trade-off between control performance and chattering elimination. The second main approach is high-order SMC; unfortunately, its control design needs a complex calculation procedure [6, 20]. In recent years, soft-computing methods such as artificial neural networks (NN) and fuzzy logic systems (FLS) have been successfully applied to overcome the practical problems met in the implementation of sliding mode controllers [1, 2, 6–17]. In the application of NN-based controllers to improve on conventional SMC drawbacks, a few main ideas have been considered. The first attempts to exploit NN learning capacities to estimate online the equivalent control or the modeling errors [1, 2, 6, 11, 12]; the neural net's role is then to compensate the model's nonlinear terms and disturbance effects, and if this compensation term is sufficiently precise, the switched control, responsible for the chattering phenomenon, goes to zero. The second idea tries to determine online the adequate switching control gain, just large enough to overcome disturbance effects, in order to reduce the chattering amplitude [13–17].
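The boundary-layer approach mentioned above can be illustrated numerically. The sketch below is a toy example with invented constants: it integrates first-order reaching dynamics and counts sign changes of the sliding variable (a crude proxy for chattering) under the discontinuous sign law versus the continuous saturation law.

```python
import numpy as np

phi = 0.1                                    # boundary-layer thickness (invented)

def corrective_sign(s):
    return np.sign(s)                        # discontinuous switching term

def corrective_sat(s):
    return np.clip(s / phi, -1.0, 1.0)       # continuous inside |s| < phi

def sign_flips(corrective, s0=0.9876, K=5.0, dt=1e-3, n=2000):
    """Integrate the reaching dynamics s' = -K * corrective(s) with Euler
    steps and count how many times s changes sign."""
    s, prev, flips = s0, s0, 0
    for _ in range(n):
        s = s + dt * (-K * corrective(s))
        if s * prev < 0:
            flips += 1
        prev = s
    return flips

print(sign_flips(corrective_sign), sign_flips(corrective_sat))  # many vs. zero
```

Inside the layer the saturation law behaves like a high-gain linear feedback, so s decays smoothly to zero; the sign law keeps overshooting the surface at every step, which is exactly the discretized picture of chattering, while the layer thickness phi sets the residual tracking error traded for its removal.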
Among the different approaches found in the literature, this paper focuses on the algorithm developed by Y. Yildiz et al. in reference [11]. This choice is justified by the simplicity of design and the ease of practical implementation of this control algorithm, which rests on strong mathematical foundations. Moreover, the closed-loop system can achieve high tracking robustness while eliminating the harmful effects of the chattering phenomenon. The control synthesis was realized in two phases. First, the corrective control shape was adjusted to a continuous smooth function using



the Lyapunov stability theorem, where the time derivative of the Lyapunov candidate function was preselected to satisfy a particular quadratic form. Second, an adaptive linear neural network (ADALINE) was used to estimate online the equivalent control in the neighborhood of the sliding manifold through an adequate self-tuning mechanism. To demonstrate the robust control performance of the proposed algorithm, several numerical simulations were carried out for controlling the joint angular positions of a PUMA 560 robot arm.

) $!"#$//-# J-3(0!

where: −

2016

where,

, and have multiple negative real roots. To remove chattering, let us choose the following Lyapunov candidate function (6)

To make the time derivative of (6) negative definite and satisfies some preselected form, we have to find the adequate control input. Equating the time derivative of this Lyapunov function to a negative definite function of the form,

Consider the non linear system governed by the state space model:

(1)

N° 4

(7) ^ ˆ { ZZ where, matrix to satisfy Lyapunov conditions. Now, taking the time derivative of (6) and replacing it into (7), the following requirement is found, (8)

denotes the output vector;

−

For 0 ˆ Z { condition given by the equation below:

{ is the state vector, with the system relative degree; is an unknown, continuous and bounded − function; is the input matrix whose elements are − continuous and bounded and, − is an unknown, bounded disturbance. Both f(x) and d(x, t) satisfy the matching conditions and all their components are bounded |fi(x}~ € Mi and |di(x, t}~ € N . If assume that yd  =y1d, ‌, ymd]T represents the known desired trajectories, the control objective for Z ‚„} Z { as:

(2)

(9) † { Z conditions is given by

(10) It is clear that the control law does not contain any discontinuous term. Therefore, the chattering phenomenon is perfectly eliminated. However, the socalled equivalent control ueq is unknown since f and d are unknown and not measurable. Therefore, it will be estimated using an ADALINE neural network presented in Figure below,

where: (3)

(4) † Z ‚„} { ‡ ‡ in the errors space as follows: Fig. 1. Proposed Neural Network scheme

(11)

(5)

where, etj is the jth row of et, and the wij refer to network weights. ‰ ^ Z Z { neural network that minimize the error function cost Articles

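To make the self-tuning mechanism concrete, the following sketch implements a single ADALINE estimator with a gradient-descent weight update in the spirit of the rules above. The class name, the learning-rate value and the scalar `gb` standing in for the G·B(x) term are illustrative assumptions, not the paper's exact tuning.

```python
import numpy as np

class Adaline:
    """Minimal ADALINE equivalent-control estimator, cf. eq. (11)."""

    def __init__(self, n_inputs, eta=0.05):
        self.w = np.zeros(n_inputs)  # input weights w_ij
        self.b = 0.0                 # bias term w_i0
        self.eta = eta               # learning rate (assumed value)

    def estimate(self, e):
        """Weighted sum of the error-space inputs plus the bias."""
        return float(self.w @ e + self.b)

    def update(self, e, sigma, gb=1.0):
        """One gradient step driven by the sliding variable sigma;
        the bias uses the same rule without the input factor."""
        grad = self.eta * sigma * gb
        self.w += grad * e
        self.b += grad
```

With such an estimator in place, a scalar control loop would combine it with the continuous corrective term, e.g. `u = net.estimate(e) + inv_gb * D * sigma`, where `inv_gb` and `D` are hypothetical gains for the sketch.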



Due to the simplicity of the selected NN structure, the online learning procedure can easily be calculated using the simple back-propagation gradient descent algorithm, (13) and (14). The existence and stability of the global minima, and the stability of the sliding manifold when the error-function minima are reached, were proven in [11]:

Δwij = η σᵀGBi(x) etj (13)

where Bi(x) is the ith column of the matrix B(x) and η is the learning rate. For the bias terms wi0, the weight update can be computed using the same procedure:

Δwi0 = η σᵀGBi(x) (14)

Notice that the control design does not require the knowledge of the vectors f and d. So, from a control point of view, they can be considered as unknown functions satisfying the particular conditions mentioned above.

Robot Manipulator Mathematical Model

The PUMA 560 manipulator (Fig. 2), powered by DC motors, is modeled by the following nonlinear dynamic equations [18, 19, 21]:

M(q)q̈ + h(q, q̇) + τf = τ,  V = Ri + L(di/dt) + KbNq̇ (15)

Fig. 2. PUMA 560 robot manipulator

where τ denotes the actuators torques vector, τf the actuators friction torques, and q, q̇ and q̈ denote the joints angular positions, velocities and accelerations. [q̇q̇] and [q̇²] are notation for the n(n−1)/2-vector of velocity products and the n-vector of squared velocities. M(q) is the inertia matrix, C(q) the Coriolis torques matrix, B(q) the centrifugal torques matrix, and g(q) the gravity torques vector. L and R are, respectively, the armature windings inductance and resistance diagonal matrices, i is the armature current vector, V denotes the actuators control voltage, N is the gears ratios diagonal matrix, Kb is the back e.m.f. constants diagonal matrix, and Kt is the motors torques constants diagonal matrix.

In order to facilitate the control task, we propose to simplify (15) by neglecting the inductances of the actuators armatures. Then, the currents vector expression becomes:

i = R⁻¹(V − KbNq̇) (16)

Finally, system (15) is reduced to:

M(q)q̈ + h(q, q̇) + τf = NKtR⁻¹(V − KbNq̇) (17)

where the combined term h(q, q̇) is given by:

h(q, q̇) = C(q)[q̇q̇] + B(q)[q̇²] + g(q) (18)

Friction is frequently modeled as [21]:

τf = Fv q̇ + τC sgn(q̇) (19)

where Fv is the viscous friction and τC denotes the Coulomb friction (20). To write system (17) in the form (1), we choose the following state vector x:

x = [qᵀ q̇ᵀ]ᵀ (21)
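As a concrete illustration of the reduction (15)-(17), the sketch below computes the joint acceleration of a single DC-motor-driven link with the armature inductance neglected. Every numeric parameter is a made-up placeholder for the sketch, not a PUMA 560 value, and the scalar velocity term only mimics the structure of (18).

```python
import numpy as np

def reduced_dynamics(q, qd, V, M=0.5, c=0.1, mgl=2.0,
                     N=60.0, Kt=0.05, Kb=0.05, R=2.0,
                     Fv=0.05, tC=0.02):
    """Joint acceleration of a 1-DOF motor-driven link (toy parameters)."""
    i = (V - Kb * N * qd) / R              # algebraic current, cf. (16)
    tau = N * Kt * i                       # motor torque through the gears
    tau_f = Fv * qd + tC * np.sign(qd)     # viscous + Coulomb friction, cf. (19)
    h = c * qd * abs(qd) + mgl * np.sin(q)  # velocity + gravity terms, cf. (18)
    return (tau - h - tau_f) / M           # reduced model, cf. (17)
```

Neglecting the inductance turns the electrical equation into an algebraic relation, so the state vector stays at [q, q̇] instead of carrying the motor current as an extra state.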



The output vector is defined as:

y = [q1, …, q6]ᵀ (22)

The input vector is:

u = V (23)

Hence, dynamic model (17) can be rewritten in the state space form (1), where:

ẋ = F(x) + H(x)u + d (24)

Fi(x) is the ith row of the vector F(x) defined as (25), and Hij(x) are the elements of the matrix H(x) defined as (26), (27) [21].

Simulation Results

Here, the proposed neuro-sliding mode controller, applied to the position control of the 6-DOF PUMA 560 robot arm, is tested by numerical simulations using Matlab/Simulink. First, the robot is controlled in joint space for a point-to-point motion using elliptic trajectories [22] as reference inputs. Then, control results in operational space (Cartesian space) are shown, using the kinematic model given in [19]. In order to test the robustness and the chattering rejection, wide parameter uncertainties were introduced into the manipulator nominal model (M̂ and ĥ are the nominal values of M and h), and a comparison with the conventional sliding mode control was performed. The classical SMC control law is defined as:

uSMC = ueq + udisc

where udisc is the discontinuous corrective term defined in (30). For checking the robustness of the controller, the disturbance torques di(t) from [19] were considered (28), where:

di(t) = 7.5 sin(4.3575t) + … + 3.5 sin(2.7075t) − 4.5 (29)

The proposed controller block scheme applied for the position control of the 6-DOF PUMA 560 robot arm is given in Fig. 3. The obtained results, using the 4th-order Runge-Kutta solver with a fixed step time of 0.001 s, are shown below. Reference trajectories in both joint space and operational space are defined as:
1. Joint space reference signals,

(31)
2. Operational space reference signals: the parametric representation of the butterfly curve [19] is used to generate the x-y references, (32).

Fig. 3. The proposed adaptive neuro-sliding mode control block scheme

Notice that the wrist joints q4, q5 and q6 were kept at zero.
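The exact butterfly parametrization used for the operational-space references comes from [19]; as a stand-in, the sketch below generates Fay's classic butterfly curve, scaled and offset so the curve lies in a workspace window similar to the one in the figures. The scale and offset values are assumptions.

```python
import numpy as np

def butterfly_reference(t, scale=0.02, x0=0.36, y0=0.21):
    """Butterfly-curve x-y reference (Fay's parametrization, rescaled)."""
    r = np.exp(np.cos(t)) - 2.0 * np.cos(4.0 * t) - np.sin(t / 12.0) ** 5
    return x0 + scale * r * np.sin(t), y0 + scale * r * np.cos(t)

# Sample one full period (12*pi, because of the sin(t/12) term).
t = np.linspace(0.0, 12.0 * np.pi, 2000)
xs, ys = butterfly_reference(t)
```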

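The fixed-step 4th-order Runge-Kutta update used for the simulations can be sketched as follows; the 0.001 s step size is the one quoted in the text, and the exponential-growth check below is only a self-test of the integrator.

```python
def rk4_step(f, t, x, h):
    """One fixed-step 4th-order Runge-Kutta update of dx/dt = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + h / 2.0, x + h / 2.0 * k1)
    k3 = f(t + h / 2.0, x + h / 2.0 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

Integrating dx/dt = x from x(0) = 1 over one second of 0.001 s steps should land very close to e, which is a quick way to validate the solver before plugging in the manipulator dynamics.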



Joint space control: Nominal model control without disturbance:


Fig. 4. Desired positions tracking for nominal model


Fig. 5. Tracking errors for nominal model


Fig. 6. Control inputs for nominal model



Fig. 7. Tracking errors for uncertain model with disturbance


Fig. 8. Control inputs for uncertain model with disturbance

Operational space control: Nominal model without disturbance:


Fig. 9. Desired position tracking for nominal model


Fig. 10. Tracking errors for nominal model


Fig. 11. Desired and actual output butterfly trajectory for nominal model

Uncertain model with disturbance:


Fig. 12. Tracking errors for uncertain model with disturbance


Fig. 13. Control inputs for uncertain model with disturbance


Fig. 14. Desired and actual output butterfly trajectory for uncertain model with disturbance

The comparison between the nominal and uncertain models (Figures 5 and 7, and Figures 10 and 12) shows that the tracking errors remain bounded near zero. Therefore, we can conclude that the tracking performance of the proposed controller is satisfactorily robust against disturbance effects and modeling errors (parametric uncertainties and neglected actuator dynamics). Figures 6 and 8 compare the control inputs of the adaptive NN SMC and the conventional sliding mode control. It is obvious that the harmful effects of the chattering phenomenon are completely removed by the proposed control voltage. Therefore, this solution significantly improves on the conventional sliding mode control in practical implementations. In addition, we observe that the magnitudes of the proposed control inputs are much lower, which leads to a smaller control energy. In conclusion, the simulation results confirm that the best compromise between tracking robustness and chattering elimination is ensured by the proposed adaptive NN sliding mode controller.

Conclusion

In this paper, an adaptive neural network controller based on sliding mode control has been successfully applied to robust trajectory tracking for the PUMA 560 robot arm. The simulation results show that the adaptive NN sliding mode controller can achieve very satisfactory chattering-free trajectory tracking compared to conventional SMC. In addition, the magnitudes of the control inputs were smaller than those of the classical scheme, which improves energy efficiency.

AUTHORS
– Department of Mechanical Engineering, Skikda University, 21000, Algeria. E-mail: ali.medjbouri@gmail.com.
– Department of Electrical Engineering, Skikda University, 21000, Algeria. E-mail: me_lamine@yahoo.fr.
*Corresponding author




REFERENCES
[1] Lee H., Nam D., Park C. H., "A Sliding Mode Controller Using Neural Networks for Robot Manipulator", ESANN'2004 Proceedings, Bruges (Belgium), April 28–30, 2004, 193–198.
[2] Shafiei S. E., Soltanpour M. R., "Neural Network Sliding-Mode-PID Controller Design for Electrically Driven Robot Manipulator", International Journal of Innovative Computing, Information and Control, vol. 7, no. 2, 2011, 511–524.
[3] Young K. D., Utkin V. I., Özgüner Ü., "A Control Engineer's Guide to Sliding Mode Control", IEEE Transactions on Control Systems Technology, vol. 7, no. 3, 1999, 328–342. DOI: 10.1109/87.761053.
[4] Ertugrul M., Kaynak O., Kerestecioglu F., "Gain adaptation in sliding mode control of robotic manipulators", International Journal of Systems Science, vol. 31, no. 9, 2000, 1099–1106.
[5] Slotine J. J., "The Robust Control of Robot Manipulators", The International Journal of Robotics Research, vol. 4, no. 2, 1985, 49–64. DOI: 10.1177/027836498500400205.
[6] Le T. D., Kang H. J., Suh Y. S., "Chattering-Free Neuro-Sliding Mode Control of 2-DOF Planar Parallel Manipulators", International Journal of Advanced Robotic Systems, vol. 10, 2013, 1–15. DOI: 10.5772/55102.
[7] Erbatur K., Kaynak O., "Use of Adaptive Fuzzy Systems in Parameter Tuning of Sliding-Mode Controllers", IEEE/ASME Transactions on Mechatronics, vol. 6, no. 4, 2001, 474–482. DOI: 10.1109/3516.974861.
[8] Ha Q. P., Nguyen Q. H., Rye D. C., Durrant-Whyte H. F., "Fuzzy Sliding-Mode Controllers with Applications", IEEE Transactions on Industrial Electronics, vol. 48, no. 1, 2001, 38–46. DOI: 10.1109/41.904548.
[9] Yu X., Kaynak O., "Sliding-Mode Control with Soft Computing: A Survey", IEEE Transactions on Industrial Electronics, vol. 56, no. 9, 2009, 3275–3285. DOI: 10.1109/TIE.2009.2027531.
[10] Sahamijoo A., Piltan F., Mazloom M., Avazpour M., Ghiasi H., Sulaiman N., "Methodologies of Chattering Attenuation in Sliding Mode Controller", International Journal of Hybrid Information Technology, vol. 9, no. 2, 2016, 11–36. DOI: 10.14257/ijhit.2016.9.2.02.
[11] Yildiz Y., Šabanović A., Abidi K., "Sliding-Mode Neuro-Controller for Uncertain Systems", IEEE Transactions on Industrial Electronics, vol. 54, no. 3, 2007, 1676–1685. DOI: 10.1109/TIE.2007.894719.
[12] Lin S. W., Chen C. S., "Robust adaptive sliding mode control using fuzzy modeling for a class of uncertain MIMO nonlinear systems", IEE Proceedings – Control Theory and Applications, vol. 149, no. 3, 2002, 193–201. DOI: 10.1049/ip-cta:20020236.
[13] Hoang D. T., Kang H. J., "Fuzzy Neural Sliding Mode Control for Robot Manipulator", Lecture Notes in Computer Science, vol. 9773, 2016, 541–550. DOI: 10.1007/978-3-319-42297-8_50.
[14] …, "… Based Sliding Mode Control of a Lower Limb Exoskeleton Suit", Journal of Mechanical Engineering, vol. 60, no. 6, 2014, 437–446. DOI: 10.5545/sv-jme.2013.1366.
[15] Huang K., Zuo S., "Neural Network-based Sliding Mode Control for Permanent Magnet Synchronous Motor", The Open Electrical & Electronic Engineering Journal, vol. 9, 2015, 314–320.
[16] Namazil M. M., Rashidil A., S-Nejadl S. M., Ahn J. W., "Chattering-Free Robust Adaptive Sliding-mode Control for Switched Reluctance Motor Drive", IEEE Transportation Electrification Conference and Expo, Asia-Pacific (ITEC), June 1–4, 2016, Busan (Korea), 474–478.
[17] Chu Y., Fei J., "Adaptive Global Sliding Mode Control for MEMS Gyroscope Using RBF Neural Network", Mathematical Problems in Engineering, vol. 2015, 1–9. DOI: 10.1155/2015/403180.
[18] Armstrong B., Khatib O., Burdick J., "The Explicit Dynamic Model and Inertial Parameters of the PUMA 560 Arm", 1986 IEEE International Conference on Robotics and Automation, San Francisco (CA, USA), April 7–10, vol. 3, 1986, 510–518.
[19] …, "PUMA 560 … Trajectory Control using Genetic Algorithm, Simulated Annealing and Generalized Pattern Search Techniques", World Academy of Science, Engineering and Technology, vol. 41, 2008, 800–809.
[20] Kim K. J., Park J. B., Choi Y. H., "Chattering Free Sliding Mode Control", SICE-ICASE International Joint Conference, Busan (Korea), October 18–21, 2006, 732–735.
[21] Corke P., "Visual Control of Robots: High-Performance Visual Servoing", Research Studies Press, 1996.
[22] Biagiotti L., Melchiorri C., "Trajectory Planning for Automatic Machines and Robots", Springer, 2008.



Designing Social Robots for Interaction at Work: Socio-Cognitive Factors Underlying Intention to Work with Social Robots

Submitted: 7th October 2016; accepted: 2nd December 2016

DOI: 10.14313/JAMRIS_4-2016/28

Abstract:
This paper discusses the effects of robot design (machine-like, humanoid, android) and users' gender on the intention to work with social robots in the near future. For that purpose, the theoretical framework afforded by the theory of planned behavior (TPB) is used. Results showed effects for robot design and users' gender: the more human-like the robot, the lower the intention to work with it, and female participants showed a lower intention to work with social robots. These effects are mediated by the variables of the TPB. Perceived behavioral control and subjective norm are the main predictors of the intention to work with social robots in the near future.

Keywords: social robots, intention to work, social robots at work, robot design, gender, theory of planned behavior

Introduction

Late 20th- and early 21st-century society has witnessed a phenomenal increase in the computational power of electronic devices, accompanied by a significant reduction in production costs [1]. This changed the outlook on where and how these devices could assist their users. The field of robotics is no stranger to this change. Robots are no longer "caged" inside factories; they are growing in autonomy and interaction competence, multiplying in form, and playing an increasingly larger role in daily life [2]. Of particular interest to this paper is the concept of the social robot at work. A social robot can be broadly defined as a robot with a high level of autonomy, capable of interacting with people, following contextually correct social norms, attentive to gaze and emotional cues, and able to adapt its responses to the user's specific traits and personality (see [2–5] for a more thorough discussion of the definition). Social robots differ from the lay representation of a robot as a high-tech industrial machine [6] in that they are "designed to engage people in an interpersonal manner, often as partners, in order to achieve social or emotional goals" [7]. Despite their diversity of forms, social robots share this focus on interpersonal interactions. As such, the successful deployment of social robots requires a broader focus of analysis, in order to account for future users' attitudes, beliefs and expectations, and how they will impact human-robot interaction [8].

The purpose of this paper is to study the interplay between robot design (machine-like, humanoid and android), users’ gender and individual intention to work with a social robot in the near future. The following sections review the state of current research.

Robot Design

Social robots are developed under the assumption that people will apply social norms when interacting with them [7]. With this in mind, robot designers have tried to integrate human physical traits (e.g. eyes, mouth, limbs) and psychological traits (e.g. attention, voice tone) in an attempt to build better interaction metaphors. Some studies dedicated to the subject are reviewed below. Research by DiSalvo et al. [9] on the effects of the robot's physical appearance showed that the presence of a nose, eyelids and mouth are the traits that most increased the perceived humanness of a robot's head. Blow et al. [10] compared various robot smiles, reporting that expressions with a natural transition time were preferred by the participants. Lee et al. [11] and Walters et al. [12] studied the perception of robot height, concluding respectively that participants preferred a robot with a height similar to theirs, as this allowed eye contact, and that taller robots were perceived as more human-like and conscientious. Moreover, Walters et al. [12] also studied the effect of general robot design (machine-like vs humanoid), concluding that robots with a more human-like design were perceived as more intelligent. Robot "gender" was also found to affect user behavior. Powers et al. [13] had their participants discuss dating preferences with either a male or a female version of a robot, and found that participants spent more time talking with the opposite-gender robot. Eyssel et al. [14] found that participants formed a more positive image of the same-gender robot, reporting more psychological closeness. These results, however, should bear in mind the findings of [15], which suggest that participants not only prefer the robot that matches their "personality" style, but also tend to attribute to the robot "personality" traits similar to their own.
Research on the effects of the robot's voice shows a preference for robots with a human voice [16], and an increased task performance when participants had a robot whispering cues [17]. As for voice tone, Niculescu et al. [18], comparing two robots with a female appearance, report a preference for the robot with a high-pitched voice.



Salem et al. [19] focused on multimodal communication, studying the effects of voice and gestures on the perceived human-likeness and likability of a humanoid robot. They concluded that the combined use of gestures and voice increased likability and the future intention to use the robot. Ham et al. [20] studied the effects of gestures and gaze, finding that when combining gestures and gaze the robot was perceived as more persuasive. Bartneck et al. [21] identified a relation between a robot's perceived animacy and perceived intelligence. However, the use of a human-like design does not always facilitate interaction. For instance, people who thought that humanoid robots were the most acceptable robot design for house chores also reported being uncomfortable with the idea of interacting with them [22, 23]. In another study, Broadbent et al. [24] asked participants to imagine either a robot with human form or a machine-like robot. Afterwards they had their blood pressure measured by a robot and reported their emotional state. Participants who had imagined the robot with human form showed greater increases in blood pressure readings and reported more negative emotions. These feelings of eeriness toward humanoid robots have been described as the Uncanny Valley effect (see [25] for a review of the concept and theoretical models). That is, as a robot increases in human resemblance so does likeability, until a point where this resemblance induces feelings of eeriness and dread. Research on this subject suggests a link between human appearance and eeriness [26–28] and has identified what seem to be the evolutionary [29] and developmental [30] roots of this dread response. In short, designing robots with human-like traits can enhance their interactive and social proficiency. Also, different degrees of human likeness seem to impact potential users' expectations and behavior differently. However, this approach should be cautious, since robots with human resemblance may arouse some anxiety in their users.

User Gender

If technology per se can be regarded as gender neutral, its use is clearly embedded in social conventions and norms that prescribe how men and women should think, feel and behave towards technology [31]. Research results have underlined the role played by socio-cognitive factors in the observed gender differences in technology use. Gefen and Straub [32] compared female and male beliefs about e-mail and e-mail use, using the technology acceptance model (TAM; [33]). They found that female and male participants held different beliefs about e-mail's social presence, usefulness and ease of use. Women reported a higher sense of social presence and usefulness, while men reported a higher sense of ease of use. Interestingly, no differences were found in terms of actual e-mail use. Venkatesh and Morris [34], also using TAM, compared the usage of a software application over a period of 5 months. After controlling for the effects of income, occupation, education level, and prior experience with computers,

Articles

VOLUME 10,

N° 4

2016

they found that men's use was determined by the perception of usefulness, while women's use was determined by perceived ease of use and subjective norm. Venkatesh et al. [35] used the theory of planned behavior (TPB; [36]) to study the introduction of a new software application over a period of 5 months. After controlling for the effects of income, organization position, education and computer self-efficacy, they found that while men's use was predicted by their attitude towards using the software application, women's use was predicted by perceived behavioral control and subjective norm. Long-term use was correlated with early use behavior, underlining the importance of early evaluations. These gender differences are also visible in people's understanding of robots. Piçarra et al. [6] found that male and female participants, although sharing the social representation of the robot as a technological machine, associated it with different contexts (industrial vs. domestic robot). Also, while female participants associated the idea of the robot with help at home (domestic robot), male participants associated it with unemployment. Kuo et al. [37] studied people's reactions to healthcare robots by using a robot to measure blood pressure. They found that male participants had a more positive attitude towards healthcare robots, with no differences by age group. Eyssel et al. [16] compared two robots with a gender-neutral look, using either a masculine or a feminine voice uttered with a human or robotic tone. They identified interaction effects between the gender of the robot's voice and the gender of the participant. Female participants showed a preference for the robot with the female voice, reporting higher levels of psychological closeness and anthropomorphization. Male participants showed the same preference towards the robot with a male voice. These effects were not noticed with the robot with a "robotic" voice.
In short, the variable gender seems to account for differences in both how technology is used and the perception of its usefulness. These effects seem to extend to human-robot interactions, suggesting that men and women not only perceive social robots differently, but also interact with them differently.

Theory of Planned Behavior (TPB)

Despite its productivity potential, technology is only useful as long as it is used. Some authors estimate that 50% to 75% of the difficulties of implementing technological solutions at work may stem from human factors [38], hence the importance of understanding users' behaviors. Despite the common belief that someone's predisposition towards something is a sure indicator of future behavior, scientific research has shown attitudes to be poor predictors of specific behaviors [36]. To deal with this problem, Ajzen and Fishbein [39] proposed that the proximal cause of behavior is behavioral intention (BI), with attitudes being a distal cause. Intention is then an indication of the effort a person is willing to put in to perform a certain behavior. Intentions imply some form of planning and temporal framing, and they are associated



with a reasonable level of confidence in the capacity to perform the action [40]. The stronger the intention, the more likely the performance of a certain behavior. Intention is thus the central element of the TPB, but it is not the only one, since performing a behavior also depends on the availability of resources (personal and/or material). This evaluation of available resources is labelled perceived behavioral control (PBC). PBC is the perception of how easy or difficult it will be to perform a particular behavior, and includes not only perceived obstacles and strengths, but also past experiences [36]. PBC can have a direct effect on behavior, but can also have an indirect effect on behavior via intentions. The other two elements of the TPB are attitudes and subjective norms. As mentioned, attitudes per se have proven to be unreliable predictors of behavior. Nonetheless, they play a role in behavior. Ajzen [36, p. 191] puts it this way: "In the case of attitudes toward a behavior, each belief links the behavior to a certain outcome, or to some other attribute such as the cost incurred by performing the behavior. Since the attributes that come to be linked to the behavior are already valued positively or negatively, we automatically and simultaneously acquire an attitude toward the behavior." Subjective norms assess the person's beliefs about significant others' opinions and judgments, that is, whether they think the person should or should not perform a given action. It is a measure of social compliance [36]. Figure 1 shows a diagram of the TPB.


Fig. 1. The theory of planned behavior

The TPB has received ample empirical confirmation of its usefulness, both in theoretical and applied fields of research. Ajzen [36] reviews empirical evidence for the prediction of behavior using BI, PBC and the TPB. Ajzen [41–43] reports on recent theoretical and empirical progress, while responding to some criticisms of the model. The TPB has been found useful in the prediction of, among other behaviors, exercise (see [44] and [45] for reviews), health-related behaviors (see [46] for a meta-analysis), buying behavior [47] and consumer adoption intentions [48]. The TPB has also been applied to behavior related to the use of technology, namely, intention to use information systems [49], online shopping [50, 51], e-commerce adoption [52], and digital piracy [53–57]. In short, according to the TPB, behavior is a function of intention, which is the combined expression of attitude, perceived behavioral control and subjective norms. None of these variables has fixed effects, their weights depending on the context and the behavior. The intention to work with social robots, according to the TPB, is the result of the interplay of attitudes, subjective norms and perceived behavioral control. All other factors, like socio-demographic status, gender and personality traits, should have their effects on behavioral intention mediated by attitudes, subjective norms and perceived behavioral control [36].

Summary and Overview of the Study

The previous sections presented examples of research on the effects of robot design on users' perception, gender differences in technology use, and the role of socio-cognitive factors, namely those defined by the TPB, in the prediction of technology use. Based on those results, this study tests the following hypotheses:
1) The level of human-likeness of the social robot will have an effect on participants' intention, attitude, perceived behavioral control and subjective norm.
2) Male and female participants will have different levels of intention, attitude, perceived behavioral control and subjective norm.
3) The components of the TPB (attitude, perceived behavioral control and subjective norm) will predict the intention to work with a social robot.
4) The effects of robot design and participant gender on the intention to work with the social robot are mediated by the components of the TPB.

Participants and Procedure

In order to conduct this experiment, the participants were randomly assigned to one of three conditions: machine-like robot (video of Snackbot), humanoid robot (video of Asimo) and android robot (video of Actroid DER). The use of indirect methods, like video, to study human-robot interaction (HRI) is quite common [58–61] and has proved to be a valid method [62]. The sample used in this research is composed of 90 students from the University of the Algarve, Portugal. Of these, 51 are women (Mage = 21.78; SDage = 4.87) and 39 are men (Mage = 21.87; SDage = 4.86). Sixty-five are humanities students and 25 are science students. Forty-five had already seen the presented type of robot; 45 had never seen it. Thirty participants were assigned to each condition. After being informed about the conditions of participation and the confidentiality of the data collected, the participants were shown the video. The video lasted about 1 minute and 50 seconds and was projected on the wall facing the subjects using a ceiling projector. Before viewing the videos, participants received the following instructions: "In the future it will be common to interact with robots. This will happen in public spaces (factories, offices, museums) and in our houses. We are going to show you a video with one of these social robots. Your task is to imagine yourself working with this robot in the future and forming an opinion about it". During the video a female voice narrated the following: "Hello, my name is Snackbot (or Asimo, or Actroid) and I'm


a social robot. A social robot is a robot created to interact with people in a natural fashion. In order to do that, my creators included in my design human characteristics like eyes, mouth, language and the capacity to understand and perform social behaviors. In the future I will be performing such jobs as hotel receptionist, personal trainer or office clerk. Some even say that in the future I will be responsible for caring for the elderly. Goodbye and see you in the future." Both the instructions and the video dialogue underlined how working with a social robot will be different from working with current-day industrial robots, by focusing on the socio-affective aspects of these interactions. After watching the video, participants were asked to complete a questionnaire. At the end of the experiment they were debriefed about the research project. The measures used are presented in the next section.

Materials

In order to assess the effects of different robot designs, videos of the following robots were used: Snackbot, Asimo and Actroid DER (see Figure 2).

Fig. 2. Robot design, from left to right: Snackbot (machine-like design), Asimo (humanoid design), Actroid DER (android design)

Snackbot (machine-like design) is an assistive social robot developed at Carnegie Mellon University. The wheels set on its base allow the robot to move autonomously. The robot is about 142 cm high, with a round-shaped head that serves as housing for the visual and verbal hardware. An LED display is used to simulate the mouth. The robot is able to produce simple verbal interactions. Although its arms are not fully functional, they carry a tray that allows Snackbot to transport objects from one place to another (Lee et al. [11]). Asimo (humanoid design) is a humanoid bipedal social robot developed by Honda Corporation¹. With a height of 130 cm, Asimo can move autonomously and use its hands to pick up and use objects. Actroid DER (android design) is a full-body human-like female social robot with a corporate look (i.e., make-up, black blazer, crème trousers, white shirt, and collar)². During its speech, the Actroid displayed nonverbal behaviors (e.g., arm movements, blinks), was shown from different angles, and looked straight ahead at the participant.


¹ http://asimo.honda.com/
² http://www.kokoro-dreams.co.jp/


In order to measure the intention to work with the social robot in the near future, the measures proposed by the TPB were used. Scale items were based on [63]. Behavioral Intention (BI). Measures the effort a person is willing to invest in order to work with the social robot presented in the video in the near future (e.g., I’m willing to try hard to work with this robot in the future: disagree/agree). It is composed of 5 items, measured on a 7-point Likert-type scale (1 = minimum to 7 = maximum). Higher scores indicate a stronger intention to work with the social robot presented in the video. Attitude towards working with the social robot (ATW). Measures a person’s attitude towards working with the social robot presented in the video (e.g., working with this robot will be useless/useful). It is composed of 10 items, measured on a 7-point Likert-type scale (1 = minimum to 7 = maximum). Higher scores indicate a more positive attitude towards working with the social robot presented in the video. Subjective norms (SN). Measures the person’s beliefs about significant others’ attitudes towards him or her working with the social robot presented in the video in the future (e.g., people close to me would approve/disapprove that I work with robots in the future). It is composed of 3 items, measured on a 7-point Likert-type scale (1 = minimum to 7 = maximum). Higher scores indicate more favorable subjective norms towards working with the social robot presented in the video. Perceived behavioral control (PBC). Measures the extent to which a person sees himself or herself as capable of operating the social robot presented in the video (e.g., It would be easy to work with this robot: disagree/agree). It is composed of 7 items, measured on a 7-point Likert-type scale (1 = minimum to 7 = maximum). Higher scores indicate a higher level of perceived behavioral control in operating the social robot presented in the video.
In order to control for the effects of video presentation and previous familiarity with social robots, the following measures were used: Animacy (ANI). This measure is adapted from the Godspeed scale, developed by Bartneck, Kulic, Croft and Zoghbi [64], and is comprised of 5 items. Items (e.g., inert/interactive) are rated from 1 (strongly disagree) to 7 (strongly agree) on a Likert-type scale. Familiarity with robots. In order to control for the effects of previous familiarity with robots, participants were asked if they were familiar with the type of robot presented in the video.
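A quick sketch of how Likert scales of this kind are typically scored (item responses averaged into one scale score per participant). The response matrix and the reverse-scored column below are invented for illustration; they are not the study's data.

```python
import numpy as np

# Invented 1-7 Likert responses: rows = participants, columns = items
# (the BI scale described above has 5 items).
responses = np.array([
    [5, 6, 4, 5, 6],
    [2, 1, 3, 2, 2],
    [7, 6, 7, 7, 6],
])

def scale_score(items, reverse=()):
    """Average item responses into one scale score per participant.

    `reverse` lists 0-based columns of negatively worded items,
    which are flipped on the 1-7 range (8 - x) before averaging.
    """
    items = np.asarray(items, dtype=float).copy()
    for col in reverse:
        items[:, col] = 8 - items[:, col]
    return items.mean(axis=1)

bi = scale_score(responses)  # higher scores = stronger intention
```

Averaging (rather than summing) keeps every scale on the original 1-7 metric, which is how the descriptive statistics in Table 1 read.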

Results

Analysis of the data indicates that it meets the assumptions of normality, skewness (Skew.) and kurtosis (Kurt.). Data were also screened for missing values and outliers. No variable had more than 2% missing values, and all were missing at random; these values were replaced using the expectation-maximization method. No outliers were identified. The analysis was performed using IBM SPSS Statistics (Version 20). Table 1 shows the descriptive statistics and reliability for the scales used. All measures presented adequate reliability (Cronbach’s α ranging from .79 to .95; see Table 1).

Table 1. Descriptive statistics and scales reliability

        Range   Mean   Std. Dev.   α     Skew.   Kurt.
  BI    1-7     3.14   1.36        .88    0.05   -0.71
  ATW   1-7     4.26   1.46        .95   -0.30   -0.50
  PBC   1-7     4.44   1.33        .89   -0.62   -0.11
  SN    1-7     3.71   1.39        .79    0.07   -0.40

Notes: * p < .05; ** p < .01; *** p < .001. BI = Intention; ATW = Attitude towards working; PBC = Perceived behavioral control; SN = Subjective norm.
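The scale reliabilities reported with Table 1 are, by convention, Cronbach's α values. A minimal numpy sketch of that statistic, run here on an invented item matrix rather than the paper's raw responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Invented 3-item scale answered by four respondents
alpha = cronbach_alpha([[5, 6, 5], [2, 3, 2], [7, 6, 7], [4, 4, 5]])
```

Values in the .79–.95 range, as in Table 1, indicate that the items of each scale vary together rather than independently.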

Effects of Robot Design and Participant Gender

In order to measure the effects of robot design and participant gender (hypotheses 1 and 2), a multivariate analysis of variance (MANOVA) was conducted on the variables BI, ATW, PBC, and SN. MANOVA is a generalized form of univariate analysis of variance (ANOVA) that uses the covariance between outcome variables to compare the means of two or more dependent variables at the same time [65]. Although the results of Box’s test indicate that the assumption of equality of covariance matrices is met, the results of Levene’s test suggest that the assumption of equality of variances is not met for PBC. A post hoc analysis with the Games-Howell procedure was therefore used for this variable. All experimental groups meet the assumptions of normality, skewness and kurtosis. Figures 3 and 4 show the scale means by robot type and participant gender, respectively. Because the assumption of homogeneity of variance-covariance was violated, Pillai’s trace was analyzed. Results indicated statistically significant effects for robot design (V = 0.18, F(8, 164) = 2.09, p = .039). Analysis of the univariate tests suggests that there are statistically significant differences for the variables BI (F(2, 84) = 3.53, p = .034) and PBC (F(2, 84) = 5.90, p = .004). Robot design had no effect on the variables ATW (F(2, 84) = 2.55, p = .084) and SN (F(2, 84) = 0.17, p = .847). That is, robot design seems to affect both the intention to work with the social robot and the perceived ability to do it. There is a statistically significant difference (p < .05) in the means of BI and PBC between Snackbot and Actroid, with the former presenting higher means for the two variables. Although no statistically significant differences were found in the means of BI and PBC for Snackbot vs. Asimo and Asimo vs. Actroid, trend analysis indicates a significant linear trend in the effect of robot type for BI (F(1, 87) = 6.75, p = .011) and PBC (F(1, 87) = 10.79, p = .001). That is, the more human-like the robot’s appearance, the lower the participant’s perceived behavioral control and intention to work with the social robot in the near future. Participants state both a stronger intention and a higher perceived behavioral control towards working with Snackbot. Although analysis of Pillai’s trace suggests no statistically significant differences between genders (V = 0.10, F(4, 81) = 2.15, p = .082), the univariate tests show statistically significant differences for the variables BI (F(1, 84) = 5.03, p = .028), PBC (F(1, 84) = 5.60, p = .020) and SN (F(1, 84) = 5.84, p = .018). No effects were detected on ATW (F(1, 84) = 0.96, p = .330). That is, although participants share a positive attitude towards working with social robots in the near future, female participants have a lower intention to work with social robots, perceive themselves as less able to do it, and think that working with social robots is less socially acceptable than men do. No interaction effects were found between robot design and gender (p = .226).

Fig. 3. Scale means by robot design

Fig. 4. Scale means by participant gender
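Pillai's trace, the MANOVA statistic reported above, can be computed directly from the between- and within-group sums-of-squares-and-cross-products (SSCP) matrices. A self-contained numpy sketch with invented data (two dependent variables, two groups of three), not the study's 90-participant dataset or its SPSS output:

```python
import numpy as np

def pillai_trace(Y, groups):
    """Pillai's trace V = trace(H (H + E)^-1) for a one-way MANOVA.

    Y: (n, p) matrix of dependent variables; groups: length-n group labels.
    H and E are the between- and within-group SSCP matrices.
    """
    Y = np.asarray(Y, dtype=float)
    groups = np.asarray(groups)
    grand = Y.mean(axis=0)
    p = Y.shape[1]
    H, E = np.zeros((p, p)), np.zeros((p, p))
    for g in np.unique(groups):
        Yg = Y[groups == g]
        d = (Yg.mean(axis=0) - grand)[:, None]
        H += len(Yg) * (d @ d.T)   # between-group dispersion
        R = Yg - Yg.mean(axis=0)
        E += R.T @ R               # within-group dispersion
    return np.trace(H @ np.linalg.inv(H + E))

# Two invented dependent variables; the second group is shifted upward
Y = np.array([[1, 2], [2, 3], [3, 1], [2, 3], [3, 4], [4, 2]], dtype=float)
V = pillai_trace(Y, [0, 0, 0, 1, 1, 1])
```

Pillai's trace is preferred here because it is comparatively robust to violations of the homogeneity of the variance-covariance matrices, which is exactly the violation Levene's test flagged for PBC.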

Predicting Intention to Work with Social Robots

The third research hypothesis concerned the effectiveness of the TPB model in predicting the intention to work with a social robot in the near future. Table 2 shows the correlations between the studied variables.

Table 2. Correlations for the variables of the TPB

            1        2        3
  1-BI
  2-ATW     .45***
  3-PBC     .65***   .54***
  4-SN      .40***   .14      .35**

Notes: * p < .05; ** p < .01; *** p < .001.
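The entries of Table 2 are Pearson correlations, and the significance stars follow from the associated t statistic, t = r·sqrt((n − 2)/(1 − r²)). A sketch with invented scale scores for seven participants (the paper's correlations were computed over the full sample of 90):

```python
import numpy as np

# Invented BI and PBC scale scores; not the study's data.
bi  = np.array([3.2, 4.0, 2.8, 5.1, 3.6, 2.5, 4.4])
pbc = np.array([4.4, 5.0, 3.9, 6.0, 4.1, 3.6, 5.2])

n = len(bi)
r = np.corrcoef(bi, pbc)[0, 1]          # Pearson correlation, as in Table 2
t = r * np.sqrt((n - 2) / (1 - r**2))   # t statistic behind the stars (df = n - 2)
```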

Table 3. Predictors of intention to work with a social robot in the near future

                        B      Std. Error   Beta    t       Sig.    CI
  Model 1
    Constant            3.67   .24                  15.19   .000    [3.19, 4.15]
    Snack vs. Asimo     -.71   .34          -.25    -2.07   .041    [-1.38, -0.03]
    Snack vs. Actroid   -.89   .34          -.31    -2.60   .011    [-1.56, -0.21]
  Model 2
    Constant            3.40   .26                  12.86   .000    [2.87, 3.92]
    Snack vs. Asimo     -.73   .33          -.25    -2.18   .032    [-1.39, -0.06]
    Snack vs. Actroid   -.87   .33          -.30    -2.59   .011    [-1.53, -0.20]
    Female vs. Male     .62    .27          .23     2.26    .026    [0.08, 1.17]
  Model 3
    Constant            -.17   .51                  -.33    .744    [-1.18, 0.84]
    Snack vs. Asimo     -.47   .26          -.16    -1.81   .074    [-0.99, 0.05]
    Snack vs. Actroid   -.20   .28          -.07    -.74    .458    [-0.75, 0.34]
    Female vs. Male     .14    .22          .05     .64     .521    [-0.30, 0.59]
    ATW                 .13    .09          .14     1.54    .126    [-0.04, 0.30]
    PBC                 .50    .10          .49     4.76    .000    [0.29, 0.71]
    SN                  .18    .08          .19     2.21    .030    [0.02, 0.35]

Notes: * p < .05; ** p < .01; *** p < .001. Model 1: F(2, 87) = 3.77, p = .027, R² = .08, Adjusted R² = .06. Model 2: F(3, 86) = 4.34, p = .007, R² = .13, Adjusted R² = .10. Model 3: F(6, 83) = 13.78, p < .001, R² = .50, Adjusted R² = .46.


All variables have positive significant correlations with the intention to work with social robots in the near future. Moreover, ATW and PBC are positively correlated, and PBC and SN are positively correlated. A multiple regression analysis was conducted using BI as the dependent variable. In order to check for the effects of robot design and participant gender, variables were entered using the hierarchical method: robot design was entered in the first block, gender in the second block, and the variables of the TPB in the third block. Table 3 shows the results of the multiple regression analysis. The results indicate that robot design is a statistically significant negative predictor of the intention to work with social robots, accounting for 6% of the explained variance. That is, the more human-like the robot design, the lower the intention to work with it. Analysis of Model 2 shows that participant gender is also a predictor of the intention to work with social robots, with the model predicting 10% of the variance. Male participants have a stronger intention to work with social robots than female participants. Model 3 accounts for 46% of the observed variance. Analysis of the individual contributions of these variables shows that PBC (β = .49) and SN (β = .19) are statistically significant predictors of the intention to work with social robots. That is, the more people rate themselves as capable of working with social robots, and the more they think that working with social robots is viewed as an acceptable task by their significant others, the more they intend to work with a social robot. Adding the variables of the TPB to the regression model reduced the effects of robot design and participant gender, which supports hypothesis 4: all factors external to the model have their contribution to behavioral intention mediated by the model’s variables.
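The hierarchical-entry logic (design block, then gender, then the TPB variables, comparing R² at each step) can be sketched with ordinary least squares. The data below are simulated, and the coefficients are not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 90

# Simulated predictors mirroring the three blocks: two robot-design dummies,
# one gender dummy, and three TPB scores; none of this is the study's data.
design = rng.integers(0, 2, size=(n, 2)).astype(float)
gender = rng.integers(0, 2, size=(n, 1)).astype(float)
tpb = rng.normal(4.0, 1.0, size=(n, 3))                      # ATW, PBC, SN
bi = 3.0 + tpb @ np.array([0.1, 0.5, 0.2]) + rng.normal(0, 1.0, n)

def r_squared(X, y):
    """R^2 of an ordinary least squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_block1 = r_squared(design, bi)
r2_block2 = r_squared(np.column_stack([design, gender]), bi)
r2_block3 = r_squared(np.column_stack([design, gender, tpb]), bi)
# Nested models can only gain R^2: r2_block1 <= r2_block2 <= r2_block3
```

The quantity of interest at each step is the increment in R² over the previous block, which is how Table 3's Models 1-3 are read.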

Discussion

In this paper the authors set out to study the interplay between robot design (machine-like, humanoid and android), user gender and the socio-cognitive factors defined by the TPB in the building of a person’s intention to work with a social robot in the near future. As hypothesized, the level of human-likeness of the social robot’s design affects participants’ intention to work with a social robot in the near future (BI) and their perceived capacity to do it (PBC). These results partly support hypothesis 1, since no effects were found for attitude towards working with a social robot and subjective norm. The results suggest that participants not only would prefer to work with a less human-like social robot, but also would feel more confident of their capability of working with a social robot if the robot is less human-like. Although the results seem in line with the Uncanny Valley hypothesis, two aspects must be noted. First, the Uncanny Valley hypothesis suggests an acceptance curve that drops abruptly when the robot looks too human. Given this, it would be reasonable to expect Asimo to have higher means for BI and PBC than Snackbot, and


to see a drop in the means’ values as we move from Asimo to Actroid. However, the trend analysis suggests a steady decrease in the means as we move from Snackbot, to Asimo, to Actroid. Second, this effect is limited to BI and PBC, with no effects found for ATW and SN. Given the evaluative character of attitudes and the social-normative character of subjective norm, it would be reasonable to expect these variables to be sensitive to robot design. Although these results generally confirm that the use of human traits in the design of social robots is useful (Snackbot presents a head, with what resembles a pair of eyes and a mouth), they also underline the need for further research on the interplay between social robot design and socio-cognitive factors. Hypothesis 2 was partly confirmed, with participant gender affecting BI, PBC and SN. Male participants presented significantly higher means for these variables, while no differences were found for ATW. Unlike in other studies reviewed previously, the differences found between female and male participants are quantitative, not qualitative. That is, female participants present lower means than male participants for the three variables. Hypothesis 3 was confirmed, with the TPB explaining a considerable amount of the variance of the intention to work with a social robot in the near future. As posited by the model, the components contribute differently to the intention to work with social robots in the near future, with PBC and SN showing the largest effects. These results are of particular interest because, more than the personal evaluation of the value of working with robots, it is the perception of the ability to do it, and the social norms surrounding the idea of working with robots, that support the intention to work with them.
Thus, the deployment of human-robot solutions in the work environment should account not only for individual factors, like competence, but also for socio-normative factors, like work colleagues’ acceptance of social robots. Finally, hypothesis 4 was also confirmed. The effects of robot design and participant gender are mediated by the variables of the TPB. This means that these effects are filtered through a set of personal representations and beliefs about the value of robots, how capable a person is of using them, and the social norms regarding their role. As such, objective changes in robot design are bound to produce variable effects in user perceptions of the robot’s qualities.
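The mediation claim above corresponds to a coefficient-attenuation check: a predictor's effect on intention shrinks once the mediating variable is added to the regression. A simulated sketch of that pattern (invented variables, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Simulated mediation: X (e.g. a design feature) shapes M (e.g. PBC),
# and M, not X directly, drives behavioral intention Y.
x = rng.normal(size=n)
m = 0.9 * x + rng.normal(scale=0.5, size=n)
y = 0.8 * m + rng.normal(scale=0.5, size=n)

def ols(X, y):
    """OLS coefficient vector with an intercept prepended."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta

total = ols(x[:, None], y)[1]                # effect of X ignoring M
direct = ols(np.column_stack([x, m]), y)[1]  # effect of X controlling for M
# direct shrinks toward zero relative to total when M mediates X's effect
```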

Acknowledgements

This paper is financed by National Funds provided by FCT – Foundation for Science and Technology through project UID/SOC/04020/2013.

Authors

Nuno Piçarra – University of Algarve, Faro, Portugal. E-mail: nuno.psicologia@outlook.pt

Jean-Christophe Giger – Department of Psychology and Sciences of Education, University of Algarve and


Research Centre for Spatial and Organizational Dynamics – CIEO. E-mail: jhgiger@ualg.pt Grzegorz Pochwatko* – Virtual Reality and Psychophysiology Lab, Institute of Psychology, Polish Academy of Sciences, Warsaw, 00-378, Poland. E-mail: grzegorz.pochwatko@psych.pan.pl

– Institute of Automatic Control and Robotics (IAiR), Faculty of Mechatronics, Warsaw University of Technology, Warsaw, Poland. E-mail: j.mozaryn@mchtr.pw.edu.pl *Corresponding author

References

[1] Thrun S., “Towards A Framework for Human-Robot Interaction”, Human-Computer Interaction, vol. 19, no. 1–2, 2004, 9–24. DOI: 10.1080/07370024.2004.9667338.
[2] Bartneck C., Forlizzi J., “A Design-Centred Framework for Social Human-Robot Interaction”, ROMAN 2004: 13th IEEE International Workshop on Robot and Human Interactive Communication, 2004, 591–594. DOI: 10.1109/ROMAN.2004.1374827.
[3] Breazeal C., “Toward sociable robots”, Robotics and Autonomous Systems, vol. 42, no. 3, 2003, 167–175. DOI: 10.1016/S0921-8890(02)00373-1.
[4] Dautenhahn K., “Socially intelligent robots: dimensions of human–robot interaction”, Philosophical Transactions of the Royal Society B: Biological Sciences, vol. 362, no. 1480, 2007, 679–704. DOI: 10.1098/rstb.2006.2004.
[5] Fong T., Nourbakhsh I., Dautenhahn K., “A survey of socially interactive robots”, Robotics and Autonomous Systems, vol. 42, no. 3, 2003, 143–166. DOI: 10.1016/S0921-8890(02)00372-X.
[6] Piçarra N., Giger J.-C., Pochwatko G., Gonçalves G., “Making sense of social robots: A structural analysis of the layperson’s social representation of robots”, European Review of Applied Psychology/Revue Européenne de Psychologie Appliquée, vol. 66, no. 6, 2016, 277–289. DOI: 10.1016/j.erap.2016.07.001.
[7] Breazeal C., Takanishi A., Kobayashi T., “Social robots that interact with people”. In: B. Siciliano, O. Khatib (Eds.), Springer Handbook of Robotics, Springer, 2008, 1349–1369. DOI: 10.1007/978-3-540-30301-5_59.
[8] Bernstein D., Crowley K., Nourbakhsh I., “Working with a robot. Exploring relationship potential in human–robot systems”, Interaction Studies, vol. 8, no. 3, 2007, 465–482. DOI: 10.1075/is.8.3.09ber.
[9] DiSalvo C. F., Gemperle F., Forlizzi J., Kiesler S., “All robots are not created equal: the design and perception of humanoid robot heads”, Proceedings of the 4th Conference on Designing Interactive Systems: Processes, Practices, Methods, and Techniques, 2002, 321–326. DOI: 10.1145/778712.778756.


[10] Blow M., Dautenhahn K., Appleby A., Nehaniv C., Lee D., “Perception of Robot Smiles and Dimensions for Human-Robot Interaction Design”, ROMAN 2006: The 15th IEEE International Symposium on Robot and Human Interactive Communication, 2006, 469–474. DOI: 10.1109/ROMAN.2006.314372.
[11] Lee M. K., Forlizzi J., Rybski P. E., Crabbe F., Chung W., Finkle J., Glaser E., Kiesler S., “The Snackbot: documenting the design of a robot for long-term Human-Robot Interaction”, Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction, 2009, 7–14. DOI: 10.1145/1514095.1514100.
[12] Walters M., Koay K., Syrdal D., Dautenhahn K., Boekhorst R., “Preferences and perceptions of robot appearance and embodiment in Human-Robot Interaction trials”, Artificial Intelligence and Simulation of Behaviour: AISB’09 Convention, 2009, 136–143. Retrieved from http://hdl.handle.net/2299/3795.
[13] A. Powers, A. Kramer, S. Lim, J. Kuo, S. Lee, S. Kiesler, “Eliciting Information from people with a gendered humanoid robot”, IEEE International Workshop on Robot and Human Interactive Communication ROMAN, 2005, 158–163. DOI: 10.1109/ROMAN.2005.1513773.
[14] F. Eyssel, D. Kuchenbrandt, S. Bobinger, L. de Ruiter, F. Hegel, “If You Sound Like Me, You Must Be More Human: On the Interplay of Robot and User Features on Human-Robot Acceptance and Anthropomorphism”, HRI ’12: Proceedings of the 7th Annual ACM/IEEE International Conference on Human-Robot Interaction, 2012, 125–126. DOI: 10.1145/2157689.2157717.
[15] D. Syrdal, K. Dautenhahn, S. Woods, M. Walters, K. Koay, “Looking good? Appearance preferences and robot personality inferences at zero acquaintance”, Proceedings of AAAI Spring Symposia, 2007, 86–92. Retrieved from https://www.aaai.org/Papers/Symposia/Spring/2007/SS-07-07/SS07-07-019.
[16] F. Eyssel, D. Kuchenbrandt, F. Hegel, L. de Ruiter, “Activating elicited agent knowledge: How robot and user features shape the perception of social robots”, IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, 2012, 851–857. DOI: 10.1109/ROMAN.2012.6343858.
[17] K. Nakagawa, M. Shiomi, K. Shinozawa, R. Matsumura, H. Ishiguro, N. Hagita, “Effect of Robot’s Whispering Behavior on People’s Motivation”, International Journal of Social Robotics, vol. 5, no. 1, 2013, 5–16. DOI: 10.1007/s12369-012-0141-3.
[18] A. Niculescu, B. Dijk, A. Nijholt, S. L. See, “The influence of voice pitch on the evaluation of a social robot receptionist”, International Conference on User Science and Engineering (i-USEr), 2011. DOI: 10.1109/iUSEr.2011.6150529.
[19] M. Salem, F. Eyssel, K. Rohlfing, S. Kopp, F. Joublin, “To Err is Human(-like): Effects of Robot Gesture on Perceived Anthropomorphism and Likability”, International Journal of Social Robotics, vol. 5, no. 3, 2013, 313–323. DOI: 10.1007/s12369-013-0196-9.
[20] J. Ham, R. H. Cuijpers, J.-J. Cabibihan, “Combining Robotic Persuasive Strategies: The Persuasive Power of a Storytelling Robot that Uses Gazing and Gestures”, International Journal of Social Robotics, vol. 7, no. 4, 2015, 479–487. DOI: 10.1007/s12369-015-0280-4.
[21] C. Bartneck, T. Kanda, O. Mubin, A. Al Mahmud, “Does the design of a robot influence its animacy and perceived intelligence?”, International Journal of Social Robotics, vol. 1, no. 2, 2009, 195–204. DOI: 10.1007/s12369-009-0013-7.
[22] J. Carpenter, J. Davis, N. Erwin-Stewart, T. Lee, J. Bransford, N. Vye, “Gender representation and humanoid robots designed for domestic use”, International Journal of Social Robotics, vol. 1, 2009. DOI: 10.1007/s12369-009-0016-4.
[23] J. Carpenter, M. Eliot, D. Schultheis, “Machine or friend: understanding users’ preferences for and expectations of a humanoid robot companion”, Proceedings of the 5th Conference on Design and Emotion, 2006. Retrieved from http://citeseerx.ist.psu.edu.
[24] E. Broadbent, Y. Lee, R. Stafford, I. Kuo, B. MacDonald, “Mental Schemas of Robots as More Human-Like Are Associated with Higher Blood Pressure and Negative Emotions in a Human-Robot Interaction”, International Journal of Social Robotics, vol. 3, no. 3, 2011, 291–297. DOI: 10.1007/s12369-011-0096-9.
[25] J. Kätsyri, K. Förger, M. Mäkäräinen, T. Takala, “A review of empirical evidence on different Uncanny Valley hypotheses: support for perceptual mismatch as one road to the valley of eeriness”, Frontiers in Psychology, vol. 6, no. 390, 2015. DOI: 10.3389/fpsyg.2015.00390.
[26] K. MacDorman, “Mortality Salience and the Uncanny Valley”, Proceedings of the 5th IEEE-RAS International Conference on Humanoid Robots, 2005, 399–405. DOI: 10.1109/ICHR.2005.1573600.
[27] J. Seyama, R. Nagayama, “The Uncanny Valley: Effect of Realism on the Impression of Artificial Human Faces”, Presence, vol. 16, no. 4, 2007, 337–351. DOI: 10.1162/pres.16.4.337.
[28] K. MacDorman, R. Green, C. Ho, C. Koch, “Too real for comfort? Uncanny responses to computer generated faces”, Computers in Human Behavior, vol. 25, no. 3, 2009, 695–710. DOI: 10.1016/j.chb.2008.12.026.
[29] S. Steckenfinger, A. Ghazanfar, “Monkey visual behavior falls into the Uncanny Valley”, Proceedings of the National Academy of Sciences of the United States of America, vol. 106, no. 43, 2009, 18362–18366. DOI: 10.1073/pnas.0910063106.
[30] D. Lewkowicz, A. Ghazanfar, “The Development of the Uncanny Valley in Infants”, Developmental Psychobiology, vol. 54, no. 2, 2012, 124–132. DOI: 10.1002/dev.20583.
[31] S. Turkle, “Computational reticence: why women fear the intimate machine”. In: C. Kramarae (ed.), Technology and Women’s Voices, New York: Pergamon Press, 1986, 40–61.


[32] D. Gefen, D. Straub, “Gender difference in the perception and use of E-Mail: an extension to the technology acceptance model”, MIS Quarterly, vol. 21, no. 4, 1997, 389–400. DOI: 10.2307/249720.
[33] F. Davis, “Perceived usefulness, perceived ease of use and user acceptance of information technology”, MIS Quarterly, vol. 13, no. 3, 1989, 319–340. DOI: 10.2307/249008.
[34] V. Venkatesh, M. G. Morris, “Why don’t men ever stop to ask for directions? Gender, social influence, and their role in technology acceptance and usage behavior”, MIS Quarterly, vol. 24, no. 1, 2000, 115–139. DOI: 10.2307/3250981.
[35] V. Venkatesh, M. G. Morris, P. L. Ackerman, “A Longitudinal Field Investigation of Gender Differences in Individual Technology Adoption Decision-Making Processes”, Organizational Behavior and Human Decision Processes, vol. 83, no. 1, 2000, 33–60. DOI: 10.1006/obhd.2000.2896.
[36] I. Ajzen, “The theory of planned behavior”, Organizational Behavior and Human Decision Processes, vol. 50, no. 2, 1991, 179–211. DOI: 10.1016/0749-5978(91)90020-T.
[37] I. H. Kuo, J. M. Rabindran, E. Broadbent, Y. I. Lee, N. Kerse, R. M. Q. Stafford, B. A. MacDonald, “Age and gender factors in user acceptance of healthcare robots”, The 18th IEEE International Symposium on Robot and Human Interactive Communication ROMAN, 2009, 214–219. DOI: 10.1109/ROMAN.2009.5326292.
[38] C. A. Chung, “Human issues influencing the successful implementation of advanced manufacturing technology”, Journal of Engineering and Technology Management, vol. 13, no. 3–4, 1996, 283–299. DOI: 10.1016/S0923-4748(96)01010-7.
[39] I. Ajzen, M. Fishbein, “Attitude–Behavior relations: a theoretical analysis and review of empirical research”, Psychological Bulletin, vol. 84, no. 5, 1977, 888–918. DOI: 10.1037/0033-2909.84.5.888.
[40] M. Perugini, R. Bagozzi, “The distinction between desires and intentions”, European Journal of Social Psychology, vol. 34, no. 1, 2004, 69–84. DOI: 10.1002/ejsp.186.
[41] I. Ajzen, “The theory of planned behaviour: Reactions and reflections”, Psychology & Health, vol. 26, no. 9, 2011, 1113–1127. DOI: 10.1080/08870446.2011.613995.
[42] I. Ajzen, “The theory of planned behavior”. In: P. Lange, A. Kruglanski, E. Higgins (Eds.), Handbook of Theories of Social Psychology, vol. 1, London, UK: Sage, 2012, 438–459.
[43] I. Ajzen, “The theory of planned behaviour is alive and well, and not ready to retire: a commentary on Sniehotta, Presseau, and Araújo-Soares”, Health Psychology Review, vol. 9, no. 2, 2014, 131–137. DOI: 10.1080/17437199.2014.883474.
[44] C. Blue, “The predictive capacity of the theory of reasoned action and the theory of planned behavior in exercise behavior: An integrated literature review”, Research in Nursing & Health, vol. 18, no. 2, 1995, 105–121. DOI: 10.1002/nur.4770180205.


[45] G. Godin, “Theories of reasoned action and planned behavior: usefulness for exercise promotion”, Medicine and Science in Sports and Exercise, vol. 26, no. 11, 1994, 1391–1394. DOI: 10.1249/00005768-199411000-00014.
[46] G. Godin, G. Kok, “The theory of planned behavior: a review of its applications to health-related behaviors”, American Journal of Health Promotion, vol. 11, no. 2, 1996, 87–98. DOI: 10.4278/0890-1171-11.2.87.
[47] M. Cannière, P. Pelsmacker, M. Geuens, “Relationship Quality and the Theory of Planned Behavior models of behavioral intentions and purchase behavior”, Journal of Business Research, vol. 62, no. 1, 2009, 82–92. DOI: 10.1016/j.jbusres.2008.01.001.
[48] S. Taylor, P. Todd, “Decomposition and crossover effects in the theory of planned behavior: A study of consumer adoption intentions”, International Journal of Research in Marketing, vol. 12, no. 2, 1995, 137–155. DOI: 10.1016/0167-8116(94)00019-K.
[49] K. Mathieson, “Predicting User Intentions: Comparing the Technology Acceptance Model with the Theory of Planned Behavior”, Information Systems Research, vol. 2, no. 3, 1991, 173–191. DOI: 10.1287/isre.2.3.173.
[50] T. Hansen, J. M. Jensen, H. S. Solgaard, “Predicting online grocery buying intention: a comparison of the theory of reasoned action and the theory of planned behavior”, International Journal of Information Management, vol. 24, no. 6, 2004, 539–550. DOI: 10.1016/j.ijinfomgt.2004.08.004.
[51] M.-H. Hsu, C.-H. Yen, C.-M. Chiu, C.-M. Chang, “A longitudinal investigation of continued online shopping behavior: An extension of the theory of planned behavior”, International Journal of Human-Computer Studies, vol. 64, no. 9, 2006, 889–904. DOI: 10.1016/j.ijhcs.2006.04.004.
[52] E. Grandón, S. Nasco, P. Mykytyn, “Comparing theories to explain e-commerce adoption”, Journal of Business Research, vol. 64, no. 3, 2011, 292–298. DOI: 10.1016/j.jbusres.2009.11.015.
[53] A. d’Astous, F. Colbert, D. Montpetit, “Music Piracy on the Web – How Effective Are Anti-Piracy Arguments? Evidence from the theory of planned behavior”, Journal of Consumer Policy, vol. 28, no. 3, 2005, 289–310. DOI: 10.1007/s10603-005-8489-5.
[54] T. Kwong, M. Lee, “Behavioral intention model for the exchange mode internet music piracy”, Proceedings of the 35th Hawaii International Conference on System Sciences, HICSS, 2002, 2481–2490. DOI: 10.1109/HICSS.2002.994187.
[55] C. Liao, H.-N. Lin, Y.-P. Liu, “Predicting the Use of Pirated Software: A Contingency Model Integrating Perceived Risk with the Theory of Planned Behavior”, Journal of Business Ethics, vol. 91, no. 2, 2010, 237–252. DOI: 10.1007/s10551-009-0081-5.
[56] C. Yoon, “Theory of Planned Behavior and Ethics Theory in Digital Piracy: An Integrated Model”, Journal of Business Ethics, vol. 100, no. 3, 2011, 405–417. DOI: 10.1007/s10551-010-0687-7.
[57] C. Yoon, “Digital piracy intention: a comparison of theoretical models”, Behaviour & Information Technology, vol. 31, no. 6, 2012, 565–576. DOI: 10.1080/0144929X.2011.602424.
[58] S. N. Woods, M. L. Walters, K. L. Koay, K. Dautenhahn, “Methodological Issues in HRI: a comparison of live and video based methods in robot to human approach direction trials”, ROMAN 2006: The 15th IEEE International Symposium on Robot and Human Interactive Communication, 2006, 51–58. DOI: 10.1109/ROMAN.2006.314394.
[59] G. Pochwatko et al., “Polish Version of the Negative Attitude Toward Robots Scale (NARS-PL)”, Journal of Automation, Mobile Robotics & Intelligent Systems, vol. 9, no. 3, 2015, 65–72. DOI: 10.14313/JAMRIS_3-2015/25.
[60] H. Cramer, N. Kemper, A. Amin, V. Evers, B. Wielinga, “‘Give me a hug’: The effects of touch and autonomy on people’s responses to embodied social agents”, Computer Animation and Virtual Worlds, vol. 20, no. 2–3, 2009, 437–445. DOI: 10.1002/cav.317.
[61] K. Tsui, M. Desai, H. Yanco, H. Cramer, N. Kemper, “Measuring attitudes towards telepresence robots”, International Journal of Intelligent Control and Systems, vol. 16, no. 2, 2011, 1–11. Retrieved from http://www.ezconf.net/newfiles/IJICS/203/nars-telepresence-IJICS-cameraReady.pdf.
[62] N. Piçarra, J.-C. Giger, G. Gonçalves, G. Pochwatko, “Validation of the Portuguese Version of the Negative Attitudes Towards Robots Scale”, European Review of Applied Psychology/Revue Européenne de Psychologie Appliquée, vol. 65, no. 2, 2015, 93–104. DOI: 10.1016/j.erap.2014.11.002.
[63] M. Perugini, M. Conner, “Predicting and understanding behavioral volitions: the interplay between goals and behaviors”, European Journal of Social Psychology, vol. 30, no. 5, 2000, 705–731. DOI: 10.1002/1099-0992(200009/10)30:5<705::AID-EJSP18>3.0.CO;2-#.
[64] C. Bartneck, D. Kulic, E. Croft, S. Zoghbi, “Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots”, International Journal of Social Robotics, vol. 1, no. 1, 2009, 71–81. DOI: 10.1007/s12369-008-0001-3.
[65] A. Field, “Discovering Statistics Using IBM SPSS Statistics”, SAGE, 2014, 645–649.


Journal of Automation, Mobile Robotics & Intelligent Systems

VOLUME 10,

N° 4

2016


Submitted: 24th May 2016; accepted: 6th December 2016

Ali Saidi Sief, Alain Pruski, Abdelhak Bennia

DOI: 10.14313/JAMRIS_4-2016/29

Abstract: The accessibility evaluation of the built environment is required when a person's physical capacities no longer correspond to the requirements of the habitat, which generally occurs after an accident. For a person with disabilities, the inner accessibility of the habitat is a highly important factor that allows him/her to live and work independently. This paper presents a new approach to determine the accessibility of handling elements, such as doors, windows, etc., inside the habitat for the wheelchair user, thus allowing housing professionals to assess the needed changes in terms of accessibility. The idea is to introduce a new computational approach to evaluating the performance of these elements against the wheelchair user's capacity. The presented approach simulates the wheelchair user's behavior when he/she operates a handling element, in order to determine the dimensions/positions of the wheelchair clearance space and the optimal handle grip heights, while considering the wheelchair arrival direction and respecting the joint limit constraints of the person's upper body and the non-holonomy constraints of the wheelchair.

Keywords: accessibility, handling element, wheelchair, person with disability, wheelchair clearance space, environment rehabilitation, inverse kinematics

1. Introduction

Accessibility characterizes the objects, apartments, information and technologies that persons with physical limitations can use. It is an important factor for people with disabilities, enabling them to live and work independently and minimizing the cost of personal care. For wheelchair users (the subject of our proposed approach), rehabilitation represents the adaptation of tools, processes and systems in order to customize them and to help these users overcome obstacles. For this category of persons, environments may create obstacles if they are incompatible with the technical aids used (for example, a wheelchair that cannot execute the maneuver needed to cross a door), or may facilitate inclusion if their designs are more flexible. For that, environments must be well adapted to wheelchair users, not only in terms of the quality of ground surfaces, which must be flat and smooth, but also at the level of navigation, which must allow wheelchair users to move freely within them. The universal design

principle cannot consider the needs of each handicap at the same time. Our goal is therefore to increase the autonomy of wheelchair users in their house and to reduce accident risk. In this work we propose a new numeric simulation tool, to be used by professional designers to assess the accessibility of handling elements (doors, windows, etc.) considering the person's capabilities and the wheelchair design. The accessibility test is done by computing the dimensions of the wheelchair maneuvering clearance required at the handling element and the optimum handle height, which ensure easy and smooth navigation for those persons.

2. Related Works and Context

There is at present no consensus on a numerical methodology to be used for accessibility assessments of interior habitat. In most industrialized countries, standards or recommendations are available to guide building professionals in the design of new buildings. Among these laws, we can cite the Disability Discrimination Act (DDA) of the United Kingdom (2005) [1]. In France (2005), a certified handicap law [2] aimed to make products and the built environment accessible and usable for people with disabilities. In the United States (1997), the Americans with Disabilities Act Accessibility Guidelines (ADAAG) [3] contain "prescriptive" specifications for determining the existence of a valid wheelchair accessible route, as well as other objectives for disabled access. In recent years, the accessibility field has seen new progress in the proposed numerical approaches. Looking at the results of the old accessibility-assessment laws, we note that they do not comply strictly with the requirements of people with disabilities. Here, we quote some research intended to develop numerical approaches using the capacities of virtual reality (VR) and virtual prototyping, which includes 3D modeling and simulation systems as simulation tools. The HM2PH project (Habitat Modular and Mobile for Persons Handicap) was developed by experts; its objective is to specify mobile living-environment functionality, open to the outside, for disabled persons, with a major concern for access to enhanced autonomy through appropriate devices (technical aids, domotics, etc.) [4], [5]. In [6] and [7] the authors proposed new tools based on VR to determine the accessible circulation zones of wheelchair users within a domestic habitat, which improves the user's action capability within the domestic environment.



In [8], the authors proposed a method to determine whether there is a usable wheelchair accessible route in a facility, using a motion-planning technique to predict the performance of a facility design against the requirements of a building code. In [9], the authors presented approaches to assess the accessibility of the interior of an environment for wheelchair users. In [10], the authors proposed a new approach to determine the human reach envelope, which helps designers determine zones with different discomfort levels; this capability is a powerful tool for ergonomic designers. In [11], the authors proposed an approach to designing wayfinding aids fitted to people's needs, to facilitate environmental knowledge acquisition and improve wayfinding performance. In [12] we find a study of the usage and accessibility problems faced by disabled users (whether in pain or not) of assistive devices, and of the physical barriers that limit their mobility; it also recognizes the socio-cultural practices excluding them from the design process of such devices. In the same field of research there is the study of Theresa Marie Crytzer et al. [13], which describes the results of focus groups held during an Independent Wheelchair Transfer (IWT) workgroup. The idea consisted in connecting three focus groups composed of experts in the field of assistive technology, by live web-based conferencing using Adobe Connect technology, to study the impact of the built environment (toilet seat, bath bench, car seat) on the wheelchair transfer process within the community, wheelchair users' needs during transfers in the built environment, and future research directions. In a recent study (2014), Myriam Winance [14] suggests a way of changing the concept of Universal Design in order to take into account uniqueness and diversity, and so to allow the shaping of abilities.
In the following sections, the interest of this approach, the problem addressed and the contribution that we provide to the field of accessibility for people with reduced mobility will be presented. Generally, the accessibility assessment prescriptions adopted in many countries are manual approaches, based on norms or recommendations. However, a manual prescription can be ambiguous and unduly restrictive in practice. In order to compensate for the measurement errors of these approaches, we propose a software numerical tool to assess accessibility inside the habitat. The objective of the present work is the accessibility assessment of handling elements, such as doors, windows, etc., in the indoor living space of wheelchair users, and the problem that we address is how to simulate virtual movements, physically feasible by the wheelchair user, within a 3D virtual environment.

3. Person-Wheelchair Kinematic Model Definition

To describe human movement, we use an open loop modeled by links and joints such as those used

Articles


in the robotic field. The numerical and kinematic model of the upper part of the body used in our simulation is the one proposed by [15]; it contains 21 degrees of freedom (DOF), from the bottom of the spine to the right hand, and seems the most suitable for our application (see Figure 1). Each joint variable is bounded by lower and upper constraint limits. All joints are modeled as rotary joints, and each contributes to movements in one or several planes. The position vector of the joint model of the upper body, described in terms of joint coordinates, is:

q = [q1, q2, ..., qn]T  (1)

where q ∈ Rn is called the (n×1) joint vector. These joint variables uniquely determine the configuration f(q) of the open articulated structure with n DOF and are called the generalized coordinates. The position vector of a point of interest attached to the hand frame can then be written with respect to the global frame {0} using the homogeneous transformation matrix (4×4) DHi defined by Denavit and Hartenberg [16]. The wheelchair is a non-holonomic vehicle, characterized by a non-holonomic constraint imposed on its displacements. This constraint indicates the tangent direction along the entire feasible trajectory and the limit of curvature of the trajectory. Generally, the position of a non-holonomic vehicle (rolling without sliding) is defined by two parameters (x, y) and one orientation parameter θ (see Fig. 1). The non-holonomic constraint indicates that the tangent of the path displacement and the vehicle direction coincide. In our simulation, only the obstacle avoidance and non-holonomic (position and orientation) constraints are considered. In order to respect the non-holonomy constraint, we distinguish two allowed displacements: going straight and turning. Adding the wheelchair parameters (x, y, θ) to the 21 DOF of the upper body, we obtain the 24-DOF model used in the simulation, described in Fig. 1.

4. Definitions

4.1. Feasible Displacement of a Non-holonomic Mobile Base

The term used in this text, "feasible wheelchair displacements", is equivalent to trajectory planning in the field of robotics. In robotics, a non-holonomic mobile base moves in a Euclidean space W (the workspace), represented as RN (R is the set of real numbers and N the spatial dimension). In the case of a wheelchair, the displacement assumes a two-dimensional space, where N = 2. The space W is populated with obstacles represented as B1, B2, ..., Bq. Moving feasibility is based on the generation of a configuration space from the geometric properties of the wheelchair, the obstacles Bi, the workspace W, and the geometric properties of the handling elements. Computing the feasible displacements of a wheelchair consists in ensuring successive displacements without collision during the operations on the handling elements (doors, windows, etc.) in a configuration space C. For a 2-dimensional space W, the dimension m of the configuration space C is 3: the wheelchair moves in the xy-plane (R2) and has three degrees of freedom, translations in the x and y directions and rotation θ. The obstacles Bi are changed into CBi in the C space by applying, for each orientation of the wheelchair, the Minkowski sum proposed by [17] and the Minkowski difference proposed by Svetlana [18], [19]. The definition of the configuration space transforms the problem of obstacle avoidance into the problem of the moving feasibility of a point.

4.2. The Importance of the Maneuvering Area at a Revolving Door: Size and Position

Revolving doors are among the most used. They consist of one or two leaves pivoting on a vertical axis (the hinges). The opening direction is fixed relative to the bulkhead or wall. Pull revolving doors denote pivoting doors that are pulled toward the user, while push revolving doors denote pivoting doors that are pushed away from the user; in the following we use these two terms to distinguish them. The disadvantage of this type of door is that it requires a large deflection area, but in the open position the passage is completely free. Depending on the building type and the type of door used, wheelchair clearance space is necessary on both sides to enable the wheelchair user to open, cross and close the door independently. This space is required in front of any door, gate, or door opening on common areas, any door of a collective room, and any hinged-leaf door of a public establishment, collective residential building or individual house.

4.3. Accessibility of a Handling Element in Relation to a Wheelchair User

Fig. 1. 24-DOF of the person-wheelchair couple used in the simulation

According to the Center of Expertise and Education on the Risks, the Environment, Mobility and Development (CEREMA) [20], a handling element is accessible to a wheelchair user if he/she is able to handle it independently. According to ADAAG [3], a handling element can be reached if there is a space around it containing a continuous, unobstructed and connected path allowing its manipulation. Manipulation is a generic term that concerns the grip with the objective of performing a movement function. For a door, we need to grasp it, push it, pull it or slide it, depending on the type of opening. This aspect is valid regardless of the type of handling element to manipulate. Both definitions adopted in France and the United States take only the wheelchair into account in simulation, while a handling operation requires the intervention of the upper body of the person (arms and trunk). Generally, a handling element is accessible if the person is able to manipulate it freely in a continuous, unobstructed and connected space.

5. Our Approach

Our approach is oriented specifically towards the accessibility evaluation of handling elements inside individual or public apartments for wheelchair users. The main objective of our algorithm is to generate successive configurations of the wheelchair-user couple that describe the handling operation while respecting the joint limits (see Table 3) and the non-holonomic constraints, respectively. Our application is part of the field of 3D human movement simulation and analysis; however, only the joint limit constraints of the person's upper body are taken into account. Our objective is to assess the accessibility of handling elements by computing the dimension and position of the wheelchair clearance space required at these elements, in order to propose appropriate modifications inside the habitat. We believe that if the upper-body articulated-structure postures are executed within the joint limit constraints, they are physically feasible by the person. We therefore generate only the postures that respect the joint limit and non-holonomic constraints. We suppose that handling elements are weightless, so dynamic constraints such as muscular energy, external loads, etc., are ignored [21], [22], [23]. The judgment of our results is done by taking into consideration only the joint limit constraints of the person's upper body and the non-holonomic constraints of the wheelchair. To solve our problem, we developed a simulation tool using Visual Studio C++. This tool is divided into three blocks. The first one is used to model the wheelchair mobility space, the person's upper body, the wheelchair (Section 3) and the 3D environment. The second one is used to fix the constraints of the simulation: the upper-body joint limits and the non-holonomy. The last block includes the algorithm and the different sub-blocks of the computation; it is considered as an interface between blocks one and two, which are its inputs.

5.1. Wheelchair Mobility Space

The wheelchair user moves parallel to the ground in a configuration space C = {x, y, θ}, where θ is the orientation of the wheelchair relative to the universal frame and (x, y) describes its position. The wheelchair mobility space, or configuration polygons, determines all the displacement areas of the wheelchair reduced to a point. This technique, well known in the field of mobile robotics and motion planning, such as proposed by Latombe [24] and by Pruski [25] for accessibility assessment, corresponds to computing the Minkowski sum and difference. In our application, the mobile device is a wheelchair, considered as a rectangle, and the reference point selected is the center of mass. The Minkowski sum and difference are applied to the polygon, which can be an obstacle to avoid or an envelope polygon corresponding to the world space in which the wheelchair can move. The mobility space corresponds to the space wherein the wheelchair, reduced to a point, can move for a single value of its orientation θ0. In our case, the polygon envelope, integrating the handling elements, is modified over time. The handling elements, corresponding to moving obstacles, affect the overall shape of the displacement space. So, a dynamic mobility space is defined according to the orientation of the chair and the position as well as the orientation of the handling elements. The blue space (see Fig. 3a) represents the dynamic mobility space Cd within which the wheelchair can move, for a given orientation of the wheelchair and different positions of the revolving doors. Figures 3b, 3c and 3d show the dynamic mobility space corresponding to the opening of a revolving door for different orientations of the wheelchair. The passage from configuration b to d requires feasible displacements of the wheelchair and a permanent contact between the hand of the person and the door

Fig. 2. Simulation tool



handle. In Figure 3, the wheelchair adapts its direction to follow the rotation of the door, which causes a change in the dynamic mobility space. Path-planning between the wheelchair configurations corresponding to Pinitial and Pfinal, using the generalized configuration-polygon technique, is not necessary in our application, because we are not trying to compute the trajectory connecting two configurations but only to check whether it exists. We consider that a handling element is accessible if the person is able to position his/her wheelchair in the minimum space needed to manipulate the element, regardless of the direction of arrival. To simulate the handling operation we developed the following algorithm.

5.2. Algorithm

The proposed algorithm is used to assess the accessibility of handling elements present inside individual or public buildings. Its main principle consists in simulating the feasible wheelchair-user configurations used to manipulate a handling element within the 3D environment. Table 1 shows the inputs and the outputs of the algorithm.

5.3. Algorithm Operations

The complete algorithm is shown below:
1. Initialize randomly the joint variables and the counters Cter1, Cter2; set Hn, Hi.
2. Do for each point Hi:
2.1. Cter1 = Cter1 + 1.
2.2. Solve the inverse kinematics in relation to the target point Hi of the dynamic hand path (call to the IAA algorithm).
2.3. If the target point Hi is not reachable, then write "target point is not reachable" and move to the next hand path point (go to line 2.1).
2.4. Else (the target point Hi is reachable):
2.4.1. Define the wheelchair configuration.
2.4.2. If the wheelchair configuration in the world frame is not feasible or the wheelchair displacement is not feasible, then write "target point is not reachable" and move to the next hand path point (go to line 2.1).
2.4.3. Else (the wheelchair configuration in the world frame is feasible and the wheelchair displacement is feasible): Cter2 = Cter2 + 1.
2.4.3.1. If Cter1 < Hn, move to the next hand path point (go to line 2.1).
2.4.3.2. Else (Cter1 >= Hn):
2.4.3.2.1. If Hn == Cter2, the dynamic element is accessible; write to the output file.
2.4.3.2.2. If Hn != Cter2, the dynamic element is not accessible; write to the output file.
3. While (stop conditions not verified).

Hn represents the number of hand target points, Cter1 is a counter of tested target points, and Cter2 is a counter of reachable target points (feasible wheelchair-user configurations).



Fig. 3. Dynamic mobility space: Example of opening a pull/push revolving door

The wheelchair-user configuration is kept only if the target point is reached by the hand of the person, and the configuration/displacement of the wheelchair in the world frame is feasible. The algorithm is fast and has no local minimum. The appropriate configurations of the upper body are computed from the hand (21st joint) to the 1st joint (see Fig. 3). The wheelchair configurations are computed according to the position/orientation of the hand by direct kinematics, which increases the speed of convergence. In the algorithm operations, we considered that the handling element is accessible if all hand path points are accessible. In fact, we divide the path created by the handling element at handle level into many adjacent points Hi (Pinitial to Pfinal) (see Fig. 3). The person should successively reach these points with his/her hand. At each point we compute the values of the joint variables and the (x, y, θ0) values that describe the wheelchair configurations/displacements in the mobility space, by minimizing the error function between the hand and the target point (Cter1). Cter2 is used to count the number of reachable points. Finally, we compare the value of Cter2 with that of Hn (the number of hand path points): if Cter2 equals Hn, the handling element is accessible; otherwise it is not. This condition is fixed to fully guarantee the accessibility of the handling element, but in some cases we can replace the value of Hn in the two last lines of the algorithm by a threshold. The decision in that case is made according to the number and positions of the inaccessible points, because the hand path points are very close together (on the order of a few centimeters): if an inaccessible point lies between two other accessible points, we can consider it accessible, and consequently the element is accessible. In the case of opening and closing a revolving door, the path to be executed corresponds to a semicircle. To open the door, the person must grasp the handle, turn the door around its pivot and move the wheelchair. The action carried out on the door requires, first, continuous contact between the hand of the person and the door handle (inverse kinematics); second, feasible wheelchair configurations; and finally, feasible wheelchair displacements. Accordingly, the algorithm functionality is divided into three principal operations:
– inverse kinematics of the upper body;
– computation of wheelchair configurations;
– computation of wheelchair configuration displacements.

Step 1: Inverse Kinematics
The first step determines the configuration of the upper part of the person's body. The methodology principle, detailed in the paper by Otmani and Moussaoui [26], is to optimize the error between the hand and the point to reach (the path between Pinitial and Pfinal) by changing incrementally the values of the joint variables. The computation of the inverse kinematics of the articulated structure is realized, not in

Tab. 1. Estimation/generation of wheelchair-user configurations

Inputs:
– Joint limit constraints of the articulated structure.
– Constraints related to the habitat geometry.
– Wheelchair dimensions and non-holonomic constraints.
– Characteristics of the handling element.
– Wheelchair dynamic mobility space C.
– Hand path points Pinitial to Pfinal.
– Acceptable error between hand and target point equal to 1 unit.

Algorithm: Estimation/generation of wheelchair-user configurations.

Outputs:
– Upper body configurations.
– Wheelchair configurations.

Note. L and U are the lower and upper joint limit values, respectively; C is the wheelchair mobility space; Pinitial and Pfinal represent the initial and final target hand points.





relation to a point but with respect to a surface that corresponds to the mobility space including the target point (Pinitial, ..., Pi, ..., Pfinal). The inverse kinematics allows us to compute the configuration of the articulated structure and the position, as well as the orientation, of the hand with respect to the universal frame.

Step 2: Wheelchair Configurations
When we determine the position of the hand by computing the inverse kinematics, we get all the joint variable values. They allow us to determine the position of the wheelchair by direct kinematics, assuming that the error between the hand and the target point is equal to zero. This method has the advantage of ensuring obstacle avoidance without using path-planning techniques.

Step 3: Wheelchair Configuration Displacements
The second advantage of this method is that it allows us to obtain the right configuration displacements without using path-planning techniques. Because we are not interested in the shape of the trajectory or its optimization, it is sufficient that the translations are feasible in the mobility space. To have feasible displacements of the wheelchair, the obstacle avoidance constraint and the non-holonomy constraint must be respected. The first constraint was already verified by the computation of the mobility space and by Step 2. The following property ensures that the displacements of the wheelchair remain feasible and respect the non-holonomy constraints:
• Property: In [27], Laumond proved that if two configurations belong to the same connected domain, then there exist feasible paths that connect them and respect the non-holonomic constraints.
According to this property, the translations of the wheelchair are feasible if the wheelchair configurations belong to the same connected mobility space. Our approach aims to determine the required wheelchair clearance space, which ensures the usability of the handling element. The verification of the wheelchair displacement feasibility is successively realized between each two feasible configurations of the wheelchair computed by Steps 1 and 2, for the following two reasons:
– The initial and final configurations of the wheelchair are not predefined at the start of
The verification of the wheelchair displacement feasibility is successively realized between each two feasible configurations of the wheelchair computed by steps 1 and 2 for the following two reasons: – The initial and final configurations of the wheelchair are not predefined at the start of

computation; that is why path-planning is not feasible in this case.
– The process of the accessibility evaluation of handling elements aims to check, at each position and orientation of the handling element, whether there are feasible wheelchair-user configurations.

Fig. 4. Wheelchair dimensions (Whh = 40 cm; WhW: width; WhL1: length on the ground; WhL2: length at seat level)

In this particular example, the wheelchair has the following dimension values: wheelchair width (WhW) = 60 centimeters (cm), wheelchair length on the ground (WhL1) = 90 cm and wheelchair length at seat level (WhL2) = 60 cm. We use this example to illustrate the different steps of the computation of the wheelchair minimum clearance space during the process of opening/crossing/closing a pull/push revolving door, and the suitable door-handle height interval values according to the person's capacities (see Table 3).

6. Results and Discussion

6.1. Revolving Door to Push: Results and Discussion

The corridor width relative to the push revolving door width must be sufficient to ensure an ample wheelchair-user clearance space. In fact, the wheelchair maneuvering space in front of doors depends not only on the wheelchair geometry, but also on the volume occupied by the person's arm while in contact with the door handle. The advantage of our approach is that it considers the person's upper body in the accessibility evaluation. Depending on the needs

of the experiment, it is possible to exploit certain parts of the upper-body structure without others. Figure 5 presents an example of a push revolving door that cannot be crossed because of inconvenient dimensions.

Fig. 5. Example of a push revolving door that cannot be crossed (labels: Door Width, Corridor Width; Not Feasible Maneuver)

6.1.1. Minimum Corridor Width
To ensure an appropriate and reasonable modification of the habitat for a wheelchair user in this case, we need to determine the minimum required corridor/door width values. In the initial stage we fix the door width at 88 cm, a value greater than WhW



which ensures a direct cross. Then we gradually reduce the corridor width from 190 cm (a value greater than the wheelchair diagonal value of 180 cm, ensuring a direct cross) to 60 cm (a value sufficiently

inferior to the wheelchair diagonal of 180 cm), checking at each corridor width value whether crossing the door is possible or not. The results of the simulations are presented in Fig. 6. The curve shown in Fig. 6a can be divided into two parts. The first one, corridor width values in the interval [190 cm, 90 cm], is the interval of values for which the wheelchair maneuver to cross the door is possible. The second one, [90 cm, 60 cm], contains the corridor width values for which the wheelchair maneuver is not possible. We observe that to cross a push revolving door of width 88 cm with a wheelchair of such dimensions (WhW = 60 cm, WhL1 = 90 cm and WhL2 = 60 cm), the corridor width value must be greater than or equal to 90 cm.

Fig. 6a. Feasibility to cross a revolving door to push in relation to the corridor width Lac (60 to 180 cm; the maneuver becomes feasible at Lac = 90 cm)

Varying the corridor width for an eventual modification may imply displacing the wall entirely, which is difficult to realize in practice; it would be more convenient to widen the doorway or just the relevant parts of the door. The execution time is a way of evaluating algorithm performance for real-time applications. Figure 6b shows the execution time of the process of crossing a push revolving door for several values of the corridor width. We note that the runtime decreases as the corridor width increases, and increases as the corridor width values approach the wheelchair width. When the wheelchair maneuver exists, the algorithm takes an average of 0.33 milliseconds (ms) to compute the appropriate wheelchair-user configuration. When the wheelchair maneuver does not exist, the algorithm takes an average of 87.50 ms to confirm that a wheelchair-user configuration is not realizable.

Fig. 6b. Execution time

6.1.2. Minimum Push Revolving Door Width (Corridor Width Value Fixed at 90 Centimeters)
In this step, we determine the minimum width value of the door guaranteeing the wheelchair maneuver, with a corridor width value equal to 90 cm, making the same computation as in the previous step. Figure 7a presents the door width values used. We note that the range of values [70 cm, 87 cm] contains no feasible maneuver of the wheelchair, because the area is not sufficient to turn the wheelchair from its horizontal position to the perpendicular position. When the door width value is equal to or greater than 88 cm, the wheelchair maneuver is realizable. So, we notice that the minimum sum of the door width and the corridor width required to cross a door in this case, with such wheelchair dimensions, is 178 cm. This value is computed with respect to the wheelchair design and the specific person's upper-body capabilities (see Table 3); our simulation tool allows these constraints to be changed easily according to the person and the needs of the experiment.

6.2. Revolving Door to Pull: Results and Discussion

To cross a pull revolving door with a wheelchair, the rectangle corresponding to the corridor width/length must be sufficiently large. In the previous case, we were interested in computing the minimum sum of the door and corridor widths. In this second case, we determine the minimum width/length of the corridor that guarantees the wheelchair maneuver with a fixed door width value. According to the simulations carried out using the different corridor width/length values presented in Fig. 9a, we notice that a corridor of a minimum length of 194 cm and a minimum width of 164 cm is required to cross a pull revolving door, irrespective of the arrival direction of the wheelchair.

6.3. Minimum and Maximum Handle Height of Revolving Door to Push/Pull

The door handle grip must be installed in such a way that it is easy to operate. To make it easy to use, several aspects have to be considered, among them its position relative to the internal angle of the wall or any other obstacle, and the kind of handle used (generally, handle grips that can be operated by dropping the hand onto them are the most appropriate). We have computed the interval of handle grip heights from the ground which are best suited to handling the door. According to the data shown in Fig. 11, the handle height values directly affect the clearance space dimensions at the door. Figure 11 presents the simulation results of opening a pull/push revolving door with different handle grip heights and different clearance space dimensions (corridor width/length). We notice that the interval of handle heights [40 cm, 100 cm] contains the appropriate handle height values, because it is the only one whose values allow the wheelchair user to cross a pull/push door within the minimum clearance space (164 cm to 194 cm) computed previously in section 6.2. Therefore, we can consider it the most appropriate interval for opening/closing a revolving door to pull/push, given the person's joint limits (see Table 3) and wheelchair constraints (see Fig. 4). Crossing a pull/push revolving door by a wheelchair user has been tested with various handle heights and within various clearance space dimensions. Height values in [40 cm, 100 cm] are the only ones with which the person can cross such a door easily when the wheelchair clearance space is minimal (164 cm to 194 cm). Figure 12 presents the simulation times for opening a revolving door with different handle heights and different wheelchair clearance space dimensions. Handle height values between [40 cm, 100 cm] are the most suitable for the wheelchair dimensions in


Fig. 9a. Revolving door to pull: required clearance space dimensions

Fig. 9b. Execution time

Fig. 7a. Feasibility to cross a door with a push in relation to the door width

this case. The time needed to open a revolving door to pull/push when the wheelchair clearance space dimension is (164 cm to 194 cm) is 200 ms on average. This simulation time decreases (to an average of 80 ms) when the clearance space at the door is sufficiently wide (184 cm to 248 cm), and it increases gradually (to an average of 400 ms) when the clearance space is reduced. In the two other intervals, [10 cm, 20 cm] and [140 cm, 138 cm], an average of 400 ms is needed to evaluate each handle height. However, the simulation time in the three intervals does not exceed 400 ms.

7. Comparison with Alternative and Supplementary Approaches

Fig. 7b. Execution time

Fig. 8. Crossed revolving door to push


Figures 13 and 14 present four main configurations of the wheelchair user for opening a pull/push revolving door. Here, we can clearly see the clearance space dimensions that should be respected in home design according to the wheelchair-user configurations (as detailed in sections 6.1, 6.2 and 6.3). Compared to the requirements adopted by the governments of some countries such as the United States [3] and France [2], the legal prescriptions presented by these approaches ignore individual abilities/preferences. For example, the prescriptive ADAAG can inform the design of the wheelchair by manufacturers, but it cannot represent their specific situation. In our approach, the detailed behavior model and simulation can readily accommodate the behavior details of different wheelchair designs, and designers can also use this method to analyze the performance of different wheelchairs within different building designs while considering the disabilities and preferences of different users.



Fig. 11. Wheelchair clearance space of revolving door to push/pull in relation to handle height

Fig. 10. Revolving door to pull: required clearance space dimensions

Fig. 12. Execution time

Our approach allows us to accurately compute the useful handle height values and the wheelchair clearance space, with respect to the constraints of the person's body and the wheelchair design used, which allows customizing the accessibility assessment. Unlike assessment in real environments, it makes the accessibility assessment safe, risk-free and less expensive. It also allows freely manipulating the person and wheelchair specifications imposed in the simulation. The simulation tool that we have developed enabled us to simulate feasible 3D movements of the wheelchair-user couple, which allows us to evaluate the interior rooms of a habitat by computing the required clearance space at the pull/push revolving door as well as the appropriate heights of the corresponding handle. Our simulation tool is based on virtual reality, in which we can easily control the interaction between the person with disabilities and his/her environment. Its structure makes it easy to consider the detailed abilities and preferences of any wheelchair user, the dimensions of the handling elements and the habitat interior design. The wheelchair mobility space is computed in such a way that it can be changed according to the wheelchair orientations and handling element variations (paragraph 5.1). Unlike the preceding approaches, which aim to determine the adequate mobility space width linking different rooms in the habitat, in our case we aim to determine the clearance space dimensions around handling elements (doors, windows, etc.). We created feasible movements of the person's upper body. The following table (Table 2) presents the advantages and disadvantages of the approach proposed here, compared to the approaches used in France and the USA, for the accessibility assessment of handling elements for a wheelchair user.

8. Conclusion and Future Works

In this paper, we proposed a new numerical approach to analyze the accessibility of the handling elements of an indoor habitat for a person in a wheelchair. The accessibility assessment prescriptions adopted in many countries are manual approaches, based on norms or recommendations. Manual measurements can be ambiguous and unduly restrictive in practice. In order to compensate for the measurement errors of these approaches, we used a computer-based tool implemented in Visual Studio to simulate the feasible behavior of a wheelchair user at handling elements. Here, we discussed our results using the example of a revolving door, which is considered an essential access element requiring an important clearance space with specific dimensions. To guarantee correct handling of this element by the wheelchair user, we determined, for both door types (push and pull), the minimum corridor length/width values in the case of a revolving door to pull and the minimum door/corridor width values in the case of a revolving door to push. We also determined the door handle height values and their influence on the wheelchair clearance space size.




Fig. 13. Main configurations for opening a revolving door to pull


Fig. 14. Main configurations for opening a revolving door to push

Tab. 2. Wheelchair maneuvering clearances prescribed by the approaches adopted in some countries

Wheelchair maneuvering clearance dimensions:
– Corridor width + door width (doorway) must be greater than or equal to 2 m [20].
– Swinging door to push: wheelchair clearance equal to 1.7 m × 1.2 m [20].
– Swinging door to pull: wheelchair clearance equal to 2.2 m × 1.2 m [20].
– Swinging door to push: for the front approach, the wheelchair maneuvering clearance is doorway × 1.22 m; for the hinge-side approach it must be (doorway + 0.56 m) × 1.07 m; and for the latch-side approach it must be (doorway + 0.61 m) × 1.07 m [3].
– Swinging door to pull: for the front approach, the wheelchair maneuvering clearance is (doorway + 0.46 m) × 1.52 m; for the hinge-side approach it must be (doorway + 0.91 m) × 1.52 m; and for the latch-side approach it must be (doorway + 0.61 m) × 1.22 m [3].
– Handle height: forward reach, 0.51 m minimum to 1.22 m maximum [3]; side reach, 0.38 m minimum to 1.22 m maximum [3]; the door handle height must be 0.4 m to 1.3 m [20].

Wheelchair maneuvering clearance position:
– The maneuvering clearance length extends from the hinge of the door, integrating the door and the handle; for the hinge-side approach, it extends from the latch of the door.

Interpretation:
– The relation between the maneuvering clearance dimensions and the wheelchair used is not clearly defined. The prescribed values are very specific, and the person's deficiencies are not considered.
– The wheelchair maneuvering clearance dimension is computed without considering the person's deficiencies; besides, the wheelchair approach direction is not taken into account.
– Generally, the reachability area is defined according to the person's capabilities at the level of the upper body articulated structure, so these values are specific to one person and do not convey the specific needs of others.




Our approach has advantages over the traditional approaches for assessing the acceptability of designs, which are adopted until today by countries for assessing wheelchair accessibility. These methods can be complex and difficult to implement as a computer application. Our new numerical tool models a feasible wheelchair user behavior that is related to the design of the handling elements inside individual or public apartments and to the wheelchair user requirements. On the other hand, the accessibility analysis of handling elements is done by considering the volume occupied by the person's body, which is determined by the person's capabilities. Because we neglect the handling element weights, the dynamic constraints used in the ergonomics field to predict a person's actual upper body movement are ignored, so only the joint limit constraints are used to decide the results. In this case, we assume that if the articulated structure posture respects the joint limit constraints (see Table 3), then it is physically feasible for the person. However, the analysis of accessibility with only the joint limit constraints of the person restricts it to one aspect; a more general analysis could cover both the geometric (joint limit) and dynamic aspects of the user for a more powerful and reusable analysis. One of the lines we will work on in the future is to introduce dynamic constraints into the accessibility assessment process, such as muscular energy rate, external loads and torque limits, to predict real human movements in the simulation.
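The joint-limit feasibility test described above can be sketched as follows. This is a minimal illustration: the limits shown are only a subset of the upper body joint limits of Table 3 (in degrees), and the example postures are hypothetical.

```python
# Sketch of the joint-limit feasibility test: a posture is accepted
# only if every joint angle lies within its rotation limits.
# Limits below are a subset of Table 3 (degrees); postures are hypothetical.

JOINT_LIMITS = {
    0: (-180, 180),
    13: (-15, 15),
    14: (0, 30),
    15: (-89, 89),
    16: (0, 120),
}

def posture_is_feasible(posture):
    """Return True if every listed joint angle respects its limits."""
    return all(
        JOINT_LIMITS[j][0] <= angle <= JOINT_LIMITS[j][1]
        for j, angle in posture.items()
        if j in JOINT_LIMITS
    )

print(posture_is_feasible({0: 45, 14: 10, 16: 90}))  # within limits
print(posture_is_feasible({15: 95}))                 # exceeds the +89 limit
```

A dynamic extension (torque limits, external loads) would add further predicates to the same acceptance test.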


Appendix

Here, we present the tables containing the clearance space dimensions at both kinds of revolving door (push/pull) for different wheelchair dimensions.

Tab. 4. Useful corridor and revolving door to push widths for different wheelchair dimensions

Wheelchair dimension (cm)           Minimum corridor width (cm)   Minimum door width (cm)
                                    Min1      Min2                Min1      Min2
WhL1=100, WhW=70,  WhL2=50          104       94                  90        100
WhL1=110, WhW=80,  WhL2=60          115       105                 100       110
WhL1=120, WhW=90,  WhL2=70          127       116                 110       120
WhL1=130, WhW=100, WhL2=80          138       127                 120       130
WhL1=140, WhW=110, WhL2=90          148       138                 130       140

Tab. 3. Person upper body joint limits

Joint number    Joint rotation limits (degrees)
                Minimum     Maximum
0               -180        +180
1               0           0
2               -9          +15
3               -9          +9
4               -9          +9
5               -9          +9
6               -9          +9
7               -9          +9
8               -9          +9
9               -9          +9
10              -9          +9
11              -9          +9
12              0           +15
13              -15         +15
14              0           +30
15              -89         +89
16              0           +120
17              -60         +60
18              0           +120
19              -30         +30
20              -90         +90
21              -19         +19

Note. The upper body segments have the following lengths (see Fig. 1): L1=10 cm, L2=20 cm, L3=10 cm, L4=5 cm, L5=10 cm, L6=10 cm, L7=30 cm, L8=30 cm.

Tab. 5. Minimum wheelchair clearance space dimensions at the revolving door to pull according to the wheelchair dimensions

Wheelchair dimension (cm)           Minimum corridor width (cm)   Minimum corridor length (cm)
WhL1=100, WhW=70,  WhL2=50          174                           204
WhL1=110, WhW=80,  WhL2=60          184                           214
WhL1=120, WhW=90,  WhL2=70          195                           225
WhL1=130, WhW=100, WhL2=80          206                           233
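For illustration, the push-door table can be queried programmatically. The helper below is our own sketch, not part of the paper's tool; the rows are transcribed from Tab. 4 (all values in cm), and the round-up rule for untabulated wheelchairs is an assumption.

```python
# Illustrative lookup of the minimum corridor/door widths of Tab. 4.
# Keys are (WhL1, WhW, WhL2); values are
# (corridor Min1, corridor Min2, door Min1, door Min2), all in cm.

TAB4 = {
    (100, 70, 50): (104, 94, 90, 100),
    (110, 80, 60): (115, 105, 100, 110),
    (120, 90, 70): (127, 116, 110, 120),
    (130, 100, 80): (138, 127, 120, 130),
    (140, 110, 90): (148, 138, 130, 140),
}

def minimum_widths(whl1, whw, whl2):
    """Return the Tab. 4 row for the smallest tabulated wheelchair that is
    at least as large as the requested one in every dimension (a
    conservative round-up rule, which is our own assumption)."""
    for dims in sorted(TAB4):
        if dims[0] >= whl1 and dims[1] >= whw and dims[2] >= whl2:
            return TAB4[dims]
    raise ValueError("wheelchair larger than any tabulated dimension")

print(minimum_widths(105, 75, 55))  # rounded up to the (110, 80, 60) row
```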




Tab. 6. Achievable handle heights of the revolving door to pull according to different wheelchair dimensions

Wheelchair dimension (cm)           Handle height interval [Min, Max] (cm)
WhL1=100, WhW=70,  WhL2=50          [40, 162]
WhL1=130, WhW=100, WhL2=90          [40, 155]
WhL1=150, WhW=130, WhL2=110         [40, 143]
WhL1=160, WhW=150, WhL2=110         [40, 134]

Tab. 7. Achievable handle heights of the revolving door to pull according to different wheelchair dimensions

Wheelchair dimension (cm)           Handle height interval [Min, Max] (cm)
WhL1=100, WhW=70,  WhL2=50          [40, 162]
WhL1=100, WhW=100, WhL2=90          [40, 153]
WhL1=100, WhW=130, WhL2=110         [40, 143]
WhL1=100, WhW=150, WhL2=110         [40, 133]

Note. WhL1 is fixed at 100 cm.

Acknowledgements

The authors want to thank all the persons who participated in this study for their interest, comments, time and effort.

Authors

Ali Saidi Sief* – PhD student at the University of Frères Mentouri Constantine, Algeria, Department of Electronics, Signal Processing Laboratory, saidi_sief_ali@yaho.com.

Alain Pruski – Professor at the University of Metz, LCOMS laboratory, ISEA, Metz, France, alain.pruski@univ-lorraine.fr.

Abdelhak Bennia – Professor at the University of Frères Mentouri, Constantine, Algeria, Department of Electronics, Signal Processing Laboratory, abdelhak.bennia@yahoo.com.

*Corresponding author



References

[1] "Legislation.gov.uk". [Online]. Available: http://www.legislation.gov.uk/ukpga. [Accessed: 12-Jun-2015].
[2] Loi n° 2005-102 du 11 février 2005 pour l'égalité des droits et des chances, la participation et la citoyenneté des personnes handicapées, 2005.
[3] "Americans with Disabilities Act Accessibility Guidelines". Washington, DC: US Architectural and Transportation Barriers Compliance Board, 1997.
[4] Arnaud J., PhD thesis, 2007.
[5] Leloup J., Le projet HM2PH, habitat modulaire et mobile pour personnes handicapées: conception d'un espace de vie adapté pour personne en déficit d'autonomie [PhD thesis]. Tours, 2004 [cited 2015 Dec 2]. Available from: http://www.theses.fr/2004TOUR4055.
[6] Goncalves F., PhD thesis [Internet], 2014 [cited 2015 Dec 2]. Available from: http://www.theses.fr/s77298.
[7] Taychouri F., Monacelli E., Hamam Y., Chebbo N., "Analyse d'accessibilité avec prise en compte de la qualité de conduite d'un fauteuil", Science et Technologie pour le Handicap, no. 1(2), 2007, 173–192. DOI: 10.3166/sth.1.173-192.
[8] Han C. S., Law K. H., Latombe J.-C., Kunz J. C., "A performance-based approach to wheelchair accessible route analysis", Advanced Engineering Informatics, vol. 16, no. 1, Jan. 2002, 53–71.
[9] Otmani A. M. R., "A new approach to indoor accessibility", International Journal of Smart Home, vol. 3, Oct. 2009.
[10] Yang J., Abdel-Malek K., "Human reach envelope and zone differentiation for ergonomic design", Human Factors and Ergonomics in Manufacturing & Service Industries, vol. 19, no. 1, Jan. 2009, 15–34. DOI: 10.1002/hfm.20135.
[11] Vilar E., Rebelo F., Noriega P., "Indoor Human Wayfinding Performance Using Vertical and Horizontal Signage in Virtual Reality", Human Factors and Ergonomics in Manufacturing & Service Industries, vol. 24, no. 6, Nov. 2014, 601–615. DOI: 10.1002/hfm.20503.
[12] Herrera-Saray P., Peláez-Ballestas I., Ramos-Lira L., Sánchez-Monroy D., Burgos-Vargas R., "Usage problems and social barriers faced by persons with a wheelchair and other aids. Qualitative study from the ergonomics perspective in persons disabled by rheumatoid arthritis and other conditions", Reumatología Clínica, vol. 9, no. 1, Feb. 2013, 24–30. DOI: 10.1016/j.reumae.2012.10.001.
[13] Crytzer T. M., Cooper R., Jerome G., Koontz A., "Identifying research needs for wheelchair transfers in the built environment", Disability and Rehabilitation: Assistive Technology, May 2015, 1–7. DOI: 10.3109/17483107.2015.1042079.
[14] Winance M., "Universal design and the challenge of diversity: reflections on the principles of UD, based on empirical research of people's mobility", Disability and Rehabilitation, vol. 36, no. 16, 2014, 1334–1343. DOI: 10.3109/09638288.2014.936564.
[15] Yang J., Pitarch E. P., "Digital Human Modeling and Virtual Reality for FCS", The University of Iowa, Contract/PR NO. DAAE07-03-D-L003/0001, Technical Report VSR-04.02, 2004.
[16] Denavit J., Hartenberg R. S., "A Kinematic Notation for Lower-Pair Mechanisms Based on Matrices", Journal of Applied Mechanics, vol. 77, 1955, 215–221.
[17] Lozano-Perez T., "Spatial Planning: A Configuration Space Approach", IEEE Transactions on Computers, vol. C-32, no. 2, Feb. 1983, 108–120.
[18] Barki H., Denis F., Dupont F., "A New Algorithm for the Computation of the Minkowski Difference of Convex Polyhedra". In: Proceedings of the 2010 Shape Modeling International Conference (SMI 2010), 2010, 206–210. DOI: 10.1109/SMI.2010.12.
[19] Tomiczková S., "Algorithms for the computation of the Minkowski difference". In: Proceedings of the 26th Conference on Geometry and Computer Graphics, University of West Bohemia, 2006, 37–42.
[20] "Direction technique Territoires et ville", 11-Jun-2015. [Online]. Available: http://www.territoires-ville.cerema.fr/. [Accessed: 12-Jun-2015].
[21] Alexander R. M., "A minimum energy cost hypothesis for human arm trajectories", Biological Cybernetics, vol. 76, no. 2, Feb. 1997, 97–105.
[22] Gallagher S., Marras W. S., Davis K. G., Kovacs K., "Effects of posture on dynamic back loading during a cable lifting task", Ergonomics, vol. 45, no. 58, Apr. 2002, 380–39.
[23] Kim J. H., Yang J., Abdel-Malek K., "A novel formulation for determining joint constraint loads during optimal dynamic motion of redundant manipulators in DH representation", Multibody System Dynamics, vol. 19, no. 4, Jan. 2008, 427–451.
[24] Latombe J.-C., Robot Motion Planning, vol. 124. Boston, MA: Springer US, 1991.
[25] Pruski A., "A unified approach to accessibility for a person in a wheelchair", Robotics and Autonomous Systems, vol. 58, no. 11, Nov. 2010, 1177–1184.
[26] Moussaoui A., Otmani R., Pruski A., "A comparative study of incremental algorithms for computing the inverse kinematics of redundant articulated", Journal of Automation, Mobile Robotics and Intelligent Systems, vol. 4, no. 3, 2010, 3–9.
[27] Laumond J., "Feasible trajectories for mobile robots with kinematic and environment constraints". In: Intelligent Autonomous Systems, L. O. Hertzberger, F. C. A. Groen (Eds), New York: North-Holland, 1987, 346–354.




Kinematic Analysis of the 6-DOF Arms of the H20 Humanoid Mobile Robot

Submitted: 27th September 2016; accepted: 7th November 2016

Mohammed M. Ali, Hui Liu, Norbert Stoll, Kerstin Thurow

DOI: 10.14313/JAMRIS_4-2016/30

Abstract: This paper presents the kinematic analysis of the H20 humanoid mobile robot. The kinematic analysis of the robot arms is essential to achieve accurate grasping and placing tasks for object transportation. The H20 robot has dual arms, each with 6 revolute joints and 6-DOF. For each arm, the forward kinematics is derived and the closed-form solution of the inverse kinematic problem, with different cases of singularities, is found. A reverse decoupling mechanism method is used to solve the inverse kinematic problem analytically by viewing the arm kinematic chain in reverse order. The kinematics solution is validated using MATLAB with the Robotics Toolbox. A decision method is used to determine the optimal solution among the multiple inverse kinematics solutions depending on the joint limits and minimum joint motion. The workspace of the arm is analyzed and simulated. Finally, a verification process was performed on the real H20 arms by applying blind and vision-based labware manipulation strategies to achieve transportation tasks in real life science laboratories.

Keywords: kinematic analysis, 6-DOF robotic arm, validation of kinematic solution, labware localization and manipulation, Kinect sensor

1. Introduction

Mobile robots are generally used to support efficient transportation for increasing productivity and saving human resources. They are widely used in different fields of automation such as product transportation [1], domestic services [2], [3], [4], teleoperation for tasks with power tools [5] or material handling [6]. In this work, we present the use of a mobile robot (H20 robot, Dr. Robot, Canada) in a life science environment. The robot is a wireless networked autonomous humanoid mobile robot. It has a PC tablet, dual arms, and an indoor GPS navigation system (see Fig. 1). Some key technical issues, such as a wireless remote control system [7], low-cost robot localization [8], and a multi-floor navigation system [9], have been solved recently to develop the transportation system of the H20 mobile robots. For object transportation, the grasping and placing tasks are very essential and have to be performed reliably, carefully, and safely. The manipulation of a desired object requires finding the pose of the object with respect to the arm base using specific sensors, followed by using an accurate kinematic model to move the

arm end effector from one pose to another precisely and in a safe path. The kinematic analysis is the way to describe the motion of the arm links without considering the forces that cause this motion. There are two types of kinematic problems: forward kinematics (FK) and inverse kinematics (IK). The forward kinematics describes how to find the end-effector pose relative to the arm base for the given joint angles. On the other hand, the inverse kinematics is based on finding the joint angles for the given pose of the end-effector with respect to the arm base.

Fig. 1. H20 mobile robot

The inverse kinematics plays an active role in object manipulation because it is an important issue to enable the arm end-effector to reach the desired object accurately. There are also other issues which have to be taken into consideration when controlling the robotic arm, such as singularities, joint limits and the reachable workspace. Generally, the IK problem can be solved using two approaches: analytic and numeric. However, the inverse kinematics problem does not have a unique solution, and the solution which ensures a collision-free configuration and minimum joint motion is considered more optimal [10]. Therefore, it is important to use a decision strategy to choose the suitable solution for the required task. Most researchers prefer numerical methods for solving the IK problem to avoid the difficulty of finding the analytical solution [11], [12], [13]. Normally, the analytical approach is appropriate for real-time applications because all the solutions can be found and it is computationally fast in comparison with the numerical approach. The analytical solution can be classified into geometric (closed-form) and algebraic. For the geometric method, the complexity of finding the IK solution increases when the manipulator has more than 4 joints. Furthermore, the solution approach cannot be generalized from one manipulator to another because it depends on the number of manipulator joints, their types, structure, and coordinate frames. The closed-form solution can only be found for specific types of robotic arms which have a particular structure with 6-DOF or less. D. L. Peiper indicates that in case there are 3 consecutive joint axes which are parallel to each other or intersecting at a single point, then a closed-form solution can exist [14]. The closed-form solution for the H20 arms can be found because the three shoulder joint axes intersect at a single point, as shown in Fig. 2. There are many research works related to the closed-form solution of the IK problem. C. G. S. Lee et al. proposed a closed-form solution of inverse kinematics for a 6-DOF PUMA robot [15]. T. Ho et al. proposed a fast closed-form inverse kinematic solution for a specific 6-DOF arm [16]. G. Huang et al. presented a strategy for solving the inverse kinematic equations for a 6-DOF arm of a humanoid meal service robot [17]. C. Man et al. introduced a mathematical approach for the kinematic analysis of a humanoid robot [18]. R. P. Paul et al. proposed an inverse-transform technique to solve the IK problem for a 6-DOF robotic manipulator [19]. T. Zhao et al. proposed a method to divide the IK problem of a 7-DOF humanoid arm into sub-problems to find the closed-form solution, taking the constraint of the elbow position into consideration [20]. M. A. Ali et al. proposed a reverse decoupling mechanism method to solve the IK problem of humanoid robots analytically [21]. The strategy of this method depends on viewing the kinematic chain of the manipulator in reverse order with decoupling of the position and orientation. In other words, the arm can be viewed in reverse order so that the pose of the arm base can be described relative to the end effector.
This method also includes decision equations to choose the suitable solution among multiple solutions. R. O'Flaherty et al. utilized the same method to find the closed-form solution of the IK problem for the HUBO2+ humanoid robot [22]. In this paper, the forward and inverse kinematics solutions for the 6-DOF H20 arms are derived. The closed-form solution of the IK problem has been found using the reverse decoupling mechanism method [21]. Also, the IK solution has been validated with MATLAB and verified experimentally on the H20 arms. Two approaches for labware manipulation are implemented: a blind approach using a sonar sensor and a vision-based approach using a Kinect sensor. The paper is organized as follows: in section 2, the description of the manipulation problem is presented. In section 3, the H20 arm structure with the FK and IK solutions is presented. The strategy to choose the desired IK solution and the validation of the kinematic model are given in sections 4 and 5, respectively. Section 6 presents the workspace analysis of the H20 arm. Section 7 shows the verification process of the kinematic solution with the H20 arms. The client-server model is presented in section 8. The labware

VOLUME 10,

N° 4

2016

manipulation strategies using the ultrasonic sensor and the Kinect sensor are shown in section 9. Finally, the results are summarized with the conclusions.
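The decision strategy mentioned above, which selects among the multiple IK solutions one that respects the joint limits and minimizes joint motion, can be sketched as follows. The joint limits are taken from Table 1; the function name and the candidate/current configurations are our own illustrative assumptions, not the authors' implementation.

```python
# Sketch of a decision strategy for picking one IK solution:
# keep only candidates within the joint limits, then choose the one
# closest to the current configuration (minimum total joint motion).
# Limits come from Table 1 (degrees); the configurations are hypothetical.

def select_ik_solution(candidates, current, limits):
    feasible = [
        q for q in candidates
        if all(lo <= qi <= hi for qi, (lo, hi) in zip(q, limits))
    ]
    if not feasible:
        raise ValueError("no candidate respects the joint limits")
    return min(feasible,
               key=lambda q: sum(abs(qi - ci) for qi, ci in zip(q, current)))

limits = [(-20, 192), (-200, -85), (-195, 15), (-129, 0), (0, 180), (-60, 85)]
current = [10, -120, -30, -45, 90, 0]
candidates = [
    [15, -110, -20, -50, 100, 10],    # feasible and close to current
    [180, -90, -190, -120, 170, 80],  # feasible but far away
    [15, -110, -20, -50, 100, 90],    # violates the last joint limit
]
print(select_ik_solution(candidates, current, limits))
```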

Fig. 2. H20 arms structure and coordinate frames

2. Problem Statement

For object transportation, mobile robots usually follow a predefined path to a specified station using a guidance control system. The H20 mobile robots use the Stargazer sensor with ceiling landmarks for maneuvering between adjacent labs (Hagisonic Company, Korea). This system inevitably causes an orientation and position error of ±3 cm in the Z-axis and ±2 cm in the X-axis in front of the workstation. The error is related to two causes. The first is strong lighting and sunlight, which make the Stargazer unable to recognize the ceiling landmarks. The second is related to the odometry system, which includes encoders mounted on the robot wheels to provide the motion information that updates the robot pose. The odometry system accumulates errors for different reasons, such as differing wheel diameters, wheel slippage, wheel misalignment and finite encoder resolution; according to the experimental results and previous studies, the rotation of the robot is the greatest factor in odometry errors [23], [24]. Uncertainties in the pose of the robot in front of the workbench lead to failures in the grasping and placing tasks. These failures have to be avoided because the H20 robots deal with labware containing chemical and biological components. The required accuracy for labware manipulation has to be better than 1 cm to guarantee safe tasks. Therefore, the most direct way of dealing with this problem is to use sensors to provide the position and orientation of the target [25]. Then, the joints of the arm have to be configured using the kinematic model to place the arm end effector precisely and safely in the desired pose, if it is inside the reachable space.

3. H20 Arms Kinematics

This section describes in more detail the structure and the kinematic analysis of the H20 arms.

3.1. Structure of H20 Arms

The H20 mobile robot has dual arms, each consisting of 6 revolute joints with 6-DOF. In addition, each arm has a 2-DOF gripper. Fig. 2 shows the structure and the coordinate frames of the H20 arms. The lengths of




the upper arm d3 and the forearm d5 are 0.236 m and 0.232 m, respectively. Also, the distance (de) between the wrist joint and the end-effector is 0.069 m.

3.2. D-H Representation

The Denavit-Hartenberg representation is used to describe the translation and rotation relationship between adjacent arm links, providing a guide for locating the coordinate systems on each link of a multi-link kinematic chain. Denavit and Hartenberg proposed to define the manipulator with four joint-link parameters for each link [26]. Fig. 3 shows a pair of adjacent links, link (i-1) and link i, their associated joints, joints (i-1), i and (i+1), and axes (i-2), (i-1) and i, respectively. A frame {i} is assigned to link i as follows:
• The Zi-1 axis lies along the axis of motion of the ith joint.
• The Xi axis is normal to the Zi-1 axis, pointing away from it.
• The Yi axis completes the right-handed coordinate system as required.


By following the D-H rules, the homogeneous transformations between adjacent links are defined. The D-H parameters and the rotational limit for each joint of the H20 arms are described in Table 1.

3.3. Forward Kinematics Computation

The forward kinematics problem is how to find the end-effector pose relative to the arm base for the given joint angles. It can be solved by finding the transformation matrices of the arm from one link to the next according to the D-H coordinate system. Eq. (1) represents the 4×4 general homogeneous transformation matrix of the H20 arms. By substituting the link parameters from Table 1 into (1), the transformation matrices between adjacent links of the H20 arms can be found.

(1)

In addition, there is a translation (de = 0.069 m) between the wrist joint and the end-effector. The transformation matrix which describes this translation is as follows:

Fig. 3. D-H conventions for frame assigning [26]

There are four parameters used in the manipulator analysis: the link length (ai), the link twist (αi), the link offset (di) and the joint angle (θi), where (i) refers to the link number. The definitions of the D-H parameters are as follows:
• Link length (ai): The distance measured along the xi axis from the point of intersection of the xi axis with the zi-1 axis to the origin of frame {i}.
• Link twist (αi): The angle between the zi-1 and zi axes, measured about the xi axis in the right-hand sense.
• Joint distance (di): The distance measured along the zi-1 axis from the origin of frame {i-1} to the intersection of the xi axis with the zi-1 axis.
• Joint angle (θi): The angle between the xi-1 and xi axes, measured about the zi-1 axis in the right-hand sense.
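These four parameters determine the link-to-link homogeneous transform. A minimal numeric sketch follows; the function name is ours, and the matrix uses one common form of the D-H transform (the exact sign arrangement depends on whether the classic or the modified convention is used):

```python
import numpy as np

def dh_transform(a, alpha, d, theta):
    """Homogeneous transform between adjacent links from the four D-H
    parameters: link length a, link twist alpha, link offset d, and
    joint angle theta (angles in radians)."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# With all four parameters zero, the transform reduces to the identity.
print(np.allclose(dh_transform(0, 0, 0, 0), np.eye(4)))  # True
```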

Finally, the forward kinematic solution which describes the pose of the end-effector relative to the arm base can be obtained using Eq. (2). In this equation, (n_x, n_y, n_z) is the normal vector, (o_x, o_y, o_z) is the orientation vector, (a_x, a_y, a_z) is the approach vector, and (p_x, p_y, p_z) is the end-effector position vector:

$$
{}^{0}_{6}T =
\begin{bmatrix}
n_x & o_x & a_x & p_x\\
n_y & o_y & a_y & p_y\\
n_z & o_z & a_z & p_z\\
0 & 0 & 0 & 1
\end{bmatrix}
\quad (2)
$$
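The composition of the link transforms described above can be sketched in code. The following is a minimal Python illustration (the paper itself used MATLAB with the Robotics Toolbox); the left-arm D-H values are taken from Table 1, and the fixed wrist-to-end-effector offset is omitted for brevity:

```python
import math

def dh_transform(alpha, a, d, theta):
    """Adjacent-link homogeneous transform in the modified D-H
    convention (alpha, a belong to link i-1; d, theta to joint i),
    matching the parameter layout of Table 1."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    ct, st = math.cos(theta), math.sin(theta)
    return [[ct,      -st,      0.0,  a],
            [st * ca,  ct * ca, -sa, -d * sa],
            [st * sa,  ct * sa,  ca,  d * ca],
            [0.0,      0.0,      0.0, 1.0]]

def mat_mul(A, B):
    """Product of two 4x4 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

# Left-arm parameters from Table 1: (alpha_{i-1}, a_{i-1}, d_i).
DH_LEFT = [(math.radians(0),   0.0,  0.0),
           (math.radians(90),  0.0,  0.0),
           (math.radians(90),  0.0, -0.236),
           (math.radians(-90), 0.0,  0.0),
           (math.radians(90),  0.0, -0.232),
           (math.radians(-90), 0.0,  0.0)]

def forward_kinematics(joints, dh=DH_LEFT):
    """Compose the six link transforms to get the shoulder-to-wrist
    pose in the form of Eq. (2)."""
    T = [[float(i == j) for j in range(4)] for i in range(4)]
    for (alpha, a, d), theta in zip(dh, joints):
        T = mat_mul(T, dh_transform(alpha, a, d, theta))
    return T
```

At the zero configuration the two wrist offsets d3 and d5 are collinear, so the wrist sits 0.236 + 0.232 = 0.468 m from the shoulder, which gives a quick sanity check of the parameter table.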

Table 1. The D-H parameters and the joint limits

The inverse kinematics enables finding the joint angles for a given position and orientation of the end-effector with respect to the reference coordinate system. The inverse kinematic problem was solved using the reverse decoupling mechanism method [21]. With this method, the kinematic chain of the arm is viewed in reverse order, which means that the shoulder coordinate frame is described relative to the end-effector coordinate frame. In this case, the first angles to be found are those of the wrist joints θ4, θ5, and θ6. To start solving the inverse kinematic problem of the left and right arms, the inverse of the forward kinematic matrix has to be found as follows:

Left and right arms:

i | α(i-1) (L) | α(i-1) (R) | a(i-1) (LR) | d_i (m) (L) | d_i (m) (R) | Joint limits (LR)
1 | 0°   | 0°   | 0 | 0      | 0     | -20°~192°
2 | 90°  | -90° | 0 | 0      | 0     | -200°~-85°
3 | 90°  | -90° | 0 | -0.236 | 0.236 | -195°~15°
4 | -90° | 90°  | 0 | 0      | 0     | -129°~0°
5 | 90°  | -90° | 0 | -0.232 | 0.232 | 0°~180°
6 | -90° | 90°  | 0 | 0      | 0     | -60°~85°


3.4. Inverse Kinematics Computation



(3)

To find the solution for joints 4, 5, and 6, both sides of (3) have to be multiplied by the corresponding transformation matrix, and the result is as follows:

The right side of (4) is the following matrix:


The two-argument arc tangent function (atan2) is more consistent to be used for finding the angle value of θ5. The arc sine and arc cosine functions show inaccurate behavior in determining the required angle. Complex numbers are generated in case the target position is not within the reachable workspace of the arm. Therefore, the (real) function is used to ignore the imaginary parts of complex numbers and take only the real part in the joint solutions. Thus, the solution that is closest to the target position can be obtained [22]. After finding the value of θ4, the equation of S5 in (8) is used to find θ5 as follows:
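The preference for atan2 over arc sine and arc cosine, and the use of the real part for unreachable targets, can be illustrated with a short Python sketch (the original solution was implemented in MATLAB; the numbers below are illustrative):

```python
import math
import cmath

# atan2 recovers the correct quadrant because it uses the signs of
# both the sine and the cosine; asin and acos alone are ambiguous.
theta = math.radians(-135)                 # a third-quadrant angle
s, c = math.sin(theta), math.cos(theta)

print(math.degrees(math.atan2(s, c)))      # about -135 (correct)
print(math.degrees(math.asin(s)))          # about  -45 (wrong quadrant)
print(math.degrees(math.acos(c)))          # about  135 (sign lost)

# When the target lies outside the reachable workspace, intermediate
# square roots such as sqrt(1 - C4^2) turn complex. Keeping only the
# real part (as with MATLAB's real()) clamps the solution to the
# workspace boundary, i.e. the closest reachable configuration.
C4 = 1.2                                   # |C4| > 1: unreachable target
S4 = cmath.sqrt(1 - C4 ** 2)               # purely imaginary here
print(S4.real)                             # 0.0
```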

and the left side of (4) is:

The value of θ6 can be obtained by dividing (11) by (10) to get the following equation:

where S and C are the abbreviations of the sine and cosine of the angle, respectively. By equating the position elements of (5) and (6), the following is obtained:

To find the solution of θ4, suppose the auxiliary terms (r) and (g), where (atan2) is the two-argument arc tangent function. These terms are obtained according to the arm coordinate frames in reverse order. By substituting these terms into (7) and (9) and using the angle sum identities, the following equations can be obtained:

The equation of C4 for the left and right arms can be obtained by squaring (8), (10), and (11) and adding them. The solution is as follows:

Then, the value of θ6 can be found as follows:

where (WToPi) is a function to wrap the angle to the range [-π, π] [22]. To find the solutions for joints 1, 2, and 3, both sides of (3) have been multiplied by the corresponding transformation matrix, and the result is as follows:
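A minimal Python equivalent of the WToPi wrapping step might look as follows (the function name and formulation are illustrative):

```python
import math

def wrap_to_pi(angle):
    """Wrap an angle (in radians) into [-pi, pi], mirroring the
    WToPi step used in the joint solutions."""
    return math.atan2(math.sin(angle), math.cos(angle))

# 270 degrees and -90 degrees describe the same joint position:
print(math.degrees(wrap_to_pi(math.radians(270))))   # about -90
```

Using atan2 of the sine and cosine avoids explicit modulo arithmetic and handles any input magnitude.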

The left side of (18) is the following matrix:

By taking the element (3,3) in the left and right sides of (18) and equating them, the equation of C2 and thus the value of θ2 can be obtained as follows:

Then, the value of θ4 for the left and right arms can be found.

To find the value of θ3, the elements (1,3) and (2,3) in the left and right sides of (18) are compared and divided by each other to get S3 and C3, which are used to find the value of θ3:




To find the IK solution for this case, keep the previous value of θ3 and define θT = θ3 + θ5. With θ4 = 0, find the value of θ6 by using the elements (1,4) and (2,4) of (3) and dividing them by each other to get S6 and C6. The solutions of θ6 for the left and right arms are found as follows:

The next step is to find θ1, which is obtained by comparing the elements (3,1) and (3,2) in the left and right sides of (18) and dividing them by each other to get S1 and C1, which are used to find the value of θ1 as follows:

To find the value of θ1, compare the elements (3,1) and (3,2) in the left and right sides of (4). The elements (3,1) and (3,3) in the left and right sides give the value of θ2, which is as follows:

According to the previous joints' solutions, it can be noticed that θ2, θ4, and θ5 have two solutions each. This results in 8 total solutions for the inverse kinematics problem of the H20 arms. The strategy for choosing the optimal solution will be discussed in Section 4.

The value of θ5 is found after finding θT first. This is done by using the elements (1,3) and (2,3) in the left and right sides of (4) with the angle sum identities to get S3+5 and C3+5. The solution of θT is as follows:

3.5. Dealing with the Singularity Cases
Singularities are arm configurations in which one or more degrees of freedom are eliminated when some joint axes align with each other. Thus, the number of solutions for the IK problem becomes infinite. Three cases of singularity have been determined within the joint limits. The inverse kinematic solution for every case of singularity is as follows.
Case A: The axis of the third joint is aligned with the fifth joint axis as shown in Fig. 4.

If …, then … = wrapToPi(…). And if …, then … = wrapToPi(…).

Then, the value of θ5 can be obtained as follows:

Case B: The axis of the first joint is aligned with the third joint axis as shown in Fig. 5.

Fig. 5. Case B where the 1st and 3rd joints are aligned

Fig. 4. Case A where the 3rd and 5th joints are aligned


The values of θ4, θ5, and θ6 are found using the same equations as in Section 3.4. Also, the previous value of θ1 is kept, and θT = θ1 + θ3 is defined. To find the value of θ5, the value of θ2 is fixed first. The solution of θT is found by using the elements (3,1) and (3,2) in the left and right sides of (4) with the angle sum identities to obtain S1+3 and C1+3. The solution is as follows:


Journal of Automation, Mobile Robotics & Intelligent Systems

If S4 ≥ 0, then … = wrapToPi(…).

Case C: The joints 1, 3, and 5 are collinear as shown in Fig. 6.

Fig. 6. Case C, the 1st, 3rd, and 5th joints are aligned

The value of θ6 is found using the same equation as in Case B (with θ4 = 0 and θ2 fixed). Since it is not possible to distinguish between θ1 and θ3, define θT = θ1 + θ3 + θ5. To find the value of θ5, first fix θ2 and set θ4 = 0. The solution of θT is found by using the elements (2,1) and (2,2) in the left and right sides of (4) with the angle sum identities to obtain S1+3+5 and C1+3+5 as follows:

Then, the value of θ5 can be obtained as follows:


ous configuration and for every possible solution [22]. The next step is to compare the sum of the previous configuration with the sum of every solution. The solution whose sum is closest to the sum of the previous configuration is the desired solution.
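The limit check and minimum-motion selection described above can be sketched as follows; the function signature and limit format are illustrative, not taken from the paper's implementation:

```python
def select_solution(previous, candidates, limits):
    """Choose among the (up to 8) IK solutions.

    previous:   the arm's previous joint configuration
    candidates: candidate joint-angle lists from the IK model
    limits:     per-joint (low, high) bounds as in Table 1

    Solutions violating a joint limit are discarded; among the rest,
    the one whose sum of squared joint values is closest to that of
    the previous configuration is returned [22].
    """
    feasible = [sol for sol in candidates
                if all(lo <= q <= hi for q, (lo, hi) in zip(sol, limits))]
    prev_sum = sum(q * q for q in previous)
    return min(feasible,
               key=lambda sol: abs(sum(q * q for q in sol) - prev_sum),
               default=None)
```

Returning None when no candidate respects the limits mirrors the case where the required pose has no admissible solution.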

5. Validation of the Kinematic Model

… = wrapToPi(…). If S4 < 0, then … = wrapToPi(…).


4. The Selection of a Desired Solution
For object manipulation tasks, the pose information of the object relative to the arm base determines whether this object is inside the workspace or not. If the object is outside the workspace, then there is no solution for the inverse kinematic problem. But there will be 8 solutions for the required pose if it is inside the reachable workspace (as detailed in Section 3.4). To choose the suitable solution among the 8 solutions, the joint limits have to be taken into consideration. If one joint value related to a specific solution is not within the joint limits, the solution will be ignored. In case there are multiple solutions, where all the joint values of every solution are within the joint limits, a selecting algorithm has to be used. The solution with minimum joint motion will be selected using this algorithm. This is done by finding the sum of the squared joint values for the arm previ-

MATLAB software with the Robotics Toolbox has been used to validate the inverse kinematics solution. The joint results together with the simulation plot give a clear proof of the inverse kinematic behavior of the robotic arms. The validation process has been done by giving random joint values as an input to the forward kinematic model. The pose information of the end-effector, which is received from the FK model, is inserted into the inverse kinematic model that includes the joint limits with the selecting algorithm. Then, the joint values that were inserted into the FK model are compared with the result of the IK model as shown in Fig. 7.

Fig. 7. The validation process for the IK solution

Different arm configurations have been used for the validation processes. The results have been compared and plotted, and the joint values which were inserted into the FK model are identical with the joint values which were obtained from the IK model, as shown in the example of Fig. 8. In this example, Fig. 8.A shows the simulation plot of the arm with the position and orientation of the end-effector according to the joint values inserted into the FK model. Fig. 8.B represents the simulation plot of the arm according to the joints obtained from the IK solution.

Fig. 8. The simulation plot of the H20 arm according to the configuration [50°, -90°, -90°, -30°, 180°, 10°]
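The validation loop of Fig. 7 can be sketched generically. Here fk_model and ik_model are placeholders for the forward and inverse kinematic models (the paper used MATLAB); the comparison is done on the reproduced pose, since several joint sets can map to the same pose:

```python
import math
import random

def validate(fk_model, ik_model, trials=100, tol=1e-6):
    """Round-trip check of Fig. 7: random joints -> FK -> IK -> FK.

    fk_model(joints) returns a 4x4 pose; ik_model(pose) returns a
    joint list (applying the joint limits and the selection algorithm
    internally). Poses are compared rather than raw joint values,
    since distinct joint sets may produce the same pose.
    """
    for _ in range(trials):
        joints = [random.uniform(-math.pi / 2, math.pi / 2)
                  for _ in range(6)]
        pose = fk_model(joints)
        pose_back = fk_model(ik_model(pose))
        err = max(abs(pose[i][j] - pose_back[i][j])
                  for i in range(4) for j in range(4))
        if err > tol:
            return False
    return True
```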

6. Workspace Analysis
The workspace is the space which is swept out by the arm end-effector after executing all possible motions. The workspace is one of the essential parameters for robotic arm performance, in addition to its speed and accuracy. The calculation of the arm work-


space is very important to decide whether the desired object, which has to be manipulated, is inside the reachable space or not. The workspace of the robotic arm can be determined according to the link lengths, joint types, and joint limits. The length of the H20 arm is 0.537 m (d3 = 0.236 m, d5 = 0.232 m, de = 0.069 m) as shown in Fig. 2. Related to the joints, the H20 arm consists of 6 rotary joints (limit values mentioned in Table 1). MATLAB software with the Robotics Toolbox has been used to calculate the workspace of the H20 arm by inserting the link lengths and all the possible joint values within the joint limits into the FK model to find the position of the end-effector for every sample. Finally, all possible positions which the robotic arm can reach are found. The simulation of the workspace envelope for the H20 arm is shown in Fig. 9.
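The workspace sampling procedure can be sketched as follows, assuming a forward-kinematics function fk_model that returns a 4×4 pose. The joint limits are those of Table 1 (left arm); the grid step is illustrative:

```python
import itertools
import math

# Joint limits of the left arm, in degrees (Table 1).
LIMITS = [(-20, 192), (-200, -85), (-195, 15),
          (-129, 0), (0, 180), (-60, 85)]

def workspace_points(fk_model, step_deg=30):
    """Sweep every joint range on a coarse grid and collect the
    reachable end-effector positions (the cloud of Fig. 9)."""
    ranges = [[math.radians(v) for v in range(lo, hi + 1, step_deg)]
              for lo, hi in LIMITS]
    points = []
    for joints in itertools.product(*ranges):
        T = fk_model(list(joints))
        points.append((T[0][3], T[1][3], T[2][3]))
    return points
```

A finer step gives a denser envelope at the cost of exponentially more FK evaluations, so in practice the step is a trade-off between resolution and runtime.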

Fig. 9. The workspace envelope of the H20 arm

7. The Kinematic Model Verification
This section describes the verification procedures of the developed kinematic model with the real robotic arm. A labware manipulation strategy has been implemented using the IK solution and an ultrasonic sensor to calculate the distance between the H20 robot base and the required workbench. Some essential steps such as calibration and accuracy testing have been implemented as preliminary procedures before performing the labware manipulation strategy.

7.1. Angle to Servo Position Conversion
As an initial step for applying the kinematic model with the robotic arm, a conversion process has been performed to convert the required angle values of the joints to the related servo motor positions. The positioning resolution of the H20 arm servos (joints) is 0.09°/unit. Thus, the servo has to move 1,000 units to rotate 90°. This resolution value can't be


used with the H20 arms because they are unstable and have weak joints, where the joint compliance causes positional errors. The effects of gravity, the weight of the arm parts, the payload, and inertia cause the elasticity of each joint [27]. Also, the difference between the actual physical joint zero position and the physical joint zero position reported by the robot controller normally causes accuracy errors for the robotic arm. To cope with this issue, a digital tilt meter has been used (see Fig. 10). Different angle values have been configured for every joint using the tilt meter, and the value of the related servo motor has been registered at each configuration. This process helps to build the equations of angle-to-servo conversion, which is important for decreasing the accuracy errors.
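A sketch of the angle-to-servo conversion, using the stated 0.09°/unit resolution. The zero-offset and direction parameters are hypothetical placeholders standing in for the per-joint values obtained from the tilt-meter calibration:

```python
# Servo resolution of the H20 arm joints: 0.09 degrees per unit,
# i.e. 1000 units per 90 degrees. The zero offset and direction are
# hypothetical placeholders for the per-joint calibration values.
RESOLUTION_DEG_PER_UNIT = 0.09

def angle_to_servo(angle_deg, zero_offset_units=0, direction=1):
    """Convert a joint angle (degrees) to a servo position command."""
    return zero_offset_units + direction * round(
        angle_deg / RESOLUTION_DEG_PER_UNIT)
```

For example, angle_to_servo(90) yields 1000 units, matching the resolution stated in the text.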

7.2. Accuracy and Repeatability of H20 Arms
Checking the accuracy and repeatability of the robotic arms is an essential step for object manipulation tasks. The repeatability of the robotic arm describes how precisely this arm can return to a taught position. In general, larger robots have larger repeatability errors. On the other hand, the accuracy of the robotic arm describes how precisely this arm can reach the required position. One of the main technological limitations in the robotics industry is the improvement of the accuracy by reducing the error between the tool frame and the goal frame. The precision depends on elements such as the resolution of the control system, joint compliance, and the imprecision of the mechanical linkages and DC servo motors. Also, the accuracy and the repeatability depend upon many other factors such as friction, temperature, loading, and manufacturing tolerances. In case the robotic arm does not provide the required accuracy, the arm has to be calibrated. Robot calibration can be performed using both contact and non-contact probing methods. Non-contact methods include the use of beam breakers, laser sensors, visual servoing, etc. [27]. The accuracy and repeatability of the H20 arm have been checked using a grid paper and a marker attached at the end-effector as shown in Fig. 11. A grasping configuration has been prepared to enable the end-effector to reach the point (X: 30 mm, Y: 180 mm, Z: 380 mm) relative to the arm shoulder.

Fig. 11. The grid paper and the end-effector marker

Fig. 10. Conversion process using tilt meter


The arm grasping movement has been repeated 40 times, where the position of the end-effector has been registered at the end of every movement with the indication of the marker on the grid paper. The ranges of the registered end-effector positions are described in Table 2. It can be noticed from the case "before calibration" in Table 2 that there is an accuracy error in the Y-axis, where the expected position



is not within the range of the registered positions. On the other hand, the expected positions in the X and Z axes are within the range of the registered positions. The reason for this error is the weakness of the joints together with the joint compliance, which is affected by the weight of the arm parts. To improve the accuracy of reaching the required position, the robotic arm has to be calibrated. The accuracy and repeatability of the H20 arm have been checked again after performing a calibration process, and the positions of the end-effector have been registered as shown in the row "after calibration" in Table 2. Also, the Gaussian distributions of the end-effector positions after calibration with the related mean and standard deviation are shown in Fig. 12. According to the results obtained from this experiment, the accuracy of the used arm in reaching a specific position according to the related configuration is as follows: (X: ±4 mm, Y: ±4 mm, Z: ±2 mm).

Table 2. The expected and registered positions

Case | X (mm) | Y (mm) | Z (mm)
The expected position (hand relative to shoulder) | 30 | 180 | 380
The registered positions (before calibration) | 26~34 | 195~203 | 378~382
The registered positions (after calibration) | 26~34 | 176~184 | 378~382

Fig. 12. Gaussian distribution, A: for X-values, B: for Y-values, C: for Z-values


A calibration process for the robotic arm has been performed to keep the end-effector at a fixed height of 180 mm for different distances between the shoulder and the end-effector where the value 180 mm represents the height between the robot shoul-



8. Client-Server Communication Model
The programming code to control each device or component related to the life science automation system has been developed using a specific language. It is a complex task to integrate different control platforms into a single one due to the size of the platforms and the usually different programming languages. Therefore, it is required to develop a communication system that enables the simultaneous interaction of all the devices for a flexible process execution. The control systems of the robot components are connected in a common LAN. The client-server communication model enables the control system of each component to interact with the others over Ethernet using a specific IP address and port number. The client initiates the process with the server by requesting a connection to a specific socket address using TCP/IP, where the socket address is a combination of an IP address and a port number. If the requested port is free, then the server will establish the connection to communicate with the client. A client-server model has been developed to connect the arm manipulation system (AMS) with the H20 navigation control system (NCS) [8], [9]. Both control systems exchange orders and information to perform the labware transportation task.
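The NCS-AMS exchange can be illustrated with a minimal TCP client-server sketch in Python (the actual systems were written in C#; the port number and message format here are invented for illustration):

```python
import socket
import threading
import time

# Minimal sketch of the NCS/AMS exchange over TCP. The port number and
# the message format are invented for illustration; the actual systems
# were implemented in C#.
HOST, PORT = "127.0.0.1", 50007

def ams_server():
    """AMS side: accept one order and acknowledge it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            order = conn.recv(1024).decode()
            conn.sendall(("ACK " + order).encode())

def ncs_client(order, retries=40):
    """NCS side: send a manipulation order and wait for the reply."""
    for _ in range(retries):
        try:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
                cli.connect((HOST, PORT))
                cli.sendall(order.encode())
                return cli.recv(1024).decode()
        except ConnectionRefusedError:
            time.sleep(0.05)     # server thread not listening yet
    raise ConnectionRefusedError("AMS server not reachable")

t = threading.Thread(target=ams_server)
t.start()
reply = ncs_client("GRASP R1 Y=180 Z=400")
t.join()
print(reply)                     # ACK GRASP R1 Y=180 Z=400
```

The retry loop on the client side covers the startup race between the two endpoints, which in the real system correspond to two independently started control processes.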

9. Labware Manipulation
For the verification of the developed IK solution in real applications, object manipulation strategies have been performed to achieve labware transportation in different life science laboratories using H20 robots. According to the workspace of the H20 arms and the robot position in front of the labware station, each arm can manipulate 2 labwares alongside each other as shown in Fig. 14. The R1 and R2 labwares can be manipulated using the right arm, whereas the left arm can manipulate the L1 and L2 labwares. The labware containers have fixed positions on the workstation. The information about the tasks and target is sent from the process management system (PMS) to the NCS, which in turn transfers it to the AMS. Two approaches for labware manipulation have been implemented: a blind sonar-sensor-based method and a vision-based method using a Kinect sensor. Both methods have been developed using Microsoft Visual Studio 2015 with the C# programming language. The projects run on a Windows 10 platform on the H20 tablet.


The Z-values = the distance between the shoulder and the end-effector (mm)

Fig. 13. The calibration process to keep the end-effector at the height of 180 mm for different distances

der and the labware handle on the workstation. This has been done by inserting a specific Y-value into the IK model at each specific distance, as shown in Fig. 13. For example, if the required distance is 400 mm, the Y-value which has to be inserted into the IK model is 155 mm to keep the end-effector at the height of 180 mm.

Fig. 14. Manipulation ability of H20 arms


9.1. Arm Manipulation Using Sonar Sensor
This strategy has been performed using the developed kinematic model and the built-in DUR5200 ultrasonic sensors. The ultrasonic sensor can be used for different applications such as map building for the mobile robot environment, collision avoidance, robot range finding, and distance detection. The DUR5200 ultrasonic range sensor module can detect range information from 4 cm to 340 cm. The distance data is precisely calculated from the time interval between the instant when the measurement is enabled and the instant when the echo signal is received. For the process of grasping and placing, the API and the communication process between the NCS and the AMS have been implemented. As the H20 robot arrives at the desired location in front of the workstation, the first step to be performed is the orientation correction of the robot to be straight. As the labware container has a specific posture on the workstation according to its design (see Fig. 15), the pitch and roll orientations related to the robot are fixed, but the yaw orientation has to be corrected. The yaw orientation has been corrected using two sonar sensors mounted on the base of the H20 robot (see Fig. 17). The distance (Z) from each sensor to the workstation is checked, and the robot rotates until the values of both sensors are equal. Also, the height of every workstation in the labs is known. The error of the desired robot position in front of the workstation is ±2 cm in the X-axis. This error has been compensated by the design of the grippers and the labware container handles as shown in Fig. 16. There is a space range of ±3 cm between the lengths of the grippers and the handle, where this space range compensates the X-error of the robot position to guarantee a secure grasping. Using the client-server communication model, the orders are sent from the NCS to the AMS.
The order includes the required task (grasping or placing), the height (Y-value) of the workstation, the desired target (R1, R2, L1, L2) which determines the X- value position, and the distance (Z-value) between the robot

Fig. 17. The framework of labware manipulation

Fig. 18. The flowchart of labware manipulation

Fig. 19. Multiple positions of robot for manipulation Fig. 15. 3D design and posture of labware container

Fig. 16. The manner of grasping the handle


base and the workstation obtained from the sonar sensor. Depending on this information, the labware pose related to the arm shoulder is found and checked as to whether it is inside the arm's reachable space or not. If it is inside the reachable space, the IK solution will be calculated and the joint limits will be checked. If there are multiple solutions, then a decision procedure will be used to select the solution with minimum joint motion. Afterwards, the conversion equations will be used to convert the angle value of each joint to a servo position value to move the arm to the required pose as shown in Fig. 17. The average accuracy with


Journal of Automation, Mobile Robotics & Intelligent Systems

this strategy of labware manipulation is less than 1 cm, and the task of grasping or placing takes about 40 seconds to finish. The flowchart of this manipulation strategy is shown in Fig. 18. For the purpose of safe transportation, a holder has been mounted on the robot body (see Fig. 17) to guarantee a straight and secure posture for the labware, which contains chemical and biological components. This strategy can also be applied for multiple robot positions in front of the workbench. Different positions can be defined for the robot to reach. Then, 4 labware locations on the workstation can be determined for each robot position. In this way, multiple labware manipulation for multiple robot positions can be realized as shown in Fig. 19. The shift distance between the two positions is 29 cm. This strategy depends on the required robot position and the required labware position as follows: P1R1, P1R2, P1L1, P1L2, P2R1, P2R2, P2L1, and P2L2. The possible weight which the arm can manipulate using this design of gripper with handle is about 350 g. This limited payload is related to the weak wrist joint of the H20 arms. To manipulate heavier labwares blindly, a vertical handle has been designed as shown in Fig. 20. Using this design, the arm configuration will be in a form which locks the weak wrist joint

Fig. 20. The design of vertical handle

Fig. 21. The arm structure for manipulation. A: horizontal handle. B: vertical handle

when the grippers grasp the handle. In this case, the lifting process depends on the elbow joint, which has a more powerful torque (see Fig. 21). The possible payload for this case is accordingly higher. Fig. 21.A shows the wrist and elbow joints of the H20 arm for the case of grasping the horizontal handle. The wrist joint is very weak, and it is unable to lift heavy labwares, which leads to insecure manipulations. With the vertical handle, the lifting movement of the wrist joint will be locked as shown in Fig. 21.B. The elbow joint, which is more powerful than the wrist joint, will be responsible for lifting the heavy labwares from the workstation.

VOLUME 10,

N° 4

2016

Fig. 22. The holder of the Kinect sensor

tion. To perform that visually, the required target has to be identified and its pose related to the robot has to be calculated. Different visual sensors can be used for this purpose, such as stereo vision and 3D cameras. The Kinect sensor, which is a kind of 3D camera, is considered a preferred solution for such tasks since it provides the depth information without the need for deep image processing steps as in stereo vision. There are 2 kinds of Kinect, V1 and V2. Both of them have been used to perform the labware manipulation with H20 robots. The Kinect sensor has been fixed on the H20 body using a holder with a suitable height and tilt angle to guarantee a clear and wide view of the whole workstation, as shown in Fig. 22. For transporting multiple labwares, it is necessary to have an intelligent behavior to grasp the desired handle where the required labware is positioned. The Kinect sensor V1 has been used to detect and localize single-color objects [28]. The labware container handle and its placing holder have been detected using RGB color filtering as shown in Fig. 23. The Kinect V1 has 640×480 and 320×240 color frame and depth frame resolutions, respectively. To improve the identification of different handles, a new design with flat panels on the upper side has been developed. Proper grippers have also been designed to fit the new handle. The upper flat panel is used for fixing different colored or pictorial marks to distinguish multiple handles as shown in Fig. 24 [29]. The HSV (Hue, Saturation, Value) color segmentation method with shape (rectangle) and area detection has been used with the Kinect V2 to find the required handle. Also, a mark with specific features can be fixed on

9.2. Arm Manipulation Using Kinect Sensor
The required labware has to be distinguished and manipulated wherever it is located on the worksta-

Fig. 23. The detection of handle and placing holder [28]
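The color-filtering idea can be illustrated with a small HSV threshold sketch. The thresholds and pixel values are invented for illustration; a real detector would operate on full Kinect frames and add the shape and area checks described in the text:

```python
import colorsys

def hsv_mask(rgb_pixels, h_range, s_min=0.4, v_min=0.2):
    """Flag pixels whose hue (in degrees) lies inside h_range and
    whose saturation and value clear the minimum thresholds. HSV
    thresholds tolerate lighting changes better than raw RGB ones.
    All thresholds here are illustrative."""
    lo, hi = h_range
    mask = []
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        mask.append(lo <= h * 360 <= hi and s >= s_min and v >= v_min)
    return mask

# A saturated red mark, a green mark, and a near-white background pixel:
pixels = [(200, 30, 30), (30, 200, 30), (240, 235, 230)]
print(hsv_mask(pixels, h_range=(0, 15)))   # [True, False, False]
```

The saturation floor is what rejects the near-white background pixel even though its hue happens to fall in the red band, which is the practical advantage of HSV over plain RGB thresholding.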


Fig. 24. The design of grippers and handle [29]

Fig. 25. Handle detection using SURF and HSV [29]

the handle, to be recognized using the SURF (Speeded-Up Robust Features) algorithm. A polygon with a cross is drawn around the target to define it and to identify its center point as shown in Fig. 25 [29]. The detection and localization strategies have also been applied to find the holder related to the required position for labware placing tasks. HSV can be considered the most powerful system to be used for color segmentation because it is more robust to changes of lighting conditions in comparison with the RGB color system. The high resolution of the RGB and depth cameras of the Kinect V2 makes it very desirable for object detection and localization. The RGB camera of the Kinect V2 captures color frames with a resolution of 1920×1080 pixels, whereas the IR camera, which is used for depth frame acquisition, has a 512×424 pixel resolution. Using the design shown in Fig. 24, about 350 g can be manipulated. To manipulate heavier labware visually, it is complex to use the vertical handle shown in Fig. 20, as it is difficult to identify multiple vertical handles in the view due to their design. To cope with this issue, the required torque, which the wrist joint has to provide for lifting the labware, has to be decreased. This can be achieved by removing the handle attached to the labware container to decrease the lever arm of the wrist joint. In this case, the labware weight center will be closer to the wrist. New fingers with labware containers have been designed for this goal as shown in Fig. 26 [30]. The maximum payload which can be manipulated is accordingly increased. To perform the visual manipulation using this design, the labware itself has to be recognized and localized. Since the labwares have transparent or white lids to protect the components from cross-contamination, it is not applicable to recognize and differentiate them



on the workbench. To cope with this issue, a specific mark has been fixed on each labware lid as shown in Fig. 27. The mark image gives adequate features to differentiate multiple labwares. Different labware marks have been recognized and localized using the Kinect sensor V2 with the SURF algorithm [31]. The recognition of the labware is indicated by drawing a polygon with a cross around its mark to specify the center point. After the step of target recognition, the center point of this target is obtained. Then, the position of this center point related to the Kinect is found using a mapping process. The required point is mapped from the color frame space to the Kinect space coordinates. The position of the center point is used as a reference for estimating the grasping or placing point positions which the arm end-effector has to reach [30]. To move the robotic arm to the goal, an extrinsic calibration has to be applied. The purpose of this step is to transform the position information from the Kinect space to the arm shoulder space. Then, the inverse kinematic model is used to control the arm joints and guide the end-effector to the target. The Kinect-to-shoulder transformation includes the difference in position and the tilt angle (t) between them according to the Kinect holder, as shown in Fig. 28. This transformation is vital for using the visual input as a reference for manipulation or interaction. The Kinect-to-shoulder transformation consists of two steps: the first is the transformation from the Kinect sensor to the hinge, then the transformation

Fig. 26. Finger and labware container design [30]

Fig. 27. Labware lid with and without mark [30]



stipend M. M. Ali). The authors would also like to thank the Canadian DrRobot Company for the technical support of the H20 mobile robots in this study.

AUTHORS
Mohammed M. Ali* – Center for Life Science Automation (CELISCA), University of Rostock, Rostock 18119, Germany, mohammed.myasar.ali@celisca.de.
Hui Liu – Center for Life Science Automation (CELISCA), University of Rostock, Rostock 18119, Germany, Hui.Liu@celisca.de.

Fig. 28. Kinect-to-shoulder transformation

from hinge to shoulder. The transformation matrices are as follows:



where a, b, c represent the position differences along the x, y, z axes between the Kinect and the hinge. On the other hand, d, e, f represent the position differences along the x, y, z axes between the hinge and the arm shoulder. The tilt angle is represented by t. The transformation from the Kinect sensor to the hinge is the simpler of the two because it involves only a translation and no rotation. To find the final matrix to be inserted into the IK model, the process shown in Fig. 29 is performed.
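The two-step Kinect-to-shoulder mapping can be sketched as follows. The tilt is assumed here to act about the x-axis, which is an assumption about the holder geometry rather than a detail given in the paper:

```python
import math

def translation(x, y, z):
    """Homogeneous pure-translation matrix (Kinect -> hinge step)."""
    return [[1.0, 0.0, 0.0, x],
            [0.0, 1.0, 0.0, y],
            [0.0, 0.0, 1.0, z],
            [0.0, 0.0, 0.0, 1.0]]

def hinge_to_shoulder(d, e, f, t):
    """Translation (d, e, f) combined with the holder tilt t, here
    assumed to act about the x-axis."""
    ct, st = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0, d],
            [0.0, ct,  -st, e],
            [0.0, st,   ct, f],
            [0.0, 0.0, 0.0, 1.0]]

def mat_mul(A, B):
    """Product of two 4x4 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def kinect_point_to_shoulder(point, abc, def_, t):
    """Map a point from Kinect coordinates to shoulder coordinates:
    shoulder <- hinge <- Kinect."""
    T = mat_mul(hinge_to_shoulder(*def_, t), translation(*abc))
    x, y, z = point
    v = [x, y, z, 1.0]
    return [sum(T[i][k] * v[k] for k in range(4)) for i in range(3)]
```

With a zero tilt the two steps reduce to a pure translation by (a+d, b+e, c+f), which gives a simple consistency check for the composed matrix.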

Norbert Stoll – Institute of Automation, University of Rostock, Rostock 18119, Germany, Norbert.Stoll@uni-rostock.de. Kerstin Thurow – Center for Life Science Automation (CELISCA), University of Rostock, Rostock 18119, Germany, Kerstin.Thurow@celisca.de. *Corresponding author

REFERENCES


Fig. 29. The final matrix calculation process

The required time for performing the visual grasping is about 69 seconds, while about 59 seconds are required for the visual placing.


10. Conclusion
In this paper, the forward and inverse kinematics solutions for the H20 robot arms have been derived. Also, the IK solutions for different singularity cases have been found. The reverse decoupling mechanism method has been used to solve the IK problem analytically. The derived solution of the IK problem can be used for any other robotic arm which has the same joint structure and coordinate frames. A decision model has been used to select the desired joint values among multiple choices. Computer simulations have been used to validate the IK solution and to calculate the reachable workspace of the H20 arms. Two labware manipulation strategies have been performed using the sonar sensor and the Kinect sensor.

ACKNOWLEDGEMENTS

This work was funded by the Federal Ministry of Education and Research (FKZ: 03Z1KN11, 03Z1KI1) and the German Academic Exchange Service (Ph.D.

REFERENCES

[1] Chung H., Hou C., Chen Y., Chao C., "An intelligent service robot for transporting objects". In: IEEE International Symposium on Industrial Electronics (ISIE), Taipei, Taiwan, 2013, 1–6.
[2] Ciocarlie M., Hsiao K., Jones E. G., Chitta S., Rusu R. B., Şucan I. A., "Towards reliable grasping and manipulation in household environments". In: 12th International Symposium on Experimental Robotics (ISER), Springer Berlin Heidelberg, 2014, 241–252. DOI: 10.1007/978-3-642-28572-1_17.
[3] Graf B., Reiser U., Hägele M., Mauz K., Klein P., "Robotic home assistant Care-O-bot® 3 – product vision and innovation platform". In: IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), Tokyo, Japan, 2009, 139–144.
[4] Vahrenkamp N., Berenson D., Asfour T., Kuffner J., Dillmann R., "Humanoid motion planning for dual-arm manipulation and re-grasping tasks". In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), St. Louis, USA, 2009, 2464–2470. DOI: 10.1109/IROS.2009.5354625.
[5] O'Flaherty R., Vieira P., Grey M. X., Oh P., Bobick A., Egerstedt M., Stilman M., "Humanoid robot teleoperation for tasks with power tools". In: IEEE International Conference on Technologies for Practical Robot Applications (TePRA), Woburn, MA, 2013, 1–6. DOI: 10.1109/TePRA.2013.6556362.
[6] Tsay T. J., Hsu M. S., Lin R. X., "Development of a mobile robot for visually guided handling of material". In: IEEE International Conference on Robotics and Automation (ICRA), Taipei, Taiwan, 2003, 3397–3402.

Journal of Automation, Mobile Robotics & Intelligent Systems

[7] Liu H., Stoll N., Junginger S., Thurow K., "A common wireless remote control system for mobile robots in laboratory". In: IEEE Conference on Instrumentation and Measurement Technology (I2MTC), Graz, Austria, 2012, 688–693.
[8] Liu H., Stoll N., Junginger S., Thurow K., "Mobile Robot for Life Science Automation", International Journal of Advanced Robotic Systems, vol. 10, 2013, 1–14. DOI: 10.5772/56670.
[9] Abdulla A. A., Liu H., Stoll N., Thurow K., "A New Robust Method for Mobile Robot Multifloor Navigation in Distributed Life Science Laboratories", J. Control Sci. Eng., vol. 2016, Jul. 2016, 1–17. DOI: 10.1155/2016/3589395.
[10] Iqbal J., ul Islam R., Khan H., "Modeling and Analysis of a 6 DOF Robotic Arm Manipulator", Canadian Journal on Electrical and Electronics Engineering, vol. 3, 2012, 300–306.
[11] Tevatia G., Schaal S., "Inverse kinematics for humanoid robots". In: IEEE International Conference on Robotics and Automation (ICRA), San Francisco, CA, USA, 2000, 294–299. DOI: 10.1109/ROBOT.2000.844073.
[12] Mistry M., Nakanishi J., Cheng G., Schaal S., "Inverse kinematics with floating base and constraints for full body humanoid robot control". In: IEEE-RAS International Conference on Humanoid Robots, Daejeon, Korea, 2008, 22–27. DOI: 10.1109/ICHR.2008.4755926.
[13] Nie L., Huang Q., "Inverse kinematics for 6-DOF manipulator by the method of sequential retrieval". In: Proceedings of the International Conference on Mechanical Engineering and Material Science, China, 2012, 255–258.
[14] Pieper D. L., The kinematics of manipulators under computer control, Ph.D. Dissertation, Stanford University, 1968.
[15] Lee C. G. S., Ziegler M., "Geometric approach in solving inverse kinematics of PUMA robots", IEEE Transactions on Aerospace and Electronic Systems, vol. 20, 1984, 695–706. DOI: 10.1109/TAES.1984.310452.
[16] Ho T., Kang C.-G., Lee S., "Efficient closed-form solution of inverse kinematics for a specific six-DOF arm", International Journal of Control, Automation and Systems, vol. 10, no. 3, 2012, 567–573. DOI: 10.1007/s12555-012-0313-9.
[17] Huang G.-S., Tung C.-K., Lin H.-C., Hsiao S.-H., "Inverse kinematics analysis trajectory planning for a robot arm". In: 8th Asian Control Conference (ASCC), Kaohsiung, Taiwan, 2011, 965–970.
[18] Man C.-H., Fan X., Li C.-R., Zhao Z.-H., "Kinematics analysis based on screw theory of a humanoid robot", Journal of China University of Mining and Technology, vol. 17, no. 1, 2007, 49–52. DOI: 10.1016/S1006-1266(07)60011-X.
[19] Paul R. P., Shimano B. E., Mayer G., "Kinematic control equations for simple manipulators", IEEE Transactions on Systems, Man, and Cybernetics, vol. 11, 1981, 449–455. DOI: 10.1109/CDC.1978.268148.

[20] Zhao T., Yuan J., Zhao M., Tan D., "Research on the Kinematics and Dynamics of a 7-DOF Arm of Humanoid Robot". In: IEEE International Conference on Robotics and Biomimetics (ROBIO), Kunming, China, 2006, 1553–1558. DOI: 10.1109/ROBIO.2006.340175.
[21] Ali M. A., Park H. A., Lee C. G., "Closed-form inverse kinematic joint solution for humanoid robots". In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Taipei, Taiwan, 2010, 704–709.
[22] O'Flaherty R., Vieira P., Grey M., Oh P., Bobick A., Egerstedt M., Stilman M., "Kinematics and Inverse Kinematics for the Humanoid Robot HUBO2", Georgia Institute of Technology, Atlanta, GA, USA, Technical Report, 2013.
[23] Borenstein J., Everett H. R., Feng L., "Where am I? Sensors and methods for mobile robot positioning", University of Michigan, USA, 1996.
[24] Borenstein J., "The CLAPPER: A dual-drive mobile robot with internal correction of dead-reckoning errors". In: IEEE International Conference on Robotics and Automation, San Diego, CA, 1994, 3085–3090. DOI: 10.1109/ROBOT.1994.351095.
[25] Lysenkov I., Rabaud V., "Pose estimation of rigid transparent objects in transparent clutter". In: IEEE Conference on Robotics and Automation (ICRA), Karlsruhe, Germany, 2013, 162–169. DOI: 10.1109/ICRA.2013.6630571.
[26] Denavit J., Hartenberg R. S., "A kinematic notation for lower-pair mechanisms based on matrices", ASME Journal of Applied Mechanics, vol. 22, 1955, 215–221.
[27] Conrad K. L., Shiakolas P. S., Yih T. C., "Robotic calibration issues: Accuracy, repeatability and calibration". In: Proceedings of the 8th Mediterranean Conference on Control and Automation (MED2000), Rio, Patras, Greece, 2000.
[28] Ali M. M., Liu H., Stoll N., Thurow K., "Arm grasping for mobile robot transportation using Kinect sensor and kinematic analysis". In: IEEE International Conference on Instrumentation and Measurement Technology (I2MTC), Pisa, Italy, 2015, 516–521. DOI: 10.1109/I2MTC.2015.7151321.
[29] Ali M. M., Liu H., Stoll N., Thurow K., "Intelligent Arm Manipulation System in Life Science Labs Using H20 Mobile Robot and Kinect Sensor". In: IEEE International Conference on Intelligent Systems (IS'16), Sofia, Bulgaria, 2016, 382–387.
[30] Ali M. M., Liu H., Stoll N., "Multiple Lab Ware Manipulation in Life Science Laboratories using Mobile Robots". In: IEEE International Conference on Mechatronics, Prague, Czech Republic, 2016, 415–421.
[31] Ali M. M., Liu H., Stoll N., Thurow K., "An Identification and Localization Approach of Different Labware for Mobile Robot Transportation in Life Science Laboratories". In: IEEE International Symposium on Computational Intelligence and Informatics, Budapest, Hungary, 2016, 353–358.


Submitted: 28th June 2016; accepted: 20th October 2016

DOI: 10.14313/JAMRIS_4-2016/31

Abstract: The hydraulic calculations of sewage networks are usually done with the use of nomograms, i.e. diagrams that show the relation between the main network parameters, such as pipe diameters, flow rates, hydraulic slopes and flow velocities. In traditional planning of sewage networks the appropriate hydraulic values are read mechanically from the nomograms. Another way of calculation is the use of professional programs, such as the SWMM5 hydraulic model together with genetic or heuristic optimization algorithms. In this paper still another way of carrying out the hydraulic and planning calculations is presented, in which the basic hydraulic rules and formulas describing sewage networks and their functioning are used. Numerical solutions of the nonlinear equations resulting from these formulas, which describe the main phenomena of sewage flows, are used in the paper to solve the tasks of hydraulic calculation and planning of the networks.

Keywords: drinking water networks, hydraulic water nets modeling, water net revitalization, water net reliability

1. Introduction

Modelling and planning of municipal sewage networks is a complex task because of the complexity of the equations describing the sewage flows in the canals. The basic hydraulic parameters describing a sewage net are the sewage flows and the sewage filling heights in the canals, which result from the canal diameters and canal slopes. The standard approach to planning sewage nets consists in using nomograms, i.e. diagrams showing the relations between canal diameters, sewage flows, canal hydraulic slopes and flow velocities. The values of these variables are read off from diagrams that result from earlier evaluation of the standard hydraulic formulas for computing the sewage network canals. A more advanced approach to planning sewage networks is based on professional programs for calculating network hydraulic models, such as SWMM5 developed by the EPA (US Environmental Protection Agency). The first approach, with the diagrams, is simple but purely mechanical, and the second one is rather complicated. In this paper an intermediate approach to calculating the hydraulic parameters of sewage networks is presented; it consists in relatively simple numerical solutions of the nonlinear equations resulting from the basic hydraulic rules and formulas. The method proposed for modelling and planning sewage networks enables quick analysis of the network parameters, which makes it similar to the nomogram approach, and it makes it easy to understand the mutual relations between the different hydraulic parameters of the network canals.

2. The Basic Assumptions

In the paper some analytical and design methods for municipal wastewater networks are presented. The networks concerned are gravitational and are represented in the form of graphs divided by nodes into branches and sectors. The main hydraulic parameters are the sewage flows and the filling heights in the canals, and the factors deciding about their values are the canal diameters, slopes and profiles. The nodes are the points of connection of several network segments or branches, the points where the network parameters change, or the points of localization of sewage inflows into the network (sink basins, rain inlets, connecting basins). It is assumed that the hydraulic parameters of the net canals, such as the shape, dimension, slope and roughness, are constant within each investigated segment, and that the sewage inflow into the net nodes occurs pointwise. All relations concerning the sewage flows in the canals are assumed to be of steady state type. In the net nodes the flow balance equations and the condition of level consistence are satisfied.

3. The Analysis of the Sewage Net Hydraulic Parameters

The analysis of the hydraulic parameters of a sewage network consists in calculating the canal filling degrees and flow velocities depending on the sewage flow rates for known cross-sections and canal slopes. The calculation is done for individual net segments using the flow values in the net nodes determined before. The method consists in the numerical solving of some nonlinear algebraic equations resulting from the relations that describe the investigated sewage net. These relations are formulated on the basis of the main hydraulic principles and formulas [3, 4]. The equations describing the canal filling degree H/d depending on the flow rate Q have the following form [1, 3]:
– for the i-th net segment (resulting from the flow balance equation):


Q_i = q_i + Σ_{j<i} Q_j    (1)

– for x = H/d ≤ 0.5:

(2a)
(2b)
(2c)

– for x = H/d > 0.5:

(3a)
(3b)
(3c)

(4)

where: d – circle diameter in [m], n – roughness coefficient in [s/m^(1/3)], J – canal bottom slope in %, φ – central angle, H – fulfillment height in [m], H/d – fulfillment degree, Q – flow rate in [m^3/s] for a segment, q_i – sewage inflow rate for the i-th segment.

The parameter β introduced in (4) depends on the canal diameter d and on the canal slope J; for fixed values of the diameter d and slope J it is constant, and known approximating methods can be used to solve the equations. Equations (1), (2a)–(2c) and (3a)–(3c) form the model describing the main relations in the sewage net. They describe the relation between the canal fulfillment degree x and the sewage flow rate Q for the known cross-sections and canal slopes. In this model the canal fulfillment degree x is the variable that depends on the sewage inflow rates supplying the individual net nodes, while the canal diameters d and bottom slopes J are given network parameters. Each equation is therefore solved for x in dependence on the flow rate Q, and each change of the flow rate Q causes a change in the solution of the equation. For the calculated canal filling degree H/d the velocities of the sewage flow can be computed according to the following relations:
– for H/d ≤ 0.5:

(5a)
(5b)

– for H/d > 0.5:

v = b_1 · ((π − 0.5·φ + 0.5·sin φ) / (π − 0.5·φ))^(2/3)    (6a)

φ(x) = 2·arccos(2·(H/d) − 1)    (6b)

(7)

Knowing the sewage net geometry (the shape, the diameters and the canal bottom slopes) and the values of the inflows Q, one can calculate the fulfillment heights and flow velocities for each net segment. The calculation is done in turn for each net segment, beginning with the farthest one and finishing with the segment that is closest to the sewage treatment plant. The method used in the following algorithm enables fast analysis of the main net parameters (the fulfillment degrees and velocities) and gives the possibility to simulate the process of sewage flows in the network. Changing the sewage inflow values in some indicated net nodes, one can calculate in a very simple way the new values of the fulfillment degrees and flow velocities in the net segments connected directly with these nodes. The method of calculating the new diameters d and bottom slopes J for the investigated net canals is included in the algorithm shown in Fig. 1. The algorithm first enters the network structure and input data, i.e. the number of nodes NW, the number of segments N, the set of nodes W = {i=1,...,NW}, the set of segments U = {i=1,...,N}, the set of diameters {d_i}, the set of slopes {J_i} and the inflow rates {q_i} in the nodes. Then, for each segment i, it calculates the flow rate Q in the i-th node from the flow balance, calculates β according to formula (4), checks the solvability condition (whether the equation has a unique root), solves equations (2a)–(2c) or (3a)–(3c) to obtain the canal filling degree, checks the condition x ≤ 70%, calculates the flow velocity v according to formulas (5a)–(5b) or (6a)–(6b) and (7), and, if the filling condition is violated, calculates a new value of the canal diameter d and the canal slope J.

Figure 1. The algorithm of analysis of hydraulic parameters of sewage networks
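The core of the analysis step can be sketched numerically. Since the closed-form coefficients of equations (2a)–(7) are not legible in the source, the sketch below uses the standard Manning formula for a partially filled circular canal (an assumption, but one that reproduces the results of Section 6: for segment 1 of Table 1, with d = 0.2 m, J = 0.5%, n = 0.013 and Q = 0.56 dm^3/s, it returns H/d ≈ 10.7% and v ≈ 0.31 m/s, against the tabulated 10.72% and 0.309 m/s).

```python
import math

def phi_angle(x):
    # central angle [rad] of the wetted circular segment for filling x = H/d
    return 2.0 * math.acos(1.0 - 2.0 * x)

def wetted_area(x, d):
    phi = phi_angle(x)
    return (d * d / 8.0) * (phi - math.sin(phi))

def flow_rate(x, d, J, n):
    # Manning formula Q = (1/n) * A * R^(2/3) * J^(1/2), with J as a fraction
    A = wetted_area(x, d)
    R = A / (d * phi_angle(x) / 2.0)   # hydraulic radius = area / wetted perimeter
    return (1.0 / n) * A * R ** (2.0 / 3.0) * math.sqrt(J)

def filling_degree(Q, d, J, n=0.013):
    # bisection on the monotone part of the Q(x) curve; x <= 0.95 safely
    # covers the admissible range x <= 0.7 used in the paper
    lo, hi = 1e-6, 0.95
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if flow_rate(mid, d, J, n) < Q else (lo, mid)
    return 0.5 * (lo + hi)

# segment 1 of the example network: d = 0.2 m, J = 0.5%, Q = 0.56 dm3/s
x = filling_degree(0.00056, 0.2, 0.005)
v = 0.00056 / wetted_area(x, 0.2)      # mean velocity = Q / A
```

Each change of the inflow Q only changes the right-hand side of the solved equation, which is why re-running the bisection per segment is enough to propagate modified inflows through the network.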



4. The Features of the Canal Bottom Slopes

In the following discussion concerning the problem of planning sewage networks two kinds of canal slopes are relevant: the permissible slope and the self-cleaning slope.

4.1. Permissible Slope

Liquid flows in canals can in general be of quiet, critical or turbulent character. This depends on the value of the following Froude number [1, 5, 10]:

Fr = v / √(g·A/B)    (8)

with: Fr – Froude number, v – average flow velocity [m/s], A – surface area of the active cross-section [m^2], B – width of the sewage surface [m], g – gravity acceleration [m/s^2]. Depending on the Froude number the sewage flow can be quiet (laminar) (for Fr < 1), critical (for Fr = 1) or turbulent (for Fr > 1). In steady state flows with a free sewage level the slope of the canal bottom decides about the average flow velocity. Assuming equality between the hydraulic slope and the canal bottom slope, the critical slope has the following form [8]:

(9)

with: J_kr – critical slope [%], U – length of the canal wetted circumference [m], and n – roughness coefficient. The critical slope of a canal is a function of the canal geometrical dimensions and the canal filling. If the canal filling and circumference length rise monotonically, then the surface width of the sewage level and the hydraulic radius go from zero to their extremes. For a canal with a circular cross-section the surface width grows from zero to its maximum when the canal filling rises to half of the diameter, and afterwards it goes back to zero. The hydraulic radius rises from zero to its maximum, reached at a canal filling equal to 81.3%, and then it diminishes to the value that is reached at a canal filling equal to the half or the full canal diameter. One can deduce then that the critical canal slope has an extremum depending on the fulfillment degree x = H/d, and it is as follows:
– for x ≤ 0.5:

(10a)
(10b)

– for x > 0.5:

(11a)
(11b)

with: H – height of the canal filling, d – canal diameter, φ – central angle. The diagrams showing the changes of the canal critical slope for different canal diameters, depending on the canal filling degree, are shown in Fig. 2. The critical slope values for canal fillings equal to 0 or 100% go to infinity, and the critical slope J_kr reaches its minimal value for a canal filling equal to 29.7% of the canal diameter.

Figure 2. Relations between the critical slope Jkr and the filling degree x for different diameters d (d = 0.3, 0.5, 0.8)

Stating for a canal n = 0.013 and the fulfillment degree x = 29.7%, the following formula for the permissible slope results:

(12)

It means that for securing the laminar sewage flow in a canal the canal slope value has to be less than the permissible slope J_g. It results from (12) that the permissible slope depends only on the canal diameter d.
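Formula (9) is not legible in the source; a minimal sketch, assuming the standard Manning-based critical slope J_kr = g·n²·A/(B·R^(4/3)) obtained from the critical-flow condition Fr = 1 with v = (1/n)·R^(2/3)·√J, reproduces the behaviour described above: J_kr grows without bound towards empty and full pipe, and attains its minimum near the quoted filling of 29.7%.

```python
import math

def critical_slope(x, d, n=0.013, g=9.81):
    # J_kr = g * n^2 * A / (B * R^(4/3)) for a partially filled circular canal
    phi = 2.0 * math.acos(1.0 - 2.0 * x)
    A = (d * d / 8.0) * (phi - math.sin(phi))   # wetted cross-section area
    B = d * math.sin(phi / 2.0)                 # free-surface width
    R = A / (d * phi / 2.0)                     # hydraulic radius
    return g * n * n * A / (B * R ** (4.0 / 3.0))

# locate the filling degree that minimises the critical slope for d = 0.3 m
xs = [i / 1000.0 for i in range(10, 991)]
x_min = min(xs, key=lambda x: critical_slope(x, 0.3))
```

With this assumed form, x_min lands close to 0.3, consistent with the 29.7% stated in the text; the permissible slope J_g of formula (12) can then be read as the critical slope evaluated at that filling, which indeed depends only on d.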

4.2. Canal Slope Securing the Process of the Canal Self-Cleaning

The sewage passing through the canals should have an appropriately large flow velocity, called the self-cleaning speed. In case of intensive sewage flows such a velocity assures a dilution and a transport of the sediments that have settled on the canal bottom at the times of smaller intensity of the sewage flows. The self-cleaning speed is secured when the friction between the sewage and the canal wall is bigger than τ_min = 0.150 [kg/m^2] for rain wastewater, and bigger still for communal and industrial sewage [16].


The average pass tension between the canal wall and the sewage is described by the formula [16, 10]:

(13)

with: τ – pass tension [kg/m^2], ρ – specific density of the sewage [kg/m^3], R – hydraulic radius for the partially filled canal [m], J – canal slope [%]. By means of relation (13) the minimal canal slopes J_s securing the self-cleaning speed for the gravitational sewage flow can be calculated:
– for x ≤ 0.5:

(14a)

– for x > 0.5:

(14b)

where the central angle φ is calculated from (10b) or (11b). The diagrams showing the change of the minimal canal slope in dependence on the canal filling degree for different canal diameters are shown in Fig. 3.

Figure 4. Relations between the canal minimal slope Js and the canal diameter d for different pass tensions (tau = 0.225, 0.25, 0.3)

As results from the above analysis, the self-cleaning process of the canals is secured when the canal slopes are bigger than the minimal slope Js. In Fig. 5 the diagrams describing the relations between the permissible slope Jg, the minimal slope Js and the canal diameters d are shown.
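Assuming formula (13) reads τ = ρ·R·J, with τ expressed in [kgf/m^2] (an assumption, since the equation itself is not legible in the source), the minimal slope follows as J_s = τ_min / (ρ·R). Evaluated at the filling degree used later for formula (15), this reproduces the trend of Fig. 4: the minimal slope falls as the diameter grows.

```python
import math

RHO = 999.6      # sewage density [kg/m3] (value quoted in Section 5.1)
TAU_MIN = 0.25   # minimal pass tension [kg/m2] (value quoted in Section 5.1)

def hydraulic_radius(x, d):
    phi = 2.0 * math.acos(1.0 - 2.0 * x)
    A = (d * d / 8.0) * (phi - math.sin(phi))
    return A / (d * phi / 2.0)

def min_slope(d, x=0.6, tau_min=TAU_MIN):
    # J_s = tau_min / (rho * R), with R taken at the filling degree x
    return tau_min / (RHO * hydraulic_radius(x, d))

# minimal slopes for diameters d = 0.2 ... 1.0 m
slopes = [min_slope(d / 10.0) for d in range(2, 11)]
```

Under these assumptions the values fall in the sub-percent range shown in Fig. 4 (e.g. about 0.3% for d = 0.3 m with tau = 0.25).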

Figure 3. Relations between the minimal canal slope Js and the canal filling degree x for different canal diameters d (d = 0.2, 0.3, 0.5, 0.8)

From the diagrams it results that the minimal slopes Js shrink with the growing filling degrees; this behaviour is fastest for filling degrees smaller than 10%. The minimal slopes reach their minimum at a filling degree equal to 60%, and then they grow insignificantly. Inserting into the above formulas the value of the hydraulic radius corresponding to the canal filling degree x = 60%, one obtains the following formula for the minimal slope assuring the self-cleaning process in a canal:

(15)

The diagrams describing the dependence between the canal minimal slope Js and the canal diameter d for different pass tensions τ_min are shown in Fig. 4; the slope values diminish with the growing diameter values d.

Figure 5. Relations between the minimal slope Js and the permissible slope Jg depending on the canal diameter d

From the diagram it results that, in order to assure the self-cleaning process in the canals and at the same time to secure the laminar sewage flow, the canal slope values should be chosen between the minimal slope Js and the permissible one Jg.

5. The Problems of the Sewage Net Planning

The following designing of the sewage net segments concerns the cases when new segments are connected to the existing sewage net or when the fillings for some segments exceed the permissible values. It is assumed that the structure of the new segments and the input flows to the newly designed nodes are known. The bottom slopes J and diameters d of the new canals must guarantee the laminar and self-cleaning sewage flow in them, which means that some functional restrictions for the network have to be fulfilled: the sewage filling levels in the canals cannot exceed some determined values.


5.1. Limitations

The first group of limitations has to assure that the processes of self-cleaning and of laminar sewage flow occur in the canals. These demands are fulfilled by the determination of a specified canal slope that shall be bigger than the minimal slope Js and less than the permissible slope Jg. Stating in (12) and (15) the parameter values n = 0.013 [s/m^(1/3)], g = 9.8067 [m/s^2], ρ = 999.6 [kg/m^3] and τ_min = 0.25 [kg/m^2], these conditions can be written down in the form of the following inequalities:

J − Js > 0    (16a)

J − Jg < 0    (16b)

The diagrams of these relations are shown in Figs. 6 and 7.

The second group of limitations covers the existence of solutions of the nonlinear algebraic equations describing the relations between the canal filling degree x and the sewage inflow Q into the network in its steady state operation. By solving these equations the filling degree x is determined for the known network parameters and given sewage inflows Q. A detailed analysis of the equations has been done in [3] and [4]. As results from there, a solution of the equations exists when the following inequalities are fulfilled:

After some transformation the following relation occurs:

(17)

Regarding the necessity of securing the ventilation of the canals and the right operation of the inside installations of the canal system, it is noted that the canal filling degree x = H/d cannot be bigger than 70%. This leads to the following condition:

(18)

Function F_2(x) is expressed through the central angle φ_2(x) = 2·arccos(2·x − 1). From (18), for x = 70%, the following conditions result:

(19)

Figure 6. The diagram of the relation J − Js depending on the diameter d and on the slope J

According to relation (16a), such values of the diameters d and slopes J are to be selected for which the surface presented in Fig. 6 takes positive values.

Figure 7. The diagram of the relation J − Jg depending on the diameter d and on the slope J

According to relation (16b), such values of the diameters d and slopes J are to be selected for which the surface presented in Fig. 7 takes negative values.

The diagram of the function defined in (19), depending on the diameter d and the slope J, is shown in Fig. 8; the flow value Q is equal to 100 [dm^3/s].

Figure 8. The diagram of the function defined in (19)

According to relation (19), such values of the diameters d and slopes J are to be selected for which the surface presented in Fig. 8 takes positive values.
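The first group of limitations can be screened numerically. The sketch below is illustrative only: the permissible slope J_g is taken as the minimum of the Manning-based critical slope over the filling degree (an assumption consistent with Section 4.1, where J_g depends only on d), and J_s comes from the pass-tension condition with the parameter values quoted in Section 5.1. It checks that the slope J = 0.5% used in the example of Section 6 falls inside the feasible band for d = 0.3 m.

```python
import math

def _geom(x, d):
    """Wetted area A, surface width B and hydraulic radius R of a circular canal."""
    phi = 2.0 * math.acos(1.0 - 2.0 * x)
    A = (d * d / 8.0) * (phi - math.sin(phi))
    B = d * math.sin(phi / 2.0)
    R = A / (d * phi / 2.0)
    return A, B, R

def permissible_slope(d, n=0.013, g=9.81):
    # J_g taken as the minimum of the critical slope over the filling degree,
    # so any J < J_g keeps the flow subcritical at every filling
    return min(g * n * n * A / (B * R ** (4.0 / 3.0))
               for A, B, R in (_geom(i / 1000.0, d) for i in range(10, 991)))

def minimal_slope(d, tau_min=0.25, rho=999.6, x=0.6):
    # self-cleaning condition J_s = tau_min / (rho * R) at filling x = 60%
    _, _, R = _geom(x, d)
    return tau_min / (rho * R)

d = 0.3
band = (minimal_slope(d), permissible_slope(d))          # (J_s, J_g)
feasible = band[0] < 0.005 < band[1]                     # J = 0.5% inside band?
```

Under these assumptions the band for d = 0.3 m is roughly (0.30%, 0.56%), so the design slope of 0.5% satisfies both (16a) and (16b).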


Figure 9. Diagram of the sewage network investigated (nodes W1–W27 of the existing network and the added nodes X1–X6)

The problem presented below concerns the planning of a network branch consisting of K segments. The task is to determine such values of the diameters d and slopes J that would ensure the safe and correct sewage net operation. The net structure, the number and lengths of the segments, and the wall depths of the first network segments are known. The problem consists in solving the inequalities (16a), (16b) and (19) for each segment of the designed network branch. The equations of the flow balance are fulfilled in each node of the network.

6. An Example of Modelling and Planning of a Sewage Network

The algorithm of sewage network analysis presented above is applied in the following to investigate a network with 27 nodes and 26 segments. There are 15 input nodes (W6, W7, W8, W10, W11, W14, W15, W16, W19, W20, W21, W23, W25, W26, W27) and 1 output node W1, while the other nodes, called montage nodes, make only the connections of the related net pipes. The network diagram is shown in Fig. 9. The values of the sewage inflows into the network are stated in the input nodes, and the sewage flows in the pipes are marked with arrows on the diagram. In the montage nodes the sewage flows are calculated according to the appropriate flow balance equations. For the individual network pipes the diameter values d = 0.2 [m] and the slope values equal to J = 0.5% are stated. For the network of the given structure the canal fulfillment degrees H/d and the sewage flow velocities in all pipes have been calculated by means of the formulas carried out in the earlier considerations. The network has also been calculated using the MOSKAN program developed at IBS PAN, based on the hydraulic model SWMM5, available on the Internet as an open source application [15]. With the interrupted lines the additional nodes (X1, X2, X3, X4, X5) are marked which are to be added to the existing network; for the canals connecting these nodes the diameters d and slope values J are to be calculated. The calculation results received are shown in Table 1. One can see from the table that the values calculated using the algorithm formulated in the paper are very similar to the ones received by means of the MOSKAN program; the negligible differences visible in the table result from the numerical roundings made in MOSKAN. Calculation of the parameter values regarding the canal diameters d and canal slopes J has been done for two different cases. The case no. 1 deals with the situation in which the sewage inflows in the 5 network nodes W23, W25, W26, W24, W27, marked in Fig. 9, have been increased. This is connected with an increase of the fulfillment degrees in the canals connected with these nodes. The calculation results are shown in Table 2. The analysis of the results shows that the canal fulfillment degrees for the segments 21 and 26, connecting the nodes W22 with W17 and W17 with W1 respectively,

Table 1. Hydraulic calculation results for the sewage network shown in Fig. 9

Upper node | Lower node | Segment | input flow in node q [dm3/s] | flow in segment Q [dm3/s] | H/d [%] | v [m/s] | H/d [%] MOSKAN | v [m/s] MOSKAN
W6 | W5 | 1 | 0.56 | 0.56 | 10.72 | 0.309 | 11 | 0.29
W7 | W5 | 2 | 0.31 | 0.31 | 8.09 | 0.259 | 8 | 0.26
W5 | W4 | 3 | 0.27 | 1.14 | 15.08 | 0.383 | 15 | 0.38
W10 | W9 | 4 | 0.36 | 0.36 | 8.69 | 0.271 | 9 | 0.27
W11 | W9 | 5 | 1.13 | 1.13 | 15.02 | 0.382 | 14.6 | 0.39
W9 | W4 | 6 | 0.64 | 2.13 | 20.48 | 0.460 | 20 | 0.46
W4 | W3 | 7 | 0.64 | 3.91 | 27.78 | 0.549 | 28 | 0.55
W8 | W3 | 8 | 0.11 | 0.11 | 4.98 | 0.189 | 5 | 0.19
W3 | W2 | 9 | 0.10 | 4.12 | 28.53 | 0.557 | 29 | 0.56
W14 | W13 | 10 | 0.11 | 0.11 | 4.98 | 0.189 | 5 | 0.19
W15 | W13 | 11 | 0.32 | 0.32 | 8.22 | 0.261 | 8 | 0.26
W13 | W12 | 12 | 0.23 | 0.66 | 11.59 | 0.325 | 12 | 0.33
W16 | W12 | 13 | 0.24 | 0.24 | 7.17 | 0.240 | 7 | 0.24
W12 | W2 | 14 | 1.86 | 2.76 | 23.29 | 0.497 | 23 | 0.49
W2 | W1 | 15 | 0.73 | 7.61 | 39.42 | 0.661 | 39 | 0.66
W23 | W22 | 16 | 0.56 | 0.56 | 10.72 | 0.309 | 11 | 0.29
W27 | W22 | 17 | 0.40 | 0.40 | 9.13 | 0.280 | 9 | 0.27
W25 | W24 | 18 | 0.81 | 0.81 | 12.79 | 0.346 | 13 | 0.36
W26 | W24 | 19 | 0.83 | 0.83 | 12.94 | 0.348 | 13 | 0.38
W24 | W22 | 20 | 0.09 | 1.73 | 18.48 | 0.433 | 18 | 0.43
W22 | W17 | 21 | 1.53 | 4.22 | 28.89 | 0.561 | 29 | 0.56
W19 | W18 | 22 | 0.83 | 0.83 | 12.94 | 0.348 | 12 | 0.38
W20 | W18 | 23 | 0.30 | 0.30 | 7.97 | 0.256 | 7 | 0.26
W21 | W18 | 24 | 0.19 | 0.19 | 6.42 | 0.223 | 6 | 0.22
W18 | W17 | 25 | 0.22 | 1.54 | 17.46 | 0.419 | 18 | 0.49
W17 | W1 | 26 | 0.57 | 6.33 | 35.70 | 0.629 | 36 | 0.63
W1 | Sewage plant | – | – | 13.94 | – | – | – | –

exceed the permissible value of 70%. In this situation these pipes are to be reconstructed, which means that their new diameters d and slopes J are to be calculated in such a way that the pipes will again function correctly. To do this, the inequalities (16a)–(16b) and (19) have to be solved. The results of the calculation are shown in Table 3. From the analysis of the table data it results that the new canal diameters d and slopes J are larger for the pipes of the segments 21 and 26. For the new values of d and J the new fulfillment degrees H/d and flow velocities in the related pipes have been calculated; the present values of H/d for both pipes exceed 50% only slightly.

In the case no. 2 a new network branch has been added to the existing network. The new nodes added (X1, X2, X3, X4, X5) are marked with the interrupted lines in Fig. 9. The network structure, the number and the lengths of all pipes, the sewage flow values in the network nodes as well as the deepening values of the beginning pipes of the net are known. The task to be solved consists then in calculating such diameters and slopes of the new canals with which the network functioning will be correct. The given input data concerning the network and the calculation results received are shown in Table 4. One can see from the table that the values of d and J calculated for the new segments are the same, which results from the small values of the projected sewage inflows q; for such inflows the canal fulfillment degrees x do not exceed 20%. For the segments added, the slope values are placed between the minimal slope Js and the permissible slope Jg.
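The flow column of Table 1 can be checked directly with the flow balance equation (1); the upstream topology below is read off from the table's upper/lower node columns, and the recursion reproduces the tabulated segment flows, including the total of 13.94 dm^3/s reaching the sewage plant through W1.

```python
# pointwise inflows q [dm3/s] per segment (from Table 1) and the
# upstream-segment topology of the network shown in Fig. 9
q = {1: 0.56, 2: 0.31, 3: 0.27, 4: 0.36, 5: 1.13, 6: 0.64, 7: 0.64,
     8: 0.11, 9: 0.10, 10: 0.11, 11: 0.32, 12: 0.23, 13: 0.24, 14: 1.86,
     15: 0.73, 16: 0.56, 17: 0.40, 18: 0.81, 19: 0.83, 20: 0.09, 21: 1.53,
     22: 0.83, 23: 0.30, 24: 0.19, 25: 0.22, 26: 0.57}
up = {3: [1, 2], 6: [4, 5], 7: [3, 6], 9: [7, 8], 12: [10, 11],
      14: [12, 13], 15: [9, 14], 20: [18, 19], 21: [16, 17, 20],
      25: [22, 23, 24], 26: [21, 25]}

def segment_flow(s):
    """Flow balance (1): a segment carries its own inflow plus all upstream flows."""
    return q[s] + sum(segment_flow(u) for u in up.get(s, []))

plant_inflow = segment_flow(15) + segment_flow(26)   # total flow reaching W1
```

For example, segment 3 carries 0.56 + 0.31 + 0.27 = 1.14 dm^3/s, matching the table.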



Table 2. Calculation results for the network pipes with the inflows changed in the nodes stated

Upper node | Lower node | Segment | input flow in node q [dm3/s] | flow in segment Q [dm3/s] | H/d [%] | v [m/s] | H/d [%] MOSKAN | v [m/s] MOSKAN
W23 | W22 | 16 | 4.56 | 4.56 | 30.06 | 0.574 | 30 | 0.58
W27 | W22 | 17 | 4.40 | 4.40 | 29.51 | 0.568 | 30 | 0.57
W25 | W24 | 18 | 4.81 | 4.81 | 30.91 | 0.582 | 31 | 0.58
W26 | W24 | 19 | 3.53 | 3.53 | 26.37 | 0.533 | 26 | 0.53
W24 | W22 | 20 | 3.69 | 12.03 | 51.10 | 0.745 | 51 | 0.75
W22 | W17 | 21 | 1.53 | 22.52 | 79.47 | 0.841 | 79 | 0.84
W17 | W1 | 26 | 0.57 | 24.63 | 89.27 | 0.832 | 89 | 0.83
Table 3. Calculation results for the network pipes with the inflow values changed

Upper node | Lower node | Segment | flow in segment Q [dm3/s] | d [m] | J [%] | H/d [%] | v [m/s]
W22 | W17 | 21 | 22.52 | 0.25 | 0.5 | 52 | 0.609
W17 | W1 | 26 | 24.63 | 0.25 | 0.5 | 55 | 0.635
Table 4. Calculation results for the new network branch added

Upper node | Lower node | Segment | q [dm3/s] | Q [dm3/s] | d [m] | J [‰] | H/d [%] | v [m/s]
X5         | X4         | L1      | 1.86      | 1.86      | 0.25  | 4.98  | 15.4    | 0.449
X6         | X4         | L2      | 0.56      | 0.56      | 0.25  | 4.96  | 10.6    | 0.357
X4         | X2         | L3      | 0.64      | 3.06      | 0.25  | 4.99  | 19.8    | 0.523
X3         | X2         | L4      | 0.73      | 0.73      | 0.25  | 4.95  | 13.7    | 0.419
X2         | X1         | L5      | 1.86      | 5.65      | 0.25  | 4.99  | 25      | 0.6
The newly planned branch has been added to the existing network on the pipe connecting the node W2 with the node W1, the sewage receiver, via the new node X1. The wastewater flows into the node W1 from the node W2, an element of the existing network, and from the node X2, an element of the newly planned network. There is a risk, then, that the sewage inflow into the node X1 would be so high that the fulfillment degree H/d of the canal placed between X1 and W1 would exceed 70%. In such a situation the net segment lying between the nodes X1 and W1 would have to be reconstructed. From the calculation done it follows, however, that from the node W2 a sewage stream of 7.61 [dm3/s] flows in, while from the node X2 a sewage volume of 5.65 [dm3/s] runs off. From the mass balance it follows that a sewage stream of 13.26 [dm3/s] reaches the node X1. The following calculation shows that, for the calculated canal diameter and a canal decrease of J = 0.5%, the fulfillment degree H/d of the pipe connecting the nodes X1 and W1 amounts to 54.1% and the corresponding flow velocity to v = 0.6 [m/s]. In this situation there is no need to reconstruct the referred segment.
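The mass balance above can be sketched as a simple accumulation over the branch topology. The node names and inflow values are those of the example (the 7.61 dm3/s stream arriving from W2 is treated as a single upstream input); the helper function is an illustrative sketch, not the paper's implementation.

```python
def segment_flows(edges, inflows):
    """Accumulate steady-state flows along a tree-shaped network.
    edges: list of (upper, lower) pipes; inflows: q at each node [dm3/s].
    Returns the flow carried by each pipe, keyed by (upper, lower)."""
    children = {}                       # lower node -> list of upper nodes
    for up, lo in edges:
        children.setdefault(lo, []).append(up)

    def outflow(node):
        # flow leaving a node = its own inflow + everything arriving from upstream
        return inflows.get(node, 0.0) + sum(outflow(u) for u in children.get(node, []))

    return {(up, lo): outflow(up) for up, lo in edges}

# new branch of the example plus the stream arriving from node W2
edges = [("X5", "X4"), ("X6", "X4"), ("X4", "X2"), ("X3", "X2"),
         ("X2", "X1"), ("W2", "X1"), ("X1", "W1")]
inflows = {"X5": 1.86, "X6": 0.56, "X4": 0.64, "X3": 0.73,
           "X2": 1.86, "W2": 7.61}
flows = segment_flows(edges, inflows)
```

The accumulation reproduces the balance of the text: 5.65 dm3/s leaving X2 and 13.26 dm3/s entering the pipe X1–W1.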


Conclusions

In the paper a simple and practical approach to planning sewage networks is proposed that differs from the approaches commonly used in today's practice. The standard method of sewage network calculation consists in using nomograms, which make it possible to calculate quite mechanically the basic parameters of the designed nets, such as canal diameters and slopes, on the basis of the determined sewage inflow values. The modern planning approach consists in applying advanced computer programs, such as SWMM developed by EPA or MIKE URBAN developed by DHI, that use hydraulic network models in their calculations. This approach requires advanced computer knowledge from the network planners. An important obstacle in using this software is the need for a calibrated hydraulic model of the network under investigation. To calibrate the model, a GIS system to generate the hydraulic graph of the network and a monitoring system to collect the measurement data have to be installed in the waterworks, which generates substantial costs. The approach to sewage network planning presented in this paper is an intermediate solution between the standard and modern approaches, trying to keep their advantages and to eliminate their drawbacks. It uses the analytical relations describing the hydraulics and geometry of sewage networks and transforms them into nonlinear equations from which the demanded canal fillings and sewage velocities, or canal diameters and slopes, can be calculated directly. The analysis of the equations formulated makes it possible to determine the maximal admissible sewage inflows entering the network nodes. The calculation can be done quickly and exactly, avoiding the use of a network hydraulic model. The computational example presented for steady-state modelling and planning of sewage networks is rather simple, but the algorithms proposed can be used without problems for modelling and designing more complex municipal sewage systems.
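The determination of maximal admissible node inflows mentioned above reduces to a one-dimensional root search: given any monotone mapping from a node inflow to the resulting filling of the critical downstream canal, the largest inflow with H/d at the limit can be bracketed by bisection. The function names, the 70% limit and the toy model are illustrative assumptions.

```python
def max_admissible_inflow(filling_of_inflow, hd_limit=0.7, q_hi=1.0, tol=1e-6):
    """Largest inflow q in (0, q_hi) with filling_of_inflow(q) <= hd_limit,
    assuming the filling grows monotonically with the inflow."""
    lo, hi = 0.0, q_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if filling_of_inflow(mid) <= hd_limit:
            lo = mid                    # still admissible: move the lower bound up
        else:
            hi = mid                    # limit exceeded: move the upper bound down
    return lo
```

For a linear toy model hd(q) = 2q, for instance, the admissible inflow converges to 0.35; in practice filling_of_inflow would evaluate the paper's hydraulic relations for the downstream canal.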


AUTHORS

Systems Research Institute of the Polish Academy of Sciences, Newelska 6, 01-447 Warsaw, Poland. E-mail: studzins@ibspan.waw.pl.

Systems Research Institute of the Polish Academy of Sciences, Newelska 6, 01-447 Warsaw, Poland. E-mail: petricz@ibspan.waw.pl. *Corresponding author.

REFERENCES

[1] Biedugnis S., Metody informatyczne w wodociągach i kanalizacji, Oficyna Wydawnicza Politechniki Warszawskiej, Warszawa 1998. (in Polish)
[2] Kanalizacja. Tom I, Arkady, Warszawa 1983. (in Polish)
[3] "Hydraulical modeling and computer aided planning of communal sewage networks", Journal of Automation, Mobile Robotics & Intelligent Systems, vol. 8, no. 2, 2014. DOI: 10.14313/JAMRIS_2-2014/14.
[4] "Modelling the steady state of sewage networks as a support tool for their planning and analysis", vol. 25, no. 3, 2015.
[5] Chudzicki J., Sosnowski S., Instalacje kanalizacyjne, Wydawnictwo Seidel-Przywecki, Warszawa 2004. (in Polish)
[6] Imhoff K., Imhoff K. R., Kanalizacja miast i oczyszczanie ścieków. Poradnik (Urban sewage systems and sewage treatment. A guidebook), Wyd. Projprzem-EKO, 1996. (in Polish)
[7] „Nowa …", Warszawa 1981. (in Polish)
[8] Mizgalewicz P., Knapik P., Wieczysty A., "Analiza pracy sieci kanalizacyjnych przy zastosowaniu EMC", no. 434/3-4, 1984, 20–21. (in Polish)
[9] "… kanalizacji deszczowej", no. 434/3-4, 1984, 20–21. (in Polish)
[10] "… static modelling of sewage networks based on the hydraulic formulas", … 2015 conference.
[11] "…", no. 3-4, 1984. (in Polish)
[12] Rossman L., Storm Water Management Model (SWMM). User's Manual, Version 5.0.022, EPA, 2012. www.epa.gov/nrmrl/wswrd/wq/models/swmm/.
[13] Saegrov S., CARE-S: Computer Aided Rehabilitation of Sewer and Storm Water Networks, IWA Publishing, London 2005.
[14] "… system for sewage design, management and revitalization (MOSKAN)". In: Simulation in Umwelt- und Geowissenschaften (J. Wittmann, M. Mueller, eds.), 2013, 203–210.
[15] "…". In: Modellierung und Simulation von Ökosystemen, Reihe: Umweltinformatik (Nguyen Xuan Thinh, ed.), Shaker Verlag, 2013. (in German)
[16] "Rechnerunterstützte Planung von kommunalen Abwassernetzen mittels des hydraulischen Modells …". In: Modellierung und Simulation von Ökosystemen (Nguyen Xuan Thinh, ed.), Workshop … 2012, Shaker Verlag, 2013, 123–133. (in German)
[17] "…".
[18] Wartalski A., Wartalski J., "Projektowanie hydrauliczne …", no. 1/76, 2000, 19–24. (in Polish)
[19] http://www.epa.gov/nrmrl/wswrd/wq/models/swmm/.
[20] www.mikebydhi.com/Products/Cities/MIKEURBAN.aspx.

