
Imperial College London

Smart Guide Dog
Jiayun Ding, Wanyun Yang, Daniil Tarakanov, Yuan Zhai, Mohammad Butt, Yunqi Liu



Contents
Introduction
1. Motivation & user requirements
2. High level design
3. GPS Module Design
   3.1 Voice input
   3.2 Determining the current location
   3.3 Calculating the route
4. Motor Control Design
   4.1 Controlling the motor
   4.2 Choosing a DC motor
   4.3 Choosing a power supply
5. Obstacle Avoidance Module Design
6. Testing for the Prototype
   6.1 Test 1
   6.2 Test 2
7. Present development and future trend of blind mobility aid devices
   7.1 Present development of blind mobility aid devices
   7.2 The trend of blind mobility aid devices
   7.3 The multi-functionality of blind mobility aid devices
   7.4 The image processing ability of blind mobility aid devices
Conclusion
References

Introduction
Guide dogs have been used successfully for many years to aid navigation and movement for the blind and visually impaired. The concept of the 'Smart Guide Dog' (SGD) is modelled on that of an actual guide dog: in the same way as a guide dog becomes the eyes of its master, the SGD navigates for its visually impaired user with the help of modern technology. The SGD achieves this using the Global Positioning System (GPS) and obstacle detection techniques. The GPS allows the SGD not only to tell the user where they are via an earphone but also to guide them to their desired location using built-in maps and preset destinations, while the obstacle detection techniques enable the user to be guided safely. In much the same way as a blind person is led by holding on to a guide dog's harness, the SGD comes with handles for the user to hold, whose angle and length can be adjusted to the user's requirements. The SGD then navigates the user with the help of motors and wheels, which adjust speed and direction according to the needs of

Figure 1: the 3D design of smart guide dog


the user or situation. The 3D sketch of the SGD in Figure 1 demonstrates what a prototype might look like. The SGD comprises three main modules: the GPS module, the Obstacle detection module and the Control module. The GPS module provides the location of the user and works out the most suitable route to the desired destination. The Obstacle detection module detects any obstacles encountered by the SGD and relays the information to the Control module. The Control module is responsible for steering the SGD based on the information received from the GPS and Obstacle detection modules. More detailed explanations of the workings of these modules, as well as of other aspects of the SGD, are provided in the design part of this report.

1. Motivation & user requirements
While guide dogs provide excellent service for their masters, their utility is constrained by the dog's ability to 'think' and take decisions. Our motivation for the SGD project stems from this limitation. If the guide dog's ability to steer its handler around obstacles is coupled with the ability to make the user aware of their location, such a machine would go a long way towards improving movement and navigation for blind people, making their lives a lot easier. Several features make the SGD more suitable to users' requirements than a guide dog or any other navigation alternative:
• Lightweight: The SGD, complete with its handles, wheels, panels, internal circuitry etc., is still very light and therefore easy to use.
• User voice input: To make the SGD even more user-friendly, it comes with a voice-recognition user interface. The user can issue voice commands to operate the SGD, removing the need for a touch-based interface, which can be a problem for visually impaired people to operate.
• Cheap: Despite using the latest navigation technology, the SGD comes at a very affordable price relative to its alternatives. The cost of breeding and training a guide dog is significantly larger than the price of the SGD: the total lifetime cost of a guide dog is about 48 thousand pounds.
• Auto navigation: While most GPS navigation devices merely mark out the route to the desired destination, the SGD, using its motor function, actually takes the user there.
• Not an actual dog: Users face fewer constraints, such as allergies to dogs, and a guide dog has to be carefully selected to match each user's lifestyle, travel and physical needs [1].
The SGD and its widespread use come with the promise of far-reaching social benefits.
With its increased utility and efficiency, an obvious benefit goes directly to the people who use it: being easy to use and providing greater functionality, the SGD improves the lives of the blind and visually impaired. Another social benefit concerns the dogs that the SGD substitutes as a navigation tool, saving the animals from intensive and strenuous breeding and training routines. With the number of guide dogs limited by breed, the strict training standard and the number of donated dogs, there is usually a waiting list for people who apply for one. For this reason totally blind people are given priority for a guide dog, while partially sighted people have to wait. In most countries, applicants also have


to be over 18 years old. The SGD, however, offers a solution in which everyone has access to navigation tools, improving both their quality and their availability.

2. High level design

Figure 2: the control diagram

The control of the entire system is managed by the ATmega328 microcontroller, which can be programmed in C using the Arduino Uno R3. A push button wakes the whole system, putting the SGD on standby for the GPS module (determining the current location and calculating the route) and for the turning handle of the motor control module. The infrared sensors of the obstacle detection module are simultaneously ready to provide output values. Input is taken in the form of speech recognition. Navigation output is provided as voice output and is simultaneously fed to the locomotive part of the robot to control the direction of movement. A motor with a gearbox drives the two back wheels, providing the power for the SGD to run. When the handle is turned, the SGD starts to follow the track set by the GPS module, with the speed adjusted by the turning angle of the handle. Three Sharp IR sensors, one at the front and two at the sides, continuously gather obstacle information from the surrounding environment. When an obstacle comes within the set boundary in front of the SGD, it compares the outputs of the side sensors to determine a safer route, or otherwise reverses to find another route or simply stops. This information is then provided to a servo motor responsible for rotating the front wheel, so the SGD's direction can be adjusted onto an alternative route. The servo motor can also be instructed to turn through an angle set by the GPS module. Furthermore, we design the SGD with a triangular chassis and a wheel at each corner, so that it remains stable even on uneven ground and can adjust its running direction. The wheels are chosen to be anti-slip and of sufficiently large diameter to cope with unsteady ground and bad weather. Controlling the direction and the driving circuit separately reduces the errors that occur while running.
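The branching just described (front sensor within the boundary, compare the sides, otherwise reverse or stop) can be sketched as a small decision function. This is an illustrative sketch, not the actual firmware; the function name, thresholds and return codes are our own:

```cpp
// Possible steering decisions for the servo-driven front wheel.
enum Steer { GO_STRAIGHT, TURN_LEFT, TURN_RIGHT, REVERSE };

// Decide a direction from three Sharp IR readings (a higher reading
// means a nearer obstacle). 'threshold' marks the "obstacle within
// the set boundary" condition for the front sensor; 'sideLimit' is
// the reading above which a side counts as blocked. Illustrative values.
Steer decideSteer(int front, int left, int right,
                  int threshold, int sideLimit) {
    if (front < threshold)
        return GO_STRAIGHT;              // nothing close ahead
    // Front blocked: compare the side sensors and pick the clearer
    // (lower-reading) side, as described in the text.
    if (left < sideLimit || right < sideLimit)
        return (left < right) ? TURN_LEFT : TURN_RIGHT;
    return REVERSE;                      // both sides blocked: back up
}
```

In the real system the three readings would come from analogRead on the sensor pins and the result would drive the front-wheel servo.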

Figure3: the circuit diagram


3. GPS Module Design
3.1 Voice input
Voice input is sensed using the EasyVR Arduino Shield or the uSpeech Arduino library. uSpeech is open-source software, so it is cheaper to implement than the EasyVR shield; however, it is very processor-intensive on the ATmega microcontroller, which can then do little else while listening for input. Setting up the shield for use with the Arduino requires attaching it to our module and downloading the EasyVR libraries. On first use it requires individual user training, as it only responds to a pre-recorded set of commands. This is easily achieved with the supplied EasyVR Commander, as shown in Figure 4. We pre-record user-defined sounds for control commands such as "Guide", "To", "Home", etc.
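Once the recogniser reports which trained word was heard, the firmware still has to map it to a destination. The sketch below shows one way this lookup might work; the command names and coordinates are purely illustrative placeholders, not values from our design:

```cpp
#include <map>
#include <string>
#include <utility>

// Preset destinations keyed by the trained voice command.
// The coordinates here are placeholders, not real presets.
std::map<std::string, std::pair<double, double>> presets = {
    {"Home", {51.4988, -0.1749}},
    {"Shop", {51.5000, -0.1700}},
};

// Look up the destination named in a recognised command; returns
// (0, 0) when the word is not a trained destination.
std::pair<double, double> destinationFor(const std::string& word) {
    auto it = presets.find(word);
    return (it == presets.end()) ? std::make_pair(0.0, 0.0) : it->second;
}
```

In the actual design the recognised word would come from the EasyVR shield rather than a string literal.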

Figure 4: EasyVR Commander GUI

3.2 Determining the current location
To read the current coordinates we use Parallax's PMB-688 GPS with SiRF external antenna, connected in the circuit as shown in Figure 3. We then use the TinyGPS library example for Arduino to capture the data from the GPS module via the Arduino serial port, so that it can be displayed on a computer. The port speed has to be set to 4800 baud [2]. The computer terminal outputs a set of NMEA sentences, namely $GPGGA: Global Positioning System fix data; $GPGSV: GPS satellites in view; $GPGSA: GPS DOP and active satellites; $GPRMC: recommended minimum specific GPS/Transit data [3]. An example is shown in Figure 5.
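For later route calculations, the NMEA latitude/longitude fields, which arrive as degrees-and-minutes values (ddmm.mmmm), must be converted to decimal degrees. TinyGPS performs this conversion internally; the following is a standalone sketch of the arithmetic (function name is ours):

```cpp
#include <cmath>

// Convert an NMEA ddmm.mmmm coordinate field to decimal degrees.
// 'hemisphere' is 'N'/'E' (positive) or 'S'/'W' (negative).
double nmeaToDegrees(double ddmm, char hemisphere) {
    double degrees = std::floor(ddmm / 100.0);   // dd part
    double minutes = ddmm - degrees * 100.0;     // mm.mmmm part
    double result = degrees + minutes / 60.0;
    return (hemisphere == 'S' || hemisphere == 'W') ? -result : result;
}
```

For example, the NMEA field 4916.45,N reads as 49 degrees 16.45 minutes north, i.e. about 49.274 decimal degrees.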

Figure 5: an example of NMEA sentences [2]

3.3 Calculating the route
The route calculation is achieved by attaching a cellular module, the SIM900 quad-band GPRS module, to the Arduino. This module can transmit the coordinates from the GPS module to a TCP server via GPRS, as shown in Figure 6.

Figure 6: GPS tracking network [4]


For the next step, we set up the TCP server to receive the incoming data from the SGD and store the GPS data in a MySQL database. This is done by writing a simple Python application [5]. We manage the MySQL data using Plone, an open-source content management system. This allows us to use the Google Maps API to route the path and extract the datapoints to an Excel document, which can be written back into the MySQL database and transmitted back to the SGD. The Arduino MCU implements internal logic to compare the current coordinates with the destination coordinates and uses the digital compass within the GPS shield to navigate to the next datapoint on each leg of the journey. The navigational decision is sent to the control module of the SGD, which moves the dog in the desired direction. The use of a GSM module on our SGD enables us to enhance GPS performance in environments hostile to the propagation of satellite signals, such as urban areas and building interiors [6]. Voice guidance can be implemented using a speech synthesizer, the SpeakJet [7], as shown in Figure 7.

Figure 7: SpeakJet typical connection [8]

4. Motor Control Design
4.1 Controlling the motor
A quad half-H-bridge motor driver (SN754410) is used to control the motor, as it is capable of driving the 24 V motor and of applying a voltage across the motor in either direction [9]. By connecting the two leads of the motor to two output terminals of the driver chip, and the two corresponding input terminals to digital output pins of the microcontroller, the motor can be made to rotate clockwise with digitalWrite(motorpin1, HIGH) and digitalWrite(motorpin2, LOW), and anticlockwise by reversing the two levels (where "motorpin1" and "motorpin2" correspond to pins 2 and 3 in the circuit diagram in Figure 3). The "Enable" pin of the motor driver provides an easy way to change the motor speed using pulse-width modulation. The voltage at the "Enable" pin ranges from 0 to 5 V, the supply voltage at Vcc, and corresponds in the code to a duty-cycle value from 0 to 255. This value is determined by the user input on the turning handle. A variable resistor with a maximum value of 5 kΩ is fitted inside the handle; turning the handle changes its resistance and presents a DC voltage of 0 to 5 V at an analogue input of the microcontroller through the wiper of the variable resistor. Choosing a maximum of 5 V protects the chip from excessive input voltage, and a 510 Ω resistor at the chip's input prevents the input current from exceeding the chip's limit. The handle reading is then fed to the "Enable" pin with speedvalue = analogRead(A5) and analogWrite(6, speedvalue), as shown in the circuit diagram (Figure 3); pin 6 is chosen because it supports PWM.
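One detail worth making explicit: analogRead returns a 10-bit value (0 to 1023) while analogWrite accepts 0 to 255, so the handle reading has to be scaled down before being written to the "Enable" pin. A minimal sketch of that scaling (the function name is ours; Arduino's map() achieves the same):

```cpp
// Rescale a 10-bit ADC reading of the handle potentiometer
// (0-1023, i.e. 0-5 V) to the 8-bit duty-cycle value that
// analogWrite() expects on the Enable pin.
int handleToPwm(int adcReading) {
    if (adcReading < 0) adcReading = 0;        // clamp out-of-range input
    if (adcReading > 1023) adcReading = 1023;
    return adcReading * 255 / 1023;            // integer scaling, 0..255
}
```

In the sketch for the SGD this would sit between analogRead(A5) and analogWrite(6, ...).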
The motor driver uses a separate power supply for the DC motor, which prevents the motor's large current draw from pulling down the supply to other parts of the circuit and stopping them from working properly. A capacitor of 1 μF is connected across the V+ motor supply and ground to suppress voltage spikes caused by the back EMF of the motor.

4.2 Choosing a DC motor
To choose a suitable motor, we calculate the required motor power using the following equations:
1. Pλ = τω, where P is the motor power, λ is the motor efficiency, τ is the motor torque and ω is the motor's angular speed.
2. F = ηkτ/r, where F is the forward force produced by the mechanical system, η is the system efficiency, k is the total gear ratio and r is the wheel radius.
3. f = μN, where f is the friction force produced between the wheels and the ground, μ is the coefficient of friction (a dimensionless scalar describing the ratio of the friction force between two bodies to the force pressing them together) and N is the weight of the SGD times gravity.
4. ω = 2πkV/L, where V is the speed of the SGD and L is the circumference of the wheel.

Assume the SGD weighs about 10 kg and that uneven ground has a coefficient of friction of 0.35. Using Equation 3, f ≈ 35 N. F in Equation 2 must be greater than f for the SGD to move, so we choose F = 45 N. In Equation 1, λ is about 95%; in Equation 2, η is about 90% and r = 10 cm. We want the SGD to move at a maximum speed of 5 km/h = 1.39 m/s. In Equation 4, the wheel circumference is 0.1 × 2π ≈ 0.63 m, so ω = (1.39/0.63) × 2πk ≈ 4.4πk rad/s. Combining Equations 1, 2 and 4 gives P = 2πFVr/(ληL). Since L = 2πr, this simplifies to P = FV/(λη) = 45 × 1.39/(0.95 × 0.90) ≈ 73 W. A motor with this power output weighs about 1.5 kg.

4.3 Choosing a power supply
We use an 80 W motor with an input of 12 V at 6.7 A, and therefore choose a 12 V, 40 Ah lithium-ion rechargeable battery: a 40 Ah capacity can supply 6.7 A for 40/6.7 ≈ 6 hours. A battery of this capacity weighs about 4 kg. We choose a lithium-ion battery because lithium's high electrochemical potential gives a large energy density and low self-discharge; it can sustain a high output current, and no periodic full discharge is needed. To charge the battery, simply connect a charger and wait approximately 8 hours for a full charge.
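As a check, the power figure and the battery endurance above can be reproduced directly from the equations (the function names are ours; the values are those given in the text):

```cpp
const double kPi = 3.14159265358979323846;

// Equations 1, 2 and 4 combined: P = tau*omega/lambda with
// tau = F*r/(eta*k) and omega = 2*pi*k*V/L, so the gear ratio k
// cancels and P = 2*pi*F*V*r / (lambda*eta*L).
double motorPowerW(double F, double V, double r,
                   double lambda, double eta, double L) {
    return 2.0 * kPi * F * V * r / (lambda * eta * L);
}

// Battery endurance: capacity in ampere-hours over the drawn current.
double batteryHours(double capacityAh, double currentA) {
    return capacityAh / currentA;
}
```

With F = 45 N, V = 1.39 m/s, r = 0.1 m, λ = 0.95, η = 0.9 and L = 0.63 m this gives roughly 73 W, matching the choice of an 80 W motor; 40 Ah at 6.7 A gives just under 6 hours of running time.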

5. Obstacle Avoidance Module Design
Compared with the ultrasonic distance sensor, which gives large errors at small distances, and the lidar sensor, which is too expensive, the Sharp IR sensor (GP2D12) provides a suitable detection range, up to 80 cm, at a reasonable price of £8.95. A narrow infrared beam is transmitted from the sensor and reflected back to the sensor's receiver end. Knowing the emitting angle of the beam and the tiny displacement of the reflected spot, the spacing can be determined by triangulation as h ≈ S/tan θ, where h = distance from the obstacle, d = displacement on the receiver (from which θ is obtained), S = distance
Figure 8: distance calculation schematic


between transmitter and receiver, θ = emitting angle as shown in Figure 8. The output of the sensor is a DC voltage between 0 and 2.4 V, which decreases as the distance from the obstacle increases. The voltage is passed to the microcontroller for further processing. The threshold voltage for obstacle detection is set according to the speed of the motor: the higher the speed, the less time the SGD has to approach the obstacle, so to avoid collision the threshold voltage is set lower when the speed is high. The output values of both side sensors must also meet the condition before the direction is turned to either side. The servo motor then turns through an angle of 30 degrees, set in the code using the Arduino <Servo.h> library. The input terminal of the servo motor is connected to a digital output pin of the microcontroller; the code to change the angle is myservo.attach(7) and myservo.write(anglevalue) (where 7 is the output pin number for the servo motor). If an obstacle is still present after turning, the code loops back and turns through 30 degrees again.
Some limitations: The IR sensing beam is so narrow that some very thin obstacles, such as chair legs, might be missed. Ultrasonic sensors can be added alongside the IR sensors, as they provide very wide beams and are also cheap [10]. For outdoor use in bright sunlight, the IR sensor's output DC voltage shifts higher. According to the datasheet, when the light intensity on the surface of a reflective obstacle exceeds 12,000 lx, the voltage can shift by up to 0.2 V, so a farther object might be sensed as nearer. A phototransistor may be used to detect the strength of the sunlight and adjust the threshold voltage accordingly.
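The speed-dependent threshold can be illustrated as a simple linear interpolation between two voltages. The 1.8 V and 0.6 V endpoints below are assumptions for illustration, not measured values from our tests:

```cpp
// Map the current PWM speed (0-255) to an obstacle-detection
// threshold voltage. A higher sensor voltage means a nearer
// obstacle, so at low speed the threshold is high (react late)
// and at high speed it is low (react while still far away).
double thresholdVolts(int pwmSpeed) {
    if (pwmSpeed < 0) pwmSpeed = 0;       // clamp to the PWM range
    if (pwmSpeed > 255) pwmSpeed = 255;
    return 1.8 - (1.8 - 0.6) * pwmSpeed / 255.0;
}
```

In practice the two endpoint voltages would be calibrated against the sensor's distance curve and the measured stopping distance.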

Figure 9: The cost of selected components (excluding resistors and capacitors)

6. Testing for the Prototype
We have carried out some testing on the infrared sensor to find out how it responds to obstacles and to see how we can improve the system.

6.1 Test 1
This test assesses the stability of the sensor in a steady environment. We used the Arduino serial monitor to read the output values of the sensor. The output voltage ranges between 0 and 2.4 V, and the analogue-to-digital conversion in the microcontroller maps 5 V onto 1024 counts. To make changes in the output voltage show more sharply, we rescale the 0-2.4 V range onto 0-5 V in the code. The test results are plotted in Figure 10.
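The two scalings used in Test 1, counts to volts and the display stretch, can be written out explicitly (a sketch; the function names are ours):

```cpp
// The 10-bit ADC maps 0-5 V onto 1024 counts.
double countsToVolts(int counts) {
    return counts * 5.0 / 1024.0;
}

// Stretch the sensor's 0-2.4 V span onto 0-5 V so that changes
// show more sharply on the serial monitor, as done in Test 1.
double stretchForDisplay(double sensorVolts) {
    return sensorVolts * 5.0 / 2.4;
}
```

For example, a mid-scale reading of 512 counts corresponds to 2.5 V before stretching.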

Figure 10: the output values of the sensor in a steady environment

Figure 11: the output values of the sensor seen on the oscilloscope

From the results we noticed that the voltage ripples are very significant, so we used an oscilloscope to analyse the output signal of the sensor; Figure 11 shows the results. An increase in voltage implies that an obstacle has been detected. By zooming into the graph we noticed that the output voltage is quantised: the sensor samples about every 0.04 seconds, which is a sufficiently quick response time for implementing obstacle avoidance. The sensor output signal is also very noisy, so we decided to clean it with a low-pass filter. We therefore used RC coupling and picked some values for testing. A 51 kΩ resistor and a 332 nF capacitor proved to be a good choice, providing a significant decrease in noise, as Figure 12 shows.
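The chosen RC values set the corner frequency of the first-order low-pass filter, f_c = 1/(2πRC); for 51 kΩ and 332 nF this is about 9.4 Hz, well below the sensor's roughly 25 Hz update rate, which is consistent with the visible attenuation of the noise. A quick numerical check (function name is ours):

```cpp
const double kPi = 3.14159265358979323846;

// Corner (cutoff) frequency of a first-order RC low-pass filter.
double cutoffHz(double ohms, double farads) {
    return 1.0 / (2.0 * kPi * ohms * farads);
}
```

Picking a larger capacitor would lower the cutoff further but would also slow the sensor's response to real obstacles.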

Figure 12: the output values of the sensor seen on the oscilloscope after RC coupling has been added

The results show that the slew rate of the sensor output is 18.7 V/s. An obstacle in close proximity usually produces a voltage change of about 1 V, and a response delay of approximately 0.05 s is regarded as safe.

6.2 Test 2
This test investigates how the sensor voltage changes as an obstacle moves further away. From Figure 13 we can see that the largest voltage occurs when the obstacle is 5 cm from the sensor, so a very close obstacle may be mistaken for one further away. We therefore mount the sensors 5 cm behind the front and side margins of the SGD, which turns the sensor's voltage-against-distance curve into a monotonically decreasing function. Based on the speed of the motor, if 20 cm is the safety boundary, we choose the voltage at 25 cm in Figure 13 as the detection threshold in the code.

Figure 13: the output values of the sensor against the distance from the obstacle


7. Present development and future trend of blind mobility aid devices
7.1 Present development of blind mobility aid devices
Currently, blind mobility aid devices can be divided into four categories: ultrasonic-wave-processing devices, mobile robots, wearable guide devices and smart blind sticks [11]. However, their downsides prevent them from being widely used. Ultrasonic guide devices move slowly and only detect obstacles in a narrow range; mobile robots have over-complicated structures and restricted mobility in typical landscapes, for instance on slopes, and find it hard to climb stairs; wearable guide devices are heavy to carry around and lack a sense of security, as the user has nothing to hold; smart blind sticks are expensive and, being oversized, inconvenient to carry around. In general, these aid devices are cost-ineffective and not highly mobile or practical; yet sight-impaired people have low mobility and would like to purchase such a device at a reasonable price. As a result, these products are not ready to be deployed in blind people's daily lives, and a cheap, accurate and effective detection device is in demand as a future development model in the blind aid device market.
7.2 The trend of blind mobility aid devices
A survey reported by Access Economics in 2009 stated that almost two million people in the UK are living with sight loss, and by 2050 this number is predicted to double to four million [12]. Most blind guidance today is still provided by guide dogs. However, in such a technology-intensive era, traditional guide tools no longer satisfy people's needs: blind people require advanced technology to assist and support their movement.
7.3 The multi-functionality of blind mobility aid devices
Since blind people easily get lost, aid devices need to include GPS navigation and a GPRS function to give the user their present location. Beyond this, blind mobility aid devices need to interact with their owners through voice communication; in our proposed design, for example, a microphone and earphones are used, since visual human-computer interaction is not applicable to people with sight loss. Implementing voice communication makes the device more user-friendly, smart and independent. For example, an existing product called Trekker Breeze is a handheld GPS device that uses voice input/output to guide visually impaired people to their destination; however, its price of 325 GBP is too high for many to afford.
7.4 The image processing ability of blind mobility aid devices
Image processing is a new field in the research and development of blind mobility devices. The device processes images received from detectors or cameras using diverse processing techniques. Image processing allows a blind mobility device to identify information about the user's surroundings and guide them effectively and safely. Compared with ultrasonic sensors and radar, visual detectors or cameras are more accurate and reliable, and their data are easier to combine with other sensors' information [13]. Applying image processing to the development of blind mobility aid devices increases their accuracy and minimizes delays, benefiting sight-impaired people's lives. The


image processing technology could be a subject for further investigation and improvement in the future. At the current stage we focus on implementing multiple sensors at different angles to improve the accuracy and efficiency of surrounding detection, since cameras are expensive to install and the application of image processing is relatively complicated. After comparing the alternative choices, multi-sensor detection is the one that is both cheap and efficient for us to implement.

Conclusion
The Smart Guide Dog, aiming to be a user-friendly product that makes blind people's lives easier, is endowed with GPS outdoor navigation, automatic obstacle avoidance, voice-recognition input and sound-notification output. We believe that the human-robot interaction ability of the SGD enables the device to satisfy users' needs better and to work more efficiently. This kind of device can certainly contribute to the future of blind aid devices once we have carried out further research and more detailed experiments.

References

1. Guide Dog Queensland
2. Retrieving GPS data, accessed on 01/03/13
3. Arduino GPS tutorial, accessed on 01/03/12
4. GPS tracker project, accessed on 01/03/13
5. GPS with GSM tracker tutorial, accessed on 01/03/13
6. Ramjee Prasad and Marina Ruggieri (2005). Applied Satellite Navigation Using GPS, GALILEO and Augmentation Systems. USA: Artech House. p. 233.
7. SpeakJet features, accessed on 01/03/13
8. SpeakJet datasheet, accessed on 02/03/13
9. Al Williams (2002). Microcontroller Projects Using the Basic Stamp (2nd ed.). Focal Press. p. 344.
10. Society of Robots
11. accessed on 15/02/2013
12. und-sight-loss/, accessed on 17/02/2013
13. Ge Juanhua, Obstacle detection based on image processing [D], Southwest University of Science and Technology, 2009.



Smart Guide Dog Report-imperial college EEE group report