Tim Su (timsu) <timsu@stanford.edu>
Ebrahim Parvand (eparvand) <eparvand@stanford.edu>
William Choi (wchoi25) <wchoi25@stanford.edu>

CS 221 PA4 Write-up

1. Explanation of the Model

Our model combines four distributions to achieve better performance. Here are the four distributions:

1) A normal distribution centered at the given true distance. This models the local noise around the true distance. Unlike the laser sensor, we restrict our distribution to the domain [0, 30] and renormalize so that the area under the curve over this domain adds up to 1.

2) An exponential distribution λe^(-λx) with λ = 0.05, which has mean 1/λ = 20. We restrict the domain to [0, 30] and renormalize, as with the normal distribution. This distribution models unexpected obstacles that give short readings; it therefore assigns more probability mass closer to the sensor.

3) A distribution that is zero everywhere except at dmax = 30, where it has probability 1. This models laser failures caused by unusual surfaces and the like, since in those cases the sensor simply reports dmax.

4) A uniform distribution over [0, 30], which models random variations that are equally likely everywhere.

Each of these four distributions integrates to 1 over the interval [0, 30]. To combine them, we take a weighted sum in which the weights also add up to 1. This ensures that the resulting distribution integrates to 1 as well, so we have a valid probability model.

2. Parameters and Weights

We measured our average error across various values of the four distribution weights, as well as the lambda value for the exponential distribution.
The final weights we used to combine the distributions were:

0.97*(normal distribution) + 0.02*(exponential distribution) + 0.005*(dmax distribution) + 0.005*(uniform distribution)

This makes sense: the normal local noise is always present in the sensor readings, whereas the events modeled by the other three distributions, such as sensor failure, are much less likely. Of those three, the events modeled by the exponential distribution (unexpected obstacles) are much more common than the other two, so the exponential distribution receives the larger weight.
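As a sketch, the combined model can be written as a discretized mixture over [0, 30]. The weights and lambda below are the final values from this write-up; the noise level sigma = 2.0 and the bin count N_BINS = 300 are illustrative assumptions, not values from the assignment.

```python
import math

D_MAX = 30.0   # maximum laser reading
N_BINS = 300   # bins discretizing [0, D_MAX]; assumed, not from the assignment


def _normalize(p):
    """Rescale a list of non-negative values so it sums to 1."""
    s = sum(p)
    return [v / s for v in p]


def sensor_model(true_dist, sigma=2.0, lam=0.05,
                 weights=(0.97, 0.02, 0.005, 0.005)):
    """Discretized mixture P(reading | true_dist) over N_BINS bins on [0, D_MAX].

    weights = (normal, exponential, dmax, uniform). Each component is
    renormalized over the domain and the weights sum to 1, so the mixture
    is itself a valid distribution.
    """
    xs = [(i + 0.5) * D_MAX / N_BINS for i in range(N_BINS)]  # bin centers
    normal = _normalize([math.exp(-0.5 * ((x - true_dist) / sigma) ** 2)
                         for x in xs])
    expo = _normalize([math.exp(-lam * x) for x in xs])  # mean 1/lam = 20
    dmax = [0.0] * (N_BINS - 1) + [1.0]  # point mass in the last bin, at D_MAX
    uniform = [1.0 / N_BINS] * N_BINS
    w_n, w_e, w_d, w_u = weights
    return [w_n * n + w_e * e + w_d * d + w_u * u
            for n, e, d, u in zip(normal, expo, dmax, uniform)]
```

Because each component is normalized before mixing, the weights can be tuned freely (as in the table below) without breaking the requirement that the model sum to 1.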

The following table presents some of the runs with various parameters that we compared. Average error is reported for each setting.

w1 (uniform)   w2 (dmax)   w3 (exponential)   w4 (normal)   lambda   avg error
0.1            0.1         0.1                0.7           0.05     1.574
0.1            0.1         0.3                0.5           0.05     1.679
0.1            0.1         0.5                0.3           0.05     1.91
0.1            0.1         0.7                0.1           0.05     2.486
0.1            0.3         0.1                0.5           0.05     1.623
0.1            0.3         0.3                0.3           0.05     1.811
0.1            0.3         0.5                0.1           0.05     2.388
0.1            0.5         0.1                0.3           0.05     1.698
0.1            0.5         0.3                0.1           0.05     2.229
0.1            0.7         0.1                0.1           0.05     1.946
0.3            0.1         0.1                0.5           0.05     1.696
0.3            0.1         0.3                0.3           0.05     1.919
0.05           0.05        0.1                0.8           0.05     1.533
0.1            0.05        0.05               0.8           0.05     1.538
0.025          0.025       0.05               0.9           0.05     1.498
0.01           0.01        0.08               0.9           0.05     1.493
0.005          0.005       0.09               0.9           0.05     1.491
0.005          0.005       0.04               0.95          0.05     1.484
0.005          0.005       0.02               0.97          0.05     1.484
0.005          0.005       0.01               0.98          0.05     1.488
0.005          0.005       0.02               0.97          0.1      1.492
0.005          0.005       0.02               0.97          0.07     1.485
0.005          0.005       0.02               0.97          0.03     1.484

3. Graphs

Here are the plots for our model, given t (true distance) = 5, 15, 25, 35. Each plot looks like a normal distribution centered around t, since the weight on the normal distribution is very heavy relative to the other distributions. (For t = 35, which exceeds dmax = 30, only the left tail of the normal falls inside the domain, so the density increases toward dmax.)