16.322 Stochastic Estimation and Control, Fall 2004
Prof. Vander Velde

Lecture 6

Example: Sum of two independent random variables

$$Z = X + Y$$

$$\begin{aligned}
\int_a^b f_z(z)\,dz &= P(a < Z \le b) \\
&= P(a < X + Y \le b) \\
&= P(a - X < Y \le b - X) \\
&= \lim_{dx \to 0} \sum_x P(x < X \le x + dx)\,P(a - x < Y \le b - x) \\
&= \lim_{dx \to 0} \sum_x f_x(x)\,dx \int_{a-x}^{b-x} f_y(y)\,dy \\
&= \int_{-\infty}^{\infty} f_x(x)\,dx \int_{a-x}^{b-x} f_y(y)\,dy
\end{aligned}$$

We can reach this same point by just integrating the joint probability density function for X and Y over the region for which the event is true. In the interior strip, the event $a < z \le b$ is true.
$$\begin{aligned}
P(a < Z \le b) &= \int_{-\infty}^{\infty} dx \int_{a-x}^{b-x} f_{x,y}(x,y)\,dy \\
&= \int_{-\infty}^{\infty} dx\, f_x(x) \int_{a-x}^{b-x} f_y(y)\,dy
\end{aligned}$$

Let $z = x + y$, $dz = dy$:

$$\begin{aligned}
&= \int_{-\infty}^{\infty} dx\, f_x(x) \int_a^b f_y(z - x)\,dz \\
&= \int_a^b \left[ \int_{-\infty}^{\infty} f_x(x)\, f_y(z - x)\,dx \right] dz
\end{aligned}$$

This is true for all $a, b$. Therefore:

$$f_z(z) = \int_{-\infty}^{\infty} f_x(x)\, f_y(z - x)\,dx = \int_{-\infty}^{\infty} f_y(y)\, f_x(z - y)\,dy$$

This result can readily be generalized to the sum of more independent random variables:

$$Z = X_1 + X_2 + \cdots + X_n$$

$$f_z(z) = \int_{-\infty}^{\infty} dx_1 \int_{-\infty}^{\infty} dx_2 \cdots \int_{-\infty}^{\infty} dx_{n-1}\, f_{x_1}(x_1)\, f_{x_2}(x_2) \cdots f_{x_{n-1}}(x_{n-1})\, f_{x_n}(z - x_1 - x_2 - \cdots - x_{n-1})$$

Also, if $W = Y - X$, for X, Y independent:

$$f_w(w) = \int_{-\infty}^{\infty} f_x(x)\, f_y(w + x)\,dx = \int_{-\infty}^{\infty} f_y(y)\, f_x(y - w)\,dy$$

Direct determination of the joint probability density of several functions of several random variables

Suppose we have the joint probability density function of several random variables X, Y, Z, and we wish the joint density of several other random variables defined as functions of X, Y, Z:

$$U = u(X, Y, Z), \quad V = v(X, Y, Z), \quad W = w(X, Y, Z)$$
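The convolution result above is easy to check numerically. Below is a minimal Python sketch using two independent Uniform(0,1) variables as an example of my own choosing (not from the notes); their sum has the exact triangular density $f_z(z) = z$ on $[0,1]$ and $2 - z$ on $[1,2]$, which both the convolution integral and a Monte Carlo estimate should reproduce.

```python
import numpy as np

# Numerical check of f_z(z) = integral f_x(x) f_y(z - x) dx for two
# independent Uniform(0,1) variables; the exact answer is triangular.

def f_uniform(x):
    """Density of Uniform(0,1)."""
    return np.where((x >= 0.0) & (x <= 1.0), 1.0, 0.0)

def f_sum(z, n=20_000):
    """Convolution integral evaluated by a simple Riemann sum."""
    x = np.linspace(-0.5, 1.5, n)
    dx = x[1] - x[0]
    return np.sum(f_uniform(x) * f_uniform(z - x)) * dx

rng = np.random.default_rng(0)
samples = rng.random(1_000_000) + rng.random(1_000_000)  # draws of Z = X + Y

for z in (0.25, 0.75, 1.5):
    exact = z if z <= 1.0 else 2.0 - z  # triangular density
    dz = 0.01                           # small bin for a density estimate
    mc = np.mean(np.abs(samples - z) < dz / 2) / dz
    print(f"z={z:4.2f}  convolution={f_sum(z):.4f}  exact={exact:.4f}  MC~{mc:.3f}")
```

All three columns should agree to within the Monte Carlo and grid resolution.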
If $f_{x,y,z}(x,y,z)$ is finite everywhere, the density $f_{u,v,w}(u,v,w)$ can be found directly by the following steps.

1. Evaluate the Jacobian of the transformation from X, Y, Z to U, V, W:

$$J(x,y,z) = \begin{vmatrix} \dfrac{\partial u(x,y,z)}{\partial x} & \dfrac{\partial u(x,y,z)}{\partial y} & \dfrac{\partial u(x,y,z)}{\partial z} \\[1ex] \dfrac{\partial v(x,y,z)}{\partial x} & \dfrac{\partial v(x,y,z)}{\partial y} & \dfrac{\partial v(x,y,z)}{\partial z} \\[1ex] \dfrac{\partial w(x,y,z)}{\partial x} & \dfrac{\partial w(x,y,z)}{\partial y} & \dfrac{\partial w(x,y,z)}{\partial z} \end{vmatrix}$$

2. For every value of u, v, w, solve the transformation equations for x, y, z. If there is more than one solution, get all of them:

$$\left. \begin{aligned} u(X,Y,Z) &= u \\ v(X,Y,Z) &= v \\ w(X,Y,Z) &= w \end{aligned} \right\} \Rightarrow \begin{cases} x_i(u,v,w) \\ y_i(u,v,w) \\ z_i(u,v,w) \end{cases}$$

3. Then

$$f_{u,v,w}(u,v,w) = \sum_i \frac{f_{x,y,z}(x_i, y_i, z_i)}{\left| J(x_i, y_i, z_i) \right|}$$

with $x_i, y_i, z_i$ given in terms of u, v, w.

This approach can be applied to the determination of the density function for m variables which are defined to be functions of n variables (n > m) by adding some simple auxiliary variables such as x, y, etc. to the list of m so as to total n variables. Then apply this procedure and finally integrate out the unwanted auxiliary variables.

Example: Product U = XY

To illustrate this procedure, suppose we are given $f_{x,y}(x,y)$ and wish to find the probability density function for the product $U = XY$. First, define a second random variable; for simplicity, choose $V = X$. Then use the given 3-step procedure.

1. Evaluate the Jacobian:

$$J(x,y) = \begin{vmatrix} y & x \\ 1 & 0 \end{vmatrix} = -x$$

2. Solve the transformation equations:

$$\left. \begin{aligned} xy &= u \\ x &= v \end{aligned} \right\} \Rightarrow x = v, \quad y = \frac{u}{x} = \frac{u}{v}$$

3. Then find:

$$f_{u,v}(u,v) = \frac{1}{|v|}\, f_{x,y}\!\left(v, \frac{u}{v}\right)$$
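As a cross-check of the three steps, here is a small symbolic sketch in Python with sympy (my own illustration, not part of the notes); the undefined function f stands for the joint density $f_{x,y}$.

```python
import sympy as sp

x, y, u, v = sp.symbols('x y u v', real=True)
f = sp.Function('f')  # placeholder for the joint density f_{x,y}

# Step 1: Jacobian of (u, v) = (xy, x) with respect to (x, y).
J = sp.Matrix([x*y, x]).jacobian([x, y]).det()
print(J)  # -> -x

# Step 2: solve the transformation equations for x and y.
sol = sp.solve([sp.Eq(x*y, u), sp.Eq(x, v)], [x, y], dict=True)
print(sol)  # -> [{x: v, y: u/v}] (a single solution branch here)

# Step 3: sum f_{x,y}(x_i, y_i) / |J(x_i, y_i)| over the solution branches.
f_uv = sum(f(s[x], s[y]) / sp.Abs(J.subs({x: s[x], y: s[y]})) for s in sol)
print(f_uv)  # -> f(v, u/v)/Abs(v)
```

The single solution branch reproduces the hand result $f_{u,v}(u,v) = f_{x,y}(v, u/v)/|v|$.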
To get the density for the product only, integrate out with respect to v:

$$f_u(u) = \int_{-\infty}^{\infty} \frac{1}{|v|}\, f_{x,y}\!\left(v, \frac{u}{v}\right) dv$$

If X and Y are independent this becomes

$$f_u(u) = \int_{-\infty}^{\infty} \frac{1}{|v|}\, f_x(v)\, f_y\!\left(\frac{u}{v}\right) dv, \quad \text{or} \quad f_u(u) = \int_{-\infty}^{\infty} \frac{1}{|x|}\, f_x(x)\, f_y\!\left(\frac{u}{x}\right) dx$$

The Uniform Distribution

In our problems we have been using the uniform distribution without having concisely defined it. This is a continuous distribution in which the probability density function is uniform (constant) over some finite interval.

[Figure: constant density of height $1/(b-a)$ on the interval $(a,b)$, zero elsewhere.]

Thus a random variable having a uniform distribution takes values only over some finite interval (a, b) and has uniform probability density over that interval.

In what situation does it arise? Examples include part tolerances, quantization error, and limit cycles. Often you do not know anything more than that the unknown value lies between known bounds.
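To make the quantization-error example concrete, here is a brief Python sketch (my own example, with an arbitrary step size q). Rounding a signal that is wide compared with q leaves an error spread uniformly over $(-q/2, q/2]$, with standard deviation $q/\sqrt{12}$, the uniform-distribution result derived next.

```python
import numpy as np

rng = np.random.default_rng(2)
q = 0.1                                    # quantizer step size (arbitrary)
signal = rng.normal(0.0, 5.0, 1_000_000)   # any signal wide relative to q
error = np.round(signal / q) * q - signal  # quantization error

# The error histogram is flat on (-q/2, q/2); its standard deviation is
# close to q/sqrt(12).
counts, _ = np.histogram(error, bins=10, range=(-q/2, q/2))
print(counts / counts.sum())         # -> roughly 0.1 in each of the 10 bins
print(error.std(), q / np.sqrt(12))  # -> both approximately 0.0289
```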
$$\overline{X} = \int_a^b x\, \frac{1}{b-a}\,dx = \frac{b^2 - a^2}{2(b-a)} = \frac{1}{2}(a + b)$$

$$\overline{X^2} = \int_a^b x^2\, \frac{1}{b-a}\,dx = \frac{b^3 - a^3}{3(b-a)} = \frac{1}{3}\left(a^2 + ab + b^2\right)$$

$$\sigma^2 = \overline{X^2} - \overline{X}^2 = \frac{1}{3}\left(a^2 + ab + b^2\right) - \frac{1}{4}\left(a^2 + 2ab + b^2\right) = \frac{1}{12}\left(a^2 - 2ab + b^2\right) = \frac{1}{12}(b-a)^2$$

$$\sigma = \frac{1}{\sqrt{12}}(b - a)$$

The Binomial Distribution

Outline:
1. Definition of the distribution
2. Determination of the binomial coefficient and binomial distribution
3. Useful relations in dealing with binomial coefficients and factorials
4. The mean, mean square, and variance of the binomial distribution

1. Definition of the distribution

Consider an experiment in which we identify two outcomes, one of which we call success and the other failure. The conduct of this experiment and the observation of the outcome may be called a simple trial. If the trial is then repeated under such circumstances that we consider the outcome on any trial to be independent of the outcomes on all other trials, we have a process frequently called Bernoulli Trials, after the man who first studied at length the results of such a process.

The number of successes in n Bernoulli trials is a discrete random variable whose distribution is known as the Binomial Distribution.

Note that the binomial distribution need not refer only to such simple situations as observing the number of heads in n tosses of a coin. An experiment may have a great many simple outcomes – the outcomes may even be continuously