J. Chen, Handout 7, STAT551A
Probability and Mathematical Statistics
http://www-rohan.sdsu.edu/~jchenyp/STAT551A.htm
3.7 Joint Probability Density Function (joint pdf)
Up to now, we have only discussed a single random variable. However, investigators
are often interested in probability statements concerning two or more random variables.
If X1 and X2 are random variables, then (X1, X2) is called a bivariate random
vector. In general, if X1, X2, · · · , Xn are n random variables, then X = (X1, · · · , Xn)
is called a multivariate random vector. For much of this section, we will consider
the bivariate case n = 2.
The purpose of this section is to introduce the concepts, definitions, and mathematical
techniques associated with distributions based on two (or more) random
variables.
1. Discrete joint pdfs
Def. 1. Suppose S is a discrete sample space on which two random variables, X
and Y , are defined, taking values in the sets SX and SY . The joint probability density
function of X and Y (or joint pdf) is denoted pX,Y (x, y), where
pX,Y (x, y) = P(X = x, Y = y) = P({s|X(s) = x, Y (s) = y}), x ∈ SX; y ∈ SY .
The requirements for pX,Y (x, y) to be a pdf are that
1) pX,Y (x, y) ≥ 0 for all x ∈ SX and y ∈ SY , and
2) Σ_{x∈SX} Σ_{y∈SY} pX,Y (x, y) = 1.
The following theorem gives a formal statement of the relationship between the
joint pdf and the marginal pdfs.
Theorem 1. Suppose that pX,Y (x, y) is the joint pdf of the discrete random variables X
and Y . Then
pX (x) = Σ_{y∈SY} pX,Y (x, y),   pY (y) = Σ_{x∈SX} pX,Y (x, y).
Note that an individual pdf pX (x) or pY (y), obtained by summing the joint pdf over
all values of the other random variable, is called a marginal pdf of X or Y .
Proof: Since ∪_{y∈SY} {Y = y} = S and the events {Y = y} are disjoint, they form a
partition of S. Then {X = x} = ∪_{y∈SY} {X = x, Y = y}, a union of disjoint events, so
pX (x) = P(X = x) = Σ_{y∈SY} P(X = x, Y = y) = Σ_{y∈SY} pX,Y (x, y).
The theorem follows.
EX 1: A consumer testing agency classifies automobile defects as minor or major.
Let X denote the number of minor defects and Y the number of major defects in a
randomly selected automobile, with joint pdf as in the table:
pX,Y (x, y)   y = 0   y = 1   y = 2   y = 3   pX (x)
x = 0          0.1     0.2     0.2     0.1     0.6
x = 1          0.05    0.05    0.1     0.1     0.3
x = 2          0       0.01    0.04    0.05    0.1
pY (y)         0.15    0.26    0.34    0.25    1
1) The entries in the table represent the joint pdf of X and Y . For example,
pX,Y (2, 1) = P(X = 2, Y = 1) = 0.01,   pX,Y (2, 3) = P(X = 2, Y = 3) = 0.05.
2) Also, from the joint pdf in the table, the individual distributions of X and Y can be
found. For example, the probability of having one minor defect is found by summing
all entries in the row for X = 1. Thus
pX (1) = P(X = 1) = pX,Y (1, 0) + pX,Y (1, 1) + pX,Y (1, 2) + pX,Y (1, 3) = 0.05 + 0.05 + 0.1 + 0.1 = 0.3.
Similarly, summing the column for Y = 1 gives the probability of one major defect:
pY (1) = P(Y = 1) = pX,Y (0, 1) + pX,Y (1, 1) + pX,Y (2, 1) = 0.2 + 0.05 + 0.01 = 0.26.
3) P(X + Y < 1) = pX,Y (0, 0) = 0.1, and
P(X + Y < 2) = pX,Y (0, 0) + pX,Y (0, 1) + pX,Y (1, 0) = 0.1 + 0.2 + 0.05 = 0.35
and P(XY = 1) = pX,Y (1, 1) = 0.05.
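These calculations are easy to check by machine. The following minimal Python sketch (not part of the handout; the names are illustrative) stores the joint pmf of EX 1 as a dictionary and recovers the marginals and event probabilities above.

    # Joint pmf of (X, Y) from EX 1, stored as {(x, y): probability}.
    p = {(0, 0): 0.10, (0, 1): 0.20, (0, 2): 0.20, (0, 3): 0.10,
         (1, 0): 0.05, (1, 1): 0.05, (1, 2): 0.10, (1, 3): 0.10,
         (2, 0): 0.00, (2, 1): 0.01, (2, 2): 0.04, (2, 3): 0.05}

    # Marginal pmfs, by summing over the other variable (Theorem 1).
    pX = {x: sum(p[(x, y)] for y in range(4)) for x in range(3)}
    pY = {y: sum(p[(x, y)] for x in range(3)) for y in range(4)}
    print(pX)  # approx {0: 0.6, 1: 0.3, 2: 0.1}
    print(pY)  # approx {0: 0.15, 1: 0.26, 2: 0.34, 3: 0.25}

    # Event probabilities: sum the joint pmf over the pairs in the event.
    print(sum(q for (x, y), q in p.items() if x + y < 2))   # approx 0.35
    print(sum(q for (x, y), q in p.items() if x * y == 1))  # approx 0.05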
EX 2: Consider an experiment of tossing a fair coin and rolling a fair die. Let X = the
face value of the coin (0 or 1), and Y = the face value of the die.
Now consider the random vector (X, Y ). The sample space for Z = (X, Y ) is
SZ = {(0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (0, 6), (1, 1), · · · , (1, 6)}.
Since the coin and the die are independent,
P(X = x, Y = y) = P(X = x)P(Y = y) = (1/2)(1/6) = 1/12
for any combination, so the joint pdf of X and Y is obtained.
Next, define U = X + Y and find its pdf. We always determine the possible values
(the sample space) first, then the probabilities. Here SU = {1, 2, 3, 4, 5, 6, 7}, and the
distribution of U is
pU (1) = pU (7) = 1/12,   pU (u) = 2/12 for u = 2, · · · , 6.
Thirdly, for events combining X and Y : P(X ≤ Y ) = P(S) = 1 (since X ≤ 1 ≤ Y
always), P(X + Y ≤ 1) = 1/12, and P(X + Y ≤ 2) = 3/12.
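Because all 12 outcomes are equally likely, the distribution of U can also be obtained by brute-force enumeration. A minimal Python sketch (illustrative, not part of the handout), using exact fractions:

    # Enumerate the 12 equally likely (coin, die) outcomes and tabulate U = X + Y.
    from fractions import Fraction
    from collections import defaultdict

    pU = defaultdict(Fraction)
    for x in (0, 1):              # coin: X = 0 or 1
        for y in range(1, 7):     # die: Y = 1, ..., 6
            pU[x + y] += Fraction(1, 12)  # each pair has probability 1/12

    print(dict(pU))  # pU(1) = pU(7) = 1/12; pU(u) = 1/6 for u = 2, ..., 6
    print(sum(q for u, q in pU.items() if u <= 2))  # P(X + Y <= 2) = 3/12 = 1/4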
EX 3: During non-rush hours a store runs 2 checkout lines. Let X = the number of
customers in line 1 and Y = the number of customers in line 2, with the joint pdf
pX,Y (x, y) of (X, Y ) given in a table. For example,
P(X = 2, Y = 3) = 0.015, P(X = 0) = 0.30 (the marginal for X = 0), and P(Y = 1) =
0.47 (the marginal for Y = 1). Also,
P(|X − Y | = 1) = Σ Σ_{|x−y|=1} pX,Y (x, y)
= pX,Y (0, 1) + pX,Y (1, 0) + pX,Y (1, 2) + pX,Y (2, 3) + · · ·
= 0.2 + 0.2 + 0.05 + 0.05 + 0.025 + 0.025 = 0.55.
2. Continuous joint pdfs
1). Joint probability density function (pdf)
Def. 2. Let X and Y be two continuous r.v.'s. Then a joint probability density
function (pdf) of (X, Y ) is a two-dimensional function fX,Y (x, y) such that for any
region R of the xy-plane Ω2,
P((X, Y ) ∈ R) = ∫∫_R fX,Y (x, y) dx dy;
that is, the double integral yields the probability that (X, Y ) lies in the specified
region R.
It is clear that the probability has the following properties:
i). For any real numbers a and c, P(X = a, Y = c) = ∫_a^a ∫_c^c fX,Y (x, y) dx dy = 0 (the
event must be a region of positive area to have a positive probability).
ii). P(a ≤ X ≤ b, c ≤ Y ≤ d) = ∫_c^d ∫_a^b fX,Y (x, y) dx dy.
Furthermore, if a function fX,Y (x, y) satisfies the following two conditions:
• fX,Y (x, y) ≥ 0 for all (x, y) ∈ Ω2,
• ∫_{−∞}^{∞} ∫_{−∞}^{∞} fX,Y (x, y) dx dy = 1,
then fX,Y (x, y) is a joint pdf of some pair (X, Y ).
EX 4: Let X and Y be two cont. r.v.'s with a joint pdf
fX,Y (x, y) = k(x + y), 0 < x < 1, 0 < y < 1, and 0 otherwise.
(1) Find k; (2) find P(0 ≤ X ≤ 1/2, …).
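For part (1): the constant must make the density integrate to 1 over the unit square, and since ∫_0^1 ∫_0^1 (x + y) dx dy = 1/2 + 1/2 = 1, we get k = 1. A quick numerical confirmation with SciPy (a sketch assuming SciPy is installed; not part of the handout):

    # Integrate x + y over the unit square; the result is 1, so k = 1.
    from scipy import integrate

    val, err = integrate.dblquad(lambda y, x: x + y, 0, 1, 0, 1)
    print(val)  # approx 1.0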
*EX 6: Let X and Y be two cont. r.v.'s with a joint pdf fX,Y (x, y) = xye^{−(x+y)}, x > 0,
y > 0. Find P(X > 2Y ).
Solution: By integration by parts, for a > 0 we have ∫_a^∞ xe^{−x} dx = (a + 1)e^{−a}. Hence
P(X > 2Y ) = ∫_0^∞ ye^{−y} [∫_{2y}^∞ xe^{−x} dx] dy = ∫_0^∞ ye^{−y}(2y + 1)e^{−2y} dy
= ∫_0^∞ (2y² + y)e^{−3y} dy = 2 · (2/27) + 1/9 = 7/27.
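The value 7/27 ≈ 0.2593 can be confirmed numerically (a SciPy sketch under the same setup; not part of the handout):

    # P(X > 2Y): integrate f(x, y) = x*y*exp(-(x+y)) over x in (2y, inf), y in (0, inf).
    import numpy as np
    from scipy import integrate

    f = lambda x, y: x * y * np.exp(-(x + y))
    val, err = integrate.dblquad(f, 0, np.inf, lambda y: 2 * y, np.inf)
    print(val, 7 / 27)  # both approx 0.259259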
Theorem 2: Let X and Y be two cont. r.v.'s with a joint pdf fX,Y (x, y). Then the
individual pdfs of X and Y are given by, respectively,
fX (x) = ∫_{−∞}^{∞} fX,Y (x, y) dy,   fY (y) = ∫_{−∞}^{∞} fX,Y (x, y) dx;
we call fX (x) the marginal pdf of X, and fY (y) the marginal pdf of Y .
Proof: We only prove the first form in Thm 2. By the Def., the cdf of X alone is
given by
FX (x) = P(X < x) = ∫_{−∞}^{∞} [∫_{−∞}^{x} fX,Y (t, y) dt] dy;
differentiating both sides of the equation above with respect to x, we have
fX (x) = d/dx FX (x) = ∫_{−∞}^{∞} fX,Y (x, y) dy.
EX 7: (cont.) Let the joint pdf of the r.v.'s X and Y be fX,Y (x, y) = x + y, 0 < x <
1, 0 < y < 1. Find fX (x) and fY (y). (By Theorem 2, fX (x) = ∫_0^1 (x + y) dy = x + 1/2
for 0 < x < 1, and, by symmetry, fY (y) = y + 1/2 for 0 < y < 1.)
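The same marginals can be obtained symbolically (a SymPy sketch, assuming SymPy is available; not part of the handout):

    # Marginals of f(x, y) = x + y on the unit square, via Theorem 2.
    import sympy as sp

    x, y = sp.symbols('x y')
    f = x + y
    fX = sp.integrate(f, (y, 0, 1))  # -> x + 1/2
    fY = sp.integrate(f, (x, 0, 1))  # -> y + 1/2
    print(fX, fY)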
2). Joint cumulative distribution functions (cdfs)
Def. 3. Let X and Y be any two random variables. Then the joint cumulative
distribution function (cdf) of X and Y is
FX,Y (u, v) = P(X < u, Y < v) = ∫_{−∞}^{v} ∫_{−∞}^{u} fX,Y (x, y) dx dy,
for any (u, v) ∈ Ω2, the integral form holding when X and Y are continuous.
Some properties of the joint cdf
• P(a ≤ X ≤ b, c ≤ Y ≤ d) = FX,Y (b, d) − FX,Y (a, d) − FX,Y (b, c) + FX,Y (a, c).
In fact, P(a ≤ X ≤ b, c ≤ Y ≤ d) = ∫_c^d ∫_a^b fX,Y (x, y) dx dy, and writing this
integral over the rectangle as a combination of four integrals of the form
∫_{−∞}^{·} ∫_{−∞}^{·} gives the identity above.
Further, the joint pdf can be obtained by differentiating the joint cdf.
Theorem 3: Let FX,Y (x, y) be the joint cdf of cont. r.v.'s X and Y . Then the joint
pdf fX,Y (x, y) of X and Y is the second mixed partial derivative of the joint cdf,
fX,Y (x, y) = ∂²/∂x∂y FX,Y (x, y),
provided FX,Y (.) has a continuous second partial derivative.
EX 9: (cont.) Let the joint pdf of (X, Y ) be fX,Y (x, y) = x + y for 0 < x < 1, 0 <
y < 1. Find the cdf FX,Y (u, v).
Solution: By using the Def. of the cdf, we have
FX,Y (u, v) = ∫_0^v ∫_0^u (x + y) dx dy = (u²v + uv²)/2 = uv(u + v)/2, 0 < u < 1, 0 < v < 1.
EX 10: Let the joint pdf be fX,Y (x, y) = (4/3)(x + xy), 0 < x < 1, 0 < y < 1 (the
constant 4/3 makes the density integrate to 1). Find FX,Y (u, v).
Solution: By the Def. of the cdf, for 0 < u < 1, 0 < v < 1, we have
FX,Y (u, v) = ∫_0^v ∫_0^u (4/3)x(1 + y) dx dy = (4/3) · (u²/2) · (v + v²/2) = u²v(2 + v)/3.
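Both cdfs can be checked symbolically (a SymPy sketch; u and v are symbols standing for the upper limits, and the constant 4/3 is the normalization discussed above):

    # Joint cdfs for EX 9 and EX 10 by direct double integration.
    import sympy as sp

    x, y, u, v = sp.symbols('x y u v', positive=True)

    F9 = sp.integrate(x + y, (x, 0, u), (y, 0, v))
    print(sp.factor(F9))           # u*v*(u + v)/2

    F10 = sp.integrate(sp.Rational(4, 3) * (x + x * y), (x, 0, u), (y, 0, v))
    print(sp.factor(F10))          # u**2*v*(v + 2)/3
    print(F10.subs({u: 1, v: 1}))  # 1, as a cdf must satisfy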
3). Some common joint continuous pdfs of X and Y
• Joint exponential distribution: Let (X, Y ) be a continuous r.v. vector with the
pdf
fX,Y (x, y) = λ1λ2 e^{−λ1x−λ2y}, x > 0, y > 0, and 0 otherwise.
We say (X, Y ) follows a double exponential distribution with parameters
λ1 and λ2 (hazard rates), denoted by Exp(λ1, λ2).
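Under the product form above, the density integrates to 1 for any λ1, λ2 > 0; a quick check (SciPy sketch with assumed example rates, not part of the handout):

    # Verify the double exponential density integrates to 1 (example rates assumed).
    import numpy as np
    from scipy import integrate

    l1, l2 = 2.0, 0.5
    f = lambda x, y: l1 * l2 * np.exp(-l1 * x - l2 * y)
    val, err = integrate.dblquad(f, 0, np.inf, 0, np.inf)
    print(val)  # approx 1.0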
• Joint uniform distribution: Let (X, Y ) be a continuous r.v. vector with the
pdf
fX,Y (x, y) = 1/((b − a)(d − c)), a ≤ x ≤ b, c ≤ y ≤ d, and 0 otherwise.
We say (X, Y ) follows a uniform distribution on the region [a, b] × [c, d], denoted
by Unif[a, b; c, d].
• Joint normal distribution: Let (X, Y ) be a continuous r.v. vector with the pdf
fX,Y (x, y) = [1/(2πσ1σ2√(1 − r²))] exp{ −[1/(2(1 − r²))] [ (x − µ1)²/σ1²
− 2r(x − µ1)(y − µ2)/(σ1σ2) + (y − µ2)²/σ2² ] }
for −∞ < x < ∞, −∞ < y < ∞, where (µ1, µ2) are two location parameters,
(σ1, σ2) are two scale parameters, and r is the correlation coefficient of X
and Y . We say (X, Y ) follows a double (bivariate) normal distribution, denoted by
N(µ1, σ1²; µ2, σ2²; r). Actually, µ1 is the mean of X while σ1 is the
standard deviation of X.
When µ1 = 0, µ2 = 0 and σ1 = 1, σ2 = 1, we say (X, Y ) has a standard double
normal distribution.
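The density written above can be compared against SciPy's built-in multivariate normal (a sketch; the parameter values are assumed for illustration):

    # Evaluate the bivariate normal pdf two ways: the formula above vs. SciPy.
    import numpy as np
    from scipy.stats import multivariate_normal

    mu1, mu2, s1, s2, r = 1.0, -0.5, 2.0, 1.5, 0.3   # assumed example parameters
    x0, y0 = 0.0, 0.0

    # The formula above.
    zx, zy = (x0 - mu1) / s1, (y0 - mu2) / s2
    q = (zx**2 - 2 * r * zx * zy + zy**2) / (2 * (1 - r**2))
    f_formula = np.exp(-q) / (2 * np.pi * s1 * s2 * np.sqrt(1 - r**2))

    # SciPy: N(mu, Sigma) with Sigma built from (s1, s2, r).
    cov = [[s1**2, r * s1 * s2], [r * s1 * s2, s2**2]]
    f_scipy = multivariate_normal([mu1, mu2], cov).pdf([x0, y0])

    print(f_formula, f_scipy)  # the two values agree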
3. Multivariate density functions
Let X1, · · · , Xn be n r.v.'s. Then the joint pdf of n discrete r.v.'s is defined as
pX1,···,Xn (x1, · · · , xn) = P(X1 = x1, · · · , Xn = xn).
The joint cdf of n cont. r.v.'s is
FX1,···,Xn (x1, · · · , xn) = P(X1 < x1, · · · , Xn < xn)
= ∫_{−∞}^{x1} · · · ∫_{−∞}^{xn} fX1,···,Xn (t1, · · · , tn) dt1 · · · dtn,
and the joint pdf is
fX1,···,Xn (x1, · · · , xn) = [∂^n/(∂x1 · · · ∂xn)] FX1,···,Xn (x1, · · · , xn),
provided that FX1,···,Xn (.) has a cont. nth partial derivative.
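As an illustration of the n-variate relation, take n = 3 and an assumed cdf F(x1, x2, x3) = (1 − e^{−x1})(1 − e^{−x2})(1 − e^{−x3}) on positive arguments (three independent unit exponentials; this example is not from the handout). Differentiating once in each variable recovers the pdf (SymPy sketch):

    # Recover a trivariate pdf from its cdf by mixed partial differentiation.
    import sympy as sp

    x1, x2, x3 = sp.symbols('x1 x2 x3', positive=True)
    F = (1 - sp.exp(-x1)) * (1 - sp.exp(-x2)) * (1 - sp.exp(-x3))
    f = sp.diff(F, x1, x2, x3)
    print(sp.simplify(f))  # exp(-x1 - x2 - x3), the product of three Exp(1) pdfs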
4. Independence of two random variables
First of all, we review the independence of two events. Let A and B be any two
events defined in the sample space S. If
P(A ∩ B) = P(A)P(B),
then A and B are independent. Now we extend the definition to the case of two
r.v.'s.
Def. 4. The r.v.'s X and Y are said to be independent if for all intervals A and
B,
P(X ∈ A, Y ∈ B) = P(X ∈ A)P(Y ∈ B).
Notice that
• if X and Y are discrete r.v.'s, then X and Y are independent iff pX,Y (x, y) =
pX (x)pY (y) for all x and y;
• if X and Y are cont. r.v.'s, then X and Y are independent iff fX,Y (x, y) =
fX (x)fY (y) for all x and y.
The following theorem is useful for checking independence:
Theorem 4: The random variables X and Y are independent iff there are functions
g(x) and h(y) such that
fX,Y (x, y) = g(x)h(y) for all x and y.
If this holds, there is a constant k such that fX (x) = kg(x) and fY (y) = (1/k)h(y).
EX 11: Let X and Y be cont. r.v.'s with a joint pdf fX,Y (x, y) = x + y, 0 < x <
1, 0 < y < 1, and 0 otherwise. Are X and Y independent?
Clearly, fX (x) = x + 1/2 and fY (y) = y + 1/2, so fX,Y (x, y) ≠ fX (x)fY (y). Therefore,
X and Y are not independent. (Equivalently, x + y cannot be factored into a
function of x times a function of y.)
EX 12: Let X and Y be cont. r.v.'s with a joint pdf fX,Y (x, y) = 12xy(1 − y),
0 < x < 1, 0 < y < 1, and 0 otherwise. Questions: (1) Are X and Y independent?
(2) Find fX (x) and fY (y).
Take g(x) = 12x and h(y) = y(1 − y). Clearly, fX,Y (x, y) = g(x)h(y); that is,
fX,Y (x, y) can be factored into a function of x times a function of y. According to
Thm. 4, X and Y are independent.
Further, by Theorem 4, 1 = ∫_{−∞}^{∞} kg(x) dx = k ∫_0^1 12x dx = 6k, hence k = 1/6.
Thus, fX (x) = kg(x) = 2x, and fY (y) = (1/k)h(y) = 6y(1 − y).
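The factorization and the two marginals can be confirmed symbolically (SymPy sketch, not part of the handout):

    # EX 12: marginals of f(x, y) = 12*x*y*(1 - y) and an independence check.
    import sympy as sp

    x, y = sp.symbols('x y')
    f = 12 * x * y * (1 - y)
    fX = sp.integrate(f, (y, 0, 1))  # -> 2*x
    fY = sp.integrate(f, (x, 0, 1))  # -> 6*y*(1 - y)
    print(fX, sp.expand(fY))
    print(sp.simplify(f - fX * fY) == 0)  # True: f factors, so X, Y independent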
In general, the independence of n (n > 2) random variables can be defined as
follows:
Def. 5. The n random variables X1, X2, · · · , Xn are said to be independent if there
are functions g1(x1), g2(x2), · · · , gn(xn) such that for every x1, · · · , xn,
fX1,···,Xn (x1, · · · , xn) = g1(x1) · · · gn(xn).
