X is said to be a Gaussian vector if any linear combination of its components is Gaussian (possibly degenerate, i.e. almost surely constant). This means that for every $\xi \in \mathbb{R}^d$, the real random variable $\xi^T X$ follows a (possibly degenerate) Gaussian distribution.
Proposition
X is a Gaussian vector if and only if its characteristic function is written as :
$$\forall \xi \in \mathbb{R}^d,\quad \Phi_X(\xi) = \exp\!\left(i\,\xi^T m - \tfrac{1}{2}\,\xi^T \Gamma\, \xi\right)$$
Proof
It suffices to apply the definition, remembering that the characteristic function of the Gaussian distribution $\mathcal{N}(m, \sigma^2)$ is $\xi \mapsto e^{im\xi - \frac{\sigma^2 \xi^2}{2}}$. For the converse, one checks, using the characteristic function, that $\xi^T X$ follows a Gaussian distribution for all $\xi \in \mathbb{R}^d$: indeed, $\Phi_{\xi^T X}(t) = \Phi_X(t\xi) = \exp\!\big(it\,\xi^T m - \tfrac{t^2}{2}\,\xi^T \Gamma \xi\big)$, which is the characteristic function of $\mathcal{N}(\xi^T m, \xi^T \Gamma \xi)$.
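As a quick sanity check of this proposition, here is a minimal numerical sketch (not part of the original notes; the values of m, Γ and ξ are arbitrary illustrative choices) that compares the empirical characteristic function of simulated Gaussian samples with the closed form above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions for the example only).
m = np.array([1.0, -2.0])
Gamma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

# Sample X = m + R Z with R a square root of Gamma (Cholesky factor here).
R = np.linalg.cholesky(Gamma)
Z = rng.standard_normal((100_000, 2))
X = m + Z @ R.T

xi = np.array([0.3, -0.7])

# Empirical characteristic function E[exp(i xi^T X)].
phi_emp = np.mean(np.exp(1j * X @ xi))

# Closed form exp(i xi^T m - 1/2 xi^T Gamma xi).
phi_th = np.exp(1j * xi @ m - 0.5 * xi @ Gamma @ xi)

print(phi_emp, phi_th)  # the two values should agree up to Monte Carlo error
```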
Theorem
Let $X$ be a Gaussian vector of $\mathbb{R}^d$ with expectation $m$ and covariance matrix $\Gamma$. Then $X$ has a density if and only if $\Gamma$ is invertible, in which case the density is written as :
$$f_X(x) = \frac{1}{\sqrt{\det(2\pi\Gamma)}}\,\exp\!\left(-\tfrac{1}{2}(x-m)^T \Gamma^{-1} (x-m)\right)$$
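For a concrete check, the following sketch (not from the original text; the parameters m, Γ and the evaluation point x are arbitrary) evaluates the density formula of the theorem directly and compares it with scipy.stats.multivariate_normal.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Illustrative parameters (assumptions for the example only).
m = np.array([0.0, 1.0])
Gamma = np.array([[1.5, 0.3],
                  [0.3, 0.8]])
x = np.array([0.5, 0.5])

# Density computed directly from the formula of the theorem.
diff = x - m
f_formula = np.exp(-0.5 * diff @ np.linalg.solve(Gamma, diff)) \
            / np.sqrt(np.linalg.det(2 * np.pi * Gamma))

# Reference value from scipy.
f_scipy = multivariate_normal(mean=m, cov=Gamma).pdf(x)

print(f_formula, f_scipy)  # both numbers should coincide
```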
Proof
There exists $U \in O(d)$ (i.e., $UU^T = U^T U = I_d$) such that $U^T \Gamma U = D = \mathrm{diag}(\sigma_1^2, \dots, \sigma_d^2)$.
Since $\Gamma$ is invertible, we have $\sigma_i^2 > 0$ for every $i$.
Let $R = U\,\mathrm{diag}(\sigma_1, \dots, \sigma_d)\,U^T$, which satisfies $R^2 = \Gamma$.
Let $Z \sim \mathcal{N}(0, I_d)$ (i.e., $Z$ is the vector whose coordinates are i.i.d., centered, Gaussian with variance 1); then $m + RZ \sim \mathcal{N}(m, \Gamma)$, so $X$ and $m + RZ$ have the same law.
Thus, for any measurable function $\varphi : \mathbb{R}^d \to \mathbb{R}$ with $\varphi \ge 0$,
$$\mathbb{E}[\varphi(X)] = \mathbb{E}[\varphi(m+RZ)] = \int_{\mathbb{R}^d} \varphi(m+Rz)\,\frac{e^{-\|z\|^2/2}}{(2\pi)^{d/2}}\,dz = \int_{\mathbb{R}^d} \varphi(x)\,\frac{1}{\sqrt{\det(2\pi\Gamma)}}\,e^{-\frac{1}{2}(x-m)^T \Gamma^{-1}(x-m)}\,dx,$$
using the change of variables $x = m + Rz$ (so that $dx = \det(R)\,dz = \sqrt{\det\Gamma}\,dz$ and $\|z\|^2 = (x-m)^T \Gamma^{-1}(x-m)$). This identifies the announced density of $X$. Conversely, if $\Gamma$ is not invertible, then $X - m$ lies almost surely in the proper subspace $\mathrm{Im}(\Gamma) \subsetneq \mathbb{R}^d$, which has Lebesgue measure zero, so $X$ cannot have a density.
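The construction in this proof translates directly into a simulation recipe. Below is a small numerical sketch (with illustrative parameters that are not from the notes) that builds the symmetric square root R from the eigendecomposition of Γ and checks that m + RZ has the expected mean and covariance.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative invertible covariance matrix and mean (assumptions for the example).
Gamma = np.array([[2.0, 0.6, 0.0],
                  [0.6, 1.0, 0.2],
                  [0.0, 0.2, 0.5]])
m = np.array([1.0, 0.0, -1.0])

# Diagonalize Gamma = U D U^T with U orthogonal, as in the proof.
eigvals, U = np.linalg.eigh(Gamma)
R = U @ np.diag(np.sqrt(eigvals)) @ U.T   # symmetric square root: R @ R == Gamma

assert np.allclose(R @ R, Gamma)

# Sampling X = m + R Z with Z standard normal reproduces N(m, Gamma).
Z = rng.standard_normal((200_000, 3))
X = m + Z @ R.T   # R is symmetric, so R.T == R

print(np.cov(X, rowvar=False))  # should be close to Gamma
print(X.mean(axis=0))           # should be close to m
```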
Let $X$ and $Y$ be Gaussian vectors of $\mathbb{R}^p$ and $\mathbb{R}^q$ respectively. We assume that the vector $Z = (X^T, Y^T)^T$ is Gaussian.
Then $X$ and $Y$ are independent if and only if $\mathrm{Cov}(X_i, Y_j) = 0$ for all $i, j$.
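Before the proof, here is a brief numerical illustration (with arbitrary parameters chosen only for the example) of the useful direction: for a jointly Gaussian vector whose cross-covariances Cov(X_i, Y_j) vanish, expectations of products of functions of X and Y factorize, as independence predicts.

```python
import numpy as np

rng = np.random.default_rng(2)

# Block-diagonal covariance: Cov(X_i, Y_j) = 0 for all i, j (illustrative values).
Gamma_X = np.array([[1.0, 0.4],
                    [0.4, 2.0]])
Gamma_Y = np.array([[0.7]])
Gamma_Z = np.block([[Gamma_X, np.zeros((2, 1))],
                    [np.zeros((1, 2)), Gamma_Y]])

Z = rng.multivariate_normal(mean=np.zeros(3), cov=Gamma_Z, size=500_000)
X, Y = Z[:, :2], Z[:, 2]

# Independence implies E[f(X) g(Y)] = E[f(X)] E[g(Y)], e.g. for f = ||.||, g = |.|.
fX = np.linalg.norm(X, axis=1)
gY = np.abs(Y)

print(np.mean(fX * gY), np.mean(fX) * np.mean(gY))  # should agree up to MC error
```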
Proof
The direct implication is obvious. For the converse, if $X$ and $Y$ are uncorrelated, then the covariance matrix of $Z$ is written as
$$\Gamma_Z = \begin{pmatrix} \Gamma_X & 0 \\ 0 & \Gamma_Y \end{pmatrix}$$
But as Z is Gaussian, its characteristic function is written as