Convergence In Distribution Of Components Implies Convergence In Distribution Of Vector?


Introduction

In probability theory, convergence in distribution is a fundamental notion describing the limiting behavior of random variables. When we say that a sequence of random variables $X_n$ converges in distribution to a random variable $X$, we mean that the distribution of $X_n$ approaches the distribution of $X$ as $n$ goes to infinity. In this article, we explore the relationship between convergence in distribution of the components of a vector and convergence in distribution of the vector itself.

Convergence in Distribution

Convergence in distribution, also called weak convergence, is one of the basic modes of convergence in probability theory. It is defined as follows:

  • A sequence of random variables $X_n$ is said to converge in distribution to a random variable $X$ if for every bounded continuous function $f$ we have:

$$\lim_{n \to \infty} \mathbb{E}[f(X_n)] = \mathbb{E}[f(X)]$$

This definition extends to random vectors: we say that the random vectors $X_n$ converge in distribution to $X$ if for every bounded continuous function $f$ on the corresponding Euclidean space we have:

$$\lim_{n \to \infty} \mathbb{E}[f(X_n)] = \mathbb{E}[f(X)]$$
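As a concrete illustration, here is a minimal Monte Carlo sketch of the definition; the particular sequence (a standardized sum of uniforms, which tends to $N(0,1)$ by the central limit theorem), the bounded continuous test function $f(x) = \cos(x)$, and the sample sizes are illustrative choices, not part of the definition. For $X \sim N(0,1)$, $\mathbb{E}[\cos(X)] = e^{-1/2} \approx 0.6065$.

```python
# Monte Carlo check of the definition: X_n is a standardized sum of n
# uniforms, which converges in distribution to X ~ N(0,1), so for the
# bounded continuous f(x) = cos(x) we expect E[cos(X_n)] -> exp(-1/2).
import numpy as np

rng = np.random.default_rng(0)
reps = 100_000  # Monte Carlo replications

for n in (1, 5, 25, 100):
    u = rng.uniform(size=(reps, n))
    x_n = (u.sum(axis=1) - n / 2) / np.sqrt(n / 12)  # standardized uniform sum
    print(f"n={n:3d}  E[cos(X_n)] ~ {np.cos(x_n).mean():.4f}")

print(f"target E[cos(X)] = exp(-1/2) = {np.exp(-0.5):.4f}")
```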

Convergence in Distribution of Components

Now consider two sequences of random variables $X_n$ and $Y_n$ that converge in distribution to random variables $X$ and $Y$, respectively. We write this as:

$$X_n \xrightarrow{d} X$$

$$Y_n \xrightarrow{d} Y$$

In this case, we can ask: does convergence in distribution of $X_n$ and $Y_n$ separately imply convergence in distribution of the vector $(X_n, Y_n)$? In general the answer is no, because marginal convergence says nothing about the joint dependence structure; it does hold under an additional independence assumption, which we state next.

Theorem

Let $X_n \xrightarrow{d} X$ and $Y_n \xrightarrow{d} Y$, where $X_n$ and $Y_n$ are independent for each $n$, and $X$ and $Y$ are independent. Then:

$$(X_n, Y_n) \xrightarrow{d} (X, Y)$$

Proof

To prove this theorem, it suffices, by the Lévy continuity theorem for random vectors, to show that the joint characteristic functions converge pointwise: for all $s, t \in \mathbb{R}$,

$$\lim_{n \to \infty} \mathbb{E}[e^{i(sX_n + tY_n)}] = \mathbb{E}[e^{i(sX + tY)}]$$

Since $X_n$ and $Y_n$ are independent, the joint characteristic function factors:

$$\mathbb{E}[e^{i(sX_n + tY_n)}] = \mathbb{E}[e^{isX_n}] \, \mathbb{E}[e^{itY_n}]$$

Because $X_n \xrightarrow{d} X$ and $Y_n \xrightarrow{d} Y$, the marginal characteristic functions converge pointwise, so

$$\mathbb{E}[e^{i(sX_n + tY_n)}] \to \mathbb{E}[e^{isX}] \, \mathbb{E}[e^{itY}] = \mathbb{E}[e^{i(sX + tY)}]$$

where the last equality uses the independence of $X$ and $Y$. See, e.g., Billingsley (1995) for the continuity theorem.

The independence assumption cannot be dropped. For a counterexample, let $X \sim N(0, 1)$ and set $X_n = X$ and $Y_n = (-1)^n X$. Then $X_n \xrightarrow{d} X$ and $Y_n \xrightarrow{d} X$ (each $Y_n$ is again $N(0, 1)$), but the pair $(X_n, Y_n)$ alternates between the distributions of $(X, X)$ and $(X, -X)$, which are different, so $(X_n, Y_n)$ has no limit in distribution.
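The following sketch simulates this counterexample; the bounded continuous test function $f(x, y) = \cos(x + y)$ and the sample size are arbitrary choices for the demonstration. For $X \sim N(0,1)$, $\mathbb{E}[\cos(2X)] = e^{-2} \approx 0.1353$, while $\cos(X + (-X)) = 1$ identically, so the expectations oscillate and cannot converge.

```python
# Counterexample: X_n = X, Y_n = (-1)^n X. Every marginal is N(0,1),
# but E[cos(X_n + Y_n)] oscillates between exp(-2) (even n) and 1
# (odd n), so the vector (X_n, Y_n) has no limit in distribution.
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)  # one draw of X ~ N(0,1) per replication

for n in (1, 2, 3, 4):
    y_n = ((-1) ** n) * x  # Y_n = (-1)^n X, same N(0,1) law for every n
    print(f"n={n}  E[cos(X_n + Y_n)] ~ {np.cos(x + y_n).mean():.4f}")

print(f"exp(-2) = {np.exp(-2):.4f} (even n);  cos(0) = 1.0000 (odd n)")
```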

Corollary

The theorem above requires independence, but there is a companion result that does not: if $X_n \xrightarrow{d} X$ and $Y_n \xrightarrow{d} c$, where $c$ is a constant, then:

$$(X_n, Y_n) \xrightarrow{d} (X, c)$$

Proof

Convergence in distribution to a constant implies convergence in probability to that constant. Writing $(X_n, Y_n) = (X_n, c) + (0, Y_n - c)$, the first term converges in distribution to $(X, c)$ by the continuous mapping theorem, and the perturbation $(0, Y_n - c)$ tends to zero in probability, so both sequences share the same limit in distribution. This fact is the key ingredient in Slutsky's theorem.
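Here is an illustrative simulation of this result with deliberately dependent components; the exponential sample, the statistics $X_n$ and $Y_n$, and the test function $\cos(xy)$ are hypothetical choices for the demo. Since $(X_n, Y_n) \xrightarrow{d} (X, 1)$ with $X \sim N(0,1)$, we expect $\mathbb{E}[\cos(X_n Y_n)] \to \mathbb{E}[\cos(X)] = e^{-1/2}$.

```python
# X_n and Y_n are built from the SAME data, so they are dependent,
# yet joint convergence holds because Y_n tends to a constant.
import numpy as np

rng = np.random.default_rng(2)
reps = 20_000

for n in (10, 100, 1000):
    w = rng.exponential(size=(reps, n))  # iid Exponential(1)
    m = w.mean(axis=1)
    x_n = np.sqrt(n) * (m - 1.0)  # CLT part: converges to X ~ N(0,1)
    y_n = m                       # LLN part: converges to the constant 1
    print(f"n={n:5d}  E[cos(X_n * Y_n)] ~ {np.cos(x_n * y_n).mean():.4f}")

print(f"target E[cos(X)] = exp(-1/2) = {np.exp(-0.5):.4f}")
```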

Conclusion

In this article, we have shown that convergence in distribution of the components implies convergence in distribution of the vector when the components are independent, or when one of the limits is a constant, and we have seen by counterexample that the implication fails in general. These results are used throughout probability theory and statistics, for example in proving Slutsky's theorem and in studying the joint behavior of estimators.

References

  • Billingsley, P. (1995). Probability and Measure. John Wiley & Sons.
  • Casella, G., & Berger, R. L. (2002). Statistical Inference. Duxbury Press.
  • Shao, J. (2003). Mathematical Statistics. Springer.


Introduction

In the first part of this article, we explored the relationship between the convergence in distribution of components and the convergence in distribution of a vector. We showed that if $X_n \xrightarrow{d} X$ and $Y_n \xrightarrow{d} Y$, with $X_n$ and $Y_n$ independent for each $n$ and $X$ independent of $Y$, then $(X_n, Y_n) \xrightarrow{d} (X, Y)$, and that without independence the conclusion can fail. In this part, we answer some frequently asked questions about the topic.

Q: What is the difference between convergence in distribution and convergence in probability?

A: They are two different modes of convergence. Convergence in distribution concerns only the laws of the random variables: $X_n \xrightarrow{d} X$ means $P(X_n \le t) \to P(X \le t)$ at every continuity point $t$ of the limiting CDF. Convergence in probability compares the random variables themselves on a common probability space: $X_n \xrightarrow{p} X$ means $P(|X_n - X| > \varepsilon) \to 0$ for every $\varepsilon > 0$. Convergence in probability implies convergence in distribution, but not conversely. The sketch below contrasts the two numerically.
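This sketch uses a single sequence that happens to converge in both senses; the sequence $X_n = X + Z/n$ and the tolerance $\varepsilon$ are illustrative choices.

```python
# Contrast the two modes on X_n = X + Z/n, with X, Z independent N(0,1).
# In probability: P(|X_n - X| > eps) -> 0. In distribution: the CDF of
# X_n approaches that of X, e.g. P(X_n <= 0) -> 0.5.
import numpy as np

rng = np.random.default_rng(3)
reps = 200_000
x = rng.standard_normal(reps)   # X ~ N(0,1)
z = rng.standard_normal(reps)   # independent noise
eps = 0.1

for n in (1, 10, 100):
    x_n = x + z / n
    in_prob = (np.abs(x_n - x) > eps).mean()  # P(|X_n - X| > eps)
    in_dist = (x_n <= 0.0).mean()             # P(X_n <= 0), target 0.5
    print(f"n={n:3d}  P(|X_n - X| > {eps}) ~ {in_prob:.4f}  P(X_n <= 0) ~ {in_dist:.4f}")
```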

Q: Can you give an example of convergence in distribution?

A: Yes. Consider the sequence of random variables $X_n$ that takes the value $1$ with probability $1/n$ and the value $0$ with probability $1 - 1/n$. Since $P(X_n = 1) = 1/n \to 0$, the sequence $X_n$ converges in distribution to the constant random variable $X = 0$: the CDF of $X_n$ converges to the step function of the constant $0$ at every continuity point.
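A quick simulation sketch of this example (sample sizes are arbitrary):

```python
# X_n = 1 with probability 1/n, else 0, so P(X_n = 0) -> 1 and the
# limit in distribution is the constant X = 0.
import numpy as np

rng = np.random.default_rng(4)
reps = 200_000

for n in (2, 10, 100, 1000):
    x_n = (rng.uniform(size=reps) < 1.0 / n).astype(float)  # 1 w.p. 1/n
    print(f"n={n:5d}  P(X_n = 0) ~ {(x_n == 0).mean():.4f}  (target 1.0)")
```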

Q: What is the relationship between convergence in distribution and the law of large numbers?

A: The (weak) law of large numbers states that the average of a large number of independent and identically distributed random variables with finite mean converges in probability to that mean. Since convergence in probability implies convergence in distribution, the sample mean also converges in distribution to the constant mean; and because convergence in distribution to a constant implies convergence in probability, the two notions coincide for sample means.
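A minimal simulation sketch of the weak law for Uniform$(0,1)$ averages (the distribution and tolerance are illustrative choices):

```python
# Sample means of iid Uniform(0,1) variables converge in probability
# to the population mean 1/2, so P(|mean - 1/2| > eps) shrinks with n.
import numpy as np

rng = np.random.default_rng(5)
reps = 20_000
eps = 0.05

for n in (10, 100, 1000):
    means = rng.uniform(size=(reps, n)).mean(axis=1)  # sample means
    miss = (np.abs(means - 0.5) > eps).mean()         # P(|mean - 1/2| > eps)
    print(f"n={n:4d}  P(|mean - 0.5| > {eps}) ~ {miss:.4f}")
```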

Q: Can you give an example of a sequence of random variables that converges in distribution but not in probability?

A: Yes, but the example must be chosen with care: the sequence taking the value $n$ with probability $1/n$ and $0$ otherwise actually does converge in probability to $0$, since $P(|X_n| > \varepsilon) = 1/n \to 0$. A correct example: let $X \sim \text{Bernoulli}(1/2)$ and set $X_n = 1 - X$ for every $n$. Each $X_n$ has the same Bernoulli$(1/2)$ distribution as $X$, so $X_n \xrightarrow{d} X$ trivially, but $|X_n - X| = 1$ always, so $X_n$ does not converge to $X$ in probability.
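A simulation sketch of this example (the sample size is arbitrary):

```python
# X ~ Bernoulli(1/2) and X_n = 1 - X: identical laws (so convergence
# in distribution is trivial), but |X_n - X| = 1 with probability 1,
# so there is no convergence in probability.
import numpy as np

rng = np.random.default_rng(6)
x = (rng.uniform(size=200_000) < 0.5).astype(float)  # X ~ Bernoulli(1/2)
x_n = 1.0 - x  # X_n = 1 - X has the same Bernoulli(1/2) law as X

print(f"P(X_n = 1) ~ {x_n.mean():.4f}  vs  P(X = 1) ~ {x.mean():.4f}")
print(f"P(|X_n - X| > 0.5) = {(np.abs(x_n - x) > 0.5).mean():.4f}  (always 1)")
```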

Q: What is the relationship between convergence in distribution and the central limit theorem?

A: The central limit theorem states that the standardized sum of a large number of independent and identically distributed random variables with finite variance converges in distribution to a normal distribution. The CLT is thus itself a statement about convergence in distribution: it is the canonical example of a non-degenerate limit law.
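A minimal simulation sketch of the CLT for Exponential$(1)$ sums (the distribution, sample sizes, and tail threshold $1.96$ are illustrative choices); for $Z \sim N(0,1)$, $P(Z > 1.96) \approx 0.025$:

```python
# Standardized sums of iid Exponential(1) variables should show
# N(0,1)-like tail probabilities as n grows.
import numpy as np

rng = np.random.default_rng(7)
reps = 20_000

for n in (5, 50, 500):
    w = rng.exponential(size=(reps, n))        # iid Exponential(1), mean 1, var 1
    s_n = np.sqrt(n) * (w.mean(axis=1) - 1.0)  # standardized sum
    print(f"n={n:3d}  P(S_n > 1.96) ~ {(s_n > 1.96).mean():.4f}  (target 0.0250)")
```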

Q: Can you give an example of a sequence of random variables that converges in distribution but not in mean square?

A: Yes. Consider the sequence of random variables $X_n$ that takes the value $n$ with probability $1/n$ and the value $0$ with probability $1 - 1/n$. Then $X_n$ converges in probability, and hence in distribution, to $X = 0$, since $P(X_n \neq 0) = 1/n \to 0$; but $\mathbb{E}[(X_n - 0)^2] = n^2 \cdot (1/n) = n \to \infty$, so it does not converge in mean square.
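A simulation sketch confirming both claims in this example (sample sizes are arbitrary):

```python
# X_n = n with probability 1/n, else 0: P(X_n != 0) = 1/n -> 0, while
# the second moment E[X_n^2] = n grows without bound.
import numpy as np

rng = np.random.default_rng(8)
reps = 500_000

for n in (10, 100, 1000):
    x_n = np.where(rng.uniform(size=reps) < 1.0 / n, float(n), 0.0)
    p_nonzero = (x_n != 0).mean()       # P(X_n != 0), target 1/n
    second_moment = np.mean(x_n ** 2)   # E[X_n^2], target n
    print(f"n={n:5d}  P(X_n != 0) ~ {p_nonzero:.4f}  E[X_n^2] ~ {second_moment:.1f}")
```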

Q: What is the relationship between convergence in distribution and the concept of stochastic convergence?

A: Stochastic convergence is an umbrella term for the various modes in which a sequence of random variables can converge: almost surely, in probability, in mean square, and in distribution. Convergence in distribution is the weakest of these standard modes.

Conclusion

In this article, we have answered some frequently asked questions about convergence in distribution of components and its implications for convergence in distribution of a vector. We hope this discussion clarifies when the componentwise implication holds and how convergence in distribution relates to the other standard modes of convergence.
