Abstract
This article introduces a unified approach to estimating the mutual density ratio, defined as the ratio between the joint density function of two random vectors and the product of their marginal density functions. It serves as a fundamental measure for quantifying the relationship between two random vectors. Our method uses a Bregman divergence to construct the objective function and leverages deep neural networks to approximate the logarithm of the mutual density ratio. We establish a non-asymptotic error bound for our estimator, achieving the optimal minimax rate of convergence under a bounded support condition. Additionally, our estimator mitigates the curse of dimensionality when the distribution is supported on a lower-dimensional manifold. We extend our results to overparameterized neural networks and to the case of unbounded support. Applications of our method include conditional probability density estimation, mutual information estimation, and independence testing. Simulation studies and real data examples demonstrate the effectiveness of our approach. Supplementary materials for this article are available online, including a standardized description of the materials available for reproducing the work.
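For reference, the quantity described in the abstract can be written explicitly. The display below is a minimal sketch in standard notation (the symbols $p_{XY}$, $p_X$, $p_Y$, and $r$ are notational assumptions, not necessarily those used in the article); the link to mutual information is the well-known identity underlying the mutual information estimation application mentioned above.

```latex
% Mutual density ratio of random vectors X and Y, with joint density p_{XY}
% and marginal densities p_X and p_Y (notation assumed for illustration):
\[
  r(x, y) \;=\; \frac{p_{XY}(x, y)}{p_X(x)\, p_Y(y)},
  \qquad
  I(X; Y) \;=\; \mathbb{E}_{p_{XY}}\!\bigl[\log r(X, Y)\bigr].
\]
% r(x, y) = 1 for all (x, y) exactly when X and Y are independent,
% which is why the ratio serves as a measure of dependence and why
% estimating log r supports mutual information estimation and
% independence testing.
```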
| Original language | English |
|---|---|
| Pages (from-to) | 1990-2001 |
| Number of pages | 12 |
| Journal | Journal of the American Statistical Association |
| Volume | 120 |
| Issue number | 551 |
| DOIs | |
| Publication status | Published - 2 Jun 2025 |
Keywords
- Bregman divergence
- Conditional probability density
- Deep neural network
- Mutual density ratio
- Mutual information
ASJC Scopus subject areas
- Statistics and Probability
- Statistics, Probability and Uncertainty