PAC Learning Mixtures of Axis-Aligned Gaussians with No Separation Assumption
Abstract
We propose and analyze a new vantage point for the learning of mixtures of Gaussians: namely, the PAC-style model of learning probability distributions introduced by Kearns et al. [13]. Here the task is to construct a hypothesis mixture of Gaussians that is statistically indistinguishable from the actual mixture generating the data; specifically, the KL divergence should be at most $\epsilon$.
In this scenario, we give a $\mathrm{poly}(n/\epsilon)$-time algorithm that learns the class of mixtures of any constant number of axis-aligned Gaussians in $\mathbb{R}^n$. Our algorithm makes no assumptions about the separation between the means of the Gaussians, nor does it have any dependence on the minimum mixing weight. This is in contrast to learning results known in the “clustering” model, where such assumptions are unavoidable.
Our algorithm relies on the method of moments, and on a subalgorithm developed in [8] for a discrete mixture-learning problem.
1 Introduction
In [13] Kearns et al. introduced an elegant and natural model of learning unknown probability distributions. In this framework we are given a class $\mathcal{C}$ of probability distributions over $\mathbb{R}^n$ and access to random data sampled from an unknown distribution $Z$ that belongs to $\mathcal{C}$. The goal is to output a hypothesis distribution $Z'$ which with high confidence is $\epsilon$-close to $Z$ as measured by the Kullback-Leibler (KL) divergence, a standard measure of the distance between probability distributions (see Section 2 for details on this distance measure). The learning algorithm should run in time $\mathrm{poly}(n/\epsilon)$. This model is well motivated by its close analogy to Valiant's classical Probably Approximately Correct (PAC) framework for learning Boolean functions [18].
Several notable results, both positive and negative, have been obtained for learning in the Kearns et al. framework of [13]; see, e.g., [10, 15]. Here we briefly survey some of the positive results that have been obtained for learning various types of mixture distributions. (Recall that given distributions $X_1, \dots, X_k$ and mixing weights $\pi_1, \dots, \pi_k$ that sum to 1, a draw from the corresponding mixture distribution is obtained by first selecting $i$ with probability $\pi_i$ and then making a draw from $X_i$.) Kearns et al. gave an efficient algorithm for learning certain mixtures of Hamming balls; these are product distributions over $\{0,1\}^n$ in which each coordinate mean is either $p$ or $1-p$ for some fixed $p$ over all mixture components. Subsequently Freund and Mansour [11] and independently Cryan et al. [4] gave efficient algorithms for learning a mixture of two arbitrary product distributions over $\{0,1\}^n$. Recently, Feldman et al. [8] gave a polynomial-time algorithm (for any constant $k$) that learns a mixture of $k$ many arbitrary product distributions over the discrete domain $\{0, 1, \dots, b-1\}^n$ for any $b$.
1.1 Results
As described above, research on learning mixture distributions in the PAC-style model of Kearns et al. has focused on distributions over discrete domains. In this paper we consider the natural problem of learning mixtures of Gaussians in the PAC-style framework of [13]. Our main result is the following theorem:
Theorem 1
(Informal version) Fix any constant $k$, and let $Z$ be any unknown mixture of $k$ axis-aligned Gaussians over $\mathbb{R}^n$. There is an algorithm that, given samples from $Z$ and any $\epsilon, \delta > 0$ as inputs, runs in time $\mathrm{poly}(n/\epsilon) \cdot \log(1/\delta)$ and with probability $1-\delta$ outputs a mixture $Z'$ of $k$ axis-aligned Gaussians over $\mathbb{R}^n$ satisfying $\mathrm{KL}(Z \,\|\, Z') \le \epsilon$.
A signal feature of this result is that it requires no assumptions about the Gaussians being “separated” in space. It also has no dependence on the minimum mixing weight. We compare our result with other works on learning mixtures of Gaussians in the next section.
Our proof of Theorem 1 works by extending the basic approach for learning mixtures of product distributions over discrete domains from [8]. The main technical tool introduced in [8] is the WAM (Weights And Means) algorithm; the correctness proof of WAM is based on an intricate error analysis using ideas from the singular value theory of matrices. In this paper, we use this algorithm in a continuous domain to estimate the parameters of the Gaussian mixture. Dealing with this more complex class of distributions requires tackling a whole new set of issues around sampling error that did not exist in the discrete case.
Our results strongly suggest that the techniques introduced in [8] (and extended here) extend to PAC learning mixtures of other classes of product distributions, both discrete and continuous, such as exponential distributions or Poisson distributions. Though we have not explicitly worked out those extensions in this paper, we briefly discuss general conditions under which our techniques are applicable in Section 7.
1.2 Comparison with other frameworks for learning mixtures of Gaussians
There is a vast literature in statistics on modeling with mixture distributions, and on estimating the parameters of unknown such distributions from data. The case of mixtures of Gaussians is by far the most studied case; see, e.g., [14, 17] for surveys. Statistical work on mixtures of Gaussians has mainly focused on finding the distribution parameters (mixing weights, means, and variances) of maximum likelihood, given a set of data. Although one can write down equations whose solutions give these maximum likelihood values, solving the equations appears to be a computationally intractable problem. In particular, the most popular algorithm used for solving the equations, the EM Algorithm of Dempster et al. [7], has no efficiency guarantees and may run slowly or converge only to local optima on some instances.
A change in perspective led to the first provably efficient algorithm for learning: In 1999, Dasgupta [5] suggested learning in the clustering framework. In this scenario, the learner’s goal is to group all the sample points according to which Gaussian in the mixture they came from. This is the strongest possible criterion for success one could demand; when the learner succeeds, it can easily recover accurate approximations of all parameters of the mixture distribution. However, a strong assumption is required to get such a strong outcome: it is clear that the learner cannot possibly succeed unless the Gaussians are guaranteed to be sufficiently “separated” in space. Informally, it must at least be the case that, with high probability, no sample point “looks like” it might have come from a different Gaussian in the mixture other than the one that actually generated it.
Dasgupta gave a polynomial-time algorithm that could cluster a mixture of spherical Gaussians of equal radius. His algorithm required separation on the order of $n^{1/2}$ times the standard deviation. This was improved to $n^{1/4}$ by Dasgupta and Schulman [6], and this in turn was significantly generalized to the case of completely general (i.e., elliptical) Gaussians by Arora and Kannan [2]. Another breakthrough came from Vempala and Wang [19], who showed how the separation could be reduced, in the case of mixtures of $k$ spherical Gaussians (of different radii), to the order of $k^{1/4}$ times the standard deviation, times factors logarithmic in $n$. This result was extended to mixtures of general Gaussians (indeed, log-concave distributions) in works by Kannan et al. [12] and Achlioptas and McSherry [1], with some slightly worse separation requirements. It should also be mentioned that these results all have a running-time dependence that is polynomial in $1/\pi_{\min}$, where $\pi_{\min}$ denotes the minimum mixing weight.
Our work gives another learning perspective that allows us to deal with mixtures of Gaussians that satisfy no separation assumption. In this case clustering is simply not possible; for any data set, there may be many different mixtures of Gaussians under which the data are plausible. This possibility also leads to the seeming intractability of finding the maximum likelihood mixture of Gaussians. Nevertheless, we feel that this case is both interesting and important, and that under these circumstances identifying some mixture of Gaussians which is statistically indistinguishable from the true mixture is a worthy task. This is precisely what the PACstyle learning scenario we work in requires, and what our main algorithm efficiently achieves.
Reminding the reader that they work in significantly different scenarios, we end this section with a comparison between other aspects of our algorithm and algorithms in the clustering model. Our algorithm works for mixtures of axis-aligned Gaussians. This is stronger than the case of spherical Gaussians considered in [5, 6, 19], but weaker than the case of general Gaussians handled in [2, 12, 1]. On the other hand, in Section 7 we discuss the fact that our methods should be readily adaptable to mixtures of a wide variety of discrete and continuous distributions — essentially, any distribution for which the “method of moments” from statistics succeeds. The clustering algorithms discussed have polynomial running-time dependence on $k$, the number of mixture components, whereas our algorithm's running time is polynomial in $n$ only if $k$ is a constant. We note that in [8], strong evidence was given that (for the PAC-style learning problem that we consider) such a dependence is unavoidable, at least in the case of learning mixtures of product distributions on the Boolean cube. Finally, unlike the clustering algorithms mentioned, our algorithm has no running-time dependence on $\pi_{\min}$.
1.3 Overview of the approach and the paper
An important ingredient of our approach is a slight extension of the WAM algorithm, the main technical tool introduced in [8]. The algorithm takes as input a parameter $\epsilon > 0$ and samples from an unknown mixture $Z$ of $k$ product distributions $X_1, \dots, X_k$. The output of the algorithm is a list of candidate descriptions of the mixing weights and coordinate means of the distributions $X_1, \dots, X_k$. Roughly speaking, the guarantee for the algorithm proved in [8] is that with high probability at least one of the candidate descriptions that the algorithm outputs is “good” in the following sense: it is an additively $\epsilon$-accurate approximation to each of the true mixing weights, and to each of the true coordinate means for which the corresponding mixing weight is not too small. We give a precise specification in Section 3.
As described above, when WAM is run on a mixture distribution $Z$ it generates candidate estimates of mixing weights and means. However, to describe a Gaussian we need not only its mean but also its variance. To achieve this we run WAM twice, once on $Z$ and once on what might be called “$Z^2$” — i.e., for the second run, each time a draw $(x^1, \dots, x^n)$ is obtained from $Z$ we convert it to $((x^1)^2, \dots, (x^n)^2)$ and use that instead. It is easy to see that $Z^2$ corresponds to a mixture of the distributions $X_1^2, \dots, X_k^2$, and thus this second run gives us estimates of the mixing weights (again) and also of the coordinate second moments $\mathbf{E}[(X_i^j)^2]$. Having thus run WAM twice, we essentially take the “cross product” of the two output lists to obtain a list of candidate descriptions, each of which specifies mixing weights, means, and second moments of the component Gaussians. In Section 4 we give a detailed description of this process and prove that with high probability at least one of the resulting candidates is a “good” description (in the sense of the preceding paragraph) of the mixing weights, coordinate means, and coordinate variances of the Gaussians $X_1, \dots, X_k$.
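The identity $\mathbf{E}[X^2] = \mu^2 + \sigma^2$ is what makes the second run useful. The squaring trick can be sketched in a one-dimensional toy setting as follows (the helper names here are illustrative, not the paper's actual WAM code):

```python
import random

def draws_from_mixture(n_samples, weights, means, variances):
    """Sample from a 1-D mixture of Gaussians (illustrative helper)."""
    out = []
    for _ in range(n_samples):
        # Pick a component i with probability weights[i], then draw from it.
        i = random.choices(range(len(weights)), weights=weights)[0]
        out.append(random.gauss(means[i], variances[i] ** 0.5))
    return out

random.seed(0)
xs = draws_from_mixture(200_000, [0.5, 0.5], [-1.0, 2.0], [1.0, 4.0])

# The "squared mixture" Z^2: square each draw before handing it to WAM.
xs_sq = [x * x for x in xs]

m1 = sum(xs) / len(xs)        # estimate of E[X]   (here true value is 0.5)
m2 = sum(xs_sq) / len(xs_sq)  # estimate of E[X^2] (here true value is 5.0)
```

The second run thus sees samples from a mixture whose coordinate means are exactly the second moments of the original components; once a candidate mean and a candidate second moment are paired up, a candidate variance follows from $\sigma^2 = \mathbf{E}[X^2] - \mu^2$.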
To actually PAC learn the distribution , we must find this good description among the candidates in the list. A natural idea is to apply some sort of maximum likelihood procedure. However, to make this work, we need to guarantee that the list contains a distribution that is close to the target in the sense of KL divergence. Thus, in Section 5, we show how to convert each “parametric” candidate description into a mixture of Gaussians such that any additively accurate description indeed becomes a mixture distribution with close KL divergence to the unknown target. (This procedure also guarantees that the candidate distributions satisfy some other technical conditions that are needed by the maximum likelihood procedure.) Finally, in Section 6 we put the pieces together and show how a maximum likelihood procedure can be used to identify a hypothesis mixture of Gaussians that has small KL divergence relative to the target mixture.
Note. This is the full version of [9] which contains all proofs omitted in that conference version because of space limitations.
2 Preliminaries
The PAC learning framework for probability distributions. We work in the Probably Approximately Correct model of learning probability distributions which was proposed by Kearns et al. [13]. In this framework the learning algorithm is given access to samples drawn from the target distribution $Z$ to be learned, and the learning algorithm must (with high probability) output an accurate approximation $Z'$ of the target distribution $Z$. Following [13], we use the Kullback-Leibler (KL) divergence (also known as the relative entropy) as our notion of distance. The KL divergence between distributions $P$ and $Q$ is
$$\mathrm{KL}(P \,\|\, Q) = \int P(x) \ln \frac{P(x)}{Q(x)}\, dx,$$
where here we have identified the distributions with their pdfs. The reader is reminded that KL divergence is not symmetric and is thus not a metric. KL divergence is a stringent measure of the distance between probability distributions. In particular, it holds [3] that $d_{TV}(P, Q) \le \sqrt{\mathrm{KL}(P \,\|\, Q)/2}$, where $d_{TV}$ denotes total variation distance; hence if the KL divergence is small then so is the total variation distance.
We make the following formal definition:
Definition 1
Let $\mathcal{C}$ be a class of probability distributions over $\mathbb{R}^n$. An efficient (proper) learning algorithm for $\mathcal{C}$ is an algorithm which, given $\epsilon, \delta > 0$ and samples drawn from any distribution $Z \in \mathcal{C}$, runs in time $\mathrm{poly}(n/\epsilon, \log(1/\delta))$ and, with probability at least $1-\delta$, outputs a representation of a distribution $Z' \in \mathcal{C}$ such that $\mathrm{KL}(Z \,\|\, Z') \le \epsilon$.
Mixtures of axisaligned Gaussians. Here we recall some basic definitions and establish useful notational conventions for later.
A Gaussian distribution over $\mathbb{R}$ with mean $\mu$ and variance $\sigma^2$ has probability density function $f(x) = \frac{1}{\sigma\sqrt{2\pi}} \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$. An axis-aligned Gaussian over $\mathbb{R}^n$ is a product distribution over $n$ univariate Gaussians.
If we expect to learn a mixture of Gaussians, we need each Gaussian to have reasonable parameters in each of its coordinates. Indeed, consider just the problem of learning the parameters of a single one-dimensional Gaussian: if the variance is enormous, we could not expect to estimate the mean efficiently; and if the variance were extremely close to 0, any slight error in the hypothesis would lead to a severe penalty in KL divergence. These issues motivate the following definition:
Definition 2
We say that $X$ is an $n$-dimensional $(\mu_{\max}, \sigma^2_{\min}, \sigma^2_{\max})$-bounded Gaussian if $X$ is an $n$-dimensional axis-aligned Gaussian with the property that each of its one-dimensional coordinate Gaussians has mean $\mu \in [-\mu_{\max}, \mu_{\max}]$ and variance $\sigma^2 \in [\sigma^2_{\min}, \sigma^2_{\max}]$.
Notational convention: Throughout the rest of the paper all Gaussians we consider are $(\mu_{\max}, \sigma^2_{\min}, \sigma^2_{\max})$-bounded, where for notational convenience we assume that the numbers $\mu_{\max}, \sigma^2_{\max}$ are at least 1 and that the number $\sigma^2_{\min}$ is at most 1. We will denote by $L$ the quantity $\log(\mu_{\max}\,\sigma^2_{\max}/\sigma^2_{\min})$, which in some sense measures the bit-complexity of the problem. Given distributions $X_1, \dots, X_k$ over $\mathbb{R}^n$, we write $\mu_i^j$ to denote $\mathbf{E}[X_i^j]$, the $j$th coordinate mean of the $i$th component distribution, and we write $(\sigma_i^j)^2$ to denote $\mathbf{Var}[X_i^j]$, the variance in coordinate $j$ of the $i$th distribution.
A mixture $Z = \pi_1 X_1 + \cdots + \pi_k X_k$ of $k$ axis-aligned Gaussians is completely specified by the parameters $\{\pi_i\}$, $\{\mu_i^j\}$, and $\{(\sigma_i^j)^2\}$. Our learning algorithm for Gaussians will have a running time that depends polynomially on $L$; thus the algorithm is not strongly polynomial.
3 Listing candidate weights and means with Wam
We first recall the basic features of the WAM algorithm from [8] and then explain the extension we require. The algorithm described in [8] takes as input a parameter $\epsilon > 0$ and samples from an unknown mixture $Z$ of $k$ distributions $X_1, \dots, X_k$, where each $X_i$ is assumed to be a product distribution over the bounded domain $[-1, 1]^n$. The goal of WAM is to output accurate estimates for the mixing weights $\pi_i$ and coordinate means $\mu_i^j$; what the algorithm actually outputs is a list of candidate “parametric descriptions” of the means and mixing weights, where each candidate description is of the form $(\{\pi'_i\}, \{\mu'^j_i\})$.
We now explain the notion of a “good” estimate of parameters from Section 1.3 in more detail. As motivation, note that if a mixing weight $\pi_i$ is very low then the WAM algorithm (or indeed any algorithm that only draws a limited number of samples from $Z$) may not receive any samples from $X_i$, and thus we would not expect WAM to construct an accurate estimate for the coordinate means $\mu_i^1, \dots, \mu_i^n$. We thus have the following definition from [8]:
Definition 3
A candidate $(\{\pi'_i\}, \{\mu'^j_i\})$ is said to be parametrically $\epsilon$-accurate if:

(a) $|\pi_i - \pi'_i| \le \epsilon$ for all $i = 1, \dots, k$;

(b) $|\mu_i^j - \mu'^j_i| \le \epsilon$ for all $i \in [k]$ and $j \in [n]$ such that $\pi_i \ge \epsilon$.
Very roughly speaking, the WAM algorithm in [8] works by exhaustively “guessing” (to a certain prescribed granularity that depends on $\epsilon$) values for the mixing weights and for a small number of the coordinate means. Given a guess, the algorithm tries to approximately solve for the remaining coordinate means using the guessed values and the sample data; in the course of doing this the algorithm uses estimates of the expectations $\mathbf{E}[Z^j Z^{j'}]$ that are obtained from the sample data. From each guess the algorithm thus obtains one of the candidates in the list that it ultimately outputs.
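As a toy illustration of the exhaustive-guessing idea, one can enumerate all mixing-weight vectors on a grid of granularity $\epsilon$ (the actual WAM grid also guesses selected coordinate means, and its granularity is derived from $\epsilon$ more carefully):

```python
from itertools import product

def candidate_weight_vectors(k, eps):
    """All k-tuples from the grid {0, eps, 2*eps, ..., 1} whose entries
    sum to (approximately) 1 -- one exhaustive 'guess' per tuple."""
    steps = int(round(1 / eps))
    grid = [i * eps for i in range(steps + 1)]
    return [w for w in product(grid, repeat=k)
            if abs(sum(w) - 1.0) < eps / 2]

# With k = 2 and granularity 0.25 there are 5 feasible guesses:
# (0,1), (0.25,0.75), (0.5,0.5), (0.75,0.25), (1,0).
cands = candidate_weight_vectors(2, 0.25)
```

The number of guesses is exponential in $k$ but polynomial in $1/\epsilon$ for constant $k$, which is the source of the $n^{\mathrm{poly}(k)}$-style running time.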
The assumption in [8] that each distribution in the mixture is over $[-1, 1]^n$ has two nice consequences: each coordinate mean need only be guessed within a bounded domain, and estimating $\mathbf{E}[Z^j Z^{j'}]$ is easy for a mixture of such distributions. Inspection of the proof of correctness of the WAM algorithm shows that these two conditions are all that is really required. We thus introduce the following:
Definition 4
Let $X$ be a distribution over $\mathbb{R}$. We say that $X$ is samplable if there is an algorithm which, given $\epsilon, \delta > 0$ and access to draws from $X$, runs for $\mathrm{poly}(1/\epsilon, \log(1/\delta))$ steps and outputs (with probability at least $1-\delta$ over the draws from $X$) a quantity $\hat{E}$ satisfying $|\hat{E} - \mathbf{E}[X]| \le \epsilon$.
With this definition in hand an obvious (slight) generalization of WAM, which we continue to denote WAM, suggests itself. The main result about WAM that we need is the following (the proof is essentially identical to the proof in [8], so we omit it):
Theorem 2
Let $Z$ be a mixture of $k$ product distributions $X_1, \dots, X_k$ with mixing weights $\pi_1, \dots, \pi_k$, where each coordinate mean satisfies $|\mu_i^j| \le M$ and the random variable $Z^j Z^{j'}$ is samplable for all $j \ne j'$. Given $\epsilon$ and any $\delta > 0$, WAM runs in time $\mathrm{poly}\big((Mn/\epsilon)^{k^3}, \log(1/\delta)\big)$ and outputs a list of $\mathrm{poly}\big((Mn/\epsilon)^{k^3}\big)$ many candidate descriptions, at least one of which (with probability at least $1-\delta$) is parametrically $\epsilon$-accurate.
4 Listing candidate weights, means, and variances
Throughout the rest of the paper we assume that $Z = \pi_1 X_1 + \cdots + \pi_k X_k$ is a mixture of $k$ independent $n$-dimensional bounded Gaussians $X_1, \dots, X_k$, as discussed in Section 2. Recall also the notation from that section.
As described in Section 1.3, we will run WAM twice, once on the original mixture $Z$ of Gaussians and once on the squared mixture $Z^2$. In order to do this, we must show that both $Z$ and $Z^2$ satisfy the conditions of Theorem 2. The bound on coordinate means is satisfied by assumption for $Z$, and for $Z^2$ we have that each coordinate mean $\mathbf{E}[(X_i^j)^2] = (\mu_i^j)^2 + (\sigma_i^j)^2$ is at most $\mu_{\max}^2 + \sigma^2_{\max}$. It remains to verify the required samplability condition on products of two coordinates for both $Z$ and $Z^2$; i.e., we must show both that the random variables $X^j X^{j'}$ are samplable and that the random variables $(X^j)^2 (X^{j'})^2$ are samplable. We do this in the following proposition, whose straightforward but technical proof appears in Appendix B:
Proposition 1
Suppose $X$ is a mixture of $k$ two-dimensional bounded Gaussians. Then both the random variable $X^1 X^2$ and the random variable $(X^1)^2 (X^2)^2$ are samplable.
The proof of the following theorem explains precisely how we can run WAM twice and how we can combine the two resulting lists (one containing candidate descriptions consisting of mixing weights and coordinate means, the other containing candidate descriptions consisting of mixing weights and coordinate second moments) to obtain a single list of candidate descriptions consisting of mixing weights, coordinate means, and coordinate variances.
Theorem 3
Let $Z$ be a mixture of $k$ axis-aligned Gaussians over $\mathbb{R}^n$, described by parameters $\{\pi_i\}, \{\mu_i^j\}, \{(\sigma_i^j)^2\}$. There is an algorithm with the following property: For any $\epsilon, \delta > 0$, given samples from $Z$, the algorithm runs in time $\mathrm{poly}\big((nL/\epsilon)^{k^3}, \log(1/\delta)\big)$ and with probability $1-\delta$ outputs a list of $\mathrm{poly}\big((nL/\epsilon)^{k^3}\big)$ many candidates $(\{\pi'_i\}, \{\mu'^j_i\}, \{(\sigma'^j_i)^2\})$ such that for at least one candidate in the list, the following holds:

(a) $|\pi_i - \pi'_i| \le \epsilon$ for all $i$; and

(b) $|\mu_i^j - \mu'^j_i| \le \epsilon$ and $|(\sigma_i^j)^2 - (\sigma'^j_i)^2| \le \epsilon$ for all $i, j$ such that $\pi_i \ge \epsilon$.
Proof: First run the algorithm WAM on the random variable $Z$, taking the accuracy parameter “$\epsilon$” in WAM to be a suitably small quantity $\epsilon_1$ (polynomially related to $\epsilon$ and $1/\mu_{\max}$), taking “$\delta$” to be $\delta/2$, and taking the mean bound “$M$” to be $\mu_{\max}$. By Proposition 1 and Theorem 2, this takes at most the claimed running time. WAM outputs a list List1 of candidate descriptions $(\{\pi'_i\}, \{\mu'^j_i\})$ for the mixing weights and expectations, which with probability at least $1-\delta/2$ contains at least one candidate description which is parametrically $\epsilon_1$-accurate.
Now run the algorithm WAM again on the squared random variable $Z^2$, with the accuracy parameter “$\epsilon$” taken to be a suitably small quantity $\epsilon_2$, “$\delta$” $= \delta/2$, and mean bound “$M$” $= \mu_{\max}^2 + \sigma^2_{\max}$. By Proposition 1, this again takes at most the claimed running time. This time WAM outputs a list List2 of candidates $(\{\pi''_i\}, \{m'^j_i\})$ for the mixing weights (again) and second moments, which with probability at least $1-\delta/2$ has a “good” entry which satisfies:

(c) $|\pi_i - \pi''_i| \le \epsilon_2$ for all $i$; and

(d) $|\mathbf{E}[(X_i^j)^2] - m'^j_i| \le \epsilon_2$ for all $i, j$ such that $\pi_i \ge \epsilon_2$.
We now form the “cross product” of the two lists. (Again, this can be done in the claimed running time.) Specifically, for each pair consisting of a candidate $(\{\pi'_i\}, \{\mu'^j_i\})$ in List1 and a candidate $(\{\pi''_i\}, \{m'^j_i\})$ in List2, we form a new candidate consisting of mixing weights, means, and variances, namely $(\{\pi'_i\}, \{\mu'^j_i\}, \{(\sigma'^j_i)^2\})$, where $(\sigma'^j_i)^2 = m'^j_i - (\mu'^j_i)^2$. (Note that we simply discard the second run's mixing weights $\{\pi''_i\}$.)
When the “good” candidate from List1 is matched with the “good” candidate from List2, the resulting candidate's mixing weights and means satisfy the desired bounds. For the variances, since $(\sigma_i^j)^2 = \mathbf{E}[(X_i^j)^2] - (\mu_i^j)^2$, the triangle inequality gives that $|(\sigma_i^j)^2 - (\sigma'^j_i)^2|$ is at most
$$|\mathbf{E}[(X_i^j)^2] - m'^j_i| + |(\mu'^j_i)^2 - (\mu_i^j)^2| \le \epsilon_2 + 2\mu_{\max}\epsilon_1 + \epsilon_1^2,$$
which is at most $\epsilon$ for suitable choices of the accuracy parameters $\epsilon_1, \epsilon_2$ of the two WAM runs. This proves the theorem.
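The cross-product step can be sketched as follows (the list format and helper name are hypothetical; each candidate here is a pair of per-component weights and per-coordinate values):

```python
def cross_product(list1, list2):
    """Combine each (weights, means) candidate from the first WAM run with
    each (weights, second_moments) candidate from the second run, using
    Var[X] = E[X^2] - E[X]^2; the second run's weights are discarded."""
    combined = []
    for weights, means in list1:
        for _weights2, second_moments in list2:   # _weights2 is discarded
            variances = [m2 - mu * mu
                         for mu, m2 in zip(means, second_moments)]
            combined.append((weights, means, variances))
    return combined

# One candidate per list: a single component with mean 2 and E[X^2] = 5,
# so the derived variance is 5 - 2^2 = 1.
out = cross_product([([1.0], [2.0])], [([1.0], [5.0])])
```

Note that the output list has size |List1| x |List2|, which is why the theorem's candidate count is the product of the two runs' list sizes and still of the same polynomial order.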
5 From parametric estimates to bona fide distributions
At this point we have a list of candidate “parametric” descriptions of mixtures of Gaussians, at least one of which is parametrically accurate in the sense of Theorem 3. In Section 5.1 we describe an efficient way to convert any parametric description into a true mixture of Gaussians such that:

any parametrically accurate description becomes a distribution with close KL divergence to the target distribution; and

every mixture distribution that results from the conversion has a pdf that satisfies certain upper and lower bounds (that will be required for the maximum likelihood procedure).
The conversion procedure is conceptually straightforward — it essentially just truncates any extreme parameters to put them in a “reasonable” range — but the details establishing correctness are fairly technical. By applying this conversion to each of the parametric descriptions in our list from Section 4, we obtain a list of mixture distribution hypotheses all of which have bounded pdfs and at least one of which is close to the target in KL divergence (see Section 5.2). With such a list in hand, we will be able to use maximum likelihood (in Section 6) to identify a single hypothesis which is close in KL divergence.
5.1 The conversion procedure
In this section we prove:
Theorem 4
There is a simple efficient procedure which takes values $\{\hat\pi_i\}, \{\hat\mu_i^j\}, \{(\hat\sigma_i^j)^2\}$ and a value $\epsilon > 0$ as inputs, and outputs a true mixture $Z'$ of $k$ many $n$-dimensional bounded Gaussians with mixing weights $\{\pi'_i\}$ satisfying

(a) $\pi'_1 + \cdots + \pi'_k = 1$, with each $\pi'_i$ close to $\hat\pi_i$; and

(b) explicit upper and lower bounds on the pdf of $Z'$ at every point of a bounded box,

where the bounds in (b) depend only on $n$, $k$, $\mu_{\max}$, $\sigma^2_{\min}$, and $\sigma^2_{\max}$.
Furthermore, suppose $Z$ is a mixture of $k$ Gaussians with mixing weights $\{\pi_i\}$, means $\{\mu_i^j\}$, and variances $\{(\sigma_i^j)^2\}$, and that the following are satisfied:

(c) for all $i$ we have $|\pi_i - \hat\pi_i| \le \epsilon$; and

(d) for all $i, j$ such that $\pi_i \ge \epsilon$ we have $|\mu_i^j - \hat\mu_i^j| \le \epsilon$ and $|(\sigma_i^j)^2 - (\hat\sigma_i^j)^2| \le \epsilon$.

Then $Z'$ will satisfy $\mathrm{KL}(Z \,\|\, Z') \le \epsilon'$, where $\epsilon'$ is a quantity going to 0 polynomially in $\epsilon$, at a rate depending on $n$, $k$, and the boundedness parameters.
Proof: We construct a mixture $Z'$ of product distributions by defining new mixing weights $\{\pi'_i\}$, expectations $\{\mu'^j_i\}$, and variances $\{(\sigma'^j_i)^2\}$. The procedure is defined as follows:

1. For all $i, j$, set $\mu'^j_i$ to be the truncation of $\hat\mu_i^j$ into the interval $[-\mu_{\max}, \mu_{\max}]$, and set $(\sigma'^j_i)^2$ to be the truncation of $(\hat\sigma_i^j)^2$ into the interval $[\sigma^2_{\min}, \sigma^2_{\max}]$.

2. For all $i$, let $\tilde\pi_i$ be the truncation of $\hat\pi_i$ into the interval $[\epsilon, 1]$. Let $c = \tilde\pi_1 + \cdots + \tilde\pi_k$, and take $\pi'_i = \tilde\pi_i / c$. (This is just a normalization so the mixing weights sum to precisely 1.)
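The shape of this clipping-and-renormalizing procedure can be sketched as follows (the exact truncation thresholds and weight floor used in the paper's analysis are assumptions here, taken from the description above):

```python
def convert(weights, means, variances, mu_max, var_min, var_max, eps):
    """Clip every estimated parameter into the 'bounded Gaussian' range,
    floor tiny mixing weights at eps, then renormalize so weights sum to 1."""
    clip = lambda x, lo, hi: min(max(x, lo), hi)
    means_t = [clip(m, -mu_max, mu_max) for m in means]
    vars_t = [clip(v, var_min, var_max) for v in variances]
    w_t = [clip(w, eps, 1.0) for w in weights]
    total = sum(w_t)
    w_t = [w / total for w in w_t]   # normalization: weights sum to 1
    return w_t, means_t, vars_t

# A candidate with an out-of-range mean (12), a degenerate variance (1e-9),
# and a negative "weight" (-0.1) is forced back into the legal region.
w, m, v = convert([0.7, -0.1], [12.0, 0.5], [1e-9, 2.0],
                  mu_max=10.0, var_min=1e-3, var_max=4.0, eps=0.01)
```

Clipping can only decrease the error of parameters whose true values lie inside the legal range, which is why an additively accurate candidate stays accurate after conversion.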
It is clear from this construction that condition (a) is satisfied. For (b), the bounds on the $\mu'^j_i$ and $(\sigma'^j_i)^2$ are easily seen to imply the claimed upper bound on the pdf of each component at each point of the box, and hence the same upper bound holds for the mixture $Z'$, its pdf being a convex combination of the component pdfs. Similarly, using the fact that the $\pi'_i$ sum to 1 together with the bounds on the $\mu'^j_i$ and $(\sigma'^j_i)^2$, we obtain the claimed lower bound.
We now prove the second half of the theorem; so suppose that conditions (c) and (d) hold. Our goal is to apply the following proposition (proved in [8]) to bound $\mathrm{KL}(Z \,\|\, Z')$:
Proposition 2
Let $\pi_1, \dots, \pi_k$ and $\pi'_1, \dots, \pi'_k$ be mixing weights satisfying $\sum_i \pi_i = \sum_i \pi'_i = 1$. Let $X_1, \dots, X_k$ and $X'_1, \dots, X'_k$ be distributions. Suppose that:

(i) $|\pi_i - \pi'_i| \le \alpha$ for all $i$;

(ii) $\pi'_i \ge \beta$ for all $i$;

(iii) $\mathrm{KL}(X_i \,\|\, X'_i) \le \gamma$ for all $i$ such that $\pi_i \ge \tau$;

(iv) $\mathrm{KL}(X_i \,\|\, X'_i) \le \Gamma$ for all $i$.

Then, letting $Z$ denote the $\pi$-mixture of the $X_i$'s and $Z'$ the $\pi'$-mixture of the $X'_i$'s, we obtain an upper bound on $\mathrm{KL}(Z \,\|\, Z')$ in terms of $\alpha$, $\beta$, $\gamma$, $\Gamma$, and $\tau$.
More precisely, our goal is to apply this proposition with the roles of the $X_i$'s, $X'_i$'s, $\pi_i$'s, and $\pi'_i$'s played by the component Gaussians and mixing weights of $Z$ and $Z'$. To satisfy the conditions of the proposition, we must (1) upper bound $|\pi_i - \pi'_i|$ for all $i$; (2) lower bound $\pi'_i$ for all $i$; (3) upper bound $\mathrm{KL}(X_i \,\|\, X'_i)$ for all $i$ such that $\pi_i \ge \epsilon$; and (4) upper bound $\mathrm{KL}(X_i \,\|\, X'_i)$ for all $i$. We now do this.
(1) Upper bounding $|\pi_i - \pi'_i|$. A straightforward argument given in [8] shows that, assuming condition (c) holds, the truncation and normalization steps increase the error in each mixing weight by at most a factor polynomial in $k$.
(2) Lower bounding $\pi'_i$. In [8] it is also shown that the normalization step preserves a lower bound on the mixing weights; with the weight floor used in the construction, each $\pi'_i$ is at least $\epsilon/k$, since each truncated weight is at least $\epsilon$ while the normalizing constant is at most $k$.
(3) Upper bounding $\mathrm{KL}(X_i \,\|\, X'_i)$ for all $i$ such that $\pi_i \ge \epsilon$. Fix an $i$ such that $\pi_i \ge \epsilon$ and fix any $j \in [n]$. Consider the particular values $\mu = \mu_i^j$, $\mu' = \mu'^j_i$, $\sigma^2 = (\sigma_i^j)^2$, and $\sigma'^2 = (\sigma'^j_i)^2$, so by condition (d) and the truncation step we have $|\mu - \mu'| \le \epsilon$ and $|\sigma^2 - \sigma'^2| \le \epsilon$. Since $Z$ is a mixture of bounded Gaussians, by the definition of the truncation we have that $\mu' \in [-\mu_{\max}, \mu_{\max}]$, and likewise $\sigma'^2 \in [\sigma^2_{\min}, \sigma^2_{\max}]$. Let $G$ and $G'$ be the one-dimensional Gaussians with means $\mu$ and $\mu'$ and variances $\sigma^2$ and $\sigma'^2$ respectively. By Corollary 4, we obtain an upper bound on $\mathrm{KL}(G \,\|\, G')$ that is polynomial in $\epsilon$.
Each $X_i$ is the product of $n$ such one-dimensional Gaussians. Since KL divergence is additive for product distributions (see Proposition 5), we have, for each $i$ such that $\pi_i \ge \epsilon$, that $\mathrm{KL}(X_i \,\|\, X'_i)$ is at most $n$ times the single-coordinate bound just obtained.
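For reference, the standard closed-form KL divergence between one-dimensional Gaussians (presumably the content of the cited Corollary 4) and the additivity fact used here read:

```latex
\[
\mathrm{KL}\bigl(N(\mu,\sigma^2)\,\|\,N(\mu',\sigma'^2)\bigr)
  = \ln\frac{\sigma'}{\sigma}
    + \frac{\sigma^2 + (\mu-\mu')^2}{2\sigma'^2}
    - \frac{1}{2},
\]
\[
\mathrm{KL}\Bigl(\textstyle\prod_{j=1}^{n} P_j \,\Big\|\, \prod_{j=1}^{n} Q_j\Bigr)
  = \sum_{j=1}^{n} \mathrm{KL}(P_j \,\|\, Q_j).
\]
```

From the closed form one sees directly why the boundedness assumptions matter: the divergence blows up as $\sigma'^2 \to 0$ and grows with $(\mu - \mu')^2/\sigma'^2$, so both a variance floor and a mean bound are needed for a per-coordinate bound polynomial in $\epsilon$.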
5.2 Getting a list of distributions one of which is KLclose to the target
In this section we show that combining the conversion procedure from the previous subsection with the results of Section 4 lets us obtain the following:
Theorem 5
Let $Z$ be any unknown mixture of $k$ axis-aligned Gaussians over $\mathbb{R}^n$. There is an algorithm with the following property: for any $\epsilon, \delta > 0$, given samples from $Z$, the algorithm runs in time $\mathrm{poly}\big((nL/\epsilon)^{k^3}, \log(1/\delta)\big)$ and with probability $1-\delta$ outputs a list of $\mathrm{poly}\big((nL/\epsilon)^{k^3}\big)$ many mixtures of Gaussians with the following properties:

(a) Every distribution $Z'$ in the list has a pdf obeying explicit upper and lower bounds at every point $x$ in a bounded box $[-R, R]^n$.

(b) Some distribution $Z'$ in the list satisfies $\mathrm{KL}(Z \,\|\, Z') \le \epsilon$.
Note that Theorem 5 guarantees that $Z'$ has bounded mass only on the box $[-R, R]^n$, whereas the support of $Z$ extends beyond this box. This issue is addressed in the proof of Theorem 7, where we put together Theorem 5 and the maximum likelihood procedure.
Proof of Theorem 5: We will use a specialization of Theorem 3 in which we have different accuracy parameters for the different roles that $\epsilon$ plays:

Theorem 3′ Let $Z$ be a mixture of $k$ axis-aligned Gaussians over $\mathbb{R}^n$, described by parameters $\{\pi_i\}, \{\mu_i^j\}, \{(\sigma_i^j)^2\}$. There is an algorithm with the following property: for any $\epsilon_w, \epsilon_m, \delta > 0$, given samples from $Z$, with probability $1-\delta$ it outputs a list of candidates $(\{\pi'_i\}, \{\mu'^j_i\}, \{(\sigma'^j_i)^2\})$ such that for at least one candidate in the list, the following holds:

(a) $|\pi_i - \pi'_i| \le \epsilon_w$ for all $i$; and

(b) $|\mu_i^j - \mu'^j_i| \le \epsilon_m$ and $|(\sigma_i^j)^2 - (\sigma'^j_i)^2| \le \epsilon_m$ for all $i, j$ such that $\pi_i \ge \epsilon_w$.

The algorithm runs in time polynomial in $(nL/\epsilon_w\epsilon_m)^{k^3}$ and $\log(1/\delta)$.
Let $\epsilon, \delta > 0$ be given. We run the algorithm of the specialized version of Theorem 3 above, with accuracy parameters $\epsilon_w$ and $\epsilon_m$ chosen polynomially small in $\epsilon$ and in the boundedness parameters, and with confidence parameter $\delta$. With these parameters the algorithm runs in time $\mathrm{poly}\big((nL/\epsilon)^{k^3}, \log(1/\delta)\big)$. We get as output a list of candidate parameter settings with the guarantee that with probability $1-\delta$ at least one of the settings $(\{\hat\pi_i\}, \{\hat\mu_i^j\}, \{(\hat\sigma_i^j)^2\})$ satisfies

(a) $|\pi_i - \hat\pi_i| \le \epsilon_w$ for all $i$, and

(b) $|\mu_i^j - \hat\mu_i^j| \le \epsilon_m$ and $|(\sigma_i^j)^2 - (\hat\sigma_i^j)^2| \le \epsilon_m$ for all $i, j$ such that $\pi_i \ge \epsilon_w$.
We now pass each of these candidate parameter settings through the conversion procedure of Theorem 4. (Note that the accuracy guarantees above are at least as strong as required by Theorem 4.) By Theorem 4, all the resulting distributions satisfy the required pdf bounds at every point of the box. It is easy to check that under our parameter settings, each of the three component terms of the KL bound of Theorem 4 is at most $\epsilon/3$. Thus the bound is at most $\epsilon$, so at least one of the resulting distributions $Z'$ satisfies $\mathrm{KL}(Z \,\|\, Z') \le \epsilon$.
6 Putting it all together
6.1 Identifying a good distribution using maximum likelihood
Theorem 5 gives us a list of distributions at least one of which is close to the target distribution we are trying to learn. Now we must identify some distribution in the list which is close to the target. We use a natural maximum likelihood algorithm described in [8] to help us accomplish this:
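The selection step itself is conceptually simple: evaluate the log-likelihood of a fresh sample under each candidate and keep the best. A one-dimensional toy sketch (hypothetical helper names; the actual procedure of [8] is more careful about sample sizes and the pdf bounds above):

```python
import math

def gaussian_logpdf(x, mu, var):
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

def mixture_logpdf(x, weights, means, variances):
    # log of sum_i w_i * N(x; mu_i, var_i), computed directly (1-D case)
    return math.log(sum(w * math.exp(gaussian_logpdf(x, m, v))
                        for w, m, v in zip(weights, means, variances)))

def pick_max_likelihood(candidates, sample):
    """Return the candidate (weights, means, variances) maximizing the
    log-likelihood of the held-out sample."""
    return max(candidates,
               key=lambda c: sum(mixture_logpdf(x, *c) for x in sample))

# Two 1-D candidates; the sample sits near 0, so the first should win.
cands = [([1.0], [0.0], [1.0]), ([1.0], [5.0], [1.0])]
best = pick_max_likelihood(cands, sample=[0.1, -0.2, 0.3])
```

The pdf upper and lower bounds from Theorem 5 are what make this comparison sound: they ensure that empirical log-likelihoods concentrate, so the empirical winner is KL-close to the true distribution whenever some candidate in the list is.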
Theorem 6
[8] Let , , be such that Let