The topic model based on latent Dirichlet allocation (LDA) relies on a prior over the topic proportions of multinomial words. The words in a document are modeled as a random mixture over latent topics whose proportions are drawn from a single Dirichlet prior. However, a single Dirichlet distribution may not sufficiently characterize the variation of topic proportions estimated from heterogeneous documents. To address this concern, we present a Dirichlet mixture allocation (DMA) model, which learns latent topics and their proportions for topic and document clustering using a prior based on a Dirichlet mixture model. Multiple Dirichlets pave the way to capturing the structure of latent variables when learning representations from real-world documents covering a variety of topics. This paper builds a new latent variable model and develops a variational Bayesian inference procedure to learn the model parameters, consisting of mixture weights, Dirichlet parameters, and word multinomials. Experiments on document representation show the merit of the proposed structural learning as the number of Dirichlets in the DMA topic model increases.
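
The generative process implied by the abstract can be sketched as follows: pick a Dirichlet component by its mixture weight, draw topic proportions from that component, then draw a topic and a word for each position. This is a minimal illustrative sketch, not the authors' implementation; all dimensions (`M` components, `K` topics, `V` vocabulary words) and parameter values are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical DMA configuration (all values illustrative):
# M Dirichlet mixture components, K topics, V vocabulary words.
M, K, V = 3, 4, 10
weights = rng.dirichlet(np.ones(M))         # mixture weights over the Dirichlets
alphas = rng.gamma(2.0, 1.0, size=(M, K))   # Dirichlet parameters, one row per component
beta = rng.dirichlet(np.ones(V), size=K)    # word multinomials, one row per topic

def generate_document(n_words):
    """Sample one document under the DMA generative process."""
    m = rng.choice(M, p=weights)             # choose a Dirichlet component
    theta = rng.dirichlet(alphas[m])         # topic proportions from that Dirichlet
    z = rng.choice(K, size=n_words, p=theta) # latent topic for each word position
    w = np.array([rng.choice(V, p=beta[k]) for k in z])  # word from the topic's multinomial
    return m, theta, w

m, theta, words = generate_document(20)
```

Setting `M = 1` recovers the standard LDA prior; increasing `M` lets the prior place probability mass on several distinct regions of the topic simplex, which is the structural flexibility the DMA model exploits.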