This page describes several projects on learning the structure and parameters of Probabilistic Graphical Models (PGMs). We focus on Conditional Random Fields (CRFs), which model conditional probability distributions P(Y|X) and thus generalize Markov Random Fields (MRFs).
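To make the conditional form concrete, here is a minimal sketch (a toy example, not code from SILL; the feature parameterization and data are illustrative assumptions) of a linear-chain CRF scored by brute-force enumeration. The key point is that the normalizer Z(x) sums over label sequences only; the inputs x are conditioned on, never modeled.

```python
import itertools
import numpy as np

def log_score(y, x, w_node, w_edge):
    """Unnormalized log-score of label sequence y given observations x:
    node weights tie each label to its observation, edge weights tie
    adjacent labels to each other."""
    s = sum(w_node[y[t], x[t]] for t in range(len(y)))
    s += sum(w_edge[y[t], y[t + 1]] for t in range(len(y) - 1))
    return s

def conditional_prob(y, x, w_node, w_edge, n_labels):
    """P(y | x) = exp(log_score(y, x)) / Z(x). Z(x) sums over label
    sequences only, never over inputs x, which is what distinguishes
    a CRF from a generative model of the joint P(Y, X)."""
    z = sum(np.exp(log_score(yp, x, w_node, w_edge))
            for yp in itertools.product(range(n_labels), repeat=len(x)))
    return np.exp(log_score(y, x, w_node, w_edge)) / z

# Toy instance: 2 labels, 3 observation symbols, a length-3 input.
rng = np.random.default_rng(0)
w_node = rng.normal(size=(2, 3))  # w_node[label, observation]
w_edge = rng.normal(size=(2, 2))  # w_edge[label_t, label_{t+1}]
x = [0, 2, 1]
print(conditional_prob((0, 1, 1), x, w_node, w_edge, n_labels=2))
```

In practice the enumeration over label sequences would be replaced by dynamic programming (the forward algorithm for chains); the tiny example just keeps the definition visible.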

From my thesis: CRFs can offer computational and statistical advantages over generative models, yet traditional CRF parameter and structure learning methods are often too expensive to scale up to large problems. We develop methods capable of learning CRFs for much larger problems. We do so by decomposing learning problems into smaller, simpler subproblems. These decompositions allow us to trade off sample complexity, computational complexity, and potential for parallelization, and we can often optimize these trade-offs in model- or data-specific ways. The resulting methods are theoretically motivated, are often accompanied by strong guarantees, and are effective and highly scalable in practice.
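One simple instance of this decomposition idea is pseudolikelihood, a special case of composite likelihood (the subject of the AISTATS 2012 paper below): rather than maximizing the joint likelihood, whose global normalizer is expensive, one maximizes a product of per-variable conditional likelihoods, each of which is cheap to normalize. The sketch below is a toy illustration under an assumed pairwise binary model and data; it is not code from the thesis or from SILL.

```python
import numpy as np

def neg_pseudolikelihood(theta, edges, samples):
    """Negative log-pseudolikelihood of a pairwise model over +/-1
    variables with one weight theta[e] per edge:
    the sum over samples and variables of -log P(y_i | y_rest)."""
    nll = 0.0
    for y in samples:
        for i in range(len(y)):
            # Local field on variable i from its neighbors.
            field = 0.0
            for e, (a, b) in enumerate(edges):
                if a == i:
                    field += theta[e] * y[b]
                elif b == i:
                    field += theta[e] * y[a]
            # For +/-1 variables, P(y_i | rest) = sigmoid(2 * y_i * field);
            # normalizing needs only y_i's two states, never the joint Z.
            nll -= np.log(1.0 / (1.0 + np.exp(-2.0 * y[i] * field)))
    return nll

# Toy chain on 3 variables. Each per-variable term is a small convex
# (logistic-regression-like) subproblem, so the objective decomposes
# and the subproblems can be handled in parallel.
edges = [(0, 1), (1, 2)]
samples = [np.array([1, 1, -1]), np.array([-1, -1, -1])]
print(neg_pseudolikelihood(np.array([0.5, -0.2]), edges, samples))
```

Choosing larger components (conditioning blocks of several variables at once rather than one) shifts the trade-off between sample complexity and computational cost described above.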

Code

Statistical Inference and Learning Library (SILL): a library for inference and learning in Probabilistic Graphical Models (Markov Random Fields and Conditional Random Fields). It also includes discriminative methods such as boosting, as well as several sub-projects.

Documents

Joseph K. Bradley.
Learning Large-Scale Conditional Random Fields.
Ph.D. Thesis, Machine Learning Department, Carnegie Mellon University, 2013.

Joseph K. Bradley and Carlos Guestrin.
Sample Complexity of Composite Likelihood.
International Conference on Artificial Intelligence and Statistics (AISTATS), 2012.

Joseph K. Bradley and Carlos Guestrin.
Learning Tree Conditional Random Fields.
International Conference on Machine Learning (ICML), 2010.