Efficient learning of hierarchical latent class models

Nevin L. Zhang*, Tomáš Kočka

*Corresponding author for this work

Research output: Contribution to journal › Conference article published in journal › peer-review

42 Citations (Scopus)

Abstract

Hierarchical latent class (HLC) models are tree-structured Bayesian networks in which the leaf nodes are observed and the internal nodes are hidden. In earlier work, we demonstrated in principle the possibility of reconstructing HLC models from data. In this paper, we address the scalability issue and develop a search-based algorithm that can efficiently learn high-quality HLC models for realistic domains. There are three technical contributions: (1) the identification of a set of search operators; (2) the use of the improvement in BIC score per unit of increase in model complexity, rather than the BIC score itself, for model selection; and (3) the adaptation of structural EM to situations where candidate models contain different variables from the current model. The algorithm was tested on the COIL Challenge 2000 data set, and an interesting model was found.
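The second contribution, scoring candidates by BIC improvement per unit of added complexity, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the toy numbers, and the use of parameter count as the complexity measure are assumptions for the sake of the example.

```python
import math

def bic(loglik: float, n_params: int, n_samples: int) -> float:
    """BIC score: log-likelihood penalized by model complexity."""
    return loglik - 0.5 * n_params * math.log(n_samples)

def unit_improvement(bic_new: float, bic_old: float,
                     params_new: int, params_old: int) -> float:
    """BIC improvement per unit of added model complexity.

    Candidates that add complexity are ranked by this ratio rather
    than by the raw BIC score, which favors cheap improvements.
    """
    delta_params = params_new - params_old
    if delta_params <= 0:
        # No added complexity: fall back to the raw score difference.
        return bic_new - bic_old
    return (bic_new - bic_old) / delta_params

# Toy comparison of two candidates against a current model (made-up numbers).
n = 1000
current = bic(-5200.0, 40, n)
cand_a = bic(-5100.0, 60, n)   # large gain, but many new parameters
cand_b = bic(-5170.0, 44, n)   # smaller gain from only a few new parameters
print(unit_improvement(cand_a, current, 60, 40))
print(unit_improvement(cand_b, current, 44, 40))
```

In this toy run, candidate A wins on raw BIC improvement, but candidate B wins on improvement per parameter, illustrating how the two criteria can disagree.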

Original language: English
Pages (from-to): 586-593
Number of pages: 8
Journal: Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI
Publication status: Published - 2004
Event: 16th IEEE International Conference on Tools with Artificial Intelligence, ICTAI 2004 - Boca Raton, FL, United States
Duration: 15 Nov 2004 - 17 Nov 2004
