Faculty Candidate Seminar

Markov Logic: Representation, Inference and Learning

Daniel Lowd
PhD Candidate
University of Washington

Many applications of AI, including natural language processing, information extraction, bioinformatics, robot mapping, and social network analysis, have both relational and statistical aspects. Historically, there has been a divide between relational approaches based on first-order logic and statistical approaches based on probabilistic graphical models. Markov logic unifies the two by attaching weights to formulas in first-order logic; the weighted formulas then serve as templates for constructing a Markov network.
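To make the representation concrete, here is a minimal sketch (a hypothetical toy example, not code from the talk): a single weighted formula, Smokes(x) => Cancer(x), is grounded over two constants, and each possible world's probability is proportional to exp(w * n), where n is the number of satisfied ground formulas in that world.

```python
import itertools
import math

# Toy Markov logic network (illustrative sketch only): one formula,
# Smokes(x) => Cancer(x), with a hypothetical weight W, grounded over
# the constants {A, B}. A "world" is a truth assignment to all four
# ground atoms, and P(world) is proportional to exp(W * n), where n
# counts the satisfied ground formulas.

CONSTANTS = ["A", "B"]
W = 1.5  # formula weight (hypothetical value)

def satisfied_groundings(world):
    """Count groundings of Smokes(x) => Cancer(x) satisfied in `world`."""
    return sum(1 for c in CONSTANTS
               if (not world[("Smokes", c)]) or world[("Cancer", c)])

atoms = [(p, c) for p in ("Smokes", "Cancer") for c in CONSTANTS]
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]

# Partition function: sum of unnormalized weights over all 2^4 worlds.
Z = sum(math.exp(W * satisfied_groundings(x)) for x in worlds)

def prob(world):
    return math.exp(W * satisfied_groundings(world)) / Z

# The world where everyone smokes and has cancer satisfies both
# groundings, so it is more probable than the world where the smokers
# are cancer-free (which violates both).
all_true = {a: True for a in atoms}
smoke_no_cancer = {a: (a[0] == "Smokes") for a in atoms}
print(prob(all_true) > prob(smoke_no_cancer))  # True
```

Note that worlds violating a formula are not impossible, merely less probable; this is the sense in which Markov logic softens first-order logic.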

In this talk, I will describe recent advances in Markov logic representation, algorithms, and applications. In particular, I will present my work on recursive Markov logic, which gives probabilistic models the full recursive capabilities of first-order logic. I will also show how more efficient weight learning algorithms can be obtained by adapting ideas from convex optimization. Finally, I will discuss current work on combining learning with inference to make exact inference tractable even in very complex models such as Markov logic networks. I will illustrate these developments with applications to probabilistic databases, entity resolution, Web mining, and others.
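As background for the weight-learning theme, generative weight learning in Markov logic is typically cast as convex maximization of the log-likelihood, whose gradient for each weight is the observed satisfied-grounding count minus its expectation under the current model. The sketch below (a hypothetical toy example, not the speaker's algorithm) runs plain gradient ascent on a domain small enough that the expectation can be computed exactly by enumeration.

```python
import itertools
import math

# Illustrative weight-learning sketch for a tiny Markov logic network
# (hypothetical example). For a weight w on a formula, the log-likelihood
# gradient is n(observed) - E_w[n]: the formula's satisfied-grounding
# count in the data minus its expectation under the current model. Exact
# expectations are feasible here only because the domain is tiny.

CONSTANTS = ["A", "B"]

def n_satisfied(world):
    """Groundings of Smokes(x) => Cancer(x) satisfied in `world`."""
    return sum(1 for c in CONSTANTS
               if (not world[("Smokes", c)]) or world[("Cancer", c)])

atoms = [(p, c) for p in ("Smokes", "Cancer") for c in CONSTANTS]
worlds = [dict(zip(atoms, vals))
          for vals in itertools.product([False, True], repeat=len(atoms))]

def expected_n(w):
    """E_w[n] under P(x) proportional to exp(w * n(x)), by enumeration."""
    scores = [math.exp(w * n_satisfied(x)) for x in worlds]
    Z = sum(scores)
    return sum(s * n_satisfied(x) for s, x in zip(scores, worlds)) / Z

# Observed world: A smokes without cancer (violating one grounding);
# B neither smokes nor has cancer (satisfying the other). So n = 1.
observed = {("Smokes", "A"): True, ("Cancer", "A"): False,
            ("Smokes", "B"): False, ("Cancer", "B"): False}

w, lr = 0.0, 0.5
for _ in range(200):  # gradient ascent on the (concave) log-likelihood
    w += lr * (n_satisfied(observed) - expected_n(w))

# At convergence the model's expected count matches the observed count,
# and the weight is negative: the data violates the formula more often
# than a uniform distribution would.
print(round(expected_n(w), 3))
```

In realistic networks the expectation is intractable to enumerate, which is why practical learners approximate it (or optimize surrogates such as pseudo-likelihood); the concavity of the objective is what lets convex optimization machinery apply.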

Sponsored by

EECS - CSE Division