For courses in Bayesian networks, or advanced networking courses focusing on Bayesian networks, found in departments of Computer Science, Computer Engineering, and Electrical Engineering. Also appropriate as a supplementary text in courses on Expert Systems, Machine Learning, and Artificial Intelligence in which the topic of Bayesian networks is covered.
This book provides an accessible and unified discussion of Bayesian networks. It includes discussions of topics related to the areas of artificial intelligence, expert systems, and decision analysis, the fields in which Bayesian networks are frequently applied. The author discusses methods for doing inference in both Bayesian networks and influence diagrams. The book also covers the Bayesian method for learning the values of discrete and continuous parameters. Both the Bayesian and constraint-based methods for learning structure are discussed in detail.
Preface.
I. BASICS.
1. Introduction to Bayesian Networks.
II. INFERENCE.
3. Inference: Discrete Variables.
III. LEARNING.
6. Parameter Learning: Binary Variables.
IV. APPLICATIONS.
12. Applications.

Bayesian networks are graphical structures for representing the probabilistic relationships among a large number of variables and for doing probabilistic inference with those variables. During the 1980s, a good deal of related research was done on developing Bayesian networks (belief networks, causal networks, influence diagrams), algorithms for performing inference with them, and applications that used them. However, the work was scattered throughout research articles. My purpose in writing the 1990 text Probabilistic Reasoning in Expert Systems was to unify this research and to establish a textbook and reference for the field, which has come to be known as "Bayesian networks." The 1990s saw the emergence of excellent algorithms for learning Bayesian networks from data. However, by 2000 there still seemed to be no accessible source for "learning Bayesian networks." Similar to my purpose a decade ago, the goal of this text is to provide such a source.
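To give a concrete taste of these ideas, the following minimal Python sketch (the network structure, variable names, and probability values are invented for illustration and are not taken from the text) represents a small Bayesian network as a product of conditional distributions and answers a query by brute-force enumeration:

```python
from itertools import product

# A toy Bayesian network: Cloudy -> Rain -> WetGrass (a chain-shaped DAG).
# Each variable is binary; all probabilities are invented for illustration.
P_cloudy = {True: 0.5, False: 0.5}
P_rain_given_cloudy = {True: 0.8, False: 0.1}   # P(Rain=T | Cloudy)
P_wet_given_rain = {True: 0.9, False: 0.2}      # P(WetGrass=T | Rain)

def joint(c, r, w):
    """Joint probability via the chain-rule factorization the DAG encodes:
    P(C, R, W) = P(C) * P(R | C) * P(W | R)."""
    pr = P_rain_given_cloudy[c] if r else 1 - P_rain_given_cloudy[c]
    pw = P_wet_given_rain[r] if w else 1 - P_wet_given_rain[r]
    return P_cloudy[c] * pr * pw

def query_rain_given_wet():
    """P(Rain=T | WetGrass=T), summing the joint over the hidden variable."""
    num = sum(joint(c, True, True) for c in (True, False))
    den = sum(joint(c, r, True) for c, r in product((True, False), repeat=2))
    return num / den

print(f"P(Rain | WetGrass) = {query_rain_given_wet():.3f}")  # about 0.786
```

Enumeration like this is exponential in the number of variables, which is why practical systems rely on factored algorithms such as the inference methods discussed below.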
In order to make this text a complete introduction to Bayesian networks, I discuss methods for doing inference in Bayesian networks and influence diagrams. However, there is no effort to be exhaustive in this discussion. For example, I give the details of only two algorithms for exact inference with discrete variables: Pearl's message-passing algorithm and D'Ambrosio and Li's symbolic probabilistic inference algorithm. It may seem odd that I present Pearl's algorithm, since it is one of the oldest. I have two reasons for doing this: (1) Pearl's algorithm corresponds to a model of human causal reasoning, which is discussed in this text; and (2) Pearl's algorithm extends readily to an algorithm for doing inference with continuous variables, which is also discussed in this text.
The content of the text is as follows. Chapters 1 and 2 cover basics: Chapter 1 provides an introduction to Bayesian networks, and Chapter 2 discusses further relationships between DAGs and probability distributions, such as d-separation, the faithfulness condition, and the minimality condition.

Chapters 3-5 concern inference. Chapter 3 covers Pearl's message-passing algorithm, D'Ambrosio and Li's symbolic probabilistic inference, and the relationship of Pearl's algorithm to human causal reasoning. Chapter 4 presents an algorithm for doing inference with continuous variables, an approximate inference algorithm, and an algorithm for abductive inference (finding the most probable explanation). Chapter 5 discusses influence diagrams, which are Bayesian networks augmented with decision nodes and a value node, as well as dynamic Bayesian networks and influence diagrams.

Chapters 6-10 address learning. Chapters 6 and 7 are concerned with parameter learning. Since the notation for these learning algorithms is somewhat arduous, I introduce the algorithms by discussing binary variables in Chapter 6 and then generalize to multinomial variables in Chapter 7. Furthermore, in Chapter 7, I discuss learning parameters when the variables are continuous. Chapters 8, 9, and 10 are concerned with structure learning. Chapter 8 presents the Bayesian method for learning structure in the cases of both discrete and continuous variables, while Chapter 9 discusses the constraint-based method for learning structure. Chapter 10 compares the Bayesian and constraint-based methods, and it presents several real-world examples of learning Bayesian networks. The text ends by referencing applications of Bayesian networks in Chapter 11.
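As a small illustration of the parameter-learning material, the following sketch (with invented observations) performs the conjugate beta update for a single binary variable, the kind of computation Chapter 6 develops: a beta(a, b) prior combined with s ones and t zeros yields a beta(a + s, b + t) posterior.

```python
def update_beta(a, b, data):
    """Update a beta(a, b) prior on P(X = 1) with a list of 0/1 observations.
    The beta family is conjugate to the Bernoulli likelihood, so the
    posterior is again a beta distribution."""
    s = sum(data)          # number of ones observed
    t = len(data) - s      # number of zeros observed
    return a + s, b + t

# Invented example: a uniform beta(1, 1) prior and eight observations.
a, b = update_beta(1, 1, [1, 0, 1, 1, 0, 1, 1, 1])
print(f"posterior: beta({a}, {b}); posterior mean = {a / (a + b):.3f}")
```

The posterior mean (a + s) / (a + b + s + t) then serves as the updated estimate of the parameter, here beta(7, 3) with mean 0.7.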
This is a text on learning Bayesian networks; it is not a text on artificial intelligence, expert systems, or decision analysis. However, since these are fields in which Bayesian networks find application, they emerge frequently throughout the text. Indeed, I have used the manuscript for this text in my course on expert systems at Northeastern Illinois University. In one semester, I have found that I can cover the core of the following chapters: 1, 2, 3, 5, 6, 7, 8, and 9.
I would like to thank those researchers who have provided valuable corrections, comments, and dialog concerning the material in this text. They include Bruce D'Ambrosio, David Maxwell Chickering, Gregory Cooper, Tom Dean, Carl Entemann, John Erickson, Finn Jensen, Clark Glymour, Piotr Gmytrasiewicz, David Heckerman, Xia Jiang, James Kenevan, Henry Kyburg, Kathryn Blackmond Laskey, Don LaBudde, David Madigan, Christopher Meek, Paul-Andre Monney, Scott Morris, Peter Norvig, Judea Pearl, Richard Scheines, Marco Valtorta, Alex Wolpert, and Sandy Zabell. I thank Sue Coyle for helping me draw the cartoon containing the robots. The idea for the cover design was motivated by Eric Horvitz's graphic for the UAI '97 web page. I thank Mark McKernin for creating a stunning cover using that idea as a seed.