Advanced Artificial Intelligence
Purpose of Course
This course will present advanced topics in Artificial Intelligence (AI). We will begin by defining the term “software agent” and discussing how software agents differ from programs in general. We will then take a look at those problems in the field of AI that tend to receive the most attention. Different researchers approach these problems differently. In this course, we will focus on how to build and search graph data structures needed to create software agents, an approach that you will find useful for solving many problems in AI. We will also learn to “break down” larger problems into a number of more specific, manageable subproblems.
In the latter portion of this course, we will review the study of logic and conceptualize the differences between propositional logic, first-order logic, fuzzy logic, and default logic. After learning about statistical tools commonly used in AI and about the basic symbol system used to represent knowledge, we will focus on artificial neural networks and machine learning, which are essential components of computational and statistical methods and of theoretical computer science. The course will then conclude with a study of the Turing machine and a discussion of the contested claim that human thinking is symbol manipulation.
Learning Outcomes
 Define the term “intelligent agent,” list major problems in AI, and identify the major approaches to AI.
 Translate problems into graphs and encode procedures that search for solutions using graph data structures.
 Explain the differences between various types of logic and basic statistical tools used in AI.
 List the different types of learning algorithms and explain why they are different.
 List the most common methods of statistical learning and classification and explain the basic differences between them.
 Describe the components of a Turing machine.
 Name the most important propositions in the philosophy of AI.
 List the major issues pertaining to the creation of machine consciousness.
 Design a reasonable software agent with Java code.
Course Requirements
√ Have access to a computer.
√ Have continuous broadband Internet access.
√ Have the ability/permission to install plugins or software (e.g., Adobe Reader or Flash).
√ Have the ability to download and save files and documents to a computer.
√ Have the ability to open Microsoft files and documents (.doc, .ppt, .xls, etc.).
√ Be competent in the English language.
√ Have read the Saylor Student Handbook.
Unit Outline

Unit 1: Intelligent Agents and Problems Of AI
AI is often seen through the paradigm of autonomous, rational intelligent agents, which we will emphasize in this unit. This unit will begin by discussing what software agents are and how agents differ from programs in general. The unit will then provide a natural taxonomy of autonomous agents and discuss possibilities for further classification before presenting those problems in AI that seem to receive the most attention. The problem of creating intelligence is then broken down into a number of specific subproblems, which consist of particular traits that should be found in an intelligent system. Note that different researchers approach the problems of AI from different perspectives, depending on their respective training, fields of expertise, and favored tools.

1.1 Is It an Agent, or Just a Program?
 Reading: The University of Memphis: Stan Franklin and Art Graesser's “Is It an Agent, or Just a Program?: A Taxonomy for Autonomous Agents”
Link: The University of Memphis: Stan Franklin and Art Graesser’s “Is It an Agent, or Just a Program?” (HTML)
Instructions: This resource covers subsections 1.1.1-1.1.5. Read the webpage to learn about the advent of software agents. Memorize the definitions of the AIMA, Maes, KidSim, Hayes-Roth, IBM, SodaBot, Foner, and Brustoloni agents. Make sure you know how to define “agency” and work to memorize Franklin's definition of an agent. Read through the examples of the different taxonomies and classifications of agents.
About the link: Stan Franklin and Art Graesser are AI researchers and professors of computer science and cognitive science at the University of Memphis.
Terms of Use: Please respect the copyright and terms of use displayed on the web pages above.
 1.1.1 What is an agent?
 1.1.2 The Essence of Agency
 1.1.3 Agent Classifications
 1.1.4 A Natural Kinds Taxonomy of Agents
 1.1.5 Subagents and Societies of Agents

1.1.6 John Lloyd on Intelligent Agents
 Lecture: videolectures.net: John Lloyd’s “Intelligent Agents: Part 1”
Link: videolectures.net: John Lloyd’s “Intelligent Agents: Part 1” (Adobe Flash and Windows Media Player)
Instructions: Watch the first part of this three-part video series by John Lloyd. As he lectures, you may wish to work through the slides included on the page. Throughout the lecture, Professor Lloyd talks about AIMA agents and presents some pertinent examples. Please compare his thoughts with yours and Franklin's from the previous sections. This lecture is approximately 50 minutes in length. You can also download the PowerPoint slides in a PDF format by clicking on the link under “See Also.”
About the link: John Lloyd is a professor at Australian National University who shares lectures on videolectures.net. In the lecture, he introduces the basic ideas of agents and describes some agent architectures.
Terms of Use: The lecture above is released under a Creative Commons Attribution-NonCommercial-NoDerivatives License 3.0 (HTML). It is attributed to John Lloyd.

1.1.7 Stan Franklin: A Cognitive Theory of Everything
 Lecture: Google Videos: Stan Franklin’s “A Cognitive Theory of Everything”
Link: Google Videos: Stan Franklin’s “A Cognitive Theory of Everything” (Google Video)
Instructions: Watch this video, which does an excellent job explaining how intelligent agents fit into “the big picture.” Ask yourself whether Franklin's thoughts make sense to you. This video is 40 minutes long.
About the link: In this video, Stan Franklin presents theories of cognition at the 2006 AGIRI workshop.
Terms of Use: Please respect the copyright and terms of use displayed on the web page above.

1.2 Overview of AI General Problems
 Reading: Wikipedia’s “Artificial Intelligence: Problems”
Link: Wikipedia’s “Artificial Intelligence: Problems” (PDF)
Instructions: Read this entry on the general problems arising in the field of AI. After completing this assignment, you should know the meaning of terms such as knowledge representation, planning, learning, natural language processing, motion and manipulation, perception, social intelligence, creativity, and general intelligence. This link covers subsections 1.2.1-1.2.9. Note that sections 1.2.2-1.2.4 have additional resources assigned to them (see below) and require extra attention.
About the link: The article above is an entry from en.wikipedia.org, which is a web-based, free-content encyclopedia project based on an openly editable model.
Terms of Use: The article above is released under a Creative Commons Attribution-ShareAlike License 3.0 (HTML). This article is a modified version of an article of the same title originally found on Wikipedia. The Saylor Foundation has reformatted the entry and has omitted several of the original sections. You can find the original Wikipedia version of this article here (HTML).
 1.2.1 Deduction, Reasoning, Problem Solving

1.2.2 Knowledge Representation
 Lecture: videolectures.net: Maurice Pagnucco’s “Knowledge Representation and Reasoning: Part 1”
Link: videolectures.net: Maurice Pagnucco’s “Knowledge Representation and Reasoning: Part 1” (Adobe Flash and Windows Media Player)
Instructions: Watch the first part of this three-part video series by Maurice Pagnucco. After viewing the lecture, you should be able to define the terms knowledge, representation, and reasoning; realize the advantages of this approach; and define the forms of knowledge representation. This lecture is approximately 1 hour long.
About the link: Maurice Pagnucco is a professor at the School of Computer Science and Engineering at the University of New South Wales.
Terms of Use: The lecture above is released under a Creative Commons Attribution-NonCommercial-NoDerivatives License 3.0 (HTML). It is attributed to Maurice Pagnucco.

1.2.3 Planning
 Lecture: videolectures.net: Jussi Rintanen’s “Planning: Part 1”
Link: videolectures.net: Jussi Rintanen’s “Planning: Part 1” (Adobe Flash and Windows Media Player)
Instructions: Watch the first part of the video by Jussi Rintanen. You may wish to work through the slides provided on the right-hand side of the screen as Professor Rintanen lectures. After viewing the lecture, you should understand why planning can be difficult and be able to define the term “transition systems.” This video is about an hour long. You can also download the PowerPoint slides in a PDF format by clicking on the link under “See Also.”
About the link: Jussi Rintanen is a researcher and an associate professor at NICTA Canberra Research Laboratory and The Australian National University.
Terms of Use: The lecture above is released under a Creative Commons Attribution-NonCommercial-NoDerivatives License 3.0 (HTML). It is attributed to Jussi Rintanen.

1.2.4 Learning
 Lecture: videolectures.net: Olivier Bousquet’s “Introduction to Learning Theory: Part 1”
Link: videolectures.net: Olivier Bousquet’s “Introduction to Learning Theory: Part 1” (Adobe Flash and Windows Media Player)
Instructions: Watch Olivier Bousquet’s “Part 1,” working through the slides provided on the right-hand side of the screen as you listen to his lecture. After viewing the lecture, you should have a general understanding of “learning theory,” be able to differentiate between deduction and induction, and be able to describe, in general terms, the concept of probability and Bayes' rule. This lecture is about 1 hour long. You can also download the PowerPoint slides in a PDF format by clicking on the link under “See Also.”
About the link: Olivier Bousquet works at the Max Planck Institute for Biological Cybernetics.
Terms of Use: The lecture above is released under a Creative Commons Attribution-NonCommercial-NoDerivatives License 3.0 (HTML). It is attributed to Olivier Bousquet.
 1.2.5 Natural Language Processing
 1.2.6 Motion and Manipulation
 1.2.7 Perception
 1.2.8 Social Intelligence
 1.2.9 General Intelligence

1.3 Approaches to AI
 Reading: Wikipedia’s “Artificial Intelligence: Approaches”
Link: Wikipedia’s “Artificial Intelligence: Approaches” (HTML)
Instructions: Read this entry on the different paradigms that guide AI research and make sure you know the differences between them. This link covers subsections 1.3.1-1.3.4.
About the link: The article above is an entry from en.wikipedia.org, which is a web-based, free-content encyclopedia project based on an openly editable model.
Terms of Use: The article above is released under a Creative Commons Attribution-ShareAlike License 3.0 (HTML). This article is a modified version of an article of the same title originally found on Wikipedia. The Saylor Foundation has reformatted the entry and has omitted several of the original sections. You can find the original Wikipedia version of this article here (HTML).
 1.3.1 Cybernetics and Brain Simulation
 1.3.2 Symbolic AI
 1.3.3 Subsymbolic AI
 1.3.4 Statistical

1.3.5 Systems with General Intelligence
 Lecture: videolectures.net: Michael Thielscher’s "Systems with General Intelligence"
Link: videolectures.net: Michael Thielscher’s "Systems with General Intelligence" (Adobe Flash and Windows Media Player)
Instructions: Watch this video about general problems in AI, working through the slides provided on the right-hand side of the screen as Thielscher lectures. After watching the video, you should be familiar with the chess-as-an-intelligent-system example, understand what general game playing is about, and identify the major questions with which general AI is concerned. Do not let yourself get bogged down by the details; work for a general understanding of AI. This lecture is 53 minutes long. You can also download the PowerPoint slides in a PDF format by clicking on the link under “See Also.”
About the link: In this video, Michael Thielscher of the School of Computer Science and Engineering, University of New South Wales, talks about general intelligence and AI problems, approaches, and history.
Terms of Use: The lecture above is released under a Creative Commons Attribution-NonCommercial-NoDerivatives License 3.0 (HTML). It is attributed to Michael Thielscher.

1.4 Agents in Code
 Assignment: National Taiwan Normal University: Department of Computer Science and Information Engineering: Tsung-Che Chiang’s “Vacuum Cleaner World”
Link: National Taiwan Normal University: Department of Computer Science and Information Engineering: Tsung-Che Chiang’s “Vacuum Cleaner World” (HTML)
Instructions: Please read through the webpage and follow the instructions to complete the activity.
Terms of Use: Please respect the copyright and terms of use displayed on the web page above.
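The assignment above works in the classic two-square vacuum world. As a warm-up, here is a minimal, illustrative sketch of a simple reflex agent's decision rule; the class and method names are invented for this example and are not taken from the assignment's own code:

```java
// A simple reflex agent for a two-square vacuum world (squares "A" and "B").
// The percept is the agent's location and whether that square is dirty;
// the action is one of SUCK, LEFT, or RIGHT.
public class VacuumAgent {
    public static String act(String location, boolean dirty) {
        if (dirty) return "SUCK";                        // clean the current square
        return location.equals("A") ? "RIGHT" : "LEFT";  // otherwise move to the other square
    }

    public static void main(String[] args) {
        System.out.println(act("A", true));   // SUCK
        System.out.println(act("A", false));  // RIGHT
    }
}
```

Note that this agent consults only the current percept; it keeps no model of the world, which is exactly what distinguishes a simple reflex agent from the more capable agent types discussed in the readings.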

Unit 2: Solving Problems By Searching
This unit will teach you how to build and search the data structures needed to create software agents. We will focus on graph structures and a few classical graph search algorithms, because understanding them is important for solving many problems that arise in AI. Graphs enable a logical description of a problem; a graph search then represents the search for its solutions. We will begin this unit with some basic graph theory definitions and then learn how to solve some problems with a graph. The last section of this unit has a video that will expand your understanding of graph structures.
 2.1 Graphs

2.1.1 Graph Definition
 Reading: planetmath.org: Cameron McLeman’s “Graph"
Link: planetmath.org: Cameron McLeman’s “Graph” (PDF)
Instructions: Study the definition of a graph from this section and draw some examples of your own.
About the link: planetmath.org is a mathematics encyclopedia with entries written and reviewed by members.
Terms of Use: The article above is released under a Creative Commons AttributionShareAlike License 3.0. It is attributed to Planetmath.org and the original version can be found here (HTML).
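To experiment with the definition, it helps to have a graph in code. The sketch below is illustrative (it is not from the planetmath entry) and stores an undirected graph as an adjacency list, the representation used throughout this unit:

```java
import java.util.*;

// An undirected graph stored as an adjacency list:
// each vertex maps to the set of its neighbors.
public class Graph {
    private final Map<Integer, Set<Integer>> adj = new HashMap<>();

    public void addEdge(int u, int v) {    // add the undirected edge {u, v}
        adj.computeIfAbsent(u, k -> new HashSet<>()).add(v);
        adj.computeIfAbsent(v, k -> new HashSet<>()).add(u);
    }

    public Set<Integer> neighbors(int u) { // empty set for an unknown vertex
        return adj.getOrDefault(u, Collections.emptySet());
    }
}
```

An adjacency list uses space proportional to the number of edges, which is why it is usually preferred over an adjacency matrix for the sparse graphs that show up in AI search problems.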

2.1.2 Binary Tree
 Reading: planetmath.org: Yann Lamontagne’s “Binary Tree"
Link: planetmath.org: Yann Lamontagne’s “Binary Tree” (PDF)
Instructions: Make sure you know how a binary tree differs from a regular tree after reading this section.
About the link: planetmath.org is a mathematics encyclopedia with entries written and reviewed by members.
Terms of Use: The article above is released under a Creative Commons Attribution-ShareAlike License 3.0 (HTML). It is attributed to Planetmath.org and the original version can be found here (HTML).

2.1.3 Example Problem: Minimum Spanning Tree
 Reading: planetmath.org: Cameron McLeman’s “Minimum Spanning Tree"
Link: planetmath.org: Cameron McLeman’s “Minimum Spanning Tree” (PDF)
Instructions: Read about minimum spanning trees and try to figure out how Prim's algorithm works; the solution can be found at brpreiss.com's link “Prim's Algorithm” (HTML). Before you check the solution, try to solve the problem yourself. After you have solved the problem (or if you have spent a couple of hours working on it, and are stumped!), study the solution.
About the link: planetmath.org is a mathematics encyclopedia with entries written and reviewed by members.
Terms of Use: The article above is released under a Creative Commons AttributionShareAlike License 3.0. It is attributed to Planetmath.org and the original version can be found here.
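If you want to check your understanding of Prim's algorithm in code, here is one possible sketch. It is my own illustration, not the brpreiss.com solution, and it assumes a connected graph given as an adjacency matrix where Integer.MAX_VALUE marks a missing edge:

```java
import java.util.*;

// Prim's algorithm on a connected, weighted, undirected graph.
// Vertices are 0..n-1; w[u][v] is the edge weight, or Integer.MAX_VALUE if absent.
public class Prim {
    public static int mstWeight(int[][] w) {
        int n = w.length;
        boolean[] inTree = new boolean[n];
        int[] best = new int[n];            // cheapest edge connecting v to the growing tree
        Arrays.fill(best, Integer.MAX_VALUE);
        best[0] = 0;                        // start the tree at vertex 0
        int total = 0;
        for (int i = 0; i < n; i++) {
            int u = -1;                     // pick the cheapest vertex not yet in the tree
            for (int v = 0; v < n; v++)
                if (!inTree[v] && (u == -1 || best[v] < best[u])) u = v;
            inTree[u] = true;
            total += best[u];
            for (int v = 0; v < n; v++)     // relax edges out of the new tree vertex
                if (!inTree[v] && w[u][v] < best[v]) best[v] = w[u][v];
        }
        return total;                       // total weight of the minimum spanning tree
    }
}
```

For a triangle with edge weights 1, 2, and 3, the algorithm keeps the two cheapest edges, so the minimum spanning tree weighs 3.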
 2.2 Tree Search Algorithms

2.2.1 Binary Search Trees
 Reading: The University of Auckland in New Zealand: John Morris’s “Binary Search Tree"
Link: The University of Auckland in New Zealand: John Morris’s “Binary Search Tree” (PDF)
Instructions: Read the article to learn how to build and search binary trees.
About the link: This link is provided by John Morris, a professor in the Electrical and Computer Engineering Department at the University of Auckland in New Zealand.
Terms of Use: The linked material above has been reposted by the kind permission of John Morris, and can be viewed in its original form here. Please note that this material is under copyright and cannot be reproduced in any capacity without explicit permission from the copyright holder.
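As a companion to the reading, the following sketch (my own illustration, not John Morris's code) shows the two core binary-search-tree operations, insert and search; it is an unbalanced tree, which is what motivates the red-black trees of the next section:

```java
// An (unbalanced) binary search tree over int keys.
// Invariant: left subtree keys < key < right subtree keys.
public class BST {
    int key;
    BST left, right;

    BST(int key) { this.key = key; }

    void insert(int k) {
        if (k < key) {
            if (left == null) left = new BST(k); else left.insert(k);
        } else if (k > key) {
            if (right == null) right = new BST(k); else right.insert(k);
        }
        // duplicates are ignored
    }

    boolean contains(int k) {
        if (k == key) return true;
        BST next = (k < key) ? left : right;   // the invariant tells us which side to search
        return next != null && next.contains(k);
    }
}
```

Search time is proportional to the tree's height: logarithmic when the tree is balanced, but linear in the worst case of sorted insertions.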

2.2.2 Red-Black Trees
 Reading: The University of Auckland in New Zealand: John Morris’s “Red-Black Trees"
Link: The University of Auckland in New Zealand: John Morris’s “Red-Black Trees” (PDF)
Instructions: Please read the linked material above. After reading this section, you should know how a binary tree differs from a red-black tree and understand the basics of building and searching red-black trees.
About the link: This link is provided by John Morris, a professor in the Electrical and Computer Engineering Department at University of Auckland in New Zealand.
Terms of Use: The linked material above has been reposted by the kind permission of John Morris, and can be viewed in its original form here. Please note that this material is under copyright and cannot be reproduced in any capacity without explicit permission from the copyright holder.

2.2.3 Skip List
 Reading: The University of Auckland in New Zealand: John Morris’s “Skip Lists"
Link: The University of Auckland in New Zealand: John Morris’s “Skip Lists” (PDF)
Instructions: Please learn how to build and search a skip list by reading the linked material.
About the link: This link is provided by John Morris, a professor in the Electrical and Computer Engineering Department at University of Auckland in New Zealand.
Terms of Use: The linked material above has been reposted by the kind permission of John Morris and can be viewed in its original form here. Please note that this material is under copyright and cannot be reproduced in any capacity without explicit permission from the copyright holder.
 2.3 Common Search Techniques with Graphs

2.3.1 Depth-First Search
 Reading: Wikipedia’s “Depth-First Search"
Link: Wikipedia’s “Depth-First Search” (PDF)
Instructions: Read this entry to learn how depth-first search works. Be sure to study the example included.
About the link: en.wikipedia.org is a web-based, free-content encyclopedia project based on an openly editable model.
Terms of Use: The article above is released under a Creative Commons Attribution-ShareAlike License 3.0 (HTML). You can find the original Wikipedia version of this article here (HTML).
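A minimal iterative depth-first search might look like the following sketch (my own illustration, not the article's pseudocode). The explicit stack is what makes the traversal go "deep before wide":

```java
import java.util.*;

// Iterative depth-first search over an adjacency-list graph.
// Returns the vertices in the order they are first visited.
public class DFS {
    public static List<Integer> dfs(Map<Integer, List<Integer>> adj, int start) {
        List<Integer> order = new ArrayList<>();
        Deque<Integer> stack = new ArrayDeque<>();
        Set<Integer> seen = new HashSet<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            int u = stack.pop();
            if (!seen.add(u)) continue;        // skip vertices already visited
            order.add(u);
            List<Integer> nbrs = adj.getOrDefault(u, List.of());
            for (int i = nbrs.size() - 1; i >= 0; i--)  // push in reverse so the
                stack.push(nbrs.get(i));                 // first-listed neighbor pops first
        }
        return order;
    }
}
```

On the graph 1 → {2, 3}, 2 → {4}, the search dives from 1 through 2 to 4 before backtracking to visit 3.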

2.3.2 Breadth-First Search
 Reading: Wikipedia’s “Breadth-First Search"
Link: Wikipedia’s “Breadth-First Search” (PDF)
Instructions: Read this section and make sure you know the differences between depth-first and breadth-first search algorithms.
About the link: en.wikipedia.org is a web-based, free-content encyclopedia.
Terms of Use: The article above is released under a Creative Commons Attribution-ShareAlike License 3.0 (HTML). You can find the original Wikipedia version of this article here (HTML).
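Replacing the stack of depth-first search with a FIFO queue yields breadth-first search, as this illustrative sketch (not the article's pseudocode) shows:

```java
import java.util.*;

// Breadth-first search: visit all vertices at distance k before distance k+1.
public class BFS {
    public static List<Integer> bfs(Map<Integer, List<Integer>> adj, int start) {
        List<Integer> order = new ArrayList<>();
        Queue<Integer> queue = new ArrayDeque<>();
        Set<Integer> seen = new HashSet<>(List.of(start));
        queue.add(start);
        while (!queue.isEmpty()) {
            int u = queue.remove();            // FIFO: oldest discovery is expanded first
            order.add(u);
            for (int v : adj.getOrDefault(u, List.of()))
                if (seen.add(v)) queue.add(v); // enqueue each vertex at most once
        }
        return order;
    }
}
```

On the same graph 1 → {2, 3}, 2 → {4}, breadth-first search visits 2 and 3 (distance 1) before 4 (distance 2), whereas depth-first search reaches 4 before 3.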

2.3.3 Dijkstra's Algorithm
 Reading: Wikipedia’s “Dijkstra's Algorithm"
Link: Wikipedia’s “Dijkstra's Algorithm” (PDF)
Instructions: Read this entry to learn how Dijkstra's algorithm works. Please work through the example included in this entry.
About the link: en.wikipedia.org is a web-based, free-content encyclopedia.
Terms of Use: The article above is released under a Creative Commons Attribution-ShareAlike License 3.0 (HTML). You can find the original Wikipedia version of this article here (HTML).
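The following sketch (an illustration of mine, not the article's pseudocode) implements Dijkstra's algorithm with a priority queue. Since Java's PriorityQueue has no decrease-key operation, stale queue entries are simply skipped when popped:

```java
import java.util.*;

// Dijkstra's shortest paths from a single source.
// Each edge is an int[]{neighbor, weight}; weights must be non-negative.
public class Dijkstra {
    public static Map<Integer, Integer> shortestFrom(
            Map<Integer, List<int[]>> adj, int source) {
        Map<Integer, Integer> dist = new HashMap<>();
        dist.put(source, 0);
        PriorityQueue<int[]> pq =                       // order entries by tentative distance
                new PriorityQueue<>(Comparator.comparingInt((int[] e) -> e[1]));
        pq.add(new int[]{source, 0});
        while (!pq.isEmpty()) {
            int[] cur = pq.remove();
            int u = cur[0], d = cur[1];
            if (d > dist.getOrDefault(u, Integer.MAX_VALUE)) continue;  // stale entry
            for (int[] edge : adj.getOrDefault(u, List.of())) {
                int v = edge[0], nd = d + edge[1];
                if (nd < dist.getOrDefault(v, Integer.MAX_VALUE)) {
                    dist.put(v, nd);                    // found a shorter path to v
                    pq.add(new int[]{v, nd});
                }
            }
        }
        return dist;                                    // reachable vertices only
    }
}
```

With edges 1→2 (weight 1), 2→3 (weight 2), and 1→3 (weight 5), the shortest distance to 3 is 3, via vertex 2.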

2.4 Search Algorithms in General
 Reading: Wikipedia’s “Search Algorithm”
Link: Wikipedia’s “Search Algorithm” (PDF)
Instructions: Read the entry linked above for an overview of search techniques.
Terms of Use: The article above is released under a Creative Commons Attribution-ShareAlike License 3.0 (HTML). You can find the Wikipedia source article here (HTML).

2.5 Basic Notions in Graph Theory
 Lecture: videolectures.net: Zoubin Ghahramani’s “Graphical Models: Parts 1-3”
Link: videolectures.net: Zoubin Ghahramani’s “Graphical Models: Parts 1-3” (Adobe Flash and Windows Media Player)
Instructions: Watch this three-part video on graph theory to develop a better understanding of how to use graphs in AI. After viewing the first part, you should know about directed, undirected, and factor graphs, conditional independence, d-separation, and plate notation. The second part will teach you about inference in graphical models, key ideas in belief propagation, and the junction tree algorithm. Watch the third part for fun, trying to follow along as much as possible. The first lecture is 53 minutes; the second is 58 minutes; and the third is 1 hour and 18 minutes. You can also download the PowerPoint slides in a PDF format by clicking on the link under “See Also.”
About the link: Zoubin Ghahramani is Professor of Information Engineering at the Department of Engineering, University of Cambridge. His research interests include Bayesian approaches to machine learning, artificial intelligence, statistics, information retrieval, bioinformatics, and computational motor control.
Terms of Use: The lectures above are released under a Creative Commons Attribution-NonCommercial-NoDerivatives License 3.0 (HTML). They are attributed to Zoubin Ghahramani.

2.6 Graph Examples in Code
 Assignment: Artificial Intelligence Center’s “Route Finding Agent”
Link: Artificial Intelligence Center’s “Route Finding Agent” (JAVA)
Instructions: Create a route-finding agent given the environment in the form of a graph. One possible solution can be found via the link above, under the “Route Finding Agent” section. Study the solution code after you have already solved the problem, or if you have spent a substantial amount of time and are stuck (this problem could take you 1-2 hours to solve!)
About the link: The code provided by the link is a Java implementation of search algorithms from Russell and Norvig's Artificial Intelligence: A Modern Approach, 3rd Edition.
Terms of Use: Please respect the copyright and terms of use displayed on the web page above.

Unit 3: Logical Agents And Knowledge Representation
Intelligent agents are supposed to make rational decisions, which are not just logically reasoned but also optimal given the available information. Accordingly, in this unit, we will review the study of logic and conceptualize the differences between propositional logic, first-order logic, fuzzy logic, and default logic. This unit will also present an overview of common statistical tools used in AI. In the last part of this unit, we will try to clarify our definition of knowledge representation and will discuss its roles based on research conducted at MIT.

3.1 Logic Programming
 Reading: Wikipedia’s “Logic Programming”
Link: Wikipedia’s “Logic Programming” (PDF)
Instructions: Read this web page on logic programming. Make sure you understand the differences between abductive logic, metalogic, constraint logic, concurrent logic, inductive logic, higher-order logic, and linear logic programming. This reading covers subunits 3.1.1-3.1.7.
Terms of Use: The article above is released under a Creative Commons Attribution-ShareAlike License 3.0 (HTML). You can find the Wikipedia source article here (HTML).
 Lecture: videolectures.net: Alwen Tiu’s “Introduction to Logic: Parts 1-3”
Link: videolectures.net: Alwen Tiu’s “Introduction to Logic: Parts 1-3” (Adobe Flash and Windows Media Player)
Instructions: Watch the first lecture on logic and compare it to the reading above. In this lecture, you will learn about the syntax and semantics of propositional logic, Boolean functions, satisfiability, and binary decision trees. You will need to know the difference between conjunctive and disjunctive normal forms. The first lecture is 56 minutes long. You can also download the PowerPoint slides in a PDF format by clicking on the link under “See Also.”
Then, watch the second lecture to learn about first-order logic. Pay particular attention to the examples. This second lecture is 39 minutes long.
Finally, watch the third lecture, which presents modal logic. Make sure you know the differences between propositional, first-order, and modal logic. The third lecture is 49 minutes long.
About the link: Alwen Tiu is a professor at the Australian National University.
Terms of Use: The lectures above are released under a Creative Commons Attribution-NonCommercial-NoDerivatives License 3.0 (HTML). They are attributed to Alwen Tiu.
 3.1.1 Abductive Logic
 3.1.2 Metalogic
 3.1.3 Constraint Logic
 3.1.4 Concurrent Logic
 3.1.5 Inductive Logic
 3.1.6 HigherOrder Logic
 3.1.7 Linear Logic Programming
 3.2 Probabilistic Methods for Uncertain Reasoning

3.2.1 Bayesian Network
 Reading: PROWL’s “Bayesian Networks”
Link: PROWL’s “Bayesian Networks” (HTML)
Instructions: Read the above web page on Bayesian networks. Focus on the definitions it provides and work through the example provided.
Terms of Use: Please respect the copyright and terms of use displayed on the web page above.
 Lecture: videolectures.net: Christopher Bishop’s “Introduction to Bayesian Inference: Part 1”
Link: videolectures.net: Christopher Bishop’s “Introduction to Bayesian Inference: Part 1” (Adobe Flash and Windows Media Player)
Instructions: Watch the first part of the video above, which discusses Bayesian inference. You may wish to work through the slides provided on the right-hand side of the screen as Bishop lectures. Focus on learning the rules of probability and understanding the terms Bayes' theorem, Bayesian inference, and probabilistic graphical models. Make sure you know how factor graphs are used. This lecture is 1 hour and 17 minutes long. You can also download the PowerPoint slides in a PDF format by clicking on the link under “See Also.”
About the link: Christopher Bishop works for Microsoft Research.
Terms of Use: The lecture above is released under a Creative Commons Attribution-NonCommercial-NoDerivatives License 3.0 (HTML). It is attributed to Christopher Bishop.
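At the heart of Bayesian networks and Bayesian inference is Bayes' theorem. The small illustrative example below (the numbers are invented for the exercise, not taken from the readings) computes a posterior from a prior, a likelihood, and a false-positive rate:

```java
// Bayes' theorem for a two-hypothesis case:
// P(H | E) = P(E | H) P(H) / (P(E | H) P(H) + P(E | not H) P(not H))
public class Bayes {
    public static double posterior(double prior, double likelihood, double falsePositiveRate) {
        double evidence = likelihood * prior + falsePositiveRate * (1 - prior);
        return likelihood * prior / evidence;
    }

    public static void main(String[] args) {
        // A test with 99% sensitivity and a 5% false-positive rate, for a
        // condition with 1% prevalence: the posterior is only about 1/6 (~0.167),
        // despite the seemingly accurate test.
        System.out.println(posterior(0.01, 0.99, 0.05));
    }
}
```

This base-rate effect is exactly the kind of reasoning a Bayesian network automates over many interdependent variables at once.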

3.2.2 Hidden Markov Model
 Reading: Wikipedia’s “Hidden Markov Model”
Link: Wikipedia’s “Hidden Markov Model” (PDF)
Instructions: Read the linked entry above, which discusses the hidden Markov model. Focus on the 'Description' and 'Architecture' sections of the web site.
Terms of Use: The article above is released under a Creative Commons Attribution-ShareAlike License 3.0 (HTML). You can find the Wikipedia source article here (HTML).
 3.2.3 Other Methods for Uncertain Reasoning

3.2.3.1 Kalman Filter
 Reading: Wikipedia’s “Kalman Filter”
Link: Wikipedia’s “Kalman Filter” (PDF)
Instructions: Read this entry on the Kalman Filter, paying attention to the introductory part.
Terms of Use: The article above is released under a Creative Commons Attribution-ShareAlike License 3.0 (HTML). You can find the Wikipedia source article here (HTML).
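For intuition, here is a one-dimensional version of the Kalman filter's predict-and-correct cycle. This is a simplified sketch of my own, not the full matrix form presented in the entry:

```java
// A scalar Kalman filter.
// x, p: current state estimate and its variance.
// q, r: process noise variance and measurement noise variance.
public class Kalman1D {
    double x, p;
    final double q, r;

    Kalman1D(double x0, double p0, double q, double r) {
        x = x0; p = p0; this.q = q; this.r = r;
    }

    double update(double z) {
        p += q;                  // predict: uncertainty grows by the process noise
        double k = p / (p + r);  // Kalman gain: how much to trust the new measurement
        x += k * (z - x);        // correct: move the estimate toward the measurement
        p *= (1 - k);            // uncertainty shrinks after incorporating z
        return x;
    }
}
```

Feeding the filter a stream of noisy measurements of a constant value makes the estimate converge toward that value, with the gain k shrinking as confidence grows.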

3.2.3.2 Decision Theory
 Reading: Wikipedia’s “Decision Theory”
Link: Wikipedia’s “Decision Theory” (PDF)
Instructions: Read this entry, making sure you understand the 'Normative and Descriptive Decision Theory' and 'What Kinds of Decisions Need a Theory?' sections of the text.
Terms of Use: The article above is released under a Creative Commons Attribution-ShareAlike License 3.0 (HTML). You can find the Wikipedia source article here (HTML).

3.3 Knowledge Representation and Reasoning
 Lecture: videolectures.net: Maurice Pagnucco’s “Knowledge Representation and Reasoning: Part 1”
Link: videolectures.net: Maurice Pagnucco’s “Knowledge Representation and Reasoning: Part 1” (Adobe Flash and Windows Media Player)
Instructions: Watch the first video. Focus on learning how to represent what we know and how to use representation to make inferences about that knowledge. Work carefully through the examples included in the lecture. This lecture is 54 minutes long.
About the link: Maurice Pagnucco is a professor at the School of Computer Science and Engineering at the University of New South Wales.
Terms of Use: The lecture above is released under a Creative Commons Attribution-NonCommercial-NoDerivatives License 3.0 (HTML). It is attributed to Maurice Pagnucco.

3.3.1 Discussion on Knowledge Representation
 Reading: MIT: Randall Davis, Howard Shrobe, and Peter Szolovits's “What Is a Knowledge Representation?”
Link: MIT: Randall Davis, Howard Shrobe, and Peter Szolovits's “What Is a Knowledge Representation?” (HTML)
Instructions: Read this web page, which presents different views on knowledge representation. This reading covers sections 3.3.1.1-3.3.1.4. Contrast the reading with your own opinions.
About the link: The article above is written by Randall Davis, Howard Shrobe, and Peter Szolovits, researchers in the field of AI at MIT.
Terms of Use: Please respect the copyright and terms of use displayed on the web page above.
 3.3.1.1 Terminology and Perspective
 3.3.1.2 What is a Knowledge Representation?
 3.3.1.3 Consequences for Research and Practice
 3.3.1.4 The Goal of Knowledge Representation Research

3.4 Coding Drills
 Assignment: Artificial Intelligence Center’s “NQueens Problem Demo”
Link: Artificial Intelligence Center’s “NQueens Problem Demo” (JAVA)
Instructions: Please follow the instructions on the webpage and solve the problem.
Terms of Use: Please respect the copyright and terms of use displayed on the web page above.
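If you would like to attempt the problem before studying the demo, a standard backtracking solution places one queen per row and prunes columns and diagonals, roughly as in this illustrative sketch (my own code, not the demo's):

```java
// Count the solutions to the N-Queens problem by backtracking.
// col[r] holds the column of the queen placed in row r.
public class NQueens {
    public static int count(int n) {
        return place(new int[n], 0, n);
    }

    private static int place(int[] col, int row, int n) {
        if (row == n) return 1;              // all rows filled: one complete solution
        int solutions = 0;
        for (int c = 0; c < n; c++)
            if (safe(col, row, c)) {
                col[row] = c;                // place a queen and recurse on the next row
                solutions += place(col, row + 1, n);
            }
        return solutions;
    }

    private static boolean safe(int[] col, int row, int c) {
        for (int r = 0; r < row; r++)        // check earlier queens: same column or diagonal?
            if (col[r] == c || Math.abs(col[r] - c) == row - r) return false;
        return true;
    }
}
```

The row-per-queen encoding rules out same-row conflicts by construction, so only columns and diagonals need checking; for a 4x4 board there are exactly 2 solutions, and for 8x8 the well-known 92.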

Unit 4: Learning
This unit presents the artificial neural network (NN) as the most important learning tool for machine learning. Machine learning research tries to automatically extract information from data through computational and statistical methods. Machine learning is closely related not only to data mining and statistics but also to theoretical computer science. A NN is a computational model based on biological neural networks: it consists of an interconnected group of artificial neurons that process information. Practically, neural networks are nonlinear statistical data modeling tools used to model complex relationships between inputs and outputs. After being successfully trained, NNs are able to perform classification, estimation, prediction, or simulation on new data. The second part of this unit reviews the Gaussian and Bayesian processes used in machine learning.
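To make the training idea concrete before the readings, here is an illustrative single-neuron example: a classic perceptron trained on the AND function. The learning rate and epoch count are arbitrary choices of mine, not values from the course materials:

```java
// A single perceptron with two inputs, trained by the perceptron learning rule.
public class Perceptron {
    double[] w = new double[2];   // weights, one per input
    double b = 0;                 // bias
    double rate = 0.1;            // learning rate (arbitrary illustrative choice)

    int predict(double[] x) {     // threshold activation: fire if the weighted sum > 0
        return (w[0] * x[0] + w[1] * x[1] + b) > 0 ? 1 : 0;
    }

    void train(double[][] xs, int[] ys, int epochs) {
        for (int e = 0; e < epochs; e++)
            for (int i = 0; i < xs.length; i++) {
                int err = ys[i] - predict(xs[i]);  // perceptron rule: w += rate * err * x
                w[0] += rate * err * xs[i][0];
                w[1] += rate * err * xs[i][1];
                b += rate * err;
            }
    }
}
```

AND is linearly separable, so the perceptron converges to a correct weight setting after a few epochs; XOR, by contrast, is not, which is the classic motivation for the multi-layer networks studied later in this unit.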
Unit 4 Time Advisory show close
Unit 4 Learning Outcomes show close
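The “nonlinear statistical data modeling” role of a neural network described above can be made concrete with a minimal forward pass. The weights below are arbitrary illustrative numbers, not trained values; training would adjust them to fit data.

```python
import math

# Toy forward pass through a 2-input, 2-hidden, 1-output feedforward network.
# The weights are arbitrary illustrative numbers, not trained values.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, w_hidden, w_out):
    # Each hidden neuron computes a weighted sum of the inputs,
    # squashed by the nonlinear sigmoid activation.
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
              for row in w_hidden]
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

y = forward([1.0, 0.5], [[0.4, -0.6], [0.3, 0.8]], [1.2, -0.7])
print(0.0 < y < 1.0)  # True: a sigmoid output always lies in (0, 1)
```

The nonlinearity of the sigmoid is what lets stacked layers model relationships that a purely linear model cannot.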

4.1 Machine Learning
 Reading: Wikipedia’s “Machine Learning”
Link: Wikipedia’s “Machine Learning” (PDF)
Instructions: Read this web page for an overview of machine learning. Be sure you understand the differences between the learning methods, which are described in the 'Algorithm types' section. This reading covers subsections 4.1.1–4.1.5.
Terms of Use: The article above is released under a Creative Commons Attribution-ShareAlike License 3.0 (HTML). You can find the Wikipedia source article here (HTML).
 Reading: Wikipedia’s “Machine Learning”
 4.1.1 Supervised Learning
 4.1.2 Unsupervised Learning

4.1.3 Reinforcement Learning
 Lecture: videolectures.net: John Lloyd’s “Intelligent Agents: Part 2”
Link: videolectures.net: John Lloyd’s “Intelligent Agents: Part 2” (Adobe Flash and Windows Media Player)
Instructions: Watch the second part of the video by John Lloyd and pay attention to the AIMA learning agent. Compare Lloyd’s explanation of reinforcement learning with the definition provided on the World Lingo site. This video is 50 minutes long. You can also download the PowerPoint slides in PDF format by clicking on the link under “See Also.”
About the link: John Lloyd is a professor at Australian National University who shares lectures on videolectures.net.
Terms of Use: The video above is released under a Creative Commons Attribution-NonCommercial-NoDerivatives License 3.0 (HTML). It is attributed to John Lloyd.
 Lecture: videolectures.net: John Lloyd’s “Intelligent Agents: Part 2”
 4.1.4 Transduction
 4.1.5 Multitask Learning

4.1.6 Machine Learning, Probability, and Graphical Models
 Lecture: videolectures.net: Sam Roweis’s “Machine Learning, Probability, and Graphical Models: Part 1”
Link: videolectures.net: Sam Roweis’s “Machine Learning, Probability, and Graphical Models: Part 1” (Adobe Flash and Windows Media Player)
Instructions: Watch the first part of the video by Sam Roweis to review the applications of probabilistic learning, the concept of representation, and examples of training and graphical models. You may wish to work through the slides available on the left-hand side of this web page as you listen to Professor Roweis’s lecture. This video is just over 1 hour long. You can also download the PowerPoint slides in PDF format by clicking on the link under “See Also.”
About the link: Sam Roweis is an Associate Professor in the Department of Computer Science at the University of Toronto. His research interests include machine learning, data mining, and statistical signal processing.
Terms of Use: The video above is released under a Creative Commons Attribution-NonCommercial-NoDerivatives License 3.0 (HTML). It is attributed to Sam Roweis.
 Lecture: videolectures.net: Sam Roweis’s “Machine Learning, Probability, and Graphical Models: Part 1”
 4.2 Neural Network

4.2.1 Introduction to Neural Networks
 Reading: Wolfram Mathematica’s “Introduction to Neural Networks”
Link: Wolfram Mathematica’s “Introduction to Neural Networks” (HTML)
Instructions: Read section 2.1 to learn about neural networks in general and how they are mathematically defined.
About the link: This entry is from Wolfram, which is a software company known for Mathematica.
Terms of Use: Please respect the copyright and terms of use displayed on the web pages above.
 Reading: Wolfram Mathematica’s “Introduction to Neural Networks”

4.2.2 Feedforward Neural Networks
 Reading: Wolfram Mathematica’s “Feedforward Neural Networks”
Link: Wolfram Mathematica’s “Feedforward Neural Networks” (HTML)
Instructions: Read only the pages under section 2.5, which covers the feedforward neural network. Make sure you understand this network’s mathematical definition, and study the examples in figures 2.5 and 2.6.
Terms of Use: Please respect the copyright and terms of use displayed on the web pages above.
 Reading: Wolfram Mathematica’s “Feedforward Neural Networks”

4.2.3 Radial Basis Function Networks
 Reading: Wolfram Mathematica’s “Radial Basis Function Networks”
Link: Wolfram Mathematica’s “Radial Basis Function Networks” (HTML)
Instructions: Read only the pages under section 2.5.2, which covers the radial basis function network. Make sure you understand the network’s mathematical definition, and be sure to study the examples in figures 2.7 and 2.8.
Terms of Use: Please respect the copyright and terms of use displayed on the web pages above.
 Reading: Wolfram Mathematica’s “Radial Basis Function Networks”

4.2.4 The Perceptron
 Reading: Wolfram Mathematica’s “The Perceptron”
Link: Wolfram Mathematica’s “The Perceptron” (HTML)
Instructions: Read only the pages under section 2.4, which covers the perceptron. Be sure you understand its mathematical definition, learn the training algorithm, and study the example in figure 2.4.
Terms of Use: Please respect the copyright and terms of use displayed on the web pages above.
 Reading: Wolfram Mathematica’s “The Perceptron”
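As a companion to the reading, the classic perceptron training rule (a threshold unit whose weights move by learning rate × error × input) can be sketched as follows. The AND dataset and learning rate are illustrative choices, not taken from the Wolfram text.

```python
# Perceptron sketch: threshold unit plus the classic training rule
# w += lr * (target - prediction) * x.

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

def train(samples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            err = target - predict(w, b, x)
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Logical AND is linearly separable, so the perceptron converges on it.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # [0, 0, 0, 1]
```

Trying the same code on XOR (which is not linearly separable) shows why single-layer perceptrons are limited, a point the reading makes mathematically.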

4.2.5 Vector Quantization (VQ) Networks
 Reading: Wolfram Mathematica’s “Vector Quantization Networks”
Link: Wolfram Mathematica’s “Vector Quantization Networks” (HTML)
Instructions: Read about the vector quantization network, starting at the third-to-last paragraph of section 2.8, which begins “Another neural network type…”.
Terms of Use: Please respect the copyright and terms of use displayed on the web pages above.
 Reading: Wolfram Mathematica’s “Vector Quantization Networks”

4.2.6 Hopfield Network
 Reading: Wolfram Mathematica’s “Hopfield Network”
Link: Wolfram Mathematica’s “Hopfield Network” (HTML)
Instructions: Read only the pages under section 2.7, which presents the Hopfield network and the equations that describe it.
Terms of Use: Please respect the copyright and terms of use displayed on the web pages above.
 Reading: Wolfram Mathematica’s “Hopfield Network”
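To complement the equations in section 2.7, here is a minimal sketch of a binary Hopfield network (an illustrative sketch, not code from the Wolfram text): Hebbian storage of a single +1/-1 pattern and synchronous recall from a corrupted version of it. The stored pattern is an arbitrary example.

```python
# Minimal binary Hopfield network: Hebbian storage and synchronous recall.

def store(patterns, n):
    # Hebbian rule: w[i][j] = sum over patterns of p[i] * p[j], zero diagonal.
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, state, steps=5):
    # Each neuron takes the sign of its weighted input, repeated for
    # a few synchronous update steps.
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

stored = [1, -1, 1, -1, 1, -1]
w = store([stored], 6)
noisy = [1, -1, 1, -1, 1, 1]  # last element flipped
print(recall(w, noisy) == stored)  # True: the network restores the pattern
```

This content-addressable behavior (recovering a stored pattern from a corrupted cue) is the defining property the reading derives from the network's energy function.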
 4.3 Other Classifiers and Statistical Learning Methods

4.3.1 Kernel Methods
 Reading: Wikipedia’s “Kernel Methods”
Link: Wikipedia’s “Kernel Methods” (PDF)
Instructions: Read this web page to review kernel methods. Focus on the introductory part at the top of the text, i.e., the basic description and definition.
Terms of Use: The article above is released under a Creative Commons Attribution-ShareAlike License 3.0 (HTML). You can find the Wikipedia source article here (HTML).
 Reading: Wikipedia’s “Kernel Methods”

4.3.2 k-nearest Neighbor Algorithm
 Reading: Wikipedia’s “k-nearest Neighbor Algorithm”
Link: Wikipedia’s “k-nearest Neighbor Algorithm” (PDF)
Instructions: Make sure you know how the k-nearest neighbor algorithm works (in principle) after reading this entry.
Terms of Use: The article above is released under a Creative Commons Attribution-ShareAlike License 3.0 (HTML). You can find the Wikipedia source article here (HTML).
 Reading: Wikipedia’s “k-nearest Neighbor Algorithm”
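The principle behind the k-nearest neighbor algorithm (predict the majority label among the k training points closest to the query) fits in a few lines of Python. The tiny two-class dataset below is illustrative, not from the article.

```python
import math
from collections import Counter

# k-NN sketch: rank training points by Euclidean distance to the query,
# then take a majority vote among the k nearest labels.

def knn_predict(train, query, k=3):
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    labels = [label for _, label in nearest]
    return Counter(labels).most_common(1)[0][0]

train = [((1.0, 1.0), "a"), ((1.2, 0.8), "a"),
         ((5.0, 5.0), "b"), ((5.5, 4.5), "b"), ((4.8, 5.2), "b")]
print(knn_predict(train, (1.1, 0.9)))  # a
print(knn_predict(train, (5.1, 5.0)))  # b
```

Note that k-NN does no training at all: all the work happens at query time, which is the "lazy learning" property the article discusses.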

4.3.3 Mixture Model
 Reading: Wikipedia’s “Mixture Model”
Link: Wikipedia’s “Mixture Model” (PDF)
Instructions: Read this web page to learn about the different types of Mixture Models. Pay attention to the 'General Mixture Model' section and read as much as you can from the 'Specific examples' and 'Examples' sections.
Terms of Use: The article above is released under a Creative Commons Attribution-ShareAlike License 3.0 (HTML). You can find the Wikipedia source article here (HTML).
 Reading: Wikipedia’s “Mixture Model”

4.3.4 Naive Bayes Classifier
 Reading: Wikipedia’s “Naive Bayes Classifier”
Link: Wikipedia’s “Naive Bayes Classifier” (PDF)
Instructions: After reading the linked material, make sure you know the definition of the naive Bayes classifier. Work through the 'The Naive Bayes Probabilistic Model' and 'Examples' sections.
Terms of Use: The article above is released under a Creative Commons Attribution-ShareAlike License 3.0 (HTML). You can find the Wikipedia source article here (HTML).
 Reading: Wikipedia’s “Naive Bayes Classifier”
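The naive Bayes probabilistic model covered in the reading can be sketched as a small multinomial text classifier with Laplace smoothing. The toy spam/ham documents below are an illustrative assumption, not an example from the article.

```python
import math
from collections import Counter, defaultdict

# Multinomial naive Bayes sketch: pick the class maximizing
# log P(class) + sum of log P(word | class), with Laplace smoothing.

def train_nb(docs):
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for words, label in docs:
        word_counts[label].update(words)
        vocab.update(words)
    return class_counts, word_counts, vocab

def predict_nb(model, words):
    class_counts, word_counts, vocab = model
    total_docs = sum(class_counts.values())
    best, best_lp = None, float("-inf")
    for label, n_docs in class_counts.items():
        lp = math.log(n_docs / total_docs)  # log prior P(class)
        total_words = sum(word_counts[label].values())
        for w in words:
            # Laplace-smoothed log likelihood P(word | class)
            lp += math.log((word_counts[label][w] + 1) /
                           (total_words + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

docs = [(["win", "money", "now"], "spam"), (["cheap", "money"], "spam"),
        (["meeting", "tomorrow"], "ham"), (["lunch", "tomorrow"], "ham")]
model = train_nb(docs)
print(predict_nb(model, ["money", "now"]))      # spam
print(predict_nb(model, ["meeting", "lunch"]))  # ham
```

The "naive" independence assumption is visible in the inner loop: each word contributes its log likelihood independently of the others.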

4.3.5 Decision Tree
 Reading: Wikipedia’s “Decision Tree”
Link: Wikipedia’s “Decision Tree” (PDF)
Instructions: Read this entry; you should be able to define the term “decision tree” when you are done.
Terms of Use: The article above is released under a Creative Commons Attribution-ShareAlike License 3.0 (HTML). You can find the Wikipedia source article here (HTML).
 Reading: Wikipedia’s “Decision Tree”

4.3.6 Kernels and Gaussian Processes
 Lecture: videolectures.net: Mark Girolami’s “Kernels and Gaussian Processes: Parts 1–3”
Link: videolectures.net: Mark Girolami’s “Kernels and Gaussian Processes: Parts 1–3” (Adobe Flash and Windows Media Player)
Instructions: Watch the first video about machine learning and compare it to what you have learned in the readings from the sections above. After watching this video, you should know the basics of linear regression, loss functions, and prediction techniques. Study nonlinear models, probabilistic regression, and uncertainty estimation. This lecture is just over 1 hour long. You may wish to work through the slides provided on the right-hand side of the screen as you work through this lecture and the two below. You can also download the PowerPoint slides in PDF format by clicking on the links under “See Also.”
Then, watch the second video lecture to learn about Bayesian regression and classification. This second lecture is 1 hour long.
Finally, watch the last lecture to learn about Gaussian processes, regression, and classification. This third installment is just over 1 hour long.
About the link: Mark Girolami is a professor at the University of Glasgow.
Terms of Use: The videos above are released under a Creative Commons Attribution-NonCommercial-NoDerivatives License 3.0 (HTML). They are attributed to Mark Girolami.
 Lecture: videolectures.net: Mark Girolami’s “Kernels and Gaussian Processes: Parts 1–3”

4.4 Machine Learning Coding Drills
 Assignment: Artificial Intelligence Center’s “Tic-Tac-Toe”
Link: Artificial Intelligence Center’s “Tic-Tac-Toe Demo” (Java)
Instructions: Code an agent that plays Tic-Tac-Toe. You can choose to play the game yourself by selecting board positions or have the agent propose moves. One possible solution is available via the link above under the Tic-Tac-Toe Demo section. Work toward a solution for no more than 10 hours, and then check your work against the solution code.
About the link: The code above is a Java implementation of algorithms from Russell and Norvig’s “Artificial Intelligence: A Modern Approach,” 3rd Edition.
Terms of Use: Please respect the copyright and terms of use displayed on the web page above.
 Assignment: Artificial Intelligence Center’s “Tic-Tac-Toe”
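One common way to build such an agent is minimax search over the full game tree, which is small enough to enumerate for Tic-Tac-Toe. The sketch below is one possible approach in Python, not the demo’s Java solution.

```python
# Minimax move chooser for Tic-Tac-Toe. The board is a list of 9 cells
# holding "X", "O", or None; "X" is the maximizing player.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for i, j, k in LINES:
        if board[i] is not None and board[i] == board[j] == board[k]:
            return board[i]
    return None

def minimax(board, player):
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    moves = [i for i, c in enumerate(board) if c is None]
    if not moves:
        return 0  # draw
    scores = []
    for m in moves:
        board[m] = player
        scores.append(minimax(board, "O" if player == "X" else "X"))
        board[m] = None  # undo the move
    return max(scores) if player == "X" else min(scores)

def best_move(board, player):
    def score(m):
        board[m] = player
        s = minimax(board, "O" if player == "X" else "X")
        board[m] = None
        return s
    moves = [i for i, c in enumerate(board) if c is None]
    return max(moves, key=score) if player == "X" else min(moves, key=score)

# X can win immediately by completing the top row at cell 2.
board = ["X", "X", None, "O", "O", None, None, None, None]
print(best_move(board, "X"))  # 2
```

Because perfect play by both sides always draws, running minimax from the empty board scores it 0; the drill's agent only needs this search plus a move-selection loop.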

Unit 5: Philosophical Foundations of AI
In this unit, we will study the Turing machine as a formalization of the intuitive notion of computability in the discrete domain. In the theory of computation, many major complexity classes can be characterized by an appropriately restricted Turing machine. We will also discuss the claim that human thinking is a kind of symbol manipulation, along with the related claims that a symbol system is necessary for intelligence and that machines can be intelligent. Finally, we will discuss the ongoing neuroscientific attempt to understand how the human brain works and examine the possible role of consciousness in machines. Is it possible, in theory and then in practice, to create a machine that has all the capabilities of a human being?
Unit 5 Time Advisory show close
Unit 5 Learning Outcomes show close
 5.1 Computing Machinery and Intelligence

5.1.1 Philosophical Issues and Turing Test
 Lecture: videolectures.net: John Lloyd’s “Intelligent Agents: Part 3”
Link: videolectures.net: John Lloyd’s “Intelligent Agents: Part 3” (Adobe Flash and Windows Media Player)
Instructions: Watch the third part of this video series by John Lloyd (you will need to click on the appropriate video once you navigate to the site’s landing page) and compare his interpretation of the Turing test with what you learn later in this unit. This lecture is 50 minutes long. You may wish to work through the slides provided on the right-hand side of the screen as you work through this lecture. You can also download the PowerPoint slides in PDF format by clicking on the link under “See Also.”
About the link: John Lloyd is a professor at Australian National University who shares lectures on videolectures.net.
Terms of Use: The video above is released under a Creative Commons Attribution-NonCommercial-NoDerivatives License 3.0 (HTML). It is attributed to John Lloyd.
 Lecture: videolectures.net: John Lloyd’s “Intelligent Agents: Part 3”

5.1.2 Computing Machinery and Intelligence
 Reading: Loebner.net: A. M. Turing’s “Computing Machinery and Intelligence”
Link: Loebner.net: A. M. Turing’s “Computing Machinery and Intelligence” (HTML)
Instructions: Read this classic paper by A. M. Turing, a cornerstone of the field of AI.
About the link: The article above is hosted at loebner.net, which was established by Hugh Gene Loebner, who offers the Loebner Prize in Artificial Intelligence.
Terms of Use: Please respect the copyright and terms of use displayed on the web page above.
 Reading: Loebner.net: A. M. Turing’s “Computing Machinery and Intelligence”

5.1.3 Turing Machine
 Reading: Scholarpedia: Paul M.B. Vitanyi's “Turing Machine”
Link: Scholarpedia: Paul M.B. Vitanyi's “Turing Machine” (HTML)
Instructions: This reading is fairly challenging; read through it to the best of your ability for a detailed description of the Turing machine. After you have completed this reading, you should know how to define a Turing machine and summarize the Church-Turing thesis. Make sure you know what the halting problem is.
About the link: The article above is by Paul M.B. Vitanyi, a computer scientist at the University of Amsterdam.
Terms of Use: This material is in the public domain.
 Reading: Scholarpedia: Paul M.B. Vitanyi's “Turing Machine”
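After the reading, it can help to see how little code a Turing machine simulator requires. The sketch below uses a sparse tape and a transition table; the bit-flipping machine is an illustrative choice, not an example from the article.

```python
# Minimal Turing machine simulator. The example machine flips every bit
# on the tape and halts at the first blank.

def run_tm(tape, transitions, state="start", blank="_", max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        write, move, state = transitions[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Transition table: (state, read symbol) -> (write, head move, next state)
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_tm("1011", flip))  # 0100
```

The `max_steps` cap is a practical guard, and it echoes the halting problem from the reading: no general procedure can decide in advance whether an arbitrary machine will ever reach its halt state.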

5.1.4 Computability and Incompleteness
 Lecture: videolectures.net: Errol Martin’s “Computability and Incompleteness”
Link: videolectures.net: Errol Martin’s “Computability and Incompleteness” (Adobe Flash and Windows Media Player)
Instructions: These videos cover challenging topics mentioned earlier in this unit. Watch the first lecture to learn about Hilbert’s consistency program, Gödel’s incompleteness theorem, attributes of computable functions, Church’s thesis, and three approaches to computability, paying particular attention to the examples. This first lecture is just over 1 hour long. You may wish to work through the slides provided on the right-hand side of the screen as you listen to this lecture and the other installments below. You can also download the PowerPoint slides in PDF format by clicking on the link under “See Also.”
In the second video, you will learn about the halting problem, the universal Turing machine, and the undecidability proof. This second installment is 48 minutes long. Finally, watch the last two videos in this series, which are 56 and 53 minutes long, respectively.
About the link: Errol Martin is the founder of an enterprise architecture and systems consulting company based in Canberra, Australia.
Terms of Use: The videos above are released under a Creative Commons Attribution-NonCommercial-NoDerivatives License 3.0 (HTML). They are attributed to Errol Martin.
 Lecture: videolectures.net: Errol Martin’s “Computability and Incompleteness”

5.2 Important Propositions in the Philosophy of AI
 Reading: Wapedia’s “Artificial Brain” and “Physical Symbol System”
Link: Wapedia’s “Artificial Brain” (HTML) and “Physical Symbol System” (HTML)
Instructions: Carefully read these web pages, which discuss two important propositions in the philosophy of AI. These readings cover sections 5.2.1–5.2.2.
About the link: Wapedia is a site for Wikipedia on mobile phones.
Terms of Use: The articles above are released under a Creative Commons Attribution-ShareAlike License 3.0 (HTML). They are attributed to Wikipedia, and the original versions can be found here (HTML) and here (HTML), respectively.
 Reading: Wapedia’s “Artificial Brain” and “Physical Symbol System”
 5.2.1 The Brain Can Be Simulated
 5.2.2 Human Thinking Is Symbol Processing

5.3 Machine Consciousness
 Reading: Scholarpedia: Igor Aleksander’s “Machine Consciousness”
Link: Scholarpedia: Igor Aleksander’s “Machine Consciousness” (HTML)
Instructions: Read all the sections of the web page that discuss machine consciousness. Focus on learning about early models of consciousness and neural models of consciousness.
About the link: The article above is an entry at www.scholarpedia.org, the peerreviewed openaccess encyclopedia written by scholars from all around the world.
Terms of Use: Please respect the copyright and terms of use displayed on the web page above.
 Reading: Scholarpedia: Igor Aleksander’s “Machine Consciousness”
 Lecture: MIT: Marvin Minsky’s “Emotion Machine”
Link: MIT: Marvin Minsky’s “Emotion Machine” (Adobe Flash)
Also available in: iTunes U
Instructions: Watch this video about emotional machines. Ask yourself whether you think one is possible and begin to think about how you would approach its creation. This video is 1 hour and 23 minutes long.
About the link: Marvin Minsky is a pioneer of artificial intelligence.
Terms of Use: Please respect the copyright and terms of use displayed on the web page above.
 Lecture: MIT: Marvin Minsky’s “Emotion Machine”

Final Exam
 Final Exam: The Saylor Foundation's CS408 Final Exam
Link: The Saylor Foundation's CS408 Final Exam
Instructions: You must be logged into your Saylor Foundation School account in order to access this exam. If you do not yet have an account, you will be able to create one, free of charge, after clicking the link.
 Final Exam: The Saylor Foundation's CS408 Final Exam