About this course
This page focuses on the course 6.034 Artificial Intelligence as it was taught by Professor Patrick Winston in Fall 2015.
This course introduces students to representations, techniques, and architectures used to build applied systems and to account for intelligence from a computational point of view. Students learn applications of rule chaining, heuristic search, constraint propagation, constrained search, inheritance, and other problem-solving paradigms. They also learn applications of identification trees, neural nets, genetic algorithms, support-vector machines, boosting, and other learning paradigms. The course introduces contributions of vision, language, and story-understanding systems to human-level intelligence.

The teaching format includes three components:

1) Lectures, which introduce core material and provide “big picture” context for the content;
2) Right Now Talks, which offer students a view into what’s happening in today’s research projects in a way that complements the material presented in lectures; and
3) Recitations, which allow students to review lecture material in more technical detail, to work through practice problems, and to ask questions in a small venue.
Course Outcomes
Course Goals for Students
- Gain familiarity with basic approaches to problem solving and inference and areas of application
- Demonstrate familiarity with basic and advanced approaches to exploiting regularity in data and areas of application
- Demonstrate familiarity with computational theories of aspects of human intelligence and the role of those theories in applications
- Gain familiarity with techniques for improving human learning and influencing human thought
Calendar
| LEC # | TOPICS | KEY DATES |
| --- | --- | --- |
| 1 | Introduction and scope | |
| 2 | Reasoning: goal trees and problem solving | |
| 3 | Reasoning: goal trees and rule-based expert systems | |
| 4 | Search: depth-first, hill climbing, beam | Problem set 0 due |
| 5 | Search: optimal, branch and bound, A* | |
| 6 | Search: games, minimax, and alpha-beta | Problem set 1 due |
| | Quiz 1 | |
| 7 | Constraints: interpreting line drawings | |
| 8 | Constraints: search, domain reduction | |
| 9 | Constraints: visual object recognition | Problem set 2 due |
| 10 | Introduction to learning, nearest neighbors | |
| 11 | Learning: identification trees, disorder | |
| | Quiz 2 | |
| 12 | Learning: neural nets, back propagation | Problem set 3 due |
| 13 | Learning: genetic algorithms | |
| 14 | Learning: sparse spaces, phonology | |
| 15 | Learning: near misses, felicity conditions | |
| 16 | Learning: support vector machines | Problem set 4 due |
| | Quiz 3 | |
| 17 | Learning: boosting | |
| 18 | Representations: classes, trajectories, transitions | |
| 19 | Architectures: GPS, SOAR, Subsumption, Society of Mind | |
| 20 | The AI business | |
| 21 | Probabilistic inference I | |
| | Quiz 4 | |
| 22 | Probabilistic inference II | Problem set 5 due |
| 23 | Model merging, cross-modal coupling, course summary | |
In this lecture, Prof. Winston introduces artificial intelligence and provides a brief history of the field. The last ten minutes are devoted to information about the course at MIT.
Instructor: Patrick H. Winston
This lecture covers a symbolic integration program from the early days of AI. We use safe and heuristic transformations to simplify the problem, and then consider broader questions of how much knowledge is involved, and how the knowledge is represented.
Instructor: Patrick H. Winston
We consider a block-stacking program, which can answer questions about its own behavior, and then identify an animal given a list of its characteristics. Finally, we discuss how to extract knowledge from an expert, using the example of bagging groceries.
Instructor: Patrick H. Winston
This lecture covers algorithms for depth-first and breadth-first search, followed by several refinements: keeping track of nodes already considered, hill climbing, and beam search. We end with a brief discussion of commonsense vs. reflective knowledge.
Instructor: Patrick H. Winston
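As a rough companion to this lecture, here is a minimal Python sketch of the agenda-based view of search; the toy graph is invented for illustration, not taken from the course. Depth-first and breadth-first differ only in which end of the agenda supplies the next partial path, and the extended list keeps nodes from being expanded twice.

```python
from collections import deque

# Toy graph as an adjacency list (made-up example).
GRAPH = {
    "S": ["A", "B"],
    "A": ["C", "D"],
    "B": ["D"],
    "C": ["G"],
    "D": ["G"],
    "G": [],
}

def search(start, goal, depth_first=True):
    """Depth-first or breadth-first search with an extended list.

    The agenda holds partial paths; depth-first pops from one end
    (a stack), breadth-first from the other (a queue).
    """
    agenda = deque([[start]])
    extended = set()                        # nodes already expanded
    while agenda:
        path = agenda.pop() if depth_first else agenda.popleft()
        node = path[-1]
        if node == goal:
            return path
        if node in extended:
            continue
        extended.add(node)
        for child in GRAPH[node]:
            if child not in extended:
                agenda.append(path + [child])
    return None

print(search("S", "G", depth_first=True))   # ['S', 'B', 'D', 'G']
print(search("S", "G", depth_first=False))  # ['S', 'A', 'C', 'G']
```

Hill climbing and beam search are variations on this same loop: hill climbing sorts the newly added paths by a heuristic estimate of distance to the goal, while beam search keeps only the best few paths at each level.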
This lecture covers strategies for finding the shortest path. We discuss branch and bound, which can be refined by using an extended list or an admissible heuristic, or both (known as A*). We end with an example where the heuristic must be consistent.
Instructor: Patrick H. Winston
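To make the refinements concrete, here is a minimal sketch of branch and bound with both an extended list and an admissible heuristic, which together give A*; the weighted graph and heuristic values are invented for illustration.

```python
import heapq

# Made-up weighted graph; H is an admissible heuristic (it never
# overestimates the remaining distance to G).
EDGES = {
    "S": [("A", 2), ("B", 5)],
    "A": [("G", 7)],
    "B": [("G", 1)],
    "G": [],
}
H = {"S": 5, "A": 6, "B": 1, "G": 0}

def a_star(start, goal):
    """Always extend the path with the smallest value of
    (cost so far) + (heuristic estimate of cost remaining)."""
    frontier = [(H[start], 0, [start])]     # (estimate, cost, path)
    extended = set()
    while frontier:
        _, cost, path = heapq.heappop(frontier)
        node = path[-1]
        if node == goal:
            return cost, path
        if node in extended:                # extended list: skip repeats
            continue
        extended.add(node)
        for child, step in EDGES[node]:
            new_cost = cost + step
            heapq.heappush(frontier,
                           (new_cost + H[child], new_cost, path + [child]))
    return None

print(a_star("S", "G"))   # (6, ['S', 'B', 'G'])
```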
In this lecture, we consider strategies for adversarial games such as chess. We discuss the minimax algorithm, and how alpha-beta pruning improves its efficiency. We then examine progressive deepening, which ensures that some answer is always available.
Instructor: Patrick H. Winston
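A minimal sketch of minimax with alpha-beta pruning, on an invented game tree whose leaves hold static evaluation scores: the pruning abandons a branch as soon as its value can no longer influence the choice made higher up.

```python
# Made-up game tree: internal nodes are lists of children, leaves are
# static evaluation scores.
TREE = [[3, 5], [2, 9], [0, 7]]

def minimax(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if not isinstance(node, list):          # leaf: return its static value
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, minimax(child, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:               # the minimizer won't allow this
                break
        return value
    else:
        value = float("inf")
        for child in node:
            value = min(value, minimax(child, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:               # the maximizer has better options
                break
        return value

print(minimax(TREE, maximizing=True))  # 3; the leaves 9 and 7 are never examined
```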
How can we recognize the number of objects in a line drawing? We consider how Guzman, Huffman, and Waltz approached this problem. We then solve an example using a method based on constraint propagation, with a limited set of junction and line labels.
Instructor: Patrick H. Winston
This lecture covers map coloring and related scheduling problems. We develop pseudocode for the domain reduction algorithm and consider how much constraint propagation is most efficient, and whether to start with the most or least constrained variables.
Instructor: Patrick H. Winston
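A minimal sketch of domain reduction on an invented map-coloring instance: assign a color depth-first, propagate by deleting that color from each neighbor's domain, and back up as soon as some domain empties. The regions and colors are made up for illustration.

```python
# Variables are regions; the constraint is that neighbors differ.
NEIGHBORS = {
    "A": ["B", "C"],
    "B": ["A", "C"],
    "C": ["A", "B", "D"],
    "D": ["C"],
}
COLORS = ["red", "green", "blue"]

def solve(domains, order):
    """Depth-first assignment with domain reduction after each choice."""
    if not order:
        return domains                      # every region assigned
    var, rest = order[0], order[1:]
    for color in domains[var]:
        reduced = {v: list(d) for v, d in domains.items()}
        reduced[var] = [color]
        ok = True
        for n in NEIGHBORS[var]:
            if color in reduced[n]:
                reduced[n].remove(color)
                if not reduced[n]:          # a neighbor ran out of options
                    ok = False
                    break
        if ok:
            result = solve(reduced, rest)
            if result:
                return result
    return None                             # back up and try another color

domains = {v: list(COLORS) for v in NEIGHBORS}
# Heuristic from the lecture: start with the most constrained variable.
order = sorted(NEIGHBORS, key=lambda v: -len(NEIGHBORS[v]))
print(solve(domains, order))
```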
We consider how object recognition has evolved over the past 30 years. In alignment theory, 2-D projections are used to determine whether an additional picture is of the same object. To recognize faces, we use intermediate-sized features and correlation.
Instructor: Patrick H. Winston
This lecture begins with a high-level view of learning, then covers nearest neighbors using several graphical examples. We then discuss how to learn motor skills such as bouncing a tennis ball, and consider the effects of sleep deprivation.
Instructor: Patrick H. Winston
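A minimal nearest-neighbor sketch with invented training points: a new point takes the label of the closest stored example, which implicitly divides the plane into regions bounded by perpendicular bisectors.

```python
import math

# Made-up training data: (feature vector, label) pairs.
TRAIN = [((1.0, 1.0), "red"), ((2.0, 1.5), "red"),
         ((5.0, 5.0), "blue"), ((6.0, 4.5), "blue")]

def nearest_neighbor(x):
    """Return the label of the training point closest to x."""
    closest = min(TRAIN, key=lambda pair: math.dist(pair[0], x))
    return closest[1]

print(nearest_neighbor((1.5, 1.0)))  # red
print(nearest_neighbor((5.5, 5.0)))  # blue
```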
In this lecture, we build an identification tree based on yes/no tests. We start by arranging the tree based on tests that result in homogeneous subsets. For larger datasets, this is generalized by measuring the disorder of subsets.
Instructor: Patrick H. Winston
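A minimal sketch of the disorder measure (the label sets are invented): a subset's disorder is −Σ p log₂ p over its classes, and a test is scored by the size-weighted average disorder of the subsets it produces, with lower being better.

```python
from math import log2

def disorder(labels):
    """Disorder of one subset: 0 if homogeneous, 1 for a 50/50 split."""
    n = len(labels)
    total = 0.0
    for cls in set(labels):
        p = labels.count(cls) / n
        total -= p * log2(p)
    return total

def test_quality(subsets):
    """Average disorder of a test, weighted by branch size."""
    n = sum(len(s) for s in subsets)
    return sum(len(s) / n * disorder(s) for s in subsets)

print(disorder(["+", "+", "+"]))              # 0.0 (homogeneous)
print(disorder(["+", "+", "-", "-"]))         # 1.0 (maximally mixed)
print(test_quality([["+", "+"], ["-", "-"]])) # 0.0: a perfect test
print(test_quality([["+", "-"], ["+", "-"]])) # 1.0: a useless test
```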
In this video, Prof. Winston introduces neural nets and back propagation.
Instructor: Patrick H. Winston
In this lecture, Prof. Winston discusses modern breakthroughs in neural net research.
Instructor: Patrick H. Winston
This lecture explores genetic algorithms at a conceptual level. We consider three approaches to how a population evolves towards desirable traits, ending with ranks of both fitness and diversity. We briefly discuss how this space is rich with solutions.
Instructor: Patrick H. Winston
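A minimal genetic-algorithm sketch on an invented toy problem, evolving bit strings toward all ones; the rank-the-population-and-keep-the-fittest scheme here is just one of the selection approaches the lecture compares.

```python
import random

random.seed(0)  # reproducible toy run

def fitness(genome):
    return sum(genome) / len(genome)   # fraction of ones

def evolve(pop, generations=20, mutation_rate=0.1):
    """Rank by fitness, keep the better half, refill with mutated
    crossovers of the survivors."""
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: len(pop) // 2]
        children = []
        while len(survivors) + len(children) < len(pop):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(len(a))
            child = a[:cut] + b[cut:]                       # crossover
            child = [g ^ 1 if random.random() < mutation_rate else g
                     for g in child]                        # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

pop = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
print(evolve(pop))   # typically converges to (nearly) all ones
```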
Why do “cats” and “dogs” end with different plural sounds, and how do we learn this? We can represent this problem in terms of distinctive features, and then generalize. We end this lecture with a brief discussion of how to approach AI problems.
Instructor: Patrick H. Winston
To determine whether three blocks form an arch, we use a model which evolves through examples and near misses; this is an example of one-shot learning. We also discuss other aspects of how students learn, and how to package your ideas better.
Instructor: Patrick H. Winston
In this lecture, we explore support vector machines in some mathematical detail. We use Lagrange multipliers to maximize the width of the street given certain constraints. If needed, we transform vectors into another space, using a kernel function.
Instructor: Patrick H. Winston
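The classifier that falls out of the Lagrangian depends on the training data only through dot products, which is exactly why a kernel function can stand in for the dot product in another space. Here is a minimal sketch of the resulting decision rule; the two support vectors, multipliers, and bias are toy values invented for illustration (they happen to solve this two-point problem exactly).

```python
def linear_kernel(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# (support vector x_i, label y_i, Lagrange multiplier alpha_i)
SUPPORT = [((1.0, 1.0), +1, 0.25),
           ((3.0, 3.0), -1, 0.25)]
BIAS = 2.0

def classify(x, kernel=linear_kernel):
    """Decision rule: sign of sum_i alpha_i * y_i * K(x_i, x) + b."""
    total = sum(alpha * y * kernel(sv, x) for sv, y, alpha in SUPPORT)
    return +1 if total + BIAS >= 0 else -1

print(classify((0.0, 0.0)))   # +1
print(classify((4.0, 4.0)))   # -1
```

Swapping `linear_kernel` for a polynomial or radial-basis kernel changes the space in which the street is straight, without changing anything else in the code.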
Can multiple weak classifiers be used to make a strong one? We examine the boosting algorithm, which adjusts the weight of each classifier, and work through the math. We end with how boosting doesn’t seem to overfit, and mention some applications.
Instructor: Patrick H. Winston
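A minimal sketch of one round of the boosting weight update, with invented data: the chosen weak classifier gets a vote of α = ½ ln((1 − e)/e), and reweighting leaves the misclassified points holding half of the total weight.

```python
from math import exp, log

weights = [1 / 4] * 4                    # start with uniform weights
wrong = [False, False, False, True]      # points the weak classifier missed

error = sum(w for w, bad in zip(weights, wrong) if bad)
alpha = 0.5 * log((1 - error) / error)   # the classifier's vote

# Scale up the missed points, scale down the rest, then renormalize.
weights = [w * exp(alpha if bad else -alpha)
           for w, bad in zip(weights, wrong)]
total = sum(weights)
weights = [w / total for w in weights]

print(error, alpha)   # 0.25, ~0.549
print(weights)        # [1/6, 1/6, 1/6, 1/2]
```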
In this lecture, we consider the nature of human intelligence, including our ability to tell and understand stories. We discuss the most useful elements of our inner language: classification, transitions, trajectories, and story sequences.
Instructor: Patrick H. Winston
In this lecture, we consider cognitive architectures, including General Problem Solver, SOAR, Emotion Machine, Subsumption, and Genesis. Each is based on a different hypothesis about human intelligence, such as the importance of language and stories.
Instructor: Patrick H. Winston
We begin this lecture with basic probability concepts, and then discuss belief nets, which capture causal relationships between events and allow us to specify the model more simply. We can then use the chain rule to calculate the joint probability table.
Instructor: Patrick H. Winston
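A minimal sketch of the chain-rule factorization on an invented belief net (rain and a sprinkler both influence wet grass): because each variable is conditioned only on its parents, three small tables stand in for the full joint table.

```python
# P(rain), P(sprinkler), and P(wet | rain, sprinkler); numbers invented.
P_RAIN = {True: 0.2, False: 0.8}
P_SPRINKLER = {True: 0.1, False: 0.9}
P_WET = {
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.05,
}

def joint(rain, sprinkler, wet):
    """Chain rule: P(r, s, w) = P(r) * P(s) * P(w | r, s)."""
    p_wet = P_WET[(rain, sprinkler)]
    return (P_RAIN[rain] * P_SPRINKLER[sprinkler]
            * (p_wet if wet else 1 - p_wet))

# The eight joint entries sum to 1, as they must.
total = sum(joint(r, s, w) for r in (True, False)
            for s in (True, False) for w in (True, False))
print(joint(True, False, True), total)   # ~0.162, ~1.0
```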
We begin with a review of inference nets, then discuss how to use experimental data to develop a model, which can be used to perform simulations. If we have two competing models, we can use Bayes’ rule to determine which is more likely to be accurate.
Instructor: Patrick H. Winston
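A minimal sketch of the model-selection step with invented numbers: Bayes' rule weighs each model's prior by how well that model predicts the observed data.

```python
# P(model | data) is proportional to P(data | model) * P(model).
prior = {"model_a": 0.5, "model_b": 0.5}
likelihood = {"model_a": 0.02, "model_b": 0.08}   # P(data | model), invented

evidence = sum(likelihood[m] * prior[m] for m in prior)
posterior = {m: likelihood[m] * prior[m] / evidence for m in prior}

print(posterior)   # {'model_a': 0.2, 'model_b': 0.8}: model_b is 4x as likely
```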
This lecture begins with a brief discussion of cross-modal coupling. Prof. Winston then reviews big ideas of the course, suggests possible next courses, and demonstrates how a story can be understood from multiple points of view at a conceptual level.
Instructor: Patrick H. Winston
In this mega-recitation, we cover Problem 1 from Quiz 1, Fall 2009. We begin with the rules and assertions, then spend most of our time on backward chaining and drawing the goal tree for Part A. We end with a brief discussion of forward chaining.
Instructor: Mark Seifter
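For readers who want the mechanics in code, here is a minimal backward chainer over invented if-then rules in the spirit of this problem: to establish a goal, find it among the assertions, or find a rule whose consequent matches and recursively establish its antecedents, printing the goal tree along the way.

```python
# Rules are (antecedents, consequent) pairs; rules and assertions invented.
RULES = [
    (["has hair"], "is a mammal"),
    (["is a mammal", "eats meat"], "is a carnivore"),
]
ASSERTIONS = {"has hair", "eats meat"}

def backchain(goal, depth=0):
    print("  " * depth + goal)              # trace the goal tree
    if goal in ASSERTIONS:
        return True
    for antecedents, consequent in RULES:
        if consequent == goal and all(backchain(a, depth + 1)
                                      for a in antecedents):
            return True
    return False

print(backchain("is a carnivore"))   # True, after a three-level goal tree
```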
This mega-recitation covers Problem 2 from Quiz 1, Fall 2008. We start with depth-first search and breadth-first search, using a goal tree in each case. We then discuss branch and bound and A*, and why they give different answers in this problem.
Instructor: Mark Seifter
This mega-recitation covers Problem 1 from Quiz 2, Fall 2007. We start with a minimax search of the game tree, and then work an example using alpha-beta pruning. We also discuss static evaluation and progressive deepening (Problem 1-C, Fall 2008 Quiz 2).
Instructor: Mark Seifter
We begin by discussing neural net formulas, including the sigmoid and performance functions and their derivatives. We then work Problem 2 of Quiz 3, Fall 2008, which includes running one step of back propagation and matching neural nets with classifiers.
Instructor: Mark Seifter
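A minimal sketch of one back-propagation step on a single sigmoid neuron, following the course's conventions (performance P = −½(d − o)² and the handy derivative do/dz = o(1 − o)); the weight, input, and learning rate are invented.

```python
from math import exp

def sigmoid(z):
    return 1 / (1 + exp(-z))

w, b = 0.5, 0.1          # weight and bias (the threshold written as a bias)
x, d = 1.0, 1.0          # training input and desired output
rate = 1.0               # learning rate

z = w * x + b            # weighted sum
o = sigmoid(z)           # actual output, ~0.646

delta = (d - o) * o * (1 - o)   # dP/dz by the chain rule
w += rate * delta * x    # climb the gradient of performance
b += rate * delta

print(o, w, b)           # output rises toward d on the next pass
```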
We start by discussing what a support vector is, using two-dimensional graphs as an example. We work Problem 1 of Quiz 4, Fall 2008: identifying support vectors, describing the classifier, and using a kernel function to project points into a new space.
Instructor: Mark Seifter
This mega-recitation covers the boosting problem from Quiz 4, Fall 2009. We determine which classifiers to use, then perform three rounds of boosting, adjusting the weights in each round. This gives us an expression for the final classifier.
Instructor: Mark Seifter
This mega-recitation covers a question from the Fall 2007 final exam, in which we teach a robot how to identify a table lamp. Given a starting model, we identify a heuristic and adjust the model for each example; examples can be hits or near misses.
Instructor: Mark Seifter
This resource contains information related to rule-based systems, search.
This resource contains information related to games, constraint satisfaction problems.
This resource contains information related to K-nearest neighbors, decision trees, neural nets.
This resource contains information related to top-down approach to neural nets.
This resource contains information related to support vector machines, boosting.
This resource contains information related to probability, Bayes nets, naïve Bayes, model selection.
This resource contains information regarding assignment 0.
This resource contains information regarding assignment 1.
This resource contains information regarding assignment 2.
This resource contains information regarding assignment 3.
This resource contains information regarding assignment 4.
This resource contains information regarding assignment 5.
