If you have questions or remarks about Part I or the tutorials, contact Melanie Schmidt.
When | Where | Start | Lecturer |
---|---|---|---|
Monday, 10:00-11:30 and Wednesday, 12:00-13:30 | LBH / Hörsaal III.03a | April 11 | Schmidt (until June), Röglin (from June) |
Notice: The lecture notes have been updated on June 28.
Notice: The Lecture Notes for the complete lecture (Part I+II) have been published! This PDF is now the main course material and will be updated irregularly. The PDFs for lectures 1-13 are still available below, but they are outdated and will no longer be updated.
Date | Contents | Course Materials |
---|---|---|
April 11 | 1 Discrete Event Spaces and Probabilities · 1.1 Discrete Probability Spaces · 1.2 Independent Events | Lecture Notes; [MU05], pp. 3-6 |
April 13 | 1.2 (contd) Conditional Probability · 1.3 Applications · 1.3.1 The Minimum Cut Problem: Contract Alg. | Lecture Notes; [MU05], pp. 6-7 and pp. 12-13 |
April 18 | 1.3.1 (contd) The Minimum Cut Problem: Contract Alg., FastCut | Lecture Notes; [MU05], pp. 13-14; [MR95], pp. 289-294 |
April 20 | 1.3.1 (contd) The Minimum Cut Problem: FastCut · 1.3.2 Reservoir Sampling | Lecture Notes; [MR95], pp. 294-295 |
April 25 | 2 Evaluating Outcomes of a Random Process · 2.1 Random Variables and Expected Values | Lecture Notes; [MU05], pp. 20-23 |
April 27 | 2.1.1 Non-negative Integer Valued Random Variables · 2.1.2 Conditional Expected Values | Lecture Notes; [MU05], pp. 25, 31, 26-27 |
May 02 | 2.2 Binomial Distribution and Geometric Distribution · 2.3 Applications · 2.3.1 Randomized QuickSort | Lecture Notes; [MU05], pp. 30-31, 34-38, 25-26 |
May 04 | 2.3.2 Randomized Approximation Algorithms · 3 Concentration Bounds: Markov's Inequality | Lecture Notes; [MU05], pp. 129-130, 44 |
May 09 | 3.1 Variance and Chebyshev's Inequality · 3.2 Chernoff/Rubin Bounds · 3.3 Applications · 3.3.1 Parameter Estimation | Lecture Notes; [MU05], pp. 45, 47-49, 64, 66-68 |
May 11 | 3.3.2 Routing in Hypercubes | Lecture Notes; [MU05], pp. 72-74; [MR95], pp. 74-77 |
May 16 | no lecture (Pentecost) | – |
May 18 | no lecture (Pentecost) | – |
May 23 | 3.3.2 (contd) Routing in Hypercubes | Lecture Notes; [MR95], pp. 77-79 |
May 25 | no lecture (Dies Academicus) | – |
May 30 | 4 Random Walks · 4.1 Applications · 4.1.1 A Local Search Algorithm for 2-SAT | Lecture Notes; [MU05], pp. 156-159; [MR95], pp. 128-129 |
June 01 | 4.1.2 Local Search Algorithms for 3-SAT | Lecture Notes; [MU05], pp. 159-163 |
June 06 | 6 Knapsack Problem and Multiobjective Optimization · 6.1 Nemhauser-Ullmann Algorithm | Lecture Notes |
June 08 | 6.2 Number of Pareto-optimal Solutions · 6.2.1 Upper Bound | Lecture Notes |
June 13 | 6.2.1 (contd) Upper Bound · 6.3 Multiobjective Optimization · 6.2.2 Lower Bound | Lecture Notes |
June 15 | 6.2.2 (contd) Lower Bound · 6.4 Core Algorithms | Lecture Notes |
June 20 | 6.4 (contd) Core Algorithms · 7 Smoothed Complexity of Binary Optimization Problems | Lecture Notes |
June 22 | 7 (contd) Smoothed Complexity of Binary Optimization Problems | Lecture Notes |
June 27 | 7 (contd) Smoothed Complexity of Binary Optimization Problems · 8 Successive Shortest Path Algorithm · 8.1 The SSP Algorithm · 8.1.1 Elementary Properties · 8.2 Smoothed Analysis | Lecture Notes |
June 29 | 8.2.1 Terminology and Notation · 8.2.2 Proof of the Upper Bound · 8.2.3 Further Properties of the SSP Algorithm | Lecture Notes |
July 4 | 8.2.3 (contd) Further Properties of the SSP Algorithm · 8.2.4 Proof of Lemma 8.7 | Lecture Notes |
July 6 | 9 The 2-Opt Algorithm for the TSP · 9.1 Overview of Results · 9.2 Polynomial Bound for φ-Perturbed Graphs · 9.3 Improved Analysis | Lecture Notes |
July 11 | 9.3 (contd) Improved Analysis · 10 The k-Means Method | Lecture Notes |
July 13 | 10.1 Potential Drop in an Iteration of k-Means · 10.2 Iterations with Large Cluster Changes · 10.3 Iterations with Small Cluster Changes | Lecture Notes |
July 18 | 10.4 Proof of Theorem 10.1 | Lecture Notes |
When | Where | Start | Lecturer |
---|---|---|---|
Tuesday, 15:15-16:00 | LBH, E.08 | April 19 | Schmidt |
Tuesday, 16:15-17:00 | LBH, E.08 | April 19 | Schmidt |
The Lecture Notes cover the complete lecture. Part I is largely based on the following two books:

[MU05] Michael Mitzenmacher and Eli Upfal. Probability and Computing: Randomized Algorithms and Probabilistic Analysis. Cambridge University Press, 2005.

[MR95] Rajeev Motwani and Prabhakar Raghavan. Randomized Algorithms. Cambridge University Press, 1995.
The following PDFs correspond to the lectures in Part I. They are now outdated and will not be updated!
The lecture has two parts. In the first part, we consider the design and analysis of randomized algorithms. Many algorithmic problems can be solved more efficiently when randomized decisions are allowed. Moreover, randomized algorithms are often easier to design and analyze than their (known) deterministic counterparts. For example, we will see an elegant randomized algorithm for the minimum cut problem. Randomized algorithms can also be more robust: randomized Quicksort, for example, achieves an expected running time of O(n log n) on every input, while simple deterministic pivot rules admit worst-case inputs with quadratic running time.
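To give a first taste of Part I, here is a minimal Python sketch of the contraction algorithm for the minimum cut problem (the function name, the edge-list input format, and the repetition count are illustrative assumptions, not the course's reference code):

```python
import random

def contract_min_cut(edge_list):
    """One run of the contraction algorithm: repeatedly pick a uniformly
    random edge and merge its endpoints until only two super-nodes remain;
    the edges between them form a candidate cut."""
    parent = {v: v for e in edge_list for v in e}  # union-find forest

    def find(x):  # find a node's representative, with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    remaining = len(parent)
    while remaining > 2:
        u, v = random.choice(edge_list)
        ru, rv = find(u), find(v)
        if ru != rv:         # ignore edges inside an already merged super-node
            parent[ru] = rv  # contract the chosen edge
            remaining -= 1
    # cut size = number of edges crossing the two remaining super-nodes
    return sum(1 for u, v in edge_list if find(u) != find(v))

# One run finds a fixed minimum cut with probability at least 2/(n(n-1)),
# so O(n^2 log n) independent repetitions succeed with high probability.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)]
print(min(contract_min_cut(edges) for _ in range(200)))  # the example's minimum cut is 2
```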
The analysis of randomized algorithms builds on a set of powerful tools. We will get to know basic tools from probability theory, very useful tail inequalities, and techniques for analyzing random walks and Markov chains. We apply these techniques to develop and analyze algorithms for important algorithmic problems such as sorting and k-SAT.
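As a preview of the random-walk technique from Chapter 4, the following sketch implements the classical randomized local search for 2-SAT; the function name and the encoding of clauses as pairs of signed integers are illustrative choices:

```python
import random

def random_walk_2sat(clauses, n, max_steps):
    """Local search for 2-SAT via a random walk: start from a random
    assignment and, while some clause is violated, flip a uniformly random
    variable of a violated clause. For a satisfiable formula, the analysis
    shows that 2*n*n steps succeed with probability at least 1/2.
    Variables are 1..n; the literal -x stands for "not x"."""
    assignment = {v: random.random() < 0.5 for v in range(1, n + 1)}
    satisfied = lambda lit: assignment[abs(lit)] == (lit > 0)
    for _ in range(max_steps):
        violated = [c for c in clauses if not any(satisfied(l) for l in c)]
        if not violated:
            return assignment  # satisfying assignment found
        lit = random.choice(random.choice(violated))  # random literal of a violated clause
        assignment[abs(lit)] = not assignment[abs(lit)]
    return None  # no success within the step budget

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3), with n = 3 variables
print(random_walk_2sat([(1, 2), (-1, 3), (-2, -3)], n=3, max_steps=2 * 3 * 3))
```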
Statements about randomized algorithms are proven to hold either in expectation or with high probability over the random choices. This deviates from classical algorithm analysis but is still a worst-case analysis at its core. In the second part of the lecture, we learn about the probabilistic analysis of algorithms. There are a number of important problems and algorithms for which worst-case analysis does not provide useful or empirically accurate results. One prominent example is the simplex method for linear programming, whose worst-case running time is exponential, while in practice it runs in near-linear time on almost all inputs of interest. Another example is the knapsack problem. While this problem is NP-hard, it is a very easy optimization problem in practice, and even very large instances with millions of items can be solved efficiently. The reason for this discrepancy between worst-case analysis and empirical observations is that, for many algorithms, worst-case instances have an artificial structure and hardly ever occur in practical applications.
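To make it concrete why knapsack is often easy in practice, here is a compact Python sketch in the spirit of the Nemhauser-Ullmann algorithm from Chapter 6 (the item and solution representations are illustrative; the course's pseudocode may differ):

```python
def nemhauser_ullmann(items):
    """Compute the Pareto-optimal (weight, profit) pairs of a knapsack
    instance: add items one at a time and discard every dominated solution
    (one for which some other solution is at least as light and more
    profitable). The running time is polynomial in the number of
    Pareto-optimal solutions, which is small on typical inputs."""
    pareto = [(0, 0)]  # the empty solution
    for w, p in items:
        candidates = pareto + [(pw + w, pp + p) for pw, pp in pareto]
        candidates.sort(key=lambda s: (s[0], -s[1]))  # by weight; ties: best profit first
        pareto, best_profit = [], -1
        for weight, profit in candidates:  # sweep: keep strict profit improvements
            if profit > best_profit:
                pareto.append((weight, profit))
                best_profit = profit
    return pareto

print(nemhauser_ullmann([(3, 5), (4, 6), (2, 3)]))
```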
In smoothed analysis, one does not study the worst-case behavior of an algorithm but its (expected) behavior on random or randomly perturbed inputs. We will prove, for example, that there are algorithms for the knapsack problem whose expected running time is polynomial if the profits or weights are slightly perturbed at random. This shows that instances on which these algorithms require exponential running time are fragile with respect to random perturbations: even a small amount of randomness suffices to rule out such instances with high probability. Hence, smoothed analysis can be seen as an explanation for why these algorithms work well in practice. We will also apply smoothed analysis to the simplex method, clustering problems, and the traveling salesman problem, among others.
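The perturbation model itself fits in a few lines; in this illustrative sketch (the function name and the uniform noise of width `sigma` are assumptions for exposition), each adversarial profit receives independent random noise, which is the regime in which the smoothed polynomial bounds apply:

```python
import random

def perturb_profits(profits, sigma):
    """Add independent uniform noise from [0, sigma] to each profit.
    Smoothed analysis shows that after such a perturbation the expected
    number of Pareto-optimal knapsack solutions is polynomial in the
    number of items and 1/sigma, so worst-case instances are fragile."""
    return [p + random.uniform(0, sigma) for p in profits]

# perturb an adversarially chosen profit vector
print(perturb_profits([1, 2, 4, 8, 16], sigma=0.1))
```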
Even though there is no formal requirement to participate in the tutorials and to submit the homework problems, doing so is strongly recommended. Oral exams can be taken on July 27, July 28, July 29, and August 11. Please schedule your exam with Antje Bertram by June 30. The re-exams will take place on September 26.