Kernelization: New Upper and Lower Bound Techniques

Hans L. Bodlaender
Department of Information and Computing Sciences, Utrecht University,
P.O. Box 80.089, 3508 TB Utrecht, the Netherlands
hansb@cs.uu.nl

Abstract. In this survey, we look at kernelization: algorithms that transform, in polynomial time, an input of a problem into an equivalent input whose size is bounded by a function of a parameter. Several results of recent research on kernelization are mentioned. This survey looks at some recent results where a general technique shows the existence of kernelization algorithms for large classes of problems, in particular for planar graphs and generalizations of planar graphs, and at recent lower bound techniques that give evidence that certain types of kernelization algorithms do not exist.

Keywords: fixed parameter tractability, kernel, kernelization, preprocessing, data reduction, combinatorial problems, algorithms.

1 Introduction

In many cases, combinatorial problems that arise in practical situations are NP-hard. As we teach our students in algorithms class, there are a number of approaches: we can give up optimality and design approximation algorithms or heuristics; we can look at special cases or make assumptions about the input, e.g., that one or more variables are small; or we can design algorithms that sometimes take exponential time, but are as fast as possible. In the latter case, a common approach is to start the algorithm with preprocessing.

So, consider some hard (say, NP-hard) combinatorial problem. We start our algorithm with a preprocessing or data reduction phase, in which we transform the input I into an equivalent input I′ that is (hopefully) smaller (but never larger). Then, we solve the smaller input I′ optimally with some (exponential time) algorithm; in practical settings, we can use, e.g., an ILP solver, a branch-and-bound or branch-and-reduce algorithm, or a satisfiability solver. After we have obtained an optimal solution S′ for I′, we transform this solution back to an optimal solution S for I.

In this overview paper, we want to focus on the following question for given combinatorial problems: suppose the preprocessing phase takes polynomial time; what can we say about the size of the reduced instance, as a function of some parameter of the input? This question is nowadays phrased as: does the problem we consider have a kernel, and if so, how large is the kernel? So, kernelization gives us quantitative insights into what can be achieved by polynomial-time preprocessing.
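It may help to fix here what "having a kernel" means formally. The following is the standard formalization from the parameterized complexity literature, stated for reference (the notation is the usual one, not taken verbatim from this survey). A parameterized problem is a set $Q \subseteq \Sigma^{*} \times \mathbb{N}$ of (instance, parameter) pairs. A kernelization algorithm for $Q$ is a polynomial-time algorithm that maps an instance $(I, k)$ to an instance $(I', k')$ such that

\[
(I, k) \in Q \iff (I', k') \in Q, \qquad |I'| \le f(k), \qquad k' \le f(k)
\]

for some computable function $f$. We then say that $Q$ has a kernel of size $f(k)$; if $f$ is a polynomial, $Q$ has a polynomial kernel.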
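To make the preprocessing scheme above concrete, here is a minimal sketch of the classical kernelization of Buss for Vertex Cover ("is there a set of at most k vertices covering all edges?"). The code is illustrative only: the survey itself contains no code, and the function name and input representation are chosen just for this example.

```python
def buss_kernel(edges, k):
    """Buss kernelization for Vertex Cover (illustrative sketch).

    `edges` is the edge set of a simple graph, `k` the budget.
    Returns None if no vertex cover of size <= k exists; otherwise
    returns (kernel_edges, k_remaining, forced), where `forced` holds
    vertices that belong to every vertex cover of size <= k.
    """
    edges = {frozenset(e) for e in edges}
    forced = set()
    while True:
        # Count degrees over the current edge set.
        deg = {}
        for e in edges:
            for v in e:
                deg[v] = deg.get(v, 0) + 1
        # Rule 1: a vertex of degree > k must be in every cover of
        # size <= k (otherwise all of its > k neighbors would be needed).
        high = next((v for v, d in deg.items() if d > k), None)
        if high is None:
            break
        forced.add(high)
        edges = {e for e in edges if high not in e}
        k -= 1
        if k < 0:
            return None  # more forced vertices than budget: no-instance
    # Rule 2: all degrees are now <= k, so k vertices cover at most
    # k*k edges; if more edges remain, this is a no-instance.
    if len(edges) > k * k:
        return None
    return edges, k, forced
```

For example, on a star with five leaves plus a disjoint triangle and k = 3, Rule 1 forces the star center into the solution, and the kernel that remains is exactly the triangle with budget 2: an instance whose size is bounded by a function of k alone, independent of the size of the original graph.

In this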