Cache-Conscious Query Processing

Kenneth A. Ross
Columbia University, New York, NY, USA

Synonyms

Cache-aware query processing; cache-sensitive query processing

Definition

Query processing algorithms that are designed to efficiently exploit the available cache units in the memory hierarchy. Cache-conscious algorithms typically employ knowledge of architectural parameters such as cache size and latency. This knowledge can be used to ensure that the algorithms have suitable temporal and/or spatial locality on the target platform.

Historical Background

Between 1980 and 2005, processing speeds improved by roughly four orders of magnitude, while memory speeds improved by less than a single order of magnitude. As of 2017, it is common for data accesses to RAM to require several hundred CPU cycles to resolve. Many database workloads have shifted from being I/O-bound to being memory/CPU-bound as the amount of memory per machine has been increasing. For such workloads, improving the locality of data-intensive operations can have a direct impact on the system's overall performance.

Scientific Fundamentals

A cache is a hardware unit that speeds up access to data. Several cache units may be present at various levels of the memory hierarchy, depending on the processor architecture. For example, a processor may have a small but fast Level-1 (L1) cache for data and another L1 cache for instructions. The same processor may have a larger but slower L2 cache storing both data and instructions. Many processors also have an L3 cache. On multicore processors, the lower-level caches are typically shared among groups of cores. A special kind of cache for mapping virtual memory to physical memory is known as the translation lookaside buffer (TLB).

On a system with multiple CPUs, the caches of the different CPUs interact to ensure coherent access to data. Accessing data from a remote CPU cache is slower than accessing the corresponding local cache.
Similarly, accessing data resident in remote CPU RAM is slower than accessing data from local RAM, a phenomenon known as nonuniform memory access (NUMA).

Some initial analysis would typically be performed to determine the performance characteristics of a workload. For example, Ailamaki et al. [2] used hardware performance counters to demonstrate that several commercial systems

© Springer Science+Business Media, LLC, part of Springer Nature 2018
L. Liu, M. T. Özsu (eds.), Encyclopedia of Database Systems, https://doi.org/10.1007/978-1-4614-8265-9