
Graham Cormode, Minos Garofalakis, Peter J. Haas and Chris Jermaine (2011), "Synopses for Massive Data: Samples, Histograms, Wavelets, Sketches", Foundations and Trends® in Databases: Vol. 4: No. 1–3, pp 1-294. http://dx.doi.org/10.1561/1900000004

© 2012 G. Cormode, M. Garofalakis, P. J. Haas

**In this article:**

1. Introduction

2. Sampling

3. Histograms

4. Wavelets

5. Sketches

6. Conclusions and Future Research Directions

Acknowledgments

References

Methods for Approximate Query Processing (AQP) are essential for dealing with massive data. They are often the only means of providing interactive response times when exploring massive datasets, and are also needed to handle high-speed data streams. These methods proceed by computing a lossy, compact synopsis of the data and then executing the query of interest against the synopsis rather than the entire dataset. We describe basic principles and recent developments in AQP. We focus on four key synopses: random samples, histograms, wavelets, and sketches. We consider issues such as accuracy, space and time efficiency, optimality, practicality, range of applicability, error bounds on query answers, and incremental maintenance. We also discuss the tradeoffs between the different synopsis types.

*Synopses for Massive Data: Samples, Histograms, Wavelets, Sketches* describes basic principles and recent developments in
building approximate synopses (that is, lossy, compressed representations) of massive data. Such synopses enable approximate query processing,
in which the user's query is executed against the synopsis instead of the original data. It focuses on the four main families of synopses:
random samples, histograms, wavelets, and sketches.

A random sample comprises a "representative" subset of the data values of interest, obtained via a stochastic mechanism. Samples can be quick to obtain and can be used to approximately answer a wide range of queries.
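
As one illustration of such a stochastic mechanism, here is a minimal sketch of reservoir sampling (Vitter's Algorithm R), which maintains a uniform random sample of fixed size k over a stream whose length is not known in advance. The function and parameter names are illustrative, not taken from the text.

```python
import random

def reservoir_sample(stream, k):
    """Maintain a uniform random sample of k items from a stream
    of unknown length (Vitter's Algorithm R)."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            reservoir.append(item)
        else:
            # Replace a random reservoir slot with probability k / (i + 1);
            # this keeps every item seen so far equally likely to be sampled.
            j = random.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir

# Example: a size-5 uniform sample of a million-element stream.
sample = reservoir_sample(range(1_000_000), k=5)
```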
A histogram summarizes a data set by grouping the data values into subsets, or "buckets," and then, for each bucket, computing a small set of summary statistics that can be used to approximately reconstruct the data in the bucket. Histograms have been extensively studied and have been incorporated into the query optimizers of virtually all commercial relational DBMSs.
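
For concreteness, here is a minimal sketch of one of the simplest bucketing schemes, an equi-width histogram that stores a count per bucket and answers range-count queries by assuming values are spread uniformly within each bucket. The scheme, names, and uniformity assumption are illustrative; the text covers far more sophisticated histogram constructions.

```python
def build_equiwidth_histogram(values, lo, hi, num_buckets):
    """Count values of [lo, hi) into equal-width buckets."""
    width = (hi - lo) / num_buckets
    counts = [0] * num_buckets
    for v in values:
        # Clamp so values at the very top of the range still fit.
        b = min(int((v - lo) / width), num_buckets - 1)
        counts[b] += 1
    return counts, width

def estimate_range_count(counts, width, lo, a, b):
    """Approximate |{v : a <= v < b}|, assuming uniformity per bucket."""
    total = 0.0
    for i, c in enumerate(counts):
        b_lo, b_hi = lo + i * width, lo + (i + 1) * width
        overlap = max(0.0, min(b, b_hi) - max(a, b_lo))
        total += c * overlap / width  # fraction of bucket inside [a, b)
    return total

counts, width = build_equiwidth_histogram([1, 2, 2, 7, 9, 9], 0, 10, 5)
print(estimate_range_count(counts, width, 0, 1.5, 8))  # ~3.25 vs. true 3
```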
Wavelet-based synopses were originally developed in the context of image and signal processing. The data set is viewed as a vector of M elements, that is, as a function defined on the set {0, 1, 2, …, M-1}, and the wavelet transform expresses this function as a weighted sum of wavelet "basis functions." The weights, or coefficients, can then be "thresholded," e.g., by eliminating coefficients that are close to zero in magnitude. The remaining small set of coefficients serves as the synopsis. Wavelets are good at capturing features of the data set at various scales.
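
To make the transform-and-threshold pipeline concrete, here is a minimal sketch of the unnormalized Haar wavelet decomposition of a vector whose length M is a power of two, followed by keeping only the k largest-magnitude coefficients. This is a toy version; practical wavelet synopses normalize coefficients and threshold more carefully.

```python
def haar_transform(data):
    """Unnormalized Haar decomposition of a vector whose length is a
    power of two: [overall average] followed by detail coefficients."""
    coeffs = []
    current = list(data)
    while len(current) > 1:
        averages, details = [], []
        for i in range(0, len(current), 2):
            averages.append((current[i] + current[i + 1]) / 2)
            details.append((current[i] - current[i + 1]) / 2)
        coeffs = details + coeffs  # finer-scale details come later
        current = averages
    return current + coeffs

def threshold(coeffs, k):
    """Zero out all but the k largest-magnitude coefficients."""
    keep = set(sorted(range(len(coeffs)), key=lambda i: abs(coeffs[i]))[-k:])
    return [c if i in keep else 0.0 for i, c in enumerate(coeffs)]

# [2, 2, 0, 2, 3, 5, 4, 4] -> [2.75, -1.25, 0.5, 0, 0, -1, -1, 0]
print(threshold(haar_transform([2, 2, 0, 2, 3, 5, 4, 4]), k=3))
```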
Sketch summaries are particularly well suited to streaming data. Linear sketches, for example, view a numerical data set as a vector or matrix and multiply the data by a fixed matrix. Such sketches are massively parallelizable, and they can accommodate streams of transactions in which data is both inserted and removed. Sketches have also been used successfully to estimate the answer to COUNT DISTINCT queries, a notoriously hard problem.
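
As one concrete instance of a linear sketch, here is a minimal Count-Min sketch: each update adds its (possibly negative) count into one counter per row, chosen by a hash function, and a point query returns the minimum over those counters. The hashing and sizing below are simplified for illustration; real implementations derive width and depth from target error and failure probabilities and use pairwise-independent hash families.

```python
import hashlib

class CountMinSketch:
    """Minimal Count-Min sketch: depth rows of width counters. The
    summary is linear in the data, so deletions are supported too."""
    def __init__(self, width=272, depth=5):
        self.width, self.depth = width, depth
        self.counts = [[0] * width for _ in range(depth)]

    def _bucket(self, item, row):
        h = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
        return int(h, 16) % self.width

    def update(self, item, delta=1):
        # delta may be negative, accommodating insert/delete streams.
        for row in range(self.depth):
            self.counts[row][self._bucket(item, row)] += delta

    def estimate(self, item):
        # With nonnegative frequencies each counter overestimates the
        # true count, so the minimum is the tightest of the estimates.
        return min(self.counts[row][self._bucket(item, row)]
                   for row in range(self.depth))

cms = CountMinSketch()
for word in ["a", "b", "a", "c", "a"]:
    cms.update(word)
print(cms.estimate("a"))  # likely 3
```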

*Synopses for Massive Data* describes and compares the different synopsis methods. It also discusses the use of AQP within research systems and outlines challenges and future directions. It is essential reading for anyone working with, or doing research on, massive data.