Volume 10, issue 1
https://doi.org/10.5194/gmd-10-413-2017
Development and technical paper | 27 Jan 2017

The compression–error trade-off for large gridded data sets

Jeremy D. Silver and Charles S. Zender


Interactive discussion

Status: closed

Peer-review completion

AR: Author's response | RR: Referee report | ED: Editor decision
AR by Jeremy David Silver on behalf of the Authors (28 Oct 2016)
ED: Referee Nomination & Report Request started (09 Nov 2016) by Paul Ullrich
RR by Anonymous Referee #3 (17 Nov 2016)
RR by Anonymous Referee #1 (18 Nov 2016)
ED: Publish as is (19 Nov 2016) by Paul Ullrich
AR by Jeremy David Silver on behalf of the Authors (06 Dec 2016)
Short summary
Many modern scientific research projects generate large amounts of data. Storage space is valuable and may be limited; hence compression is vital. We tested different compression methods for large gridded data sets, assessing the space savings and the amount of precision lost. We found a general trade-off between precision and compression, with compression well-predicted by the entropy of the data set. A method introduced here proved to be a competitive archive format for gridded numerical data.
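
The compression–entropy relationship described above can be illustrated with a small, self-contained sketch. This is not the authors' code or data: the synthetic field, the rounding-based quantization step, and the use of zlib are assumptions chosen purely for illustration of the general idea that coarser quantization lowers the entropy of the stored values and thereby increases the achievable compression.

```python
# Illustrative sketch (assumed setup, not the paper's method): compare the
# Shannon entropy of a quantized gridded field with the compression actually
# achieved by a general-purpose compressor (zlib).
import zlib
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "gridded" field: smooth structure plus a little noise.
x = np.linspace(0, 4 * np.pi, 512)
field = np.sin(x)[:, None] * np.cos(x)[None, :] \
    + 0.01 * rng.standard_normal((512, 512))

def entropy_bits_per_value(values: np.ndarray) -> float:
    """Shannon entropy (bits per value) of the discrete value distribution."""
    _, counts = np.unique(values, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

for decimals in (1, 2, 3, 4):
    # Lossy step: round to a fixed number of decimal digits. Coarser rounding
    # discards more precision but leaves fewer distinct values to encode.
    quantized = np.round(field, decimals).astype(np.float32)

    raw = quantized.tobytes()
    compressed = zlib.compress(raw, level=9)

    h = entropy_bits_per_value(quantized)
    predicted_bytes = h * quantized.size / 8  # entropy-based size estimate
    ratio = len(raw) / len(compressed)

    print(f"decimals={decimals}  entropy={h:5.2f} bits/value  "
          f"compression ratio={ratio:5.2f}  "
          f"entropy-predicted size={predicted_bytes/1e6:.2f} MB  "
          f"actual compressed size={len(compressed)/1e6:.2f} MB")
```

Note that the per-value entropy ignores spatial correlation in the field, so a dictionary-based compressor can do somewhat better or worse than the estimate; the sketch is only meant to show the qualitative link between quantization, entropy, and compressed size.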