If the input file is compressed, the bytes read in from HDFS are reduced, which means less time is needed to read the data; this time saving is beneficial to the performance of job execution. Compressed input files are decompressed automatically as they are read by MapReduce, which uses the filename extension to determine which codec to use: a file ending in .gz, for example, is identified as gzip-compressed and read with GzipCodec.

Often we need to store job output as history files. If the amount of output per day is extensive, and we often need to keep history results for future use, these accumulated results will take up an extensive amount of HDFS space. However, such history files may not be used very frequently, so much of that space is wasted. It is therefore worth compressing the output before storing it on HDFS.

Even if your MapReduce application reads and writes uncompressed data, it may benefit from compressing the intermediate output of the map phase. Since the map output is written to disk and transferred across the network to the reducer nodes, using a fast compressor such as LZO or Snappy can yield performance gains simply because the volume of data to transfer is reduced.

gzip is based on the DEFLATE algorithm, which is a combination of LZ77 and Huffman coding.

bzip2 is a freely available, patent-free, high-quality data compressor. It typically compresses files to within 10% to 15% of the best available techniques (the PPM family of statistical compressors), while being around twice as fast at compression and six times faster at decompression.

The LZO compression format is composed of many smaller (~256 KB) blocks of compressed data, allowing jobs to be split along block boundaries. It was also designed with speed in mind: it decompresses about twice as fast as gzip, fast enough to keep up with hard-drive read speeds. It doesn't compress quite as well as gzip: expect files on the order of 50% larger than their gzipped versions. But that is still 20-50% of the size of the uncompressed files, which means that IO-bound jobs complete the map phase about four times faster.

Snappy is a compression/decompression library. It does not aim for maximum compression, or for compatibility with any other compression library; instead, it aims for very high speeds and reasonable compression. For instance, compared to the fastest mode of zlib, Snappy is an order of magnitude faster for most inputs, but the resulting compressed files are anywhere from 20% to 100% bigger. On a single core of a Core i7 processor in 64-bit mode, Snappy compresses at about 250 MB/sec or more and decompresses at about 500 MB/sec or more. Snappy is widely used inside Google, in everything from BigTable and MapReduce to its internal RPC systems.

All compression algorithms exhibit a space/time trade-off: faster compression and decompression speeds usually come at the expense of smaller space savings. The tools listed in the table above typically give some control over this trade-off at compression time by offering nine different levels: -1 means optimize for speed and -9 means optimize for space.

The different tools have very different compression characteristics. gzip is a general-purpose compressor and sits in the middle of the space/time trade-off. bzip2 compresses more effectively than gzip, but is slower; its decompression is faster than its compression, but it is still slower than the other formats. LZO and Snappy, on the other hand, both optimize for speed and are around an order of magnitude faster than gzip, but they compress less effectively. Snappy is also significantly faster than LZO at decompression.

When considering how to compress data that will be processed by MapReduce, it is important to understand whether the compression format supports splitting. Consider an uncompressed file stored in HDFS whose size is 1 GB. With an HDFS block size of 64 MB, the file will be stored as 16 blocks, and a MapReduce job using this file as input will create 16 input splits, each processed independently as input to a separate map task. Now imagine the file is a gzip-compressed file whose compressed size is 1 GB. As before, HDFS will store the file as 16 blocks. However, a gzip stream cannot be read starting at an arbitrary offset, so a map task cannot read its block independently of the others: the whole file must be processed as a single split, losing the parallelism of one map task per block.
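Compressing the final job output before it lands on HDFS is configuration-driven in Hadoop 2's MapReduce. A sketch of the relevant properties, shown as a `mapred-site.xml` fragment (they can equally be set per job; verify the names against your distribution's `mapred-default.xml`):

```xml
<!-- Compress the final job output with gzip -->
<property>
  <name>mapreduce.output.fileoutputformat.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.output.fileoutputformat.compress.codec</name>
  <value>org.apache.hadoop.io.compress.GzipCodec</value>
</property>
```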
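Compressing the intermediate map output is likewise a matter of configuration. A sketch of the Hadoop 2 property names, here selecting Snappy as the fast compressor (again, confirm against your distribution, and note that SnappyCodec requires the native Snappy library to be available):

```xml
<!-- Compress intermediate map output with Snappy -->
<property>
  <name>mapreduce.map.output.compress</name>
  <value>true</value>
</property>
<property>
  <name>mapreduce.map.output.compress.codec</name>
  <value>org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
```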
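The extension-based codec selection described above can be sketched with Python's standard-library codecs; this is a simplified stand-in for what Hadoop does internally, and the function name `open_maybe_compressed` is invented for illustration:

```python
import bz2
import gzip
import os

# Extension -> opener, mimicking how MapReduce infers the codec
# from the filename (.gz -> gzip, .bz2 -> bzip2).
OPENERS = {
    ".gz": gzip.open,
    ".bz2": bz2.open,
}

def open_maybe_compressed(path):
    """Open a file for reading, decompressing transparently by extension."""
    _, ext = os.path.splitext(path)
    opener = OPENERS.get(ext, open)  # unknown extension: read as plain text
    return opener(path, "rt")
```

A caller never needs to know whether the input was compressed, which is exactly the convenience MapReduce provides for compressed input files.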
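The -1/-9 space/time trade-off described above can be observed directly with zlib, the DEFLATE implementation underlying gzip. A small illustrative sketch (the sample data is arbitrary):

```python
import zlib

# Highly repetitive sample data compresses well at any level.
data = b"the quick brown fox jumps over the lazy dog. " * 2000

fast = zlib.compress(data, 1)  # level 1 (-1): optimize for speed
best = zlib.compress(data, 9)  # level 9 (-9): optimize for space

# Both shrink the data; level 9 spends more time for a smaller result.
assert len(best) <= len(fast) < len(data)
```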
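The block and split arithmetic in the 1 GB example can be sketched as follows. This is a deliberately simplified model (the helper names are invented, and real InputFormat split computation is more involved), but it captures why splittability matters:

```python
import math

BLOCK_SIZE = 64 * 1024 ** 2  # 64 MB HDFS block size, as in the example
FILE_SIZE = 1024 ** 3        # 1 GB file

def hdfs_blocks(file_size, block_size=BLOCK_SIZE):
    """Number of HDFS blocks used to store the file."""
    return math.ceil(file_size / block_size)

def input_splits(file_size, splittable, block_size=BLOCK_SIZE):
    # A non-splittable format (such as gzip) cannot be read from an
    # arbitrary offset, so the whole file becomes a single split.
    return hdfs_blocks(file_size, block_size) if splittable else 1

print(hdfs_blocks(FILE_SIZE))         # 16 blocks either way
print(input_splits(FILE_SIZE, True))  # uncompressed: 16 map tasks
print(input_splits(FILE_SIZE, False)) # gzip-compressed: 1 map task
```

The storage layout is identical in both cases; only the achievable map-side parallelism differs.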