1. Data integrity: any I/O operation should preserve the integrity of the data it handles. Hadoop naturally wants data to survive storage and processing without loss or corruption, and the usual way to detect corruption is a checksum.
- Data integrity in HDFS: clients verify checksums whenever they write or read HDFS files. Verification can be disabled by calling setVerifyChecksum(false) on the FileSystem before reading a file with open() (see the sketch after this list).
- Local file system: Hadoop's local file system performs client-side checksumming. When a file named filename is written, the filesystem client transparently creates a hidden companion file, .filename.crc, in the same directory; the chunk size used for checksumming is stored as metadata in that file, so the checksums can be verified whenever the file is read.
- ChecksumFileSystem: a wrapper file system that adds this kind of checksum verification to an underlying (raw) file system.
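A minimal sketch of the option mentioned above, assuming a reachable default file system and a file path passed on the command line; the class name ReadWithoutChecksum is illustrative, not from the original text. It disables client-side checksum verification before open(), trading corruption detection for the ability to read the file anyway.

import java.io.InputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class ReadWithoutChecksum {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Disable client-side checksum verification for subsequent reads on this FileSystem
        fs.setVerifyChecksum(false);
        // Open the file (path supplied on the command line) and copy it to stdout
        InputStream in = fs.open(new Path(args[0]));
        IOUtils.copyBytes(in, System.out, 4096, true);   // true: close the input stream when done
    }
}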
2. Compression: compression saves storage space and reduces the amount of data transferred over the network, so it matters a great deal in Hadoop. Hadoop's compression formats:
| Compression format | Algorithm | File extension | Multiple files | Splittable |
|---|---|---|---|---|
| DEFLATE | DEFLATE | .deflate | no | no |
| gzip (zip) | DEFLATE | .gz (.zip) | no (yes) | no (yes) |
| bzip2 | bzip2 | .bz2 | no | yes |
| LZO | LZO | .lzo | no | no |
| Compression format | Hadoop CompressionCodec |
|---|---|
| DEFLATE | org.apache.hadoop.io.compress.DefaultCodec |
| gzip | org.apache.hadoop.io.compress.GzipCodec |
| bzip2 | org.apache.hadoop.io.compress.BZip2Codec |
| LZO | com.hadoop.compression.lzo.LzopCodec |
A CompressionCodec makes compression and decompression straightforward: createOutputStream(OutputStream) returns a CompressionOutputStream, and uncompressed data written to it comes out compressed on the underlying stream; conversely, createInputStream(InputStream) returns a CompressionInputStream for decompression. The following program compresses data read from standard input with the codec named on the command line and writes the result to standard output (a decompression sketch follows it):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionOutputStream;
import org.apache.hadoop.util.ReflectionUtils;

public class StreamCompressor {
    public static void main(String[] args) throws Exception {
        // Fully qualified codec class name, e.g. org.apache.hadoop.io.compress.GzipCodec
        String codecClassname = args[0];
        Class<?> codecClass = Class.forName(codecClassname);
        Configuration configuration = new Configuration();
        // Instantiate the codec reflectively so any CompressionCodec implementation works
        CompressionCodec codec = (CompressionCodec) ReflectionUtils.newInstance(codecClass, configuration);
        // Wrap System.out: whatever is written to outputStream is compressed onto System.out
        CompressionOutputStream outputStream = codec.createOutputStream(System.out);
        IOUtils.copyBytes(System.in, outputStream, 4096, false);
        outputStream.finish();   // flush the compressor without closing the underlying stream
    }
}
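Going the other way, here is a hedged decompression sketch (the class name FileDecompressor and the output naming are illustrative, not from the original text): CompressionCodecFactory infers the codec from the file extension listed in the table above, and createInputStream() returns the CompressionInputStream mentioned earlier.

import java.io.InputStream;
import java.io.OutputStream;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.CompressionCodecFactory;

public class FileDecompressor {
    public static void main(String[] args) throws Exception {
        String uri = args[0];                       // e.g. /path/file.gz
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);

        Path inputPath = new Path(uri);
        // Infer the codec from the file extension (.gz -> GzipCodec, .bz2 -> BZip2Codec, ...)
        CompressionCodecFactory factory = new CompressionCodecFactory(conf);
        CompressionCodec codec = factory.getCodec(inputPath);
        if (codec == null) {
            System.err.println("No codec found for " + uri);
            System.exit(1);
        }

        // Strip the compression suffix to build the output file name
        String outputUri = CompressionCodecFactory.removeSuffix(uri, codec.getDefaultExtension());
        InputStream in = codec.createInputStream(fs.open(inputPath));
        OutputStream out = fs.create(new Path(outputUri));
        IOUtils.copyBytes(in, out, conf);           // copies and closes both streams
    }
}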
Compression can also be applied to MapReduce job output. With the old mapred API this means setting mapred.output.compress to true and mapred.output.compression.codec to the codec class, as in the following driver (MaxTemperatureMapper and MaxTemperatureReducer are assumed to be defined elsewhere):

import java.io.IOException;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class MaxTemperatureWithCompression {
    public static void main(String[] args) throws IOException {
        if (args.length != 2) {
            System.err.println("Usage: MaxTemperatureWithCompression <input path> " +
                "<output path>");
            System.exit(-1);
        }

        JobConf conf = new JobConf(MaxTemperatureWithCompression.class);
        conf.setJobName("Max temperature with output compression");

        FileInputFormat.addInputPath(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        // Compress the job output with gzip
        conf.setBoolean("mapred.output.compress", true);
        conf.setClass("mapred.output.compression.codec", GzipCodec.class,
            CompressionCodec.class);

        conf.setMapperClass(MaxTemperatureMapper.class);
        conf.setCombinerClass(MaxTemperatureReducer.class);
        conf.setReducerClass(MaxTemperatureReducer.class);

        JobClient.runJob(conf);
    }
}
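For the same effect, the old-API FileOutputFormat also offers convenience setters; assuming the conf object from the driver above, the two property calls can be written as:

// Equivalent to the conf.setBoolean/conf.setClass calls in the driver above
FileOutputFormat.setCompressOutput(conf, true);
FileOutputFormat.setOutputCompressorClass(conf, GzipCodec.class);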
3. Serialization: the conversion between structured objects and byte streams. Hadoop uses serialization for interprocess communication (RPC calls), and an RPC serialization format should be compact, fast, extensible, and interoperable. Hadoop uses its own serialization format, Writable:
package org.apache.hadoop.io;

import java.io.DataOutput;
import java.io.DataInput;
import java.io.IOException;

public interface Writable {
    // Serialize the object's fields to the output stream
    void write(DataOutput out) throws IOException;
    // Populate the object's fields from the input stream
    void readFields(DataInput in) throws IOException;
}
The following helper class exercises Writable implementations by serializing a Writable to a byte array and reading it back:

package WritablePackage;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

import org.apache.hadoop.io.Writable;
import org.apache.hadoop.util.StringUtils;

public class WritableTestBase {

    // Serialize a Writable into a byte array
    public static byte[] serialize(Writable writable) throws IOException {
        ByteArrayOutputStream outputStream = new ByteArrayOutputStream();
        DataOutputStream dataOutputStream = new DataOutputStream(outputStream);
        writable.write(dataOutputStream);
        dataOutputStream.close();
        return outputStream.toByteArray();
    }

    // Populate a Writable from a byte array and return the bytes that were read
    public static byte[] deserialize(Writable writable, byte[] bytes) throws IOException {
        ByteArrayInputStream inputStream = new ByteArrayInputStream(bytes);
        DataInputStream dataInputStream = new DataInputStream(inputStream);
        writable.readFields(dataInputStream);
        dataInputStream.close();
        return bytes;
    }

    // Serialize a Writable and render the result as a hex string
    public static String serializeToString(Writable src) throws IOException {
        return StringUtils.byteToHexString(serialize(src));
    }

    // Copy src into des by serializing src and deserializing the bytes into des;
    // returns the serialized form as a hex string
    public static String writeTo(Writable src, Writable des) throws IOException {
        byte[] data = deserialize(des, serialize(src));
        return StringUtils.byteToHexString(data);
    }
}
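A hedged usage sketch (not from the original text) exercising the helper with the built-in IntWritable:

package WritablePackage;

import org.apache.hadoop.io.IntWritable;

public class WritableDemo {
    public static void main(String[] args) throws Exception {
        // An IntWritable serializes to its 4-byte big-endian representation
        IntWritable src = new IntWritable(163);
        System.out.println(WritableTestBase.serializeToString(src));   // prints 000000a3

        // Round-trip: serialize src, then deserialize the bytes into a fresh IntWritable
        IntWritable dst = new IntWritable();
        WritableTestBase.writeTo(src, dst);
        System.out.println(dst.get());                                 // prints 163
    }
}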