11. MapReduce -- Custom Input

Published: 2020-06-11 11:55:03  Author: 隔壁小白
Source: web  Views: 227

As explained in "MapReduce--input之输入原理" (MapReduce input principles), implementing a custom input boils down to subclassing InputFormat and RecordReader and overriding their methods. The example below walks through the process.

1. Requirement

Merge many small files into one large file (somewhat like CombineFileInputFormat does) and write it out. The large file contains each small file's path followed by that file's contents.
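Before looking at the MapReduce code, the end result can be sketched in plain Java with no Hadoop involved (class and method names here are hypothetical, for illustration only): walk a directory and, for each small file, append its path and then its raw contents to one big buffer.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;

public class MergeSketch {
    // Append every file under inputDir to one buffer: the file's path on
    // one line, then the file's content. This mirrors the <path, bytes>
    // pairs the MapReduce job below produces.
    static String merge(Path inputDir) throws IOException {
        StringBuilder big = new StringBuilder();
        try (DirectoryStream<Path> files = Files.newDirectoryStream(inputDir)) {
            for (Path f : files) {
                big.append(f.toString()).append('\n');
                big.append(new String(Files.readAllBytes(f), StandardCharsets.UTF_8)).append('\n');
            }
        }
        return big.toString();
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("small-files");
        Files.write(dir.resolve("a.txt"), "hello".getBytes(StandardCharsets.UTF_8));
        String merged = merge(dir);
        System.out.println(merged.contains("a.txt") && merged.contains("hello")); // true
    }
}
```

The MapReduce version does the same thing, but distributes the reads and writes the result as a SequenceFile.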

2. Source code

InputFormat:

public class SFileInputFormat extends FileInputFormat<NullWritable, BytesWritable> {
    /**
     * Whether the input file may be split; returning false keeps each
     * small file whole in a single split
     * @param context
     * @param filename
     * @return
     */
    @Override
    protected boolean isSplitable(JobContext context, Path filename) {
        return false;
    }

    /**
     * Return the RecordReader that reads the file contents
     * @param inputSplit
     * @param taskAttemptContext
     * @return
     * @throws IOException
     * @throws InterruptedException
     */
    @Override
    public RecordReader<NullWritable, BytesWritable> createRecordReader(InputSplit inputSplit, TaskAttemptContext taskAttemptContext) throws IOException, InterruptedException {
        SRecordReader sRecordReader = new SRecordReader();
        // the framework also calls initialize() before reading,
        // so this explicit call is optional but harmless
        sRecordReader.initialize(inputSplit, taskAttemptContext);
        return sRecordReader;
    }
}

RecordReader:

public class SRecordReader extends RecordReader<NullWritable, BytesWritable> {
    private Configuration conf;
    private FileSplit split;
    // flag marking whether the current split has already been read
    private boolean process = false;
    private BytesWritable value = new BytesWritable();

    /**
     * Initialization: cache the split and the job configuration
     * @param inputSplit
     * @param taskAttemptContext
     * @throws IOException
     * @throws InterruptedException
     */
    @Override
    public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptContext) throws IOException, InterruptedException {
        split = (FileSplit)inputSplit;
        conf = taskAttemptContext.getConfiguration();
    }

    /**
     * Read the next key/value pair from the split
     * @return
     * @throws IOException
     * @throws InterruptedException
     */
    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {
        if (!process) {
            byte[] buffer = new byte[(int) split.getLength()];

            // get the file system for the split's path
            Path path = split.getPath();
            FileSystem fs = path.getFileSystem(conf);

            // open an input stream on the file
            FSDataInputStream fis = fs.open(path);

            // read the whole file into the buffer
            IOUtils.readFully(fis, buffer, 0, buffer.length);

            // load the bytes into value
            value.set(buffer, 0, buffer.length);

            // close the stream
            IOUtils.closeStream(fis);

            // mark the split as read so the next call returns false
            process = true;
            return true;
        }

        return false;
    }

    @Override
    public NullWritable getCurrentKey() throws IOException, InterruptedException {
        return NullWritable.get();
    }

    @Override
    public BytesWritable getCurrentValue() throws IOException, InterruptedException {
        return this.value;
    }

    @Override
    public float getProgress() throws IOException, InterruptedException {
        return process ? 1 : 0;
    }

    @Override
    public void close() throws IOException {

    }
}
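The heart of this RecordReader is a one-shot state machine: nextKeyValue() loads the entire file as a single value and returns true exactly once, then false forever after. Stripped of Hadoop, that logic looks like the following sketch (the class name is hypothetical):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

// Minimal non-Hadoop stand-in for SRecordReader's one-record state
// machine: the whole file becomes one value, delivered exactly once.
public class OneShotReader {
    private final Path file;
    private boolean processed = false;
    private byte[] value;

    public OneShotReader(Path file) { this.file = file; }

    public boolean nextKeyValue() throws IOException {
        if (!processed) {
            value = Files.readAllBytes(file); // whole file = one value
            processed = true;
            return true;
        }
        return false;
    }

    public byte[] getCurrentValue() { return value; }

    public float getProgress() { return processed ? 1f : 0f; }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("demo", ".txt");
        Files.write(f, "abc".getBytes(StandardCharsets.UTF_8));
        OneShotReader reader = new OneShotReader(f);
        System.out.println(reader.nextKeyValue());           // true: one record read
        System.out.println(reader.getCurrentValue().length); // 3
        System.out.println(reader.nextKeyValue());           // false: already consumed
    }
}
```

This is why isSplitable() must return false: the reader assumes the split covers the whole file.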

Mapper:

public class SFileMapper extends Mapper<NullWritable, BytesWritable, Text, BytesWritable> {
    Text k = new Text();

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        // the output key is the path of the file backing this split
        FileSplit inputSplit = (FileSplit)context.getInputSplit();
        String name = inputSplit.getPath().toString();
        k.set(name);
    }

    @Override
    protected void map(NullWritable key, BytesWritable value, Context context) throws IOException, InterruptedException {
        context.write(k, value);
    }   
}

Reducer:

public class SFileReducer extends Reducer<Text, BytesWritable, Text, BytesWritable> {
    @Override
    protected void reduce(Text key, Iterable<BytesWritable> values, Context context) throws IOException, InterruptedException {
        // each file-path key carries exactly one value, so write the first one
        context.write(key, values.iterator().next());
    }
}
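The reducer can safely take only iterator().next() because the map key is the file path and each unsplittable file produces exactly one record, so every key arrives with a single value. The grouping the shuffle performs can be sketched in plain Java (hypothetical class name, no Hadoop):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class GroupSketch {
    // Group (path, content) pairs by path, as the shuffle phase would.
    static Map<String, List<String>> group(List<String[]> pairs) {
        Map<String, List<String>> grouped = new LinkedHashMap<>();
        for (String[] kv : pairs) {
            grouped.computeIfAbsent(kv[0], k -> new ArrayList<>()).add(kv[1]);
        }
        return grouped;
    }

    public static void main(String[] args) {
        List<String[]> pairs = Arrays.asList(
                new String[]{"/a.txt", "aaa"},
                new String[]{"/b.txt", "bbb"});
        Map<String, List<String>> g = group(pairs);
        // since each path appears once, every group holds exactly one value
        System.out.println(g.get("/a.txt").size()); // 1
    }
}
```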

Driver:

public class SFileDriver {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        // local test paths: args[0] = input dir, args[1] = output dir
        args = new String[]{"G:\\test\\date\\A\\order\\", "G:\\test\\date\\A\\order2\\"};

        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        job.setJarByClass(SFileDriver.class);
        job.setMapperClass(SFileMapper.class);
        job.setReducerClass(SFileReducer.class);

        // set the input and output format classes; the default input format is TextInputFormat
        job.setInputFormatClass(SFileInputFormat.class);
        job.setOutputFormatClass(SequenceFileOutputFormat.class);

        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(BytesWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(BytesWritable.class);

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        boolean completed = job.waitForCompletion(true);
        System.exit(completed ? 0 : 1);
    }
}

A custom InputFormat must be registered on the job via job.setInputFormatClass().



