public class FileSplitInputFormat<K,V> extends WrapperInputFormat<org.apache.hadoop.io.NullWritable,org.apache.hadoop.mapreduce.lib.input.FileSplit,K,V>
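As a minimal sketch of how this input format might be wired into a MapReduce job (not taken from this page): the driver and mapper below assume FileSplitInputFormat is imported from its containing Oracle package, and that each input record is a (NullWritable, FileSplit) pair, matching the types in the class declaration above. The names FileSplitJobDriver and SplitPathMapper are hypothetical, and any extra configuration WrapperInputFormat may require (for example an internal input format via setInternalInputFormatClass from the method summary below) is not shown because its signature is not documented here.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Hypothetical driver; FileSplitInputFormat is assumed to be imported from its Oracle package.
public class FileSplitJobDriver {

  // Mapper receives one (NullWritable, FileSplit) pair per record, matching the
  // key/value types produced by FileSplitInputFormat's record reader.
  public static class SplitPathMapper
      extends Mapper<NullWritable, FileSplit, Text, NullWritable> {
    @Override
    protected void map(NullWritable key, FileSplit value, Context context)
        throws java.io.IOException, InterruptedException {
      // Emit the path and byte range covered by each split.
      context.write(new Text(value.getPath() + ":" + value.getStart()
          + "+" + value.getLength()), NullWritable.get());
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "file-split-listing");
    job.setJarByClass(FileSplitJobDriver.class);
    job.setInputFormatClass(FileSplitInputFormat.class);  // the class documented here
    job.setMapperClass(SplitPathMapper.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(NullWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```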
| Modifier and Type | Class and Description |
|---|---|
| static class | FileSplitInputFormat.FileSplitRecordReader |
| Constructor and Description |
|---|
| FileSplitInputFormat() |
| Modifier and Type | Method and Description |
|---|---|
| org.apache.hadoop.mapreduce.RecordReader<org.apache.hadoop.io.NullWritable,org.apache.hadoop.mapreduce.lib.input.FileSplit> | createRecordReader(org.apache.hadoop.mapreduce.InputSplit split, org.apache.hadoop.mapreduce.TaskAttemptContext context) |
Methods inherited from class WrapperInputFormat:
createInternalInputFormat, getFittingInputSplit, getInternalInputFormat, getInternalInputFormatClass, getRecordInfoProvider, getRecordInfoProviderClass, getSplits, setInternalInputFormatClass, setRecordInfoProviderClass

Methods inherited from class org.apache.hadoop.mapreduce.lib.input.FileInputFormat:
addInputPath, addInputPathRecursively, addInputPaths, computeSplitSize, getBlockIndex, getFormatMinSplitSize, getInputDirRecursive, getInputPathFilter, getInputPaths, getMaxSplitSize, getMinSplitSize, isSplitable, listStatus, makeSplit, makeSplit, setInputDirRecursive, setInputPathFilter, setInputPaths, setInputPaths, setMaxInputSplitSize, setMinInputSplitSize
public org.apache.hadoop.mapreduce.RecordReader<org.apache.hadoop.io.NullWritable,org.apache.hadoop.mapreduce.lib.input.FileSplit> createRecordReader(org.apache.hadoop.mapreduce.InputSplit split,
org.apache.hadoop.mapreduce.TaskAttemptContext context)
throws java.io.IOException,
java.lang.InterruptedException
Specified by:
createRecordReader in class org.apache.hadoop.mapreduce.InputFormat<org.apache.hadoop.io.NullWritable,org.apache.hadoop.mapreduce.lib.input.FileSplit>

Throws:
java.io.IOException
java.lang.InterruptedException
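A hedged sketch of the RecordReader lifecycle around this method: the helper below is hypothetical and not part of the API; it drives the reader the way the MapReduce framework would, assuming FileSplitInputFormat is imported from its containing package and that the split and task attempt context come from the job's split computation and task attempt.

```java
import java.io.IOException;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;

// Hypothetical helper class; FileSplitInputFormat is assumed to be imported from its package.
public class FileSplitReaderDemo {
  static void readAll(FileSplitInputFormat<?, ?> format,
                      InputSplit split,
                      TaskAttemptContext context)
      throws IOException, InterruptedException {
    // Obtain the reader for this split, then follow the standard RecordReader lifecycle.
    RecordReader<NullWritable, FileSplit> reader =
        format.createRecordReader(split, context);
    try {
      reader.initialize(split, context);
      while (reader.nextKeyValue()) {
        // Each record's value describes one file split (path plus byte range).
        FileSplit value = reader.getCurrentValue();
        System.out.println(value.getPath() + " [" + value.getStart()
            + ", " + (value.getStart() + value.getLength()) + ")");
      }
    } finally {
      reader.close();
    }
  }
}
```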