Is there a way to give a Mapper constructor args in Hadoop? Possibly via some library that wraps Job creation?
Here's my scenario:
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class HadoopTest {

    // Extractor turns a line into a "feature"
    public static interface Extractor {
        public String extract(String s);
    }

    // A concrete Extractor, configurable with a constructor parameter
    public static class PrefixExtractor implements Extractor {
        private int endIndex;

        public PrefixExtractor(int endIndex) {
            this.endIndex = endIndex;
        }

        public String extract(String s) {
            return s.substring(0, this.endIndex);
        }
    }

    public static class Map extends Mapper<Object, Text, Text, Text> {
        private Extractor extractor;

        // Constructor configures the extractor
        public Map(Extractor extractor) {
            this.extractor = extractor;
        }

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            String feature = extractor.extract(value.toString());
            context.write(new Text(feature), new Text(value.toString()));
        }
    }

    public static class Reduce extends Reducer<Text, Text, Text, Text> {
        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            for (Text val : values)
                context.write(key, val);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = new Job(conf, "test");
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.waitForCompletion(true);
    }
}
It should be clear that because the Mapper is handed to the configuration only as a class reference (Map.class), Hadoop has no way to pass constructor arguments and therefore cannot configure it with a specific Extractor.
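For completeness, the usual workaround I'm trying to avoid is threading primitives through the job Configuration and rebuilding the dependency by hand in setup(). A minimal sketch of what I mean (the ConfiguredMap class and the "prefix.length" key are placeholders of my own; this nested class would live inside HadoopTest alongside the others):

    public static class ConfiguredMap extends Mapper<Object, Text, Text, Text> {
        private Extractor extractor;

        @Override
        protected void setup(Context context) {
            // Rebuild the Extractor from a primitive stored in the job
            // Configuration; the driver would set it before submitting:
            //   conf.setInt("prefix.length", 3);
            int endIndex = context.getConfiguration().getInt("prefix.length", 1);
            this.extractor = new PrefixExtractor(endIndex);
        }

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            String feature = extractor.extract(value.toString());
            context.write(new Text(feature), new Text(value.toString()));
        }
    }

This works, but it couples the Mapper to one concrete Extractor and one set of primitive settings, which is exactly what constructor injection would avoid.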
There are some Hadoop wrapper frameworks like Scoobi, Crunch, and Scrunch (and probably more I don't know about) that seem to have this capability, but I don't know how they accomplish it. EDIT: After working with Scoobi some more, I discovered I was partly wrong about this: if you use an externally defined object inside the "mapper", Scoobi requires it to be serializable and complains at runtime if it isn't. So maybe the right approach is simply to make my Extractor serializable and deserialize it in the Mapper's setup method...
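If that is the way to go, here is a rough sketch of what I imagine, assuming the concrete Extractor implements java.io.Serializable. The class name, the configuration key, and the serialize helper are all hypothetical, and the Base64 codec is Java 8's java.util.Base64 (Hadoop's bundled commons-codec would do the same job on older JVMs):

    import java.io.ByteArrayInputStream;
    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.ObjectInputStream;
    import java.io.ObjectOutputStream;
    import java.util.Base64;

    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class SerializedExtractorMap extends Mapper<Object, Text, Text, Text> {

        // Key under which the serialized Extractor is stored; name is arbitrary.
        public static final String EXTRACTOR_KEY = "hadooptest.extractor";

        private HadoopTest.Extractor extractor;

        // Called from the driver before submitting the job:
        //   conf.set(EXTRACTOR_KEY, serialize(new PrefixExtractor(3)));
        // (PrefixExtractor must implement java.io.Serializable for this to work.)
        public static String serialize(HadoopTest.Extractor extractor) throws IOException {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(extractor);
            }
            return Base64.getEncoder().encodeToString(bytes.toByteArray());
        }

        @Override
        protected void setup(Context context) throws IOException {
            // Decode and deserialize the Extractor the driver stored in the
            // Configuration, restoring its constructor-configured state.
            byte[] bytes = Base64.getDecoder()
                    .decode(context.getConfiguration().get(EXTRACTOR_KEY));
            try (ObjectInputStream in =
                    new ObjectInputStream(new ByteArrayInputStream(bytes))) {
                this.extractor = (HadoopTest.Extractor) in.readObject();
            } catch (ClassNotFoundException e) {
                throw new IOException(e);
            }
        }

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            String feature = extractor.extract(value.toString());
            context.write(new Text(feature), new Text(value.toString()));
        }
    }

Is this roughly what frameworks like Scoobi do under the hood, or is there a cleaner mechanism?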
Also, I actually work in Scala, so Scala-based solutions are very welcome (if not encouraged!)