To develop Hadoop applications on a Debian system, follow these steps.
Hadoop is written in Java, so first install a JDK:
sudo apt update
sudo apt install openjdk-11-jdk
Verify the installation:
java -version
Download Hadoop from the official Apache Hadoop website and extract it to a local directory. The commands below use 3.3.4 as an example; check the download page for the current release:
wget https://downloads.apache.org/hadoop/common/hadoop-3.3.4/hadoop-3.3.4.tar.gz
sudo tar -xzvf hadoop-3.3.4.tar.gz -C /usr/local/
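Extracting into /usr/local normally requires root, which leaves the tree owned by root. If you plan to run Hadoop as your own user (assumed throughout this guide), hand ownership over:
sudo chown -R $USER:$USER /usr/local/hadoop-3.3.4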
Configure the Hadoop environment variables by editing ~/.bashrc and adding the lines below. Hadoop also needs JAVA_HOME; the path shown is where openjdk-11-jdk typically lands on amd64 Debian, so adjust it for your architecture:
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
export HADOOP_HOME=/usr/local/hadoop-3.3.4
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Apply the changes:
source ~/.bashrc
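You can confirm that the hadoop command is now on the PATH:
hadoop version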
Edit the Hadoop configuration files, which live under $HADOOP_HOME/etc/hadoop:
core-site.xml: core Hadoop settings, such as the default filesystem URI.
hdfs-site.xml: HDFS (Hadoop Distributed File System) settings.
mapred-site.xml: MapReduce settings.
yarn-site.xml: YARN (Yet Another Resource Negotiator) settings.
A minimal single-node configuration is sketched below.
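For local testing, a pseudo-distributed setup is the usual starting point. The values below are the conventional ones (hdfs://localhost:9000 as the default filesystem and a replication factor of 1), not the only valid choices.

In core-site.xml:
<configuration>
  <!-- URI that clients use to reach the default filesystem -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>

In hdfs-site.xml:
<configuration>
  <!-- single node, so keep only one copy of each block -->
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

The start scripts reach every node, including localhost, over SSH, so set up passwordless SSH and format the NameNode once before the very first start:
sudo apt install openssh-server
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
hdfs namenode -format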
Start HDFS and YARN:
start-dfs.sh
start-yarn.sh
Verify the cluster status:
jps
You should see processes such as NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager.
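If the daemons are up, the web UIs should also respond at their Hadoop 3.x default ports:
http://localhost:9870 (NameNode)
http://localhost:8088 (YARN ResourceManager)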
Write the MapReduce program in Java. Below is the classic WordCount example:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import java.io.IOException;
import java.util.StringTokenizer;
public class WordCount {

    // Mapper: split each input line into tokens and emit (word, 1).
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reducer: sum the counts collected for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // pre-aggregate counts on the map side
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
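A note on the combiner: reusing IntSumReducer as the combiner is only correct because integer addition is associative and commutative, so partial map-side sums do not change the final counts; a reduce function without those properties must not be used as a combiner.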
Compile against the Hadoop classpath, package the classes into a JAR, and submit the job. The last two arguments are the HDFS input and output paths, and the output path must not exist yet:
javac -cp $(hadoop classpath) WordCount.java
jar cf wordcount.jar WordCount*.class
hadoop jar wordcount.jar WordCount input output
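The job assumes the input directory already exists in HDFS. Creating it, uploading data, and reading the result back look like this, where sample.txt is a hypothetical local file standing in for your real input:
hdfs dfs -mkdir -p input
hdfs dfs -put sample.txt input/
hdfs dfs -cat output/part-r-00000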
Debug and optimize the application as needed. Hadoop's log files and the web UIs mentioned above let you monitor how jobs execute.
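If YARN log aggregation is enabled, the full logs of a finished job can also be pulled from the command line; the application ID is printed when the job is submitted, and the one below is only a placeholder:
yarn logs -applicationId application_1234567890123_0001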
Once the application passes local testing, it can be deployed to a production Hadoop cluster.
With the steps above, you can develop and run Hadoop applications on Debian. Depending on your needs, you may go on to learn and configure other components of the Hadoop ecosystem, such as Hive, Pig, or Spark.