Prerequisites
Before configuring HDFS on Debian, ensure the following prerequisites are met:

Java: Hadoop requires a JDK. Install OpenJDK 11:
sudo apt update && sudo apt install -y openjdk-11-jdk

SSH: Hadoop manages its daemons over SSH, so install the OpenSSH server and set up passwordless key-based login:
sudo apt install -y openssh-server
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
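To confirm both prerequisites are in place (this assumes a single-node setup, where the machine must be able to SSH to itself without a password):
java -version # Should report OpenJDK 11
ssh -o BatchMode=yes localhost exit && echo "Passwordless SSH OK"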
Step 1: Download and Install Hadoop
Download the Hadoop release from the Apache mirror:
wget https://downloads.apache.org/hadoop/common/hadoop-3.3.6/hadoop-3.3.6.tar.gz

Extract the archive to /usr/local and rename it to a stable path (/usr/local/hadoop):
sudo tar -xzvf hadoop-3.3.6.tar.gz -C /usr/local/
sudo mv /usr/local/hadoop-3.3.6 /usr/local/hadoop

Give your user ownership of the installation directory (replace your_username with your actual username):
sudo chown -R your_username:your_username /usr/local/hadoop
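A quick sanity check that the files landed where expected:
ls /usr/local/hadoop # Should list bin, etc, sbin, share, among others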
Step 2: Configure Environment Variables
Edit the ~/.bashrc file to add Hadoop-specific environment variables:
nano ~/.bashrc

Append the following lines:
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64 # Adjust based on your JDK installation path
Reload the shell configuration so the variables take effect:
source ~/.bashrc
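To confirm the variables took effect (assuming the paths above match your system):
echo $HADOOP_HOME # Should print /usr/local/hadoop
hadoop version # Should report Hadoop 3.3.6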
Step 3: Configure HDFS Core Files
All Hadoop configuration files are located in $HADOOP_HOME/etc/hadoop. Modify the following files to define HDFS behavior:
core-site.xml: Specifies the default file system and temporary directory.
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://namenode:9000</value> <!-- Replace 'namenode' with the actual hostname/IP of the NameNode -->
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/lib/hadoop/tmp</value> <!-- Temporary directory for Hadoop operations -->
  </property>
</configuration>
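The directory named in hadoop.tmp.dir must exist and be writable by the user that runs Hadoop; a minimal sketch, assuming the your_username account from Step 1:
sudo mkdir -p /var/lib/hadoop/tmp
sudo chown -R your_username:your_username /var/lib/hadoop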
hdfs-site.xml: Configures HDFS replication, NameNode/DataNode directories, and optional secondary NameNode settings.
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value> <!-- Replication factor (adjust based on cluster size; 3 is standard for production, use 1 on a single node) -->
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/hadoop/hdfs/namenode</value> <!-- Persistent storage for NameNode metadata -->
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/hadoop/hdfs/datanode</value> <!-- Storage directory for DataNode blocks -->
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>secondarynamenode:9868</value> <!-- Optional: Secondary NameNode address (9868 is the Hadoop 3.x default; 50090 was Hadoop 2.x) -->
  </property>
</configuration>
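The NameNode and DataNode directories must likewise exist with the correct ownership before formatting; for example:
sudo mkdir -p /data/hadoop/hdfs/namenode /data/hadoop/hdfs/datanode
sudo chown -R your_username:your_username /data/hadoop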
mapred-site.xml: Configures the MapReduce framework to run on YARN.
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
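On Hadoop 3.x, MapReduce jobs submitted to YARN may also need the MapReduce home directory exported to containers. If jobs fail with class-not-found errors, one commonly used (version-dependent) addition inside the same <configuration> block is:
<property>
  <name>yarn.app.mapreduce.am.env</name>
  <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
</property>
<property>
  <name>mapreduce.map.env</name>
  <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
</property>
<property>
  <name>mapreduce.reduce.env</name>
  <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
</property>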
yarn-site.xml: Configures YARN (Yet Another Resource Negotiator) for resource management.
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>resourcemanager</value> <!-- Replace with the actual hostname/IP of the ResourceManager -->
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value> <!-- Enables shuffle service for MapReduce -->
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
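Every hostname referenced in these files (namenode, secondarynamenode, resourcemanager) must resolve on every node. A hypothetical /etc/hosts sketch for a small cluster (addresses are placeholders; on a single-node install, all roles map to the one host):
192.168.1.10 namenode
192.168.1.11 secondarynamenode
192.168.1.12 resourcemanager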
Step 4: Format the NameNode
The NameNode must be formatted once before starting HDFS (this initializes the metadata storage). Run the following command on the NameNode machine:
hdfs namenode -format
Note: Formatting erases all existing HDFS data. Only run this command on a new cluster or if you need to reset the NameNode.
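To guard against accidental reformatting, a sketch that only formats when the metadata directory is uninitialized (assuming the dfs.namenode.name.dir path configured above; -nonInteractive makes the command fail instead of prompting):
if [ ! -f /data/hadoop/hdfs/namenode/current/VERSION ]; then
  hdfs namenode -format -nonInteractive
else
  echo "NameNode already formatted; skipping."
fi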
Step 5: Start HDFS Services
Start the HDFS daemons with the start-dfs.sh script (run from the NameNode):
$HADOOP_HOME/sbin/start-dfs.sh

Verify the daemons with the jps command:
jps

You should see the NameNode, DataNode, and SecondaryNameNode processes listed.
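Example jps output on a single-node setup (PIDs will differ):
2401 NameNode
2563 DataNode
2788 SecondaryNameNode
3105 Jps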
Step 6: Validate the Configuration
Open the NameNode web UI at http://<namenode-ip>:9870 to confirm HDFS is running, then exercise the file system from the command line:
hdfs dfs -mkdir -p /test # Create a test directory
hdfs dfs -put /path/to/local/file.txt /test # Upload a local file to HDFS
hdfs dfs -ls /test # List contents of the test directory
hdfs dfs -cat /test/file.txt # View the uploaded file
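For a broader health check, hdfs dfsadmin -report summarizes live DataNodes, capacity, and block counts; when done, the test directory can be removed:
hdfs dfsadmin -report # Show cluster-wide status
hdfs dfs -rm -r /test # Clean up the test directory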
Troubleshooting Tips
- Permission errors: verify that the HDFS directories (e.g., /data/hadoop/hdfs/namenode) have the correct ownership (hadoop:hadoop or your username).
- Startup failures: verify that JAVA_HOME is set correctly in hadoop-env.sh (located in $HADOOP_HOME/etc/hadoop).
- Check the Hadoop logs ($HADOOP_HOME/logs) for errors; common issues include formatting errors or incorrect fs.defaultFS configurations.
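When a daemon fails to start, its log usually names the cause directly. A quick way to inspect the most recent NameNode log (log file names include the user and hostname, so the glob below is approximate):
tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log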