
How to Configure HDFS on Debian


Prerequisites
Before configuring HDFS on Debian, ensure the following prerequisites are met:

  • A Debian system with a user that has sudo privileges.
  • A Java JDK installed (e.g., OpenJDK 11; its path is used as JAVA_HOME in Step 2).
  • Passwordless SSH access to localhost, which the start-dfs.sh script in Step 5 relies on to launch the daemons.
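
The quick checks below verify these prerequisites; the JDK path in the comment is an example and will vary with your installation:

java -version                # Verify Java is installed
readlink -f $(which java)    # Find the JDK home for JAVA_HOME, e.g. /usr/lib/jvm/java-11-openjdk-amd64/bin/java

ssh localhost exit           # Should return without a password prompt

# If the SSH check fails, generate a key and authorize it:
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys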

Step 1: Download and Install Hadoop

  1. Download the latest stable Hadoop release from the Apache website. For example:
    wget https://downloads.apache.org/hadoop/common/hadoop-3.3.6/hadoop-3.3.6.tar.gz
    
  2. Extract the tarball to a dedicated directory (e.g., /usr/local/hadoop):
    sudo tar -xzvf hadoop-3.3.6.tar.gz -C /usr/local/
    sudo mv /usr/local/hadoop-3.3.6 /usr/local/hadoop
    
  3. Set directory ownership to the current user (replace your_username with your actual username):
    sudo chown -R your_username:your_username /usr/local/hadoop
    

Step 2: Configure Environment Variables

  1. Edit the ~/.bashrc file to add Hadoop-specific environment variables:
    nano ~/.bashrc
    
  2. Append the following lines to the end of the file:
    export HADOOP_HOME=/usr/local/hadoop
    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
    export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64  # Adjust based on your JDK installation path
    
  3. Apply the changes to the current session:
    source ~/.bashrc
    

Step 3: Configure HDFS Core Files
All Hadoop configuration files are located in $HADOOP_HOME/etc/hadoop. For a minimal single-node setup, modify the following files to define HDFS behavior (a sketch of each follows this list):

  • core-site.xml: sets fs.defaultFS, the URI clients use to reach the NameNode.
  • hdfs-site.xml: sets the block replication factor and the NameNode/DataNode storage directories.
  • hadoop-env.sh: sets JAVA_HOME explicitly so the daemons can locate the JDK.
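
A minimal single-node sketch is shown below. The port (9000), replication factor (1), and storage paths under /usr/local/hadoop/hdfs are illustrative assumptions; adjust them for your environment:

cd $HADOOP_HOME/etc/hadoop

# core-site.xml: point clients at the NameNode (9000 is a commonly used port)
cat > core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
EOF

# hdfs-site.xml: replication of 1 suits a single node; the paths are example choices
cat > hdfs-site.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///usr/local/hadoop/hdfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///usr/local/hadoop/hdfs/datanode</value>
  </property>
</configuration>
EOF

# hadoop-env.sh: the daemons read JAVA_HOME from here, not from ~/.bashrc
echo 'export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64' >> hadoop-env.sh

# Create the storage directories declared above
mkdir -p /usr/local/hadoop/hdfs/namenode /usr/local/hadoop/hdfs/datanode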

Step 4: Format the NameNode
The NameNode must be formatted once before starting HDFS (this initializes the metadata storage). Run the following command on the NameNode machine:

hdfs namenode -format

Note: Formatting erases all existing HDFS data. Only run this command on a new cluster or if you need to reset the NameNode.
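
If formatting succeeded, the NameNode metadata directory (the dfs.namenode.name.dir path from the Step 3 sketch) now contains a current/ subdirectory holding a VERSION file and an initial fsimage:

ls /usr/local/hadoop/hdfs/namenode/current/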

Step 5: Start HDFS Services

  1. Start the HDFS daemons (NameNode and DataNodes) using the start-dfs.sh script (run from the NameNode):
    $HADOOP_HOME/sbin/start-dfs.sh
    
  2. Verify that the services are running by checking the process list:
    jps
    
    You should see the following processes:
    • NameNode: Manages HDFS metadata.
    • DataNode: Stores actual data blocks.
    • SecondaryNameNode (optional): Assists with NameNode metadata management.
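
When you later need to shut HDFS down cleanly, the matching stop script reverses this step:

$HADOOP_HOME/sbin/stop-dfs.sh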

Step 6: Validate the Configuration

  1. Check the HDFS status using the web interface (default port: 9870 for Hadoop 3.x):
    Open a browser and navigate to http://<namenode-ip>:9870.
  2. Run basic HDFS commands to verify functionality:
    hdfs dfs -mkdir -p /test  # Create a test directory
    hdfs dfs -put /path/to/local/file.txt /test  # Upload a local file to HDFS
    hdfs dfs -ls /test  # List contents of the test directory
    hdfs dfs -cat /test/file.txt  # View the uploaded file
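
Once these checks pass, the test directory can be removed; the -r flag deletes recursively:

hdfs dfs -rm -r /test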
    

Troubleshooting Tips

  • A daemon fails to start: check its log under $HADOOP_HOME/logs; each daemon writes a separate .log file.
  • "JAVA_HOME is not set" errors: define JAVA_HOME in $HADOOP_HOME/etc/hadoop/hadoop-env.sh, not only in ~/.bashrc.
  • The DataNode refuses to start after re-formatting the NameNode: the clusterID recorded in the DataNode's storage directory no longer matches the NameNode's; on a fresh cluster, clear the DataNode data directory and restart.
  • The web UI is unreachable: confirm the NameNode is listening on port 9870 and that no firewall rule blocks it.
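
A minimal sketch of those checks, assuming the example storage paths from Step 3:

# Inspect the tail of the NameNode log for errors
tail -n 50 $HADOOP_HOME/logs/hadoop-*-namenode-*.log

# The clusterID must match between the NameNode and DataNode VERSION files
grep clusterID /usr/local/hadoop/hdfs/namenode/current/VERSION
grep clusterID /usr/local/hadoop/hdfs/datanode/current/VERSION

# On a fresh cluster only: clearing the DataNode directory resolves a mismatch
# (this destroys any blocks stored on that DataNode)
rm -rf /usr/local/hadoop/hdfs/datanode/*

# Check that the NameNode web UI port is listening
ss -tlnp | grep 9870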
