This article mainly introduces how to use the Java API to operate HDFS. The content is detailed and easy to follow, and the steps are simple and quick, so it should have some reference value. Hopefully you will get something out of it; let's take a look.
You can use the listStatus method to list the files and directories under a given HDFS path.

The signature of listStatus is as follows:
/**
 * List the statuses of the files/directories in the given path if the path is
 * a directory.
 *
 * @param f given path
 * @return the statuses of the files/directories in the given patch
 * @throws FileNotFoundException when the path does not exist;
 *         IOException see specific implementation
 */
public abstract FileStatus[] listStatus(Path f) throws FileNotFoundException,
                                                       IOException;
As you can see, listStatus only needs a Path argument and returns an array of FileStatus.
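Before the fuller, recursive example later in the article, the basic call pattern can be sketched on its own. This is only an illustrative sketch: the object name and the "/tmp" path are placeholders, and it assumes the Hadoop configuration files are on the classpath.

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object ListStatusSketch {
  def main(args: Array[String]): Unit = {
    // FileSystem.get picks up fs.defaultFS from core-site.xml on the classpath
    val fs = FileSystem.get(new Configuration())
    // listStatus lists only the direct children of the path (one level, not recursive)
    val statuses = fs.listStatus(new Path("/tmp"))
    statuses.foreach { status =>
      println(s"${status.getPath} isDirectory=${status.isDirectory} length=${status.getLen}")
    }
  }
}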
FileStatus itself contains the following information:
/** Interface that represents the client side information for a file.
*/
@InterfaceAudience.Public
@InterfaceStability.Stable
public class FileStatus implements Writable, Comparable {
  private Path path;
  private long length;
  private boolean isdir;
  private short block_replication;
  private long blocksize;
  private long modification_time;
  private long access_time;
  private FsPermission permission;
  private String owner;
  private String group;
  private Path symlink;
  ....

As the fields show, FileStatus carries the file path, length, whether the entry is a directory, block_replication, blocksize, modification and access times, permission, owner, group, and so on.
import org.apache.hadoop.fs.{FileStatus, FileSystem, Path}
import org.apache.spark.sql.SparkSession
import org.apache.spark.{SparkConf, SparkContext}
import org.slf4j.LoggerFactory
object HdfsOperation {
  val logger = LoggerFactory.getLogger(this.getClass)

  def tree(sc: SparkContext, path: String): Unit = {
    val fs = FileSystem.get(sc.hadoopConfiguration)
    val fsPath = new Path(path)
    val status = fs.listStatus(fsPath)
    for (filestatus: FileStatus <- status) {
      logger.error("getPermission is: {}", filestatus.getPermission)
      logger.error("getOwner is: {}", filestatus.getOwner)
      logger.error("getGroup is: {}", filestatus.getGroup)
      logger.error("getLen is: {}", filestatus.getLen)
      logger.error("getModificationTime is: {}", filestatus.getModificationTime)
      logger.error("getReplication is: {}", filestatus.getReplication)
      logger.error("getBlockSize is: {}", filestatus.getBlockSize)
      if (filestatus.isDirectory) {
        val dirpath = filestatus.getPath.toString
        logger.error("directory path is: {}", dirpath)
        tree(sc, dirpath)
      } else {
        val fullname = filestatus.getPath.toString
        val filename = filestatus.getPath.getName
        logger.error("full file path is: {}", fullname)
        logger.error("file name is: {}", filename)
      }
    }
  }
}

If a FileStatus is a directory, tree calls itself recursively on that directory, so the whole subtree under the starting path gets traversed.
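For completeness, a driver that invokes tree might look roughly like the sketch below. This is only a sketch: the application name and the "/user/hadoop" path are placeholders, not part of the original example.

import org.apache.spark.sql.SparkSession

object TreeDriver {
  def main(args: Array[String]): Unit = {
    // Build (or reuse) a SparkSession and hand its SparkContext to the helper above
    val spark = SparkSession.builder().appName("HdfsOperationDemo").getOrCreate()
    HdfsOperation.tree(spark.sparkContext, "/user/hadoop")
    spark.stop()
  }
}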
The method above traverses both files and directories. If you only want to traverse files, use the listFiles method.
def findFiles(sc: SparkContext, path: String) = {
  val fs = FileSystem.get(sc.hadoopConfiguration)
  val fsPath = new Path(path)
  // the second argument enables recursive traversal
  val files = fs.listFiles(fsPath, true)
  while (files.hasNext) {
    val filestatus = files.next()
    val fullname = filestatus.getPath.toString
    val filename = filestatus.getPath.getName
    logger.error("full file path is: {}", fullname)
    logger.error("file name is: {}", filename)
    logger.error("file size is: {}", filestatus.getLen)
  }
}

The signature of listFiles is as follows:

/**
* List the statuses and block locations of the files in the given path.
*
* If the path is a directory,
* if recursive is false, returns files in the directory;
* if recursive is true, return files in the subtree rooted at the path.
* If the path is a file, return the file's status and block locations.
*
* @param f is the path
* @param recursive if the subdirectories need to be traversed recursively
*
* @return an iterator that traverses statuses of the files
*
* @throws FileNotFoundException when the path does not exist;
* IOException see specific implementation
*/
public RemoteIterator<LocatedFileStatus> listFiles(
final Path f, final boolean recursive)
throws FileNotFoundException, IOException {
...

From the source you can see that listFiles returns an iterable RemoteIterator<LocatedFileStatus>, whereas listStatus returns an array. In addition, listFiles only returns files.
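The difference can be made concrete with a small sketch (illustrative only: the method is assumed to sit inside the same HdfsOperation object so that logger is in scope). listStatus yields one entry per direct child, directories included, while listFiles with recursive = true walks the whole subtree and yields only files; each LocatedFileStatus also exposes its block locations.

// Illustrative comparison of listStatus and listFiles
def compareListings(sc: SparkContext, path: String): Unit = {
  val fs = FileSystem.get(sc.hadoopConfiguration)
  val fsPath = new Path(path)

  // listStatus: an array with one FileStatus per direct child (files and directories)
  val children = fs.listStatus(fsPath)
  logger.error("direct children: {}", children.length.toString)

  // listFiles(recursive = true): a RemoteIterator over files only, from the whole subtree
  val it = fs.listFiles(fsPath, true)
  while (it.hasNext) {
    val located = it.next() // LocatedFileStatus also carries block locations
    located.getBlockLocations.foreach { block =>
      logger.error("file {} has a block on hosts {}", located.getPath, block.getHosts.mkString(","))
    }
  }
}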
def mkdirToHdfs(sc: SparkContext, path: String) = {
  val fs = FileSystem.get(sc.hadoopConfiguration)
  val result = fs.mkdirs(new Path(path))
  if (result) {
    logger.error("mkdirs succeeded!")
  } else {
    logger.error("mkdirs failed!")
  }
}

def deleteOnHdfs(sc: SparkContext, path: String) = {
  val fs = FileSystem.get(sc.hadoopConfiguration)
  val result = fs.delete(new Path(path), true) // true: delete recursively
  if (result) {
    logger.error("delete succeeded!")
  } else {
    logger.error("delete failed!")
  }
}

def uploadToHdfs(sc: SparkContext, localPath: String, hdfsPath: String): Unit = {
  val fs = FileSystem.get(sc.hadoopConfiguration)
  fs.copyFromLocalFile(new Path(localPath), new Path(hdfsPath))
  fs.close() // note: FileSystem.get returns a cached instance shared by other callers
}

def downloadFromHdfs(sc: SparkContext, localPath: String, hdfsPath: String) = {
  val fs = FileSystem.get(sc.hadoopConfiguration)
  fs.copyToLocalFile(new Path(hdfsPath), new Path(localPath))
  fs.close() // note: FileSystem.get returns a cached instance shared by other callers
}

That concludes this article on how to use the Java API to operate HDFS. Thank you for reading! Hopefully you now have a reasonable understanding of the topic; if you would like to learn more, feel free to follow the Yisu Cloud (亿速云) industry news channel.