
I'm trying to mount my HDFS using the NFS gateway as it is documented here: http://hadoop.apache.org/docs/r2.7.1/hadoop-project-dist/hadoop-hdfs/HdfsNfsGateway.html

Unfortunately, following the documentation step by step does not work for me (Hadoop 2.7.1 on CentOS 6.6). When executing the mount command, I receive the following error message:

[root@server1 ~]# mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync server1:/ /hdfsmount/
mount.nfs: mounting server1:/ failed, reason given by server: No such file or directory
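
For reference, the gateway's RPC services and export list can be checked before mounting (these are the standard verification commands from the Hadoop documentation; server1 is assumed to be the host running the gateway):

[root@server1 ~]# rpcinfo -p server1
[root@server1 ~]# showmount -e server1

If the gateway is running correctly, showmount should list / as an export; an empty or missing export list usually means the nfs3 process is not (fully) up.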

I created the folder /hdfsmount myself, so it definitely exists. My questions are:

  • Has anyone faced the same issue?
  • Do I have to configure the NFS server before following the steps in the documentation (e.g., I read about editing /etc/exports)?

Any help is highly appreciated!

Comment (Jul 27, 2017): Where did you find the NFS log files?

1 Answer


I found the problem deep in the logs. When executing the command (see below) to start the nfs3 component of HDFS, the executing user needs permission to delete /tmp/.hdfs-nfs, which is configured as nfs.dump.dir in core-site.xml.

If the permissions are not set, you'll receive a log message like:

15/08/12 01:19:56 WARN fs.FileUtil: Failed to delete file or dir [/tmp/.hdfs-nfs]: it still exists.
Exception in thread "main" java.io.IOException: Cannot remove current dump directory: /tmp/.hdfs-nfs
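
A minimal fix, assuming the gateway runs as a non-root user (the name hdfs below is an assumption; substitute whichever user starts nfs3): pre-create the dump directory and make that user its owner:

[root]> mkdir -p /tmp/.hdfs-nfs
[root]> chown hdfs:hdfs /tmp/.hdfs-nfs

Alternatively, the nfs.dump.dir property can be pointed at a directory the gateway user already owns, for example (the path here is only an example):

<property>
  <name>nfs.dump.dir</name>
  <value>/var/hdfs-nfs-dump</value>
</property>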

Another option is to simply start the nfs3 component as root:

[root]> /usr/local/hadoop/sbin/hadoop-daemon.sh --script /usr/local/hadoop/bin/hdfs start nfs3
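
Once the gateway is up, the export list can be verified and the mount from the question retried (same verification command as in the Hadoop documentation; server1 is the gateway host):

[root]> showmount -e server1
[root]> mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync server1:/ /hdfsmount/

showmount should now report / in the exports list, and the mount should succeed.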
