= TL;DR =
A much shorter guide is available in [[https://help.ubuntu.com/stable/serverguide/network-file-system.html|the Ubuntu Server Guide]]. If you need more detail, read on.

= Introduction =
NFS (Network File System) allows you to 'share' a directory located on one networked computer with other computers and devices on that network. The computer where the directory is located is called the server, and the computers or devices connecting to that server are called clients. Clients usually 'mount' the shared directory to make it a part of their own directory structure.

NFS is perfect for creating NAS (Network Attached Storage) in a Linux/Unix environment. It is a native Linux/Unix protocol, as opposed to Samba, which uses the SMB protocol developed by Microsoft. The Apple OS has good support for NFS. Windows 7 has some support for NFS.

NFS is perhaps best for more 'permanent' network-mounted directories, such as `/home` directories or regularly accessed shared resources. If you want a network share that guest users can easily connect to, Samba is better suited, because tools to temporarily mount and detach from Samba shares exist more readily across old and proprietary operating systems.

Before deploying NFS you should be familiar with:
 * Linux file and directory permissions
 * Mounting and detaching (unmounting) filesystems

This HowTo makes an Ubuntu server NFSv4 ready.

= NFS quick start =
Provided you understand what you are doing, use this brief walk-through to set up an NFSv4 server on Ubuntu (with no authentication security), then mount the share on an Ubuntu client. It has been tested on Ubuntu 14.04.

== NFS server ==
To check whether the NFS server is already installed:
{{{
$ dpkg -l | grep nfs-kernel-server
}}}
Install the required packages:
{{{
# apt-get install nfs-kernel-server
}}}
For easier maintenance we will isolate all NFS exports in a single directory, into which the real directories will be mounted with the {{{--bind}}} option. Let's say we want to export our users' home directories in {{{/home/users}}}. First we create the export filesystem:
{{{
# mkdir -p /export/users
}}}
It's important that /export and /export/users have 777 permissions, as we will be accessing the NFS share from the client without LDAP/NIS authentication. This does not apply if you are using authentication (see below). Now mount the real users directory with:
{{{
# mount --bind /home/users /export/users
}}}
To save us from retyping this after every reboot, we add the following line to {{{/etc/fstab}}}:
{{{
/home/users    /export/users   none    bind  0  0
}}}
There are three configuration files that relate to an NFS server: {{{/etc/default/nfs-kernel-server}}}, {{{/etc/default/nfs-common}}} and {{{/etc/exports}}}. The only important option in {{{/etc/default/nfs-kernel-server}}} for now is NEED_SVCGSSD. It is set to "no" by default, which is fine, because we are not activating NFSv4 security this time.

In order for the ID names to be automatically mapped, both the client and the server require the {{{/etc/idmapd.conf}}} file to have the same contents, with the correct domain names. Furthermore, this file should have the following lines in its {{{Mapping}}} section:
{{{
[Mapping]

Nobody-User = nobody
Nobody-Group = nogroup
}}}
However, the client may have different requirements for the {{{Nobody-User}}} and {{{Nobody-Group}}}. For example, on Red Hat variants it is {{{nfsnobody}}} for both. {{{cat /etc/passwd}}} and {{{cat /etc/group}}} should show the "nobody" accounts. This way, the server and client do not need the users to share the same UID/GID.
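As a minimal sketch of what the complete file might look like on both machines (assuming the NFSv4 domain is {{{example.com}}} — substitute your own; the Pipefs-Directory path also varies between releases):
{{{
[General]
Verbosity = 0
Pipefs-Directory = /run/rpc_pipefs
Domain = example.com

[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup
}}}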
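After changing {{{/etc/idmapd.conf}}}, the id-mapping daemon needs to pick up the new settings. On releases that ship it as a separate job (an assumption — the job name varies between releases), something like:
{{{
# service idmapd restart
}}}
If the job name differs on your release, a reboot of the affected machine will also pick up the change.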
For those who use LDAP-based authentication, add the following lines to your client's idmapd.conf:
{{{
[Translation]

Method = nsswitch
}}}
This tells idmapd to look at nsswitch.conf to determine where it should look for credential information (and if you already have LDAP authentication working, nsswitch shouldn't require further explanation).

To export our directories to a local network {{{192.168.1.0/24}}}, we add the following two lines to {{{/etc/exports}}}:
{{{
/export       192.168.1.0/24(rw,fsid=0,insecure,no_subtree_check,async)
/export/users 192.168.1.0/24(rw,nohide,insecure,no_subtree_check,async)
}}}

=== Portmap Lockdown ===
~- optional -~

Add the following line to /etc/hosts.deny:
{{{
rpcbind mountd nfsd statd lockd rquotad : ALL
}}}
By blocking all clients first, only clients listed in /etc/hosts.allow below will be allowed to access the server. Now add the following line to /etc/hosts.allow:
{{{
rpcbind mountd nfsd statd lockd rquotad : list of IP addresses
}}}
where "list of IP addresses" is a list of the IP addresses of the server and all clients. These have to be IP addresses because of a limitation in portmap (it doesn't like hostnames). Note that if you have NIS set up, just add these to the same line.

'''Note:''' Ensure that the list of authorised IP addresses includes the localhost address ('''127.0.0.1'''), as the startup scripts in recent versions of Ubuntu use the '''rpcinfo''' command to discover NFSv3 support, and this will be disabled if localhost is unable to connect.

Now restart the service:
{{{
# service nfs-kernel-server restart
}}}

== NFSv4 client ==
Install the required packages:
{{{
# apt-get install nfs-common
}}}
On the client we can mount the complete export tree with one command (here {{{nfs-server}}} stands for the server's IP address or resolvable hostname):
{{{
# mount -t nfs -o proto=tcp,port=2049 nfs-server:/ /mnt
}}}
You can also specify the NFS server's hostname instead of its IP address, but in this case you need to ensure that the hostname can be resolved to an IP on the client side (you can use the /etc/hosts file for that). Note that {{{nfs-server:/export}}} is not necessary in NFSv4, as it is in NFSv3. The root export {{{nfs-server:/}}} defaults to the export with {{{fsid=0}}}.

We can also mount an exported ''subtree'' with:
{{{
# mount -t nfs -o proto=tcp,port=2049 nfs-server:/users /home/users
}}}
To save us from retyping this after every reboot, we add the following line to {{{/etc/fstab}}}:
{{{
nfs-server:/   /mnt   nfs    auto  0  0
}}}
If, after mounting, the entry in {{{/proc/mounts}}} appears as {{{nfs-server://}}} (with two slashes), then you might need to specify two slashes in {{{/etc/fstab}}}, or else {{{umount}}} might complain that it cannot find the mount.

The {{{auto}}} option mounts on startup. However, this will not work if your client uses a wifi connection managed at the user level (after login), because the network will not be available at boot time. In Ubuntu 12.04 LTS and later, wifi connections are managed at the system level by default, so auto-mounting of NFS shares at boot time should work fine. Ubuntu Server doesn't come with any init.d/netfs or other scripts to do this for you.

=== Portmap Lockdown ===
~- optional -~

Add the following line to /etc/hosts.deny:
{{{
rpcbind : ALL
}}}
By blocking all clients first, only clients listed in /etc/hosts.allow below will be allowed to access the server. Now add the following line to /etc/hosts.allow:
{{{
rpcbind : NFS server IP address
}}}
where "NFS server IP address" is the IP address of the server. '''This must be numeric!''' It's the way portmap works.
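As a quick sanity check from the client (a sketch, assuming the server is reachable as {{{nfs-server}}} as above), you can list what the server exports and confirm the share is mounted:
{{{
$ showmount -e nfs-server   # list the export table the server advertises
$ mount | grep nfs          # confirm the share appears among mounted filesystems
}}}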
= NFS Server =

== Pre-Installation Setup ==
'''None of the following pre-installation steps are strictly necessary.'''

==== User Permissions ====
NFS user permissions are based on user ID (UID). The UIDs of any users on the client must match those on the server in order for the users to have access. The typical ways of achieving this are:
 * Manual password file synchronization
 * Use of [[LDAPClientAuthentication|LDAP]]
 * Use of [[SettingUpNISHowTo|NIS]]

It's also important to note that you have to be careful on systems where the main user has root access: that user can change UIDs on the system to allow themselves access to anyone's files. This page assumes that the administrative team is the only group with root access and that they are all trusted. Anything else represents a more advanced configuration, and will not be addressed here.

==== Group Permissions ====
With NFS, a user's access to files is determined by his/her membership of groups on the client, not on the server. However, there is an important limitation: a maximum of 16 groups are passed from the client to the server, and, if a user is a member of more than 16 groups on the client, some files or directories might be unexpectedly inaccessible.

==== Host Names ====
~- optional if using DNS -~

Add any client names and IP addresses to /etc/hosts. The ''real'' (not 127.0.0.1) IP address of the server should already be there. This ensures that NFS will still work even if DNS goes down. You could rely on DNS if you wanted; it's up to you.

==== NIS ====
~- optional - perform these steps only if using NIS -~

'''Note:''' This '''only''' works if you are using NIS. Otherwise, you can't use netgroups, and should specify individual IPs or hostnames in {{{/etc/exports}}}. Read the '''BUGS''' section in {{{man netgroup}}}.

Edit /etc/netgroup and add a line to classify your clients. (This step is not strictly necessary, but is convenient.)
{{{
myclients (client1,,) (client2,,)
}}}
Obviously, more clients can be added. {{{myclients}}} can be anything you like; this is a ''netgroup name''. Run this command to rebuild the YP database:
{{{
sudo make -C /var/yp
}}}

==== Portmap Lockdown ====
~- optional -~

Add the following line to /etc/hosts.deny:
{{{
rpcbind mountd nfsd statd lockd rquotad : ALL
}}}
By blocking all clients first, only clients listed in /etc/hosts.allow below will be allowed to access the server. Now add the following line to /etc/hosts.allow:
{{{
rpcbind mountd nfsd statd lockd rquotad : list of IP addresses
}}}
where "list of IP addresses" is a list of the IP addresses of the server and all clients. These have to be IP addresses because of a limitation in portmap (it doesn't like hostnames). Note that if you have NIS set up, just add these to the same line.

== Installation and Configuration ==

==== Install NFS Server ====
{{{
sudo apt-get install rpcbind nfs-kernel-server
}}}

==== Shares ====
Edit /etc/exports and add the shares:
{{{
/home       @myclients(rw,sync,no_subtree_check)
/usr/local  @myclients(rw,sync,no_subtree_check)
}}}
The above shares /home and /usr/local to all clients in the myclients netgroup.
{{{
/home       192.168.0.10(rw,sync,no_subtree_check) 192.168.0.11(rw,sync,no_subtree_check)
/usr/local  192.168.0.10(rw,sync,no_subtree_check) 192.168.0.11(rw,sync,no_subtree_check)
}}}
The above shares /home and /usr/local to two clients with fixed IP addresses. This is best used only with machines that have static IP addresses.
{{{
/home       192.168.0.0/255.255.255.0(rw,sync,no_subtree_check)
/usr/local  192.168.0.0/255.255.255.0(rw,sync,no_subtree_check)
}}}
The above shares /home and /usr/local to all clients in the private network falling within the designated IP address range.

{{{rw}}} makes the share read/write, and {{{sync}}} requires the server to only reply to requests once any changes have been flushed to disk. This is the safest option ({{{async}}} is faster, but dangerous). It is strongly recommended that you read {{{man exports}}}.

After setting up /etc/exports, export the shares:
{{{
sudo exportfs -ra
}}}
You'll want to run this command whenever {{{/etc/exports}}} is modified.
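To double-check what the server is currently exporting, and with which options, you can list the active exports:
{{{
sudo exportfs -v
}}}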
==== Restart Services ====
By default, portmap only binds to the loopback interface. To enable access to portmap from remote machines, you need to change /etc/default/portmap to get rid of either "-l" or "-i 127.0.0.1".

If /etc/default/portmap was changed, portmap will need to be restarted:
{{{
sudo service portmap restart
}}}
The NFS kernel server will also require a restart:
{{{
sudo service nfs-kernel-server restart
}}}

=== Security Note ===
Aside from the UID issues discussed above, it should be noted that an attacker could potentially masquerade as a machine that is allowed to map the share, which would allow them to create arbitrary UIDs to access your files. One potential solution to this is [[IPSecHowTo|IPSec]]; see also the IPSec Notes section below. You can set up all your domain members to talk only to each other over IPSec, which will effectively authenticate that your client is who it says it is.

IPSec works by encrypting traffic to the server with the server's key, and the server sends back all replies encrypted with the client's key. The traffic is decrypted with the respective keys. If the client doesn't have the keys that the client is supposed to have, it can't send or receive data.

An alternative to IPSec is physically separate networks. This requires a separate network switch and separate ethernet cards, and physical security of that network.

= NFS Client =

== Installation ==
{{{
sudo apt-get install rpcbind nfs-common
}}}

==== Portmap Lockdown ====
~- optional -~

Add the following line to /etc/hosts.deny:
{{{
rpcbind : ALL
}}}
By blocking all clients first, only clients listed in /etc/hosts.allow below will be allowed to access the server. Now add the following line to /etc/hosts.allow:
{{{
rpcbind : NFS server IP address
}}}
where "NFS server IP address" is the IP address of the server. '''This must be numeric!''' It's the way portmap works.

==== Host Names ====
~- optional if using DNS -~

Add the server name to /etc/hosts. This ensures the NFS mounts will still work even if DNS goes down. You could rely on DNS if you wanted; it's up to you.

=== Mounts ===

==== Check to see if everything works ====
You should try to mount the share now. The basic template you will use is:
{{{
sudo mount ServerIP:/folder/already/setup/to/be/shared /home/username/folder/in/your/local/computer
}}}
So, for example:
{{{
sudo mount 192.168.1.42:/home/music /home/poningru/music
}}}

==== Mount at startup ====
NFS shares can either be mounted automatically when accessed, using autofs, or set up as static mounts with entries in /etc/fstab. Both are explained below.

==== Automounter ====
Install autofs:
{{{
sudo apt-get install autofs
}}}
The following configuration example sets up home directories to automount off an NFS server upon login. Other directories can be set up to automount upon access as well.

Add the following line to the end of /etc/auto.master:
{{{
/home /etc/auto.home
}}}
Now create /etc/auto.home and insert the following:
{{{
* solarisbox1.company.com.au,solarisbox2.company.com.au:/export/home/&
}}}
The above line automatically mounts any directory accessed at /home/[username] on the client machine from either solarisbox1.company.com.au:/export/home/[username] or solarisbox2.company.com.au:/export/home/[username]. Restart autofs to enable the configuration:
{{{
sudo service autofs restart
}}}
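With the maps above in place, autofs mounts a home directory the moment it is accessed. For example (assuming a user {{{fred}}} exists on the file servers — a hypothetical name):
{{{
$ ls /home/fred    # autofs transparently mounts .../export/home/fred on demand
}}}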
==== Static Mounts ====
Prior to setting up the mounts, make sure the directories that will act as mount points are already created. In /etc/fstab, add lines for the shares, such as:
{{{
servername:dir /mntpoint nfs rw,hard,intr 0 0
}}}
The {{{rw}}} option mounts the share read/write. Obviously, if the server is sharing it read-only, the client won't be able to mount it as anything more than that. The {{{hard}}} option mounts the share such that if the server becomes unavailable, the program will wait until it is available again; the alternative is {{{soft}}}. {{{intr}}} allows you to interrupt/kill the process; otherwise, it will ignore you. Documentation for these options can be found in the {{{Mount options for nfs}}} section of {{{man mount}}}.

The filesystems can now be mounted with {{{mount /mntpoint}}}, or {{{mount -a}}} to mount everything that should be mounted at boot.
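For example, given the fstab entry above:
{{{
sudo mount /mntpoint    # mount just this one entry
sudo mount -a           # mount everything marked for mounting at boot
}}}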
=== Notes ===

==== Minimalistic NFS Set Up ====
The steps above are very comprehensive. The minimum number of steps required to set up NFS is listed here: [[http://www.ubuntuforums.org/showthread.php?t=249889]]

==== Using Groups with NFS Shares ====
When using groups on NFS shares (NFSv2 or NFSv3), keep in mind that this might not work if a user is a member of more than 16 groups. This is due to limitations in the NFS protocol. You can find more information on Launchpad ([[https://bugs.launchpad.net/ubuntu/+source/linux-source-2.6.20/+bug/110132|"Permission denied when user belongs to group that owns group writable or setgid directories mounted via nfs"]]) and in this article: [[http://nfsworld.blogspot.com/2005/03/whats-deal-on-16-group-id-limitation.html|"What's the deal on the 16 group id limitation in NFS?"]]

== IPSec Notes ==
If you're using IPSec, the default shutdown order in Breezy/Dapper causes the client to hang as it is being shut down, because IPSec goes down before NFS does. To fix it, run:
{{{
sudo update-rc.d -f setkey remove
sudo update-rc.d setkey start 37 0 6 S .
}}}
A bug has been filed here: https://launchpad.net/distros/ubuntu/+source/ipsec-tools/+bug/37536

== Troubleshooting ==

=== Mounting NFS shares in encrypted home won't work on boot ===
Mounting an NFS share inside an encrypted home directory will only work after you are successfully logged in and your home is decrypted. This means that using {{{/etc/fstab}}} to mount NFS shares on boot will not work, because your home has not been decrypted at the time of mounting. There is a simple way around this using symbolic links:

 * Create an alternative directory to mount the NFS shares in:
{{{
$ sudo mkdir /nfs
$ sudo mkdir /nfs/music
}}}
 * Edit {{{/etc/fstab}}} to mount the NFS share into that directory instead:
{{{
nfsServer:music /nfs/music nfs auto 0 0
}}}
 * Create a symbolic link inside your home, pointing to the actual mount location. In our case, delete the existing 'Music' directory there first:
{{{
$ rmdir /home/user/Music
$ ln -s /nfs/music/ /home/user/Music
}}}

== Other resources ==
 * [[https://en.wikipedia.org/wiki/Network_File_System|Network File System]] on Wikipedia

----