First you will need a kernel with the NFS file system either compiled in or available as a module. This is configured before you compile the kernel. If you have never compiled a kernel before you might need to check the Kernel HOWTO and figure it out. If you're using a very cool distribution (like Red Hat) and you've never fiddled with the kernel or modules on it (and thus ruined it ;-), NFS is likely automagically available to you.
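A quick way to check whether the kernel you are running knows about NFS (assuming /proc is mounted, which it normally is) is:

grep nfs /proc/filesystems

If that prints a line containing nfs you are set. If NFS was built as a module it may not appear in the list until the module is loaded; kerneld will normally take care of that for you when you mount.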
You can now, at a root prompt, enter an appropriate mount command and the file system will appear. Continuing the example in the previous section we want to mount /mn/eris/local from eris. This is done with this command:
mount -o rsize=1024,wsize=1024 eris:/mn/eris/local /mnt
(We'll get back to the rsize and wsize options.) The file system is now available under /mnt and you can cd there, ls in it, and look at the individual files. You will notice that it's not as fast as a local file system, but a lot more convenient
than ftp. If, instead of mounting the file system, mount produces an error message like

mount: eris:/mn/eris/local failed, reason given by server: Permission denied

then the exports file is wrong, or you forgot to run exportfs after editing the exports file. If it says

mount clntudp_create: RPC: Program not registered

it means that nfsd or mountd is not running on the server.
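If you get the RPC error you can check from the client whether the server has the services registered with its portmapper at all. rpcinfo is handy for this (eris is, as before, our example server):

rpcinfo -p eris

The listing should include mountd and nfs; if either is missing, go to the server and start the missing daemon.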
To get rid of the file system you can say
umount /mnt
To make the system mount an NFS file system upon boot you edit /etc/fstab in the normal manner. For our example a line such as this is required:
# device              mountpoint  fs-type  options                dump  fsckorder
...
eris:/mn/eris/local   /mnt        nfs      rsize=1024,wsize=1024  0     0
...
That's all there is to it, almost. Read on, please.
There are some options you should consider adding at once. They govern the way the NFS client handles a server crash or network outage. One of the cool things about NFS is that it can handle this gracefully, if you set up the clients right. There are two distinct failure modes:
soft: The NFS client will report an error to the process accessing a file on an NFS mounted file system. Some programs can handle this with composure, most won't. I cannot recommend using this setting.
hard: The program accessing a file on an NFS mounted file system will hang when the server crashes. The process cannot be interrupted or killed unless you also specify intr. When the NFS server is back online the program will continue undisturbed from where it was. This is probably what you want. I recommend using hard,intr on all NFS mounted file systems.
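On the command line the same options look like this, extending the mount command from earlier:

mount -o rsize=1024,wsize=1024,hard,intr eris:/mn/eris/local /mnt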
Picking up the previous example, this is now your fstab entry:
# device              mountpoint  fs-type  options                          dump  fsckorder
...
eris:/mn/eris/local   /mnt        nfs      rsize=1024,wsize=1024,hard,intr  0     0
...
Normally, if no rsize and wsize options are specified NFS will read and write in chunks of 4096 or 8192 bytes. Some combinations of Linux kernels and network cards cannot handle blocks that large, and it might not be optimal anyway. So we'll want to experiment and find an rsize and wsize that work and are as fast as possible. You can test the speed of your options with some simple commands. Given the mount command above, and that you have write access to the disk, you can test the sequential write performance like this:
time dd if=/dev/zero of=/mnt/testfile bs=16k count=4096
This creates a 64Mb file of zeroed bytes (which should be large enough that caching plays no significant part in the measured performance; use a larger file if you have a lot of memory). Do it a couple (5-10?) of times and average the times. It is the `elapsed' or `wall clock' time that's most interesting in this connection. Then you can test the read performance by reading the file back:
time dd if=/mnt/testfile of=/dev/null bs=16k
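Depending on your shell the output of time will look something like this (the numbers here are made up, only the format matters); the `real' figure is the elapsed time you want:

real    0m8.4s
user    0m0.1s
sys     0m0.4s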
Do that a couple of times and average. Then umount, and mount again with a larger rsize and wsize. They should probably be multiples of 1024, and not larger than 16384 bytes since that's the maximum block size in NFS version 2. Directly after mounting with a larger size, cd into the mounted file system and do things like ls, and explore the fs a bit to make sure everything is as it should be. If the rsize/wsize is too large the symptoms are very odd and not 100% obvious. A typical symptom is incomplete file lists when doing ls, with no error messages, or files mysteriously failing to read, again with no error messages. After establishing that the given rsize/wsize works you can do the speed tests again. Different server platforms are likely to have different optimal sizes. SunOS and Solaris are reputedly a lot faster with 4096 byte blocks than with anything else.
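A remount round might, for example, look like this (2048 being just one possible next step, pick your own sizes):

umount /mnt
mount -o rsize=2048,wsize=2048 eris:/mn/eris/local /mnt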
Newer Linux kernels (since 1.3 sometime) perform read-ahead for rsizes larger than or equal to the machine page size. On Intel CPUs the page size is 4096 bytes. Read-ahead will significantly increase the NFS read performance. So on an Intel machine you will want a 4096 byte rsize if at all possible.
Remember to edit /etc/fstab to reflect the rsize/wsize you found.
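If, say, 4096 byte blocks turned out to be the winner, the entry might end up looking like this:

# device              mountpoint  fs-type  options                          dump  fsckorder
eris:/mn/eris/local   /mnt        nfs      rsize=4096,wsize=4096,hard,intr  0     0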
A trick to increase NFS write performance is to disable synchronous writes on the server. The NFS specification states that NFS write requests shall not be considered finished before the data written is on a non-volatile medium (normally the disk). This restricts the write performance somewhat; asynchronous writes will speed NFS writes up. The Linux nfsd has never done synchronous writes since the Linux file system implementation does not lend itself to this, but on non-Linux servers you can increase the performance this way with this in your exports file:
/dir -async,access=linuxbox
or something similar. Please refer to the exports man page on the machine in question. Please note that this increases the risk of data loss.