A network file system (NFS) is a client/server application that allows users to access shared files stored on computers within the same network. NFS was originally developed by Sun Microsystems in 1984 as an internal file-sharing system. Although the first version was never made publicly available, subsequent public releases have been widely used.
What does NFS do?
NFS allows users to mount remote directories locally. Sun Microsystems first introduced NFS as a way to share files between its workstations. Later, it evolved into an industry standard and was ported to various operating systems.
NFS is designed around clients mounting servers’ directories as if they were local devices. The NFS service itself listens on TCP and UDP port 2049, while port 111 is used by the portmapper, implemented on modern systems as rpcbind. The portmapper acts as a directory service, maintaining mappings between RPC (Remote Procedure Call) program numbers and the network ports where those services listen. NFS standards are currently managed by the Internet Engineering Task Force (IETF).
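On a Linux host running rpcbind, the port mappings described above can be inspected with `rpcinfo`; a quick sketch (output varies by system, and the host name `nfs-server` is a placeholder):

```shell
# Query the local portmapper (rpcbind) for all registered RPC services.
# Program 100000 is the portmapper itself on port 111; program 100003
# is the NFS service, which listens on port 2049.
rpcinfo -p

# The same query can be made against a remote NFS server:
rpcinfo -p nfs-server
```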
In general, NFS is favored as a low-cost alternative that uses infrastructure resources already available. It allows for centralized management, so any authorized user can access remote files as if they were stored locally on the user’s own hard disk. One downside, however, is that it’s based on Remote Procedure Calls (RPCs), which have inherent security risks.
Versions of NFS
Since the original version was launched in 1984, there have been numerous versions of NFS that have been adopted by all types of enterprises:
- NFSv2: The first public version of NFS released in 1989. It used User Datagram Protocol (UDP) exclusively and had limited data access and transfer capabilities. This version has since become obsolete.
- NFSv3: Released in 1995, it expanded the file offsets of version 2 to allow a larger amount of data to be processed at a faster rate. It also added Transmission Control Protocol (TCP) as a transport option. Despite later updates, NFSv3 is the most widely used version of NFS today.
- NFSv4: Released in 2003 as a stateful (as opposed to stateless) file system for better performance and security. It was the first version developed by the IETF; versions 4.1 and 4.2 followed with a few added features and relatively minor updates.
How does NFS work?
A network file system uses RPCs to route requests between clients and servers using a client-server model. The server exports a directory tree, allowing clients to mount it according to the access rights assigned to each export (read-only or read-write). When a user changes a file on an NFS-mounted partition, the change is made on the server and is immediately visible to every client that mounts the share. As such, NFS is ideal for sharing directories between multiple systems.
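As a sketch of that model, an administrator might export a directory on the server and mount it on a client as follows (the host name `nfs-server` and the paths are placeholders, and exact options vary by distribution):

```shell
# --- On the server ---
# /etc/exports lists the directory trees to export and per-client access,
# e.g. a line such as:
#   /srv/share  192.168.1.0/24(rw,sync,no_subtree_check)
sudo exportfs -ra        # re-read /etc/exports and apply the export list
sudo exportfs -v         # verify what is currently exported

# --- On the client ---
sudo mount -t nfs nfs-server:/srv/share /mnt/share
# Files under /mnt/share now behave like local files; a change made here
# lands on the server and is visible to every other client of the share.
```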
Benefits of NFS
NFS provides several benefits, such as enabling local access to remote files, centralized management of data, and more. Other benefits of NFS service include:
- The process of mounting the file system is transparent to all users.
- Users can use the system to share files between computers on their network without physically transferring them from one computer to another.
- Centralization of data reduces system admin overhead.
- Storage costs are reduced because computers can share applications, rather than each machine needing its own local disk space for every user application.
- All users may read the same files, ensuring that data is always up to date and consistent.
NFS protocol development
1980s: Sun’s NFS vs. AT&T’s RFS
RFS (Remote File System) from AT&T was one of the first networked file systems. It was developed at AT&T Bell Laboratories in the early 1980s and was first delivered with UNIX System V Release 3 (SVR3). Each system call in SVR3 was mapped directly to a corresponding server call. However, because this mapping depended on the exact semantics of SVR3’s system calls, it only worked between SVR3 systems.
RFS also had difficulties with server failures and reboots, since the server had to keep a great deal of state for every client and every file a client had open. That state could not be recovered after a reboot, so when an RFS server went down, it usually took all of its clients with it.
In 1984, Sun Microsystems released an early version of its NFS for UNIX. NFSv2 attempted to solve RFS’s drawbacks by making the server completely stateless and defining a small set of remote procedures that offered a basic set of file system operations in a far less operating system-dependent manner than RFS. Further, it attempted to be file system-agnostic to the point that it could be easily converted to multiple Unix file systems.
To unify the market, AT&T announced a partnership in 1987 with Sun Microsystems, the major proponent of the Berkeley-derived strain of UNIX, to jointly develop AT&T’s UNIX System V Release 4.
However, AT&T’s other licensees of the UNIX System were deeply concerned by the development. As a result, they banded together to create their own new open-systems operating system and named their organization Open Software Foundation (OSF). In reaction, the AT&T/Sun faction established UNIX International.
1990s: ISOC gained the right to add new versions
Sun Microsystems and the Internet Society, an umbrella body for the Internet Engineering Task Force (IETF), reached an agreement in 1998 that granted the Internet Society control over future versions of NFS. As a result of the agreement, the IETF specified NFS version 4 in 2003.
2000s: AFS code donated to the free software community, NFSv4.1
The Andrew File System (AFS) is a distributed computing platform developed and implemented at Carnegie Mellon University, where it served as the fundamental method for sharing information among clients in that environment. Transarc Corporation, later acquired by IBM, took over development of AFS.
AFS was later adopted by an industry coalition as the basis of DFS (Distributed File System), resulting in Transarc DFS, a component of the OSF organization’s Distributed Computing Environment (DCE). In 2000, IBM’s Transarc Lab announced that AFS would be made available as open-source software called OpenAFS under the IBM Public License, while Transarc DFS would be retired as a commercial product.
NFS protocol extensions
WebNFS is an extension of NFS designed to make files accessible to clients across the internet and through firewalls. It uses a well-known port and public file handles so that a browser-style client can reach an NFS server without first contacting the portmapper.
NLM (Network Lock Manager) is a file-locking protocol used to synchronize access to NFS-shared files. The NLM is composed of two daemons: rpc.lockd and rpc.statd. The lockd daemon manages network locks, while the rpc.statd daemon monitors network status. NLM requires both the statd and lockd daemons to be running on all hosts that share an NFS server for it to function properly.
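Whether the lock-manager and status-monitor services are registered on a host can be checked through the portmapper; a quick sketch (registered service names may differ slightly by platform):

```shell
# nlockmgr is the NLM lock manager (rpc.lockd); status is rpc.statd's
# network status monitor. Both must be registered for NFS file locking
# to work.
rpcinfo -p | grep -E 'nlockmgr|status'
```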
Remote quota (rquotad) is a daemon that reports disk quota information for file systems exported by an NFS server, so that users on NFS clients can see and enforce their quotas on remotely mounted file systems. The daemon is usually started from the system startup scripts at boot time; its name comes from Remote Quota, referring to disk quotas for remote users.
NFS over RDMA
NFS over RDMA runs the NFS protocol over a remote direct memory access (RDMA) transport to achieve high performance and low latency. It enables NFS clients to move data to and from remote file servers with minimal CPU involvement, which benefits CPU-intensive transfers such as large data files or databases.
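On Linux, an NFS mount can be directed over an RDMA-capable fabric with the `rdma` mount option; a sketch, assuming a server `nfs-server` exporting `/data` and listening for RDMA on the conventional port 20049:

```shell
# Load the RDMA transport module for the NFS client, then mount using
# the rdma option; 20049 is the port commonly used for NFS over RDMA.
sudo modprobe xprtrdma
sudo mount -t nfs -o rdma,port=20049 nfs-server:/data /mnt/data
```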
NFS-Ganesha is a user-space NFS file server. It serves all major versions of the NFS protocol, including NFSv3, 4.0, 4.1, and 4.2, as well as 9P from the Plan 9 operating system. Its FSAL (File System Abstraction Layer) decouples the protocol front end from the storage back end, allowing Ganesha to export a variety of underlying file systems, including file systems implemented in user space.
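A minimal NFS-Ganesha export might be configured in `ganesha.conf` with an `EXPORT` block like the following sketch (the path, pseudo path, and export ID are placeholders; `FSAL { Name = VFS; }` selects the back end that re-exports a local POSIX file system):

```
EXPORT {
    Export_Id = 1;                 # unique ID for this export
    Path = /srv/share;             # directory to export
    Pseudo = /share;               # NFSv4 pseudo-filesystem path
    Access_Type = RW;              # read-write access
    Protocols = 3, 4;              # serve NFSv3 and NFSv4
    FSAL { Name = VFS; }           # back end: local POSIX file system
}
```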
The Trusted Network File System (TNFS) extension is a file system protocol that enables clients to access files on remote servers over standard network protocols in a multilevel secure (MLS) internet environment.
TNFS is designed for environments where file system security is required. It extends NFS with security-related file attributes and adds enhancements for file open, file naming, and multilevel directories, supporting network file access in an MLS internet environment.