Original development tree for the Linux kernel GTP module; now long in mainline.
NFS: Define and create superblock-level objects

Define and create superblock-level cache index objects (as managed by nfs_server structs). Each superblock object is created in a server-level index object and is itself an index into which inode-level objects are inserted.

Ideally there would be one superblock-level object per server, and the former would be folded into the latter; however, since the "nosharecache" option exists this isn't possible.

The superblock object key is a sequence consisting of:

 (1) Certain superblock s_flags.

 (2) Various connection parameters that serve to distinguish superblocks for sget().

 (3) The volume FSID.

 (4) The security flavour.

 (5) The uniquifier length.

 (6) The uniquifier text. This is normally an empty string, unless the fsc=xyz mount option was used to explicitly specify a uniquifier.

The key blob is of variable length, depending on the length of (6).

The superblock object is given no coherency data to carry in the auxiliary data permitted by the cache. It is assumed that the superblock is always coherent.

This patch also adds uniquification handling such that two otherwise identical superblocks, at least one of which is marked "nosharecache", won't end up trying to share the on-disk cache. It will be possible to manually provide a uniquifier through a mount option with a later patch to avoid the error otherwise produced.

Signed-off-by: David Howells <dhowells@redhat.com>
Acked-by: Steve Dickson <steved@redhat.com>
Acked-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Acked-by: Al Viro <viro@zeniv.linux.org.uk>
Tested-by: Daire Byrne <Daire.Byrne@framestore.com>
13 years ago
NFS: Share NFS superblocks per-protocol per-server per-FSID

The attached patch makes NFS share superblocks between mounts from the same server and FSID over the same protocol.

It does this by creating each superblock with a false root and returning the real root dentry in the vfsmount presented by get_sb(). The root dentry set starts off as an anonymous dentry if we don't already have the dentry for its inode, otherwise it simply returns the dentry we already have.

We may thus end up with several trees of dentries in the superblock, and if at some later point one of the anonymous tree roots is discovered by normal filesystem activity to be located in another tree within the superblock, the anonymous root is named and materialises attached to the second tree at the appropriate point.

Why do it this way? Why not pass an extra argument to the mount() syscall to indicate the subpath and then pathwalk from the server root to the desired directory? You can't guarantee this will work for two reasons:

 (1) The root and intervening nodes may not be accessible to the client.

     With NFS2 and NFS3, for instance, mountd is called on the server to get the filehandle for the tip of a path. mountd won't give us handles for anything we don't have permission to access, and so we can't set up NFS inodes for such nodes, and so can't easily set up dentries (we'd have to have ghost inodes or something).

     With this patch we don't actually create dentries until we get handles from the server that we can use to set up their inodes, and we don't actually bind them into the tree until we know for sure where they go.

 (2) Inaccessible symbolic links.

     If we're asked to mount two exports from the server, e.g.:

         mount warthog:/warthog/aaa/xxx /mmm
         mount warthog:/warthog/bbb/yyy /nnn

     We may not be able to access anything nearer the root than xxx and yyy, but we may find out later that /mmm/www/yyy, say, is actually the same directory as the one mounted on /nnn. What we might then find out, for example, is that /warthog/bbb was actually a symbolic link to /warthog/aaa/xxx/www, but we can't actually determine that by talking to the server until /warthog is made available by NFS.

     This would lead to having constructed an erroneous dentry tree which we can't easily fix. We can end up with a dentry marked as a directory when it should actually be a symlink, or we could end up with an apparently hardlinked directory.

     With this patch we need not make assumptions about the type of a dentry for which we can't retrieve information, nor need we assume we know its place in the grand scheme of things until we actually see that place.

This patch reduces the possibility of aliasing in the inode and page caches for inodes that may be accessed by more than one NFS export. It also reduces the number of superblocks required for NFS where there are many NFS exports being used from a server (home directory server + autofs for example).

This in turn makes it simpler to do local caching of network filesystems, as it can then be guaranteed that there won't be links from multiple inodes in separate superblocks to the same cache file. Obviously, cache aliasing between different levels of NFS protocol could still be a problem, but at least that gives us another key to use when indexing the cache.

This patch makes the following changes:

 (1) The server record construction/destruction has been abstracted out into its own set of functions to make things easier to get right. These have been moved into fs/nfs/client.c.

     All the code in fs/nfs/client.c has to do with the management of connections to servers, and doesn't touch superblocks in any way; the remaining code in fs/nfs/super.c has to do with VFS superblock management.

 (2) The sequence of events undertaken by NFS mount is now reordered:

     (a) A volume representation (struct nfs_server) is allocated.

     (b) A server representation (struct nfs_client) is acquired. This may be allocated or shared, and is keyed on server address, port and NFS version.

     (c) If allocated, the client representation is initialised. The state member variable of nfs_client is used to prevent a race during initialisation from two mounts.

     (d) For NFS4 a simple pathwalk is performed, walking from FH to FH to find the root filehandle for the mount (fs/nfs/getroot.c). For NFS2/3 we are given the root FH in advance.

     (e) The volume FSID is probed for on the root FH.

     (f) The volume representation is initialised from the FSINFO record retrieved on the root FH.

     (g) sget() is called to acquire a superblock. This may be allocated or shared, keyed on client pointer and FSID.

     (h) If allocated, the superblock is initialised.

     (i) If the superblock is shared, then the new nfs_server record is discarded.

     (j) The root dentry for this mount is looked up from the root FH.

     (k) The root dentry for this mount is assigned to the vfsmount.

 (3) nfs_readdir_lookup() creates dentries for each of the entries readdir() returns; this function now attaches disconnected trees from alternate roots that happen to be discovered attached to a directory being read (in the same way nfs_lookup() is made to do for lookup ops).

     The new d_materialise_unique() function is now used to do this, thus permitting the whole thing to be done under one set of locks, and thus avoiding any race between mount and lookup operations on the same directory.

 (4) The client management code uses a new debug facility: NFSDBG_CLIENT which is set by echoing 1024 to /proc/net/sunrpc/nfs_debug.

 (5) Clone mounts are now called xdev mounts.

 (6) Use the dentry passed to the statfs() op as the handle for retrieving fs statistics rather than the root dentry of the superblock (which is now a dummy).

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
15 years ago
NFS: refactor nfs_find_client and reference client across callback processing

Fixes a bug where the nfs_client could be freed during callback processing.

Refactor nfs_find_client to use minorversion-specific means to locate the correct nfs_client structure.

In the NFS layer, v4.0 clients are found using the callback_ident field in the CB_COMPOUND header. v4.1 clients are found using the sessionID in the CB_SEQUENCE operation, which is also compared against the sessionID associated with the back channel thread after a successful CREATE_SESSION. Each of these methods finds the one and only nfs_client associated with the incoming callback request - so nfs_find_client_next is not needed.

In the RPC layer, the pg_authenticate call needs to find the nfs_client. For the v4.0 callback service, the callback identifier has not been decoded, so a search by address, version, and minorversion is used. The sessionid for the sessions-based callback service has (usually) not been set for the pg_authenticate on a CB_NULL call, which can be sent prior to the return of a CREATE_SESSION call, so the sessionid associated with the back channel thread is not used to find the client in pg_authenticate for CB_NULL calls.

Pass the referenced nfs_client to each CB_COMPOUND operation being processed via the new cb_process_state structure. The reference is held across cb_compound processing.

Use the new cb_process_state struct to move the NFS4ERR_RETRY_UNCACHED_REP processing from process_op into nfs4_callback_sequence, where it belongs.

Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
11 years ago
nfs41: add session setup to the state manager

At mount, nfs_alloc_client sets the cl_state NFS4CLNT_LEASE_EXPIRED bit and nfs4_alloc_session sets the NFS4CLNT_SESSION_SETUP bit, so both bits are set when nfs4_lookup_root calls nfs4_recover_expired_lease, which schedules the nfs4_state_manager and waits for it to complete.

Place the session setup after the clientid establishment in nfs4_state_manager so that the session is set up right after the clientid has been established, without rescheduling the state manager.

Unlike NFSv4.0, the nfs_client struct is not ready to use until the session has been established. Postpone marking the nfs_client struct NFS_CS_READY until after a successful CREATE_SESSION call so that other threads cannot use the client until the session is established.

If the EXCHANGE_ID call fails and the session has not been set up (the NFS4CLNT_SESSION_SETUP bit is set), mark the client with the error and return. If the session setup CREATE_SESSION call fails with NFS4ERR_STALE_CLIENTID, which could occur due to server reboot or network partition in between the EXCHANGE_ID and CREATE_SESSION calls, reset the NFS4CLNT_LEASE_EXPIRED and NFS4CLNT_SESSION_SETUP bits and try again. If the CREATE_SESSION call fails with other errors, mark the client with the error and return.

Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Benny Halevy <bhalevy@panasas.com>

[nfs41: NFS_CS_SESSION_SETUP cl_cons_state for back channel setup]
On session setup, the CREATE_SESSION reply races with the server back channel probe, which needs to succeed to set up the back channel. Set a new cl_cons_state NFS_CS_SESSION_SETUP just prior to the CREATE_SESSION call, and add it as a valid state to nfs_find_client so that the client back channel can find the nfs_client struct and won't drop the server backchannel probe. Use a new cl_cons_state so that NFSv4.0 back channel behaviour, which only sets NFS_CS_READY, is unchanged. Adjust waiting on the nfs_client_active_wq accordingly.
Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Benny Halevy <bhalevy@panasas.com>
[nfs41: rename NFS_CS_SESSION_SETUP to NFS_CS_SESSION_INITING]
Signed-off-by: Andy Adamson <andros@netapp.com>
[nfs41: set NFS_CL_SESSION_INITING in alloc_session]
Signed-off-by: Andy Adamson <andros@netapp.com>
[nfs41: move session setup into a function]
Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Benny Halevy <bhalevy@panasas.com>
[moved nfs4_proc_create_session declaration here]
Signed-off-by: Benny Halevy <bhalevy@panasas.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
13 years ago
git-nfs-build-fixes

Fix various problems with nfs4 disabled. And various other things.

In file included from fs/nfs/inode.c:50:
fs/nfs/internal.h:24: error: static declaration of 'nfs_do_refmount' follows non-static declaration
include/linux/nfs_fs.h:320: error: previous declaration of 'nfs_do_refmount' was here
fs/nfs/internal.h:65: warning: 'struct nfs4_fs_locations' declared inside parameter list
fs/nfs/internal.h:65: warning: its scope is only this definition or declaration, which is probably not what you want
fs/nfs/internal.h: In function 'nfs4_path':
fs/nfs/internal.h:97: error: 'struct nfs_server' has no member named 'mnt_path'
fs/nfs/inode.c: In function 'init_once':
fs/nfs/inode.c:1116: error: 'struct nfs_inode' has no member named 'open_states'
fs/nfs/inode.c:1116: error: 'struct nfs_inode' has no member named 'delegation'
fs/nfs/inode.c:1116: error: 'struct nfs_inode' has no member named 'delegation_state'
fs/nfs/inode.c:1116: error: 'struct nfs_inode' has no member named 'rwsem'
distcc[26452] ERROR: compile fs/nfs/inode.c on g5/64 failed
make[1]: *** [fs/nfs/inode.o] Error 1
make: *** [fs/nfs/inode.o] Error 2
make: *** Waiting for unfinished jobs....

In file included from fs/nfs/nfs3xdr.c:26:
fs/nfs/internal.h:24: error: static declaration of 'nfs_do_refmount' follows non-static declaration
include/linux/nfs_fs.h:320: error: previous declaration of 'nfs_do_refmount' was here
fs/nfs/internal.h:65: warning: 'struct nfs4_fs_locations' declared inside parameter list
fs/nfs/internal.h:65: warning: its scope is only this definition or declaration, which is probably not what you want
fs/nfs/internal.h: In function 'nfs4_path':
fs/nfs/internal.h:97: error: 'struct nfs_server' has no member named 'mnt_path'
distcc[26486] ERROR: compile fs/nfs/nfs3xdr.c on g5/64 failed
make[1]: *** [fs/nfs/nfs3xdr.o] Error 1
make: *** [fs/nfs/nfs3xdr.o] Error 2

In file included from fs/nfs/nfs3proc.c:24:
fs/nfs/internal.h:24: error: static declaration of 'nfs_do_refmount' follows non-static declaration
include/linux/nfs_fs.h:320: error: previous declaration of 'nfs_do_refmount' was here
fs/nfs/internal.h:65: warning: 'struct nfs4_fs_locations' declared inside parameter list
fs/nfs/internal.h:65: warning: its scope is only this definition or declaration, which is probably not what you want
fs/nfs/internal.h: In function 'nfs4_path':
fs/nfs/internal.h:97: error: 'struct nfs_server' has no member named 'mnt_path'
distcc[26469] ERROR: compile fs/nfs/nfs3proc.c on bix/32 failed
make[1]: *** [fs/nfs/nfs3proc.o] Error 1
make: *** [fs/nfs/nfs3proc.o] Error 2
**FAILED**

Cc: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Andreas Gruenbacher <agruen@suse.de>
Cc: Andy Adamson <andros@citi.umich.edu>
Cc: Chuck Lever <cel@netapp.com>
Cc: David Howells <dhowells@redhat.com>
Cc: J. Bruce Fields <bfields@fieldses.org>
Cc: Manoj Naik <manoj@almaden.ibm.com>
Cc: Marc Eshel <eshel@almaden.ibm.com>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
16 years ago
fs: convert fs shrinkers to new scan/count API

Convert the filesystem shrinkers to use the new API, and standardise some of the behaviours of the shrinkers at the same time. For example, nr_to_scan means the number of objects to scan, not the number of objects to free.

I refactored the CIFS idmap shrinker a little - it really needs to be broken up into a shrinker per tree and keep an item count with the tree root so that we don't need to walk the tree every time the shrinker needs to count the number of objects in the tree (i.e. all the time under memory pressure).

[glommer@openvz.org: fixes for ext4, ubifs, nfs, cifs and glock. Fixes are needed mainly due to new code merged in the tree]
[assorted fixes folded in]
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Glauber Costa <glommer@openvz.org>
Acked-by: Mel Gorman <mgorman@suse.de>
Acked-by: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Acked-by: Jan Kara <jack@suse.cz>
Acked-by: Steven Whitehouse <swhiteho@redhat.com>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Adrian Hunter <adrian.hunter@intel.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Artem Bityutskiy <artem.bityutskiy@linux.intel.com>
Cc: Arve Hjønnevåg <arve@android.com>
Cc: Carlos Maiolino <cmaiolino@redhat.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Chuck Lever <chuck.lever@oracle.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: David Rientjes <rientjes@google.com>
Cc: Gleb Natapov <gleb@redhat.com>
Cc: Greg Thelen <gthelen@google.com>
Cc: J. Bruce Fields <bfields@redhat.com>
Cc: Jan Kara <jack@suse.cz>
Cc: Jerome Glisse <jglisse@redhat.com>
Cc: John Stultz <john.stultz@linaro.org>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Cc: Kent Overstreet <koverstreet@google.com>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Marcelo Tosatti <mtosatti@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Steven Whitehouse <swhiteho@redhat.com>
Cc: Thomas Hellstrom <thellstrom@vmware.com>
Cc: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
8 years ago
13 years ago
NFS: Share NFS superblocks per-protocol per-server per-FSID

The attached patch makes NFS share superblocks between mounts from the same server and FSID over the same protocol.

It does this by creating each superblock with a false root and returning the real root dentry in the vfsmount presented by get_sb(). The root dentry set starts off as an anonymous dentry if we don't already have the dentry for its inode; otherwise it simply returns the dentry we already have. We may thus end up with several trees of dentries in the superblock, and if at some later point one of the anonymous tree roots is discovered by normal filesystem activity to be located in another tree within the superblock, the anonymous root is named and materialised, attached to the second tree at the appropriate point.

Why do it this way? Why not pass an extra argument to the mount() syscall to indicate the subpath and then pathwalk from the server root to the desired directory? You can't guarantee this will work, for two reasons:

(1) The root and intervening nodes may not be accessible to the client. With NFS2 and NFS3, for instance, mountd is called on the server to get the filehandle for the tip of a path. mountd won't give us handles for anything we don't have permission to access, so we can't set up NFS inodes for such nodes, and so can't easily set up dentries (we'd have to have ghost inodes or something). With this patch we don't actually create dentries until we get handles from the server that we can use to set up their inodes, and we don't actually bind them into the tree until we know for sure where they go.

(2) Inaccessible symbolic links. If we're asked to mount two exports from the server, e.g.:

	mount warthog:/warthog/aaa/xxx /mmm
	mount warthog:/warthog/bbb/yyy /nnn

we may not be able to access anything nearer the root than xxx and yyy, but we may find out later that /mmm/www/yyy, say, is actually the same directory as the one mounted on /nnn. What we might then find out, for example, is that /warthog/bbb was actually a symbolic link to /warthog/aaa/xxx/www, but we can't determine that by talking to the server until /warthog is made available by NFS. This would lead to having constructed an erroneous dentry tree which we can't easily fix. We could end up with a dentry marked as a directory when it should actually be a symlink, or with an apparently hard-linked directory. With this patch we need not make assumptions about the type of a dentry for which we can't retrieve information, nor need we assume we know its place in the grand scheme of things until we actually see that place.

This patch reduces the possibility of aliasing in the inode and page caches for inodes that may be accessed by more than one NFS export. It also reduces the number of superblocks required for NFS where many NFS exports are being used from a server (home directory server + autofs, for example). This in turn makes it simpler to do local caching of network filesystems, as it can then be guaranteed that there won't be links from multiple inodes in separate superblocks to the same cache file. Obviously, cache aliasing between different levels of NFS protocol could still be a problem, but at least that gives us another key to use when indexing the cache.

This patch makes the following changes:

(1) The server record construction/destruction has been abstracted out into its own set of functions to make things easier to get right. These have been moved into fs/nfs/client.c. All the code in fs/nfs/client.c has to do with the management of connections to servers and doesn't touch superblocks in any way; the remaining code in fs/nfs/super.c has to do with VFS superblock management.

(2) The sequence of events undertaken by NFS mount is now reordered:

	(a) A volume representation (struct nfs_server) is allocated.
	(b) A server representation (struct nfs_client) is acquired. This may be allocated or shared, and is keyed on server address, port and NFS version.
	(c) If allocated, the client representation is initialised. The state member variable of nfs_client is used to prevent a race during initialisation from two mounts.
	(d) For NFS4 a simple pathwalk is performed, walking from FH to FH to find the root filehandle for the mount (fs/nfs/getroot.c). For NFS2/3 we are given the root FH in advance.
	(e) The volume FSID is probed for on the root FH.
	(f) The volume representation is initialised from the FSINFO record retrieved on the root FH.
	(g) sget() is called to acquire a superblock. This may be allocated or shared, keyed on client pointer and FSID.
	(h) If allocated, the superblock is initialised.
	(i) If the superblock is shared, then the new nfs_server record is discarded.
	(j) The root dentry for this mount is looked up from the root FH.
	(k) The root dentry for this mount is assigned to the vfsmount.

(3) nfs_readdir_lookup() creates dentries for each of the entries readdir() returns; this function now attaches disconnected trees from alternate roots that happen to be discovered attached to a directory being read (in the same way nfs_lookup() is made to do for lookup ops). The new d_materialise_unique() function is now used to do this, thus permitting the whole thing to be done under one set of locks, and thus avoiding any race between mount and lookup operations on the same directory.

(4) The client management code uses a new debug facility: NFSDBG_CLIENT, which is set by echoing 1024 to /proc/net/sunrpc/nfs_debug.

(5) Clone mounts are now called xdev mounts.

(6) Use the dentry passed to the statfs() op as the handle for retrieving fs statistics rather than the root dentry of the superblock (which is now a dummy).

Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
15 years ago
nfs41: introduce nfs4_call_sync

Use nfs4_call_sync rather than rpc_call_sync to provide an NFSv4.1 sessions-enabled interface for sessions manipulation. The nfs41 rpc logic uses the rpc_call_prepare method to recover and create the session, as well as to select a free slot id, and rpc_call_done to free the slot and update slot-table-related metadata. In the coming patches we'll add rpc prepare and done routines for setting up the sequence op and processing the sequence result.

Signed-off-by: Benny Halevy <bhalevy@panasas.com>
[nfs41: nfs4_call_sync] As per the 11-14-08 review: squash into "nfs41: introduce nfs4_call_sync" and "nfs41: nfs4_setup_sequence". Define two functions, one for v4 and one for v41, and add a pointer in struct nfs4_client to the correct one.
Signed-off-by: Andy Adamson <andros@netapp.com>
[added BUG() in _nfs4_call_sync_session if !CONFIG_NFS_V4_1]
Signed-off-by: Benny Halevy <bhalevy@panasas.com>
[nfs41: check for session, not minorversion]
Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Benny Halevy <bhalevy@panasas.com>
[group minorversion-specific stuff together]
Signed-off-by: Alexandros Batsakis <Alexandros.Batsakis@netapp.com>
Signed-off-by: Benny Halevy <bhalevy@panasas.com>
Signed-off-by: Andy Adamson <andros@netapp.com>
[nfs41: fix up nfs4_clear_client_minor_version]
[introduce nfs4_init_client_minor_version() in this patch]
Signed-off-by: Benny Halevy <bhalevy@panasas.com>
[cleaned-up patch: got rid of nfs_call_sync_t, dprintks, cosmetics, extra server defs]
Signed-off-by: Andy Adamson <andros@netapp.com>
Signed-off-by: Benny Halevy <bhalevy@panasas.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
13 years ago
NFS: Discover NFSv4 server trunking when mounting

"Server trunking" is a fancy name for a multi-homed NFS server. Trunking might occur if a client sends NFS requests for a single workload to multiple network interfaces on the same server. There are some implications for NFSv4 state management that make it useful for a client to know whether a single NFSv4 server instance is multi-homed. (Note this is only a consideration for NFSv4, not for legacy versions of NFS, which are stateless.)

If a client cares about server trunking, no NFSv4 operations can proceed until that client determines who it is talking to. Thus server IP trunking discovery must be done when the client first encounters an unfamiliar server IP address.

The nfs_get_client() function walks the nfs_client_list and matches on server IP address. The outcome of that walk tells us immediately if we have an unfamiliar server IP address. It invokes nfs_init_client() in this case. Thus nfs4_init_client() is a good spot to perform trunking discovery.

Discovery requires a client to establish a fresh client ID, so our client will now send SETCLIENTID or EXCHANGE_ID as the first NFS operation after a successful ping, rather than waiting for an application to perform an operation that requires NFSv4 state. The exact process for detecting trunking is different for NFSv4.0 and NFSv4.1, so a minorversion-specific init_client callout method is introduced.

CLID_INUSE recovery is important for the trunking discovery process. CLID_INUSE is a sign that the server recognizes the client's nfs_client_id4 id string, but the client is using the wrong principal this time for the SETCLIENTID operation. The SETCLIENTID must be retried with a series of different principals until one works, and then the rest of trunking discovery can proceed.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
9 years ago
NFS: Share NFS superblocks per-protocol per-server per-FSID The attached patch makes NFS share superblocks between mounts from the same server and FSID over the same protocol. It does this by creating each superblock with a false root and returning the real root dentry in the vfsmount presented by get_sb(). The root dentry set starts off as an anonymous dentry if we don't already have the dentry for its inode, otherwise it simply returns the dentry we already have. We may thus end up with several trees of dentries in the superblock, and if at some later point one of anonymous tree roots is discovered by normal filesystem activity to be located in another tree within the superblock, the anonymous root is named and materialises attached to the second tree at the appropriate point. Why do it this way? Why not pass an extra argument to the mount() syscall to indicate the subpath and then pathwalk from the server root to the desired directory? You can't guarantee this will work for two reasons: (1) The root and intervening nodes may not be accessible to the client. With NFS2 and NFS3, for instance, mountd is called on the server to get the filehandle for the tip of a path. mountd won't give us handles for anything we don't have permission to access, and so we can't set up NFS inodes for such nodes, and so can't easily set up dentries (we'd have to have ghost inodes or something). With this patch we don't actually create dentries until we get handles from the server that we can use to set up their inodes, and we don't actually bind them into the tree until we know for sure where they go. (2) Inaccessible symbolic links. If we're asked to mount two exports from the server, eg: mount warthog:/warthog/aaa/xxx /mmm mount warthog:/warthog/bbb/yyy /nnn We may not be able to access anything nearer the root than xxx and yyy, but we may find out later that /mmm/www/yyy, say, is actually the same directory as the one mounted on /nnn. 
What we might then find out, for example, is that /warthog/bbb was actually a symbolic link to /warthog/aaa/xxx/www, but we can't actually determine that by talking to the server until /warthog is made available by NFS. This would lead to having constructed an errneous dentry tree which we can't easily fix. We can end up with a dentry marked as a directory when it should actually be a symlink, or we could end up with an apparently hardlinked directory. With this patch we need not make assumptions about the type of a dentry for which we can't retrieve information, nor need we assume we know its place in the grand scheme of things until we actually see that place. This patch reduces the possibility of aliasing in the inode and page caches for inodes that may be accessed by more than one NFS export. It also reduces the number of superblocks required for NFS where there are many NFS exports being used from a server (home directory server + autofs for example). This in turn makes it simpler to do local caching of network filesystems, as it can then be guaranteed that there won't be links from multiple inodes in separate superblocks to the same cache file. Obviously, cache aliasing between different levels of NFS protocol could still be a problem, but at least that gives us another key to use when indexing the cache. This patch makes the following changes: (1) The server record construction/destruction has been abstracted out into its own set of functions to make things easier to get right. These have been moved into fs/nfs/client.c. All the code in fs/nfs/client.c has to do with the management of connections to servers, and doesn't touch superblocks in any way; the remaining code in fs/nfs/super.c has to do with VFS superblock management. (2) The sequence of events undertaken by NFS mount is now reordered: (a) A volume representation (struct nfs_server) is allocated. (b) A server representation (struct nfs_client) is acquired. 
This may be allocated or shared, and is keyed on server address, port and NFS version. (c) If allocated, the client representation is initialised. The state member variable of nfs_client is used to prevent a race during initialisation from two mounts. (d) For NFS4 a simple pathwalk is performed, walking from FH to FH to find the root filehandle for the mount (fs/nfs/getroot.c). For NFS2/3 we are given the root FH in advance. (e) The volume FSID is probed for on the root FH. (f) The volume representation is initialised from the FSINFO record retrieved on the root FH. (g) sget() is called to acquire a superblock. This may be allocated or shared, keyed on client pointer and FSID. (h) If allocated, the superblock is initialised. (i) If the superblock is shared, then the new nfs_server record is discarded. (j) The root dentry for this mount is looked up from the root FH. (k) The root dentry for this mount is assigned to the vfsmount. (3) nfs_readdir_lookup() creates dentries for each of the entries readdir() returns; this function now attaches disconnected trees from alternate roots that happen to be discovered attached to a directory being read (in the same way nfs_lookup() is made to do for lookup ops). The new d_materialise_unique() function is now used to do this, thus permitting the whole thing to be done under one set of locks, and thus avoiding any race between mount and lookup operations on the same directory. (4) The client management code uses a new debug facility: NFSDBG_CLIENT which is set by echoing 1024 to /proc/net/sunrpc/nfs_debug. (5) Clone mounts are now called xdev mounts. (6) Use the dentry passed to the statfs() op as the handle for retrieving fs statistics rather than the root dentry of the superblock (which is now a dummy). Signed-Off-By: David Howells <dhowells@redhat.com> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
15 years ago
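The mount-time sequence of events listed in the commit message above can be condensed into a rough sketch. This is illustrative pseudocode only: the function names follow the declarations in internal.h, but the control flow is heavily simplified relative to the kernel's actual code.

```
nfs_mount(server_address, export_path, options):
    server = nfs_alloc_server()                     # (a) volume representation
    clp = nfs_get_client(address, port, version)    # (b) shared per address/port/version
    if clp is newly allocated:
        initialise clp                              # (c) client state guards the init race
    root_fh = walk from server root FH to export    # (d) NFSv4 only; v2/v3 get it from mountd
    fsid = probe FSID on root_fh                    # (e)
    initialise server from FSINFO(root_fh)          # (f)
    sb = sget(key = {clp, fsid})                    # (g) may return an existing superblock
    if sb is newly allocated:
        fill_super(sb)                              # (h)
    else:
        nfs_free_server(server)                     # (i) duplicate volume record discarded
    root = nfs_get_root(sb, root_fh)                # (j)
    return vfsmount rooted at root                  # (k)
```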
/*
 * NFS internal definitions
 */

#include "nfs4_fs.h"
#include <linux/mount.h>
#include <linux/security.h>
#include <linux/crc32.h>

#define NFS_MS_MASK (MS_RDONLY|MS_NOSUID|MS_NODEV|MS_NOEXEC|MS_SYNCHRONOUS)

struct nfs_string;

/* Maximum number of readahead requests
 * FIXME: this should really be a sysctl so that users may tune it to suit
 *        their needs. People who do NFS over a slow network might, for
 *        instance, want to reduce it to something closer to 1 for improved
 *        interactive response.
 */
#define NFS_MAX_READAHEAD	(RPC_DEF_SLOT_TABLE - 1)

static inline void nfs_attr_check_mountpoint(struct super_block *parent, struct nfs_fattr *fattr)
{
	if (!nfs_fsid_equal(&NFS_SB(parent)->fsid, &fattr->fsid))
		fattr->valid |= NFS_ATTR_FATTR_MOUNTPOINT;
}

static inline int nfs_attr_use_mounted_on_fileid(struct nfs_fattr *fattr)
{
	if (((fattr->valid & NFS_ATTR_FATTR_MOUNTED_ON_FILEID) == 0) ||
	    (((fattr->valid & NFS_ATTR_FATTR_MOUNTPOINT) == 0) &&
	     ((fattr->valid & NFS_ATTR_FATTR_V4_REFERRAL) == 0)))
		return 0;

	fattr->fileid = fattr->mounted_on_fileid;
	return 1;
}

struct nfs_clone_mount {
	const struct super_block *sb;
	const struct dentry *dentry;
	struct nfs_fh *fh;
	struct nfs_fattr *fattr;
	char *hostname;
	char *mnt_path;
	struct sockaddr *addr;
	size_t addrlen;
	rpc_authflavor_t authflavor;
};

/*
 * Note: RFC 1813 doesn't limit the number of auth flavors that
 * a server can return, so make something up.
 */
#define NFS_MAX_SECFLAVORS	(12)

/*
 * Value used if the user did not specify a port value.
 */
#define NFS_UNSPEC_PORT		(-1)

/*
 * Maximum number of pages that readdir can use for creating
 * a vmapped array of pages.
 */
#define NFS_MAX_READDIR_PAGES 8

struct nfs_client_initdata {
	unsigned long init_flags;
	const char *hostname;
	const struct sockaddr *addr;
	size_t addrlen;
	struct nfs_subversion *nfs_mod;
	int proto;
	u32 minorversion;
	struct net *net;
};
/*
 * In-kernel mount arguments
 */
struct nfs_parsed_mount_data {
	int flags;
	unsigned int rsize, wsize;
	unsigned int timeo, retrans;
	unsigned int acregmin, acregmax,
		     acdirmin, acdirmax;
	unsigned int namlen;
	unsigned int options;
	unsigned int bsize;
	unsigned int auth_flavor_len;
	rpc_authflavor_t auth_flavors[1];
	char *client_address;
	unsigned int version;
	unsigned int minorversion;
	char *fscache_uniq;
	bool need_mount;

	struct {
		struct sockaddr_storage address;
		size_t addrlen;
		char *hostname;
		u32 version;
		int port;
		unsigned short protocol;
	} mount_server;

	struct {
		struct sockaddr_storage address;
		size_t addrlen;
		char *hostname;
		char *export_path;
		int port;
		unsigned short protocol;
	} nfs_server;

	struct security_mnt_opts lsm_opts;
	struct net *net;
};

/* mount_clnt.c */
struct nfs_mount_request {
	struct sockaddr *sap;
	size_t salen;
	char *hostname;
	char *dirpath;
	u32 version;
	unsigned short protocol;
	struct nfs_fh *fh;
	int noresvport;
	unsigned int *auth_flav_len;
	rpc_authflavor_t *auth_flavs;
	struct net *net;
};

struct nfs_mount_info {
	void (*fill_super)(struct super_block *, struct nfs_mount_info *);
	int (*set_security)(struct super_block *, struct dentry *, struct nfs_mount_info *);
	struct nfs_parsed_mount_data *parsed;
	struct nfs_clone_mount *cloned;
	struct nfs_fh *mntfh;
};

extern int nfs_mount(struct nfs_mount_request *info);
extern void nfs_umount(const struct nfs_mount_request *info);
/* client.c */
extern const struct rpc_program nfs_program;

extern void nfs_clients_init(struct net *net);
extern struct nfs_client *nfs_alloc_client(const struct nfs_client_initdata *);
int nfs_create_rpc_client(struct nfs_client *, const struct rpc_timeout *, rpc_authflavor_t);
struct nfs_client *nfs_get_client(const struct nfs_client_initdata *,
				  const struct rpc_timeout *, const char *,
				  rpc_authflavor_t);
int nfs_probe_fsinfo(struct nfs_server *server, struct nfs_fh *, struct nfs_fattr *);
void nfs_server_insert_lists(struct nfs_server *);
void nfs_init_timeout_values(struct rpc_timeout *, int, unsigned int, unsigned int);
int nfs_init_server_rpcclient(struct nfs_server *, const struct rpc_timeout *t,
			      rpc_authflavor_t);
struct nfs_server *nfs_alloc_server(void);
void nfs_server_copy_userdata(struct nfs_server *, struct nfs_server *);

extern void nfs_cleanup_cb_ident_idr(struct net *);
extern void nfs_put_client(struct nfs_client *);
extern void nfs_free_client(struct nfs_client *);
extern struct nfs_client *nfs4_find_client_ident(struct net *, int);
extern struct nfs_client *
nfs4_find_client_sessionid(struct net *, const struct sockaddr *,
			   struct nfs4_sessionid *, u32);
extern struct nfs_server *nfs_create_server(struct nfs_mount_info *,
					    struct nfs_subversion *);
extern struct nfs_server *nfs4_create_server(struct nfs_mount_info *,
					     struct nfs_subversion *);
extern struct nfs_server *nfs4_create_referral_server(struct nfs_clone_mount *,
						      struct nfs_fh *);
extern void nfs_free_server(struct nfs_server *server);
extern struct nfs_server *nfs_clone_server(struct nfs_server *,
					   struct nfs_fh *,
					   struct nfs_fattr *,
					   rpc_authflavor_t);
extern int nfs_wait_client_init_complete(const struct nfs_client *clp);
extern void nfs_mark_client_ready(struct nfs_client *clp, int state);
extern struct nfs_client *nfs4_set_ds_client(struct nfs_client *mds_clp,
					     const struct sockaddr *ds_addr,
					     int ds_addrlen, int ds_proto,
					     unsigned int ds_timeo,
					     unsigned int ds_retrans);
extern struct rpc_clnt *nfs4_find_or_create_ds_client(struct nfs_client *,
						      struct inode *);

#ifdef CONFIG_PROC_FS
extern int __init nfs_fs_proc_init(void);
extern void nfs_fs_proc_exit(void);
#else
static inline int nfs_fs_proc_init(void)
{
	return 0;
}
static inline void nfs_fs_proc_exit(void)
{
}
#endif

#ifdef CONFIG_NFS_V4_1
int nfs_sockaddr_match_ipaddr(const struct sockaddr *, const struct sockaddr *);
#endif

/* nfs3client.c */
#if IS_ENABLED(CONFIG_NFS_V3)
struct nfs_server *nfs3_create_server(struct nfs_mount_info *, struct nfs_subversion *);
struct nfs_server *nfs3_clone_server(struct nfs_server *, struct nfs_fh *,
				     struct nfs_fattr *, rpc_authflavor_t);
#endif
/* callback_xdr.c */
extern struct svc_version nfs4_callback_version1;
extern struct svc_version nfs4_callback_version4;

struct nfs_pageio_descriptor;

/* pagelist.c */
extern int __init nfs_init_nfspagecache(void);
extern void nfs_destroy_nfspagecache(void);
extern int __init nfs_init_readpagecache(void);
extern void nfs_destroy_readpagecache(void);
extern int __init nfs_init_writepagecache(void);
extern void nfs_destroy_writepagecache(void);

extern int __init nfs_init_directcache(void);
extern void nfs_destroy_directcache(void);
extern bool nfs_pgarray_set(struct nfs_page_array *p, unsigned int pagecount);
extern void nfs_pgheader_init(struct nfs_pageio_descriptor *desc,
			      struct nfs_pgio_header *hdr,
			      void (*release)(struct nfs_pgio_header *hdr));
void nfs_set_pgio_error(struct nfs_pgio_header *hdr, int error, loff_t pos);
int nfs_iocounter_wait(struct nfs_io_counter *c);

static inline void nfs_iocounter_init(struct nfs_io_counter *c)
{
	c->flags = 0;
	atomic_set(&c->io_count, 0);
}

/* nfs2xdr.c */
extern struct rpc_procinfo nfs_procedures[];
extern int nfs2_decode_dirent(struct xdr_stream *,
			      struct nfs_entry *, int);

/* nfs3xdr.c */
extern struct rpc_procinfo nfs3_procedures[];
extern int nfs3_decode_dirent(struct xdr_stream *,
			      struct nfs_entry *, int);

/* nfs4xdr.c */
#if IS_ENABLED(CONFIG_NFS_V4)
extern int nfs4_decode_dirent(struct xdr_stream *,
			      struct nfs_entry *, int);
#endif
#ifdef CONFIG_NFS_V4_1
extern const u32 nfs41_maxread_overhead;
extern const u32 nfs41_maxwrite_overhead;
extern const u32 nfs41_maxgetdevinfo_overhead;
#endif

/* nfs4proc.c */
#if IS_ENABLED(CONFIG_NFS_V4)
extern struct rpc_procinfo nfs4_procedures[];
#endif

/* proc.c */
void nfs_close_context(struct nfs_open_context *ctx, int is_sync);
extern struct nfs_client *nfs_init_client(struct nfs_client *clp,
					  const struct rpc_timeout *timeparms,
					  const char *ip_addr);

/* dir.c */
extern unsigned long nfs_access_cache_count(struct shrinker *shrink,
					    struct shrink_control *sc);
extern unsigned long nfs_access_cache_scan(struct shrinker *shrink,
					   struct shrink_control *sc);
struct dentry *nfs_lookup(struct inode *, struct dentry *, unsigned int);
int nfs_create(struct inode *, struct dentry *, umode_t, bool);
int nfs_mkdir(struct inode *, struct dentry *, umode_t);
int nfs_rmdir(struct inode *, struct dentry *);
int nfs_unlink(struct inode *, struct dentry *);
int nfs_symlink(struct inode *, struct dentry *, const char *);
int nfs_link(struct dentry *, struct inode *, struct dentry *);
int nfs_mknod(struct inode *, struct dentry *, umode_t, dev_t);
int nfs_rename(struct inode *, struct dentry *, struct inode *, struct dentry *);

/* file.c */
int nfs_file_fsync_commit(struct file *, loff_t, loff_t, int);
loff_t nfs_file_llseek(struct file *, loff_t, int);
int nfs_file_flush(struct file *, fl_owner_t);
ssize_t nfs_file_read(struct kiocb *, const struct iovec *, unsigned long, loff_t);
ssize_t nfs_file_splice_read(struct file *, loff_t *, struct pipe_inode_info *,
			     size_t, unsigned int);
int nfs_file_mmap(struct file *, struct vm_area_struct *);
ssize_t nfs_file_write(struct kiocb *, const struct iovec *, unsigned long, loff_t);
int nfs_file_release(struct inode *, struct file *);
int nfs_lock(struct file *, int, struct file_lock *);
int nfs_flock(struct file *, int, struct file_lock *);
ssize_t nfs_file_splice_write(struct pipe_inode_info *, struct file *, loff_t *,
			      size_t, unsigned int);
int nfs_check_flags(int);
int nfs_setlease(struct file *, long, struct file_lock **);
/* inode.c */
extern struct workqueue_struct *nfsiod_workqueue;
extern struct inode *nfs_alloc_inode(struct super_block *sb);
extern void nfs_destroy_inode(struct inode *);
extern int nfs_write_inode(struct inode *, struct writeback_control *);
extern int nfs_drop_inode(struct inode *);
extern void nfs_clear_inode(struct inode *);
extern void nfs_evict_inode(struct inode *);
void nfs_zap_acl_cache(struct inode *inode);
extern int nfs_wait_bit_killable(void *word);

/* super.c */
extern const struct super_operations nfs_sops;
extern struct file_system_type nfs_fs_type;
extern struct file_system_type nfs_xdev_fs_type;
#if IS_ENABLED(CONFIG_NFS_V4)
extern struct file_system_type nfs4_xdev_fs_type;
extern struct file_system_type nfs4_referral_fs_type;
#endif
struct dentry *nfs_try_mount(int, const char *, struct nfs_mount_info *,
			     struct nfs_subversion *);
void nfs_initialise_sb(struct super_block *);
int nfs_set_sb_security(struct super_block *, struct dentry *, struct nfs_mount_info *);
int nfs_clone_sb_security(struct super_block *, struct dentry *, struct nfs_mount_info *);
struct dentry *nfs_fs_mount_common(struct nfs_server *, int, const char *,
				   struct nfs_mount_info *, struct nfs_subversion *);
struct dentry *nfs_fs_mount(struct file_system_type *, int, const char *, void *);
struct dentry *nfs_xdev_mount_common(struct file_system_type *, int,
				     const char *, struct nfs_mount_info *);
void nfs_kill_super(struct super_block *);
void nfs_fill_super(struct super_block *, struct nfs_mount_info *);

extern struct rpc_stat nfs_rpcstat;

extern int __init register_nfs_fs(void);
extern void __exit unregister_nfs_fs(void);
extern void nfs_sb_active(struct super_block *sb);
extern void nfs_sb_deactive(struct super_block *sb);

/* namespace.c */
#define NFS_PATH_CANONICAL 1
extern char *nfs_path(char **p, struct dentry *dentry,
		      char *buffer, ssize_t buflen, unsigned flags);
extern struct vfsmount *nfs_d_automount(struct path *path);
struct vfsmount *nfs_submount(struct nfs_server *, struct dentry *,
			      struct nfs_fh *, struct nfs_fattr *);
struct vfsmount *nfs_do_submount(struct dentry *, struct nfs_fh *,
				 struct nfs_fattr *, rpc_authflavor_t);

/* getroot.c */
extern struct dentry *nfs_get_root(struct super_block *, struct nfs_fh *,
				   const char *);
#if IS_ENABLED(CONFIG_NFS_V4)
extern struct dentry *nfs4_get_root(struct super_block *, struct nfs_fh *,
				    const char *);
extern int nfs4_get_rootfh(struct nfs_server *server, struct nfs_fh *mntfh, bool);
#endif

struct nfs_pgio_completion_ops;

/* read.c */
extern struct nfs_read_header *nfs_readhdr_alloc(void);
extern void nfs_readhdr_free(struct nfs_pgio_header *hdr);
extern void nfs_pageio_init_read(struct nfs_pageio_descriptor *pgio,
				 struct inode *inode,
				 const struct nfs_pgio_completion_ops *compl_ops);
extern int nfs_initiate_read(struct rpc_clnt *clnt,
			     struct nfs_read_data *data,
			     const struct rpc_call_ops *call_ops, int flags);
extern void nfs_read_prepare(struct rpc_task *task, void *calldata);
extern int nfs_generic_pagein(struct nfs_pageio_descriptor *desc,
			      struct nfs_pgio_header *hdr);
extern void nfs_pageio_reset_read_mds(struct nfs_pageio_descriptor *pgio);
extern void nfs_readdata_release(struct nfs_read_data *rdata);
/* super.c */
void nfs_clone_super(struct super_block *, struct nfs_mount_info *);
void nfs_umount_begin(struct super_block *);
int nfs_statfs(struct dentry *, struct kstatfs *);
int nfs_show_options(struct seq_file *, struct dentry *);
int nfs_show_devname(struct seq_file *, struct dentry *);
int nfs_show_path(struct seq_file *, struct dentry *);
int nfs_show_stats(struct seq_file *, struct dentry *);
void nfs_put_super(struct super_block *);
int nfs_remount(struct super_block *sb, int *flags, char *raw_data);

/* write.c */
extern void nfs_pageio_init_write(struct nfs_pageio_descriptor *pgio,
				  struct inode *inode, int ioflags,
				  const struct nfs_pgio_completion_ops *compl_ops);
extern struct nfs_write_header *nfs_writehdr_alloc(void);
extern void nfs_writehdr_free(struct nfs_pgio_header *hdr);
extern int nfs_generic_flush(struct nfs_pageio_descriptor *desc,
			     struct nfs_pgio_header *hdr);
extern void nfs_pageio_reset_write_mds(struct nfs_pageio_descriptor *pgio);
extern void nfs_writedata_release(struct nfs_write_data *wdata);
extern void nfs_commit_free(struct nfs_commit_data *p);
extern int nfs_initiate_write(struct rpc_clnt *clnt,
			      struct nfs_write_data *data,
			      const struct rpc_call_ops *call_ops,
			      int how, int flags);
extern void nfs_write_prepare(struct rpc_task *task, void *calldata);
extern void nfs_commit_prepare(struct rpc_task *task, void *calldata);
extern int nfs_initiate_commit(struct rpc_clnt *clnt,
			       struct nfs_commit_data *data,
			       const struct rpc_call_ops *call_ops,
			       int how, int flags);
extern void nfs_init_commit(struct nfs_commit_data *data,
			    struct list_head *head,
			    struct pnfs_layout_segment *lseg,
			    struct nfs_commit_info *cinfo);
int nfs_scan_commit_list(struct list_head *src, struct list_head *dst,
			 struct nfs_commit_info *cinfo, int max);
int nfs_scan_commit(struct inode *inode, struct list_head *dst,
		    struct nfs_commit_info *cinfo);
void nfs_mark_request_commit(struct nfs_page *req,
			     struct pnfs_layout_segment *lseg,
			     struct nfs_commit_info *cinfo);
int nfs_generic_commit_list(struct inode *inode, struct list_head *head,
			    int how, struct nfs_commit_info *cinfo);
void nfs_retry_commit(struct list_head *page_list,
		      struct pnfs_layout_segment *lseg,
		      struct nfs_commit_info *cinfo);
void nfs_commitdata_release(struct nfs_commit_data *data);
void nfs_request_add_commit_list(struct nfs_page *req, struct list_head *dst,
				 struct nfs_commit_info *cinfo);
void nfs_request_remove_commit_list(struct nfs_page *req,
				    struct nfs_commit_info *cinfo);
void nfs_init_cinfo(struct nfs_commit_info *cinfo,
		    struct inode *inode,
		    struct nfs_direct_req *dreq);
int nfs_key_timeout_notify(struct file *filp, struct inode *inode);
bool nfs_ctx_key_to_expire(struct nfs_open_context *ctx);

#ifdef CONFIG_MIGRATION
extern int nfs_migrate_page(struct address_space *,
			    struct page *, struct page *, enum migrate_mode);
#else
#define nfs_migrate_page NULL
#endif
/* direct.c */
void nfs_init_cinfo_from_dreq(struct nfs_commit_info *cinfo,
			      struct nfs_direct_req *dreq);
static inline void nfs_inode_dio_wait(struct inode *inode)
{
	inode_dio_wait(inode);
}
extern ssize_t nfs_dreq_bytes_left(struct nfs_direct_req *dreq);

/* nfs4proc.c */
extern void __nfs4_read_done_cb(struct nfs_read_data *);
extern struct nfs_client *nfs4_init_client(struct nfs_client *clp,
					   const struct rpc_timeout *timeparms,
					   const char *ip_addr);
extern int nfs40_walk_client_list(struct nfs_client *clp,
				  struct nfs_client **result,
				  struct rpc_cred *cred);
extern int nfs41_walk_client_list(struct nfs_client *clp,
				  struct nfs_client **result,
				  struct rpc_cred *cred);

/*
 * Determine the device name as a string
 */
static inline char *nfs_devname(struct dentry *dentry,
				char *buffer, ssize_t buflen)
{
	char *dummy;
	return nfs_path(&dummy, dentry, buffer, buflen, NFS_PATH_CANONICAL);
}

/*
 * Determine the actual block size (and log2 thereof)
 */
static inline
unsigned long nfs_block_bits(unsigned long bsize, unsigned char *nrbitsp)
{
	/* make sure blocksize is a power of two */
	if ((bsize & (bsize - 1)) || nrbitsp) {
		unsigned char nrbits;

		for (nrbits = 31; nrbits && !(bsize & (1 << nrbits)); nrbits--)
			;
		bsize = 1 << nrbits;
		if (nrbitsp)
			*nrbitsp = nrbits;
	}

	return bsize;
}

/*
 * Calculate the number of 512-byte blocks used.
 */
static inline blkcnt_t nfs_calc_block_size(u64 tsize)
{
	blkcnt_t used = (tsize + 511) >> 9;
	return (used > ULONG_MAX) ? ULONG_MAX : used;
}

/*
 * Compute and set NFS server blocksize
 */
static inline
unsigned long nfs_block_size(unsigned long bsize, unsigned char *nrbitsp)
{
	if (bsize < NFS_MIN_FILE_IO_SIZE)
		bsize = NFS_DEF_FILE_IO_SIZE;
	else if (bsize >= NFS_MAX_FILE_IO_SIZE)
		bsize = NFS_MAX_FILE_IO_SIZE;

	return nfs_block_bits(bsize, nrbitsp);
}

/*
 * Determine the maximum file size for a superblock
 */
static inline
void nfs_super_set_maxbytes(struct super_block *sb, __u64 maxfilesize)
{
	sb->s_maxbytes = (loff_t)maxfilesize;
	if (sb->s_maxbytes > MAX_LFS_FILESIZE || sb->s_maxbytes <= 0)
		sb->s_maxbytes = MAX_LFS_FILESIZE;
}

/*
 * Determine the number of bytes of data the page contains
 */
static inline
unsigned int nfs_page_length(struct page *page)
{
	loff_t i_size = i_size_read(page_file_mapping(page)->host);

	if (i_size > 0) {
		pgoff_t page_index = page_file_index(page);
		pgoff_t end_index = (i_size - 1) >> PAGE_CACHE_SHIFT;
		if (page_index < end_index)
			return PAGE_CACHE_SIZE;
		if (page_index == end_index)
			return ((i_size - 1) & ~PAGE_CACHE_MASK) + 1;
	}
	return 0;
}

/*
 * Convert a umode to a dirent->d_type
 */
static inline
unsigned char nfs_umode_to_dtype(umode_t mode)
{
	return (mode >> 12) & 15;
}

/*
 * Determine the number of pages in an array of length 'len' and
 * with a base offset of 'base'
 */
static inline
unsigned int nfs_page_array_len(unsigned int base, size_t len)
{
	return ((unsigned long)len + (unsigned long)base +
		PAGE_SIZE - 1) >> PAGE_SHIFT;
}

/*
 * Convert a struct timespec into a 64-bit change attribute
 *
 * This does approximately the same thing as timespec_to_ns(),
 * but for calculation efficiency, we multiply the seconds by
 * 1024*1024*1024.
 */
static inline
u64 nfs_timespec_to_change_attr(const struct timespec *ts)
{
	return ((u64)ts->tv_sec << 30) + ts->tv_nsec;
}

#ifdef CONFIG_CRC32
/**
 * nfs_fhandle_hash - calculate the crc32 hash for the filehandle
 * @fh: pointer to filehandle
 *
 * returns a crc32 hash for the filehandle that is compatible with
 * the one displayed by "wireshark".
 */
static inline u32 nfs_fhandle_hash(const struct nfs_fh *fh)
{
	return ~crc32_le(0xFFFFFFFF, &fh->data[0], fh->size);
}
#else
static inline u32 nfs_fhandle_hash(const struct nfs_fh *fh)
{
	return 0;
}
#endif