linux-cifs.vger.kernel.org archive mirror
* [PATCH v3 00/11] DFS fixes
@ 2019-11-27  0:36 Paulo Alcantara (SUSE)
  2019-11-27  0:36 ` [PATCH v3 01/11] cifs: Fix use-after-free bug in cifs_reconnect() Paulo Alcantara (SUSE)
                   ` (10 more replies)
  0 siblings, 11 replies; 19+ messages in thread
From: Paulo Alcantara (SUSE) @ 2019-11-27  0:36 UTC (permalink / raw)
  To: smfrench; +Cc: linux-cifs, Paulo Alcantara (SUSE)

Hi,

Here follows v3 of the DFS fixes and cleanups, plus one fix related to
multichannel support.

This new version addresses Aurelien's and Pavel's comments by splitting
the huge cleanup change into separate commits.

Paulo Alcantara (SUSE) (11):
  cifs: Fix use-after-free bug in cifs_reconnect()
  cifs: Fix lookup of root ses in DFS referral cache
  cifs: Fix potential softlockups while refreshing DFS cache
  cifs: Clean up DFS referral cache
  cifs: Get rid of kstrdup_const()'d paths
  cifs: Introduce helpers for finding TCP connection
  cifs: Merge is_path_valid() into get_normalized_path()
  cifs: Fix potential deadlock when updating vol in cifs_reconnect()
  cifs: Avoid doing network I/O while holding cache lock
  cifs: Fix retrieval of DFS referrals in cifs_mount()
  cifs: Always update signing key of first channel

 fs/cifs/connect.c       |   78 ++-
 fs/cifs/dfs_cache.c     | 1094 ++++++++++++++++++++-------------------
 fs/cifs/smb2pdu.c       |   41 +-
 fs/cifs/smb2transport.c |    4 +
 4 files changed, 662 insertions(+), 555 deletions(-)

-- 
2.24.0



* [PATCH v3 01/11] cifs: Fix use-after-free bug in cifs_reconnect()
  2019-11-27  0:36 [PATCH v3 00/11] DFS fixes Paulo Alcantara (SUSE)
@ 2019-11-27  0:36 ` Paulo Alcantara (SUSE)
  2019-11-27  0:36 ` [PATCH v3 02/11] cifs: Fix lookup of root ses in DFS referral cache Paulo Alcantara (SUSE)
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Paulo Alcantara (SUSE) @ 2019-11-27  0:36 UTC (permalink / raw)
  To: smfrench; +Cc: linux-cifs, Paulo Alcantara (SUSE), Aurelien Aptel

Ensure we grab an active reference on the cifs superblock while doing
failover to prevent automounts (DFS links) from expiring and then
destroying the superblock pointer.
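
A minimal sketch of the lifetime rule being enforced here, using the
get_tcp_super()/put_tcp_super() helpers added by this patch (the
dfs_failover_sketch() wrapper is illustrative only and not part of the
change):

        static void dfs_failover_sketch(struct TCP_Server_Info *server)
        {
                /* takes cifs_sb_active() on the matching superblock */
                struct super_block *sb = get_tcp_super(server);

                if (IS_ERR(sb))
                        return; /* no super for this server; skip DFS failover */

                /* ... walk the DFS target list and reconnect via CIFS_SB(sb) ... */

                put_tcp_super(sb); /* drops the reference via cifs_sb_deactive() */
        }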

This patch fixes the following KASAN report:

[  464.301462] BUG: KASAN: use-after-free in cifs_reconnect+0x6ab/0x1350
[  464.303052] Read of size 8 at addr ffff888155e580d0 by task cifsd/1107

[  464.304682] CPU: 3 PID: 1107 Comm: cifsd Not tainted 5.4.0-rc4+ #13
[  464.305552] Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.12.1-0-ga5cab58-rebuilt.opensuse.org 04/01/2014
[  464.307146] Call Trace:
[  464.307875]  dump_stack+0x5b/0x90
[  464.308631]  print_address_description.constprop.0+0x16/0x200
[  464.309478]  ? cifs_reconnect+0x6ab/0x1350
[  464.310253]  ? cifs_reconnect+0x6ab/0x1350
[  464.311040]  __kasan_report.cold+0x1a/0x41
[  464.311811]  ? cifs_reconnect+0x6ab/0x1350
[  464.312563]  kasan_report+0xe/0x20
[  464.313300]  cifs_reconnect+0x6ab/0x1350
[  464.314062]  ? extract_hostname.part.0+0x90/0x90
[  464.314829]  ? printk+0xad/0xde
[  464.315525]  ? _raw_spin_lock+0x7c/0xd0
[  464.316252]  ? _raw_read_lock_irq+0x40/0x40
[  464.316961]  ? ___ratelimit+0xed/0x182
[  464.317655]  cifs_readv_from_socket+0x289/0x3b0
[  464.318386]  cifs_read_from_socket+0x98/0xd0
[  464.319078]  ? cifs_readv_from_socket+0x3b0/0x3b0
[  464.319782]  ? try_to_wake_up+0x43c/0xa90
[  464.320463]  ? cifs_small_buf_get+0x4b/0x60
[  464.321173]  ? allocate_buffers+0x98/0x1a0
[  464.321856]  cifs_demultiplex_thread+0x218/0x14a0
[  464.322558]  ? cifs_handle_standard+0x270/0x270
[  464.323237]  ? __switch_to_asm+0x40/0x70
[  464.323893]  ? __switch_to_asm+0x34/0x70
[  464.324554]  ? __switch_to_asm+0x40/0x70
[  464.325226]  ? __switch_to_asm+0x40/0x70
[  464.325863]  ? __switch_to_asm+0x34/0x70
[  464.326505]  ? __switch_to_asm+0x40/0x70
[  464.327161]  ? __switch_to_asm+0x34/0x70
[  464.327784]  ? finish_task_switch+0xa1/0x330
[  464.328414]  ? __switch_to+0x363/0x640
[  464.329044]  ? __schedule+0x575/0xaf0
[  464.329655]  ? _raw_spin_lock_irqsave+0x82/0xe0
[  464.330301]  kthread+0x1a3/0x1f0
[  464.330884]  ? cifs_handle_standard+0x270/0x270
[  464.331624]  ? kthread_create_on_node+0xd0/0xd0
[  464.332347]  ret_from_fork+0x35/0x40

[  464.333577] Allocated by task 1110:
[  464.334381]  save_stack+0x1b/0x80
[  464.335123]  __kasan_kmalloc.constprop.0+0xc2/0xd0
[  464.335848]  cifs_smb3_do_mount+0xd4/0xb00
[  464.336619]  legacy_get_tree+0x6b/0xa0
[  464.337235]  vfs_get_tree+0x41/0x110
[  464.337975]  fc_mount+0xa/0x40
[  464.338557]  vfs_kern_mount.part.0+0x6c/0x80
[  464.339227]  cifs_dfs_d_automount+0x336/0xd29
[  464.339846]  follow_managed+0x1b1/0x450
[  464.340449]  lookup_fast+0x231/0x4a0
[  464.341039]  path_openat+0x240/0x1fd0
[  464.341634]  do_filp_open+0x126/0x1c0
[  464.342277]  do_sys_open+0x1eb/0x2c0
[  464.342957]  do_syscall_64+0x5e/0x190
[  464.343555]  entry_SYSCALL_64_after_hwframe+0x44/0xa9

[  464.344772] Freed by task 0:
[  464.345347]  save_stack+0x1b/0x80
[  464.345966]  __kasan_slab_free+0x12c/0x170
[  464.346576]  kfree+0xa6/0x270
[  464.347211]  rcu_core+0x39c/0xc80
[  464.347800]  __do_softirq+0x10d/0x3da

[  464.348919] The buggy address belongs to the object at ffff888155e58000
                which belongs to the cache kmalloc-256 of size 256
[  464.350222] The buggy address is located 208 bytes inside of
                256-byte region [ffff888155e58000, ffff888155e58100)
[  464.351575] The buggy address belongs to the page:
[  464.352333] page:ffffea0005579600 refcount:1 mapcount:0 mapping:ffff88815a803400 index:0x0 compound_mapcount: 0
[  464.354209] flags: 0x200000000010200(slab|head)
[  464.355353] raw: 0200000000010200 ffffea0005576200 0000000400000004 ffff88815a803400
[  464.356458] raw: 0000000000000000 0000000080100010 00000001ffffffff 0000000000000000
[  464.356458] page dumped because: kasan: bad access detected

[  464.367005] Memory state around the buggy address:
[  464.367787]  ffff888155e57f80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[  464.368877]  ffff888155e58000: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  464.369967] >ffff888155e58080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
[  464.371111]                                                  ^
[  464.371775]  ffff888155e58100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[  464.372893]  ffff888155e58180: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
[  464.373983] ==================================================================

Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
Reviewed-by: Aurelien Aptel <aaptel@suse.com>
---
 fs/cifs/connect.c | 46 +++++++++++++++++++++++++++++++++++-----------
 1 file changed, 35 insertions(+), 11 deletions(-)

diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
index fce71a6cc3f4..668d477cc9c7 100644
--- a/fs/cifs/connect.c
+++ b/fs/cifs/connect.c
@@ -391,7 +391,7 @@ static inline int reconn_set_ipaddr(struct TCP_Server_Info *server)
 #ifdef CONFIG_CIFS_DFS_UPCALL
 struct super_cb_data {
 	struct TCP_Server_Info *server;
-	struct cifs_sb_info *cifs_sb;
+	struct super_block *sb;
 };
 
 /* These functions must be called with server->srv_mutex held */
@@ -402,25 +402,39 @@ static void super_cb(struct super_block *sb, void *arg)
 	struct cifs_sb_info *cifs_sb;
 	struct cifs_tcon *tcon;
 
-	if (d->cifs_sb)
+	if (d->sb)
 		return;
 
 	cifs_sb = CIFS_SB(sb);
 	tcon = cifs_sb_master_tcon(cifs_sb);
 	if (tcon->ses->server == d->server)
-		d->cifs_sb = cifs_sb;
+		d->sb = sb;
 }
 
-static inline struct cifs_sb_info *
-find_super_by_tcp(struct TCP_Server_Info *server)
+static struct super_block *get_tcp_super(struct TCP_Server_Info *server)
 {
 	struct super_cb_data d = {
 		.server = server,
-		.cifs_sb = NULL,
+		.sb = NULL,
 	};
 
 	iterate_supers_type(&cifs_fs_type, super_cb, &d);
-	return d.cifs_sb ? d.cifs_sb : ERR_PTR(-ENOENT);
+
+	if (unlikely(!d.sb))
+		return ERR_PTR(-ENOENT);
+	/*
+	 * Grab an active reference in order to prevent automounts (DFS links)
+	 * of expiring and then freeing up our cifs superblock pointer while
+	 * we're doing failover.
+	 */
+	cifs_sb_active(d.sb);
+	return d.sb;
+}
+
+static inline void put_tcp_super(struct super_block *sb)
+{
+	if (!IS_ERR_OR_NULL(sb))
+		cifs_sb_deactive(sb);
 }
 
 static void reconn_inval_dfs_target(struct TCP_Server_Info *server,
@@ -484,6 +498,7 @@ cifs_reconnect(struct TCP_Server_Info *server)
 	struct mid_q_entry *mid_entry;
 	struct list_head retry_list;
 #ifdef CONFIG_CIFS_DFS_UPCALL
+	struct super_block *sb = NULL;
 	struct cifs_sb_info *cifs_sb = NULL;
 	struct dfs_cache_tgt_list tgt_list = {0};
 	struct dfs_cache_tgt_iterator *tgt_it = NULL;
@@ -493,13 +508,15 @@ cifs_reconnect(struct TCP_Server_Info *server)
 	server->nr_targets = 1;
 #ifdef CONFIG_CIFS_DFS_UPCALL
 	spin_unlock(&GlobalMid_Lock);
-	cifs_sb = find_super_by_tcp(server);
-	if (IS_ERR(cifs_sb)) {
-		rc = PTR_ERR(cifs_sb);
+	sb = get_tcp_super(server);
+	if (IS_ERR(sb)) {
+		rc = PTR_ERR(sb);
 		cifs_dbg(FYI, "%s: will not do DFS failover: rc = %d\n",
 			 __func__, rc);
-		cifs_sb = NULL;
+		sb = NULL;
 	} else {
+		cifs_sb = CIFS_SB(sb);
+
 		rc = reconn_setup_dfs_targets(cifs_sb, &tgt_list, &tgt_it);
 		if (rc && (rc != -EOPNOTSUPP)) {
 			cifs_server_dbg(VFS, "%s: no target servers for DFS failover\n",
@@ -516,6 +533,10 @@ cifs_reconnect(struct TCP_Server_Info *server)
 		/* the demux thread will exit normally
 		next time through the loop */
 		spin_unlock(&GlobalMid_Lock);
+#ifdef CONFIG_CIFS_DFS_UPCALL
+		dfs_cache_free_tgts(&tgt_list);
+		put_tcp_super(sb);
+#endif
 		return rc;
 	} else
 		server->tcpStatus = CifsNeedReconnect;
@@ -642,7 +663,10 @@ cifs_reconnect(struct TCP_Server_Info *server)
 				 __func__, rc);
 		}
 		dfs_cache_free_tgts(&tgt_list);
+
 	}
+
+	put_tcp_super(sb);
 #endif
 	if (server->tcpStatus == CifsNeedNegotiate)
 		mod_delayed_work(cifsiod_wq, &server->echo, 0);
-- 
2.24.0



* [PATCH v3 02/11] cifs: Fix lookup of root ses in DFS referral cache
  2019-11-27  0:36 [PATCH v3 00/11] DFS fixes Paulo Alcantara (SUSE)
  2019-11-27  0:36 ` [PATCH v3 01/11] cifs: Fix use-after-free bug in cifs_reconnect() Paulo Alcantara (SUSE)
@ 2019-11-27  0:36 ` Paulo Alcantara (SUSE)
  2019-11-27  0:36 ` [PATCH v3 03/11] cifs: Fix potential softlockups while refreshing DFS cache Paulo Alcantara (SUSE)
                   ` (8 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Paulo Alcantara (SUSE) @ 2019-11-27  0:36 UTC (permalink / raw)
  To: smfrench; +Cc: linux-cifs, Paulo Alcantara (SUSE), Aurelien Aptel

We don't care about module aliasing validation in
cifs_compose_mount_options(..., is_smb3) when finding the root SMB
session of a DFS namespace in order to refresh the DFS referral cache.

The following issue has been observed when mounting with '-t smb3' and
then specifying 'vers=2.0':

...
Nov 08 15:27:08 tw kernel: address conversion returned 0 for FS0.WIN.LOCAL
Nov 08 15:27:08 tw kernel: [kworke] ==> dns_query((null),FS0.WIN.LOCAL,13,(null))
Nov 08 15:27:08 tw kernel: [kworke] call request_key(,FS0.WIN.LOCAL,)
Nov 08 15:27:08 tw kernel: [kworke] ==> dns_resolver_cmp(FS0.WIN.LOCAL,FS0.WIN.LOCAL)
Nov 08 15:27:08 tw kernel: [kworke] <== dns_resolver_cmp() = 1
Nov 08 15:27:08 tw kernel: [kworke] <== dns_query() = 13
Nov 08 15:27:08 tw kernel: fs/cifs/dns_resolve.c: dns_resolve_server_name_to_ip: resolved: FS0.WIN.LOCAL to 192.168.30.26
===> Nov 08 15:27:08 tw kernel: CIFS VFS: vers=2.0 not permitted when mounting with smb3
Nov 08 15:27:08 tw kernel: fs/cifs/dfs_cache.c: CIFS VFS: leaving refresh_tcon (xid = 26) rc = -22
...

Fixes: 5072010ccf05 ("cifs: Fix DFS cache refresher for DFS links")
Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
Reviewed-by: Aurelien Aptel <aaptel@suse.com>
---
 fs/cifs/dfs_cache.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
index 1692c0c6c23a..2faa05860a48 100644
--- a/fs/cifs/dfs_cache.c
+++ b/fs/cifs/dfs_cache.c
@@ -1317,7 +1317,6 @@ static struct cifs_ses *find_root_ses(struct dfs_cache_vol_info *vi,
 	int rc;
 	struct dfs_info3_param ref = {0};
 	char *mdata = NULL, *devname = NULL;
-	bool is_smb3 = tcon->ses->server->vals->header_preamble_size == 0;
 	struct TCP_Server_Info *server;
 	struct cifs_ses *ses;
 	struct smb_vol vol;
@@ -1344,7 +1343,7 @@ static struct cifs_ses *find_root_ses(struct dfs_cache_vol_info *vi,
 		goto out;
 	}
 
-	rc = cifs_setup_volume_info(&vol, mdata, devname, is_smb3);
+	rc = cifs_setup_volume_info(&vol, mdata, devname, false);
 	kfree(devname);
 
 	if (rc) {
-- 
2.24.0



* [PATCH v3 03/11] cifs: Fix potential softlockups while refreshing DFS cache
  2019-11-27  0:36 [PATCH v3 00/11] DFS fixes Paulo Alcantara (SUSE)
  2019-11-27  0:36 ` [PATCH v3 01/11] cifs: Fix use-after-free bug in cifs_reconnect() Paulo Alcantara (SUSE)
  2019-11-27  0:36 ` [PATCH v3 02/11] cifs: Fix lookup of root ses in DFS referral cache Paulo Alcantara (SUSE)
@ 2019-11-27  0:36 ` Paulo Alcantara (SUSE)
  2019-11-27  0:36 ` [PATCH v3 04/11] cifs: Clean up DFS referral cache Paulo Alcantara (SUSE)
                   ` (7 subsequent siblings)
  10 siblings, 0 replies; 19+ messages in thread
From: Paulo Alcantara (SUSE) @ 2019-11-27  0:36 UTC (permalink / raw)
  To: smfrench; +Cc: linux-cifs, Paulo Alcantara (SUSE), Aurelien Aptel

We used to skip reconnects on all SMB2_IOCTL commands due to SMB3+
FSCTL_VALIDATE_NEGOTIATE_INFO - which made sense since we're still
establishing an SMB session.

However, when refresh_cache_worker() calls smb2_get_dfs_refer() while
we're under reconnect, SMB2_ioctl() does not get a proper status error
(e.g. -EHOSTDOWN in case we failed to reconnect) but rather an -EAGAIN
from cifs_send_recv(), thus looping forever in refresh_cache_worker().
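
A condensed restatement of the rule the hunks below implement (function
names as in the patch): only the validate-negotiate ioctl, which is
issued while the session is still being established, may bypass
smb2_reconnect(); every other SMB2_IOCTL goes through it so callers
like the DFS cache refresher see a real status instead of -EAGAIN.

        static int smb2_ioctl_req_init(u32 opcode, struct cifs_tcon *tcon,
                                       void **request_buf, unsigned int *total_len)
        {
                if (opcode == FSCTL_VALIDATE_NEGOTIATE_INFO)
                        return __smb2_plain_req_init(SMB2_IOCTL, tcon,
                                                     request_buf, total_len);
                return smb2_plain_req_init(SMB2_IOCTL, tcon, request_buf,
                                           total_len);
        }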

Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
Suggested-by: Aurelien Aptel <aaptel@suse.com>
Fixes: e99c63e4d86d ("SMB3: Fix deadlock in validate negotiate hits reconnect")
Reviewed-by: Aurelien Aptel <aaptel@suse.com>
---
 fs/cifs/smb2pdu.c | 41 +++++++++++++++++++++++++++++------------
 1 file changed, 29 insertions(+), 12 deletions(-)

diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
index c09ca6963394..3cd90088d8ac 100644
--- a/fs/cifs/smb2pdu.c
+++ b/fs/cifs/smb2pdu.c
@@ -252,7 +252,7 @@ smb2_reconnect(__le16 smb2_command, struct cifs_tcon *tcon)
 	if (tcon == NULL)
 		return 0;
 
-	if (smb2_command == SMB2_TREE_CONNECT || smb2_command == SMB2_IOCTL)
+	if (smb2_command == SMB2_TREE_CONNECT)
 		return 0;
 
 	if (tcon->tidStatus == CifsExiting) {
@@ -426,16 +426,9 @@ fill_small_buf(__le16 smb2_command, struct cifs_tcon *tcon, void *buf,
  * SMB information in the SMB header. If the return code is zero, this
  * function must have filled in request_buf pointer.
  */
-static int
-smb2_plain_req_init(__le16 smb2_command, struct cifs_tcon *tcon,
-		    void **request_buf, unsigned int *total_len)
+static int __smb2_plain_req_init(__le16 smb2_command, struct cifs_tcon *tcon,
+				  void **request_buf, unsigned int *total_len)
 {
-	int rc;
-
-	rc = smb2_reconnect(smb2_command, tcon);
-	if (rc)
-		return rc;
-
 	/* BB eventually switch this to SMB2 specific small buf size */
 	if (smb2_command == SMB2_SET_INFO)
 		*request_buf = cifs_buf_get();
@@ -456,7 +449,31 @@ smb2_plain_req_init(__le16 smb2_command, struct cifs_tcon *tcon,
 		cifs_stats_inc(&tcon->num_smbs_sent);
 	}
 
-	return rc;
+	return 0;
+}
+
+static int smb2_plain_req_init(__le16 smb2_command, struct cifs_tcon *tcon,
+			       void **request_buf, unsigned int *total_len)
+{
+	int rc;
+
+	rc = smb2_reconnect(smb2_command, tcon);
+	if (rc)
+		return rc;
+
+	return __smb2_plain_req_init(smb2_command, tcon, request_buf,
+				     total_len);
+}
+
+static int smb2_ioctl_req_init(u32 opcode, struct cifs_tcon *tcon,
+			       void **request_buf, unsigned int *total_len)
+{
+	/* Skip reconnect only for FSCTL_VALIDATE_NEGOTIATE_INFO IOCTLs */
+	if (opcode == FSCTL_VALIDATE_NEGOTIATE_INFO) {
+		return __smb2_plain_req_init(SMB2_IOCTL, tcon, request_buf,
+					     total_len);
+	}
+	return smb2_plain_req_init(SMB2_IOCTL, tcon, request_buf, total_len);
 }
 
 /* For explanation of negotiate contexts see MS-SMB2 section 2.2.3.1 */
@@ -2686,7 +2703,7 @@ SMB2_ioctl_init(struct cifs_tcon *tcon, struct smb_rqst *rqst,
 	int rc;
 	char *in_data_buf;
 
-	rc = smb2_plain_req_init(SMB2_IOCTL, tcon, (void **) &req, &total_len);
+	rc = smb2_ioctl_req_init(opcode, tcon, (void **) &req, &total_len);
 	if (rc)
 		return rc;
 
-- 
2.24.0



* [PATCH v3 04/11] cifs: Clean up DFS referral cache
  2019-11-27  0:36 [PATCH v3 00/11] DFS fixes Paulo Alcantara (SUSE)
                   ` (2 preceding siblings ...)
  2019-11-27  0:36 ` [PATCH v3 03/11] cifs: Fix potential softlockups while refreshing DFS cache Paulo Alcantara (SUSE)
@ 2019-11-27  0:36 ` Paulo Alcantara (SUSE)
  2019-11-27 11:32   ` Aurélien Aptel
  2019-11-27  0:36 ` [PATCH v3 05/11] cifs: Get rid of kstrdup_const()'d paths Paulo Alcantara (SUSE)
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 19+ messages in thread
From: Paulo Alcantara (SUSE) @ 2019-11-27  0:36 UTC (permalink / raw)
  To: smfrench; +Cc: linux-cifs, Paulo Alcantara (SUSE)

Do some renaming and code cleanup.

No functional changes.

Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
---
 fs/cifs/dfs_cache.c | 565 ++++++++++++++++++++++----------------------
 1 file changed, 279 insertions(+), 286 deletions(-)

diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
index 2faa05860a48..aed4183840c5 100644
--- a/fs/cifs/dfs_cache.c
+++ b/fs/cifs/dfs_cache.c
@@ -22,60 +22,59 @@
 
 #include "dfs_cache.h"
 
-#define DFS_CACHE_HTABLE_SIZE 32
-#define DFS_CACHE_MAX_ENTRIES 64
+#define CACHE_HTABLE_SIZE 32
+#define CACHE_MAX_ENTRIES 64
 
 #define IS_INTERLINK_SET(v) ((v) & (DFSREF_REFERRAL_SERVER | \
 				    DFSREF_STORAGE_SERVER))
 
-struct dfs_cache_tgt {
-	char *t_name;
-	struct list_head t_list;
+struct cache_dfs_tgt {
+	char *name;
+	struct list_head list;
 };
 
-struct dfs_cache_entry {
-	struct hlist_node ce_hlist;
-	const char *ce_path;
-	int ce_ttl;
-	int ce_srvtype;
-	int ce_flags;
-	struct timespec64 ce_etime;
-	int ce_path_consumed;
-	int ce_numtgts;
-	struct list_head ce_tlist;
-	struct dfs_cache_tgt *ce_tgthint;
-	struct rcu_head ce_rcu;
+struct cache_entry {
+	struct hlist_node hlist;
+	const char *path;
+	int ttl;
+	int srvtype;
+	int flags;
+	struct timespec64 etime;
+	int path_consumed;
+	int numtgts;
+	struct list_head tlist;
+	struct cache_dfs_tgt *tgthint;
+	struct rcu_head rcu;
 };
 
-static struct kmem_cache *dfs_cache_slab __read_mostly;
-
-struct dfs_cache_vol_info {
-	char *vi_fullpath;
-	struct smb_vol vi_vol;
-	char *vi_mntdata;
-	struct list_head vi_list;
+struct vol_info {
+	char *fullpath;
+	struct smb_vol smb_vol;
+	char *mntdata;
+	struct list_head list;
 };
 
-struct dfs_cache {
-	struct mutex dc_lock;
-	struct nls_table *dc_nlsc;
-	struct list_head dc_vol_list;
-	int dc_ttl;
-	struct delayed_work dc_refresh;
-};
+static struct kmem_cache *cache_slab __read_mostly;
+static struct workqueue_struct *dfscache_wq __read_mostly;
 
-static struct dfs_cache dfs_cache;
+static int cache_ttl;
+static struct nls_table *cache_nlsc;
 
 /*
  * Number of entries in the cache
  */
-static size_t dfs_cache_count;
+static size_t cache_count;
+
+static struct hlist_head cache_htable[CACHE_HTABLE_SIZE];
+static DEFINE_MUTEX(list_lock);
 
-static DEFINE_MUTEX(dfs_cache_list_lock);
-static struct hlist_head dfs_cache_htable[DFS_CACHE_HTABLE_SIZE];
+static LIST_HEAD(vol_list);
+static DEFINE_MUTEX(vol_lock);
 
 static void refresh_cache_worker(struct work_struct *work);
 
+static DECLARE_DELAYED_WORK(refresh_task, refresh_cache_worker);
+
 static inline bool is_path_valid(const char *path)
 {
 	return path && (strchr(path + 1, '\\') || strchr(path + 1, '/'));
@@ -100,42 +99,42 @@ static inline void free_normalized_path(const char *path, char *npath)
 		kfree(npath);
 }
 
-static inline bool cache_entry_expired(const struct dfs_cache_entry *ce)
+static inline bool cache_entry_expired(const struct cache_entry *ce)
 {
 	struct timespec64 ts;
 
 	ktime_get_coarse_real_ts64(&ts);
-	return timespec64_compare(&ts, &ce->ce_etime) >= 0;
+	return timespec64_compare(&ts, &ce->etime) >= 0;
 }
 
-static inline void free_tgts(struct dfs_cache_entry *ce)
+static inline void free_tgts(struct cache_entry *ce)
 {
-	struct dfs_cache_tgt *t, *n;
+	struct cache_dfs_tgt *t, *n;
 
-	list_for_each_entry_safe(t, n, &ce->ce_tlist, t_list) {
-		list_del(&t->t_list);
-		kfree(t->t_name);
+	list_for_each_entry_safe(t, n, &ce->tlist, list) {
+		list_del(&t->list);
+		kfree(t->name);
 		kfree(t);
 	}
 }
 
 static void free_cache_entry(struct rcu_head *rcu)
 {
-	struct dfs_cache_entry *ce = container_of(rcu, struct dfs_cache_entry,
-						  ce_rcu);
-	kmem_cache_free(dfs_cache_slab, ce);
+	struct cache_entry *ce = container_of(rcu, struct cache_entry, rcu);
+
+	kmem_cache_free(cache_slab, ce);
 }
 
-static inline void flush_cache_ent(struct dfs_cache_entry *ce)
+static inline void flush_cache_ent(struct cache_entry *ce)
 {
-	if (hlist_unhashed(&ce->ce_hlist))
+	if (hlist_unhashed(&ce->hlist))
 		return;
 
-	hlist_del_init_rcu(&ce->ce_hlist);
-	kfree_const(ce->ce_path);
+	hlist_del_init_rcu(&ce->hlist);
+	kfree_const(ce->path);
 	free_tgts(ce);
-	dfs_cache_count--;
-	call_rcu(&ce->ce_rcu, free_cache_entry);
+	cache_count--;
+	call_rcu(&ce->rcu, free_cache_entry);
 }
 
 static void flush_cache_ents(void)
@@ -143,11 +142,11 @@ static void flush_cache_ents(void)
 	int i;
 
 	rcu_read_lock();
-	for (i = 0; i < DFS_CACHE_HTABLE_SIZE; i++) {
-		struct hlist_head *l = &dfs_cache_htable[i];
-		struct dfs_cache_entry *ce;
+	for (i = 0; i < CACHE_HTABLE_SIZE; i++) {
+		struct hlist_head *l = &cache_htable[i];
+		struct cache_entry *ce;
 
-		hlist_for_each_entry_rcu(ce, l, ce_hlist)
+		hlist_for_each_entry_rcu(ce, l, hlist)
 			flush_cache_ent(ce);
 	}
 	rcu_read_unlock();
@@ -159,35 +158,35 @@ static void flush_cache_ents(void)
 static int dfscache_proc_show(struct seq_file *m, void *v)
 {
 	int bucket;
-	struct dfs_cache_entry *ce;
-	struct dfs_cache_tgt *t;
+	struct cache_entry *ce;
+	struct cache_dfs_tgt *t;
 
 	seq_puts(m, "DFS cache\n---------\n");
 
-	mutex_lock(&dfs_cache_list_lock);
+	mutex_lock(&list_lock);
 
 	rcu_read_lock();
-	hash_for_each_rcu(dfs_cache_htable, bucket, ce, ce_hlist) {
+	hash_for_each_rcu(cache_htable, bucket, ce, hlist) {
 		seq_printf(m,
 			   "cache entry: path=%s,type=%s,ttl=%d,etime=%ld,"
 			   "interlink=%s,path_consumed=%d,expired=%s\n",
-			   ce->ce_path,
-			   ce->ce_srvtype == DFS_TYPE_ROOT ? "root" : "link",
-			   ce->ce_ttl, ce->ce_etime.tv_nsec,
-			   IS_INTERLINK_SET(ce->ce_flags) ? "yes" : "no",
-			   ce->ce_path_consumed,
+			   ce->path,
+			   ce->srvtype == DFS_TYPE_ROOT ? "root" : "link",
+			   ce->ttl, ce->etime.tv_nsec,
+			   IS_INTERLINK_SET(ce->flags) ? "yes" : "no",
+			   ce->path_consumed,
 			   cache_entry_expired(ce) ? "yes" : "no");
 
-		list_for_each_entry(t, &ce->ce_tlist, t_list) {
+		list_for_each_entry(t, &ce->tlist, list) {
 			seq_printf(m, "  %s%s\n",
-				   t->t_name,
-				   ce->ce_tgthint == t ? " (target hint)" : "");
+				   t->name,
+				   ce->tgthint == t ? " (target hint)" : "");
 		}
 
 	}
 	rcu_read_unlock();
 
-	mutex_unlock(&dfs_cache_list_lock);
+	mutex_unlock(&list_lock);
 	return 0;
 }
 
@@ -205,9 +204,9 @@ static ssize_t dfscache_proc_write(struct file *file, const char __user *buffer,
 		return -EINVAL;
 
 	cifs_dbg(FYI, "clearing dfs cache");
-	mutex_lock(&dfs_cache_list_lock);
+	mutex_lock(&list_lock);
 	flush_cache_ents();
-	mutex_unlock(&dfs_cache_list_lock);
+	mutex_unlock(&list_lock);
 
 	return count;
 }
@@ -226,25 +225,25 @@ const struct file_operations dfscache_proc_fops = {
 };
 
 #ifdef CONFIG_CIFS_DEBUG2
-static inline void dump_tgts(const struct dfs_cache_entry *ce)
+static inline void dump_tgts(const struct cache_entry *ce)
 {
-	struct dfs_cache_tgt *t;
+	struct cache_dfs_tgt *t;
 
 	cifs_dbg(FYI, "target list:\n");
-	list_for_each_entry(t, &ce->ce_tlist, t_list) {
-		cifs_dbg(FYI, "  %s%s\n", t->t_name,
-			 ce->ce_tgthint == t ? " (target hint)" : "");
+	list_for_each_entry(t, &ce->tlist, list) {
+		cifs_dbg(FYI, "  %s%s\n", t->name,
+			 ce->tgthint == t ? " (target hint)" : "");
 	}
 }
 
-static inline void dump_ce(const struct dfs_cache_entry *ce)
+static inline void dump_ce(const struct cache_entry *ce)
 {
 	cifs_dbg(FYI, "cache entry: path=%s,type=%s,ttl=%d,etime=%ld,"
-		 "interlink=%s,path_consumed=%d,expired=%s\n", ce->ce_path,
-		 ce->ce_srvtype == DFS_TYPE_ROOT ? "root" : "link", ce->ce_ttl,
-		 ce->ce_etime.tv_nsec,
-		 IS_INTERLINK_SET(ce->ce_flags) ? "yes" : "no",
-		 ce->ce_path_consumed,
+		 "interlink=%s,path_consumed=%d,expired=%s\n", ce->path,
+		 ce->srvtype == DFS_TYPE_ROOT ? "root" : "link", ce->ttl,
+		 ce->etime.tv_nsec,
+		 IS_INTERLINK_SET(ce->flags) ? "yes" : "no",
+		 ce->path_consumed,
 		 cache_entry_expired(ce) ? "yes" : "no");
 	dump_tgts(ce);
 }
@@ -284,25 +283,34 @@ static inline void dump_refs(const struct dfs_info3_param *refs, int numrefs)
  */
 int dfs_cache_init(void)
 {
+	int rc;
 	int i;
 
-	dfs_cache_slab = kmem_cache_create("cifs_dfs_cache",
-					   sizeof(struct dfs_cache_entry), 0,
-					   SLAB_HWCACHE_ALIGN, NULL);
-	if (!dfs_cache_slab)
+	dfscache_wq = alloc_workqueue("cifs-dfscache",
+				      WQ_FREEZABLE | WQ_MEM_RECLAIM, 1);
+	if (!dfscache_wq)
 		return -ENOMEM;
 
-	for (i = 0; i < DFS_CACHE_HTABLE_SIZE; i++)
-		INIT_HLIST_HEAD(&dfs_cache_htable[i]);
+	cache_slab = kmem_cache_create("cifs_dfs_cache",
+				       sizeof(struct cache_entry), 0,
+				       SLAB_HWCACHE_ALIGN, NULL);
+	if (!cache_slab) {
+		rc = -ENOMEM;
+		goto out_destroy_wq;
+	}
+
+	for (i = 0; i < CACHE_HTABLE_SIZE; i++)
+		INIT_HLIST_HEAD(&cache_htable[i]);
 
-	INIT_LIST_HEAD(&dfs_cache.dc_vol_list);
-	mutex_init(&dfs_cache.dc_lock);
-	INIT_DELAYED_WORK(&dfs_cache.dc_refresh, refresh_cache_worker);
-	dfs_cache.dc_ttl = -1;
-	dfs_cache.dc_nlsc = load_nls_default();
+	cache_ttl = -1;
+	cache_nlsc = load_nls_default();
 
 	cifs_dbg(FYI, "%s: initialized DFS referral cache\n", __func__);
 	return 0;
+
+out_destroy_wq:
+	destroy_workqueue(dfscache_wq);
+	return rc;
 }
 
 static inline unsigned int cache_entry_hash(const void *data, int size)
@@ -310,7 +318,7 @@ static inline unsigned int cache_entry_hash(const void *data, int size)
 	unsigned int h;
 
 	h = jhash(data, size, 0);
-	return h & (DFS_CACHE_HTABLE_SIZE - 1);
+	return h & (CACHE_HTABLE_SIZE - 1);
 }
 
 /* Check whether second path component of @path is SYSVOL or NETLOGON */
@@ -325,11 +333,11 @@ static inline bool is_sysvol_or_netlogon(const char *path)
 }
 
 /* Return target hint of a DFS cache entry */
-static inline char *get_tgt_name(const struct dfs_cache_entry *ce)
+static inline char *get_tgt_name(const struct cache_entry *ce)
 {
-	struct dfs_cache_tgt *t = ce->ce_tgthint;
+	struct cache_dfs_tgt *t = ce->tgthint;
 
-	return t ? t->t_name : ERR_PTR(-ENOENT);
+	return t ? t->name : ERR_PTR(-ENOENT);
 }
 
 /* Return expire time out of a new entry's TTL */
@@ -346,19 +354,19 @@ static inline struct timespec64 get_expire_time(int ttl)
 }
 
 /* Allocate a new DFS target */
-static inline struct dfs_cache_tgt *alloc_tgt(const char *name)
+static inline struct cache_dfs_tgt *alloc_tgt(const char *name)
 {
-	struct dfs_cache_tgt *t;
+	struct cache_dfs_tgt *t;
 
 	t = kmalloc(sizeof(*t), GFP_KERNEL);
 	if (!t)
 		return ERR_PTR(-ENOMEM);
-	t->t_name = kstrndup(name, strlen(name), GFP_KERNEL);
-	if (!t->t_name) {
+	t->name = kstrndup(name, strlen(name), GFP_KERNEL);
+	if (!t->name) {
 		kfree(t);
 		return ERR_PTR(-ENOMEM);
 	}
-	INIT_LIST_HEAD(&t->t_list);
+	INIT_LIST_HEAD(&t->list);
 	return t;
 }
 
@@ -367,63 +375,63 @@ static inline struct dfs_cache_tgt *alloc_tgt(const char *name)
  * target hint.
  */
 static int copy_ref_data(const struct dfs_info3_param *refs, int numrefs,
-			 struct dfs_cache_entry *ce, const char *tgthint)
+			 struct cache_entry *ce, const char *tgthint)
 {
 	int i;
 
-	ce->ce_ttl = refs[0].ttl;
-	ce->ce_etime = get_expire_time(ce->ce_ttl);
-	ce->ce_srvtype = refs[0].server_type;
-	ce->ce_flags = refs[0].ref_flag;
-	ce->ce_path_consumed = refs[0].path_consumed;
+	ce->ttl = refs[0].ttl;
+	ce->etime = get_expire_time(ce->ttl);
+	ce->srvtype = refs[0].server_type;
+	ce->flags = refs[0].ref_flag;
+	ce->path_consumed = refs[0].path_consumed;
 
 	for (i = 0; i < numrefs; i++) {
-		struct dfs_cache_tgt *t;
+		struct cache_dfs_tgt *t;
 
 		t = alloc_tgt(refs[i].node_name);
 		if (IS_ERR(t)) {
 			free_tgts(ce);
 			return PTR_ERR(t);
 		}
-		if (tgthint && !strcasecmp(t->t_name, tgthint)) {
-			list_add(&t->t_list, &ce->ce_tlist);
+		if (tgthint && !strcasecmp(t->name, tgthint)) {
+			list_add(&t->list, &ce->tlist);
 			tgthint = NULL;
 		} else {
-			list_add_tail(&t->t_list, &ce->ce_tlist);
+			list_add_tail(&t->list, &ce->tlist);
 		}
-		ce->ce_numtgts++;
+		ce->numtgts++;
 	}
 
-	ce->ce_tgthint = list_first_entry_or_null(&ce->ce_tlist,
-						  struct dfs_cache_tgt, t_list);
+	ce->tgthint = list_first_entry_or_null(&ce->tlist,
+					       struct cache_dfs_tgt, list);
 
 	return 0;
 }
 
 /* Allocate a new cache entry */
-static struct dfs_cache_entry *
-alloc_cache_entry(const char *path, const struct dfs_info3_param *refs,
-		  int numrefs)
+static struct cache_entry *alloc_cache_entry(const char *path,
+					     const struct dfs_info3_param *refs,
+					     int numrefs)
 {
-	struct dfs_cache_entry *ce;
+	struct cache_entry *ce;
 	int rc;
 
-	ce = kmem_cache_zalloc(dfs_cache_slab, GFP_KERNEL);
+	ce = kmem_cache_zalloc(cache_slab, GFP_KERNEL);
 	if (!ce)
 		return ERR_PTR(-ENOMEM);
 
-	ce->ce_path = kstrdup_const(path, GFP_KERNEL);
-	if (!ce->ce_path) {
-		kmem_cache_free(dfs_cache_slab, ce);
+	ce->path = kstrdup_const(path, GFP_KERNEL);
+	if (!ce->path) {
+		kmem_cache_free(cache_slab, ce);
 		return ERR_PTR(-ENOMEM);
 	}
-	INIT_HLIST_NODE(&ce->ce_hlist);
-	INIT_LIST_HEAD(&ce->ce_tlist);
+	INIT_HLIST_NODE(&ce->hlist);
+	INIT_LIST_HEAD(&ce->tlist);
 
 	rc = copy_ref_data(refs, numrefs, ce, NULL);
 	if (rc) {
-		kfree_const(ce->ce_path);
-		kmem_cache_free(dfs_cache_slab, ce);
+		kfree_const(ce->path);
+		kmem_cache_free(cache_slab, ce);
 		ce = ERR_PTR(rc);
 	}
 	return ce;
@@ -432,13 +440,13 @@ alloc_cache_entry(const char *path, const struct dfs_info3_param *refs,
 static void remove_oldest_entry(void)
 {
 	int bucket;
-	struct dfs_cache_entry *ce;
-	struct dfs_cache_entry *to_del = NULL;
+	struct cache_entry *ce;
+	struct cache_entry *to_del = NULL;
 
 	rcu_read_lock();
-	hash_for_each_rcu(dfs_cache_htable, bucket, ce, ce_hlist) {
-		if (!to_del || timespec64_compare(&ce->ce_etime,
-						  &to_del->ce_etime) < 0)
+	hash_for_each_rcu(cache_htable, bucket, ce, hlist) {
+		if (!to_del || timespec64_compare(&ce->etime,
+						  &to_del->etime) < 0)
 			to_del = ce;
 	}
 	if (!to_del) {
@@ -453,93 +461,84 @@ static void remove_oldest_entry(void)
 }
 
 /* Add a new DFS cache entry */
-static inline struct dfs_cache_entry *
+static inline struct cache_entry *
 add_cache_entry(unsigned int hash, const char *path,
 		const struct dfs_info3_param *refs, int numrefs)
 {
-	struct dfs_cache_entry *ce;
+	struct cache_entry *ce;
 
 	ce = alloc_cache_entry(path, refs, numrefs);
 	if (IS_ERR(ce))
 		return ce;
 
-	hlist_add_head_rcu(&ce->ce_hlist, &dfs_cache_htable[hash]);
+	hlist_add_head_rcu(&ce->hlist, &cache_htable[hash]);
 
-	mutex_lock(&dfs_cache.dc_lock);
-	if (dfs_cache.dc_ttl < 0) {
-		dfs_cache.dc_ttl = ce->ce_ttl;
-		queue_delayed_work(cifsiod_wq, &dfs_cache.dc_refresh,
-				   dfs_cache.dc_ttl * HZ);
+	mutex_lock(&vol_lock);
+	if (cache_ttl < 0) {
+		cache_ttl = ce->ttl;
+		queue_delayed_work(dfscache_wq, &refresh_task, cache_ttl * HZ);
 	} else {
-		dfs_cache.dc_ttl = min_t(int, dfs_cache.dc_ttl, ce->ce_ttl);
-		mod_delayed_work(cifsiod_wq, &dfs_cache.dc_refresh,
-				 dfs_cache.dc_ttl * HZ);
+		cache_ttl = min_t(int, cache_ttl, ce->ttl);
+		mod_delayed_work(dfscache_wq, &refresh_task, cache_ttl * HZ);
 	}
-	mutex_unlock(&dfs_cache.dc_lock);
+	mutex_unlock(&vol_lock);
 
 	return ce;
 }
 
-static struct dfs_cache_entry *__find_cache_entry(unsigned int hash,
-						  const char *path)
+/*
+ * Find a DFS cache entry in hash table and optionally check prefix path against
+ * @path.
+ * Use whole path components in the match.
+ * Return ERR_PTR(-ENOENT) if the entry is not found.
+ */
+static struct cache_entry *lookup_cache_entry(const char *path,
+					      unsigned int *hash)
 {
-	struct dfs_cache_entry *ce;
+	struct cache_entry *ce;
+	unsigned int h;
 	bool found = false;
 
-	rcu_read_lock();
-	hlist_for_each_entry_rcu(ce, &dfs_cache_htable[hash], ce_hlist) {
-		if (!strcasecmp(path, ce->ce_path)) {
-#ifdef CONFIG_CIFS_DEBUG2
-			char *name = get_tgt_name(ce);
+	h = cache_entry_hash(path, strlen(path));
 
-			if (IS_ERR(name)) {
-				rcu_read_unlock();
-				return ERR_CAST(name);
-			}
-			cifs_dbg(FYI, "%s: cache hit\n", __func__);
-			cifs_dbg(FYI, "%s: target hint: %s\n", __func__, name);
-#endif
+	rcu_read_lock();
+	hlist_for_each_entry_rcu(ce, &cache_htable[h], hlist) {
+		if (!strcasecmp(path, ce->path)) {
 			found = true;
+			dump_ce(ce);
 			break;
 		}
 	}
 	rcu_read_unlock();
-	return found ? ce : ERR_PTR(-ENOENT);
-}
 
-/*
- * Find a DFS cache entry in hash table and optionally check prefix path against
- * @path.
- * Use whole path components in the match.
- * Return ERR_PTR(-ENOENT) if the entry is not found.
- */
-static inline struct dfs_cache_entry *find_cache_entry(const char *path,
-						       unsigned int *hash)
-{
-	*hash = cache_entry_hash(path, strlen(path));
-	return __find_cache_entry(*hash, path);
+	if (!found)
+		ce = ERR_PTR(-ENOENT);
+	if (hash)
+		*hash = h;
+
+	return ce;
 }
 
 static inline void destroy_slab_cache(void)
 {
 	rcu_barrier();
-	kmem_cache_destroy(dfs_cache_slab);
+	kmem_cache_destroy(cache_slab);
 }
 
-static inline void free_vol(struct dfs_cache_vol_info *vi)
+static inline void free_vol(struct vol_info *vi)
 {
-	list_del(&vi->vi_list);
-	kfree(vi->vi_fullpath);
-	kfree(vi->vi_mntdata);
-	cifs_cleanup_volume_info_contents(&vi->vi_vol);
+	list_del(&vi->list);
+	kfree(vi->fullpath);
+	kfree(vi->mntdata);
+	cifs_cleanup_volume_info_contents(&vi->smb_vol);
 	kfree(vi);
 }
 
 static inline void free_vol_list(void)
 {
-	struct dfs_cache_vol_info *vi, *nvi;
+	struct vol_info *vi, *nvi;
 
-	list_for_each_entry_safe(vi, nvi, &dfs_cache.dc_vol_list, vi_list)
+	list_for_each_entry_safe(vi, nvi, &vol_list, list)
 		free_vol(vi);
 }
 
@@ -548,40 +547,38 @@ static inline void free_vol_list(void)
  */
 void dfs_cache_destroy(void)
 {
-	cancel_delayed_work_sync(&dfs_cache.dc_refresh);
-	unload_nls(dfs_cache.dc_nlsc);
+	cancel_delayed_work_sync(&refresh_task);
+	unload_nls(cache_nlsc);
 	free_vol_list();
-	mutex_destroy(&dfs_cache.dc_lock);
-
 	flush_cache_ents();
 	destroy_slab_cache();
-	mutex_destroy(&dfs_cache_list_lock);
+	destroy_workqueue(dfscache_wq);
 
 	cifs_dbg(FYI, "%s: destroyed DFS referral cache\n", __func__);
 }
 
-static inline struct dfs_cache_entry *
+static inline struct cache_entry *
 __update_cache_entry(const char *path, const struct dfs_info3_param *refs,
 		     int numrefs)
 {
 	int rc;
 	unsigned int h;
-	struct dfs_cache_entry *ce;
+	struct cache_entry *ce;
 	char *s, *th = NULL;
 
-	ce = find_cache_entry(path, &h);
+	ce = lookup_cache_entry(path, &h);
 	if (IS_ERR(ce))
 		return ce;
 
-	if (ce->ce_tgthint) {
-		s = ce->ce_tgthint->t_name;
+	if (ce->tgthint) {
+		s = ce->tgthint->name;
 		th = kstrndup(s, strlen(s), GFP_KERNEL);
 		if (!th)
 			return ERR_PTR(-ENOMEM);
 	}
 
 	free_tgts(ce);
-	ce->ce_numtgts = 0;
+	ce->numtgts = 0;
 
 	rc = copy_ref_data(refs, numrefs, ce, th);
 	kfree(th);
@@ -593,10 +590,10 @@ __update_cache_entry(const char *path, const struct dfs_info3_param *refs,
 }
 
 /* Update an expired cache entry by getting a new DFS referral from server */
-static struct dfs_cache_entry *
+static struct cache_entry *
 update_cache_entry(const unsigned int xid, struct cifs_ses *ses,
 		   const struct nls_table *nls_codepage, int remap,
-		   const char *path, struct dfs_cache_entry *ce)
+		   const char *path, struct cache_entry *ce)
 {
 	int rc;
 	struct dfs_info3_param *refs = NULL;
@@ -636,20 +633,20 @@ update_cache_entry(const unsigned int xid, struct cifs_ses *ses,
  * For interlinks, __cifs_dfs_mount() and expand_dfs_referral() are supposed to
  * handle them properly.
  */
-static struct dfs_cache_entry *
+static struct cache_entry *
 do_dfs_cache_find(const unsigned int xid, struct cifs_ses *ses,
 		  const struct nls_table *nls_codepage, int remap,
 		  const char *path, bool noreq)
 {
 	int rc;
 	unsigned int h;
-	struct dfs_cache_entry *ce;
+	struct cache_entry *ce;
 	struct dfs_info3_param *nrefs;
 	int numnrefs;
 
 	cifs_dbg(FYI, "%s: search path: %s\n", __func__, path);
 
-	ce = find_cache_entry(path, &h);
+	ce = lookup_cache_entry(path, &h);
 	if (IS_ERR(ce)) {
 		cifs_dbg(FYI, "%s: cache miss\n", __func__);
 		/*
@@ -690,9 +687,9 @@ do_dfs_cache_find(const unsigned int xid, struct cifs_ses *ses,
 
 		cifs_dbg(FYI, "%s: new cache entry\n", __func__);
 
-		if (dfs_cache_count >= DFS_CACHE_MAX_ENTRIES) {
+		if (cache_count >= CACHE_MAX_ENTRIES) {
 			cifs_dbg(FYI, "%s: reached max cache size (%d)",
-				 __func__, DFS_CACHE_MAX_ENTRIES);
+				 __func__, CACHE_MAX_ENTRIES);
 			remove_oldest_entry();
 		}
 		ce = add_cache_entry(h, path, nrefs, numnrefs);
@@ -701,7 +698,7 @@ do_dfs_cache_find(const unsigned int xid, struct cifs_ses *ses,
 		if (IS_ERR(ce))
 			return ce;
 
-		dfs_cache_count++;
+		cache_count++;
 	}
 
 	dump_ce(ce);
@@ -723,7 +720,7 @@ do_dfs_cache_find(const unsigned int xid, struct cifs_ses *ses,
 }
 
 /* Set up a new DFS referral from a given cache entry */
-static int setup_ref(const char *path, const struct dfs_cache_entry *ce,
+static int setup_ref(const char *path, const struct cache_entry *ce,
 		     struct dfs_info3_param *ref, const char *tgt)
 {
 	int rc;
@@ -736,7 +733,7 @@ static int setup_ref(const char *path, const struct dfs_cache_entry *ce,
 	if (!ref->path_name)
 		return -ENOMEM;
 
-	ref->path_consumed = ce->ce_path_consumed;
+	ref->path_consumed = ce->path_consumed;
 
 	ref->node_name = kstrndup(tgt, strlen(tgt), GFP_KERNEL);
 	if (!ref->node_name) {
@@ -744,9 +741,9 @@ static int setup_ref(const char *path, const struct dfs_cache_entry *ce,
 		goto err_free_path;
 	}
 
-	ref->ttl = ce->ce_ttl;
-	ref->server_type = ce->ce_srvtype;
-	ref->ref_flag = ce->ce_flags;
+	ref->ttl = ce->ttl;
+	ref->server_type = ce->srvtype;
+	ref->ref_flag = ce->flags;
 
 	return 0;
 
@@ -757,25 +754,25 @@ static int setup_ref(const char *path, const struct dfs_cache_entry *ce,
 }
 
 /* Return target list of a DFS cache entry */
-static int get_tgt_list(const struct dfs_cache_entry *ce,
+static int get_tgt_list(const struct cache_entry *ce,
 			struct dfs_cache_tgt_list *tl)
 {
 	int rc;
 	struct list_head *head = &tl->tl_list;
-	struct dfs_cache_tgt *t;
+	struct cache_dfs_tgt *t;
 	struct dfs_cache_tgt_iterator *it, *nit;
 
 	memset(tl, 0, sizeof(*tl));
 	INIT_LIST_HEAD(head);
 
-	list_for_each_entry(t, &ce->ce_tlist, t_list) {
+	list_for_each_entry(t, &ce->tlist, list) {
 		it = kzalloc(sizeof(*it), GFP_KERNEL);
 		if (!it) {
 			rc = -ENOMEM;
 			goto err_free_it;
 		}
 
-		it->it_name = kstrndup(t->t_name, strlen(t->t_name),
+		it->it_name = kstrndup(t->name, strlen(t->name),
 				       GFP_KERNEL);
 		if (!it->it_name) {
 			kfree(it);
@@ -783,12 +780,12 @@ static int get_tgt_list(const struct dfs_cache_entry *ce,
 			goto err_free_it;
 		}
 
-		if (ce->ce_tgthint == t)
+		if (ce->tgthint == t)
 			list_add(&it->it_list, head);
 		else
 			list_add_tail(&it->it_list, head);
 	}
-	tl->tl_numtgts = ce->ce_numtgts;
+	tl->tl_numtgts = ce->numtgts;
 
 	return 0;
 
@@ -829,7 +826,7 @@ int dfs_cache_find(const unsigned int xid, struct cifs_ses *ses,
 {
 	int rc;
 	char *npath;
-	struct dfs_cache_entry *ce;
+	struct cache_entry *ce;
 
 	if (unlikely(!is_path_valid(path)))
 		return -EINVAL;
@@ -838,7 +835,7 @@ int dfs_cache_find(const unsigned int xid, struct cifs_ses *ses,
 	if (rc)
 		return rc;
 
-	mutex_lock(&dfs_cache_list_lock);
+	mutex_lock(&list_lock);
 	ce = do_dfs_cache_find(xid, ses, nls_codepage, remap, npath, false);
 	if (!IS_ERR(ce)) {
 		if (ref)
@@ -850,7 +847,7 @@ int dfs_cache_find(const unsigned int xid, struct cifs_ses *ses,
 	} else {
 		rc = PTR_ERR(ce);
 	}
-	mutex_unlock(&dfs_cache_list_lock);
+	mutex_unlock(&list_lock);
 	free_normalized_path(path, npath);
 	return rc;
 }
@@ -876,7 +873,7 @@ int dfs_cache_noreq_find(const char *path, struct dfs_info3_param *ref,
 {
 	int rc;
 	char *npath;
-	struct dfs_cache_entry *ce;
+	struct cache_entry *ce;
 
 	if (unlikely(!is_path_valid(path)))
 		return -EINVAL;
@@ -885,7 +882,7 @@ int dfs_cache_noreq_find(const char *path, struct dfs_info3_param *ref,
 	if (rc)
 		return rc;
 
-	mutex_lock(&dfs_cache_list_lock);
+	mutex_lock(&list_lock);
 	ce = do_dfs_cache_find(0, NULL, NULL, 0, npath, true);
 	if (IS_ERR(ce)) {
 		rc = PTR_ERR(ce);
@@ -899,7 +896,7 @@ int dfs_cache_noreq_find(const char *path, struct dfs_info3_param *ref,
 	if (!rc && tgt_list)
 		rc = get_tgt_list(ce, tgt_list);
 out:
-	mutex_unlock(&dfs_cache_list_lock);
+	mutex_unlock(&list_lock);
 	free_normalized_path(path, npath);
 	return rc;
 }
@@ -929,8 +926,8 @@ int dfs_cache_update_tgthint(const unsigned int xid, struct cifs_ses *ses,
 {
 	int rc;
 	char *npath;
-	struct dfs_cache_entry *ce;
-	struct dfs_cache_tgt *t;
+	struct cache_entry *ce;
+	struct cache_dfs_tgt *t;
 
 	if (unlikely(!is_path_valid(path)))
 		return -EINVAL;
@@ -941,7 +938,7 @@ int dfs_cache_update_tgthint(const unsigned int xid, struct cifs_ses *ses,
 
 	cifs_dbg(FYI, "%s: path: %s\n", __func__, npath);
 
-	mutex_lock(&dfs_cache_list_lock);
+	mutex_lock(&list_lock);
 	ce = do_dfs_cache_find(xid, ses, nls_codepage, remap, npath, false);
 	if (IS_ERR(ce)) {
 		rc = PTR_ERR(ce);
@@ -950,14 +947,14 @@ int dfs_cache_update_tgthint(const unsigned int xid, struct cifs_ses *ses,
 
 	rc = 0;
 
-	t = ce->ce_tgthint;
+	t = ce->tgthint;
 
-	if (likely(!strcasecmp(it->it_name, t->t_name)))
+	if (likely(!strcasecmp(it->it_name, t->name)))
 		goto out;
 
-	list_for_each_entry(t, &ce->ce_tlist, t_list) {
-		if (!strcasecmp(t->t_name, it->it_name)) {
-			ce->ce_tgthint = t;
+	list_for_each_entry(t, &ce->tlist, list) {
+		if (!strcasecmp(t->name, it->it_name)) {
+			ce->tgthint = t;
 			cifs_dbg(FYI, "%s: new target hint: %s\n", __func__,
 				 it->it_name);
 			break;
@@ -965,7 +962,7 @@ int dfs_cache_update_tgthint(const unsigned int xid, struct cifs_ses *ses,
 	}
 
 out:
-	mutex_unlock(&dfs_cache_list_lock);
+	mutex_unlock(&list_lock);
 	free_normalized_path(path, npath);
 	return rc;
 }
@@ -989,8 +986,8 @@ int dfs_cache_noreq_update_tgthint(const char *path,
 {
 	int rc;
 	char *npath;
-	struct dfs_cache_entry *ce;
-	struct dfs_cache_tgt *t;
+	struct cache_entry *ce;
+	struct cache_dfs_tgt *t;
 
 	if (unlikely(!is_path_valid(path)) || !it)
 		return -EINVAL;
@@ -1001,7 +998,7 @@ int dfs_cache_noreq_update_tgthint(const char *path,
 
 	cifs_dbg(FYI, "%s: path: %s\n", __func__, npath);
 
-	mutex_lock(&dfs_cache_list_lock);
+	mutex_lock(&list_lock);
 
 	ce = do_dfs_cache_find(0, NULL, NULL, 0, npath, true);
 	if (IS_ERR(ce)) {
@@ -1011,14 +1008,14 @@ int dfs_cache_noreq_update_tgthint(const char *path,
 
 	rc = 0;
 
-	t = ce->ce_tgthint;
+	t = ce->tgthint;
 
-	if (unlikely(!strcasecmp(it->it_name, t->t_name)))
+	if (unlikely(!strcasecmp(it->it_name, t->name)))
 		goto out;
 
-	list_for_each_entry(t, &ce->ce_tlist, t_list) {
-		if (!strcasecmp(t->t_name, it->it_name)) {
-			ce->ce_tgthint = t;
+	list_for_each_entry(t, &ce->tlist, list) {
+		if (!strcasecmp(t->name, it->it_name)) {
+			ce->tgthint = t;
 			cifs_dbg(FYI, "%s: new target hint: %s\n", __func__,
 				 it->it_name);
 			break;
@@ -1026,7 +1023,7 @@ int dfs_cache_noreq_update_tgthint(const char *path,
 	}
 
 out:
-	mutex_unlock(&dfs_cache_list_lock);
+	mutex_unlock(&list_lock);
 	free_normalized_path(path, npath);
 	return rc;
 }
@@ -1047,7 +1044,7 @@ int dfs_cache_get_tgt_referral(const char *path,
 {
 	int rc;
 	char *npath;
-	struct dfs_cache_entry *ce;
+	struct cache_entry *ce;
 	unsigned int h;
 
 	if (!it || !ref)
@@ -1061,9 +1058,9 @@ int dfs_cache_get_tgt_referral(const char *path,
 
 	cifs_dbg(FYI, "%s: path: %s\n", __func__, npath);
 
-	mutex_lock(&dfs_cache_list_lock);
+	mutex_lock(&list_lock);
 
-	ce = find_cache_entry(npath, &h);
+	ce = lookup_cache_entry(npath, &h);
 	if (IS_ERR(ce)) {
 		rc = PTR_ERR(ce);
 		goto out;
@@ -1074,7 +1071,7 @@ int dfs_cache_get_tgt_referral(const char *path,
 	rc = setup_ref(path, ce, ref, it->it_name);
 
 out:
-	mutex_unlock(&dfs_cache_list_lock);
+	mutex_unlock(&list_lock);
 	free_normalized_path(path, npath);
 	return rc;
 }
@@ -1085,7 +1082,7 @@ static int dup_vol(struct smb_vol *vol, struct smb_vol *new)
 
 	if (vol->username) {
 		new->username = kstrndup(vol->username, strlen(vol->username),
-					GFP_KERNEL);
+					 GFP_KERNEL);
 		if (!new->username)
 			return -ENOMEM;
 	}
@@ -1103,7 +1100,7 @@ static int dup_vol(struct smb_vol *vol, struct smb_vol *new)
 	}
 	if (vol->domainname) {
 		new->domainname = kstrndup(vol->domainname,
-					  strlen(vol->domainname), GFP_KERNEL);
+					   strlen(vol->domainname), GFP_KERNEL);
 		if (!new->domainname)
 			goto err_free_unc;
 	}
@@ -1150,7 +1147,7 @@ static int dup_vol(struct smb_vol *vol, struct smb_vol *new)
 int dfs_cache_add_vol(char *mntdata, struct smb_vol *vol, const char *fullpath)
 {
 	int rc;
-	struct dfs_cache_vol_info *vi;
+	struct vol_info *vi;
 
 	if (!vol || !fullpath || !mntdata)
 		return -EINVAL;
@@ -1161,38 +1158,37 @@ int dfs_cache_add_vol(char *mntdata, struct smb_vol *vol, const char *fullpath)
 	if (!vi)
 		return -ENOMEM;
 
-	vi->vi_fullpath = kstrndup(fullpath, strlen(fullpath), GFP_KERNEL);
-	if (!vi->vi_fullpath) {
+	vi->fullpath = kstrndup(fullpath, strlen(fullpath), GFP_KERNEL);
+	if (!vi->fullpath) {
 		rc = -ENOMEM;
 		goto err_free_vi;
 	}
 
-	rc = dup_vol(vol, &vi->vi_vol);
+	rc = dup_vol(vol, &vi->smb_vol);
 	if (rc)
 		goto err_free_fullpath;
 
-	vi->vi_mntdata = mntdata;
+	vi->mntdata = mntdata;
 
-	mutex_lock(&dfs_cache.dc_lock);
-	list_add_tail(&vi->vi_list, &dfs_cache.dc_vol_list);
-	mutex_unlock(&dfs_cache.dc_lock);
+	mutex_lock(&vol_lock);
+	list_add_tail(&vi->list, &vol_list);
+	mutex_unlock(&vol_lock);
 	return 0;
 
 err_free_fullpath:
-	kfree(vi->vi_fullpath);
+	kfree(vi->fullpath);
 err_free_vi:
 	kfree(vi);
 	return rc;
 }
 
-static inline struct dfs_cache_vol_info *find_vol(const char *fullpath)
+static inline struct vol_info *find_vol(const char *fullpath)
 {
-	struct dfs_cache_vol_info *vi;
+	struct vol_info *vi;
 
-	list_for_each_entry(vi, &dfs_cache.dc_vol_list, vi_list) {
-		cifs_dbg(FYI, "%s: vi->vi_fullpath: %s\n", __func__,
-			 vi->vi_fullpath);
-		if (!strcasecmp(vi->vi_fullpath, fullpath))
+	list_for_each_entry(vi, &vol_list, list) {
+		cifs_dbg(FYI, "%s: vi->fullpath: %s\n", __func__, vi->fullpath);
+		if (!strcasecmp(vi->fullpath, fullpath))
 			return vi;
 	}
 	return ERR_PTR(-ENOENT);
@@ -1209,14 +1205,14 @@ static inline struct dfs_cache_vol_info *find_vol(const char *fullpath)
 int dfs_cache_update_vol(const char *fullpath, struct TCP_Server_Info *server)
 {
 	int rc;
-	struct dfs_cache_vol_info *vi;
+	struct vol_info *vi;
 
 	if (!fullpath || !server)
 		return -EINVAL;
 
 	cifs_dbg(FYI, "%s: fullpath: %s\n", __func__, fullpath);
 
-	mutex_lock(&dfs_cache.dc_lock);
+	mutex_lock(&vol_lock);
 
 	vi = find_vol(fullpath);
 	if (IS_ERR(vi)) {
@@ -1225,12 +1221,12 @@ int dfs_cache_update_vol(const char *fullpath, struct TCP_Server_Info *server)
 	}
 
 	cifs_dbg(FYI, "%s: updating volume info\n", __func__);
-	memcpy(&vi->vi_vol.dstaddr, &server->dstaddr,
-	       sizeof(vi->vi_vol.dstaddr));
+	memcpy(&vi->smb_vol.dstaddr, &server->dstaddr,
+	       sizeof(vi->smb_vol.dstaddr));
 	rc = 0;
 
 out:
-	mutex_unlock(&dfs_cache.dc_lock);
+	mutex_unlock(&vol_lock);
 	return rc;
 }
 
@@ -1241,18 +1237,18 @@ int dfs_cache_update_vol(const char *fullpath, struct TCP_Server_Info *server)
  */
 void dfs_cache_del_vol(const char *fullpath)
 {
-	struct dfs_cache_vol_info *vi;
+	struct vol_info *vi;
 
 	if (!fullpath || !*fullpath)
 		return;
 
 	cifs_dbg(FYI, "%s: fullpath: %s\n", __func__, fullpath);
 
-	mutex_lock(&dfs_cache.dc_lock);
+	mutex_lock(&vol_lock);
 	vi = find_vol(fullpath);
 	if (!IS_ERR(vi))
 		free_vol(vi);
-	mutex_unlock(&dfs_cache.dc_lock);
+	mutex_unlock(&vol_lock);
 }
 
 /* Get all tcons that are within a DFS namespace and can be refreshed */
@@ -1280,7 +1276,7 @@ static void get_tcons(struct TCP_Server_Info *server, struct list_head *head)
 	spin_unlock(&cifs_tcp_ses_lock);
 }
 
-static inline bool is_dfs_link(const char *path)
+static bool is_dfs_link(const char *path)
 {
 	char *s;
 
@@ -1290,7 +1286,7 @@ static inline bool is_dfs_link(const char *path)
 	return !!strchr(s + 1, '\\');
 }
 
-static inline char *get_dfs_root(const char *path)
+static char *get_dfs_root(const char *path)
 {
 	char *s, *npath;
 
@@ -1310,8 +1306,9 @@ static inline char *get_dfs_root(const char *path)
 }
 
 /* Find root SMB session out of a DFS link path */
-static struct cifs_ses *find_root_ses(struct dfs_cache_vol_info *vi,
-				      struct cifs_tcon *tcon, const char *path)
+static struct cifs_ses *find_root_ses(struct vol_info *vi,
+				      struct cifs_tcon *tcon,
+				      const char *path)
 {
 	char *rpath;
 	int rc;
@@ -1333,8 +1330,7 @@ static struct cifs_ses *find_root_ses(struct dfs_cache_vol_info *vi,
 		goto out;
 	}
 
-	mdata = cifs_compose_mount_options(vi->vi_mntdata, rpath, &ref,
-					   &devname);
+	mdata = cifs_compose_mount_options(vi->mntdata, rpath, &ref, &devname);
 	free_dfs_info_param(&ref);
 
 	if (IS_ERR(mdata)) {
@@ -1373,14 +1369,13 @@ static struct cifs_ses *find_root_ses(struct dfs_cache_vol_info *vi,
 }
 
 /* Refresh DFS cache entry from a given tcon */
-static void do_refresh_tcon(struct dfs_cache *dc, struct dfs_cache_vol_info *vi,
-			    struct cifs_tcon *tcon)
+static void refresh_tcon(struct vol_info *vi, struct cifs_tcon *tcon)
 {
 	int rc = 0;
 	unsigned int xid;
 	char *path, *npath;
 	unsigned int h;
-	struct dfs_cache_entry *ce;
+	struct cache_entry *ce;
 	struct dfs_info3_param *refs = NULL;
 	int numrefs = 0;
 	struct cifs_ses *root_ses = NULL, *ses;
@@ -1393,9 +1388,9 @@ static void do_refresh_tcon(struct dfs_cache *dc, struct dfs_cache_vol_info *vi,
 	if (rc)
 		goto out;
 
-	mutex_lock(&dfs_cache_list_lock);
-	ce = find_cache_entry(npath, &h);
-	mutex_unlock(&dfs_cache_list_lock);
+	mutex_lock(&list_lock);
+	ce = lookup_cache_entry(npath, &h);
+	mutex_unlock(&list_lock);
 
 	if (IS_ERR(ce)) {
 		rc = PTR_ERR(ce);
@@ -1421,12 +1416,12 @@ static void do_refresh_tcon(struct dfs_cache *dc, struct dfs_cache_vol_info *vi,
 		rc = -EOPNOTSUPP;
 	} else {
 		rc = ses->server->ops->get_dfs_refer(xid, ses, path, &refs,
-						     &numrefs, dc->dc_nlsc,
+						     &numrefs, cache_nlsc,
 						     tcon->remap);
 		if (!rc) {
-			mutex_lock(&dfs_cache_list_lock);
+			mutex_lock(&list_lock);
 			ce = __update_cache_entry(npath, refs, numrefs);
-			mutex_unlock(&dfs_cache_list_lock);
+			mutex_unlock(&list_lock);
 			dump_refs(refs, numrefs);
 			free_dfs_info_array(refs, numrefs);
 			if (IS_ERR(ce))
@@ -1448,30 +1443,28 @@ static void do_refresh_tcon(struct dfs_cache *dc, struct dfs_cache_vol_info *vi,
  */
 static void refresh_cache_worker(struct work_struct *work)
 {
-	struct dfs_cache *dc = container_of(work, struct dfs_cache,
-					    dc_refresh.work);
-	struct dfs_cache_vol_info *vi;
+	struct vol_info *vi;
 	struct TCP_Server_Info *server;
 	LIST_HEAD(list);
 	struct cifs_tcon *tcon, *ntcon;
 
-	mutex_lock(&dc->dc_lock);
+	mutex_lock(&vol_lock);
 
-	list_for_each_entry(vi, &dc->dc_vol_list, vi_list) {
-		server = cifs_find_tcp_session(&vi->vi_vol);
+	list_for_each_entry(vi, &vol_list, list) {
+		server = cifs_find_tcp_session(&vi->smb_vol);
 		if (IS_ERR_OR_NULL(server))
 			continue;
 		if (server->tcpStatus != CifsGood)
 			goto next;
 		get_tcons(server, &list);
 		list_for_each_entry_safe(tcon, ntcon, &list, ulist) {
-			do_refresh_tcon(dc, vi, tcon);
+			refresh_tcon(vi, tcon);
 			list_del_init(&tcon->ulist);
 			cifs_put_tcon(tcon);
 		}
 next:
 		cifs_put_tcp_session(server, 0);
 	}
-	queue_delayed_work(cifsiod_wq, &dc->dc_refresh, dc->dc_ttl * HZ);
-	mutex_unlock(&dc->dc_lock);
+	queue_delayed_work(dfscache_wq, &refresh_task, cache_ttl * HZ);
+	mutex_unlock(&vol_lock);
 }
-- 
2.24.0



* [PATCH v3 05/11] cifs: Get rid of kstrdup_const()'d paths
  2019-11-27  0:36 [PATCH v3 00/11] DFS fixes Paulo Alcantara (SUSE)
                   ` (3 preceding siblings ...)
  2019-11-27  0:36 ` [PATCH v3 04/11] cifs: Clean up DFS referral cache Paulo Alcantara (SUSE)
@ 2019-11-27  0:36 ` Paulo Alcantara (SUSE)
  2019-11-27 11:33   ` Aurélien Aptel
  2019-11-27  0:36 ` [PATCH v3 06/11] cifs: Introduce helpers for finding TCP connection Paulo Alcantara (SUSE)
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 19+ messages in thread
From: Paulo Alcantara (SUSE) @ 2019-11-27  0:36 UTC (permalink / raw)
  To: smfrench; +Cc: linux-cifs, Paulo Alcantara (SUSE)

The DFS cache API is mostly used with heap-allocated strings, so the
kstrdup_const()/kfree_const() variants buy us nothing here; use plain
kstrndup() and kfree() instead.
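
As an aside (illustrative, not part of the patch): kstrdup_const() only
avoids the copy for strings living in core-kernel .rodata, so for
heap-allocated DFS paths it always degrades to a full copy while still
requiring kfree_const() on release. The hypothetical dup_cache_path()
below shows the plain equivalent the cache uses from now on:

        static char *dup_cache_path(const char *path)
        {
                /* heap source: a real allocation happens either way */
                return kstrndup(path, strlen(path), GFP_KERNEL);
        }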

Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
---
 fs/cifs/dfs_cache.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
index aed4183840c5..49c5f045270f 100644
--- a/fs/cifs/dfs_cache.c
+++ b/fs/cifs/dfs_cache.c
@@ -131,7 +131,7 @@ static inline void flush_cache_ent(struct cache_entry *ce)
 		return;
 
 	hlist_del_init_rcu(&ce->hlist);
-	kfree_const(ce->path);
+	kfree(ce->path);
 	free_tgts(ce);
 	cache_count--;
 	call_rcu(&ce->rcu, free_cache_entry);
@@ -420,7 +420,7 @@ static struct cache_entry *alloc_cache_entry(const char *path,
 	if (!ce)
 		return ERR_PTR(-ENOMEM);
 
-	ce->path = kstrdup_const(path, GFP_KERNEL);
+	ce->path = kstrndup(path, strlen(path), GFP_KERNEL);
 	if (!ce->path) {
 		kmem_cache_free(cache_slab, ce);
 		return ERR_PTR(-ENOMEM);
@@ -430,7 +430,7 @@ static struct cache_entry *alloc_cache_entry(const char *path,
 
 	rc = copy_ref_data(refs, numrefs, ce, NULL);
 	if (rc) {
-		kfree_const(ce->path);
+		kfree(ce->path);
 		kmem_cache_free(cache_slab, ce);
 		ce = ERR_PTR(rc);
 	}
-- 
2.24.0



* [PATCH v3 06/11] cifs: Introduce helpers for finding TCP connection
  2019-11-27  0:36 [PATCH v3 00/11] DFS fixes Paulo Alcantara (SUSE)
                   ` (4 preceding siblings ...)
  2019-11-27  0:36 ` [PATCH v3 05/11] cifs: Get rid of kstrdup_const()'d paths Paulo Alcantara (SUSE)
@ 2019-11-27  0:36 ` Paulo Alcantara (SUSE)
  2019-11-27 11:33   ` Aurélien Aptel
  2019-11-27  0:36 ` [PATCH v3 07/11] cifs: Merge is_path_valid() into get_normalized_path() Paulo Alcantara (SUSE)
                   ` (4 subsequent siblings)
  10 siblings, 1 reply; 19+ messages in thread
From: Paulo Alcantara (SUSE) @ 2019-11-27  0:36 UTC (permalink / raw)
  To: smfrench; +Cc: linux-cifs, Paulo Alcantara (SUSE)

Add helpers for finding TCP connections that are good candidates for
being used by the DFS refresh worker.

Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
---
 fs/cifs/dfs_cache.c | 44 +++++++++++++++++++++++++++++++-------------
 1 file changed, 31 insertions(+), 13 deletions(-)

diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
index 49c5f045270f..e889608e5e13 100644
--- a/fs/cifs/dfs_cache.c
+++ b/fs/cifs/dfs_cache.c
@@ -1305,6 +1305,30 @@ static char *get_dfs_root(const char *path)
 	return npath;
 }
 
+static inline void put_tcp_server(struct TCP_Server_Info *server)
+{
+	cifs_put_tcp_session(server, 0);
+}
+
+static struct TCP_Server_Info *get_tcp_server(struct smb_vol *vol)
+{
+	struct TCP_Server_Info *server;
+
+	server = cifs_find_tcp_session(vol);
+	if (IS_ERR_OR_NULL(server))
+		return NULL;
+
+	spin_lock(&GlobalMid_Lock);
+	if (server->tcpStatus != CifsGood) {
+		spin_unlock(&GlobalMid_Lock);
+		put_tcp_server(server);
+		return NULL;
+	}
+	spin_unlock(&GlobalMid_Lock);
+
+	return server;
+}
+
 /* Find root SMB session out of a DFS link path */
 static struct cifs_ses *find_root_ses(struct vol_info *vi,
 				      struct cifs_tcon *tcon,
@@ -1347,13 +1371,8 @@ static struct cifs_ses *find_root_ses(struct vol_info *vi,
 		goto out;
 	}
 
-	server = cifs_find_tcp_session(&vol);
-	if (IS_ERR_OR_NULL(server)) {
-		ses = ERR_PTR(-EHOSTDOWN);
-		goto out;
-	}
-	if (server->tcpStatus != CifsGood) {
-		cifs_put_tcp_session(server, 0);
+	server = get_tcp_server(&vol);
+	if (!server) {
 		ses = ERR_PTR(-EHOSTDOWN);
 		goto out;
 	}
@@ -1451,19 +1470,18 @@ static void refresh_cache_worker(struct work_struct *work)
 	mutex_lock(&vol_lock);
 
 	list_for_each_entry(vi, &vol_list, list) {
-		server = cifs_find_tcp_session(&vi->smb_vol);
-		if (IS_ERR_OR_NULL(server))
+		server = get_tcp_server(&vi->smb_vol);
+		if (!server)
 			continue;
-		if (server->tcpStatus != CifsGood)
-			goto next;
+
 		get_tcons(server, &list);
 		list_for_each_entry_safe(tcon, ntcon, &list, ulist) {
 			refresh_tcon(vi, tcon);
 			list_del_init(&tcon->ulist);
 			cifs_put_tcon(tcon);
 		}
-next:
-		cifs_put_tcp_session(server, 0);
+
+		put_tcp_server(server);
 	}
 	queue_delayed_work(dfscache_wq, &refresh_task, cache_ttl * HZ);
 	mutex_unlock(&vol_lock);
-- 
2.24.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v3 07/11] cifs: Merge is_path_valid() into get_normalized_path()
  2019-11-27  0:36 [PATCH v3 00/11] DFS fixes Paulo Alcantara (SUSE)
                   ` (5 preceding siblings ...)
  2019-11-27  0:36 ` [PATCH v3 06/11] cifs: Introduce helpers for finding TCP connection Paulo Alcantara (SUSE)
@ 2019-11-27  0:36 ` Paulo Alcantara (SUSE)
  2019-11-27 11:34   ` Aurélien Aptel
  2019-11-27  0:36 ` [PATCH v3 08/11] cifs: Fix potential deadlock when updating vol in cifs_reconnect() Paulo Alcantara (SUSE)
                   ` (3 subsequent siblings)
  10 siblings, 1 reply; 19+ messages in thread
From: Paulo Alcantara (SUSE) @ 2019-11-27  0:36 UTC (permalink / raw)
  To: smfrench; +Cc: linux-cifs, Paulo Alcantara (SUSE)

Just do the trivial path validation in get_normalized_path().

Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
---
 fs/cifs/dfs_cache.c | 21 ++++-----------------
 1 file changed, 4 insertions(+), 17 deletions(-)

diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
index e889608e5e13..1d1f7c03931b 100644
--- a/fs/cifs/dfs_cache.c
+++ b/fs/cifs/dfs_cache.c
@@ -75,13 +75,11 @@ static void refresh_cache_worker(struct work_struct *work);
 
 static DECLARE_DELAYED_WORK(refresh_task, refresh_cache_worker);
 
-static inline bool is_path_valid(const char *path)
+static int get_normalized_path(const char *path, char **npath)
 {
-	return path && (strchr(path + 1, '\\') || strchr(path + 1, '/'));
-}
+	if (!path || strlen(path) < 3 || (*path != '\\' && *path != '/'))
+		return -EINVAL;
 
-static inline int get_normalized_path(const char *path, char **npath)
-{
 	if (*path == '\\') {
 		*npath = (char *)path;
 	} else {
@@ -828,9 +826,6 @@ int dfs_cache_find(const unsigned int xid, struct cifs_ses *ses,
 	char *npath;
 	struct cache_entry *ce;
 
-	if (unlikely(!is_path_valid(path)))
-		return -EINVAL;
-
 	rc = get_normalized_path(path, &npath);
 	if (rc)
 		return rc;
@@ -875,9 +870,6 @@ int dfs_cache_noreq_find(const char *path, struct dfs_info3_param *ref,
 	char *npath;
 	struct cache_entry *ce;
 
-	if (unlikely(!is_path_valid(path)))
-		return -EINVAL;
-
 	rc = get_normalized_path(path, &npath);
 	if (rc)
 		return rc;
@@ -929,9 +921,6 @@ int dfs_cache_update_tgthint(const unsigned int xid, struct cifs_ses *ses,
 	struct cache_entry *ce;
 	struct cache_dfs_tgt *t;
 
-	if (unlikely(!is_path_valid(path)))
-		return -EINVAL;
-
 	rc = get_normalized_path(path, &npath);
 	if (rc)
 		return rc;
@@ -989,7 +978,7 @@ int dfs_cache_noreq_update_tgthint(const char *path,
 	struct cache_entry *ce;
 	struct cache_dfs_tgt *t;
 
-	if (unlikely(!is_path_valid(path)) || !it)
+	if (!it)
 		return -EINVAL;
 
 	rc = get_normalized_path(path, &npath);
@@ -1049,8 +1038,6 @@ int dfs_cache_get_tgt_referral(const char *path,
 
 	if (!it || !ref)
 		return -EINVAL;
-	if (unlikely(!is_path_valid(path)))
-		return -EINVAL;
 
 	rc = get_normalized_path(path, &npath);
 	if (rc)
-- 
2.24.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v3 08/11] cifs: Fix potential deadlock when updating vol in cifs_reconnect()
  2019-11-27  0:36 [PATCH v3 00/11] DFS fixes Paulo Alcantara (SUSE)
                   ` (6 preceding siblings ...)
  2019-11-27  0:36 ` [PATCH v3 07/11] cifs: Merge is_path_valid() into get_normalized_path() Paulo Alcantara (SUSE)
@ 2019-11-27  0:36 ` Paulo Alcantara (SUSE)
  2019-11-27 12:09   ` Aurélien Aptel
  2019-11-27  0:36 ` [PATCH v3 09/11] cifs: Avoid doing network I/O while holding cache lock Paulo Alcantara (SUSE)
                   ` (2 subsequent siblings)
  10 siblings, 1 reply; 19+ messages in thread
From: Paulo Alcantara (SUSE) @ 2019-11-27  0:36 UTC (permalink / raw)
  To: smfrench; +Cc: linux-cifs, Paulo Alcantara (SUSE)

We can't acquire vol_lock while refreshing the DFS cache because
cifs_reconnect() may call dfs_cache_update_vol() while we are walking
through the volume list.

To prevent that, make vol_info refcounted, create a temp list with all
volumes eligible for refreshing, and then use it without any locks
held.

Also turn vol_lock into a spinlock rather than a mutex.

Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
---
 fs/cifs/dfs_cache.c | 104 +++++++++++++++++++++++++++++---------------
 1 file changed, 69 insertions(+), 35 deletions(-)

diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
index 1d1f7c03931b..629190926197 100644
--- a/fs/cifs/dfs_cache.c
+++ b/fs/cifs/dfs_cache.c
@@ -52,12 +52,14 @@ struct vol_info {
 	struct smb_vol smb_vol;
 	char *mntdata;
 	struct list_head list;
+	struct list_head rlist;
+	int vol_count;
 };
 
 static struct kmem_cache *cache_slab __read_mostly;
 static struct workqueue_struct *dfscache_wq __read_mostly;
 
-static int cache_ttl;
+static atomic_t cache_ttl;
 static struct nls_table *cache_nlsc;
 
 /*
@@ -69,7 +71,7 @@ static struct hlist_head cache_htable[CACHE_HTABLE_SIZE];
 static DEFINE_MUTEX(list_lock);
 
 static LIST_HEAD(vol_list);
-static DEFINE_MUTEX(vol_lock);
+static DEFINE_SPINLOCK(vol_lock);
 
 static void refresh_cache_worker(struct work_struct *work);
 
@@ -300,7 +302,7 @@ int dfs_cache_init(void)
 	for (i = 0; i < CACHE_HTABLE_SIZE; i++)
 		INIT_HLIST_HEAD(&cache_htable[i]);
 
-	cache_ttl = -1;
+	atomic_set(&cache_ttl, -1);
 	cache_nlsc = load_nls_default();
 
 	cifs_dbg(FYI, "%s: initialized DFS referral cache\n", __func__);
@@ -464,6 +466,7 @@ add_cache_entry(unsigned int hash, const char *path,
 		const struct dfs_info3_param *refs, int numrefs)
 {
 	struct cache_entry *ce;
+	int ttl;
 
 	ce = alloc_cache_entry(path, refs, numrefs);
 	if (IS_ERR(ce))
@@ -471,15 +474,16 @@ add_cache_entry(unsigned int hash, const char *path,
 
 	hlist_add_head_rcu(&ce->hlist, &cache_htable[hash]);
 
-	mutex_lock(&vol_lock);
-	if (cache_ttl < 0) {
-		cache_ttl = ce->ttl;
-		queue_delayed_work(dfscache_wq, &refresh_task, cache_ttl * HZ);
+	ttl = atomic_read(&cache_ttl);
+	if (ttl < 0) {
+		ttl = ce->ttl;
+		queue_delayed_work(dfscache_wq, &refresh_task, ttl * HZ);
 	} else {
-		cache_ttl = min_t(int, cache_ttl, ce->ttl);
-		mod_delayed_work(dfscache_wq, &refresh_task, cache_ttl * HZ);
+		ttl = min_t(int, ttl, ce->ttl);
+		mod_delayed_work(dfscache_wq, &refresh_task, ttl * HZ);
 	}
-	mutex_unlock(&vol_lock);
+
+	atomic_set(&cache_ttl, ttl);
 
 	return ce;
 }
@@ -523,13 +527,16 @@ static inline void destroy_slab_cache(void)
 	kmem_cache_destroy(cache_slab);
 }
 
-static inline void free_vol(struct vol_info *vi)
+/* Must be called with vol_lock held */
+static inline void put_vol(struct vol_info *vi)
 {
-	list_del(&vi->list);
-	kfree(vi->fullpath);
-	kfree(vi->mntdata);
-	cifs_cleanup_volume_info_contents(&vi->smb_vol);
-	kfree(vi);
+	if (!--vi->vol_count) {
+		list_del_init(&vi->list);
+		kfree(vi->fullpath);
+		kfree(vi->mntdata);
+		cifs_cleanup_volume_info_contents(&vi->smb_vol);
+		kfree(vi);
+	}
 }
 
 static inline void free_vol_list(void)
@@ -537,7 +544,7 @@ static inline void free_vol_list(void)
 	struct vol_info *vi, *nvi;
 
 	list_for_each_entry_safe(vi, nvi, &vol_list, list)
-		free_vol(vi);
+		put_vol(vi);
 }
 
 /**
@@ -1156,10 +1163,12 @@ int dfs_cache_add_vol(char *mntdata, struct smb_vol *vol, const char *fullpath)
 		goto err_free_fullpath;
 
 	vi->mntdata = mntdata;
+	vi->vol_count++;
 
-	mutex_lock(&vol_lock);
+	spin_lock(&vol_lock);
 	list_add_tail(&vi->list, &vol_list);
-	mutex_unlock(&vol_lock);
+	spin_unlock(&vol_lock);
+
 	return 0;
 
 err_free_fullpath:
@@ -1169,7 +1178,8 @@ int dfs_cache_add_vol(char *mntdata, struct smb_vol *vol, const char *fullpath)
 	return rc;
 }
 
-static inline struct vol_info *find_vol(const char *fullpath)
+/* Must be called with vol_lock held */
+static struct vol_info *find_vol(const char *fullpath)
 {
 	struct vol_info *vi;
 
@@ -1199,21 +1209,22 @@ int dfs_cache_update_vol(const char *fullpath, struct TCP_Server_Info *server)
 
 	cifs_dbg(FYI, "%s: fullpath: %s\n", __func__, fullpath);
 
-	mutex_lock(&vol_lock);
+	spin_lock(&vol_lock);
 
 	vi = find_vol(fullpath);
 	if (IS_ERR(vi)) {
 		rc = PTR_ERR(vi);
-		goto out;
+		goto out_unlock;
 	}
 
 	cifs_dbg(FYI, "%s: updating volume info\n", __func__);
 	memcpy(&vi->smb_vol.dstaddr, &server->dstaddr,
 	       sizeof(vi->smb_vol.dstaddr));
+
 	rc = 0;
 
-out:
-	mutex_unlock(&vol_lock);
+out_unlock:
+	spin_unlock(&vol_lock);
 	return rc;
 }
 
@@ -1231,11 +1242,11 @@ void dfs_cache_del_vol(const char *fullpath)
 
 	cifs_dbg(FYI, "%s: fullpath: %s\n", __func__, fullpath);
 
-	mutex_lock(&vol_lock);
+	spin_lock(&vol_lock);
 	vi = find_vol(fullpath);
 	if (!IS_ERR(vi))
-		free_vol(vi);
-	mutex_unlock(&vol_lock);
+		put_vol(vi);
+	spin_unlock(&vol_lock);
 }
 
 /* Get all tcons that are within a DFS namespace and can be refreshed */
@@ -1449,27 +1460,50 @@ static void refresh_tcon(struct vol_info *vi, struct cifs_tcon *tcon)
  */
 static void refresh_cache_worker(struct work_struct *work)
 {
-	struct vol_info *vi;
+	struct vol_info *vi, *nvi;
 	struct TCP_Server_Info *server;
-	LIST_HEAD(list);
+	LIST_HEAD(vols);
+	LIST_HEAD(tcons);
 	struct cifs_tcon *tcon, *ntcon;
 
-	mutex_lock(&vol_lock);
-
+	/*
+	 * Find SMB volumes that are eligible (server->tcpStatus == CifsGood)
+	 * for refreshing.
+	 */
+	spin_lock(&vol_lock);
 	list_for_each_entry(vi, &vol_list, list) {
 		server = get_tcp_server(&vi->smb_vol);
 		if (!server)
 			continue;
 
-		get_tcons(server, &list);
-		list_for_each_entry_safe(tcon, ntcon, &list, ulist) {
+		vi->vol_count++;
+		list_add_tail(&vi->rlist, &vols);
+		put_tcp_server(server);
+	}
+	spin_unlock(&vol_lock);
+
+	/* Walk through all TCONs and refresh any expired cache entry */
+	list_for_each_entry_safe(vi, nvi, &vols, rlist) {
+		server = get_tcp_server(&vi->smb_vol);
+		if (!server)
+			goto next_vol;
+
+		get_tcons(server, &tcons);
+		list_for_each_entry_safe(tcon, ntcon, &tcons, ulist) {
 			refresh_tcon(vi, tcon);
 			list_del_init(&tcon->ulist);
 			cifs_put_tcon(tcon);
 		}
 
 		put_tcp_server(server);
+
+next_vol:
+		list_del_init(&vi->rlist);
+		spin_lock(&vol_lock);
+		put_vol(vi);
+		spin_unlock(&vol_lock);
 	}
-	queue_delayed_work(dfscache_wq, &refresh_task, cache_ttl * HZ);
-	mutex_unlock(&vol_lock);
+
+	queue_delayed_work(dfscache_wq, &refresh_task,
+			   atomic_read(&cache_ttl) * HZ);
 }
-- 
2.24.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v3 09/11] cifs: Avoid doing network I/O while holding cache lock
  2019-11-27  0:36 [PATCH v3 00/11] DFS fixes Paulo Alcantara (SUSE)
                   ` (7 preceding siblings ...)
  2019-11-27  0:36 ` [PATCH v3 08/11] cifs: Fix potential deadlock when updating vol in cifs_reconnect() Paulo Alcantara (SUSE)
@ 2019-11-27  0:36 ` Paulo Alcantara (SUSE)
  2019-11-27 13:32   ` Aurélien Aptel
  2019-11-27  0:36 ` [PATCH v3 10/11] cifs: Fix retrieval of DFS referrals in cifs_mount() Paulo Alcantara (SUSE)
  2019-11-27  0:36 ` [PATCH v3 11/11] cifs: Always update signing key of first channel Paulo Alcantara (SUSE)
  10 siblings, 1 reply; 19+ messages in thread
From: Paulo Alcantara (SUSE) @ 2019-11-27  0:36 UTC (permalink / raw)
  To: smfrench; +Cc: linux-cifs, Paulo Alcantara (SUSE)

When creating or updating a cache entry, we need to get a DFS
referral (get_dfs_referral), so avoid holding any locks during such
a network operation.

Also, do some rework in the cache API and make it use a rwsem.

Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
---
 fs/cifs/dfs_cache.c | 547 +++++++++++++++++++++++---------------------
 1 file changed, 283 insertions(+), 264 deletions(-)

diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
index 629190926197..a2beb94963da 100644
--- a/fs/cifs/dfs_cache.c
+++ b/fs/cifs/dfs_cache.c
@@ -5,8 +5,6 @@
  * Copyright (c) 2018-2019 Paulo Alcantara <palcantara@suse.de>
  */
 
-#include <linux/rcupdate.h>
-#include <linux/rculist.h>
 #include <linux/jhash.h>
 #include <linux/ktime.h>
 #include <linux/slab.h>
@@ -44,7 +42,6 @@ struct cache_entry {
 	int numtgts;
 	struct list_head tlist;
 	struct cache_dfs_tgt *tgthint;
-	struct rcu_head rcu;
 };
 
 struct vol_info {
@@ -65,10 +62,10 @@ static struct nls_table *cache_nlsc;
 /*
  * Number of entries in the cache
  */
-static size_t cache_count;
+static atomic_t cache_count;
 
 static struct hlist_head cache_htable[CACHE_HTABLE_SIZE];
-static DEFINE_MUTEX(list_lock);
+static DECLARE_RWSEM(list_sem);
 
 static LIST_HEAD(vol_list);
 static DEFINE_SPINLOCK(vol_lock);
@@ -118,38 +115,29 @@ static inline void free_tgts(struct cache_entry *ce)
 	}
 }
 
-static void free_cache_entry(struct rcu_head *rcu)
-{
-	struct cache_entry *ce = container_of(rcu, struct cache_entry, rcu);
-
-	kmem_cache_free(cache_slab, ce);
-}
-
 static inline void flush_cache_ent(struct cache_entry *ce)
 {
-	if (hlist_unhashed(&ce->hlist))
-		return;
-
-	hlist_del_init_rcu(&ce->hlist);
+	hlist_del_init(&ce->hlist);
 	kfree(ce->path);
 	free_tgts(ce);
-	cache_count--;
-	call_rcu(&ce->rcu, free_cache_entry);
+	atomic_dec(&cache_count);
+	kmem_cache_free(cache_slab, ce);
 }
 
 static void flush_cache_ents(void)
 {
 	int i;
 
-	rcu_read_lock();
 	for (i = 0; i < CACHE_HTABLE_SIZE; i++) {
 		struct hlist_head *l = &cache_htable[i];
+		struct hlist_node *n;
 		struct cache_entry *ce;
 
-		hlist_for_each_entry_rcu(ce, l, hlist)
-			flush_cache_ent(ce);
+		hlist_for_each_entry_safe(ce, n, l, hlist) {
+			if (!hlist_unhashed(&ce->hlist))
+				flush_cache_ent(ce);
+		}
 	}
-	rcu_read_unlock();
 }
 
 /*
@@ -157,36 +145,39 @@ static void flush_cache_ents(void)
  */
 static int dfscache_proc_show(struct seq_file *m, void *v)
 {
-	int bucket;
+	int i;
 	struct cache_entry *ce;
 	struct cache_dfs_tgt *t;
 
 	seq_puts(m, "DFS cache\n---------\n");
 
-	mutex_lock(&list_lock);
-
-	rcu_read_lock();
-	hash_for_each_rcu(cache_htable, bucket, ce, hlist) {
-		seq_printf(m,
-			   "cache entry: path=%s,type=%s,ttl=%d,etime=%ld,"
-			   "interlink=%s,path_consumed=%d,expired=%s\n",
-			   ce->path,
-			   ce->srvtype == DFS_TYPE_ROOT ? "root" : "link",
-			   ce->ttl, ce->etime.tv_nsec,
-			   IS_INTERLINK_SET(ce->flags) ? "yes" : "no",
-			   ce->path_consumed,
-			   cache_entry_expired(ce) ? "yes" : "no");
-
-		list_for_each_entry(t, &ce->tlist, list) {
-			seq_printf(m, "  %s%s\n",
-				   t->name,
-				   ce->tgthint == t ? " (target hint)" : "");
-		}
+	down_read(&list_sem);
+	for (i = 0; i < CACHE_HTABLE_SIZE; i++) {
+		struct hlist_head *l = &cache_htable[i];
 
+		hlist_for_each_entry(ce, l, hlist) {
+			if (hlist_unhashed(&ce->hlist))
+				continue;
+
+			seq_printf(m,
+				   "cache entry: path=%s,type=%s,ttl=%d,etime=%ld,"
+				   "interlink=%s,path_consumed=%d,expired=%s\n",
+				   ce->path,
+				   ce->srvtype == DFS_TYPE_ROOT ? "root" : "link",
+				   ce->ttl, ce->etime.tv_nsec,
+				   IS_INTERLINK_SET(ce->flags) ? "yes" : "no",
+				   ce->path_consumed,
+				   cache_entry_expired(ce) ? "yes" : "no");
+
+			list_for_each_entry(t, &ce->tlist, list) {
+				seq_printf(m, "  %s%s\n",
+					   t->name,
+					   ce->tgthint == t ? " (target hint)" : "");
+			}
+		}
 	}
-	rcu_read_unlock();
+	up_read(&list_sem);
 
-	mutex_unlock(&list_lock);
 	return 0;
 }
 
@@ -204,9 +195,10 @@ static ssize_t dfscache_proc_write(struct file *file, const char __user *buffer,
 		return -EINVAL;
 
 	cifs_dbg(FYI, "clearing dfs cache");
-	mutex_lock(&list_lock);
+
+	down_write(&list_sem);
 	flush_cache_ents();
-	mutex_unlock(&list_lock);
+	up_write(&list_sem);
 
 	return count;
 }
@@ -303,6 +295,7 @@ int dfs_cache_init(void)
 		INIT_HLIST_HEAD(&cache_htable[i]);
 
 	atomic_set(&cache_ttl, -1);
+	atomic_set(&cache_count, 0);
 	cache_nlsc = load_nls_default();
 
 	cifs_dbg(FYI, "%s: initialized DFS referral cache\n", __func__);
@@ -354,14 +347,14 @@ static inline struct timespec64 get_expire_time(int ttl)
 }
 
 /* Allocate a new DFS target */
-static inline struct cache_dfs_tgt *alloc_tgt(const char *name)
+static struct cache_dfs_tgt *alloc_target(const char *name)
 {
 	struct cache_dfs_tgt *t;
 
-	t = kmalloc(sizeof(*t), GFP_KERNEL);
+	t = kmalloc(sizeof(*t), GFP_ATOMIC);
 	if (!t)
 		return ERR_PTR(-ENOMEM);
-	t->name = kstrndup(name, strlen(name), GFP_KERNEL);
+	t->name = kstrndup(name, strlen(name), GFP_ATOMIC);
 	if (!t->name) {
 		kfree(t);
 		return ERR_PTR(-ENOMEM);
@@ -388,7 +381,7 @@ static int copy_ref_data(const struct dfs_info3_param *refs, int numrefs,
 	for (i = 0; i < numrefs; i++) {
 		struct cache_dfs_tgt *t;
 
-		t = alloc_tgt(refs[i].node_name);
+		t = alloc_target(refs[i].node_name);
 		if (IS_ERR(t)) {
 			free_tgts(ce);
 			return PTR_ERR(t);
@@ -437,42 +430,45 @@ static struct cache_entry *alloc_cache_entry(const char *path,
 	return ce;
 }
 
+/* Must be called with list_sem held */
 static void remove_oldest_entry(void)
 {
-	int bucket;
+	int i;
 	struct cache_entry *ce;
 	struct cache_entry *to_del = NULL;
 
-	rcu_read_lock();
-	hash_for_each_rcu(cache_htable, bucket, ce, hlist) {
-		if (!to_del || timespec64_compare(&ce->etime,
-						  &to_del->etime) < 0)
-			to_del = ce;
+	for (i = 0; i < CACHE_HTABLE_SIZE; i++) {
+		struct hlist_head *l = &cache_htable[i];
+
+		hlist_for_each_entry(ce, l, hlist) {
+			if (hlist_unhashed(&ce->hlist))
+				continue;
+			if (!to_del || timespec64_compare(&ce->etime,
+							  &to_del->etime) < 0)
+				to_del = ce;
+		}
 	}
+
 	if (!to_del) {
 		cifs_dbg(FYI, "%s: no entry to remove", __func__);
-		goto out;
+		return;
 	}
+
 	cifs_dbg(FYI, "%s: removing entry", __func__);
 	dump_ce(to_del);
 	flush_cache_ent(to_del);
-out:
-	rcu_read_unlock();
 }
 
 /* Add a new DFS cache entry */
-static inline struct cache_entry *
-add_cache_entry(unsigned int hash, const char *path,
-		const struct dfs_info3_param *refs, int numrefs)
+static int add_cache_entry(const char *path, unsigned int hash,
+			   struct dfs_info3_param *refs, int numrefs)
 {
 	struct cache_entry *ce;
 	int ttl;
 
 	ce = alloc_cache_entry(path, refs, numrefs);
 	if (IS_ERR(ce))
-		return ce;
-
-	hlist_add_head_rcu(&ce->hlist, &cache_htable[hash]);
+		return PTR_ERR(ce);
 
 	ttl = atomic_read(&cache_ttl);
 	if (ttl < 0) {
@@ -485,13 +481,20 @@ add_cache_entry(unsigned int hash, const char *path,
 
 	atomic_set(&cache_ttl, ttl);
 
-	return ce;
+	down_write(&list_sem);
+	hlist_add_head(&ce->hlist, &cache_htable[hash]);
+	dump_ce(ce);
+	up_write(&list_sem);
+
+	return 0;
 }
 
 /*
  * Find a DFS cache entry in hash table and optionally check prefix path against
  * @path.
  * Use whole path components in the match.
+ * Must be called with list_sem held.
+ *
  * Return ERR_PTR(-ENOENT) if the entry is not found.
  */
 static struct cache_entry *lookup_cache_entry(const char *path,
@@ -503,15 +506,13 @@ static struct cache_entry *lookup_cache_entry(const char *path,
 
 	h = cache_entry_hash(path, strlen(path));
 
-	rcu_read_lock();
-	hlist_for_each_entry_rcu(ce, &cache_htable[h], hlist) {
+	hlist_for_each_entry(ce, &cache_htable[h], hlist) {
 		if (!strcasecmp(path, ce->path)) {
 			found = true;
 			dump_ce(ce);
 			break;
 		}
 	}
-	rcu_read_unlock();
 
 	if (!found)
 		ce = ERR_PTR(-ENOENT);
@@ -521,12 +522,6 @@ static struct cache_entry *lookup_cache_entry(const char *path,
 	return ce;
 }
 
-static inline void destroy_slab_cache(void)
-{
-	rcu_barrier();
-	kmem_cache_destroy(cache_slab);
-}
-
 /* Must be called with vol_lock held */
 static inline void put_vol(struct vol_info *vi)
 {
@@ -556,77 +551,74 @@ void dfs_cache_destroy(void)
 	unload_nls(cache_nlsc);
 	free_vol_list();
 	flush_cache_ents();
-	destroy_slab_cache();
+	kmem_cache_destroy(cache_slab);
 	destroy_workqueue(dfscache_wq);
 
 	cifs_dbg(FYI, "%s: destroyed DFS referral cache\n", __func__);
 }
 
-static inline struct cache_entry *
-__update_cache_entry(const char *path, const struct dfs_info3_param *refs,
-		     int numrefs)
+/* Must be called with list_sem held */
+static int __update_cache_entry(const char *path,
+				const struct dfs_info3_param *refs,
+				int numrefs)
 {
 	int rc;
-	unsigned int h;
 	struct cache_entry *ce;
 	char *s, *th = NULL;
 
-	ce = lookup_cache_entry(path, &h);
+	ce = lookup_cache_entry(path, NULL);
 	if (IS_ERR(ce))
-		return ce;
+		return PTR_ERR(ce);
 
 	if (ce->tgthint) {
 		s = ce->tgthint->name;
-		th = kstrndup(s, strlen(s), GFP_KERNEL);
+		th = kstrndup(s, strlen(s), GFP_ATOMIC);
 		if (!th)
-			return ERR_PTR(-ENOMEM);
+			return -ENOMEM;
 	}
 
 	free_tgts(ce);
 	ce->numtgts = 0;
 
 	rc = copy_ref_data(refs, numrefs, ce, th);
-	kfree(th);
 
-	if (rc)
-		ce = ERR_PTR(rc);
+	kfree(th);
 
-	return ce;
+	return 0;
 }
 
-/* Update an expired cache entry by getting a new DFS referral from server */
-static struct cache_entry *
-update_cache_entry(const unsigned int xid, struct cifs_ses *ses,
-		   const struct nls_table *nls_codepage, int remap,
-		   const char *path, struct cache_entry *ce)
+static int get_dfs_referral(const unsigned int xid, struct cifs_ses *ses,
+			    const struct nls_table *nls_codepage, int remap,
+			    const char *path,  struct dfs_info3_param **refs,
+			    int *numrefs)
 {
-	int rc;
-	struct dfs_info3_param *refs = NULL;
-	int numrefs = 0;
+	cifs_dbg(FYI, "%s: get an DFS referral for %s\n", __func__, path);
 
-	cifs_dbg(FYI, "%s: update expired cache entry\n", __func__);
-	/*
-	 * Check if caller provided enough parameters to update an expired
-	 * entry.
-	 */
 	if (!ses || !ses->server || !ses->server->ops->get_dfs_refer)
-		return ERR_PTR(-ETIME);
+		return -EOPNOTSUPP;
 	if (unlikely(!nls_codepage))
-		return ERR_PTR(-ETIME);
+		return -EINVAL;
 
-	cifs_dbg(FYI, "%s: DFS referral request for %s\n", __func__, path);
+	*refs = NULL;
+	*numrefs = 0;
 
-	rc = ses->server->ops->get_dfs_refer(xid, ses, path, &refs, &numrefs,
-					     nls_codepage, remap);
-	if (rc)
-		ce = ERR_PTR(rc);
-	else
-		ce = __update_cache_entry(path, refs, numrefs);
+	return ses->server->ops->get_dfs_refer(xid, ses, path, refs, numrefs,
+					       nls_codepage, remap);
+}
 
-	dump_refs(refs, numrefs);
-	free_dfs_info_array(refs, numrefs);
+/* Update an expired cache entry by getting a new DFS referral from server */
+static int update_cache_entry(const char *path,
+			      const struct dfs_info3_param *refs,
+			      int numrefs)
+{
 
-	return ce;
+	int rc;
+
+	down_write(&list_sem);
+	rc = __update_cache_entry(path, refs, numrefs);
+	up_write(&list_sem);
+
+	return rc;
 }
 
 /*
@@ -638,95 +630,86 @@ update_cache_entry(const unsigned int xid, struct cifs_ses *ses,
  * For interlinks, __cifs_dfs_mount() and expand_dfs_referral() are supposed to
  * handle them properly.
  */
-static struct cache_entry *
-do_dfs_cache_find(const unsigned int xid, struct cifs_ses *ses,
-		  const struct nls_table *nls_codepage, int remap,
-		  const char *path, bool noreq)
+static int __dfs_cache_find(const unsigned int xid, struct cifs_ses *ses,
+			    const struct nls_table *nls_codepage, int remap,
+			    const char *path, bool noreq)
 {
 	int rc;
-	unsigned int h;
+	unsigned int hash;
 	struct cache_entry *ce;
-	struct dfs_info3_param *nrefs;
-	int numnrefs;
+	struct dfs_info3_param *refs = NULL;
+	int numrefs = 0;
+	bool newent = false;
 
 	cifs_dbg(FYI, "%s: search path: %s\n", __func__, path);
 
-	ce = lookup_cache_entry(path, &h);
-	if (IS_ERR(ce)) {
-		cifs_dbg(FYI, "%s: cache miss\n", __func__);
-		/*
-		 * If @noreq is set, no requests will be sent to the server for
-		 * either updating or getting a new DFS referral.
-		 */
-		if (noreq)
-			return ce;
-		/*
-		 * No cache entry was found, so check for valid parameters that
-		 * will be required to get a new DFS referral and then create a
-		 * new cache entry.
-		 */
-		if (!ses || !ses->server || !ses->server->ops->get_dfs_refer) {
-			ce = ERR_PTR(-EOPNOTSUPP);
-			return ce;
-		}
-		if (unlikely(!nls_codepage)) {
-			ce = ERR_PTR(-EINVAL);
-			return ce;
-		}
+	down_read(&list_sem);
 
-		nrefs = NULL;
-		numnrefs = 0;
+	ce = lookup_cache_entry(path, &hash);
 
-		cifs_dbg(FYI, "%s: DFS referral request for %s\n", __func__,
-			 path);
+	/*
+	 * If @noreq is set, no requests will be sent to the server. Just return
+	 * the cache entry.
+	 */
+	if (noreq) {
+		up_read(&list_sem);
+		return IS_ERR(ce) ? PTR_ERR(ce) : 0;
+	}
 
-		rc = ses->server->ops->get_dfs_refer(xid, ses, path, &nrefs,
-						     &numnrefs, nls_codepage,
-						     remap);
-		if (rc) {
-			ce = ERR_PTR(rc);
-			return ce;
+	if (!IS_ERR(ce)) {
+		if (!cache_entry_expired(ce)) {
+			dump_ce(ce);
+			up_read(&list_sem);
+			return 0;
 		}
+	} else {
+		newent = true;
+	}
 
-		dump_refs(nrefs, numnrefs);
+	up_read(&list_sem);
 
-		cifs_dbg(FYI, "%s: new cache entry\n", __func__);
+	/*
+	 * No entry was found.
+	 *
+	 * Request a new DFS referral in order to create a new cache entry, or
+	 * update an existing one.
+	 */
+	rc = get_dfs_referral(xid, ses, nls_codepage, remap, path,
+			      &refs, &numrefs);
+	if (rc)
+		return rc;
 
-		if (cache_count >= CACHE_MAX_ENTRIES) {
-			cifs_dbg(FYI, "%s: reached max cache size (%d)",
-				 __func__, CACHE_MAX_ENTRIES);
-			remove_oldest_entry();
-		}
-		ce = add_cache_entry(h, path, nrefs, numnrefs);
-		free_dfs_info_array(nrefs, numnrefs);
+	dump_refs(refs, numrefs);
 
-		if (IS_ERR(ce))
-			return ce;
+	if (!newent) {
+		rc = update_cache_entry(path, refs, numrefs);
+		goto out_free_refs;
+	}
 
-		cache_count++;
+	if (atomic_read(&cache_count) >= CACHE_MAX_ENTRIES) {
+		cifs_dbg(FYI, "%s: reached max cache size (%d)", __func__,
+			 CACHE_MAX_ENTRIES);
+		down_write(&list_sem);
+		remove_oldest_entry();
+		up_write(&list_sem);
 	}
 
-	dump_ce(ce);
+	rc = add_cache_entry(path, hash, refs, numrefs);
+	if (!rc)
+		atomic_inc(&cache_count);
 
-	/* Just return the found cache entry in case @noreq is set */
-	if (noreq)
-		return ce;
-
-	if (cache_entry_expired(ce)) {
-		cifs_dbg(FYI, "%s: expired cache entry\n", __func__);
-		ce = update_cache_entry(xid, ses, nls_codepage, remap, path,
-					ce);
-		if (IS_ERR(ce)) {
-			cifs_dbg(FYI, "%s: failed to update expired entry\n",
-				 __func__);
-		}
-	}
-	return ce;
+out_free_refs:
+	free_dfs_info_array(refs, numrefs);
+	return rc;
 }
 
-/* Set up a new DFS referral from a given cache entry */
-static int setup_ref(const char *path, const struct cache_entry *ce,
-		     struct dfs_info3_param *ref, const char *tgt)
+/*
+ * Set up a DFS referral from a given cache entry.
+ *
+ * Must be called with list_sem held.
+ */
+static int setup_referral(const char *path, struct cache_entry *ce,
+			  struct dfs_info3_param *ref, const char *target)
 {
 	int rc;
 
@@ -734,18 +717,17 @@ static int setup_ref(const char *path, const struct cache_entry *ce,
 
 	memset(ref, 0, sizeof(*ref));
 
-	ref->path_name = kstrndup(path, strlen(path), GFP_KERNEL);
+	ref->path_name = kstrndup(path, strlen(path), GFP_ATOMIC);
 	if (!ref->path_name)
 		return -ENOMEM;
 
-	ref->path_consumed = ce->path_consumed;
-
-	ref->node_name = kstrndup(tgt, strlen(tgt), GFP_KERNEL);
+	ref->node_name = kstrndup(target, strlen(target), GFP_ATOMIC);
 	if (!ref->node_name) {
 		rc = -ENOMEM;
 		goto err_free_path;
 	}
 
+	ref->path_consumed = ce->path_consumed;
 	ref->ttl = ce->ttl;
 	ref->server_type = ce->srvtype;
 	ref->ref_flag = ce->flags;
@@ -759,8 +741,7 @@ static int setup_ref(const char *path, const struct cache_entry *ce,
 }
 
 /* Return target list of a DFS cache entry */
-static int get_tgt_list(const struct cache_entry *ce,
-			struct dfs_cache_tgt_list *tl)
+static int get_targets(struct cache_entry *ce, struct dfs_cache_tgt_list *tl)
 {
 	int rc;
 	struct list_head *head = &tl->tl_list;
@@ -771,14 +752,13 @@ static int get_tgt_list(const struct cache_entry *ce,
 	INIT_LIST_HEAD(head);
 
 	list_for_each_entry(t, &ce->tlist, list) {
-		it = kzalloc(sizeof(*it), GFP_KERNEL);
+		it = kzalloc(sizeof(*it), GFP_ATOMIC);
 		if (!it) {
 			rc = -ENOMEM;
 			goto err_free_it;
 		}
 
-		it->it_name = kstrndup(t->name, strlen(t->name),
-				       GFP_KERNEL);
+		it->it_name = kstrndup(t->name, strlen(t->name), GFP_ATOMIC);
 		if (!it->it_name) {
 			kfree(it);
 			rc = -ENOMEM;
@@ -790,6 +770,7 @@ static int get_tgt_list(const struct cache_entry *ce,
 		else
 			list_add_tail(&it->it_list, head);
 	}
+
 	tl->tl_numtgts = ce->numtgts;
 
 	return 0;
@@ -837,19 +818,29 @@ int dfs_cache_find(const unsigned int xid, struct cifs_ses *ses,
 	if (rc)
 		return rc;
 
-	mutex_lock(&list_lock);
-	ce = do_dfs_cache_find(xid, ses, nls_codepage, remap, npath, false);
-	if (!IS_ERR(ce)) {
-		if (ref)
-			rc = setup_ref(path, ce, ref, get_tgt_name(ce));
-		else
-			rc = 0;
-		if (!rc && tgt_list)
-			rc = get_tgt_list(ce, tgt_list);
-	} else {
+	rc = __dfs_cache_find(xid, ses, nls_codepage, remap, npath, false);
+	if (rc)
+		goto out_free_path;
+
+	down_read(&list_sem);
+
+	ce = lookup_cache_entry(npath, NULL);
+	if (IS_ERR(ce)) {
+		up_read(&list_sem);
 		rc = PTR_ERR(ce);
+		goto out_free_path;
 	}
-	mutex_unlock(&list_lock);
+
+	if (ref)
+		rc = setup_referral(path, ce, ref, get_tgt_name(ce));
+	else
+		rc = 0;
+	if (!rc && tgt_list)
+		rc = get_targets(ce, tgt_list);
+
+	up_read(&list_sem);
+
+out_free_path:
 	free_normalized_path(path, npath);
 	return rc;
 }
@@ -881,22 +872,27 @@ int dfs_cache_noreq_find(const char *path, struct dfs_info3_param *ref,
 	if (rc)
 		return rc;
 
-	mutex_lock(&list_lock);
-	ce = do_dfs_cache_find(0, NULL, NULL, 0, npath, true);
+	cifs_dbg(FYI, "%s: path: %s\n", __func__, npath);
+
+	down_read(&list_sem);
+
+	ce = lookup_cache_entry(npath, NULL);
 	if (IS_ERR(ce)) {
 		rc = PTR_ERR(ce);
-		goto out;
+		goto out_unlock;
 	}
 
 	if (ref)
-		rc = setup_ref(path, ce, ref, get_tgt_name(ce));
+		rc = setup_referral(path, ce, ref, get_tgt_name(ce));
 	else
 		rc = 0;
 	if (!rc && tgt_list)
-		rc = get_tgt_list(ce, tgt_list);
-out:
-	mutex_unlock(&list_lock);
+		rc = get_targets(ce, tgt_list);
+
+out_unlock:
+	up_read(&list_sem);
 	free_normalized_path(path, npath);
+
 	return rc;
 }
 
@@ -932,21 +928,24 @@ int dfs_cache_update_tgthint(const unsigned int xid, struct cifs_ses *ses,
 	if (rc)
 		return rc;
 
-	cifs_dbg(FYI, "%s: path: %s\n", __func__, npath);
+	cifs_dbg(FYI, "%s: update target hint - path: %s\n", __func__, npath);
+
+	rc = __dfs_cache_find(xid, ses, nls_codepage, remap, npath, false);
+	if (rc)
+		goto out_free_path;
+
+	down_write(&list_sem);
 
-	mutex_lock(&list_lock);
-	ce = do_dfs_cache_find(xid, ses, nls_codepage, remap, npath, false);
+	ce = lookup_cache_entry(npath, NULL);
 	if (IS_ERR(ce)) {
 		rc = PTR_ERR(ce);
-		goto out;
+		goto out_unlock;
 	}
 
-	rc = 0;
-
 	t = ce->tgthint;
 
 	if (likely(!strcasecmp(it->it_name, t->name)))
-		goto out;
+		goto out_unlock;
 
 	list_for_each_entry(t, &ce->tlist, list) {
 		if (!strcasecmp(t->name, it->it_name)) {
@@ -957,9 +956,11 @@ int dfs_cache_update_tgthint(const unsigned int xid, struct cifs_ses *ses,
 		}
 	}
 
-out:
-	mutex_unlock(&list_lock);
+out_unlock:
+	up_write(&list_sem);
+out_free_path:
 	free_normalized_path(path, npath);
+
 	return rc;
 }
 
@@ -994,20 +995,19 @@ int dfs_cache_noreq_update_tgthint(const char *path,
 
 	cifs_dbg(FYI, "%s: path: %s\n", __func__, npath);
 
-	mutex_lock(&list_lock);
+	down_write(&list_sem);
 
-	ce = do_dfs_cache_find(0, NULL, NULL, 0, npath, true);
+	ce = lookup_cache_entry(npath, NULL);
 	if (IS_ERR(ce)) {
 		rc = PTR_ERR(ce);
-		goto out;
+		goto out_unlock;
 	}
 
 	rc = 0;
-
 	t = ce->tgthint;
 
 	if (unlikely(!strcasecmp(it->it_name, t->name)))
-		goto out;
+		goto out_unlock;
 
 	list_for_each_entry(t, &ce->tlist, list) {
 		if (!strcasecmp(t->name, it->it_name)) {
@@ -1018,9 +1018,10 @@ int dfs_cache_noreq_update_tgthint(const char *path,
 		}
 	}
 
-out:
-	mutex_unlock(&list_lock);
+out_unlock:
+	up_write(&list_sem);
 	free_normalized_path(path, npath);
+
 	return rc;
 }
 
@@ -1041,7 +1042,6 @@ int dfs_cache_get_tgt_referral(const char *path,
 	int rc;
 	char *npath;
 	struct cache_entry *ce;
-	unsigned int h;
 
 	if (!it || !ref)
 		return -EINVAL;
@@ -1052,21 +1052,22 @@ int dfs_cache_get_tgt_referral(const char *path,
 
 	cifs_dbg(FYI, "%s: path: %s\n", __func__, npath);
 
-	mutex_lock(&list_lock);
+	down_read(&list_sem);
 
-	ce = lookup_cache_entry(npath, &h);
+	ce = lookup_cache_entry(npath, NULL);
 	if (IS_ERR(ce)) {
 		rc = PTR_ERR(ce);
-		goto out;
+		goto out_unlock;
 	}
 
 	cifs_dbg(FYI, "%s: target name: %s\n", __func__, it->it_name);
 
-	rc = setup_ref(path, ce, ref, it->it_name);
+	rc = setup_referral(path, ce, ref, it->it_name);
 
-out:
-	mutex_unlock(&list_lock);
+out_unlock:
+	up_read(&list_sem);
 	free_normalized_path(path, npath);
+
 	return rc;
 }
 
@@ -1334,6 +1335,7 @@ static struct cifs_ses *find_root_ses(struct vol_info *vi,
 {
 	char *rpath;
 	int rc;
+	struct cache_entry *ce;
 	struct dfs_info3_param ref = {0};
 	char *mdata = NULL, *devname = NULL;
 	struct TCP_Server_Info *server;
@@ -1344,15 +1346,26 @@ static struct cifs_ses *find_root_ses(struct vol_info *vi,
 	if (IS_ERR(rpath))
 		return ERR_CAST(rpath);
 
-	memset(&vol, 0, sizeof(vol));
+	down_read(&list_sem);
 
-	rc = dfs_cache_noreq_find(rpath, &ref, NULL);
+	ce = lookup_cache_entry(rpath, NULL);
+	if (IS_ERR(ce)) {
+		up_read(&list_sem);
+		ses = ERR_CAST(ce);
+		goto out;
+	}
+
+	rc = setup_referral(path, ce, &ref, get_tgt_name(ce));
 	if (rc) {
+		up_read(&list_sem);
 		ses = ERR_PTR(rc);
 		goto out;
 	}
 
-	mdata = cifs_compose_mount_options(vi->mntdata, rpath, &ref, &devname);
+	up_read(&list_sem);
+
+	mdata = cifs_compose_mount_options(vi->mntdata, rpath, &ref,
+					   &devname);
 	free_dfs_info_param(&ref);
 
 	if (IS_ERR(mdata)) {
@@ -1386,16 +1399,15 @@ static struct cifs_ses *find_root_ses(struct vol_info *vi,
 }
 
 /* Refresh DFS cache entry from a given tcon */
-static void refresh_tcon(struct vol_info *vi, struct cifs_tcon *tcon)
+static int refresh_tcon(struct vol_info *vi, struct cifs_tcon *tcon)
 {
 	int rc = 0;
 	unsigned int xid;
 	char *path, *npath;
-	unsigned int h;
 	struct cache_entry *ce;
+	struct cifs_ses *root_ses = NULL, *ses;
 	struct dfs_info3_param *refs = NULL;
 	int numrefs = 0;
-	struct cifs_ses *root_ses = NULL, *ses;
 
 	xid = get_xid();
 
@@ -1403,19 +1415,23 @@ static void refresh_tcon(struct vol_info *vi, struct cifs_tcon *tcon)
 
 	rc = get_normalized_path(path, &npath);
 	if (rc)
-		goto out;
+		goto out_free_xid;
 
-	mutex_lock(&list_lock);
-	ce = lookup_cache_entry(npath, &h);
-	mutex_unlock(&list_lock);
+	down_read(&list_sem);
 
+	ce = lookup_cache_entry(npath, NULL);
 	if (IS_ERR(ce)) {
 		rc = PTR_ERR(ce);
-		goto out;
+		up_read(&list_sem);
+		goto out_free_path;
 	}
 
-	if (!cache_entry_expired(ce))
-		goto out;
+	if (!cache_entry_expired(ce)) {
+		up_read(&list_sem);
+		goto out_free_path;
+	}
+
+	up_read(&list_sem);
 
 	/* If it's a DFS Link, then use root SMB session for refreshing it */
 	if (is_dfs_link(npath)) {
@@ -1423,35 +1439,29 @@ static void refresh_tcon(struct vol_info *vi, struct cifs_tcon *tcon)
 		if (IS_ERR(ses)) {
 			rc = PTR_ERR(ses);
 			root_ses = NULL;
-			goto out;
+			goto out_free_path;
 		}
 	} else {
 		ses = tcon->ses;
 	}
 
-	if (unlikely(!ses->server->ops->get_dfs_refer)) {
-		rc = -EOPNOTSUPP;
-	} else {
-		rc = ses->server->ops->get_dfs_refer(xid, ses, path, &refs,
-						     &numrefs, cache_nlsc,
-						     tcon->remap);
-		if (!rc) {
-			mutex_lock(&list_lock);
-			ce = __update_cache_entry(npath, refs, numrefs);
-			mutex_unlock(&list_lock);
-			dump_refs(refs, numrefs);
-			free_dfs_info_array(refs, numrefs);
-			if (IS_ERR(ce))
-				rc = PTR_ERR(ce);
-		}
+	rc = get_dfs_referral(xid, ses, cache_nlsc, tcon->remap, npath, &refs,
+			      &numrefs);
+	if (!rc) {
+		dump_refs(refs, numrefs);
+		rc = update_cache_entry(npath, refs, numrefs);
+		free_dfs_info_array(refs, numrefs);
 	}
 
-out:
 	if (root_ses)
 		cifs_put_smb_ses(root_ses);
 
-	free_xid(xid);
+out_free_path:
 	free_normalized_path(path, npath);
+
+out_free_xid:
+	free_xid(xid);
+	return rc;
 }
 
 /*
@@ -1465,6 +1475,7 @@ static void refresh_cache_worker(struct work_struct *work)
 	LIST_HEAD(vols);
 	LIST_HEAD(tcons);
 	struct cifs_tcon *tcon, *ntcon;
+	int rc;
 
 	/*
 	 * Find SMB volumes that are eligible (server->tcpStatus == CifsGood)
@@ -1489,8 +1500,16 @@ static void refresh_cache_worker(struct work_struct *work)
 			goto next_vol;
 
 		get_tcons(server, &tcons);
+		rc = 0;
+
 		list_for_each_entry_safe(tcon, ntcon, &tcons, ulist) {
-			refresh_tcon(vi, tcon);
+			/*
+			 * Skip tcp server if any of its tcons failed to refresh
+			 * (possibly due to reconnects).
+			 */
+			if (!rc)
+				rc = refresh_tcon(vi, tcon);
+
 			list_del_init(&tcon->ulist);
 			cifs_put_tcon(tcon);
 		}
-- 
2.24.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v3 10/11] cifs: Fix retrieval of DFS referrals in cifs_mount()
  2019-11-27  0:36 [PATCH v3 00/11] DFS fixes Paulo Alcantara (SUSE)
                   ` (8 preceding siblings ...)
  2019-11-27  0:36 ` [PATCH v3 09/11] cifs: Avoid doing network I/O while holding cache lock Paulo Alcantara (SUSE)
@ 2019-11-27  0:36 ` Paulo Alcantara (SUSE)
  2019-11-27 13:33   ` Aurélien Aptel
  2019-11-27  0:36 ` [PATCH v3 11/11] cifs: Always update signing key of first channel Paulo Alcantara (SUSE)
  10 siblings, 1 reply; 19+ messages in thread
From: Paulo Alcantara (SUSE) @ 2019-11-27  0:36 UTC (permalink / raw)
  To: smfrench; +Cc: linux-cifs, Paulo Alcantara (SUSE), stable, Matthew Ruffell

Make sure that DFS referrals are sent to newly resolved root targets
as in a multi-tier DFS setup.

Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
Link: https://lkml.kernel.org/r/05aa2995-e85e-0ff4-d003-5bb08bd17a22@canonical.com
Cc: stable@vger.kernel.org
Tested-by: Matthew Ruffell <matthew.ruffell@canonical.com>
---
 fs/cifs/connect.c | 32 ++++++++++++++++++++++----------
 1 file changed, 22 insertions(+), 10 deletions(-)

diff --git a/fs/cifs/connect.c b/fs/cifs/connect.c
index 668d477cc9c7..86d98d73749d 100644
--- a/fs/cifs/connect.c
+++ b/fs/cifs/connect.c
@@ -4776,6 +4776,17 @@ static int is_path_remote(struct cifs_sb_info *cifs_sb, struct smb_vol *vol,
 }
 
 #ifdef CONFIG_CIFS_DFS_UPCALL
+static inline void set_root_tcon(struct cifs_sb_info *cifs_sb,
+				 struct cifs_tcon *tcon,
+				 struct cifs_tcon **root)
+{
+	spin_lock(&cifs_tcp_ses_lock);
+	tcon->tc_count++;
+	tcon->remap = cifs_remap(cifs_sb);
+	spin_unlock(&cifs_tcp_ses_lock);
+	*root = tcon;
+}
+
 int cifs_mount(struct cifs_sb_info *cifs_sb, struct smb_vol *vol)
 {
 	int rc = 0;
@@ -4877,18 +4888,10 @@ int cifs_mount(struct cifs_sb_info *cifs_sb, struct smb_vol *vol)
 	/* Cache out resolved root server */
 	(void)dfs_cache_find(xid, ses, cifs_sb->local_nls, cifs_remap(cifs_sb),
 			     root_path + 1, NULL, NULL);
-	/*
-	 * Save root tcon for additional DFS requests to update or create a new
-	 * DFS cache entry, or even perform DFS failover.
-	 */
-	spin_lock(&cifs_tcp_ses_lock);
-	tcon->tc_count++;
-	tcon->dfs_path = root_path;
+	kfree(root_path);
 	root_path = NULL;
-	tcon->remap = cifs_remap(cifs_sb);
-	spin_unlock(&cifs_tcp_ses_lock);
 
-	root_tcon = tcon;
+	set_root_tcon(cifs_sb, tcon, &root_tcon);
 
 	for (count = 1; ;) {
 		if (!rc && tcon) {
@@ -4925,6 +4928,15 @@ int cifs_mount(struct cifs_sb_info *cifs_sb, struct smb_vol *vol)
 			mount_put_conns(cifs_sb, xid, server, ses, tcon);
 			rc = mount_get_conns(vol, cifs_sb, &xid, &server, &ses,
 					     &tcon);
+			/*
+			 * Ensure that DFS referrals go through new root server.
+			 */
+			if (!rc && tcon &&
+			    (tcon->share_flags & (SHI1005_FLAGS_DFS |
+						  SHI1005_FLAGS_DFS_ROOT))) {
+				cifs_put_tcon(root_tcon);
+				set_root_tcon(cifs_sb, tcon, &root_tcon);
+			}
 		}
 		if (rc) {
 			if (rc == -EACCES || rc == -EOPNOTSUPP)
-- 
2.24.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [PATCH v3 11/11] cifs: Always update signing key of first channel
  2019-11-27  0:36 [PATCH v3 00/11] DFS fixes Paulo Alcantara (SUSE)
                   ` (9 preceding siblings ...)
  2019-11-27  0:36 ` [PATCH v3 10/11] cifs: Fix retrieval of DFS referrals in cifs_mount() Paulo Alcantara (SUSE)
@ 2019-11-27  0:36 ` Paulo Alcantara (SUSE)
  10 siblings, 0 replies; 19+ messages in thread
From: Paulo Alcantara (SUSE) @ 2019-11-27  0:36 UTC (permalink / raw)
  To: smfrench; +Cc: linux-cifs, Paulo Alcantara (SUSE), Aurelien Aptel

Update signing key of first channel whenever generating the master
signing/encryption/decryption keys rather than only in cifs_mount().

This also fixes reconnect when re-establishing SMB sessions to other
servers.

Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
Reviewed-by: Aurelien Aptel <aaptel@suse.com>
---
 fs/cifs/smb2transport.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/fs/cifs/smb2transport.c b/fs/cifs/smb2transport.c
index 86501239cef5..387c88704c52 100644
--- a/fs/cifs/smb2transport.c
+++ b/fs/cifs/smb2transport.c
@@ -407,6 +407,10 @@ generate_smb3signingkey(struct cifs_ses *ses,
 				  SMB3_SIGN_KEY_SIZE);
 		if (rc)
 			return rc;
+
+		memcpy(ses->chans[0].signkey, ses->smb3signingkey,
+		       SMB3_SIGN_KEY_SIZE);
+
 		rc = generate_key(ses, ptriplet->encryption.label,
 				  ptriplet->encryption.context,
 				  ses->smb3encryptionkey,
-- 
2.24.0


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 04/11] cifs: Clean up DFS referral cache
  2019-11-27  0:36 ` [PATCH v3 04/11] cifs: Clean up DFS referral cache Paulo Alcantara (SUSE)
@ 2019-11-27 11:32   ` Aurélien Aptel
  0 siblings, 0 replies; 19+ messages in thread
From: Aurélien Aptel @ 2019-11-27 11:32 UTC (permalink / raw)
  To: Paulo Alcantara (SUSE), smfrench; +Cc: linux-cifs, Paulo Alcantara (SUSE)

"Paulo Alcantara (SUSE)" <pc@cjr.nz> writes:
> Do some renaming and code cleanup.
>
> No functional changes.

Reviewed-by: Aurelien Aptel <aaptel@suse.com>

-- 
Aurélien Aptel / SUSE Labs Samba Team
GPG: 1839 CB5F 9F5B FB9B AA97  8C99 03C8 A49B 521B D5D3
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg, DE
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 247165 (AG München)

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 05/11] cifs: Get rid of kstrdup_const()'d paths
  2019-11-27  0:36 ` [PATCH v3 05/11] cifs: Get rid of kstrdup_const()'d paths Paulo Alcantara (SUSE)
@ 2019-11-27 11:33   ` Aurélien Aptel
  0 siblings, 0 replies; 19+ messages in thread
From: Aurélien Aptel @ 2019-11-27 11:33 UTC (permalink / raw)
  To: Paulo Alcantara (SUSE), smfrench; +Cc: linux-cifs, Paulo Alcantara (SUSE)

"Paulo Alcantara (SUSE)" <pc@cjr.nz> writes:
> The DFS cache API is mostly used with heap allocated strings.

Reviewed-by: Aurelien Aptel <aaptel@suse.com>

-- 
Aurélien Aptel / SUSE Labs Samba Team
GPG: 1839 CB5F 9F5B FB9B AA97  8C99 03C8 A49B 521B D5D3
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg, DE
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 247165 (AG München)

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 06/11] cifs: Introduce helpers for finding TCP connection
  2019-11-27  0:36 ` [PATCH v3 06/11] cifs: Introduce helpers for finding TCP connection Paulo Alcantara (SUSE)
@ 2019-11-27 11:33   ` Aurélien Aptel
  0 siblings, 0 replies; 19+ messages in thread
From: Aurélien Aptel @ 2019-11-27 11:33 UTC (permalink / raw)
  To: Paulo Alcantara (SUSE), smfrench; +Cc: linux-cifs, Paulo Alcantara (SUSE)

"Paulo Alcantara (SUSE)" <pc@cjr.nz> writes:
> Add helpers for finding TCP connections that are good candidates for
> being used by the DFS refresh worker.

Reviewed-by: Aurelien Aptel <aaptel@suse.com>

-- 
Aurélien Aptel / SUSE Labs Samba Team
GPG: 1839 CB5F 9F5B FB9B AA97  8C99 03C8 A49B 521B D5D3
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg, DE
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 247165 (AG München)

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 07/11] cifs: Merge is_path_valid() into get_normalized_path()
  2019-11-27  0:36 ` [PATCH v3 07/11] cifs: Merge is_path_valid() into get_normalized_path() Paulo Alcantara (SUSE)
@ 2019-11-27 11:34   ` Aurélien Aptel
  0 siblings, 0 replies; 19+ messages in thread
From: Aurélien Aptel @ 2019-11-27 11:34 UTC (permalink / raw)
  To: Paulo Alcantara (SUSE), smfrench; +Cc: linux-cifs, Paulo Alcantara (SUSE)

"Paulo Alcantara (SUSE)" <pc@cjr.nz> writes:
> Just do the trivial path validation in get_normalized_path().

Reviewed-by: Aurelien Aptel <aaptel@suse.com>

-- 
Aurélien Aptel / SUSE Labs Samba Team
GPG: 1839 CB5F 9F5B FB9B AA97  8C99 03C8 A49B 521B D5D3
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg, DE
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 247165 (AG München)

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 08/11] cifs: Fix potential deadlock when updating vol in cifs_reconnect()
  2019-11-27  0:36 ` [PATCH v3 08/11] cifs: Fix potential deadlock when updating vol in cifs_reconnect() Paulo Alcantara (SUSE)
@ 2019-11-27 12:09   ` Aurélien Aptel
  0 siblings, 0 replies; 19+ messages in thread
From: Aurélien Aptel @ 2019-11-27 12:09 UTC (permalink / raw)
  To: Paulo Alcantara (SUSE), smfrench; +Cc: linux-cifs, Paulo Alcantara (SUSE)

"Paulo Alcantara (SUSE)" <pc@cjr.nz> writes:
> We can't acquire vol_lock while refreshing the DFS cache because
> cifs_reconnect() may call dfs_cache_update_vol() while we are walking
> through the volume list.
>
> To prevent that, make vol_info refcounted, create a temp list with all
> volumes eligible for refreshing, and then use it without any locks
> held.
>
> Also turn vol_lock into a spinlock rather than a mutex.
>
> Signed-off-by: Paulo Alcantara (SUSE) <pc@cjr.nz>
> ---
>  fs/cifs/dfs_cache.c | 104 +++++++++++++++++++++++++++++---------------
>  1 file changed, 69 insertions(+), 35 deletions(-)
>
> diff --git a/fs/cifs/dfs_cache.c b/fs/cifs/dfs_cache.c
> index 1d1f7c03931b..629190926197 100644
> --- a/fs/cifs/dfs_cache.c
> +++ b/fs/cifs/dfs_cache.c
> @@ -52,12 +52,14 @@ struct vol_info {
>  	struct smb_vol smb_vol;
>  	char *mntdata;
>  	struct list_head list;
> +	struct list_head rlist;
> +	int vol_count;
>  };

If vol_count is going to be accessed/modified from multiple threads
without locks, there might be races. I think you should use kref:

https://www.kernel.org/doc/Documentation/kref.txt
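
Roughly what I mean (completely untested sketch; vol_release() and
get_vol() are made-up names here, and the cleanup calls are the ones
your put_vol() already does):

/* needs #include <linux/kref.h> */
struct vol_info {
	/* ...same fields as in this patch... */
	struct list_head list;
	struct list_head rlist;
	struct kref refcnt;	/* instead of a bare int vol_count */
};

/* called once the last reference is dropped */
static void vol_release(struct kref *kref)
{
	struct vol_info *vi = container_of(kref, struct vol_info, refcnt);

	list_del_init(&vi->list);
	kfree(vi->fullpath);
	kfree(vi->mntdata);
	cifs_cleanup_volume_info_contents(&vi->smb_vol);
	kfree(vi);
}

/* kref_init(&vi->refcnt) where the volume gets added, then: */
static inline void get_vol(struct vol_info *vi)
{
	kref_get(&vi->refcnt);
}

static inline void put_vol(struct vol_info *vi)
{
	kref_put(&vi->refcnt, vol_release);
}

That way the counting itself is race-free; vol_lock would of course
still be needed around the vol_list manipulation.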

-- 
Aurélien Aptel / SUSE Labs Samba Team
GPG: 1839 CB5F 9F5B FB9B AA97  8C99 03C8 A49B 521B D5D3
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg, DE
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 247165 (AG München)

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 09/11] cifs: Avoid doing network I/O while holding cache lock
  2019-11-27  0:36 ` [PATCH v3 09/11] cifs: Avoid doing network I/O while holding cache lock Paulo Alcantara (SUSE)
@ 2019-11-27 13:32   ` Aurélien Aptel
  0 siblings, 0 replies; 19+ messages in thread
From: Aurélien Aptel @ 2019-11-27 13:32 UTC (permalink / raw)
  To: Paulo Alcantara (SUSE), smfrench; +Cc: linux-cifs, Paulo Alcantara (SUSE)

"Paulo Alcantara (SUSE)" <pc@cjr.nz> writes:
> When creating or updating a cache entry, we need to get a DFS
> referral (get_dfs_referral), so avoid holding any locks during such
> a network operation.
>
> Also, do some rework in the cache API and make it use a rwsem.
>

Commit msg should mention what is done in the patch to avoid locking:

* change cache hashtable sync method from RCU sync to a read/write lock
* use GFP_ATOMIC in memory allocations

Reviewed-by: Aurelien Aptel <aaptel@suse.com>

-- 
Aurélien Aptel / SUSE Labs Samba Team
GPG: 1839 CB5F 9F5B FB9B AA97  8C99 03C8 A49B 521B D5D3
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg, DE
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 247165 (AG München)

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [PATCH v3 10/11] cifs: Fix retrieval of DFS referrals in cifs_mount()
  2019-11-27  0:36 ` [PATCH v3 10/11] cifs: Fix retrieval of DFS referrals in cifs_mount() Paulo Alcantara (SUSE)
@ 2019-11-27 13:33   ` Aurélien Aptel
  0 siblings, 0 replies; 19+ messages in thread
From: Aurélien Aptel @ 2019-11-27 13:33 UTC (permalink / raw)
  To: Paulo Alcantara (SUSE), smfrench
  Cc: linux-cifs, Paulo Alcantara (SUSE), stable, Matthew Ruffell

"Paulo Alcantara (SUSE)" <pc@cjr.nz> writes:
> Make sure that DFS referrals are sent to newly resolved root targets
> as in a multi-tier DFS setup.

Reviewed-by: Aurelien Aptel <aaptel@suse.com>

-- 
Aurélien Aptel / SUSE Labs Samba Team
GPG: 1839 CB5F 9F5B FB9B AA97  8C99 03C8 A49B 521B D5D3
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg, DE
GF: Felix Imendörffer, Mary Higgins, Sri Rasiah HRB 247165 (AG München)

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2019-11-27 13:33 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-11-27  0:36 [PATCH v3 00/11] DFS fixes Paulo Alcantara (SUSE)
2019-11-27  0:36 ` [PATCH v3 01/11] cifs: Fix use-after-free bug in cifs_reconnect() Paulo Alcantara (SUSE)
2019-11-27  0:36 ` [PATCH v3 02/11] cifs: Fix lookup of root ses in DFS referral cache Paulo Alcantara (SUSE)
2019-11-27  0:36 ` [PATCH v3 03/11] cifs: Fix potential softlockups while refreshing DFS cache Paulo Alcantara (SUSE)
2019-11-27  0:36 ` [PATCH v3 04/11] cifs: Clean up DFS referral cache Paulo Alcantara (SUSE)
2019-11-27 11:32   ` Aurélien Aptel
2019-11-27  0:36 ` [PATCH v3 05/11] cifs: Get rid of kstrdup_const()'d paths Paulo Alcantara (SUSE)
2019-11-27 11:33   ` Aurélien Aptel
2019-11-27  0:36 ` [PATCH v3 06/11] cifs: Introduce helpers for finding TCP connection Paulo Alcantara (SUSE)
2019-11-27 11:33   ` Aurélien Aptel
2019-11-27  0:36 ` [PATCH v3 07/11] cifs: Merge is_path_valid() into get_normalized_path() Paulo Alcantara (SUSE)
2019-11-27 11:34   ` Aurélien Aptel
2019-11-27  0:36 ` [PATCH v3 08/11] cifs: Fix potential deadlock when updating vol in cifs_reconnect() Paulo Alcantara (SUSE)
2019-11-27 12:09   ` Aurélien Aptel
2019-11-27  0:36 ` [PATCH v3 09/11] cifs: Avoid doing network I/O while holding cache lock Paulo Alcantara (SUSE)
2019-11-27 13:32   ` Aurélien Aptel
2019-11-27  0:36 ` [PATCH v3 10/11] cifs: Fix retrieval of DFS referrals in cifs_mount() Paulo Alcantara (SUSE)
2019-11-27 13:33   ` Aurélien Aptel
2019-11-27  0:36 ` [PATCH v3 11/11] cifs: Always update signing key of first channel Paulo Alcantara (SUSE)
