* [Cluster-devel] [RESEND dlm/next 00/16] fs: dlm: fixes, cleanups and locktorture
@ 2022-08-15 19:43 Alexander Aring
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 01/16] fs: dlm: fix race in lowcomms Alexander Aring
                   ` (15 more replies)
  0 siblings, 16 replies; 17+ messages in thread
From: Alexander Aring @ 2022-08-15 19:43 UTC (permalink / raw)
  To: cluster-devel.redhat.com

Hi,

this patch series contains fixes for lowcomms and for the -EBUSY
handling. In lowcomms we have a race which could end in a use-after-free.
The current validation of DLM API arguments has an ordering issue between
the -EBUSY and -EINVAL cases: the -EINVAL conditions are checked first,
but they read lkb variables which are only stable when the lkb is not in
an -EBUSY condition. To fix this we move the -EBUSY check first, which
(given the current DLM API behaviour regarding -EBUSY) should always be
the first check that is done.

Then there is a bunch of cleanup and fix patches regarding the dlm
callback behaviour, and trace events are added for dlm user space locks;
before, only kernel locks were captured. Other cleanups, such as
constifying the resource name parameter, prepare dlm for the new
locktorture module.

- Alex

changes since the previous posting on the mailing list:

- fixed some error handling in locktorture if the cluster is not
  configured at module init
- added a patch to handle DLM_RCOM in an else if branch
- fixed user space tracing and error assignment
- added a WARN_ON(1) for the -EINVAL case
- cleaned up the commit message for the invalid dereference of sb_lvbptr

Alexander Aring (16):
  fs: dlm: fix race in lowcomms
  fs: dlm: fix race between test_bit() and queue_work()
  fs: dlm: handle -EBUSY as first for lock validation
  fs: dlm: handle -EBUSY as first for unlock validation
  fs: dlm: use __func__ for function name
  fs: dlm: handle -EINVAL as log_error()
  fs: dlm: fix invalid derefence of sb_lvbptr
  fs: dlm: allow lockspaces have zero lvblen
  fs: dlm: handle rcom in else if branch
  fs: dlm: remove dlm_del_ast prototype
  fs: dlm: change ls_clear_proc_locks to spinlock
  fs: dlm: trace user space callbacks
  fs: dlm: move DLM_LSFL_FS out of uapi
  fs: dlm: LSFL_CB_DELAY only for kernel lockspaces
  fs: dlm: const void resource name parameter
  fs: dlm: initial commit of locktorture

 drivers/md/md-cluster.c    |   4 +-
 fs/dlm/Kconfig             |  11 +
 fs/dlm/Makefile            |   1 +
 fs/dlm/ast.c               |  15 +-
 fs/dlm/ast.h               |   1 -
 fs/dlm/dlm_internal.h      |   2 +-
 fs/dlm/dlm_locktorture.c   | 517 +++++++++++++++++++++++++++++++++++++
 fs/dlm/lock.c              | 160 ++++++++----
 fs/dlm/lock.h              |   2 +-
 fs/dlm/lockspace.c         |  32 ++-
 fs/dlm/lockspace.h         |  13 +
 fs/dlm/lowcomms.c          |   4 +
 fs/dlm/user.c              |  17 +-
 fs/gfs2/lock_dlm.c         |   2 +-
 fs/ocfs2/stack_user.c      |   2 +-
 include/linux/dlm.h        |   5 +-
 include/trace/events/dlm.h |  26 +-
 include/uapi/linux/dlm.h   |   1 -
 18 files changed, 718 insertions(+), 97 deletions(-)
 create mode 100644 fs/dlm/dlm_locktorture.c

-- 
2.31.1


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [Cluster-devel] [RESEND dlm/next 01/16] fs: dlm: fix race in lowcomms
  2022-08-15 19:43 [Cluster-devel] [RESEND dlm/next 00/16] fs: dlm: fixes, cleanups and locktorture Alexander Aring
@ 2022-08-15 19:43 ` Alexander Aring
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 02/16] fs: dlm: fix race between test_bit() and queue_work() Alexander Aring
                   ` (14 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Alexander Aring @ 2022-08-15 19:43 UTC (permalink / raw)
  To: cluster-devel.redhat.com

This patch fixes a race between queue_work() in
_dlm_lowcomms_commit_msg() and srcu_read_unlock(). The work queued by
queue_work() can put the final reference of a dlm_msg, so the following
srcu_read_unlock() reads a garbage msg->idx from freed memory, which is
signaled by the following warning:

[  676.237050] ------------[ cut here ]------------
[  676.237052] WARNING: CPU: 0 PID: 1060 at include/linux/srcu.h:189 dlm_lowcomms_commit_msg+0x41/0x50
[  676.238945] Modules linked in: dlm_locktorture torture rpcsec_gss_krb5 intel_rapl_msr intel_rapl_common iTCO_wdt iTCO_vendor_support qxl kvm_intel drm_ttm_helper vmw_vsock_virtio_transport kvm vmw_vsock_virtio_transport_common ttm irqbypass crc32_pclmul joydev crc32c_intel serio_raw drm_kms_helper vsock virtio_scsi virtio_console virtio_balloon snd_pcm drm syscopyarea sysfillrect sysimgblt snd_timer fb_sys_fops i2c_i801 lpc_ich snd i2c_smbus soundcore pcspkr
[  676.244227] CPU: 0 PID: 1060 Comm: lock_torture_wr Not tainted 5.19.0-rc3+ #1546
[  676.245216] Hardware name: Red Hat KVM/RHEL-AV, BIOS 1.16.0-2.module+el8.7.0+15506+033991b0 04/01/2014
[  676.246460] RIP: 0010:dlm_lowcomms_commit_msg+0x41/0x50
[  676.247132] Code: fe ff ff ff 75 24 48 c7 c6 bd 0f 49 bb 48 c7 c7 38 7c 01 bd e8 00 e7 ca ff 89 de 48 c7 c7 60 78 01 bd e8 42 3d cd ff 5b 5d c3 <0f> 0b eb d8 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 55 48
[  676.249253] RSP: 0018:ffffa401c18ffc68 EFLAGS: 00010282
[  676.249855] RAX: 0000000000000001 RBX: 00000000ffff8b76 RCX: 0000000000000006
[  676.250713] RDX: 0000000000000000 RSI: ffffffffbccf3a10 RDI: ffffffffbcc7b62e
[  676.251610] RBP: ffffa401c18ffc70 R08: 0000000000000001 R09: 0000000000000001
[  676.252481] R10: 0000000000000001 R11: 0000000000000001 R12: 0000000000000005
[  676.253421] R13: ffff8b76786ec370 R14: ffff8b76786ec370 R15: ffff8b76786ec480
[  676.254257] FS:  0000000000000000(0000) GS:ffff8b7777800000(0000) knlGS:0000000000000000
[  676.255239] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  676.255897] CR2: 00005590205d88b8 CR3: 000000017656c003 CR4: 0000000000770ee0
[  676.256734] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  676.257567] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[  676.258397] PKRU: 55555554
[  676.258729] Call Trace:
[  676.259063]  <TASK>
[  676.259354]  dlm_midcomms_commit_mhandle+0xcc/0x110
[  676.259964]  queue_bast+0x8b/0xb0
[  676.260423]  grant_pending_locks+0x166/0x1b0
[  676.261007]  _unlock_lock+0x75/0x90
[  676.261469]  unlock_lock.isra.57+0x62/0xa0
[  676.262009]  dlm_unlock+0x21e/0x330
[  676.262457]  ? lock_torture_stats+0x80/0x80 [dlm_locktorture]
[  676.263183]  torture_unlock+0x5a/0x90 [dlm_locktorture]
[  676.263815]  ? preempt_count_sub+0xba/0x100
[  676.264361]  ? complete+0x1d/0x60
[  676.264777]  lock_torture_writer+0xb8/0x150 [dlm_locktorture]
[  676.265555]  kthread+0x10a/0x130
[  676.266007]  ? kthread_complete_and_exit+0x20/0x20
[  676.266616]  ret_from_fork+0x22/0x30
[  676.267097]  </TASK>
[  676.267381] irq event stamp: 9579855
[  676.267824] hardirqs last  enabled at (9579863): [<ffffffffbb14e6f8>] __up_console_sem+0x58/0x60
[  676.268896] hardirqs last disabled at (9579872): [<ffffffffbb14e6dd>] __up_console_sem+0x3d/0x60
[  676.270008] softirqs last  enabled at (9579798): [<ffffffffbc200349>] __do_softirq+0x349/0x4c7
[  676.271438] softirqs last disabled at (9579897): [<ffffffffbb0d54c0>] irq_exit_rcu+0xb0/0xf0
[  676.272796] ---[ end trace 0000000000000000 ]---

I reproduced this warning with the dlm_locktorture test which is
currently not upstream. This patch fixes the issue by holding an
additional reference between dlm_lowcomms_new_msg() and
dlm_lowcomms_commit_msg(). In case of the race the kref_put() in
dlm_lowcomms_commit_msg() will be the final put.
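
A rough sketch of the assumed interleaving (the send worker on another
CPU drops what is then the last msg reference before the caller reads
msg->idx):

	dlm_lowcomms_commit_msg()          send worker
	-------------------------          -----------
	_dlm_lowcomms_commit_msg(msg)
	  queue_work(...)
	                                   transmit, then drop the msg
	                                   references -> msg is freed
	srcu_read_unlock(..., msg->idx)
	  -> reads freed memory

The extra reference taken in dlm_lowcomms_new_msg() keeps msg valid
until the kref_put() at the end of dlm_lowcomms_commit_msg().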

Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
 fs/dlm/lowcomms.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/fs/dlm/lowcomms.c b/fs/dlm/lowcomms.c
index a4e84e8d94c8..59f64c596233 100644
--- a/fs/dlm/lowcomms.c
+++ b/fs/dlm/lowcomms.c
@@ -1336,6 +1336,8 @@ struct dlm_msg *dlm_lowcomms_new_msg(int nodeid, int len, gfp_t allocation,
 		return NULL;
 	}
 
+	/* for dlm_lowcomms_commit_msg() */
+	kref_get(&msg->ref);
 	/* we assume if successful commit must called */
 	msg->idx = idx;
 	return msg;
@@ -1375,6 +1377,8 @@ void dlm_lowcomms_commit_msg(struct dlm_msg *msg)
 {
 	_dlm_lowcomms_commit_msg(msg);
 	srcu_read_unlock(&connections_srcu, msg->idx);
+	/* because dlm_lowcomms_new_msg() */
+	kref_put(&msg->ref, dlm_msg_release);
 }
 #endif
 
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [Cluster-devel] [RESEND dlm/next 02/16] fs: dlm: fix race between test_bit() and queue_work()
  2022-08-15 19:43 [Cluster-devel] [RESEND dlm/next 00/16] fs: dlm: fixes, cleanups and locktorture Alexander Aring
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 01/16] fs: dlm: fix race in lowcomms Alexander Aring
@ 2022-08-15 19:43 ` Alexander Aring
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 03/16] fs: dlm: handle -EBUSY as first for lock validation Alexander Aring
                   ` (13 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Alexander Aring @ 2022-08-15 19:43 UTC (permalink / raw)
  To: cluster-devel.redhat.com

This patch fixes a race by surrounding both the set_bit() and the
test_bit(), including its conditional code blocks for LSFL_CB_DELAY,
with ls_cb_mutex.

The idea of dlm_callback_suspend() is to stop all callbacks and flush
all currently queued ones. The set_bit() alone is not enough because a
queue_work() can still happen after the workqueue was flushed. To avoid
a queue_work() after the set_bit() we surround both with the
ls_cb_mutex lock.
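
A rough sketch of the assumed interleaving that the mutex closes:

	dlm_add_cb()                     dlm_callback_suspend()
	------------                     ----------------------
	test_bit(LSFL_CB_DELAY) -> 0
	                                 set_bit(LSFL_CB_DELAY)
	                                 flush_workqueue(ls->ls_callback_wq)
	queue_work(ls->ls_callback_wq)
	  -> work runs after the flush

Holding ls_cb_mutex on both sides guarantees that once the bit is set
no new queue_work() can sneak in, so the subsequent flush_workqueue()
really catches all queued work.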

Cc: stable at vger.kernel.org
Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
 fs/dlm/ast.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/fs/dlm/ast.c b/fs/dlm/ast.c
index 19ef136f9e4f..a44cc42b6317 100644
--- a/fs/dlm/ast.c
+++ b/fs/dlm/ast.c
@@ -200,13 +200,13 @@ void dlm_add_cb(struct dlm_lkb *lkb, uint32_t flags, int mode, int status,
 	if (!prev_seq) {
 		kref_get(&lkb->lkb_ref);
 
+		mutex_lock(&ls->ls_cb_mutex);
 		if (test_bit(LSFL_CB_DELAY, &ls->ls_flags)) {
-			mutex_lock(&ls->ls_cb_mutex);
 			list_add(&lkb->lkb_cb_list, &ls->ls_cb_delay);
-			mutex_unlock(&ls->ls_cb_mutex);
 		} else {
 			queue_work(ls->ls_callback_wq, &lkb->lkb_cb_work);
 		}
+		mutex_unlock(&ls->ls_cb_mutex);
 	}
  out:
 	mutex_unlock(&lkb->lkb_cb_mutex);
@@ -288,7 +288,9 @@ void dlm_callback_stop(struct dlm_ls *ls)
 
 void dlm_callback_suspend(struct dlm_ls *ls)
 {
+	mutex_lock(&ls->ls_cb_mutex);
 	set_bit(LSFL_CB_DELAY, &ls->ls_flags);
+	mutex_unlock(&ls->ls_cb_mutex);
 
 	if (ls->ls_callback_wq)
 		flush_workqueue(ls->ls_callback_wq);
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [Cluster-devel] [RESEND dlm/next 03/16] fs: dlm: handle -EBUSY as first for lock validation
  2022-08-15 19:43 [Cluster-devel] [RESEND dlm/next 00/16] fs: dlm: fixes, cleanups and locktorture Alexander Aring
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 01/16] fs: dlm: fix race in lowcomms Alexander Aring
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 02/16] fs: dlm: fix race between test_bit() and queue_work() Alexander Aring
@ 2022-08-15 19:43 ` Alexander Aring
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 04/16] fs: dlm: handle -EBUSY as first for unlock validation Alexander Aring
                   ` (12 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Alexander Aring @ 2022-08-15 19:43 UTC (permalink / raw)
  To: cluster-devel.redhat.com

In lock args validation we should first check for -EBUSY and then for
-EINVAL. The -EINVAL conditions check lkb state variables, e.g.
lkb->lkb_grmode, which are not stable while the lkb is in an -EBUSY
condition. This patch first checks that no -EBUSY condition is met and
only then checks the -EINVAL conditions.

Cc: stable at vger.kernel.org
Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
 fs/dlm/lock.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
index dac7eb75dba9..c23413da40f5 100644
--- a/fs/dlm/lock.c
+++ b/fs/dlm/lock.c
@@ -2864,17 +2864,9 @@ static int set_unlock_args(uint32_t flags, void *astarg, struct dlm_args *args)
 static int validate_lock_args(struct dlm_ls *ls, struct dlm_lkb *lkb,
 			      struct dlm_args *args)
 {
-	int rv = -EINVAL;
+	int rv = -EBUSY;
 
 	if (args->flags & DLM_LKF_CONVERT) {
-		if (lkb->lkb_flags & DLM_IFL_MSTCPY)
-			goto out;
-
-		if (args->flags & DLM_LKF_QUECVT &&
-		    !__quecvt_compat_matrix[lkb->lkb_grmode+1][args->mode+1])
-			goto out;
-
-		rv = -EBUSY;
 		if (lkb->lkb_status != DLM_LKSTS_GRANTED)
 			goto out;
 
@@ -2884,6 +2876,14 @@ static int validate_lock_args(struct dlm_ls *ls, struct dlm_lkb *lkb,
 
 		if (is_overlap(lkb))
 			goto out;
+
+		rv = -EINVAL;
+		if (lkb->lkb_flags & DLM_IFL_MSTCPY)
+			goto out;
+
+		if (args->flags & DLM_LKF_QUECVT &&
+		    !__quecvt_compat_matrix[lkb->lkb_grmode+1][args->mode+1])
+			goto out;
 	}
 
 	lkb->lkb_exflags = args->flags;
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [Cluster-devel] [RESEND dlm/next 04/16] fs: dlm: handle -EBUSY as first for unlock validation
  2022-08-15 19:43 [Cluster-devel] [RESEND dlm/next 00/16] fs: dlm: fixes, cleanups and locktorture Alexander Aring
                   ` (2 preceding siblings ...)
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 03/16] fs: dlm: handle -EBUSY as first for lock validation Alexander Aring
@ 2022-08-15 19:43 ` Alexander Aring
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 05/16] fs: dlm: use __func__ for function name Alexander Aring
                   ` (11 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Alexander Aring @ 2022-08-15 19:43 UTC (permalink / raw)
  To: cluster-devel.redhat.com

This patch checks for -EBUSY first in the dlm_unlock() validation for
the non-CANCEL/FORCEUNLOCK case, similar to what is done for dlm_lock().
Although the current ordering looks okay, we should still move the
-EBUSY check before the -EINVAL checks on the lkb state. If new -EINVAL
checks are added it should be kept in mind that some lkb fields are only
in a stable state when the lkb is not in an -EBUSY state. This patch
tries to avoid such a future mistake.

Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
 fs/dlm/lock.c | 44 ++++++++++++++++++++++----------------------
 1 file changed, 22 insertions(+), 22 deletions(-)

diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
index c23413da40f5..16d339d383cd 100644
--- a/fs/dlm/lock.c
+++ b/fs/dlm/lock.c
@@ -2918,23 +2918,12 @@ static int validate_lock_args(struct dlm_ls *ls, struct dlm_lkb *lkb,
 static int validate_unlock_args(struct dlm_lkb *lkb, struct dlm_args *args)
 {
 	struct dlm_ls *ls = lkb->lkb_resource->res_ls;
-	int rv = -EINVAL;
-
-	if (lkb->lkb_flags & DLM_IFL_MSTCPY) {
-		log_error(ls, "unlock on MSTCPY %x", lkb->lkb_id);
-		dlm_print_lkb(lkb);
-		goto out;
-	}
-
-	/* an lkb may still exist even though the lock is EOL'ed due to a
-	   cancel, unlock or failed noqueue request; an app can't use these
-	   locks; return same error as if the lkid had not been found at all */
+	int rv = -EBUSY;
 
-	if (lkb->lkb_flags & DLM_IFL_ENDOFLIFE) {
-		log_debug(ls, "unlock on ENDOFLIFE %x", lkb->lkb_id);
-		rv = -ENOENT;
+	/* normal unlock not allowed if there's any op in progress */
+	if (!(args->flags & (DLM_LKF_CANCEL | DLM_LKF_FORCEUNLOCK)) &&
+	    (lkb->lkb_wait_type || lkb->lkb_wait_count))
 		goto out;
-	}
 
 	/* an lkb may be waiting for an rsb lookup to complete where the
 	   lookup was initiated by another lock */
@@ -2949,7 +2938,24 @@ static int validate_unlock_args(struct dlm_lkb *lkb, struct dlm_args *args)
 			unhold_lkb(lkb); /* undoes create_lkb() */
 		}
 		/* caller changes -EBUSY to 0 for CANCEL and FORCEUNLOCK */
-		rv = -EBUSY;
+		goto out;
+	}
+
+	rv = -EINVAL;
+	if (lkb->lkb_flags & DLM_IFL_MSTCPY) {
+		log_error(ls, "unlock on MSTCPY %x", lkb->lkb_id);
+		dlm_print_lkb(lkb);
+		goto out;
+	}
+
+	/* an lkb may still exist even though the lock is EOL'ed due to a
+	 * cancel, unlock or failed noqueue request; an app can't use these
+	 * locks; return same error as if the lkid had not been found at all
+	 */
+
+	if (lkb->lkb_flags & DLM_IFL_ENDOFLIFE) {
+		log_debug(ls, "unlock on ENDOFLIFE %x", lkb->lkb_id);
+		rv = -ENOENT;
 		goto out;
 	}
 
@@ -3022,14 +3028,8 @@ static int validate_unlock_args(struct dlm_lkb *lkb, struct dlm_args *args)
 			goto out;
 		}
 		/* add_to_waiters() will set OVERLAP_UNLOCK */
-		goto out_ok;
 	}
 
-	/* normal unlock not allowed if there's any op in progress */
-	rv = -EBUSY;
-	if (lkb->lkb_wait_type || lkb->lkb_wait_count)
-		goto out;
-
  out_ok:
 	/* an overlapping op shouldn't blow away exflags from other op */
 	lkb->lkb_exflags |= args->flags;
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [Cluster-devel] [RESEND dlm/next 05/16] fs: dlm: use __func__ for function name
  2022-08-15 19:43 [Cluster-devel] [RESEND dlm/next 00/16] fs: dlm: fixes, cleanups and locktorture Alexander Aring
                   ` (3 preceding siblings ...)
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 04/16] fs: dlm: handle -EBUSY as first for unlock validation Alexander Aring
@ 2022-08-15 19:43 ` Alexander Aring
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 06/16] fs: dlm: handle -EINVAL as log_error() Alexander Aring
                   ` (10 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Alexander Aring @ 2022-08-15 19:43 UTC (permalink / raw)
  To: cluster-devel.redhat.com

There are several places where the function name is hard-coded inside
the format string. When such code is changed, checkpatch will emit a
warning about this. By using __func__ instead of a hard-coded function
name, this patch prepares for introducing the same log message on a
different loglevel without triggering a checkpatch warning.

Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
 fs/dlm/lock.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
index 16d339d383cd..026c203ff529 100644
--- a/fs/dlm/lock.c
+++ b/fs/dlm/lock.c
@@ -2901,7 +2901,7 @@ static int validate_lock_args(struct dlm_ls *ls, struct dlm_lkb *lkb,
 	rv = 0;
  out:
 	if (rv)
-		log_debug(ls, "validate_lock_args %d %x %x %x %d %d %s",
+		log_debug(ls, "%s %d %x %x %x %d %d %s", __func__,
 			  rv, lkb->lkb_id, lkb->lkb_flags, args->flags,
 			  lkb->lkb_status, lkb->lkb_wait_type,
 			  lkb->lkb_resource->res_name);
@@ -3038,7 +3038,7 @@ static int validate_unlock_args(struct dlm_lkb *lkb, struct dlm_args *args)
 	rv = 0;
  out:
 	if (rv)
-		log_debug(ls, "validate_unlock_args %d %x %x %x %x %d %s", rv,
+		log_debug(ls, "%s %d %x %x %x %x %d %s", __func__, rv,
 			  lkb->lkb_id, lkb->lkb_flags, lkb->lkb_exflags,
 			  args->flags, lkb->lkb_wait_type,
 			  lkb->lkb_resource->res_name);
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [Cluster-devel] [RESEND dlm/next 06/16] fs: dlm: handle -EINVAL as log_error()
  2022-08-15 19:43 [Cluster-devel] [RESEND dlm/next 00/16] fs: dlm: fixes, cleanups and locktorture Alexander Aring
                   ` (4 preceding siblings ...)
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 05/16] fs: dlm: use __func__ for function name Alexander Aring
@ 2022-08-15 19:43 ` Alexander Aring
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 07/16] fs: dlm: fix invalid derefence of sb_lvbptr Alexander Aring
                   ` (9 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Alexander Aring @ 2022-08-15 19:43 UTC (permalink / raw)
  To: cluster-devel.redhat.com

If the user triggers -EINVAL it is probably because the user is using
DLM in a wrong way. To notify the user about that wrong behaviour we
should always print -EINVAL errors on the error loglevel. Other errors
like -EBUSY will still be printed on the debug loglevel because the
current API handles them, from the user's point of view, as "retry
again".

We also add a WARN_ON(1) so that DLM users report when they hit such a
case and we can investigate it.

Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
 fs/dlm/lock.c | 32 ++++++++++++++++++++++++++++++--
 1 file changed, 30 insertions(+), 2 deletions(-)

diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
index 026c203ff529..354f79254d62 100644
--- a/fs/dlm/lock.c
+++ b/fs/dlm/lock.c
@@ -2900,11 +2900,25 @@ static int validate_lock_args(struct dlm_ls *ls, struct dlm_lkb *lkb,
 #endif
 	rv = 0;
  out:
-	if (rv)
+	switch (rv) {
+	case 0:
+		break;
+	case -EINVAL:
+		/* annoy the user because dlm usage is wrong */
+		WARN_ON(1);
+		log_error(ls, "%s %d %x %x %x %d %d %s", __func__,
+			  rv, lkb->lkb_id, lkb->lkb_flags, args->flags,
+			  lkb->lkb_status, lkb->lkb_wait_type,
+			  lkb->lkb_resource->res_name);
+		break;
+	default:
 		log_debug(ls, "%s %d %x %x %x %d %d %s", __func__,
 			  rv, lkb->lkb_id, lkb->lkb_flags, args->flags,
 			  lkb->lkb_status, lkb->lkb_wait_type,
 			  lkb->lkb_resource->res_name);
+		break;
+	}
+
 	return rv;
 }
 
@@ -3037,11 +3051,25 @@ static int validate_unlock_args(struct dlm_lkb *lkb, struct dlm_args *args)
 	lkb->lkb_astparam = args->astparam;
 	rv = 0;
  out:
-	if (rv)
+	switch (rv) {
+	case 0:
+		break;
+	case -EINVAL:
+		/* annoy the user because dlm usage is wrong */
+		WARN_ON(1);
+		log_error(ls, "%s %d %x %x %x %x %d %s", __func__, rv,
+			  lkb->lkb_id, lkb->lkb_flags, lkb->lkb_exflags,
+			  args->flags, lkb->lkb_wait_type,
+			  lkb->lkb_resource->res_name);
+		break;
+	default:
 		log_debug(ls, "%s %d %x %x %x %x %d %s", __func__, rv,
 			  lkb->lkb_id, lkb->lkb_flags, lkb->lkb_exflags,
 			  args->flags, lkb->lkb_wait_type,
 			  lkb->lkb_resource->res_name);
+		break;
+	}
+
 	return rv;
 }
 
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [Cluster-devel] [RESEND dlm/next 07/16] fs: dlm: fix invalid derefence of sb_lvbptr
  2022-08-15 19:43 [Cluster-devel] [RESEND dlm/next 00/16] fs: dlm: fixes, cleanups and locktorture Alexander Aring
                   ` (5 preceding siblings ...)
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 06/16] fs: dlm: handle -EINVAL as log_error() Alexander Aring
@ 2022-08-15 19:43 ` Alexander Aring
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 08/16] fs: dlm: allow lockspaces have zero lvblen Alexander Aring
                   ` (8 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Alexander Aring @ 2022-08-15 19:43 UTC (permalink / raw)
  To: cluster-devel.redhat.com

I experienced issues when putting an lksb on the stack with the
sb_lvbptr field left as a dangling pointer while not using
DLM_LKF_VALBLK. It will crash with the following kernel message; the
dangling pointer is 0xdeadbeef here as an example:

[  102.749317] BUG: unable to handle page fault for address: 00000000deadbeef
[  102.749320] #PF: supervisor read access in kernel mode
[  102.749323] #PF: error_code(0x0000) - not-present page
[  102.749325] PGD 0 P4D 0
[  102.749332] Oops: 0000 [#1] PREEMPT SMP PTI
[  102.749336] CPU: 0 PID: 1567 Comm: lock_torture_wr Tainted: G        W         5.19.0-rc3+ #1565
[  102.749343] Hardware name: Red Hat KVM/RHEL-AV, BIOS 1.16.0-2.module+el8.7.0+15506+033991b0 04/01/2014
[  102.749344] RIP: 0010:memcpy_erms+0x6/0x10
[  102.749353] Code: cc cc cc cc eb 1e 0f 1f 00 48 89 f8 48 89 d1 48 c1 e9 03 83 e2 07 f3 48 a5 89 d1 f3 a4 c3 66 0f 1f 44 00 00 48 89 f8 48 89 d1 <f3> a4 c3 0f 1f 80 00 00 00 00 48 89 f8 48 83 fa 20 72 7e 40 38 fe
[  102.749355] RSP: 0018:ffff97a58145fd08 EFLAGS: 00010202
[  102.749358] RAX: ffff901778b77070 RBX: 0000000000000000 RCX: 0000000000000040
[  102.749360] RDX: 0000000000000040 RSI: 00000000deadbeef RDI: ffff901778b77070
[  102.749362] RBP: ffff97a58145fd10 R08: ffff901760b67a70 R09: 0000000000000001
[  102.749364] R10: ffff9017008e2cb8 R11: 0000000000000001 R12: ffff901760b67a70
[  102.749366] R13: ffff901760b78f00 R14: 0000000000000003 R15: 0000000000000001
[  102.749368] FS:  0000000000000000(0000) GS:ffff901876e00000(0000) knlGS:0000000000000000
[  102.749372] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  102.749374] CR2: 00000000deadbeef CR3: 000000017c49a004 CR4: 0000000000770ef0
[  102.749376] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  102.749378] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[  102.749379] PKRU: 55555554
[  102.749381] Call Trace:
[  102.749382]  <TASK>
[  102.749383]  ? send_args+0xb2/0xd0
[  102.749389]  send_common+0xb7/0xd0
[  102.749395]  _unlock_lock+0x2c/0x90
[  102.749400]  unlock_lock.isra.56+0x62/0xa0
[  102.749405]  dlm_unlock+0x21e/0x330
[  102.749411]  ? lock_torture_stats+0x80/0x80 [dlm_locktorture]
[  102.749416]  torture_unlock+0x5a/0x90 [dlm_locktorture]
[  102.749419]  ? preempt_count_sub+0xba/0x100
[  102.749427]  lock_torture_writer+0xbd/0x150 [dlm_locktorture]
[  102.786186]  kthread+0x10a/0x130
[  102.786581]  ? kthread_complete_and_exit+0x20/0x20
[  102.787156]  ret_from_fork+0x22/0x30
[  102.787588]  </TASK>
[  102.787855] Modules linked in: dlm_locktorture torture rpcsec_gss_krb5 intel_rapl_msr intel_rapl_common kvm_intel iTCO_wdt iTCO_vendor_support kvm vmw_vsock_virtio_transport qxl irqbypass vmw_vsock_virtio_transport_common drm_ttm_helper crc32_pclmul joydev crc32c_intel ttm vsock virtio_scsi virtio_balloon snd_pcm drm_kms_helper virtio_console snd_timer snd drm soundcore syscopyarea i2c_i801 sysfillrect sysimgblt i2c_smbus pcspkr fb_sys_fops lpc_ich serio_raw
[  102.792536] CR2: 00000000deadbeef
[  102.792930] ---[ end trace 0000000000000000 ]---

This patch fixes the issue by also checking that DLM_LKF_VALBLK is set
in exflags when copying the lvbptr array, instead of only checking
whether the pointer is non-null, which fixes the issue for me.

I think this patch can fix other dlm users as well, depending on how
they handle the initialization and freeing of sb_lvbptr memory and
whether they leave DLM_LKF_VALBLK unset for some dlm_lock() calls. There
could have been a hidden issue here all the time. Note that with the
check on DLM_LKF_VALBLK the user still always needs to provide a
non-null sb_lvbptr when the flag is set. There might be a more
intelligent handling between the per-lockspace lvblen, DLM_LKF_VALBLK
and a non-null pointer to report to the user that the DLM API is being
used in a wrong way, but that can be added later; this patch only fixes
the current behaviour.
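
A minimal sketch of the caller pattern that can hit this (the
identifiers ls, name, namelen, ast_cb, astarg and bast_cb are assumed to
exist in the caller and are not taken from dlm_locktorture):

	struct dlm_lksb lksb;	/* on the stack, not zero-initialized:
				 * sb_lvbptr holds stack garbage */
	int error;

	error = dlm_lock(ls, DLM_LOCK_EX, &lksb, 0 /* no DLM_LKF_VALBLK */,
			 name, namelen, 0, ast_cb, astarg, bast_cb);

	/* before this patch, send_args() copied ls_lvblen bytes from
	 * lkb_lvbptr (taken from lksb.sb_lvbptr) even though
	 * DLM_LKF_VALBLK was never set */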

Cc: stable at vger.kernel.org
Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
 fs/dlm/lock.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
index 354f79254d62..da95ba3c295e 100644
--- a/fs/dlm/lock.c
+++ b/fs/dlm/lock.c
@@ -3651,7 +3651,7 @@ static void send_args(struct dlm_rsb *r, struct dlm_lkb *lkb,
 	case cpu_to_le32(DLM_MSG_REQUEST_REPLY):
 	case cpu_to_le32(DLM_MSG_CONVERT_REPLY):
 	case cpu_to_le32(DLM_MSG_GRANT):
-		if (!lkb->lkb_lvbptr)
+		if (!lkb->lkb_lvbptr || !(lkb->lkb_exflags & DLM_LKF_VALBLK))
 			break;
 		memcpy(ms->m_extra, lkb->lkb_lvbptr, r->res_ls->ls_lvblen);
 		break;
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [Cluster-devel] [RESEND dlm/next 08/16] fs: dlm: allow lockspaces have zero lvblen
  2022-08-15 19:43 [Cluster-devel] [RESEND dlm/next 00/16] fs: dlm: fixes, cleanups and locktorture Alexander Aring
                   ` (6 preceding siblings ...)
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 07/16] fs: dlm: fix invalid derefence of sb_lvbptr Alexander Aring
@ 2022-08-15 19:43 ` Alexander Aring
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 09/16] fs: dlm: handle rcom in else if branch Alexander Aring
                   ` (7 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Alexander Aring @ 2022-08-15 19:43 UTC (permalink / raw)
  To: cluster-devel.redhat.com

A dlm user might never use the DLM_LKF_VALBLK flag in its DLM API
calls, so a zero lvblen should be allowed as a lockspace parameter.
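
Such a user can then create its lockspace with an lvblen of zero, e.g.
(a minimal sketch, the lockspace and cluster names are made up):

	dlm_lockspace_t *ls;
	int error;

	error = dlm_new_lockspace("example_ls", "example_cluster",
				  0 /* flags */, 0 /* lvblen */,
				  NULL, NULL, NULL, &ls);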

Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
 fs/dlm/lockspace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/fs/dlm/lockspace.c b/fs/dlm/lockspace.c
index 3972f4d86c75..56c79926e7be 100644
--- a/fs/dlm/lockspace.c
+++ b/fs/dlm/lockspace.c
@@ -416,7 +416,7 @@ static int new_lockspace(const char *name, const char *cluster,
 	if (namelen > DLM_LOCKSPACE_LEN || namelen == 0)
 		return -EINVAL;
 
-	if (!lvblen || (lvblen % 8))
+	if (lvblen % 8)
 		return -EINVAL;
 
 	if (!try_module_get(THIS_MODULE))
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [Cluster-devel] [RESEND dlm/next 09/16] fs: dlm: handle rcom in else if branch
  2022-08-15 19:43 [Cluster-devel] [RESEND dlm/next 00/16] fs: dlm: fixes, cleanups and locktorture Alexander Aring
                   ` (7 preceding siblings ...)
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 08/16] fs: dlm: allow lockspaces have zero lvblen Alexander Aring
@ 2022-08-15 19:43 ` Alexander Aring
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 10/16] fs: dlm: remove dlm_del_ast prototype Alexander Aring
                   ` (6 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Alexander Aring @ 2022-08-15 19:43 UTC (permalink / raw)
  To: cluster-devel.redhat.com

Currently dlm_receive_buffer() handles everything other than the
DLM_MSG type as a DLM_RCOM message. Although a message other than
DLM_MSG should indeed be a DLM_RCOM, we should check explicitly for
DLM_RCOM and emit a log_error() if we see something unexpected.

Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
 fs/dlm/lock.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
index da95ba3c295e..c41aa8ab3230 100644
--- a/fs/dlm/lock.c
+++ b/fs/dlm/lock.c
@@ -5108,8 +5108,11 @@ void dlm_receive_buffer(union dlm_packet *p, int nodeid)
 	down_read(&ls->ls_recv_active);
 	if (hd->h_cmd == DLM_MSG)
 		dlm_receive_message(ls, &p->message, nodeid);
-	else
+	else if (hd->h_cmd == DLM_RCOM)
 		dlm_receive_rcom(ls, &p->rcom, nodeid);
+	else
+		log_error(ls, "invalid h_cmd %d from %d lockspace %x",
+			  hd->h_cmd, nodeid, le32_to_cpu(hd->u.h_lockspace));
 	up_read(&ls->ls_recv_active);
 
 	dlm_put_lockspace(ls);
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [Cluster-devel] [RESEND dlm/next 10/16] fs: dlm: remove dlm_del_ast prototype
  2022-08-15 19:43 [Cluster-devel] [RESEND dlm/next 00/16] fs: dlm: fixes, cleanups and locktorture Alexander Aring
                   ` (8 preceding siblings ...)
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 09/16] fs: dlm: handle rcom in else if branch Alexander Aring
@ 2022-08-15 19:43 ` Alexander Aring
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 11/16] fs: dlm: change ls_clear_proc_locks to spinlock Alexander Aring
                   ` (5 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Alexander Aring @ 2022-08-15 19:43 UTC (permalink / raw)
  To: cluster-devel.redhat.com

This patch removes the dlm_del_ast() prototype, which is not used
anywhere in the dlm subsystem because there is no implementation for it.

Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
 fs/dlm/ast.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/fs/dlm/ast.h b/fs/dlm/ast.h
index 181ad7d20c4d..e5e05fcc5813 100644
--- a/fs/dlm/ast.h
+++ b/fs/dlm/ast.h
@@ -11,7 +11,6 @@
 #ifndef __ASTD_DOT_H__
 #define __ASTD_DOT_H__
 
-void dlm_del_ast(struct dlm_lkb *lkb);
 int dlm_add_lkb_callback(struct dlm_lkb *lkb, uint32_t flags, int mode,
                          int status, uint32_t sbflags, uint64_t seq);
 int dlm_rem_lkb_callback(struct dlm_ls *ls, struct dlm_lkb *lkb,
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [Cluster-devel] [RESEND dlm/next 11/16] fs: dlm: change ls_clear_proc_locks to spinlock
  2022-08-15 19:43 [Cluster-devel] [RESEND dlm/next 00/16] fs: dlm: fixes, cleanups and locktorture Alexander Aring
                   ` (9 preceding siblings ...)
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 10/16] fs: dlm: remove dlm_del_ast prototype Alexander Aring
@ 2022-08-15 19:43 ` Alexander Aring
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 12/16] fs: dlm: trace user space callbacks Alexander Aring
                   ` (4 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Alexander Aring @ 2022-08-15 19:43 UTC (permalink / raw)
  To: cluster-devel.redhat.com

This patch changes ls_clear_proc_locks to a spinlock because there is
no need for a mutex: nothing sleeps while ls_clear_proc_locks is held.
This also allows us to call this functionality from non-sleepable
contexts.

Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
 fs/dlm/dlm_internal.h | 2 +-
 fs/dlm/lock.c         | 8 ++++----
 fs/dlm/lockspace.c    | 2 +-
 fs/dlm/user.c         | 4 ++--
 4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/fs/dlm/dlm_internal.h b/fs/dlm/dlm_internal.h
index 8aca8085d24e..e34c3d2639a5 100644
--- a/fs/dlm/dlm_internal.h
+++ b/fs/dlm/dlm_internal.h
@@ -661,7 +661,7 @@ struct dlm_ls {
 	spinlock_t		ls_recover_idr_lock;
 	wait_queue_head_t	ls_wait_general;
 	wait_queue_head_t	ls_recover_lock_wait;
-	struct mutex		ls_clear_proc_locks;
+	spinlock_t		ls_clear_proc_locks;
 
 	struct list_head	ls_root_list;	/* root resources */
 	struct rw_semaphore	ls_root_sem;	/* protect root_list */
diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
index c41aa8ab3230..65a7a0631ec8 100644
--- a/fs/dlm/lock.c
+++ b/fs/dlm/lock.c
@@ -6215,7 +6215,7 @@ static struct dlm_lkb *del_proc_lock(struct dlm_ls *ls,
 {
 	struct dlm_lkb *lkb = NULL;
 
-	mutex_lock(&ls->ls_clear_proc_locks);
+	spin_lock(&ls->ls_clear_proc_locks);
 	if (list_empty(&proc->locks))
 		goto out;
 
@@ -6227,7 +6227,7 @@ static struct dlm_lkb *del_proc_lock(struct dlm_ls *ls,
 	else
 		lkb->lkb_flags |= DLM_IFL_DEAD;
  out:
-	mutex_unlock(&ls->ls_clear_proc_locks);
+	spin_unlock(&ls->ls_clear_proc_locks);
 	return lkb;
 }
 
@@ -6264,7 +6264,7 @@ void dlm_clear_proc_locks(struct dlm_ls *ls, struct dlm_user_proc *proc)
 		dlm_put_lkb(lkb);
 	}
 
-	mutex_lock(&ls->ls_clear_proc_locks);
+	spin_lock(&ls->ls_clear_proc_locks);
 
 	/* in-progress unlocks */
 	list_for_each_entry_safe(lkb, safe, &proc->unlocking, lkb_ownqueue) {
@@ -6280,7 +6280,7 @@ void dlm_clear_proc_locks(struct dlm_ls *ls, struct dlm_user_proc *proc)
 		dlm_put_lkb(lkb);
 	}
 
-	mutex_unlock(&ls->ls_clear_proc_locks);
+	spin_unlock(&ls->ls_clear_proc_locks);
 	dlm_unlock_recovery(ls);
 }
 
diff --git a/fs/dlm/lockspace.c b/fs/dlm/lockspace.c
index 56c79926e7be..41a6504cfab5 100644
--- a/fs/dlm/lockspace.c
+++ b/fs/dlm/lockspace.c
@@ -584,7 +584,7 @@ static int new_lockspace(const char *name, const char *cluster,
 	atomic_set(&ls->ls_requestqueue_cnt, 0);
 	init_waitqueue_head(&ls->ls_requestqueue_wait);
 	mutex_init(&ls->ls_requestqueue_mutex);
-	mutex_init(&ls->ls_clear_proc_locks);
+	spin_lock_init(&ls->ls_clear_proc_locks);
 
 	/* Due backwards compatibility with 3.1 we need to use maximum
 	 * possible dlm message size to be sure the message will fit and
diff --git a/fs/dlm/user.c b/fs/dlm/user.c
index 99e8f0744513..df6215c73239 100644
--- a/fs/dlm/user.c
+++ b/fs/dlm/user.c
@@ -184,7 +184,7 @@ void dlm_user_add_ast(struct dlm_lkb *lkb, uint32_t flags, int mode,
 		return;
 
 	ls = lkb->lkb_resource->res_ls;
-	mutex_lock(&ls->ls_clear_proc_locks);
+	spin_lock(&ls->ls_clear_proc_locks);
 
 	/* If ORPHAN/DEAD flag is set, it means the process is dead so an ast
 	   can't be delivered.  For ORPHAN's, dlm_clear_proc_locks() freed
@@ -230,7 +230,7 @@ void dlm_user_add_ast(struct dlm_lkb *lkb, uint32_t flags, int mode,
 		spin_unlock(&proc->locks_spin);
 	}
  out:
-	mutex_unlock(&ls->ls_clear_proc_locks);
+	spin_unlock(&ls->ls_clear_proc_locks);
 }
 
 static int device_user_lock(struct dlm_user_proc *proc,
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [Cluster-devel] [RESEND dlm/next 12/16] fs: dlm: trace user space callbacks
  2022-08-15 19:43 [Cluster-devel] [RESEND dlm/next 00/16] fs: dlm: fixes, cleanups and locktorture Alexander Aring
                   ` (10 preceding siblings ...)
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 11/16] fs: dlm: change ls_clear_proc_locks to spinlock Alexander Aring
@ 2022-08-15 19:43 ` Alexander Aring
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 13/16] fs: dlm: move DLM_LSFL_FS out of uapi Alexander Aring
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Alexander Aring @ 2022-08-15 19:43 UTC (permalink / raw)
  To: cluster-devel.redhat.com

This patch adds trace callbacks for user locks. Unfortunately user
locks are handled differently than kernel locks in some cases. User
locks never go through the dlm_lock()/dlm_unlock() kernel API but use
the next lower-level internal API of dlm. Adding these trace points to
the user API callers makes it possible for the dlm trace system to see
lock handling for user locks as well.

Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
 fs/dlm/lock.c              | 24 ++++++++++++++++++++----
 fs/dlm/user.c              |  7 ++++++-
 include/trace/events/dlm.h | 22 ++++++++++++----------
 3 files changed, 38 insertions(+), 15 deletions(-)

diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
index 65a7a0631ec8..cef25f8ac82e 100644
--- a/fs/dlm/lock.c
+++ b/fs/dlm/lock.c
@@ -3466,7 +3466,7 @@ int dlm_lock(dlm_lockspace_t *lockspace,
 	if (error == -EINPROGRESS)
 		error = 0;
  out_put:
-	trace_dlm_lock_end(ls, lkb, name, namelen, mode, flags, error);
+	trace_dlm_lock_end(ls, lkb, name, namelen, mode, flags, error, true);
 
 	if (convert || error)
 		__put_lkb(ls, lkb);
@@ -5842,13 +5842,15 @@ int dlm_user_request(struct dlm_ls *ls, struct dlm_user_args *ua,
 		goto out;
 	}
 
+	trace_dlm_lock_start(ls, lkb, name, namelen, mode, flags);
+
 	if (flags & DLM_LKF_VALBLK) {
 		ua->lksb.sb_lvbptr = kzalloc(DLM_USER_LVB_LEN, GFP_NOFS);
 		if (!ua->lksb.sb_lvbptr) {
 			kfree(ua);
 			__put_lkb(ls, lkb);
 			error = -ENOMEM;
-			goto out;
+			goto out_trace_end;
 		}
 	}
 #ifdef CONFIG_DLM_DEPRECATED_API
@@ -5863,7 +5865,7 @@ int dlm_user_request(struct dlm_ls *ls, struct dlm_user_args *ua,
 		ua->lksb.sb_lvbptr = NULL;
 		kfree(ua);
 		__put_lkb(ls, lkb);
-		goto out;
+		goto out_trace_end;
 	}
 
 	/* After ua is attached to lkb it will be freed by dlm_free_lkb().
@@ -5883,7 +5885,7 @@ int dlm_user_request(struct dlm_ls *ls, struct dlm_user_args *ua,
 		fallthrough;
 	default:
 		__put_lkb(ls, lkb);
-		goto out;
+		goto out_trace_end;
 	}
 
 	/* add this new lkb to the per-process list of locks */
@@ -5891,6 +5893,8 @@ int dlm_user_request(struct dlm_ls *ls, struct dlm_user_args *ua,
 	hold_lkb(lkb);
 	list_add_tail(&lkb->lkb_ownqueue, &ua->proc->locks);
 	spin_unlock(&ua->proc->locks_spin);
+ out_trace_end:
+	trace_dlm_lock_end(ls, lkb, name, namelen, mode, flags, error, false);
  out:
 	dlm_unlock_recovery(ls);
 	return error;
@@ -5916,6 +5920,8 @@ int dlm_user_convert(struct dlm_ls *ls, struct dlm_user_args *ua_tmp,
 	if (error)
 		goto out;
 
+	trace_dlm_lock_start(ls, lkb, NULL, 0, mode, flags);
+
 	/* user can change the params on its lock when it converts it, or
 	   add an lvb that didn't exist before */
 
@@ -5953,6 +5959,7 @@ int dlm_user_convert(struct dlm_ls *ls, struct dlm_user_args *ua_tmp,
 	if (error == -EINPROGRESS || error == -EAGAIN || error == -EDEADLK)
 		error = 0;
  out_put:
+	trace_dlm_lock_end(ls, lkb, NULL, 0, mode, flags, error, false);
 	dlm_put_lkb(lkb);
  out:
 	dlm_unlock_recovery(ls);
@@ -6045,6 +6052,8 @@ int dlm_user_unlock(struct dlm_ls *ls, struct dlm_user_args *ua_tmp,
 	if (error)
 		goto out;
 
+	trace_dlm_unlock_start(ls, lkb, flags);
+
 	ua = lkb->lkb_ua;
 
 	if (lvb_in && ua->lksb.sb_lvbptr)
@@ -6073,6 +6082,7 @@ int dlm_user_unlock(struct dlm_ls *ls, struct dlm_user_args *ua_tmp,
 		list_move(&lkb->lkb_ownqueue, &ua->proc->unlocking);
 	spin_unlock(&ua->proc->locks_spin);
  out_put:
+	trace_dlm_unlock_end(ls, lkb, flags, error);
 	dlm_put_lkb(lkb);
  out:
 	dlm_unlock_recovery(ls);
@@ -6094,6 +6104,8 @@ int dlm_user_cancel(struct dlm_ls *ls, struct dlm_user_args *ua_tmp,
 	if (error)
 		goto out;
 
+	trace_dlm_unlock_start(ls, lkb, flags);
+
 	ua = lkb->lkb_ua;
 	if (ua_tmp->castparam)
 		ua->castparam = ua_tmp->castparam;
@@ -6111,6 +6123,7 @@ int dlm_user_cancel(struct dlm_ls *ls, struct dlm_user_args *ua_tmp,
 	if (error == -EBUSY)
 		error = 0;
  out_put:
+	trace_dlm_unlock_end(ls, lkb, flags, error);
 	dlm_put_lkb(lkb);
  out:
 	dlm_unlock_recovery(ls);
@@ -6132,6 +6145,8 @@ int dlm_user_deadlock(struct dlm_ls *ls, uint32_t flags, uint32_t lkid)
 	if (error)
 		goto out;
 
+	trace_dlm_unlock_start(ls, lkb, flags);
+
 	ua = lkb->lkb_ua;
 
 	error = set_unlock_args(flags, ua, &args);
@@ -6160,6 +6175,7 @@ int dlm_user_deadlock(struct dlm_ls *ls, uint32_t flags, uint32_t lkid)
 	if (error == -EBUSY)
 		error = 0;
  out_put:
+	trace_dlm_unlock_end(ls, lkb, flags, error);
 	dlm_put_lkb(lkb);
  out:
 	dlm_unlock_recovery(ls);
diff --git a/fs/dlm/user.c b/fs/dlm/user.c
index df6215c73239..ca27f276a3f5 100644
--- a/fs/dlm/user.c
+++ b/fs/dlm/user.c
@@ -16,6 +16,8 @@
 #include <linux/slab.h>
 #include <linux/sched/signal.h>
 
+#include <trace/events/dlm.h>
+
 #include "dlm_internal.h"
 #include "lockspace.h"
 #include "lock.h"
@@ -882,7 +884,9 @@ static ssize_t device_read(struct file *file, char __user *buf, size_t count,
 		goto try_another;
 	}
 
-	if (cb.flags & DLM_CB_CAST) {
+	if (cb.flags & DLM_CB_BAST) {
+		trace_dlm_bast(lkb->lkb_resource->res_ls, lkb, cb.mode);
+	} else if (cb.flags & DLM_CB_CAST) {
 		new_mode = cb.mode;
 
 		if (!cb.sb_status && lkb->lkb_lksb->sb_lvbptr &&
@@ -891,6 +895,7 @@ static ssize_t device_read(struct file *file, char __user *buf, size_t count,
 
 		lkb->lkb_lksb->sb_status = cb.sb_status;
 		lkb->lkb_lksb->sb_flags = cb.sb_flags;
+		trace_dlm_ast(lkb->lkb_resource->res_ls, lkb);
 	}
 
 	rv = copy_result_to_user(lkb->lkb_ua,
diff --git a/include/trace/events/dlm.h b/include/trace/events/dlm.h
index bad21222130e..18575206295f 100644
--- a/include/trace/events/dlm.h
+++ b/include/trace/events/dlm.h
@@ -92,9 +92,10 @@ TRACE_EVENT(dlm_lock_start,
 TRACE_EVENT(dlm_lock_end,
 
 	TP_PROTO(struct dlm_ls *ls, struct dlm_lkb *lkb, void *name,
-		 unsigned int namelen, int mode, __u32 flags, int error),
+		 unsigned int namelen, int mode, __u32 flags, int error,
+		 bool kernel_lock),
 
-	TP_ARGS(ls, lkb, name, namelen, mode, flags, error),
+	TP_ARGS(ls, lkb, name, namelen, mode, flags, error, kernel_lock),
 
 	TP_STRUCT__entry(
 		__field(__u32, ls_id)
@@ -113,6 +114,7 @@ TRACE_EVENT(dlm_lock_end,
 		__entry->lkb_id = lkb->lkb_id;
 		__entry->mode = mode;
 		__entry->flags = flags;
+		__entry->error = error;
 
 		r = lkb->lkb_resource;
 		if (r)
@@ -122,14 +124,14 @@ TRACE_EVENT(dlm_lock_end,
 			memcpy(__get_dynamic_array(res_name), name,
 			       __get_dynamic_array_len(res_name));
 
-		/* return value will be zeroed in those cases by dlm_lock()
-		 * we do it here again to not introduce more overhead if
-		 * trace isn't running and error reflects the return value.
-		 */
-		if (error == -EAGAIN || error == -EDEADLK)
-			__entry->error = 0;
-		else
-			__entry->error = error;
+		if (kernel_lock) {
+			/* return value will be zeroed in those cases by dlm_lock()
+			 * we do it here again to not introduce more overhead if
+			 * trace isn't running and error reflects the return value.
+			 */
+			if (error == -EAGAIN || error == -EDEADLK)
+				__entry->error = 0;
+		}
 
 	),
 
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [Cluster-devel] [RESEND dlm/next 13/16] fs: dlm: move DLM_LSFL_FS out of uapi
  2022-08-15 19:43 [Cluster-devel] [RESEND dlm/next 00/16] fs: dlm: fixes, cleanups and locktorture Alexander Aring
                   ` (11 preceding siblings ...)
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 12/16] fs: dlm: trace user space callbacks Alexander Aring
@ 2022-08-15 19:43 ` Alexander Aring
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 14/16] fs: dlm: LSFL_CB_DELAY only for kernel lockspaces Alexander Aring
                   ` (2 subsequent siblings)
  15 siblings, 0 replies; 17+ messages in thread
From: Alexander Aring @ 2022-08-15 19:43 UTC (permalink / raw)
  To: cluster-devel.redhat.com

The DLM_LSFL_FS flag should never have been part of the dlm lockspace
uapi header. If a user space dlm user is setting this flag it is doing
something wrong, because it is not a user space flag. If such a program
no longer compiles because this flag is missing, we do its author a
favour by signaling that there is a bug. Kernel lockspace users do not
even need to set this flag themselves: this patch always sets it for
kernel lockspaces and never sets it for user space lockspaces, so no
mistake can happen anymore. In the future we can hopefully remove this
flag silently and the bit can be reused for something else.

Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
 drivers/md/md-cluster.c  |  4 ++--
 fs/dlm/lockspace.c       | 28 ++++++++++++++++++++++++----
 fs/dlm/lockspace.h       | 13 +++++++++++++
 fs/dlm/user.c            |  6 +++---
 fs/gfs2/lock_dlm.c       |  2 +-
 fs/ocfs2/stack_user.c    |  2 +-
 include/linux/dlm.h      |  3 ---
 include/uapi/linux/dlm.h |  1 -
 8 files changed, 44 insertions(+), 15 deletions(-)

diff --git a/drivers/md/md-cluster.c b/drivers/md/md-cluster.c
index 742b2349fea3..10e0c5381d01 100644
--- a/drivers/md/md-cluster.c
+++ b/drivers/md/md-cluster.c
@@ -876,8 +876,8 @@ static int join(struct mddev *mddev, int nodes)
 	memset(str, 0, 64);
 	sprintf(str, "%pU", mddev->uuid);
 	ret = dlm_new_lockspace(str, mddev->bitmap_info.cluster_name,
-				DLM_LSFL_FS, LVB_SIZE,
-				&md_ls_ops, mddev, &ops_rv, &cinfo->lockspace);
+				0, LVB_SIZE, &md_ls_ops, mddev,
+				&ops_rv, &cinfo->lockspace);
 	if (ret)
 		goto err;
 	wait_for_completion(&cinfo->completion);
diff --git a/fs/dlm/lockspace.c b/fs/dlm/lockspace.c
index 41a6504cfab5..bae050df7abf 100644
--- a/fs/dlm/lockspace.c
+++ b/fs/dlm/lockspace.c
@@ -703,10 +703,11 @@ static int new_lockspace(const char *name, const char *cluster,
 	return error;
 }
 
-int dlm_new_lockspace(const char *name, const char *cluster,
-		      uint32_t flags, int lvblen,
-		      const struct dlm_lockspace_ops *ops, void *ops_arg,
-		      int *ops_result, dlm_lockspace_t **lockspace)
+static int __dlm_new_lockspace(const char *name, const char *cluster,
+			       uint32_t flags, int lvblen,
+			       const struct dlm_lockspace_ops *ops,
+			       void *ops_arg, int *ops_result,
+			       dlm_lockspace_t **lockspace)
 {
 	int error = 0;
 
@@ -732,6 +733,25 @@ int dlm_new_lockspace(const char *name, const char *cluster,
 	return error;
 }
 
+int dlm_new_lockspace(const char *name, const char *cluster, uint32_t flags,
+		      int lvblen, const struct dlm_lockspace_ops *ops,
+		      void *ops_arg, int *ops_result,
+		      dlm_lockspace_t **lockspace)
+{
+	return __dlm_new_lockspace(name, cluster, flags | DLM_LSFL_FS, lvblen,
+				   ops, ops_arg, ops_result, lockspace);
+}
+
+int dlm_new_user_lockspace(const char *name, const char *cluster,
+			   uint32_t flags, int lvblen,
+			   const struct dlm_lockspace_ops *ops,
+			   void *ops_arg, int *ops_result,
+			   dlm_lockspace_t **lockspace)
+{
+	return __dlm_new_lockspace(name, cluster, flags, lvblen, ops,
+				   ops_arg, ops_result, lockspace);
+}
+
 static int lkb_idr_is_local(int id, void *p, void *data)
 {
 	struct dlm_lkb *lkb = p;
diff --git a/fs/dlm/lockspace.h b/fs/dlm/lockspace.h
index 306fc4f4ea15..03f4a4a3a871 100644
--- a/fs/dlm/lockspace.h
+++ b/fs/dlm/lockspace.h
@@ -12,6 +12,14 @@
 #ifndef __LOCKSPACE_DOT_H__
 #define __LOCKSPACE_DOT_H__
 
+/* DLM_LSFL_FS
+ *   The lockspace user is in the kernel (i.e. filesystem).  Enables
+ *   direct bast/cast callbacks.
+ *
+ * internal lockspace flag - will be removed in future
+ */
+#define DLM_LSFL_FS	0x00000004
+
 int dlm_lockspace_init(void);
 void dlm_lockspace_exit(void);
 struct dlm_ls *dlm_find_lockspace_global(uint32_t id);
@@ -20,6 +28,11 @@ struct dlm_ls *dlm_find_lockspace_device(int minor);
 void dlm_put_lockspace(struct dlm_ls *ls);
 void dlm_stop_lockspaces(void);
 void dlm_stop_lockspaces_check(void);
+int dlm_new_user_lockspace(const char *name, const char *cluster,
+			   uint32_t flags, int lvblen,
+			   const struct dlm_lockspace_ops *ops,
+			   void *ops_arg, int *ops_result,
+			   dlm_lockspace_t **lockspace);
 
 #endif				/* __LOCKSPACE_DOT_H__ */
 
diff --git a/fs/dlm/user.c b/fs/dlm/user.c
index ca27f276a3f5..c5d27bccc3dc 100644
--- a/fs/dlm/user.c
+++ b/fs/dlm/user.c
@@ -423,9 +423,9 @@ static int device_create_lockspace(struct dlm_lspace_params *params)
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
 
-	error = dlm_new_lockspace(params->name, dlm_config.ci_cluster_name, params->flags,
-				  DLM_USER_LVB_LEN, NULL, NULL, NULL,
-				  &lockspace);
+	error = dlm_new_user_lockspace(params->name, dlm_config.ci_cluster_name,
+				       params->flags, DLM_USER_LVB_LEN, NULL,
+				       NULL, NULL, &lockspace);
 	if (error)
 		return error;
 
diff --git a/fs/gfs2/lock_dlm.c b/fs/gfs2/lock_dlm.c
index 6ce369b096d4..71911bf9ab34 100644
--- a/fs/gfs2/lock_dlm.c
+++ b/fs/gfs2/lock_dlm.c
@@ -1302,7 +1302,7 @@ static int gdlm_mount(struct gfs2_sbd *sdp, const char *table)
 	memcpy(cluster, table, strlen(table) - strlen(fsname));
 	fsname++;
 
-	flags = DLM_LSFL_FS | DLM_LSFL_NEWEXCL;
+	flags = DLM_LSFL_NEWEXCL;
 
 	/*
 	 * create/join lockspace
diff --git a/fs/ocfs2/stack_user.c b/fs/ocfs2/stack_user.c
index a75e2b7d67f5..64e6ddcfe329 100644
--- a/fs/ocfs2/stack_user.c
+++ b/fs/ocfs2/stack_user.c
@@ -991,7 +991,7 @@ static int user_cluster_connect(struct ocfs2_cluster_connection *conn)
 	lc->oc_type = NO_CONTROLD;
 
 	rc = dlm_new_lockspace(conn->cc_name, conn->cc_cluster_name,
-			       DLM_LSFL_FS | DLM_LSFL_NEWEXCL, DLM_LVB_LEN,
+			       DLM_LSFL_NEWEXCL, DLM_LVB_LEN,
 			       &ocfs2_ls_ops, conn, &ops_rv, &fsdlm);
 	if (rc) {
 		if (rc == -EEXIST || rc == -EPROTO)
diff --git a/include/linux/dlm.h b/include/linux/dlm.h
index ff951e9f6f20..f5f55c2138ae 100644
--- a/include/linux/dlm.h
+++ b/include/linux/dlm.h
@@ -56,9 +56,6 @@ struct dlm_lockspace_ops {
  * DLM_LSFL_TIMEWARN
  *   The dlm should emit netlink messages if locks have been waiting
  *   for a configurable amount of time.  (Unused.)
- * DLM_LSFL_FS
- *   The lockspace user is in the kernel (i.e. filesystem).  Enables
- *   direct bast/cast callbacks.
  * DLM_LSFL_NEWEXCL
  *   dlm_new_lockspace() should return -EEXIST if the lockspace exists.
  *
diff --git a/include/uapi/linux/dlm.h b/include/uapi/linux/dlm.h
index 0d2eca287567..1923f4f3b05e 100644
--- a/include/uapi/linux/dlm.h
+++ b/include/uapi/linux/dlm.h
@@ -69,7 +69,6 @@ struct dlm_lksb {
 /* dlm_new_lockspace() flags */
 
 #define DLM_LSFL_TIMEWARN	0x00000002
-#define DLM_LSFL_FS     	0x00000004
 #define DLM_LSFL_NEWEXCL     	0x00000008
 
 
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [Cluster-devel] [RESEND dlm/next 14/16] fs: dlm: LSFL_CB_DELAY only for kernel lockspaces
  2022-08-15 19:43 [Cluster-devel] [RESEND dlm/next 00/16] fs: dlm: fixes, cleanups and locktorture Alexander Aring
                   ` (12 preceding siblings ...)
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 13/16] fs: dlm: move DLM_LSFL_FS out of uapi Alexander Aring
@ 2022-08-15 19:43 ` Alexander Aring
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 15/16] fs: dlm: const void resource name parameter Alexander Aring
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 16/16] fs: dlm: initial commit of locktorture Alexander Aring
  15 siblings, 0 replies; 17+ messages in thread
From: Alexander Aring @ 2022-08-15 19:43 UTC (permalink / raw)
  To: cluster-devel.redhat.com

This patch only sets/clears the LSFL_CB_DELAY bit when it is actually a
kernel lockspace, which is signaled by whether ls->ls_callback_wq is
set. User lockspaces never evaluate this flag.

Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
 fs/dlm/ast.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/fs/dlm/ast.c b/fs/dlm/ast.c
index a44cc42b6317..d60a8d8f109d 100644
--- a/fs/dlm/ast.c
+++ b/fs/dlm/ast.c
@@ -288,12 +288,13 @@ void dlm_callback_stop(struct dlm_ls *ls)
 
 void dlm_callback_suspend(struct dlm_ls *ls)
 {
-	mutex_lock(&ls->ls_cb_mutex);
-	set_bit(LSFL_CB_DELAY, &ls->ls_flags);
-	mutex_unlock(&ls->ls_cb_mutex);
+	if (ls->ls_callback_wq) {
+		mutex_lock(&ls->ls_cb_mutex);
+		set_bit(LSFL_CB_DELAY, &ls->ls_flags);
+		mutex_unlock(&ls->ls_cb_mutex);
 
-	if (ls->ls_callback_wq)
 		flush_workqueue(ls->ls_callback_wq);
+	}
 }
 
 #define MAX_CB_QUEUE 25
@@ -304,11 +305,11 @@ void dlm_callback_resume(struct dlm_ls *ls)
 	int count = 0, sum = 0;
 	bool empty;
 
-	clear_bit(LSFL_CB_DELAY, &ls->ls_flags);
-
 	if (!ls->ls_callback_wq)
 		return;
 
+	clear_bit(LSFL_CB_DELAY, &ls->ls_flags);
+
 more:
 	mutex_lock(&ls->ls_cb_mutex);
 	list_for_each_entry_safe(lkb, safe, &ls->ls_cb_delay, lkb_cb_list) {
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [Cluster-devel] [RESEND dlm/next 15/16] fs: dlm: const void resource name parameter
  2022-08-15 19:43 [Cluster-devel] [RESEND dlm/next 00/16] fs: dlm: fixes, cleanups and locktorture Alexander Aring
                   ` (13 preceding siblings ...)
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 14/16] fs: dlm: LSFL_CB_DELAY only for kernel lockspaces Alexander Aring
@ 2022-08-15 19:43 ` Alexander Aring
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 16/16] fs: dlm: initial commit of locktorture Alexander Aring
  15 siblings, 0 replies; 17+ messages in thread
From: Alexander Aring @ 2022-08-15 19:43 UTC (permalink / raw)
  To: cluster-devel.redhat.com

The resource name parameter is never changed by DLM, so we declare it
as const. In some places it is handled as a char pointer, but a
resource name can also be a non-printable ASCII string. This patch
changes those places to handle it as a void pointer, as it is offered
by the DLM API.
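
A minimal sketch of what this enables for callers that use binary
resource names (ls, lksb, ast_cb, astarg and bast_cb are assumed to
exist in the caller):

	const struct {
		u64 blkno;
		u32 type;
	} resname = { .blkno = 1234, .type = 2 };
	int error;

	/* &resname can now be passed directly as the (const void *) name */
	error = dlm_lock(ls, DLM_LOCK_PR, &lksb, 0, &resname,
			 sizeof(resname), 0, ast_cb, astarg, bast_cb);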

Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
 fs/dlm/lock.c              | 23 +++++++++++++----------
 fs/dlm/lock.h              |  2 +-
 include/linux/dlm.h        |  2 +-
 include/trace/events/dlm.h |  4 ++--
 4 files changed, 17 insertions(+), 14 deletions(-)

diff --git a/fs/dlm/lock.c b/fs/dlm/lock.c
index cef25f8ac82e..c830feb26384 100644
--- a/fs/dlm/lock.c
+++ b/fs/dlm/lock.c
@@ -401,7 +401,7 @@ static int pre_rsb_struct(struct dlm_ls *ls)
    unlock any spinlocks, go back and call pre_rsb_struct again.
    Otherwise, take an rsb off the list and return it. */
 
-static int get_rsb_struct(struct dlm_ls *ls, char *name, int len,
+static int get_rsb_struct(struct dlm_ls *ls, const void *name, int len,
 			  struct dlm_rsb **r_ret)
 {
 	struct dlm_rsb *r;
@@ -412,7 +412,8 @@ static int get_rsb_struct(struct dlm_ls *ls, char *name, int len,
 		count = ls->ls_new_rsb_count;
 		spin_unlock(&ls->ls_new_rsb_spin);
 		log_debug(ls, "find_rsb retry %d %d %s",
-			  count, dlm_config.ci_new_rsb_count, name);
+			  count, dlm_config.ci_new_rsb_count,
+			  (const char *)name);
 		return -EAGAIN;
 	}
 
@@ -448,7 +449,7 @@ static int rsb_cmp(struct dlm_rsb *r, const char *name, int nlen)
 	return memcmp(r->res_name, maxname, DLM_RESNAME_MAXLEN);
 }
 
-int dlm_search_rsb_tree(struct rb_root *tree, char *name, int len,
+int dlm_search_rsb_tree(struct rb_root *tree, const void *name, int len,
 			struct dlm_rsb **r_ret)
 {
 	struct rb_node *node = tree->rb_node;
@@ -546,7 +547,7 @@ static int rsb_insert(struct dlm_rsb *rsb, struct rb_root *tree)
  * while that rsb has a potentially stale master.)
  */
 
-static int find_rsb_dir(struct dlm_ls *ls, char *name, int len,
+static int find_rsb_dir(struct dlm_ls *ls, const void *name, int len,
 			uint32_t hash, uint32_t b,
 			int dir_nodeid, int from_nodeid,
 			unsigned int flags, struct dlm_rsb **r_ret)
@@ -724,7 +725,7 @@ static int find_rsb_dir(struct dlm_ls *ls, char *name, int len,
    dlm_recover_locks) before we've made ourself master (in
    dlm_recover_masters). */
 
-static int find_rsb_nodir(struct dlm_ls *ls, char *name, int len,
+static int find_rsb_nodir(struct dlm_ls *ls, const void *name, int len,
 			  uint32_t hash, uint32_t b,
 			  int dir_nodeid, int from_nodeid,
 			  unsigned int flags, struct dlm_rsb **r_ret)
@@ -818,8 +819,9 @@ static int find_rsb_nodir(struct dlm_ls *ls, char *name, int len,
 	return error;
 }
 
-static int find_rsb(struct dlm_ls *ls, char *name, int len, int from_nodeid,
-		    unsigned int flags, struct dlm_rsb **r_ret)
+static int find_rsb(struct dlm_ls *ls, const void *name, int len,
+		    int from_nodeid, unsigned int flags,
+		    struct dlm_rsb **r_ret)
 {
 	uint32_t hash, b;
 	int dir_nodeid;
@@ -3320,8 +3322,9 @@ static int _cancel_lock(struct dlm_rsb *r, struct dlm_lkb *lkb)
  * request_lock(), convert_lock(), unlock_lock(), cancel_lock()
  */
 
-static int request_lock(struct dlm_ls *ls, struct dlm_lkb *lkb, char *name,
-			int len, struct dlm_args *args)
+static int request_lock(struct dlm_ls *ls, struct dlm_lkb *lkb,
+			const void *name, int len,
+			struct dlm_args *args)
 {
 	struct dlm_rsb *r;
 	int error;
@@ -3420,7 +3423,7 @@ int dlm_lock(dlm_lockspace_t *lockspace,
 	     int mode,
 	     struct dlm_lksb *lksb,
 	     uint32_t flags,
-	     void *name,
+	     const void *name,
 	     unsigned int namelen,
 	     uint32_t parent_lkid,
 	     void (*ast) (void *astarg),
diff --git a/fs/dlm/lock.h b/fs/dlm/lock.h
index a7b6474f009d..40c76b5544da 100644
--- a/fs/dlm/lock.h
+++ b/fs/dlm/lock.h
@@ -36,7 +36,7 @@ static inline void dlm_adjust_timeouts(struct dlm_ls *ls) { }
 int dlm_master_lookup(struct dlm_ls *ls, int nodeid, char *name, int len,
 		      unsigned int flags, int *r_nodeid, int *result);
 
-int dlm_search_rsb_tree(struct rb_root *tree, char *name, int len,
+int dlm_search_rsb_tree(struct rb_root *tree, const void *name, int len,
 			struct dlm_rsb **r_ret);
 
 void dlm_recover_purge(struct dlm_ls *ls);
diff --git a/include/linux/dlm.h b/include/linux/dlm.h
index f5f55c2138ae..c6bc2b5ee7e6 100644
--- a/include/linux/dlm.h
+++ b/include/linux/dlm.h
@@ -131,7 +131,7 @@ int dlm_lock(dlm_lockspace_t *lockspace,
 	     int mode,
 	     struct dlm_lksb *lksb,
 	     uint32_t flags,
-	     void *name,
+	     const void *name,
 	     unsigned int namelen,
 	     uint32_t parent_lkid,
 	     void (*lockast) (void *astarg),
diff --git a/include/trace/events/dlm.h b/include/trace/events/dlm.h
index 18575206295f..da0eaae98fa3 100644
--- a/include/trace/events/dlm.h
+++ b/include/trace/events/dlm.h
@@ -49,7 +49,7 @@
 /* note: we begin tracing dlm_lock_start() only if ls and lkb are found */
 TRACE_EVENT(dlm_lock_start,
 
-	TP_PROTO(struct dlm_ls *ls, struct dlm_lkb *lkb, void *name,
+	TP_PROTO(struct dlm_ls *ls, struct dlm_lkb *lkb, const void *name,
 		 unsigned int namelen, int mode, __u32 flags),
 
 	TP_ARGS(ls, lkb, name, namelen, mode, flags),
@@ -91,7 +91,7 @@ TRACE_EVENT(dlm_lock_start,
 
 TRACE_EVENT(dlm_lock_end,
 
-	TP_PROTO(struct dlm_ls *ls, struct dlm_lkb *lkb, void *name,
+	TP_PROTO(struct dlm_ls *ls, struct dlm_lkb *lkb, const void *name,
 		 unsigned int namelen, int mode, __u32 flags, int error,
 		 bool kernel_lock),
 
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [Cluster-devel] [RESEND dlm/next 16/16] fs: dlm: initial commit of locktorture
  2022-08-15 19:43 [Cluster-devel] [RESEND dlm/next 00/16] fs: dlm: fixes, cleanups and locktorture Alexander Aring
                   ` (14 preceding siblings ...)
  2022-08-15 19:43 ` [Cluster-devel] [RESEND dlm/next 15/16] fs: dlm: const void resource name parameter Alexander Aring
@ 2022-08-15 19:43 ` Alexander Aring
  15 siblings, 0 replies; 17+ messages in thread
From: Alexander Aring @ 2022-08-15 19:43 UTC (permalink / raw)
  To: cluster-devel.redhat.com

This patch introduces a locktorture test for the DLM subsystem. The idea
is to have a torture test that gives some performance indication for
DLM. The test allocates one DLM lock per lock task and acquires that
lock as often as it can. In a homogeneous cluster (all nodes have equal
hardware) the other nodes will try to acquire those locks as well.

You can run it with "modprobe dlm_locktorture cluster=$CLUSTER_NAME";
the cluster parameter is required and must name the cluster configured
in the cluster manager.

Currently there is only one simple lock operation: create a lock in NL
mode and convert it to EX and back to NL, with a small delay in the
middle to simulate lock contention. The module uses the torture test API
from the Linux kernel to provide this functionality. However, the lock
ops differ from the generic locktorture ones and are currently modelled
as start/stop plus an iteration of a "testing step", e.g. the EX/NL
conversion done in a loop. In the future we can add more tests covering
different DLM lock modes, or pressure tests for features such as lock
request cancellation; a rough sketch of such an extension follows below.
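
Not part of this patch, but to show how the ops vector is meant to be
extended, a hypothetical second test type that cycles PR and NL could
look roughly like the sketch below. torture_pr_iter() and prnl_lock_ops
are invented names; torture_dlm_lock_sync(), torture_delay(),
torture_start() and torture_stop() are the helpers added by this patch.

	/* hypothetical sketch, reusing the helpers from dlm_locktorture.c */
	static int torture_pr_iter(struct dlm_lksb *lksb, const char *res_name,
				   size_t res_name_len,
				   struct torture_random_state *trsp)
	{
		int ret;

		ret = torture_dlm_lock_sync(DLM_LOCK_PR, DLM_LKF_CONVERT,
					    lksb, res_name, res_name_len);
		if (ret)
			return ret;

		/* fake lock contention, as in the EX/NL case */
		torture_delay(trsp);

		return torture_dlm_lock_sync(DLM_LOCK_NL, DLM_LKF_CONVERT,
					     lksb, res_name, res_name_len);
	}

	static struct lock_torture_ops prnl_lock_ops = {
		.start	= torture_start,
		.iter	= torture_pr_iter,
		.stop	= torture_stop,
		.name	= "prnl"
	};

Wiring it up would only require adding &prnl_lock_ops to the
torture_ops[] array in lock_torture_init() and selecting it via
torture_type.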

Signed-off-by: Alexander Aring <aahringo@redhat.com>
---
 fs/dlm/Kconfig           |  11 +
 fs/dlm/Makefile          |   1 +
 fs/dlm/dlm_locktorture.c | 517 +++++++++++++++++++++++++++++++++++++++
 3 files changed, 529 insertions(+)
 create mode 100644 fs/dlm/dlm_locktorture.c

diff --git a/fs/dlm/Kconfig b/fs/dlm/Kconfig
index 1105ce3c80cb..5b9ffa09c2fa 100644
--- a/fs/dlm/Kconfig
+++ b/fs/dlm/Kconfig
@@ -25,3 +25,14 @@ config DLM_DEBUG
 	Under the debugfs mount point, the name of each lockspace will
 	appear as a file in the "dlm" directory.  The output is the
 	list of resource and locks the local node knows about.
+
+config DLM_LOCKTORTURE
+	tristate "DLM locktorture"
+	depends on DLM && m
+	select TORTURE_TEST
+	help
+	This option provides a kernel module that runs torture tests on
+	the DLM subsystem. If loaded on a homogeneous cluster setup (e.g.
+	all cluster nodes have the same architecture) it will run
+	concurrent lock and unlock procedures. The printed stats will show
+	how many lock testcase iterations were possible.
diff --git a/fs/dlm/Makefile b/fs/dlm/Makefile
index 71dab733cf9a..4d333b4502ba 100644
--- a/fs/dlm/Makefile
+++ b/fs/dlm/Makefile
@@ -19,4 +19,5 @@ dlm-y :=			ast.o \
 				util.o 
 dlm-$(CONFIG_DLM_DEPRECATED_API) +=	netlink.o
 dlm-$(CONFIG_DLM_DEBUG) +=	debug_fs.o
+obj-$(CONFIG_DLM_LOCKTORTURE) += dlm_locktorture.o
 
diff --git a/fs/dlm/dlm_locktorture.c b/fs/dlm/dlm_locktorture.c
new file mode 100644
index 000000000000..fa3a2bc8bd49
--- /dev/null
+++ b/fs/dlm/dlm_locktorture.c
@@ -0,0 +1,517 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Module-based torture test facility for dlm locking
+ *
+ * Copyright (C) 2022 Red Hat, Inc.  All rights reserved.
+ * Copyright (C) IBM Corporation, 2014
+ *
+ * Authors: Alexander Aring <aahringo@redhat.com>
+ *
+ * Original Authors: Paul E. McKenney <paulmck@linux.ibm.com>
+ *		     Davidlohr Bueso <dave@stgolabs.net>
+ *
+ * Based on kernel/locking/locktorture.c.
+ */
+
+#define pr_fmt(fmt) fmt
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/kthread.h>
+#include <linux/sched/rt.h>
+#include <linux/smp.h>
+#include <linux/interrupt.h>
+#include <linux/sched.h>
+#include <uapi/linux/sched/types.h>
+#include <linux/moduleparam.h>
+#include <linux/delay.h>
+#include <linux/slab.h>
+#include <linux/torture.h>
+#include <linux/reboot.h>
+#include <linux/dlm.h>
+
+MODULE_LICENSE("GPL");
+MODULE_AUTHOR("Alexander Aring <aahringo@redhat.com>");
+
+torture_param(int, nlock_stress, -1,
+	      "Number of locking stress-test threads");
+torture_param(int, shuffle_interval, 3,
+	      "Number of jiffies between shuffles, 0=disable");
+torture_param(int, shutdown_secs, 0, "Shutdown time (j), <= zero to disable.");
+torture_param(int, stat_interval, 60,
+	      "Number of seconds between stats printk()s");
+torture_param(int, verbose, 1,
+	      "Enable verbose debugging printk()s");
+/* because torture_param() wants to use charp */
+typedef char *charp;
+torture_param(charp, cluster, NULL,
+	      "Cluster name that lockspace will join");
+
+#define DLM_LOCKTORTURE_RES_NAME_LEN (DLM_RESNAME_MAXLEN + 1)
+
+static struct task_struct **lock_tasks;
+static struct task_struct *stats_task;
+static char *torture_type = "exnl";
+static dlm_lockspace_t *ls;
+static long long prev_sum;
+
+struct lock_stress_stats {
+	long n_iter;
+};
+
+struct lock_data {
+	struct lock_stress_stats s;
+
+	char res_name[DLM_LOCKTORTURE_RES_NAME_LEN];
+	size_t res_name_len;
+};
+
+/*
+ * Operations vector for selecting different types of tests.
+ */
+struct lock_torture_ops {
+	int (*start)(struct dlm_lksb *lksb, const char *res_name,
+		     size_t res_name_len);
+	int (*iter)(struct dlm_lksb *lksb, const char *res_name,
+		    size_t res_name_len, struct torture_random_state *trsp);
+	int (*stop)(struct dlm_lksb *lksb);
+
+	const char *name;
+};
+
+struct lock_torture_cxt {
+	int nreallock_stress;
+	struct lock_torture_ops *cur_ops;
+	struct lock_data *lwd;
+};
+static struct lock_torture_cxt cxt = { 0, NULL, NULL};
+
+static void ast(void *astarg)
+{
+	complete(astarg);
+	pr_debug("dlm_locktorture: %s\n", __func__);
+}
+
+static void bast(void *astarg, int mode)
+{
+	pr_debug("dlm_locktorture: %s mode: %d\n", __func__, mode);
+}
+
+static void torture_delay(struct torture_random_state *trsp)
+{
+	const unsigned long longdelay_ms = 100;
+
+	/* We want a long delay occasionally to force massive contention.  */
+	if (!(torture_random(trsp) %
+	      (cxt.nreallock_stress * 2000 * longdelay_ms)))
+		mdelay(longdelay_ms * 5);
+	else
+		mdelay(longdelay_ms / 5);
+	if (!(torture_random(trsp) % (cxt.nreallock_stress * 20000)))
+		torture_preempt_schedule();  /* Allow test to be preempted. */
+}
+
+static int torture_dlm_lock_sync(int mode, uint32_t flags,
+				 struct dlm_lksb *lksb, const char *res_name,
+				 size_t res_name_len)
+{
+	struct completion completion;
+	int ret;
+
+	init_completion(&completion);
+retry:
+	ret = dlm_lock(ls, mode, lksb, flags, res_name, res_name_len, 0, ast,
+		       &completion, bast);
+	switch (ret) {
+	case 0:
+		wait_for_completion(&completion);
+		return 0;
+	case -EBUSY:
+		goto retry;
+	default:
+		break;
+	}
+
+	return ret;
+}
+
+static int torture_start(struct dlm_lksb *lksb, const char *res_name,
+			 size_t res_name_len)
+{
+	return torture_dlm_lock_sync(DLM_LOCK_NL, 0, lksb, res_name,
+				     res_name_len);
+}
+
+static int torture_stop(struct dlm_lksb *lksb)
+{
+	struct completion completion;
+	int ret;
+
+	init_completion(&completion);
+retry:
+	ret = dlm_unlock(ls, lksb->sb_lkid, 0, lksb, &completion);
+	switch (ret) {
+	case 0:
+		wait_for_completion(&completion);
+		return 0;
+	case -EBUSY:
+		goto retry;
+	default:
+		break;
+	}
+
+	return ret;
+}
+
+/* exclusive lock case, switch between EX and NL */
+
+static int torture_ex_iter(struct dlm_lksb *lksb, const char *res_name,
+			   size_t res_name_len, struct torture_random_state *trsp)
+{
+	int ret;
+
+	ret = torture_dlm_lock_sync(DLM_LOCK_EX, DLM_LKF_CONVERT,
+				    lksb, res_name, res_name_len);
+	if (ret)
+		return ret;
+
+	/* fake lock contention */
+	torture_delay(trsp);
+
+	ret = torture_dlm_lock_sync(DLM_LOCK_NL, DLM_LKF_CONVERT,
+				    lksb, res_name, res_name_len);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static struct lock_torture_ops exnl_lock_ops = {
+	.start		= torture_start,
+	.iter		= torture_ex_iter,
+	.stop		= torture_stop,
+	.name           = "exnl"
+};
+
+/*
+ * Lock torture lock kthread.  Repeatedly acquires and releases
+ * the lock, checking for duplicate acquisitions.
+ */
+static int lock_torture(void *arg)
+{
+	struct lock_data *lwd = arg;
+	DEFINE_TORTURE_RANDOM(rand);
+	struct dlm_lksb lksb;
+	int ret;
+
+	VERBOSE_TOROUT_STRING("lock_torture task started");
+	set_user_nice(current, MAX_NICE);
+
+	ret = cxt.cur_ops->start(&lksb, lwd->res_name, lwd->res_name_len);
+	if (WARN_ON_ONCE(ret))
+		return ret;
+
+	do {
+		if ((torture_random(&rand) & 0xfffff) == 0)
+			schedule_timeout_uninterruptible(1);
+
+		ret = cxt.cur_ops->iter(&lksb, lwd->res_name,
+					lwd->res_name_len, &rand);
+		if (WARN_ON_ONCE(ret))
+			break;
+
+		lwd->s.n_iter++;
+	} while (!torture_must_stop());
+
+	ret = cxt.cur_ops->stop(&lksb);
+	if (WARN_ON_ONCE(ret))
+		return ret;
+
+	torture_kthread_stopping("lock_torture");
+	return 0;
+}
+
+/*
+ * Create a lock-torture-statistics message in the specified buffer.
+ */
+static void __torture_print_stats(char *page,
+				  struct lock_data *ld)
+{
+	long long sum = 0, sum_diff;
+	int i, n_stress;
+
+	n_stress = cxt.nreallock_stress;
+	for (i = 0; i < n_stress; i++)
+		sum += ld[i].s.n_iter;
+
+	sum_diff = sum - prev_sum;
+	prev_sum = sum;
+
+	page += sprintf(page, "Iterations: %lld\n", sum_diff);
+}
+
+/*
+ * Print torture statistics.  Caller must ensure that there is only one
+ * call to this function at a given time!!!  This is normally accomplished
+ * by relying on the module system to only have one copy of the module
+ * loaded, and then by giving the lock_torture_stats kthread full control
+ * (or the init/cleanup functions when lock_torture_stats thread is not
+ * running).
+ */
+static void lock_torture_stats_print(void)
+{
+	int size = cxt.nreallock_stress * 200 + 8192;
+	char *buf;
+
+	buf = kmalloc(size, GFP_KERNEL);
+	if (!buf) {
+		pr_err("%s: Out of memory, need: %d",
+		       __func__, size);
+		return;
+	}
+
+	__torture_print_stats(buf, cxt.lwd);
+	pr_alert("%s", buf);
+	kfree(buf);
+}
+
+/*
+ * Periodically prints torture statistics, if periodic statistics printing
+ * was specified via the stat_interval module parameter.
+ *
+ * No need to worry about fullstop here, since this one doesn't reference
+ * volatile state or register callbacks.
+ */
+static int lock_torture_stats(void *arg)
+{
+	VERBOSE_TOROUT_STRING("lock_torture_stats task started");
+	do {
+		schedule_timeout_interruptible(stat_interval * HZ);
+		lock_torture_stats_print();
+		torture_shutdown_absorb("lock_torture_stats");
+	} while (!torture_must_stop());
+	torture_kthread_stopping("lock_torture_stats");
+	return 0;
+}
+
+static inline void
+lock_torture_print_module_parms(struct lock_torture_ops *cur_ops,
+				const char *tag)
+{
+	pr_alert("%s" TORTURE_FLAG
+		 "--- %s: cluser=%s nlock_stress=%d stat_interval=%d verbose=%d shuffle_interval=%d shutdown_secs=%d\n",
+		 torture_type, tag, cluster, cxt.nreallock_stress,
+		 stat_interval, verbose, shuffle_interval, shutdown_secs);
+}
+
+static void lock_torture_cleanup(void)
+{
+	int i, ret;
+
+	if (torture_cleanup_begin())
+		return;
+
+	/*
+	 * Indicates early cleanup, meaning that the test has not run,
+	 * such as when passing bogus args when loading the module.
+	 * In that case there are no lock tasks or stats to tear down, but
+	 * the generic torture framework cleanup (torture_cleanup_end()
+	 * below) still has to run.
+	 */
+	if (!cxt.lwd)
+		goto end;
+
+	if (lock_tasks) {
+		for (i = 0; i < cxt.nreallock_stress; i++)
+			torture_stop_kthread(lock_torture, lock_tasks[i]);
+
+		kfree(lock_tasks);
+		lock_tasks = NULL;
+
+		if (ls) {
+			ret = dlm_release_lockspace(ls, 2);
+			WARN_ON(ret);
+			ls = NULL;
+		}
+	}
+
+	torture_stop_kthread(lock_torture_stats, stats_task);
+
+	lock_torture_stats_print();  /* -After- the stats thread is stopped! */
+
+	if (torture_onoff_failures())
+		lock_torture_print_module_parms(cxt.cur_ops,
+						"End of test: LOCK_HOTPLUG");
+	else
+		lock_torture_print_module_parms(cxt.cur_ops,
+						"End of test: SUCCESS");
+
+	kfree(cxt.lwd);
+	cxt.lwd = NULL;
+
+end:
+	torture_cleanup_end();
+}
+
+static void recover_prep(void *arg)
+{
+	pr_info("dlm_locktorture: %s\n", __func__);
+}
+
+static void recover_slot(void *arg, struct dlm_slot *slot)
+{
+	pr_info("dlm_locktorture: %s nodeid: %d slot: %d\n", __func__,
+		slot->nodeid, slot->slot);
+}
+
+static void recover_done(void *arg, struct dlm_slot *slots,
+			 int num_slots, int our_slot,
+			 uint32_t generation)
+{
+	int i;
+
+	pr_info("dlm_locktorture: %s num_slots: %d our_slot: %d generation: %u\n",
+		__func__, num_slots, our_slot, generation);
+
+	for (i = 0; i < num_slots; i++) {
+		pr_info("dlm_locktorture: %s slot->nodeid: %d slot->slot: %d\n",
+			__func__, slots[i].nodeid, slots[i].slot);
+	}
+}
+
+static const struct dlm_lockspace_ops torture_ls_ops = {
+	.recover_prep = recover_prep,
+	.recover_slot = recover_slot,
+	.recover_done = recover_done,
+};
+
+static int __init lock_torture_init(void)
+{
+	static struct lock_torture_ops *torture_ops[] = {
+		&exnl_lock_ops,
+	};
+	char str[DLM_LOCKTORTURE_RES_NAME_LEN];
+	int i, ret;
+
+	if (!cluster) {
+		pr_err("dlm_locktorture: cluster parameter required\n");
+		return -EINVAL;
+	}
+
+	if (!torture_init_begin(torture_type, verbose))
+		return -EBUSY;
+
+	/* Process args and tell the world that the torturer is on the job. */
+	for (i = 0; i < ARRAY_SIZE(torture_ops); i++) {
+		cxt.cur_ops = torture_ops[i];
+		if (strcmp(torture_type, cxt.cur_ops->name) == 0)
+			break;
+	}
+	if (i == ARRAY_SIZE(torture_ops)) {
+		pr_alert("lock-torture: invalid torture type: \"%s\"\n",
+			 torture_type);
+		pr_alert("lock-torture types:");
+		for (i = 0; i < ARRAY_SIZE(torture_ops); i++)
+			pr_alert(" %s", torture_ops[i]->name);
+		pr_alert("\n");
+		ret = -EINVAL;
+		goto err;
+	}
+
+	if (nlock_stress >= 0)
+		cxt.nreallock_stress = nlock_stress;
+	else
+		cxt.nreallock_stress = 2 * num_online_cpus();
+
+	/* Initialize the statistics so that each run gets its own numbers. */
+	if (nlock_stress) {
+		cxt.lwd = kmalloc_array(cxt.nreallock_stress,
+					sizeof(*cxt.lwd),
+					GFP_KERNEL);
+		if (cxt.lwd == NULL) {
+			VERBOSE_TOROUT_STRING("cxt.lwd: Out of memory");
+			ret = -ENOMEM;
+			goto err;
+		}
+
+		for (i = 0; i < cxt.nreallock_stress; i++) {
+			cxt.lwd[i].s.n_iter = 0;
+
+			snprintf(str, DLM_LOCKTORTURE_RES_NAME_LEN, "%s_%d",
+				 cxt.cur_ops->name, i);
+			snprintf(cxt.lwd[i].res_name, DLM_LOCKTORTURE_RES_NAME_LEN,
+				 "%-64s", str);
+			cxt.lwd[i].res_name_len = strlen(cxt.lwd[i].res_name);
+		}
+	}
+
+	lock_torture_print_module_parms(cxt.cur_ops, "Start of test");
+
+	/* Prepare torture context. */
+	if (shuffle_interval > 0) {
+		ret = torture_shuffle_init(shuffle_interval);
+		if (ret)
+			goto err;
+	}
+
+	if (shutdown_secs > 0) {
+		ret = torture_shutdown_init(shutdown_secs,
+					    lock_torture_cleanup);
+		if (ret)
+			goto err;
+	}
+
+	if (nlock_stress) {
+		lock_tasks = kcalloc(cxt.nreallock_stress,
+				     sizeof(lock_tasks[0]), GFP_KERNEL);
+		if (lock_tasks == NULL) {
+			TOROUT_ERRSTRING("lock_tasks: Out of memory");
+			ret = -ENOMEM;
+			goto err;
+		}
+
+		ret = dlm_new_lockspace("locktorture", cluster, 0, 64, &torture_ls_ops,
+					NULL, &ret, &ls);
+		if (ret)
+			goto err;
+	}
+
+	/*
+	 * Create the kthreads and start torturing (oh, those poor little dlm locks).
+	 */
+	for (i = 0; i < cxt.nreallock_stress; i++) {
+		/* Create lockers. */
+		ret = torture_create_kthread(lock_torture, &cxt.lwd[i],
+					     lock_tasks[i]);
+		if (ret)
+			goto err;
+	}
+
+	if (stat_interval > 0) {
+		ret = torture_create_kthread(lock_torture_stats, NULL,
+					     stats_task);
+		if (ret)
+			goto err;
+	}
+
+	torture_init_end();
+
+	return 0;
+
+err:
+	torture_init_end();
+	lock_torture_cleanup();
+
+	if (ls) {
+		ret = dlm_release_lockspace(ls, 2);
+		WARN_ON(ret);
+	}
+
+	if (shutdown_secs)
+		kernel_power_off();
+
+	return ret;
+}
+
+module_init(lock_torture_init);
+module_exit(lock_torture_cleanup);
-- 
2.31.1


^ permalink raw reply related	[flat|nested] 17+ messages in thread

