* [PATCH v3 0/2] kcov: improve mmap processing
@ 2022-01-17 15:36 Aleksandr Nogikh
  2022-01-17 15:36 ` [PATCH v3 1/2] kcov: split ioctl handling into locked and unlocked parts Aleksandr Nogikh
  2022-01-17 15:36 ` [PATCH v3 2/2] kcov: properly handle subsequent mmap calls Aleksandr Nogikh
  0 siblings, 2 replies; 9+ messages in thread
From: Aleksandr Nogikh @ 2022-01-17 15:36 UTC (permalink / raw)
  To: kasan-dev, linux-kernel, akpm
  Cc: dvyukov, andreyknvl, elver, glider, tarasmadan, bigeasy, nogikh

Subsequent mmaps of the same kcov descriptor currently do not update the
virtual memory of the task and yet return 0 (success). This is
counter-intuitive and may lead to unexpected memory access errors.

This also unnecessarily limits kcov to only the simplest usage
scenarios: kcov instances are effectively attached to their first
address space forever, and it becomes impossible to e.g. reuse the same
kcov handle in forked child processes without mmapping the memory first.
This is exactly what we tried to do in syzkaller, which is how we
inadvertently came upon this behavior.

This patch series addresses the problem described above.
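
For reference, a minimal userspace sketch of the scenario above
(illustrative only, not part of the series; the ioctl numbers and the
/sys/kernel/debug/kcov path follow the documented kcov interface,
COVER_SIZE is arbitrary and error handling is omitted):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>

#define KCOV_INIT_TRACE _IOR('c', 1, unsigned long)
#define KCOV_ENABLE     _IO('c', 100)
#define KCOV_TRACE_PC   0
#define COVER_SIZE      (64 << 10)

int main(void)
{
        int fd = open("/sys/kernel/debug/kcov", O_RDWR);
        unsigned long *first, *second;

        ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE);
        first = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
                     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        /*
         * A second mmap of the same descriptor (e.g. from a forked child
         * that wants its own mapping) also returns successfully...
         */
        second = mmap(NULL, COVER_SIZE * sizeof(unsigned long),
                      PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        ioctl(fd, KCOV_ENABLE, KCOV_TRACE_PC);
        /*
         * ...but without this series no kcov pages are ever inserted into
         * the second VMA, so it is not backed by the coverage buffer and
         * accessing it triggers the unexpected behavior described above.
         * With the series applied, both mappings show the same buffer.
         */
        printf("first[0]=%lu second[0]=%lu\n", first[0], second[0]);
        return 0;
}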

v1 of the patch:
https://lore.kernel.org/lkml/20211220152153.910990-1-nogikh@google.com/

Changes from v1 to v2:
- Split into 2 commits.
- Minor coding style changes.

v2 of the patch:
https://lore.kernel.org/lkml/20211221170348.1113266-1-nogikh@google.com/T/

Changes from v2 to v3:
- The first commit now implements purely non-functional changes.
- No extra function is introduced in the first commit.

Aleksandr Nogikh (2):
  kcov: split ioctl handling into locked and unlocked parts
  kcov: properly handle subsequent mmap calls

 kernel/kcov.c | 98 ++++++++++++++++++++++++++-------------------------
 1 file changed, 50 insertions(+), 48 deletions(-)

-- 
2.34.1.703.g22d0c6ccf7-goog



* [PATCH v3 1/2] kcov: split ioctl handling into locked and unlocked parts
  2022-01-17 15:36 [PATCH v3 0/2] kcov: improve mmap processing Aleksandr Nogikh
@ 2022-01-17 15:36 ` Aleksandr Nogikh
  2022-01-18  7:02   ` Dmitry Vyukov
  2022-01-24 22:33   ` Andrey Konovalov
  2022-01-17 15:36 ` [PATCH v3 2/2] kcov: properly handle subsequent mmap calls Aleksandr Nogikh
  1 sibling, 2 replies; 9+ messages in thread
From: Aleksandr Nogikh @ 2022-01-17 15:36 UTC (permalink / raw)
  To: kasan-dev, linux-kernel, akpm
  Cc: dvyukov, andreyknvl, elver, glider, tarasmadan, bigeasy, nogikh

Currently all ioctls are de facto processed under a spinlock in order
to serialise them. This, however, prohibits the use of vmalloc and other
memory management functions in the implementations of those ioctls,
unnecessarily complicating any further changes to the code.

Let all ioctls first be processed inside the kcov_ioctl() function,
which executes the ones that are not compatible with a spinlock and
then passes control to kcov_ioctl_locked() for all the others.
KCOV_REMOTE_ENABLE is processed both in kcov_ioctl() and
kcov_ioctl_locked(), as its steps are easily separable.

Although KCOV_INIT_TRACE is still compatible with a spinlock, move its
handling to kcov_ioctl() as well, so that the changes in the next
commit are easier to follow.

Signed-off-by: Aleksandr Nogikh <nogikh@google.com>
---
 kernel/kcov.c | 68 ++++++++++++++++++++++++++++-----------------------
 1 file changed, 37 insertions(+), 31 deletions(-)

diff --git a/kernel/kcov.c b/kernel/kcov.c
index 36ca640c4f8e..e1be7301500b 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -564,31 +564,12 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
 			     unsigned long arg)
 {
 	struct task_struct *t;
-	unsigned long size, unused;
+	unsigned long flags, unused;
 	int mode, i;
 	struct kcov_remote_arg *remote_arg;
 	struct kcov_remote *remote;
-	unsigned long flags;
 
 	switch (cmd) {
-	case KCOV_INIT_TRACE:
-		/*
-		 * Enable kcov in trace mode and setup buffer size.
-		 * Must happen before anything else.
-		 */
-		if (kcov->mode != KCOV_MODE_DISABLED)
-			return -EBUSY;
-		/*
-		 * Size must be at least 2 to hold current position and one PC.
-		 * Later we allocate size * sizeof(unsigned long) memory,
-		 * that must not overflow.
-		 */
-		size = arg;
-		if (size < 2 || size > INT_MAX / sizeof(unsigned long))
-			return -EINVAL;
-		kcov->size = size;
-		kcov->mode = KCOV_MODE_INIT;
-		return 0;
 	case KCOV_ENABLE:
 		/*
 		 * Enable coverage for the current task.
@@ -692,9 +673,32 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
 	struct kcov_remote_arg *remote_arg = NULL;
 	unsigned int remote_num_handles;
 	unsigned long remote_arg_size;
-	unsigned long flags;
+	unsigned long size, flags;
 
-	if (cmd == KCOV_REMOTE_ENABLE) {
+	kcov = filep->private_data;
+	switch (cmd) {
+	case KCOV_INIT_TRACE:
+		/*
+		 * Enable kcov in trace mode and setup buffer size.
+		 * Must happen before anything else.
+		 *
+		 * First check the size argument - it must be at least 2
+		 * to hold the current position and one PC. Later we allocate
+		 * size * sizeof(unsigned long) memory, that must not overflow.
+		 */
+		size = arg;
+		if (size < 2 || size > INT_MAX / sizeof(unsigned long))
+			return -EINVAL;
+		spin_lock_irqsave(&kcov->lock, flags);
+		if (kcov->mode != KCOV_MODE_DISABLED) {
+			spin_unlock_irqrestore(&kcov->lock, flags);
+			return -EBUSY;
+		}
+		kcov->size = size;
+		kcov->mode = KCOV_MODE_INIT;
+		spin_unlock_irqrestore(&kcov->lock, flags);
+		return 0;
+	case KCOV_REMOTE_ENABLE:
 		if (get_user(remote_num_handles, (unsigned __user *)(arg +
 				offsetof(struct kcov_remote_arg, num_handles))))
 			return -EFAULT;
@@ -710,16 +714,18 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
 			return -EINVAL;
 		}
 		arg = (unsigned long)remote_arg;
+		fallthrough;
+	default:
+		/*
+		 * All other commands can be normally executed under a spin lock, so we
+		 * obtain and release it here in order to simplify kcov_ioctl_locked().
+		 */
+		spin_lock_irqsave(&kcov->lock, flags);
+		res = kcov_ioctl_locked(kcov, cmd, arg);
+		spin_unlock_irqrestore(&kcov->lock, flags);
+		kfree(remote_arg);
+		return res;
 	}
-
-	kcov = filep->private_data;
-	spin_lock_irqsave(&kcov->lock, flags);
-	res = kcov_ioctl_locked(kcov, cmd, arg);
-	spin_unlock_irqrestore(&kcov->lock, flags);
-
-	kfree(remote_arg);
-
-	return res;
 }
 
 static const struct file_operations kcov_fops = {
-- 
2.34.1.703.g22d0c6ccf7-goog



* [PATCH v3 2/2] kcov: properly handle subsequent mmap calls
  2022-01-17 15:36 [PATCH v3 0/2] kcov: improve mmap processing Aleksandr Nogikh
  2022-01-17 15:36 ` [PATCH v3 1/2] kcov: split ioctl handling into locked and unlocked parts Aleksandr Nogikh
@ 2022-01-17 15:36 ` Aleksandr Nogikh
  2022-01-18  7:02   ` Dmitry Vyukov
  2022-01-24 22:33   ` Andrey Konovalov
  1 sibling, 2 replies; 9+ messages in thread
From: Aleksandr Nogikh @ 2022-01-17 15:36 UTC (permalink / raw)
  To: kasan-dev, linux-kernel, akpm
  Cc: dvyukov, andreyknvl, elver, glider, tarasmadan, bigeasy, nogikh

Allocate the kcov buffer during KCOV_INIT_TRACE in order to decouple
mmapping of a kcov instance from the actual coverage collection process.
Modify kcov_mmap() so that it can be reliably used any number of times
once KCOV_INIT_TRACE has succeeded.

These changes to the user-facing interface of the tool only weaken the
preconditions, so all existing user space code should remain compatible
with the new version.

Signed-off-by: Aleksandr Nogikh <nogikh@google.com>
---
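
Not part of the commit message: below is a hypothetical userspace sketch
of the usage pattern this change enables (one KCOV_INIT_TRACE per
descriptor, then every forked worker mmaps the same fd before enabling
coverage). The ioctl numbers follow the documented kcov interface and
error handling is omitted.

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define KCOV_INIT_TRACE _IOR('c', 1, unsigned long)
#define KCOV_ENABLE     _IO('c', 100)
#define KCOV_DISABLE    _IO('c', 101)
#define KCOV_TRACE_PC   0
#define COVER_SIZE      (64 << 10)

int main(void)
{
        int fd = open("/sys/kernel/debug/kcov", O_RDWR);
        int i;

        /* The coverage buffer is now allocated here, not at the first mmap. */
        ioctl(fd, KCOV_INIT_TRACE, COVER_SIZE);
        for (i = 0; i < 4; i++) {
                if (fork() == 0) {
                        /* Each child can map the same descriptor again. */
                        unsigned long *cover =
                                mmap(NULL, COVER_SIZE * sizeof(unsigned long),
                                     PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

                        ioctl(fd, KCOV_ENABLE, KCOV_TRACE_PC);
                        /* ... run the workload of interest, read cover[] ... */
                        ioctl(fd, KCOV_DISABLE, 0);
                        munmap(cover, COVER_SIZE * sizeof(unsigned long));
                        _exit(0);
                }
                /* Only one task may have this kcov enabled at a time. */
                wait(NULL);
        }
        return 0;
}
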
 kernel/kcov.c | 34 +++++++++++++++-------------------
 1 file changed, 15 insertions(+), 19 deletions(-)

diff --git a/kernel/kcov.c b/kernel/kcov.c
index e1be7301500b..475524bd900a 100644
--- a/kernel/kcov.c
+++ b/kernel/kcov.c
@@ -459,37 +459,28 @@ void kcov_task_exit(struct task_struct *t)
 static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
 {
 	int res = 0;
-	void *area;
 	struct kcov *kcov = vma->vm_file->private_data;
 	unsigned long size, off;
 	struct page *page;
 	unsigned long flags;
 
-	area = vmalloc_user(vma->vm_end - vma->vm_start);
-	if (!area)
-		return -ENOMEM;
-
 	spin_lock_irqsave(&kcov->lock, flags);
 	size = kcov->size * sizeof(unsigned long);
-	if (kcov->mode != KCOV_MODE_INIT || vma->vm_pgoff != 0 ||
+	if (kcov->area == NULL || vma->vm_pgoff != 0 ||
 	    vma->vm_end - vma->vm_start != size) {
 		res = -EINVAL;
 		goto exit;
 	}
-	if (!kcov->area) {
-		kcov->area = area;
-		vma->vm_flags |= VM_DONTEXPAND;
-		spin_unlock_irqrestore(&kcov->lock, flags);
-		for (off = 0; off < size; off += PAGE_SIZE) {
-			page = vmalloc_to_page(kcov->area + off);
-			if (vm_insert_page(vma, vma->vm_start + off, page))
-				WARN_ONCE(1, "vm_insert_page() failed");
-		}
-		return 0;
+	spin_unlock_irqrestore(&kcov->lock, flags);
+	vma->vm_flags |= VM_DONTEXPAND;
+	for (off = 0; off < size; off += PAGE_SIZE) {
+		page = vmalloc_to_page(kcov->area + off);
+		if (vm_insert_page(vma, vma->vm_start + off, page))
+			WARN_ONCE(1, "vm_insert_page() failed");
 	}
+	return 0;
 exit:
 	spin_unlock_irqrestore(&kcov->lock, flags);
-	vfree(area);
 	return res;
 }
 
@@ -674,6 +665,7 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
 	unsigned int remote_num_handles;
 	unsigned long remote_arg_size;
 	unsigned long size, flags;
+	void *area;
 
 	kcov = filep->private_data;
 	switch (cmd) {
@@ -683,17 +675,21 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
 		 * Must happen before anything else.
 		 *
 		 * First check the size argument - it must be at least 2
-		 * to hold the current position and one PC. Later we allocate
-		 * size * sizeof(unsigned long) memory, that must not overflow.
+		 * to hold the current position and one PC.
 		 */
 		size = arg;
 		if (size < 2 || size > INT_MAX / sizeof(unsigned long))
 			return -EINVAL;
+		area = vmalloc_user(size * sizeof(unsigned long));
+		if (area == NULL)
+			return -ENOMEM;
 		spin_lock_irqsave(&kcov->lock, flags);
 		if (kcov->mode != KCOV_MODE_DISABLED) {
 			spin_unlock_irqrestore(&kcov->lock, flags);
+			vfree(area);
 			return -EBUSY;
 		}
+		kcov->area = area;
 		kcov->size = size;
 		kcov->mode = KCOV_MODE_INIT;
 		spin_unlock_irqrestore(&kcov->lock, flags);
-- 
2.34.1.703.g22d0c6ccf7-goog



* Re: [PATCH v3 1/2] kcov: split ioctl handling into locked and unlocked parts
  2022-01-17 15:36 ` [PATCH v3 1/2] kcov: split ioctl handling into locked and unlocked parts Aleksandr Nogikh
@ 2022-01-18  7:02   ` Dmitry Vyukov
  2022-01-24 22:33   ` Andrey Konovalov
  1 sibling, 0 replies; 9+ messages in thread
From: Dmitry Vyukov @ 2022-01-18  7:02 UTC (permalink / raw)
  To: Aleksandr Nogikh
  Cc: kasan-dev, linux-kernel, akpm, andreyknvl, elver, glider,
	tarasmadan, bigeasy

On Mon, 17 Jan 2022 at 16:36, Aleksandr Nogikh <nogikh@google.com> wrote:
>
> Currently all ioctls are de facto processed under a spinlock in order
> to serialise them. This, however, prohibits the use of vmalloc and other
> memory management functions in the implementations of those ioctls,
> unnecessarily complicating any further changes to the code.
>
> Let all ioctls first be processed inside the kcov_ioctl() function,
> which executes the ones that are not compatible with a spinlock and
> then passes control to kcov_ioctl_locked() for all the others.
> KCOV_REMOTE_ENABLE is processed both in kcov_ioctl() and
> kcov_ioctl_locked(), as its steps are easily separable.
>
> Although KCOV_INIT_TRACE is still compatible with a spinlock, move its
> handling to kcov_ioctl() as well, so that the changes in the next
> commit are easier to follow.
>
> Signed-off-by: Aleksandr Nogikh <nogikh@google.com>

Reviewed-by: Dmitry Vyukov <dvyukov@google.com>

> ---
>  kernel/kcov.c | 68 ++++++++++++++++++++++++++++-----------------------
>  1 file changed, 37 insertions(+), 31 deletions(-)
>
> diff --git a/kernel/kcov.c b/kernel/kcov.c
> index 36ca640c4f8e..e1be7301500b 100644
> --- a/kernel/kcov.c
> +++ b/kernel/kcov.c
> @@ -564,31 +564,12 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
>                              unsigned long arg)
>  {
>         struct task_struct *t;
> -       unsigned long size, unused;
> +       unsigned long flags, unused;
>         int mode, i;
>         struct kcov_remote_arg *remote_arg;
>         struct kcov_remote *remote;
> -       unsigned long flags;
>
>         switch (cmd) {
> -       case KCOV_INIT_TRACE:
> -               /*
> -                * Enable kcov in trace mode and setup buffer size.
> -                * Must happen before anything else.
> -                */
> -               if (kcov->mode != KCOV_MODE_DISABLED)
> -                       return -EBUSY;
> -               /*
> -                * Size must be at least 2 to hold current position and one PC.
> -                * Later we allocate size * sizeof(unsigned long) memory,
> -                * that must not overflow.
> -                */
> -               size = arg;
> -               if (size < 2 || size > INT_MAX / sizeof(unsigned long))
> -                       return -EINVAL;
> -               kcov->size = size;
> -               kcov->mode = KCOV_MODE_INIT;
> -               return 0;
>         case KCOV_ENABLE:
>                 /*
>                  * Enable coverage for the current task.
> @@ -692,9 +673,32 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
>         struct kcov_remote_arg *remote_arg = NULL;
>         unsigned int remote_num_handles;
>         unsigned long remote_arg_size;
> -       unsigned long flags;
> +       unsigned long size, flags;
>
> -       if (cmd == KCOV_REMOTE_ENABLE) {
> +       kcov = filep->private_data;
> +       switch (cmd) {
> +       case KCOV_INIT_TRACE:
> +               /*
> +                * Enable kcov in trace mode and setup buffer size.
> +                * Must happen before anything else.
> +                *
> +                * First check the size argument - it must be at least 2
> +                * to hold the current position and one PC. Later we allocate
> +                * size * sizeof(unsigned long) memory, that must not overflow.
> +                */
> +               size = arg;
> +               if (size < 2 || size > INT_MAX / sizeof(unsigned long))
> +                       return -EINVAL;
> +               spin_lock_irqsave(&kcov->lock, flags);
> +               if (kcov->mode != KCOV_MODE_DISABLED) {
> +                       spin_unlock_irqrestore(&kcov->lock, flags);
> +                       return -EBUSY;
> +               }
> +               kcov->size = size;
> +               kcov->mode = KCOV_MODE_INIT;
> +               spin_unlock_irqrestore(&kcov->lock, flags);
> +               return 0;
> +       case KCOV_REMOTE_ENABLE:
>                 if (get_user(remote_num_handles, (unsigned __user *)(arg +
>                                 offsetof(struct kcov_remote_arg, num_handles))))
>                         return -EFAULT;
> @@ -710,16 +714,18 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
>                         return -EINVAL;
>                 }
>                 arg = (unsigned long)remote_arg;
> +               fallthrough;
> +       default:
> +               /*
> +                * All other commands can be normally executed under a spin lock, so we
> +                * obtain and release it here in order to simplify kcov_ioctl_locked().
> +                */
> +               spin_lock_irqsave(&kcov->lock, flags);
> +               res = kcov_ioctl_locked(kcov, cmd, arg);
> +               spin_unlock_irqrestore(&kcov->lock, flags);
> +               kfree(remote_arg);
> +               return res;
>         }
> -
> -       kcov = filep->private_data;
> -       spin_lock_irqsave(&kcov->lock, flags);
> -       res = kcov_ioctl_locked(kcov, cmd, arg);
> -       spin_unlock_irqrestore(&kcov->lock, flags);
> -
> -       kfree(remote_arg);
> -
> -       return res;
>  }
>
>  static const struct file_operations kcov_fops = {
> --
> 2.34.1.703.g22d0c6ccf7-goog
>


* Re: [PATCH v3 2/2] kcov: properly handle subsequent mmap calls
  2022-01-17 15:36 ` [PATCH v3 2/2] kcov: properly handle subsequent mmap calls Aleksandr Nogikh
@ 2022-01-18  7:02   ` Dmitry Vyukov
  2022-01-24 22:33   ` Andrey Konovalov
  1 sibling, 0 replies; 9+ messages in thread
From: Dmitry Vyukov @ 2022-01-18  7:02 UTC (permalink / raw)
  To: Aleksandr Nogikh
  Cc: kasan-dev, linux-kernel, akpm, andreyknvl, elver, glider,
	tarasmadan, bigeasy

On Mon, 17 Jan 2022 at 16:37, Aleksandr Nogikh <nogikh@google.com> wrote:
>
> Allocate the kcov buffer during KCOV_INIT_TRACE in order to decouple
> mmapping of a kcov instance from the actual coverage collection process.
> Modify kcov_mmap() so that it can be reliably used any number of times
> once KCOV_INIT_TRACE has succeeded.
>
> These changes to the user-facing interface of the tool only weaken the
> preconditions, so all existing user space code should remain compatible
> with the new version.
>
> Signed-off-by: Aleksandr Nogikh <nogikh@google.com>

Reviewed-by: Dmitry Vyukov <dvyukov@google.com>

> ---
>  kernel/kcov.c | 34 +++++++++++++++-------------------
>  1 file changed, 15 insertions(+), 19 deletions(-)
>
> diff --git a/kernel/kcov.c b/kernel/kcov.c
> index e1be7301500b..475524bd900a 100644
> --- a/kernel/kcov.c
> +++ b/kernel/kcov.c
> @@ -459,37 +459,28 @@ void kcov_task_exit(struct task_struct *t)
>  static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
>  {
>         int res = 0;
> -       void *area;
>         struct kcov *kcov = vma->vm_file->private_data;
>         unsigned long size, off;
>         struct page *page;
>         unsigned long flags;
>
> -       area = vmalloc_user(vma->vm_end - vma->vm_start);
> -       if (!area)
> -               return -ENOMEM;
> -
>         spin_lock_irqsave(&kcov->lock, flags);
>         size = kcov->size * sizeof(unsigned long);
> -       if (kcov->mode != KCOV_MODE_INIT || vma->vm_pgoff != 0 ||
> +       if (kcov->area == NULL || vma->vm_pgoff != 0 ||
>             vma->vm_end - vma->vm_start != size) {
>                 res = -EINVAL;
>                 goto exit;
>         }
> -       if (!kcov->area) {
> -               kcov->area = area;
> -               vma->vm_flags |= VM_DONTEXPAND;
> -               spin_unlock_irqrestore(&kcov->lock, flags);
> -               for (off = 0; off < size; off += PAGE_SIZE) {
> -                       page = vmalloc_to_page(kcov->area + off);
> -                       if (vm_insert_page(vma, vma->vm_start + off, page))
> -                               WARN_ONCE(1, "vm_insert_page() failed");
> -               }
> -               return 0;
> +       spin_unlock_irqrestore(&kcov->lock, flags);
> +       vma->vm_flags |= VM_DONTEXPAND;
> +       for (off = 0; off < size; off += PAGE_SIZE) {
> +               page = vmalloc_to_page(kcov->area + off);
> +               if (vm_insert_page(vma, vma->vm_start + off, page))
> +                       WARN_ONCE(1, "vm_insert_page() failed");
>         }
> +       return 0;
>  exit:
>         spin_unlock_irqrestore(&kcov->lock, flags);
> -       vfree(area);
>         return res;
>  }
>
> @@ -674,6 +665,7 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
>         unsigned int remote_num_handles;
>         unsigned long remote_arg_size;
>         unsigned long size, flags;
> +       void *area;
>
>         kcov = filep->private_data;
>         switch (cmd) {
> @@ -683,17 +675,21 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
>                  * Must happen before anything else.
>                  *
>                  * First check the size argument - it must be at least 2
> -                * to hold the current position and one PC. Later we allocate
> -                * size * sizeof(unsigned long) memory, that must not overflow.
> +                * to hold the current position and one PC.
>                  */
>                 size = arg;
>                 if (size < 2 || size > INT_MAX / sizeof(unsigned long))
>                         return -EINVAL;
> +               area = vmalloc_user(size * sizeof(unsigned long));
> +               if (area == NULL)
> +                       return -ENOMEM;
>                 spin_lock_irqsave(&kcov->lock, flags);
>                 if (kcov->mode != KCOV_MODE_DISABLED) {
>                         spin_unlock_irqrestore(&kcov->lock, flags);
> +                       vfree(area);
>                         return -EBUSY;
>                 }
> +               kcov->area = area;
>                 kcov->size = size;
>                 kcov->mode = KCOV_MODE_INIT;
>                 spin_unlock_irqrestore(&kcov->lock, flags);
> --
> 2.34.1.703.g22d0c6ccf7-goog
>


* Re: [PATCH v3 1/2] kcov: split ioctl handling into locked and unlocked parts
  2022-01-17 15:36 ` [PATCH v3 1/2] kcov: split ioctl handling into locked and unlocked parts Aleksandr Nogikh
  2022-01-18  7:02   ` Dmitry Vyukov
@ 2022-01-24 22:33   ` Andrey Konovalov
  2022-01-26 18:11     ` Aleksandr Nogikh
  1 sibling, 1 reply; 9+ messages in thread
From: Andrey Konovalov @ 2022-01-24 22:33 UTC (permalink / raw)
  To: Aleksandr Nogikh
  Cc: kasan-dev, LKML, Andrew Morton, Dmitry Vyukov, Marco Elver,
	Alexander Potapenko, Taras Madan, Sebastian Andrzej Siewior

On Mon, Jan 17, 2022 at 4:36 PM Aleksandr Nogikh <nogikh@google.com> wrote:
>
> Currently all ioctls are de facto processed under a spinlock in order
> to serialise them. This, however, prohibits the use of vmalloc and other
> memory management functions in the implementations of those ioctls,
> unnecessarily complicating any further changes to the code.
>
> Let all ioctls first be processed inside the kcov_ioctl() function,
> which executes the ones that are not compatible with a spinlock and
> then passes control to kcov_ioctl_locked() for all the others.
> KCOV_REMOTE_ENABLE is processed both in kcov_ioctl() and
> kcov_ioctl_locked(), as its steps are easily separable.
>
> Although KCOV_INIT_TRACE is still compatible with a spinlock, move its
> handling to kcov_ioctl() as well, so that the changes in the next
> commit are easier to follow.
>
> Signed-off-by: Aleksandr Nogikh <nogikh@google.com>
> ---
>  kernel/kcov.c | 68 ++++++++++++++++++++++++++++-----------------------
>  1 file changed, 37 insertions(+), 31 deletions(-)
>
> diff --git a/kernel/kcov.c b/kernel/kcov.c
> index 36ca640c4f8e..e1be7301500b 100644
> --- a/kernel/kcov.c
> +++ b/kernel/kcov.c
> @@ -564,31 +564,12 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
>                              unsigned long arg)
>  {
>         struct task_struct *t;
> -       unsigned long size, unused;
> +       unsigned long flags, unused;
>         int mode, i;
>         struct kcov_remote_arg *remote_arg;
>         struct kcov_remote *remote;
> -       unsigned long flags;
>
>         switch (cmd) {
> -       case KCOV_INIT_TRACE:
> -               /*
> -                * Enable kcov in trace mode and setup buffer size.
> -                * Must happen before anything else.
> -                */
> -               if (kcov->mode != KCOV_MODE_DISABLED)
> -                       return -EBUSY;
> -               /*
> -                * Size must be at least 2 to hold current position and one PC.
> -                * Later we allocate size * sizeof(unsigned long) memory,
> -                * that must not overflow.
> -                */
> -               size = arg;
> -               if (size < 2 || size > INT_MAX / sizeof(unsigned long))
> -                       return -EINVAL;
> -               kcov->size = size;
> -               kcov->mode = KCOV_MODE_INIT;
> -               return 0;
>         case KCOV_ENABLE:
>                 /*
>                  * Enable coverage for the current task.
> @@ -692,9 +673,32 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
>         struct kcov_remote_arg *remote_arg = NULL;
>         unsigned int remote_num_handles;
>         unsigned long remote_arg_size;
> -       unsigned long flags;
> +       unsigned long size, flags;
>
> -       if (cmd == KCOV_REMOTE_ENABLE) {
> +       kcov = filep->private_data;
> +       switch (cmd) {
> +       case KCOV_INIT_TRACE:
> +               /*
> +                * Enable kcov in trace mode and setup buffer size.
> +                * Must happen before anything else.
> +                *
> +                * First check the size argument - it must be at least 2
> +                * to hold the current position and one PC. Later we allocate
> +                * size * sizeof(unsigned long) memory, that must not overflow.
> +                */
> +               size = arg;
> +               if (size < 2 || size > INT_MAX / sizeof(unsigned long))
> +                       return -EINVAL;
> +               spin_lock_irqsave(&kcov->lock, flags);

Arguably, we could keep the part of the KCOV_INIT_TRACE handler that
happens under the lock in kcov_ioctl_locked(), in a similar way to how
it's done for KCOV_REMOTE_ENABLE. This would get rid of the asymmetric
fallthrough usage.

But I'll leave this up to you; either way looks acceptable to me.

> +               if (kcov->mode != KCOV_MODE_DISABLED) {
> +                       spin_unlock_irqrestore(&kcov->lock, flags);
> +                       return -EBUSY;
> +               }
> +               kcov->size = size;
> +               kcov->mode = KCOV_MODE_INIT;
> +               spin_unlock_irqrestore(&kcov->lock, flags);
> +               return 0;
> +       case KCOV_REMOTE_ENABLE:
>                 if (get_user(remote_num_handles, (unsigned __user *)(arg +
>                                 offsetof(struct kcov_remote_arg, num_handles))))
>                         return -EFAULT;
> @@ -710,16 +714,18 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
>                         return -EINVAL;
>                 }
>                 arg = (unsigned long)remote_arg;
> +               fallthrough;
> +       default:
> +               /*
> +                * All other commands can be normally executed under a spin lock, so we
> +                * obtain and release it here in order to simplify kcov_ioctl_locked().
> +                */
> +               spin_lock_irqsave(&kcov->lock, flags);
> +               res = kcov_ioctl_locked(kcov, cmd, arg);
> +               spin_unlock_irqrestore(&kcov->lock, flags);
> +               kfree(remote_arg);
> +               return res;
>         }
> -
> -       kcov = filep->private_data;
> -       spin_lock_irqsave(&kcov->lock, flags);
> -       res = kcov_ioctl_locked(kcov, cmd, arg);
> -       spin_unlock_irqrestore(&kcov->lock, flags);
> -
> -       kfree(remote_arg);
> -
> -       return res;
>  }
>
>  static const struct file_operations kcov_fops = {
> --
> 2.34.1.703.g22d0c6ccf7-goog
>

Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>


* Re: [PATCH v3 2/2] kcov: properly handle subsequent mmap calls
  2022-01-17 15:36 ` [PATCH v3 2/2] kcov: properly handle subsequent mmap calls Aleksandr Nogikh
  2022-01-18  7:02   ` Dmitry Vyukov
@ 2022-01-24 22:33   ` Andrey Konovalov
  2022-01-26 17:53     ` Aleksandr Nogikh
  1 sibling, 1 reply; 9+ messages in thread
From: Andrey Konovalov @ 2022-01-24 22:33 UTC (permalink / raw)
  To: Aleksandr Nogikh
  Cc: kasan-dev, LKML, Andrew Morton, Dmitry Vyukov, Marco Elver,
	Alexander Potapenko, Taras Madan, Sebastian Andrzej Siewior

On Mon, Jan 17, 2022 at 4:37 PM Aleksandr Nogikh <nogikh@google.com> wrote:
>
> Allocate the kcov buffer during KCOV_INIT_TRACE in order to decouple
> mmapping of a kcov instance from the actual coverage collection process.
> Modify kcov_mmap() so that it can be reliably used any number of times
> once KCOV_INIT_TRACE has succeeded.
>
> These changes to the user-facing interface of the tool only weaken the
> preconditions, so all existing user space code should remain compatible
> with the new version.
>
> Signed-off-by: Aleksandr Nogikh <nogikh@google.com>
> ---
>  kernel/kcov.c | 34 +++++++++++++++-------------------
>  1 file changed, 15 insertions(+), 19 deletions(-)
>
> diff --git a/kernel/kcov.c b/kernel/kcov.c
> index e1be7301500b..475524bd900a 100644
> --- a/kernel/kcov.c
> +++ b/kernel/kcov.c
> @@ -459,37 +459,28 @@ void kcov_task_exit(struct task_struct *t)
>  static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
>  {
>         int res = 0;
> -       void *area;
>         struct kcov *kcov = vma->vm_file->private_data;
>         unsigned long size, off;
>         struct page *page;
>         unsigned long flags;
>
> -       area = vmalloc_user(vma->vm_end - vma->vm_start);
> -       if (!area)
> -               return -ENOMEM;
> -
>         spin_lock_irqsave(&kcov->lock, flags);
>         size = kcov->size * sizeof(unsigned long);
> -       if (kcov->mode != KCOV_MODE_INIT || vma->vm_pgoff != 0 ||
> +       if (kcov->area == NULL || vma->vm_pgoff != 0 ||
>             vma->vm_end - vma->vm_start != size) {
>                 res = -EINVAL;
>                 goto exit;
>         }
> -       if (!kcov->area) {
> -               kcov->area = area;
> -               vma->vm_flags |= VM_DONTEXPAND;
> -               spin_unlock_irqrestore(&kcov->lock, flags);
> -               for (off = 0; off < size; off += PAGE_SIZE) {
> -                       page = vmalloc_to_page(kcov->area + off);
> -                       if (vm_insert_page(vma, vma->vm_start + off, page))
> -                               WARN_ONCE(1, "vm_insert_page() failed");
> -               }
> -               return 0;
> +       spin_unlock_irqrestore(&kcov->lock, flags);
> +       vma->vm_flags |= VM_DONTEXPAND;
> +       for (off = 0; off < size; off += PAGE_SIZE) {
> +               page = vmalloc_to_page(kcov->area + off);

Hm, you're accessing kcov->area without the lock here, although the
old code does this as well. This is probably OK, as kcov->area can't
be changed or freed while this handler is executing.


> +               if (vm_insert_page(vma, vma->vm_start + off, page))
> +                       WARN_ONCE(1, "vm_insert_page() failed");
>         }
> +       return 0;
>  exit:
>         spin_unlock_irqrestore(&kcov->lock, flags);
> -       vfree(area);
>         return res;
>  }
>
> @@ -674,6 +665,7 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
>         unsigned int remote_num_handles;
>         unsigned long remote_arg_size;
>         unsigned long size, flags;
> +       void *area;
>
>         kcov = filep->private_data;
>         switch (cmd) {
> @@ -683,17 +675,21 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
>                  * Must happen before anything else.
>                  *
>                  * First check the size argument - it must be at least 2
> -                * to hold the current position and one PC. Later we allocate
> -                * size * sizeof(unsigned long) memory, that must not overflow.
> +                * to hold the current position and one PC.
>                  */
>                 size = arg;
>                 if (size < 2 || size > INT_MAX / sizeof(unsigned long))
>                         return -EINVAL;
> +               area = vmalloc_user(size * sizeof(unsigned long));
> +               if (area == NULL)
> +                       return -ENOMEM;
>                 spin_lock_irqsave(&kcov->lock, flags);
>                 if (kcov->mode != KCOV_MODE_DISABLED) {
>                         spin_unlock_irqrestore(&kcov->lock, flags);
> +                       vfree(area);
>                         return -EBUSY;
>                 }
> +               kcov->area = area;
>                 kcov->size = size;
>                 kcov->mode = KCOV_MODE_INIT;
>                 spin_unlock_irqrestore(&kcov->lock, flags);
> --
> 2.34.1.703.g22d0c6ccf7-goog
>

Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>


* Re: [PATCH v3 2/2] kcov: properly handle subsequent mmap calls
  2022-01-24 22:33   ` Andrey Konovalov
@ 2022-01-26 17:53     ` Aleksandr Nogikh
  0 siblings, 0 replies; 9+ messages in thread
From: Aleksandr Nogikh @ 2022-01-26 17:53 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: kasan-dev, LKML, Andrew Morton, Dmitry Vyukov, Marco Elver,
	Alexander Potapenko, Taras Madan, Sebastian Andrzej Siewior

Thanks for reviewing the code!

Yes, it is safe to access kcov->area without a lock.
1) kcov->area is set only once, since KCOV_INIT_TRACE can succeed only
once (kcov->mode is only set to KCOV_MODE_DISABLED during kcov_open()).
2) kcov->area won't be freed, because an ongoing mmap operation on the
kcov fd prevents the kernel from invoking release() on that same fd, and
that release() is necessary to finally decrement kcov->refcount.


On Mon, Jan 24, 2022 at 11:33 PM Andrey Konovalov <andreyknvl@gmail.com> wrote:
>
> On Mon, Jan 17, 2022 at 4:37 PM Aleksandr Nogikh <nogikh@google.com> wrote:
> >
> > Allocate the kcov buffer during KCOV_INIT_TRACE in order to decouple
> > mmapping of a kcov instance from the actual coverage collection process.
> > Modify kcov_mmap() so that it can be reliably used any number of times
> > once KCOV_INIT_TRACE has succeeded.
> >
> > These changes to the user-facing interface of the tool only weaken the
> > preconditions, so all existing user space code should remain compatible
> > with the new version.
> >
> > Signed-off-by: Aleksandr Nogikh <nogikh@google.com>
> > ---
> >  kernel/kcov.c | 34 +++++++++++++++-------------------
> >  1 file changed, 15 insertions(+), 19 deletions(-)
> >
> > diff --git a/kernel/kcov.c b/kernel/kcov.c
> > index e1be7301500b..475524bd900a 100644
> > --- a/kernel/kcov.c
> > +++ b/kernel/kcov.c
> > @@ -459,37 +459,28 @@ void kcov_task_exit(struct task_struct *t)
> >  static int kcov_mmap(struct file *filep, struct vm_area_struct *vma)
> >  {
> >         int res = 0;
> > -       void *area;
> >         struct kcov *kcov = vma->vm_file->private_data;
> >         unsigned long size, off;
> >         struct page *page;
> >         unsigned long flags;
> >
> > -       area = vmalloc_user(vma->vm_end - vma->vm_start);
> > -       if (!area)
> > -               return -ENOMEM;
> > -
> >         spin_lock_irqsave(&kcov->lock, flags);
> >         size = kcov->size * sizeof(unsigned long);
> > -       if (kcov->mode != KCOV_MODE_INIT || vma->vm_pgoff != 0 ||
> > +       if (kcov->area == NULL || vma->vm_pgoff != 0 ||
> >             vma->vm_end - vma->vm_start != size) {
> >                 res = -EINVAL;
> >                 goto exit;
> >         }
> > -       if (!kcov->area) {
> > -               kcov->area = area;
> > -               vma->vm_flags |= VM_DONTEXPAND;
> > -               spin_unlock_irqrestore(&kcov->lock, flags);
> > -               for (off = 0; off < size; off += PAGE_SIZE) {
> > -                       page = vmalloc_to_page(kcov->area + off);
> > -                       if (vm_insert_page(vma, vma->vm_start + off, page))
> > -                               WARN_ONCE(1, "vm_insert_page() failed");
> > -               }
> > -               return 0;
> > +       spin_unlock_irqrestore(&kcov->lock, flags);
> > +       vma->vm_flags |= VM_DONTEXPAND;
> > +       for (off = 0; off < size; off += PAGE_SIZE) {
> > +               page = vmalloc_to_page(kcov->area + off);
>
> Hm, you're accessing kcov->area without the lock here, although the
> old code does this as well. This is probably OK, as kcov->area can't
> be changed or freed while this handler is executing.
>
>
> > +               if (vm_insert_page(vma, vma->vm_start + off, page))
> > +                       WARN_ONCE(1, "vm_insert_page() failed");
> >         }
> > +       return 0;
> >  exit:
> >         spin_unlock_irqrestore(&kcov->lock, flags);
> > -       vfree(area);
> >         return res;
> >  }
> >
> > @@ -674,6 +665,7 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
> >         unsigned int remote_num_handles;
> >         unsigned long remote_arg_size;
> >         unsigned long size, flags;
> > +       void *area;
> >
> >         kcov = filep->private_data;
> >         switch (cmd) {
> > @@ -683,17 +675,21 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
> >                  * Must happen before anything else.
> >                  *
> >                  * First check the size argument - it must be at least 2
> > -                * to hold the current position and one PC. Later we allocate
> > -                * size * sizeof(unsigned long) memory, that must not overflow.
> > +                * to hold the current position and one PC.
> >                  */
> >                 size = arg;
> >                 if (size < 2 || size > INT_MAX / sizeof(unsigned long))
> >                         return -EINVAL;
> > +               area = vmalloc_user(size * sizeof(unsigned long));
> > +               if (area == NULL)
> > +                       return -ENOMEM;
> >                 spin_lock_irqsave(&kcov->lock, flags);
> >                 if (kcov->mode != KCOV_MODE_DISABLED) {
> >                         spin_unlock_irqrestore(&kcov->lock, flags);
> > +                       vfree(area);
> >                         return -EBUSY;
> >                 }
> > +               kcov->area = area;
> >                 kcov->size = size;
> >                 kcov->mode = KCOV_MODE_INIT;
> >                 spin_unlock_irqrestore(&kcov->lock, flags);
> > --
> > 2.34.1.703.g22d0c6ccf7-goog
> >
>
> Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>


* Re: [PATCH v3 1/2] kcov: split ioctl handling into locked and unlocked parts
  2022-01-24 22:33   ` Andrey Konovalov
@ 2022-01-26 18:11     ` Aleksandr Nogikh
  0 siblings, 0 replies; 9+ messages in thread
From: Aleksandr Nogikh @ 2022-01-26 18:11 UTC (permalink / raw)
  To: Andrey Konovalov
  Cc: kasan-dev, LKML, Andrew Morton, Dmitry Vyukov, Marco Elver,
	Alexander Potapenko, Taras Madan, Sebastian Andrzej Siewior

On Mon, Jan 24, 2022 at 11:33 PM Andrey Konovalov <andreyknvl@gmail.com> wrote:
>
> On Mon, Jan 17, 2022 at 4:36 PM Aleksandr Nogikh <nogikh@google.com> wrote:
> >
> > Currently all ioctls are de facto processed under a spinlock in order
> > to serialise them. This, however, prohibits the use of vmalloc and other
> > memory management functions in the implementations of those ioctls,
> > unnecessarily complicating any further changes to the code.
> >
> > Let all ioctls first be processed inside the kcov_ioctl() function,
> > which executes the ones that are not compatible with a spinlock and
> > then passes control to kcov_ioctl_locked() for all the others.
> > KCOV_REMOTE_ENABLE is processed both in kcov_ioctl() and
> > kcov_ioctl_locked(), as its steps are easily separable.
> >
> > Although KCOV_INIT_TRACE is still compatible with a spinlock, move its
> > handling to kcov_ioctl() as well, so that the changes in the next
> > commit are easier to follow.
> >
> > Signed-off-by: Aleksandr Nogikh <nogikh@google.com>
> > ---
> >  kernel/kcov.c | 68 ++++++++++++++++++++++++++++-----------------------
> >  1 file changed, 37 insertions(+), 31 deletions(-)
> >
> > diff --git a/kernel/kcov.c b/kernel/kcov.c
> > index 36ca640c4f8e..e1be7301500b 100644
> > --- a/kernel/kcov.c
> > +++ b/kernel/kcov.c
> > @@ -564,31 +564,12 @@ static int kcov_ioctl_locked(struct kcov *kcov, unsigned int cmd,
> >                              unsigned long arg)
> >  {
> >         struct task_struct *t;
> > -       unsigned long size, unused;
> > +       unsigned long flags, unused;
> >         int mode, i;
> >         struct kcov_remote_arg *remote_arg;
> >         struct kcov_remote *remote;
> > -       unsigned long flags;
> >
> >         switch (cmd) {
> > -       case KCOV_INIT_TRACE:
> > -               /*
> > -                * Enable kcov in trace mode and setup buffer size.
> > -                * Must happen before anything else.
> > -                */
> > -               if (kcov->mode != KCOV_MODE_DISABLED)
> > -                       return -EBUSY;
> > -               /*
> > -                * Size must be at least 2 to hold current position and one PC.
> > -                * Later we allocate size * sizeof(unsigned long) memory,
> > -                * that must not overflow.
> > -                */
> > -               size = arg;
> > -               if (size < 2 || size > INT_MAX / sizeof(unsigned long))
> > -                       return -EINVAL;
> > -               kcov->size = size;
> > -               kcov->mode = KCOV_MODE_INIT;
> > -               return 0;
> >         case KCOV_ENABLE:
> >                 /*
> >                  * Enable coverage for the current task.
> > @@ -692,9 +673,32 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
> >         struct kcov_remote_arg *remote_arg = NULL;
> >         unsigned int remote_num_handles;
> >         unsigned long remote_arg_size;
> > -       unsigned long flags;
> > +       unsigned long size, flags;
> >
> > -       if (cmd == KCOV_REMOTE_ENABLE) {
> > +       kcov = filep->private_data;
> > +       switch (cmd) {
> > +       case KCOV_INIT_TRACE:
> > +               /*
> > +                * Enable kcov in trace mode and setup buffer size.
> > +                * Must happen before anything else.
> > +                *
> > +                * First check the size argument - it must be at least 2
> > +                * to hold the current position and one PC. Later we allocate
> > +                * size * sizeof(unsigned long) memory, that must not overflow.
> > +                */
> > +               size = arg;
> > +               if (size < 2 || size > INT_MAX / sizeof(unsigned long))
> > +                       return -EINVAL;
> > +               spin_lock_irqsave(&kcov->lock, flags);
>
> Arguably, we could keep the part of the KCOV_INIT_TRACE handler that
> happens under the lock in kcov_ioctl_locked(), in a similar way to how
> it's done for KCOV_REMOTE_ENABLE. This would get rid of the asymmetric
> fallthrough usage.
>
> But I'll leave this up to you; either way looks acceptable to me.
>

That would indeed look nice and would work with this particular
commit, but it won't work with the changes that are introduced in the
next one. So it would go against the objective of splitting the change
into a patch series in the first place: making the commit with the
functional changes easier to review.

With the kcov->area allocation in KCOV_INIT_TRACE, we unfortunately
cannot draw a single line between the unlocked and locked parts.

> > +               if (kcov->mode != KCOV_MODE_DISABLED) {
> > +                       spin_unlock_irqrestore(&kcov->lock, flags);
> > +                       return -EBUSY;
> > +               }
> > +               kcov->size = size;
> > +               kcov->mode = KCOV_MODE_INIT;
> > +               spin_unlock_irqrestore(&kcov->lock, flags);
> > +               return 0;
> > +       case KCOV_REMOTE_ENABLE:
> >                 if (get_user(remote_num_handles, (unsigned __user *)(arg +
> >                                 offsetof(struct kcov_remote_arg, num_handles))))
> >                         return -EFAULT;
> > @@ -710,16 +714,18 @@ static long kcov_ioctl(struct file *filep, unsigned int cmd, unsigned long arg)
> >                         return -EINVAL;
> >                 }
> >                 arg = (unsigned long)remote_arg;
> > +               fallthrough;
> > +       default:
> > +               /*
> > +                * All other commands can be normally executed under a spin lock, so we
> > +                * obtain and release it here in order to simplify kcov_ioctl_locked().
> > +                */
> > +               spin_lock_irqsave(&kcov->lock, flags);
> > +               res = kcov_ioctl_locked(kcov, cmd, arg);
> > +               spin_unlock_irqrestore(&kcov->lock, flags);
> > +               kfree(remote_arg);
> > +               return res;
> >         }
> > -
> > -       kcov = filep->private_data;
> > -       spin_lock_irqsave(&kcov->lock, flags);
> > -       res = kcov_ioctl_locked(kcov, cmd, arg);
> > -       spin_unlock_irqrestore(&kcov->lock, flags);
> > -
> > -       kfree(remote_arg);
> > -
> > -       return res;
> >  }
> >
> >  static const struct file_operations kcov_fops = {
> > --
> > 2.34.1.703.g22d0c6ccf7-goog
> >
>
> Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
>


