* data-race in audit_log_start / audit_receive
From: Abhishek Shah @ 2022-08-18 22:23 UTC
  To: eparis, linux-audit, paul; +Cc: linux-kernel, Gabriel Ryan


Hi all,

We found a data race involving the audit_cmd_mutex.owner variable. We
think this bug is concerning because audit_ctl_owner_current is used at a
location that controls the scheduling of tasks shown here
<https://elixir.bootlin.com/linux/v5.18-rc5/source/kernel/audit.c#L1868>.
Please let us know what you think.

Thanks!


-----------------Report----------------------

write to 0xffffffff881d0710 of 8 bytes by task 6541 on cpu 0:
 audit_ctl_lock kernel/audit.c:237 [inline]
 audit_receive+0x77/0x2940 kernel/audit.c:1557
 netlink_unicast_kernel net/netlink/af_netlink.c:1319 [inline]
 netlink_unicast+0x652/0x730 net/netlink/af_netlink.c:1345
 netlink_sendmsg+0x643/0x740 net/netlink/af_netlink.c:1921
 sock_sendmsg_nosec net/socket.c:705 [inline]
 sock_sendmsg net/socket.c:725 [inline]
 ____sys_sendmsg+0x348/0x4c0 net/socket.c:2413
 ___sys_sendmsg net/socket.c:2467 [inline]
 __sys_sendmsg+0x159/0x1f0 net/socket.c:2496
 __do_sys_sendmsg net/socket.c:2505 [inline]
 __se_sys_sendmsg net/socket.c:2503 [inline]
 __x64_sys_sendmsg+0x47/0x50 net/socket.c:2503
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x3d/0x90 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae

read to 0xffffffff881d0710 of 8 bytes by task 6542 on cpu 1:
 audit_ctl_owner_current kernel/audit.c:258 [inline]
 audit_log_start+0x127/0x690 kernel/audit.c:1868
 common_lsm_audit+0x61/0xee0 security/lsm_audit.c:457
 slow_avc_audit+0xcb/0x100 security/selinux/avc.c:796
 avc_audit security/selinux/include/avc.h:135 [inline]
 avc_has_perm+0x114/0x140 security/selinux/avc.c:1193
 selinux_socket_create+0xf1/0x170 security/selinux/hooks.c:4570
 security_socket_create+0x58/0xb0 security/security.c:2185
 __sock_create+0xe2/0x530 net/socket.c:1423
 sock_create net/socket.c:1519 [inline]
 __sys_socket+0xb8/0x210 net/socket.c:1561
 __do_sys_socket net/socket.c:1570 [inline]
 __se_sys_socket net/socket.c:1568 [inline]
 __x64_sys_socket+0x42/0x50 net/socket.c:1568
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x3d/0x90 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x44/0xae

Reported by Kernel Concurrency Sanitizer on:
CPU: 1 PID: 6542 Comm: syz-executor2-n Not tainted 5.18.0-rc5+ #107
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014

Reproducing Inputs

Input CPU 0:
r0 = socket$nl_audit(0x10, 0x3, 0x9)
sendmsg$AUDIT_USER_AVC(r0, &(0x7f0000000200)={0x0, 0x0, &(0x7f00000001c0)={&(0x7f0000000240)=ANY=[], 0x74}}, 0x0)

Input CPU 1:
r0 = socket$inet_dccp(0x2, 0x6, 0x0)
connect$inet(r0, &(0x7f0000000000)={0x2, 0x0, @private=0xa010101}, 0x10)
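
For reference, the two racing accesses reduce to a plain write and a
plain read of the same 8-byte owner field. Below is a simplified
sketch, paraphrased from kernel/audit.c in v5.18-rc5 (types and
surrounding logic are elided, so treat it as illustrative rather than
exact):

  #include <linux/mutex.h>
  #include <linux/sched.h>

  static struct audit_ctl_mutex {
          struct mutex lock;
          void *owner;            /* task driving a control operation */
  } audit_cmd_mutex;

  static void audit_ctl_lock(void)
  {
          mutex_lock(&audit_cmd_mutex.lock);
          audit_cmd_mutex.owner = current;  /* plain write, audit.c:237 */
  }

  static bool audit_ctl_owner_current(void)
  {
          /* plain read, audit.c:258; reached from audit_log_start()
           * without audit_cmd_mutex.lock held, hence the KCSAN report */
          return audit_cmd_mutex.owner == current;
  }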


* Re: data-race in audit_log_start / audit_receive
From: Paul Moore @ 2022-08-19  1:59 UTC
  To: abhishek.shah; +Cc: eparis, linux-audit, linux-kernel, Gabriel Ryan

On Thu, Aug 18, 2022 at 6:23 PM Abhishek Shah
<abhishek.shah@columbia.edu> wrote:
> Hi all,
>
> We found a data race involving the audit_cmd_mutex.owner variable. We think this bug is concerning because audit_ctl_owner_current is used at a location that controls the scheduling of tasks shown here. Please let us know what you think.
>
> Thanks!
>
> -----------------Report----------------------
>
> write to 0xffffffff881d0710 of 8 bytes by task 6541 on cpu 0:
>  audit_ctl_lock kernel/audit.c:237 [inline]

...

> read to 0xffffffff881d0710 of 8 bytes by task 6542 on cpu 1:
>  audit_ctl_owner_current kernel/audit.c:258 [inline]

Yes, technically there is a race condition if/when an auditd instance
is registering itself at the exact same time as another task is
attempting to log an audit record via audit_log_start().  The risk
being that a *very* limited number of audit records could be
mis-handled with respect to their queue priority and that is it; no
records would be lost or misplaced.  Correcting this would likely
involve a more complex locking scheme[1] or a rather severe
performance penalty due to an additional lock in the audit_log_start()
code path.  There may be some value in modifying
audit_ctl_owner_current() to use READ_ONCE(), but it isn't clear to me
that this would significantly improve things or have no impact on
performance.
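
For illustration, the READ_ONCE() option would amount to marking the
two accesses; this is only a sketch, not a tested or proposed patch,
and the WRITE_ONCE() on the store side is assumed for symmetry:

  static void audit_ctl_lock(void)
  {
          mutex_lock(&audit_cmd_mutex.lock);
          WRITE_ONCE(audit_cmd_mutex.owner, current);   /* marked store */
  }

  static bool audit_ctl_owner_current(void)
  {
          return READ_ONCE(audit_cmd_mutex.owner) == current; /* marked load */
  }

This would silence KCSAN and rule out load/store tearing, but it adds
no ordering between the owner update and the rest of the audit state.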

Have you noticed any serious problems on your system due to this?  If
you have a reproducer which shows actual harm on the system could you
please share that?

[1] The obvious choice would be to move to a RCU based scheme, but
even that doesn't totally solve the problem as there would still be a
window where some tasks would have an "old" value.  It might actually
end up extending the race window on large multi-core systems due to
the time needed for all of the critical sections to complete.
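
As a rough illustration of [1], an RCU-based variant might look like
the hypothetical sketch below; note that it keeps the stale-value
window described above, since readers may still see the old owner
until a grace period elapses:

  #include <linux/rcupdate.h>

  static struct task_struct __rcu *audit_ctl_owner;

  static void audit_ctl_lock(void)
  {
          mutex_lock(&audit_cmd_mutex.lock);
          rcu_assign_pointer(audit_ctl_owner, current);
  }

  static bool audit_ctl_owner_current(void)
  {
          bool ret;

          rcu_read_lock();
          ret = (rcu_dereference(audit_ctl_owner) == current);
          rcu_read_unlock();
          return ret;
  }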

-- 
paul-moore.com


* Re: data-race in audit_log_start / audit_receive
From: Paul Moore @ 2022-08-19 12:06 UTC
  To: abhishek.shah; +Cc: eparis, linux-audit, linux-kernel, Gabriel Ryan

On Thu, Aug 18, 2022 at 9:59 PM Paul Moore <paul@paul-moore.com> wrote:
> On Thu, Aug 18, 2022 at 6:23 PM Abhishek Shah
> <abhishek.shah@columbia.edu> wrote:
> > Hi all,
> >
> > We found a data race involving the audit_cmd_mutex.owner variable. We think this bug is concerning because audit_ctl_owner_current is used at a location that controls the scheduling of tasks shown here. Please let us know what you think.
> >
> > Thanks!
> >
> > -----------------Report----------------------
> >
> > write to 0xffffffff881d0710 of 8 bytes by task 6541 on cpu 0:
> >  audit_ctl_lock kernel/audit.c:237 [inline]
>
> ...
>
> > read to 0xffffffff881d0710 of 8 bytes by task 6542 on cpu 1:
> >  audit_ctl_owner_current kernel/audit.c:258 [inline]
>
> Yes, technically there is a race condition if/when an auditd instance
> is registering itself at the exact same time as another task is
> attempting to log an audit record via audit_log_start().

I realized after I sent this and turned off my computer last night
that I typed the wrong thing - the race isn't between auditd and
audit_log_start(), it's between the code which changes the audit
subsystem state (see audit_receive() and the audit watch/tree code)
and audit_log_start().

> The risk
> being that a *very* limited number of audit records could be
> mis-handled with respect to their queue priority and that is it; no
> records would be lost or misplaced.  Correcting this would likely
> involve a more complex locking scheme[1] or a rather severe
> performance penalty due to an additional lock in the audit_log_start()
> code path.  There may be some value in modifying
> audit_ctl_owner_current() to use READ_ONCE(), but it isn't clear to me
> that this would significantly improve things or have no impact on
> performance.

Another thing I thought of last night - I don't believe READ_ONCE()
adds a memory barrier, which would probably be needed; although my
original statement still stands, I'm not sure the marginal improvement
to the audit queue would justify the performance hit.
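
For illustration, if ordering were required the usual pattern would be
an acquire/release pair rather than bare READ_ONCE()/WRITE_ONCE(); the
sketch below is hypothetical, not a proposed patch:

  static void audit_ctl_lock(void)
  {
          mutex_lock(&audit_cmd_mutex.lock);
          /* release: prior audit-state updates ordered before this store */
          smp_store_release(&audit_cmd_mutex.owner, current);
  }

  static bool audit_ctl_owner_current(void)
  {
          /* acquire: pairs with the smp_store_release() above */
          return smp_load_acquire(&audit_cmd_mutex.owner) == current;
  }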

> Have you noticed any serious problems on your system due to this?  If
> you have a reproducer which shows actual harm on the system could you
> please share that?
>
> [1] The obvious choice would be to move to a RCU based scheme, but
> even that doesn't totally solve the problem as there would still be a
> window where some tasks would have an "old" value.  It might actually
> end up extending the race window on large multi-core systems due to
> the time needed for all of the critical sections to complete.

-- 
paul-moore.com


* Re: data-race in audit_log_start / audit_receive
From: Gabriel Ryan @ 2022-08-22 20:09 UTC
  To: Paul Moore; +Cc: abhishek.shah, eparis, linux-audit, linux-kernel

Hi Paul,

Thanks for taking the time to review our report! It sounds like there
aren't severe negative impacts and patching to eliminate the race
would impose unnecessary performance penalties so we'll mark this as
benign for future reference.

Thanks,

Gabe

On Fri, Aug 19, 2022 at 8:06 AM Paul Moore <paul@paul-moore.com> wrote:
>
> On Thu, Aug 18, 2022 at 9:59 PM Paul Moore <paul@paul-moore.com> wrote:
> > On Thu, Aug 18, 2022 at 6:23 PM Abhishek Shah
> > <abhishek.shah@columbia.edu> wrote:
> > > Hi all,
> > >
> > > We found a data race involving the audit_cmd_mutex.owner variable. We think this bug is concerning because audit_ctl_owner_current is used at a location that controls the scheduling of tasks shown here. Please let us know what you think.
> > >
> > > Thanks!
> > >
> > > -----------------Report----------------------
> > >
> > > write to 0xffffffff881d0710 of 8 bytes by task 6541 on cpu 0:
> > >  audit_ctl_lock kernel/audit.c:237 [inline]
> >
> > ...
> >
> > > read to 0xffffffff881d0710 of 8 bytes by task 6542 on cpu 1:
> > >  audit_ctl_owner_current kernel/audit.c:258 [inline]
> >
> > Yes, technically there is a race condition if/when an auditd instance
> > is registering itself at the exact same time as another task is
> > attempting to log an audit record via audit_log_start().
>
> I realized after I sent this and turned off my computer last night
> that I typed the wrong thing - the race isn't between auditd and
> audit_log_start(), it's between the code which changes the audit
> subsystem state (see audit_receive() and the audit watch/tree code)
> and audit_log_start().
>
> > The risk
> > being that a *very* limited number of audit records could be
> > mis-handled with respect to their queue priority and that is it; no
> > records would be lost or misplaced.  Correcting this would likely
> > involve a more complex locking scheme[1] or a rather severe
> > performance penalty due to an additional lock in the audit_log_start()
> > code path.  There may be some value in modifying
> > audit_ctl_owner_current() to use READ_ONCE(), but it isn't clear to me
> > that this would significantly improve things or have no impact on
> > performance.
>
> Another thing I thought of last night - I don't believe READ_ONCE()
> adds a memory barrier, which would probably be needed; although my
> original statement still stands, I'm not sure the marginal improvement
> to the audit queue would justify the performance hit.
>
> > Have you noticed any serious problems on your system due to this?  If
> > you have a reproducer which shows actual harm on the system could you
> > please share that?
> >
> > [1] The obvious choice would be to move to a RCU based scheme, but
> > even that doesn't totally solve the problem as there would still be a
> > window where some tasks would have an "old" value.  It might actually
> > end up extending the race window on large multi-core systems due to
> > the time needed for all of the critical sections to complete.
>
> --
> paul-moore.com

-- 
Gabriel Ryan
PhD Candidate at Columbia University


* Re: data-race in audit_log_start / audit_receive
From: Paul Moore @ 2022-08-22 23:42 UTC
  To: Gabriel Ryan; +Cc: abhishek.shah, eparis, linux-audit, linux-kernel

On Mon, Aug 22, 2022 at 4:09 PM Gabriel Ryan <gabe@cs.columbia.edu> wrote:
>
> Hi Paul,
>
> Thanks for taking the time to review our report! It sounds like there
> aren't severe negative impacts and patching to eliminate the race
> would impose unnecessary performance penalties so we'll mark this as
> benign for future reference.

I just want to reiterate that if you are seeing a serious problem
please let us know and we'll work with you to find a fix.

-- 
paul-moore.com


* Re: data-race in audit_log_start / audit_receive
From: Gabriel Ryan @ 2022-08-23 13:09 UTC
  To: Paul Moore; +Cc: abhishek.shah, linux-audit, linux-kernel, eparis


Hi Paul,

Will do. For this race we don't have any indication that it's a serious
problem beyond the race itself and manual inspection of the relevant
code locations.

Best,

Gabe

On Mon, Aug 22, 2022 at 7:42 PM Paul Moore <paul@paul-moore.com> wrote:

> On Mon, Aug 22, 2022 at 4:09 PM Gabriel Ryan <gabe@cs.columbia.edu> wrote:
> >
> > Hi Paul,
> >
> > Thanks for taking the time to review our report! It sounds like there
> > aren't severe negative impacts and patching to eliminate the race
> > would impose unnecessary performance penalties so we'll mark this as
> > benign for future reference.
>
> I just want to reiterate that if you are seeing a serious problem
> please let us know and we'll work with you to find a fix.
>
> --
> paul-moore.com
>

