* [PATCH] selinux: fix race condition when computing ocontext SIDs
@ 2021-07-28 14:03 Ondrej Mosnacek
  2021-08-05 20:48 ` Paul Moore
  0 siblings, 1 reply; 9+ messages in thread
From: Ondrej Mosnacek @ 2021-07-28 14:03 UTC (permalink / raw)
  To: Paul Moore; +Cc: selinux, Sujithra Periasamy, Xinjie Zheng

The current code contains a lot of racy patterns when converting an
ocontext's context structure to an SID. The conversion is done in a
"lazy" fashion: the SID is looked up in the SID table only when it's
first needed and then cached in the "sid" field of the ocontext
structure. However, this caching is done without any locking or memory
barriers and is thus unsafe.

Between commits 24ed7fdae669 ("selinux: use separate table for initial
SID lookup") and 66f8e2f03c02 ("selinux: sidtab reverse lookup hash
table"), this race condition led to an actual observable bug, because a
pointer to the shared sid field was passed directly to
sidtab_context_to_sid(), which also used this location to store an
intermediate value that could be read by other threads and interpreted
as an SID. In practice this caused e.g. new mounts to get a wrong
(seemingly random) filesystem context, leading to strange denials. This
bug has been spotted in the wild at least twice, see [1] and [2].

Fix the race condition by making all the racy functions use a common
helper that ensures the ocontext::sid accesses are made safely using the
appropriate SMP constructs.

Note that security_netif_sid() was populating the sid field of both
contexts stored in the ocontext, but only the first one was actually
used. The SELinux wiki's documentation on the "netifcon" policy
statement [3] suggests that using only the first context is intentional.
I kept only the handling of the first context here, as there is really
no point in doing the SID lookup for the unused one.

I wasn't able to reproduce the bug mentioned above on any kernel that
includes commit 66f8e2f03c02, even though it has been reported that the
issue occurs with that commit, too, just less frequently. Thus, I wasn't
able to verify that this patch fixes the issue, but it makes sense to
avoid the race condition regardless.

[1] https://github.com/containers/container-selinux/issues/89
[2] https://lists.fedoraproject.org/archives/list/selinux@lists.fedoraproject.org/thread/6DMTAMHIOAOEMUAVTULJD45JZU7IBAFM/
[3] https://selinuxproject.org/page/NetworkStatements#netifcon

Reported-by: Sujithra Periasamy <sujithra@google.com>
Cc: Xinjie Zheng <xinjie@google.com>
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
---
 security/selinux/ss/services.c | 162 ++++++++++++++++-----------------
 1 file changed, 77 insertions(+), 85 deletions(-)

diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
index 0a5ce001609b..c8db8a3432e4 100644
--- a/security/selinux/ss/services.c
+++ b/security/selinux/ss/services.c
@@ -2369,6 +2369,43 @@ err_policy:
 	return rc;
 }
 
+/**
+ * ocontext_to_sid - Helper to safely get sid for an ocontext
+ * @sidtab: SID table
+ * @c: ocontext structure
+ * @index: index of the context entry (0 or 1)
+ * @out_sid: pointer to the resulting SID value
+ *
+ * For all ocontexts except OCON_ISID the SID fields are populated
+ * on-demand when needed. Since updating the SID value is an SMP-sensitive
+ * operation, this helper must be used to do that safely.
+ *
+ * WARNING: This function may return -ESTALE, indicating that the caller
+ * must retry the operation after re-acquiring the policy pointer!
+ */
+static int ocontext_to_sid(struct sidtab *sidtab, struct ocontext *c,
+			   size_t index, u32 *out_sid)
+{
+	int rc;
+	u32 sid;
+
+	/* Ensure the associated sidtab entry is visible to this thread. */
+	sid = smp_load_acquire(&c->sid[index]);
+	if (!sid) {
+		rc = sidtab_context_to_sid(sidtab, &c->context[index], &sid);
+		if (rc)
+			return rc;
+
+		/*
+		 * Ensure the new sidtab entry is visible to other threads
+		 * when they see the SID.
+		 */
+		smp_store_release(&c->sid[index], sid);
+	}
+	*out_sid = sid;
+	return 0;
+}
+
 /**
  * security_port_sid - Obtain the SID for a port.
  * @protocol: protocol number
@@ -2406,17 +2443,13 @@ retry:
 	}
 
 	if (c) {
-		if (!c->sid[0]) {
-			rc = sidtab_context_to_sid(sidtab, &c->context[0],
-						   &c->sid[0]);
-			if (rc == -ESTALE) {
-				rcu_read_unlock();
-				goto retry;
-			}
-			if (rc)
-				goto out;
+		rc = ocontext_to_sid(sidtab, c, 0, out_sid);
+		if (rc == -ESTALE) {
+			rcu_read_unlock();
+			goto retry;
 		}
-		*out_sid = c->sid[0];
+		if (rc)
+			goto out;
 	} else {
 		*out_sid = SECINITSID_PORT;
 	}
@@ -2464,18 +2497,13 @@ retry:
 	}
 
 	if (c) {
-		if (!c->sid[0]) {
-			rc = sidtab_context_to_sid(sidtab,
-						   &c->context[0],
-						   &c->sid[0]);
-			if (rc == -ESTALE) {
-				rcu_read_unlock();
-				goto retry;
-			}
-			if (rc)
-				goto out;
+		rc = ocontext_to_sid(sidtab, c, 0, out_sid);
+		if (rc == -ESTALE) {
+			rcu_read_unlock();
+			goto retry;
 		}
-		*out_sid = c->sid[0];
+		if (rc)
+			goto out;
 	} else
 		*out_sid = SECINITSID_UNLABELED;
 
@@ -2523,17 +2551,13 @@ retry:
 	}
 
 	if (c) {
-		if (!c->sid[0]) {
-			rc = sidtab_context_to_sid(sidtab, &c->context[0],
-						   &c->sid[0]);
-			if (rc == -ESTALE) {
-				rcu_read_unlock();
-				goto retry;
-			}
-			if (rc)
-				goto out;
+		rc = ocontext_to_sid(sidtab, c, 0, out_sid);
+		if (rc == -ESTALE) {
+			rcu_read_unlock();
+			goto retry;
 		}
-		*out_sid = c->sid[0];
+		if (rc)
+			goto out;
 	} else
 		*out_sid = SECINITSID_UNLABELED;
 
@@ -2576,25 +2600,13 @@ retry:
 	}
 
 	if (c) {
-		if (!c->sid[0] || !c->sid[1]) {
-			rc = sidtab_context_to_sid(sidtab, &c->context[0],
-						   &c->sid[0]);
-			if (rc == -ESTALE) {
-				rcu_read_unlock();
-				goto retry;
-			}
-			if (rc)
-				goto out;
-			rc = sidtab_context_to_sid(sidtab, &c->context[1],
-						   &c->sid[1]);
-			if (rc == -ESTALE) {
-				rcu_read_unlock();
-				goto retry;
-			}
-			if (rc)
-				goto out;
+		rc = ocontext_to_sid(sidtab, c, 0, if_sid);
+		if (rc == -ESTALE) {
+			rcu_read_unlock();
+			goto retry;
 		}
-		*if_sid = c->sid[0];
+		if (rc)
+			goto out;
 	} else
 		*if_sid = SECINITSID_NETIF;
 
@@ -2685,18 +2697,13 @@ retry:
 	}
 
 	if (c) {
-		if (!c->sid[0]) {
-			rc = sidtab_context_to_sid(sidtab,
-						   &c->context[0],
-						   &c->sid[0]);
-			if (rc == -ESTALE) {
-				rcu_read_unlock();
-				goto retry;
-			}
-			if (rc)
-				goto out;
+		rc = ocontext_to_sid(sidtab, c, 0, out_sid);
+		if (rc == -ESTALE) {
+			rcu_read_unlock();
+			goto retry;
 		}
-		*out_sid = c->sid[0];
+		if (rc)
+			goto out;
 	} else {
 		*out_sid = SECINITSID_NODE;
 	}
@@ -2860,7 +2867,7 @@ static inline int __security_genfs_sid(struct selinux_policy *policy,
 	u16 sclass;
 	struct genfs *genfs;
 	struct ocontext *c;
-	int rc, cmp = 0;
+	int cmp = 0;
 
 	while (path[0] == '/' && path[1] == '/')
 		path++;
@@ -2874,9 +2881,8 @@ static inline int __security_genfs_sid(struct selinux_policy *policy,
 			break;
 	}
 
-	rc = -ENOENT;
 	if (!genfs || cmp)
-		goto out;
+		return -ENOENT;
 
 	for (c = genfs->head; c; c = c->next) {
 		len = strlen(c->u.name);
@@ -2885,20 +2891,10 @@ static inline int __security_genfs_sid(struct selinux_policy *policy,
 			break;
 	}
 
-	rc = -ENOENT;
 	if (!c)
-		goto out;
-
-	if (!c->sid[0]) {
-		rc = sidtab_context_to_sid(sidtab, &c->context[0], &c->sid[0]);
-		if (rc)
-			goto out;
-	}
+		return -ENOENT;
 
-	*sid = c->sid[0];
-	rc = 0;
-out:
-	return rc;
+	return ocontext_to_sid(sidtab, c, 0, sid);
 }
 
 /**
@@ -2981,17 +2977,13 @@ retry:
 
 	if (c) {
 		sbsec->behavior = c->v.behavior;
-		if (!c->sid[0]) {
-			rc = sidtab_context_to_sid(sidtab, &c->context[0],
-						   &c->sid[0]);
-			if (rc == -ESTALE) {
-				rcu_read_unlock();
-				goto retry;
-			}
-			if (rc)
-				goto out;
+		rc = ocontext_to_sid(sidtab, c, 0, &sbsec->sid);
+		if (rc == -ESTALE) {
+			rcu_read_unlock();
+			goto retry;
 		}
-		sbsec->sid = c->sid[0];
+		if (rc)
+			goto out;
 	} else {
 		rc = __security_genfs_sid(policy, fstype, "/",
 					SECCLASS_DIR, &sbsec->sid);
-- 
2.31.1



* Re: [PATCH] selinux: fix race condition when computing ocontext SIDs
  2021-07-28 14:03 [PATCH] selinux: fix race condition when computing ocontext SIDs Ondrej Mosnacek
@ 2021-08-05 20:48 ` Paul Moore
  2021-08-16 12:50   ` Ondrej Mosnacek
  0 siblings, 1 reply; 9+ messages in thread
From: Paul Moore @ 2021-08-05 20:48 UTC (permalink / raw)
  To: Ondrej Mosnacek; +Cc: selinux, Sujithra Periasamy, Xinjie Zheng

On Wed, Jul 28, 2021 at 10:03 AM Ondrej Mosnacek <omosnace@redhat.com> wrote:
>
> The current code contains a lot of racy patterns when converting an
> ocontext's context structure to an SID. The conversion is done in a
> "lazy" fashion: the SID is looked up in the SID table only when it's
> first needed and then cached in the "sid" field of the ocontext
> structure. However, this caching is done without any locking or memory
> barriers and is thus unsafe.
>
> Between commits 24ed7fdae669 ("selinux: use separate table for initial
> SID lookup") and 66f8e2f03c02 ("selinux: sidtab reverse lookup hash
> table"), this race condition led to an actual observable bug, because a
> pointer to the shared sid field was passed directly to
> sidtab_context_to_sid(), which also used this location to store an
> intermediate value that could be read by other threads and interpreted
> as an SID. In practice this caused e.g. new mounts to get a wrong
> (seemingly random) filesystem context, leading to strange denials. This
> bug has been spotted in the wild at least twice, see [1] and [2].
>
> Fix the race condition by making all the racy functions use a common
> helper that ensures the ocontext::sid accesses are made safely using the
> appropriate SMP constructs.
>
> Note that security_netif_sid() was populating the sid field of both
> contexts stored in the ocontext, but only the first one was actually
> used. The SELinux wiki's documentation on the "netifcon" policy
> statement [3] suggests that using only the first context is intentional.
> I kept only the handling of the first context here, as there is really
> no point in doing the SID lookup for the unused one.

The security_netif_sid() change looks fine to me.  I *think* that the
second netif label goes back to some of the original SELinux network
labeling; I believe the original intent may even predate my
involvement in the project, but that is far enough back that I can't
say for certain without doing some heavy mailing list searches :)

> I wasn't able to reproduce the bug mentioned above on any kernel that
> includes commit 66f8e2f03c02, even though it has been reported that the
> issue occurs with that commit, too, just less frequently. Thus, I wasn't
> able to verify that this patch fixes the issue, but it makes sense to
> avoid the race condition regardless.
>
> [1] https://github.com/containers/container-selinux/issues/89
> [2] https://lists.fedoraproject.org/archives/list/selinux@lists.fedoraproject.org/thread/6DMTAMHIOAOEMUAVTULJD45JZU7IBAFM/
> [3] https://selinuxproject.org/page/NetworkStatements#netifcon
>
> Reported-by: Sujithra Periasamy <sujithra@google.com>
> Cc: Xinjie Zheng <xinjie@google.com>
> Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2")
> Signed-off-by: Ondrej Mosnacek <omosnace@redhat.com>
> ---
>  security/selinux/ss/services.c | 162 ++++++++++++++++-----------------
>  1 file changed, 77 insertions(+), 85 deletions(-)
>
> diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
> index 0a5ce001609b..c8db8a3432e4 100644
> --- a/security/selinux/ss/services.c
> +++ b/security/selinux/ss/services.c
> @@ -2369,6 +2369,43 @@ err_policy:
>         return rc;
>  }
>
> +/**
> + * ocontext_to_sid - Helper to safely get sid for an ocontext
> + * @sidtab: SID table
> + * @c: ocontext structure
> + * @index: index of the context entry (0 or 1)
> + * @out_sid: pointer to the resulting SID value
> + *
> + * For all ocontexts except OCON_ISID the SID fields are populated
> + * on-demand when needed. Since updating the SID value is an SMP-sensitive
> + * operation, this helper must be used to do that safely.
> + *
> + * WARNING: This function may return -ESTALE, indicating that the caller
> + * must retry the operation after re-acquiring the policy pointer!
> + */
> +static int ocontext_to_sid(struct sidtab *sidtab, struct ocontext *c,
> +                          size_t index, u32 *out_sid)
> +{
> +       int rc;
> +       u32 sid;
> +
> +       /* Ensure the associated sidtab entry is visible to this thread. */
> +       sid = smp_load_acquire(&c->sid[index]);

Is there a reason why you decided to use smp_{load,store} to guard
ocontext.sid[] as opposed to RCU/spinlock?  RCU would allow us to
avoid the memory barrier in smp_load_acquire() on every call to
ocontext_to_sid() and it looks like all of the non-exclusive callers
are already in an RCU-protected section, so the additional impact on an
already cached value should be next to nothing.  The spinlock would
make things slightly more complicated (take the lock, recheck
ocontext.sid, set/bail, unlock, etc.) but we are already in the slow
path at that point.

> +       if (!sid) {
> +               rc = sidtab_context_to_sid(sidtab, &c->context[index], &sid);
> +               if (rc)
> +                       return rc;
> +
> +               /*
> +                * Ensure the new sidtab entry is visible to other threads
> +                * when they see the SID.
> +                */
> +               smp_store_release(&c->sid[index], sid);
> +       }
> +       *out_sid = sid;
> +       return 0;
> +}

-- 
paul moore
www.paul-moore.com


* Re: [PATCH] selinux: fix race condition when computing ocontext SIDs
  2021-08-05 20:48 ` Paul Moore
@ 2021-08-16 12:50   ` Ondrej Mosnacek
  2021-08-16 20:38     ` Paul Moore
  0 siblings, 1 reply; 9+ messages in thread
From: Ondrej Mosnacek @ 2021-08-16 12:50 UTC (permalink / raw)
  To: Paul Moore; +Cc: SElinux list, Sujithra Periasamy, Xinjie Zheng

On Thu, Aug 5, 2021 at 10:48 PM Paul Moore <paul@paul-moore.com> wrote:
> On Wed, Jul 28, 2021 at 10:03 AM Ondrej Mosnacek <omosnace@redhat.com> wrote:
[...]
> > diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
> > index 0a5ce001609b..c8db8a3432e4 100644
> > --- a/security/selinux/ss/services.c
> > +++ b/security/selinux/ss/services.c
> > @@ -2369,6 +2369,43 @@ err_policy:
> >         return rc;
> >  }
> >
> > +/**
> > + * ocontext_to_sid - Helper to safely get sid for an ocontext
> > + * @sidtab: SID table
> > + * @c: ocontext structure
> > + * @index: index of the context entry (0 or 1)
> > + * @out_sid: pointer to the resulting SID value
> > + *
> > + * For all ocontexts except OCON_ISID the SID fields are populated
> > + * on-demand when needed. Since updating the SID value is an SMP-sensitive
> > + * operation, this helper must be used to do that safely.
> > + *
> > + * WARNING: This function may return -ESTALE, indicating that the caller
> > + * must retry the operation after re-acquiring the policy pointer!
> > + */
> > +static int ocontext_to_sid(struct sidtab *sidtab, struct ocontext *c,
> > +                          size_t index, u32 *out_sid)
> > +{
> > +       int rc;
> > +       u32 sid;
> > +
> > +       /* Ensure the associated sidtab entry is visible to this thread. */
> > +       sid = smp_load_acquire(&c->sid[index]);
>
> Is there a reason why you decided to use smp_{load,store} to guard
> ocontext.sid[] as opposed to RCU/spinlock?  RCU would allow us to
> avoid the memory barrier in smp_load_acquire() on every call to
> ocontext_to_sid() and it looks like all of the non-exclusive callers
> are already in an RCU-protected section, so the additional impact on an
> already cached value should be next to nothing.  The spinlock would
> make things slightly more complicated (take the lock, recheck
> ocontext.sid, set/bail, unlock, etc.) but we are already in the slow
> path at that point.

I don't see any sane way to use RCU here - the implicit data
dependency that the memory barrier is guarding us against here is
between the sid field(s) and the memory regions in sidtab that hold
the struct context corresponding to the SID stored in the field. You'd
have to put both the SID value and the sidtab pointer behind some
dynamically allocated structure and that would just be horrible...
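
Just to sketch the shape of what I mean (the struct and the sid_cache
field below are made up, this is only an illustration, not a proposal):

	/* hypothetical RCU-published cache entry */
	struct ocon_sid_cache {
		struct sidtab *sidtab;	/* table in which the SID is valid */
		u32 sid;
		struct rcu_head rcu;
	};

	struct ocon_sid_cache *cache;

	/* fast path (callers already hold rcu_read_lock()) */
	cache = rcu_dereference(c->sid_cache[index]);
	if (cache && cache->sidtab == sidtab) {
		*out_sid = cache->sid;
		return 0;
	}
	/*
	 * Slow path: kmalloc() a new entry, do the sidtab lookup, publish
	 * it with rcu_assign_pointer() and kfree_rcu() the old entry.
	 */

So every lookup of an already-cached SID would pay an extra pointer
dereference, plus the allocation and freeing machinery on the slow path,
just to keep the SID and the sidtab it belongs to consistent with each
other.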

I assume that by using spinlock you mean something like:

sid = READ_ONCE(c->sid);
if (!sid) {
        spin_lock(...);
        sidtab_context_to_sid(..., &sid);
        WRITE_ONCE(c->sid, sid);
        spin_unlock(...);
}

...because taking the spinlock every time would obviously be less
efficient than this patch :)

That would, however, not solve the data dependency problem, so you'd
still need smp_*() instead of *_ONCE() and at that point the spinlock
would be redundant (and you're back to exactly what this patch does).
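
To illustrate, even the locked variant would still end up looking
roughly like this (the lock is hypothetical, only the barriers matter):

	int rc = 0;
	u32 sid;

	/* acquire still needed, pairs with the release below */
	sid = smp_load_acquire(&c->sid[index]);
	if (!sid) {
		spin_lock(&ocontext_sid_lock);	/* hypothetical lock */
		sid = c->sid[index];		/* re-check under the lock */
		if (!sid) {
			rc = sidtab_context_to_sid(sidtab, &c->context[index],
						   &sid);
			if (!rc)
				/*
				 * release still needed so that readers who
				 * see the SID also see the sidtab entry
				 */
				smp_store_release(&c->sid[index], sid);
		}
		spin_unlock(&ocontext_sid_lock);
		if (rc)
			return rc;
	}
	*out_sid = sid;
	return 0;

...so the lock would only serialize the slow path and wouldn't let us
drop either of the barriers.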

>
> > +       if (!sid) {
> > +               rc = sidtab_context_to_sid(sidtab, &c->context[index], &sid);
> > +               if (rc)
> > +                       return rc;
> > +
> > +               /*
> > +                * Ensure the new sidtab entry is visible to other threads
> > +                * when they see the SID.
> > +                */
> > +               smp_store_release(&c->sid[index], sid);
> > +       }
> > +       *out_sid = sid;
> > +       return 0;
> > +}
>
> --
> paul moore
> www.paul-moore.com
>


--
Ondrej Mosnacek
Software Engineer, Linux Security - SELinux kernel
Red Hat, Inc.



* Re: [PATCH] selinux: fix race condition when computing ocontext SIDs
  2021-08-16 12:50   ` Ondrej Mosnacek
@ 2021-08-16 20:38     ` Paul Moore
  2021-10-07 15:34       ` Paul Moore
  0 siblings, 1 reply; 9+ messages in thread
From: Paul Moore @ 2021-08-16 20:38 UTC (permalink / raw)
  To: Ondrej Mosnacek; +Cc: SElinux list, Sujithra Periasamy, Xinjie Zheng

On Mon, Aug 16, 2021 at 8:51 AM Ondrej Mosnacek <omosnace@redhat.com> wrote:
> On Thu, Aug 5, 2021 at 10:48 PM Paul Moore <paul@paul-moore.com> wrote:
> > On Wed, Jul 28, 2021 at 10:03 AM Ondrej Mosnacek <omosnace@redhat.com> wrote:
> [...]
> > > diff --git a/security/selinux/ss/services.c b/security/selinux/ss/services.c
> > > index 0a5ce001609b..c8db8a3432e4 100644
> > > --- a/security/selinux/ss/services.c
> > > +++ b/security/selinux/ss/services.c
> > > @@ -2369,6 +2369,43 @@ err_policy:
> > >         return rc;
> > >  }
> > >
> > > +/**
> > > + * ocontext_to_sid - Helper to safely get sid for an ocontext
> > > + * @sidtab: SID table
> > > + * @c: ocontext structure
> > > + * @index: index of the context entry (0 or 1)
> > > + * @out_sid: pointer to the resulting SID value
> > > + *
> > > + * For all ocontexts except OCON_ISID the SID fields are populated
> > > + * on-demand when needed. Since updating the SID value is an SMP-sensitive
> > > + * operation, this helper must be used to do that safely.
> > > + *
> > > + * WARNING: This function may return -ESTALE, indicating that the caller
> > > + * must retry the operation after re-acquiring the policy pointer!
> > > + */
> > > +static int ocontext_to_sid(struct sidtab *sidtab, struct ocontext *c,
> > > +                          size_t index, u32 *out_sid)
> > > +{
> > > +       int rc;
> > > +       u32 sid;
> > > +
> > > +       /* Ensure the associated sidtab entry is visible to this thread. */
> > > +       sid = smp_load_acquire(&c->sid[index]);
> >
> > Is there a reason why you decided to use smp_{load,store} to guard
> > ocontext.sid[] as opposed to RCU/spinlock?  RCU would allow us to
> > avoid the memory barrier in smp_load_acquire() on every call to
> > ocontext_to_sid() and it looks like all of the non-exclusive callers
> > are already in an RCU-protected section, so the additional impact on an
> > already cached value should be next to nothing.  The spinlock would
> > make things slightly more complicated (take the lock, recheck
> > ocontext.sid, set/bail, unlock, etc.) but we are already in the slow
> > path at that point.
>
> I don't see any sane way to use RCU here - the implicit data
> dependency that the memory barrier is guarding us against here is
> between the sid field(s) and the memory regions in sidtab that hold
> the struct context corresponding to the SID stored in the field. You'd
> have to put both the SID value and the sidtab pointer behind some
> dynamically allocated structure and that would just be horrible...

I'm jumping between the io_uring stuff, stacking/audit issues, and now
this today and I fear that context switches have taken their toll
already and I haven't resolved this in my own mind yet.  However, since
this patch has been languishing in inboxes with a week or so between
replies I wanted to at least impart this comment today ...

You need to start thinking more about how this code is going to be
maintained long term.  To clarify, when I say "long term" I'm talking
about +10 years from now when most everyone involved has forgotten the
subtle data dependencies you're talking about now.  While memory
barriers are sometimes necessary due to the unavoidable complexity in
the kernel, they shouldn't be an excuse to avoid properly crafting, and
synchronizing access to, data structures.  My initial take two weeks
ago was that this patch has some nice ideas, e.g. the consolidation
into ocontext_to_sid(), but the use of memory barriers as the means of
synchronizing the cached SID and the underlying context was just not a
good first option (especially considering the minimal comments
explaining the need for the barriers).  Maybe we get back to this
solution eventually, but I'm really hoping there is a better way, and
your commit message made no mention of what other options you may have
explored.

Please try to come up with a better solution that leverages proper
locking primitives, and if that isn't possible please explain (in
detail) why.

> I assume that by using spinlock you mean something like:
>
> sid = READ_ONCE(c->sid);
> if (!sid) {
>         spin_lock(...);
>         sidtab_context_to_sid(..., &sid);
>         WRITE_ONCE(c->sid, sid);
>         spin_unlock(...);
> }
>
> ...because taking the spinlock every time would obviously be less
> efficient than this patch :)

I didn't think I needed to clarify that, but yes, that is what I meant.

-- 
paul moore
www.paul-moore.com


* Re: [PATCH] selinux: fix race condition when computing ocontext SIDs
  2021-08-16 20:38     ` Paul Moore
@ 2021-10-07 15:34       ` Paul Moore
  2021-10-11  8:25         ` Ondrej Mosnacek
  0 siblings, 1 reply; 9+ messages in thread
From: Paul Moore @ 2021-10-07 15:34 UTC (permalink / raw)
  To: Ondrej Mosnacek; +Cc: SElinux list, Sujithra Periasamy, Xinjie Zheng

On Mon, Aug 16, 2021 at 4:38 PM Paul Moore <paul@paul-moore.com> wrote:
> Please try to come up with a better solution that leverages proper
> locking primitives, and if that isn't possible please explain (in
> detail) why.

It's been a little while so I wanted to check the status of this ...
have you spent any time on this, or is your position such that this is
the best you can come up with for a fix?

-- 
paul moore
www.paul-moore.com


* Re: [PATCH] selinux: fix race condition when computing ocontext SIDs
  2021-10-07 15:34       ` Paul Moore
@ 2021-10-11  8:25         ` Ondrej Mosnacek
  2021-10-11 23:27           ` Paul Moore
  0 siblings, 1 reply; 9+ messages in thread
From: Ondrej Mosnacek @ 2021-10-11  8:25 UTC (permalink / raw)
  To: Paul Moore; +Cc: SElinux list, Sujithra Periasamy, Xinjie Zheng

On Thu, Oct 7, 2021 at 5:34 PM Paul Moore <paul@paul-moore.com> wrote:
> On Mon, Aug 16, 2021 at 4:38 PM Paul Moore <paul@paul-moore.com> wrote:
> > Please try to come up with a better solution that leverages proper
> > locking primitives, and if that isn't possible please explain (in
> > detail) why.
>
> It's been a little while so I wanted to check the status of this ...
> have you spent any time on this, or is your position such that this is
> the best you can come up with for a fix?

Sorry, I had to put this on the "let me get back to this later" list
because of other priorities and didn't get to pop it out of that list
yet :/ I haven't yet looked at other alternatives.

--
Ondrej Mosnacek
Software Engineer, Linux Security - SELinux kernel
Red Hat, Inc.



* Re: [PATCH] selinux: fix race condition when computing ocontext SIDs
  2021-10-11  8:25         ` Ondrej Mosnacek
@ 2021-10-11 23:27           ` Paul Moore
  2021-10-12 14:53             ` Ondrej Mosnacek
  0 siblings, 1 reply; 9+ messages in thread
From: Paul Moore @ 2021-10-11 23:27 UTC (permalink / raw)
  To: Ondrej Mosnacek; +Cc: SElinux list, Sujithra Periasamy, Xinjie Zheng

On Mon, Oct 11, 2021 at 4:25 AM Ondrej Mosnacek <omosnace@redhat.com> wrote:
>
> On Thu, Oct 7, 2021 at 5:34 PM Paul Moore <paul@paul-moore.com> wrote:
> > On Mon, Aug 16, 2021 at 4:38 PM Paul Moore <paul@paul-moore.com> wrote:
> > > Please try to come up with a better solution that leverages proper
> > > locking primitives, and if that isn't possible please explain (in
> > > detail) why.
> >
> > It's been a little while so I wanted to check the status of this ...
> > have you spent any time on this, or is your position such that this is
> > the best you can come up with for a fix?
>
> Sorry, I had to put this on the "let me get back to this later" list
> because of other priorities and didn't get to pop it out of that list
> yet :/ I haven't yet looked at other alternatives.

Okay.  I'm going to go ahead and merge this simply because it does fix
a visible problem, but I really would like you to revisit this in the
near future to see if there is a better fix.

While I'm going to mark this with the stable tag, considering the
relatively low rate of occurrence on modern kernels and the fact that
I'm not in love with the fix, I'm going to merge this into
selinux/next and not selinux/stable-5.15.  This should give us another
couple of weeks in case you come up with a better fix in the near
term.

-- 
paul moore
www.paul-moore.com


* Re: [PATCH] selinux: fix race condition when computing ocontext SIDs
  2021-10-11 23:27           ` Paul Moore
@ 2021-10-12 14:53             ` Ondrej Mosnacek
  2021-10-12 15:00               ` Paul Moore
  0 siblings, 1 reply; 9+ messages in thread
From: Ondrej Mosnacek @ 2021-10-12 14:53 UTC (permalink / raw)
  To: Paul Moore; +Cc: SElinux list, Sujithra Periasamy, Xinjie Zheng

On Tue, Oct 12, 2021 at 1:27 AM Paul Moore <paul@paul-moore.com> wrote:
> On Mon, Oct 11, 2021 at 4:25 AM Ondrej Mosnacek <omosnace@redhat.com> wrote:
> >
> > On Thu, Oct 7, 2021 at 5:34 PM Paul Moore <paul@paul-moore.com> wrote:
> > > On Mon, Aug 16, 2021 at 4:38 PM Paul Moore <paul@paul-moore.com> wrote:
> > > > Please try to come up with a better solution that leverages proper
> > > > locking primitives, and if that isn't possible please explain (in
> > > > detail) why.
> > >
> > > It's been a little while so I wanted to check the status of this ...
> > > have you spent any time on this, or is your position such that this is
> > > the best you can come up with for a fix?
> >
> > Sorry, I had to put this on the "let me get back to this later" list
> > because of other priorities and didn't get to pop it out of that list
> > yet :/ I haven't yet looked at other alternatives.
>
> Okay.  I'm going to go ahead and merge this simply because it does fix
> a visible problem, but I really would like you to revisit this in the
> near future to see if there is a better fix.
>
> While I'm going to mark this with the stable tag, considering the
> relatively low rate of occurrence on modern kernels and the fact that
> I'm not in love with the fix, I'm going to merge this into
> selinux/next and not selinux/stable-5.15.  This should give us another
> couple of weeks in case you come up with a better fix in the near
> term.

OK, though it seems something went wrong when you applied the patch -
in the commit it's missing the subject line:
https://git.kernel.org/pub/scm/linux/kernel/git/pcmoore/selinux.git/commit/?h=next&id=0550e9155dfb566e9817b776dd0ece0b3fb361f2

--
Ondrej Mosnacek
Software Engineer, Linux Security - SELinux kernel
Red Hat, Inc.



* Re: [PATCH] selinux: fix race condition when computing ocontext SIDs
  2021-10-12 14:53             ` Ondrej Mosnacek
@ 2021-10-12 15:00               ` Paul Moore
  0 siblings, 0 replies; 9+ messages in thread
From: Paul Moore @ 2021-10-12 15:00 UTC (permalink / raw)
  To: Ondrej Mosnacek; +Cc: SElinux list, Sujithra Periasamy, Xinjie Zheng

On Tue, Oct 12, 2021 at 10:54 AM Ondrej Mosnacek <omosnace@redhat.com> wrote:
> On Tue, Oct 12, 2021 at 1:27 AM Paul Moore <paul@paul-moore.com> wrote:
> >
> > Okay.  I'm going to go ahead and merge this simply because it does fix
> > a visible problem, but I really would like you to revisit this in the
> > near future to see if there is a better fix.
> >
> > While I'm going to mark this with the stable tag, considering the
> > relatively low rate of occurrence on modern kernels and the fact that
> > I'm not in love with the fix, I'm going to merge this into
> > selinux/next and not selinux/stable-5.15.  This should give us another
> > couple of weeks in case you come up with a better fix in the near
> > term.
>
> OK, though it seems something went wrong when you applied the patch -
> in the commit it's missing the subject line:

Thanks for the heads-up, I fixed the subject line and force pushed the
branch.  Portions of the patch had to be merged by hand and when I
copied the description over I must have missed the subject line.

-- 
paul moore
www.paul-moore.com

