bpf.vger.kernel.org archive mirror
* [PATCH v3] seccomp: Improve performance by optimizing rmb()
@ 2021-02-24  8:58 wanghongzhe
  0 siblings, 0 replies; 4+ messages in thread
From: wanghongzhe @ 2021-02-24  8:58 UTC (permalink / raw)
  To: keescook, luto
  Cc: andrii, ast, bpf, daniel, john.fastabend, kafai, kpsingh,
	linux-kernel, netdev, songliubraving, wad, wanghongzhe, yhs

Since Kees has accepted the v2 patch as commit a381b70a1, which only
replaced rmb() with smp_rmb(), this patch builds on that and just moves
the smp_rmb() to the correct position.

As the original comment states (and as it indeed should be):
   /*
    * Make sure that any changes to mode from another thread have
    * been seen after SYSCALL_WORK_SECCOMP was seen.
    */
the smp_rmb() belongs between the read of SYSCALL_WORK_SECCOMP and the read
of seccomp.mode, so that any change to mode made by another thread (the
TSYNC case) is seen once SYSCALL_WORK_SECCOMP has been seen. However, it is
currently placed between the reads of seccomp.mode and seccomp->filter.
This appears to have been introduced inadvertently by commit
13aa72f0fd0a9f98a41cefb662487269e2f1ad65, which refactored the filter
callback and the API. So let's just move the smp_rmb() to the correct
position.

A follow-up optimization patch will be provided if this adjustment is appropriate.
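
For illustration only, the ordering argument above can be written as a classic
message-passing litmus test in the Linux-kernel memory model
(tools/memory-model) style. This is a sketch, not kernel code: the plain
smp_mb() stands in for the smp_mb__before_atomic()/set_task_syscall_work()
pair on the writer side, and the shared variables mode and work stand in for
seccomp.mode and the SYSCALL_WORK_SECCOMP bit:

C seccomp-mode-publish

(*
 * Sketch only: P0 models seccomp_assign_mode() publishing the mode and
 * then setting the syscall-work bit; P1 models syscall entry observing
 * the bit and then reading the mode after smp_rmb().
 *)

{}

P0(int *mode, int *work)
{
	WRITE_ONCE(*mode, 1);
	smp_mb();
	WRITE_ONCE(*work, 1);
}

P1(int *mode, int *work)
{
	int r0;
	int r1;

	r0 = READ_ONCE(*work);
	smp_rmb();
	r1 = READ_ONCE(*mode);
}

exists (1:r0=1 /\ 1:r1=0)

With the barriers placed as above, herd7 should report the exists clause
(work bit observed set but a stale mode read) as forbidden; dropping the
smp_rmb() in P1, or moving it after the mode read, makes that outcome
reachable on weakly ordered architectures, which is roughly the window the
misplaced barrier leaves open.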

v2 -> v3:
 - move the smp_rmb() to the correct position

v1 -> v2:
 - only replace rmb() with smp_rmb()
 - provide the performance test number

RFC -> v1:
 - replace rmb() with smp_rmb()
 - move the smp_rmb() logic to the middle between TIF_SECCOMP and mode

Signed-off-by: wanghongzhe <wanghongzhe@huawei.com>
---
 kernel/seccomp.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index 1d60fc2c9987..64b236cb8a7f 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -1160,12 +1160,6 @@ static int __seccomp_filter(int this_syscall, const struct seccomp_data *sd,
 	int data;
 	struct seccomp_data sd_local;
 
-	/*
-	 * Make sure that any changes to mode from another thread have
-	 * been seen after SYSCALL_WORK_SECCOMP was seen.
-	 */
-	smp_rmb();
-
 	if (!sd) {
 		populate_seccomp_data(&sd_local);
 		sd = &sd_local;
@@ -1291,7 +1285,6 @@ static int __seccomp_filter(int this_syscall, const struct seccomp_data *sd,
 
 int __secure_computing(const struct seccomp_data *sd)
 {
-	int mode = current->seccomp.mode;
 	int this_syscall;
 
 	if (IS_ENABLED(CONFIG_CHECKPOINT_RESTORE) &&
@@ -1301,7 +1294,13 @@ int __secure_computing(const struct seccomp_data *sd)
 	this_syscall = sd ? sd->nr :
 		syscall_get_nr(current, current_pt_regs());
 
-	switch (mode) {
+	/* 
+	 * Make sure that any changes to mode from another thread have
+	 * been seen after SYSCALL_WORK_SECCOMP was seen.
+	 */
+	smp_rmb();
+
+	switch (current->seccomp.mode) {
 	case SECCOMP_MODE_STRICT:
 		__secure_computing_strict(this_syscall);  /* may call do_exit */
 		return 0;
-- 
2.19.1



* RE: [PATCH v3] seccomp: Improve performance by optimizing rmb()
  2021-02-24 16:18 ` Andy Lutomirski
@ 2021-02-25 12:03   ` Wanghongzhe (Hongzhe, EulerOS)
  0 siblings, 0 replies; 4+ messages in thread
From: Wanghongzhe (Hongzhe, EulerOS) @ 2021-02-25 12:03 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: keescook, andrii, ast, bpf, daniel, john.fastabend, kafai,
	kpsingh, linux-kernel, netdev, songliubraving, wad, yhs,
	tongxiaomeng

> > On Feb 24, 2021, at 12:03 AM, wanghongzhe <wanghongzhe@huawei.com> wrote:
> >
> > Since Kees has accepted the v2 patch as commit a381b70a1, which only
> > replaced rmb() with smp_rmb(), this patch builds on that and just moves
> > the smp_rmb() to the correct position.
> >
> > As the original comment states (and as it indeed should be):
> >   /*
> >    * Make sure that any changes to mode from another thread have
> >    * been seen after SYSCALL_WORK_SECCOMP was seen.
> >    */
> > the smp_rmb() belongs between the read of SYSCALL_WORK_SECCOMP and the
> > read of seccomp.mode, so that any change to mode made by another thread
> > (the TSYNC case) is seen once SYSCALL_WORK_SECCOMP has been seen.
> > However, it is currently placed between the reads of seccomp.mode and
> > seccomp->filter. This appears to have been introduced inadvertently by
> > commit 13aa72f0fd0a9f98a41cefb662487269e2f1ad65, which refactored the
> > filter callback and the API. So let's just move the smp_rmb() to the
> > correct position.
> >
> > A follow-up optimization patch will be provided if this adjustment is appropriate.
> 
> Would it be better to make the syscall work read be smp_load_acquire()?
> 
> >
> > v2 -> v3:
> > - move the smp_rmb() to the correct position
> >
> > v1 -> v2:
> > - only replace rmb() with smp_rmb()
> > - provide the performance test number
> >
> > RFC -> v1:
> > - replace rmb() with smp_rmb()
> > - move the smp_rmb() logic to the middle between TIF_SECCOMP and mode
> >
> > Signed-off-by: wanghongzhe <wanghongzhe@huawei.com>
> > ---
> > kernel/seccomp.c | 15 +++++++--------
> > 1 file changed, 7 insertions(+), 8 deletions(-)
> >
> > diff --git a/kernel/seccomp.c b/kernel/seccomp.c index
> > 1d60fc2c9987..64b236cb8a7f 100644
> > --- a/kernel/seccomp.c
> > +++ b/kernel/seccomp.c
> > @@ -1160,12 +1160,6 @@ static int __seccomp_filter(int this_syscall, const
> struct seccomp_data *sd,
> >    int data;
> >    struct seccomp_data sd_local;
> >
> > -    /*
> > -     * Make sure that any changes to mode from another thread have
> > -     * been seen after SYSCALL_WORK_SECCOMP was seen.
> > -     */
> > -    smp_rmb();
> > -
> >    if (!sd) {
> >        populate_seccomp_data(&sd_local);
> >        sd = &sd_local;
> > @@ -1291,7 +1285,6 @@ static int __seccomp_filter(int this_syscall,
> > const struct seccomp_data *sd,
> >
> > int __secure_computing(const struct seccomp_data *sd) {
> > -    int mode = current->seccomp.mode;
> >    int this_syscall;
> >
> >    if (IS_ENABLED(CONFIG_CHECKPOINT_RESTORE) && @@ -1301,7
> +1294,13 @@
> > int __secure_computing(const struct seccomp_data *sd)
> >    this_syscall = sd ? sd->nr :
> >        syscall_get_nr(current, current_pt_regs());
> >
> > -    switch (mode) {
> > +    /*
> > +     * Make sure that any changes to mode from another thread have
> > +     * been seen after SYSCALL_WORK_SECCOMP was seen.
> > +     */
> > +    smp_rmb();
> > +
> > +    switch (current->seccomp.mode) {
> >    case SECCOMP_MODE_STRICT:
> >        __secure_computing_strict(this_syscall);  /* may call do_exit */
> >        return 0;
> > --
> > 2.19.1
> >
> Would it be better to make the syscall work read be smp_load_acquire()?
Maybe we can do something like this (untested):

__syscall_enter_from_user_work(struct pt_regs *regs, long syscall)
{
-	unsigned long work = READ_ONCE(current_thread_info()->syscall_work);
+	unsigned long work = smp_load_acquire(&current_thread_info()->syscall_work);

	if (work & SYSCALL_WORK_ENTER)
		syscall = syscall_trace_enter(regs, syscall, work);

However, this would insert a memory barrier that slows down all of the
SYSCALL_WORK_ENTER work behind it, not just seccomp, which is not what we
want. Also, to pair with the smp_mb__before_atomic() in seccomp_assign_mode(),
which is called from seccomp_sync_threads(), it is better to use an smp_rmb()
between the read of the work flags and the read of the mode:
	task->seccomp.mode = seccomp_mode;
	/*
	 * Make sure SYSCALL_WORK_SECCOMP cannot be set before the mode (and
	 * filter) is set.
	 */
	smp_mb__before_atomic();
	/* Assume default seccomp processes want spec flaw mitigation. */
	if ((flags & SECCOMP_FILTER_FLAG_SPEC_ALLOW) == 0)
		arch_seccomp_spec_mitigate(task);
	set_task_syscall_work(task, SECCOMP);
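
Put next to the read side after the v3 patch, the pairing then looks roughly
like this (a simplified juxtaposition for illustration, not literal kernel
code; the reader half collapses the syscall-work check done by the entry code
and the switch in __secure_computing() into one snippet):

	/* writer: seccomp_assign_mode(), reached via seccomp_sync_threads() */
	task->seccomp.mode = seccomp_mode;
	smp_mb__before_atomic();
	set_task_syscall_work(task, SECCOMP);

	/* reader: syscall entry, once SYSCALL_WORK_SECCOMP has been observed */
	smp_rmb();	/* pairs with the smp_mb__before_atomic() above */
	switch (current->seccomp.mode) {
	case SECCOMP_MODE_STRICT:
	case SECCOMP_MODE_FILTER:
		/* mode (and filter) published before the work bit are visible here */
		break;
	}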



* Re: [PATCH v3] seccomp: Improve performance by optimizing rmb()
  2021-02-24  8:49 wanghongzhe
@ 2021-02-24 16:18 ` Andy Lutomirski
  2021-02-25 12:03   ` Wanghongzhe (Hongzhe, EulerOS)
  0 siblings, 1 reply; 4+ messages in thread
From: Andy Lutomirski @ 2021-02-24 16:18 UTC (permalink / raw)
  To: wanghongzhe
  Cc: keescook, andrii, ast, bpf, daniel, john.fastabend, kafai,
	kpsingh, linux-kernel, netdev, songliubraving, wad, yhs,
	tongxiaomeng


> On Feb 24, 2021, at 12:03 AM, wanghongzhe <wanghongzhe@huawei.com> wrote:
> 
> Since Kees has accepted the v2 patch as commit a381b70a1, which only
> replaced rmb() with smp_rmb(), this patch builds on that and just moves
> the smp_rmb() to the correct position.
> 
> As the original comment states (and as it indeed should be):
>   /*
>    * Make sure that any changes to mode from another thread have
>    * been seen after SYSCALL_WORK_SECCOMP was seen.
>    */
> the smp_rmb() belongs between the read of SYSCALL_WORK_SECCOMP and the read
> of seccomp.mode, so that any change to mode made by another thread (the
> TSYNC case) is seen once SYSCALL_WORK_SECCOMP has been seen. However, it is
> currently placed between the reads of seccomp.mode and seccomp->filter.
> This appears to have been introduced inadvertently by commit
> 13aa72f0fd0a9f98a41cefb662487269e2f1ad65, which refactored the filter
> callback and the API. So let's just move the smp_rmb() to the correct
> position.
> 
> A follow-up optimization patch will be provided if this adjustment is appropriate.

Would it be better to make the syscall work read be smp_load_acquire()?

> 
> v2 -> v3:
> - move the smp_rmb() to the correct position
> 
> v1 -> v2:
> - only replace rmb() with smp_rmb()
> - provide the performance test number
> 
> RFC -> v1:
> - replace rmb() with smp_rmb()
> - move the smp_rmb() logic to the middle between TIF_SECCOMP and mode
> 
> Signed-off-by: wanghongzhe <wanghongzhe@huawei.com>
> ---
> kernel/seccomp.c | 15 +++++++--------
> 1 file changed, 7 insertions(+), 8 deletions(-)
> 
> diff --git a/kernel/seccomp.c b/kernel/seccomp.c
> index 1d60fc2c9987..64b236cb8a7f 100644
> --- a/kernel/seccomp.c
> +++ b/kernel/seccomp.c
> @@ -1160,12 +1160,6 @@ static int __seccomp_filter(int this_syscall, const struct seccomp_data *sd,
>    int data;
>    struct seccomp_data sd_local;
> 
> -    /*
> -     * Make sure that any changes to mode from another thread have
> -     * been seen after SYSCALL_WORK_SECCOMP was seen.
> -     */
> -    smp_rmb();
> -
>    if (!sd) {
>        populate_seccomp_data(&sd_local);
>        sd = &sd_local;
> @@ -1291,7 +1285,6 @@ static int __seccomp_filter(int this_syscall, const struct seccomp_data *sd,
> 
> int __secure_computing(const struct seccomp_data *sd)
> {
> -    int mode = current->seccomp.mode;
>    int this_syscall;
> 
>    if (IS_ENABLED(CONFIG_CHECKPOINT_RESTORE) &&
> @@ -1301,7 +1294,13 @@ int __secure_computing(const struct seccomp_data *sd)
>    this_syscall = sd ? sd->nr :
>        syscall_get_nr(current, current_pt_regs());
> 
> -    switch (mode) {
> +    /* 
> +     * Make sure that any changes to mode from another thread have
> +     * been seen after SYSCALL_WORK_SECCOMP was seen.
> +     */
> +    smp_rmb();
> +
> +    switch (current->seccomp.mode) {
>    case SECCOMP_MODE_STRICT:
>        __secure_computing_strict(this_syscall);  /* may call do_exit */
>        return 0;
> -- 
> 2.19.1
> 


* [PATCH v3] seccomp: Improve performance by optimizing rmb()
@ 2021-02-24  8:49 wanghongzhe
  2021-02-24 16:18 ` Andy Lutomirski
  0 siblings, 1 reply; 4+ messages in thread
From: wanghongzhe @ 2021-02-24  8:49 UTC (permalink / raw)
  To: keescook, luto
  Cc: andrii, ast, bpf, daniel, john.fastabend, kafai, kpsingh,
	linux-kernel, netdev, songliubraving, wad, wanghongzhe, yhs,
	tongxiaomeng

Since Kees has accepted the v2 patch as commit a381b70a1, which only
replaced rmb() with smp_rmb(), this patch builds on that and just moves
the smp_rmb() to the correct position.

As the original comment states (and as it indeed should be):
   /*
    * Make sure that any changes to mode from another thread have
    * been seen after SYSCALL_WORK_SECCOMP was seen.
    */
the smp_rmb() belongs between the read of SYSCALL_WORK_SECCOMP and the read
of seccomp.mode, so that any change to mode made by another thread (the
TSYNC case) is seen once SYSCALL_WORK_SECCOMP has been seen. However, it is
currently placed between the reads of seccomp.mode and seccomp->filter.
This appears to have been introduced inadvertently by commit
13aa72f0fd0a9f98a41cefb662487269e2f1ad65, which refactored the filter
callback and the API. So let's just move the smp_rmb() to the correct
position.

A follow-up optimization patch will be provided if this adjustment is appropriate.

v2 -> v3:
 - move the smp_rmb() to the correct position

v1 -> v2:
 - only replace rmb() with smp_rmb()
 - provide the performance test number

RFC -> v1:
 - replace rmb() with smp_rmb()
 - move the smp_rmb() logic to the middle between TIF_SECCOMP and mode

Signed-off-by: wanghongzhe <wanghongzhe@huawei.com>
---
 kernel/seccomp.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/kernel/seccomp.c b/kernel/seccomp.c
index 1d60fc2c9987..64b236cb8a7f 100644
--- a/kernel/seccomp.c
+++ b/kernel/seccomp.c
@@ -1160,12 +1160,6 @@ static int __seccomp_filter(int this_syscall, const struct seccomp_data *sd,
 	int data;
 	struct seccomp_data sd_local;
 
-	/*
-	 * Make sure that any changes to mode from another thread have
-	 * been seen after SYSCALL_WORK_SECCOMP was seen.
-	 */
-	smp_rmb();
-
 	if (!sd) {
 		populate_seccomp_data(&sd_local);
 		sd = &sd_local;
@@ -1291,7 +1285,6 @@ static int __seccomp_filter(int this_syscall, const struct seccomp_data *sd,
 
 int __secure_computing(const struct seccomp_data *sd)
 {
-	int mode = current->seccomp.mode;
 	int this_syscall;
 
 	if (IS_ENABLED(CONFIG_CHECKPOINT_RESTORE) &&
@@ -1301,7 +1294,13 @@ int __secure_computing(const struct seccomp_data *sd)
 	this_syscall = sd ? sd->nr :
 		syscall_get_nr(current, current_pt_regs());
 
-	switch (mode) {
+	/*
+	 * Make sure that any changes to mode from another thread have
+	 * been seen after SYSCALL_WORK_SECCOMP was seen.
+	 */
+	smp_rmb();
+
+	switch (current->seccomp.mode) {
 	case SECCOMP_MODE_STRICT:
 		__secure_computing_strict(this_syscall);  /* may call do_exit */
 		return 0;
-- 
2.19.1


