From: "Paul E. McKenney" <paulmck@linux.ibm.com>
To: Alexei Starovoitov <ast@fb.com>
Cc: Alexei Starovoitov <alexei.starovoitov@gmail.com>,
	Jann Horn <jannh@google.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Alexei Starovoitov <ast@kernel.org>,
	"David S. Miller" <davem@davemloft.net>,
	Daniel Borkmann <daniel@iogearbox.net>,
	"jakub.kicinski@netronome.com" <jakub.kicinski@netronome.com>,
	Network Development <netdev@vger.kernel.org>,
	Kernel Team <Kernel-team@fb.com>, Ingo Molnar <mingo@redhat.com>,
	Will Deacon <will.deacon@arm.com>
Subject: Re: [PATCH v4 bpf-next 1/9] bpf: introduce bpf_spin_lock
Date: Thu, 24 Jan 2019 20:31:58 -0800	[thread overview]
Message-ID: <20190125043158.GB4240@linux.ibm.com> (raw)
In-Reply-To: <150dc5d3-21c3-bbb6-6e5b-2ab8ab0e3f38@fb.com>

On Fri, Jan 25, 2019 at 04:27:02AM +0000, Alexei Starovoitov wrote:
> On 1/24/19 6:38 PM, Alexei Starovoitov wrote:
> >> For programs created with CAP_SYS_ADMIN,
> >> things get more tricky because you can create your own functions and
> >> call them repeatedly; I'm not sure whether the pessimal runtime there
> >> becomes exponential, or whether there is some check that catches this.
> > I think you're referring to bpf-to-bpf calls.
> > The limit is still the same: 4k insns per program, including all calls.
> > Tail calls are not allowed when bpf-to-bpf calls are used, so no 32 multiplier.
> 
> Jann,
> 
> I think you meant
> main:
> call A
> call A
> call A
> exit
> A:
> call B
> call B
> call B
> exit
> B:
> call C
> ...
> 
> that is, a scenario where everything still fits into 4k?
> It would be great if you could construct such a test while we're fixing
> the rest of the issues brought up in this thread.
> It will definitely be no more than BPF_COMPLEXITY_LIMIT_INSNS,
> which is 128k, but I wonder what the actual number of
> executed insns will be.
> I think such a cleverly constructed sequence can actually
> hit 128k executed insns too.
> It would be an awesome test to add to test_verifier.c.
> We have some boundary-pushing tests like that in lib/test_bpf.c
> that are generated in assembler.
> The longest takes 23853 nanoseconds, but a clever bpf2bpf call hack
> like the one above, with a map_update call in the leaf function,
> should certainly take much longer.
> I accept Paul's challenge to try to get such a fancy bpf prog
> to take 100 milliseconds :)
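
To make the growth concrete, here is a minimal sketch (not from the
original thread; function names are hypothetical, and the leaf omits the
real map_update) of the call tree quoted above, written as a BPF C
program.  Each level makes three bpf-to-bpf calls to the next, so the
number of executed insns grows as 3^depth while the static program stays
far below the 4k limit:

  #define SEC(name) __attribute__((section(name), used))
  #define __noinline __attribute__((noinline))

  static __noinline int c(void)
  {
          return 1;       /* the real test would do a map_update here */
  }

  static __noinline int b(void)
  {
          return c() + c() + c();         /* fan-out of 3 */
  }

  static __noinline int a(void)
  {
          return b() + b() + b();
  }

  SEC("socket")
  int prog(void *ctx)
  {
          /* three levels of fan-out 3: 27 leaf executions
           * from roughly a dozen static call insns
           */
          return a() + a() + a();
  }

  char _license[] SEC("license") = "GPL";

With fan-out 3 and depth d, roughly 3^d leaf calls execute: depth 10 is
about 59k, so a slightly wider or deeper tree can indeed approach the
128k executed-insn ceiling while staying tiny in static size.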

Fair enough!  But once you meet my challenge, the RCU CPU stall warning
code will challenge you to hit 21 seconds (or only three seconds given
an appropriately configured kernel).  ;-)
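
For reference, the 21-second figure is the default value of
CONFIG_RCU_CPU_STALL_TIMEOUT, and, assuming only the documented knobs, a
kernel can be "appropriately configured" without rebuilding:

  rcupdate.rcu_cpu_stall_timeout=3                                (boot parameter)
  echo 3 > /sys/module/rcupdate/parameters/rcu_cpu_stall_timeout  (at runtime)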

							Thanx, Paul


Thread overview: 62+ messages
2019-01-24  4:13 [PATCH v4 bpf-next 0/9] introduce bpf_spin_lock Alexei Starovoitov
2019-01-24  4:13 ` [PATCH v4 bpf-next 1/9] bpf: " Alexei Starovoitov
2019-01-24 18:01   ` Peter Zijlstra
2019-01-24 18:56     ` Peter Zijlstra
2019-01-24 23:42       ` Paul E. McKenney
2019-01-25  0:05         ` Alexei Starovoitov
2019-01-25  1:22           ` Paul E. McKenney
2019-01-25  1:46             ` Jann Horn
2019-01-25  2:38               ` Alexei Starovoitov
2019-01-25  4:27                 ` Alexei Starovoitov
2019-01-25  4:31                   ` Paul E. McKenney [this message]
2019-01-25  4:47                     ` Alexei Starovoitov
2019-01-25 16:02                       ` Paul E. McKenney
2019-01-25  4:11               ` Paul E. McKenney
2019-01-25 16:18                 ` Jann Horn
2019-01-25 22:51                   ` Paul E. McKenney
2019-01-25 23:44                     ` Alexei Starovoitov
2019-01-26  0:43                       ` Jann Horn
2019-01-26  0:59                         ` Jann Horn
2019-01-24 23:58     ` Alexei Starovoitov
2019-01-25  0:18       ` Jann Horn
2019-01-25  2:49         ` Alexei Starovoitov
2019-01-25  2:29       ` Eric Dumazet
2019-01-25  2:34         ` Alexei Starovoitov
2019-01-25  2:44           ` Eric Dumazet
2019-01-25  2:57             ` Alexei Starovoitov
2019-01-25  8:38               ` Peter Zijlstra
2019-01-25  9:10       ` Peter Zijlstra
2019-01-25 23:42         ` Alexei Starovoitov
2019-01-28  8:24           ` Peter Zijlstra
2019-01-28  8:31           ` Peter Zijlstra
2019-01-28  8:35             ` Peter Zijlstra
2019-01-28 20:49               ` Alexei Starovoitov
2019-01-28  8:43           ` Peter Zijlstra
2019-01-28 21:37             ` Alexei Starovoitov
2019-01-29  8:59               ` Peter Zijlstra
2019-01-30  2:20                 ` Alexei Starovoitov
2019-01-25  9:59       ` Peter Zijlstra
2019-01-25 10:09       ` Peter Zijlstra
2019-01-25 10:23       ` Peter Zijlstra
2019-01-26  0:17         ` bpf memory model. Was: " Alexei Starovoitov
2019-01-28  9:24           ` Peter Zijlstra
2019-01-28 21:56             ` Alexei Starovoitov
2019-01-29  9:16               ` Peter Zijlstra
2019-01-30  2:32                 ` Alexei Starovoitov
2019-01-30  8:58                   ` Peter Zijlstra
2019-01-30 19:36                     ` Alexei Starovoitov
2019-01-30 18:11               ` Will Deacon
2019-01-30 18:36                 ` Paul E. McKenney
2019-01-30 19:51                   ` Alexei Starovoitov
2019-01-30 21:05                     ` Paul E. McKenney
2019-01-30 22:57                       ` Alexei Starovoitov
2019-01-31 14:01                         ` Paul E. McKenney
2019-01-31 18:47                           ` Alexei Starovoitov
2019-02-01 14:05                             ` Paul E. McKenney
2019-01-30 19:50                 ` Alexei Starovoitov
2019-01-24  4:13 ` [PATCH v4 bpf-next 2/9] bpf: add support for bpf_spin_lock to cgroup local storage Alexei Starovoitov
2019-01-24  4:13 ` [PATCH v4 bpf-next 3/9] tools/bpf: sync include/uapi/linux/bpf.h Alexei Starovoitov
2019-01-24  4:13 ` [PATCH v4 bpf-next 4/9] selftests/bpf: add bpf_spin_lock tests Alexei Starovoitov
2019-01-24  4:13 ` [PATCH v4 bpf-next 5/9] selftests/bpf: add bpf_spin_lock C test Alexei Starovoitov
2019-01-24  4:14 ` [PATCH v4 bpf-next 6/9] bpf: introduce BPF_F_LOCK flag Alexei Starovoitov
2019-01-24  4:14 ` [PATCH v4 bpf-next 7/9] tools/bpf: sync uapi/bpf.h Alexei Starovoitov
