From: Christian Deacon <gamemann@gflclan.com>
To: xdp-newbies@vger.kernel.org
Subject: XDP BPF Stack Limit Issues
Date: Wed, 16 Dec 2020 09:29:05 -0600
Message-ID: <ad6ea0ec-c5ce-2887-6f4c-7ed762a0f130@gflclan.com>

Hey everyone,

I've been trying to add IPv6 support to an XDP firewall, which can be 
found below.

https://github.com/gamemann/XDP-Firewall

Unfortunately, I've been fighting with the BPF verifier because I'm 
exceeding the 512-byte BPF stack limit. I linked the repository above in 
case others want to see the headers that define things like 
`MAX_FILTERS` used inside the XDP program. The error I'm receiving is:

```
error: <unknown>:0:0: in function xdp_prog_main i32 (%struct.xdp_md*): 
Looks like the BPF stack limit of 512 bytes is exceeded. Please move 
large on stack variables into BPF per-cpu array map.
```

This error repeats anywhere from 3 to 10 times depending on what I try 
in order to resolve the issue.
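
For reference, the pattern I understand the error message to be 
suggesting looks something like this. This is just a minimal sketch 
using libbpf's BTF-style map definitions; `struct scratch` and the map 
name are placeholders I made up, not code from my program:

```
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Hypothetical large struct that would otherwise be declared on the stack. */
struct scratch
{
    __u8 data[256];
};

/* Single-slot per-CPU array used purely as scratch storage. */
struct
{
    __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, struct scratch);
} scratch_map SEC(".maps");

SEC("xdp")
int xdp_prog_main(struct xdp_md *ctx)
{
    __u32 key = 0;

    /* Borrow per-CPU storage instead of putting 'struct scratch' on the stack. */
    struct scratch *s = bpf_map_lookup_elem(&scratch_map, &key);

    if (!s)
    {
        return XDP_ABORTED;
    }

    /* ... parse into s->data instead of an on-stack buffer ... */

    return XDP_PASS;
}

char __license[] SEC("license") = "GPL";
```

The idea being that the large object lives in the per-CPU map value 
rather than in the program's 512-byte stack frame.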

I ended up rewriting the entire program to use as few variables as 
possible, and I got very close to getting it to compile until I added 
support for the ICMPv6 protocol (once I remove that support, it compiles 
and runs without any issues). I'm at a loss as to what I can do now, 
though.

The current XDP program code is here:

https://gist.github.com/gamemann/a0acd9603405c3d7b3c792b5429ced38

From what the error states, I could try storing variables in a per-CPU 
BPF map. I therefore tried storing the ICMP (and, at one point, TCP) 
header information in a BPF map and reading the data back later on; that 
attempt can be found below.

https://gist.github.com/gamemann/663674924e16286b02a835637912c2a5

This still exceeded the BPF stack size. That said, I'd assume 
performance would be heavily impacted if we stored everything inside a 
BPF map. To my understanding, per-CPU maps cannot be reliably read 
within the XDP program, so even if this had worked, I'd probably want to 
use a regular (non-per-CPU) map anyway, which would hurt performance.

I also tried BPF-to-BPF function calls without luck, and I was thinking 
about trying BPF tail calls. However, I don't think those would help 
either: to my understanding, a tail call reuses the caller's BPF stack 
frame.
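
For completeness, what I had in mind looks roughly like the following. 
Again just a sketch: `jmp_table`, the slot index, and `xdp_icmpv6` are 
placeholder names, and user space would still need to populate the 
program array with the program fd:

```
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Program array holding the programs we can tail-call into. */
struct
{
    __uint(type, BPF_MAP_TYPE_PROG_ARRAY);
    __uint(max_entries, 1);
    __uint(key_size, sizeof(__u32));
    __uint(value_size, sizeof(__u32));
} jmp_table SEC(".maps");

/* Hypothetical second program that would hold the ICMPv6 handling. */
SEC("xdp")
int xdp_icmpv6(struct xdp_md *ctx)
{
    /* ... ICMPv6 checks would go here ... */
    return XDP_PASS;
}

SEC("xdp")
int xdp_prog_main(struct xdp_md *ctx)
{
    /* Jump to slot 0 of jmp_table (populated from user space with the
       fd of xdp_icmpv6). On success, execution never returns here. */
    bpf_tail_call(ctx, &jmp_table, 0);

    /* We only fall through if the tail call fails (e.g. the slot is empty). */
    return XDP_PASS;
}

char __license[] SEC("license") = "GPL";
```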

I could try moving even more of the program's variables into a BPF map, 
such as the PPS and BPS counters; a sketch of what I mean follows below. 
However, I wanted to see if the mailing list had any other suggestions 
first. I plan to write another XDP firewall with a lot more 
functionality than this one, and I'm worried I'd run into similar issues 
there.
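
Concretely, something like this (a sketch only; `filter_stats`, 
`stats_map`, and `count_packet` are names I made up for illustration, 
and the `MAX_FILTERS` fallback value is a placeholder for whatever the 
real config header defines):

```
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

#ifndef MAX_FILTERS
#define MAX_FILTERS 60 /* Placeholder; the real value comes from the config header. */
#endif

/* Hypothetical per-filter counters that currently live on the stack. */
struct filter_stats
{
    __u64 pps;
    __u64 bps;
};

struct
{
    __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
    __uint(max_entries, MAX_FILTERS);
    __type(key, __u32);
    __type(value, struct filter_stats);
} stats_map SEC(".maps");

/* Update the counters through the map instead of via on-stack variables. */
static __always_inline void count_packet(__u32 filter_idx, __u64 pkt_len)
{
    struct filter_stats *st = bpf_map_lookup_elem(&stats_map, &filter_idx);

    if (st)
    {
        st->pps += 1;
        st->bps += pkt_len;
    }
}
```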

Any help would be highly appreciated and thank you for your time!

