From: John Fastabend <john.fastabend@gmail.com>
To: "Maxim Mikityanskiy" <maximmi@mellanox.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"Björn Töpel" <bjorn.topel@intel.com>,
	"Magnus Karlsson" <magnus.karlsson@intel.com>,
	"David S. Miller" <davem@davemloft.net>
Cc: Tariq Toukan <tariqt@mellanox.com>,
	Saeed Mahameed <saeedm@mellanox.com>,
	Eran Ben Elisha <eranbe@mellanox.com>
Subject: Re: AF_XDP design flaws
Date: Tue, 26 Feb 2019 08:41:19 -0800	[thread overview]
Message-ID: <b48e282f-8405-8974-8b71-df15f4bda8ab@gmail.com> (raw)
In-Reply-To: <AM6PR05MB5879DF6B2BD7DC426869875ED17B0@AM6PR05MB5879.eurprd05.prod.outlook.com>

On 2/26/19 6:49 AM, Maxim Mikityanskiy wrote:
> Hi everyone,
> 
> I would like to discuss some design flaws of the AF_XDP socket (XSK)
> implementation in the kernel. At the moment I don't see a way to work around
> them without changing the API, so I would like to make sure that I'm not
> missing anything, and to suggest and discuss some possible improvements.
> 
> The issues I describe below are caused by the fact that the driver depends on
> the application doing some things, and if the application is
> slow/buggy/malicious, the driver is forced to busy poll because of the lack of a
> notification mechanism from the application side. I will refer to the i40e
> driver implementation a lot, as it is the first implementation of AF_XDP, but
> the issues are general and affect any driver. I already considered trying to
> fix them at the driver level, but it doesn't seem possible, so it looks like
> the behavior and implementation of AF_XDP in the kernel have to be changed.
> 
> RX side busy polling
> ====================
> 
> On the RX side, the driver expects the application to put some descriptors in
> the Fill Ring. There is no way for the application to notify the driver that
> there are more Fill Ring descriptors to take, so the driver is forced to busy
> poll the Fill Ring if it gets empty. E.g., the i40e driver does it in NAPI poll:
> 
> int i40e_clean_rx_irq_zc(struct i40e_ring *rx_ring, int budget)
> {
> ...
>                         failure = failure ||
>                                   !i40e_alloc_rx_buffers_fast_zc(rx_ring,
>                                                                  cleaned_count);
> ...
>         return failure ? budget : (int)total_rx_packets;
> }
> 
> Basically, it means that if there are no descriptors in the Fill Ring, NAPI
> will never stop, draining the CPU.
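
To make the failure mode concrete, here is a rough userspace model of the Fill
Ring (a single-producer/single-consumer ring with free-running indices, like
the real one) showing why an empty ring keeps NAPI scheduled. This is not the
kernel's code; all names below are made up for illustration.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define RING_SIZE 4            /* must be a power of two */
#define RING_MASK (RING_SIZE - 1)

struct fill_ring {
	uint32_t producer;         /* advanced by the application */
	uint32_t consumer;         /* advanced by the driver */
	uint64_t addr[RING_SIZE];  /* UMEM frame addresses */
};

/* Application side: post a free frame for the driver to RX into. */
static bool app_fill(struct fill_ring *r, uint64_t addr)
{
	if (r->producer - r->consumer == RING_SIZE)
		return false;                      /* ring full */
	r->addr[r->producer++ & RING_MASK] = addr;
	return true;
}

/* Driver side: take one frame; failing here is the "failure" path. */
static bool drv_alloc_rx_buffer(struct fill_ring *r, uint64_t *addr)
{
	if (r->producer == r->consumer)
		return false;                      /* ring empty */
	*addr = r->addr[r->consumer++ & RING_MASK];
	return true;
}

/* Mimics the i40e_clean_rx_irq_zc() return above: reporting the full
 * budget tells NAPI that work remains, so it reschedules immediately --
 * a busy-poll loop as long as the application posts nothing. */
static int napi_poll(struct fill_ring *r, int budget)
{
	uint64_t addr;
	bool failure = !drv_alloc_rx_buffer(r, &addr);

	return failure ? budget : 0;           /* 0 == all work done */
}
```

The key point is the `failure ? budget : ...` return: an empty Fill Ring is
indistinguishable from "more work pending", so NAPI never goes idle.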
> 
> Possible cases when it happens
> ------------------------------
> 
> 1. The application is slow, it received some frames in the RX Ring, and it is
> still handling the data, so it has no free frames to put to the Fill Ring.
> 
> 2. The application is malicious, it opens an XSK and puts no frames to the Fill
> Ring. It can be used as a local DoS attack.
> 
> 3. The application is buggy and stops filling the Fill Ring for whatever reason
> (deadlock, waiting for another blocking operation, other bugs).
> 
> Although loading an XDP program requires root access, the DoS attack can be
> targeted at setups that already use XDP, i.e. where an XDP program is already
> loaded. Even under root, userspace applications should not be able to disrupt
> system stability just by calling normal APIs without any intention to harm
> the system, yet that is exactly what happens in case 1.

I believe this is by design. If packets per second is your top priority (at
the expense of power, CPU, etc.), this is the behavior you might want. To
address your points: if you don't trust the application, it should be
isolated to a queue pair and a set of cores so the impact is known and
managed.

That said, having a flag to back off seems like a good idea. It should be
doable as a feature without breaking the current API.

> 
> Possible way to solve the issue
> -------------------------------
> 
> When the driver can't take new Fill Ring frames, it shouldn't busy poll.
> Instead, it signals the failure to the application (e.g., with POLLERR), and
> after that it's up to the application to restart polling (e.g., by calling
> sendto()) after refilling the Fill Ring. The issue with this approach is
> that it changes the API, so we either have to deal with that or introduce
> some API version field.

See above. I like the idea here though.
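
For discussion's sake, the proposed flow could be modeled like this, with no
real XSK socket involved (every structure and function here is hypothetical):
the driver parks NAPI on an empty Fill Ring and latches an error condition
that the application would see as POLLERR, and the refill-then-kick path
restarts processing the way sendto() would.

```c
#include <assert.h>
#include <stdbool.h>

struct xsk_state {
	int fill_avail;      /* descriptors the application has posted */
	bool napi_parked;    /* driver stopped polling */
	bool pollerr;        /* error condition reported to the application */
};

/* Driver: NAPI poll. With frames available it consumes one; otherwise
 * it parks instead of rescheduling itself forever. */
static void drv_napi_poll(struct xsk_state *s)
{
	if (s->fill_avail > 0) {
		s->fill_avail--;                  /* RX one frame */
	} else {
		s->napi_parked = true;            /* stop polling... */
		s->pollerr = true;                /* ...and tell the application */
	}
}

/* Application: poll() analogue; returns true if POLLERR is pending. */
static bool app_poll(const struct xsk_state *s)
{
	return s->pollerr;
}

/* Application: refill the Fill Ring, then kick the driver (what
 * sendto() would do in the real API). */
static void app_refill_and_kick(struct xsk_state *s, int n)
{
	s->fill_avail += n;
	s->pollerr = false;
	if (s->napi_parked) {
		s->napi_parked = false;
		drv_napi_poll(s);                 /* restart NAPI */
	}
}
```

The CPU cost of a slow or malicious application then becomes one parked NAPI
context plus one kick per refill, instead of a busy loop.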

> 
> TX side getting stuck
> =====================
> 
> On the TX side, there is the Completion Ring that the application has to clean.
> If it doesn't, the i40e driver stops taking descriptors from the TX Ring. If the
> application finally completes something, the driver can go on transmitting.
> However, it would require busy polling the Completion Ring (just like with the
> Fill Ring on the RX side). i40e doesn't do that; instead, it relies on the
> application to kick the TX by calling sendto(). The issue is that poll() doesn't
> return POLLOUT in this case, because the TX Ring is full, so the application
> will never call sendto(), and the ring is stuck forever (or at least until
> something triggers NAPI).
> 
> Possible way to solve the issue
> -------------------------------
> 
> When the driver can't reserve a descriptor in the Completion Ring, it should
> signal the failure to the application (e.g., with POLLERR). The application
> shouldn't call sendto() every time it sees that the number of outstanding
> (not yet completed) frames is greater than zero (like the xdpsock sample
> does). Instead, the application should kick the TX only when it wants to
> flush the ring and, in addition, after resolving the cause of the POLLERR,
> i.e. after handling Completion Ring entries. The API will also have to
> change with this approach.
> 

+1. It seems to me this can be done without breaking the existing API, though.
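
To sketch the stuck state and the proposed recovery, here is the TX path
reduced to counters (not the kernel's code; names are illustrative): the
driver only takes a frame from the TX Ring if it can reserve a Completion
Ring slot for it, and progress resumes only after the application reaps
completions and kicks TX again.

```c
#include <assert.h>
#include <stdbool.h>

#define COMP_RING_SIZE 4

struct tx_state {
	int comp_outstanding;   /* completions the application has not reaped */
};

/* Driver: transmit one frame, reserving a Completion Ring slot for it.
 * Fails when the Completion Ring is full -- this is where i40e stops
 * taking TX descriptors instead of busy polling the Completion Ring. */
static bool drv_xmit(struct tx_state *s)
{
	if (s->comp_outstanding == COMP_RING_SIZE)
		return false;
	s->comp_outstanding++;
	return true;
}

/* Application: reap up to @n completions, then kick TX (sendto() in the
 * real API) so the driver can make progress again. Without the kick,
 * nothing retriggers the driver and the ring stays stuck. */
static bool app_complete_and_kick(struct tx_state *s, int n)
{
	while (n-- > 0 && s->comp_outstanding > 0)
		s->comp_outstanding--;
	return drv_xmit(s);     /* the kick restarts transmission */
}
```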

> Triggering NAPI on a different CPU core
> =======================================
> 
> .ndo_xsk_async_xmit runs on a random CPU core, so, to preserve CPU affinity,
> i40e triggers an interrupt to schedule NAPI, instead of calling napi_schedule
> directly. Scheduling NAPI on the correct CPU is what every driver would do, I
> guess, but currently it has to be implemented separately in every driver, and
> it relies on hardware features (the ability to trigger an IRQ).

Ideally the application would be pinned to a core and the traffic steered to
that core using ethtool/tc. Yes, it requires a bit of management on the
platform, but I think this is needed for the best pps numbers.

> 
> I suggest introducing a kernel API that would allow triggering NAPI on a given
> CPU. A brief look shows that something like smp_call_function_single_async can
> be used. Advantages:

Assuming you want to avoid pinning cores/traffic for some reason, could this
be done with some combination of cpumap + AF_XDP? I haven't thought too much
about it, though. Maybe Bjorn has some ideas.
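
For reference, the suggested helper could look something like the sketch
below. smp_call_function_single_async() is a real kernel primitive (it runs
the callback on the target CPU via IPI); here it is stubbed as a direct call
so the control flow compiles in userspace, and everything else (structure
layouts, napi_schedule_on_cpu) is hypothetical, not an existing API.

```c
#include <assert.h>

struct call_single_data {                 /* stand-in for call_single_data_t */
	void (*func)(void *info);
	void *info;
};

struct napi_struct {
	int scheduled;
	struct call_single_data csd;
};

/* Stub: the real smp_call_function_single_async() sends an IPI and runs
 * csd->func(csd->info) on @cpu; here we just invoke it inline. */
static int smp_call_function_single_async(int cpu,
					  struct call_single_data *csd)
{
	(void)cpu;
	csd->func(csd->info);
	return 0;
}

/* Runs on the target CPU in the real kernel; marks NAPI as scheduled
 * (would be __napi_schedule() there). */
static void __napi_schedule_remote(void *info)
{
	struct napi_struct *napi = info;
	napi->scheduled = 1;
}

/* Hypothetical helper: what .ndo_xsk_async_xmit could call instead of
 * triggering a hardware interrupt to move NAPI to the right core. */
static int napi_schedule_on_cpu(struct napi_struct *napi, int cpu)
{
	napi->csd.func = __napi_schedule_remote;
	napi->csd.info = napi;
	return smp_call_function_single_async(cpu, &napi->csd);
}
```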

> 
> 1. It lifts the hardware requirement to be able to raise an interrupt on demand.
> 
> 2. It would allow moving common code into the kernel (.ndo_xsk_async_xmit).
> 
> 3. It is also useful in the situation where CPU affinity changes while being in
> NAPI poll. Currently, i40e and mlx5e try to stop NAPI polling by returning
> a value less than budget if CPU affinity changes. However, there are cases
> (e.g., NAPIF_STATE_MISSED) when NAPI will be rescheduled on a wrong CPU. It's a
> race between the interrupt, which will move NAPI to the correct CPU, and
> __napi_schedule called from the wrong CPU. Having an API to schedule NAPI on
> a given CPU would benefit both mlx5e and i40e, because when this situation
> happens, it kills performance.

How would we know what core to trigger NAPI on?

> 
> I would be happy to hear your thoughts about these issues.

At least the first two observations seem incrementally solvable to me and
would be nice improvements. I assume the last case can be resolved by pinning
cores/traffic, but for the general case perhaps it is useful.

> 
> Thanks,
> Max
> 


Thread overview: 12+ messages
2019-02-26 14:49 AF_XDP design flaws Maxim Mikityanskiy
2019-02-26 16:41 ` John Fastabend [this message]
2019-02-27  8:08   ` Björn Töpel
2019-02-27 19:17     ` Maxim Mikityanskiy
2019-02-27 21:03       ` Jonathan Lemon
2019-02-28 10:49         ` Maxim Mikityanskiy
2019-03-05 18:26           ` Björn Töpel
2019-03-07 15:09             ` Maxim Mikityanskiy
2019-03-07 16:51               ` Alexei Starovoitov
2019-03-07 17:52               ` Björn Töpel
2019-03-21 15:46                 ` Maxim Mikityanskiy
2019-02-28 14:23         ` Magnus Karlsson
