From: "Jan Beulich" <JBeulich@suse.com>
To: "Ian Campbell" <ian.campbell@citrix.com>
Cc: "Wei Liu" <wei.liu2@citrix.com>, <davem@davemloft.net>,
"Dion Kant" <g.w.kant@hunenet.nl>, <xen-devel@lists.xen.org>,
<netdev@vger.kernel.org>, <stable@vger.kernel.org>
Subject: Re: [Xen-devel] [PATCH] xen-netfront: pull on receive skb may need to happen earlier
Date: Wed, 10 Jul 2013 14:58:42 +0100 [thread overview]
Message-ID: <51DD84B202000078000E3EF8@nat28.tlf.novell.com> (raw)
In-Reply-To: <1373460644.5453.109.camel@hastur.hellion.org.uk>
>>> On 10.07.13 at 14:50, Ian Campbell <ian.campbell@citrix.com> wrote:
> On Wed, 2013-07-10 at 11:46 +0100, Jan Beulich wrote:
>> >>> On 10.07.13 at 12:04, Wei Liu <wei.liu2@citrix.com> wrote:
>> > Jan, looking at the commit log, the overrun issue in
>> > xennet_get_responses was not introduced by __pskb_pull_tail. The call to
>> > xennet_fill_frags has always been in the same place.
>>
>> I'm convinced it was: Prior to that commit, if the first response slot
>> contained up to RX_COPY_THRESHOLD bytes, it got entirely
>> consumed into the linear portion of the SKB, leaving the number of
>> fragments available for filling at MAX_SKB_FRAGS. Said commit
>> dropped the early copying, leaving the fragment count at 1
>> unconditionally, and now accumulates all of the response slots into
>> fragments, only pulling after all of them got filled in. It neglected to
>> realize - due to the count now always being 1 at the beginning - that
>> this can lead to MAX_SKB_FRAGS + 1 frags getting filled, corrupting
>> memory.
>
> That argument makes sense to me.
>
> Is it possible to hit a scenario where we need to pull more than
> RX_COPY_THRESHOLD in order to fit all of the data in MAX_SKB_FRAGS?
I'm not aware of any, but I'm no expert here in any way.
> Does this relate somehow to the patch Annie has sent out recently too?
I don't think so.
Jan
Thread overview: 22+ messages
[not found] <8511913.uMAmUdIO30@eistomin.edss.local>
[not found] ` <20130517085923.GC14401@zion.uk.xensource.com>
[not found] ` <51D57C1F.8070909@hunenet.nl>
[not found] ` <20130704150137.GW7483@zion.uk.xensource.com>
2013-07-05 9:32 ` [PATCH] xen-netfront: pull on receive skb may need to happen earlier Jan Beulich
2013-07-05 14:53 ` Wei Liu
2013-07-07 1:10 ` David Miller
2013-07-08 9:59 ` Jan Beulich
2013-07-08 12:16 ` Dion Kant
2013-07-08 12:41 ` Jan Beulich
2013-07-08 14:20 ` [Xen-devel] " Jan Beulich
2013-07-08 15:22 ` Eric Dumazet
2013-07-09 7:47 ` Jan Beulich
2013-07-08 15:48 ` Wei Liu
2013-07-09 6:52 ` Jan Beulich
2013-07-09 16:51 ` Wei Liu
2013-07-10 6:58 ` Jan Beulich
2013-07-10 10:04 ` Wei Liu
2013-07-10 10:46 ` Jan Beulich
2013-07-10 12:50 ` Ian Campbell
2013-07-10 12:53 ` Wei Liu
2013-07-10 13:58 ` Jan Beulich [this message]
2013-07-10 13:19 ` Eric Dumazet
2013-07-12 8:32 ` Wei Liu
2013-07-12 8:56 ` Jan Beulich
2013-07-13 11:26 ` Dion Kant