* [PATCH net] net: dsa: ksz: don't pad a cloned sk_buff
@ 2020-10-16 7:35 Christian Eggers
2020-10-16 14:00 ` Andrew Lunn
0 siblings, 1 reply; 3+ messages in thread
From: Christian Eggers @ 2020-10-16 7:35 UTC (permalink / raw)
To: Woojung Huh, Andrew Lunn, Vivien Didelot, Florian Fainelli,
Vladimir Oltean
Cc: Microchip Linux Driver Support, David S . Miller, Jakub Kicinski,
netdev, linux-kernel, Christian Eggers
If the supplied sk_buff is cloned (e.g. in dsa_skb_tx_timestamp()),
__skb_put_padto() will allocate a new sk_buff with size = skb->len +
padlen. So the condition just tested for, (skb_tailroom(skb) >= padlen +
len), is no longer fulfilled. Although the real size will usually be
larger than skb->len + padlen (due to alignment), there is no guarantee
that the required memory for the tail tag will be available.

Instead of letting __skb_put_padto() allocate a new (too small) sk_buff,
let's take the already existing path and allocate a new sk_buff ourselves
(with sufficient size).
Fixes: 8b8010fb7876 ("dsa: add support for Microchip KSZ tail tagging")
Signed-off-by: Christian Eggers <ceggers@arri.de>
---
I am not sure whether this is a problem for current kernels (it depends
on whether cloned sk_buffs can occur on any path). But when adding time
stamping (which will be submitted soon), this will become an issue.
This patch supersedes "net: dsa: ksz: fix padding size of skb" from
yesterday.
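For reference, the in-place-padding condition can be sketched as a small
userspace model (hypothetical struct and field names, not kernel code;
ETH_ZLEN and the 1-byte tag length are assumptions for illustration):

```c
#include <assert.h>
#include <stdbool.h>

#define ETH_ZLEN 60   /* minimum Ethernet frame length, without FCS */

/* Hypothetical model of the fields ksz_common_xmit() looks at. */
struct skb_model {
    int len;       /* current frame length */
    int tailroom;  /* free bytes after the data */
    bool cloned;   /* data area shared with a clone */
};

/* Returns true when padding and tail tag can be written in place;
 * otherwise a new, larger buffer must be allocated. */
static bool can_pad_in_place(const struct skb_model *skb, int tag_len)
{
    int padlen = (skb->len >= ETH_ZLEN) ? 0 : ETH_ZLEN - skb->len;

    /* The fix: a cloned skb shares its data area, so even sufficient
     * tailroom does not permit writing the pad and tag in place. */
    return skb->tailroom >= padlen + tag_len && !skb->cloned;
}
```

With this check, a short cloned frame takes the reallocation path even
when its tailroom would otherwise have been large enough.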
net/dsa/tag_ksz.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/net/dsa/tag_ksz.c b/net/dsa/tag_ksz.c
index 945a9bd5ba35..cb1f27e15201 100644
--- a/net/dsa/tag_ksz.c
+++ b/net/dsa/tag_ksz.c
@@ -22,7 +22,7 @@ static struct sk_buff *ksz_common_xmit(struct sk_buff *skb,
padlen = (skb->len >= ETH_ZLEN) ? 0 : ETH_ZLEN - skb->len;
- if (skb_tailroom(skb) >= padlen + len) {
+ if (skb_tailroom(skb) >= padlen + len && !skb_cloned(skb)) {
/* Let dsa_slave_xmit() free skb */
if (__skb_put_padto(skb, skb->len + padlen, false))
return NULL;
@@ -45,7 +45,7 @@ static struct sk_buff *ksz_common_xmit(struct sk_buff *skb,
/* Let skb_put_padto() free nskb, and let dsa_slave_xmit() free
* skb
*/
- if (skb_put_padto(nskb, nskb->len + padlen))
+ if (skb_put_padto(nskb, ETH_ZLEN + len))
return NULL;
consume_skb(skb);
--
Christian Eggers
Embedded software developer
* Re: [PATCH net] net: dsa: ksz: don't pad a cloned sk_buff
2020-10-16 7:35 [PATCH net] net: dsa: ksz: don't pad a cloned sk_buff Christian Eggers
@ 2020-10-16 14:00 ` Andrew Lunn
2020-10-16 14:06 ` Vladimir Oltean
0 siblings, 1 reply; 3+ messages in thread
From: Andrew Lunn @ 2020-10-16 14:00 UTC (permalink / raw)
To: Christian Eggers
Cc: Woojung Huh, Vivien Didelot, Florian Fainelli, Vladimir Oltean,
Microchip Linux Driver Support, David S . Miller, Jakub Kicinski,
netdev, linux-kernel
On Fri, Oct 16, 2020 at 09:35:27AM +0200, Christian Eggers wrote:
> If the supplied sk_buff is cloned (e.g. in dsa_skb_tx_timestamp()),
> __skb_put_padto() will allocate a new sk_buff with size = skb->len +
> padlen. So the condition just tested for, (skb_tailroom(skb) >= padlen +
> len), is no longer fulfilled. Although the real size will usually be
> larger than skb->len + padlen (due to alignment), there is no guarantee
> that the required memory for the tail tag will be available.
>
> Instead of letting __skb_put_padto() allocate a new (too small) sk_buff,
> let's take the already existing path and allocate a new sk_buff ourselves
> (with sufficient size).
Hi Christian
What is not clear to me is why not change the __skb_put_padto() call
to pass the correct length?
Andrew
* Re: [PATCH net] net: dsa: ksz: don't pad a cloned sk_buff
2020-10-16 14:00 ` Andrew Lunn
@ 2020-10-16 14:06 ` Vladimir Oltean
0 siblings, 0 replies; 3+ messages in thread
From: Vladimir Oltean @ 2020-10-16 14:06 UTC (permalink / raw)
To: Andrew Lunn
Cc: Christian Eggers, Woojung Huh, Vivien Didelot, Florian Fainelli,
Microchip Linux Driver Support, David S . Miller, Jakub Kicinski,
netdev, linux-kernel
On Fri, Oct 16, 2020 at 04:00:36PM +0200, Andrew Lunn wrote:
> On Fri, Oct 16, 2020 at 09:35:27AM +0200, Christian Eggers wrote:
> > If the supplied sk_buff is cloned (e.g. in dsa_skb_tx_timestamp()),
> > __skb_put_padto() will allocate a new sk_buff with size = skb->len +
> > padlen. So the condition just tested for, (skb_tailroom(skb) >= padlen +
> > len), is no longer fulfilled. Although the real size will usually be
> > larger than skb->len + padlen (due to alignment), there is no guarantee
> > that the required memory for the tail tag will be available.
> >
> > Instead of letting __skb_put_padto() allocate a new (too small) sk_buff,
> > let's take the already existing path and allocate a new sk_buff ourselves
> > (with sufficient size).
>
> Hi Christian
>
> What is not clear to me is why not change the __skb_put_padto() call
> to pass the correct length?
There is a second call to skb_put() that increases skb->len further,
taking from the tailroom area. See Christian's other patch.
I would treat this patch as "premature" until we fully understand what's
going on there.
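The length arithmetic behind this concern can be sketched in a small
userspace model (the tag length and helper name are assumptions, not
the driver's actual symbols):

```c
#include <assert.h>

#define ETH_ZLEN 60            /* minimum Ethernet frame length */
#define KSZ_TAIL_TAG_LEN 1     /* assumed 1-byte KSZ tail tag */

/* Minimal model: total bytes the transmit path consumes when a frame
 * shorter than ETH_ZLEN is first padded and then tail-tagged. */
static int required_buffer_len(int payload_len, int tag_len)
{
    int padded = (payload_len < ETH_ZLEN) ? ETH_ZLEN : payload_len;

    /* The tail tag is appended with a second skb_put() *after*
     * padding, so the buffer must hold padded + tag_len bytes. */
    return padded + tag_len;
}
```

A 40-byte frame therefore needs a 61-byte buffer (60 bytes after
padding, plus one tag byte), which is why padding alone does not
account for all of the tailroom that gets consumed.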