* [regression] UDP recv data corruption
@ 2021-07-01 10:47 Matthias Treydte
From: Matthias Treydte @ 2021-07-01 10:47 UTC
  To: stable; +Cc: netdev, regressions, davem, yoshfuji, dsahern

Hello,

we recently upgraded the Linux kernel from 5.11.21 to 5.12.12 in our
video stream receiver appliance and noticed compression artifacts on
video streams that previously looked fine. We are receiving UDP
multicast MPEG-TS streams through an FFmpeg / libav layer, which does
the socket and lower-level protocol handling. On affected kernels it
floods the log with messages like

> [mpegts @ 0x7fa130000900] Packet corrupt (stream = 0, dts = 6870802195).
> [mpegts @ 0x7fa11c000900] Packet corrupt (stream = 0, dts = 6870821068).
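
Independently of FFmpeg, this kind of corruption is easy to spot
because every 188-byte MPEG-TS packet must start with the 0x47 sync
byte. Below is a minimal standalone receiver sketch along those lines;
the group address and port are placeholders (our production path goes
through FFmpeg's udp.c, not this code), and error handling is omitted
for brevity:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

#define GROUP   "239.0.0.1"   /* placeholder group, not our real stream */
#define PORT    5004          /* placeholder port */
#define TS_PKT  188           /* MPEG-TS packet size */
#define TS_SYNC 0x47          /* MPEG-TS sync byte */

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = { 0 };
    struct ip_mreq mreq = { 0 };
    unsigned char buf[TS_PKT * 7];  /* 7 TS packets per datagram is typical */

    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(PORT);
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    mreq.imr_multiaddr.s_addr = inet_addr(GROUP);
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

    for (;;) {
        ssize_t n = recv(fd, buf, sizeof(buf), 0);

        /* Every 188-byte slice of the datagram must start with 0x47;
         * anything else means the payload was mangled somewhere. */
        for (ssize_t off = 0; off + TS_PKT <= n; off += TS_PKT)
            if (buf[off] != TS_SYNC)
                fprintf(stderr, "bad sync byte at offset %zd\n", off);
    }
}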

Bisecting identified commit 18f25dc399901426dff61e676ba603ff52c666f7
as the one introducing the problem in the mainline kernel. It was
backported to the 5.12 series as commit
450687386cd16d081b58cd7a342acff370a96078. Some observations that may
help in understanding what's going on:

    * the problem exists in Linux 5.13
    * reverting that commit on top of 5.13 makes the problem go away
    * Linux 5.10.45 is fine
    * no relevant output in dmesg
    * can be reproduced on different hardware (Intel, AMD, different NICs, ...)
    * we do use the bonding driver on these systems (but I have not
      yet verified that this is related)
    * we do not use vxlan (mentioned in the commit message)
    * the relevant code in FFmpeg that flags the packet corruption is
      here (a simplified sketch of that check follows below):
      https://github.com/FFmpeg/FFmpeg/blob/master/libavformat/mpegts.c#L2758
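
For reference, what that FFmpeg check boils down to is the standard
MPEG-TS continuity-counter rule: the low 4 bits of byte 3 increment
(mod 16) on every payload-carrying packet of a PID. A simplified
sketch of that check, with made-up names (FFmpeg's real implementation
also honors the discontinuity indicator, omitted here):

#include <stdint.h>

/* Per-PID demuxer state; FFmpeg keeps the equivalent in its own
 * per-filter state, the names here are illustrative only. */
struct ts_pid_state {
    int last_cc;    /* last continuity counter seen, -1 initially */
};

/* Returns 1 if the TS packet's continuity counter is consistent with
 * the previous packet of this PID, 0 if something was lost or mangled
 * in between -- roughly the condition behind the "Packet corrupt"
 * messages quoted above. */
static int ts_cc_ok(struct ts_pid_state *st, const uint8_t *pkt)
{
    int has_payload = pkt[3] & 0x10;  /* adaptation field control bits */
    int cc = pkt[3] & 0x0f;           /* 4-bit continuity counter */
    /* The counter advances only on packets that carry a payload. */
    int expected = has_payload ? (st->last_cc + 1) & 0x0f : st->last_cc;
    int ok = st->last_cc < 0 || cc == expected;

    st->last_cc = cc;
    return ok;
}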

And the bonding configuration:

# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v5.10.45

Bonding Mode: fault-tolerance (active-backup)
Primary Slave: None
Currently Active Slave: enp2s0
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 0
Down Delay (ms): 0
Peer Notification Delay (ms): 0

Slave Interface: enp2s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 80:ee:73:XX:XX:XX
Slave queue ID: 0

Slave Interface: enp3s0
MII Status: down
Speed: Unknown
Duplex: Unknown
Link Failure Count: 0
Permanent HW addr: 80:ee:73:XX:XX:XX
Slave queue ID: 0


If there is anything else I can do to help track this down, please
let me know.


Regards,
-Matthias Treydte




Thread overview: 11+ messages
2021-07-01 10:47 [regression] UDP recv data corruption Matthias Treydte
2021-07-01 15:39 ` David Ahern
2021-07-02  0:31   ` Willem de Bruijn
2021-07-02 11:42     ` Paolo Abeni
2021-07-02 14:31       ` Matthias Treydte
2021-07-02 12:36     ` Matthias Treydte
2021-07-02 14:06       ` Paolo Abeni
2021-07-02 14:21         ` Paolo Abeni
2021-07-02 15:23           ` Matthias Treydte
2021-07-02 15:32             ` Paolo Abeni
2021-07-02 16:07               ` Matthias Treydte
