From: Jon Hunter <jonathanh@nvidia.com>
To: Giuseppe Cavallaro <peppe.cavallaro@st.com>,
	Alexandre Torgue <alexandre.torgue@foss.st.com>,
	Jose Abreu <joabreu@synopsys.com>
Cc: "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	linux-tegra <linux-tegra@vger.kernel.org>,
	Thierry Reding <treding@nvidia.com>
Subject: [BUG] net: stmmac: Panic observed in stmmac_napi_poll_rx()
Date: Fri, 14 May 2021 15:24:58 +0100	[thread overview]
Message-ID: <b0b17697-f23e-8fa5-3757-604a86f3a095@nvidia.com> (raw)

Hello!

I have been looking into some random crashes that appear to stem from
the stmmac_napi_poll_rx() function. I have observed two different
panics ...

[   19.591967] ------------[ cut here ]------------
[   19.596701] kernel BUG at /home/jonathanh/kernel/include/linux/skbuff.h:2297!
[   19.606273] Internal error: Oops - BUG: 0 [#1] PREEMPT SMP
[   19.611847] Modules linked in:
[   19.615146] CPU: 0 PID: 9 Comm: ksoftirqd/0 Not tainted 5.10.0 #1
[   19.621357] Hardware name: NVIDIA Jetson AGX Xavier Developer Kit (DT)
[   19.628010] pstate: 00c00009 (nzcv daif +PAN +UAO -TCO BTYPE=--)
[   19.634134] pc : eth_type_trans+0xf8/0x1a0
[   19.638326] lr : stmmac_napi_poll_rx+0x340/0xbd0
[   19.643123] sp : ffff800011f63ba0
[   19.646544] x29: ffff800011f63ba0 x28: 00000000000005ea 
[   19.651995] x27: ffff0003d4fe48c0 x26: 00000000000000d3 
[   19.657532] x25: ffff0003d4fe48c0 x24: 0000000000000000 
[   19.662961] x23: ffff0000828d1a40 x22: ffff000083497f00 
[   19.668503] x21: ffff000084b4d6c0 x20: ffff800011b79000 
[   19.673934] x19: 0000000000000000 x18: 0000000000000000 
[   19.679472] x17: 0000000000000000 x16: ffff00008095e291 
[   19.684953] x15: 0000000000000001 x14: 4f77dff9f5ddb24a 
[   19.690425] x13: 3ecd4adcbed3184e x12: bca593c98536d7f5 
[   19.695860] x11: 62327b117d535683 x10: 69f6607bf62388d6 
[   19.701297] x9 : a65753eedd84defc x8 : ffff000084b4dcd0 
[   19.706836] x7 : ffff000084b4dcc0 x6 : 0000000000000000 
[   19.712271] x5 : 0000000000001000 x4 : 0000008000000000 
[   19.717810] x3 : 00000000ffffffea x2 : 00000000000005dc 
[   19.723241] x1 : ffff0003d4fe4000 x0 : ffff000083497f00 
[   19.728777] Call trace:
[   19.731322]  eth_type_trans+0xf8/0x1a0
[   19.735163]  stmmac_napi_poll_rx+0x340/0xbd0
[   19.739540]  net_rx_action+0x220/0x368
[   19.743387]  __do_softirq+0x10c/0x264
[   19.747145]  run_ksoftirqd+0x44/0x58
[   19.750840]  smpboot_thread_fn+0x1d0/0x298
[   19.755025]  kthread+0x120/0x150
[   19.758484]  ret_from_fork+0x10/0x1c
[   19.762235] Code: 52800063 33000862 39020002 17ffffe0 (d4210000) 
[   19.768472] ---[ end trace 8e68a50b77108dd8 ]---
[   19.773159] Kernel panic - not syncing: Oops - BUG: Fatal exception in interrupt
[   19.780636] SMP: stopping secondary CPUs
[   19.785053] Kernel Offset: disabled
[   19.788691] CPU features: 0x0240002,2300aa30
[   19.793044] Memory Limit: none
[   19.796183] ---[ end Kernel panic - not syncing: Oops - BUG: Fatal exception in interrupt ]---

... or ...

[   14.914607] Unable to handle kernel write to read-only memory at virtual address ffff000087ab3000
[   14.924064] Mem abort info:
[   14.927201]   ESR = 0x9600014f
[   14.930578]   EC = 0x25: DABT (current EL), IL = 32 bits
[   14.936484]   SET = 0, FnV = 0
[   14.939890]   EA = 0, S1PTW = 0
[   14.943603] Data abort info:
[   14.946781]   ISV = 0, ISS = 0x0000014f
[   14.950995]   CM = 1, WnR = 1
[   14.954334] swapper pgtable: 4k pages, 48-bit VAs, pgdp=00000004505ae000
[   14.961377] [ffff000087ab3000] pgd=000000047fff8003, p4d=000000047fff8003, pud=000000047fc6b003, pmd=000000047fc2d003, pte=0060000107ab3787
[   14.974932] Internal error: Oops: 9600014f [#1] PREEMPT SMP
[   14.980660] Modules linked in: tegra210_adma snd_hda_codec_hdmi snd_hda_tegra snd_hda_codec host1x snd_hda_core crct10dif_ce tegra_aconnect pwm_fan lm90 pwm_tegra tegra_bpmp_thermal phy_tegra194_p2u pcie_tegra194 at24 ip_tables x_tables ipv6
[   15.003003] CPU: 0 PID: 9 Comm: ksoftirqd/0 Not tainted 5.10.0 #1
[   15.009381] Hardware name: NVIDIA Jetson AGX Xavier Developer Kit (DT)
[   15.016058] pstate: 80c00009 (Nzcv daif +PAN +UAO -TCO BTYPE=--)
[   15.022364] pc : __dma_inv_area+0x40/0x58
[   15.026660] lr : arch_sync_dma_for_cpu+0x1c/0x28
[   15.031368] sp : ffff800011f63ba0
[   15.035035] x29: ffff800011f63ba0 x28: 0000000000000000 
[   15.040658] x27: ffff0003d4fb88c0 x26: 00000000000000be 
[   15.046253] x25: ffff0003d4fb88c0 x24: 00000000000005ea 
[   15.051960] x23: ffff0000828bd7a0 x22: 00000000ffffffa8 
[   15.057544] x21: 0000000000000002 x20: ffff000080929010 
[   15.063251] x19: 0000000107a3d000 x18: 0000000000000001 
[   15.068957] x17: 0000000000000000 x16: 0000000000000000 
[   15.074517] x15: 0000000000000068 x14: b17d6301a8460a08 
[   15.080185] x13: 010100003bd8701b x12: 1080e9f16ff0a040 
[   15.086033] x11: 3bb9990301080263 x10: a8c00163a8c0ccd5 
[   15.091856] x9 : 06400040fb17dc05 x8 : ffff000083e108c0 
[   15.097401] x7 : ffff000083e108c0 x6 : 00000000ffffffa8 
[   15.102924] x5 : ffff8000109ca6e0 x4 : 0000000000000000 
[   15.108612] x3 : 000000000000003f x2 : 0000000000000040 
[   15.114302] x1 : ffff000187a3cf80 x0 : ffff000087ab3000 
[   15.119952] Call trace:
[   15.122543]  __dma_inv_area+0x40/0x58
[   15.126521]  dma_sync_single_for_cpu+0xac/0x100
[   15.131195]  stmmac_napi_poll_rx+0x490/0xbd0
[   15.135786]  net_rx_action+0x220/0x368
[   15.139663]  __do_softirq+0x10c/0x264
[   15.143609]  run_ksoftirqd+0x44/0x58
[   15.147452]  smpboot_thread_fn+0x1d0/0x298
[   15.151678]  kthread+0x120/0x150
[   15.155042]  ret_from_fork+0x10/0x1c
[   15.158772] Code: 8a230000 54000060 d50b7e20 14000002 (d5087620) 
[   15.165228] ---[ end trace 23cff873d17a8509 ]---


This is seen on our Tegra194 Jetson AGX Xavier, and it seems more
likely to occur when the CPU is operating at lower frequencies. For
instance, if I set the CPU frequency to the minimum of 115.2 MHz, the
problem occurs more often, though not 100% of the time.

Please note that although the above panic logs are from v5.10, the
problem is also seen with the current mainline.

The BUG_ON being triggered in skbuff.h (in __skb_pull()) is the
following ...

 void *skb_pull(struct sk_buff *skb, unsigned int len);
 static inline void *__skb_pull(struct sk_buff *skb, unsigned int len)
 {
         skb->len -= len;
         BUG_ON(skb->len < skb->data_len);
         return skb->data += len;
 }

Looking into the panic triggered in skbuff.h, I have noticed that
when it occurs the value of skb->data_len is unusually large ...

 __skb_pull: len 1500 (14), data_len 4294967274

I then added some traces to stmmac_napi_poll_rx() and
stmmac_rx_buf2_len() to trace the values of various variables, and
when the problem occurs I see ...

 stmmac_napi_poll_rx: stmmac_rx: count 0, len 1518, buf1 66, buf2 1452
 stmmac_napi_poll_rx: stmmac_rx_buf2_len: len 66, plen 1518
 stmmac_napi_poll_rx: stmmac_rx: count 1, len 1518, buf1 66, buf2 1452
 stmmac_napi_poll_rx: stmmac_rx_buf2_len: len 66, plen 1536
 stmmac_napi_poll_rx: stmmac_rx: count 2, len 1602, buf1 66, buf2 1536
 stmmac_napi_poll_rx: stmmac_rx_buf2_len: len 1602, plen 1518
 stmmac_napi_poll_rx: stmmac_rx: count 2, len 1518, buf1 0, buf2 4294967212
 stmmac_napi_poll_rx: stmmac_rx: dma_buf_sz 1536, buf1 0, buf2 4294967212

The above shows that occasionally the value of 'len' passed to
stmmac_rx_buf2_len() is greater than the value returned by
stmmac_get_rx_frame_len() and stored in 'plen'. This causes
stmmac_rx_buf2_len() to return a negative value that is then treated
as unsigned. So it is clear to me that unsigned wraparound is
happening occasionally, but I am not sure how to fix it. I tried the
following, but it appears to cause other issues ...

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 4749bd0af160..26336de80fe8 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -3728,7 +3728,7 @@ static unsigned int stmmac_rx_buf2_len(struct stmmac_priv *priv,
 
        /* Not last descriptor */
        if (status & rx_not_ls)
-               return priv->dma_buf_sz;
+               return priv->dma_buf_sz - len;
 
        plen = stmmac_get_rx_frame_len(priv, p, coe);
 
@@ -3848,7 +3848,7 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
 
                buf1_len = stmmac_rx_buf1_len(priv, p, status, len);
                len += buf1_len;
-               buf2_len = stmmac_rx_buf2_len(priv, p, status, len);
+               buf2_len = stmmac_rx_buf2_len(priv, p, status, buf1_len);
                len += buf2_len;


The patch I added to trace the various variables is as follows ...

diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
index 4749bd0af160..975059bb3165 100644
--- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
+++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
@@ -3723,15 +3723,20 @@ static unsigned int stmmac_rx_buf2_len(struct stmmac_priv *priv,
        unsigned int plen = 0;
 
        /* Not split header, buffer is not available */
-       if (!priv->sph)
+       if (!priv->sph) {
+               trace_printk("%s: len %u, plen %u\n", __func__, len, priv->dma_buf_sz);
                return 0;
+       }
 
        /* Not last descriptor */
-       if (status & rx_not_ls)
+       if (status & rx_not_ls) {
+               trace_printk("%s: len %u, plen %u\n", __func__, len, priv->dma_buf_sz);
                return priv->dma_buf_sz;
+       }
 
        plen = stmmac_get_rx_frame_len(priv, p, coe);
 
+       trace_printk("%s: len %u, plen %u\n", __func__, len, plen);
        /* Last descriptor */
        return plen - len;
 }
@@ -3850,7 +3855,13 @@ static int stmmac_rx(struct stmmac_priv *priv, int limit, u32 queue)
                len += buf1_len;
                buf2_len = stmmac_rx_buf2_len(priv, p, status, len);
                len += buf2_len;
-
+               trace_printk("%s: count %u, len %u, buf1 %u, buf2 %u\n", __func__,
+                            count, len, buf1_len, buf2_len);
+               if (buf1_len > priv->dma_buf_sz || buf2_len > priv->dma_buf_sz) {
+                       trace_printk("%s: dma_buf_sz %u, buf1 %u, buf2 %u\n",
+                               __func__, priv->dma_buf_sz, buf1_len, buf2_len);
+                       BUG_ON(1);
+               }
                /* ACS is set; GMAC core strips PAD/FCS for IEEE 802.3
                 * Type frames (LLC/LLC-SNAP)
                 *

Note that I added the BUG_ON above to trap unusually large buffer
lengths. Let me know if you have any thoughts.

Thanks!
Jon

-- 
nvpublic

Thread overview: 4+ messages
2021-05-14 14:24 Jon Hunter [this message]
2021-05-14 21:49 ` [BUG] net: stmmac: Panic observed in stmmac_napi_poll_rx() Michał Mirosław
2021-05-17 10:39   ` Jon Hunter
2021-05-24 12:48     ` Mikko Perttunen
