From: "Neftin, Sasha" <sasha.neftin@intel.com>
To: Mario.Limonciello@dell.com, pmenzel@molgen.mpg.de,
	jeffrey.t.kirsher@intel.com
Cc: intel-wired-lan@lists.osuosl.org, linux-kernel@vger.kernel.org
Subject: Re: [Intel-wired-lan] MDI errors during resume from ACPI S3 (suspend to ram)
Date: Wed, 7 Aug 2019 10:23:59 +0300	[thread overview]
Message-ID: <d0aaa0f8-b94c-be65-7a4e-f5592aa65647@intel.com> (raw)
In-Reply-To: <2277f25bc44c4aebaac59942de2e24bb@AUSX13MPC105.AMER.DELL.COM>

On 8/6/2019 18:53, Mario.Limonciello@dell.com wrote:
>> -----Original Message-----
>> From: Paul Menzel <pmenzel@molgen.mpg.de>
>> Sent: Tuesday, August 6, 2019 10:36 AM
>> To: Jeff Kirsher
>> Cc: intel-wired-lan@lists.osuosl.org; Linux Kernel Mailing List; Limonciello, Mario
>> Subject: MDI errors during resume from ACPI S3 (suspend to ram)
>>
>> Dear Linux folks,
>>
>>
>> Trying to decrease the resume time of Linux 5.3-rc3 on the Dell OptiPlex
>> 5040 with the device below
>>
>>      $ lspci -nn -s 00:1f.6
>>      00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I219-V [8086:15b8] (rev 31)
>>
>> pm-graph’s script `sleepgraph.py` shows that the driver *e1000e* takes
>> around 400 ms to resume, which is quite a lot. The call graph trace shows
>> that `e1000e_read_phy_reg_mdic()` accounts for much of that time. From
>> `drivers/net/ethernet/intel/e1000e/phy.c` [1]:
>>
>>          for (i = 0; i < (E1000_GEN_POLL_TIMEOUT * 3); i++) {
>>                  udelay(50);
>>                  mdic = er32(MDIC);
>>                  if (mdic & E1000_MDIC_READY)
>>                          break;
>>          }
>>          if (!(mdic & E1000_MDIC_READY)) {
>>                  e_dbg("MDI Read did not complete\n");
>>                  return -E1000_ERR_PHY;
>>          }
>>          if (mdic & E1000_MDIC_ERROR) {
>>                  e_dbg("MDI Error\n");
>>                  return -E1000_ERR_PHY;
>>          }
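
For scale, a rough worst-case bound on the loop above, assuming
E1000_GEN_POLL_TIMEOUT is still 640 in the tree under test (an assumption
worth double-checking in the e1000e defines):

        /*
         * Hypothetical worst case for one MDIC access that never sees
         * E1000_MDIC_READY (the value 640 is assumed, not taken from this
         * thread):
         *
         *   E1000_GEN_POLL_TIMEOUT * 3 iterations * 50 us/iteration
         *     = 640 * 3 * 50 us = 96000 us, i.e. roughly 96 ms
         *
         * A few such timeouts per resume would account for most of the
         * ~400 ms reported above.
         */
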
>>
>> Unfortunately, these errors are not logged if dynamic debug is disabled,
>> so after rebuilding the Linux kernel with `CONFIG_DYNAMIC_DEBUG` and running
>>
>>      echo "file drivers/net/ethernet/* +p" | sudo tee /sys/kernel/debug/dynamic_debug/control
>>
>> I got the messages below.
>>
>>      [ 4159.204192] e1000e 0000:00:1f.6 net00: MDI Error
>>      [ 4160.267950] e1000e 0000:00:1f.6 net00: MDI Write did not complete
>>      [ 4160.359855] e1000e 0000:00:1f.6 net00: MDI Error
>>
>> Can you please shed a little more light on these errors? Please
>> find the full log attached.
>>
>>
>> Kind regards,
>>
>> Paul
>>
>>
>> [1]:
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/intel/e1000e/phy.c#n206
> 
> Strictly as a reference point, you may consider trying the out-of-tree driver to see
> whether these behaviors persist.
> 
> https://sourceforge.net/projects/e1000/
> 
We are using an external PHY. It can require ~200 ms to complete an MDIC
transaction (depending on the project). You need to take this time into
account before accessing the PHY. I do not recommend decreasing the timer
in the 'e1000e_read_phy_reg_mdic()' method; we could end up with a wrong
MDI access.
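
A minimal sketch of that idea, purely for illustration (the helper name and
the fixed 200 ms budget are assumptions here, not the in-tree implementation):
budget the PHY wake-up once in the resume path, before the first MDIC
transaction, rather than shrinking the per-access poll.

        #include <linux/delay.h>

        /*
         * Hypothetical helper, not the in-tree code: give the external PHY
         * time to become responsive after resume, before the first MDIC
         * transaction is issued. The ~200 ms figure is the one quoted above
         * and is platform ("project") dependent.
         */
        static void phy_settle_after_resume(void)
        {
                /* Resume handlers run in process context, so msleep() is
                 * appropriate for a delay of this length.
                 */
                msleep(200);
        }

Called once from the resume path before the first e1000e_read_phy_reg_mdic()
call, this keeps the per-access MDIC timeout untouched, which is the point of
the recommendation above.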

Thread overview: 10+ messages

2019-08-06 15:36 MDI errors during resume from ACPI S3 (suspend to ram) Paul Menzel
2019-08-06 15:53 ` Mario.Limonciello
2019-08-07  7:23   ` Neftin, Sasha [this message]
2019-08-07 14:55     ` Paul Menzel
2019-08-08  6:10       ` Neftin, Sasha
