From: Vinicius Costa Gomes <vinicius.gomes@intel.com>
To: Jacob Keller <jacob.e.keller@intel.com>, intel-wired-lan@osuosl.org
Subject: Re: [Intel-wired-lan] [PATCH net-next 2/2] ice: Remove gettime HW semaphore
Date: Mon, 17 Oct 2022 17:50:47 -0700
Message-ID: <877d0yt0ns.fsf@intel.com>
In-Reply-To: <65b11d3a-8806-7d1d-e010-eb886af9f772@intel.com>

Jacob Keller <jacob.e.keller@intel.com> writes:

> On 10/5/2022 2:10 PM, Vinicius Costa Gomes wrote:
>> "Kolacinski, Karol" <karol.kolacinski@intel.com> writes:
>> 
>>> Hi Vinicius,
>>>
>>>> I think the problem is less about concurrent writes/reads than
>>>> concurrent reads: the fact that the registers are latched when the
>>>> "lower" register is read, makes me worried that there's a (very narrow)
>>>> window during rollover in which the "losing" read (of multiple threads
>>>> doing reads) can return a wrong value.
>>>
>>> The issue in this case is that it's either the risk of reading a slightly
>>> wrong value or having multiple timeouts and errors.
>>> We experienced a lot of simultaneous reads on multiple PFs (especially
>>> on E822 HW with 8 ports), and even with an increased timeout to acquire
>>> the HW semaphore, it still failed.
>> 
>> I am wondering if using a HW semaphore is making the problem worse than
>> it needs to be. Why can't a kernel spinlock be used?
>> 
>> 
>> Cheers,
>
> The same clock is shared across multiple ports, which operate as
> independent PCIe devices and hence have their own instances of the ice
> driver structures. A spinlock doesn't work because they wouldn't be
> using the same lock.
>

Oh! I should have realized that. The thought that there could be
multiple devices/ports sharing some resources didn't cross my mind.

> We could try to share the lock in software between PFs, but it's actually
> quite difficult to do that with the existing PCIe driver model.

I can see how that would be difficult, yeah.
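
To make the window I'm worried about more concrete, below is a rough
userspace simulation of the read pattern I have in mind: two readers,
no lock, and a "read LO latches HI" register pair where the latch is a
single snapshot shared by all readers. That last part is my assumption
about how the hardware behaves, and it's exactly what I'd like to
confirm; this is obviously not driver code, just a sketch of the
failure mode.

/* Not ice driver code: a userspace model of a 64-bit clock exposed as two
 * 32-bit registers, where reading the low word latches the high word into
 * one snapshot register shared by every reader (my assumption).
 *
 * Build with: gcc -O2 -pthread -std=c11 -o latch-race latch-race.c
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

static _Atomic uint64_t hw_time;    /* the "real" 64-bit clock */
static _Atomic uint32_t latched_hi; /* shared snapshot written on a LO read */

static uint32_t read_lo(void)
{
        uint64_t t = atomic_load(&hw_time);

        /* Reading LO latches HI, and the latch is shared between readers. */
        atomic_store(&latched_hi, (uint32_t)(t >> 32));
        return (uint32_t)t;
}

static uint32_t read_hi(void)
{
        return atomic_load(&latched_hi);
}

static void *reader(void *arg)
{
        long id = (long)(intptr_t)arg;

        for (int i = 0; i < 10 * 1000 * 1000; i++) {
                /* The clock is monotonic, so a sane read must land between
                 * the values observed just before and just after it. */
                uint64_t before = atomic_load(&hw_time);
                uint32_t lo = read_lo();
                uint64_t now = ((uint64_t)read_hi() << 32) | lo;
                uint64_t after = atomic_load(&hw_time);

                if (now < before || now > after)
                        printf("reader %ld: torn read 0x%016llx, "
                               "clock was in [0x%016llx, 0x%016llx]\n",
                               id, (unsigned long long)now,
                               (unsigned long long)before,
                               (unsigned long long)after);
        }
        return NULL;
}

static void *ticker(void *arg)
{
        (void)arg;
        /* Big steps so the high word rolls over often during the run. */
        for (;;)
                atomic_fetch_add(&hw_time, 1ULL << 20);
        return NULL;
}

int main(void)
{
        pthread_t tick, r1, r2;

        pthread_create(&tick, NULL, ticker, NULL);
        pthread_create(&r1, NULL, reader, (void *)1);
        pthread_create(&r2, NULL, reader, (void *)2);
        pthread_join(r1, NULL);
        pthread_join(r2, NULL);
        return 0;
}

If the latch really is shared like this, I'd expect the occasional value
that is roughly 2^32 ticks off; if the latch is per-port or per-reader,
the sketch doesn't apply and my worry is unfounded.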

Did you happen to test whether my fears are justified? For example,
'phc2sys' running in parallel with a few (4?) loops of
'while true; do phc_ctl <dev> get; done'. Do you notice any weirdness?


Cheers,
-- 
Vinicius
