linux-nvdimm.lists.01.org archive mirror
 help / color / mirror / Atom feed
From: Andy Shevchenko <andy.shevchenko@gmail.com>
To: Dan Williams <dan.j.williams@intel.com>
Cc: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>,
	Stable <stable@vger.kernel.org>, Len Brown <lenb@kernel.org>,
	Borislav Petkov <bp@alien8.de>, James Morse <james.morse@arm.com>,
	Erik Kaneda <erik.kaneda@intel.com>,
	Myron Stowe <myron.stowe@redhat.com>,
	"Rafael J. Wysocki" <rjw@rjwysocki.net>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	linux-nvdimm@lists.01.org
Subject: Re: [PATCH] ACPI: Drop rcu usage for MMIO mappings
Date: Thu, 7 May 2020 12:24:49 +0300	[thread overview]
Message-ID: <CAHp75Vf0zBnwHubK+C265M9nh3Y5K2K=8ck61HQtnW+021bgwQ@mail.gmail.com> (raw)
In-Reply-To: <158880834905.2183490.15616329469420234017.stgit@dwillia2-desk3.amr.corp.intel.com>

On Thu, May 7, 2020 at 3:21 AM Dan Williams <dan.j.williams@intel.com> wrote:
>
> Recently a performance problem was reported for a process invoking a
> non-trival ASL program. The method call in this case ends up
> repetitively triggering a call path like:
>
>     acpi_ex_store
>     acpi_ex_store_object_to_node
>     acpi_ex_write_data_to_field
>     acpi_ex_insert_into_field
>     acpi_ex_write_with_update_rule
>     acpi_ex_field_datum_io
>     acpi_ex_access_region
>     acpi_ev_address_space_dispatch
>     acpi_ex_system_memory_space_handler
>     acpi_os_map_cleanup.part.14
>     _synchronize_rcu_expedited.constprop.89
>     schedule
>
> The end result of frequent synchronize_rcu_expedited() invocation is
> tiny sub-millisecond spurts of execution where the scheduler freely
> migrates this apparently sleepy task. The overhead of frequent scheduler
> invocation multiplies the execution time by a factor of 2-3X.
>
> For example, performance improves from 16 minutes to 7 minutes for a
> firmware update procedure across 24 devices.
>
> Perhaps the rcu usage was intended to allow for not taking a sleeping
> lock in the acpi_os_{read,write}_memory() path which ostensibly could be
> called from an APEI NMI error interrupt? Neither rcu_read_lock() nor
> ioremap() are interrupt safe, so add a WARN_ONCE() to validate that rcu
> was not serving as a mechanism to avoid direct calls to ioremap(). Even
> the original implementation had a spin_lock_irqsave(), but that is not
> NMI safe.
>
> APEI itself already has some concept of avoiding ioremap() from
> interrupt context (see erst_exec_move_data()), if the new warning
> triggers it means that APEI either needs more instrumentation like that
> to pre-emptively fail, or more infrastructure to arrange for pre-mapping
> the resources it needs in NMI context.

...

> +static void __iomem *acpi_os_rw_map(acpi_physical_address phys_addr,
> +                                   unsigned int size, bool *did_fallback)
> +{
> +       void __iomem *virt_addr = NULL;

Assignment is not needed as far as I can see.

> +       if (WARN_ONCE(in_interrupt(), "ioremap in interrupt context\n"))
> +               return NULL;
> +
> +       /* Try to use a cached mapping and fallback otherwise */
> +       *did_fallback = false;
> +       mutex_lock(&acpi_ioremap_lock);
> +       virt_addr = acpi_map_vaddr_lookup(phys_addr, size);
> +       if (virt_addr)
> +               return virt_addr;
> +       mutex_unlock(&acpi_ioremap_lock);
> +
> +       virt_addr = acpi_os_ioremap(phys_addr, size);
> +       *did_fallback = true;
> +
> +       return virt_addr;
> +}

I'm wondering if Sparse is okay with this...

> +static void acpi_os_rw_unmap(void __iomem *virt_addr, bool did_fallback)
> +{
> +       if (did_fallback) {
> +               /* in the fallback case no lock is held */
> +               iounmap(virt_addr);
> +               return;
> +       }
> +
> +       mutex_unlock(&acpi_ioremap_lock);
> +}

...and this functions from locking perspective.

-- 
With Best Regards,
Andy Shevchenko
_______________________________________________
Linux-nvdimm mailing list -- linux-nvdimm@lists.01.org
To unsubscribe send an email to linux-nvdimm-leave@lists.01.org

  reply	other threads:[~2020-05-07  9:25 UTC|newest]

Thread overview: 10+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2020-05-06 23:39 [PATCH] ACPI: Drop rcu usage for MMIO mappings Dan Williams
2020-05-07  9:24 ` Andy Shevchenko [this message]
2020-05-07 23:12   ` Dan Williams
2020-05-07 16:43 ` Rafael J. Wysocki
2020-05-07 21:20   ` Dan Williams
2020-05-11  9:00 ` [ACPI] b13663bdf9: BUG:sleeping_function_called_from_invalid_context_at_kernel/locking/mutex.c kernel test robot
2020-05-12 16:28   ` Rafael J. Wysocki
2020-05-12 18:05     ` Dan Williams
2020-05-18 18:08       ` James Morse
2020-05-18 19:44         ` Dan Williams

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to='CAHp75Vf0zBnwHubK+C265M9nh3Y5K2K=8ck61HQtnW+021bgwQ@mail.gmail.com' \
    --to=andy.shevchenko@gmail.com \
    --cc=andriy.shevchenko@linux.intel.com \
    --cc=bp@alien8.de \
    --cc=dan.j.williams@intel.com \
    --cc=erik.kaneda@intel.com \
    --cc=james.morse@arm.com \
    --cc=lenb@kernel.org \
    --cc=linux-kernel@vger.kernel.org \
    --cc=linux-nvdimm@lists.01.org \
    --cc=myron.stowe@redhat.com \
    --cc=rafael.j.wysocki@intel.com \
    --cc=rjw@rjwysocki.net \
    --cc=stable@vger.kernel.org \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).