linux-acpi.vger.kernel.org archive mirror
* [PATCH v2] ACPI: Drop rcu usage for MMIO mappings
@ 2020-05-07 23:39 Dan Williams
  2020-06-05 13:32 ` Rafael J. Wysocki
                   ` (3 more replies)
  0 siblings, 4 replies; 51+ messages in thread
From: Dan Williams @ 2020-05-07 23:39 UTC (permalink / raw)
  To: rafael.j.wysocki
  Cc: stable, Len Brown, Borislav Petkov, Ira Weiny, James Morse,
	Erik Kaneda, Myron Stowe, Rafael J. Wysocki, Andy Shevchenko,
	linux-kernel, linux-acpi, linux-nvdimm

Recently a performance problem was reported for a process invoking a
non-trivial ASL program. The method call in this case ends up
repetitively triggering a call path like:

    acpi_ex_store
    acpi_ex_store_object_to_node
    acpi_ex_write_data_to_field
    acpi_ex_insert_into_field
    acpi_ex_write_with_update_rule
    acpi_ex_field_datum_io
    acpi_ex_access_region
    acpi_ev_address_space_dispatch
    acpi_ex_system_memory_space_handler
    acpi_os_map_cleanup.part.14
    _synchronize_rcu_expedited.constprop.89
    schedule

The end result of frequent synchronize_rcu_expedited() invocation is
tiny sub-millisecond spurts of execution where the scheduler freely
migrates this apparently sleepy task. The overhead of frequent scheduler
invocation multiplies the execution time by a factor of 2-3X.

For example, performance improves from 16 minutes to 7 minutes for a
firmware update procedure across 24 devices.

Perhaps the rcu usage was intended to allow for not taking a sleeping
lock in the acpi_os_{read,write}_memory() path which ostensibly could be
called from an APEI NMI error interrupt? Neither rcu_read_lock() nor
ioremap() is interrupt safe, so add a WARN_ONCE() to validate that rcu
was not serving as a mechanism to avoid direct calls to ioremap(). Even
the original implementation had a spin_lock_irqsave(), but that is not
NMI safe.

APEI itself already has some concept of avoiding ioremap() from
interrupt context (see erst_exec_move_data()). If the new warning
triggers, it means that APEI either needs more instrumentation like that
to pre-emptively fail, or more infrastructure to arrange for pre-mapping
the resources it needs in NMI context.

Cc: <stable@vger.kernel.org>
Fixes: 620242ae8c3d ("ACPI: Maintain a list of ACPI memory mapped I/O remappings")
Cc: Len Brown <lenb@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Ira Weiny <ira.weiny@intel.com>
Cc: James Morse <james.morse@arm.com>
Cc: Erik Kaneda <erik.kaneda@intel.com>
Cc: Myron Stowe <myron.stowe@redhat.com>
Cc: "Rafael J. Wysocki" <rjw@rjwysocki.net>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
---
Changes since v1 [1]:

- Actually cc: the most important list for ACPI changes (Rafael)

- Cleanup unnecessary variable initialization (Andy)

Link: https://lore.kernel.org/linux-nvdimm/158880834905.2183490.15616329469420234017.stgit@dwillia2-desk3.amr.corp.intel.com/


 drivers/acpi/osl.c |  117 +++++++++++++++++++++++++---------------------------
 1 file changed, 57 insertions(+), 60 deletions(-)

diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
index 762c5d50b8fe..a44b75aac5d0 100644
--- a/drivers/acpi/osl.c
+++ b/drivers/acpi/osl.c
@@ -214,13 +214,13 @@ acpi_physical_address __init acpi_os_get_root_pointer(void)
 	return pa;
 }
 
-/* Must be called with 'acpi_ioremap_lock' or RCU read lock held. */
 static struct acpi_ioremap *
 acpi_map_lookup(acpi_physical_address phys, acpi_size size)
 {
 	struct acpi_ioremap *map;
 
-	list_for_each_entry_rcu(map, &acpi_ioremaps, list, acpi_ioremap_lock_held())
+	lockdep_assert_held(&acpi_ioremap_lock);
+	list_for_each_entry(map, &acpi_ioremaps, list)
 		if (map->phys <= phys &&
 		    phys + size <= map->phys + map->size)
 			return map;
@@ -228,7 +228,6 @@ acpi_map_lookup(acpi_physical_address phys, acpi_size size)
 	return NULL;
 }
 
-/* Must be called with 'acpi_ioremap_lock' or RCU read lock held. */
 static void __iomem *
 acpi_map_vaddr_lookup(acpi_physical_address phys, unsigned int size)
 {
@@ -263,7 +262,8 @@ acpi_map_lookup_virt(void __iomem *virt, acpi_size size)
 {
 	struct acpi_ioremap *map;
 
-	list_for_each_entry_rcu(map, &acpi_ioremaps, list, acpi_ioremap_lock_held())
+	lockdep_assert_held(&acpi_ioremap_lock);
+	list_for_each_entry(map, &acpi_ioremaps, list)
 		if (map->virt <= virt &&
 		    virt + size <= map->virt + map->size)
 			return map;
@@ -360,7 +360,7 @@ void __iomem __ref
 	map->size = pg_sz;
 	map->refcount = 1;
 
-	list_add_tail_rcu(&map->list, &acpi_ioremaps);
+	list_add_tail(&map->list, &acpi_ioremaps);
 
 out:
 	mutex_unlock(&acpi_ioremap_lock);
@@ -374,20 +374,13 @@ void *__ref acpi_os_map_memory(acpi_physical_address phys, acpi_size size)
 }
 EXPORT_SYMBOL_GPL(acpi_os_map_memory);
 
-/* Must be called with mutex_lock(&acpi_ioremap_lock) */
-static unsigned long acpi_os_drop_map_ref(struct acpi_ioremap *map)
-{
-	unsigned long refcount = --map->refcount;
-
-	if (!refcount)
-		list_del_rcu(&map->list);
-	return refcount;
-}
-
-static void acpi_os_map_cleanup(struct acpi_ioremap *map)
+static void acpi_os_drop_map_ref(struct acpi_ioremap *map)
 {
-	synchronize_rcu_expedited();
+	lockdep_assert_held(&acpi_ioremap_lock);
+	if (--map->refcount > 0)
+		return;
 	acpi_unmap(map->phys, map->virt);
+	list_del(&map->list);
 	kfree(map);
 }
 
@@ -408,7 +401,6 @@ static void acpi_os_map_cleanup(struct acpi_ioremap *map)
 void __ref acpi_os_unmap_iomem(void __iomem *virt, acpi_size size)
 {
 	struct acpi_ioremap *map;
-	unsigned long refcount;
 
 	if (!acpi_permanent_mmap) {
 		__acpi_unmap_table(virt, size);
@@ -422,11 +414,8 @@ void __ref acpi_os_unmap_iomem(void __iomem *virt, acpi_size size)
 		WARN(true, PREFIX "%s: bad address %p\n", __func__, virt);
 		return;
 	}
-	refcount = acpi_os_drop_map_ref(map);
+	acpi_os_drop_map_ref(map);
 	mutex_unlock(&acpi_ioremap_lock);
-
-	if (!refcount)
-		acpi_os_map_cleanup(map);
 }
 EXPORT_SYMBOL_GPL(acpi_os_unmap_iomem);
 
@@ -461,7 +450,6 @@ void acpi_os_unmap_generic_address(struct acpi_generic_address *gas)
 {
 	u64 addr;
 	struct acpi_ioremap *map;
-	unsigned long refcount;
 
 	if (gas->space_id != ACPI_ADR_SPACE_SYSTEM_MEMORY)
 		return;
@@ -477,11 +465,8 @@ void acpi_os_unmap_generic_address(struct acpi_generic_address *gas)
 		mutex_unlock(&acpi_ioremap_lock);
 		return;
 	}
-	refcount = acpi_os_drop_map_ref(map);
+	acpi_os_drop_map_ref(map);
 	mutex_unlock(&acpi_ioremap_lock);
-
-	if (!refcount)
-		acpi_os_map_cleanup(map);
 }
 EXPORT_SYMBOL(acpi_os_unmap_generic_address);
 
@@ -700,55 +685,71 @@ int acpi_os_read_iomem(void __iomem *virt_addr, u64 *value, u32 width)
 	return 0;
 }
 
+static void __iomem *acpi_os_rw_map(acpi_physical_address phys_addr,
+				    unsigned int size, bool *did_fallback)
+{
+	void __iomem *virt_addr;
+
+	if (WARN_ONCE(in_interrupt(), "ioremap in interrupt context\n"))
+		return NULL;
+
+	/* Try to use a cached mapping and fallback otherwise */
+	*did_fallback = false;
+	mutex_lock(&acpi_ioremap_lock);
+	virt_addr = acpi_map_vaddr_lookup(phys_addr, size);
+	if (virt_addr)
+		return virt_addr;
+	mutex_unlock(&acpi_ioremap_lock);
+
+	virt_addr = acpi_os_ioremap(phys_addr, size);
+	*did_fallback = true;
+
+	return virt_addr;
+}
+
+static void acpi_os_rw_unmap(void __iomem *virt_addr, bool did_fallback)
+{
+	if (did_fallback) {
+		/* in the fallback case no lock is held */
+		iounmap(virt_addr);
+		return;
+	}
+
+	mutex_unlock(&acpi_ioremap_lock);
+}
+
 acpi_status
 acpi_os_read_memory(acpi_physical_address phys_addr, u64 *value, u32 width)
 {
-	void __iomem *virt_addr;
 	unsigned int size = width / 8;
-	bool unmap = false;
+	bool did_fallback = false;
+	void __iomem *virt_addr;
 	u64 dummy;
 	int error;
 
-	rcu_read_lock();
-	virt_addr = acpi_map_vaddr_lookup(phys_addr, size);
-	if (!virt_addr) {
-		rcu_read_unlock();
-		virt_addr = acpi_os_ioremap(phys_addr, size);
-		if (!virt_addr)
-			return AE_BAD_ADDRESS;
-		unmap = true;
-	}
-
+	virt_addr = acpi_os_rw_map(phys_addr, size, &did_fallback);
+	if (!virt_addr)
+		return AE_BAD_ADDRESS;
 	if (!value)
 		value = &dummy;
 
 	error = acpi_os_read_iomem(virt_addr, value, width);
 	BUG_ON(error);
 
-	if (unmap)
-		iounmap(virt_addr);
-	else
-		rcu_read_unlock();
-
+	acpi_os_rw_unmap(virt_addr, did_fallback);
 	return AE_OK;
 }
 
 acpi_status
 acpi_os_write_memory(acpi_physical_address phys_addr, u64 value, u32 width)
 {
-	void __iomem *virt_addr;
 	unsigned int size = width / 8;
-	bool unmap = false;
+	bool did_fallback = false;
+	void __iomem *virt_addr;
 
-	rcu_read_lock();
-	virt_addr = acpi_map_vaddr_lookup(phys_addr, size);
-	if (!virt_addr) {
-		rcu_read_unlock();
-		virt_addr = acpi_os_ioremap(phys_addr, size);
-		if (!virt_addr)
-			return AE_BAD_ADDRESS;
-		unmap = true;
-	}
+	virt_addr = acpi_os_rw_map(phys_addr, size, &did_fallback);
+	if (!virt_addr)
+		return AE_BAD_ADDRESS;
 
 	switch (width) {
 	case 8:
@@ -767,11 +768,7 @@ acpi_os_write_memory(acpi_physical_address phys_addr, u64 value, u32 width)
 		BUG();
 	}
 
-	if (unmap)
-		iounmap(virt_addr);
-	else
-		rcu_read_unlock();
-
+	acpi_os_rw_unmap(virt_addr, did_fallback);
 	return AE_OK;
 }
 



* Re: [PATCH v2] ACPI: Drop rcu usage for MMIO mappings
  2020-05-07 23:39 [PATCH v2] ACPI: Drop rcu usage for MMIO mappings Dan Williams
@ 2020-06-05 13:32 ` Rafael J. Wysocki
  2020-06-05 16:18   ` Dan Williams
  2020-06-05 14:06 ` [RFT][PATCH] ACPI: OSL: Use rwlock instead of RCU for memory management Rafael J. Wysocki
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-05 13:32 UTC (permalink / raw)
  To: Dan Williams
  Cc: Rafael Wysocki, Stable, Len Brown, Borislav Petkov, Ira Weiny,
	James Morse, Erik Kaneda, Myron Stowe, Rafael J. Wysocki,
	Andy Shevchenko, Linux Kernel Mailing List,
	ACPI Devel Mailing List, linux-nvdimm

On Fri, May 8, 2020 at 1:55 AM Dan Williams <dan.j.williams@intel.com> wrote:
>
> Recently a performance problem was reported for a process invoking a
> non-trival ASL program. The method call in this case ends up
> repetitively triggering a call path like:
>
>     acpi_ex_store
>     acpi_ex_store_object_to_node
>     acpi_ex_write_data_to_field
>     acpi_ex_insert_into_field
>     acpi_ex_write_with_update_rule
>     acpi_ex_field_datum_io
>     acpi_ex_access_region
>     acpi_ev_address_space_dispatch
>     acpi_ex_system_memory_space_handler
>     acpi_os_map_cleanup.part.14
>     _synchronize_rcu_expedited.constprop.89
>     schedule
>
> The end result of frequent synchronize_rcu_expedited() invocation is
> tiny sub-millisecond spurts of execution where the scheduler freely
> migrates this apparently sleepy task. The overhead of frequent scheduler
> invocation multiplies the execution time by a factor of 2-3X.
>
> For example, performance improves from 16 minutes to 7 minutes for a
> firmware update procedure across 24 devices.
>
> Perhaps the rcu usage was intended to allow for not taking a sleeping
> lock in the acpi_os_{read,write}_memory() path which ostensibly could be
> called from an APEI NMI error interrupt?

Not really.

acpi_os_{read|write}_memory() end up being called from non-NMI
interrupt context via acpi_hw_{read|write}(), respectively, and quite
obviously ioremap() cannot be run from there. In those cases, however,
the mappings in question are already in the list, so ioremap() is not
used then.

RCU is there to protect these users from walking the list while it is
being updated.

> Neither rcu_read_lock() nor ioremap() are interrupt safe, so add a WARN_ONCE() to validate that rcu
> was not serving as a mechanism to avoid direct calls to ioremap().

But it would produce false-positives if the IRQ context was not NMI,
wouldn't it?

> Even the original implementation had a spin_lock_irqsave(), but that is not
> NMI safe.

Which is not a problem (see above).

> APEI itself already has some concept of avoiding ioremap() from
> interrupt context (see erst_exec_move_data()), if the new warning
> triggers it means that APEI either needs more instrumentation like that
> to pre-emptively fail, or more infrastructure to arrange for pre-mapping
> the resources it needs in NMI context.

Well, I'm not sure about that.

Thanks!


* [RFT][PATCH] ACPI: OSL: Use rwlock instead of RCU for memory management
  2020-05-07 23:39 [PATCH v2] ACPI: Drop rcu usage for MMIO mappings Dan Williams
  2020-06-05 13:32 ` Rafael J. Wysocki
@ 2020-06-05 14:06 ` Rafael J. Wysocki
  2020-06-05 17:08   ` Dan Williams
  2020-06-05 19:40   ` Andy Shevchenko
  2020-06-10 12:17 ` [RFT][PATCH 0/3] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Rafael J. Wysocki
  2020-06-22 13:50 ` [RFT][PATCH v2 0/4] " Rafael J. Wysocki
  3 siblings, 2 replies; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-05 14:06 UTC (permalink / raw)
  To: Dan Williams
  Cc: rafael.j.wysocki, stable, Len Brown, Borislav Petkov, Ira Weiny,
	James Morse, Erik Kaneda, Myron Stowe, Andy Shevchenko,
	linux-kernel, linux-acpi, linux-nvdimm

From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Subject: [PATCH] ACPI: OSL: Use rwlock instead of RCU for memory management

The ACPI OS layer uses RCU to protect the list of ACPI memory
mappings from being walked while it is updated.  Among other
situations, that list can be walked in non-NMI interrupt context,
so using a sleeping lock to protect it is not an option.

However, there are performance issues related to the RCU usage in
it, as described by Dan Williams:

"Recently a performance problem was reported for a process invoking
a non-trivial ASL program. The method call in this case ends up
repetitively triggering a call path like:

    acpi_ex_store
    acpi_ex_store_object_to_node
    acpi_ex_write_data_to_field
    acpi_ex_insert_into_field
    acpi_ex_write_with_update_rule
    acpi_ex_field_datum_io
    acpi_ex_access_region
    acpi_ev_address_space_dispatch
    acpi_ex_system_memory_space_handler
    acpi_os_map_cleanup.part.14
    _synchronize_rcu_expedited.constprop.89
    schedule

The end result of frequent synchronize_rcu_expedited() invocation is
tiny sub-millisecond spurts of execution where the scheduler freely
migrates this apparently sleepy task. The overhead of frequent
scheduler invocation multiplies the execution time by a factor
of 2-3X."

In order to avoid these issues, replace the RCU in the ACPI OS
layer with an rwlock.

That rwlock should not be frequently contended, so the performance
impact of it is not expected to be significant.

Reported-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---

Hi Dan,

This is a possible fix for the ACPI OSL RCU-related performance issues, but
can you please arrange for the testing of it on the affected systems?

Cheers!

---
 drivers/acpi/osl.c |   50 ++++++++++++++++++++++++++++++++++----------------
 1 file changed, 34 insertions(+), 16 deletions(-)

Index: linux-pm/drivers/acpi/osl.c
===================================================================
--- linux-pm.orig/drivers/acpi/osl.c
+++ linux-pm/drivers/acpi/osl.c
@@ -81,8 +81,8 @@ struct acpi_ioremap {
 };
 
 static LIST_HEAD(acpi_ioremaps);
+static DEFINE_RWLOCK(acpi_ioremaps_list_lock);
 static DEFINE_MUTEX(acpi_ioremap_lock);
-#define acpi_ioremap_lock_held() lock_is_held(&acpi_ioremap_lock.dep_map)
 
 static void __init acpi_request_region (struct acpi_generic_address *gas,
 	unsigned int length, char *desc)
@@ -214,13 +214,13 @@ acpi_physical_address __init acpi_os_get
 	return pa;
 }
 
-/* Must be called with 'acpi_ioremap_lock' or RCU read lock held. */
+/* Must be called with 'acpi_ioremap_lock' or 'acpi_ioremaps_list_lock' held. */
 static struct acpi_ioremap *
 acpi_map_lookup(acpi_physical_address phys, acpi_size size)
 {
 	struct acpi_ioremap *map;
 
-	list_for_each_entry_rcu(map, &acpi_ioremaps, list, acpi_ioremap_lock_held())
+	list_for_each_entry(map, &acpi_ioremaps, list)
 		if (map->phys <= phys &&
 		    phys + size <= map->phys + map->size)
 			return map;
@@ -228,7 +228,7 @@ acpi_map_lookup(acpi_physical_address ph
 	return NULL;
 }
 
-/* Must be called with 'acpi_ioremap_lock' or RCU read lock held. */
+/* Must be called with 'acpi_ioremap_lock' or 'acpi_ioremaps_list_lock' held. */
 static void __iomem *
 acpi_map_vaddr_lookup(acpi_physical_address phys, unsigned int size)
 {
@@ -257,13 +257,13 @@ void __iomem *acpi_os_get_iomem(acpi_phy
 }
 EXPORT_SYMBOL_GPL(acpi_os_get_iomem);
 
-/* Must be called with 'acpi_ioremap_lock' or RCU read lock held. */
+/* Must be called with 'acpi_ioremap_lock' or 'acpi_ioremaps_list_lock' held. */
 static struct acpi_ioremap *
 acpi_map_lookup_virt(void __iomem *virt, acpi_size size)
 {
 	struct acpi_ioremap *map;
 
-	list_for_each_entry_rcu(map, &acpi_ioremaps, list, acpi_ioremap_lock_held())
+	list_for_each_entry(map, &acpi_ioremaps, list)
 		if (map->virt <= virt &&
 		    virt + size <= map->virt + map->size)
 			return map;
@@ -360,7 +360,11 @@ void __iomem __ref
 	map->size = pg_sz;
 	map->refcount = 1;
 
-	list_add_tail_rcu(&map->list, &acpi_ioremaps);
+	write_lock_irq(&acpi_ioremaps_list_lock);
+
+	list_add_tail(&map->list, &acpi_ioremaps);
+
+	write_unlock_irq(&acpi_ioremaps_list_lock);
 
 out:
 	mutex_unlock(&acpi_ioremap_lock);
@@ -379,14 +383,18 @@ static unsigned long acpi_os_drop_map_re
 {
 	unsigned long refcount = --map->refcount;
 
-	if (!refcount)
-		list_del_rcu(&map->list);
+	if (!refcount) {
+		write_lock_irq(&acpi_ioremaps_list_lock);
+
+		list_del(&map->list);
+
+		write_unlock_irq(&acpi_ioremaps_list_lock);
+	}
 	return refcount;
 }
 
 static void acpi_os_map_cleanup(struct acpi_ioremap *map)
 {
-	synchronize_rcu_expedited();
 	acpi_unmap(map->phys, map->virt);
 	kfree(map);
 }
@@ -704,18 +712,23 @@ acpi_status
 acpi_os_read_memory(acpi_physical_address phys_addr, u64 *value, u32 width)
 {
 	void __iomem *virt_addr;
+	unsigned long flags;
 	unsigned int size = width / 8;
 	bool unmap = false;
 	u64 dummy;
 	int error;
 
-	rcu_read_lock();
+	read_lock_irqsave(&acpi_ioremaps_list_lock, flags);
+
 	virt_addr = acpi_map_vaddr_lookup(phys_addr, size);
 	if (!virt_addr) {
-		rcu_read_unlock();
+
+		read_unlock_irqrestore(&acpi_ioremaps_list_lock, flags);
+
 		virt_addr = acpi_os_ioremap(phys_addr, size);
 		if (!virt_addr)
 			return AE_BAD_ADDRESS;
+
 		unmap = true;
 	}
 
@@ -728,7 +741,7 @@ acpi_os_read_memory(acpi_physical_addres
 	if (unmap)
 		iounmap(virt_addr);
 	else
-		rcu_read_unlock();
+		read_unlock_irqrestore(&acpi_ioremaps_list_lock, flags);
 
 	return AE_OK;
 }
@@ -737,16 +750,21 @@ acpi_status
 acpi_os_write_memory(acpi_physical_address phys_addr, u64 value, u32 width)
 {
 	void __iomem *virt_addr;
+	unsigned long flags;
 	unsigned int size = width / 8;
 	bool unmap = false;
 
-	rcu_read_lock();
+	read_lock_irqsave(&acpi_ioremaps_list_lock, flags);
+
 	virt_addr = acpi_map_vaddr_lookup(phys_addr, size);
 	if (!virt_addr) {
-		rcu_read_unlock();
+
+		read_unlock_irqrestore(&acpi_ioremaps_list_lock, flags);
+
 		virt_addr = acpi_os_ioremap(phys_addr, size);
 		if (!virt_addr)
 			return AE_BAD_ADDRESS;
+
 		unmap = true;
 	}
 
@@ -770,7 +788,7 @@ acpi_os_write_memory(acpi_physical_addre
 	if (unmap)
 		iounmap(virt_addr);
 	else
-		rcu_read_unlock();
+		read_unlock_irqrestore(&acpi_ioremaps_list_lock, flags);
 
 	return AE_OK;
 }





* Re: [PATCH v2] ACPI: Drop rcu usage for MMIO mappings
  2020-06-05 13:32 ` Rafael J. Wysocki
@ 2020-06-05 16:18   ` Dan Williams
  2020-06-05 16:21     ` Rafael J. Wysocki
  0 siblings, 1 reply; 51+ messages in thread
From: Dan Williams @ 2020-06-05 16:18 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael Wysocki, Stable, Len Brown, Borislav Petkov, Ira Weiny,
	James Morse, Erik Kaneda, Myron Stowe, Rafael J. Wysocki,
	Andy Shevchenko, Linux Kernel Mailing List,
	ACPI Devel Mailing List, linux-nvdimm

On Fri, Jun 5, 2020 at 6:32 AM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Fri, May 8, 2020 at 1:55 AM Dan Williams <dan.j.williams@intel.com> wrote:
> >
> > Recently a performance problem was reported for a process invoking a
> > non-trival ASL program. The method call in this case ends up
> > repetitively triggering a call path like:
> >
> >     acpi_ex_store
> >     acpi_ex_store_object_to_node
> >     acpi_ex_write_data_to_field
> >     acpi_ex_insert_into_field
> >     acpi_ex_write_with_update_rule
> >     acpi_ex_field_datum_io
> >     acpi_ex_access_region
> >     acpi_ev_address_space_dispatch
> >     acpi_ex_system_memory_space_handler
> >     acpi_os_map_cleanup.part.14
> >     _synchronize_rcu_expedited.constprop.89
> >     schedule
> >
> > The end result of frequent synchronize_rcu_expedited() invocation is
> > tiny sub-millisecond spurts of execution where the scheduler freely
> > migrates this apparently sleepy task. The overhead of frequent scheduler
> > invocation multiplies the execution time by a factor of 2-3X.
> >
> > For example, performance improves from 16 minutes to 7 minutes for a
> > firmware update procedure across 24 devices.
> >
> > Perhaps the rcu usage was intended to allow for not taking a sleeping
> > lock in the acpi_os_{read,write}_memory() path which ostensibly could be
> > called from an APEI NMI error interrupt?
>
> Not really.
>
> acpi_os_{read|write}_memory() end up being called from non-NMI
> interrupt context via acpi_hw_{read|write}(), respectively, and quite
> obviously ioremap() cannot be run from there, but in those cases the
> mappings in question are there in the list already in all cases and so
> the ioremap() isn't used then.
>
> RCU is there to protect these users from walking the list while it is
> being updated.
>
> > Neither rcu_read_lock() nor ioremap() are interrupt safe, so add a WARN_ONCE() to validate that rcu
> > was not serving as a mechanism to avoid direct calls to ioremap().
>
> But it would produce false-positives if the IRQ context was not NMI,
> wouldn't it?
>
> > Even the original implementation had a spin_lock_irqsave(), but that is not
> > NMI safe.
>
> Which is not a problem (see above).
>
> > APEI itself already has some concept of avoiding ioremap() from
> > interrupt context (see erst_exec_move_data()), if the new warning
> > triggers it means that APEI either needs more instrumentation like that
> > to pre-emptively fail, or more infrastructure to arrange for pre-mapping
> > the resources it needs in NMI context.
>
> Well, I'm not sure about that.

Right, this patch set is about 2-3 generations behind the architecture
of the fix we are discussing internally; you might mention that.

The fix we are looking at now is to pre-map operation regions in a
similar manner as the way APEI resources are pre-mapped. The
pre-mapping would arrange for synchronize_rcu_expedited() to be elided
on each dynamic mapping attempt. The other piece is to arrange for
operation-regions to be mapped at their full size at once rather than
a page at a time.


* Re: [PATCH v2] ACPI: Drop rcu usage for MMIO mappings
  2020-06-05 16:18   ` Dan Williams
@ 2020-06-05 16:21     ` Rafael J. Wysocki
  2020-06-05 16:39       ` Dan Williams
  0 siblings, 1 reply; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-05 16:21 UTC (permalink / raw)
  To: Dan Williams
  Cc: Rafael J. Wysocki, Rafael Wysocki, Stable, Len Brown,
	Borislav Petkov, Ira Weiny, James Morse, Erik Kaneda,
	Myron Stowe, Rafael J. Wysocki, Andy Shevchenko,
	Linux Kernel Mailing List, ACPI Devel Mailing List, linux-nvdimm

On Fri, Jun 5, 2020 at 6:18 PM Dan Williams <dan.j.williams@intel.com> wrote:
>
> On Fri, Jun 5, 2020 at 6:32 AM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Fri, May 8, 2020 at 1:55 AM Dan Williams <dan.j.williams@intel.com> wrote:
> > >
> > > Recently a performance problem was reported for a process invoking a
> > > non-trival ASL program. The method call in this case ends up
> > > repetitively triggering a call path like:
> > >
> > >     acpi_ex_store
> > >     acpi_ex_store_object_to_node
> > >     acpi_ex_write_data_to_field
> > >     acpi_ex_insert_into_field
> > >     acpi_ex_write_with_update_rule
> > >     acpi_ex_field_datum_io
> > >     acpi_ex_access_region
> > >     acpi_ev_address_space_dispatch
> > >     acpi_ex_system_memory_space_handler
> > >     acpi_os_map_cleanup.part.14
> > >     _synchronize_rcu_expedited.constprop.89
> > >     schedule
> > >
> > > The end result of frequent synchronize_rcu_expedited() invocation is
> > > tiny sub-millisecond spurts of execution where the scheduler freely
> > > migrates this apparently sleepy task. The overhead of frequent scheduler
> > > invocation multiplies the execution time by a factor of 2-3X.
> > >
> > > For example, performance improves from 16 minutes to 7 minutes for a
> > > firmware update procedure across 24 devices.
> > >
> > > Perhaps the rcu usage was intended to allow for not taking a sleeping
> > > lock in the acpi_os_{read,write}_memory() path which ostensibly could be
> > > called from an APEI NMI error interrupt?
> >
> > Not really.
> >
> > acpi_os_{read|write}_memory() end up being called from non-NMI
> > interrupt context via acpi_hw_{read|write}(), respectively, and quite
> > obviously ioremap() cannot be run from there, but in those cases the
> > mappings in question are there in the list already in all cases and so
> > the ioremap() isn't used then.
> >
> > RCU is there to protect these users from walking the list while it is
> > being updated.
> >
> > > Neither rcu_read_lock() nor ioremap() are interrupt safe, so add a WARN_ONCE() to validate that rcu
> > > was not serving as a mechanism to avoid direct calls to ioremap().
> >
> > But it would produce false-positives if the IRQ context was not NMI,
> > wouldn't it?
> >
> > > Even the original implementation had a spin_lock_irqsave(), but that is not
> > > NMI safe.
> >
> > Which is not a problem (see above).
> >
> > > APEI itself already has some concept of avoiding ioremap() from
> > > interrupt context (see erst_exec_move_data()), if the new warning
> > > triggers it means that APEI either needs more instrumentation like that
> > > to pre-emptively fail, or more infrastructure to arrange for pre-mapping
> > > the resources it needs in NMI context.
> >
> > Well, I'm not sure about that.
>
> Right, this patch set is about 2-3 generations behind the architecture
> of the fix we are discussing internally, you might mention that.

Yes, sorry.

> The fix we are looking at now is to pre-map operation regions in a
> similar manner as the way APEI resources are pre-mapped. The
> pre-mapping would arrange for synchronize_rcu_expedited() to be elided
> on each dynamic mapping attempt. The other piece is to arrange for
> operation-regions to be mapped at their full size at once rather than
> a page at a time.

However, if the RCU usage in ACPI OSL can be replaced with an rwlock,
some of the ACPICA changes above may not be necessary anymore (even
though some of them may still be worth making).


* Re: [PATCH v2] ACPI: Drop rcu usage for MMIO mappings
  2020-06-05 16:21     ` Rafael J. Wysocki
@ 2020-06-05 16:39       ` Dan Williams
  2020-06-05 17:02         ` Rafael J. Wysocki
  0 siblings, 1 reply; 51+ messages in thread
From: Dan Williams @ 2020-06-05 16:39 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael Wysocki, Stable, Len Brown, Borislav Petkov, Ira Weiny,
	James Morse, Erik Kaneda, Myron Stowe, Rafael J. Wysocki,
	Andy Shevchenko, Linux Kernel Mailing List,
	ACPI Devel Mailing List, linux-nvdimm

On Fri, Jun 5, 2020 at 9:22 AM Rafael J. Wysocki <rafael@kernel.org> wrote:
[..]
> > The fix we are looking at now is to pre-map operation regions in a
> > similar manner as the way APEI resources are pre-mapped. The
> > pre-mapping would arrange for synchronize_rcu_expedited() to be elided
> > on each dynamic mapping attempt. The other piece is to arrange for
> > operation-regions to be mapped at their full size at once rather than
> > a page at a time.
>
> However, if the RCU usage in ACPI OSL can be replaced with an rwlock,
> some of the ACPICA changes above may not be necessary anymore (even
> though some of them may still be worth making).

I don't think you can replace the RCU usage in ACPI OSL and still
maintain NMI lookups in a dynamic list.

However, there are 3 solutions I see:

- Prevent acpi_os_map_cleanup() from triggering at high frequency by
pre-mapping and never unmapping operation-regions resources (internal
discussion in progress)

- Prevent walks of the 'acpi_ioremaps' list (acpi_map_lookup_virt())
from NMI context by re-writing the physical addresses in the APEI
tables with pre-mapped virtual address, i.e. remove rcu_read_lock()
and list_for_each_entry_rcu() from NMI context.

- Split operation-region resources into a separate mapping mechanism
than APEI resources so that typical locking can be used for the
sleepable resources and let the NMI accessible resources be managed
separately.

That last one is one we have not discussed internally, but it occurred
to me when you mentioned replacing RCU.


* Re: [PATCH v2] ACPI: Drop rcu usage for MMIO mappings
  2020-06-05 16:39       ` Dan Williams
@ 2020-06-05 17:02         ` Rafael J. Wysocki
  0 siblings, 0 replies; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-05 17:02 UTC (permalink / raw)
  To: Dan Williams
  Cc: Rafael J. Wysocki, Rafael Wysocki, Stable, Len Brown,
	Borislav Petkov, Ira Weiny, James Morse, Erik Kaneda,
	Myron Stowe, Rafael J. Wysocki, Andy Shevchenko,
	Linux Kernel Mailing List, ACPI Devel Mailing List, linux-nvdimm

On Fri, Jun 5, 2020 at 6:39 PM Dan Williams <dan.j.williams@intel.com> wrote:
>
> On Fri, Jun 5, 2020 at 9:22 AM Rafael J. Wysocki <rafael@kernel.org> wrote:
> [..]
> > > The fix we are looking at now is to pre-map operation regions in a
> > > similar manner as the way APEI resources are pre-mapped. The
> > > pre-mapping would arrange for synchronize_rcu_expedited() to be elided
> > > on each dynamic mapping attempt. The other piece is to arrange for
> > > operation-regions to be mapped at their full size at once rather than
> > > a page at a time.
> >
> > However, if the RCU usage in ACPI OSL can be replaced with an rwlock,
> > some of the ACPICA changes above may not be necessary anymore (even
> > though some of them may still be worth making).
>
> I don't think you can replace the RCU usage in ACPI OSL and still
> maintain NMI lookups in a dynamic list.

I'm not sure what NMI lookups have to do with the issue at hand.

If acpi_os_{read|write}_memory() is used from NMI, that is already a
bug in there, and it is unrelated to the performance problem with
opregions.

> However, there are 3 solutions I see:
>
> - Prevent acpi_os_map_cleanup() from triggering at high frequency by
> pre-mapping and never unmapping operation-regions resources (internal
> discussion in progress)

Yes, that can be done, if necessary.

> - Prevent walks of the 'acpi_ioremaps' list (acpi_map_lookup_virt())
> from NMI context by re-writing the physical addresses in the APEI
> tables with pre-mapped virtual address, i.e. remove rcu_read_lock()
> and list_for_each_entry_rcu() from NMI context.

That sounds a bit convoluted to me.

> - Split operation-region resources into a separate mapping mechanism
> than APEI resources so that typical locking can be used for the
> sleepable resources and let the NMI accessible resources be managed
> separately.
>
> That last one is one we have not discussed internally, but it occurred
> to me when you mentioned replacing RCU.

So NMI cannot use acpi_os_{read|write}_memory() safely, as you have
pointed out a few times.

But even if NMI resources are managed separately, the others will
still not be sleepable (at least not all of them).

Cheers!

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [RFT][PATCH] ACPI: OSL: Use rwlock instead of RCU for memory management
  2020-06-05 14:06 ` [RFT][PATCH] ACPI: OSL: Use rwlock instead of RCU for memory management Rafael J. Wysocki
@ 2020-06-05 17:08   ` Dan Williams
  2020-06-06  6:56     ` Rafael J. Wysocki
  2020-06-05 19:40   ` Andy Shevchenko
  1 sibling, 1 reply; 51+ messages in thread
From: Dan Williams @ 2020-06-05 17:08 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael J Wysocki, stable, Len Brown, Borislav Petkov, Ira Weiny,
	James Morse, Erik Kaneda, Myron Stowe, Andy Shevchenko,
	Linux Kernel Mailing List, Linux ACPI, linux-nvdimm

On Fri, Jun 5, 2020 at 7:06 AM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
>
> From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> Subject: [PATCH] ACPI: OSL: Use rwlock instead of RCU for memory management
>
> The ACPI OS layer uses RCU to protect the list of ACPI memory
> mappings from being walked while it is updated.  Among other
> situations, that list can be walked in non-NMI interrupt context,
> so using a sleeping lock to protect it is not an option.
>
> However, performance issues related to the RCU usage in there
> appear, as described by Dan Williams:
>
> "Recently a performance problem was reported for a process invoking
> a non-trivial ASL program. The method call in this case ends up
> repetitively triggering a call path like:
>
>     acpi_ex_store
>     acpi_ex_store_object_to_node
>     acpi_ex_write_data_to_field
>     acpi_ex_insert_into_field
>     acpi_ex_write_with_update_rule
>     acpi_ex_field_datum_io
>     acpi_ex_access_region
>     acpi_ev_address_space_dispatch
>     acpi_ex_system_memory_space_handler
>     acpi_os_map_cleanup.part.14
>     _synchronize_rcu_expedited.constprop.89
>     schedule
>
> The end result of frequent synchronize_rcu_expedited() invocation is
> tiny sub-millisecond spurts of execution where the scheduler freely
> migrates this apparently sleepy task. The overhead of frequent
> scheduler invocation multiplies the execution time by a factor
> of 2-3X."
>
> In order to avoid these issues, replace the RCU in the ACPI OS
> layer by an rwlock.
>
> That rwlock should not be frequently contended, so the performance
> impact of it is not expected to be significant.
>
> Reported-by: Dan Williams <dan.j.williams@intel.com>
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>
> Hi Dan,
>
> This is a possible fix for the ACPI OSL RCU-related performance issues, but
> can you please arrange for the testing of it on the affected systems?

Ugh, is it really this simple? I did not realize the read-side is NMI
safe. I'll take a look.

* Re: [RFT][PATCH] ACPI: OSL: Use rwlock instead of RCU for memory management
  2020-06-05 14:06 ` [RFT][PATCH] ACPI: OSL: Use rwlock instead of RCU for memory management Rafael J. Wysocki
  2020-06-05 17:08   ` Dan Williams
@ 2020-06-05 19:40   ` Andy Shevchenko
  2020-06-06  6:48     ` Rafael J. Wysocki
  1 sibling, 1 reply; 51+ messages in thread
From: Andy Shevchenko @ 2020-06-05 19:40 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Dan Williams, Rafael J. Wysocki, Stable, Len Brown,
	Borislav Petkov, Ira Weiny, James Morse, Erik Kaneda,
	Myron Stowe, Andy Shevchenko, Linux Kernel Mailing List,
	ACPI Devel Mailing List, linux-nvdimm

On Fri, Jun 5, 2020 at 5:11 PM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:

...

> +       if (!refcount) {
> +               write_lock_irq(&acpi_ioremaps_list_lock);
> +
> +               list_del(&map->list);
> +
> +               write_unlock_irq(&acpi_ioremaps_list_lock);
> +       }
>         return refcount;

It seems we can decrease the indentation level at the same time:

  if (refcount)
    return refcount;

 ...
 return 0;

-- 
With Best Regards,
Andy Shevchenko

* Re: [RFT][PATCH] ACPI: OSL: Use rwlock instead of RCU for memory management
  2020-06-05 19:40   ` Andy Shevchenko
@ 2020-06-06  6:48     ` Rafael J. Wysocki
  0 siblings, 0 replies; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-06  6:48 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: Rafael J. Wysocki, Dan Williams, Rafael J. Wysocki, Stable,
	Len Brown, Borislav Petkov, Ira Weiny, James Morse, Erik Kaneda,
	Myron Stowe, Andy Shevchenko, Linux Kernel Mailing List,
	ACPI Devel Mailing List, linux-nvdimm

On Fri, Jun 5, 2020 at 9:40 PM Andy Shevchenko
<andy.shevchenko@gmail.com> wrote:
>
> On Fri, Jun 5, 2020 at 5:11 PM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
>
> ...
>
> > +       if (!refcount) {
> > +               write_lock_irq(&acpi_ioremaps_list_lock);
> > +
> > +               list_del(&map->list);
> > +
> > +               write_unlock_irq(&acpi_ioremaps_list_lock);
> > +       }
> >         return refcount;
>
> It seems we can decrease the indentation level at the same time:
>
>   if (refcount)
>     return refcount;
>
>  ...
>  return 0;

Right, but the patch will need to be dropped anyway I think.

* Re: [RFT][PATCH] ACPI: OSL: Use rwlock instead of RCU for memory management
  2020-06-05 17:08   ` Dan Williams
@ 2020-06-06  6:56     ` Rafael J. Wysocki
  2020-06-08 15:33       ` Rafael J. Wysocki
  0 siblings, 1 reply; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-06  6:56 UTC (permalink / raw)
  To: Dan Williams
  Cc: Rafael J. Wysocki, Rafael J Wysocki, stable, Len Brown,
	Borislav Petkov, Ira Weiny, James Morse, Erik Kaneda,
	Myron Stowe, Andy Shevchenko, Linux Kernel Mailing List,
	Linux ACPI, linux-nvdimm

On Fri, Jun 5, 2020 at 7:09 PM Dan Williams <dan.j.williams@intel.com> wrote:
>
> On Fri, Jun 5, 2020 at 7:06 AM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
> >
> > From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > Subject: [PATCH] ACPI: OSL: Use rwlock instead of RCU for memory management
> >
> > The ACPI OS layer uses RCU to protect the list of ACPI memory
> > mappings from being walked while it is updated.  Among other
> > situations, that list can be walked in non-NMI interrupt context,
> > so using a sleeping lock to protect it is not an option.
> >
> > However, performance issues related to the RCU usage in there
> > appear, as described by Dan Williams:
> >
> > "Recently a performance problem was reported for a process invoking
> > a non-trivial ASL program. The method call in this case ends up
> > repetitively triggering a call path like:
> >
> >     acpi_ex_store
> >     acpi_ex_store_object_to_node
> >     acpi_ex_write_data_to_field
> >     acpi_ex_insert_into_field
> >     acpi_ex_write_with_update_rule
> >     acpi_ex_field_datum_io
> >     acpi_ex_access_region
> >     acpi_ev_address_space_dispatch
> >     acpi_ex_system_memory_space_handler
> >     acpi_os_map_cleanup.part.14
> >     _synchronize_rcu_expedited.constprop.89
> >     schedule
> >
> > The end result of frequent synchronize_rcu_expedited() invocation is
> > tiny sub-millisecond spurts of execution where the scheduler freely
> > migrates this apparently sleepy task. The overhead of frequent
> > scheduler invocation multiplies the execution time by a factor
> > of 2-3X."
> >
> > In order to avoid these issues, replace the RCU in the ACPI OS
> > layer by an rwlock.
> >
> > That rwlock should not be frequently contended, so the performance
> > impact of it is not expected to be significant.
> >
> > Reported-by: Dan Williams <dan.j.williams@intel.com>
> > Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > ---
> >
> > Hi Dan,
> >
> > This is a possible fix for the ACPI OSL RCU-related performance issues, but
> > can you please arrange for the testing of it on the affected systems?
>
> Ugh, is it really this simple? I did not realize the read-side is NMI
> safe. I'll take a look.

But if an NMI triggers while the lock is being held for writing, it
will deadlock, won't it?

OTOH, according to the RCU documentation it is valid to call
rcu_read_[un]lock() from an NMI handler (see Interrupts and NMIs in
Documentation/RCU/Design/Requirements/Requirements.rst) so we are good
from this perspective today.

Unless we teach APEI to avoid mapping lookups from
apei_{read|write}(), which wouldn't be unreasonable by itself, we need
to hold on to the RCU in ACPI OSL, so it looks like addressing the
problem in ACPICA is the best way to do it (and the current ACPICA
code in question is suboptimal, so it would be good to rework it
anyway).
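The deadlock asymmetry between the two primitives can be made concrete. The fragment below is a non-runnable, kernel-style sketch (the names follow the OSL code under discussion) contrasting the two read-side patterns:

```c
/* RCU read side: a reader never blocks on a writer, it simply observes
 * either the old or the new version of the list, so this is valid even
 * in NMI context. */
rcu_read_lock();
list_for_each_entry_rcu(map, &acpi_ioremaps, list) {
	if (map->phys <= phys && phys < map->phys + map->size)
		break;		/* found an existing mapping */
}
rcu_read_unlock();

/* rwlock read side: read_lock() spins while a writer holds the lock.
 * If an NMI arrives on the CPU that currently holds the write lock and
 * the NMI handler runs this lookup, the reader spins forever: deadlock. */
read_lock(&acpi_ioremaps_list_lock);
/* ... walk acpi_ioremaps ... */
read_unlock(&acpi_ioremaps_list_lock);
```

This is why the rwlock variant is fine for ordinary (maskable) interrupt context but not for the APEI NMI path, while the RCU read side remains valid there per Documentation/RCU/Design/Requirements/Requirements.rst.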

Cheers!

* Re: [RFT][PATCH] ACPI: OSL: Use rwlock instead of RCU for memory management
  2020-06-06  6:56     ` Rafael J. Wysocki
@ 2020-06-08 15:33       ` Rafael J. Wysocki
  2020-06-08 16:29         ` Rafael J. Wysocki
  0 siblings, 1 reply; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-08 15:33 UTC (permalink / raw)
  To: Dan Williams, Len Brown
  Cc: Rafael J. Wysocki, Rafael J Wysocki, Borislav Petkov, Ira Weiny,
	James Morse, Erik Kaneda, Myron Stowe, Andy Shevchenko,
	Linux Kernel Mailing List, Linux ACPI, linux-nvdimm, Bob Moore

On Saturday, June 6, 2020 8:56:26 AM CEST Rafael J. Wysocki wrote:
> On Fri, Jun 5, 2020 at 7:09 PM Dan Williams <dan.j.williams@intel.com> wrote:
> >
> > On Fri, Jun 5, 2020 at 7:06 AM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
> > >
> > > From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > > Subject: [PATCH] ACPI: OSL: Use rwlock instead of RCU for memory management
> > >
> > > The ACPI OS layer uses RCU to protect the list of ACPI memory
> > > mappings from being walked while it is updated.  Among other
> > > situations, that list can be walked in non-NMI interrupt context,
> > > so using a sleeping lock to protect it is not an option.
> > >
> > > However, performance issues related to the RCU usage in there
> > > appear, as described by Dan Williams:
> > >
> > > "Recently a performance problem was reported for a process invoking
> > > a non-trivial ASL program. The method call in this case ends up
> > > repetitively triggering a call path like:
> > >
> > >     acpi_ex_store
> > >     acpi_ex_store_object_to_node
> > >     acpi_ex_write_data_to_field
> > >     acpi_ex_insert_into_field
> > >     acpi_ex_write_with_update_rule
> > >     acpi_ex_field_datum_io
> > >     acpi_ex_access_region
> > >     acpi_ev_address_space_dispatch
> > >     acpi_ex_system_memory_space_handler
> > >     acpi_os_map_cleanup.part.14
> > >     _synchronize_rcu_expedited.constprop.89
> > >     schedule
> > >
> > > The end result of frequent synchronize_rcu_expedited() invocation is
> > > tiny sub-millisecond spurts of execution where the scheduler freely
> > > migrates this apparently sleepy task. The overhead of frequent
> > > scheduler invocation multiplies the execution time by a factor
> > > of 2-3X."
> > >
> > > In order to avoid these issues, replace the RCU in the ACPI OS
> > > layer by an rwlock.
> > >
> > > That rwlock should not be frequently contended, so the performance
> > > impact of it is not expected to be significant.
> > >
> > > Reported-by: Dan Williams <dan.j.williams@intel.com>
> > > Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > > ---
> > >
> > > Hi Dan,
> > >
> > > This is a possible fix for the ACPI OSL RCU-related performance issues, but
> > > can you please arrange for the testing of it on the affected systems?
> >
> > Ugh, is it really this simple? I did not realize the read-side is NMI
> > safe. I'll take a look.
> 
> But if an NMI triggers while the lock is being held for writing, it
> will deadlock, won't it?
> 
> OTOH, according to the RCU documentation it is valid to call
> rcu_read_[un]lock() from an NMI handler (see Interrupts and NMIs in
> Documentation/RCU/Design/Requirements/Requirements.rst) so we are good
> from this perspective today.
> 
> Unless we teach APEI to avoid mapping lookups from
> apei_{read|write}(), which wouldn't be unreasonable by itself, we need
> to hold on to the RCU in ACPI OSL, so it looks like addressing the
> problem in ACPICA is the best way to do it (and the current ACPICA
> code in question is suboptimal, so it would be good to rework it
> anyway).
> 
> Cheers!

I've sent the prototype patch below to you, Bob and Erik in private, so
here it goes to the lists for completeness.

It introduces a "fast-path" variant of acpi_os_map_memory() that only
returns non-NULL if a matching mapping is already there in the list
and reworks acpi_ex_system_memory_space_handler() to use it.

The idea is to do a fast-path lookup first for every new mapping and
only run the full acpi_os_map_memory() if that returns NULL, then
save the mapping returned by it and do a fast-path lookup for it again
to bump up its reference counter in the OSL layer.  That should prevent
the mappings from going away until the opregions that they belong to
go away (the opregion deactivation code is updated too to remove the
saved mappings), so in the cases when there's not too much opregion
creation and removal activity, it should make the RCU-related overhead
go away.

Please test.

Cheers!

---
 drivers/acpi/acpica/evrgnini.c    |   14 ++++++++-
 drivers/acpi/acpica/exregion.c    |   49 +++++++++++++++++++++++++++++--
 drivers/acpi/osl.c                |   59 ++++++++++++++++++++++++++++----------
 include/acpi/actypes.h            |    7 ++++
 include/acpi/platform/aclinuxex.h |    2 +
 5 files changed, 112 insertions(+), 19 deletions(-)

Index: linux-pm/drivers/acpi/osl.c
===================================================================
--- linux-pm.orig/drivers/acpi/osl.c
+++ linux-pm/drivers/acpi/osl.c
@@ -302,21 +302,8 @@ static void acpi_unmap(acpi_physical_add
 		iounmap(vaddr);
 }
 
-/**
- * acpi_os_map_iomem - Get a virtual address for a given physical address range.
- * @phys: Start of the physical address range to map.
- * @size: Size of the physical address range to map.
- *
- * Look up the given physical address range in the list of existing ACPI memory
- * mappings.  If found, get a reference to it and return a pointer to it (its
- * virtual address).  If not found, map it, add it to that list and return a
- * pointer to it.
- *
- * During early init (when acpi_permanent_mmap has not been set yet) this
- * routine simply calls __acpi_map_table() to get the job done.
- */
-void __iomem __ref
-*acpi_os_map_iomem(acpi_physical_address phys, acpi_size size)
+static void __iomem __ref *__acpi_os_map_iomem(acpi_physical_address phys,
+					       acpi_size size, bool fastpath)
 {
 	struct acpi_ioremap *map;
 	void __iomem *virt;
@@ -339,6 +326,11 @@ void __iomem __ref
 		goto out;
 	}
 
+	if (fastpath) {
+		mutex_unlock(&acpi_ioremap_lock);
+		return NULL;
+	}
+
 	map = kzalloc(sizeof(*map), GFP_KERNEL);
 	if (!map) {
 		mutex_unlock(&acpi_ioremap_lock);
@@ -366,6 +358,25 @@ out:
 	mutex_unlock(&acpi_ioremap_lock);
 	return map->virt + (phys - map->phys);
 }
+
+/**
+ * acpi_os_map_iomem - Get a virtual address for a given physical address range.
+ * @phys: Start of the physical address range to map.
+ * @size: Size of the physical address range to map.
+ *
+ * Look up the given physical address range in the list of existing ACPI memory
+ * mappings.  If found, get a reference to it and return a pointer representing
+ * its virtual address.  If not found, map it, add it to that list and return a
+ * pointer representing its virtual address.
+ *
+ * During early init (when acpi_permanent_mmap has not been set yet) call
+ * __acpi_map_table() to obtain the mapping.
+ */
+void __iomem __ref *acpi_os_map_iomem(acpi_physical_address phys,
+				      acpi_size size)
+{
+	return __acpi_os_map_iomem(phys, size, false);
+}
 EXPORT_SYMBOL_GPL(acpi_os_map_iomem);
 
 void *__ref acpi_os_map_memory(acpi_physical_address phys, acpi_size size)
@@ -374,6 +385,24 @@ void *__ref acpi_os_map_memory(acpi_phys
 }
 EXPORT_SYMBOL_GPL(acpi_os_map_memory);
 
+/**
+ * acpi_os_map_memory_fastpath - Fast-path physical-to-virtual address mapping.
+ * @phys: Start of the physical address range to map.
+ * @size: Size of the physical address range to map.
+ *
+ * Look up the given physical address range in the list of existing ACPI memory
+ * mappings.  If found, get a reference to it and return a pointer representing
+ * its virtual address.  If not found, return NULL.
+ *
+ * During early init (when acpi_permanent_mmap has not been set yet) call
+ * __acpi_map_table() to obtain the mapping.
+ */
+void __ref *acpi_os_map_memory_fastpath(acpi_physical_address phys,
+					acpi_size size)
+{
+	return __acpi_os_map_iomem(phys, size, true);
+}
+
 /* Must be called with mutex_lock(&acpi_ioremap_lock) */
 static unsigned long acpi_os_drop_map_ref(struct acpi_ioremap *map)
 {
Index: linux-pm/include/acpi/actypes.h
===================================================================
--- linux-pm.orig/include/acpi/actypes.h
+++ linux-pm/include/acpi/actypes.h
@@ -1200,12 +1200,19 @@ struct acpi_pci_id {
 	u16 function;
 };
 
+struct acpi_mem_mapping {
+	u8 *logical_address;
+	acpi_size length;
+	struct acpi_mem_mapping *next;
+};
+
 struct acpi_mem_space_context {
 	u32 length;
 	acpi_physical_address address;
 	acpi_physical_address mapped_physical_address;
 	u8 *mapped_logical_address;
 	acpi_size mapped_length;
+	struct acpi_mem_mapping *first_mapping;
 };
 
 /*
Index: linux-pm/drivers/acpi/acpica/exregion.c
===================================================================
--- linux-pm.orig/drivers/acpi/acpica/exregion.c
+++ linux-pm/drivers/acpi/acpica/exregion.c
@@ -44,6 +44,9 @@ acpi_ex_system_memory_space_handler(u32
 	u32 length;
 	acpi_size map_length;
 	acpi_size page_boundary_map_length;
+#ifdef ACPI_OS_MAP_MEMORY_FASTPATH
+	struct acpi_mem_mapping *new_mapping;
+#endif
 #ifdef ACPI_MISALIGNMENT_NOT_SUPPORTED
 	u32 remainder;
 #endif
@@ -143,9 +146,20 @@ acpi_ex_system_memory_space_handler(u32
 
 		/* Create a new mapping starting at the address given */
 
-		mem_info->mapped_logical_address =
-		    acpi_os_map_memory(address, map_length);
-		if (!mem_info->mapped_logical_address) {
+#ifdef ACPI_OS_MAP_MEMORY_FASTPATH
+		/* Look for an existing mapping matching the request at hand. */
+		logical_addr_ptr = acpi_os_map_memory_fastpath(address, length);
+		if (logical_addr_ptr) {
+			/*
+			 * A matching mapping has been found, so cache it and
+			 * carry out the access as requested.
+			 */
+			goto access;
+		}
+#endif /* ACPI_OS_MAP_MEMORY_FASTPATH */
+
+		logical_addr_ptr = acpi_os_map_memory(address, map_length);
+		if (!logical_addr_ptr) {
 			ACPI_ERROR((AE_INFO,
 				    "Could not map memory at 0x%8.8X%8.8X, size %u",
 				    ACPI_FORMAT_UINT64(address),
@@ -154,8 +168,37 @@ acpi_ex_system_memory_space_handler(u32
 			return_ACPI_STATUS(AE_NO_MEMORY);
 		}
 
+#ifdef ACPI_OS_MAP_MEMORY_FASTPATH
+		new_mapping = ACPI_ALLOCATE_ZEROED(sizeof(*new_mapping));
+		if (new_mapping) {
+			new_mapping->logical_address = logical_addr_ptr;
+			new_mapping->length = map_length;
+			new_mapping->next = mem_info->first_mapping;
+			mem_info->first_mapping = new_mapping;
+			/*
+			 * Carry out an extra fast-path lookup to get one more
+			 * reference to this mapping to prevent it from getting
+			 * dropped if a future access involving this region does
+			 * not fall into it.
+			 */
+			acpi_os_map_memory_fastpath(address, map_length);
+		} else {
+			/*
+			 * No room to save the new mapping, but this is not
+			 * critical.  Just log the error and carry out the
+			 * access as requested.
+			 */
+			ACPI_ERROR((AE_INFO,
+				    "Not enough memory to save memory mapping at 0x%8.8X%8.8X, size %u",
+				    ACPI_FORMAT_UINT64(address),
+				    (u32)map_length));
+		}
+
+access:
+#endif /* ACPI_OS_MAP_MEMORY_FASTPATH */
 		/* Save the physical address and mapping size */
 
+		mem_info->mapped_logical_address = logical_addr_ptr;
 		mem_info->mapped_physical_address = address;
 		mem_info->mapped_length = map_length;
 	}
Index: linux-pm/drivers/acpi/acpica/evrgnini.c
===================================================================
--- linux-pm.orig/drivers/acpi/acpica/evrgnini.c
+++ linux-pm/drivers/acpi/acpica/evrgnini.c
@@ -38,6 +38,9 @@ acpi_ev_system_memory_region_setup(acpi_
 	union acpi_operand_object *region_desc =
 	    (union acpi_operand_object *)handle;
 	struct acpi_mem_space_context *local_region_context;
+#ifdef ACPI_OS_MAP_MEMORY_FASTPATH
+	struct acpi_mem_mapping *mapping;
+#endif
 
 	ACPI_FUNCTION_TRACE(ev_system_memory_region_setup);
 
@@ -46,13 +49,22 @@ acpi_ev_system_memory_region_setup(acpi_
 			local_region_context =
 			    (struct acpi_mem_space_context *)*region_context;
 
-			/* Delete a cached mapping if present */
+			/* Delete memory mappings if present */
 
 			if (local_region_context->mapped_length) {
 				acpi_os_unmap_memory(local_region_context->
 						     mapped_logical_address,
 						     local_region_context->
 						     mapped_length);
+#ifdef ACPI_OS_MAP_MEMORY_FASTPATH
+				while (local_region_context->first_mapping) {
+					mapping = local_region_context->first_mapping;
+					local_region_context->first_mapping = mapping->next;
+					acpi_os_unmap_memory(mapping->logical_address,
+							     mapping->length);
+					ACPI_FREE(mapping);
+				}
+#endif /* ACPI_OS_MAP_MEMORY_FASTPATH */
 			}
 			ACPI_FREE(local_region_context);
 			*region_context = NULL;
Index: linux-pm/include/acpi/platform/aclinuxex.h
===================================================================
--- linux-pm.orig/include/acpi/platform/aclinuxex.h
+++ linux-pm/include/acpi/platform/aclinuxex.h
@@ -138,6 +138,8 @@ static inline void acpi_os_terminate_deb
 /*
  * OSL interfaces added by Linux
  */
+#define ACPI_OS_MAP_MEMORY_FASTPATH
+void *acpi_os_map_memory_fastpath(acpi_physical_address where, acpi_size length);
 
 #endif				/* __KERNEL__ */
 
* Re: [RFT][PATCH] ACPI: OSL: Use rwlock instead of RCU for memory management
  2020-06-08 15:33       ` Rafael J. Wysocki
@ 2020-06-08 16:29         ` Rafael J. Wysocki
  0 siblings, 0 replies; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-08 16:29 UTC (permalink / raw)
  To: Dan Williams
  Cc: Len Brown, Rafael J. Wysocki, Rafael J Wysocki, Borislav Petkov,
	Ira Weiny, James Morse, Erik Kaneda, Myron Stowe,
	Andy Shevchenko, Linux Kernel Mailing List, Linux ACPI,
	linux-nvdimm, Bob Moore

On Mon, Jun 8, 2020 at 5:33 PM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
>
> On Saturday, June 6, 2020 8:56:26 AM CEST Rafael J. Wysocki wrote:
> > On Fri, Jun 5, 2020 at 7:09 PM Dan Williams <dan.j.williams@intel.com> wrote:
> > >
> > > On Fri, Jun 5, 2020 at 7:06 AM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
> > > >
> > > > From: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > > > Subject: [PATCH] ACPI: OSL: Use rwlock instead of RCU for memory management
> > > >
> > > > The ACPI OS layer uses RCU to protect the list of ACPI memory
> > > > mappings from being walked while it is updated.  Among other
> > > > situations, that list can be walked in non-NMI interrupt context,
> > > > so using a sleeping lock to protect it is not an option.
> > > >
> > > > However, performance issues related to the RCU usage in there
> > > > appear, as described by Dan Williams:
> > > >
> > > > "Recently a performance problem was reported for a process invoking
> > > > a non-trivial ASL program. The method call in this case ends up
> > > > repetitively triggering a call path like:
> > > >
> > > >     acpi_ex_store
> > > >     acpi_ex_store_object_to_node
> > > >     acpi_ex_write_data_to_field
> > > >     acpi_ex_insert_into_field
> > > >     acpi_ex_write_with_update_rule
> > > >     acpi_ex_field_datum_io
> > > >     acpi_ex_access_region
> > > >     acpi_ev_address_space_dispatch
> > > >     acpi_ex_system_memory_space_handler
> > > >     acpi_os_map_cleanup.part.14
> > > >     _synchronize_rcu_expedited.constprop.89
> > > >     schedule
> > > >
> > > > The end result of frequent synchronize_rcu_expedited() invocation is
> > > > tiny sub-millisecond spurts of execution where the scheduler freely
> > > > migrates this apparently sleepy task. The overhead of frequent
> > > > scheduler invocation multiplies the execution time by a factor
> > > > of 2-3X."
> > > >
> > > > In order to avoid these issues, replace the RCU in the ACPI OS
> > > > layer by an rwlock.
> > > >
> > > > That rwlock should not be frequently contended, so the performance
> > > > impact of it is not expected to be significant.
> > > >
> > > > Reported-by: Dan Williams <dan.j.williams@intel.com>
> > > > Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > > > ---
> > > >
> > > > Hi Dan,
> > > >
> > > > This is a possible fix for the ACPI OSL RCU-related performance issues, but
> > > > can you please arrange for the testing of it on the affected systems?
> > >
> > > Ugh, is it really this simple? I did not realize the read-side is NMI
> > > safe. I'll take a look.
> >
> > But if an NMI triggers while the lock is being held for writing, it
> > will deadlock, won't it?
> >
> > OTOH, according to the RCU documentation it is valid to call
> > rcu_read_[un]lock() from an NMI handler (see Interrupts and NMIs in
> > Documentation/RCU/Design/Requirements/Requirements.rst) so we are good
> > from this perspective today.
> >
> > Unless we teach APEI to avoid mapping lookups from
> > apei_{read|write}(), which wouldn't be unreasonable by itself, we need
> > to hold on to the RCU in ACPI OSL, so it looks like addressing the
> > problem in ACPICA is the best way to do it (and the current ACPICA
> > code in question is suboptimal, so it would be good to rework it
> > anyway).
> >
> > Cheers!
>
> I've sent the prototype patch below to you, Bob and Erik in private, so
> here it goes to the lists for completeness.
>
> It introduces a "fast-path" variant of acpi_os_map_memory() that only
> returns non-NULL if a matching mapping is already there in the list
> and reworks acpi_ex_system_memory_space_handler() to use it.
>
> The idea is to do a fast-path lookup first for every new mapping and
> only run the full acpi_os_map_memory() if that returns NULL and then
> save the mapping returned by it and do a fast-path lookup for it again
> to bump up its reference counter in the OSL layer.  That should prevent
> the mappings from going away until the opregions that they belong to
> go away (the opregion deactivation code is updated too to remove the
> saved mappings), so in the cases when there's not too much opregion
> creation and removal activity, it should make the RCU-related overhead
> go away.
>
> Please test.
>
> Cheers!
>
> ---
>  drivers/acpi/acpica/evrgnini.c    |   14 ++++++++-
>  drivers/acpi/acpica/exregion.c    |   49 +++++++++++++++++++++++++++++--
>  drivers/acpi/osl.c                |   59 ++++++++++++++++++++++++++++----------
>  include/acpi/actypes.h            |    7 ++++
>  include/acpi/platform/aclinuxex.h |    2 +
>  5 files changed, 112 insertions(+), 19 deletions(-)
>
> Index: linux-pm/drivers/acpi/osl.c
> ===================================================================
> --- linux-pm.orig/drivers/acpi/osl.c
> +++ linux-pm/drivers/acpi/osl.c
> @@ -302,21 +302,8 @@ static void acpi_unmap(acpi_physical_add
>                 iounmap(vaddr);
>  }
>
> -/**
> - * acpi_os_map_iomem - Get a virtual address for a given physical address range.
> - * @phys: Start of the physical address range to map.
> - * @size: Size of the physical address range to map.
> - *
> - * Look up the given physical address range in the list of existing ACPI memory
> - * mappings.  If found, get a reference to it and return a pointer to it (its
> - * virtual address).  If not found, map it, add it to that list and return a
> - * pointer to it.
> - *
> - * During early init (when acpi_permanent_mmap has not been set yet) this
> - * routine simply calls __acpi_map_table() to get the job done.
> - */
> -void __iomem __ref
> -*acpi_os_map_iomem(acpi_physical_address phys, acpi_size size)
> +static void __iomem __ref *__acpi_os_map_iomem(acpi_physical_address phys,
> +                                              acpi_size size, bool fastpath)
>  {
>         struct acpi_ioremap *map;
>         void __iomem *virt;
> @@ -339,6 +326,11 @@ void __iomem __ref
>                 goto out;
>         }
>
> +       if (fastpath) {
> +               mutex_unlock(&acpi_ioremap_lock);
> +               return NULL;
> +       }
> +
>         map = kzalloc(sizeof(*map), GFP_KERNEL);
>         if (!map) {
>                 mutex_unlock(&acpi_ioremap_lock);
> @@ -366,6 +358,25 @@ out:
>         mutex_unlock(&acpi_ioremap_lock);
>         return map->virt + (phys - map->phys);
>  }
> +
> +/**
> + * acpi_os_map_iomem - Get a virtual address for a given physical address range.
> + * @phys: Start of the physical address range to map.
> + * @size: Size of the physical address range to map.
> + *
> + * Look up the given physical address range in the list of existing ACPI memory
> + * mappings.  If found, get a reference to it and return a pointer representing
> + * its virtual address.  If not found, map it, add it to that list and return a
> + * pointer representing its virtual address.
> + *
> + * During early init (when acpi_permanent_mmap has not been set yet) call
> + * __acpi_map_table() to obtain the mapping.
> + */
> +void __iomem __ref *acpi_os_map_iomem(acpi_physical_address phys,
> +                                     acpi_size size)
> +{
> +       return __acpi_os_map_iomem(phys, size, false);
> +}
>  EXPORT_SYMBOL_GPL(acpi_os_map_iomem);
>
>  void *__ref acpi_os_map_memory(acpi_physical_address phys, acpi_size size)
> @@ -374,6 +385,24 @@ void *__ref acpi_os_map_memory(acpi_phys
>  }
>  EXPORT_SYMBOL_GPL(acpi_os_map_memory);
>
> +/**
> + * acpi_os_map_memory_fastpath - Fast-path physical-to-virtual address mapping.
> + * @phys: Start of the physical address range to map.
> + * @size: Size of the physical address range to map.
> + *
> + * Look up the given physical address range in the list of existing ACPI memory
> + * mappings.  If found, get a reference to it and return a pointer representing
> + * its virtual address.  If not found, return NULL.
> + *
> + * During early init (when acpi_permanent_mmap has not been set yet) call
> + * __acpi_map_table() to obtain the mapping.
> + */
> +void __ref *acpi_os_map_memory_fastpath(acpi_physical_address phys,
> +                                       acpi_size size)
> +{
> +       return __acpi_os_map_iomem(phys, size, true);
> +}
> +
>  /* Must be called with mutex_lock(&acpi_ioremap_lock) */
>  static unsigned long acpi_os_drop_map_ref(struct acpi_ioremap *map)
>  {
> Index: linux-pm/include/acpi/actypes.h
> ===================================================================
> --- linux-pm.orig/include/acpi/actypes.h
> +++ linux-pm/include/acpi/actypes.h
> @@ -1200,12 +1200,19 @@ struct acpi_pci_id {
>         u16 function;
>  };
>
> +struct acpi_mem_mapping {
> +       u8 *logical_address;
> +       acpi_size length;
> +       struct acpi_mem_mapping *next;
> +};
> +
>  struct acpi_mem_space_context {
>         u32 length;
>         acpi_physical_address address;
>         acpi_physical_address mapped_physical_address;
>         u8 *mapped_logical_address;
>         acpi_size mapped_length;
> +       struct acpi_mem_mapping *first_mapping;
>  };
>
>  /*
> Index: linux-pm/drivers/acpi/acpica/exregion.c
> ===================================================================
> --- linux-pm.orig/drivers/acpi/acpica/exregion.c
> +++ linux-pm/drivers/acpi/acpica/exregion.c
> @@ -44,6 +44,9 @@ acpi_ex_system_memory_space_handler(u32
>         u32 length;
>         acpi_size map_length;
>         acpi_size page_boundary_map_length;
> +#ifdef ACPI_OS_MAP_MEMORY_FASTPATH
> +       struct acpi_mem_mapping *new_mapping;
> +#endif
>  #ifdef ACPI_MISALIGNMENT_NOT_SUPPORTED
>         u32 remainder;
>  #endif
> @@ -143,9 +146,20 @@ acpi_ex_system_memory_space_handler(u32
>
>                 /* Create a new mapping starting at the address given */
>
> -               mem_info->mapped_logical_address =
> -                   acpi_os_map_memory(address, map_length);
> -               if (!mem_info->mapped_logical_address) {
> +#ifdef ACPI_OS_MAP_MEMORY_FASTPATH
> +               /* Look for an existing mapping matching the request at hand. */
> +               logical_addr_ptr = acpi_os_map_memory_fastpath(address, length);

s/length/map_length/

but the patch should still work as is, with more overhead though.

> +               if (logical_addr_ptr) {
> +                       /*
> +                        * A matching mapping has been found, so cache it and
> +                        * carry out the access as requested.
> +                        */
> +                       goto access;
> +               }
> +#endif /* ACPI_OS_MAP_MEMORY_FASTPATH */
> +
> +               logical_addr_ptr = acpi_os_map_memory(address, map_length);
> +               if (!logical_addr_ptr) {
>                         ACPI_ERROR((AE_INFO,
>                                     "Could not map memory at 0x%8.8X%8.8X, size %u",
>                                     ACPI_FORMAT_UINT64(address),
> @@ -154,8 +168,37 @@ acpi_ex_system_memory_space_handler(u32
>                         return_ACPI_STATUS(AE_NO_MEMORY);
>                 }
>
> +#ifdef ACPI_OS_MAP_MEMORY_FASTPATH
> +               new_mapping = ACPI_ALLOCATE_ZEROED(sizeof(*new_mapping));
> +               if (new_mapping) {
> +                       new_mapping->logical_address = logical_addr_ptr;
> +                       new_mapping->length = map_length;
> +                       new_mapping->next = mem_info->first_mapping;
> +                       mem_info->first_mapping = new_mapping;
> +                       /*
> +                        * Carry out an extra fast-path lookup to get one more
> +                        * reference to this mapping to prevent it from getting
> +                        * dropped if a future access involving this region does
> +                        * not fall into it.
> +                        */
> +                       acpi_os_map_memory_fastpath(address, map_length);
> +               } else {
> +                       /*
> +                        * No room to save the new mapping, but this is not
> +                        * critical.  Just log the error and carry out the
> +                        * access as requested.
> +                        */
> +                       ACPI_ERROR((AE_INFO,
> +                                   "Not enough memory to save memory mapping at 0x%8.8X%8.8X, size %u",
> +                                   ACPI_FORMAT_UINT64(address),
> +                                   (u32)map_length));
> +               }
> +
> +access:
> +#endif /* ACPI_OS_MAP_MEMORY_FASTPATH */
>                 /* Save the physical address and mapping size */
>
> +               mem_info->mapped_logical_address = logical_addr_ptr;
>                 mem_info->mapped_physical_address = address;
>                 mem_info->mapped_length = map_length;
>         }
> Index: linux-pm/drivers/acpi/acpica/evrgnini.c
> ===================================================================
> --- linux-pm.orig/drivers/acpi/acpica/evrgnini.c
> +++ linux-pm/drivers/acpi/acpica/evrgnini.c
> @@ -38,6 +38,9 @@ acpi_ev_system_memory_region_setup(acpi_
>         union acpi_operand_object *region_desc =
>             (union acpi_operand_object *)handle;
>         struct acpi_mem_space_context *local_region_context;
> +#ifdef ACPI_OS_MAP_MEMORY_FASTPATH
> +       struct acpi_mem_mapping *mapping;
> +#endif
>
>         ACPI_FUNCTION_TRACE(ev_system_memory_region_setup);
>
> @@ -46,13 +49,22 @@ acpi_ev_system_memory_region_setup(acpi_
>                         local_region_context =
>                             (struct acpi_mem_space_context *)*region_context;
>
> -                       /* Delete a cached mapping if present */
> +                       /* Delete memory mappings if present */
>
>                         if (local_region_context->mapped_length) {
>                                 acpi_os_unmap_memory(local_region_context->
>                                                      mapped_logical_address,
>                                                      local_region_context->
>                                                      mapped_length);
> +#ifdef ACPI_OS_MAP_MEMORY_FASTPATH
> +                               while (local_region_context->first_mapping) {
> +                                       mapping = local_region_context->first_mapping;
> +                                       local_region_context->first_mapping = mapping->next;
> +                                       acpi_os_unmap_memory(mapping->logical_address,
> +                                                            mapping->length);
> +                                       ACPI_FREE(mapping);
> +                               }
> +#endif /* ACPI_OS_MAP_MEMORY_FASTPATH */
>                         }
>                         ACPI_FREE(local_region_context);
>                         *region_context = NULL;
> Index: linux-pm/include/acpi/platform/aclinuxex.h
> ===================================================================
> --- linux-pm.orig/include/acpi/platform/aclinuxex.h
> +++ linux-pm/include/acpi/platform/aclinuxex.h
> @@ -138,6 +138,8 @@ static inline void acpi_os_terminate_deb
>  /*
>   * OSL interfaces added by Linux
>   */
> +#define ACPI_OS_MAP_MEMORY_FASTPATH
> +void *acpi_os_map_memory_fastpath(acpi_physical_address where, acpi_size length);
>
>  #endif                         /* __KERNEL__ */
>

* [RFT][PATCH 0/3] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter
  2020-05-07 23:39 [PATCH v2] ACPI: Drop rcu usage for MMIO mappings Dan Williams
  2020-06-05 13:32 ` Rafael J. Wysocki
  2020-06-05 14:06 ` [RFT][PATCH] ACPI: OSL: Use rwlock instead of RCU for memory management Rafael J. Wysocki
@ 2020-06-10 12:17 ` Rafael J. Wysocki
  2020-06-10 12:20   ` [RFT][PATCH 1/3] ACPICA: Defer unmapping of memory used in memory opregions Rafael J. Wysocki
                     ` (3 more replies)
  2020-06-22 13:50 ` [RFT][PATCH v2 0/4] " Rafael J. Wysocki
  3 siblings, 4 replies; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-10 12:17 UTC (permalink / raw)
  To: Dan Williams, Erik Kaneda
  Cc: rafael.j.wysocki, Len Brown, Borislav Petkov, Ira Weiny,
	James Morse, Myron Stowe, Andy Shevchenko, linux-kernel,
	linux-acpi, linux-nvdimm, Bob Moore

Hi All,

This series is to address the problem with RCU synchronization occurring,
possibly relatively often, inside of acpi_ex_system_memory_space_handler(),
when the namespace and interpreter mutexes are held.

The basic idea is to avoid the actual unmapping of memory in
acpi_ex_system_memory_space_handler() by making it take advantage of the
reference counting of memory mappings utilized by the OSL layer in Linux.

The basic assumption in patch [1/3] is that if the special
ACPI_OS_MAP_MEMORY_FAST_PATH() macro is present, it can be used to increment
the reference counter of a known-existing memory mapping in the OS layer
which then is dropped by the subsequent acpi_os_unmap_memory() without
unmapping the address range at hand.  That can be utilized by
acpi_ex_system_memory_space_handler() to prevent the reference counters of
all mappings used by it from dropping down to 0 (which also prevents the
address ranges associated with them from being unmapped) so that they can
be unmapped later (specifically, at the operation region deactivation time).

Patch [2/3] defers the unmapping even further, until the namespace and
interpreter mutexes are released, to avoid invoking the RCU synchronization
under these mutexes.

Finally, patch [3/3] changes the OS layer in Linux to provide the
ACPI_OS_MAP_MEMORY_FAST_PATH() macro.

Note that if this macro is not defined, the code works the way it used to.
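The reference-counting scheme described above can be modeled in plain
userspace C.  The struct and function names below are purely illustrative
stand-ins, not the real ACPICA/OSL symbols: map_memory() plays the role of
acpi_os_map_memory() (reuse a covering mapping or create one, taking a
reference), the fast-path lookup only takes an extra reference on a
known-existing mapping, and unmap_memory() really unmaps only on the last
reference drop:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Illustrative model only: names and layout are NOT the real ACPICA/OSL ones. */
struct mapping {
	unsigned long phys;
	size_t size;
	int refcount;
	struct mapping *next;
};

static struct mapping *mappings;

static struct mapping *find_mapping(unsigned long phys, size_t size)
{
	struct mapping *m;

	for (m = mappings; m; m = m->next)
		if (phys >= m->phys && phys + size <= m->phys + m->size)
			return m;
	return NULL;
}

/* Like acpi_os_map_memory(): reuse a covering mapping or create a new one. */
static struct mapping *map_memory(unsigned long phys, size_t size)
{
	struct mapping *m = find_mapping(phys, size);

	if (!m) {
		m = calloc(1, sizeof(*m));
		if (!m)
			return NULL;
		m->phys = phys;
		m->size = size;
		m->next = mappings;
		mappings = m;
	}
	m->refcount++;
	return m;
}

/* Like ACPI_OS_MAP_MEMORY_FAST_PATH(): never create a mapping, only take
 * an extra reference on one that is known to exist already. */
static struct mapping *map_memory_fast_path(unsigned long phys, size_t size)
{
	struct mapping *m = find_mapping(phys, size);

	if (m)
		m->refcount++;
	return m;
}

/* Like acpi_os_unmap_memory(): the mapping goes away only on the last drop. */
static void unmap_memory(struct mapping *m)
{
	struct mapping **p;

	if (--m->refcount > 0)
		return;
	for (p = &mappings; *p; p = &(*p)->next) {
		if (*p == m) {
			*p = m->next;
			break;
		}
	}
	free(m);
}
```

This is exactly why the extra fast-path reference taken by the space handler
keeps a mapping alive: the subsequent acpi_os_unmap_memory() then only drops
the counter instead of actually tearing the mapping down.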

The series is available from the git branch at

 git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git \
 acpica-osl

for easier testing.

Cheers,
Rafael


* [RFT][PATCH 1/3] ACPICA: Defer unmapping of memory used in memory opregions
  2020-06-10 12:17 ` [RFT][PATCH 0/3] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Rafael J. Wysocki
@ 2020-06-10 12:20   ` Rafael J. Wysocki
  2020-06-10 12:21   ` [RFT][PATCH 2/3] ACPICA: Remove unused memory mappings on interpreter exit Rafael J. Wysocki
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-10 12:20 UTC (permalink / raw)
  To: Dan Williams, Erik Kaneda
  Cc: rafael.j.wysocki, Len Brown, Borislav Petkov, Ira Weiny,
	James Morse, Myron Stowe, Andy Shevchenko, linux-kernel,
	linux-acpi, linux-nvdimm, Bob Moore

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

The ACPI OS layer in Linux uses RCU to protect the list of ACPI
memory mappings from being walked while it is being updated.  Among
other situations, that list can be walked in (NMI and non-NMI)
interrupt context, so using a sleeping lock to protect it is not
an option.

However, performance issues related to the RCU usage in that code
have appeared, as described by Dan Williams:

"Recently a performance problem was reported for a process invoking
a non-trival ASL program. The method call in this case ends up
repetitively triggering a call path like:

    acpi_ex_store
    acpi_ex_store_object_to_node
    acpi_ex_write_data_to_field
    acpi_ex_insert_into_field
    acpi_ex_write_with_update_rule
    acpi_ex_field_datum_io
    acpi_ex_access_region
    acpi_ev_address_space_dispatch
    acpi_ex_system_memory_space_handler
    acpi_os_map_cleanup.part.14
    _synchronize_rcu_expedited.constprop.89
    schedule

The end result of frequent synchronize_rcu_expedited() invocation is
tiny sub-millisecond spurts of execution where the scheduler freely
migrates this apparently sleepy task. The overhead of frequent
scheduler invocation multiplies the execution time by a factor
of 2-3X."

The source of this is that acpi_ex_system_memory_space_handler()
unmaps the memory mapping currently cached by it at the access time
if that mapping doesn't cover the memory area being accessed.
Consequently, if there is a memory opregion with two fields
separated from each other by an unused chunk of address space that
is large enough for not being covered by a single mapping, and they
happen to be used in an alternating pattern, the unmapping will
occur on every acpi_ex_system_memory_space_handler() invocation for
that memory opregion and that will lead to significant overhead.
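The cost of that alternating pattern can be quantified with a small
userspace model (purely illustrative; the window size and policies below are
simplified stand-ins for the real mapping logic): the old single-slot cache
remaps on every access that misses the one cached window, while a list of
saved mappings maps each distinct window only once:

```c
#include <assert.h>

/* Illustrative model: count remaps under the old single-slot cache policy,
 * where any access outside the one cached window forces a remap. */
static int remaps_single_slot(const unsigned long *addrs, int n,
			      unsigned long window)
{
	unsigned long cached = 0;
	int have_cached = 0, remaps = 0, i;

	for (i = 0; i < n; i++) {
		if (!have_cached ||
		    addrs[i] < cached || addrs[i] >= cached + window) {
			cached = addrs[i];
			have_cached = 1;
			remaps++;
		}
	}
	return remaps;
}

/* With a list of saved mappings, each distinct window is mapped only once. */
static int remaps_with_list(const unsigned long *addrs, int n,
			    unsigned long window)
{
	unsigned long saved[64];
	int nsaved = 0, remaps = 0, i, j, found;

	for (i = 0; i < n; i++) {
		found = 0;
		for (j = 0; j < nsaved; j++)
			if (addrs[i] >= saved[j] &&
			    addrs[i] < saved[j] + window)
				found = 1;
		if (!found && nsaved < 64) {
			saved[nsaved++] = addrs[i];
			remaps++;
		}
	}
	return remaps;
}
```

For two fields a megabyte apart accessed in strict alternation, the
single-slot policy remaps on every single access, whereas the list policy
pays the mapping cost exactly twice.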

To remedy that, modify acpi_ex_system_memory_space_handler() so it
can defer the unmapping of the memory mapped by it until the
operation region using that memory is deactivated, provided that the
OS layer defines a special ACPI_OS_MAP_MEMORY_FAST_PATH() macro
allowing its users to get an extra reference to a known-existing
memory mapping without actually mapping it again.

Namely, make acpi_ex_system_memory_space_handler() manage an internal
list of memory mappings covering all memory accesses through it that
have occurred so far if ACPI_OS_MAP_MEMORY_FAST_PATH() is present, so
that every new mapping is added to that list with an extra reference
obtained via ACPI_OS_MAP_MEMORY_FAST_PATH() which prevents it from
being unmapped by acpi_ex_system_memory_space_handler() itself.
Of course, those mappings need to go away at one point, so change
acpi_ev_system_memory_region_setup() to unmap them when the memory
opregion holding them is deactivated.

This should reduce the acpi_ex_system_memory_space_handler() overhead
for memory opregions that do not go away immediately after a single
access.  [Of course, ACPI_OS_MAP_MEMORY_FAST_PATH() needs to be
implemented by the OS for this change to take effect.]

Reported-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/acpi/acpica/evrgnini.c | 14 +++++-
 drivers/acpi/acpica/exregion.c | 90 ++++++++++++++++++++++++++++++++--
 include/acpi/actypes.h         |  8 +++
 3 files changed, 107 insertions(+), 5 deletions(-)

diff --git a/drivers/acpi/acpica/evrgnini.c b/drivers/acpi/acpica/evrgnini.c
index aefc0145e583..48a5e6eaf9b9 100644
--- a/drivers/acpi/acpica/evrgnini.c
+++ b/drivers/acpi/acpica/evrgnini.c
@@ -38,6 +38,9 @@ acpi_ev_system_memory_region_setup(acpi_handle handle,
 	union acpi_operand_object *region_desc =
 	    (union acpi_operand_object *)handle;
 	struct acpi_mem_space_context *local_region_context;
+#ifdef ACPI_OS_MAP_MEMORY_FAST_PATH
+	struct acpi_mem_mapping *mapping;
+#endif
 
 	ACPI_FUNCTION_TRACE(ev_system_memory_region_setup);
 
@@ -46,13 +49,22 @@ acpi_ev_system_memory_region_setup(acpi_handle handle,
 			local_region_context =
 			    (struct acpi_mem_space_context *)*region_context;
 
-			/* Delete a cached mapping if present */
+			/* Delete memory mappings if present */
 
 			if (local_region_context->mapped_length) {
 				acpi_os_unmap_memory(local_region_context->
 						     mapped_logical_address,
 						     local_region_context->
 						     mapped_length);
+#ifdef ACPI_OS_MAP_MEMORY_FAST_PATH
+				while (local_region_context->first_mapping) {
+					mapping = local_region_context->first_mapping;
+					local_region_context->first_mapping = mapping->next;
+					acpi_os_unmap_memory(mapping->logical_address,
+							     mapping->length);
+					ACPI_FREE(mapping);
+				}
+#endif
 			}
 			ACPI_FREE(local_region_context);
 			*region_context = NULL;
diff --git a/drivers/acpi/acpica/exregion.c b/drivers/acpi/acpica/exregion.c
index d15a66de26c0..703868db9551 100644
--- a/drivers/acpi/acpica/exregion.c
+++ b/drivers/acpi/acpica/exregion.c
@@ -44,6 +44,9 @@ acpi_ex_system_memory_space_handler(u32 function,
 	u32 length;
 	acpi_size map_length;
 	acpi_size page_boundary_map_length;
+#ifdef ACPI_OS_MAP_MEMORY_FAST_PATH
+	struct acpi_mem_mapping *mapping;
+#endif
 #ifdef ACPI_MISALIGNMENT_NOT_SUPPORTED
 	u32 remainder;
 #endif
@@ -102,7 +105,7 @@ acpi_ex_system_memory_space_handler(u32 function,
 					 mem_info->mapped_length))) {
 		/*
 		 * The request cannot be resolved by the current memory mapping;
-		 * Delete the existing mapping and create a new one.
+		 * Delete the current cached mapping and get a new one.
 		 */
 		if (mem_info->mapped_length) {
 
@@ -112,6 +115,40 @@ acpi_ex_system_memory_space_handler(u32 function,
 					     mem_info->mapped_length);
 		}
 
+#ifdef ACPI_OS_MAP_MEMORY_FAST_PATH
+		/*
+		 * Look for an existing saved mapping matching the request at
+		 * hand.  If found, bump up its reference counter in the OS
+		 * layer, cache it and carry out the access.
+		 */
+		for (mapping = mem_info->first_mapping; mapping;
+		     mapping = mapping->next) {
+			if (address < mapping->physical_address)
+				continue;
+
+			if ((u64)address + length >
+					(u64)mapping->physical_address +
+					mapping->length)
+				continue;
+
+			/*
+			 * When called on a known-existing memory mapping,
+			 * ACPI_OS_MAP_MEMORY_FAST_PATH() must return the same
+			 * logical address as before or NULL.
+			 */
+			if (!ACPI_OS_MAP_MEMORY_FAST_PATH(mapping->physical_address,
+							  mapping->length))
+				continue;
+
+			mem_info->mapped_logical_address =
+						mapping->logical_address;
+			mem_info->mapped_physical_address =
+						mapping->physical_address;
+			mem_info->mapped_length = mapping->length;
+			goto access;
+		}
+#endif /* ACPI_OS_MAP_MEMORY_FAST_PATH */
+
 		/*
 		 * October 2009: Attempt to map from the requested address to the
 		 * end of the region. However, we will never map more than one
@@ -143,9 +180,8 @@ acpi_ex_system_memory_space_handler(u32 function,
 
 		/* Create a new mapping starting at the address given */
 
-		mem_info->mapped_logical_address =
-		    acpi_os_map_memory(address, map_length);
-		if (!mem_info->mapped_logical_address) {
+		logical_addr_ptr = acpi_os_map_memory(address, map_length);
+		if (!logical_addr_ptr) {
 			ACPI_ERROR((AE_INFO,
 				    "Could not map memory at 0x%8.8X%8.8X, size %u",
 				    ACPI_FORMAT_UINT64(address),
@@ -156,10 +192,56 @@ acpi_ex_system_memory_space_handler(u32 function,
 
 		/* Save the physical address and mapping size */
 
+		mem_info->mapped_logical_address = logical_addr_ptr;
 		mem_info->mapped_physical_address = address;
 		mem_info->mapped_length = map_length;
+
+#ifdef ACPI_OS_MAP_MEMORY_FAST_PATH
+		/*
+		 * Get memory to save the new mapping for removal at the
+		 * operation region deactivation time.
+		 */
+		mapping = ACPI_ALLOCATE_ZEROED(sizeof(*mapping));
+		if (!mapping) {
+			/*
+			 * No room to save the new mapping, but this is not
+			 * critical.  Just log the error and carry out the
+			 * access as requested.
+			 */
+			ACPI_ERROR((AE_INFO,
+				    "Not enough memory to save memory mapping at 0x%8.8X%8.8X, size %u",
+				    ACPI_FORMAT_UINT64(address),
+				    (u32)map_length));
+			goto access;
+		}
+		/*
+		 * Bump up the mapping's reference counter in the OS layer to
+		 * prevent it from getting dropped prematurely.
+		 */
+		if (!ACPI_OS_MAP_MEMORY_FAST_PATH(address, map_length)) {
+			/*
+			 * Something has gone wrong, but this is not critical.
+			 * Log the error, free the memory that won't be used and
+			 * carry out the access as requested.
+			 */
+			ACPI_ERROR((AE_INFO,
+				    "Unable to save memory mapping at 0x%8.8X%8.8X, size %u",
+				    ACPI_FORMAT_UINT64(address),
+				    (u32)map_length));
+			ACPI_FREE(mapping);
+			goto access;
+		}
+		mapping->physical_address = address;
+		mapping->logical_address = logical_addr_ptr;
+		mapping->length = map_length;
+		mapping->next = mem_info->first_mapping;
+		mem_info->first_mapping = mapping;
 	}
 
+access:
+#else /* !ACPI_OS_MAP_MEMORY_FAST_PATH */
+	}
+#endif /* !ACPI_OS_MAP_MEMORY_FAST_PATH */
 	/*
 	 * Generate a logical pointer corresponding to the address we want to
 	 * access
diff --git a/include/acpi/actypes.h b/include/acpi/actypes.h
index 4defed58ea33..64ab323b81b4 100644
--- a/include/acpi/actypes.h
+++ b/include/acpi/actypes.h
@@ -1200,12 +1200,20 @@ struct acpi_pci_id {
 	u16 function;
 };
 
+struct acpi_mem_mapping {
+	acpi_physical_address physical_address;
+	u8 *logical_address;
+	acpi_size length;
+	struct acpi_mem_mapping *next;
+};
+
 struct acpi_mem_space_context {
 	u32 length;
 	acpi_physical_address address;
 	acpi_physical_address mapped_physical_address;
 	u8 *mapped_logical_address;
 	acpi_size mapped_length;
+	struct acpi_mem_mapping *first_mapping;
 };
 
 /*
-- 
2.26.2


* [RFT][PATCH 2/3] ACPICA: Remove unused memory mappings on interpreter exit
  2020-06-10 12:17 ` [RFT][PATCH 0/3] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Rafael J. Wysocki
  2020-06-10 12:20   ` [RFT][PATCH 1/3] ACPICA: Defer unmapping of memory used in memory opregions Rafael J. Wysocki
@ 2020-06-10 12:21   ` Rafael J. Wysocki
  2020-06-12  0:12     ` Kaneda, Erik
  2020-06-10 12:22   ` [RFT][PATCH 3/3] ACPI: OSL: Define ACPI_OS_MAP_MEMORY_FAST_PATH() Rafael J. Wysocki
  2020-06-13 19:19   ` [RFT][PATCH 0/3] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Rafael J. Wysocki
  3 siblings, 1 reply; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-10 12:21 UTC (permalink / raw)
  To: Dan Williams
  Cc: Erik Kaneda, rafael.j.wysocki, Len Brown, Borislav Petkov,
	Ira Weiny, James Morse, Myron Stowe, Andy Shevchenko,
	linux-kernel, linux-acpi, linux-nvdimm, Bob Moore

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

For transient memory opregions that are created dynamically under
the namespace and interpreter mutexes and go away quickly, there
still is the problem that removing their memory mappings may take
significant time and so doing that while holding the mutexes should
be avoided.

For example, unmapping a chunk of memory associated with a memory
opregion in Linux involves running synchronize_rcu_expedited()
which really should not be done with the namespace mutex held.

To address that problem, notice that the unused memory mappings left
behind by the "dynamic" opregions that went away need not be unmapped
right away when the opregion is deactivated.  Instead, they may be
unmapped when exiting the interpreter, after the namespace and
interpreter mutexes have been dropped (there's one more place dealing
with opregions in the debug code that can be treated analogously).

Accordingly, change acpi_ev_system_memory_region_setup() to put
the unused mappings into a global list instead of unmapping them
right away and add acpi_ev_system_release_memory_mappings() to
be called when leaving the interpreter in order to unmap the
unused memory mappings in the global list (which is protected
by the namespace mutex).
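The deferral pattern used here can be sketched in userspace C (illustrative
names only, not the real ACPICA symbols): region teardown merely queues
mappings onto a list, which is cheap and safe under the namespace mutex,
and the queue is drained later, after the mutexes have been dropped, where
the expensive unmap (synchronize_rcu_expedited() in Linux) may run:

```c
#include <assert.h>
#include <stdlib.h>

struct deferred_mapping {
	struct deferred_mapping *next;
};

static struct deferred_mapping *unused_mappings;
static int expensive_unmaps;	/* counts actual unmap operations */

/* Called with the (modeled) namespace mutex held: no unmapping here,
 * just move the mapping onto the global unused list. */
static void defer_unmap(struct deferred_mapping *m)
{
	m->next = unused_mappings;
	unused_mappings = m;
}

/* Called after the mutexes are released: drain the queue and really unmap.
 * Returns the number of mappings released. */
static int release_unused_mappings(void)
{
	int n = 0;

	while (unused_mappings) {
		struct deferred_mapping *m = unused_mappings;

		unused_mappings = m->next;
		expensive_unmaps++;	/* stands in for acpi_os_unmap_memory() */
		free(m);
		n++;
	}
	return n;
}
```

The point of the split is that the slow operation never runs while the
namespace or interpreter mutex is held, only when the interpreter is exited.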

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/acpi/acpica/acevents.h |  2 ++
 drivers/acpi/acpica/dbtest.c   |  3 ++
 drivers/acpi/acpica/evrgnini.c | 51 ++++++++++++++++++++++++++++++++--
 drivers/acpi/acpica/exutils.c  |  3 ++
 drivers/acpi/acpica/utxface.c  | 23 +++++++++++++++
 include/acpi/acpixf.h          |  1 +
 6 files changed, 80 insertions(+), 3 deletions(-)

diff --git a/drivers/acpi/acpica/acevents.h b/drivers/acpi/acpica/acevents.h
index 79f292687bd6..463eb9124765 100644
--- a/drivers/acpi/acpica/acevents.h
+++ b/drivers/acpi/acpica/acevents.h
@@ -197,6 +197,8 @@ acpi_ev_execute_reg_method(union acpi_operand_object *region_obj, u32 function);
 /*
  * evregini - Region initialization and setup
  */
+void acpi_ev_system_release_memory_mappings(void);
+
 acpi_status
 acpi_ev_system_memory_region_setup(acpi_handle handle,
 				   u32 function,
diff --git a/drivers/acpi/acpica/dbtest.c b/drivers/acpi/acpica/dbtest.c
index 6db44a5ac786..7dac6dae5c48 100644
--- a/drivers/acpi/acpica/dbtest.c
+++ b/drivers/acpi/acpica/dbtest.c
@@ -8,6 +8,7 @@
 #include <acpi/acpi.h>
 #include "accommon.h"
 #include "acdebug.h"
+#include "acevents.h"
 #include "acnamesp.h"
 #include "acpredef.h"
 #include "acinterp.h"
@@ -768,6 +769,8 @@ acpi_db_test_field_unit_type(union acpi_operand_object *obj_desc)
 		acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
 		acpi_ut_release_mutex(ACPI_MTX_INTERPRETER);
 
+		acpi_ev_system_release_memory_mappings();
+
 		bit_length = obj_desc->common_field.bit_length;
 		byte_length = ACPI_ROUND_BITS_UP_TO_BYTES(bit_length);
 
diff --git a/drivers/acpi/acpica/evrgnini.c b/drivers/acpi/acpica/evrgnini.c
index 48a5e6eaf9b9..946c4eef054d 100644
--- a/drivers/acpi/acpica/evrgnini.c
+++ b/drivers/acpi/acpica/evrgnini.c
@@ -16,6 +16,52 @@
 #define _COMPONENT          ACPI_EVENTS
 ACPI_MODULE_NAME("evrgnini")
 
+#ifdef ACPI_OS_MAP_MEMORY_FAST_PATH
+static struct acpi_mem_mapping *unused_memory_mappings;
+
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_ev_system_release_memory_mappings
+ *
+ * PARAMETERS:  None
+ *
+ * RETURN:      None
+ *
+ * DESCRIPTION: Release all of the unused memory mappings in the queue
+ *              under the interpreter mutex.
+ *
+ ******************************************************************************/
+void acpi_ev_system_release_memory_mappings(void)
+{
+	struct acpi_mem_mapping *mapping;
+
+	ACPI_FUNCTION_TRACE(acpi_ev_system_release_memory_mappings);
+
+	acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
+
+	while (unused_memory_mappings) {
+		mapping = unused_memory_mappings;
+		unused_memory_mappings = mapping->next;
+
+		acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
+
+		acpi_os_unmap_memory(mapping->logical_address, mapping->length);
+		ACPI_FREE(mapping);
+
+		acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
+	}
+
+	acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
+
+	return_VOID;
+}
+#else /* !ACPI_OS_MAP_MEMORY_FAST_PATH */
+void acpi_ev_system_release_memory_mappings(void)
+{
+	return_VOID;
+}
+#endif /* !ACPI_OS_MAP_MEMORY_FAST_PATH */
+
 /*******************************************************************************
  *
  * FUNCTION:    acpi_ev_system_memory_region_setup
@@ -60,9 +106,8 @@ acpi_ev_system_memory_region_setup(acpi_handle handle,
 				while (local_region_context->first_mapping) {
 					mapping = local_region_context->first_mapping;
 					local_region_context->first_mapping = mapping->next;
-					acpi_os_unmap_memory(mapping->logical_address,
-							     mapping->length);
-					ACPI_FREE(mapping);
+					mapping->next = unused_memory_mappings;
+					unused_memory_mappings = mapping;
 				}
 #endif
 			}
diff --git a/drivers/acpi/acpica/exutils.c b/drivers/acpi/acpica/exutils.c
index 8fefa6feac2f..516d67664392 100644
--- a/drivers/acpi/acpica/exutils.c
+++ b/drivers/acpi/acpica/exutils.c
@@ -25,6 +25,7 @@
 
 #include <acpi/acpi.h>
 #include "accommon.h"
+#include "acevents.h"
 #include "acinterp.h"
 #include "amlcode.h"
 
@@ -106,6 +107,8 @@ void acpi_ex_exit_interpreter(void)
 			    "Could not release AML Interpreter mutex"));
 	}
 
+	acpi_ev_system_release_memory_mappings();
+
 	return_VOID;
 }
 
diff --git a/drivers/acpi/acpica/utxface.c b/drivers/acpi/acpica/utxface.c
index ca7c9f0144ef..d972696be846 100644
--- a/drivers/acpi/acpica/utxface.c
+++ b/drivers/acpi/acpica/utxface.c
@@ -11,6 +11,7 @@
 
 #include <acpi/acpi.h>
 #include "accommon.h"
+#include "acevents.h"
 #include "acdebug.h"
 
 #define _COMPONENT          ACPI_UTILITIES
@@ -244,6 +245,28 @@ acpi_status acpi_purge_cached_objects(void)
 
 ACPI_EXPORT_SYMBOL(acpi_purge_cached_objects)
 
+/*****************************************************************************
+ *
+ * FUNCTION:    acpi_release_unused_memory_mappings
+ *
+ * PARAMETERS:  None
+ *
+ * RETURN:      None
+ *
+ * DESCRIPTION: Remove memory mappings that are not used any more.
+ *
+ ****************************************************************************/
+void acpi_release_unused_memory_mappings(void)
+{
+	ACPI_FUNCTION_TRACE(acpi_release_unused_memory_mappings);
+
+	acpi_ev_system_release_memory_mappings();
+
+	return_VOID;
+}
+
+ACPI_EXPORT_SYMBOL(acpi_release_unused_memory_mappings)
+
 /*****************************************************************************
  *
  * FUNCTION:    acpi_install_interface
diff --git a/include/acpi/acpixf.h b/include/acpi/acpixf.h
index 1dc8d262035b..8d2cc02257ed 100644
--- a/include/acpi/acpixf.h
+++ b/include/acpi/acpixf.h
@@ -449,6 +449,7 @@ ACPI_EXTERNAL_RETURN_STATUS(acpi_status
 						    acpi_size length,
 						    struct acpi_pld_info
 						    **return_buffer))
+ACPI_EXTERNAL_RETURN_VOID(void acpi_release_unused_memory_mappings(void))
 
 /*
  * ACPI table load/unload interfaces
-- 
2.26.2


* [RFT][PATCH 3/3] ACPI: OSL: Define ACPI_OS_MAP_MEMORY_FAST_PATH()
  2020-06-10 12:17 ` [RFT][PATCH 0/3] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Rafael J. Wysocki
  2020-06-10 12:20   ` [RFT][PATCH 1/3] ACPICA: Defer unmapping of memory used in memory opregions Rafael J. Wysocki
  2020-06-10 12:21   ` [RFT][PATCH 2/3] ACPICA: Remove unused memory mappings on interpreter exit Rafael J. Wysocki
@ 2020-06-10 12:22   ` Rafael J. Wysocki
  2020-06-13 19:19   ` [RFT][PATCH 0/3] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Rafael J. Wysocki
  3 siblings, 0 replies; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-10 12:22 UTC (permalink / raw)
  To: Dan Williams
  Cc: Erik Kaneda, rafael.j.wysocki, Len Brown, Borislav Petkov,
	Ira Weiny, James Morse, Myron Stowe, Andy Shevchenko,
	linux-kernel, linux-acpi, linux-nvdimm, Bob Moore

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Define the ACPI_OS_MAP_MEMORY_FAST_PATH() macro to allow
acpi_ex_system_memory_space_handler() to avoid memory unmapping
overhead by deferring the unmap operations to the point when the
AML interpreter is exited after removing the operation region
that held the memory mappings which are not used any more.

That macro, when called on a known-existing memory mapping,
causes the reference counter of that mapping in the OS layer to be
incremented and returns a pointer representing the virtual address
of the start of the mapped memory area without really mapping it,
so the first subsequent unmap operation on it will only decrement
the reference counter.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/acpi/osl.c                | 67 +++++++++++++++++++++++--------
 include/acpi/platform/aclinuxex.h |  4 ++
 2 files changed, 55 insertions(+), 16 deletions(-)

diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
index 762c5d50b8fe..b75f3a17776f 100644
--- a/drivers/acpi/osl.c
+++ b/drivers/acpi/osl.c
@@ -302,21 +302,8 @@ static void acpi_unmap(acpi_physical_address pg_off, void __iomem *vaddr)
 		iounmap(vaddr);
 }
 
-/**
- * acpi_os_map_iomem - Get a virtual address for a given physical address range.
- * @phys: Start of the physical address range to map.
- * @size: Size of the physical address range to map.
- *
- * Look up the given physical address range in the list of existing ACPI memory
- * mappings.  If found, get a reference to it and return a pointer to it (its
- * virtual address).  If not found, map it, add it to that list and return a
- * pointer to it.
- *
- * During early init (when acpi_permanent_mmap has not been set yet) this
- * routine simply calls __acpi_map_table() to get the job done.
- */
-void __iomem __ref
-*acpi_os_map_iomem(acpi_physical_address phys, acpi_size size)
+static void __iomem __ref *__acpi_os_map_iomem(acpi_physical_address phys,
+					       acpi_size size, bool fast_path)
 {
 	struct acpi_ioremap *map;
 	void __iomem *virt;
@@ -328,8 +315,12 @@ void __iomem __ref
 		return NULL;
 	}
 
-	if (!acpi_permanent_mmap)
+	if (!acpi_permanent_mmap) {
+		if (WARN_ON(fast_path))
+			return NULL;
+
 		return __acpi_map_table((unsigned long)phys, size);
+	}
 
 	mutex_lock(&acpi_ioremap_lock);
 	/* Check if there's a suitable mapping already. */
@@ -339,6 +330,11 @@ void __iomem __ref
 		goto out;
 	}
 
+	if (fast_path) {
+		mutex_unlock(&acpi_ioremap_lock);
+		return NULL;
+	}
+
 	map = kzalloc(sizeof(*map), GFP_KERNEL);
 	if (!map) {
 		mutex_unlock(&acpi_ioremap_lock);
@@ -366,6 +362,25 @@ void __iomem __ref
 	mutex_unlock(&acpi_ioremap_lock);
 	return map->virt + (phys - map->phys);
 }
+
+/**
+ * acpi_os_map_iomem - Get a virtual address for a given physical address range.
+ * @phys: Start of the physical address range to map.
+ * @size: Size of the physical address range to map.
+ *
+ * Look up the given physical address range in the list of existing ACPI memory
+ * mappings.  If found, get a reference to it and return a pointer representing
+ * its virtual address.  If not found, map it, add it to that list and return a
+ * pointer representing its virtual address.
+ *
+ * During early init (when acpi_permanent_mmap has not been set yet) call
+ * __acpi_map_table() to obtain the mapping.
+ */
+void __iomem __ref *acpi_os_map_iomem(acpi_physical_address phys,
+				      acpi_size size)
+{
+	return __acpi_os_map_iomem(phys, size, false);
+}
 EXPORT_SYMBOL_GPL(acpi_os_map_iomem);
 
 void *__ref acpi_os_map_memory(acpi_physical_address phys, acpi_size size)
@@ -374,6 +389,24 @@ void *__ref acpi_os_map_memory(acpi_physical_address phys, acpi_size size)
 }
 EXPORT_SYMBOL_GPL(acpi_os_map_memory);
 
+/**
+ * acpi_os_map_memory_fast_path - Fast-path physical-to-virtual address mapping.
+ * @phys: Start of the physical address range to map.
+ * @size: Size of the physical address range to map.
+ *
+ * Look up the given physical address range in the list of existing ACPI memory
+ * mappings.  If found, get a reference to it and return a pointer representing
+ * its virtual address.  If not found, return NULL.
+ *
+ * During early init (when acpi_permanent_mmap has not been set yet) log a
+ * warning and return NULL.
+ */
+void __ref *acpi_os_map_memory_fast_path(acpi_physical_address phys,
+					acpi_size size)
+{
+	return __acpi_os_map_iomem(phys, size, true);
+}
+
 /* Must be called with mutex_lock(&acpi_ioremap_lock) */
 static unsigned long acpi_os_drop_map_ref(struct acpi_ioremap *map)
 {
@@ -1571,6 +1604,11 @@ acpi_status acpi_release_memory(acpi_handle handle, struct resource *res,
 
-	return acpi_walk_namespace(ACPI_TYPE_REGION, handle, level,
-				   acpi_deactivate_mem_region, NULL, res, NULL);
+	acpi_status status = acpi_walk_namespace(ACPI_TYPE_REGION, handle, level,
+						 acpi_deactivate_mem_region,
+						 NULL, res, NULL);
+
+	acpi_release_unused_memory_mappings();
+
+	return status;
 }
 EXPORT_SYMBOL_GPL(acpi_release_memory);
 
diff --git a/include/acpi/platform/aclinuxex.h b/include/acpi/platform/aclinuxex.h
index 04f88f2de781..1d8be4ac9ef9 100644
--- a/include/acpi/platform/aclinuxex.h
+++ b/include/acpi/platform/aclinuxex.h
@@ -139,6 +139,10 @@ static inline void acpi_os_terminate_debugger(void)
  * OSL interfaces added by Linux
  */
 
+void *acpi_os_map_memory_fast_path(acpi_physical_address where, acpi_size length);
+
+#define ACPI_OS_MAP_MEMORY_FAST_PATH(a, s)	acpi_os_map_memory_fast_path(a, s)
+
 #endif				/* __KERNEL__ */
 
 #endif				/* __ACLINUXEX_H__ */
-- 
2.26.2





^ permalink raw reply related	[flat|nested] 51+ messages in thread

* RE: [RFT][PATCH 2/3] ACPICA: Remove unused memory mappings on interpreter exit
  2020-06-10 12:21   ` [RFT][PATCH 2/3] ACPICA: Remove unused memory mappings on interpreter exit Rafael J. Wysocki
@ 2020-06-12  0:12     ` Kaneda, Erik
  2020-06-12 12:05       ` Rafael J. Wysocki
  0 siblings, 1 reply; 51+ messages in thread
From: Kaneda, Erik @ 2020-06-12  0:12 UTC (permalink / raw)
  To: Rafael J. Wysocki, Williams, Dan J
  Cc: Wysocki, Rafael J, Len Brown, Borislav Petkov, Weiny, Ira,
	James Morse, Myron Stowe, Andy Shevchenko, linux-kernel,
	linux-acpi, linux-nvdimm, Moore, Robert



> -----Original Message-----
> From: Rafael J. Wysocki <rjw@rjwysocki.net>
> Sent: Wednesday, June 10, 2020 5:22 AM
> To: Williams, Dan J <dan.j.williams@intel.com>
> Cc: Kaneda, Erik <erik.kaneda@intel.com>; Wysocki, Rafael J
> <rafael.j.wysocki@intel.com>; Len Brown <lenb@kernel.org>; Borislav
> Petkov <bp@alien8.de>; Weiny, Ira <ira.weiny@intel.com>; James Morse
> <james.morse@arm.com>; Myron Stowe <myron.stowe@redhat.com>;
> Andy Shevchenko <andriy.shevchenko@linux.intel.com>; linux-
> kernel@vger.kernel.org; linux-acpi@vger.kernel.org; linux-
> nvdimm@lists.01.org; Moore, Robert <robert.moore@intel.com>
> Subject: [RFT][PATCH 2/3] ACPICA: Remove unused memory mappings on
> interpreter exit
> 
> From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> 
> For transient memory opregions that are created dynamically under
> the namespace and interpreter mutexes and go away quickly, there
> still is the problem that removing their memory mappings may take
> significant time and so doing that while holding the mutexes should
> be avoided.
> 
> For example, unmapping a chunk of memory associated with a memory
> opregion in Linux involves running synchronize_rcu_expedited()
> which really should not be done with the namespace mutex held.
> 
> To address that problem, notice that the unused memory mappings left
> behind by the "dynamic" opregions that went away need not be unmapped
> right away when the opregion is deactivated.  Instead, they may be
> unmapped when exiting the interpreter, after the namespace and
> interpreter mutexes have been dropped (there's one more place dealing
> with opregions in the debug code that can be treated analogously).
> 
> Accordingly, change acpi_ev_system_memory_region_setup() to put
> the unused mappings into a global list instead of unmapping them
> right away and add acpi_ev_system_release_memory_mappings() to
> be called when leaving the interpreter in order to unmap the
> unused memory mappings in the global list (which is protected
> by the namespace mutex).
> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  drivers/acpi/acpica/acevents.h |  2 ++
>  drivers/acpi/acpica/dbtest.c   |  3 ++
>  drivers/acpi/acpica/evrgnini.c | 51 ++++++++++++++++++++++++++++++++--
>  drivers/acpi/acpica/exutils.c  |  3 ++
>  drivers/acpi/acpica/utxface.c  | 23 +++++++++++++++
>  include/acpi/acpixf.h          |  1 +
>  6 files changed, 80 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/acpi/acpica/acevents.h b/drivers/acpi/acpica/acevents.h
> index 79f292687bd6..463eb9124765 100644
> --- a/drivers/acpi/acpica/acevents.h
> +++ b/drivers/acpi/acpica/acevents.h
> @@ -197,6 +197,8 @@ acpi_ev_execute_reg_method(union
> acpi_operand_object *region_obj, u32 function);
>  /*
>   * evregini - Region initialization and setup
>   */
> +void acpi_ev_system_release_memory_mappings(void);
> +
>  acpi_status
>  acpi_ev_system_memory_region_setup(acpi_handle handle,
>  				   u32 function,
> diff --git a/drivers/acpi/acpica/dbtest.c b/drivers/acpi/acpica/dbtest.c
> index 6db44a5ac786..7dac6dae5c48 100644
> --- a/drivers/acpi/acpica/dbtest.c
> +++ b/drivers/acpi/acpica/dbtest.c
> @@ -8,6 +8,7 @@
>  #include <acpi/acpi.h>
>  #include "accommon.h"
>  #include "acdebug.h"
> +#include "acevents.h"
>  #include "acnamesp.h"
>  #include "acpredef.h"
>  #include "acinterp.h"
> @@ -768,6 +769,8 @@ acpi_db_test_field_unit_type(union
> acpi_operand_object *obj_desc)
>  		acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
>  		acpi_ut_release_mutex(ACPI_MTX_INTERPRETER);
> 
> +		acpi_ev_system_release_memory_mappings();
> +
>  		bit_length = obj_desc->common_field.bit_length;
>  		byte_length =
> ACPI_ROUND_BITS_UP_TO_BYTES(bit_length);
> 
> diff --git a/drivers/acpi/acpica/evrgnini.c b/drivers/acpi/acpica/evrgnini.c
> index 48a5e6eaf9b9..946c4eef054d 100644
> --- a/drivers/acpi/acpica/evrgnini.c
> +++ b/drivers/acpi/acpica/evrgnini.c
> @@ -16,6 +16,52 @@
>  #define _COMPONENT          ACPI_EVENTS
>  ACPI_MODULE_NAME("evrgnini")
> 
> +#ifdef ACPI_OS_MAP_MEMORY_FAST_PATH
> +static struct acpi_mem_mapping *unused_memory_mappings;
> +
> +/*******************************************************************************
> + *
> + * FUNCTION:    acpi_ev_system_release_memory_mappings
> + *
> + * PARAMETERS:  None
> + *
> + * RETURN:      None
> + *
> + * DESCRIPTION: Release all of the unused memory mappings in the queue
> + *              under the interpreter mutex.
> + *
> + ******************************************************************************/
> +void acpi_ev_system_release_memory_mappings(void)
> +{
> +	struct acpi_mem_mapping *mapping;
> +
> +	ACPI_FUNCTION_TRACE(acpi_ev_system_release_memory_mappings);
> +
> +	acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
> +
> +	while (unused_memory_mappings) {
> +		mapping = unused_memory_mappings;
> +		unused_memory_mappings = mapping->next;
> +
> +		acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
> +
> +		acpi_os_unmap_memory(mapping->logical_address, mapping->length);

acpi_os_unmap_memory calls synchronize_rcu_expedited(). I'm no RCU expert but the 
definition of this function states:

 * Although this is a great improvement over previous expedited
 * implementations, it is still unfriendly to real-time workloads, so is
 * thus not recommended for any sort of common-case code.  In fact, if
 * you are using synchronize_rcu_expedited() in a loop, please restructure
 * your code to batch your updates, and then use a single synchronize_rcu()
 * instead.
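The restructuring that comment suggests (queue the updates, then pay a single grace-period wait for the whole batch) can be sketched in user space. This is only an illustration of the control-flow shape: all names here are hypothetical, and a plain counter stands in for the kernel-only `synchronize_rcu()`/`synchronize_rcu_expedited()` calls.

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal stand-in for a queued ACPI memory mapping. */
struct mapping {
	struct mapping *next;
};

static int grace_period_waits;

/* Stub for synchronize_rcu()/synchronize_rcu_expedited(): count waits. */
static void fake_synchronize_rcu(void)
{
	grace_period_waits++;
}

/* Naive shape: one grace-period wait per mapping released. */
static void release_one_by_one(struct mapping *list)
{
	while (list) {
		struct mapping *m = list;

		list = m->next;
		fake_synchronize_rcu();
		free(m);
	}
}

/* Batched shape: a single wait covers the whole list. */
static void release_batched(struct mapping *list)
{
	fake_synchronize_rcu();
	while (list) {
		struct mapping *m = list;

		list = m->next;
		free(m);
	}
}

/* Helper to build a test list of n entries. */
static struct mapping *build_list(int n)
{
	struct mapping *l = NULL;

	while (n-- > 0) {
		struct mapping *m = malloc(sizeof(*m));

		m->next = l;
		l = m;
	}
	return l;
}
```

The deferred-unmapping patches in this thread effectively move the OS layer from the first shape toward the second.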


> +		ACPI_FREE(mapping);
> +
> +		acpi_ut_acquire_mutex(ACPI_MTX_NAMESPACE);
> +	}
> +
> +	acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
> +
> +	return_VOID;
> +}
> +#else /* !ACPI_OS_MAP_MEMORY_FAST_PATH */
> +void acpi_ev_system_release_memory_mappings(void)
> +{
> +	return_VOID;
> +}
> +#endif /* !ACPI_OS_MAP_MEMORY_FAST_PATH */
> +
> 
> /*******************************************************************************
>   *
>   * FUNCTION:    acpi_ev_system_memory_region_setup
> @@ -60,9 +106,8 @@ acpi_ev_system_memory_region_setup(acpi_handle handle,
>  				while (local_region_context->first_mapping) {
>  					mapping = local_region_context->first_mapping;
>  					local_region_context->first_mapping = mapping->next;
> -					acpi_os_unmap_memory(mapping->logical_address,
> -							     mapping->length);
> -					ACPI_FREE(mapping);
> +					mapping->next = unused_memory_mappings;
> +					unused_memory_mappings = mapping;
>  				}
>  #endif
>  			}
> diff --git a/drivers/acpi/acpica/exutils.c b/drivers/acpi/acpica/exutils.c
> index 8fefa6feac2f..516d67664392 100644
> --- a/drivers/acpi/acpica/exutils.c
> +++ b/drivers/acpi/acpica/exutils.c
> @@ -25,6 +25,7 @@
> 
>  #include <acpi/acpi.h>
>  #include "accommon.h"
> +#include "acevents.h"
>  #include "acinterp.h"
>  #include "amlcode.h"
> 
> @@ -106,6 +107,8 @@ void acpi_ex_exit_interpreter(void)
>  			    "Could not release AML Interpreter mutex"));
>  	}
> 
> +	acpi_ev_system_release_memory_mappings();
> +
>  	return_VOID;
>  }
> 
> diff --git a/drivers/acpi/acpica/utxface.c b/drivers/acpi/acpica/utxface.c
> index ca7c9f0144ef..d972696be846 100644
> --- a/drivers/acpi/acpica/utxface.c
> +++ b/drivers/acpi/acpica/utxface.c
> @@ -11,6 +11,7 @@
> 
>  #include <acpi/acpi.h>
>  #include "accommon.h"
> +#include "acevents.h"
>  #include "acdebug.h"
> 
>  #define _COMPONENT          ACPI_UTILITIES
> @@ -244,6 +245,28 @@ acpi_status acpi_purge_cached_objects(void)
> 
>  ACPI_EXPORT_SYMBOL(acpi_purge_cached_objects)
> 
> +/*******************************************************************************
> + *
> + * FUNCTION:    acpi_release_unused_memory_mappings
> + *
> + * PARAMETERS:  None
> + *
> + * RETURN:      None
> + *
> + * DESCRIPTION: Remove memory mappings that are not used any more.
> + *
> + ******************************************************************************/
> +void acpi_release_unused_memory_mappings(void)
> +{
> +	ACPI_FUNCTION_TRACE(acpi_release_unused_memory_mappings);
> +
> +	acpi_ev_system_release_memory_mappings();
> +
> +	return_VOID;
> +}
> +
> +ACPI_EXPORT_SYMBOL(acpi_release_unused_memory_mappings)
> +
> 
> /*******************************************************************************
>   *
>   * FUNCTION:    acpi_install_interface
> diff --git a/include/acpi/acpixf.h b/include/acpi/acpixf.h
> index 1dc8d262035b..8d2cc02257ed 100644
> --- a/include/acpi/acpixf.h
> +++ b/include/acpi/acpixf.h
> @@ -449,6 +449,7 @@ ACPI_EXTERNAL_RETURN_STATUS(acpi_status
>  						    acpi_size length,
>  						    struct acpi_pld_info
>  						    **return_buffer))
> +ACPI_EXTERNAL_RETURN_VOID(void acpi_release_unused_memory_mappings(void))
> 
>  /*
>   * ACPI table load/unload interfaces
> --
> 2.26.2
> 
> 
> 


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [RFT][PATCH 2/3] ACPICA: Remove unused memory mappings on interpreter exit
  2020-06-12  0:12     ` Kaneda, Erik
@ 2020-06-12 12:05       ` Rafael J. Wysocki
  2020-06-13 19:28         ` Rafael J. Wysocki
  0 siblings, 1 reply; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-12 12:05 UTC (permalink / raw)
  To: Kaneda, Erik
  Cc: Rafael J. Wysocki, Williams, Dan J, Wysocki, Rafael J, Len Brown,
	Borislav Petkov, Weiny, Ira, James Morse, Myron Stowe,
	Andy Shevchenko, linux-kernel, linux-acpi, linux-nvdimm, Moore,
	Robert

On Fri, Jun 12, 2020 at 2:12 AM Kaneda, Erik <erik.kaneda@intel.com> wrote:
>
>
>
> > [...]
> > +     while (unused_memory_mappings) {
> > +             mapping = unused_memory_mappings;
> > +             unused_memory_mappings = mapping->next;
> > +
> > +             acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
> > +
> > +             acpi_os_unmap_memory(mapping->logical_address, mapping->length);
>
> acpi_os_unmap_memory calls synchronize_rcu_expedited(). I'm no RCU expert but the
> definition of this function states:
>
> * Although this is a great improvement over previous expedited
>  * implementations, it is still unfriendly to real-time workloads, so is
>  * thus not recommended for any sort of common-case code.  In fact, if
>  * you are using synchronize_rcu_expedited() in a loop, please restructure
>  * your code to batch your updates, and then use a single synchronize_rcu()
>  * instead.

If this really ends up being a loop, the code without this patch will
also call synchronize_rcu_expedited() in a loop, but indirectly and
under the namespace and interpreter mutexes.

While I agree that this is still somewhat suboptimal, improving this
would require more changes in the OSL code.

Cheers!

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [RFT][PATCH 0/3] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter
  2020-06-10 12:17 ` [RFT][PATCH 0/3] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Rafael J. Wysocki
                     ` (2 preceding siblings ...)
  2020-06-10 12:22   ` [RFT][PATCH 3/3] ACPI: OSL: Define ACPI_OS_MAP_MEMORY_FAST_PATH() Rafael J. Wysocki
@ 2020-06-13 19:19   ` Rafael J. Wysocki
  3 siblings, 0 replies; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-13 19:19 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Dan Williams, Erik Kaneda, rafael.j.wysocki, Len Brown,
	Borislav Petkov, Ira Weiny, James Morse, Myron Stowe,
	Andy Shevchenko, linux-kernel, linux-acpi, linux-nvdimm,
	Bob Moore

On Wednesday, June 10, 2020 2:17:04 PM CEST Rafael J. Wysocki wrote:
> Hi All,
> 
> This series is to address the problem with RCU synchronization occurring,
> possibly relatively often, inside of acpi_ex_system_memory_space_handler(),
> when the namespace and interpreter mutexes are held.
> 
> The basic idea is to avoid the actual unmapping of memory in
> acpi_ex_system_memory_space_handler() by making it take the advantage of the
> reference counting of memory mappings utilized by the OSL layer in Linux.
> 
> The basic assumption in patch [1/3] is that if the special
> ACPI_OS_MAP_MEMORY_FAST_PATH() macro is present, it can be used to increment
> the reference counter of a known-existing memory mapping in the OS layer
> which then is dropped by the subsequent acpi_os_unmap_memory() without
> unmapping the address range at hand.  That can be utilized by
> acpi_ex_system_memory_space_handler() to prevent the reference counters of
> all mappings used by it from dropping down to 0 (which also prevents the
> address ranges associated with them from being unmapped) so that they can
> be unmapped later (specifically, at the operation region deactivation time).
> 
> Patch [2/3] defers the unmapping even further, until the namespace and
> interpreter mutexes are released, to avoid invoking the RCU synchronization
> under theses mutexes.
> 
> Finally, patch [3/3] changes the OS layer in Linux to provide the
> ACPI_OS_MAP_MEMORY_FAST_PATH() macro.
> 
> Note that if this macro is not defined, the code works the way it used to.
> 
> The series is available from the git branch at
> 
>  git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git \
>  acpica-osl
> 
> for easier testing.

Please disregard this patch series, it will be replaced by a new one which
already is there in the acpica-osl branch above.

Thanks!




^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [RFT][PATCH 2/3] ACPICA: Remove unused memory mappings on interpreter exit
  2020-06-12 12:05       ` Rafael J. Wysocki
@ 2020-06-13 19:28         ` Rafael J. Wysocki
  2020-06-15 19:06           ` Dan Williams
  0 siblings, 1 reply; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-13 19:28 UTC (permalink / raw)
  To: Kaneda, Erik
  Cc: Rafael J. Wysocki, Williams, Dan J, Wysocki, Rafael J, Len Brown,
	Borislav Petkov, Weiny, Ira, James Morse, Myron Stowe,
	Andy Shevchenko, linux-kernel, linux-acpi, linux-nvdimm, Moore,
	Robert

On Friday, June 12, 2020 2:05:01 PM CEST Rafael J. Wysocki wrote:
> On Fri, Jun 12, 2020 at 2:12 AM Kaneda, Erik <erik.kaneda@intel.com> wrote:
> >
> >
> >
> > > [...]
> > > +     while (unused_memory_mappings) {
> > > +             mapping = unused_memory_mappings;
> > > +             unused_memory_mappings = mapping->next;
> > > +
> > > +             acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
> > > +
> > > +             acpi_os_unmap_memory(mapping->logical_address, mapping->length);
> >
> > acpi_os_unmap_memory calls synchronize_rcu_expedited(). I'm no RCU expert but the
> > definition of this function states:
> >
> > * Although this is a great improvement over previous expedited
> >  * implementations, it is still unfriendly to real-time workloads, so is
> >  * thus not recommended for any sort of common-case code.  In fact, if
> >  * you are using synchronize_rcu_expedited() in a loop, please restructure
> >  * your code to batch your updates, and then use a single synchronize_rcu()
> >  * instead.
> 
> If this really ends up being a loop, the code without this patch will
> also call synchronize_rcu_expedited() in a loop, but indirectly and
> under the namespace and interpreter mutexes.
> 
> While I agree that this is still somewhat suboptimal, improving this
> would require more changes in the OSL code.

After writing the above I started to think about the extra changes needed
to improve that and I realized that it would take making the OS layer
support deferred memory unmapping, such that the unused mappings would be
queued up for later removal and then released in one go at a suitable time.

However, that would be sufficient to address the issue addressed by this
series, because the deferred unmapping could be used in
acpi_ev_system_memory_region_setup() right away and that would be a much
simpler change than the one made in patch [1/3].

So I went ahead and implemented this and the result is there in the
acpica-osl branch in my tree, but it hasn't been built yet, so caveat
emptor.  Anyway, please feel free to have a look at it still.
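In rough outline, the deferred-unmapping idea described above can be modeled in user space as a queue that is drained in one go outside the locked region. This is only a sketch of the scheme, not the actual OSL code: the names are made up, and `malloc()`/`free()` stand in for `ioremap()`/`iounmap()`.

```c
#include <assert.h>
#include <stdlib.h>

/* One queued-up mapping awaiting release. */
struct deferred_map {
	struct deferred_map *next;
	void *virt;
};

static struct deferred_map *deferred_head;

/*
 * Called where an immediate acpi_os_unmap_memory() would otherwise run:
 * just queue the mapping instead of paying an RCU grace period here.
 */
static void defer_unmap(void *virt)
{
	struct deferred_map *d = malloc(sizeof(*d));

	if (!d)
		return;	/* a real implementation must not leak here */
	d->virt = virt;
	d->next = deferred_head;
	deferred_head = d;
}

/*
 * Called once, outside the namespace/interpreter mutexes: a single
 * grace-period wait would go here, covering the whole batch, followed
 * by the actual unmapping.  Returns the number of mappings released.
 */
static int flush_deferred_unmaps(void)
{
	int released = 0;

	while (deferred_head) {
		struct deferred_map *d = deferred_head;

		deferred_head = d->next;
		free(d->virt);	/* stand-in for iounmap() */
		free(d);
		released++;
	}
	return released;
}
```

The attraction of this shape is that the expensive synchronization is paid once per flush rather than once per opregion deactivation.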

Cheers!




^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [RFT][PATCH 2/3] ACPICA: Remove unused memory mappings on interpreter exit
  2020-06-13 19:28         ` Rafael J. Wysocki
@ 2020-06-15 19:06           ` Dan Williams
  0 siblings, 0 replies; 51+ messages in thread
From: Dan Williams @ 2020-06-15 19:06 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Kaneda, Erik, Rafael J. Wysocki, Wysocki, Rafael J, Len Brown,
	Borislav Petkov, Weiny, Ira, James Morse, Myron Stowe,
	Andy Shevchenko, linux-kernel, linux-acpi, linux-nvdimm, Moore,
	Robert

On Sat, Jun 13, 2020 at 12:29 PM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
[...]
> > While I agree that this is still somewhat suboptimal, improving this
> > would require more changes in the OSL code.
>
> After writing the above I started to think about the extra changes needed
> to improve that and I realized that it would take making the OS layer
> support deferred memory unmapping, such that the unused mappings would be
> queued up for later removal and then released in one go at a suitable time.
>
> However, that would be sufficient to address the issue addressed by this
> series, because the deferred unmapping could be used in
> acpi_ev_system_memory_region_setup() right away and that would be a much
> simpler change than the one made in patch [1/3].
>
> So I went ahead and implemented this and the result is there in the
> acpica-osl branch in my tree, but it hasn't been built yet, so caveat
> emptor.  Anyway, please feel free to have a look at it still.

I'll have a look. However, I was just about to build a test kernel for
the original reporter of this problem with this patch set. Do you want
test feedback on that branch, or this set as is?

^ permalink raw reply	[flat|nested] 51+ messages in thread

* [RFT][PATCH v2 0/4] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter
  2020-05-07 23:39 [PATCH v2] ACPI: Drop rcu usage for MMIO mappings Dan Williams
                   ` (2 preceding siblings ...)
  2020-06-10 12:17 ` [RFT][PATCH 0/3] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Rafael J. Wysocki
@ 2020-06-22 13:50 ` Rafael J. Wysocki
  2020-06-22 13:52   ` [RFT][PATCH v2 1/4] ACPICA: Defer unmapping of opregion memory if supported by OS Rafael J. Wysocki
                     ` (4 more replies)
  3 siblings, 5 replies; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-22 13:50 UTC (permalink / raw)
  To: Dan Williams, Erik Kaneda
  Cc: rafael.j.wysocki, Len Brown, Borislav Petkov, Ira Weiny,
	James Morse, Myron Stowe, Andy Shevchenko, linux-kernel,
	linux-acpi, linux-nvdimm, Bob Moore

Hi All,

This series is to address the problem with RCU synchronization occurring,
possibly relatively often, inside of acpi_ex_system_memory_space_handler(),
when the namespace and interpreter mutexes are held.

Like I said before, I had decided to change the approach used in the previous
iteration of this series and to allow the unmap operations carried out by 
acpi_ex_system_memory_space_handler() to be deferred in the first place,
which is done in patches [1-2/4].

However, it turns out that the "fast-path" mapping is still useful on top of
the above to reduce the number of ioremap-iounmap cycles for the same address
range and so it is introduced by patches [3-4/4].

For details, please refer to the patch changelogs.
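The fast-path mapping introduced by patches [3-4/4] amounts to a lookup that may only succeed for an already-tracked address range, bumping its reference count instead of calling ioremap() again. A user-space sketch of that idea follows; every name is hypothetical, and `malloc()` stands in for `ioremap()`:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Simplified model of a tracked ACPI memory mapping. */
struct ioremap_entry {
	struct ioremap_entry *next;
	unsigned long phys;
	size_t size;
	void *virt;
	unsigned long refcount;
};

static struct ioremap_entry *map_list;

/* Find a tracked mapping that fully covers [phys, phys + size). */
static struct ioremap_entry *find_mapping(unsigned long phys, size_t size)
{
	struct ioremap_entry *e;

	for (e = map_list; e; e = e->next)
		if (phys >= e->phys && phys + size <= e->phys + e->size)
			return e;
	return NULL;
}

/*
 * Fast path: bump the refcount of an existing mapping, or fail.  It
 * never creates a new mapping, so it also never needs to unmap one,
 * which keeps the expensive unmap synchronization out of the hot path.
 */
static void *map_fast_path(unsigned long phys, size_t size)
{
	struct ioremap_entry *e = find_mapping(phys, size);

	if (!e)
		return NULL;
	e->refcount++;
	return (char *)e->virt + (phys - e->phys);
}

/* Full path: reuse an existing mapping or create a new tracked one. */
static void *map_full_path(unsigned long phys, size_t size)
{
	void *v = map_fast_path(phys, size);
	struct ioremap_entry *e;

	if (v)
		return v;
	e = calloc(1, sizeof(*e));
	if (!e)
		return NULL;
	e->phys = phys;
	e->size = size;
	e->virt = malloc(size);	/* stand-in for ioremap() */
	e->refcount = 1;
	e->next = map_list;
	map_list = e;
	return e->virt;
}
```

With this split, the opregion handler can keep a range alive across accesses via the fast path and leave the eventual teardown to the deferred-unmap machinery.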

The series is available from the git branch at

 git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git \
 acpica-osl

for easier testing.

Cheers,
Rafael

^ permalink raw reply	[flat|nested] 51+ messages in thread

* [RFT][PATCH v2 1/4] ACPICA: Defer unmapping of opregion memory if supported by OS
  2020-06-22 13:50 ` [RFT][PATCH v2 0/4] " Rafael J. Wysocki
@ 2020-06-22 13:52   ` Rafael J. Wysocki
  2020-06-22 13:53   ` [RFT][PATCH v2 2/4] ACPI: OSL: Add support for deferred unmapping of ACPI memory Rafael J. Wysocki
                     ` (3 subsequent siblings)
  4 siblings, 0 replies; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-22 13:52 UTC (permalink / raw)
  To: Dan Williams, Erik Kaneda
  Cc: rafael.j.wysocki, Len Brown, Borislav Petkov, Ira Weiny,
	James Morse, Myron Stowe, Andy Shevchenko, linux-kernel,
	linux-acpi, linux-nvdimm, Bob Moore

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

The ACPI OS layer in Linux uses RCU to protect the walkers of the
list of ACPI memory mappings from seeing an inconsistent state
while it is being updated.  Among other situations, that list can
be walked in (NMI and non-NMI) interrupt context, so using a
sleeping lock to protect it is not an option.

However, performance issues related to the RCU usage in there
appear, as described by Dan Williams:

"Recently a performance problem was reported for a process invoking
a non-trivial ASL program. The method call in this case ends up
repetitively triggering a call path like:

    acpi_ex_store
    acpi_ex_store_object_to_node
    acpi_ex_write_data_to_field
    acpi_ex_insert_into_field
    acpi_ex_write_with_update_rule
    acpi_ex_field_datum_io
    acpi_ex_access_region
    acpi_ev_address_space_dispatch
    acpi_ex_system_memory_space_handler
    acpi_os_map_cleanup.part.14
    _synchronize_rcu_expedited.constprop.89
    schedule

The end result of frequent synchronize_rcu_expedited() invocation is
tiny sub-millisecond spurts of execution where the scheduler freely
migrates this apparently sleepy task. The overhead of frequent
scheduler invocation multiplies the execution time by a factor
of 2-3X."

The source of this overhead is that acpi_ex_system_memory_space_handler()
unmaps the memory mapping currently cached by it at access time
whenever that mapping does not cover the memory area being accessed.
Consequently, if a memory opregion contains two fields separated by an
unused chunk of address space large enough that a single mapping cannot
cover both, and those fields happen to be accessed in an alternating
pattern, an unmap will occur on every
acpi_ex_system_memory_space_handler() invocation for that memory
opregion, which leads to significant overhead.

However, if the OS supports deferred unmapping of ACPI memory,
such that the unused mappings will not be unmapped immediately,
but collected for unmapping when directly requested later,
acpi_ex_system_memory_space_handler() can be optimized to avoid
the above issue.

Namely, if ACPI_USE_DEFERRED_UNMAPPING is set for the given OS,
it is expected to provide acpi_os_unmap_deferred(), for dropping
references to memory mappings and queuing up the unused ones for
later unmapping, and acpi_os_release_unused_mappings(), for the
eventual unmapping of the unused mappings queued up earlier.

Accordingly, if ACPI_USE_DEFERRED_UNMAPPING is set,
acpi_ex_system_memory_space_handler() can use
acpi_os_unmap_deferred() to unmap memory ranges mapped by it,
so they are not unmapped right away, which addresses the issue
described above, and the unused mappings queued up by it for
removal can be unmapped later via acpi_os_release_unused_mappings().

Implement the ACPICA side of the described mechanism so as to
avoid the RCU-related performance issues with memory opregions.

Reported-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/acpi/acpica/acinterp.h |  2 ++
 drivers/acpi/acpica/dbtest.c   |  2 ++
 drivers/acpi/acpica/evrgnini.c |  5 +----
 drivers/acpi/acpica/exregion.c | 29 +++++++++++++++++++++++++++--
 drivers/acpi/acpica/exutils.c  |  2 ++
 drivers/acpi/acpica/utxface.c  | 24 ++++++++++++++++++++++++
 include/acpi/acpixf.h          |  1 +
 7 files changed, 59 insertions(+), 6 deletions(-)

diff --git a/drivers/acpi/acpica/acinterp.h b/drivers/acpi/acpica/acinterp.h
index a6d896cda2a5..1f1026fb06e9 100644
--- a/drivers/acpi/acpica/acinterp.h
+++ b/drivers/acpi/acpica/acinterp.h
@@ -479,6 +479,8 @@ void acpi_ex_pci_cls_to_string(char *dest, u8 class_code[3]);
 
 u8 acpi_is_valid_space_id(u8 space_id);
 
+void acpi_ex_unmap_region_memory(struct acpi_mem_space_context *mem_info);
+
 /*
  * exregion - default op_region handlers
  */
diff --git a/drivers/acpi/acpica/dbtest.c b/drivers/acpi/acpica/dbtest.c
index 6db44a5ac786..a3d119bb2857 100644
--- a/drivers/acpi/acpica/dbtest.c
+++ b/drivers/acpi/acpica/dbtest.c
@@ -768,6 +768,8 @@ acpi_db_test_field_unit_type(union acpi_operand_object *obj_desc)
 		acpi_ut_release_mutex(ACPI_MTX_NAMESPACE);
 		acpi_ut_release_mutex(ACPI_MTX_INTERPRETER);
 
+		acpi_release_unused_memory_mappings();
+
 		bit_length = obj_desc->common_field.bit_length;
 		byte_length = ACPI_ROUND_BITS_UP_TO_BYTES(bit_length);
 
diff --git a/drivers/acpi/acpica/evrgnini.c b/drivers/acpi/acpica/evrgnini.c
index aefc0145e583..9f33114a74ca 100644
--- a/drivers/acpi/acpica/evrgnini.c
+++ b/drivers/acpi/acpica/evrgnini.c
@@ -49,10 +49,7 @@ acpi_ev_system_memory_region_setup(acpi_handle handle,
 			/* Delete a cached mapping if present */
 
 			if (local_region_context->mapped_length) {
-				acpi_os_unmap_memory(local_region_context->
-						     mapped_logical_address,
-						     local_region_context->
-						     mapped_length);
+				acpi_ex_unmap_region_memory(local_region_context);
 			}
 			ACPI_FREE(local_region_context);
 			*region_context = NULL;
diff --git a/drivers/acpi/acpica/exregion.c b/drivers/acpi/acpica/exregion.c
index d15a66de26c0..af777b7fccb0 100644
--- a/drivers/acpi/acpica/exregion.c
+++ b/drivers/acpi/acpica/exregion.c
@@ -14,6 +14,32 @@
 #define _COMPONENT          ACPI_EXECUTER
 ACPI_MODULE_NAME("exregion")
 
+/*****************************************************************************
+ *
+ * FUNCTION:    acpi_ex_unmap_region_memory
+ *
+ * PARAMETERS:  mem_info            - Region specific context
+ *
+ * RETURN:      None
+ *
+ * DESCRIPTION: Unmap memory associated with a memory operation region.
+ *
+ ****************************************************************************/
+void acpi_ex_unmap_region_memory(struct acpi_mem_space_context *mem_info)
+{
+	ACPI_FUNCTION_TRACE(acpi_ex_unmap_region_memory);
+
+#ifdef ACPI_USE_DEFERRED_UNMAPPING
+	acpi_os_unmap_deferred(mem_info->mapped_logical_address,
+			       mem_info->mapped_length);
+#else
+	acpi_os_unmap_memory(mem_info->mapped_logical_address,
+			     mem_info->mapped_length);
+#endif
+
+	return_VOID;
+}
+
 /*******************************************************************************
  *
  * FUNCTION:    acpi_ex_system_memory_space_handler
@@ -108,8 +134,7 @@ acpi_ex_system_memory_space_handler(u32 function,
 
 			/* Valid mapping, delete it */
 
-			acpi_os_unmap_memory(mem_info->mapped_logical_address,
-					     mem_info->mapped_length);
+			acpi_ex_unmap_region_memory(mem_info);
 		}
 
 		/*
diff --git a/drivers/acpi/acpica/exutils.c b/drivers/acpi/acpica/exutils.c
index 8fefa6feac2f..9597baf33eb4 100644
--- a/drivers/acpi/acpica/exutils.c
+++ b/drivers/acpi/acpica/exutils.c
@@ -106,6 +106,8 @@ void acpi_ex_exit_interpreter(void)
 			    "Could not release AML Interpreter mutex"));
 	}
 
+	acpi_release_unused_memory_mappings();
+
 	return_VOID;
 }
 
diff --git a/drivers/acpi/acpica/utxface.c b/drivers/acpi/acpica/utxface.c
index ca7c9f0144ef..a70ac19a207b 100644
--- a/drivers/acpi/acpica/utxface.c
+++ b/drivers/acpi/acpica/utxface.c
@@ -244,6 +244,30 @@ acpi_status acpi_purge_cached_objects(void)
 
 ACPI_EXPORT_SYMBOL(acpi_purge_cached_objects)
 
+/*****************************************************************************
+ *
+ * FUNCTION:    acpi_release_unused_memory_mappings
+ *
+ * PARAMETERS:  None
+ *
+ * RETURN:      None
+ *
+ * DESCRIPTION: Remove memory mappings that are not used any more.
+ *
+ ****************************************************************************/
+void acpi_release_unused_memory_mappings(void)
+{
+	ACPI_FUNCTION_TRACE(acpi_release_unused_memory_mappings);
+
+#ifdef ACPI_USE_DEFERRED_UNMAPPING
+	acpi_os_release_unused_mappings();
+#endif
+
+	return_VOID;
+}
+
+ACPI_EXPORT_SYMBOL(acpi_release_unused_memory_mappings)
+
 /*****************************************************************************
  *
  * FUNCTION:    acpi_install_interface
diff --git a/include/acpi/acpixf.h b/include/acpi/acpixf.h
index 459d6981ca96..068ed92f5e28 100644
--- a/include/acpi/acpixf.h
+++ b/include/acpi/acpixf.h
@@ -449,6 +449,7 @@ ACPI_EXTERNAL_RETURN_STATUS(acpi_status
 						    acpi_size length,
 						    struct acpi_pld_info
 						    **return_buffer))
+ACPI_EXTERNAL_RETURN_VOID(void acpi_release_unused_memory_mappings(void))
 
 /*
  * ACPI table load/unload interfaces
-- 
2.26.2

^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [RFT][PATCH v2 2/4] ACPI: OSL: Add support for deferred unmapping of ACPI memory
  2020-06-22 13:50 ` [RFT][PATCH v2 0/4] " Rafael J. Wysocki
  2020-06-22 13:52   ` [RFT][PATCH v2 1/4] ACPICA: Defer unmapping of opregion memory if supported by OS Rafael J. Wysocki
@ 2020-06-22 13:53   ` Rafael J. Wysocki
  2020-06-22 14:56     ` Andy Shevchenko
  2020-06-22 14:01   ` [RFT][PATCH v2 3/4] ACPICA: Preserve memory opregion mappings if supported by OS Rafael J. Wysocki
                     ` (2 subsequent siblings)
  4 siblings, 1 reply; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-22 13:53 UTC (permalink / raw)
  To: Dan Williams, Erik Kaneda
  Cc: rafael.j.wysocki, Len Brown, Borislav Petkov, Ira Weiny,
	James Morse, Myron Stowe, Andy Shevchenko, linux-kernel,
	linux-acpi, linux-nvdimm, Bob Moore

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Implement acpi_os_unmap_deferred() and
acpi_os_release_unused_mappings() and set ACPI_USE_DEFERRED_UNMAPPING
to allow ACPICA to use deferred unmapping of memory in
acpi_ex_system_memory_space_handler() so as to avoid RCU-related
performance issues with memory opregions.

Reported-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/acpi/osl.c                | 160 +++++++++++++++++++++++-------
 include/acpi/platform/aclinuxex.h |   4 +
 2 files changed, 128 insertions(+), 36 deletions(-)

diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
index 762c5d50b8fe..28863d908fa8 100644
--- a/drivers/acpi/osl.c
+++ b/drivers/acpi/osl.c
@@ -77,12 +77,16 @@ struct acpi_ioremap {
 	void __iomem *virt;
 	acpi_physical_address phys;
 	acpi_size size;
-	unsigned long refcount;
+	union {
+		unsigned long refcount;
+		struct list_head gc;
+	} track;
 };
 
 static LIST_HEAD(acpi_ioremaps);
 static DEFINE_MUTEX(acpi_ioremap_lock);
 #define acpi_ioremap_lock_held() lock_is_held(&acpi_ioremap_lock.dep_map)
+static LIST_HEAD(unused_mappings);
 
 static void __init acpi_request_region (struct acpi_generic_address *gas,
 	unsigned int length, char *desc)
@@ -250,7 +254,7 @@ void __iomem *acpi_os_get_iomem(acpi_physical_address phys, unsigned int size)
 	map = acpi_map_lookup(phys, size);
 	if (map) {
 		virt = map->virt + (phys - map->phys);
-		map->refcount++;
+		map->track.refcount++;
 	}
 	mutex_unlock(&acpi_ioremap_lock);
 	return virt;
@@ -335,7 +339,7 @@ void __iomem __ref
 	/* Check if there's a suitable mapping already. */
 	map = acpi_map_lookup(phys, size);
 	if (map) {
-		map->refcount++;
+		map->track.refcount++;
 		goto out;
 	}
 
@@ -358,7 +362,7 @@ void __iomem __ref
 	map->virt = virt;
 	map->phys = pg_off;
 	map->size = pg_sz;
-	map->refcount = 1;
+	map->track.refcount = 1;
 
 	list_add_tail_rcu(&map->list, &acpi_ioremaps);
 
@@ -375,40 +379,41 @@ void *__ref acpi_os_map_memory(acpi_physical_address phys, acpi_size size)
 EXPORT_SYMBOL_GPL(acpi_os_map_memory);
 
 /* Must be called with mutex_lock(&acpi_ioremap_lock) */
-static unsigned long acpi_os_drop_map_ref(struct acpi_ioremap *map)
+static bool acpi_os_drop_map_ref(struct acpi_ioremap *map, bool defer)
 {
-	unsigned long refcount = --map->refcount;
+	if (--map->track.refcount)
+		return true;
 
-	if (!refcount)
-		list_del_rcu(&map->list);
-	return refcount;
+	list_del_rcu(&map->list);
+
+	if (defer) {
+		INIT_LIST_HEAD(&map->track.gc);
+		list_add_tail(&map->track.gc, &unused_mappings);
+		return true;
+	}
+
+	return false;
 }
 
-static void acpi_os_map_cleanup(struct acpi_ioremap *map)
+static void __acpi_os_map_cleanup(struct acpi_ioremap *map)
 {
-	synchronize_rcu_expedited();
 	acpi_unmap(map->phys, map->virt);
 	kfree(map);
 }
 
-/**
- * acpi_os_unmap_iomem - Drop a memory mapping reference.
- * @virt: Start of the address range to drop a reference to.
- * @size: Size of the address range to drop a reference to.
- *
- * Look up the given virtual address range in the list of existing ACPI memory
- * mappings, drop a reference to it and unmap it if there are no more active
- * references to it.
- *
- * During early init (when acpi_permanent_mmap has not been set yet) this
- * routine simply calls __acpi_unmap_table() to get the job done.  Since
- * __acpi_unmap_table() is an __init function, the __ref annotation is needed
- * here.
- */
-void __ref acpi_os_unmap_iomem(void __iomem *virt, acpi_size size)
+static void acpi_os_map_cleanup(struct acpi_ioremap *map)
+{
+	if (!map)
+		return;
+
+	synchronize_rcu_expedited();
+	__acpi_os_map_cleanup(map);
+}
+
+static void __ref __acpi_os_unmap_iomem(void __iomem *virt, acpi_size size,
+					bool defer)
 {
 	struct acpi_ioremap *map;
-	unsigned long refcount;
 
 	if (!acpi_permanent_mmap) {
 		__acpi_unmap_table(virt, size);
@@ -416,26 +421,102 @@ void __ref acpi_os_unmap_iomem(void __iomem *virt, acpi_size size)
 	}
 
 	mutex_lock(&acpi_ioremap_lock);
+
 	map = acpi_map_lookup_virt(virt, size);
 	if (!map) {
 		mutex_unlock(&acpi_ioremap_lock);
 		WARN(true, PREFIX "%s: bad address %p\n", __func__, virt);
 		return;
 	}
-	refcount = acpi_os_drop_map_ref(map);
+	if (acpi_os_drop_map_ref(map, defer))
+		map = NULL;
+
 	mutex_unlock(&acpi_ioremap_lock);
 
-	if (!refcount)
-		acpi_os_map_cleanup(map);
+	acpi_os_map_cleanup(map);
+}
+
+/**
+ * acpi_os_unmap_iomem - Drop a memory mapping reference.
+ * @virt: Start of the address range to drop a reference to.
+ * @size: Size of the address range to drop a reference to.
+ *
+ * Look up the given virtual address range in the list of existing ACPI memory
+ * mappings, drop a reference to it and unmap it if there are no more active
+ * references to it.
+ *
+ * During early init (when acpi_permanent_mmap has not been set yet) this
+ * routine simply calls __acpi_unmap_table() to get the job done.  Since
+ * __acpi_unmap_table() is an __init function, the __ref annotation is needed
+ * here.
+ */
+void __ref acpi_os_unmap_iomem(void __iomem *virt, acpi_size size)
+{
+	__acpi_os_unmap_iomem(virt, size, false);
 }
 EXPORT_SYMBOL_GPL(acpi_os_unmap_iomem);
 
 void __ref acpi_os_unmap_memory(void *virt, acpi_size size)
 {
-	return acpi_os_unmap_iomem((void __iomem *)virt, size);
+	acpi_os_unmap_iomem((void __iomem *)virt, size);
 }
 EXPORT_SYMBOL_GPL(acpi_os_unmap_memory);
 
+/**
+ * acpi_os_unmap_deferred - Drop a memory mapping reference.
+ * @virt: Start of the address range to drop a reference to.
+ * @size: Size of the address range to drop a reference to.
+ *
+ * Look up the given virtual address range in the list of existing ACPI memory
+ * mappings, drop a reference to it and if there are no more active references
+ * to it, put it in the list of unused memory mappings.
+ *
+ * During early init (when acpi_permanent_mmap has not been set yet) this
+ * routine behaves like acpi_os_unmap_memory().
+ */
+void __ref acpi_os_unmap_deferred(void *virt, acpi_size size)
+{
+	__acpi_os_unmap_iomem((void __iomem *)virt, size, true);
+}
+
+/**
+ * acpi_os_release_unused_mappings - Release unused ACPI memory mappings.
+ */
+void acpi_os_release_unused_mappings(void)
+{
+	struct list_head list;
+
+	INIT_LIST_HEAD(&list);
+
+	/*
+	 * First avoid looking at mappings that may be added to the "unused"
+	 * list while the synchronize_rcu() below is running.
+	 */
+	mutex_lock(&acpi_ioremap_lock);
+
+	list_splice_init(&unused_mappings, &list);
+
+	mutex_unlock(&acpi_ioremap_lock);
+
+	if (list_empty(&list))
+		return;
+
+	/*
+	 * Wait for the possible users of the mappings in the "unused" list to
+	 * stop using them.
+	 */
+	synchronize_rcu();
+
+	/* Release the unused mappings in the list. */
+	while (!list_empty(&list)) {
+		struct acpi_ioremap *map;
+
+		map = list_entry(list.next, struct acpi_ioremap, track.gc);
+		list_del(&map->track.gc);
+		__acpi_os_map_cleanup(map);
+	}
+}
+
 int acpi_os_map_generic_address(struct acpi_generic_address *gas)
 {
 	u64 addr;
@@ -461,7 +542,6 @@ void acpi_os_unmap_generic_address(struct acpi_generic_address *gas)
 {
 	u64 addr;
 	struct acpi_ioremap *map;
-	unsigned long refcount;
 
 	if (gas->space_id != ACPI_ADR_SPACE_SYSTEM_MEMORY)
 		return;
@@ -472,16 +552,18 @@ void acpi_os_unmap_generic_address(struct acpi_generic_address *gas)
 		return;
 
 	mutex_lock(&acpi_ioremap_lock);
+
 	map = acpi_map_lookup(addr, gas->bit_width / 8);
 	if (!map) {
 		mutex_unlock(&acpi_ioremap_lock);
 		return;
 	}
-	refcount = acpi_os_drop_map_ref(map);
+	if (acpi_os_drop_map_ref(map, false))
+		map = NULL;
+
 	mutex_unlock(&acpi_ioremap_lock);
 
-	if (!refcount)
-		acpi_os_map_cleanup(map);
+	acpi_os_map_cleanup(map);
 }
 EXPORT_SYMBOL(acpi_os_unmap_generic_address);
 
@@ -1566,11 +1648,17 @@ static acpi_status acpi_deactivate_mem_region(acpi_handle handle, u32 level,
 acpi_status acpi_release_memory(acpi_handle handle, struct resource *res,
 				u32 level)
 {
+	acpi_status ret;
+
 	if (!(res->flags & IORESOURCE_MEM))
 		return AE_TYPE;
 
-	return acpi_walk_namespace(ACPI_TYPE_REGION, handle, level,
+	ret = acpi_walk_namespace(ACPI_TYPE_REGION, handle, level,
 				   acpi_deactivate_mem_region, NULL, res, NULL);
+
+	acpi_os_release_unused_mappings();
+
+	return ret;
 }
 EXPORT_SYMBOL_GPL(acpi_release_memory);
 
diff --git a/include/acpi/platform/aclinuxex.h b/include/acpi/platform/aclinuxex.h
index 04f88f2de781..e13f364d6c69 100644
--- a/include/acpi/platform/aclinuxex.h
+++ b/include/acpi/platform/aclinuxex.h
@@ -138,6 +138,10 @@ static inline void acpi_os_terminate_debugger(void)
 /*
  * OSL interfaces added by Linux
  */
+void acpi_os_unmap_deferred(void *virt, acpi_size size);
+void acpi_os_release_unused_mappings(void);
+
+#define ACPI_USE_DEFERRED_UNMAPPING
 
 #endif				/* __KERNEL__ */
 
-- 
2.26.2

^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [RFT][PATCH v2 3/4] ACPICA: Preserve memory opregion mappings if supported by OS
  2020-06-22 13:50 ` [RFT][PATCH v2 0/4] " Rafael J. Wysocki
  2020-06-22 13:52   ` [RFT][PATCH v2 1/4] ACPICA: Defer unmapping of opregion memory if supported by OS Rafael J. Wysocki
  2020-06-22 13:53   ` [RFT][PATCH v2 2/4] ACPI: OSL: Add support for deferred unmapping of ACPI memory Rafael J. Wysocki
@ 2020-06-22 14:01   ` Rafael J. Wysocki
  2020-06-26 22:53     ` Kaneda, Erik
  2020-06-22 14:02   ` [RFT][PATCH v2 4/4] ACPI: OSL: Implement acpi_os_map_memory_fast_path() Rafael J. Wysocki
  2020-06-26 17:28   ` [RFT][PATCH v3 0/4] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Rafael J. Wysocki
  4 siblings, 1 reply; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-22 14:01 UTC (permalink / raw)
  To: Dan Williams, Erik Kaneda
  Cc: rafael.j.wysocki, Len Brown, Borislav Petkov, Ira Weiny,
	James Morse, Myron Stowe, Andy Shevchenko, linux-kernel,
	linux-acpi, linux-nvdimm, Bob Moore

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

ACPICA's strategy with respect to the handling of memory mappings
associated with memory operation regions is to avoid mapping the
entire region at once which may be problematic at least in principle
(for example, it may lead to conflicts with overlapping mappings
having different attributes created by drivers).  It may also be
wasteful, because memory opregions on some systems take up vast
chunks of address space while the fields in those regions actually
accessed by AML are sparsely distributed.

For this reason, a one-page "window" is mapped for a given opregion
on the first memory access through it and if that "window" does not
cover an address range accessed through that opregion subsequently,
it is unmapped and a new "window" is mapped to replace it.  Next,
if the new "window" is not sufficient to access memory through the
opregion in question in the future, it will be replaced with yet
another "window" and so on.  That may lead to a suboptimal sequence
of memory mapping and unmapping operations, for example if two fields
in one opregion separated from each other by a sufficiently wide
chunk of unused address space are accessed in an alternating pattern.

Even with the previously introduced deferred unmapping supported by
the OS layer, the situation may still be suboptimal.  For instance,
the alternating memory access pattern mentioned above may produce
a relatively long list of mappings to release, with substantial
duplication among its entries, which could be avoided if
acpi_ex_system_memory_space_handler() did not release its previous
mapping as soon as the current access was not covered by it.

In order to improve that, modify acpi_ex_system_memory_space_handler()
to take advantage of the memory mappings reference counting at the OS
level if a suitable interface is provided.

Namely, if ACPI_USE_FAST_PATH_MAPPING is set, the OS is expected to
implement acpi_os_map_memory_fast_path() that will return NULL if
there is no mapping covering the given address range known to it.
If such a mapping is there, however, its reference counter will be
incremented and a pointer representing the requested virtual address
will be returned right away without any additional consequences.

That allows acpi_ex_system_memory_space_handler() to acquire
additional references to all new memory mappings with the help
of acpi_os_map_memory_fast_path() so as to retain them until the
memory opregions associated with them go away.  The function will
still use a new "window" mapping if the current one does not
cover the address range at hand, but it will avoid unmapping the
current one right away by adding it to a list of "known" mappings
associated with the given memory opregion which will be deleted at
the opregion deactivation time.  The mappings in that list can be
used every time a "new window" is needed so as to avoid overhead
related to the mapping and unmapping of memory.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/acpi/acpica/acinterp.h |   4 +
 drivers/acpi/acpica/evrgnini.c |   7 +-
 drivers/acpi/acpica/exregion.c | 159 ++++++++++++++++++++++++++++++++-
 3 files changed, 162 insertions(+), 8 deletions(-)

diff --git a/drivers/acpi/acpica/acinterp.h b/drivers/acpi/acpica/acinterp.h
index 1f1026fb06e9..db9c279baa2e 100644
--- a/drivers/acpi/acpica/acinterp.h
+++ b/drivers/acpi/acpica/acinterp.h
@@ -479,8 +479,12 @@ void acpi_ex_pci_cls_to_string(char *dest, u8 class_code[3]);
 
 u8 acpi_is_valid_space_id(u8 space_id);
 
+struct acpi_mem_space_context *acpi_ex_alloc_mem_space_context(void);
+
 void acpi_ex_unmap_region_memory(struct acpi_mem_space_context *mem_info);
 
+void acpi_ex_unmap_all_region_mappings(struct acpi_mem_space_context *mem_info);
+
 /*
  * exregion - default op_region handlers
  */
diff --git a/drivers/acpi/acpica/evrgnini.c b/drivers/acpi/acpica/evrgnini.c
index 9f33114a74ca..f6c5feea10bc 100644
--- a/drivers/acpi/acpica/evrgnini.c
+++ b/drivers/acpi/acpica/evrgnini.c
@@ -46,10 +46,10 @@ acpi_ev_system_memory_region_setup(acpi_handle handle,
 			local_region_context =
 			    (struct acpi_mem_space_context *)*region_context;
 
-			/* Delete a cached mapping if present */
+			/* Delete memory mappings if present */
 
 			if (local_region_context->mapped_length) {
-				acpi_ex_unmap_region_memory(local_region_context);
+				acpi_ex_unmap_all_region_mappings(local_region_context);
 			}
 			ACPI_FREE(local_region_context);
 			*region_context = NULL;
@@ -59,8 +59,7 @@ acpi_ev_system_memory_region_setup(acpi_handle handle,
 
 	/* Create a new context */
 
-	local_region_context =
-	    ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_mem_space_context));
+	local_region_context = acpi_ex_alloc_mem_space_context();
 	if (!(local_region_context)) {
 		return_ACPI_STATUS(AE_NO_MEMORY);
 	}
diff --git a/drivers/acpi/acpica/exregion.c b/drivers/acpi/acpica/exregion.c
index af777b7fccb0..9d97b6a67074 100644
--- a/drivers/acpi/acpica/exregion.c
+++ b/drivers/acpi/acpica/exregion.c
@@ -14,6 +14,40 @@
 #define _COMPONENT          ACPI_EXECUTER
 ACPI_MODULE_NAME("exregion")
 
+struct acpi_mem_mapping {
+	acpi_physical_address physical_address;
+	u8 *logical_address;
+	acpi_size length;
+	struct acpi_mem_mapping *next_mm;
+};
+
+struct acpi_mm_context {
+	struct acpi_mem_space_context mem_info;
+	struct acpi_mem_mapping *first_mm;
+};
+
+/*****************************************************************************
+ *
+ * FUNCTION:    acpi_ex_alloc_mem_space_context
+ *
+ * PARAMETERS:  None
+ *
+ * RETURN:      Pointer to a new region context object.
+ *
+ * DESCRIPTION: Allocate memory for memory operation region representation.
+ *
+ ****************************************************************************/
+struct acpi_mem_space_context *acpi_ex_alloc_mem_space_context(void)
+{
+	ACPI_FUNCTION_TRACE(acpi_ex_alloc_mem_space_context);
+
+#ifdef ACPI_USE_FAST_PATH_MAPPING
+	return ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_mm_context));
+#else
+	return ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_mem_space_context));
+#endif
+}
+
 /*****************************************************************************
  *
  * FUNCTION:    acpi_ex_unmap_region_memory
@@ -40,6 +74,44 @@ void acpi_ex_unmap_region_memory(struct acpi_mem_space_context *mem_info)
 	return_VOID;
 }
 
+/*****************************************************************************
+ *
+ * FUNCTION:    acpi_ex_unmap_all_region_mappings
+ *
+ * PARAMETERS:  mem_info            - Region specific context
+ *
+ * RETURN:      None
+ *
+ * DESCRIPTION: Unmap all mappings associated with a memory operation region.
+ *
+ ****************************************************************************/
+void acpi_ex_unmap_all_region_mappings(struct acpi_mem_space_context *mem_info)
+{
+#ifdef ACPI_USE_FAST_PATH_MAPPING
+	struct acpi_mm_context *mm_context = (struct acpi_mm_context *)mem_info;
+	struct acpi_mem_mapping *mm;
+#endif
+
+	ACPI_FUNCTION_TRACE(acpi_ex_unmap_all_region_mappings);
+
+	acpi_ex_unmap_region_memory(mem_info);
+
+#ifdef ACPI_USE_FAST_PATH_MAPPING
+	while (mm_context->first_mm) {
+		mm = mm_context->first_mm;
+		mm_context->first_mm = mm->next_mm;
+#ifdef ACPI_USE_DEFERRED_UNMAPPING
+		acpi_os_unmap_deferred(mm->logical_address, mm->length);
+#else
+		acpi_os_unmap_memory(mm->logical_address, mm->length);
+#endif
+		ACPI_FREE(mm);
+	}
+#endif /* ACPI_USE_FAST_PATH_MAPPING */
+
+	return_VOID;
+}
+
 /*******************************************************************************
  *
  * FUNCTION:    acpi_ex_system_memory_space_handler
@@ -70,6 +142,10 @@ acpi_ex_system_memory_space_handler(u32 function,
 	u32 length;
 	acpi_size map_length;
 	acpi_size page_boundary_map_length;
+#ifdef ACPI_USE_FAST_PATH_MAPPING
+	struct acpi_mm_context *mm_context = (struct acpi_mm_context *)mem_info;
+	struct acpi_mem_mapping *mm;
+#endif
 #ifdef ACPI_MISALIGNMENT_NOT_SUPPORTED
 	u32 remainder;
 #endif
@@ -128,7 +204,7 @@ acpi_ex_system_memory_space_handler(u32 function,
 					 mem_info->mapped_length))) {
 		/*
 		 * The request cannot be resolved by the current memory mapping;
-		 * Delete the existing mapping and create a new one.
+		 * Delete the current cached mapping and get a new one.
 		 */
 		if (mem_info->mapped_length) {
 
@@ -137,6 +213,36 @@ acpi_ex_system_memory_space_handler(u32 function,
 			acpi_ex_unmap_region_memory(mem_info);
 		}
 
+#ifdef ACPI_USE_FAST_PATH_MAPPING
+		/*
+		 * Look for an existing saved mapping matching the address range
+		 * at hand.  If found, make the OS layer bump up the reference
+		 * counter of that mapping, cache it and carry out the access.
+		 */
+		for (mm = mm_context->first_mm; mm; mm = mm->next_mm) {
+			if (address < mm->physical_address)
+				continue;
+
+			if ((u64)address + length >
+					(u64)mm->physical_address + mm->length)
+				continue;
+
+			/*
+			 * When called on a known-existing memory mapping,
+			 * acpi_os_map_memory_fast_path() must return the same
+			 * logical address as before or NULL.
+			 */
+			if (!acpi_os_map_memory_fast_path(mm->physical_address,
+							  mm->length))
+				continue;
+
+			mem_info->mapped_logical_address = mm->logical_address;
+			mem_info->mapped_physical_address = mm->physical_address;
+			mem_info->mapped_length = mm->length;
+			goto access;
+		}
+#endif /* ACPI_USE_FAST_PATH_MAPPING */
+
 		/*
 		 * October 2009: Attempt to map from the requested address to the
 		 * end of the region. However, we will never map more than one
@@ -168,9 +274,8 @@ acpi_ex_system_memory_space_handler(u32 function,
 
 		/* Create a new mapping starting at the address given */
 
-		mem_info->mapped_logical_address =
-		    acpi_os_map_memory(address, map_length);
-		if (!mem_info->mapped_logical_address) {
+		logical_addr_ptr = acpi_os_map_memory(address, map_length);
+		if (!logical_addr_ptr) {
 			ACPI_ERROR((AE_INFO,
 				    "Could not map memory at 0x%8.8X%8.8X, size %u",
 				    ACPI_FORMAT_UINT64(address),
@@ -181,10 +286,56 @@ acpi_ex_system_memory_space_handler(u32 function,
 
 		/* Save the physical address and mapping size */
 
+		mem_info->mapped_logical_address = logical_addr_ptr;
 		mem_info->mapped_physical_address = address;
 		mem_info->mapped_length = map_length;
+
+#ifdef ACPI_USE_FAST_PATH_MAPPING
+		/*
+		 * Create a new mm list entry to save the new mapping for
+		 * removal at the operation region deactivation time.
+		 */
+		mm = ACPI_ALLOCATE_ZEROED(sizeof(*mm));
+		if (!mm) {
+			/*
+			 * No room to save the new mapping, but this is not
+			 * critical.  Just log the error and carry out the
+			 * access as requested.
+			 */
+			ACPI_ERROR((AE_INFO,
+				    "Not enough memory to save memory mapping at 0x%8.8X%8.8X, size %u",
+				    ACPI_FORMAT_UINT64(address),
+				    (u32)map_length));
+			goto access;
+		}
+		/*
+		 * Bump up the new mapping's reference counter in the OS layer
+		 * to prevent it from getting dropped prematurely.
+		 */
+		if (!acpi_os_map_memory_fast_path(address, map_length)) {
+			/*
+			 * Something has gone wrong, but this is not critical.
+			 * Log the error, free the mm list entry that won't be
+			 * used and carry out the access as requested.
+			 */
+			ACPI_ERROR((AE_INFO,
+				    "Unable to save memory mapping at 0x%8.8X%8.8X, size %u",
+				    ACPI_FORMAT_UINT64(address),
+				    (u32)map_length));
+			ACPI_FREE(mm);
+			goto access;
+		}
+		mm->physical_address = address;
+		mm->logical_address = logical_addr_ptr;
+		mm->length = map_length;
+		mm->next_mm = mm_context->first_mm;
+		mm_context->first_mm = mm;
 	}
 
+access:
+#else /* !ACPI_USE_FAST_PATH_MAPPING */
+	}
+#endif /* !ACPI_USE_FAST_PATH_MAPPING */
 	/*
 	 * Generate a logical pointer corresponding to the address we want to
 	 * access
-- 
2.26.2

^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [RFT][PATCH v2 4/4] ACPI: OSL: Implement acpi_os_map_memory_fast_path()
  2020-06-22 13:50 ` [RFT][PATCH v2 0/4] " Rafael J. Wysocki
                     ` (2 preceding siblings ...)
  2020-06-22 14:01   ` [RFT][PATCH v2 3/4] ACPICA: Preserve memory opregion mappings if supported by OS Rafael J. Wysocki
@ 2020-06-22 14:02   ` Rafael J. Wysocki
  2020-06-26 17:28   ` [RFT][PATCH v3 0/4] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Rafael J. Wysocki
  4 siblings, 0 replies; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-22 14:02 UTC (permalink / raw)
  To: Dan Williams, Erik Kaneda
  Cc: rafael.j.wysocki, Len Brown, Borislav Petkov, Ira Weiny,
	James Morse, Myron Stowe, Andy Shevchenko, linux-kernel,
	linux-acpi, linux-nvdimm, Bob Moore

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Add acpi_os_map_memory_fast_path() and set ACPI_USE_FAST_PATH_MAPPING
to allow acpi_ex_system_memory_space_handler() to avoid unnecessary
memory mapping and unmapping overhead by retaining all memory
mappings created by it until the memory opregions associated with
them go away.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/acpi/osl.c                | 65 +++++++++++++++++++++++--------
 include/acpi/platform/aclinuxex.h |  4 ++
 2 files changed, 53 insertions(+), 16 deletions(-)

diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
index 28863d908fa8..89554ec9a178 100644
--- a/drivers/acpi/osl.c
+++ b/drivers/acpi/osl.c
@@ -306,21 +306,8 @@ static void acpi_unmap(acpi_physical_address pg_off, void __iomem *vaddr)
 		iounmap(vaddr);
 }
 
-/**
- * acpi_os_map_iomem - Get a virtual address for a given physical address range.
- * @phys: Start of the physical address range to map.
- * @size: Size of the physical address range to map.
- *
- * Look up the given physical address range in the list of existing ACPI memory
- * mappings.  If found, get a reference to it and return a pointer to it (its
- * virtual address).  If not found, map it, add it to that list and return a
- * pointer to it.
- *
- * During early init (when acpi_permanent_mmap has not been set yet) this
- * routine simply calls __acpi_map_table() to get the job done.
- */
-void __iomem __ref
-*acpi_os_map_iomem(acpi_physical_address phys, acpi_size size)
+static void __iomem __ref *__acpi_os_map_iomem(acpi_physical_address phys,
+					       acpi_size size, bool fast_path)
 {
 	struct acpi_ioremap *map;
 	void __iomem *virt;
@@ -332,8 +319,12 @@ void __iomem __ref
 		return NULL;
 	}
 
-	if (!acpi_permanent_mmap)
+	if (!acpi_permanent_mmap) {
+		if (WARN_ON(fast_path))
+			return NULL;
+
 		return __acpi_map_table((unsigned long)phys, size);
+	}
 
 	mutex_lock(&acpi_ioremap_lock);
 	/* Check if there's a suitable mapping already. */
@@ -343,6 +334,11 @@ void __iomem __ref
 		goto out;
 	}
 
+	if (fast_path) {
+		mutex_unlock(&acpi_ioremap_lock);
+		return NULL;
+	}
+
 	map = kzalloc(sizeof(*map), GFP_KERNEL);
 	if (!map) {
 		mutex_unlock(&acpi_ioremap_lock);
@@ -370,6 +366,25 @@ void __iomem __ref
 	mutex_unlock(&acpi_ioremap_lock);
 	return map->virt + (phys - map->phys);
 }
+
+/**
+ * acpi_os_map_iomem - Get a virtual address for a given physical address range.
+ * @phys: Start of the physical address range to map.
+ * @size: Size of the physical address range to map.
+ *
+ * Look up the given physical address range in the list of existing ACPI memory
+ * mappings.  If found, get a reference to it and return a pointer representing
+ * its virtual address.  If not found, map it, add it to that list and return a
+ * pointer representing its virtual address.
+ *
+ * During early init (when acpi_permanent_mmap has not been set yet) call
+ * __acpi_map_table() to obtain the mapping.
+ */
+void __iomem __ref *acpi_os_map_iomem(acpi_physical_address phys,
+				      acpi_size size)
+{
+	return __acpi_os_map_iomem(phys, size, false);
+}
 EXPORT_SYMBOL_GPL(acpi_os_map_iomem);
 
 void *__ref acpi_os_map_memory(acpi_physical_address phys, acpi_size size)
@@ -378,6 +393,24 @@ void *__ref acpi_os_map_memory(acpi_physical_address phys, acpi_size size)
 }
 EXPORT_SYMBOL_GPL(acpi_os_map_memory);
 
+/**
+ * acpi_os_map_memory_fast_path - Fast-path physical-to-virtual address mapping.
+ * @phys: Start of the physical address range to map.
+ * @size: Size of the physical address range to map.
+ *
+ * Look up the given physical address range in the list of existing ACPI memory
+ * mappings.  If found, get a reference to it and return a pointer representing
+ * its virtual address.  If not found, return NULL.
+ *
+ * During early init (when acpi_permanent_mmap has not been set yet) log a
+ * warning and return NULL.
+ */
+void __ref *acpi_os_map_memory_fast_path(acpi_physical_address phys,
+					acpi_size size)
+{
+	return __acpi_os_map_iomem(phys, size, true);
+}
+
 /* Must be called with mutex_lock(&acpi_ioremap_lock) */
 static bool acpi_os_drop_map_ref(struct acpi_ioremap *map, bool defer)
 {
diff --git a/include/acpi/platform/aclinuxex.h b/include/acpi/platform/aclinuxex.h
index e13f364d6c69..89c387449425 100644
--- a/include/acpi/platform/aclinuxex.h
+++ b/include/acpi/platform/aclinuxex.h
@@ -143,6 +143,10 @@ void acpi_os_release_unused_mappings(void);
 
 #define ACPI_USE_DEFERRED_UNMAPPING
 
+void *acpi_os_map_memory_fast_path(acpi_physical_address where, acpi_size length);
+
+#define ACPI_USE_FAST_PATH_MAPPING
+
 #endif				/* __KERNEL__ */
 
 #endif				/* __ACLINUXEX_H__ */
-- 
2.26.2

^ permalink raw reply related	[flat|nested] 51+ messages in thread

* Re: [RFT][PATCH v2 2/4] ACPI: OSL: Add support for deferred unmapping of ACPI memory
  2020-06-22 13:53   ` [RFT][PATCH v2 2/4] ACPI: OSL: Add support for deferred unmapping of ACPI memory Rafael J. Wysocki
@ 2020-06-22 14:56     ` Andy Shevchenko
  2020-06-22 15:27       ` Rafael J. Wysocki
  0 siblings, 1 reply; 51+ messages in thread
From: Andy Shevchenko @ 2020-06-22 14:56 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Dan Williams, Erik Kaneda, Rafael J. Wysocki, Len Brown,
	Borislav Petkov, Ira Weiny, James Morse, Myron Stowe,
	Andy Shevchenko, Linux Kernel Mailing List,
	ACPI Devel Mailing List, linux-nvdimm, Bob Moore

On Mon, Jun 22, 2020 at 5:06 PM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
>
> From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
>
> Implement acpi_os_unmap_deferred() and
> acpi_os_release_unused_mappings() and set ACPI_USE_DEFERRED_UNMAPPING
> to allow ACPICA to use deferred unmapping of memory in
> acpi_ex_system_memory_space_handler() so as to avoid RCU-related
> performance issues with memory opregions.

...

> +static bool acpi_os_drop_map_ref(struct acpi_ioremap *map, bool defer)
>  {
> -       unsigned long refcount = --map->refcount;
> +       if (--map->track.refcount)
> +               return true;
>
> -       if (!refcount)
> -               list_del_rcu(&map->list);
> -       return refcount;
> +       list_del_rcu(&map->list);
> +

> +       if (defer) {
> +               INIT_LIST_HEAD(&map->track.gc);
> +               list_add_tail(&map->track.gc, &unused_mappings);

> +               return true;
> +       }
> +
> +       return false;

A nit:

Effectively it returns a value of defer.

  return defer;

>  }

...

> @@ -416,26 +421,102 @@ void __ref acpi_os_unmap_iomem(void __iomem *virt, acpi_size size)
>         }
>
>         mutex_lock(&acpi_ioremap_lock);
> +
>         map = acpi_map_lookup_virt(virt, size);

A nit: should it be somewhere else (I mean in another patch)?

>         if (!map) {

...

> +       /* Release the unused mappings in the list. */
> +       while (!list_empty(&list)) {
> +               struct acpi_ioremap *map;
> +
> +               map = list_entry(list.next, struct acpi_ioremap, track.gc);

A nit: if __acpi_os_map_cleanup() (actually acpi_unmap() according to
the code) has no side effects, can we use list_for_each_entry_safe()
here?

> +               list_del(&map->track.gc);
> +               __acpi_os_map_cleanup(map);
> +       }
> +}

...

> @@ -472,16 +552,18 @@ void acpi_os_unmap_generic_address(struct acpi_generic_address *gas)
>                 return;
>
>         mutex_lock(&acpi_ioremap_lock);
> +
>         map = acpi_map_lookup(addr, gas->bit_width / 8);

A nit: should it be somewhere else (I mean in another patch)?

-- 
With Best Regards,
Andy Shevchenko

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [RFT][PATCH v2 2/4] ACPI: OSL: Add support for deferred unmapping of ACPI memory
  2020-06-22 14:56     ` Andy Shevchenko
@ 2020-06-22 15:27       ` Rafael J. Wysocki
  2020-06-22 15:46         ` Andy Shevchenko
  0 siblings, 1 reply; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-22 15:27 UTC (permalink / raw)
  To: Andy Shevchenko
  Cc: Rafael J. Wysocki, Dan Williams, Erik Kaneda, Rafael J. Wysocki,
	Len Brown, Borislav Petkov, Ira Weiny, James Morse, Myron Stowe,
	Andy Shevchenko, Linux Kernel Mailing List,
	ACPI Devel Mailing List, linux-nvdimm, Bob Moore

On Mon, Jun 22, 2020 at 4:56 PM Andy Shevchenko
<andy.shevchenko@gmail.com> wrote:
>
> On Mon, Jun 22, 2020 at 5:06 PM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
> >
> > From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> >
> > Implement acpi_os_unmap_deferred() and
> > acpi_os_release_unused_mappings() and set ACPI_USE_DEFERRED_UNMAPPING
> > to allow ACPICA to use deferred unmapping of memory in
> > acpi_ex_system_memory_space_handler() so as to avoid RCU-related
> > performance issues with memory opregions.
>
> ...
>
> > +static bool acpi_os_drop_map_ref(struct acpi_ioremap *map, bool defer)
> >  {
> > -       unsigned long refcount = --map->refcount;
> > +       if (--map->track.refcount)
> > +               return true;
> >
> > -       if (!refcount)
> > -               list_del_rcu(&map->list);
> > -       return refcount;
> > +       list_del_rcu(&map->list);
> > +
>
> > +       if (defer) {
> > +               INIT_LIST_HEAD(&map->track.gc);
> > +               list_add_tail(&map->track.gc, &unused_mappings);
>
> > +               return true;
> > +       }
> > +
> > +       return false;
>
> A nit:
>
> Effectively it returns a value of defer.
>
>   return defer;
>
> >  }

Do you mean that one line of code could be saved?  Yes, it could.

>
> ...
>
> > @@ -416,26 +421,102 @@ void __ref acpi_os_unmap_iomem(void __iomem *virt, acpi_size size)
> >         }
> >
> >         mutex_lock(&acpi_ioremap_lock);
> > +
> >         map = acpi_map_lookup_virt(virt, size);
>
> A nit: should it be somewhere else (I mean in another patch)?

Do you mean the extra empty line?

No, I don't think so, or the code style after this patch would not
look consistent.

> >         if (!map) {
>
> ...
>
> > +       /* Release the unused mappings in the list. */
> > +       while (!list_empty(&list)) {
> > +               struct acpi_ioremap *map;
> > +
> > +               map = list_entry(list.next, struct acpi_ioremap, track.gc);
>
> A nt: if __acpi_os_map_cleanup() (actually acpi_unmap() according to
> the code) has no side effects, can we use list_for_each_entry_safe()
> here?

I actually prefer a do .. while version of this which saves the
initial check (which has been carried out already).

> > +               list_del(&map->track.gc);
> > +               __acpi_os_map_cleanup(map);
> > +       }
> > +}
>
> ...
>
> > @@ -472,16 +552,18 @@ void acpi_os_unmap_generic_address(struct acpi_generic_address *gas)
> >                 return;
> >
> >         mutex_lock(&acpi_ioremap_lock);
> > +
> >         map = acpi_map_lookup(addr, gas->bit_width / 8);
>
> A nit: should it be somewhere else (I mean in another patch)?

Nope.

Thanks!

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [RFT][PATCH v2 2/4] ACPI: OSL: Add support for deferred unmapping of ACPI memory
  2020-06-22 15:27       ` Rafael J. Wysocki
@ 2020-06-22 15:46         ` Andy Shevchenko
  0 siblings, 0 replies; 51+ messages in thread
From: Andy Shevchenko @ 2020-06-22 15:46 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael J. Wysocki, Dan Williams, Erik Kaneda, Rafael J. Wysocki,
	Len Brown, Borislav Petkov, Ira Weiny, James Morse, Myron Stowe,
	Andy Shevchenko, Linux Kernel Mailing List,
	ACPI Devel Mailing List, linux-nvdimm, Bob Moore

On Mon, Jun 22, 2020 at 6:28 PM Rafael J. Wysocki <rafael@kernel.org> wrote:
> On Mon, Jun 22, 2020 at 4:56 PM Andy Shevchenko
> <andy.shevchenko@gmail.com> wrote:
> > On Mon, Jun 22, 2020 at 5:06 PM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:

...

> > > +               return true;
> > > +       }
> > > +
> > > +       return false;
> >
> > A nit:
> >
> > Effectively it returns a value of defer.
> >
> >   return defer;
> >
> > >  }
>
> Do you mean that one line of code could be saved?  Yes, it could.

Yes. The question here is whether it would make the returned value
clearer for the reader.

(For the rest, nevermind, choose whatever suits better in your opinion)

-- 
With Best Regards,
Andy Shevchenko

^ permalink raw reply	[flat|nested] 51+ messages in thread

* [RFT][PATCH v3 0/4] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter
  2020-06-22 13:50 ` [RFT][PATCH v2 0/4] " Rafael J. Wysocki
                     ` (3 preceding siblings ...)
  2020-06-22 14:02   ` [RFT][PATCH v2 4/4] ACPI: OSL: Implement acpi_os_map_memory_fast_path() Rafael J. Wysocki
@ 2020-06-26 17:28   ` Rafael J. Wysocki
  2020-06-26 17:31     ` [RFT][PATCH v3 1/4] ACPICA: Take deferred unmapping of memory into account Rafael J. Wysocki
                       ` (5 more replies)
  4 siblings, 6 replies; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-26 17:28 UTC (permalink / raw)
  To: Dan Williams, Erik Kaneda
  Cc: rafael.j.wysocki, Len Brown, Borislav Petkov, Ira Weiny,
	James Morse, Myron Stowe, Andy Shevchenko, linux-kernel,
	linux-acpi, linux-nvdimm, Bob Moore

Hi All,

On Monday, June 22, 2020 3:50:42 PM CEST Rafael J. Wysocki wrote:
> Hi All,
> 
> This series is to address the problem with RCU synchronization occurring,
> possibly relatively often, inside of acpi_ex_system_memory_space_handler(),
> when the namespace and interpreter mutexes are held.
> 
> Like I said before, I had decided to change the approach used in the previous
> iteration of this series and to allow the unmap operations carried out by 
> acpi_ex_system_memory_space_handler() to be deferred in the first place,
> which is done in patches [1-2/4].

In the meantime I realized that calling synchronize_rcu_expedited() under the
"tables" mutex within ACPICA is not a good idea either, and that there is no
reason for any users of acpi_os_unmap_memory() in the tree to use the "sync"
variant of unmapping.

So, unless I'm missing something, acpi_os_unmap_memory() can be changed to
always defer the final unmapping and the only ACPICA change needed to support
that is the addition of the acpi_os_release_unused_mappings() call to get rid
of the unused mappings when leaving the interpreter (modulo the extra call in
the debug code for consistency).

So patches [1-2/4] have been changed accordingly.
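
[The deferral scheme sketched above can be modeled in a few lines of user-space C. This is not the kernel code and uses hypothetical names with no real RCU: dropping the last reference moves a mapping onto an "unused" list instead of destroying it, and a later batch release, invoked when the interpreter is exited, frees the whole list after a single grace period instead of one per unmap.]

```c
#include <assert.h>
#include <stdlib.h>

/*
 * User-space sketch (hypothetical names, no real RCU) of deferred
 * unmapping: the last reference drop queues the mapping rather than
 * freeing it, and a batch release pays for one grace period total.
 */
struct mapping {
	unsigned long refcount;
	struct mapping *next;	/* link on the "unused" list */
};

static struct mapping *unused_head;

/* Drop a reference; queue the mapping instead of freeing it. */
static void unmap_deferred(struct mapping *m)
{
	if (--m->refcount)
		return;
	m->next = unused_head;
	unused_head = m;
}

/* Batch release; returns the number of mappings freed. */
static unsigned int release_unused_mappings(void)
{
	unsigned int freed = 0;

	/* A single synchronize_rcu() would be called here. */
	while (unused_head) {
		struct mapping *m = unused_head;

		unused_head = m->next;
		free(m);
		freed++;
	}
	return freed;
}
```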

> However, it turns out that the "fast-path" mapping is still useful on top of
> the above to reduce the number of ioremap-iounmap cycles for the same address
> range and so it is introduced by patches [3-4/4].

Patches [3-4/4] still do what they did, but they have been simplified a bit
after rebasing on top of the new [1-2/4].

The information below is still valid, but it applies to v3, of course.

> For details, please refer to the patch changelogs.
> 
> The series is available from the git branch at
> 
>  git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git \
>  acpica-osl
> 
> for easier testing.

Also, the series has been tested locally.

Thanks,
Rafael


^ permalink raw reply	[flat|nested] 51+ messages in thread

* [RFT][PATCH v3 1/4] ACPICA: Take deferred unmapping of memory into account
  2020-06-26 17:28   ` [RFT][PATCH v3 0/4] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Rafael J. Wysocki
@ 2020-06-26 17:31     ` Rafael J. Wysocki
  2020-06-26 17:31     ` [RFT][PATCH v3 2/4] ACPI: OSL: Implement deferred unmapping of ACPI memory Rafael J. Wysocki
                       ` (4 subsequent siblings)
  5 siblings, 0 replies; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-26 17:31 UTC (permalink / raw)
  To: Dan Williams, Erik Kaneda
  Cc: rafael.j.wysocki, Len Brown, Borislav Petkov, Ira Weiny,
	James Morse, Myron Stowe, Andy Shevchenko, linux-kernel,
	linux-acpi, linux-nvdimm, Bob Moore

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

The ACPI OS layer in Linux uses RCU to protect the walkers of the
list of ACPI memory mappings from seeing an inconsistent state
while it is being updated.  Among other situations, that list can
be walked in (NMI and non-NMI) interrupt context, so using a
sleeping lock to protect it is not an option.

However, performance issues related to the RCU usage in there
appear, as described by Dan Williams:

"Recently a performance problem was reported for a process invoking
a non-trivial ASL program. The method call in this case ends up
repetitively triggering a call path like:

    acpi_ex_store
    acpi_ex_store_object_to_node
    acpi_ex_write_data_to_field
    acpi_ex_insert_into_field
    acpi_ex_write_with_update_rule
    acpi_ex_field_datum_io
    acpi_ex_access_region
    acpi_ev_address_space_dispatch
    acpi_ex_system_memory_space_handler
    acpi_os_map_cleanup.part.14
    _synchronize_rcu_expedited.constprop.89
    schedule

The end result of frequent synchronize_rcu_expedited() invocation is
tiny sub-millisecond spurts of execution where the scheduler freely
migrates this apparently sleepy task. The overhead of frequent
scheduler invocation multiplies the execution time by a factor
of 2-3X."

The source of this is that acpi_ex_system_memory_space_handler()
unmaps the memory mapping currently cached by it at the access time
if that mapping doesn't cover the memory area being accessed.
Consequently, if there is a memory opregion with two fields
separated from each other by an unused chunk of address space that
is large enough not to be covered by a single mapping, and they
happen to be used in an alternating pattern, the unmapping will
occur on every acpi_ex_system_memory_space_handler() invocation for
that memory opregion and that will lead to significant overhead.
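
[A user-space model of this single cached "window" makes the thrashing visible. All names below are hypothetical and the remap counter merely stands in for the iounmap()/ioremap() pair: with two fields farther apart than one window, strictly alternating accesses miss the cache every single time.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * User-space sketch (hypothetical names) of the single cached mapping
 * in acpi_ex_system_memory_space_handler(): one "window" stays mapped,
 * and any access outside of it forces an unmap/remap cycle.
 */
#define WINDOW_SIZE 4096

struct mem_info {
	uint64_t mapped_phys;	/* start of the cached window */
	size_t mapped_len;	/* 0 means nothing is mapped yet */
};

static unsigned long remap_count;

static void access_via_window(struct mem_info *mi, uint64_t addr)
{
	if (mi->mapped_len == 0 || addr < mi->mapped_phys ||
	    addr >= mi->mapped_phys + mi->mapped_len) {
		/* Miss: drop the old window and map a new one. */
		mi->mapped_phys = addr & ~(uint64_t)(WINDOW_SIZE - 1);
		mi->mapped_len = WINDOW_SIZE;
		remap_count++;	/* stands in for iounmap() + ioremap() */
	}
	/* The actual field read/write would go through the mapping here. */
}
```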

To address that, acpi_os_unmap_memory() provided by Linux can be
modified so as to avoid unmapping the memory region matching the
address range at hand right away and queue it up for later removal.

However, that requires the deferred unmapping of unused memory
regions to be carried out at least occasionally, so modify
ACPICA to do that by invoking a new OS layer function,
acpi_os_release_unused_mappings(), for this purpose every time
the AML interpreter is exited.

For completeness, also call that function from
acpi_db_test_all_objects() after all of the fields have been
tested.

Reported-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/acpi/acpica/dbtest.c  | 4 ++++
 drivers/acpi/acpica/exutils.c | 2 ++
 include/acpi/acpiosxf.h       | 4 ++++
 3 files changed, 10 insertions(+)

diff --git a/drivers/acpi/acpica/dbtest.c b/drivers/acpi/acpica/dbtest.c
index 6db44a5ac786..55931daa1779 100644
--- a/drivers/acpi/acpica/dbtest.c
+++ b/drivers/acpi/acpica/dbtest.c
@@ -220,6 +220,10 @@ static void acpi_db_test_all_objects(void)
 	(void)acpi_walk_namespace(ACPI_TYPE_ANY, ACPI_ROOT_OBJECT,
 				  ACPI_UINT32_MAX, acpi_db_test_one_object,
 				  NULL, NULL, NULL);
+
+	/* Release memory mappings that are not needed any more. */
+
+	acpi_os_release_unused_mappings();
 }
 
 /*******************************************************************************
diff --git a/drivers/acpi/acpica/exutils.c b/drivers/acpi/acpica/exutils.c
index 8fefa6feac2f..ae2030095b63 100644
--- a/drivers/acpi/acpica/exutils.c
+++ b/drivers/acpi/acpica/exutils.c
@@ -106,6 +106,8 @@ void acpi_ex_exit_interpreter(void)
 			    "Could not release AML Interpreter mutex"));
 	}
 
+	acpi_os_release_unused_mappings();
+
 	return_VOID;
 }
 
diff --git a/include/acpi/acpiosxf.h b/include/acpi/acpiosxf.h
index 33bb8c9a089d..0efe2d1725e2 100644
--- a/include/acpi/acpiosxf.h
+++ b/include/acpi/acpiosxf.h
@@ -187,6 +187,10 @@ void *acpi_os_map_memory(acpi_physical_address where, acpi_size length);
 void acpi_os_unmap_memory(void *logical_address, acpi_size size);
 #endif
 
+#ifndef ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_release_unused_mappings
+#define acpi_os_release_unused_mappings()	do { } while (FALSE)
+#endif
+
 #ifndef ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_get_physical_address
 acpi_status
 acpi_os_get_physical_address(void *logical_address,
-- 
2.26.2

^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [RFT][PATCH v3 2/4] ACPI: OSL: Implement deferred unmapping of ACPI memory
  2020-06-26 17:28   ` [RFT][PATCH v3 0/4] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Rafael J. Wysocki
  2020-06-26 17:31     ` [RFT][PATCH v3 1/4] ACPICA: Take deferred unmapping of memory into account Rafael J. Wysocki
@ 2020-06-26 17:31     ` Rafael J. Wysocki
  2020-06-26 17:32     ` [RFT][PATCH v3 3/4] ACPICA: Preserve memory opregion mappings if supported by OS Rafael J. Wysocki
                       ` (3 subsequent siblings)
  5 siblings, 0 replies; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-26 17:31 UTC (permalink / raw)
  To: Dan Williams, Erik Kaneda
  Cc: rafael.j.wysocki, Len Brown, Borislav Petkov, Ira Weiny,
	James Morse, Myron Stowe, Andy Shevchenko, linux-kernel,
	linux-acpi, linux-nvdimm, Bob Moore

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Rework acpi_os_unmap_memory() so that it does not release the memory
mapping matching the given address range right away but queues it
up for later removal, implement acpi_os_release_unused_mappings()
that will remove the unused ACPI memory mappings and add invocations
of it to acpi_release_memory() and to the table loading/unloading
code, to get rid of memory mappings that may be left behind.

Reported-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/acpi/acpi_configfs.c      |   3 +
 drivers/acpi/osl.c                | 153 +++++++++++++++++++++++-------
 drivers/acpi/tables.c             |   2 +
 include/acpi/platform/aclinux.h   |   1 +
 include/acpi/platform/aclinuxex.h |   2 +
 5 files changed, 125 insertions(+), 36 deletions(-)

diff --git a/drivers/acpi/acpi_configfs.c b/drivers/acpi/acpi_configfs.c
index ece8c1a921cc..dd167ff87dc4 100644
--- a/drivers/acpi/acpi_configfs.c
+++ b/drivers/acpi/acpi_configfs.c
@@ -59,6 +59,8 @@ static ssize_t acpi_table_aml_write(struct config_item *cfg,
 		table->header = NULL;
 	}
 
+	acpi_os_release_unused_mappings();
+
 	return ret;
 }
 
@@ -224,6 +226,7 @@ static void acpi_table_drop_item(struct config_group *group,
 
 	ACPI_INFO(("Host-directed Dynamic ACPI Table Unload"));
 	acpi_unload_table(table->index);
+	acpi_os_release_unused_mappings();
 }
 
 static struct configfs_group_operations acpi_table_group_ops = {
diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
index 762c5d50b8fe..749ae3e32193 100644
--- a/drivers/acpi/osl.c
+++ b/drivers/acpi/osl.c
@@ -77,12 +77,16 @@ struct acpi_ioremap {
 	void __iomem *virt;
 	acpi_physical_address phys;
 	acpi_size size;
-	unsigned long refcount;
+	union {
+		unsigned long refcount;
+		struct list_head gc;
+	} track;
 };
 
 static LIST_HEAD(acpi_ioremaps);
 static DEFINE_MUTEX(acpi_ioremap_lock);
 #define acpi_ioremap_lock_held() lock_is_held(&acpi_ioremap_lock.dep_map)
+static LIST_HEAD(unused_mappings);
 
 static void __init acpi_request_region (struct acpi_generic_address *gas,
 	unsigned int length, char *desc)
@@ -250,7 +254,7 @@ void __iomem *acpi_os_get_iomem(acpi_physical_address phys, unsigned int size)
 	map = acpi_map_lookup(phys, size);
 	if (map) {
 		virt = map->virt + (phys - map->phys);
-		map->refcount++;
+		map->track.refcount++;
 	}
 	mutex_unlock(&acpi_ioremap_lock);
 	return virt;
@@ -335,7 +339,7 @@ void __iomem __ref
 	/* Check if there's a suitable mapping already. */
 	map = acpi_map_lookup(phys, size);
 	if (map) {
-		map->refcount++;
+		map->track.refcount++;
 		goto out;
 	}
 
@@ -358,7 +362,7 @@ void __iomem __ref
 	map->virt = virt;
 	map->phys = pg_off;
 	map->size = pg_sz;
-	map->refcount = 1;
+	map->track.refcount = 1;
 
 	list_add_tail_rcu(&map->list, &acpi_ioremaps);
 
@@ -375,40 +379,39 @@ void *__ref acpi_os_map_memory(acpi_physical_address phys, acpi_size size)
 EXPORT_SYMBOL_GPL(acpi_os_map_memory);
 
 /* Must be called with mutex_lock(&acpi_ioremap_lock) */
-static unsigned long acpi_os_drop_map_ref(struct acpi_ioremap *map)
+static bool acpi_os_drop_map_ref(struct acpi_ioremap *map, bool defer)
 {
-	unsigned long refcount = --map->refcount;
+	if (--map->track.refcount)
+		return true;
 
-	if (!refcount)
-		list_del_rcu(&map->list);
-	return refcount;
+	list_del_rcu(&map->list);
+
+	if (defer) {
+		INIT_LIST_HEAD(&map->track.gc);
+		list_add_tail(&map->track.gc, &unused_mappings);
+	}
+	return defer;
 }
 
-static void acpi_os_map_cleanup(struct acpi_ioremap *map)
+static void __acpi_os_map_cleanup(struct acpi_ioremap *map)
 {
-	synchronize_rcu_expedited();
 	acpi_unmap(map->phys, map->virt);
 	kfree(map);
 }
 
-/**
- * acpi_os_unmap_iomem - Drop a memory mapping reference.
- * @virt: Start of the address range to drop a reference to.
- * @size: Size of the address range to drop a reference to.
- *
- * Look up the given virtual address range in the list of existing ACPI memory
- * mappings, drop a reference to it and unmap it if there are no more active
- * references to it.
- *
- * During early init (when acpi_permanent_mmap has not been set yet) this
- * routine simply calls __acpi_unmap_table() to get the job done.  Since
- * __acpi_unmap_table() is an __init function, the __ref annotation is needed
- * here.
- */
-void __ref acpi_os_unmap_iomem(void __iomem *virt, acpi_size size)
+static void acpi_os_map_cleanup(struct acpi_ioremap *map)
+{
+	if (!map)
+		return;
+
+	synchronize_rcu_expedited();
+	__acpi_os_map_cleanup(map);
+}
+
+static void __ref __acpi_os_unmap_iomem(void __iomem *virt, acpi_size size,
+					bool defer)
 {
 	struct acpi_ioremap *map;
-	unsigned long refcount;
 
 	if (!acpi_permanent_mmap) {
 		__acpi_unmap_table(virt, size);
@@ -416,26 +419,97 @@ void __ref acpi_os_unmap_iomem(void __iomem *virt, acpi_size size)
 	}
 
 	mutex_lock(&acpi_ioremap_lock);
+
 	map = acpi_map_lookup_virt(virt, size);
 	if (!map) {
 		mutex_unlock(&acpi_ioremap_lock);
 		WARN(true, PREFIX "%s: bad address %p\n", __func__, virt);
 		return;
 	}
-	refcount = acpi_os_drop_map_ref(map);
+	if (acpi_os_drop_map_ref(map, defer))
+		map = NULL;
+
 	mutex_unlock(&acpi_ioremap_lock);
 
-	if (!refcount)
-		acpi_os_map_cleanup(map);
+	acpi_os_map_cleanup(map);
+}
+
+/**
+ * acpi_os_unmap_iomem - Drop a memory mapping reference.
+ * @virt: Start of the address range to drop a reference to.
+ * @size: Size of the address range to drop a reference to.
+ *
+ * Look up the given virtual address range in the list of existing ACPI memory
+ * mappings, drop a reference to it and unmap it if there are no more active
+ * references to it.
+ *
+ * During early init (when acpi_permanent_mmap has not been set yet) this
+ * routine simply calls __acpi_unmap_table() to get the job done.  Since
+ * __acpi_unmap_table() is an __init function, the __ref annotation is needed
+ * here.
+ */
+void __ref acpi_os_unmap_iomem(void __iomem *virt, acpi_size size)
+{
+	__acpi_os_unmap_iomem(virt, size, false);
 }
 EXPORT_SYMBOL_GPL(acpi_os_unmap_iomem);
 
+/**
+ * acpi_os_unmap_memory - Drop a memory mapping reference.
+ * @virt: Start of the address range to drop a reference to.
+ * @size: Size of the address range to drop a reference to.
+ *
+ * Look up the given virtual address range in the list of existing ACPI memory
+ * mappings, drop a reference to it and if there are no more active references
+ * to it, put it in the list of unused memory mappings.
+ *
+ * During early init (when acpi_permanent_mmap has not been set yet) this
+ * routine behaves like acpi_os_unmap_iomem().
+ */
 void __ref acpi_os_unmap_memory(void *virt, acpi_size size)
 {
-	return acpi_os_unmap_iomem((void __iomem *)virt, size);
+	__acpi_os_unmap_iomem((void __iomem *)virt, size, true);
 }
 EXPORT_SYMBOL_GPL(acpi_os_unmap_memory);
 
+/**
+ * acpi_os_release_unused_mappings - Release unused ACPI memory mappings.
+ */
+void acpi_os_release_unused_mappings(void)
+{
+	struct list_head list;
+
+	INIT_LIST_HEAD(&list);
+
+	/*
+	 * First avoid looking at mappings that may be added to the "unused"
+	 * list while the synchronize_rcu() below is running.
+	 */
+	mutex_lock(&acpi_ioremap_lock);
+
+	list_splice_init(&unused_mappings, &list);
+
+	mutex_unlock(&acpi_ioremap_lock);
+
+	if (list_empty(&list))
+		return;
+
+	/*
+	 * Wait for the possible users of the mappings in the "unused" list to
+	 * stop using them.
+	 */
+	synchronize_rcu();
+
+	/* Release the unused mappings in the list. */
+	do {
+		struct acpi_ioremap *map;
+
+		map = list_entry(list.next, struct acpi_ioremap, track.gc);
+		list_del(&map->track.gc);
+		__acpi_os_map_cleanup(map);
+	} while (!list_empty(&list));
+}
+
 int acpi_os_map_generic_address(struct acpi_generic_address *gas)
 {
 	u64 addr;
@@ -461,7 +535,6 @@ void acpi_os_unmap_generic_address(struct acpi_generic_address *gas)
 {
 	u64 addr;
 	struct acpi_ioremap *map;
-	unsigned long refcount;
 
 	if (gas->space_id != ACPI_ADR_SPACE_SYSTEM_MEMORY)
 		return;
@@ -472,16 +545,18 @@ void acpi_os_unmap_generic_address(struct acpi_generic_address *gas)
 		return;
 
 	mutex_lock(&acpi_ioremap_lock);
+
 	map = acpi_map_lookup(addr, gas->bit_width / 8);
 	if (!map) {
 		mutex_unlock(&acpi_ioremap_lock);
 		return;
 	}
-	refcount = acpi_os_drop_map_ref(map);
+	if (acpi_os_drop_map_ref(map, false))
+		map = NULL;
+
 	mutex_unlock(&acpi_ioremap_lock);
 
-	if (!refcount)
-		acpi_os_map_cleanup(map);
+	acpi_os_map_cleanup(map);
 }
 EXPORT_SYMBOL(acpi_os_unmap_generic_address);
 
@@ -1566,11 +1641,17 @@ static acpi_status acpi_deactivate_mem_region(acpi_handle handle, u32 level,
 acpi_status acpi_release_memory(acpi_handle handle, struct resource *res,
 				u32 level)
 {
+	acpi_status ret;
+
 	if (!(res->flags & IORESOURCE_MEM))
 		return AE_TYPE;
 
-	return acpi_walk_namespace(ACPI_TYPE_REGION, handle, level,
+	ret = acpi_walk_namespace(ACPI_TYPE_REGION, handle, level,
 				   acpi_deactivate_mem_region, NULL, res, NULL);
+
+	acpi_os_release_unused_mappings();
+
+	return ret;
 }
 EXPORT_SYMBOL_GPL(acpi_release_memory);
 
diff --git a/drivers/acpi/tables.c b/drivers/acpi/tables.c
index 0e905c3d1645..939484a860a1 100644
--- a/drivers/acpi/tables.c
+++ b/drivers/acpi/tables.c
@@ -816,6 +816,8 @@ int __init acpi_table_init(void)
 		return -EINVAL;
 	acpi_table_initrd_scan();
 
+	acpi_os_release_unused_mappings();
+
 	check_multiple_madt();
 	return 0;
 }
diff --git a/include/acpi/platform/aclinux.h b/include/acpi/platform/aclinux.h
index 987e2af7c335..784e294dc74c 100644
--- a/include/acpi/platform/aclinux.h
+++ b/include/acpi/platform/aclinux.h
@@ -133,6 +133,7 @@
 #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_delete_raw_lock
 #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_acquire_raw_lock
 #define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_release_raw_lock
+#define ACPI_USE_ALTERNATE_PROTOTYPE_acpi_os_release_unused_mappings
 
 /*
  * OSL interfaces used by debugger/disassembler
diff --git a/include/acpi/platform/aclinuxex.h b/include/acpi/platform/aclinuxex.h
index 04f88f2de781..ad6b905358c5 100644
--- a/include/acpi/platform/aclinuxex.h
+++ b/include/acpi/platform/aclinuxex.h
@@ -120,6 +120,8 @@ static inline void acpi_os_delete_raw_lock(acpi_raw_spinlock handle)
 	ACPI_FREE(handle);
 }
 
+void acpi_os_release_unused_mappings(void);
+
 static inline u8 acpi_os_readable(void *pointer, acpi_size length)
 {
 	return TRUE;
-- 
2.26.2

^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [RFT][PATCH v3 3/4] ACPICA: Preserve memory opregion mappings if supported by OS
  2020-06-26 17:28   ` [RFT][PATCH v3 0/4] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Rafael J. Wysocki
  2020-06-26 17:31     ` [RFT][PATCH v3 1/4] ACPICA: Take deferred unmapping of memory into account Rafael J. Wysocki
  2020-06-26 17:31     ` [RFT][PATCH v3 2/4] ACPI: OSL: Implement deferred unmapping of ACPI memory Rafael J. Wysocki
@ 2020-06-26 17:32     ` Rafael J. Wysocki
  2020-06-26 17:33     ` [RFT][PATCH v3 4/4] ACPI: OSL: Implement acpi_os_map_memory_fast_path() Rafael J. Wysocki
                       ` (2 subsequent siblings)
  5 siblings, 0 replies; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-26 17:32 UTC (permalink / raw)
  To: Dan Williams, Erik Kaneda
  Cc: rafael.j.wysocki, Len Brown, Borislav Petkov, Ira Weiny,
	James Morse, Myron Stowe, Andy Shevchenko, linux-kernel,
	linux-acpi, linux-nvdimm, Bob Moore

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

ACPICA's strategy with respect to the handling of memory mappings
associated with memory operation regions is to avoid mapping the
entire region at once which may be problematic at least in principle
(for example, it may lead to conflicts with overlapping mappings
having different attributes created by drivers).  It may also be
wasteful, because memory opregions on some systems take up vast
chunks of address space while the fields in those regions actually
accessed by AML are sparsely distributed.

For this reason, a one-page "window" is mapped for a given opregion
on the first memory access through it and if that "window" does not
cover an address range accessed through that opregion subsequently,
it is unmapped and a new "window" is mapped to replace it.  Next,
if the new "window" is not sufficient to access memory through the
opregion in question in the future, it will be replaced with yet
another "window" and so on.  That may lead to a suboptimal sequence
of memory mapping and unmapping operations, for example if two fields
in one opregion separated from each other by a sufficiently wide
chunk of unused address space are accessed in an alternating pattern.

The situation may still be suboptimal if the deferred unmapping
introduced previously is supported by the OS layer.  For instance,
the alternating memory access pattern mentioned above may produce
a relatively long list of mappings to release with substantial
duplication among the entries in it, which could be avoided if
acpi_ex_system_memory_space_handler() did not release the mapping
used by it previously as soon as the current access was not covered
by it.

In order to improve that, modify acpi_ex_system_memory_space_handler()
to take advantage of the memory mappings reference counting at the OS
level if a suitable interface is provided.

Namely, if ACPI_USE_FAST_PATH_MAPPING is set, the OS is expected to
implement acpi_os_map_memory_fast_path() that will return NULL if
there is no mapping covering the given address range known to it.
If such a mapping is there, however, its reference counter will be
incremented and a pointer representing the requested virtual address
will be returned right away without any additional consequences.

That allows acpi_ex_system_memory_space_handler() to acquire
additional references to all new memory mappings with the help
of acpi_os_map_memory_fast_path() so as to retain them until the
memory opregions associated with them go away.  The function will
still use a new "window" mapping if the current one does not
cover the address range at hand, but it will avoid unmapping the
current one right away by adding it to a list of "known" mappings
associated with the given memory opregion which will be deleted at
the opregion deactivation time.  The mappings in that list can be
used every time a "new window" is needed so as to avoid overhead
related to the mapping and unmapping of memory.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/acpi/acpica/acinterp.h |   3 +
 drivers/acpi/acpica/evrgnini.c |   9 +-
 drivers/acpi/acpica/exregion.c | 154 ++++++++++++++++++++++++++++++++-
 3 files changed, 156 insertions(+), 10 deletions(-)

diff --git a/drivers/acpi/acpica/acinterp.h b/drivers/acpi/acpica/acinterp.h
index a6d896cda2a5..95675a7a8a6b 100644
--- a/drivers/acpi/acpica/acinterp.h
+++ b/drivers/acpi/acpica/acinterp.h
@@ -479,6 +479,9 @@ void acpi_ex_pci_cls_to_string(char *dest, u8 class_code[3]);
 
 u8 acpi_is_valid_space_id(u8 space_id);
 
+acpi_size acpi_ex_mem_space_context_size(void);
+void acpi_ex_unmap_all_region_mappings(struct acpi_mem_space_context *mem_info);
+
 /*
  * exregion - default op_region handlers
  */
diff --git a/drivers/acpi/acpica/evrgnini.c b/drivers/acpi/acpica/evrgnini.c
index aefc0145e583..82f466a128d5 100644
--- a/drivers/acpi/acpica/evrgnini.c
+++ b/drivers/acpi/acpica/evrgnini.c
@@ -46,13 +46,10 @@ acpi_ev_system_memory_region_setup(acpi_handle handle,
 			local_region_context =
 			    (struct acpi_mem_space_context *)*region_context;
 
-			/* Delete a cached mapping if present */
+			/* Delete memory mappings if present */
 
 			if (local_region_context->mapped_length) {
-				acpi_os_unmap_memory(local_region_context->
-						     mapped_logical_address,
-						     local_region_context->
-						     mapped_length);
+				acpi_ex_unmap_all_region_mappings(local_region_context);
 			}
 			ACPI_FREE(local_region_context);
 			*region_context = NULL;
@@ -63,7 +60,7 @@ acpi_ev_system_memory_region_setup(acpi_handle handle,
 	/* Create a new context */
 
 	local_region_context =
-	    ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_mem_space_context));
+		ACPI_ALLOCATE_ZEROED(acpi_ex_mem_space_context_size());
 	if (!(local_region_context)) {
 		return_ACPI_STATUS(AE_NO_MEMORY);
 	}
diff --git a/drivers/acpi/acpica/exregion.c b/drivers/acpi/acpica/exregion.c
index d15a66de26c0..4274582619d2 100644
--- a/drivers/acpi/acpica/exregion.c
+++ b/drivers/acpi/acpica/exregion.c
@@ -14,6 +14,73 @@
 #define _COMPONENT          ACPI_EXECUTER
 ACPI_MODULE_NAME("exregion")
 
+struct acpi_mem_mapping {
+	acpi_physical_address physical_address;
+	u8 *logical_address;
+	acpi_size length;
+	struct acpi_mem_mapping *next_mm;
+};
+
+struct acpi_mm_context {
+	struct acpi_mem_space_context mem_info;
+	struct acpi_mem_mapping *first_mm;
+};
+
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_ex_mem_space_context_size
+ *
+ * PARAMETERS:  None
+ *
+ * RETURN:      Size of internal memory operation region representation.
+ *
+ ******************************************************************************/
+acpi_size acpi_ex_mem_space_context_size(void)
+{
+	ACPI_FUNCTION_TRACE(acpi_ex_mem_space_context_size);
+
+#ifdef ACPI_USE_FAST_PATH_MAPPING
+	return sizeof(struct acpi_mm_context);
+#else
+	return sizeof(struct acpi_mem_space_context);
+#endif
+}
+
+/*******************************************************************************
+ *
+ * FUNCTION:    acpi_ex_unmap_all_region_mappings
+ *
+ * PARAMETERS:  mem_info            - Region specific context
+ *
+ * RETURN:      None
+ *
+ * DESCRIPTION: Unmap all mappings associated with a memory operation region.
+ *
+ ******************************************************************************/
+void acpi_ex_unmap_all_region_mappings(struct acpi_mem_space_context *mem_info)
+{
+#ifdef ACPI_USE_FAST_PATH_MAPPING
+	struct acpi_mm_context *mm_context = (struct acpi_mm_context *)mem_info;
+	struct acpi_mem_mapping *mm;
+#endif
+
+	ACPI_FUNCTION_TRACE(acpi_ex_unmap_all_region_mappings);
+
+	acpi_os_unmap_memory(mem_info->mapped_logical_address,
+			     mem_info->mapped_length);
+
+#ifdef ACPI_USE_FAST_PATH_MAPPING
+	while (mm_context->first_mm) {
+		mm = mm_context->first_mm;
+		mm_context->first_mm = mm->next_mm;
+		acpi_os_unmap_memory(mm->logical_address, mm->length);
+		ACPI_FREE(mm);
+	}
+#endif
+
+	return_VOID;
+}
+
 /*******************************************************************************
  *
  * FUNCTION:    acpi_ex_system_memory_space_handler
@@ -44,6 +111,10 @@ acpi_ex_system_memory_space_handler(u32 function,
 	u32 length;
 	acpi_size map_length;
 	acpi_size page_boundary_map_length;
+#ifdef ACPI_USE_FAST_PATH_MAPPING
+	struct acpi_mm_context *mm_context = (struct acpi_mm_context *)mem_info;
+	struct acpi_mem_mapping *mm;
+#endif
 #ifdef ACPI_MISALIGNMENT_NOT_SUPPORTED
 	u32 remainder;
 #endif
@@ -102,7 +173,7 @@ acpi_ex_system_memory_space_handler(u32 function,
 					 mem_info->mapped_length))) {
 		/*
 		 * The request cannot be resolved by the current memory mapping;
-		 * Delete the existing mapping and create a new one.
+		 * Delete the current cached mapping and get a new one.
 		 */
 		if (mem_info->mapped_length) {
 
@@ -112,6 +183,36 @@ acpi_ex_system_memory_space_handler(u32 function,
 					     mem_info->mapped_length);
 		}
 
+#ifdef ACPI_USE_FAST_PATH_MAPPING
+		/*
+		 * Look for an existing saved mapping matching the address range
+		 * at hand.  If found, make the OS layer bump up the reference
+		 * counter of that mapping, cache it and carry out the access.
+		 */
+		for (mm = mm_context->first_mm; mm; mm = mm->next_mm) {
+			if (address < mm->physical_address)
+				continue;
+
+			if ((u64)address + length >
+					(u64)mm->physical_address + mm->length)
+				continue;
+
+			/*
+			 * When called on a known-existing memory mapping,
+			 * acpi_os_map_memory_fast_path() must return the same
+			 * logical address as before or NULL.
+			 */
+			if (!acpi_os_map_memory_fast_path(mm->physical_address,
+							  mm->length))
+				continue;
+
+			mem_info->mapped_logical_address = mm->logical_address;
+			mem_info->mapped_physical_address = mm->physical_address;
+			mem_info->mapped_length = mm->length;
+			goto access;
+		}
+#endif /* ACPI_USE_FAST_PATH_MAPPING */
+
 		/*
 		 * October 2009: Attempt to map from the requested address to the
 		 * end of the region. However, we will never map more than one
@@ -143,9 +244,8 @@ acpi_ex_system_memory_space_handler(u32 function,
 
 		/* Create a new mapping starting at the address given */
 
-		mem_info->mapped_logical_address =
-		    acpi_os_map_memory(address, map_length);
-		if (!mem_info->mapped_logical_address) {
+		logical_addr_ptr = acpi_os_map_memory(address, map_length);
+		if (!logical_addr_ptr) {
 			ACPI_ERROR((AE_INFO,
 				    "Could not map memory at 0x%8.8X%8.8X, size %u",
 				    ACPI_FORMAT_UINT64(address),
@@ -156,10 +256,56 @@ acpi_ex_system_memory_space_handler(u32 function,
 
 		/* Save the physical address and mapping size */
 
+		mem_info->mapped_logical_address = logical_addr_ptr;
 		mem_info->mapped_physical_address = address;
 		mem_info->mapped_length = map_length;
+
+#ifdef ACPI_USE_FAST_PATH_MAPPING
+		/*
+		 * Create a new mm list entry to save the new mapping for
+		 * removal at the operation region deactivation time.
+		 */
+		mm = ACPI_ALLOCATE_ZEROED(sizeof(*mm));
+		if (!mm) {
+			/*
+			 * No room to save the new mapping, but this is not
+			 * critical.  Just log the error and carry out the
+			 * access as requested.
+			 */
+			ACPI_ERROR((AE_INFO,
+				    "Not enough memory to save memory mapping at 0x%8.8X%8.8X, size %u",
+				    ACPI_FORMAT_UINT64(address),
+				    (u32)map_length));
+			goto access;
+		}
+		/*
+		 * Bump up the new mapping's reference counter in the OS layer
+		 * to prevent it from getting dropped prematurely.
+		 */
+		if (!acpi_os_map_memory_fast_path(address, map_length)) {
+			/*
+			 * Something has gone wrong, but this is not critical.
+			 * Log the error, free the mm list entry that won't be
+			 * used and carry out the access as requested.
+			 */
+			ACPI_ERROR((AE_INFO,
+				    "Unable to save memory mapping at 0x%8.8X%8.8X, size %u",
+				    ACPI_FORMAT_UINT64(address),
+				    (u32)map_length));
+			ACPI_FREE(mm);
+			goto access;
+		}
+		mm->physical_address = address;
+		mm->logical_address = logical_addr_ptr;
+		mm->length = map_length;
+		mm->next_mm = mm_context->first_mm;
+		mm_context->first_mm = mm;
 	}
 
+access:
+#else /* !ACPI_USE_FAST_PATH_MAPPING */
+	}
+#endif /* !ACPI_USE_FAST_PATH_MAPPING */
 	/*
 	 * Generate a logical pointer corresponding to the address we want to
 	 * access
-- 
2.26.2

* [RFT][PATCH v3 4/4] ACPI: OSL: Implement acpi_os_map_memory_fast_path()
  2020-06-26 17:28   ` [RFT][PATCH v3 0/4] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Rafael J. Wysocki
                       ` (2 preceding siblings ...)
  2020-06-26 17:32     ` [RFT][PATCH v3 3/4] ACPICA: Preserve memory opregion mappings if supported by OS Rafael J. Wysocki
@ 2020-06-26 17:33     ` Rafael J. Wysocki
  2020-06-26 18:41     ` [RFT][PATCH v3 0/4] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Dan Williams
  2020-06-29 16:31     ` [PATCH v4 0/2] " Rafael J. Wysocki
  5 siblings, 0 replies; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-26 17:33 UTC (permalink / raw)
  To: Dan Williams, Erik Kaneda
  Cc: rafael.j.wysocki, Len Brown, Borislav Petkov, Ira Weiny,
	James Morse, Myron Stowe, Andy Shevchenko, linux-kernel,
	linux-acpi, linux-nvdimm, Bob Moore

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

Add acpi_os_map_memory_fast_path() and set ACPI_USE_FAST_PATH_MAPPING
to allow acpi_ex_system_memory_space_handler() to avoid unnecessary
memory mapping and unmapping overhead by retaining all memory
mappings created by it until the memory opregions associated with
them go away.

Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/acpi/osl.c                | 65 +++++++++++++++++++++++--------
 include/acpi/platform/aclinux.h   |  4 ++
 include/acpi/platform/aclinuxex.h |  3 ++
 3 files changed, 56 insertions(+), 16 deletions(-)

diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
index 749ae3e32193..b8537ce89ea2 100644
--- a/drivers/acpi/osl.c
+++ b/drivers/acpi/osl.c
@@ -306,21 +306,8 @@ static void acpi_unmap(acpi_physical_address pg_off, void __iomem *vaddr)
 		iounmap(vaddr);
 }
 
-/**
- * acpi_os_map_iomem - Get a virtual address for a given physical address range.
- * @phys: Start of the physical address range to map.
- * @size: Size of the physical address range to map.
- *
- * Look up the given physical address range in the list of existing ACPI memory
- * mappings.  If found, get a reference to it and return a pointer to it (its
- * virtual address).  If not found, map it, add it to that list and return a
- * pointer to it.
- *
- * During early init (when acpi_permanent_mmap has not been set yet) this
- * routine simply calls __acpi_map_table() to get the job done.
- */
-void __iomem __ref
-*acpi_os_map_iomem(acpi_physical_address phys, acpi_size size)
+static void __iomem __ref *__acpi_os_map_iomem(acpi_physical_address phys,
+					       acpi_size size, bool fast_path)
 {
 	struct acpi_ioremap *map;
 	void __iomem *virt;
@@ -332,8 +319,12 @@ void __iomem __ref
 		return NULL;
 	}
 
-	if (!acpi_permanent_mmap)
+	if (!acpi_permanent_mmap) {
+		if (WARN_ON(fast_path))
+			return NULL;
+
 		return __acpi_map_table((unsigned long)phys, size);
+	}
 
 	mutex_lock(&acpi_ioremap_lock);
 	/* Check if there's a suitable mapping already. */
@@ -343,6 +334,11 @@ void __iomem __ref
 		goto out;
 	}
 
+	if (fast_path) {
+		mutex_unlock(&acpi_ioremap_lock);
+		return NULL;
+	}
+
 	map = kzalloc(sizeof(*map), GFP_KERNEL);
 	if (!map) {
 		mutex_unlock(&acpi_ioremap_lock);
@@ -370,6 +366,25 @@ void __iomem __ref
 	mutex_unlock(&acpi_ioremap_lock);
 	return map->virt + (phys - map->phys);
 }
+
+/**
+ * acpi_os_map_iomem - Get a virtual address for a given physical address range.
+ * @phys: Start of the physical address range to map.
+ * @size: Size of the physical address range to map.
+ *
+ * Look up the given physical address range in the list of existing ACPI memory
+ * mappings.  If found, get a reference to it and return a pointer representing
+ * its virtual address.  If not found, map it, add it to that list and return a
+ * pointer representing its virtual address.
+ *
+ * During early init (when acpi_permanent_mmap has not been set yet) call
+ * __acpi_map_table() to obtain the mapping.
+ */
+void __iomem __ref *acpi_os_map_iomem(acpi_physical_address phys,
+				      acpi_size size)
+{
+	return __acpi_os_map_iomem(phys, size, false);
+}
 EXPORT_SYMBOL_GPL(acpi_os_map_iomem);
 
 void *__ref acpi_os_map_memory(acpi_physical_address phys, acpi_size size)
@@ -378,6 +393,24 @@ void *__ref acpi_os_map_memory(acpi_physical_address phys, acpi_size size)
 }
 EXPORT_SYMBOL_GPL(acpi_os_map_memory);
 
+/**
+ * acpi_os_map_memory_fast_path - Fast-path physical-to-virtual address mapping.
+ * @phys: Start of the physical address range to map.
+ * @size: Size of the physical address range to map.
+ *
+ * Look up the given physical address range in the list of existing ACPI memory
+ * mappings.  If found, get a reference to it and return a pointer representing
+ * its virtual address.  If not found, return NULL.
+ *
+ * During early init (when acpi_permanent_mmap has not been set yet) log a
+ * warning and return NULL.
+ */
+void __ref *acpi_os_map_memory_fast_path(acpi_physical_address phys,
+					acpi_size size)
+{
+	return __acpi_os_map_iomem(phys, size, true);
+}
+
 /* Must be called with mutex_lock(&acpi_ioremap_lock) */
 static bool acpi_os_drop_map_ref(struct acpi_ioremap *map, bool defer)
 {
diff --git a/include/acpi/platform/aclinux.h b/include/acpi/platform/aclinux.h
index 784e294dc74c..1a5f8037e3d5 100644
--- a/include/acpi/platform/aclinux.h
+++ b/include/acpi/platform/aclinux.h
@@ -118,6 +118,10 @@
 
 #define USE_NATIVE_ALLOCATE_ZEROED
 
+/* Use fast-path memory mapping to optimize memory opregions handling */
+
+#define ACPI_USE_FAST_PATH_MAPPING
+
 /*
  * Overrides for in-kernel ACPICA
  */
diff --git a/include/acpi/platform/aclinuxex.h b/include/acpi/platform/aclinuxex.h
index ad6b905358c5..c64b836ba455 100644
--- a/include/acpi/platform/aclinuxex.h
+++ b/include/acpi/platform/aclinuxex.h
@@ -141,6 +141,9 @@ static inline void acpi_os_terminate_debugger(void)
  * OSL interfaces added by Linux
  */
 
+void *acpi_os_map_memory_fast_path(acpi_physical_address where,
+				   acpi_size length);
+
 #endif				/* __KERNEL__ */
 
 #endif				/* __ACLINUXEX_H__ */
-- 
2.26.2

* Re: [RFT][PATCH v3 0/4] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter
  2020-06-26 17:28   ` [RFT][PATCH v3 0/4] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Rafael J. Wysocki
                       ` (3 preceding siblings ...)
  2020-06-26 17:33     ` [RFT][PATCH v3 4/4] ACPI: OSL: Implement acpi_os_map_memory_fast_path() Rafael J. Wysocki
@ 2020-06-26 18:41     ` Dan Williams
  2020-06-28 17:09       ` Rafael J. Wysocki
  2020-06-29 16:31     ` [PATCH v4 0/2] " Rafael J. Wysocki
  5 siblings, 1 reply; 51+ messages in thread
From: Dan Williams @ 2020-06-26 18:41 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Erik Kaneda, Rafael J Wysocki, Len Brown, Borislav Petkov,
	Ira Weiny, James Morse, Myron Stowe, Andy Shevchenko,
	Linux Kernel Mailing List, Linux ACPI, linux-nvdimm, Bob Moore

On Fri, Jun 26, 2020 at 10:34 AM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
>
> Hi All,
>
> On Monday, June 22, 2020 3:50:42 PM CEST Rafael J. Wysocki wrote:
> > Hi All,
> >
> > This series is to address the problem with RCU synchronization occurring,
> > possibly relatively often, inside of acpi_ex_system_memory_space_handler(),
> > when the namespace and interpreter mutexes are held.
> >
> > Like I said before, I had decided to change the approach used in the previous
> > iteration of this series and to allow the unmap operations carried out by
> > acpi_ex_system_memory_space_handler() to be deferred in the first place,
> > which is done in patches [1-2/4].
>
> In the meantime I realized that calling synchronize_rcu_expedited() under the
> "tables" mutex within ACPICA is not quite a good idea too and that there is no
> reason for any users of acpi_os_unmap_memory() in the tree to use the "sync"
> variant of unmapping.
>
> So, unless I'm missing something, acpi_os_unmap_memory() can be changed to
> always defer the final unmapping and the only ACPICA change needed to support
> that is the addition of the acpi_os_release_unused_mappings() call to get rid
> of the unused mappings when leaving the interpreter (modulo the extra call in
> the debug code for consistency).
>
> So patches [1-2/4] have been changed accordingly.
>
> > However, it turns out that the "fast-path" mapping is still useful on top of
> > the above to reduce the number of ioremap-iounmap cycles for the same address
> > range and so it is introduced by patches [3-4/4].
>
> Patches [3-4/4] still do what they did, but they have been simplified a bit
> after rebasing on top of the new [1-2/4].
>
> The below information is still valid, but it applies to the v3, of course.
>
> > For details, please refer to the patch changelogs.
> >
> > The series is available from the git branch at
> >
> >  git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git \
> >  acpica-osl
> >
> > for easier testing.
>
> Also, the series has been tested locally.

Ok, I'm still trying to get the original reporter to confirm this
reduces the execution time for ASL routines with a lot of OpRegion
touches. Shall I rebuild that test kernel with these changes, or are
the results from the original RFT still interesting?


* RE: [RFT][PATCH v2 3/4] ACPICA: Preserve memory opregion mappings if supported by OS
  2020-06-22 14:01   ` [RFT][PATCH v2 3/4] ACPICA: Preserve memory opregion mappings if supported by OS Rafael J. Wysocki
@ 2020-06-26 22:53     ` Kaneda, Erik
  2020-06-29 13:02       ` Rafael J. Wysocki
  0 siblings, 1 reply; 51+ messages in thread
From: Kaneda, Erik @ 2020-06-26 22:53 UTC (permalink / raw)
  To: Rafael J. Wysocki, Williams, Dan J, Moore, Robert
  Cc: Wysocki, Rafael J, Len Brown, Borislav Petkov, Weiny, Ira,
	James Morse, Myron Stowe, Andy Shevchenko, linux-kernel,
	linux-acpi, linux-nvdimm



> -----Original Message-----
> From: Rafael J. Wysocki <rjw@rjwysocki.net>
> Sent: Monday, June 22, 2020 7:02 AM
> To: Williams, Dan J <dan.j.williams@intel.com>; Kaneda, Erik
> <erik.kaneda@intel.com>
> Cc: Wysocki, Rafael J <rafael.j.wysocki@intel.com>; Len Brown
> <lenb@kernel.org>; Borislav Petkov <bp@alien8.de>; Weiny, Ira
> <ira.weiny@intel.com>; James Morse <james.morse@arm.com>; Myron
> Stowe <myron.stowe@redhat.com>; Andy Shevchenko
> <andriy.shevchenko@linux.intel.com>; linux-kernel@vger.kernel.org; linux-
> acpi@vger.kernel.org; linux-nvdimm@lists.01.org; Moore, Robert
> <robert.moore@intel.com>
> Subject: [RFT][PATCH v2 3/4] ACPICA: Preserve memory opregion mappings
> if supported by OS
> 
> From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> 
> The ACPICA's strategy with respect to the handling of memory mappings
> associated with memory operation regions is to avoid mapping the
> entire region at once which may be problematic at least in principle
> (for example, it may lead to conflicts with overlapping mappings
> having different attributes created by drivers).  It may also be
> wasteful, because memory opregions on some systems take up vast
> chunks of address space while the fields in those regions actually
> accessed by AML are sparsely distributed.
> 
> For this reason, a one-page "window" is mapped for a given opregion
> on the first memory access through it and if that "window" does not
> cover an address range accessed through that opregion subsequently,
> it is unmapped and a new "window" is mapped to replace it.  Next,
> if the new "window" is not sufficient to access memory through the
> opregion in question in the future, it will be replaced with yet
> another "window" and so on.  That may lead to a suboptimal sequence
> of memory mapping and unmapping operations, for example if two fields
> in one opregion separated from each other by a sufficiently wide
> chunk of unused address space are accessed in an alternating pattern.
> 
> The situation may still be suboptimal if the deferred unmapping
> introduced previously is supported by the OS layer.  For instance,
> the alternating memory access pattern mentioned above may produce
> a relatively long list of mappings to release with substantial
> duplication among the entries in it, which could be avoided if
> acpi_ex_system_memory_space_handler() did not release the mapping
> used by it previously as soon as the current access was not covered
> by it.
> 
> In order to improve that, modify acpi_ex_system_memory_space_handler()
> to take advantage of the memory mappings reference counting at the OS
> level if a suitable interface is provided.
> 
Hi,

> Namely, if ACPI_USE_FAST_PATH_MAPPING is set, the OS is expected to
> implement acpi_os_map_memory_fast_path() that will return NULL if
> there is no mapping covering the given address range known to it.
> If such a mapping is there, however, its reference counter will be
> incremented and a pointer representing the requested virtual address
> will be returned right away without any additional consequences.

I do not fully understand why this is under an #ifdef. Is this to support operating systems that might not want to add support for this behavior?

Also, instead of using the terminology fast_path, I think it would be easier to use terminology that describes the mechanism.
It might be easier for other operating systems to understand something like acpi_os_map_preserved_memory or acpi_os_map_sysmem_opregion_memory.

Thanks,
Erik
> 
> That allows acpi_ex_system_memory_space_handler() to acquire
> additional references to all new memory mappings with the help
> of acpi_os_map_memory_fast_path() so as to retain them until the
> memory opregions associated with them go away.  The function will
> still use a new "window" mapping if the current one does not
> cover the address range at hand, but it will avoid unmapping the
> current one right away by adding it to a list of "known" mappings
> associated with the given memory opregion which will be deleted at
> the opregion deactivation time.  The mappings in that list can be
> used every time a "new window" is needed so as to avoid overhead
> related to the mapping and unmapping of memory.
> 
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  drivers/acpi/acpica/acinterp.h |   4 +
>  drivers/acpi/acpica/evrgnini.c |   7 +-
>  drivers/acpi/acpica/exregion.c | 159
> ++++++++++++++++++++++++++++++++-
>  3 files changed, 162 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/acpi/acpica/acinterp.h b/drivers/acpi/acpica/acinterp.h
> index 1f1026fb06e9..db9c279baa2e 100644
> --- a/drivers/acpi/acpica/acinterp.h
> +++ b/drivers/acpi/acpica/acinterp.h
> @@ -479,8 +479,12 @@ void acpi_ex_pci_cls_to_string(char *dest, u8
> class_code[3]);
> 
>  u8 acpi_is_valid_space_id(u8 space_id);
> 
> +struct acpi_mem_space_context
> *acpi_ex_alloc_mem_space_context(void);
> +
>  void acpi_ex_unmap_region_memory(struct acpi_mem_space_context
> *mem_info);
> 
> +void acpi_ex_unmap_all_region_mappings(struct
> acpi_mem_space_context *mem_info);
> +
>  /*
>   * exregion - default op_region handlers
>   */
> diff --git a/drivers/acpi/acpica/evrgnini.c b/drivers/acpi/acpica/evrgnini.c
> index 9f33114a74ca..f6c5feea10bc 100644
> --- a/drivers/acpi/acpica/evrgnini.c
> +++ b/drivers/acpi/acpica/evrgnini.c
> @@ -46,10 +46,10 @@ acpi_ev_system_memory_region_setup(acpi_handle
> handle,
>  			local_region_context =
>  			    (struct acpi_mem_space_context
> *)*region_context;
> 
> -			/* Delete a cached mapping if present */
> +			/* Delete memory mappings if present */
> 
>  			if (local_region_context->mapped_length) {
> -
> 	acpi_ex_unmap_region_memory(local_region_context);
> +
> 	acpi_ex_unmap_all_region_mappings(local_region_context);
>  			}
>  			ACPI_FREE(local_region_context);
>  			*region_context = NULL;
> @@ -59,8 +59,7 @@ acpi_ev_system_memory_region_setup(acpi_handle
> handle,
> 
>  	/* Create a new context */
> 
> -	local_region_context =
> -	    ACPI_ALLOCATE_ZEROED(sizeof(struct
> acpi_mem_space_context));
> +	local_region_context = acpi_ex_alloc_mem_space_context();
>  	if (!(local_region_context)) {
>  		return_ACPI_STATUS(AE_NO_MEMORY);
>  	}
> diff --git a/drivers/acpi/acpica/exregion.c b/drivers/acpi/acpica/exregion.c
> index af777b7fccb0..9d97b6a67074 100644
> --- a/drivers/acpi/acpica/exregion.c
> +++ b/drivers/acpi/acpica/exregion.c
> @@ -14,6 +14,40 @@
>  #define _COMPONENT          ACPI_EXECUTER
>  ACPI_MODULE_NAME("exregion")
> 
> +struct acpi_mem_mapping {
> +	acpi_physical_address physical_address;
> +	u8 *logical_address;
> +	acpi_size length;
> +	struct acpi_mem_mapping *next_mm;
> +};
> +
> +struct acpi_mm_context {
> +	struct acpi_mem_space_context mem_info;
> +	struct acpi_mem_mapping *first_mm;
> +};
> +
> +/*********************************************************
> ********************
> + *
> + * FUNCTION:    acpi_ex_alloc_mem_space_context
> + *
> + * PARAMETERS:  None
> + *
> + * RETURN:      Pointer to a new region context object.
> + *
> + * DESCRIPTION: Allocate memory for memory operation region
> representation.
> + *
> +
> **********************************************************
> ******************/
> +struct acpi_mem_space_context
> *acpi_ex_alloc_mem_space_context(void)
> +{
> +	ACPI_FUNCTION_TRACE(acpi_ex_alloc_mem_space_context);
> +
> +#ifdef ACPI_USE_FAST_PATH_MAPPING
> +	return ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_mm_context));
> +#else
> +	return ACPI_ALLOCATE_ZEROED(sizeof(struct acpi_mem_space_context));
> +#endif
> +}
> +
> 
> /*******************************************************************************
>   *
>   * FUNCTION:    acpi_ex_unmap_region_memory
> @@ -40,6 +74,44 @@ void acpi_ex_unmap_region_memory(struct acpi_mem_space_context *mem_info)
>  	return_VOID;
>  }
> 
> +/*******************************************************************************
> + *
> + * FUNCTION:    acpi_ex_unmap_all_region_mappings
> + *
> + * PARAMETERS:  mem_info            - Region specific context
> + *
> + * RETURN:      None
> + *
> + * DESCRIPTION: Unmap all mappings associated with a memory operation region.
> + *
> + ******************************************************************************/
> +void acpi_ex_unmap_all_region_mappings(struct acpi_mem_space_context *mem_info)
> +{
> +#ifdef ACPI_USE_FAST_PATH_MAPPING
> +	struct acpi_mm_context *mm_context = (struct acpi_mm_context *)mem_info;
> +	struct acpi_mem_mapping *mm;
> +#endif
> +
> +	ACPI_FUNCTION_TRACE(acpi_ex_unmap_all_region_mappings);
> +
> +	acpi_ex_unmap_region_memory(mem_info);
> +
> +#ifdef ACPI_USE_FAST_PATH_MAPPING
> +	while (mm_context->first_mm) {
> +		mm = mm_context->first_mm;
> +		mm_context->first_mm = mm->next_mm;
> +#ifdef ACPI_USE_DEFERRED_UNMAPPING
> +		acpi_os_unmap_deferred(mm->logical_address, mm->length);
> +#else
> +		acpi_os_unmap_memory(mm->logical_address, mm->length);
> +#endif
> +		ACPI_FREE(mm);
> +	}
> +#endif /* ACPI_USE_FAST_PATH_MAPPING */
> +
> +	return_VOID;
> +}
> +
> 
> /*******************************************************************************
>   *
>   * FUNCTION:    acpi_ex_system_memory_space_handler
> @@ -70,6 +142,10 @@ acpi_ex_system_memory_space_handler(u32 function,
>  	u32 length;
>  	acpi_size map_length;
>  	acpi_size page_boundary_map_length;
> +#ifdef ACPI_USE_FAST_PATH_MAPPING
> +	struct acpi_mm_context *mm_context = (struct acpi_mm_context *)mem_info;
> +	struct acpi_mem_mapping *mm;
> +#endif
>  #ifdef ACPI_MISALIGNMENT_NOT_SUPPORTED
>  	u32 remainder;
>  #endif
> @@ -128,7 +204,7 @@ acpi_ex_system_memory_space_handler(u32 function,
>  					 mem_info->mapped_length))) {
>  		/*
>  		 * The request cannot be resolved by the current memory mapping;
> -		 * Delete the existing mapping and create a new one.
> +		 * Delete the current cached mapping and get a new one.
>  		 */
>  		if (mem_info->mapped_length) {
> 
> @@ -137,6 +213,36 @@ acpi_ex_system_memory_space_handler(u32 function,
>  			acpi_ex_unmap_region_memory(mem_info);
>  		}
> 
> +#ifdef ACPI_USE_FAST_PATH_MAPPING
> +		/*
> +		 * Look for an existing saved mapping matching the address range
> +		 * at hand.  If found, make the OS layer bump up the reference
> +		 * counter of that mapping, cache it and carry out the access.
> +		 */
> +		for (mm = mm_context->first_mm; mm; mm = mm->next_mm) {
> +			if (address < mm->physical_address)
> +				continue;
> +
> +			if ((u64)address + length >
> +					(u64)mm->physical_address + mm->length)
> +				continue;
> +
> +			/*
> +			 * When called on a known-existing memory mapping,
> +			 * acpi_os_map_memory_fast_path() must return the same
> +			 * logical address as before or NULL.
> +			 */
> +			if (!acpi_os_map_memory_fast_path(mm->physical_address,
> +							  mm->length))
> +				continue;
> +
> +			mem_info->mapped_logical_address = mm->logical_address;
> +			mem_info->mapped_physical_address = mm->physical_address;
> +			mem_info->mapped_length = mm->length;
> +			goto access;
> +		}
> +#endif /* ACPI_USE_FAST_PATH_MAPPING */
> +
>  		/*
>  		 * October 2009: Attempt to map from the requested address to the
>  		 * end of the region. However, we will never map more than one
> @@ -168,9 +274,8 @@ acpi_ex_system_memory_space_handler(u32 function,
> 
>  		/* Create a new mapping starting at the address given */
> 
> -		mem_info->mapped_logical_address =
> -		    acpi_os_map_memory(address, map_length);
> -		if (!mem_info->mapped_logical_address) {
> +		logical_addr_ptr = acpi_os_map_memory(address, map_length);
> +		if (!logical_addr_ptr) {
>  			ACPI_ERROR((AE_INFO,
>  				    "Could not map memory at 0x%8.8X%8.8X, size %u",
>  				    ACPI_FORMAT_UINT64(address),
> @@ -181,10 +286,56 @@ acpi_ex_system_memory_space_handler(u32 function,
> 
>  		/* Save the physical address and mapping size */
> 
> +		mem_info->mapped_logical_address = logical_addr_ptr;
>  		mem_info->mapped_physical_address = address;
>  		mem_info->mapped_length = map_length;
> +
> +#ifdef ACPI_USE_FAST_PATH_MAPPING
> +		/*
> +		 * Create a new mm list entry to save the new mapping for
> +		 * removal at the operation region deactivation time.
> +		 */
> +		mm = ACPI_ALLOCATE_ZEROED(sizeof(*mm));
> +		if (!mm) {
> +			/*
> +			 * No room to save the new mapping, but this is not
> +			 * critical.  Just log the error and carry out the
> +			 * access as requested.
> +			 */
> +			ACPI_ERROR((AE_INFO,
> +				    "Not enough memory to save memory mapping at 0x%8.8X%8.8X, size %u",
> +				    ACPI_FORMAT_UINT64(address),
> +				    (u32)map_length));
> +			goto access;
> +		}
> +		/*
> +		 * Bump up the new mapping's reference counter in the OS layer
> +		 * to prevent it from getting dropped prematurely.
> +		 */
> +		if (!acpi_os_map_memory_fast_path(address, map_length)) {
> +			/*
> +			 * Something has gone wrong, but this is not critical.
> +			 * Log the error, free the mm list entry that won't be
> +			 * used and carry out the access as requested.
> +			 */
> +			ACPI_ERROR((AE_INFO,
> +				    "Unable to save memory mapping at 0x%8.8X%8.8X, size %u",
> +				    ACPI_FORMAT_UINT64(address),
> +				    (u32)map_length));
> +			ACPI_FREE(mm);
> +			goto access;
> +		}
> +		mm->physical_address = address;
> +		mm->logical_address = logical_addr_ptr;
> +		mm->length = map_length;
> +		mm->next_mm = mm_context->first_mm;
> +		mm_context->first_mm = mm;
>  	}
> 
> +access:
> +#else /* !ACPI_USE_FAST_PATH_MAPPING */
> +	}
> +#endif /* !ACPI_USE_FAST_PATH_MAPPING */
>  	/*
>  	 * Generate a logical pointer corresponding to the address we want to
>  	 * access
> --
> 2.26.2
> 
> 
> 


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [RFT][PATCH v3 0/4] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter
  2020-06-26 18:41     ` [RFT][PATCH v3 0/4] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Dan Williams
@ 2020-06-28 17:09       ` Rafael J. Wysocki
  2020-06-29 20:46         ` Dan Williams
  0 siblings, 1 reply; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-28 17:09 UTC (permalink / raw)
  To: Dan Williams
  Cc: Rafael J. Wysocki, Erik Kaneda, Rafael J Wysocki, Len Brown,
	Borislav Petkov, Ira Weiny, James Morse, Myron Stowe,
	Andy Shevchenko, Linux Kernel Mailing List, Linux ACPI,
	linux-nvdimm, Bob Moore

On Fri, Jun 26, 2020 at 8:41 PM Dan Williams <dan.j.williams@intel.com> wrote:
>
> On Fri, Jun 26, 2020 at 10:34 AM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
> >
> > Hi All,
> >
> > On Monday, June 22, 2020 3:50:42 PM CEST Rafael J. Wysocki wrote:
> > > Hi All,
> > >
> > > This series is to address the problem with RCU synchronization occurring,
> > > possibly relatively often, inside of acpi_ex_system_memory_space_handler(),
> > > when the namespace and interpreter mutexes are held.
> > >
> > > Like I said before, I had decided to change the approach used in the previous
> > > iteration of this series and to allow the unmap operations carried out by
> > > acpi_ex_system_memory_space_handler() to be deferred in the first place,
> > > which is done in patches [1-2/4].
> >
> > In the meantime I realized that calling synchronize_rcu_expedited() under the
> > "tables" mutex within ACPICA is not a good idea either and that there is no
> > reason for any users of acpi_os_unmap_memory() in the tree to use the "sync"
> > variant of unmapping.
> >
> > So, unless I'm missing something, acpi_os_unmap_memory() can be changed to
> > always defer the final unmapping and the only ACPICA change needed to support
> > that is the addition of the acpi_os_release_unused_mappings() call to get rid
> > of the unused mappings when leaving the interpreter (modulo the extra call in
> > the debug code for consistency).
> >
> > So patches [1-2/4] have been changed accordingly.
> >
> > > However, it turns out that the "fast-path" mapping is still useful on top of
> > > the above to reduce the number of ioremap-iounmap cycles for the same address
> > > range and so it is introduced by patches [3-4/4].
> >
> > Patches [3-4/4] still do what they did, but they have been simplified a bit
> > after rebasing on top of the new [1-2/4].
> >
> > The below information is still valid, but it applies to the v3, of course.
> >
> > > For details, please refer to the patch changelogs.
> > >
> > > The series is available from the git branch at
> > >
> > >  git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git \
> > >  acpica-osl
> > >
> > > for easier testing.
> >
> > Also the series have been tested locally.
>
> Ok, I'm still trying to get the original reporter to confirm this
> reduces the execution time for ASL routines with a lot of OpRegion
> touches. Shall I rebuild that test kernel with these changes, or are
> the results from the original RFT still interesting?

I'm mostly interested in the results with the v3 applied.

Also it would be good to check the impact of the first two patches
alone relative to all four.

Thanks!

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [RFT][PATCH v2 3/4] ACPICA: Preserve memory opregion mappings if supported by OS
  2020-06-26 22:53     ` Kaneda, Erik
@ 2020-06-29 13:02       ` Rafael J. Wysocki
  0 siblings, 0 replies; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-29 13:02 UTC (permalink / raw)
  To: Kaneda, Erik
  Cc: Rafael J. Wysocki, Williams, Dan J, Moore, Robert, Wysocki,
	Rafael J, Len Brown, Borislav Petkov, Weiny, Ira, James Morse,
	Myron Stowe, Andy Shevchenko, linux-kernel, linux-acpi,
	linux-nvdimm

On Sat, Jun 27, 2020 at 12:53 AM Kaneda, Erik <erik.kaneda@intel.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Rafael J. Wysocki <rjw@rjwysocki.net>
> > Sent: Monday, June 22, 2020 7:02 AM
> > To: Williams, Dan J <dan.j.williams@intel.com>; Kaneda, Erik
> > <erik.kaneda@intel.com>
> > Cc: Wysocki, Rafael J <rafael.j.wysocki@intel.com>; Len Brown
> > <lenb@kernel.org>; Borislav Petkov <bp@alien8.de>; Weiny, Ira
> > <ira.weiny@intel.com>; James Morse <james.morse@arm.com>; Myron
> > Stowe <myron.stowe@redhat.com>; Andy Shevchenko
> > <andriy.shevchenko@linux.intel.com>; linux-kernel@vger.kernel.org; linux-
> > acpi@vger.kernel.org; linux-nvdimm@lists.01.org; Moore, Robert
> > <robert.moore@intel.com>
> > Subject: [RFT][PATCH v2 3/4] ACPICA: Preserve memory opregion mappings
> > if supported by OS
> >
> > From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> >
> > The ACPICA's strategy with respect to the handling of memory mappings
> > associated with memory operation regions is to avoid mapping the
> > entire region at once which may be problematic at least in principle
> > (for example, it may lead to conflicts with overlapping mappings
> > having different attributes created by drivers).  It may also be
> > wasteful, because memory opregions on some systems take up vast
> > chunks of address space while the fields in those regions actually
> > accessed by AML are sparsely distributed.
> >
> > For this reason, a one-page "window" is mapped for a given opregion
> > on the first memory access through it and if that "window" does not
> > cover an address range accessed through that opregion subsequently,
> > it is unmapped and a new "window" is mapped to replace it.  Next,
> > if the new "window" is not sufficient to access memory through the
> > opregion in question in the future, it will be replaced with yet
> > another "window" and so on.  That may lead to a suboptimal sequence
> > of memory mapping and unmapping operations, for example if two fields
> > in one opregion separated from each other by a sufficiently wide
> > chunk of unused address space are accessed in an alternating pattern.
> >
> > The situation may still be suboptimal if the deferred unmapping
> > introduced previously is supported by the OS layer.  For instance,
> > the alternating memory access pattern mentioned above may produce
> > a relatively long list of mappings to release with substantial
> > duplication among the entries in it, which could be avoided if
> > acpi_ex_system_memory_space_handler() did not release the mapping
> > used by it previously as soon as the current access was not covered
> > by it.
> >
> > In order to improve that, modify acpi_ex_system_memory_space_handler()
> > to take advantage of the memory mappings reference counting at the OS
> > level if a suitable interface is provided.
> >
> Hi,
>
> > Namely, if ACPI_USE_FAST_PATH_MAPPING is set, the OS is expected to
> > implement acpi_os_map_memory_fast_path() that will return NULL if
> > there is no mapping covering the given address range known to it.
> > If such a mapping is there, however, its reference counter will be
> > incremented and a pointer representing the requested virtual address
> > will be returned right away without any additional consequences.
>
> I do not fully understand why this is under a #ifdef. Is this to support operating systems that might not want to add support for this behavior?

Yes, and to protect the ones that have not added support for it just yet.

Without the "fast-path" mapping support, ACPICA has no way to obtain
additional references to known-existing mappings and the new code
won't work as expected without it, so it is better to avoid building
that code at all in those cases IMO.

> Also, instead of using the terminology fast_path, I think it would be easier to use terminology that describes the mechanism.
> It might be easier for other operating systems to understand something like acpi_os_map_preserved_memory or acpi_os_map_sysmem_opregion_memory.

Well, the naming is not particularly important to me to be honest, but
this is mostly about being able to get a new reference to a
known-existing memory mapping.

So something like acpi_os_ref_memory_map() perhaps?

But I'm thinking that this can be implemented without the "fast-path"
mapping support too, let me try to do that.

Cheers!

^ permalink raw reply	[flat|nested] 51+ messages in thread

* [PATCH v4 0/2] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter
  2020-06-26 17:28   ` [RFT][PATCH v3 0/4] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Rafael J. Wysocki
                       ` (4 preceding siblings ...)
  2020-06-26 18:41     ` [RFT][PATCH v3 0/4] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Dan Williams
@ 2020-06-29 16:31     ` Rafael J. Wysocki
  2020-06-29 16:33       ` [PATCH v4 1/2] ACPI: OSL: Implement deferred unmapping of ACPI memory Rafael J. Wysocki
  2020-06-29 16:33       ` [PATCH v4 2/2] ACPICA: Preserve memory opregion mappings Rafael J. Wysocki
  5 siblings, 2 replies; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-29 16:31 UTC (permalink / raw)
  To: Dan Williams, Erik Kaneda
  Cc: rafael.j.wysocki, Len Brown, Borislav Petkov, Ira Weiny,
	James Morse, Myron Stowe, Andy Shevchenko, linux-kernel,
	linux-acpi, linux-nvdimm, Bob Moore

Hi All,

On Friday, June 26, 2020 7:28:27 PM CEST Rafael J. Wysocki wrote:
> Hi All,
> 
> On Monday, June 22, 2020 3:50:42 PM CEST Rafael J. Wysocki wrote:
> > Hi All,
> > 
> > This series is to address the problem with RCU synchronization occurring,
> > possibly relatively often, inside of acpi_ex_system_memory_space_handler(),
> > when the namespace and interpreter mutexes are held.
> > 
> > Like I said before, I had decided to change the approach used in the previous
> > iteration of this series and to allow the unmap operations carried out by 
> > acpi_ex_system_memory_space_handler() to be deferred in the first place,
> > which is done in patches [1-2/4].
> 
> In the meantime I realized that calling synchronize_rcu_expedited() under the
> "tables" mutex within ACPICA is not a good idea either and that there is no
> reason for any users of acpi_os_unmap_memory() in the tree to use the "sync"
> variant of unmapping.
> 
> So, unless I'm missing something, acpi_os_unmap_memory() can be changed to
> always defer the final unmapping and the only ACPICA change needed to support
> that is the addition of the acpi_os_release_unused_mappings() call to get rid
> of the unused mappings when leaving the interpreter (modulo the extra call in
> the debug code for consistency).
> 
> So patches [1-2/4] have been changed accordingly.

And this still can be improved by using queue_rcu_work() to queue up the unused
mappings for removal in which case ACPICA need not be modified at all for the
deferred unmapping to work.

Accordingly, patches [1-2/4] from the v3 (and earlier) are now replaced by one
patch, the [1/2].

> > However, it turns out that the "fast-path" mapping is still useful on top of
> > the above to reduce the number of ioremap-iounmap cycles for the same address
> > range and so it is introduced by patches [3-4/4].
> 
> Patches [3-4/4] still do what they did, but they have been simplified a bit
> after rebasing on top of the new [1-2/4].

Moreover, the ACPICA part of the old patches [3-4/4] can be reworked to always
preserve memory mappings created by the memory opregion handler without the
need to take additional references to memory mappings at the OS level, so
patch [4/4] from the v3 (and earlier) is not needed now.

Again, for details, please refer to the patch changelogs, but I'm kind of
inclined to make these changes regardless, because they both are clear
improvements to me.

As before:

> > The series is available from the git branch at
> > 
> >  git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git \
> >  acpica-osl
> > 
> > for easier testing.
> 
> Also the series have been tested locally.

Cheers,
Rafael




^ permalink raw reply	[flat|nested] 51+ messages in thread

* [PATCH v4 1/2] ACPI: OSL: Implement deferred unmapping of ACPI memory
  2020-06-29 16:31     ` [PATCH v4 0/2] " Rafael J. Wysocki
@ 2020-06-29 16:33       ` Rafael J. Wysocki
  2020-06-29 16:33       ` [PATCH v4 2/2] ACPICA: Preserve memory opregion mappings Rafael J. Wysocki
  1 sibling, 0 replies; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-29 16:33 UTC (permalink / raw)
  To: Dan Williams, Erik Kaneda
  Cc: rafael.j.wysocki, Len Brown, Borislav Petkov, Ira Weiny,
	James Morse, Myron Stowe, Andy Shevchenko, linux-kernel,
	linux-acpi, linux-nvdimm, Bob Moore

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

The ACPI OS layer in Linux uses RCU to protect the walkers of the
list of ACPI memory mappings from seeing an inconsistent state
while it is being updated.  Among other situations, that list can
be walked in (NMI and non-NMI) interrupt context, so using a
sleeping lock to protect it is not an option.

However, performance issues related to the RCU usage in there
appear, as described by Dan Williams:

"Recently a performance problem was reported for a process invoking
a non-trivial ASL program. The method call in this case ends up
repetitively triggering a call path like:

    acpi_ex_store
    acpi_ex_store_object_to_node
    acpi_ex_write_data_to_field
    acpi_ex_insert_into_field
    acpi_ex_write_with_update_rule
    acpi_ex_field_datum_io
    acpi_ex_access_region
    acpi_ev_address_space_dispatch
    acpi_ex_system_memory_space_handler
    acpi_os_map_cleanup.part.14
    _synchronize_rcu_expedited.constprop.89
    schedule

The end result of frequent synchronize_rcu_expedited() invocation is
tiny sub-millisecond spurts of execution where the scheduler freely
migrates this apparently sleepy task. The overhead of frequent
scheduler invocation multiplies the execution time by a factor
of 2-3X."

The source of this is that acpi_ex_system_memory_space_handler()
unmaps the memory mapping currently cached by it at the access time
if that mapping doesn't cover the memory area being accessed.
Consequently, if there is a memory opregion with two fields
separated from each other by an unused chunk of address space that
is large enough for not being covered by a single mapping, and they
happen to be used in an alternating pattern, the unmapping will
occur on every acpi_ex_system_memory_space_handler() invocation for
that memory opregion and that will lead to significant overhead.

Moreover, acpi_ex_system_memory_space_handler() carries out the
memory unmapping with the namespace and interpreter mutexes held
which may lead to additional latency, because all of the tasks
wanting to acquire one of these mutexes need to wait for the
memory unmapping operation to complete.

To address that, rework acpi_os_unmap_memory() so that it does not
release the memory mapping covering the given address range right
away and instead make it queue up the mapping at hand for removal
via queue_rcu_work().

Reported-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/acpi/osl.c | 112 +++++++++++++++++++++++++++++++--------------
 1 file changed, 77 insertions(+), 35 deletions(-)

diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
index 762c5d50b8fe..5ced89a756a8 100644
--- a/drivers/acpi/osl.c
+++ b/drivers/acpi/osl.c
@@ -77,7 +77,10 @@ struct acpi_ioremap {
 	void __iomem *virt;
 	acpi_physical_address phys;
 	acpi_size size;
-	unsigned long refcount;
+	union {
+		unsigned long refcount;
+		struct rcu_work rwork;
+	} track;
 };
 
 static LIST_HEAD(acpi_ioremaps);
@@ -250,7 +253,7 @@ void __iomem *acpi_os_get_iomem(acpi_physical_address phys, unsigned int size)
 	map = acpi_map_lookup(phys, size);
 	if (map) {
 		virt = map->virt + (phys - map->phys);
-		map->refcount++;
+		map->track.refcount++;
 	}
 	mutex_unlock(&acpi_ioremap_lock);
 	return virt;
@@ -335,7 +338,7 @@ void __iomem __ref
 	/* Check if there's a suitable mapping already. */
 	map = acpi_map_lookup(phys, size);
 	if (map) {
-		map->refcount++;
+		map->track.refcount++;
 		goto out;
 	}
 
@@ -358,7 +361,7 @@ void __iomem __ref
 	map->virt = virt;
 	map->phys = pg_off;
 	map->size = pg_sz;
-	map->refcount = 1;
+	map->track.refcount = 1;
 
 	list_add_tail_rcu(&map->list, &acpi_ioremaps);
 
@@ -374,41 +377,46 @@ void *__ref acpi_os_map_memory(acpi_physical_address phys, acpi_size size)
 }
 EXPORT_SYMBOL_GPL(acpi_os_map_memory);
 
+static void acpi_os_map_remove(struct acpi_ioremap *map)
+{
+	acpi_unmap(map->phys, map->virt);
+	kfree(map);
+}
+
+static void acpi_os_map_cleanup_deferred(struct work_struct *work)
+{
+	acpi_os_map_remove(container_of(to_rcu_work(work), struct acpi_ioremap,
+					track.rwork));
+}
+
 /* Must be called with mutex_lock(&acpi_ioremap_lock) */
-static unsigned long acpi_os_drop_map_ref(struct acpi_ioremap *map)
+static bool acpi_os_drop_map_ref(struct acpi_ioremap *map, bool defer)
 {
-	unsigned long refcount = --map->refcount;
+	if (--map->track.refcount)
+		return true;
 
-	if (!refcount)
-		list_del_rcu(&map->list);
-	return refcount;
+	list_del_rcu(&map->list);
+
+	if (defer) {
+		INIT_RCU_WORK(&map->track.rwork, acpi_os_map_cleanup_deferred);
+		queue_rcu_work(system_wq, &map->track.rwork);
+	}
+	return defer;
 }
 
 static void acpi_os_map_cleanup(struct acpi_ioremap *map)
 {
+	if (!map)
+		return;
+
 	synchronize_rcu_expedited();
-	acpi_unmap(map->phys, map->virt);
-	kfree(map);
+	acpi_os_map_remove(map);
 }
 
-/**
- * acpi_os_unmap_iomem - Drop a memory mapping reference.
- * @virt: Start of the address range to drop a reference to.
- * @size: Size of the address range to drop a reference to.
- *
- * Look up the given virtual address range in the list of existing ACPI memory
- * mappings, drop a reference to it and unmap it if there are no more active
- * references to it.
- *
- * During early init (when acpi_permanent_mmap has not been set yet) this
- * routine simply calls __acpi_unmap_table() to get the job done.  Since
- * __acpi_unmap_table() is an __init function, the __ref annotation is needed
- * here.
- */
-void __ref acpi_os_unmap_iomem(void __iomem *virt, acpi_size size)
+static void __ref __acpi_os_unmap_iomem(void __iomem *virt, acpi_size size,
+					bool defer)
 {
 	struct acpi_ioremap *map;
-	unsigned long refcount;
 
 	if (!acpi_permanent_mmap) {
 		__acpi_unmap_table(virt, size);
@@ -416,23 +424,56 @@ void __ref acpi_os_unmap_iomem(void __iomem *virt, acpi_size size)
 	}
 
 	mutex_lock(&acpi_ioremap_lock);
+
 	map = acpi_map_lookup_virt(virt, size);
 	if (!map) {
 		mutex_unlock(&acpi_ioremap_lock);
 		WARN(true, PREFIX "%s: bad address %p\n", __func__, virt);
 		return;
 	}
-	refcount = acpi_os_drop_map_ref(map);
+	if (acpi_os_drop_map_ref(map, defer))
+		map = NULL;
+
 	mutex_unlock(&acpi_ioremap_lock);
 
-	if (!refcount)
-		acpi_os_map_cleanup(map);
+	acpi_os_map_cleanup(map);
+}
+
+/**
+ * acpi_os_unmap_iomem - Drop a memory mapping reference.
+ * @virt: Start of the address range to drop a reference to.
+ * @size: Size of the address range to drop a reference to.
+ *
+ * Look up the given virtual address range in the list of existing ACPI memory
+ * mappings, drop a reference to it and unmap it if there are no more active
+ * references to it.
+ *
+ * During early init (when acpi_permanent_mmap has not been set yet) this
+ * routine simply calls __acpi_unmap_table() to get the job done.  Since
+ * __acpi_unmap_table() is an __init function, the __ref annotation is needed
+ * here.
+ */
+void __ref acpi_os_unmap_iomem(void __iomem *virt, acpi_size size)
+{
+	__acpi_os_unmap_iomem(virt, size, false);
 }
 EXPORT_SYMBOL_GPL(acpi_os_unmap_iomem);
 
+/**
+ * acpi_os_unmap_memory - Drop a memory mapping reference.
+ * @virt: Start of the address range to drop a reference to.
+ * @size: Size of the address range to drop a reference to.
+ *
+ * Look up the given virtual address range in the list of existing ACPI memory
+ * mappings, drop a reference to it and if there are no more active references
+ * to it, put it in the list of unused memory mappings.
+ *
+ * During early init (when acpi_permanent_mmap has not been set yet) this
+ * routine behaves like acpi_os_unmap_iomem().
+ */
 void __ref acpi_os_unmap_memory(void *virt, acpi_size size)
 {
-	return acpi_os_unmap_iomem((void __iomem *)virt, size);
+	__acpi_os_unmap_iomem((void __iomem *)virt, size, true);
 }
 EXPORT_SYMBOL_GPL(acpi_os_unmap_memory);
 
@@ -461,7 +502,6 @@ void acpi_os_unmap_generic_address(struct acpi_generic_address *gas)
 {
 	u64 addr;
 	struct acpi_ioremap *map;
-	unsigned long refcount;
 
 	if (gas->space_id != ACPI_ADR_SPACE_SYSTEM_MEMORY)
 		return;
@@ -472,16 +512,18 @@ void acpi_os_unmap_generic_address(struct acpi_generic_address *gas)
 		return;
 
 	mutex_lock(&acpi_ioremap_lock);
+
 	map = acpi_map_lookup(addr, gas->bit_width / 8);
 	if (!map) {
 		mutex_unlock(&acpi_ioremap_lock);
 		return;
 	}
-	refcount = acpi_os_drop_map_ref(map);
+	if (acpi_os_drop_map_ref(map, false))
+		map = NULL;
+
 	mutex_unlock(&acpi_ioremap_lock);
 
-	if (!refcount)
-		acpi_os_map_cleanup(map);
+	acpi_os_map_cleanup(map);
 }
 EXPORT_SYMBOL(acpi_os_unmap_generic_address);
 
-- 
2.26.2





^ permalink raw reply related	[flat|nested] 51+ messages in thread

* [PATCH v4 2/2] ACPICA: Preserve memory opregion mappings
  2020-06-29 16:31     ` [PATCH v4 0/2] " Rafael J. Wysocki
  2020-06-29 16:33       ` [PATCH v4 1/2] ACPI: OSL: Implement deferred unmapping of ACPI memory Rafael J. Wysocki
@ 2020-06-29 16:33       ` Rafael J. Wysocki
  2020-06-29 20:57         ` Al Stone
  2020-07-16 19:22         ` Verma, Vishal L
  1 sibling, 2 replies; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-29 16:33 UTC (permalink / raw)
  To: Dan Williams, Erik Kaneda
  Cc: rafael.j.wysocki, Len Brown, Borislav Petkov, Ira Weiny,
	James Morse, Myron Stowe, Andy Shevchenko, linux-kernel,
	linux-acpi, linux-nvdimm, Bob Moore

From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>

The ACPICA's strategy with respect to the handling of memory mappings
associated with memory operation regions is to avoid mapping the
entire region at once which may be problematic at least in principle
(for example, it may lead to conflicts with overlapping mappings
having different attributes created by drivers).  It may also be
wasteful, because memory opregions on some systems take up vast
chunks of address space while the fields in those regions actually
accessed by AML are sparsely distributed.

For this reason, a one-page "window" is mapped for a given opregion
on the first memory access through it and if that "window" does not
cover an address range accessed through that opregion subsequently,
it is unmapped and a new "window" is mapped to replace it.  Next,
if the new "window" is not sufficient to access memory through the
opregion in question in the future, it will be replaced with yet
another "window" and so on.  That may lead to a suboptimal sequence
of memory mapping and unmapping operations, for example if two fields
in one opregion separated from each other by a sufficiently wide
chunk of unused address space are accessed in an alternating pattern.

The situation may still be suboptimal if the deferred unmapping
introduced previously is supported by the OS layer.  For instance,
the alternating memory access pattern mentioned above may produce
a relatively long list of mappings to release with substantial
duplication among the entries in it, which could be avoided if
acpi_ex_system_memory_space_handler() did not release the mapping
used by it previously as soon as the current access was not covered
by it.

In order to improve that, modify acpi_ex_system_memory_space_handler()
to preserve all of the memory mappings created by it until the memory
regions associated with them go away.

Accordingly, update acpi_ev_system_memory_region_setup() to unmap all
memory associated with memory opregions that go away.

Reported-by: Dan Williams <dan.j.williams@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
---
 drivers/acpi/acpica/evrgnini.c | 14 ++++----
 drivers/acpi/acpica/exregion.c | 65 ++++++++++++++++++++++++----------
 include/acpi/actypes.h         | 12 +++++--
 3 files changed, 64 insertions(+), 27 deletions(-)

diff --git a/drivers/acpi/acpica/evrgnini.c b/drivers/acpi/acpica/evrgnini.c
index aefc0145e583..89be3ccdad53 100644
--- a/drivers/acpi/acpica/evrgnini.c
+++ b/drivers/acpi/acpica/evrgnini.c
@@ -38,6 +38,7 @@ acpi_ev_system_memory_region_setup(acpi_handle handle,
 	union acpi_operand_object *region_desc =
 	    (union acpi_operand_object *)handle;
 	struct acpi_mem_space_context *local_region_context;
+	struct acpi_mem_mapping *mm;
 
 	ACPI_FUNCTION_TRACE(ev_system_memory_region_setup);
 
@@ -46,13 +47,14 @@ acpi_ev_system_memory_region_setup(acpi_handle handle,
 			local_region_context =
 			    (struct acpi_mem_space_context *)*region_context;
 
-			/* Delete a cached mapping if present */
+			/* Delete memory mappings if present */
 
-			if (local_region_context->mapped_length) {
-				acpi_os_unmap_memory(local_region_context->
-						     mapped_logical_address,
-						     local_region_context->
-						     mapped_length);
+			while (local_region_context->first_mm) {
+				mm = local_region_context->first_mm;
+				local_region_context->first_mm = mm->next_mm;
+				acpi_os_unmap_memory(mm->logical_address,
+						     mm->length);
+				ACPI_FREE(mm);
 			}
 			ACPI_FREE(local_region_context);
 			*region_context = NULL;
diff --git a/drivers/acpi/acpica/exregion.c b/drivers/acpi/acpica/exregion.c
index d15a66de26c0..fd68f2134804 100644
--- a/drivers/acpi/acpica/exregion.c
+++ b/drivers/acpi/acpica/exregion.c
@@ -41,6 +41,7 @@ acpi_ex_system_memory_space_handler(u32 function,
 	acpi_status status = AE_OK;
 	void *logical_addr_ptr = NULL;
 	struct acpi_mem_space_context *mem_info = region_context;
+	struct acpi_mem_mapping *mm = mem_info->cur_mm;
 	u32 length;
 	acpi_size map_length;
 	acpi_size page_boundary_map_length;
@@ -96,20 +97,38 @@ acpi_ex_system_memory_space_handler(u32 function,
 	 * Is 1) Address below the current mapping? OR
 	 *    2) Address beyond the current mapping?
 	 */
-	if ((address < mem_info->mapped_physical_address) ||
-	    (((u64) address + length) > ((u64)
-					 mem_info->mapped_physical_address +
-					 mem_info->mapped_length))) {
+	if (!mm || (address < mm->physical_address) ||
+	    ((u64) address + length > (u64) mm->physical_address + mm->length)) {
 		/*
-		 * The request cannot be resolved by the current memory mapping;
-		 * Delete the existing mapping and create a new one.
+		 * The request cannot be resolved by the current memory mapping.
+		 *
+		 * Look for an existing saved mapping covering the address range
+		 * at hand.  If found, save it as the current one and carry out
+		 * the access.
 		 */
-		if (mem_info->mapped_length) {
+		for (mm = mem_info->first_mm; mm; mm = mm->next_mm) {
+			if (mm == mem_info->cur_mm)
+				continue;
+
+			if (address < mm->physical_address)
+				continue;
+
+			if ((u64) address + length >
+					(u64) mm->physical_address + mm->length)
+				continue;
 
-			/* Valid mapping, delete it */
+			mem_info->cur_mm = mm;
+			goto access;
+		}
 
-			acpi_os_unmap_memory(mem_info->mapped_logical_address,
-					     mem_info->mapped_length);
+		/* Create a new mappings list entry */
+		mm = ACPI_ALLOCATE_ZEROED(sizeof(*mm));
+		if (!mm) {
+			ACPI_ERROR((AE_INFO,
+				    "Unable to save memory mapping at 0x%8.8X%8.8X, size %u",
+				    ACPI_FORMAT_UINT64(address),
+				    (u32)map_length));
+			return_ACPI_STATUS(AE_NO_MEMORY);
 		}
 
 		/*
@@ -143,29 +162,39 @@ acpi_ex_system_memory_space_handler(u32 function,
 
 		/* Create a new mapping starting at the address given */
 
-		mem_info->mapped_logical_address =
-		    acpi_os_map_memory(address, map_length);
-		if (!mem_info->mapped_logical_address) {
+		logical_addr_ptr = acpi_os_map_memory(address, map_length);
+		if (!logical_addr_ptr) {
 			ACPI_ERROR((AE_INFO,
 				    "Could not map memory at 0x%8.8X%8.8X, size %u",
 				    ACPI_FORMAT_UINT64(address),
 				    (u32)map_length));
-			mem_info->mapped_length = 0;
+			ACPI_FREE(mm);
 			return_ACPI_STATUS(AE_NO_MEMORY);
 		}
 
 		/* Save the physical address and mapping size */
 
-		mem_info->mapped_physical_address = address;
-		mem_info->mapped_length = map_length;
+		mm->logical_address = logical_addr_ptr;
+		mm->physical_address = address;
+		mm->length = map_length;
+
+		/*
+		 * Add the new entry to the mappings list and save it as the
+		 * current mapping.
+		 */
+		mm->next_mm = mem_info->first_mm;
+		mem_info->first_mm = mm;
+
+		mem_info->cur_mm = mm;
 	}
 
+access:
 	/*
 	 * Generate a logical pointer corresponding to the address we want to
 	 * access
 	 */
-	logical_addr_ptr = mem_info->mapped_logical_address +
-	    ((u64) address - (u64) mem_info->mapped_physical_address);
+	logical_addr_ptr = mm->logical_address +
+		((u64) address - (u64) mm->physical_address);
 
 	ACPI_DEBUG_PRINT((ACPI_DB_INFO,
 			  "System-Memory (width %u) R/W %u Address=%8.8X%8.8X\n",
diff --git a/include/acpi/actypes.h b/include/acpi/actypes.h
index aa236b9e6f24..d005e35ab399 100644
--- a/include/acpi/actypes.h
+++ b/include/acpi/actypes.h
@@ -1201,12 +1201,18 @@ struct acpi_pci_id {
 	u16 function;
 };
 
+struct acpi_mem_mapping {
+	acpi_physical_address physical_address;
+	u8 *logical_address;
+	acpi_size length;
+	struct acpi_mem_mapping *next_mm;
+};
+
 struct acpi_mem_space_context {
 	u32 length;
 	acpi_physical_address address;
-	acpi_physical_address mapped_physical_address;
-	u8 *mapped_logical_address;
-	acpi_size mapped_length;
+	struct acpi_mem_mapping *cur_mm;
+	struct acpi_mem_mapping *first_mm;
 };
 
 /*
-- 
2.26.2





^ permalink raw reply related	[flat|nested] 51+ messages in thread

* Re: [RFT][PATCH v3 0/4] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter
  2020-06-28 17:09       ` Rafael J. Wysocki
@ 2020-06-29 20:46         ` Dan Williams
  2020-06-30 11:04           ` Rafael J. Wysocki
  0 siblings, 1 reply; 51+ messages in thread
From: Dan Williams @ 2020-06-29 20:46 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael J. Wysocki, Erik Kaneda, Rafael J Wysocki, Len Brown,
	Borislav Petkov, Ira Weiny, James Morse, Myron Stowe,
	Andy Shevchenko, Linux Kernel Mailing List, Linux ACPI,
	linux-nvdimm, Bob Moore

On Sun, Jun 28, 2020 at 10:09 AM Rafael J. Wysocki <rafael@kernel.org> wrote:
>
> On Fri, Jun 26, 2020 at 8:41 PM Dan Williams <dan.j.williams@intel.com> wrote:
> >
> > On Fri, Jun 26, 2020 at 10:34 AM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
> > >
> > > Hi All,
> > >
> > > On Monday, June 22, 2020 3:50:42 PM CEST Rafael J. Wysocki wrote:
> > > > Hi All,
> > > >
> > > > This series is to address the problem with RCU synchronization occurring,
> > > > possibly relatively often, inside of acpi_ex_system_memory_space_handler(),
> > > > when the namespace and interpreter mutexes are held.
> > > >
> > > > Like I said before, I had decided to change the approach used in the previous
> > > > iteration of this series and to allow the unmap operations carried out by
> > > > acpi_ex_system_memory_space_handler() to be deferred in the first place,
> > > > which is done in patches [1-2/4].
> > >
In the meantime I realized that calling synchronize_rcu_expedited() under the
> > > "tables" mutex within ACPICA is not quite a good idea too and that there is no
> > > reason for any users of acpi_os_unmap_memory() in the tree to use the "sync"
> > > variant of unmapping.
> > >
> > > So, unless I'm missing something, acpi_os_unmap_memory() can be changed to
> > > always defer the final unmapping and the only ACPICA change needed to support
> > > that is the addition of the acpi_os_release_unused_mappings() call to get rid
of the unused mappings when leaving the interpreter (modulo the extra call in
> > > the debug code for consistency).
> > >
> > > So patches [1-2/4] have been changed accordingly.
> > >
> > > > However, it turns out that the "fast-path" mapping is still useful on top of
> > > > the above to reduce the number of ioremap-iounmap cycles for the same address
> > > > range and so it is introduced by patches [3-4/4].
> > >
> > > Patches [3-4/4] still do what they did, but they have been simplified a bit
> > > after rebasing on top of the new [1-2/4].
> > >
> > > The below information is still valid, but it applies to the v3, of course.
> > >
> > > > For details, please refer to the patch changelogs.
> > > >
> > > > The series is available from the git branch at
> > > >
> > > >  git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git \
> > > >  acpica-osl
> > > >
> > > > for easier testing.
> > >
Also the series has been tested locally.
> >
> > Ok, I'm still trying to get the original reporter to confirm this
> > reduces the execution time for ASL routines with a lot of OpRegion
> > touches. Shall I rebuild that test kernel with these changes, or are
> > the results from the original RFT still interesting?
>
> I'm mostly interested in the results with the v3 applied.
>

Ok, I just got feedback on v2 and it still showed the 30 minute
execution time where 7 minutes was achieved previously.

> Also it would be good to check the impact of the first two patches
> alone relative to all four.

I'll start with the full set and see if they can also support the
"first 2" experiment.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 2/2] ACPICA: Preserve memory opregion mappings
  2020-06-29 16:33       ` [PATCH v4 2/2] ACPICA: Preserve memory opregion mappings Rafael J. Wysocki
@ 2020-06-29 20:57         ` Al Stone
  2020-06-30 11:44           ` Rafael J. Wysocki
  2020-07-16 19:22         ` Verma, Vishal L
  1 sibling, 1 reply; 51+ messages in thread
From: Al Stone @ 2020-06-29 20:57 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Dan Williams, Erik Kaneda, rafael.j.wysocki, Len Brown,
	Borislav Petkov, Ira Weiny, James Morse, Myron Stowe,
	Andy Shevchenko, linux-kernel, linux-acpi, linux-nvdimm,
	Bob Moore

On 29 Jun 2020 18:33, Rafael J. Wysocki wrote:
> From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> 
> The ACPICA's strategy with respect to the handling of memory mappings
> associated with memory operation regions is to avoid mapping the
> entire region at once which may be problematic at least in principle
> (for example, it may lead to conflicts with overlapping mappings
> having different attributes created by drivers).  It may also be
> wasteful, because memory opregions on some systems take up vast
> chunks of address space while the fields in those regions actually
> accessed by AML are sparsely distributed.
> 
> For this reason, a one-page "window" is mapped for a given opregion
> on the first memory access through it and if that "window" does not
> cover an address range accessed through that opregion subsequently,
> it is unmapped and a new "window" is mapped to replace it.  Next,
> if the new "window" is not sufficient to access memory through the
> opregion in question in the future, it will be replaced with yet
> another "window" and so on.  That may lead to a suboptimal sequence
> of memory mapping and unmapping operations, for example if two fields
> in one opregion separated from each other by a sufficiently wide
> chunk of unused address space are accessed in an alternating pattern.
> 
> The situation may still be suboptimal if the deferred unmapping
> introduced previously is supported by the OS layer.  For instance,
> the alternating memory access pattern mentioned above may produce
> a relatively long list of mappings to release with substantial
> duplication among the entries in it, which could be avoided if
> acpi_ex_system_memory_space_handler() did not release the mapping
> used by it previously as soon as the current access was not covered
> by it.
> 
> In order to improve that, modify acpi_ex_system_memory_space_handler()
> to preserve all of the memory mappings created by it until the memory
> regions associated with them go away.
> 
> Accordingly, update acpi_ev_system_memory_region_setup() to unmap all
> memory associated with memory opregions that go away.
> 
> Reported-by: Dan Williams <dan.j.williams@intel.com>
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  drivers/acpi/acpica/evrgnini.c | 14 ++++----
>  drivers/acpi/acpica/exregion.c | 65 ++++++++++++++++++++++++----------
>  include/acpi/actypes.h         | 12 +++++--
>  3 files changed, 64 insertions(+), 27 deletions(-)
> 
> diff --git a/drivers/acpi/acpica/evrgnini.c b/drivers/acpi/acpica/evrgnini.c
> index aefc0145e583..89be3ccdad53 100644
> --- a/drivers/acpi/acpica/evrgnini.c
> +++ b/drivers/acpi/acpica/evrgnini.c
> @@ -38,6 +38,7 @@ acpi_ev_system_memory_region_setup(acpi_handle handle,
>  	union acpi_operand_object *region_desc =
>  	    (union acpi_operand_object *)handle;
>  	struct acpi_mem_space_context *local_region_context;
> +	struct acpi_mem_mapping *mm;
>  
>  	ACPI_FUNCTION_TRACE(ev_system_memory_region_setup);
>  
> @@ -46,13 +47,14 @@ acpi_ev_system_memory_region_setup(acpi_handle handle,
>  			local_region_context =
>  			    (struct acpi_mem_space_context *)*region_context;
>  
> -			/* Delete a cached mapping if present */
> +			/* Delete memory mappings if present */
>  
> -			if (local_region_context->mapped_length) {
> -				acpi_os_unmap_memory(local_region_context->
> -						     mapped_logical_address,
> -						     local_region_context->
> -						     mapped_length);
> +			while (local_region_context->first_mm) {
> +				mm = local_region_context->first_mm;
> +				local_region_context->first_mm = mm->next_mm;
> +				acpi_os_unmap_memory(mm->logical_address,
> +						     mm->length);
> +				ACPI_FREE(mm);
>  			}
>  			ACPI_FREE(local_region_context);
>  			*region_context = NULL;
> diff --git a/drivers/acpi/acpica/exregion.c b/drivers/acpi/acpica/exregion.c
> index d15a66de26c0..fd68f2134804 100644
> --- a/drivers/acpi/acpica/exregion.c
> +++ b/drivers/acpi/acpica/exregion.c
> @@ -41,6 +41,7 @@ acpi_ex_system_memory_space_handler(u32 function,
>  	acpi_status status = AE_OK;
>  	void *logical_addr_ptr = NULL;
>  	struct acpi_mem_space_context *mem_info = region_context;
> +	struct acpi_mem_mapping *mm = mem_info->cur_mm;
>  	u32 length;
>  	acpi_size map_length;

I think this needs to be:

        acpi_size map_length = mem_info->length;

since it now gets used in the ACPI_ERROR() call below.  I'm getting
a "maybe used uninitialized" error on compilation.

>  	acpi_size page_boundary_map_length;
> @@ -96,20 +97,38 @@ acpi_ex_system_memory_space_handler(u32 function,
>  	 * Is 1) Address below the current mapping? OR
>  	 *    2) Address beyond the current mapping?
>  	 */
> -	if ((address < mem_info->mapped_physical_address) ||
> -	    (((u64) address + length) > ((u64)
> -					 mem_info->mapped_physical_address +
> -					 mem_info->mapped_length))) {
> +	if (!mm || (address < mm->physical_address) ||
> +	    ((u64) address + length > (u64) mm->physical_address + mm->length)) {
>  		/*
> -		 * The request cannot be resolved by the current memory mapping;
> -		 * Delete the existing mapping and create a new one.
> +		 * The request cannot be resolved by the current memory mapping.
> +		 *
> +		 * Look for an existing saved mapping covering the address range
> +		 * at hand.  If found, save it as the current one and carry out
> +		 * the access.
>  		 */
> -		if (mem_info->mapped_length) {
> +		for (mm = mem_info->first_mm; mm; mm = mm->next_mm) {
> +			if (mm == mem_info->cur_mm)
> +				continue;
> +
> +			if (address < mm->physical_address)
> +				continue;
> +
> +			if ((u64) address + length >
> +					(u64) mm->physical_address + mm->length)
> +				continue;
>  
> -			/* Valid mapping, delete it */
> +			mem_info->cur_mm = mm;
> +			goto access;
> +		}
>  
> -			acpi_os_unmap_memory(mem_info->mapped_logical_address,
> -					     mem_info->mapped_length);
> +		/* Create a new mappings list entry */
> +		mm = ACPI_ALLOCATE_ZEROED(sizeof(*mm));
> +		if (!mm) {
> +			ACPI_ERROR((AE_INFO,
> +				    "Unable to save memory mapping at 0x%8.8X%8.8X, size %u",
> +				    ACPI_FORMAT_UINT64(address),
> +				    (u32)map_length));
> +			return_ACPI_STATUS(AE_NO_MEMORY);
>  		}
>  
>  		/*
> @@ -143,29 +162,39 @@ acpi_ex_system_memory_space_handler(u32 function,
>  
>  		/* Create a new mapping starting at the address given */
>  
> -		mem_info->mapped_logical_address =
> -		    acpi_os_map_memory(address, map_length);
> -		if (!mem_info->mapped_logical_address) {
> +		logical_addr_ptr = acpi_os_map_memory(address, map_length);
> +		if (!logical_addr_ptr) {
>  			ACPI_ERROR((AE_INFO,
>  				    "Could not map memory at 0x%8.8X%8.8X, size %u",
>  				    ACPI_FORMAT_UINT64(address),
>  				    (u32)map_length));
> -			mem_info->mapped_length = 0;
> +			ACPI_FREE(mm);
>  			return_ACPI_STATUS(AE_NO_MEMORY);
>  		}
>  
>  		/* Save the physical address and mapping size */
>  
> -		mem_info->mapped_physical_address = address;
> -		mem_info->mapped_length = map_length;
> +		mm->logical_address = logical_addr_ptr;
> +		mm->physical_address = address;
> +		mm->length = map_length;
> +
> +		/*
> > +		 * Add the new entry to the mappings list and save it as the
> +		 * current mapping.
> +		 */
> +		mm->next_mm = mem_info->first_mm;
> +		mem_info->first_mm = mm;
> +
> +		mem_info->cur_mm = mm;
>  	}
>  
> +access:
>  	/*
>  	 * Generate a logical pointer corresponding to the address we want to
>  	 * access
>  	 */
> -	logical_addr_ptr = mem_info->mapped_logical_address +
> -	    ((u64) address - (u64) mem_info->mapped_physical_address);
> +	logical_addr_ptr = mm->logical_address +
> +		((u64) address - (u64) mm->physical_address);
>  
>  	ACPI_DEBUG_PRINT((ACPI_DB_INFO,
>  			  "System-Memory (width %u) R/W %u Address=%8.8X%8.8X\n",
> diff --git a/include/acpi/actypes.h b/include/acpi/actypes.h
> index aa236b9e6f24..d005e35ab399 100644
> --- a/include/acpi/actypes.h
> +++ b/include/acpi/actypes.h
> @@ -1201,12 +1201,18 @@ struct acpi_pci_id {
>  	u16 function;
>  };
>  
> +struct acpi_mem_mapping {
> +	acpi_physical_address physical_address;
> +	u8 *logical_address;
> +	acpi_size length;
> +	struct acpi_mem_mapping *next_mm;
> +};
> +
>  struct acpi_mem_space_context {
>  	u32 length;
>  	acpi_physical_address address;
> -	acpi_physical_address mapped_physical_address;
> -	u8 *mapped_logical_address;
> -	acpi_size mapped_length;
> +	struct acpi_mem_mapping *cur_mm;
> +	struct acpi_mem_mapping *first_mm;
>  };
>  
>  /*
> -- 
> 2.26.2
> 
> 
> 
> 

-- 
ciao,
al
-----------------------------------
Al Stone
Software Engineer
Red Hat, Inc.
ahs3@redhat.com
-----------------------------------


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [RFT][PATCH v3 0/4] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter
  2020-06-29 20:46         ` Dan Williams
@ 2020-06-30 11:04           ` Rafael J. Wysocki
  0 siblings, 0 replies; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-30 11:04 UTC (permalink / raw)
  To: Dan Williams
  Cc: Rafael J. Wysocki, Rafael J. Wysocki, Erik Kaneda,
	Rafael J Wysocki, Len Brown, Borislav Petkov, Ira Weiny,
	James Morse, Myron Stowe, Andy Shevchenko,
	Linux Kernel Mailing List, Linux ACPI, linux-nvdimm, Bob Moore

On Mon, Jun 29, 2020 at 10:46 PM Dan Williams <dan.j.williams@intel.com> wrote:
>
> On Sun, Jun 28, 2020 at 10:09 AM Rafael J. Wysocki <rafael@kernel.org> wrote:
> >
> > On Fri, Jun 26, 2020 at 8:41 PM Dan Williams <dan.j.williams@intel.com> wrote:
> > >
> > > On Fri, Jun 26, 2020 at 10:34 AM Rafael J. Wysocki <rjw@rjwysocki.net> wrote:
> > > >
> > > > Hi All,
> > > >
> > > > On Monday, June 22, 2020 3:50:42 PM CEST Rafael J. Wysocki wrote:
> > > > > Hi All,
> > > > >
> > > > > This series is to address the problem with RCU synchronization occurring,
> > > > > possibly relatively often, inside of acpi_ex_system_memory_space_handler(),
> > > > > when the namespace and interpreter mutexes are held.
> > > > >
> > > > > Like I said before, I had decided to change the approach used in the previous
> > > > > iteration of this series and to allow the unmap operations carried out by
> > > > > acpi_ex_system_memory_space_handler() to be deferred in the first place,
> > > > > which is done in patches [1-2/4].
> > > >
> > > > In the meantime I realized that calling synchronize_rcu_expedited() under the
> > > > "tables" mutex within ACPICA is not quite a good idea too and that there is no
> > > > reason for any users of acpi_os_unmap_memory() in the tree to use the "sync"
> > > > variant of unmapping.
> > > >
> > > > So, unless I'm missing something, acpi_os_unmap_memory() can be changed to
> > > > always defer the final unmapping and the only ACPICA change needed to support
> > > > that is the addition of the acpi_os_release_unused_mappings() call to get rid
> > > > of the unused mappings when leaving the interpreter (modulo the extra call in
> > > > the debug code for consistency).
> > > >
> > > > So patches [1-2/4] have been changed accordingly.
> > > >
> > > > > However, it turns out that the "fast-path" mapping is still useful on top of
> > > > > the above to reduce the number of ioremap-iounmap cycles for the same address
> > > > > range and so it is introduced by patches [3-4/4].
> > > >
> > > > Patches [3-4/4] still do what they did, but they have been simplified a bit
> > > > after rebasing on top of the new [1-2/4].
> > > >
> > > > The below information is still valid, but it applies to the v3, of course.
> > > >
> > > > > For details, please refer to the patch changelogs.
> > > > >
> > > > > The series is available from the git branch at
> > > > >
> > > > >  git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git \
> > > > >  acpica-osl
> > > > >
> > > > > for easier testing.
> > > >
> > > > Also the series has been tested locally.
> > >
> > > Ok, I'm still trying to get the original reporter to confirm this
> > > reduces the execution time for ASL routines with a lot of OpRegion
> > > touches. Shall I rebuild that test kernel with these changes, or are
> > > the results from the original RFT still interesting?
> >
> > I'm mostly interested in the results with the v3 applied.
> >
>
> Ok, I just got feedback on v2 and it still showed the 30 minute
> execution time where 7 minutes was achieved previously.

This probably means that "transient" memory opregions, which appear
and go away during the AML execution, are involved and so moving the
RCU synchronization outside of the interpreter and namespace locks is
not enough to cover this case.

It should be covered by the v4
(https://lore.kernel.org/linux-acpi/1666722.UopIai5n7p@kreacher/T/#u),
though, because the unmapping is completely asynchronous in there and
it doesn't add any significant latency to the interpreter exit path.
So I would expect to see much better results with the v4, so I'd
recommend testing this one next.

> > Also it would be good to check the impact of the first two patches
> > alone relative to all four.
>
> I'll start with the full set and see if they can also support the
> "first 2" experiment.

In the v4 there are just two patches, so it should be straightforward
enough to test with and without the top-most one. :-)

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 2/2] ACPICA: Preserve memory opregion mappings
  2020-06-29 20:57         ` Al Stone
@ 2020-06-30 11:44           ` Rafael J. Wysocki
  2020-06-30 15:31             ` Al Stone
  0 siblings, 1 reply; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-30 11:44 UTC (permalink / raw)
  To: Al Stone
  Cc: Rafael J. Wysocki, Dan Williams, Erik Kaneda, Rafael Wysocki,
	Len Brown, Borislav Petkov, Ira Weiny, James Morse, Myron Stowe,
	Andy Shevchenko, Linux Kernel Mailing List,
	ACPI Devel Maling List, linux-nvdimm, Bob Moore

On Mon, Jun 29, 2020 at 10:57 PM Al Stone <ahs3@redhat.com> wrote:
>
> On 29 Jun 2020 18:33, Rafael J. Wysocki wrote:
> > From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> >
> > The ACPICA's strategy with respect to the handling of memory mappings
> > associated with memory operation regions is to avoid mapping the
> > entire region at once which may be problematic at least in principle
> > (for example, it may lead to conflicts with overlapping mappings
> > having different attributes created by drivers).  It may also be
> > wasteful, because memory opregions on some systems take up vast
> > chunks of address space while the fields in those regions actually
> > accessed by AML are sparsely distributed.
> >
> > For this reason, a one-page "window" is mapped for a given opregion
> > on the first memory access through it and if that "window" does not
> > cover an address range accessed through that opregion subsequently,
> > it is unmapped and a new "window" is mapped to replace it.  Next,
> > if the new "window" is not sufficient to access memory through the
> > opregion in question in the future, it will be replaced with yet
> > another "window" and so on.  That may lead to a suboptimal sequence
> > of memory mapping and unmapping operations, for example if two fields
> > in one opregion separated from each other by a sufficiently wide
> > chunk of unused address space are accessed in an alternating pattern.
> >
> > The situation may still be suboptimal if the deferred unmapping
> > introduced previously is supported by the OS layer.  For instance,
> > the alternating memory access pattern mentioned above may produce
> > a relatively long list of mappings to release with substantial
> > duplication among the entries in it, which could be avoided if
> > acpi_ex_system_memory_space_handler() did not release the mapping
> > used by it previously as soon as the current access was not covered
> > by it.
> >
> > In order to improve that, modify acpi_ex_system_memory_space_handler()
> > to preserve all of the memory mappings created by it until the memory
> > regions associated with them go away.
> >
> > Accordingly, update acpi_ev_system_memory_region_setup() to unmap all
> > memory associated with memory opregions that go away.
> >
> > Reported-by: Dan Williams <dan.j.williams@intel.com>
> > Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > ---
> >  drivers/acpi/acpica/evrgnini.c | 14 ++++----
> >  drivers/acpi/acpica/exregion.c | 65 ++++++++++++++++++++++++----------
> >  include/acpi/actypes.h         | 12 +++++--
> >  3 files changed, 64 insertions(+), 27 deletions(-)
> >
> > diff --git a/drivers/acpi/acpica/evrgnini.c b/drivers/acpi/acpica/evrgnini.c
> > index aefc0145e583..89be3ccdad53 100644
> > --- a/drivers/acpi/acpica/evrgnini.c
> > +++ b/drivers/acpi/acpica/evrgnini.c
> > @@ -38,6 +38,7 @@ acpi_ev_system_memory_region_setup(acpi_handle handle,
> >       union acpi_operand_object *region_desc =
> >           (union acpi_operand_object *)handle;
> >       struct acpi_mem_space_context *local_region_context;
> > +     struct acpi_mem_mapping *mm;
> >
> >       ACPI_FUNCTION_TRACE(ev_system_memory_region_setup);
> >
> > @@ -46,13 +47,14 @@ acpi_ev_system_memory_region_setup(acpi_handle handle,
> >                       local_region_context =
> >                           (struct acpi_mem_space_context *)*region_context;
> >
> > -                     /* Delete a cached mapping if present */
> > +                     /* Delete memory mappings if present */
> >
> > -                     if (local_region_context->mapped_length) {
> > -                             acpi_os_unmap_memory(local_region_context->
> > -                                                  mapped_logical_address,
> > -                                                  local_region_context->
> > -                                                  mapped_length);
> > +                     while (local_region_context->first_mm) {
> > +                             mm = local_region_context->first_mm;
> > +                             local_region_context->first_mm = mm->next_mm;
> > +                             acpi_os_unmap_memory(mm->logical_address,
> > +                                                  mm->length);
> > +                             ACPI_FREE(mm);
> >                       }
> >                       ACPI_FREE(local_region_context);
> >                       *region_context = NULL;
> > diff --git a/drivers/acpi/acpica/exregion.c b/drivers/acpi/acpica/exregion.c
> > index d15a66de26c0..fd68f2134804 100644
> > --- a/drivers/acpi/acpica/exregion.c
> > +++ b/drivers/acpi/acpica/exregion.c
> > @@ -41,6 +41,7 @@ acpi_ex_system_memory_space_handler(u32 function,
> >       acpi_status status = AE_OK;
> >       void *logical_addr_ptr = NULL;
> >       struct acpi_mem_space_context *mem_info = region_context;
> > +     struct acpi_mem_mapping *mm = mem_info->cur_mm;
> >       u32 length;
> >       acpi_size map_length;
>
> I think this needs to be:
>
>         acpi_size map_length = mem_info->length;
>
> since it now gets used in the ACPI_ERROR() call below.

No, it's better to print the length value in the message.

>  I'm getting a "maybe used uninitialized" error on compilation.

Thanks for reporting!

I've updated the commit in the acpica-osl branch with the fix.

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 2/2] ACPICA: Preserve memory opregion mappings
  2020-06-30 11:44           ` Rafael J. Wysocki
@ 2020-06-30 15:31             ` Al Stone
  2020-06-30 15:52               ` Rafael J. Wysocki
  0 siblings, 1 reply; 51+ messages in thread
From: Al Stone @ 2020-06-30 15:31 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael J. Wysocki, Dan Williams, Erik Kaneda, Rafael Wysocki,
	Len Brown, Borislav Petkov, Ira Weiny, James Morse, Myron Stowe,
	Andy Shevchenko, Linux Kernel Mailing List,
	ACPI Devel Maling List, linux-nvdimm, Bob Moore

On 30 Jun 2020 13:44, Rafael J. Wysocki wrote:
> On Mon, Jun 29, 2020 at 10:57 PM Al Stone <ahs3@redhat.com> wrote:
> >
> > On 29 Jun 2020 18:33, Rafael J. Wysocki wrote:
> > > From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> > >
> > > The ACPICA's strategy with respect to the handling of memory mappings
> > > associated with memory operation regions is to avoid mapping the
> > > entire region at once which may be problematic at least in principle
> > > (for example, it may lead to conflicts with overlapping mappings
> > > having different attributes created by drivers).  It may also be
> > > wasteful, because memory opregions on some systems take up vast
> > > chunks of address space while the fields in those regions actually
> > > accessed by AML are sparsely distributed.
> > >
> > > For this reason, a one-page "window" is mapped for a given opregion
> > > on the first memory access through it and if that "window" does not
> > > cover an address range accessed through that opregion subsequently,
> > > it is unmapped and a new "window" is mapped to replace it.  Next,
> > > if the new "window" is not sufficient to access memory through the
> > > opregion in question in the future, it will be replaced with yet
> > > another "window" and so on.  That may lead to a suboptimal sequence
> > > of memory mapping and unmapping operations, for example if two fields
> > > in one opregion separated from each other by a sufficiently wide
> > > chunk of unused address space are accessed in an alternating pattern.
> > >
> > > The situation may still be suboptimal if the deferred unmapping
> > > introduced previously is supported by the OS layer.  For instance,
> > > the alternating memory access pattern mentioned above may produce
> > > a relatively long list of mappings to release with substantial
> > > duplication among the entries in it, which could be avoided if
> > > acpi_ex_system_memory_space_handler() did not release the mapping
> > > used by it previously as soon as the current access was not covered
> > > by it.
> > >
> > > In order to improve that, modify acpi_ex_system_memory_space_handler()
> > > to preserve all of the memory mappings created by it until the memory
> > > regions associated with them go away.
> > >
> > > Accordingly, update acpi_ev_system_memory_region_setup() to unmap all
> > > memory associated with memory opregions that go away.
> > >
> > > Reported-by: Dan Williams <dan.j.williams@intel.com>
> > > Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > > ---
> > >  drivers/acpi/acpica/evrgnini.c | 14 ++++----
> > >  drivers/acpi/acpica/exregion.c | 65 ++++++++++++++++++++++++----------
> > >  include/acpi/actypes.h         | 12 +++++--
> > >  3 files changed, 64 insertions(+), 27 deletions(-)
> > >
> > > diff --git a/drivers/acpi/acpica/evrgnini.c b/drivers/acpi/acpica/evrgnini.c
> > > index aefc0145e583..89be3ccdad53 100644
> > > --- a/drivers/acpi/acpica/evrgnini.c
> > > +++ b/drivers/acpi/acpica/evrgnini.c
> > > @@ -38,6 +38,7 @@ acpi_ev_system_memory_region_setup(acpi_handle handle,
> > >       union acpi_operand_object *region_desc =
> > >           (union acpi_operand_object *)handle;
> > >       struct acpi_mem_space_context *local_region_context;
> > > +     struct acpi_mem_mapping *mm;
> > >
> > >       ACPI_FUNCTION_TRACE(ev_system_memory_region_setup);
> > >
> > > @@ -46,13 +47,14 @@ acpi_ev_system_memory_region_setup(acpi_handle handle,
> > >                       local_region_context =
> > >                           (struct acpi_mem_space_context *)*region_context;
> > >
> > > -                     /* Delete a cached mapping if present */
> > > +                     /* Delete memory mappings if present */
> > >
> > > -                     if (local_region_context->mapped_length) {
> > > -                             acpi_os_unmap_memory(local_region_context->
> > > -                                                  mapped_logical_address,
> > > -                                                  local_region_context->
> > > -                                                  mapped_length);
> > > +                     while (local_region_context->first_mm) {
> > > +                             mm = local_region_context->first_mm;
> > > +                             local_region_context->first_mm = mm->next_mm;
> > > +                             acpi_os_unmap_memory(mm->logical_address,
> > > +                                                  mm->length);
> > > +                             ACPI_FREE(mm);
> > >                       }
> > >                       ACPI_FREE(local_region_context);
> > >                       *region_context = NULL;
> > > diff --git a/drivers/acpi/acpica/exregion.c b/drivers/acpi/acpica/exregion.c
> > > index d15a66de26c0..fd68f2134804 100644
> > > --- a/drivers/acpi/acpica/exregion.c
> > > +++ b/drivers/acpi/acpica/exregion.c
> > > @@ -41,6 +41,7 @@ acpi_ex_system_memory_space_handler(u32 function,
> > >       acpi_status status = AE_OK;
> > >       void *logical_addr_ptr = NULL;
> > >       struct acpi_mem_space_context *mem_info = region_context;
> > > +     struct acpi_mem_mapping *mm = mem_info->cur_mm;
> > >       u32 length;
> > >       acpi_size map_length;
> >
> > I think this needs to be:
> >
> >         acpi_size map_length = mem_info->length;
> >
> > since it now gets used in the ACPI_ERROR() call below.
> 
> No, it's better to print the length value in the message.

Yeah, that was the other option.

> >  I'm getting a "maybe used uninitialized" error on compilation.
> 
> Thanks for reporting!
> 
> I've updated the commit in the acpica-osl branch with the fix.

Thanks, Rafael.

Do you have a generic way of testing this?  I can see a way to do it
-- timing a call of a method in a dynamically loaded SSDT -- but if
you had a test case lying around, I could continue to be lazy :).

-- 
ciao,
al
-----------------------------------
Al Stone
Software Engineer
Red Hat, Inc.
ahs3@redhat.com
-----------------------------------


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 2/2] ACPICA: Preserve memory opregion mappings
  2020-06-30 15:31             ` Al Stone
@ 2020-06-30 15:52               ` Rafael J. Wysocki
  2020-06-30 19:57                 ` Al Stone
  0 siblings, 1 reply; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-06-30 15:52 UTC (permalink / raw)
  To: Al Stone
  Cc: Rafael J. Wysocki, Rafael J. Wysocki, Dan Williams, Erik Kaneda,
	Rafael Wysocki, Len Brown, Borislav Petkov, Ira Weiny,
	James Morse, Myron Stowe, Andy Shevchenko,
	Linux Kernel Mailing List, ACPI Devel Maling List, linux-nvdimm,
	Bob Moore

On Tue, Jun 30, 2020 at 5:31 PM Al Stone <ahs3@redhat.com> wrote:
>
> On 30 Jun 2020 13:44, Rafael J. Wysocki wrote:
> > On Mon, Jun 29, 2020 at 10:57 PM Al Stone <ahs3@redhat.com> wrote:
> > >
> > > On 29 Jun 2020 18:33, Rafael J. Wysocki wrote:
> > > > From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> > > >
> > > > ACPICA's strategy with respect to the handling of memory mappings
> > > > associated with memory operation regions is to avoid mapping the
> > > > entire region at once which may be problematic at least in principle
> > > > (for example, it may lead to conflicts with overlapping mappings
> > > > having different attributes created by drivers).  It may also be
> > > > wasteful, because memory opregions on some systems take up vast
> > > > chunks of address space while the fields in those regions actually
> > > > accessed by AML are sparsely distributed.
> > > >
> > > > For this reason, a one-page "window" is mapped for a given opregion
> > > > on the first memory access through it and if that "window" does not
> > > > cover an address range accessed through that opregion subsequently,
> > > > it is unmapped and a new "window" is mapped to replace it.  Next,
> > > > if the new "window" is not sufficient to acess memory through the
> > > > opregion in question in the future, it will be replaced with yet
> > > > another "window" and so on.  That may lead to a suboptimal sequence
> > > > of memory mapping and unmapping operations, for example if two fields
> > > > in one opregion separated from each other by a sufficiently wide
> > > > chunk of unused address space are accessed in an alternating pattern.
> > > >
> > > > The situation may still be suboptimal if the deferred unmapping
> > > > introduced previously is supported by the OS layer.  For instance,
> > > > the alternating memory access pattern mentioned above may produce
> > > > a relatively long list of mappings to release with substantial
> > > > duplication among the entries in it, which could be avoided if
> > > > acpi_ex_system_memory_space_handler() did not release the mapping
> > > > used by it previously as soon as the current access was not covered
> > > > by it.
> > > >
> > > > In order to improve that, modify acpi_ex_system_memory_space_handler()
> > > > to preserve all of the memory mappings created by it until the memory
> > > > regions associated with them go away.
> > > >
> > > > Accordingly, update acpi_ev_system_memory_region_setup() to unmap all
> > > > memory associated with memory opregions that go away.
> > > >
> > > > Reported-by: Dan Williams <dan.j.williams@intel.com>
> > > > Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > > > ---
> > > >  drivers/acpi/acpica/evrgnini.c | 14 ++++----
> > > >  drivers/acpi/acpica/exregion.c | 65 ++++++++++++++++++++++++----------
> > > >  include/acpi/actypes.h         | 12 +++++--
> > > >  3 files changed, 64 insertions(+), 27 deletions(-)
> > > >
> > > > diff --git a/drivers/acpi/acpica/evrgnini.c b/drivers/acpi/acpica/evrgnini.c
> > > > index aefc0145e583..89be3ccdad53 100644
> > > > --- a/drivers/acpi/acpica/evrgnini.c
> > > > +++ b/drivers/acpi/acpica/evrgnini.c
> > > > @@ -38,6 +38,7 @@ acpi_ev_system_memory_region_setup(acpi_handle handle,
> > > >       union acpi_operand_object *region_desc =
> > > >           (union acpi_operand_object *)handle;
> > > >       struct acpi_mem_space_context *local_region_context;
> > > > +     struct acpi_mem_mapping *mm;
> > > >
> > > >       ACPI_FUNCTION_TRACE(ev_system_memory_region_setup);
> > > >
> > > > @@ -46,13 +47,14 @@ acpi_ev_system_memory_region_setup(acpi_handle handle,
> > > >                       local_region_context =
> > > >                           (struct acpi_mem_space_context *)*region_context;
> > > >
> > > > -                     /* Delete a cached mapping if present */
> > > > +                     /* Delete memory mappings if present */
> > > >
> > > > -                     if (local_region_context->mapped_length) {
> > > > -                             acpi_os_unmap_memory(local_region_context->
> > > > -                                                  mapped_logical_address,
> > > > -                                                  local_region_context->
> > > > -                                                  mapped_length);
> > > > +                     while (local_region_context->first_mm) {
> > > > +                             mm = local_region_context->first_mm;
> > > > +                             local_region_context->first_mm = mm->next_mm;
> > > > +                             acpi_os_unmap_memory(mm->logical_address,
> > > > +                                                  mm->length);
> > > > +                             ACPI_FREE(mm);
> > > >                       }
> > > >                       ACPI_FREE(local_region_context);
> > > >                       *region_context = NULL;
> > > > diff --git a/drivers/acpi/acpica/exregion.c b/drivers/acpi/acpica/exregion.c
> > > > index d15a66de26c0..fd68f2134804 100644
> > > > --- a/drivers/acpi/acpica/exregion.c
> > > > +++ b/drivers/acpi/acpica/exregion.c
> > > > @@ -41,6 +41,7 @@ acpi_ex_system_memory_space_handler(u32 function,
> > > >       acpi_status status = AE_OK;
> > > >       void *logical_addr_ptr = NULL;
> > > >       struct acpi_mem_space_context *mem_info = region_context;
> > > > +     struct acpi_mem_mapping *mm = mem_info->cur_mm;
> > > >       u32 length;
> > > >       acpi_size map_length;
> > >
> > > I think this needs to be:
> > >
> > >         acpi_size map_length = mem_info->length;
> > >
> > > since it now gets used in the ACPI_ERROR() call below.
> >
> > No, it's better to print the length value in the message.
>
> Yeah, that was the other option.
>
> > >  I'm getting a "may be used uninitialized" warning on compilation.
> >
> > Thanks for reporting!
> >
> > I've updated the commit in the acpica-osl branch with the fix.
>
> Thanks, Rafael.
>
> Do you have a generic way of testing this?  I can see a way to do it
> -- timing a call of a method in a dynamically loaded SSDT -- but if
> you had a test case lying around, I could continue to be lazy :).

I don't check the timing, but instrument the code to see if what
happens is what is expected.

Now, the overhead reduction resulting from this change in Linux is
quite straightforward: Every time the current mapping doesn't cover
the request at hand, an unmap is carried out by the original code,
which involves a linear search through acpi_ioremaps, and which
generally is (at least a bit) more expensive than the linear search
through the list of opregion-specific mappings introduced by the
$subject patch, because quite likely the acpi_ioremaps list holds more
items.  And, of course, if the opregion in question holds many fields
and they are not covered by one mapping, each of them needs to be
mapped just once per the opregion life cycle.
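To make the data structure concrete, here is a minimal user-space sketch of the per-opregion mapping list described above.  The struct field names mirror the patch's struct acpi_mem_mapping and struct acpi_mem_space_context, but the lookup helper itself is illustrative, not the kernel code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* One node per preserved "window" into the opregion. */
struct acpi_mem_mapping {
	uint64_t physical_address;
	uint8_t *logical_address;
	size_t length;
	struct acpi_mem_mapping *next_mm;
};

struct acpi_mem_space_context {
	uint64_t address;                  /* opregion base */
	size_t length;                     /* opregion length */
	struct acpi_mem_mapping *cur_mm;   /* most recently used mapping */
	struct acpi_mem_mapping *first_mm; /* head of the mapping list */
};

/* Walk the opregion-local list for a mapping that covers
 * [address, address + size).  This list is typically shorter than
 * the global acpi_ioremaps list, which is where the overhead
 * reduction comes from. */
static struct acpi_mem_mapping *
find_mapping(struct acpi_mem_space_context *ctx, uint64_t address, size_t size)
{
	struct acpi_mem_mapping *mm;

	for (mm = ctx->first_mm; mm; mm = mm->next_mm) {
		if (address >= mm->physical_address &&
		    address + size <= mm->physical_address + mm->length)
			return mm;
	}
	return NULL;
}
```

A hit on this list avoids both the unmap and the new ioremap entirely; only a miss falls through to creating (and prepending) a new window.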

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 2/2] ACPICA: Preserve memory opregion mappings
  2020-06-30 15:52               ` Rafael J. Wysocki
@ 2020-06-30 19:57                 ` Al Stone
  0 siblings, 0 replies; 51+ messages in thread
From: Al Stone @ 2020-06-30 19:57 UTC (permalink / raw)
  To: Rafael J. Wysocki
  Cc: Rafael J. Wysocki, Dan Williams, Erik Kaneda, Rafael Wysocki,
	Len Brown, Borislav Petkov, Ira Weiny, James Morse, Myron Stowe,
	Andy Shevchenko, Linux Kernel Mailing List,
	ACPI Devel Maling List, linux-nvdimm, Bob Moore

On 30 Jun 2020 17:52, Rafael J. Wysocki wrote:
> On Tue, Jun 30, 2020 at 5:31 PM Al Stone <ahs3@redhat.com> wrote:
> >
> > On 30 Jun 2020 13:44, Rafael J. Wysocki wrote:
> > > On Mon, Jun 29, 2020 at 10:57 PM Al Stone <ahs3@redhat.com> wrote:
> > > >
> > > > On 29 Jun 2020 18:33, Rafael J. Wysocki wrote:
> > > > > From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> > > > >
> > > > > ACPICA's strategy with respect to the handling of memory mappings
> > > > > associated with memory operation regions is to avoid mapping the
> > > > > entire region at once which may be problematic at least in principle
> > > > > (for example, it may lead to conflicts with overlapping mappings
> > > > > having different attributes created by drivers).  It may also be
> > > > > wasteful, because memory opregions on some systems take up vast
> > > > > chunks of address space while the fields in those regions actually
> > > > > accessed by AML are sparsely distributed.
> > > > >
> > > > > For this reason, a one-page "window" is mapped for a given opregion
> > > > > on the first memory access through it and if that "window" does not
> > > > > cover an address range accessed through that opregion subsequently,
> > > > > it is unmapped and a new "window" is mapped to replace it.  Next,
> > > > > if the new "window" is not sufficient to acess memory through the
> > > > > opregion in question in the future, it will be replaced with yet
> > > > > another "window" and so on.  That may lead to a suboptimal sequence
> > > > > of memory mapping and unmapping operations, for example if two fields
> > > > > in one opregion separated from each other by a sufficiently wide
> > > > > chunk of unused address space are accessed in an alternating pattern.
> > > > >
> > > > > The situation may still be suboptimal if the deferred unmapping
> > > > > introduced previously is supported by the OS layer.  For instance,
> > > > > the alternating memory access pattern mentioned above may produce
> > > > > a relatively long list of mappings to release with substantial
> > > > > duplication among the entries in it, which could be avoided if
> > > > > acpi_ex_system_memory_space_handler() did not release the mapping
> > > > > used by it previously as soon as the current access was not covered
> > > > > by it.
> > > > >
> > > > > In order to improve that, modify acpi_ex_system_memory_space_handler()
> > > > > to preserve all of the memory mappings created by it until the memory
> > > > > regions associated with them go away.
> > > > >
> > > > > Accordingly, update acpi_ev_system_memory_region_setup() to unmap all
> > > > > memory associated with memory opregions that go away.
> > > > >
> > > > > Reported-by: Dan Williams <dan.j.williams@intel.com>
> > > > > Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > > > > ---
> > > > >  drivers/acpi/acpica/evrgnini.c | 14 ++++----
> > > > >  drivers/acpi/acpica/exregion.c | 65 ++++++++++++++++++++++++----------
> > > > >  include/acpi/actypes.h         | 12 +++++--
> > > > >  3 files changed, 64 insertions(+), 27 deletions(-)
> > > > >
> > > > > diff --git a/drivers/acpi/acpica/evrgnini.c b/drivers/acpi/acpica/evrgnini.c
> > > > > index aefc0145e583..89be3ccdad53 100644
> > > > > --- a/drivers/acpi/acpica/evrgnini.c
> > > > > +++ b/drivers/acpi/acpica/evrgnini.c
> > > > > @@ -38,6 +38,7 @@ acpi_ev_system_memory_region_setup(acpi_handle handle,
> > > > >       union acpi_operand_object *region_desc =
> > > > >           (union acpi_operand_object *)handle;
> > > > >       struct acpi_mem_space_context *local_region_context;
> > > > > +     struct acpi_mem_mapping *mm;
> > > > >
> > > > >       ACPI_FUNCTION_TRACE(ev_system_memory_region_setup);
> > > > >
> > > > > @@ -46,13 +47,14 @@ acpi_ev_system_memory_region_setup(acpi_handle handle,
> > > > >                       local_region_context =
> > > > >                           (struct acpi_mem_space_context *)*region_context;
> > > > >
> > > > > -                     /* Delete a cached mapping if present */
> > > > > +                     /* Delete memory mappings if present */
> > > > >
> > > > > -                     if (local_region_context->mapped_length) {
> > > > > -                             acpi_os_unmap_memory(local_region_context->
> > > > > -                                                  mapped_logical_address,
> > > > > -                                                  local_region_context->
> > > > > -                                                  mapped_length);
> > > > > +                     while (local_region_context->first_mm) {
> > > > > +                             mm = local_region_context->first_mm;
> > > > > +                             local_region_context->first_mm = mm->next_mm;
> > > > > +                             acpi_os_unmap_memory(mm->logical_address,
> > > > > +                                                  mm->length);
> > > > > +                             ACPI_FREE(mm);
> > > > >                       }
> > > > >                       ACPI_FREE(local_region_context);
> > > > >                       *region_context = NULL;
> > > > > diff --git a/drivers/acpi/acpica/exregion.c b/drivers/acpi/acpica/exregion.c
> > > > > index d15a66de26c0..fd68f2134804 100644
> > > > > --- a/drivers/acpi/acpica/exregion.c
> > > > > +++ b/drivers/acpi/acpica/exregion.c
> > > > > @@ -41,6 +41,7 @@ acpi_ex_system_memory_space_handler(u32 function,
> > > > >       acpi_status status = AE_OK;
> > > > >       void *logical_addr_ptr = NULL;
> > > > >       struct acpi_mem_space_context *mem_info = region_context;
> > > > > +     struct acpi_mem_mapping *mm = mem_info->cur_mm;
> > > > >       u32 length;
> > > > >       acpi_size map_length;
> > > >
> > > > I think this needs to be:
> > > >
> > > >         acpi_size map_length = mem_info->length;
> > > >
> > > > since it now gets used in the ACPI_ERROR() call below.
> > >
> > > No, it's better to print the length value in the message.
> >
> > Yeah, that was the other option.
> >
> > > >  I'm getting a "may be used uninitialized" warning on compilation.
> > >
> > > Thanks for reporting!
> > >
> > > I've updated the commit in the acpica-osl branch with the fix.
> >
> > Thanks, Rafael.
> >
> > Do you have a generic way of testing this?  I can see a way to do it
> > -- timing a call of a method in a dynamically loaded SSDT -- but if
> > you had a test case lying around, I could continue to be lazy :).
> 
> I don't check the timing, but instrument the code to see if what
> happens is what is expected.

Ah, okay.  Thanks.

> Now, the overhead reduction resulting from this change in Linux is
> quite straightforward: Every time the current mapping doesn't cover
> the request at hand, an unmap is carried out by the original code,
> which involves a linear search through acpi_ioremaps, and which
> generally is (at least a bit) more expensive than the linear search
> through the list of opregion-specific mappings introduced by the
> $subject patch, because quite likely the acpi_ioremaps list holds more
> items.  And, of course, if the opregion in question holds many fields
> and they are not covered by one mapping, each of them needs to be
> mapped just once per the opregion life cycle.

Right.  What I was debating as a generic test was something to try to
force an OpRegion through mapping and unmapping repeatedly with the
current code to determine a rough average elapsed time.  Then, apply
the patch to see what the change does.  Granted, a completely synthetic
scenario, and specifically designed to exaggerate the overhead, but
I'm just curious.

-- 
ciao,
al
-----------------------------------
Al Stone
Software Engineer
Red Hat, Inc.
ahs3@redhat.com
-----------------------------------


^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 2/2] ACPICA: Preserve memory opregion mappings
  2020-06-29 16:33       ` [PATCH v4 2/2] ACPICA: Preserve memory opregion mappings Rafael J. Wysocki
  2020-06-29 20:57         ` Al Stone
@ 2020-07-16 19:22         ` Verma, Vishal L
  2020-07-19 19:14           ` Rafael J. Wysocki
  1 sibling, 1 reply; 51+ messages in thread
From: Verma, Vishal L @ 2020-07-16 19:22 UTC (permalink / raw)
  To: Williams, Dan J, Kaneda, Erik, rjw
  Cc: linux-nvdimm, james.morse, lenb, andriy.shevchenko, bp,
	linux-kernel, myron.stowe, Wysocki, Rafael J, Weiny, Ira, Moore,
	Robert, linux-acpi

On Mon, 2020-06-29 at 18:33 +0200, Rafael J. Wysocki wrote:
> From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> 
> ACPICA's strategy with respect to the handling of memory mappings
> associated with memory operation regions is to avoid mapping the
> entire region at once which may be problematic at least in principle
> (for example, it may lead to conflicts with overlapping mappings
> having different attributes created by drivers).  It may also be
> wasteful, because memory opregions on some systems take up vast
> chunks of address space while the fields in those regions actually
> accessed by AML are sparsely distributed.
> 
> For this reason, a one-page "window" is mapped for a given opregion
> on the first memory access through it and if that "window" does not
> cover an address range accessed through that opregion subsequently,
> it is unmapped and a new "window" is mapped to replace it.  Next,
> if the new "window" is not sufficient to acess memory through the
> opregion in question in the future, it will be replaced with yet
> another "window" and so on.  That may lead to a suboptimal sequence
> of memory mapping and unmapping operations, for example if two fields
> in one opregion separated from each other by a sufficiently wide
> chunk of unused address space are accessed in an alternating pattern.
> 
> The situation may still be suboptimal if the deferred unmapping
> introduced previously is supported by the OS layer.  For instance,
> the alternating memory access pattern mentioned above may produce
> a relatively long list of mappings to release with substantial
> duplication among the entries in it, which could be avoided if
> acpi_ex_system_memory_space_handler() did not release the mapping
> used by it previously as soon as the current access was not covered
> by it.
> 
> In order to improve that, modify acpi_ex_system_memory_space_handler()
> to preserve all of the memory mappings created by it until the memory
> regions associated with them go away.
> 
> Accordingly, update acpi_ev_system_memory_region_setup() to unmap all
> memory associated with memory opregions that go away.
> 
> Reported-by: Dan Williams <dan.j.williams@intel.com>
> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> ---
>  drivers/acpi/acpica/evrgnini.c | 14 ++++----
>  drivers/acpi/acpica/exregion.c | 65 ++++++++++++++++++++++++----------
>  include/acpi/actypes.h         | 12 +++++--
>  3 files changed, 64 insertions(+), 27 deletions(-)
> 

Hi Rafael,

Picking up from Dan while he's out - I had these patches tested by the
original reporter, and they work fine. I see you had them staged in the
acpica-osl branch. Is that slated to go in during the 5.9 merge window?

You can add:
Tested-by: Xiang Li <xiang.z.li@intel.com>

^ permalink raw reply	[flat|nested] 51+ messages in thread

* Re: [PATCH v4 2/2] ACPICA: Preserve memory opregion mappings
  2020-07-16 19:22         ` Verma, Vishal L
@ 2020-07-19 19:14           ` Rafael J. Wysocki
  0 siblings, 0 replies; 51+ messages in thread
From: Rafael J. Wysocki @ 2020-07-19 19:14 UTC (permalink / raw)
  To: Verma, Vishal L
  Cc: Williams, Dan J, Kaneda, Erik, rjw, linux-nvdimm, james.morse,
	lenb, andriy.shevchenko, bp, linux-kernel, myron.stowe, Wysocki,
	Rafael J, Weiny, Ira, Moore, Robert, linux-acpi

On Thu, Jul 16, 2020 at 9:22 PM Verma, Vishal L
<vishal.l.verma@intel.com> wrote:
>
> On Mon, 2020-06-29 at 18:33 +0200, Rafael J. Wysocki wrote:
> > From: "Rafael J. Wysocki" <rafael.j.wysocki@intel.com>
> >
> > ACPICA's strategy with respect to the handling of memory mappings
> > associated with memory operation regions is to avoid mapping the
> > entire region at once which may be problematic at least in principle
> > (for example, it may lead to conflicts with overlapping mappings
> > having different attributes created by drivers).  It may also be
> > wasteful, because memory opregions on some systems take up vast
> > chunks of address space while the fields in those regions actually
> > accessed by AML are sparsely distributed.
> >
> > For this reason, a one-page "window" is mapped for a given opregion
> > on the first memory access through it and if that "window" does not
> > cover an address range accessed through that opregion subsequently,
> > it is unmapped and a new "window" is mapped to replace it.  Next,
> > if the new "window" is not sufficient to acess memory through the
> > opregion in question in the future, it will be replaced with yet
> > another "window" and so on.  That may lead to a suboptimal sequence
> > of memory mapping and unmapping operations, for example if two fields
> > in one opregion separated from each other by a sufficiently wide
> > chunk of unused address space are accessed in an alternating pattern.
> >
> > The situation may still be suboptimal if the deferred unmapping
> > introduced previously is supported by the OS layer.  For instance,
> > the alternating memory access pattern mentioned above may produce
> > a relatively long list of mappings to release with substantial
> > duplication among the entries in it, which could be avoided if
> > acpi_ex_system_memory_space_handler() did not release the mapping
> > used by it previously as soon as the current access was not covered
> > by it.
> >
> > In order to improve that, modify acpi_ex_system_memory_space_handler()
> > to preserve all of the memory mappings created by it until the memory
> > regions associated with them go away.
> >
> > Accordingly, update acpi_ev_system_memory_region_setup() to unmap all
> > memory associated with memory opregions that go away.
> >
> > Reported-by: Dan Williams <dan.j.williams@intel.com>
> > Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
> > ---
> >  drivers/acpi/acpica/evrgnini.c | 14 ++++----
> >  drivers/acpi/acpica/exregion.c | 65 ++++++++++++++++++++++++----------
> >  include/acpi/actypes.h         | 12 +++++--
> >  3 files changed, 64 insertions(+), 27 deletions(-)
> >
>
> Hi Rafael,
>
> Picking up from Dan while he's out - I had these patches tested by the
> original reporter, and they work fine. I see you had them staged in the
> acpica-osl branch. Is that slated to go in during the 5.9 merge window?

Yes, it is.

> You can add:
> Tested-by: Xiang Li <xiang.z.li@intel.com>

Thank you!

^ permalink raw reply	[flat|nested] 51+ messages in thread

end of thread, other threads:[~2020-07-19 19:14 UTC | newest]

Thread overview: 51+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-05-07 23:39 [PATCH v2] ACPI: Drop rcu usage for MMIO mappings Dan Williams
2020-06-05 13:32 ` Rafael J. Wysocki
2020-06-05 16:18   ` Dan Williams
2020-06-05 16:21     ` Rafael J. Wysocki
2020-06-05 16:39       ` Dan Williams
2020-06-05 17:02         ` Rafael J. Wysocki
2020-06-05 14:06 ` [RFT][PATCH] ACPI: OSL: Use rwlock instead of RCU for memory management Rafael J. Wysocki
2020-06-05 17:08   ` Dan Williams
2020-06-06  6:56     ` Rafael J. Wysocki
2020-06-08 15:33       ` Rafael J. Wysocki
2020-06-08 16:29         ` Rafael J. Wysocki
2020-06-05 19:40   ` Andy Shevchenko
2020-06-06  6:48     ` Rafael J. Wysocki
2020-06-10 12:17 ` [RFT][PATCH 0/3] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Rafael J. Wysocki
2020-06-10 12:20   ` [RFT][PATCH 1/3] ACPICA: Defer unmapping of memory used in memory opregions Rafael J. Wysocki
2020-06-10 12:21   ` [RFT][PATCH 2/3] ACPICA: Remove unused memory mappings on interpreter exit Rafael J. Wysocki
2020-06-12  0:12     ` Kaneda, Erik
2020-06-12 12:05       ` Rafael J. Wysocki
2020-06-13 19:28         ` Rafael J. Wysocki
2020-06-15 19:06           ` Dan Williams
2020-06-10 12:22   ` [RFT][PATCH 3/3] ACPI: OSL: Define ACPI_OS_MAP_MEMORY_FAST_PATH() Rafael J. Wysocki
2020-06-13 19:19   ` [RFT][PATCH 0/3] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Rafael J. Wysocki
2020-06-22 13:50 ` [RFT][PATCH v2 0/4] " Rafael J. Wysocki
2020-06-22 13:52   ` [RFT][PATCH v2 1/4] ACPICA: Defer unmapping of opregion memory if supported by OS Rafael J. Wysocki
2020-06-22 13:53   ` [RFT][PATCH v2 2/4] ACPI: OSL: Add support for deferred unmapping of ACPI memory Rafael J. Wysocki
2020-06-22 14:56     ` Andy Shevchenko
2020-06-22 15:27       ` Rafael J. Wysocki
2020-06-22 15:46         ` Andy Shevchenko
2020-06-22 14:01   ` [RFT][PATCH v2 3/4] ACPICA: Preserve memory opregion mappings if supported by OS Rafael J. Wysocki
2020-06-26 22:53     ` Kaneda, Erik
2020-06-29 13:02       ` Rafael J. Wysocki
2020-06-22 14:02   ` [RFT][PATCH v2 4/4] ACPI: OSL: Implement acpi_os_map_memory_fast_path() Rafael J. Wysocki
2020-06-26 17:28   ` [RFT][PATCH v3 0/4] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Rafael J. Wysocki
2020-06-26 17:31     ` [RFT][PATCH v3 1/4] ACPICA: Take deferred unmapping of memory into account Rafael J. Wysocki
2020-06-26 17:31     ` [RFT][PATCH v3 2/4] ACPI: OSL: Implement deferred unmapping of ACPI memory Rafael J. Wysocki
2020-06-26 17:32     ` [RFT][PATCH v3 3/4] ACPICA: Preserve memory opregion mappings if supported by OS Rafael J. Wysocki
2020-06-26 17:33     ` [RFT][PATCH v3 4/4] ACPI: OSL: Implement acpi_os_map_memory_fast_path() Rafael J. Wysocki
2020-06-26 18:41     ` [RFT][PATCH v3 0/4] ACPI: ACPICA / OSL: Avoid unmapping ACPI memory inside of the AML interpreter Dan Williams
2020-06-28 17:09       ` Rafael J. Wysocki
2020-06-29 20:46         ` Dan Williams
2020-06-30 11:04           ` Rafael J. Wysocki
2020-06-29 16:31     ` [PATCH v4 0/2] " Rafael J. Wysocki
2020-06-29 16:33       ` [PATCH v4 1/2] ACPI: OSL: Implement deferred unmapping of ACPI memory Rafael J. Wysocki
2020-06-29 16:33       ` [PATCH v4 2/2] ACPICA: Preserve memory opregion mappings Rafael J. Wysocki
2020-06-29 20:57         ` Al Stone
2020-06-30 11:44           ` Rafael J. Wysocki
2020-06-30 15:31             ` Al Stone
2020-06-30 15:52               ` Rafael J. Wysocki
2020-06-30 19:57                 ` Al Stone
2020-07-16 19:22         ` Verma, Vishal L
2020-07-19 19:14           ` Rafael J. Wysocki

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).