* [PATCH] drivers/hv: Replace binary semaphore with mutex
@ 2019-11-01 20:00 Davidlohr Bueso
  2019-11-19 13:20 ` Sasha Levin
From: Davidlohr Bueso @ 2019-11-01 20:00 UTC (permalink / raw)
  To: kys, haiyangz, sthemmin, sashal
  Cc: linux-kernel, linux-hyperv, dave, Davidlohr Bueso

At a slight footprint cost (24 vs 32 bytes), mutexes are better optimized
than semaphores and offer a nicer interface for mutual exclusion, which is
why they are encouraged over binary semaphores when possible.

Replace hyperv_mmio_lock: its semantics imply traditional lock ownership;
that is, the lock owner is the same task for both the lock and unlock
operations. Therefore it is safe to convert.

Signed-off-by: Davidlohr Bueso <dbueso@suse.de>
---
This is part of reducing the number of semaphore users in the kernel.
Compile-tested only.
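
For reference (not part of the patch), here is a minimal sketch of the
before/after locking pattern; the demo_* names are hypothetical and only
serve to illustrate the API difference:

  #include <linux/semaphore.h>
  #include <linux/mutex.h>

  /* Before: a binary semaphore used purely for mutual exclusion. */
  static DEFINE_SEMAPHORE(demo_sem);	/* initialized to a count of 1 */

  static void demo_sem_critical(void)
  {
  	down(&demo_sem);		/* may sleep; no owner tracking */
  	/* ... touch the shared state ... */
  	up(&demo_sem);			/* any task could legally do this */
  }

  /*
   * After: a mutex. Same blocking behaviour for this use case, but the
   * task that called mutex_lock() must be the one to call mutex_unlock(),
   * which matches how hyperv_mmio_lock is used and lets the mutex
   * debugging machinery check it.
   */
  static DEFINE_MUTEX(demo_mutex);

  static void demo_mutex_critical(void)
  {
  	mutex_lock(&demo_mutex);
  	/* ... touch the shared state ... */
  	mutex_unlock(&demo_mutex);
  }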

 drivers/hv/vmbus_drv.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index 391f0b225c9a..5c606dc4a3f7 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -79,7 +79,7 @@ static struct notifier_block hyperv_panic_block = {
 static const char *fb_mmio_name = "fb_range";
 static struct resource *fb_mmio;
 static struct resource *hyperv_mmio;
-static DEFINE_SEMAPHORE(hyperv_mmio_lock);
+static DEFINE_MUTEX(hyperv_mmio_lock);
 
 static int vmbus_exists(void)
 {
@@ -2011,7 +2011,7 @@ int vmbus_allocate_mmio(struct resource **new, struct hv_device *device_obj,
 	int retval;
 
 	retval = -ENXIO;
-	down(&hyperv_mmio_lock);
+	mutex_lock(&hyperv_mmio_lock);
 
 	/*
 	 * If overlaps with frame buffers are allowed, then first attempt to
@@ -2058,7 +2058,7 @@ int vmbus_allocate_mmio(struct resource **new, struct hv_device *device_obj,
 	}
 
 exit:
-	up(&hyperv_mmio_lock);
+	mutex_unlock(&hyperv_mmio_lock);
 	return retval;
 }
 EXPORT_SYMBOL_GPL(vmbus_allocate_mmio);
@@ -2075,7 +2075,7 @@ void vmbus_free_mmio(resource_size_t start, resource_size_t size)
 {
 	struct resource *iter;
 
-	down(&hyperv_mmio_lock);
+	mutex_lock(&hyperv_mmio_lock);
 	for (iter = hyperv_mmio; iter; iter = iter->sibling) {
 		if ((iter->start >= start + size) || (iter->end <= start))
 			continue;
@@ -2083,7 +2083,7 @@ void vmbus_free_mmio(resource_size_t start, resource_size_t size)
 		__release_region(iter, start, size);
 	}
 	release_mem_region(start, size);
-	up(&hyperv_mmio_lock);
+	mutex_unlock(&hyperv_mmio_lock);
 
 }
 EXPORT_SYMBOL_GPL(vmbus_free_mmio);
-- 
2.16.4


* Re: [PATCH] drivers/hv: Replace binary semaphore with mutex
  2019-11-01 20:00 [PATCH] drivers/hv: Replace binary semaphore with mutex Davidlohr Bueso
@ 2019-11-19 13:20 ` Sasha Levin
From: Sasha Levin @ 2019-11-19 13:20 UTC (permalink / raw)
  To: Davidlohr Bueso
  Cc: kys, haiyangz, sthemmin, linux-kernel, linux-hyperv, Davidlohr Bueso

On Fri, Nov 01, 2019 at 01:00:04PM -0700, Davidlohr Bueso wrote:
>At a slight footprint cost (24 vs 32 bytes), mutexes are better optimized
>than semaphores and offer a nicer interface for mutual exclusion, which is
>why they are encouraged over binary semaphores when possible.
>
>Replace hyperv_mmio_lock: its semantics imply traditional lock ownership;
>that is, the lock owner is the same task for both the lock and unlock
>operations. Therefore it is safe to convert.
>
>Signed-off-by: Davidlohr Bueso <dbueso@suse.de>

Queued up for hyperv-next, thank you.

-- 
Thanks,
Sasha
