* [PATCH net v3] 9p/xen : Fix use after free bug in xen_9pfs_front_remove due to race condition
@ 2023-03-13 14:43 Zheng Wang
2023-03-13 14:53 ` Michal Swiatkowski
0 siblings, 1 reply; 3+ messages in thread
From: Zheng Wang @ 2023-03-13 14:43 UTC (permalink / raw)
To: ericvh
Cc: michal.swiatkowski, lucho, asmadeus, linux_oss, davem, edumazet,
kuba, pabeni, v9fs-developer, netdev, linux-kernel,
hackerzheng666, 1395428693sheep, alex000young, Zheng Wang
In xen_9pfs_front_probe, xen_9pfs_front_alloc_dataring is called
to initialize priv->rings and bind &ring->work to p9_xen_response.
When xen_9pfs_front_event_handler is called to handle an IRQ request,
it eventually calls schedule_work to queue that work.
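For illustration only, a minimal sketch of that probe/IRQ flow; the
demo_* names are invented for this sketch and are not the driver's
actual symbols:

#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/interrupt.h>

struct demo_ring {
	struct work_struct work;	/* bound to the deferred handler below */
	/* ... ring state ... */
};

/* deferred handler: runs later in process context and may dereference
 * the demo_ring that embeds the work item
 */
static void demo_response(struct work_struct *work)
{
	struct demo_ring *ring = container_of(work, struct demo_ring, work);

	(void)ring;	/* process the ring here */
}

/* probe path: bind the handler to the per-ring work item */
static void demo_ring_init(struct demo_ring *ring)
{
	INIT_WORK(&ring->work, demo_response);
}

/* IRQ handler: defer the real processing to the workqueue */
static irqreturn_t demo_event_handler(int irq, void *data)
{
	struct demo_ring *ring = data;

	schedule_work(&ring->work);
	return IRQ_HANDLED;
}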
When xen_9pfs_front_remove is called to remove the driver, the
following sequence is possible:

CPU0                     CPU1

                     |p9_xen_response
xen_9pfs_front_remove|
  xen_9pfs_front_free|
     kfree(priv)     |
     //free priv     |
                     |p9_tag_lookup
                     |//use priv->client

Fix it by finishing the work before cleanup in xen_9pfs_front_free.

Note that this bug was found by static analysis and might be a
false positive.
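The change follows the usual teardown rule for deferred work: make the
work item unable to run again before freeing the memory it touches. A
minimal sketch of that pattern, reusing the invented demo_* names from
above (not the driver's actual code):

#include <linux/slab.h>

static void demo_ring_free(struct demo_ring *ring)
{
	/* cancel_work_sync() also waits for an already-running
	 * demo_response() to finish, so the handler can never
	 * dereference freed memory
	 */
	cancel_work_sync(&ring->work);
	kfree(ring);
}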
Fixes: 71ebd71921e4 ("xen/9pfs: connect to the backend")
Signed-off-by: Zheng Wang <zyytlz.wz@163.com>
---
v3:
- remove the unnecessary comment and move the ring definition into
the for loop, as suggested by Michal Swiatkowski

v2:
- fix the type error of ring reported by the kernel test robot
---
net/9p/trans_xen.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
index c64050e839ac..df467ffb52d0 100644
--- a/net/9p/trans_xen.c
+++ b/net/9p/trans_xen.c
@@ -280,6 +280,10 @@ static void xen_9pfs_front_free(struct xen_9pfs_front_priv *priv)
 	write_unlock(&xen_9pfs_lock);
 
 	for (i = 0; i < priv->num_rings; i++) {
+		struct xen_9pfs_dataring *ring = &priv->rings[i];
+
+		cancel_work_sync(&ring->work);
+
 		if (!priv->rings[i].intf)
 			break;
 		if (priv->rings[i].irq > 0)
--
2.25.1
* Re: [PATCH net v3] 9p/xen : Fix use after free bug in xen_9pfs_front_remove due to race condition
2023-03-13 14:43 [PATCH net v3] 9p/xen : Fix use after free bug in xen_9pfs_front_remove due to race condition Zheng Wang
@ 2023-03-13 14:53 ` Michal Swiatkowski
0 siblings, 0 replies; 3+ messages in thread
From: Michal Swiatkowski @ 2023-03-13 14:53 UTC (permalink / raw)
To: Zheng Wang
Cc: ericvh, lucho, asmadeus, linux_oss, davem, edumazet, kuba,
pabeni, v9fs-developer, netdev, linux-kernel, hackerzheng666,
1395428693sheep, alex000young
On Mon, Mar 13, 2023 at 10:43:25PM +0800, Zheng Wang wrote:
> In xen_9pfs_front_probe, xen_9pfs_front_alloc_dataring is called
> to initialize priv->rings and bind &ring->work to p9_xen_response.
>
> When xen_9pfs_front_event_handler is called to handle an IRQ request,
> it eventually calls schedule_work to queue that work.
>
> When xen_9pfs_front_remove is called to remove the driver, the
> following sequence is possible:
>
> CPU0                     CPU1
>
>                      |p9_xen_response
> xen_9pfs_front_remove|
>   xen_9pfs_front_free|
>      kfree(priv)     |
>      //free priv     |
>                      |p9_tag_lookup
>                      |//use priv->client
>
> Fix it by finishing the work before cleanup in xen_9pfs_front_free.
>
> Note that this bug was found by static analysis and might be a
> false positive.
>
> Fixes: 71ebd71921e4 ("xen/9pfs: connect to the backend")
> Signed-off-by: Zheng Wang <zyytlz.wz@163.com>
> ---
> v3:
> - remove the unnecessary comment and move the ring definition into
> the for loop, as suggested by Michal Swiatkowski
>
> v2:
> - fix the type error of ring reported by the kernel test robot
> ---
> net/9p/trans_xen.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
[...]
Thanks for the changes
Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
* [PATCH net v3] 9p/xen : Fix use after free bug in xen_9pfs_front_remove due to race condition
@ 2023-03-13 16:59 Zheng Wang
0 siblings, 0 replies; 3+ messages in thread
From: Zheng Wang @ 2023-03-13 16:59 UTC (permalink / raw)
To: ericvh
Cc: michal.swiatkowski, lucho, asmadeus, linux_oss, davem, edumazet,
kuba, pabeni, v9fs-developer, netdev, linux-kernel,
hackerzheng666, 1395428693sheep, alex000young, Zheng Wang
In xen_9pfs_front_probe, xen_9pfs_front_alloc_dataring is called
to initialize priv->rings and bind &ring->work to p9_xen_response.
When xen_9pfs_front_event_handler is called to handle an IRQ request,
it eventually calls schedule_work to queue that work.
When xen_9pfs_front_remove is called to remove the driver, the
following sequence is possible:

CPU0                     CPU1

                     |p9_xen_response
xen_9pfs_front_remove|
  xen_9pfs_front_free|
     kfree(priv)     |
     //free priv     |
                     |p9_tag_lookup
                     |//use priv->client

Fix it by finishing the work before cleanup in xen_9pfs_front_free.

Note that this bug was found by static analysis and might be a
false positive.
Fixes: 71ebd71921e4 ("xen/9pfs: connect to the backend")
Reviewed-by: Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
Signed-off-by: Zheng Wang <zyytlz.wz@163.com>
---
v3:
- remove the unnecessary comment and move the ring definition into
the for loop, as suggested by Michal Swiatkowski

v2:
- fix the type error of ring reported by the kernel test robot
---
net/9p/trans_xen.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
index c64050e839ac..df467ffb52d0 100644
--- a/net/9p/trans_xen.c
+++ b/net/9p/trans_xen.c
@@ -280,6 +280,10 @@ static void xen_9pfs_front_free(struct xen_9pfs_front_priv *priv)
 	write_unlock(&xen_9pfs_lock);
 
 	for (i = 0; i < priv->num_rings; i++) {
+		struct xen_9pfs_dataring *ring = &priv->rings[i];
+
+		cancel_work_sync(&ring->work);
+
 		if (!priv->rings[i].intf)
 			break;
 		if (priv->rings[i].irq > 0)
--
2.25.1