* Re: [PATCH v4] scsi: ufs: Quiesce all scsi devices before shutdown
@ 2020-07-27 7:30 ` Can Guo
0 siblings, 0 replies; 12+ messages in thread
From: Can Guo @ 2020-07-27 7:30 UTC (permalink / raw)
To: Stanley Chu
Cc: linux-scsi, martin.petersen, andy.teng, jejb, chun-hung.wu,
kuohong.wang, linux-kernel, asutoshd, avri.altman,
linux-mediatek, peter.wang, alim.akhtar, matthias.bgg, beanhuo,
chaotian.jing, cc.chou, linux-arm-kernel, bvanassche
Hi Stanley,
On 2020-07-24 22:01, Stanley Chu wrote:
> Currently, I/O requests can still be submitted to the UFS device while
> UFS is working through its shutdown flow. This may lead to races such
> as the scenario below, and the system may finally crash due to
> unclocked register accesses.
>
> To fix this kind of issue, explicitly quiesce all SCSI devices before
> UFS shutdown to block all I/O requests sent from the block layer.
>
> Example of racing scenario: While UFS device is runtime-suspended
>
> Thread #1: Executing UFS shutdown flow, e.g.,
> ufshcd_suspend(UFS_SHUTDOWN_PM)
> Thread #2: Executing runtime resume flow triggered by I/O request,
> e.g., ufshcd_resume(UFS_RUNTIME_PM)
>
I don't quite get it: how can you prevent block layer PM from initiating
hba runtime resume by quiescing the SCSI devices? Block layer PM
initiates async hba runtime resume in blk_queue_enter(), but quiescing
the SCSI devices can only prevent general I/O requests from passing
through the scsi_queue_rq() callback.
Say the hba is runtime-suspended. If an I/O request to sda is sent from
the block layer (sda must be runtime-suspended as well at this time),
blk_queue_enter() initiates an async runtime resume for sda. But since
sda's parents are also runtime-suspended, the RPM framework will
runtime-resume the devices in the order hba->host->target->sda.
In this case, ufshcd_resume() still runs concurrently, no?
Thanks,
Can Guo.
> This breaks the assumption that UFS PM flows cannot run concurrently,
> and some unexpected racing behavior may happen.
>
> Signed-off-by: Stanley Chu <stanley.chu@mediatek.com>
> ---
> drivers/scsi/ufs/ufshcd.c | 29 +++++++++++++++++++++++++++++
> 1 file changed, 29 insertions(+)
>
> diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
> index 9d180da77488..2e18596f3a8e 100644
> --- a/drivers/scsi/ufs/ufshcd.c
> +++ b/drivers/scsi/ufs/ufshcd.c
> @@ -159,6 +159,12 @@ struct ufs_pm_lvl_states ufs_pm_lvl_states[] = {
> {UFS_POWERDOWN_PWR_MODE, UIC_LINK_OFF_STATE},
> };
>
> +#define ufshcd_scsi_for_each_sdev(fn) \
> + list_for_each_entry(starget, &hba->host->__targets, siblings) { \
> + __starget_for_each_device(starget, NULL, \
> + fn); \
> + }
> +
> static inline enum ufs_dev_pwr_mode
> ufs_get_pm_lvl_to_dev_pwr_mode(enum ufs_pm_level lvl)
> {
> @@ -8620,6 +8626,13 @@ int ufshcd_runtime_idle(struct ufs_hba *hba)
> }
> EXPORT_SYMBOL(ufshcd_runtime_idle);
>
> +static void ufshcd_quiesce_sdev(struct scsi_device *sdev, void *data)
> +{
> + /* Suspended devices are already quiesced so can be skipped */
> + if (!pm_runtime_suspended(&sdev->sdev_gendev))
> + scsi_device_quiesce(sdev);
> +}
> +
> /**
> * ufshcd_shutdown - shutdown routine
> * @hba: per adapter instance
> @@ -8631,6 +8644,7 @@ EXPORT_SYMBOL(ufshcd_runtime_idle);
> int ufshcd_shutdown(struct ufs_hba *hba)
> {
> int ret = 0;
> + struct scsi_target *starget;
>
> if (!hba->is_powered)
> goto out;
> @@ -8644,6 +8658,21 @@ int ufshcd_shutdown(struct ufs_hba *hba)
> goto out;
> }
>
> + /*
> + * Quiesce all SCSI devices to prevent any non-PM requests sending
> + * from block layer during and after shutdown.
> + *
> + * Here we can not use blk_cleanup_queue() since PM requests
> + * (with BLK_MQ_REQ_PREEMPT flag) are still required to be sent
> + * through block layer. Therefore SCSI command queued after the
> + * scsi_target_quiesce() call returned will block until
> + * blk_cleanup_queue() is called.
> + *
> + * Besides, scsi_target_"un"quiesce (e.g., scsi_target_resume) can
> + * be ignored since shutdown is one-way flow.
> + */
> + ufshcd_scsi_for_each_sdev(ufshcd_quiesce_sdev);
> +
> ret = ufshcd_suspend(hba, UFS_SHUTDOWN_PM);
> out:
> if (ret)
_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel
* Re: [PATCH v4] scsi: ufs: Quiesce all scsi devices before shutdown
@ 2020-07-31  8:22 ` Stanley Chu
0 siblings, 0 replies; 12+ messages in thread
From: Stanley Chu @ 2020-07-31 8:22 UTC (permalink / raw)
To: Can Guo
Cc: linux-scsi, martin.petersen, avri.altman, alim.akhtar, jejb,
bvanassche, beanhuo, asutoshd, matthias.bgg, linux-mediatek,
linux-arm-kernel, linux-kernel,
Kuohong Wang (王國鴻),
Peter Wang (王信友),
Chun-Hung Wu (巫駿宏),
Andy Teng,
Chaotian Jing (井朝天),
CC Chou (周志杰)
Hi Can,
On Mon, 2020-07-27 at 15:30 +0800, Can Guo wrote:
> Hi Stanley,
>
> On 2020-07-24 22:01, Stanley Chu wrote:
> > Currently, I/O requests can still be submitted to the UFS device while
> > UFS is working through its shutdown flow. This may lead to races such
> > as the scenario below, and the system may finally crash due to
> > unclocked register accesses.
> >
> > To fix this kind of issue, explicitly quiesce all SCSI devices before
> > UFS shutdown to block all I/O requests sent from the block layer.
> >
> > Example of racing scenario: While UFS device is runtime-suspended
> >
> > Thread #1: Executing UFS shutdown flow, e.g.,
> > ufshcd_suspend(UFS_SHUTDOWN_PM)
> > Thread #2: Executing runtime resume flow triggered by I/O request,
> > e.g., ufshcd_resume(UFS_RUNTIME_PM)
> >
>
> I don't quite get it: how can you prevent block layer PM from initiating
> hba runtime resume by quiescing the SCSI devices? Block layer PM
> initiates async hba runtime resume in blk_queue_enter(), but quiescing
> the SCSI devices can only prevent general I/O requests from passing
> through the scsi_queue_rq() callback.
>
> Say the hba is runtime-suspended. If an I/O request to sda is sent from
> the block layer (sda must be runtime-suspended as well at this time),
> blk_queue_enter() initiates an async runtime resume for sda. But since
> sda's parents are also runtime-suspended, the RPM framework will
> runtime-resume the devices in the order hba->host->target->sda.
> In this case, ufshcd_resume() still runs concurrently, no?
>
You are right. This patch cannot fix the case you mentioned; it only
blocks "general I/O requests".
So perhaps we also need the patch below?
#2 scsi: ufs: Use pm_runtime_get_sync in shutdown flow
https://patchwork.kernel.org/patch/10964097/
Patch #2 lets the runtime PM framework manage and prevent concurrent
runtime operations in the device driver, and patch #1 (this patch) then
blocks general I/O requests once the ufshcd device has been resumed.
Thanks,
Stanley Chu
> Thanks,
>
> Can Guo.
>
> > This breaks the assumption that UFS PM flows cannot run concurrently,
> > and some unexpected racing behavior may happen.
> >
> > Signed-off-by: Stanley Chu <stanley.chu@mediatek.com>
> > ---
> > drivers/scsi/ufs/ufshcd.c | 29 +++++++++++++++++++++++++++++
> > 1 file changed, 29 insertions(+)
> >
> > diff --git a/drivers/scsi/ufs/ufshcd.c b/drivers/scsi/ufs/ufshcd.c
> > index 9d180da77488..2e18596f3a8e 100644
> > --- a/drivers/scsi/ufs/ufshcd.c
> > +++ b/drivers/scsi/ufs/ufshcd.c
> > @@ -159,6 +159,12 @@ struct ufs_pm_lvl_states ufs_pm_lvl_states[] = {
> > {UFS_POWERDOWN_PWR_MODE, UIC_LINK_OFF_STATE},
> > };
> >
> > +#define ufshcd_scsi_for_each_sdev(fn) \
> > + list_for_each_entry(starget, &hba->host->__targets, siblings) { \
> > + __starget_for_each_device(starget, NULL, \
> > + fn); \
> > + }
> > +
> > static inline enum ufs_dev_pwr_mode
> > ufs_get_pm_lvl_to_dev_pwr_mode(enum ufs_pm_level lvl)
> > {
> > @@ -8620,6 +8626,13 @@ int ufshcd_runtime_idle(struct ufs_hba *hba)
> > }
> > EXPORT_SYMBOL(ufshcd_runtime_idle);
> >
> > +static void ufshcd_quiesce_sdev(struct scsi_device *sdev, void *data)
> > +{
> > + /* Suspended devices are already quiesced so can be skipped */
> > + if (!pm_runtime_suspended(&sdev->sdev_gendev))
> > + scsi_device_quiesce(sdev);
> > +}
> > +
> > /**
> > * ufshcd_shutdown - shutdown routine
> > * @hba: per adapter instance
> > @@ -8631,6 +8644,7 @@ EXPORT_SYMBOL(ufshcd_runtime_idle);
> > int ufshcd_shutdown(struct ufs_hba *hba)
> > {
> > int ret = 0;
> > + struct scsi_target *starget;
> >
> > if (!hba->is_powered)
> > goto out;
> > @@ -8644,6 +8658,21 @@ int ufshcd_shutdown(struct ufs_hba *hba)
> > goto out;
> > }
> >
> > + /*
> > + * Quiesce all SCSI devices to prevent any non-PM requests sending
> > + * from block layer during and after shutdown.
> > + *
> > + * Here we can not use blk_cleanup_queue() since PM requests
> > + * (with BLK_MQ_REQ_PREEMPT flag) are still required to be sent
> > + * through block layer. Therefore SCSI command queued after the
> > + * scsi_target_quiesce() call returned will block until
> > + * blk_cleanup_queue() is called.
> > + *
> > + * Besides, scsi_target_"un"quiesce (e.g., scsi_target_resume) can
> > + * be ignored since shutdown is one-way flow.
> > + */
> > + ufshcd_scsi_for_each_sdev(ufshcd_quiesce_sdev);
> > +
> > ret = ufshcd_suspend(hba, UFS_SHUTDOWN_PM);
> > out:
> > if (ret)
* Re: [PATCH v4] scsi: ufs: Quiesce all scsi devices before shutdown
@ 2020-07-31  9:27 ` Stanley Chu
0 siblings, 0 replies; 12+ messages in thread
From: Stanley Chu @ 2020-07-31 9:27 UTC (permalink / raw)
To: Can Guo
Cc: linux-scsi, martin.petersen, avri.altman, alim.akhtar, jejb,
bvanassche, beanhuo, asutoshd, matthias.bgg, linux-mediatek,
linux-arm-kernel, linux-kernel,
Kuohong Wang (王國鴻),
Peter Wang (王信友),
Chun-Hung Wu (巫駿宏),
Andy Teng,
Chaotian Jing (井朝天),
CC Chou (周志杰)
Hi Can,
On Fri, 2020-07-31 at 16:58 +0800, Can Guo wrote:
> Hi Stanley,
>
> On 2020-07-31 16:22, Stanley Chu wrote:
> > Hi Can,
> >
> > On Mon, 2020-07-27 at 15:30 +0800, Can Guo wrote:
> >> Hi Stanley,
> >>
> >> On 2020-07-24 22:01, Stanley Chu wrote:
> >> > Currently, I/O requests can still be submitted to the UFS device while
> >> > UFS is working through its shutdown flow. This may lead to races such
> >> > as the scenario below, and the system may finally crash due to
> >> > unclocked register accesses.
> >> >
> >> > To fix this kind of issue, explicitly quiesce all SCSI devices before
> >> > UFS shutdown to block all I/O requests sent from the block layer.
> >> >
> >> > Example of racing scenario: While UFS device is runtime-suspended
> >> >
> >> > Thread #1: Executing UFS shutdown flow, e.g.,
> >> > ufshcd_suspend(UFS_SHUTDOWN_PM)
> >> > Thread #2: Executing runtime resume flow triggered by I/O request,
> >> > e.g., ufshcd_resume(UFS_RUNTIME_PM)
> >> >
> >>
> >> I don't quite get it: how can you prevent block layer PM from initiating
> >> hba runtime resume by quiescing the SCSI devices? Block layer PM
> >> initiates async hba runtime resume in blk_queue_enter(), but quiescing
> >> the SCSI devices can only prevent general I/O requests from passing
> >> through the scsi_queue_rq() callback.
> >>
> >> Say the hba is runtime-suspended. If an I/O request to sda is sent from
> >> the block layer (sda must be runtime-suspended as well at this time),
> >> blk_queue_enter() initiates an async runtime resume for sda. But since
> >> sda's parents are also runtime-suspended, the RPM framework will
> >> runtime-resume the devices in the order hba->host->target->sda.
> >> In this case, ufshcd_resume() still runs concurrently, no?
> >>
> >
> > You are right. This patch cannot fix the case you mentioned; it only
> > blocks "general I/O requests".
> >
> > So perhaps we also need the patch below?
> >
> > #2 scsi: ufs: Use pm_runtime_get_sync in shutdown flow
> > https://patchwork.kernel.org/patch/10964097/
>
> That is what I am talking about; we definitely need this. Since
> you are already working on the fixes to the shutdown path, I will
> not upload my fixes (they look basically the same as yours). However,
> regarding the new change: if pm_runtime_get_sync(hba->dev) < 0, the
> hba can still be runtime ACTIVE, so why go directly to "out" without
> checking the hba's runtime status?
>
Thanks for the reminder. I will fix it and resend both patches as
a new series to fix the shutdown path.
Thanks so much,
Stanley Chu