* [PATCH 0/2] hpsa: fix rmmod issues
@ 2016-11-21 14:04 Martin Wilck
  2016-11-21 14:04 ` [PATCH 1/2] hpsa: cleanup sas_phy structures in sysfs when unloading Martin Wilck
                   ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Martin Wilck @ 2016-11-21 14:04 UTC (permalink / raw)
  To: don.brace
  Cc: storagedev, iss_storagedev, linux-scsi, JBottomley, hch, hare,
	Martin Wilck

This patch set fixes two issues I encountered when removing the
hpsa modules with rmmod.

Comments and reviews are welcome.

Martin Wilck (2):
  hpsa: cleanup sas_phy structures in sysfs when unloading
  hpsa: destroy sas transport properties before scsi_host

 drivers/scsi/hpsa.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

-- 
2.10.1



* [PATCH 1/2] hpsa: cleanup sas_phy structures in sysfs when unloading
  2016-11-21 14:04 [PATCH 0/2] hpsa: fix rmmod issues Martin Wilck
@ 2016-11-21 14:04 ` Martin Wilck
  2016-11-21 14:13   ` Johannes Thumshirn
  2016-11-29  1:52   ` Don Brace
  2016-11-21 14:04 ` [PATCH 2/2] hpsa: destroy sas transport properties before scsi_host Martin Wilck
  2016-12-01 23:22 ` [PATCH 0/2] hpsa: fix rmmod issues Don Brace
  2 siblings, 2 replies; 15+ messages in thread
From: Martin Wilck @ 2016-11-21 14:04 UTC (permalink / raw)
  To: don.brace
  Cc: storagedev, iss_storagedev, linux-scsi, JBottomley, hch, hare,
	Martin Wilck

When the hpsa module is unloaded using rmmod, dangling
symlinks remain under /sys/class/sas_phy. Fix this by
calling sas_phy_delete() rather than sas_phy_free (which,
according to comments, should not be called for PHYs that
have been set up successfully, anyway).

References: bsc#1010946.
Signed-off-by: Martin Wilck <mwilck@suse.de>
---
 drivers/scsi/hpsa.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
index efe2f36..8ec77c3 100644
--- a/drivers/scsi/hpsa.c
+++ b/drivers/scsi/hpsa.c
@@ -9547,9 +9547,9 @@ static void hpsa_free_sas_phy(struct hpsa_sas_phy *hpsa_sas_phy)
 	struct sas_phy *phy = hpsa_sas_phy->phy;
 
 	sas_port_delete_phy(hpsa_sas_phy->parent_port->port, phy);
-	sas_phy_free(phy);
 	if (hpsa_sas_phy->added_to_port)
 		list_del(&hpsa_sas_phy->phy_list_entry);
+	sas_phy_delete(phy);
 	kfree(hpsa_sas_phy);
 }
 
-- 
2.10.1
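
For reference, the difference between the two scsi_transport_sas helpers is
roughly the following (a simplified sketch of drivers/scsi/scsi_transport_sas.c
from memory, not the verbatim kernel source):

        void sas_phy_free(struct sas_phy *phy)
        {
                /* drops the transport-class bookkeeping and the reference only;
                 * the phy's struct device, and with it the sysfs entries and
                 * the /sys/class/sas_phy symlink, is never unregistered */
                transport_destroy_device(&phy->dev);
                put_device(&phy->dev);
        }

        void sas_phy_delete(struct sas_phy *phy)
        {
                struct device *dev = &phy->dev;

                /* full teardown: remove the device from the transport class
                 * and from sysfs before dropping the reference */
                transport_remove_device(dev);
                device_del(dev);
                transport_destroy_device(dev);
                put_device(dev);
        }

This is why sas_phy_free() alone leaves dangling symlinks under
/sys/class/sas_phy, and it matches the comment referenced in the changelog:
sas_phy_free() is only meant for phys that were never successfully added.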



* [PATCH 2/2] hpsa: destroy sas transport properties before scsi_host
  2016-11-21 14:04 [PATCH 0/2] hpsa: fix rmmod issues Martin Wilck
  2016-11-21 14:04 ` [PATCH 1/2] hpsa: cleanup sas_phy structures in sysfs when unloading Martin Wilck
@ 2016-11-21 14:04 ` Martin Wilck
  2016-11-21 14:14   ` Johannes Thumshirn
  2016-12-01 23:22 ` [PATCH 0/2] hpsa: fix rmmod issues Don Brace
  2 siblings, 1 reply; 15+ messages in thread
From: Martin Wilck @ 2016-11-21 14:04 UTC (permalink / raw)
  To: don.brace
  Cc: storagedev, iss_storagedev, linux-scsi, JBottomley, hch, hare,
	Martin Wilck

Unloading the hpsa driver causes warnings

[ 1063.793652] WARNING: CPU: 1 PID: 4850 at ../fs/sysfs/group.c:237 device_del+0x54/0x240()
[ 1063.793659] sysfs group ffffffff81cf21a0 not found for kobject 'port-2:0'

with two different stacks:
1)
[ 1063.793774]  [<ffffffff81448af4>] device_del+0x54/0x240
[ 1063.793780]  [<ffffffff8145178a>] transport_remove_classdev+0x4a/0x60
[ 1063.793784]  [<ffffffff81451216>] attribute_container_device_trigger+0xa6/0xb0
[ 1063.793802]  [<ffffffffa0105d46>] sas_port_delete+0x126/0x160 [scsi_transport_sas]
[ 1063.793819]  [<ffffffffa036ebcc>] hpsa_free_sas_port+0x3c/0x70 [hpsa]

2)
[ 1063.797103]  [<ffffffff81448af4>] device_del+0x54/0x240
[ 1063.797118]  [<ffffffffa0105d4e>] sas_port_delete+0x12e/0x160 [scsi_transport_sas]
[ 1063.797134]  [<ffffffffa036ebcc>] hpsa_free_sas_port+0x3c/0x70 [hpsa]

This is caused by the fact that host device hostX is deleted before the
SAS transport devices hostX/port-a:b.

This patch fixes this by reversing the order of device deletions.

References: bsc#1010946
Signed-off-by: Martin Wilck <mwilck@suse.de>
---
 drivers/scsi/hpsa.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
index 8ec77c3..f23f680 100644
--- a/drivers/scsi/hpsa.c
+++ b/drivers/scsi/hpsa.c
@@ -9020,6 +9020,7 @@ static void hpsa_remove_one(struct pci_dev *pdev)
 	destroy_workqueue(h->rescan_ctlr_wq);
 	destroy_workqueue(h->resubmit_wq);
 
+	hpsa_delete_sas_host(h);
 	/*
 	 * Call before disabling interrupts.
 	 * scsi_remove_host can trigger I/O operations especially
@@ -9054,7 +9055,6 @@ static void hpsa_remove_one(struct pci_dev *pdev)
 	h->lockup_detected = NULL;			/* init_one 2 */
 	/* (void) pci_disable_pcie_error_reporting(pdev); */	/* init_one 1 */
 
-	hpsa_delete_sas_host(h);
 
 	kfree(h);					/* init_one 1 */
 }
-- 
2.10.1
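
For reference, the teardown order in hpsa_remove_one() after this patch is,
schematically (an abridged sketch reconstructed from the hunks above; the
elided steps and the exact helper/field names are assumptions, not a copy of
the driver source):

        static void hpsa_remove_one(struct pci_dev *pdev)
        {
                struct ctlr_info *h = pci_get_drvdata(pdev);

                /* ... */
                destroy_workqueue(h->rescan_ctlr_wq);
                destroy_workqueue(h->resubmit_wq);

                /* delete the SAS transport objects (port-a:b, phy-a:b) while
                 * their parent device hostX is still registered in sysfs ... */
                hpsa_delete_sas_host(h);

                /* ... and only afterwards remove the Scsi_Host, which deletes
                 * hostX together with whatever children remain */
                scsi_remove_host(h->scsi_host);

                /* ... remaining init_one teardown steps ... */
                kfree(h);
        }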



* Re: [PATCH 1/2] hpsa: cleanup sas_phy structures in sysfs when unloading
  2016-11-21 14:04 ` [PATCH 1/2] hpsa: cleanup sas_phy structures in sysfs when unloading Martin Wilck
@ 2016-11-21 14:13   ` Johannes Thumshirn
  2016-11-21 15:13     ` Martin Wilck
  2016-11-22  3:47     ` Martin K. Petersen
  2016-11-29  1:52   ` Don Brace
  1 sibling, 2 replies; 15+ messages in thread
From: Johannes Thumshirn @ 2016-11-21 14:13 UTC (permalink / raw)
  To: Martin Wilck
  Cc: don.brace, storagedev, iss_storagedev, linux-scsi, JBottomley, hch, hare

On Mon, Nov 21, 2016 at 03:04:28PM +0100, Martin Wilck wrote:
> When the hpsa module is unloaded using rmmod, dangling
> symlinks remain under /sys/class/sas_phy. Fix this by
> calling sas_phy_delete() rather than sas_phy_free (which,
> according to comments, should not be called for PHYs that
> have been set up successfully, anyway).
> 
> References: bsc#1010946.

I don't think the SUSE bugzilla tag is of relevance upstream. But for sake of
completeness we could add a 
Link: https://bugzilla.suse.com/show_bug.cgi?id=1010946

> Signed-off-by: Martin Wilck <mwilck@suse.de>

Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>

> ---
>  drivers/scsi/hpsa.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
> index efe2f36..8ec77c3 100644
> --- a/drivers/scsi/hpsa.c
> +++ b/drivers/scsi/hpsa.c
> @@ -9547,9 +9547,9 @@ static void hpsa_free_sas_phy(struct hpsa_sas_phy *hpsa_sas_phy)
>  	struct sas_phy *phy = hpsa_sas_phy->phy;
>  
>  	sas_port_delete_phy(hpsa_sas_phy->parent_port->port, phy);
> -	sas_phy_free(phy);
>  	if (hpsa_sas_phy->added_to_port)
>  		list_del(&hpsa_sas_phy->phy_list_entry);
> +	sas_phy_delete(phy);
>  	kfree(hpsa_sas_phy);
>  }
>  
> -- 
> 2.10.1
> 
> --
> To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850


* Re: [PATCH 2/2] hpsa: destroy sas transport properties before scsi_host
  2016-11-21 14:04 ` [PATCH 2/2] hpsa: destroy sas transport properties before scsi_host Martin Wilck
@ 2016-11-21 14:14   ` Johannes Thumshirn
  2017-10-10 23:04     ` Don Brace
  0 siblings, 1 reply; 15+ messages in thread
From: Johannes Thumshirn @ 2016-11-21 14:14 UTC (permalink / raw)
  To: Martin Wilck
  Cc: don.brace, storagedev, iss_storagedev, linux-scsi, JBottomley, hch, hare

On Mon, Nov 21, 2016 at 03:04:29PM +0100, Martin Wilck wrote:
> Unloading the hpsa driver causes warnings
> 
> [ 1063.793652] WARNING: CPU: 1 PID: 4850 at ../fs/sysfs/group.c:237 device_del+0x54/0x240()
> [ 1063.793659] sysfs group ffffffff81cf21a0 not found for kobject 'port-2:0'
> 
> with two different stacks:
> 1)
> [ 1063.793774]  [<ffffffff81448af4>] device_del+0x54/0x240
> [ 1063.793780]  [<ffffffff8145178a>] transport_remove_classdev+0x4a/0x60
> [ 1063.793784]  [<ffffffff81451216>] attribute_container_device_trigger+0xa6/0xb0
> [ 1063.793802]  [<ffffffffa0105d46>] sas_port_delete+0x126/0x160 [scsi_transport_sas]
> [ 1063.793819]  [<ffffffffa036ebcc>] hpsa_free_sas_port+0x3c/0x70 [hpsa]
> 
> 2)
> [ 1063.797103]  [<ffffffff81448af4>] device_del+0x54/0x240
> [ 1063.797118]  [<ffffffffa0105d4e>] sas_port_delete+0x12e/0x160 [scsi_transport_sas]
> [ 1063.797134]  [<ffffffffa036ebcc>] hpsa_free_sas_port+0x3c/0x70 [hpsa]
> 
> This is caused by the fact that host device hostX is deleted before the
> SAS transport devices hostX/port-a:b.
> 
> This patch fixes this by reversing the order of device deletions.
> 
> References: bsc#1010946
> Signed-off-by: Martin Wilck <mwilck@suse.de>
> ---

With the References changed to the bug link like in patch 1/2 
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850


* Re: [PATCH 1/2] hpsa: cleanup sas_phy structures in sysfs when unloading
  2016-11-21 14:13   ` Johannes Thumshirn
@ 2016-11-21 15:13     ` Martin Wilck
  2016-11-22  3:47     ` Martin K. Petersen
  1 sibling, 0 replies; 15+ messages in thread
From: Martin Wilck @ 2016-11-21 15:13 UTC (permalink / raw)
  To: Johannes Thumshirn, martin.petersen
  Cc: don.brace, storagedev, linux-scsi, jejb, hch, hare

On Mon, 2016-11-21 at 15:13 +0100, Johannes Thumshirn wrote:
> On Mon, Nov 21, 2016 at 03:04:28PM +0100, Martin Wilck wrote:
> > When the hpsa module is unloaded using rmmod, dangling
> > symlinks remain under /sys/class/sas_phy. Fix this by
> > calling sas_phy_delete() rather than sas_phy_free (which,
> > according to comments, should not be called for PHYs that
> > have been set up successfully, anyway).
> > 
> > References: bsc#1010946.
> 
> I don't think the SUSE bugzilla tag is of relevance upstream. But for
> sake of
> completeness we could add a 
> Link: https://bugzilla.suse.com/show_bug.cgi?id=1010946

I am sorry for this mistake.

@Martin, do you want me to re-submit with these references fixed?

I also apologize for the broken Cc list of the first series; I hope I
got it right this time.

Regards
Martin


> 
> > Signed-off-by: Martin Wilck <mwilck@suse.de>
> 
> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>



* Re: [PATCH 1/2] hpsa: cleanup sas_phy structures in sysfs when unloading
  2016-11-21 14:13   ` Johannes Thumshirn
  2016-11-21 15:13     ` Martin Wilck
@ 2016-11-22  3:47     ` Martin K. Petersen
  2016-11-22  8:10       ` Johannes Thumshirn
  1 sibling, 1 reply; 15+ messages in thread
From: Martin K. Petersen @ 2016-11-22  3:47 UTC (permalink / raw)
  To: Johannes Thumshirn
  Cc: Martin Wilck, don.brace, storagedev, iss_storagedev, linux-scsi,
	JBottomley, hch, hare

>>>>> "Johannes" == Johannes Thumshirn <jthumshirn@suse.de> writes:

Johannes> I don't think the SUSE bugzilla tag is of relevance upstream.

Nope. I'd rather have really comprehensive patch descriptions.

Johannes> But for sake of completeness we could add a Link:
Johannes> https://bugzilla.suse.com/show_bug.cgi?id=1010946

"You are not authorized to access bug #1010946. To see this bug, you
must first log in to an account with the appropriate permissions."

We'll see what the Microsemi folks think of the patches. If the patches
get acked I can just drop the bsc tag when I apply.

-- 
Martin K. Petersen	Oracle Linux Engineering


* Re: [PATCH 1/2] hpsa: cleanup sas_phy structures in sysfs when unloading
  2016-11-22  3:47     ` Martin K. Petersen
@ 2016-11-22  8:10       ` Johannes Thumshirn
  0 siblings, 0 replies; 15+ messages in thread
From: Johannes Thumshirn @ 2016-11-22  8:10 UTC (permalink / raw)
  To: Martin K. Petersen
  Cc: Martin Wilck, don.brace, storagedev, iss_storagedev, linux-scsi,
	JBottomley, hch, hare

On Mon, Nov 21, 2016 at 10:47:20PM -0500, Martin K. Petersen wrote:
> >>>>> "Johannes" == Johannes Thumshirn <jthumshirn@suse.de> writes:
> 
> Johannes> I don't think the SUSE bugzilla tag is of relevance upstream.
> 
> Nope. I'd rather have really comprehensive patch descriptions.
> 
> Johannes> But for sake of completeness we could add a Link:
> Johannes> https://bugzilla.suse.com/show_bug.cgi?id=1010946
> 
> "You are not authorized to access bug #1010946. To see this bug, you
> must first log in to an account with the appropriate permissions."

I'm sorry, I always thought that the bugs were visible from the outside if
we don't mark them as closed. Apparently it's only the openSUSE
bugs that are visible.

	Johannes

-- 
Johannes Thumshirn                                          Storage
jthumshirn@suse.de                                +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850


* RE: [PATCH 1/2] hpsa: cleanup sas_phy structures in sysfs when unloading
  2016-11-21 14:04 ` [PATCH 1/2] hpsa: cleanup sas_phy structures in sysfs when unloading Martin Wilck
  2016-11-21 14:13   ` Johannes Thumshirn
@ 2016-11-29  1:52   ` Don Brace
  2016-11-29  9:16     ` Martin Wilck
  1 sibling, 1 reply; 15+ messages in thread
From: Don Brace @ 2016-11-29  1:52 UTC (permalink / raw)
  To: Martin Wilck
  Cc: dl-esc-Team ESD Storage Dev Support, iss_storagedev, linux-scsi,
	JBottomley, hch, hare

> -----Original Message-----
> From: Martin Wilck [mailto:mwilck@suse.de]
> Sent: Monday, November 21, 2016 8:04 AM
> To: Don Brace
> Cc: dl-esc-Team ESD Storage Dev Support; iss_storagedev@hp.com; linux-
> scsi@vger.kernel.org; JBottomley@odin.com; hch@lst.de; hare@suse.de;
> Martin Wilck
> Subject: [PATCH 1/2] hpsa: cleanup sas_phy structures in sysfs when
> unloading
> 
> EXTERNAL EMAIL
> 
> 
> When the hpsa module is unloaded using rmmod, dangling
> symlinks remain under /sys/class/sas_phy. Fix this by
> calling sas_phy_delete() rather than sas_phy_free (which,
> according to comments, should not be called for PHYs that
> have been set up successfully, anyway).
> 
> References: bsc#1010946.
> Signed-off-by: Martin Wilck <mwilck@suse.de>
> ---
>  drivers/scsi/hpsa.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
> index efe2f36..8ec77c3 100644
> --- a/drivers/scsi/hpsa.c
> +++ b/drivers/scsi/hpsa.c
> @@ -9547,9 +9547,9 @@ static void hpsa_free_sas_phy(struct hpsa_sas_phy
> *hpsa_sas_phy)
>         struct sas_phy *phy = hpsa_sas_phy->phy;
> 
>         sas_port_delete_phy(hpsa_sas_phy->parent_port->port, phy);
> -       sas_phy_free(phy);
>         if (hpsa_sas_phy->added_to_port)
>                 list_del(&hpsa_sas_phy->phy_list_entry);
> +       sas_phy_delete(phy);
>         kfree(hpsa_sas_phy);
>  }
> 
> --
> 2.10.1

I tried these patches on 4.9.0-rc7; was this correct?

I got the following stack trace:
[  231.192289] ------------[ cut here ]------------
[  231.214333] WARNING: CPU: 51 PID: 15876 at fs/sysfs/group.c:237 sysfs_remove_group+0x8e/0x90
[  231.254371] sysfs group 'power' not found for kobject '4:0:0:0'
[  231.282637] Modules linked in: ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 nf_conntrack_ipv6 nf_defrag_ipv6 ipt_REJECT nf_reject_ipv4 nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack cfg80211 rfkill ebtable_nat ebtable_broute bridge stp llc ebtable_filter ebtables ip6table_mangle ip6table_security ip6table_raw ip6table_filter ip6_tables iptable_mangle iptable_security iptable_raw iptable_filter ip_tables sb_edac edac_core x86_pkg_temp_thermal coretemp crct10dif_pclmul crc32_pclmul iTCO_wdt ghash_clmulni_intel iTCO_vendor_support aesni_intel lrw gf128mul glue_helper ablk_helper cryptd osst pcspkr ch st ioatdma lpc_ich hpilo hpwdt mfd_core dca ipmi_si ipmi_msghandler pcc_cpufreq wmi acpi_cpufreq acpi_power_meter uinput mgag200 i2c_algo_bit drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops
[  231.620046]  ttm drm crc32c_intel tg3 serio_raw ptp usb_storage i2c_core hpsa(-) pps_core scsi_transport_sas dm_mirror dm_region_hash dm_log dm_mod
[  231.676364] CPU: 51 PID: 15876 Comm: rmmod Not tainted 4.9.0-rc7+ #22
[  231.706769] Hardware name: HP ProLiant DL580 Gen8, BIOS P79 08/18/2016
[  231.737730]  ffffc9002123bbf0 ffffffff815909bd ffffc9002123bc40 0000000000000000
[  231.772474]  ffffc9002123bc30 ffffffff81090901 000000ed00000246 0000000000000000
[  231.807897]  ffffffff81f71560 ffff8820529faea8 ffff88205542ea60 00000000024b0090
[  231.842671] Call Trace:
[  231.854426]  [<ffffffff815909bd>] dump_stack+0x85/0xc8
[  231.878826]  [<ffffffff81090901>] __warn+0xd1/0xf0
[  231.901476]  [<ffffffff8109097f>] warn_slowpath_fmt+0x5f/0x80
[  231.929515]  [<ffffffff8196e5de>] ? mutex_unlock+0xe/0x10
[  231.955067]  [<ffffffff812f1a6a>] ? kernfs_find_and_get_ns+0x4a/0x60
[  231.985095]  [<ffffffff812f56ae>] sysfs_remove_group+0x8e/0x90
[  232.012416]  [<ffffffff816dd1b7>] dpm_sysfs_remove+0x57/0x60
[  232.038904]  [<ffffffff816cf928>] device_del+0x58/0x270
[  232.064056]  [<ffffffff816cfb5a>] device_unregister+0x1a/0x60
[  232.091138]  [<ffffffff8157d470>] bsg_unregister_queue+0x60/0xa0
[  232.119498]  [<ffffffff8170e2ea>] __scsi_remove_device+0xaa/0xd0
[  232.147745]  [<ffffffff8170c369>] scsi_forget_host+0x69/0x70
[  232.174666]  [<ffffffff816ff292>] scsi_remove_host+0x82/0x130
[  232.201804]  [<ffffffffa007cfc3>] hpsa_remove_one+0x93/0x190 [hpsa]
[  232.231329]  [<ffffffff815dd8d9>] pci_device_remove+0x39/0xc0
[  232.258089]  [<ffffffff816d4aca>] __device_release_driver+0x9a/0x150
[  232.288005]  [<ffffffff816d4ca1>] driver_detach+0xc1/0xd0
[  232.313479]  [<ffffffff816d3a98>] bus_remove_driver+0x58/0xd0
[  232.341280]  [<ffffffff816d572c>] driver_unregister+0x2c/0x50
[  232.369272]  [<ffffffff815dbf3a>] pci_unregister_driver+0x2a/0x80
[  232.398723]  [<ffffffffa0085869>] hpsa_cleanup+0x10/0x7a7 [hpsa]
[  232.428094]  [<ffffffff8113571c>] SyS_delete_module+0x1bc/0x220
[  232.456716]  [<ffffffff81003c0c>] do_syscall_64+0x6c/0x200
[  232.483125]  [<ffffffff81971d49>] entry_SYSCALL64_slow_path+0x25/0x25
[  232.514162] ---[ end trace 3c490662736284eb ]---


Thanks,
Don Brace
ESC - Smart Storage
Microsemi Corporation


* Re: [PATCH 1/2] hpsa: cleanup sas_phy structures in sysfs when unloading
  2016-11-29  1:52   ` Don Brace
@ 2016-11-29  9:16     ` Martin Wilck
  0 siblings, 0 replies; 15+ messages in thread
From: Martin Wilck @ 2016-11-29  9:16 UTC (permalink / raw)
  To: Don Brace
  Cc: dl-esc-Team ESD Storage Dev Support, iss_storagedev, linux-scsi,
	James Bottomley <jejb@linux.vnet.ibm.com>, hch@lst.de,
	hare

Hi Don,

On Tue, 2016-11-29 at 01:52 +0000, Don Brace wrote:
> > -----Original Message-----
> > From: Martin Wilck [mailto:mwilck@suse.de]
> > Sent: Monday, November 21, 2016 8:04 AM
> > To: Don Brace
> > Cc: dl-esc-Team ESD Storage Dev Support; iss_storagedev@hp.com;
> > linux-
> > scsi@vger.kernel.org; JBottomley@odin.com; hch@lst.de; hare@suse.de
> > ;
> > Martin Wilck
> > Subject: [PATCH 1/2] hpsa: cleanup sas_phy structures in sysfs when
> > unloading
> > 
> > EXTERNAL EMAIL
> > 
> > 
> > When the hpsa module is unloaded using rmmod, dangling
> > symlinks remain under /sys/class/sas_phy. Fix this by
> > calling sas_phy_delete() rather than sas_phy_free (which,
> > according to comments, should not be called for PHYs that
> > have been set up successfully, anyway).
> > 
> > References: bsc#1010946.
> > Signed-off-by: Martin Wilck <mwilck@suse.de>
> > ---
> >  drivers/scsi/hpsa.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/drivers/scsi/hpsa.c b/drivers/scsi/hpsa.c
> > index efe2f36..8ec77c3 100644
> > --- a/drivers/scsi/hpsa.c
> > +++ b/drivers/scsi/hpsa.c
> > @@ -9547,9 +9547,9 @@ static void hpsa_free_sas_phy(struct
> > hpsa_sas_phy
> > *hpsa_sas_phy)
> >         struct sas_phy *phy = hpsa_sas_phy->phy;
> > 
> >         sas_port_delete_phy(hpsa_sas_phy->parent_port->port, phy);
> > -       sas_phy_free(phy);
> >         if (hpsa_sas_phy->added_to_port)
> >                 list_del(&hpsa_sas_phy->phy_list_entry);
> > +       sas_phy_delete(phy);
> >         kfree(hpsa_sas_phy);
> >  }
> > 
> > --
> > 2.10.1
> 
> I tried these patches on 4.9.0-rc7; was this correct?
> 
> I got the following stack trace:
> [  231.192289] ------------[ cut here ]------------
> [  231.214333] WARNING: CPU: 51 PID: 15876 at fs/sysfs/group.c:237
> sysfs_remove_group+0x8e/0x90
> [  231.254371] sysfs group 'power' not found for kobject '4:0:0:0'

[...]

The stack traces should be gone if you apply the 2nd patch of the
series ("hpsa: destroy sas transport properties before scsi_host").

My testing (done with a SLES12 kernel), without my patches, showed
these traces for the removal of "sas_port" structures. Adding PATCH 1/2
indeed adds more of these warnings (now for "sas_port" *and*
"sas_phy"). But that's not the fault of this patch; it's caused by the
sequence of actions in hpsa_remove_one() and it's fixed in PATCH 2/2.
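
Schematically, the problematic order without PATCH 2/2 is roughly this
(a simplified sketch, not the literal driver code):

        scsi_remove_host(h->scsi_host);  /* hostX and its sysfs subtree are
                                          * deleted first */
        /* ... interrupts disabled, other resources freed ... */
        hpsa_delete_sas_host(h);         /* device_del() on port-a:b / phy-a:b
                                          * now warns "sysfs group ... not
                                          * found for kobject" */

PATCH 2/2 simply moves the hpsa_delete_sas_host() call in front of
scsi_remove_host().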

Regards
Martin



* RE: [PATCH 0/2] hpsa: fix rmmod issues
  2016-11-21 14:04 [PATCH 0/2] hpsa: fix rmmod issues Martin Wilck
  2016-11-21 14:04 ` [PATCH 1/2] hpsa: cleanup sas_phy structures in sysfs when unloading Martin Wilck
  2016-11-21 14:04 ` [PATCH 2/2] hpsa: destroy sas transport properties before scsi_host Martin Wilck
@ 2016-12-01 23:22 ` Don Brace
  2016-12-02  8:58   ` Martin Wilck
  2 siblings, 1 reply; 15+ messages in thread
From: Don Brace @ 2016-12-01 23:22 UTC (permalink / raw)
  To: Martin Wilck
  Cc: dl-esc-Team ESD Storage Dev Support, iss_storagedev, linux-scsi,
	JBottomley, hch, hare

> -----Original Message-----
> From: Martin Wilck [mailto:mwilck@suse.de]
> Sent: Monday, November 21, 2016 8:04 AM
> To: Don Brace
> Cc: dl-esc-Team ESD Storage Dev Support; iss_storagedev@hp.com; linux-
> scsi@vger.kernel.org; JBottomley@odin.com; hch@lst.de; hare@suse.de;
> Martin Wilck
> Subject: [PATCH 0/2] hpsa: fix rmmod issues
> 
> EXTERNAL EMAIL
> 
> 
> This patch set fixes two issues I encountered when removing the
> hpsa modules with rmmod.
> 
> Comments and reviews are welcome.
> 
> Martin Wilck (2):
>   hpsa: cleanup sas_phy structures in sysfs when unloading
>   hpsa: destroy sas transport properties before scsi_host
> 
>  drivers/scsi/hpsa.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> --
> 2.10.1

I have both patches applied and I still get stack traces.



[252338.604903] ------------[ cut here ]------------
[252338.627899] WARNING: CPU: 69 PID: 23977 at fs/sysfs/group.c:237 sysfs_remove_group+0x8e/0x90
[252338.668726] sysfs group 'power' not found for kobject '5:0:0:0'
[252338.697526] Modules linked in: hpsa(OE-) scsi_transport_sas(OE) ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 nf_conntrack_ipv6 nf_defrag_ipv6 ipt_REJECT nf_reject_ipv4 nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack cfg80211 rfkill ebtable_nat ebtable_broute bridge stp llc ebtable_filter ebtables ip6table_mangle ip6table_security ip6table_raw ip6table_filter ip6_tables iptable_mangle iptable_security iptable_raw iptable_filter ip_tables osst ch st sb_edac edac_core x86_pkg_temp_thermal coretemp crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel lrw iTCO_wdt gf128mul glue_helper iTCO_vendor_support ablk_helper cryptd pcspkr ioatdma lpc_ich hpwdt hpilo mfd_core dca ipmi_si wmi ipmi_msghandler pcc_cpufreq acpi_cpufreq acpi_power_meter uinput mgag200 i2c_algo_bit drm_kms_helper syscopyarea
[252339.038528]  sysfillrect sysimgblt fb_sys_fops ttm drm crc32c_intel serio_raw tg3 ptp usb_storage i2c_core pps_core dm_mirror dm_region_hash dm_log dm_mod [last unloaded: scsi_transport_sas]
[252339.113923] CPU: 69 PID: 23977 Comm: rmmod Tainted: G        W  OE   4.9.0-rc71+ #24
[252339.151296] Hardware name: HP ProLiant DL580 Gen8, BIOS P79 08/18/2016
[252339.183354]  ffffc9000ff1bbf0 ffffffff815909bd ffffc9000ff1bc40 0000000000000000
[252339.219046]  ffffc9000ff1bc30 ffffffff81090901 000000ed00000246 0000000000000000
[252339.255044]  ffffffff81f71560 ffff88204584dd38 ffff882051e93670 0000000000ae1090
[252339.290927] Call Trace:
[252339.303968]  [<ffffffff815909bd>] dump_stack+0x85/0xc8
[252339.329624]  [<ffffffff81090901>] __warn+0xd1/0xf0
[252339.354037]  [<ffffffff8109097f>] warn_slowpath_fmt+0x5f/0x80
[252339.382084]  [<ffffffff8196e5de>] ? mutex_unlock+0xe/0x10
[252339.408515]  [<ffffffff812f1a6a>] ? kernfs_find_and_get_ns+0x4a/0x60
[252339.439571]  [<ffffffff812f56ae>] sysfs_remove_group+0x8e/0x90
[252339.468069]  [<ffffffff816dd1b7>] dpm_sysfs_remove+0x57/0x60
[252339.495701]  [<ffffffff816cf928>] device_del+0x58/0x270
[252339.521474]  [<ffffffff816cfb5a>] device_unregister+0x1a/0x60
[252339.549796]  [<ffffffff8157d470>] bsg_unregister_queue+0x60/0xa0
[252339.578999]  [<ffffffff8170e2ea>] __scsi_remove_device+0xaa/0xd0
[252339.608230]  [<ffffffff8170c369>] scsi_forget_host+0x69/0x70
[252339.635723]  [<ffffffff816ff292>] scsi_remove_host+0x82/0x130
[252339.663738]  [<ffffffffa038cfc3>] hpsa_remove_one+0x93/0x190 [hpsa]
[252339.694960]  [<ffffffff815dd8d9>] pci_device_remove+0x39/0xc0
[252339.723128]  [<ffffffff816d4aca>] __device_release_driver+0x9a/0x150
[252339.753539]  [<ffffffff816d4ca1>] driver_detach+0xc1/0xd0
[252339.779784]  [<ffffffff816d3a98>] bus_remove_driver+0x58/0xd0
[252339.807519]  [<ffffffff816d572c>] driver_unregister+0x2c/0x50
[252339.835948]  [<ffffffff815dbf3a>] pci_unregister_driver+0x2a/0x80
[252339.865660]  [<ffffffffa0395869>] hpsa_cleanup+0x10/0x7a7 [hpsa]
[252339.894682]  [<ffffffff8113571c>] SyS_delete_module+0x1bc/0x220
[252339.924020]  [<ffffffff81003c0c>] do_syscall_64+0x6c/0x200
[252339.950862]  [<ffffffff81971d49>] entry_SYSCALL64_slow_path+0x25/0x25
[252339.982292] ---[ end trace 03cf2c42f2f658e5 ]---


* Re: [PATCH 0/2] hpsa: fix rmmod issues
  2016-12-01 23:22 ` [PATCH 0/2] hpsa: fix rmmod issues Don Brace
@ 2016-12-02  8:58   ` Martin Wilck
  2016-12-02 15:44     ` Don Brace
  0 siblings, 1 reply; 15+ messages in thread
From: Martin Wilck @ 2016-12-02  8:58 UTC (permalink / raw)
  To: Don Brace
  Cc: dl-esc-Team ESD Storage Dev Support, iss_storagedev, linux-scsi,
	James Bottomley, hch, hare, jthumshirn

On Thu, 2016-12-01 at 23:22 +0000, Don Brace wrote:
> > -----Original Message-----
> > From: Martin Wilck [mailto:mwilck@suse.de]
> > Sent: Monday, November 21, 2016 8:04 AM
> > To: Don Brace
> > Cc: dl-esc-Team ESD Storage Dev Support; iss_storagedev@hp.com;
> > linux-
> > scsi@vger.kernel.org; JBottomley@odin.com; hch@lst.de; hare@suse.de
> > ;
> > Martin Wilck
> > Subject: [PATCH 0/2] hpsa: fix rmmod issues
> > 
> > EXTERNAL EMAIL
> > 
> > 
> > This patch set fixes two issues I encountered when removing the
> > hpsa modules with rmmod.
> > 
> > Comments and reviews are welcome.
> > 
> > Martin Wilck (2):
> >   hpsa: cleanup sas_phy structures in sysfs when unloading
> >   hpsa: destroy sas transport properties before scsi_host
> > 
> >  drivers/scsi/hpsa.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> > 
> > --
> > 2.10.1
> 
> I have both patches applied and I still get stack traces.

Hm, 

there must be a difference between 4.9.0 and the SUSE kernel that I
tested with, then. To be certain, you did NOT see a stack trace at
rmmod before applying my patches?

I can see that your trace occurs in a different code path
(bsg_unregister_queue()) than the ones I observed
(sas_port_delete()/sas_phy_delete()). CC'ing Johannes who alluded to a
generic problem in the SCSI stack during our internal discussion.

Anyway, I'll have another look.

Regards
Martin


> 
> 
> 
> [252338.604903] ------------[ cut here ]------------
> [252338.627899] WARNING: CPU: 69 PID: 23977 at fs/sysfs/group.c:237
> sysfs_remove_group+0x8e/0x90
> [252338.668726] sysfs group 'power' not found for kobject '5:0:0:0'
> [252338.697526] Modules linked in: hpsa(OE-) scsi_transport_sas(OE)
> ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 nf_conntrack_ipv6
> nf_defrag_ipv6 ipt_REJECT nf_reject_ipv4 nf_conntrack_ipv4
> nf_defrag_ipv4 xt_conntrack nf_conntrack cfg80211 rfkill ebtable_nat
> ebtable_broute bridge stp llc ebtable_filter ebtables ip6table_mangle
> ip6table_security ip6table_raw ip6table_filter ip6_tables
> iptable_mangle iptable_security iptable_raw iptable_filter ip_tables
> osst ch st sb_edac edac_core x86_pkg_temp_thermal coretemp
> crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel lrw
> iTCO_wdt gf128mul glue_helper iTCO_vendor_support ablk_helper cryptd
> pcspkr ioatdma lpc_ich hpwdt hpilo mfd_core dca ipmi_si wmi
> ipmi_msghandler pcc_cpufreq acpi_cpufreq acpi_power_meter uinput
> mgag200 i2c_algo_bit drm_kms_helper syscopyarea
> [252339.038528]  sysfillrect sysimgblt fb_sys_fops ttm drm
> crc32c_intel serio_raw tg3 ptp usb_storage i2c_core pps_core
> dm_mirror dm_region_hash dm_log dm_mod [last unloaded:
> scsi_transport_sas]
> [252339.113923] CPU: 69 PID: 23977 Comm: rmmod Tainted:
> G        W  OE   4.9.0-rc71+ #24
> [252339.151296] Hardware name: HP ProLiant DL580 Gen8, BIOS P79
> 08/18/2016
> [252339.183354]  ffffc9000ff1bbf0 ffffffff815909bd ffffc9000ff1bc40
> 0000000000000000
> [252339.219046]  ffffc9000ff1bc30 ffffffff81090901 000000ed00000246
> 0000000000000000
> [252339.255044]  ffffffff81f71560 ffff88204584dd38 ffff882051e93670
> 0000000000ae1090
> [252339.290927] Call Trace:
> [252339.303968]  [<ffffffff815909bd>] dump_stack+0x85/0xc8
> [252339.329624]  [<ffffffff81090901>] __warn+0xd1/0xf0
> [252339.354037]  [<ffffffff8109097f>] warn_slowpath_fmt+0x5f/0x80
> [252339.382084]  [<ffffffff8196e5de>] ? mutex_unlock+0xe/0x10
> [252339.408515]  [<ffffffff812f1a6a>] ?
> kernfs_find_and_get_ns+0x4a/0x60
> [252339.439571]  [<ffffffff812f56ae>] sysfs_remove_group+0x8e/0x90
> [252339.468069]  [<ffffffff816dd1b7>] dpm_sysfs_remove+0x57/0x60
> [252339.495701]  [<ffffffff816cf928>] device_del+0x58/0x270
> [252339.521474]  [<ffffffff816cfb5a>] device_unregister+0x1a/0x60
> [252339.549796]  [<ffffffff8157d470>] bsg_unregister_queue+0x60/0xa0
> [252339.578999]  [<ffffffff8170e2ea>] __scsi_remove_device+0xaa/0xd0
> [252339.608230]  [<ffffffff8170c369>] scsi_forget_host+0x69/0x70
> [252339.635723]  [<ffffffff816ff292>] scsi_remove_host+0x82/0x130
> [252339.663738]  [<ffffffffa038cfc3>] hpsa_remove_one+0x93/0x190
> [hpsa]
> [252339.694960]  [<ffffffff815dd8d9>] pci_device_remove+0x39/0xc0
> [252339.723128]  [<ffffffff816d4aca>]
> __device_release_driver+0x9a/0x150
> [252339.753539]  [<ffffffff816d4ca1>] driver_detach+0xc1/0xd0
> [252339.779784]  [<ffffffff816d3a98>] bus_remove_driver+0x58/0xd0
> [252339.807519]  [<ffffffff816d572c>] driver_unregister+0x2c/0x50
> [252339.835948]  [<ffffffff815dbf3a>] pci_unregister_driver+0x2a/0x80
> [252339.865660]  [<ffffffffa0395869>] hpsa_cleanup+0x10/0x7a7 [hpsa]
> [252339.894682]  [<ffffffff8113571c>] SyS_delete_module+0x1bc/0x220
> [252339.924020]  [<ffffffff81003c0c>] do_syscall_64+0x6c/0x200
> [252339.950862]  [<ffffffff81971d49>]
> entry_SYSCALL64_slow_path+0x25/0x25
> [252339.982292] ---[ end trace 03cf2c42f2f658e5 ]---



* RE: [PATCH 0/2] hpsa: fix rmmod issues
  2016-12-02  8:58   ` Martin Wilck
@ 2016-12-02 15:44     ` Don Brace
  0 siblings, 0 replies; 15+ messages in thread
From: Don Brace @ 2016-12-02 15:44 UTC (permalink / raw)
  To: Martin Wilck
  Cc: dl-esc-Team ESD Storage Dev Support, iss_storagedev, linux-scsi,
	James Bottomley, hch, hare, jthumshirn

________________________________________
From: Martin Wilck [mwilck@suse.de]
Sent: Friday, December 02, 2016 12:58 AM
To: Don Brace
Cc: dl-esc-Team ESD Storage Dev Support; iss_storagedev@hp.com; linux-scsi@vger.kernel.org; James Bottomley; hch@lst.de; hare@suse.de; jthumshirn@suse.com
Subject: Re: [PATCH 0/2] hpsa: fix rmmod issues

EXTERNAL EMAIL


On Thu, 2016-12-01 at 23:22 +0000, Don Brace wrote:
> > -----Original Message-----
> > From: Martin Wilck [mailto:mwilck@suse.de]
> > Sent: Monday, November 21, 2016 8:04 AM
> > To: Don Brace
> > Cc: dl-esc-Team ESD Storage Dev Support; iss_storagedev@hp.com;
> > linux-
> > scsi@vger.kernel.org; JBottomley@odin.com; hch@lst.de; hare@suse.de
> > ;
> > Martin Wilck
> > Subject: [PATCH 0/2] hpsa: fix rmmod issues
> >
> > EXTERNAL EMAIL
> >
> >
> > This patch set fixes two issues I encountered when removing the
> > hpsa modules with rmmod.
> >
> > Comments and reviews are welcome.
> >
> > Martin Wilck (2):
> >   hpsa: cleanup sas_phy structures in sysfs when unloading
> >   hpsa: destroy sas transport properties before scsi_host
> >
> >  drivers/scsi/hpsa.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > --
> > 2.10.1
>
> I have both patches applied and I still get stack traces.

Hm,

there must be a difference between 4.9.0 and the SUSE kernel that I
tested with, then. To be certain, you did NOT see a stack trace at
rmmod before applying my patches?

I can see that your trace occurs in a different code path
(bsg_unregister_queue()) than the ones I observed
(sas_port_delete()/sas_phy_delete()). CC'ing Johannes who alluded to a
generic problem in the SCSI stack during our internal discussion.

Anyway, I'll have another look.

Regards
Martin

I do see a stack trace before your patches. Your patches have made a difference.
I am not yet sure when they started. I will have to do some more investigation as well.

Appreciate the help.

Thanks,
Don

>
>
>
> [252338.604903] ------------[ cut here ]------------
> [252338.627899] WARNING: CPU: 69 PID: 23977 at fs/sysfs/group.c:237
> sysfs_remove_group+0x8e/0x90
> [252338.668726] sysfs group 'power' not found for kobject '5:0:0:0'
> [252338.697526] Modules linked in: hpsa(OE-) scsi_transport_sas(OE)
> ip6t_rpfilter ip6t_REJECT nf_reject_ipv6 nf_conntrack_ipv6
> nf_defrag_ipv6 ipt_REJECT nf_reject_ipv4 nf_conntrack_ipv4
> nf_defrag_ipv4 xt_conntrack nf_conntrack cfg80211 rfkill ebtable_nat
> ebtable_broute bridge stp llc ebtable_filter ebtables ip6table_mangle
> ip6table_security ip6table_raw ip6table_filter ip6_tables
> iptable_mangle iptable_security iptable_raw iptable_filter ip_tables
> osst ch st sb_edac edac_core x86_pkg_temp_thermal coretemp
> crct10dif_pclmul crc32_pclmul ghash_clmulni_intel aesni_intel lrw
> iTCO_wdt gf128mul glue_helper iTCO_vendor_support ablk_helper cryptd
> pcspkr ioatdma lpc_ich hpwdt hpilo mfd_core dca ipmi_si wmi
> ipmi_msghandler pcc_cpufreq acpi_cpufreq acpi_power_meter uinput
> mgag200 i2c_algo_bit drm_kms_helper syscopyarea
> [252339.038528]  sysfillrect sysimgblt fb_sys_fops ttm drm
> crc32c_intel serio_raw tg3 ptp usb_storage i2c_core pps_core
> dm_mirror dm_region_hash dm_log dm_mod [last unloaded:
> scsi_transport_sas]
> [252339.113923] CPU: 69 PID: 23977 Comm: rmmod Tainted:
> G        W  OE   4.9.0-rc71+ #24
> [252339.151296] Hardware name: HP ProLiant DL580 Gen8, BIOS P79
> 08/18/2016
> [252339.183354]  ffffc9000ff1bbf0 ffffffff815909bd ffffc9000ff1bc40
> 0000000000000000
> [252339.219046]  ffffc9000ff1bc30 ffffffff81090901 000000ed00000246
> 0000000000000000
> [252339.255044]  ffffffff81f71560 ffff88204584dd38 ffff882051e93670
> 0000000000ae1090
> [252339.290927] Call Trace:
> [252339.303968]  [<ffffffff815909bd>] dump_stack+0x85/0xc8
> [252339.329624]  [<ffffffff81090901>] __warn+0xd1/0xf0
> [252339.354037]  [<ffffffff8109097f>] warn_slowpath_fmt+0x5f/0x80
> [252339.382084]  [<ffffffff8196e5de>] ? mutex_unlock+0xe/0x10
> [252339.408515]  [<ffffffff812f1a6a>] ?
> kernfs_find_and_get_ns+0x4a/0x60
> [252339.439571]  [<ffffffff812f56ae>] sysfs_remove_group+0x8e/0x90
> [252339.468069]  [<ffffffff816dd1b7>] dpm_sysfs_remove+0x57/0x60
> [252339.495701]  [<ffffffff816cf928>] device_del+0x58/0x270
> [252339.521474]  [<ffffffff816cfb5a>] device_unregister+0x1a/0x60
> [252339.549796]  [<ffffffff8157d470>] bsg_unregister_queue+0x60/0xa0
> [252339.578999]  [<ffffffff8170e2ea>] __scsi_remove_device+0xaa/0xd0
> [252339.608230]  [<ffffffff8170c369>] scsi_forget_host+0x69/0x70
> [252339.635723]  [<ffffffff816ff292>] scsi_remove_host+0x82/0x130
> [252339.663738]  [<ffffffffa038cfc3>] hpsa_remove_one+0x93/0x190
> [hpsa]
> [252339.694960]  [<ffffffff815dd8d9>] pci_device_remove+0x39/0xc0
> [252339.723128]  [<ffffffff816d4aca>]
> __device_release_driver+0x9a/0x150
> [252339.753539]  [<ffffffff816d4ca1>] driver_detach+0xc1/0xd0
> [252339.779784]  [<ffffffff816d3a98>] bus_remove_driver+0x58/0xd0
> [252339.807519]  [<ffffffff816d572c>] driver_unregister+0x2c/0x50
> [252339.835948]  [<ffffffff815dbf3a>] pci_unregister_driver+0x2a/0x80
> [252339.865660]  [<ffffffffa0395869>] hpsa_cleanup+0x10/0x7a7 [hpsa]
> [252339.894682]  [<ffffffff8113571c>] SyS_delete_module+0x1bc/0x220
> [252339.924020]  [<ffffffff81003c0c>] do_syscall_64+0x6c/0x200
> [252339.950862]  [<ffffffff81971d49>]
> entry_SYSCALL64_slow_path+0x25/0x25
> [252339.982292] ---[ end trace 03cf2c42f2f658e5 ]---



* RE: [PATCH 2/2] hpsa: destroy sas transport properties before scsi_host
  2016-11-21 14:14   ` Johannes Thumshirn
@ 2017-10-10 23:04     ` Don Brace
  2017-10-11  9:15       ` Martin Wilck
  0 siblings, 1 reply; 15+ messages in thread
From: Don Brace @ 2017-10-10 23:04 UTC (permalink / raw)
  To: Johannes Thumshirn, Martin Wilck
  Cc: dl-esc-Team ESD Storage Dev Support, iss_storagedev, linux-scsi,
	JBottomley, hch, hare

> -----Original Message-----
> From: Johannes Thumshirn [mailto:jthumshirn@suse.de]
> Sent: Monday, November 21, 2016 8:15 AM
> To: Martin Wilck <mwilck@suse.de>
> Cc: Don Brace <don.brace@microsemi.com>; dl-esc-Team ESD Storage Dev
> Support <esc-TeamESDStorageDevSupport@microsemi.com>;
> iss_storagedev@hp.com; linux-scsi@vger.kernel.org; JBottomley@odin.com;
> hch@lst.de; hare@suse.de
> Subject: Re: [PATCH 2/2] hpsa: destroy sas transport properties before
> scsi_host
> 
> EXTERNAL EMAIL
> 
> 
> On Mon, Nov 21, 2016 at 03:04:29PM +0100, Martin Wilck wrote:
> > Unloading the hpsa driver causes warnings
> >
> > [ 1063.793652] WARNING: CPU: 1 PID: 4850 at ../fs/sysfs/group.c:237
> device_del+0x54/0x240()
> > [ 1063.793659] sysfs group ffffffff81cf21a0 not found for kobject 'port-2:0'
> >
> > with two different stacks:
> > 1)
> > [ 1063.793774]  [<ffffffff81448af4>] device_del+0x54/0x240
> > [ 1063.793780]  [<ffffffff8145178a>]
> transport_remove_classdev+0x4a/0x60
> > [ 1063.793784]  [<ffffffff81451216>]
> attribute_container_device_trigger+0xa6/0xb0
> > [ 1063.793802]  [<ffffffffa0105d46>] sas_port_delete+0x126/0x160
> [scsi_transport_sas]
> > [ 1063.793819]  [<ffffffffa036ebcc>] hpsa_free_sas_port+0x3c/0x70 [hpsa]
> >
> > 2)
> > [ 1063.797103]  [<ffffffff81448af4>] device_del+0x54/0x240
> > [ 1063.797118]  [<ffffffffa0105d4e>] sas_port_delete+0x12e/0x160
> [scsi_transport_sas]
> > [ 1063.797134]  [<ffffffffa036ebcc>] hpsa_free_sas_port+0x3c/0x70 [hpsa]
> >
> > This is caused by the fact that host device hostX is deleted before the
> > SAS transport devices hostX/port-a:b.
> >
> > This patch fixes this by reversing the order of device deletions.
> >
> > References: bsc#1010946
> > Signed-off-by: Martin Wilck <mwilck@suse.de>
> > ---
> 
> With the References changed to the bug link like in patch 1/2
> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>

Now that Hannes's patch 9441284fbc39610c0f9ec0ed118ff85d78352906
has been applied, this patch corrects the stack trace issue.

Would you like to re-submit this patch or would you like me to send it up?
I'll run some quick tests if you do decide to send it up. 
If you want me to send it up, you will get the credit anyway.

Thanks for your help and attention to this issue.
And thanks again to Hannes.

Thanks,
Don Brace
ESC - Smart Storage
Microsemi Corporation


> --
> Johannes Thumshirn                                          Storage
> jthumshirn@suse.de                                +49 911 74053 689
> SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
> GF: Felix Imendörffer, Jane Smithard, Graham Norton
> HRB 21284 (AG Nürnberg)
> Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850


* Re: [PATCH 2/2] hpsa: destroy sas transport properties before scsi_host
  2017-10-10 23:04     ` Don Brace
@ 2017-10-11  9:15       ` Martin Wilck
  0 siblings, 0 replies; 15+ messages in thread
From: Martin Wilck @ 2017-10-11  9:15 UTC (permalink / raw)
  To: Don Brace, Johannes Thumshirn
  Cc: dl-esc-Team ESD Storage Dev Support, iss_storagedev, linux-scsi,
	JBottomley, hch, hare

On Tue, 2017-10-10 at 23:04 +0000, Don Brace wrote:
> Now that Hannes's patch  9441284fbc39610c0f9ec0ed118ff85d78352906
> has been applied, this patch corrects the stack trace issue.
> 
> Would you like to re-submit this patch or would you like me to send
> it up?
> I'll run some quick tests if you do decide to send it up. 
> If you want me to send it up, you will get the credit anyway.

From my PoV, just go ahead and re-submit. It'll be faster than me
diving into this again.

Thanks,
Martin



Thread overview: 15+ messages
2016-11-21 14:04 [PATCH 0/2] hpsa: fix rmmod issues Martin Wilck
2016-11-21 14:04 ` [PATCH 1/2] hpsa: cleanup sas_phy structures in sysfs when unloading Martin Wilck
2016-11-21 14:13   ` Johannes Thumshirn
2016-11-21 15:13     ` Martin Wilck
2016-11-22  3:47     ` Martin K. Petersen
2016-11-22  8:10       ` Johannes Thumshirn
2016-11-29  1:52   ` Don Brace
2016-11-29  9:16     ` Martin Wilck
2016-11-21 14:04 ` [PATCH 2/2] hpsa: destroy sas transport properties before scsi_host Martin Wilck
2016-11-21 14:14   ` Johannes Thumshirn
2017-10-10 23:04     ` Don Brace
2017-10-11  9:15       ` Martin Wilck
2016-12-01 23:22 ` [PATCH 0/2] hpsa: fix rmmod issues Don Brace
2016-12-02  8:58   ` Martin Wilck
2016-12-02 15:44     ` Don Brace
