From: John Garry <john.garry@huawei.com>
To: wangyijing <wangyijing@huawei.com>, <jejb@linux.vnet.ibm.com>,
	<martin.petersen@oracle.com>
Cc: <chenqilin2@huawei.com>, <hare@suse.com>,
	<linux-scsi@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	<chenxiang66@hisilicon.com>, <huangdaode@hisilicon.com>,
	<wangkefeng.wang@huawei.com>, <zhaohongjiang@huawei.com>,
	<dingtianhong@huawei.com>, <guohanjun@huawei.com>,
	<yanaijie@huawei.com>, <hch@lst.de>, <dan.j.williams@intel.com>,
	<emilne@redhat.com>, <thenzl@redhat.com>, <wefu@redhat.com>,
	<charles.chenxin@huawei.com>, <chenweilong@huawei.com>,
	Johannes Thumshirn <jthumshirn@suse.de>,
	Linuxarm <linuxarm@huawei.com>
Subject: Re: [PATCH v3 1/7] libsas: Use static sas event pool to appease sas event lost
Date: Wed, 12 Jul 2017 11:13:38 +0100	[thread overview]
Message-ID: <a3a7c434-cbd8-c84a-8ec1-5345f9ce4056@huawei.com> (raw)
In-Reply-To: <5965E22F.7020309@huawei.com>

On 12/07/2017 09:47, wangyijing wrote:
>
>
> On 2017/7/12 16:17, John Garry wrote:
>> On 12/07/2017 03:06, wangyijing wrote:
>>>>> -    unsigned long port_events_pending;
>>>>> -    unsigned long phy_events_pending;
>>>>> +    struct asd_sas_event   port_events[PORT_POOL_SIZE];
>>>>> +    struct asd_sas_event   phy_events[PHY_POOL_SIZE];
>>>>>
>>>>>      int error;
>>>>
>>>> Hi Yijing,
>>>>
>>>> So now we are creating a static pool of events per PHY/port, instead of having one static work struct per event per PHY/port. For sure, this avoids the system memory exhaustion issue of dynamically allocated events which we discussed in the v1+v2 series. And it seems it may also remove the issue of losing SAS events.
>>>>
>>>> But how did you determine the pool size for a PHY/port? It would seem to be 5 * #phy events or 5 * #port events (the two counts are both 5, I figure by coincidence). How does this deal with a flutter of >25 events?
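(To spell out the arithmetic behind my ">25" figure above, here is a minimal
sketch of the sizing I am assuming. Only PORT_POOL_SIZE/PHY_POOL_SIZE appear
in the quoted hunk; SAS_EVENT_POOL_DEPTH is a name I made up, so this is my
reading of the patch, not its actual definitions:)

/* Assumed sizing - a sketch, not the patch's actual definitions */
#define SAS_EVENT_POOL_DEPTH	5	/* hypothetical: slots per event type */
#define PORT_POOL_SIZE	(SAS_EVENT_POOL_DEPTH * PORT_NUM_EVENTS)	/* 5 * 5 = 25 */
#define PHY_POOL_SIZE	(SAS_EVENT_POOL_DEPTH * PHY_NUM_EVENTS)	/* 5 * 5 = 25 */
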
>>>
>>> There is no special meaning to the pool size. If there is a flutter of more than 25 events, the sas event notify call will return an error, and any further handling is up to the LLDD.
>>> I hope libsas could do more in this case, but for now that seems difficult; this patch may be an interim fix until we find a better solution.
>>
>> The principle of having a fixed-size pool is ok, even though the pool size needs more consideration.
>>
>> However, my issue is how to handle pool exhaustion. For a start, relaying to the LLDD that the event notification failed is probably not the way to go. I only now noticed that "scsi: sas: scsi_queue_work can fail, so make callers aware" made it into the kernel; as I mentioned in response to that patch, the LLDD does not know how to handle this (and no LLDD actually handles it).
>>
>> I would say it is better to shut down the PHY from libsas (as Dan mentioned in the v1 series) when the pool is exhausted, under the assumption that the PHY has gone into some erroneous state. The user can later re-enable the PHY from sysfs, if required.
>
> I considered this suggestion, and what I am worried about is, first, that if we disable the phy once the sas event pool is exhausted, it may hurt the processing of the pending sas events which have already been queued,

I don't see how it affects currently queued events - they should just be
processed normally. As for the LLDD reporting events when the pool is
exhausted, those events are just lost.
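
Roughly, the notification path I picture with this patch looks like the
sketch below. Apart from asd_sas_phy/asd_sas_event, PHY_POOL_SIZE and the
phy_events[] array from the quoted hunk, the names and fields here
(sas_get_event_slot(), ev->used, ev->event) are hypothetical, and a real
version would need proper locking plus a matching "free the slot" step in
the event worker:

static struct asd_sas_event *sas_get_event_slot(struct asd_sas_phy *phy)
{
	int i;

	/* claim a free slot from the phy's static pool */
	for (i = 0; i < PHY_POOL_SIZE; i++) {
		struct asd_sas_event *ev = &phy->phy_events[i];

		/* 'used' is a hypothetical unsigned long flag in asd_sas_event */
		if (!test_and_set_bit(0, &ev->used))
			return ev;
	}
	return NULL;	/* pool exhausted */
}

static int sas_notify_phy_event(struct asd_sas_phy *phy, enum phy_event event)
{
	struct asd_sas_event *ev = sas_get_event_slot(phy);

	if (!ev)
		return -ENOMEM;	/* >25 pending events: this one is simply lost */

	INIT_SAS_WORK(&ev->work, sas_phy_event_worker);
	ev->phy = phy;
	ev->event = event;

	/* the worker must clear ev->used once the event has been handled */
	return scsi_queue_work(phy->ha->core.shost, &ev->work.work);
}

So once the pool is gone, the notification either errors out towards the
LLDD (this patch), or libsas takes some action itself, which is what I am
getting at below.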

> second, if the phy is disabled and nothing triggers a re-enable via sysfs, the LLDD has no way to post new sas phy events.

For the extreme scenario of the pool becoming exhausted and the PHY
being disabled, it should remain disabled until the user takes some
action to fix the originating problem.
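
As a sketch of what I am suggesting instead of returning an error to the
LLDD (again only a sketch under the same assumptions as above;
sas_phy_pool_exhausted() is invented, and reusing sas_phy_enable() is my
assumption about a convenient path - in a real patch this would likely
have to be deferred to process context, since notifications can arrive in
atomic context):

static void sas_phy_pool_exhausted(struct asd_sas_phy *phy)
{
	pr_warn("%s: phy%d event pool exhausted, disabling phy\n",
		dev_name(phy->ha->dev), phy->id);

	/*
	 * Assumption: go through the same path as the sysfs 'enable'
	 * attribute, so the user can re-enable the phy from sysfs once
	 * the flutter has been dealt with.
	 */
	sas_phy_enable(phy->phy, 0);
}

Then, in the earlier sketch, the "if (!ev)" branch would call
sas_phy_pool_exhausted(phy) rather than pushing -ENOMEM back to the LLDD.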

>
> Thanks!
> Yijing.
>
>>
>> Much appreciated,
>> John
>>
>>>
>>> Thanks!
>>> Yijing.
>>>
>>>>
>>>> Thanks,
>>>> John

Thread overview: 39+ messages
2017-07-10  7:06 [PATCH v3 0/7] Enhance libsas hotplug feature Yijing Wang
2017-07-10  7:06 ` [PATCH v3 1/7] libsas: Use static sas event pool to appease sas event lost Yijing Wang
2017-07-11 15:37   ` John Garry
2017-07-12  2:06     ` wangyijing
2017-07-12  8:17       ` John Garry
2017-07-12  8:47         ` wangyijing
2017-07-12 10:13           ` John Garry [this message]
2017-07-13  2:13             ` wangyijing
2017-07-14  6:40   ` Hannes Reinecke
2017-07-10  7:06 ` [PATCH v3 2/7] libsas: remove unused port_gone_completion Yijing Wang
2017-07-11 15:54   ` John Garry
2017-07-12  2:18     ` wangyijing
2017-07-14  6:40   ` Hannes Reinecke
2017-07-10  7:06 ` [PATCH v3 3/7] libsas: Use new workqueue to run sas event Yijing Wang
2017-07-14  6:42   ` Hannes Reinecke
2017-07-10  7:06 ` [PATCH v3 4/7] libsas: add sas event wait-complete support Yijing Wang
2017-07-14  6:51   ` Hannes Reinecke
2017-07-14  7:46     ` wangyijing
2017-07-14  8:42     ` John Garry
2017-07-10  7:06 ` [PATCH v3 5/7] libsas: add a new workqueue to run probe/destruct discovery event Yijing Wang
2017-07-12 16:50   ` John Garry
2017-07-13  2:36     ` wangyijing
2017-07-14  6:52   ` Hannes Reinecke
2017-07-10  7:06 ` [PATCH v3 6/7] libsas: add wait-complete support to sync " Yijing Wang
2017-07-12 13:51   ` John Garry
2017-07-13  2:19     ` wangyijing
2017-07-14  6:53   ` Hannes Reinecke
2017-07-10  7:06 ` [PATCH v3 7/7] libsas: release disco mutex during waiting in sas_ex_discover_end_dev Yijing Wang
2017-07-13 16:10   ` John Garry
2017-07-14  1:44     ` wangyijing
2017-07-14  8:26       ` John Garry
2017-07-14  6:55   ` Hannes Reinecke
2017-07-12  9:59 ` [PATCH v3 0/7] Enhance libsas hotplug feature John Garry
2017-07-12 11:56   ` Johannes Thumshirn
2017-07-13  1:27   ` wangyijing
2017-07-13  1:37   ` wangyijing
2017-07-13  8:08     ` John Garry
2017-07-13  8:38       ` wangyijing
2017-07-14  8:19 ` wangyijing
