From: Eric Farman <farman@linux.ibm.com>
To: Cornelia Huck <cohuck@redhat.com>
Cc: kvm@vger.kernel.org, linux-s390@vger.kernel.org,
	Jason Herne <jjherne@linux.ibm.com>,
	Jared Rossi <jrossi@linux.ibm.com>
Subject: Re: [RFC PATCH v1 08/10] vfio-ccw: Wire up the CRW irq and CRW region
Date: Fri, 6 Dec 2019 16:24:31 -0500	[thread overview]
Message-ID: <9c83e960-9f68-2328-6a89-d0fa7b8768d8@linux.ibm.com> (raw)
In-Reply-To: <20191206112107.63fb37a1.cohuck@redhat.com>



On 12/6/19 5:21 AM, Cornelia Huck wrote:
> On Thu, 5 Dec 2019 15:43:55 -0500
> Eric Farman <farman@linux.ibm.com> wrote:
> 
>> On 11/19/19 1:52 PM, Cornelia Huck wrote:
>>> On Fri, 15 Nov 2019 03:56:18 +0100
>>> Eric Farman <farman@linux.ibm.com> wrote:
>>>   
>>>> From: Farhan Ali <alifm@linux.ibm.com>
>>>>
>>>> Use an IRQ to notify userspace that there is a CRW
>>>> pending in the region, related to path-availability
>>>> changes on the passthrough subchannel.  
>>>
>>> Thinking a bit more about this, it feels a bit odd that a crw for a
>>> chpid ends up on one subchannel. What happens if we have multiple
>>> subchannels passed through by vfio-ccw that use that same chpid?  
>>
>> Yeah...  It doesn't end up on one subchannel, it ends up on every
>> affected subchannel, based on the loops in (for example)
>> chsc_chp_offline().  This means that "let's configure off a CHPID to the
>> LPAR" translates one channel-path CRW into N channel-path CRWs (one each
>> sent to N subchannels).  It would make more sense if we just presented
>> one channel-path CRW to the guest, but I'm having difficulty seeing how
>> we could wire this up.  What we do here is use the channel-path event
>> handler in vfio-ccw also create a channel-path CRW to be presented to
>> the guest, even though it's processing something at the subchannel level.
> 
> Yes, it's a bit odd that we need to do 1 -> N -> 1 conversion here, but
> we can't really avoid it without introducing a new way to report
> information that is relevant for more than one subchannel. What we need
> to make sure of is that userspace gets the same information, regardless
> of which affected subchannel it looks at.
> 
>>
>> The actual CRW handlers are in the base cio code, and we only get into
>> vfio-ccw when processing the individual subchannels.  Do we need to make
>> a device (or something?) at the guest level for the chpids that exist?
>> Or do something to say "hey we got this from a subchannel, put it on a
>> global queue if it's unique, or throw it away if it's a duplicate we
>> haven't processed yet" ?  Thoughts?
> 
> The problem is that you can easily get several crws that look identical
> (consider e.g. a chpid that is set online and offline in a tight loop).

Yeah, I have a little program that does such a loop.  Things don't work
too well even with a random delay between on/off, though a hack I'm
trying to formalize for v2 improves matters.  If I drop that delay to
zero, um, well I haven't had the nerve to try that.  :)
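
(For anyone curious, a loop of that shape is trivial to write.  Here's a
rough userspace sketch -- the chpid, the delays, and the use of the
chpid's sysfs "configure" attribute are all just placeholder
assumptions, not what my actual program does:)

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Hypothetical channel path to thrash; adjust for the system under test. */
#define CHP_CONFIGURE	"/sys/devices/css0/chp0.50/configure"

/* Write "0" (standby) or "1" (configured) to the chpid's configure file. */
static int chp_write(const char *val)
{
	FILE *f = fopen(CHP_CONFIGURE, "w");

	if (!f)
		return -1;
	fputs(val, f);
	return fclose(f);
}

int main(void)
{
	int i;

	for (i = 0; i < 100; i++) {
		if (chp_write("0\n"))			/* configure off */
			return 1;
		usleep((rand() % 500 + 100) * 1000);	/* random-ish delay */
		if (chp_write("1\n"))			/* configure back on */
			return 1;
		usleep((rand() % 500 + 100) * 1000);
	}
	return 0;
}

Dropping the usleep() calls would be the zero-delay case I haven't
dared to try yet.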

> The only entity that should make decisions as to what to process here
> is the guest.

Agreed.  So your suggestion in the QEMU series of acting like stcrw is
good; give the guest all the information we can, and let it decide what
thrashing is needed.  I guess if I can just queue everything on the
vfio_ccw_private, and move one (two?) into the crw_region each time it's
read, then that should work well enough.  Thanks!
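
Roughly what I have in mind, just to make it concrete (a sketch only;
the crw_list/crw_lock fields would be new additions to vfio_ccw_private,
and none of the names here are final):

/* One queued CRW, hung off the private struct until userspace reads it. */
struct vfio_ccw_crw {
	struct list_head	next;
	struct crw		crw;	/* rsc/erc/rsid, as in <asm/crw.h> */
};

/*
 * Called from the chp_event path instead of overwriting the single
 * private->crw, so back-to-back events can't clobber each other.
 */
static void vfio_ccw_queue_crw(struct vfio_ccw_private *private,
			       unsigned int rsc, unsigned int erc,
			       unsigned int rsid)
{
	struct vfio_ccw_crw *crw;

	crw = kzalloc(sizeof(*crw), GFP_ATOMIC);
	if (!crw)
		return;

	crw->crw.rsc = rsc;
	crw->crw.erc = erc;
	crw->crw.rsid = rsid;

	spin_lock(&private->crw_lock);
	list_add_tail(&crw->next, &private->crw_list);
	spin_unlock(&private->crw_lock);

	queue_work(vfio_ccw_work_q, &private->crw_work);
}

The crw_region read handler would then pop the oldest entry (or two)
off that list under the same lock and copy it into the region.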

> 
> (...)
> 
>>>> @@ -312,6 +334,11 @@ static int vfio_ccw_chp_event(struct subchannel *sch,
>>>>  	case CHP_ONLINE:
>>>>  		/* Path became available */
>>>>  		sch->lpm |= mask & sch->opm;
>>>> +		private->crw.rsc = CRW_RSC_CPATH;
>>>> +		private->crw.rsid = 0x0 | (link->chpid.cssid << 8) |
>>>> +				    link->chpid.id;
>>>> +		private->crw.erc = CRW_ERC_INIT;
>>>> +		queue_work(vfio_ccw_work_q, &private->crw_work);  
>>>
>>> Isn't that racy? Imagine you get one notification for a chpid and queue
>>> it. Then, you get another notification for another chpid and queue it
>>> as well. Depending on when userspace reads, it gets different chpids.
>>> Moreover, a crw may be lost... or am I missing something obvious?  
>>
>> Nope, you're right on.  If I start thrashing config on/off chpids on the
>> host, I eventually fall down with all sorts of weirdness.
>>
>>>
>>> Maybe you need a real queue for the generated crws?  
>>
>> I guess this is what I'm wrestling with...  We don't have a queue for
>> guest-wide work items, as it's currently broken apart by subchannel.  Is
>> adding one at the vfio-ccw level right?  Feels odd to me, since multiple
>> guests could use devices connected via vfio-ccw, which may or may not
>> share common chpids.
> 
> One problem is that the common I/O layer already processes the crws and
> translates them into different per-subchannel events. We don't even
> know what the original crw was: IIUC, we translate both a crw for a
> chpid and a link incident event (reported by a crw with source css and
> event information via chsc) concerning the concrete link to the same
> event. That *probably* doesn't matter too much, but it makes things
> harder. Access to the original crw queue would be nice, but hard to
> implement without stepping on each other's toes.
>
>>
>> I have a rough hack that serializes things a bit, while still keeping
>> the CRW duplication at the subchannel level.  Things improve
>> considerably, but it still seems odd to me.  I'll keep working on that
>> unless anyone has any better ideas.
> 
> The main issue is that we're trying to report a somewhat global event
> via individual devices...

+1

> 
> ...what about not reporting crws at all, but something derived from the
> events we get at the subchannel driver level? Have four masks that
> indicate online/offline/vary on/vary off for the respective chpids, and
> have userspace decide how they want to report these to the guest? A
> drawback (?) would be that a series of on/off events would only be
> reported as one on and one off event, though. Feasible, or complete
> lunacy?
> 

Not complete lunacy, but it brings concerns of its own, as we'd need to
ensure the masks don't say something nonsensical, like (for example)
both vary on and vary off.  Or what happens if both vary on and config
off get set?  Not a huge amount of work, but it seems to carry more risk
than queueing the existing CRWs and letting the guest process them
itself, even if things are duplicated more than necessary.  In reality,
these events aren't that common anyway unless things go REALLY
sideways.
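
(Just so we're picturing the same thing, I read the mask idea as a
region along these lines -- a purely hypothetical layout, none of these
names exist anywhere -- and the comment is exactly the ambiguity I'm
worried about:)

/*
 * Hypothetical mask-based region: four 256-bit bitmaps indexed by
 * chpid number, accumulated since the last read.  Nothing inherently
 * stops e.g. the vary_on and config_off bits for the same chpid from
 * being set at the same time, which userspace would have to resolve.
 */
struct ccw_chpid_mask_region {
	__u8 config_on[32];	/* chpid was configured on  */
	__u8 config_off[32];	/* chpid was configured off */
	__u8 vary_on[32];	/* chpid was varied online  */
	__u8 vary_off[32];	/* chpid was varied offline */
} __packed;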

Thread overview: 29+ messages
2019-11-15  2:56 [RFC PATCH v1 00/10] s390/vfio-ccw: Channel Path Handling Eric Farman
2019-11-15  2:56 ` [RFC PATCH v1 01/10] vfio-ccw: Introduce new helper functions to free/destroy regions Eric Farman
2019-11-19 12:33   ` Cornelia Huck
2019-11-15  2:56 ` [RFC PATCH v1 02/10] vfio-ccw: Register a chp_event callback for vfio-ccw Eric Farman
2019-11-19 12:48   ` Cornelia Huck
2019-11-19 15:45     ` Eric Farman
2019-11-15  2:56 ` [RFC PATCH v1 03/10] vfio-ccw: Use subchannel lpm in the orb Eric Farman
2019-11-19 13:00   ` Cornelia Huck
2019-11-19 15:16     ` Eric Farman
2019-11-19 15:38       ` Cornelia Huck
2019-11-19 18:58         ` Eric Farman
2019-11-15  2:56 ` [RFC PATCH v1 04/10] vfio-ccw: Refactor the unregister of the async regions Eric Farman
2019-11-19 16:21   ` Cornelia Huck
2019-11-15  2:56 ` [RFC PATCH v1 05/10] vfio-ccw: Introduce a new schib region Eric Farman
2019-11-19 16:52   ` Cornelia Huck
2019-11-20 16:49     ` Eric Farman
2019-11-15  2:56 ` [RFC PATCH v1 06/10] vfio-ccw: Introduce a new CRW region Eric Farman
2019-11-19 17:17   ` Cornelia Huck
2019-11-15  2:56 ` [RFC PATCH v1 07/10] vfio-ccw: Refactor IRQ handlers Eric Farman
2019-11-19 17:18   ` Cornelia Huck
2019-11-15  2:56 ` [RFC PATCH v1 08/10] vfio-ccw: Wire up the CRW irq and CRW region Eric Farman
2019-11-19 18:52   ` Cornelia Huck
2019-12-05 20:43     ` Eric Farman
2019-12-06 10:21       ` Cornelia Huck
2019-12-06 21:24         ` Eric Farman [this message]
2019-12-09 12:38           ` Cornelia Huck
2019-11-15  2:56 ` [RFC PATCH v1 09/10] vfio-ccw: Add trace for CRW event Eric Farman
2019-11-15  2:56 ` [RFC PATCH v1 10/10] vfio-ccw: Remove inline get_schid() routine Eric Farman
2019-11-15 11:15 ` [RFC PATCH v1 00/10] s390/vfio-ccw: Channel Path Handling Cornelia Huck
