From: Jason Wang <jasowang@redhat.com>
To: "Enrico Weigelt, metux IT consult" <lkml@metux.net>,
	"Enrico Weigelt, metux IT consult" <info@metux.net>,
	linux-kernel@vger.kernel.org
Cc: corbet@lwn.net, linus.walleij@linaro.org,
	bgolaszewski@baylibre.com, mst@redhat.com,
	linux-doc@vger.kernel.org, linux-gpio@vger.kernel.org,
	virtualization@lists.linux-foundation.org,
	linux-riscv@lists.infradead.org
Subject: Re: [PATCH v2 2/2] drivers: gpio: add virtio-gpio guest driver
Date: Tue, 8 Dec 2020 10:49:02 +0800
Message-ID: <2882f118-3555-614c-33a0-30865673deb3@redhat.com>
In-Reply-To: <635faeb7-950e-e594-3217-69032ed9cbd1@metux.net>


On 2020/12/7 5:33 PM, Enrico Weigelt, metux IT consult wrote:
> On 07.12.20 04:48, Jason Wang wrote:
>
> Hi,
>
>>>> Not a native speaker, but "event" sounds like something the driver reads
>>>> from the device. Looking at the lists below, most of them, except for
>>>> VIRTIO_GPIO_EV_HOST_LEVEL, look more like commands.
>>> okay, shall I name it "message" ?
>>
>> It might be better.
> Okay, renamed to messages in v3.
>
>>>> #define VIRTIO_NET_OK     0
>>>> #define VIRTIO_NET_ERR    1
>>> hmm, so I'd need to define all the error codes that could possibly
>>> happen ?
>>
>> Yes, I think you need to.
> Okay, going to do it in the next version.
>
>>>> If I read the code correctly, this expects there will be at most a
>>>> single type of event that can be processed at the same time. E.g. can the
>>>> upper layer want to read from different lines in parallel? If yes, we
>>>> need to deal with that.
>>> @Linus @Bartosz: can that happen, or does the gpio subsys already serialize
>>> requests ?
>>>
>>> Initially, I tried to protect it with a spinlock (so only one request may
>>> run at a time; other calls just wait until the first is finished), but
>>> it crashed when the gpio cdev registration called into the driver (fetching
>>> the status) while still booting up.
>>>
>>> I don't recall the exact error anymore, but it was something like an
>>> inconsistency in the spinlock calls.
>>>
>>> Did I just use the wrong type of lock ?
>> I'm not sure, since I am not familiar with GPIO. But one question: if at
>> most one request is allowed, I'm not sure virtio is the best choice here,
>> since we don't even need a queue (virtqueue).
> I guess I should add locks to the gpio callback functions (where the gpio
> subsys calls in). That way, requests are strictly ordered.
> The locks didn't work in my previous attempts, but probably because I
> missed setting the can_sleep flag (now fixed in v3).
>
> The gpio ops are already waiting for a reply of the corresponding type, so
> the only bad thing that could happen is the same operation being called
> twice (when coming from different threads) and the replies getting mixed up
> between the first and second one. OTOH I don't see much of a problem with
> that. This can be fixed by adding a global lock.
>
>> I think it's still about whether or not we need to allow a batch of
>> requests via a queue. Consider you've submitted two requests, A and B; if
>> B is done first, the current code won't work. This is because the reply
>> is transported via rxq buffers rather than reusing the txq buffer, if I read
>> the code correctly.
> Meanwhile I've changed it to allocate a new rx buffer for the reply
> (done right before the request is sent), so everything should be
> processed in the order it had been sent. Assuming virtio keeps the
> order of the buffers in the queues.


Unfortunately, there's no guarantee that virtio will keep the order unless
the device advertises VIRTIO_F_IN_ORDER (see 2.6.9 in the virtio spec).
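
To illustrate one way of coping with that (purely a sketch, not this patch's
code: the virtio_gpio_* names and the message layout are made up), the driver
can tag every pending request with the reply type it expects and let the rx
path match completions against a pending list instead of assuming FIFO order:

#include <linux/completion.h>
#include <linux/list.h>
#include <linux/spinlock.h>
#include <linux/types.h>
#include <linux/virtio.h>

/* illustrative wire format, not the one from this series */
struct virtio_gpio_msg {
        __le16 type;
        __le16 line;
        __le32 value;
};

struct virtio_gpio_pending {
        u16 type;                       /* expected reply type */
        struct virtio_gpio_msg reply;   /* filled in by the rx path */
        struct completion done;
        struct list_head node;
};

/*
 * Called from the rxq callback for every returned reply buffer.
 * Submitters adding to the pending list must use spin_lock_irqsave(),
 * since this runs in interrupt context.
 */
static void virtio_gpio_match_reply(spinlock_t *lock, struct list_head *pending,
                                    const struct virtio_gpio_msg *msg)
{
        struct virtio_gpio_pending *p;

        spin_lock(lock);
        list_for_each_entry(p, pending, node) {
                if (p->type == le16_to_cpu(msg->type)) {
                        p->reply = *msg;
                        list_del(&p->node);
                        complete(&p->done);
                        break;
                }
        }
        spin_unlock(lock);
}

The submitter adds its virtio_gpio_pending entry to the list, kicks the queue
and sleeps in wait_for_completion(); if several requests of the same type may
be in flight, a per-request cookie echoed by the device would be needed
instead of the type.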

Btw, if possible, it's better to add a link to the userspace code here.
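
And on the spinlock crash during cdev registration quoted above: since the
request/reply round trip has to sleep, a sleeping lock together with
gc.can_sleep = true is the usual pattern. A minimal sketch, with
virtio_gpio_xfer() as an assumed helper that sends the request and waits for
the reply (again illustrative names, not this patch's code):

#include <linux/gpio/driver.h>
#include <linux/mutex.h>

struct virtio_gpio {
        struct gpio_chip gc;            /* gc.can_sleep = true at probe time */
        struct mutex io_lock;           /* at most one request in flight */
};

static int virtio_gpio_get(struct gpio_chip *gc, unsigned int offset)
{
        struct virtio_gpio *vgpio = gpiochip_get_data(gc);
        int value;

        mutex_lock(&vgpio->io_lock);
        /* assumed helper: queue the request, kick, wait for the reply */
        value = virtio_gpio_xfer(vgpio, offset);
        mutex_unlock(&vgpio->io_lock);

        return value;
}

With can_sleep set, gpiolib knows these accessors may sleep, so blocking on a
mutex (or on a virtqueue completion) is legal here, which the earlier
spinlock-protected path would not allow.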


>
>>> Could you please give an example of how bi-directional transmission within
>>> the same queue would look ?
>> You can check how virtio-blk did this in:
>>
>> https://docs.oasis-open.org/virtio/virtio/v1.1/csprd01/virtio-v1.1-csprd01.html#x1-2500006
> hmm, I still don't see what the code would actually look like (in qemu as
> well as in the kernel). Just add the fetched inbuf as an outbuf (within the
> same queue) ?


Yes, virtio allows adding IN buffers after OUT buffers through descriptor
chaining.
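
As a rough illustration of that virtio-blk-style layout (reusing the made-up
message struct from the earlier sketch; virtio_gpio_queue_req is a made-up
name), the request and the reply live in one buffer and are chained into a
single virtqueue element with virtqueue_add_sgs():

#include <linux/scatterlist.h>
#include <linux/virtio.h>

struct virtio_gpio_req {
        struct virtio_gpio_msg request; /* driver -> device (OUT) */
        struct virtio_gpio_msg reply;   /* device -> driver (IN) */
};

/* req must be kmalloc'ed (DMA-able), not on the stack */
static int virtio_gpio_queue_req(struct virtqueue *vq,
                                 struct virtio_gpio_req *req)
{
        struct scatterlist out_sg, in_sg;
        struct scatterlist *sgs[] = { &out_sg, &in_sg };

        sg_init_one(&out_sg, &req->request, sizeof(req->request));
        sg_init_one(&in_sg, &req->reply, sizeof(req->reply));

        /* one descriptor chain: device-readable part first, then the writable part */
        return virtqueue_add_sgs(vq, sgs, 1, 1, req, GFP_KERNEL);
}

After virtqueue_kick(vq), the device fills req->reply, and virtqueue_get_buf()
later hands back the same req pointer, so the reply is tied to its request
without a second queue.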

Thanks


>
>>> Maybe add one new buffer per request and one new one per received async
>>> signal ?
>> It would be safe to fill the whole rxq and do the refill, e.g., when half
>> of the queue is used.
> Okay, doing that now in v3: there's always at least one rx buffer,
> and requests as well as the intr receiver always add a new one.
> (They get removed on fetching, IMHO.)
>
>
> --mtx
>
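
A minimal sketch of the pre-fill/refill idea suggested above, using the stock
virtqueue helpers (again purely illustrative: it assumes the struct
virtio_gpio from the sketches above grew rxq and rx_inflight members, and the
actual reply/event dispatch is left out):

#include <linux/scatterlist.h>
#include <linux/slab.h>
#include <linux/virtio.h>

static void virtio_gpio_fill_rxq(struct virtio_gpio *vgpio, gfp_t gfp)
{
        struct scatterlist sg;
        struct virtio_gpio_msg *buf;

        /* top the ring up until no free slot is left */
        while (vgpio->rx_inflight < virtqueue_get_vring_size(vgpio->rxq)) {
                buf = kzalloc(sizeof(*buf), gfp);
                if (!buf)
                        break;
                sg_init_one(&sg, buf, sizeof(*buf));
                if (virtqueue_add_inbuf(vgpio->rxq, &sg, 1, buf, gfp) < 0) {
                        kfree(buf);
                        break;
                }
                vgpio->rx_inflight++;
        }
        virtqueue_kick(vgpio->rxq);
}

/* rxq callback: consume replies/events, refill once half the ring is used */
static void virtio_gpio_rx_done(struct virtqueue *rxq)
{
        struct virtio_gpio *vgpio = rxq->vdev->priv;
        struct virtio_gpio_msg *buf;
        unsigned int len;

        while ((buf = virtqueue_get_buf(rxq, &len))) {
                vgpio->rx_inflight--;
                /* dispatch the reply or async line event here, then free it */
                kfree(buf);
        }
        if (vgpio->rx_inflight < virtqueue_get_vring_size(rxq) / 2)
                virtio_gpio_fill_rxq(vgpio, GFP_ATOMIC);
}

probe() would call virtio_gpio_fill_rxq(vgpio, GFP_KERNEL) once after the
virtqueues are set up; the callback then keeps the rxq at least half full by
itself.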

