* [PATCH v3 0/2] NBD socket backlog
@ 2021-02-09 15:27 Eric Blake
  2021-02-09 15:27 ` [PATCH v3 1/2] qemu-nbd: Use SOMAXCONN for socket listen() backlog Eric Blake
  2021-02-09 15:27 ` [PATCH v3 2/2] qemu-nbd: Permit --shared=0 for unlimited clients Eric Blake
  0 siblings, 2 replies; 10+ messages in thread
From: Eric Blake @ 2021-02-09 15:27 UTC (permalink / raw)
  To: qemu-devel; +Cc: nsoffer, berrange, rjones, qemu-block

In v3:
- also adjust backlog of QMP nbd-server-start [Dan]
- base qemu-nbd backlog on the -e value when not persistent [Nir]
- allow qemu-nbd -e0 for symmetry with QMP [new patch 2]

Eric Blake (2):
  qemu-nbd: Use SOMAXCONN for socket listen() backlog
  qemu-nbd: Permit --shared=0 for unlimited clients

 docs/tools/qemu-nbd.rst |  4 ++--
 blockdev-nbd.c          |  7 ++++++-
 qemu-nbd.c              | 15 +++++++++++----
 3 files changed, 19 insertions(+), 7 deletions(-)

-- 
2.30.0




* [PATCH v3 1/2] qemu-nbd: Use SOMAXCONN for socket listen() backlog
  2021-02-09 15:27 [PATCH v3 0/2] NBD socket backlog Eric Blake
@ 2021-02-09 15:27 ` Eric Blake
  2021-02-09 16:08   ` Richard W.M. Jones
                     ` (2 more replies)
  2021-02-09 15:27 ` [PATCH v3 2/2] qemu-nbd: Permit --shared=0 for unlimited clients Eric Blake
  1 sibling, 3 replies; 10+ messages in thread
From: Eric Blake @ 2021-02-09 15:27 UTC (permalink / raw)
  To: qemu-devel
  Cc: Kevin Wolf, berrange, qemu-block, rjones, qemu-stable, nsoffer,
	Max Reitz

Our default of a backlog of 1 connection is rather puny; it gets in
the way when we are explicitly allowing multiple clients (such as
qemu-nbd -e N [--shared], or nbd-server-start with its default
"max-connections":0 for unlimited), but is even a problem when we
stick to qemu-nbd's default of only 1 active client but use -t
[--persistent] where a second client can start using the server once
the first finishes.  While the effects are less noticeable on TCP
sockets (since the client can poll() to learn when the server is ready
again), it is definitely observable on Unix sockets, where on Unix, a
client will fail with EAGAIN and no recourse but to sleep an arbitrary
amount of time before retrying if the server backlog is already full.
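
For illustration only (not part of this patch, and not qemu or libnbd
code): a minimal sketch of the sleep-and-retry loop a non-blocking
Unix-socket client is reduced to when the server's backlog is full; the
socket path and back-off interval are arbitrary.

    /* Sketch: on Linux, a non-blocking connect() to an AF_UNIX socket
     * whose listen() backlog is full fails with EAGAIN, so the client
     * can only sleep an arbitrary amount and try again. */
    #include <errno.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    static int connect_with_retry(const char *path)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        int fd = socket(AF_UNIX, SOCK_STREAM | SOCK_NONBLOCK, 0);

        if (fd < 0) {
            return -1;
        }
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
        while (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            if (errno != EAGAIN) {
                close(fd);
                return -1;
            }
            usleep(100 * 1000); /* arbitrary back-off; nothing to poll() for */
        }
        return fd;
    }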

Since QMP nbd-server-start is always persistent, it now always
requests a backlog of SOMAXCONN; meanwhile, qemu-nbd will request
SOMAXCONN if persistent, otherwise its backlog should be based on the
expected number of clients.

See https://bugzilla.redhat.com/1925045 for a demonstration of where
our low backlog prevents libnbd from connecting as many parallel
clients as it wants.

Reported-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
CC: qemu-stable@nongnu.org
---
 blockdev-nbd.c |  7 ++++++-
 qemu-nbd.c     | 10 +++++++++-
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/blockdev-nbd.c b/blockdev-nbd.c
index d8443d235b73..b264620b98d8 100644
--- a/blockdev-nbd.c
+++ b/blockdev-nbd.c
@@ -134,7 +134,12 @@ void nbd_server_start(SocketAddress *addr, const char *tls_creds,
     qio_net_listener_set_name(nbd_server->listener,
                               "nbd-listener");

-    if (qio_net_listener_open_sync(nbd_server->listener, addr, 1, errp) < 0) {
+    /*
+     * Because this server is persistent, a backlog of SOMAXCONN is
+     * better than trying to size it to max_connections.
+     */
+    if (qio_net_listener_open_sync(nbd_server->listener, addr, SOMAXCONN,
+                                   errp) < 0) {
         goto error;
     }

diff --git a/qemu-nbd.c b/qemu-nbd.c
index 608c63e82a25..1a340ea4858d 100644
--- a/qemu-nbd.c
+++ b/qemu-nbd.c
@@ -964,8 +964,16 @@ int main(int argc, char **argv)

     server = qio_net_listener_new();
     if (socket_activation == 0) {
+        int backlog;
+
+        if (persistent) {
+            backlog = SOMAXCONN;
+        } else {
+            backlog = MIN(shared, SOMAXCONN);
+        }
         saddr = nbd_build_socket_address(sockpath, bindto, port);
-        if (qio_net_listener_open_sync(server, saddr, 1, &local_err) < 0) {
+        if (qio_net_listener_open_sync(server, saddr, backlog,
+                                       &local_err) < 0) {
             object_unref(OBJECT(server));
             error_report_err(local_err);
             exit(EXIT_FAILURE);
-- 
2.30.0




* [PATCH v3 2/2] qemu-nbd: Permit --shared=0 for unlimited clients
  2021-02-09 15:27 [PATCH v3 0/2] NBD socket backlog Eric Blake
  2021-02-09 15:27 ` [PATCH v3 1/2] qemu-nbd: Use SOMAXCONN for socket listen() backlog Eric Blake
@ 2021-02-09 15:27 ` Eric Blake
  2021-02-09 16:14   ` Daniel P. Berrangé
  2021-02-10 21:05   ` Nir Soffer
  1 sibling, 2 replies; 10+ messages in thread
From: Eric Blake @ 2021-02-09 15:27 UTC (permalink / raw)
  To: qemu-devel; +Cc: nsoffer, berrange, rjones, qemu-block

This gives us better feature parity with QMP nbd-server-start, where
max-connections defaults to 0 for unlimited.
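
As an illustration (not part of the patch; the image path is
hypothetical), the command line can now mirror the QMP default:

    qemu-nbd --persistent --shared=0 --format=raw /path/to/disk.img

which keeps serving an unlimited number of clients, just like
nbd-server-start with its default "max-connections": 0.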

Signed-off-by: Eric Blake <eblake@redhat.com>
---
 docs/tools/qemu-nbd.rst | 4 ++--
 qemu-nbd.c              | 7 +++----
 2 files changed, 5 insertions(+), 6 deletions(-)

diff --git a/docs/tools/qemu-nbd.rst b/docs/tools/qemu-nbd.rst
index fe41336dc550..ee862fa0bc02 100644
--- a/docs/tools/qemu-nbd.rst
+++ b/docs/tools/qemu-nbd.rst
@@ -136,8 +136,8 @@ driver options if ``--image-opts`` is specified.
 .. option:: -e, --shared=NUM

   Allow up to *NUM* clients to share the device (default
-  ``1``). Safe for readers, but for now, consistency is not
-  guaranteed between multiple writers.
+  ``1``), 0 for unlimited. Safe for readers, but for now,
+  consistency is not guaranteed between multiple writers.

 .. option:: -t, --persistent

diff --git a/qemu-nbd.c b/qemu-nbd.c
index 1a340ea4858d..5416509ece18 100644
--- a/qemu-nbd.c
+++ b/qemu-nbd.c
@@ -328,7 +328,7 @@ static void *nbd_client_thread(void *arg)

 static int nbd_can_accept(void)
 {
-    return state == RUNNING && nb_fds < shared;
+    return state == RUNNING && (shared == 0 || nb_fds < shared);
 }

 static void nbd_update_server_watch(void);
@@ -706,8 +706,7 @@ int main(int argc, char **argv)
             device = optarg;
             break;
         case 'e':
-            if (qemu_strtoi(optarg, NULL, 0, &shared) < 0 ||
-                shared < 1) {
+            if (qemu_strtoi(optarg, NULL, 0, &shared) < 0 || shared < 0) {
                 error_report("Invalid shared device number '%s'", optarg);
                 exit(EXIT_FAILURE);
             }
@@ -966,7 +965,7 @@ int main(int argc, char **argv)
     if (socket_activation == 0) {
         int backlog;

-        if (persistent) {
+        if (persistent || shared == 0) {
             backlog = SOMAXCONN;
         } else {
             backlog = MIN(shared, SOMAXCONN);
-- 
2.30.0




* Re: [PATCH v3 1/2] qemu-nbd: Use SOMAXCONN for socket listen() backlog
  2021-02-09 15:27 ` [PATCH v3 1/2] qemu-nbd: Use SOMAXCONN for socket listen() backlog Eric Blake
@ 2021-02-09 16:08   ` Richard W.M. Jones
  2021-02-09 16:12     ` Eric Blake
  2021-02-09 16:13   ` Daniel P. Berrangé
  2021-02-10 16:58   ` Nir Soffer
  2 siblings, 1 reply; 10+ messages in thread
From: Richard W.M. Jones @ 2021-02-09 16:08 UTC (permalink / raw)
  To: Eric Blake
  Cc: Kevin Wolf, berrange, qemu-block, qemu-devel, qemu-stable,
	nsoffer, Max Reitz

On Tue, Feb 09, 2021 at 09:27:58AM -0600, Eric Blake wrote:
> Our default of a backlog of 1 connection is rather puny; it gets in
> the way when we are explicitly allowing multiple clients (such as
> qemu-nbd -e N [--shared], or nbd-server-start with its default
> "max-connections":0 for unlimited), but is even a problem when we
> stick to qemu-nbd's default of only 1 active client but use -t
> [--persistent] where a second client can start using the server once
> the first finishes.  While the effects are less noticeable on TCP
> sockets (since the client can poll() to learn when the server is ready
> again), it is definitely observable on Unix sockets, where on Unix, a
> client will fail with EAGAIN and no recourse but to sleep an arbitrary
> amount of time before retrying if the server backlog is already full.
> 
> Since QMP nbd-server-start is always persistent, it now always
> requests a backlog of SOMAXCONN; meanwhile, qemu-nbd will request
> SOMAXCONN if persistent, otherwise its backlog should be based on the
> expected number of clients.
> 
> See https://bugzilla.redhat.com/1925045 for a demonstration of where
> our low backlog prevents libnbd from connecting as many parallel
> clients as it wants.
> 
> Reported-by: Richard W.M. Jones <rjones@redhat.com>
> Signed-off-by: Eric Blake <eblake@redhat.com>
> CC: qemu-stable@nongnu.org
> ---
>  blockdev-nbd.c |  7 ++++++-
>  qemu-nbd.c     | 10 +++++++++-
>  2 files changed, 15 insertions(+), 2 deletions(-)
> 
> diff --git a/blockdev-nbd.c b/blockdev-nbd.c
> index d8443d235b73..b264620b98d8 100644
> --- a/blockdev-nbd.c
> +++ b/blockdev-nbd.c
> @@ -134,7 +134,12 @@ void nbd_server_start(SocketAddress *addr, const char *tls_creds,
>      qio_net_listener_set_name(nbd_server->listener,
>                                "nbd-listener");
> 
> -    if (qio_net_listener_open_sync(nbd_server->listener, addr, 1, errp) < 0) {
> +    /*
> +     * Because this server is persistent, a backlog of SOMAXCONN is
> +     * better than trying to size it to max_connections.
> +     */
> +    if (qio_net_listener_open_sync(nbd_server->listener, addr, SOMAXCONN,
> +                                   errp) < 0) {
>          goto error;
>      }
> 
> diff --git a/qemu-nbd.c b/qemu-nbd.c
> index 608c63e82a25..1a340ea4858d 100644
> --- a/qemu-nbd.c
> +++ b/qemu-nbd.c
> @@ -964,8 +964,16 @@ int main(int argc, char **argv)
> 
>      server = qio_net_listener_new();
>      if (socket_activation == 0) {
> +        int backlog;
> +
> +        if (persistent) {
> +            backlog = SOMAXCONN;
> +        } else {
> +            backlog = MIN(shared, SOMAXCONN);
> +        }
>          saddr = nbd_build_socket_address(sockpath, bindto, port);
> -        if (qio_net_listener_open_sync(server, saddr, 1, &local_err) < 0) {
> +        if (qio_net_listener_open_sync(server, saddr, backlog,
> +                                       &local_err) < 0) {
>              object_unref(OBJECT(server));
>              error_report_err(local_err);
>              exit(EXIT_FAILURE);

Works fine here, so:

Tested-by: Richard W.M. Jones <rjones@redhat.com>

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
Fedora Windows cross-compiler. Compile Windows programs, test, and
build Windows installers. Over 100 libraries supported.
http://fedoraproject.org/wiki/MinGW




* Re: [PATCH v3 1/2] qemu-nbd: Use SOMAXCONN for socket listen() backlog
  2021-02-09 16:08   ` Richard W.M. Jones
@ 2021-02-09 16:12     ` Eric Blake
  0 siblings, 0 replies; 10+ messages in thread
From: Eric Blake @ 2021-02-09 16:12 UTC (permalink / raw)
  To: Richard W.M. Jones
  Cc: Kevin Wolf, berrange, qemu-block, qemu-devel, qemu-stable,
	nsoffer, Max Reitz

On 2/9/21 10:08 AM, Richard W.M. Jones wrote:
> On Tue, Feb 09, 2021 at 09:27:58AM -0600, Eric Blake wrote:
>> Our default of a backlog of 1 connection is rather puny; it gets in
>> the way when we are explicitly allowing multiple clients (such as
>> qemu-nbd -e N [--shared], or nbd-server-start with its default
>> "max-connections":0 for unlimited), but is even a problem when we
>> stick to qemu-nbd's default of only 1 active client but use -t
>> [--persistent] where a second client can start using the server once
>> the first finishes.  While the effects are less noticeable on TCP
>> sockets (since the client can poll() to learn when the server is ready
>> again), it is definitely observable on Unix sockets, where on Unix, a

s/where on Unix/where on Linux/

>> client will fail with EAGAIN and no recourse but to sleep an arbitrary
>> amount of time before retrying if the server backlog is already full.
>>
>> Since QMP nbd-server-start is always persistent, it now always
>> requests a backlog of SOMAXCONN; meanwhile, qemu-nbd will request
>> SOMAXCONN if persistent, otherwise its backlog should be based on the
>> expected number of clients.
>>
>> See https://bugzilla.redhat.com/1925045 for a demonstration of where
>> our low backlog prevents libnbd from connecting as many parallel
>> clients as it wants.
>>
>> Reported-by: Richard W.M. Jones <rjones@redhat.com>
>> Signed-off-by: Eric Blake <eblake@redhat.com>
>> CC: qemu-stable@nongnu.org
>> ---
>>  blockdev-nbd.c |  7 ++++++-
>>  qemu-nbd.c     | 10 +++++++++-
>>  2 files changed, 15 insertions(+), 2 deletions(-)
>>

> 
> Works fine here, so:
> 
> Tested-by: Richard W.M. Jones <rjones@redhat.com>
> 

Thanks for testing.

> Rich.
> 

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org




* Re: [PATCH v3 1/2] qemu-nbd: Use SOMAXCONN for socket listen() backlog
  2021-02-09 15:27 ` [PATCH v3 1/2] qemu-nbd: Use SOMAXCONN for socket listen() backlog Eric Blake
  2021-02-09 16:08   ` Richard W.M. Jones
@ 2021-02-09 16:13   ` Daniel P. Berrangé
  2021-02-10 16:58   ` Nir Soffer
  2 siblings, 0 replies; 10+ messages in thread
From: Daniel P. Berrangé @ 2021-02-09 16:13 UTC (permalink / raw)
  To: Eric Blake
  Cc: Kevin Wolf, qemu-block, rjones, qemu-stable, qemu-devel, nsoffer,
	Max Reitz

On Tue, Feb 09, 2021 at 09:27:58AM -0600, Eric Blake wrote:
> Our default of a backlog of 1 connection is rather puny; it gets in
> the way when we are explicitly allowing multiple clients (such as
> qemu-nbd -e N [--shared], or nbd-server-start with its default
> "max-connections":0 for unlimited), but is even a problem when we
> stick to qemu-nbd's default of only 1 active client but use -t
> [--persistent] where a second client can start using the server once
> the first finishes.  While the effects are less noticeable on TCP
> sockets (since the client can poll() to learn when the server is ready
> again), it is definitely observable on Unix sockets, where on Unix, a
> client will fail with EAGAIN and no recourse but to sleep an arbitrary
> amount of time before retrying if the server backlog is already full.
> 
> Since QMP nbd-server-start is always persistent, it now always
> requests a backlog of SOMAXCONN; meanwhile, qemu-nbd will request
> SOMAXCONN if persistent, otherwise its backlog should be based on the
> expected number of clients.
> 
> See https://bugzilla.redhat.com/1925045 for a demonstration of where
> our low backlog prevents libnbd from connecting as many parallel
> clients as it wants.
> 
> Reported-by: Richard W.M. Jones <rjones@redhat.com>
> Signed-off-by: Eric Blake <eblake@redhat.com>
> CC: qemu-stable@nongnu.org
> ---
>  blockdev-nbd.c |  7 ++++++-
>  qemu-nbd.c     | 10 +++++++++-
>  2 files changed, 15 insertions(+), 2 deletions(-)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: [PATCH v3 2/2] qemu-nbd: Permit --shared=0 for unlimited clients
  2021-02-09 15:27 ` [PATCH v3 2/2] qemu-nbd: Permit --shared=0 for unlimited clients Eric Blake
@ 2021-02-09 16:14   ` Daniel P. Berrangé
  2021-02-10 21:05   ` Nir Soffer
  1 sibling, 0 replies; 10+ messages in thread
From: Daniel P. Berrangé @ 2021-02-09 16:14 UTC (permalink / raw)
  To: Eric Blake; +Cc: nsoffer, qemu-devel, qemu-block, rjones

On Tue, Feb 09, 2021 at 09:27:59AM -0600, Eric Blake wrote:
> This gives us better feature parity with QMP nbd-server-start, where
> max-connections defaults to 0 for unlimited.
> 
> Signed-off-by: Eric Blake <eblake@redhat.com>
> ---
>  docs/tools/qemu-nbd.rst | 4 ++--
>  qemu-nbd.c              | 7 +++----
>  2 files changed, 5 insertions(+), 6 deletions(-)

Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|




* Re: [PATCH v3 1/2] qemu-nbd: Use SOMAXCONN for socket listen() backlog
  2021-02-09 15:27 ` [PATCH v3 1/2] qemu-nbd: Use SOMAXCONN for socket listen() backlog Eric Blake
  2021-02-09 16:08   ` Richard W.M. Jones
  2021-02-09 16:13   ` Daniel P. Berrangé
@ 2021-02-10 16:58   ` Nir Soffer
  2021-02-10 22:24     ` Eric Blake
  2 siblings, 1 reply; 10+ messages in thread
From: Nir Soffer @ 2021-02-10 16:58 UTC (permalink / raw)
  To: Eric Blake
  Cc: Kevin Wolf, Daniel Berrange, qemu-block, Richard Jones,
	qemu-stable, QEMU Developers, Max Reitz

On Tue, Feb 9, 2021 at 5:28 PM Eric Blake <eblake@redhat.com> wrote:
>
> Our default of a backlog of 1 connection is rather puny; it gets in
> the way when we are explicitly allowing multiple clients (such as
> qemu-nbd -e N [--shared], or nbd-server-start with its default
> "max-connections":0 for unlimited), but is even a problem when we
> stick to qemu-nbd's default of only 1 active client but use -t
> [--persistent] where a second client can start using the server once
> the first finishes.  While the effects are less noticeable on TCP
> sockets (since the client can poll() to learn when the server is ready
> again), it is definitely observable on Unix sockets, where on Unix, a
> client will fail with EAGAIN and no recourse but to sleep an arbitrary
> amount of time before retrying if the server backlog is already full.
>
> Since QMP nbd-server-start is always persistent, it now always
> requests a backlog of SOMAXCONN;

This makes sense since we don't limit the number of connections.

> meanwhile, qemu-nbd will request
> SOMAXCONN if persistent, otherwise its backlog should be based on the
> expected number of clients.

If --persistent is used without --shared, we allow only one concurrent
connection, so it is not clear why we need the maximum backlog.

I think that separating --persistent and --shared would be easier to
understand and use. The backlog will always be based on the shared value.

> See https://bugzilla.redhat.com/1925045 for a demonstration of where
> our low backlog prevents libnbd from connecting as many parallel
> clients as it wants.
>
> Reported-by: Richard W.M. Jones <rjones@redhat.com>
> Signed-off-by: Eric Blake <eblake@redhat.com>
> CC: qemu-stable@nongnu.org
> ---
>  blockdev-nbd.c |  7 ++++++-
>  qemu-nbd.c     | 10 +++++++++-
>  2 files changed, 15 insertions(+), 2 deletions(-)
>
> diff --git a/blockdev-nbd.c b/blockdev-nbd.c
> index d8443d235b73..b264620b98d8 100644
> --- a/blockdev-nbd.c
> +++ b/blockdev-nbd.c
> @@ -134,7 +134,12 @@ void nbd_server_start(SocketAddress *addr, const char *tls_creds,
>      qio_net_listener_set_name(nbd_server->listener,
>                                "nbd-listener");
>
> -    if (qio_net_listener_open_sync(nbd_server->listener, addr, 1, errp) < 0) {
> +    /*
> +     * Because this server is persistent, a backlog of SOMAXCONN is
> +     * better than trying to size it to max_connections.

The comment is not clear. Previously we used a hard-coded value (1), but we
do support more than one connection. Maybe it is better to explain that we
don't know how many connections are needed?

> +     */
> +    if (qio_net_listener_open_sync(nbd_server->listener, addr, SOMAXCONN,
> +                                   errp) < 0) {
>          goto error;
>      }
>
> diff --git a/qemu-nbd.c b/qemu-nbd.c
> index 608c63e82a25..1a340ea4858d 100644
> --- a/qemu-nbd.c
> +++ b/qemu-nbd.c
> @@ -964,8 +964,16 @@ int main(int argc, char **argv)
>
>      server = qio_net_listener_new();
>      if (socket_activation == 0) {
> +        int backlog;
> +
> +        if (persistent) {
> +            backlog = SOMAXCONN;

This increases the backlog, but since the default shared value is still 1,
we will not accept more than 1 connection, so it is not clear why SOMAXCONN
is better.

> +        } else {
> +            backlog = MIN(shared, SOMAXCONN);
> +        }
>          saddr = nbd_build_socket_address(sockpath, bindto, port);
> -        if (qio_net_listener_open_sync(server, saddr, 1, &local_err) < 0) {
> +        if (qio_net_listener_open_sync(server, saddr, backlog,
> +                                       &local_err) < 0) {
>              object_unref(OBJECT(server));
>              error_report_err(local_err);
>              exit(EXIT_FAILURE);
> --
> 2.30.0
>




* Re: [PATCH v3 2/2] qemu-nbd: Permit --shared=0 for unlimited clients
  2021-02-09 15:27 ` [PATCH v3 2/2] qemu-nbd: Permit --shared=0 for unlimited clients Eric Blake
  2021-02-09 16:14   ` Daniel P. Berrangé
@ 2021-02-10 21:05   ` Nir Soffer
  1 sibling, 0 replies; 10+ messages in thread
From: Nir Soffer @ 2021-02-10 21:05 UTC (permalink / raw)
  To: Eric Blake; +Cc: Daniel Berrange, QEMU Developers, qemu-block, Richard Jones

On Tue, Feb 9, 2021 at 5:28 PM Eric Blake <eblake@redhat.com> wrote:
>
> This gives us better feature parity with QMP nbd-server-start, where
> max-connections defaults to 0 for unlimited.

Sounds useful.

> Signed-off-by: Eric Blake <eblake@redhat.com>
> ---
>  docs/tools/qemu-nbd.rst | 4 ++--
>  qemu-nbd.c              | 7 +++----
>  2 files changed, 5 insertions(+), 6 deletions(-)
>
> diff --git a/docs/tools/qemu-nbd.rst b/docs/tools/qemu-nbd.rst
> index fe41336dc550..ee862fa0bc02 100644
> --- a/docs/tools/qemu-nbd.rst
> +++ b/docs/tools/qemu-nbd.rst
> @@ -136,8 +136,8 @@ driver options if ``--image-opts`` is specified.
>  .. option:: -e, --shared=NUM
>
>    Allow up to *NUM* clients to share the device (default
> -  ``1``). Safe for readers, but for now, consistency is not
> -  guaranteed between multiple writers.
> +  ``1``), 0 for unlimited. Safe for readers, but for now,
> +  consistency is not guaranteed between multiple writers.
>
>  .. option:: -t, --persistent
>
> diff --git a/qemu-nbd.c b/qemu-nbd.c
> index 1a340ea4858d..5416509ece18 100644
> --- a/qemu-nbd.c
> +++ b/qemu-nbd.c
> @@ -328,7 +328,7 @@ static void *nbd_client_thread(void *arg)
>
>  static int nbd_can_accept(void)
>  {
> -    return state == RUNNING && nb_fds < shared;
> +    return state == RUNNING && (shared == 0 || nb_fds < shared);
>  }
>
>  static void nbd_update_server_watch(void);
> @@ -706,8 +706,7 @@ int main(int argc, char **argv)
>              device = optarg;
>              break;
>          case 'e':
> -            if (qemu_strtoi(optarg, NULL, 0, &shared) < 0 ||
> -                shared < 1) {
> +            if (qemu_strtoi(optarg, NULL, 0, &shared) < 0 || shared < 0) {
>                  error_report("Invalid shared device number '%s'", optarg);
>                  exit(EXIT_FAILURE);
>              }
> @@ -966,7 +965,7 @@ int main(int argc, char **argv)
>      if (socket_activation == 0) {
>          int backlog;
>
> -        if (persistent) {
> +        if (persistent || shared == 0) {
>              backlog = SOMAXCONN;
>          } else {
>              backlog = MIN(shared, SOMAXCONN);
> --
> 2.30.0
>

Reviewed-by: Nir Soffer <nsoffer@redhat.com>




* Re: [PATCH v3 1/2] qemu-nbd: Use SOMAXCONN for socket listen() backlog
  2021-02-10 16:58   ` Nir Soffer
@ 2021-02-10 22:24     ` Eric Blake
  0 siblings, 0 replies; 10+ messages in thread
From: Eric Blake @ 2021-02-10 22:24 UTC (permalink / raw)
  To: Nir Soffer
  Cc: Kevin Wolf, Daniel Berrange, qemu-block, Richard Jones,
	qemu-stable, QEMU Developers, Max Reitz

On 2/10/21 10:58 AM, Nir Soffer wrote:
> On Tue, Feb 9, 2021 at 5:28 PM Eric Blake <eblake@redhat.com> wrote:
>>
>> Our default of a backlog of 1 connection is rather puny; it gets in
>> the way when we are explicitly allowing multiple clients (such as
>> qemu-nbd -e N [--shared], or nbd-server-start with its default
>> "max-connections":0 for unlimited), but is even a problem when we
>> stick to qemu-nbd's default of only 1 active client but use -t
>> [--persistent] where a second client can start using the server once
>> the first finishes.  While the effects are less noticeable on TCP
>> sockets (since the client can poll() to learn when the server is ready
>> again), it is definitely observable on Unix sockets, where on Unix, a
>> client will fail with EAGAIN and no recourse but to sleep an arbitrary
>> amount of time before retrying if the server backlog is already full.
>>
>> Since QMP nbd-server-start is always persistent, it now always
>> requests a backlog of SOMAXCONN;
> 
> This makes sense since we don't limit the number of connections.
> 
>> meanwhile, qemu-nbd will request
>> SOMAXCONN if persistent, otherwise its backlog should be based on the
>> expected number of clients.
> 
> If --persistent is used without --shared, we allow only one concurrent
> connection, so it is not clear why we need the maximum backlog.

We only allow one active connection, but other clients can queue up to
also take advantage of the server once the first client disconnects.  A
larger backlog allows those additional clients to reach the point where
they can poll() for activity, rather than getting EAGAIN failures.
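
A minimal sketch of that queueing behaviour (illustrative only, not qemu
code; serve_one_client() and the socket path are hypothetical, and most
error handling is omitted):

    /* One active client at a time; with a deep listen() backlog, later
     * clients complete connect() and wait in the kernel's accept queue
     * instead of failing with EAGAIN. */
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    extern void serve_one_client(int fd);   /* hypothetical handler */

    void serve_forever(const char *path)
    {
        struct sockaddr_un addr = { .sun_family = AF_UNIX };
        int srv = socket(AF_UNIX, SOCK_STREAM, 0);

        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
        bind(srv, (struct sockaddr *)&addr, sizeof(addr));
        listen(srv, SOMAXCONN);              /* queue the waiting clients */

        for (;;) {
            int c = accept(srv, NULL, NULL); /* next queued client, in order */
            if (c >= 0) {
                serve_one_client(c);
                close(c);                    /* frees the slot for the next */
            }
        }
    }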

> 
> I think that separating --persistent and --shared would be easier to
> understand and use. The backlog will always be based on the shared value.
> 

>> +++ b/qemu-nbd.c
>> @@ -964,8 +964,16 @@ int main(int argc, char **argv)
>>
>>      server = qio_net_listener_new();
>>      if (socket_activation == 0) {
>> +        int backlog;
>> +
>> +        if (persistent) {
>> +            backlog = SOMAXCONN;
> 
> This increases the backlog, but since the default shared value is still 1,
> we will not accept more than 1 connection, so it is not clear why SOMAXCONN
> is better.

While we aren't servicing the next client yet, we are at least allowing
them to make it further in their connection by supporting a backlog.


-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



