Date: Wed, 13 Apr 2016 10:49:31 +0800
From: Yuanhan Liu
To: marcandre.lureau@redhat.com
Cc: qemu-devel@nongnu.org, "Michael S. Tsirkin", Tetsuya Mukawa, jonshin@cisco.com, Ilya Maximets
Subject: Re: [Qemu-devel] [PATCH 11/18] vhost-user: add shutdown support
Message-ID: <20160413024931.GM3080@yliu-dev.sh.intel.com>
In-Reply-To: <1459509388-6185-12-git-send-email-marcandre.lureau@redhat.com>
References: <1459509388-6185-1-git-send-email-marcandre.lureau@redhat.com> <1459509388-6185-12-git-send-email-marcandre.lureau@redhat.com>

Hi Marc,

First of all, sorry again for the late response!

When I tried your first version last time, I found a few issues related
to reconnect, mainly the acked_features being lost. While checking your
new code, I found that you have already solved that, which is great. So
I tried harder this time, and your patches work great, except for a few
nits I found.

On Fri, Apr 01, 2016 at 01:16:21PM +0200, marcandre.lureau@redhat.com wrote:
> From: Marc-André Lureau
...
> +Slave message types
> +-------------------
> +
> + * VHOST_USER_SLAVE_SHUTDOWN:
> +      Id: 1
> +      Master payload: N/A
> +      Slave payload: u64
> +
> +      Request the master to shutdown the slave. A 0 reply is for
> +      success, in which case the slave may close all connections
> +      immediately and quit.

Assume we are using OVS + DPDK here, so that we could have two
vhost-user connections. When OVS initiates a restart, it might
unregister the two connections one by one. In that case, two
VHOST_USER_SLAVE_SHUTDOWN requests will be sent, and two replies will
be received. Therefore, I don't think it's appropriate to ask the
backend implementation to quit here.

> 
>     switch (msg.request) {
> +    case VHOST_USER_SLAVE_SHUTDOWN: {
> +        uint64_t success = 1; /* 0 is for success */
> +        if (dev->stop) {
> +            dev->stop(dev);
> +            success = 0;
> +        }
> +        msg.payload.u64 = success;
> +        msg.size = sizeof(msg.payload.u64);
> +        size = send(u->slave_fd, &msg, VHOST_USER_HDR_SIZE + msg.size, 0);
> +        if (size != VHOST_USER_HDR_SIZE + msg.size) {
> +            error_report("Failed to write reply.");
> +        }
> +        break;

You might want to remove slave_fd from the watch list here? We might
also need to close slave_fd, assuming that we will no longer use it
once VHOST_USER_SLAVE_SHUTDOWN has been received? (A rough sketch of
what I mean is at the end of this mail.)

I'm asking because I sometimes hit a segfault here, due to opaque
being NULL.

	--yliu
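
P.S. Regarding the slave_fd cleanup above: below is a rough, untested
sketch of what I had in mind at the end of the VHOST_USER_SLAVE_SHUTDOWN
case, assuming the slave socket handler was registered with
qemu_set_fd_handler() and that u->slave_fd is the descriptor in
question; please adjust to whatever your series actually uses.

    /* Untested sketch: once the shutdown request has been answered,
     * stop watching the slave socket and release it, so that a stale
     * fd cannot fire the handler again with a freed/NULL opaque. */
    qemu_set_fd_handler(u->slave_fd, NULL, NULL, NULL);
    close(u->slave_fd);
    u->slave_fd = -1;

If the segfault I mentioned indeed comes from the handler firing again
after shutdown, something like this should make it go away as well.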