Subject: Re: [PATCH 3/3] xen: optimize xenbus driver for multiple concurrent xenstore accesses
To: Juergen Gross, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
References: <20170106150544.10836-1-jgross@suse.com> <20170106150544.10836-4-jgross@suse.com> <9f84b972-9a76-aae2-ece6-b292bf9796cd@suse.com>
From: Boris Ostrovsky
Date: Tue, 10 Jan 2017 11:36:12 -0500
In-Reply-To: <9f84b972-9a76-aae2-ece6-b292bf9796cd@suse.com>

>>> +static int process_msg(void)
>>> +{
>>> +	static struct xs_thread_state_read state;
>>> +	struct xb_req_data *req;
>>> +	int err;
>>> +	unsigned int len;
>>> +
>>> +	if (!state.in_msg) {
>>> +		state.in_msg = true;
>>> +		state.in_hdr = true;
>>> +		state.used = 0;
>>> +
>>> +		/*
>>> +		 * We must disallow save/restore while reading a message.
>>> +		 * A partial read across s/r leaves us out of sync with
>>> +		 * xenstored.
>>> +		 */
>>> +		mutex_lock(&xs_response_mutex);
>>> +
>>> +		if (!xb_data_to_read()) {
>>> +			/* We raced with save/restore: pending data 'gone'. */
>>> +			mutex_unlock(&xs_response_mutex);
>>> +			state.in_msg = false;

Just noticed: should in_hdr be set to false here as well?

>>> +			return 0;
>>> +		}

Or set it to true here.

>>> +	}
>>> +
>>> +	if (state.in_hdr) {
>>> +		if (state.used != sizeof(state.msg)) {
>>> +			err = xb_read((void *)&state.msg + state.used,
>>> +				      sizeof(state.msg) - state.used);
>>> +			if (err < 0)
>>> +				goto out;
>>> +			state.used += err;
>>> +			if (state.used != sizeof(state.msg))
>>> +				return 0;
>>
>> Would it be possible to do locking at the caller? I understand that you
>> are trying to hold the lock across multiple invocations of this function,
>> but it feels somewhat counter-intuitive and bug-prone.
>
> I think that would be difficult.
>
>> If it's not possible then at least please add a comment explaining the
>> locking algorithm.
>
> Okay. Something like:
>
> /*
>  * xs_response_mutex is locked as long as we are processing one
>  * message. state.in_msg will be true as long as we are holding the
>  * lock in process_msg().

Then in_msg is the same as mutex_is_locked(&xs_response_mutex). And if
so, do we really need it?

-boris
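
P.S. To illustrate the last point, a rough sketch (not even
compile-tested). It assumes that nothing other than process_msg() can be
holding xs_response_mutex at the point where it checks the lock, since
mutex_is_locked() only says that *someone* holds the mutex, not that we
do; with that assumption the lock itself can stand in for the in_msg
flag:

static int process_msg(void)
{
	static struct xs_thread_state_read state;

	/*
	 * "We are in the middle of a message" is exactly "we are still
	 * holding xs_response_mutex from a previous call", so derive the
	 * condition from the lock instead of a separate in_msg flag.
	 */
	if (!mutex_is_locked(&xs_response_mutex)) {
		state.in_hdr = true;
		state.used = 0;

		/* Disallow save/restore while reading a message. */
		mutex_lock(&xs_response_mutex);

		if (!xb_data_to_read()) {
			/* Raced with save/restore: pending data 'gone'. */
			mutex_unlock(&xs_response_mutex);
			return 0;
		}
	}

	/* Header and payload handling would continue as in the patch. */

	return 0;
}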