From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Michael Kelley,
 "Andrea Parri (Microsoft)", Wei Liu, Sasha Levin
Subject: [PATCH 4.19 13/32] Drivers: hv: vmbus: Prevent load re-ordering when reading ring buffer
Date: Mon, 18 Apr 2022 14:13:53 +0200
Message-Id: <20220418121127.515948805@linuxfoundation.org>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20220418121127.127656835@linuxfoundation.org>
References: <20220418121127.127656835@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Precedence: bulk
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

From: Michael Kelley

[ Upstream commit b6cae15b5710c8097aad26a2e5e752c323ee5348 ]

When reading a packet from a host-to-guest ring buffer, there is no
memory barrier between reading the write index (to see if there is a
packet to read) and reading the contents of the packet. The Hyper-V
host uses a store-release when updating the write index, ensuring that
the writes of the packet data complete first. On the guest side,
however, the processor can reorder the loads and read the packet data
before the write index, sometimes picking up stale packet data. Such
stale packet data has been observed in a reproducible case in a VM on
ARM64.

Fix this by using virt_load_acquire() to read the write index, ensuring
that reads of the packet data cannot be reordered before it. Preventing
such reordering is logically correct, and with this change the stale
data can no longer be reproduced.
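
For context, the release/acquire pairing described above can be sketched in
portable C11 atomics. This is a minimal user-space illustration only, not the
kernel's virt_load_acquire()/virt_store_release() helpers or the actual VMBus
ring-buffer layout; the names (struct demo_ring, ring_produce, ring_consume)
are hypothetical:

/*
 * Minimal sketch of the release/acquire pairing, using C11 <stdatomic.h>
 * in place of the kernel's virt_* barrier helpers. All names here are
 * made up for illustration; this is not the Hyper-V ring-buffer code.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 16u

struct demo_ring {
	uint32_t data[RING_SIZE];
	_Atomic uint32_t write_index;	/* updated by the producer */
	uint32_t read_index;		/* owned by the single consumer */
};

/*
 * Producer ("host" side): fill the slot first, then publish it by
 * updating write_index with release semantics, so the data stores
 * are visible before the new index value is.
 */
static void ring_produce(struct demo_ring *r, uint32_t value)
{
	uint32_t w = atomic_load_explicit(&r->write_index,
					  memory_order_relaxed);

	r->data[w % RING_SIZE] = value;
	atomic_store_explicit(&r->write_index, w + 1, memory_order_release);
}

/*
 * Consumer ("guest" side): read write_index with acquire semantics so
 * the subsequent data load cannot be reordered before it and return
 * stale contents -- the reordering this patch prevents in
 * hv_pkt_iter_avail().
 */
static int ring_consume(struct demo_ring *r, uint32_t *out)
{
	uint32_t w = atomic_load_explicit(&r->write_index,
					  memory_order_acquire);

	if (w == r->read_index)
		return 0;		/* nothing to read */

	*out = r->data[r->read_index % RING_SIZE];
	r->read_index++;
	return 1;
}

int main(void)
{
	struct demo_ring ring = { .write_index = 0, .read_index = 0 };
	uint32_t v;

	ring_produce(&ring, 42);
	if (ring_consume(&ring, &v))
		printf("read %u\n", v);
	return 0;
}

In the patch itself, virt_load_acquire() plays the role of the acquire load
above, paired with the Hyper-V host's store-release on write_index.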
Signed-off-by: Michael Kelley
Reviewed-by: Andrea Parri (Microsoft)
Link: https://lore.kernel.org/r/1648394710-33480-1-git-send-email-mikelley@microsoft.com
Signed-off-by: Wei Liu
Signed-off-by: Sasha Levin
---
 drivers/hv/ring_buffer.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c
index 6cb45f256107..d97b30af9e03 100644
--- a/drivers/hv/ring_buffer.c
+++ b/drivers/hv/ring_buffer.c
@@ -365,7 +365,16 @@ int hv_ringbuffer_read(struct vmbus_channel *channel,
 static u32 hv_pkt_iter_avail(const struct hv_ring_buffer_info *rbi)
 {
 	u32 priv_read_loc = rbi->priv_read_index;
-	u32 write_loc = READ_ONCE(rbi->ring_buffer->write_index);
+	u32 write_loc;
+
+	/*
+	 * The Hyper-V host writes the packet data, then uses
+	 * store_release() to update the write_index. Use load_acquire()
+	 * here to prevent loads of the packet data from being re-ordered
+	 * before the read of the write_index and potentially getting
+	 * stale data.
+	 */
+	write_loc = virt_load_acquire(&rbi->ring_buffer->write_index);
 
 	if (write_loc >= priv_read_loc)
 		return write_loc - priv_read_loc;
-- 
2.35.1