Date: Fri, 29 May 2020 08:24:11 -0700
From: Kees Cook
To: "Kirill A. Shutemov"
Cc: Dave Hansen, Andy Lutomirski, Peter Zijlstra, Paolo Bonzini,
	Sean Christopherson, Vitaly Kuznetsov, Wanpeng Li, Jim Mattson,
	Joerg Roedel, David Rientjes, Andrea Arcangeli, Will Drewry,
	"Edgecombe, Rick P", "Kleen, Andi", x86@kernel.org,
	kvm@vger.kernel.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org, "Kirill A. Shutemov"
Subject: Re: [RFC 06/16] KVM: Use GUP instead of copy_from/to_user() to access guest memory
Message-ID: <202005290815.9ABDE475@keescook>
References: <20200522125214.31348-1-kirill.shutemov@linux.intel.com>
	<20200522125214.31348-7-kirill.shutemov@linux.intel.com>
In-Reply-To: <20200522125214.31348-7-kirill.shutemov@linux.intel.com>

On Fri, May 22, 2020 at 03:52:04PM +0300, Kirill A. Shutemov wrote:
> +int copy_from_guest(void *data, unsigned long hva, int len)
> +{
> +	int offset = offset_in_page(hva);
> +	struct page *page;
> +	int npages, seg;
> +
> +	while ((seg = next_segment(len, offset)) != 0) {
> +		npages = get_user_pages_unlocked(hva, 1, &page, 0);
> +		if (npages != 1)
> +			return -EFAULT;
> +		memcpy(data, page_address(page) + offset, seg);
> +		put_page(page);
> +		len -= seg;
> +		hva += seg;
> +		offset = 0;
> +	}
> +
> +	return 0;
> +}
> +
> +int copy_to_guest(unsigned long hva, const void *data, int len)
> +{
> +	int offset = offset_in_page(hva);
> +	struct page *page;
> +	int npages, seg;
> +
> +	while ((seg = next_segment(len, offset)) != 0) {
> +		npages = get_user_pages_unlocked(hva, 1, &page, FOLL_WRITE);
> +		if (npages != 1)
> +			return -EFAULT;
> +		memcpy(page_address(page) + offset, data, seg);
> +		put_page(page);
> +		len -= seg;
> +		hva += seg;
> +		offset = 0;
> +	}
> +	return 0;
> +}
> +
>  static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
> -				 void *data, int offset, int len)
> +				 void *data, int offset, int len,
> +				 bool protected)
>  {
>  	int r;
>  	unsigned long addr;
> @@ -2257,7 +2297,10 @@ static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
>  	addr = gfn_to_hva_memslot_prot(slot, gfn, NULL);
>  	if (kvm_is_error_hva(addr))
>  		return -EFAULT;
> -	r = __copy_from_user(data, (void __user *)addr + offset, len);
> +	if (protected)
> +		r = copy_from_guest(data, addr + offset, len);
> +	else
> +		r = __copy_from_user(data, (void __user *)addr + offset, len);
>  	if (r)
>  		return -EFAULT;
>  	return 0;

This ends up removing the KASAN and object size tests. Compare to:

__copy_from_user(void *to, const void __user *from, unsigned long n)
{
	might_fault();
	kasan_check_write(to, n);
	check_object_size(to, n, false);
	return raw_copy_from_user(to, from, n);
}

Those will need to get added back. :)
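As a minimal, untested sketch, the checks could be restored at the top
of the new helper (copy_to_guest() would want the analogous
kasan_check_read() and check_object_size(..., true) calls, matching
__copy_to_user()):

int copy_from_guest(void *data, unsigned long hva, int len)
{
	int offset = offset_in_page(hva);
	struct page *page;
	int npages, seg;

	/* Mirror the sanity checks __copy_from_user() does on 'data': */
	might_fault();
	kasan_check_write(data, len);
	check_object_size(data, len, false);

	while ((seg = next_segment(len, offset)) != 0) {
		/* ...unchanged from the patch... */
	}

	return 0;
}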
Additionally, I see that copy_from_guest() neither clears the
destination memory on a short read, nor does KVM actually handle the
short read case correctly now. See the notes in uaccess.h:

 * NOTE: only copy_from_user() zero-pads the destination in case of short copy.
 * Neither __copy_from_user() nor __copy_from_user_inatomic() zero anything
 * at all; their callers absolutely must check the return value.

It's not clear to me how the destination buffers get reused, but this
has the potential to leak kernel memory contents. This needs separate
fixing, I think.
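As a rough sketch of one possible fix (untested, and assuming 'data' is
advanced alongside 'hva' in the copy loop), the failure path could
zero-pad the way copy_from_user() does:

		npages = get_user_pages_unlocked(hva, 1, &page, 0);
		if (npages != 1) {
			/*
			 * Match copy_from_user(): zero the uncopied
			 * remainder so a short read can't expose stale
			 * kernel memory through a reused buffer.
			 */
			memset(data, 0, len);
			return -EFAULT;
		}

-Kees

-- 
Kees Cook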