Date: Wed, 12 Feb 2020 18:41:01 -0800
From: Andrew Morton
To: Arjun Roy
Cc: davem@davemloft.net, netdev@vger.kernel.org, linux-mm@kvack.org,
 arjunroy@google.com, Eric Dumazet, Soheil Hassas Yeganeh, Linus Torvalds
Subject: Re: [PATCH resend mm,net-next 2/3] mm: Add vm_insert_pages().
Message-Id: <20200212184101.b8551710bd19c8216d62290d@linux-foundation.org>
In-Reply-To: <20200128025958.43490-2-arjunroy.kdev@gmail.com>
References: <20200128025958.43490-1-arjunroy.kdev@gmail.com>
 <20200128025958.43490-2-arjunroy.kdev@gmail.com>

On Mon, 27 Jan 2020 18:59:57 -0800 Arjun Roy wrote:

> Add the ability to insert multiple pages at once to a user VM with
> lower PTE spinlock operations.
>
> The intention of this patch-set is to reduce atomic ops for
> tcp zerocopy receives, which normally hits the same spinlock multiple
> times consecutively.

Seems sensible, thanks.
Some other vm_insert_page() callers might want to know about this, but
I can't immediately spot any which appear to be high bandwidth.

Is there much point in keeping the vm_insert_page() implementation
around?  Replace it with

	static inline int vm_insert_page(struct vm_area_struct *vma,
			unsigned long addr, struct page *page)
	{
		return vm_insert_pages(vma, addr, &page, 1);
	}

?

Also, vm_insert_page() does

	if (!page_count(page))
		return -EINVAL;

and this was not carried over into vm_insert_pages().  How come?

I don't know what that test does - it was added by Linus in the
original commit a145dd411eb28c83.  It's only been 15 years so I'm sure
he remembers ;)
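For anyone skimming the thread: the win the patch description claims is
just lock amortization.  Here is a userspace toy (not kernel code; the
names lock/unlock/insert_one/insert_many are made up for illustration)
sketching why mapping N pages under one PTE-spinlock round trip beats N
separate vm_insert_page() calls:

	#include <assert.h>
	#include <stdio.h>

	/* Count lock/unlock operations so the two strategies can be
	 * compared.  In the kernel this would be the PTE spinlock. */
	static int lock_ops;

	static void lock(void)   { lock_ops++; }
	static void unlock(void) { lock_ops++; }

	/* One lock round trip per page, as vm_insert_page() does today. */
	static void insert_one(int page)
	{
		lock();
		(void)page;		/* ...map the page under the lock... */
		unlock();
	}

	/* All pages under a single round trip: the vm_insert_pages() idea. */
	static void insert_many(const int *pages, int n)
	{
		lock();
		for (int i = 0; i < n; i++)
			(void)pages[i];	/* ...map each page... */
		unlock();
	}

	int main(void)
	{
		int pages[16] = { 0 };
		int single, batched;

		lock_ops = 0;
		for (int i = 0; i < 16; i++)
			insert_one(pages[i]);
		single = lock_ops;	/* lock+unlock per page: 32 ops */

		lock_ops = 0;
		insert_many(pages, 16);
		batched = lock_ops;	/* one lock+unlock total: 2 ops */

		printf("single=%d batched=%d\n", single, batched);
		assert(single == 32 && batched == 2);
		return 0;
	}

For a 16-page tcp zerocopy receive that is 32 lock operations down to
2, which is the "lower PTE spinlock operations" the changelog refers
to; the real patch obviously also has to handle pages that span more
than one PTE page table.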