From: Luis Chamberlain
To: "Edgecombe, Rick P"
Cc: "song@kernel.org", "peterz@infradead.org", "bpf@vger.kernel.org",
	"rppt@kernel.org", "linux-mm@kvack.org", "hch@lst.de",
	"x86@kernel.org", "akpm@linux-foundation.org", "Lu, Aaron"
Subject: Re: [PATCH bpf-next v2 0/5] execmem_alloc for BPF programs
Date: Wed, 16 Nov 2022 15:53:48 -0800
References: <20221107223921.3451913-1-song@kernel.org>

On Wed, Nov 16, 2022 at 10:47:04PM +0000, Edgecombe, Rick P wrote:
> On Wed, 2022-11-16 at 14:33 -0800, Luis Chamberlain wrote:
> > More in line with what I was hoping for. Can something just do the
> > parallelization for you in one shot? Can bench alone do it for you?
> > Is there no interest in having something which generically showcases
> > multithreading / hammering a system with tons of eBPF JITs? It may
> > prove useful.
> >
> > And also, it begs the question: what if you had another generic iTLB
> > benchmark or general memory pressure workload running *as* you run
> > the above? I ask, as it was my understanding that one of the issues
> > was the long-term slowdown caused by the direct map fragmentation
> > without bpf_prog_pack, and so such an application should slow to a
> > crawl over time, and there should be numbers you could show to prove
> > that too, before and after.
> We did have some benchmarks that showed what the regression was if
> your direct map was totally fragmented (started from boot at 4k page
> size):
>
> https://lore.kernel.org/linux-mm/213b4567-46ce-f116-9cdf-bbd0c884eb3c@linux.intel.com/

Oh yes, that is a good example of the effort, but I'm suggesting taking,
for instance, will-it-scale and running it in tandem with bpf_prog_pack,
measuring *both* the iTLB differences, before / after, *and* doing this
again after a period of expected deterioration of the direct map
fragmentation (say after the non-bpf_prog_pack case shows high direct
map fragmentation). This is the sort of thing which could easily go
into a commit log.
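Something along these lines is what I have in mind. This is only a
rough sketch: the will-it-scale binary name and its flags are
illustrative, the perf event alias can vary by CPU, and the DirectMap*
fields in /proc/meminfo are x86-specific.

#!/usr/bin/env python3
# Rough sketch: take an iTLB-miss reading for a workload and a
# snapshot of the direct map split, so before/after numbers can be
# diffed. Binary names/flags below are illustrative placeholders.
import re
import subprocess

def itlb_misses(cmd):
    # perf stat with "-x ," emits CSV on stderr: value,unit,event,...
    res = subprocess.run(["perf", "stat", "-e", "iTLB-load-misses",
                          "-x", ","] + cmd,
                         capture_output=True, text=True)
    for line in res.stderr.splitlines():
        fields = line.split(",")
        if len(fields) >= 3 and "iTLB-load-misses" in fields[2]:
            return int(fields[0])
    return None

def directmap_split():
    # DirectMap4k/2M/1G in kB (x86); growth in the 4k bucket relative
    # to 2M/1G indicates a more fragmented direct map.
    split = {}
    with open("/proc/meminfo") as f:
        for line in f:
            m = re.match(r"(DirectMap\w+):\s+(\d+)\s+kB", line)
            if m:
                split[m.group(1)] = int(m.group(2))
    return split

print("direct map:", directmap_split())
print("iTLB misses:",
      itlb_misses(["./page_fault1_processes", "-t", "16", "-s", "30"]))
# ... hammer the system with parallel BPF program loads here, then
# repeat the two measurements above and compare.

Grab both numbers on a fresh boot, hammer the system with BPF JIT
load/unload cycles for a while, and grab them again; that delta,
with and without bpf_prog_pack, is what belongs in the commit log.

  Luis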