From: Yafang Shao
Date: Tue, 31 Jan 2023 14:28:22 +0800
Subject: Re: [RFC PATCH bpf-next v2 00/11] mm, bpf: Add BPF into /proc/meminfo
To: Uladzislau Rezki
Cc: Alexei Starovoitov, Andrew Morton, Christoph Hellwig,
    Hyeonggon Yoo <42.hyeyoo@gmail.com>, Vlastimil Babka, Alexei Starovoitov,
    Daniel Borkmann, Andrii Nakryiko, Martin KaFai Lau, Song Liu,
    Yonghong Song, John Fastabend, KP Singh, Stanislav Fomichev, Hao Luo,
    Jiri Olsa, Tejun Heo, dennis@kernel.org, Chris Lameter, Pekka Enberg,
    David Rientjes, Joonsoo Kim, Roman Gushchin, linux-mm, bpf
References: <20230112155326.26902-1-laoar.shao@gmail.com>
X-Mailing-List: bpf@vger.kernel.org

On Mon, Jan 30, 2023 at 9:14 PM Uladzislau Rezki wrote:
>
> On Sat, Jan 28, 2023 at 07:49:08PM +0800, Yafang Shao wrote:
> > On Thu, Jan 26, 2023 at 1:45 PM
> > Alexei Starovoitov wrote:
> > >
> > > On Tue, Jan 17, 2023 at 10:49 PM Yafang Shao wrote:
> > > > > > I just don't want to add many if-elses or switch-cases into
> > > > > > bpf_map_memory_footprint(), because I think it is a little ugly.
> > > > > > Introducing a new map ops could make it more clear. For example,
> > > > > >
> > > > > > static unsigned long bpf_map_memory_footprint(const struct bpf_map *map)
> > > > > > {
> > > > > >         unsigned long size;
> > > > > >
> > > > > >         if (map->ops->map_mem_footprint)
> > > > > >                 return map->ops->map_mem_footprint(map);
> > > > > >
> > > > > >         size = round_up(map->key_size + bpf_map_value_size(map), 8);
> > > > > >         return round_up(map->max_entries * size, PAGE_SIZE);
> > > > > > }
> > > > > >
> > > > > It is also ugly, because bpf_map_value_size() already has if-stmt.
> > > > > I prefer to keep all estimates in one place.
> > > > > There is no need to be 100% accurate.
> > > >
> > > > Per my investigation, it can be almost accurate with little effort.
> > > > Take the htab for example,
> > > >
> > > > static unsigned long htab_mem_footprint(const struct bpf_map *map)
> > > > {
> > > >         struct bpf_htab *htab = container_of(map, struct bpf_htab, map);
> > > >         unsigned long size = 0;
> > > >
> > > >         if (!htab_is_prealloc(htab)) {
> > > >                 size += htab_elements_size(htab);
> > > >         }
> > > >         size += kvsize(htab->elems);
> > > >         size += percpu_size(htab->extra_elems);
> > > >         size += kvsize(htab->buckets);
> > > >         size += bpf_mem_alloc_size(&htab->pcpu_ma);
> > > >         size += bpf_mem_alloc_size(&htab->ma);
> > > >         if (htab->use_percpu_counter)
> > > >                 size += percpu_size(htab->pcount.counters);
> > > >         size += percpu_size(htab->map_locked[i]) * HASHTAB_MAP_LOCK_COUNT;
> > > >         size += kvsize(htab);
> > > >         return size;
> > > > }
> > > >
> > > Please don't.
> > > Above doesn't look maintainable.
> >
> > It is similar to htab_map_free(). These pointers are the pointers
> > which will be freed in map_free().
> > We just need to keep map_mem_footprint() in sync with map_free(). It
> > won't be a problem for maintenance.
> >
> > > Look at kvsize(htab). Do you really care about hundred bytes?
> > > Just accept that there will be a small constant difference
> > > between what show_fdinfo reports and the real memory.
> >
> > The point is we don't have a clear idea what the margin is.
> >
> > > You cannot make it 100%.
> > > There is kfence that will allocate 4k though you asked kmalloc(8).
> >
> > We already have ksize()[1], which covers the kfence.
> >
> > [1]. https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf-next.git/tree/mm/slab_common.c#n1431
> >
> > > > We just need to get the real memory size from the pointer instead of
> > > > calculating the size again.
> > > > For non-preallocated htab, it is a little trouble to get the element
> > > > size (not the unit_size), but it won't be a big deal.
> > >
> > > You'd have to convince mm folks that kvsize() is worth doing.
> > > I don't think it will be easy.
> >
> > As I mentioned above, we already have ksize(), so we only need to
> > introduce vsize(). Per my understanding, we can simply use
> > vm_struct->size to get the vmalloc size, see also the patch #5 in this
> > patchset[2].
> >
> > Andrew, Uladzislau, Christoph, do you have any comments on this newly
> > introduced vsize()[2] ?
> >
> > [2]. https://lore.kernel.org/bpf/20230112155326.26902-6-laoar.shao@gmail.com/
> >
>
> +/* Report full size of underlying allocation of a vmalloc'ed addr */
> +static inline size_t vsize(const void *addr)
> +{
> +        struct vm_struct *area;
> +
> +        if (!addr)
> +                return 0;
> +
> +        area = find_vm_area(addr);
> +        if (unlikely(!area))
> +                return 0;
> +
> +        return area->size;
> +}
>
> You can not access area after the lock is dropped. We do not have any
> ref counters for VA objects. Therefore it should be done like below:
>
> spin_lock(&vmap_area_lock);
> va = __find_vmap_area(addr, &vmap_area_root);
> if (va && va->vm)
>         va_size = va->vm->size;
> spin_unlock(&vmap_area_lock);
>
> return va_size;
>
Ah, it should take this global lock. I missed that. Many thanks for the
detailed explanation.

-- 
Regards
Yafang
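
For context, the snippet below shows one way Uladzislau's locked lookup could
be folded back into the proposed vsize() helper. It is only a sketch of the
fix he describes above, not the patch that was ultimately posted, and it
assumes the helper is placed in mm/vmalloc.c, where vmap_area_lock,
vmap_area_root and __find_vmap_area() are visible:

/*
 * Sketch only: assumes this lives in mm/vmalloc.c so that vmap_area_lock,
 * vmap_area_root and __find_vmap_area() are in scope.
 *
 * Report the full size of the underlying allocation of a vmalloc'ed addr.
 */
size_t vsize(const void *addr)
{
        struct vmap_area *va;
        size_t va_size = 0;

        if (!addr)
                return 0;

        /*
         * vmap_area objects are not reference counted, so the area must not
         * be dereferenced after vmap_area_lock is dropped; read vm->size
         * while still holding the lock.
         */
        spin_lock(&vmap_area_lock);
        va = __find_vmap_area((unsigned long)addr, &vmap_area_root);
        if (va && va->vm)
                va_size = va->vm->size;
        spin_unlock(&vmap_area_lock);

        return va_size;
}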