Date: Sat, 6 Aug 2022 13:24:59 +0300
From: Konstantin Shelekhin
Subject: Re: [PATCH v9 12/27] rust: add `kernel` crate
In-Reply-To: <20220805154231.31257-13-ojeda@kernel.org>
X-Mailing-List: rust-for-linux@vger.kernel.org

> +unsafe impl GlobalAlloc for KernelAllocator {
> +    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
> +        // `krealloc()` is used instead of `kmalloc()` because the latter is
> +        // an inline function and cannot be bound to as a result.
> +        unsafe { bindings::krealloc(ptr::null(), layout.size(), bindings::GFP_KERNEL) as *mut u8 }
> +    }
> +
> +    unsafe fn dealloc(&self, ptr: *mut u8, _layout: Layout) {
> +        unsafe {
> +            bindings::kfree(ptr as *const core::ffi::c_void);
> +        }
> +    }
> +}

I sense possible problems here.
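The trait itself is part of the problem: GlobalAlloc::alloc() receives
only a Layout, so there is no way to thread GFP flags through it, and
every allocation routed this way silently becomes GFP_KERNEL. A
flag-taking entry point would have to live next to the trait impl.
A minimal sketch (krealloc_flags is a hypothetical helper, not
something from this patch):

  // Sketch only: same body as the quoted alloc(), but the caller
  // chooses the GFP mask instead of it being hard-coded.
  unsafe fn krealloc_flags(layout: Layout, flags: bindings::gfp_t) -> *mut u8 {
      // SAFETY: same contract as the krealloc() call in alloc() above.
      unsafe { bindings::krealloc(ptr::null(), layout.size(), flags) as *mut u8 }
  }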
It's common for kernel code to pass flags during memory allocations.
For example:

  struct bio *bio;

  for (...) {
          bio = bio_alloc_bioset(bdev, nr_vecs, opf, GFP_NOIO, bs);
          if (!bio)
                  return -ENOMEM;
  }

Without GFP_NOIO we can run into a deadlock: the kernel will try to
give us free memory by flushing dirty pages, but writing those pages
out needs the very memory we are asking for, and boom, deadlock.

Or we can be allocating some structs under a spinlock (yeah, that
happens too):

  struct efc_vport *vport;

  spin_lock_irqsave(...);

  vport = kzalloc(sizeof(*vport), GFP_ATOMIC);
  if (!vport) {
          spin_unlock_irqrestore(...);
          return NULL;
  }

  spin_unlock_irqrestore(...);

The same can (and probably will) happen with e.g. Vec elements, so
some form of flag passing should be supported in the try_* variants:

  let mut vec = Vec::try_new(GFP_ATOMIC)?;

  vec.try_push(GFP_ATOMIC, 1)?;
  vec.try_push(GFP_ATOMIC, 2)?;
  vec.try_push(GFP_ATOMIC, 3)?;
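For what it's worth, here is a rough sketch of the shape I mean,
written against userspace Rust so it compiles standalone; Flags, KVec
and the constants are made-up stand-ins, in the kernel they would wrap
bindings::gfp_t and feed it to krealloc():

  #[derive(Clone, Copy)]
  pub struct Flags(u32); // stand-in for a wrapper over bindings::gfp_t

  pub const GFP_KERNEL: Flags = Flags(0); // values are placeholders
  pub const GFP_ATOMIC: Flags = Flags(1);

  pub struct AllocError;

  pub struct KVec<T>(Vec<T>); // userspace stand-in for a kernel vector

  impl<T> KVec<T> {
      pub fn try_new(_flags: Flags) -> Result<Self, AllocError> {
          // An empty vector allocates nothing yet; a kernel version
          // would use `flags` for the first real allocation.
          Ok(Self(Vec::new()))
      }

      pub fn try_push(&mut self, _flags: Flags, v: T) -> Result<(), AllocError> {
          // Every call that can grow the buffer takes flags explicitly,
          // so callers in atomic context can pass GFP_ATOMIC; a kernel
          // version would hand `flags` to the allocator when growing.
          self.0.try_reserve(1).map_err(|_| AllocError)?;
          self.0.push(v);
          Ok(())
      }
  }

The point is just that the flags travel with every call that can
allocate, not that the API has to look exactly like this.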