Subject: Re: [RFC PATCH 4/4 v0.3] sched/umcg: RFC: implement UMCG syscalls
From: Thierry Delisle
To: Peter Oskolkov
CC: Peter Buhr
Date: Mon, 19 Jul 2021 14:13:05 -0400
References: <20210716184719.269033-5-posk@google.com> <2c971806-b8f6-50b9-491f-e1ede4a33579@uwaterloo.ca>
X-Mailing-List: linux-kernel@vger.kernel.org

> Latency/efficiency: on worker wakeup an idle server can be picked from
> the list and context-switched into synchronously, on the same CPU.
> Using FDs and select/poll/epoll will add extra layers of abstractions;
> synchronous context-switches (not yet fully implemented in UMCG) will
> most likely be impossible. This patchset seems much more efficient and
> lightweight than whatever can be built on top of FDs.

I can believe that. Are you planning to support separate scheduling
instances within a single user space? That is, multiple sets of server
threads, where workers can only run within a specific set?

I believe the problem with idle_servers_ptr as specified is that it is
not possible to reclaim used nodes safely, as the sketch below tries to
illustrate.
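To make the hazard concrete, here is a minimal sketch, assuming
idle_servers_ptr heads a singly-linked list of nodes shared between
userspace and the kernel. The node layout and helper below are my
invention for illustration, not taken from the patchset:

#include <stdint.h>
#include <stdlib.h>

/* Hypothetical node layout; not the patchset's actual structure. */
struct idle_server_node {
	struct idle_server_node *next;	/* may be chased by the kernel */
	uint64_t server_tid;		/* server to switch into on wakeup */
};

/* Userspace unlinks a node in order to reuse or free its memory. */
static void retire_node(struct idle_server_node **head)
{
	struct idle_server_node *node = *head;

	if (!node)
		return;
	*head = node->next;
	/*
	 * Unsafe: nothing tells userspace whether the kernel is still
	 * traversing 'node' (e.g. during a concurrent worker wakeup),
	 * so this free() can race with a kernel read. Without knowing
	 * which nodes the kernel may touch, no grace period or
	 * hazard-pointer scheme can be layered on top.
	 */
	free(node);
}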
I don't see any indication of which nodes the kernel may access
concurrently, and therefore nothing that a memory reclamation scheme
could be based on. What is the benefit of having users maintain a list
of idle servers themselves, rather than each server marking itself as
'out of work' and letting the kernel maintain the list?
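For comparison, a rough sketch of the alternative I have in mind, where
each server only publishes a per-server flag and the idle list lives
entirely inside the kernel. All names here are hypothetical, and the
blocking call is only gestured at:

#include <stdatomic.h>
#include <stdint.h>

enum server_state {
	SERVER_RUNNING     = 0,
	SERVER_OUT_OF_WORK = 1,	/* kernel would link this server into
				 * its own internal idle list */
};

/* Hypothetical per-server word; written by the server, read by the
 * kernel. */
struct server_ctl {
	_Atomic uint32_t state;
};

static void server_go_idle(struct server_ctl *ctl)
{
	/* Publish 'out of work' before blocking ... */
	atomic_store_explicit(&ctl->state, SERVER_OUT_OF_WORK,
			      memory_order_release);
	/* ... then block in the kernel, e.g. via sys_umcg_wait(). */
}

Since the list nodes would then be kernel-owned, userspace would never
have to reason about when it is safe to reclaim them.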