From mboxrd@z Thu Jan 1 00:00:00 1970
Subject: Re: [RFC V2 3/3] s390/mm: Define arch_get_mappable_range()
To: Anshuman Khandual , Heiko Carstens
Cc: linux-mm@kvack.org, akpm@linux-foundation.org,
    linux-arm-kernel@lists.infradead.org, linux-s390@vger.kernel.org,
    linux-kernel@vger.kernel.org, Vasily Gorbik
References: <1606706992-26656-1-git-send-email-anshuman.khandual@arm.com>
 <1606706992-26656-4-git-send-email-anshuman.khandual@arm.com>
 <20201202203233.GB11274@osiris>
 <24905c32-f6c1-97a0-000f-f822b9870ea5@arm.com>
 <20201203115133.GB9994@osiris>
 <4d6c9ec4-f1be-46b9-5d67-5c53f5afedc5@redhat.com>
 <62c60c9e-20d6-25bd-94d0-78bfed0f2476@arm.com>
From: David Hildenbrand
Organization: Red Hat GmbH
Message-ID: <02dfe6f5-efb6-c04d-c34a-a1e7393625cf@redhat.com>
Date: Mon, 7 Dec 2020 10:03:11 +0100
In-Reply-To: <62c60c9e-20d6-25bd-94d0-78bfed0f2476@arm.com>
X-Mailing-List: linux-kernel@vger.kernel.org

On 07.12.20 05:38, Anshuman Khandual wrote:
> 
> 
> On 12/3/20 5:31 PM, David Hildenbrand wrote:
>> On 03.12.20 12:51, Heiko Carstens wrote:
>>> On Thu, Dec 03, 2020 at 06:03:00AM +0530, Anshuman Khandual wrote:
>>>>>> diff --git a/arch/s390/mm/extmem.c b/arch/s390/mm/extmem.c
>>>>>> index 5060956b8e7d..cc055a78f7b6 100644
>>>>>> --- a/arch/s390/mm/extmem.c
>>>>>> +++ b/arch/s390/mm/extmem.c
>>>>>> @@ -337,6 +337,11 @@ __segment_load (char *name, int do_nonshared, unsigned long *addr, unsigned long
>>>>>>  		goto out_free_resource;
>>>>>>  	}
>>>>>>  
>>>>>> +	if (seg->end + 1 > VMEM_MAX_PHYS || seg->end + 1 < seg->start_addr) {
>>>>>> +		rc = -ERANGE;
>>>>>> +		goto out_resource;
>>>>>> +	}
>>>>>> +
>>>>>>  	rc = vmem_add_mapping(seg->start_addr, seg->end - seg->start_addr + 1);
>>>>>>  	if (rc)
>>>>>>  		goto out_resource;
>>>>>> diff --git a/arch/s390/mm/vmem.c b/arch/s390/mm/vmem.c
>>>>>> index b239f2ba93b0..06dddcc0ce06 100644
>>>>>> --- a/arch/s390/mm/vmem.c
>>>>>> +++ b/arch/s390/mm/vmem.c
>>>>>> @@ -532,14 +532,19 @@ void vmem_remove_mapping(unsigned long start, unsigned long size)
>>>>>>  	mutex_unlock(&vmem_mutex);
>>>>>>  }
>>>>>>  
>>>>>> +struct range arch_get_mappable_range(void)
>>>>>> +{
>>>>>> +	struct range memhp_range;
>>>>>> +
>>>>>> +	memhp_range.start = 0;
>>>>>> +	memhp_range.end = VMEM_MAX_PHYS;
>>>>>> +	return memhp_range;
>>>>>> +}
>>>>>> +
>>>>>>  int vmem_add_mapping(unsigned long start, unsigned long size)
>>>>>>  {
>>>>>>  	int ret;
>>>>>>  
>>>>>> -	if (start + size > VMEM_MAX_PHYS ||
>>>>>> -	    start + size < start)
>>>>>> -		return -ERANGE;
>>>>>> -
>>>>>
>>>>> I really fail to see how this could be considered an improvement for
>>>>> s390. Especially I do not like that the (central) range check is now
>>>>> moved to the caller (__segment_load). Which would mean potential
>>>>> additional future callers would have to duplicate that code as well.
>>>>
>>>> The physical range check is being moved to the generic hotplug code
>>>> via arch_get_mappable_range() instead, making the existing check in
>>>> vmem_add_mapping() redundant. Dropping the check there necessitates
>>>> adding back a similar check in __segment_load(). Otherwise there
>>>> will be a loss of functionality in terms of range check.
>>>>
>>>> May be we could just keep this existing check in vmem_add_mapping()
>>>> as well in order avoid this movement but then it would be redundant
>>>> check in every hotplug path.
>>>>
>>>> So I guess the choice is to either have redundant range checks in
>>>> all hotplug paths or future internal callers of vmem_add_mapping()
>>>> take care of the range check.
>>>
>>> The problem I have with this current approach from an architecture
>>> perspective: we end up having two completely different methods which
>>> are doing the same and must be kept in sync. This might be obvious
>>> looking at this patch, but I'm sure this will go out-of-sync (aka
>>> broken) sooner or later.
>>
>> Exactly, there should be one function only, that was the whole idea of
>> arch_get_mappable_range().
>>
>>>
>>> Therefore I would really like to see a single method to do the range
>>> checking. Maybe you could add a callback into architecture code, so
>>> that such an architecture specific function could also be used
>>> elsewhere. Dunno.
>>>
>>
>> I think we can just switch to using "memhp_range_allowed()" here then
>> after implementing arch_get_mappable_range().
>>
>> Doesn't hurt to double check in vmem_add_mapping() - especially to keep
>> extmem working without changes. At least for callers of memory hotplug
>> it's then clear which values actually won't fail deep down in arch code.
>
> But there is a small problem here. memhp_range_allowed() is now defined
> and available with CONFIG_MEMORY_HOTPLUG where as vmem_add_mapping() and
> __segment_load() are generally available without any config dependency.
> So if CONFIG_MEMORY_HOTPLUG is not enabled there will be a build failure
> in vmem_add_mapping() for memhp_range_allowed() symbol.
>
> We could just move VM_BUG_ON(!memhp_range_allowed(start, size, 1)) check
> from vmem_add_mapping() to arch_add_memory() like on arm64 platform. But
> then __segment_load() would need that additional new check to compensate
> as proposed earlier.
>
> Also leaving vmem_add_mapping() and __segment_load() unchanged will cause
> the address range check to be called three times on the hotplug path i.e
>
> 1. register_memory_resource()
> 2. arch_add_memory()
> 3. vmem_add_mapping()
>
> Moving memhp_range_allowed() check inside arch_add_memory() seems better
> and consistent with arm64. Also in the future, any platform which choose
> to override arch_get_mappable() will have this additional VM_BUG_ON() in
> their arch_add_memory().

Yeah, it might not make sense to add these checks all over the place. The
important part is that

1. There is a check somewhere (even if it's deep down in arch code).
2. There is an obvious way for callers to find out which values are valid.

I guess it would be good enough to

a) Factor out getting the arch ranges into arch_get_mappable_range()
b) Provide memhp_get_pluggable_range()

Both changes only make sense with an in-tree user. I'm planning on using
this functionality in virtio-mem code. I can pick up your patches, drop the
superfluous checks, and use it from virtio-mem code. Makes sense (BTW,
looks like we'll see aarch64 support for virtio-mem soon)?
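
To make b) a bit more concrete, here is a rough sketch of the generic side I
have in mind. Function names follow the discussion above, but the bodies and
the MAX_PHYSMEM_BITS fallback are only illustrative, not a final API:

/*
 * Sketch only: one arch hook describes the mappable range, one generic
 * helper does the checking for every hotplug path.
 */
#include <linux/range.h>
#include <linux/memory_hotplug.h>

/* Weak default; s390 would override this to return [0, VMEM_MAX_PHYS]. */
struct range __weak arch_get_mappable_range(void)
{
	return (struct range) {
		.start = 0,
		.end = -1ULL,
	};
}

/* Expose the effective range to callers such as virtio-mem. */
struct range memhp_get_pluggable_range(bool need_mapping)
{
	const u64 max_phys = (1ULL << MAX_PHYSMEM_BITS) - 1;
	struct range range;

	if (need_mapping) {
		range = arch_get_mappable_range();
		range.end = min_t(u64, range.end, max_phys);
	} else {
		range.start = 0;
		range.end = max_phys;
	}
	return range;
}

/* Single check, called from e.g. add_memory_resource()/arch_add_memory(). */
bool memhp_range_allowed(u64 start, u64 size, bool need_mapping)
{
	struct range range = memhp_get_pluggable_range(need_mapping);
	u64 end = start + size;

	return size && end > start &&
	       start >= range.start && (end - 1) <= range.end;
}

With something like that, vmem_add_mapping()/extmem can keep a private
-ERANGE check against VMEM_MAX_PHYS without duplicating the policy, while
arch_add_memory() just does VM_BUG_ON(!memhp_range_allowed(start, size, true)).
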
-- 
Thanks,

David / dhildenb