From: Paul Durrant
To: 'Jan Beulich'
Cc: 'Andrew Cooper', 'Wei Liu', 'Roger Pau Monné', 'Julien Grall', 'Stefano Stabellini', 'George Dunlap'
Subject: RE: [PATCH v2 2/2] IOREQ: refine when to send mapcache invalidation request
Date: Thu, 4 Feb 2021 09:26:04 -0000
Message-ID: <03fb01d6fad7$c39087b0$4ab19710$@xen.org>
References: <0e7265fe-8d89-facb-790d-9232c742c3fa@suse.com>

> -----Original Message-----
> From: Jan Beulich
> Sent: 02 February 2021 15:15
> To: xen-devel@lists.xenproject.org
> Cc: Andrew Cooper; Wei Liu; Roger Pau Monné; Paul Durrant; Julien Grall; Stefano Stabellini; George Dunlap
> Subject: [PATCH v2 2/2] IOREQ: refine when to send mapcache invalidation request
>
> XENMEM_decrease_reservation isn't the only means by which pages can get
> removed from a guest, yet all removals ought to be signaled to qemu. Put
> setting of the flag into the central p2m_remove_page() underlying all
> respective hypercalls, as well as a few similar places, mainly in PoD
> code.
>
> Additionally there's no point sending the request for the local domain
> when the domain acted upon is a different one.
> The latter domain's ioreq
> server mapcaches need invalidating. We assume that domain to be paused
> at the point the operation takes place, so sending the request in this
> case happens from the hvm_do_resume() path, which as one of its first
> steps calls handle_hvm_io_completion().
>
> Even without the remote operation aspect a single domain-wide flag
> doesn't do: guests may e.g. decrease-reservation on multiple vCPUs in
> parallel. Each of them needs to issue an invalidation request in due
> course, in particular because exiting to guest context should not happen
> before the request was actually seen by (all) the emulator(s).
>
> Signed-off-by: Jan Beulich
> ---
> v2: Preemption related adjustment split off. Make flag per-vCPU. More
> places to set the flag. Also handle acting on a remote domain.
> Re-base.

I'm wondering whether a per-vCPU flag is actually overkill. We just need to make sure that we don't miss sending an invalidation when multiple vCPUs are in play. The mapcache in the emulator is global, so issuing an invalidate for every vCPU is going to cause an unnecessary storm of ioreqs, isn't it? Could we get away with a per-domain atomic counter?

  Paul
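[For readers following the thread: one way the per-domain counter idea could work is a generation count bumped on every removal, with each vCPU remembering the last generation it flushed for. All names below are hypothetical, a sketch of the idea under discussion, not the actual Xen code.]

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical sketch: a per-domain generation counter instead of a
 * per-vCPU flag. Names are illustrative, not Xen's real structures. */
struct domain {
    atomic_uint mapcache_gen;   /* bumped on every page removal */
};

struct vcpu {
    struct domain *d;
    unsigned int seen_gen;      /* last generation this vCPU invalidated for */
};

/* Called from the p2m removal paths; parallel removals on many vCPUs
 * coalesce into a single pending generation bump. */
static void note_page_removed(struct domain *d)
{
    atomic_fetch_add(&d->mapcache_gen, 1);
}

/* Called on the hvm_do_resume()-like path: returns true if this vCPU
 * still owes the emulator an invalidation request (which would be sent
 * at the point it catches up to the current generation). */
static bool needs_invalidate(struct vcpu *v)
{
    unsigned int cur = atomic_load(&v->d->mapcache_gen);

    if (v->seen_gen == cur)
        return false;
    v->seen_gen = cur;          /* the invalidation ioreq would go out here */
    return true;
}
```

Under this sketch a burst of decrease-reservations produces at most one invalidation per vCPU before re-entering guest context, rather than one per removal, while still guaranteeing no removal goes unsignaled.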