From: "Jan Beulich" <JBeulich@suse.com>
To: Chao Gao <chao.gao@intel.com>
Cc: Tim Deegan <tim@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wei.liu2@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Ian Jackson <Ian.Jackson@eu.citrix.com>,
	xen-devel@lists.xen.org, Paul Durrant <paul.durrant@citrix.com>
Subject: Re: [RFC Patch v4 2/8] ioreq: bump the number of IOREQ page to 4 pages
Date: Wed, 18 Apr 2018 02:19:49 -0600
Message-ID: <5AD6FFA502000078001BC458@prv1-mh.provo.novell.com>
In-Reply-To: <1512546614-9937-3-git-send-email-chao.gao@intel.com>

>>> On 06.12.17 at 08:50, <chao.gao@intel.com> wrote:
> One 4K-byte page contains at most 128 'ioreq_t' structures. In order to remove
> the vCPU-count constraint imposed by a single IOREQ page, bump the number of
> IOREQ pages to 4. With this patch, multiple pages can be used as IOREQ pages.
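
(For reference, the 128 per page follows from PAGE_SIZE / sizeof(ioreq_t),
i.e. 4096 / 32.)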

In case I didn't say so before - I'm opposed to simply changing the upper limit
here. Please make sure any number of vCPU-s can be supported by the new
code (it looks like it mostly does already, so it may be mainly the description
which needs changing). If we want to impose an upper limit, this shouldn't
affect the ioreq page handling code at all.

> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -64,14 +64,24 @@ static struct hvm_ioreq_server *get_ioreq_server(const struct domain *d,
>              continue; \
>          else
>  
> +/* Iterate over all ioreq pages */
> +#define FOR_EACH_IOREQ_PAGE(s, i, iorp) \
> +    for ( (i) = 0, iorp = s->ioreq; (i) < (s)->ioreq_page_nr; (i)++, iorp++ )

You're going too far with parenthesization here: just like iorp, i is required
to be a simple identifier anyway, so parenthesizing it buys nothing. OTOH you
parenthesize s only once (s->ioreq vs. (s)->ioreq_page_nr).
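
For illustration, a consistent form might look like this (just a sketch, not
necessarily the final shape):

    /* Iterate over all ioreq pages. */
    #define FOR_EACH_IOREQ_PAGE(s, i, iorp) \
        for ( i = 0, iorp = (s)->ioreq; i < (s)->ioreq_page_nr; i++, iorp++ )

i and iorp stay bare, since they have to be simple identifiers anyway, while s
is parenthesized at every use.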

> +static ioreq_t *get_ioreq_fallible(struct hvm_ioreq_server *s, struct vcpu *v)

What is "fallible"? And why such a separate wrapper anyway? But I guess a lot
of this will need to change anyway with Paul's recent changes. I'm therefore not
going to look in close detail at any of this.

>  static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
>  {
> -    hvm_unmap_ioreq_gfn(s, &s->ioreq);
> +    int i;

Taking this as an example: please use unsigned int in all cases where the value
can't go negative.
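
E.g. (a sketch; the final bufioreq line merely mirrors the existing code shape
and is an assumption about the surrounding context):

    static void hvm_ioreq_server_unmap_pages(struct hvm_ioreq_server *s)
    {
        unsigned int i;                 /* loop index; can't go negative */
        struct hvm_ioreq_page *iorp;

        FOR_EACH_IOREQ_PAGE(s, i, iorp)
            hvm_unmap_ioreq_gfn(s, iorp);
        hvm_unmap_ioreq_gfn(s, &s->bufioreq);
    }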

> @@ -688,8 +741,15 @@ static int hvm_ioreq_server_init(struct hvm_ioreq_server *s,
>      INIT_LIST_HEAD(&s->ioreq_vcpu_list);
>      spin_lock_init(&s->bufioreq_lock);
>  
> -    s->ioreq.gfn = INVALID_GFN;
> +    FOR_EACH_IOREQ_PAGE(s, i, iorp)
> +        iorp->gfn = INVALID_GFN;
>      s->bufioreq.gfn = INVALID_GFN;
> +    s->ioreq_page_nr = (d->max_vcpus + IOREQ_NUM_PER_PAGE - 1) /
> +                       IOREQ_NUM_PER_PAGE;

DIV_ROUND_UP() - please don't open-code things.
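
That is:

    s->ioreq_page_nr = DIV_ROUND_UP(d->max_vcpus, IOREQ_NUM_PER_PAGE);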

> --- a/xen/include/public/hvm/params.h
> +++ b/xen/include/public/hvm/params.h
> @@ -279,6 +279,12 @@
>  #define XEN_HVM_MCA_CAP_LMCE   (xen_mk_ullong(1) << 0)
>  #define XEN_HVM_MCA_CAP_MASK   XEN_HVM_MCA_CAP_LMCE
>  
> -#define HVM_NR_PARAMS 39
> +/*
> + * Number of pages that are reserved for default IOREQ server. The base PFN
> + * is set via HVM_PARAM_IOREQ_PFN.
> + */
> +#define HVM_PARAM_IOREQ_PAGES 39

Why is this needed? It can be derived from the vCPU count permitted for the
domain.
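
Anything needing the count can compute it itself, along the lines of (a sketch;
nr_ioreq_pages is a hypothetical helper, shown with the hypervisor's view of
the vCPU count):

    /* Hypothetical helper: number of ioreq pages needed for d's vCPUs. */
    static unsigned int nr_ioreq_pages(const struct domain *d)
    {
        return DIV_ROUND_UP(d->max_vcpus, IOREQ_NUM_PER_PAGE);
    }

An emulator would instead use the vCPU count it obtains from the toolstack.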

Jan



