From: Stefano Stabellini <stefano.stabellini@xilinx.com>
To: Julien Grall <julien.grall@arm.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <stefanos@xilinx.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jan Beulich <JBeulich@suse.com>,
	xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 1/6] xen: extend XEN_DOMCTL_memory_mapping to handle cacheability
Date: Wed, 17 Apr 2019 14:55:30 -0700	[thread overview]
Message-ID: <alpine.DEB.2.10.1904171441360.1370@sstabellini-ThinkPad-X260> (raw)
In-Reply-To: <28e234b7-7678-6064-37fb-7518756a3c12@arm.com>

On Wed, 17 Apr 2019, Julien Grall wrote:
> Hi,
> 
> On 4/17/19 10:12 PM, Stefano Stabellini wrote:
> > On Wed, 27 Feb 2019, Jan Beulich wrote:
> > > > > > On 27.02.19 at 00:07, <sstabellini@kernel.org> wrote:
> > > > --- a/xen/include/public/domctl.h
> > > > +++ b/xen/include/public/domctl.h
> > > > @@ -571,12 +571,14 @@ struct xen_domctl_bind_pt_irq {
> > > >   */
> > > >   #define DPCI_ADD_MAPPING         1
> > > >   #define DPCI_REMOVE_MAPPING      0
> > > > +#define CACHEABILITY_DEVMEM      0 /* device memory, the default */
> > > > +#define CACHEABILITY_MEMORY      1 /* normal memory */
> > > >   struct xen_domctl_memory_mapping {
> > > >       uint64_aligned_t first_gfn; /* first page (hvm guest phys page) in range */
> > > >       uint64_aligned_t first_mfn; /* first page (machine page) in range */
> > > >       uint64_aligned_t nr_mfns;   /* number of pages in range (>0) */
> > > >       uint32_t add_mapping;       /* add or remove mapping */
> > > > -    uint32_t padding;           /* padding for 64-bit aligned structure */
> > > > +    uint32_t cache_policy;      /* cacheability of the memory mapping */
> > > >   };
> > > 
> > > I don't think DEVMEM and MEMORY are anywhere near descriptive
> > > enough, nor - if we want such control anyway - flexible enough. I
> > > think what you want is to actually specify cacheability, allowing on
> > > x86 to e.g. map frame buffers or the like as WC. The attribute then
> > > would (obviously and necessarily) be architecture specific.
> > 
> > Yes, I agree with what you wrote, and also with what Julien wrote. Now
> > the question is: how do you both think this should look in more
> > detail?
> > 
> > - are you OK with using memory_policy instead of cache_policy, as
> >    Julien suggested, for the field name?
> > - are you OK with using #defines for the values?
> > - should the #defines for both x86 and Arm be defined here or in other
> >    headers?
> > - what values would you like to see for x86?
> > 
> > For Arm, the ones I care about are:
> > 
> > - p2m_mmio_direct_dev
> > - p2m_mmio_direct_c
> 
> First step first. What is the reason to have p2m_mmio_direct_c? Is it
> only for memory sharing between DomU and Dom0?

No, Dom0-DomU sharing is only a minor positive side effect. The Xilinx
MPSoC has two Cortex-R5 CPUs in addition to four Cortex-A53 CPUs on the
board. It is also possible to add further Cortex-M4 CPUs and MicroBlaze
CPUs in the FPGA fabric, so there could be dozens of independent
processors. Users need to exchange data between these heterogeneous
CPUs. They usually set up their own ring structures over shared memory,
or they use OpenAMP. Either way, they need to share a cacheable memory
region between them. The MPSoC is very flexible, and the memory region
can come from a multitude of sources: a portion of normal memory, or a
portion of a special memory area on the board. There are a couple of
special SRAM banks, 64KB or 256KB in size, that could be used for this.
PRAM can also easily be added in the fabric and used for the purpose.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel

Thread overview: 67+ messages
2019-02-26 23:06 [PATCH 0/6] iomem cacheability Stefano Stabellini
2019-02-26 23:07 ` [PATCH 1/6] xen: extend XEN_DOMCTL_memory_mapping to handle cacheability Stefano Stabellini
2019-02-26 23:18   ` Julien Grall
2019-04-20  0:02     ` Stefano Stabellini
2019-04-21 17:32       ` Julien Grall
2019-04-22 21:59         ` Stefano Stabellini
2019-04-24 10:42           ` Julien Grall
2019-02-27 10:34   ` Jan Beulich
2019-04-17 21:12     ` Stefano Stabellini
2019-04-17 21:25       ` Julien Grall
2019-04-17 21:55         ` Stefano Stabellini [this message]
2019-04-25 10:41       ` Jan Beulich
2019-04-25 22:31         ` Stefano Stabellini
2019-04-26  7:12           ` Jan Beulich
2019-02-27 19:28   ` Julien Grall
2019-04-19 23:20     ` Stefano Stabellini
2019-04-21 17:14       ` Julien Grall
2019-04-22 17:33         ` Stefano Stabellini
2019-04-22 17:42           ` Julien Grall
2019-02-27 21:02   ` Julien Grall
2019-02-26 23:07 ` [PATCH 2/6] libxc: xc_domain_memory_mapping, " Stefano Stabellini
2019-02-26 23:07 ` [PATCH 3/6] libxl/xl: add cacheability option to iomem Stefano Stabellini
2019-02-27 20:02   ` Julien Grall
2019-04-19 23:13     ` Stefano Stabellini
2019-02-26 23:07 ` [PATCH 4/6] xen/arm: keep track of reserved-memory regions Stefano Stabellini
2019-02-28 14:38   ` Julien Grall
2019-02-26 23:07 ` [PATCH 5/6] xen/arm: map reserved-memory regions as normal memory in dom0 Stefano Stabellini
2019-02-26 23:45   ` Julien Grall
2019-04-22 22:42     ` Stefano Stabellini
2019-04-23  8:09       ` Julien Grall
2019-04-23 17:32         ` Stefano Stabellini
2019-04-23 18:37           ` Julien Grall
2019-04-23 21:34             ` Stefano Stabellini
2019-02-26 23:07 ` [PATCH 6/6] xen/docs: how to map a page between dom0 and domU using iomem Stefano Stabellini
2019-03-03 17:20 ` [PATCH 0/6] iomem cacheability Amit Tomer
2019-03-05 21:22   ` Stefano Stabellini
2019-03-05 22:45     ` Julien Grall
2019-03-06 11:46       ` Amit Tomer
2019-03-06 22:42         ` Stefano Stabellini
2019-03-06 22:59           ` Julien Grall
2019-03-07  8:42             ` Amit Tomer
2019-03-07 10:04               ` Julien Grall
2019-03-07 21:24                 ` Stefano Stabellini
2019-03-08 10:10                   ` Amit Tomer
2019-03-08 16:37                     ` Julien Grall
2019-03-08 17:44                       ` Amit Tomer
2019-03-06 11:30     ` Amit Tomer
