* Help: How do I make a machine with 2 separate ARM SoC's?
@ 2022-05-26 22:09 Peter Delevoryas
  2022-05-30 16:53 ` Peter Maydell
  0 siblings, 1 reply; 7+ messages in thread
From: Peter Delevoryas @ 2022-05-26 22:09 UTC (permalink / raw)
  Cc: Cameron Esfahani via, qemu-arm, Cédric Le Goater, Peter Delevoryas

Hey QEMU developers,

Cedric mentioned here[1] that QEMU can support emulating a
more complete board, e.g. a machine with an AST2600 *and* an AST1030.

I read through the memory API docs[2] and it mostly makes sense to me,
but what I don’t understand is, what does system_memory represent?
Or, what should the layout be for a situation like I’m interested in,
where you have an AST2600 and an AST1030 (and actually, maybe even
an x86 CPU too? idk if that would be possible).

I need to make sure each SoC runs in a different address space, right?
But, how do I actually do that? Do I model it as two containers inside
the large system_memory container, or as two different containers
that get swapped in for system_memory when executing their associated
CPU?

I was having trouble figuring out what the Xilinx boards are actually
doing in this case. Does each CPU share peripherals, or are the
A + R cpu’s actually in separate address spaces? I’m very confused lol.

If anyone can provide suggestions, they would be greatly appreciated!

Thanks,
Peter

[1] https://lore.kernel.org/qemu-devel/2ab490a2-875d-ae82-38d0-425415f9818c@kaod.org/
[2] https://www.qemu.org/docs/master/devel/memory.html

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: Help: How do I make a machine with 2 separate ARM SoC's?
  2022-05-26 22:09 Help: How do I make a machine with 2 separate ARM SoC's? Peter Delevoryas
@ 2022-05-30 16:53 ` Peter Maydell
  2022-05-30 18:15   ` Cédric Le Goater
  2022-06-06 15:37   ` Cédric Le Goater
  0 siblings, 2 replies; 7+ messages in thread
From: Peter Maydell @ 2022-05-30 16:53 UTC (permalink / raw)
  To: Peter Delevoryas; +Cc: Cameron Esfahani via, qemu-arm, Cédric Le Goater

On Thu, 26 May 2022 at 23:14, Peter Delevoryas <pdel@fb.com> wrote:
> Hey QEMU developers,
>
> Cedric mentioned here[1] that QEMU can support emulating a
> more complete board, e.g. a machine with an AST2600 *and* an AST1030.

This is true, as long as all the CPUs are the same
architecture family, e.g. all Arm CPUs. (Mixing A- and
R- or A- and M-profile is OK, they just all have to be
available in the same qemu-system-whatever binary.)

> I read through the memory API docs[2] and it mostly makes sense to me,
> but what I don’t understand is, what does system_memory represent?

So, system_memory is something of a legacy from when QEMU was
much older. Before the MemoryRegion and AddressSpace APIs were
added to QEMU, everything that could initiate a memory transaction
(CPUs, DMA-capable devices, etc) always saw the same view of
memory. The functions to do memory accesses just operated on
that view implicitly. (We still have some of them, for instance
cpu_physical_memory_read() and cpu_physical_memory_write().) The
MemoryRegion/AddressSpace APIs are much more flexible and allow
different memory transaction initiators to see different views, as
real hardware does. But for backwards compatibility we still have
the old assumes-one-view APIs. The view those APIs use is the
"system memory". We also have some device models which have been
converted to use an AddressSpace to do their DMA operations, but
which assume they want to use address_space_memory (which is the AS
corresponding to the system_memory MR) instead of taking a
MemoryRegion as a QOM pointer and creating an AddressSpace for it.

In the modern view of the world, you can build up a system with
a set of MemoryRegions. Typically you can start with an empty
container, and the board code fills it with board-level devices,
then passes it to the SoC code, which fills it with SoC devices,
and passes it again to the CPU object, which creates an AddressSpace
so it can initiate transactions into it. By making that initial
"empty container" be the global system_memory MemoryRegion, this
makes the legacy APIs and devices that still use it basically work.
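
For illustration, the container-passing pattern described above might look
roughly like this (a non-compilable sketch against QEMU's internal memory
API; the function and region names are made up):

```c
/* Sketch only: assumes QEMU's internal headers and build tree. */
#include "qemu/osdep.h"
#include "exec/memory.h"

static void example_board_init(MemoryRegion *system_memory)
{
    MemoryRegion *soc_container = g_new(MemoryRegion, 1);
    AddressSpace *cpu_as = g_new(AddressSpace, 1);

    /* SoC code: fill a container with SoC devices, then map it
     * into the view the board handed down (here, system_memory). */
    memory_region_init(soc_container, NULL, "soc", UINT64_MAX);
    memory_region_add_subregion(system_memory, 0, soc_container);

    /* A transaction initiator (CPU, DMA-capable device) finally
     * turns the MemoryRegion view into an AddressSpace. */
    address_space_init(cpu_as, system_memory, "cpu-memory");
}
```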

> Or, what should the layout be for a situation like I’m interested in,
> where you have an AST2600 and an AST1030 (and actually, maybe even
> an x86 CPU too? idk if that would be possible).

Cross-architecture heterogeneous board models can't be done today:
the qemu-system-foo binaries compile-time build in some assumptions
about specifics of the guest architecture. (This is something it would
be nice to fix, but the amount of work is pretty big and hairy, and
thus far nobody's had a pressing enough need for it to try to tackle it.)

> I need to make sure each SoC runs in a different address space, right?
> But, how do I actually do that? Do I model it as two containers inside
> the large system_memory container, or as two different containers
> that get swapped in for system_memory when executing their associated
> CPU?

The best way to think about QEMU's AddressSpace type is that it is
the interface you use to initiate memory transactions. You create
one from a MemoryRegion. When SoC and board code is building up its
view of the world, what it is really creating and passing around is
a hierarchy of MemoryRegions. It's only when the SoC code hands a
MemoryRegion to a CPU or a DMA-capable device that that device says
"I will need to make transactions to this, let me create the
corresponding AddressSpace".
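
As a concrete (hypothetical) example of that last step, a DMA-capable
device model would take its view of memory as a QOM link and create its
own AddressSpace at realize time, instead of reaching for the global
address_space_memory; the type and property names below are invented:

```c
/* Sketch: device-private AddressSpace created from a MemoryRegion
 * wired up by SoC code (MyDMAState and MY_DMA are illustrative). */
typedef struct MyDMAState {
    SysBusDevice parent_obj;
    MemoryRegion *dma_mr;   /* set by the SoC via a QOM link property */
    AddressSpace dma_as;
} MyDMAState;

static void my_dma_realize(DeviceState *dev, Error **errp)
{
    MyDMAState *s = MY_DMA(dev);

    /* "I will need to make transactions to this, let me create
     * the corresponding AddressSpace." */
    address_space_init(&s->dma_as, s->dma_mr, "my-dma");
}
```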

> I was having trouble figuring out what the Xilinx boards are actually
> doing in this case. Does each CPU share peripherals, or are the
> A + R cpu’s actually in separate address spaces? I’m very confused lol.

xlnx-versal-virt is a virtual board, so ignore that one: it's
probably more confusing than helpful. The xlnx-zcu102 board
uses the xlnx-zynqmp SoC, and that SoC has both R and A profile
CPUs in it, but they both see basically the same view of the
world because they're in the same SoC.

Another device that does some moderately complicated things with
MemoryRegions is the hw/arm/armsse.c SoC, which has several CPUs
and has some per-CPU devices.

I think we have not thus far had a model of a board where different
CPUs see radically different things (only ones where they can see
minor differences), so you'll probably run into places where the
APIs are a bit clunky (and we can perhaps have a go at making
them a bit less so). What I would do is make the system_memory
container be used by whatever is the "main" application processor
SoC in your board. If the two SoCs really see absolutely different
worlds with no shared devices at all, then you'll want to create
a new empty container for the second SoC. If they do have some
board-level shared devices, then you'll want to do something a little
more complicated with aliases.
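
A sketch of the alias approach, assuming a single board-level shared
device (the addresses and names here are invented):

```c
/* Sketch: the second SoC gets its own empty root container; a device
 * mapped in the main SoC's world is aliased into the second's. */
MemoryRegion *soc1_mem = g_new(MemoryRegion, 1);
memory_region_init(soc1_mem, NULL, "soc1-memory", UINT64_MAX);

/* The shared device sits in the main SoC's (system_memory) map... */
memory_region_add_subregion(get_system_memory(), 0x1e780000, shared_mr);

/* ...and an alias makes the same device visible to the second SoC. */
MemoryRegion *alias = g_new(MemoryRegion, 1);
memory_region_init_alias(alias, NULL, "shared-dev-alias", shared_mr,
                         0, memory_region_size(shared_mr));
memory_region_add_subregion(soc1_mem, 0x1e780000, alias);
```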

If you find the SoC device models you're using hardcode use of
system_memory or address_space_memory you should treat those as
bugs to be fixed. Other loose ends (like monitor commands that
assume the system address space) can be ignored: having those
operate on the 'application processor' SoC is fine, I think.

Overall, this is definitely doable but will involve a fair
amount of slogging through territory where nobody has yet
broken a trail for you :-)

-- PMM



* Re: Help: How do I make a machine with 2 separate ARM SoC's?
  2022-05-30 16:53 ` Peter Maydell
@ 2022-05-30 18:15   ` Cédric Le Goater
  2022-05-30 19:18     ` Peter Delevoryas
  2022-06-06 15:37   ` Cédric Le Goater
  1 sibling, 1 reply; 7+ messages in thread
From: Cédric Le Goater @ 2022-05-30 18:15 UTC (permalink / raw)
  To: Peter Maydell, Peter Delevoryas; +Cc: Cameron Esfahani via, qemu-arm

On 5/30/22 18:53, Peter Maydell wrote:
> On Thu, 26 May 2022 at 23:14, Peter Delevoryas <pdel@fb.com> wrote:
>> Hey QEMU developers,
>>
>> Cedric mentioned here[1] that QEMU can support emulating a
>> more complete board, e.g. a machine with an AST2600 *and* an AST1030.
> 
> This is true, as long as all the CPUs are the same
> architecture family, e.g. all Arm CPUs. (Mixing A- and
> R- or A- and M-profile is OK, they just all have to be
> available in the same qemu-system-whatever binary.)
> 
>> I read through the memory API docs[2] and it mostly makes sense to me,
>> but what I don’t understand is, what does system_memory represent?
> 
> So, system_memory is something of a legacy from when QEMU was
> much older. Before the MemoryRegion and AddressSpace APIs were
> added to QEMU, everything that could initiate a memory transaction
> (CPUs, DMA-capable devices, etc) always saw the same view of
> memory. The functions to do memory accesses just operated on
> that view implicitly. (We still have some of them, for instance
> cpu_physical_memory_read() and cpu_physical_memory_write().) The
> MemoryRegion/AddressSpace APIs are much more flexible and allow
> different memory transaction initiators to see different views, as
> real hardware does. But for backwards compatibility we still have
> the old assumes-one-view APIs. The view those APIs use is the
> "system memory". We also have some device models which have been
> converted to use an AddressSpace to do their DMA operations, but
> which assume they want to use address_space_memory (which is the AS
> corresponding to the system_memory MR) instead of taking a
> MemoryRegion as a QOM pointer and creating an AddressSpace for it.
> 
> In the modern view of the world, you can build up a system with
> a set of MemoryRegions. Typically you can start with an empty
> container, and the board code fills it with board-level devices,
> then passes it to the SoC code, which fills it with SoC devices,
> and passes it again to the CPU object, which creates an AddressSpace
> so it can initiate transactions into it. By making that initial
> "empty container" be the global system_memory MemoryRegion, this
> makes the legacy APIs and devices that still use it basically work.
> 
>> Or, what should the layout be for a situation like I’m interested in,
>> where you have an AST2600 and an AST1030 (and actually, maybe even
>> an x86 CPU too? idk if that would be possible).
> 
> Cross-architecture heterogeneous board models can't be done today:
> the qemu-system-foo binaries compile-time build in some assumptions
> about specifics of the guest architecture. (This is something it would
> be nice to fix, but the amount of work is pretty big and hairy, and
> thus far nobody's had a pressing enough need for it to try to tackle it.)
> 
>> I need to make sure each SoC runs in a different address space, right?
>> But, how do I actually do that? Do I model it as two containers inside
>> the large system_memory container, or as two different containers
>> that get swapped in for system_memory when executing their associated
>> CPU?
> 
> The best way to think about QEMU's AddressSpace type is that it is
> the interface you use to initiate memory transactions. You create
> one from a MemoryRegion. When SoC and board code is building up its
> view of the world, what it is really creating and passing around is
> a hierarchy of MemoryRegions. It's only when the SoC code hands a
> MemoryRegion to a CPU or a DMA-capable device that that device says
> "I will need to make transactions to this, let me create the
> corresponding AddressSpace".
> 
>> I was having trouble figuring out what the Xilinx boards are actually
>> doing in this case. Does each CPU share peripherals, or are the
>> A + R cpu’s actually in separate address spaces? I’m very confused lol.
> 
> xlnx-versal-virt is a virtual board, so ignore that one: it's
> probably more confusing than helpful. The xlnx-zcu102 board
> uses the xlnx-zynqmp SoC, and that SoC has both R and A profile
> CPUs in it, but they both see basically the same view of the
> world because they're in the same SoC.
> 
> Another device that does some moderately complicated things with
> MemoryRegions is the hw/arm/armsse.c SoC, which has several CPUs
> and has some per-CPU devices.
>
> I think we have not thus far had a model of a board where different
> CPUs see radically different things (only ones where they can see
> minor differences), so you'll probably run into places where the
> APIs are a bit clunky (and we can perhaps have a go at making
> them a bit less so). What I would do is make the system_memory
> container be used by whatever is the "main" application processor
> SoC in your board. 

I think Peter D. wants to emulate a machine with a BMC board (ast2600)
and an SCP-like SoC (ast1030) running Zephyr. Correct me if I am wrong.
That's a first step.

> If the two SoCs really see absolutely different
> worlds with no shared devices at all, then you'll want to create
> a new empty container for the second SoC. 

yes.

> If they do have some
> board-level shared devices, then you'll want to do something a little
> more complicated with aliases.

The first device would be a shared I2C bus to communicate. I haven't
looked deeply into how complex it would be to plug a slave model of the
first SoC onto a bus of the second SoC.

> If you find the SoC device models you're using hardcode use of
> system_memory or address_space_memory you should treat those as
> bugs to be fixed. 

There are a few get_system_memory() calls left in the Aspeed SoC
(UART, SRAM) that could be fixed easily and upstreamed. The rest
should be clean enough.

> Other loose ends (like monitor commands that
> assume the system address space) can be ignored: having those
> operate on the 'application processor' SoC is fine, I think.
> 
> Overall, this is definitely doable but will involve a fair
> amount of slogging through territory where nobody has yet
> broken a trail for you :-)


I am around. We can start building the machine on a GH branch and
feed mainline with updates while it's getting ready.


Thanks for the feedback.

C.




* Re: Help: How do I make a machine with 2 separate ARM SoC's?
  2022-05-30 18:15   ` Cédric Le Goater
@ 2022-05-30 19:18     ` Peter Delevoryas
  0 siblings, 0 replies; 7+ messages in thread
From: Peter Delevoryas @ 2022-05-30 19:18 UTC (permalink / raw)
  Cc: Cédric Le Goater, Peter Delevoryas, Peter Maydell, qemu-arm,
	Cameron Esfahani via



> On May 30, 2022, at 11:15 AM, Cédric Le Goater <clg@kaod.org> wrote:
> 
> On 5/30/22 18:53, Peter Maydell wrote:
>> On Thu, 26 May 2022 at 23:14, Peter Delevoryas <pdel@fb.com> wrote:
>>> Hey QEMU developers,
>>> 
>>> Cedric mentioned here[1] that QEMU can support emulating a
>>> more complete board, e.g. a machine with an AST2600 *and* an AST1030.
>> This is true, as long as all the CPUs are the same
>> architecture family, e.g. all Arm CPUs. (Mixing A- and
>> R- or A- and M-profile is OK, they just all have to be
>> available in the same qemu-system-whatever binary.)
>>> I read through the memory API docs[2] and it mostly makes sense to me,
>>> but what I don’t understand is, what does system_memory represent?
>> So, system_memory is something of a legacy from when QEMU was
>> much older. Before the MemoryRegion and AddressSpace APIs were
>> added to QEMU, everything that could initiate a memory transaction
>> (CPUs, DMA-capable devices, etc) always saw the same view of
>> memory. The functions to do memory accesses just operated on
>> that view implicitly. (We still have some of them, for instance
>> cpu_physical_memory_read() and cpu_physical_memory_write().) The
>> MemoryRegion/AddressSpace APIs are much more flexible and allow
>> different memory transaction initiators to see different views, as
>> real hardware does. But for backwards compatibility we still have
>> the old assumes-one-view APIs. The view those APIs use is the
>> "system memory". We also have some device models which have been
>> converted to use an AddressSpace to do their DMA operations, but
>> which assume they want to use address_space_memory (which is the AS
>> corresponding to the system_memory MR) instead of taking a
>> MemoryRegion as a QOM pointer and creating an AddressSpace for it.
>> In the modern view of the world, you can build up a system with
>> a set of MemoryRegions. Typically you can start with an empty
>> container, and the board code fills it with board-level devices,
>> then passes it to the SoC code, which fills it with SoC devices,
>> and passes it again to the CPU object, which creates an AddressSpace
>> so it can initiate transactions into it. By making that initial
>> "empty container" be the global system_memory MemoryRegion, this
>> makes the legacy APIs and devices that still use it basically work.
>>> Or, what should the layout be for a situation like I’m interested in,
>>> where you have an AST2600 and an AST1030 (and actually, maybe even
>>> an x86 CPU too? idk if that would be possible).
>> Cross-architecture heterogeneous board models can't be done today:
>> the qemu-system-foo binaries compile-time build in some assumptions
>> about specifics of the guest architecture. (This is something it would
>> be nice to fix, but the amount of work is pretty big and hairy, and
>> thus far nobody's had a pressing enough need for it to try to tackle it.)
>>> I need to make sure each SoC runs in a different address space, right?
>>> But, how do I actually do that? Do I model it as two containers inside
>>> the large system_memory container, or as two different containers
>>> that get swapped in for system_memory when executing their associated
>>> CPU?
>> The best way to think about QEMU's AddressSpace type is that it is
>> the interface you use to initiate memory transactions. You create
>> one from a MemoryRegion. When SoC and board code is building up its
>> view of the world, what it is really creating and passing around is
>> a hierarchy of MemoryRegions. It's only when the SoC code hands a
>> MemoryRegion to a CPU or a DMA-capable device that that device says
>> "I will need to make transactions to this, let me create the
>> corresponding AddressSpace".
>>> I was having trouble figuring out what the Xilinx boards are actually
>>> doing in this case. Does each CPU share peripherals, or are the
>>> A + R cpu’s actually in separate address spaces? I’m very confused lol.
>> xlnx-versal-virt is a virtual board, so ignore that one: it's
>> probably more confusing than helpful. The xlnx-zcu102 board
>> uses the xlnx-zynqmp SoC, and that SoC has both R and A profile
>> CPUs in it, but they both see basically the same view of the
>> world because they're in the same SoC.

I see, I started to suspect as much.

>> Another device that does some moderately complicated things with
>> MemoryRegions is the hw/arm/armsse.c SoC, which has several CPUs
>> and has some per-CPU devices.
>> 
>> I think we have not thus far had a model of a board where different
>> CPUs see radically different things (only ones where they can see
>> minor differences), so you'll probably run into places where the
>> APIs are a bit clunky (and we can perhaps have a go at making
>> them a bit less so). What I would do is make the system_memory
>> container be used by whatever is the "main" application processor
>> SoC in your board. 
> 
> I think Peter D. wants to emulate a machine with a BMC board (ast2600)
> and an SCP-like SoC (ast1030) running Zephyr. Correct me if I am wrong.
> That's a first step.

That’s right, an AST2600 running Linux (OpenBMC) and an AST1030 running
Zephyr (OpenBIC).

> 
>> If the two SoCs really see absolutely different
>> worlds with no shared devices at all, then you'll want to create
>> a new empty container for the second SoC. 
> 
> yes.

I see, one container for each SoC then.

> 
>> If they do have some
>> board-level shared devices, then you'll want to do something a little
>> more complicated with aliases.
> 
> The first device would be a shared I2C bus to communicate. I haven't
> looked deeply into how complex it would be to plug a slave model of the
> first SoC onto a bus of the second SoC.

This is the most important thing for me: our board designs only
use I2C and I3C to connect the AST2600 and AST1030, so this is the
one and only thing I need the two to share.

Well, maybe there’s some GPIO’s to connect. But I2C is the biggest thing.

I think right now, when we construct the Aspeed SoC’s I2C bus
controllers, we just create a new I2CBus for each controller,
in realize(). My naive guess was to refactor each controller to receive
an I2CBus from an external source, so that we can construct a board
where 1 or 2 of the buses are shared between SoC’s.
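
That refactor could be sketched roughly like this (hypothetical wiring;
the real Aspeed controller code may differ in the details):

```c
/* Sketch: let the controller accept an externally created I2CBus,
 * creating a private one only as the default. */
static void aspeed_i2c_bus_realize(DeviceState *dev, Error **errp)
{
    AspeedI2CBus *s = ASPEED_I2C_BUS(dev);

    if (!s->bus) {
        /* Default: a private bus per controller, as today. */
        s->bus = i2c_init_bus(dev, "aspeed.i2c");
    }
    /* Otherwise keep the shared bus the board passed in, so
     * controllers from two SoCs can sit on the same I2CBus. */
}
```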

> 
>> If you find the SoC device models you're using hardcode use of
>> system_memory or address_space_memory you should treat those as
>> bugs to be fixed. 
> 
> There are a few get_system_memory() calls left in the Aspeed SoC
> (UART, SRAM) that could be fixed easily and upstreamed. The rest
> should be clean enough.

Ah I see, yes I was getting stuck figuring out if I needed to refactor
all of that or if I was doing something wrong. That makes sense now.

> 
>> Other loose ends (like monitor commands that
>> assume the system address space) can be ignored: having those
>> operate on the 'application processor' SoC is fine, I think.
>> Overall, this is definitely doable but will involve a fair
>> amount of slogging through territory where nobody has yet
>> broken a trail for you :-)
> 
> 
> I am around. We can start building the machine on a GH branch and
> feed mainline with updates while it's getting ready.
> 
> 
> Thanks for the feedback.

Yes thanks so much Peter for the very detailed and helpful answer,
that clears up so much to me, and thanks Cedric for responding too.

I’m very interested in experimenting more with this, I’ll try to
get something up and running and come back with patches to upstream.

It’s too bad we can’t do multiple architectures in the same board;
we also have boards that have I2C between an ARM and a riscv64 machine.

Since we mostly just use i2c between each processor, I made a generic
i2c-over-a-socket thing that kinda-sorta works, and we can connect
two separate QEMU instances (one ARM, one riscv64) that way, but
I’ve been struggling to get that working well with the AST2600 and
AST1030 board. I would also prefer to just run them both in
the same QEMU instance if possible.

Also, I’d like to test 1 AST2600 and 1 AST1030 at first, but my
primary use case is actually 1 AST2600 and 5 AST1030’s.

I’m also interested in making a board with 3 different AST2600’s too,
although that one includes USB between them I think, and I haven’t
even started looking into that.

Anyways, thanks again for the help!!!

Peter

> 
> C.



* Re: Help: How do I make a machine with 2 separate ARM SoC's?
  2022-05-30 16:53 ` Peter Maydell
  2022-05-30 18:15   ` Cédric Le Goater
@ 2022-06-06 15:37   ` Cédric Le Goater
  2022-06-06 17:02     ` Peter Maydell
  1 sibling, 1 reply; 7+ messages in thread
From: Cédric Le Goater @ 2022-06-06 15:37 UTC (permalink / raw)
  To: Peter Maydell, Peter Delevoryas; +Cc: Cameron Esfahani via, qemu-arm

Hello Peter M.,

[ ... ]

> Another device that does some moderately complicated things with
> MemoryRegions is the hw/arm/armsse.c SoC, which has several CPUs
> and has some per-CPU devices.
> 
> I think we have not thus far had a model of a board where different
> CPUs see radically different things (only ones where they can see
> minor differences), so you'll probably run into places where the
> APIs are a bit clunky (and we can perhaps have a go at making
> them a bit less so). What I would do is make the system_memory
> container be used by whatever is the "main" application processor
> SoC in your board. If the two SoCs really see absolutely different
> worlds with no shared devices at all, then you'll want to create
> a new empty container for the second SoC. If they do have some
> board-level shared devices, then you'll want to do something a little
> more complicated with aliases.
> 
> If you find the SoC device models you're using hardcode use of
> system_memory or address_space_memory you should treat those as
> bugs to be fixed. Other loose ends (like monitor commands that
> assume the system address space) can be ignored: having those
> operate on the 'application processor' SoC is fine, I think.

On the CPU topic, I think we will need to change the GIC device
to stop using qemu_get_cpu() in the CPU interface init routine
and in the GIC realize routine, since this is global to the machine.
I am having the same problem when trying to model a multi-SoC board
with a GIC device on each chip.

What would be a good approach to loop only on the CPUs related
to a GIC device ? Could we tag the CPUs and the GIC in some way
to filter the unrelated CPUs ? Or pass a CPU list to the GIC
device ?

Thanks,

C.

> Overall, this is definitely doable but will involve a fair
> amount of slogging through territory where nobody has yet
> broken a trail for you :-)
> 
> -- PMM




* Re: Help: How do I make a machine with 2 separate ARM SoC's?
  2022-06-06 15:37   ` Cédric Le Goater
@ 2022-06-06 17:02     ` Peter Maydell
  2022-06-07  6:48       ` Cédric Le Goater
  0 siblings, 1 reply; 7+ messages in thread
From: Peter Maydell @ 2022-06-06 17:02 UTC (permalink / raw)
  To: Cédric Le Goater; +Cc: Peter Delevoryas, Cameron Esfahani via, qemu-arm

On Mon, 6 Jun 2022 at 16:37, Cédric Le Goater <clg@kaod.org> wrote:
> On the CPU topic, I think we will need to change the GIC device
> to stop using qemu_get_cpu() in the CPU interface init routine
> and in the GIC realize routine, since this is global to the machine.
> I am having the same problem when trying to model a multi SoC board
> with a GIC device on each chip.
>
> What would be a good approach to loop only on the CPUs related
> to a GIC device ? Could we tag the CPUs and the GIC in some way
> to filter the unrelated CPUs ? Or pass a CPU list to the GIC
> device ?

GICv2 or GICv3 ?

Guessing GICv3, I think probably the right approach is to
have the GICv3 device have an array of QOM link properties,
and then the SoC or board code links up the CPUs to the
GIC device object.
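
That wiring might look roughly like the following (a sketch: the
property name "cpu[N]" and the struct fields are assumptions, not
the current GICv3 code):

```c
/* Sketch: GICv3 exposes one QOM link property per CPU interface... */
static void gicv3_init_cpu_links(GICv3State *s)
{
    for (int i = 0; i < s->num_cpu; i++) {
        char *name = g_strdup_printf("cpu[%d]", i);
        object_property_add_link(OBJECT(s), name, TYPE_CPU,
                                 (Object **)&s->cpu[i].cpu,
                                 object_property_allow_set_link,
                                 OBJ_PROP_LINK_STRONG);
        g_free(name);
    }
}

/* ...and SoC code links its own CPUs to its own GIC instance,
 * instead of the GIC calling qemu_get_cpu() machine-wide: */
object_property_set_link(OBJECT(gic), "cpu[0]",
                         OBJECT(soc->cpus[0]), &error_abort);
```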

thanks
-- PMM



* Re: Help: How do I make a machine with 2 separate ARM SoC's?
  2022-06-06 17:02     ` Peter Maydell
@ 2022-06-07  6:48       ` Cédric Le Goater
  0 siblings, 0 replies; 7+ messages in thread
From: Cédric Le Goater @ 2022-06-07  6:48 UTC (permalink / raw)
  To: Peter Maydell; +Cc: Peter Delevoryas, Cameron Esfahani via, qemu-arm

On 6/6/22 19:02, Peter Maydell wrote:
> On Mon, 6 Jun 2022 at 16:37, Cédric Le Goater <clg@kaod.org> wrote:
>> On the CPU topic, I think we will need to change the GIC device
>> to stop using qemu_get_cpu() in the CPU interface init routine
>> and in the GIC realize routine, since this is global to the machine.
>> I am having the same problem when trying to model a multi SoC board
>> with a GIC device on each chip.
>>
>> What would be a good approach to loop only on the CPUs related
>> to a GIC device ? Could we tag the CPUs and the GIC in some way
>> to filter the unrelated CPUs ? Or pass a CPU list to the GIC
>> device ?
> 
> GICv2 or GICv3 ?

v3 yes. sorry.

> Guessing GICv3, I think probably the right approach is to
> have the GICv3 device have an array of QOM link properties,
> and then the SoC or board code links up the CPUs to the
> GIC device object.

I will look at this.

Thanks,

C.
  



end of thread, other threads:[~2022-06-07  6:52 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-05-26 22:09 Help: How do I make a machine with 2 separate ARM SoC's? Peter Delevoryas
2022-05-30 16:53 ` Peter Maydell
2022-05-30 18:15   ` Cédric Le Goater
2022-05-30 19:18     ` Peter Delevoryas
2022-06-06 15:37   ` Cédric Le Goater
2022-06-06 17:02     ` Peter Maydell
2022-06-07  6:48       ` Cédric Le Goater
