* [PATCH 0/4] RISC-V multi-socket support
@ 2020-05-16 6:37 Anup Patel
2020-05-16 6:37 ` [PATCH 1/4] hw/riscv: Allow creating multiple instances of CLINT Anup Patel
` (5 more replies)
0 siblings, 6 replies; 19+ messages in thread
From: Anup Patel @ 2020-05-16 6:37 UTC (permalink / raw)
To: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar
Cc: Atish Patra, Anup Patel, qemu-riscv, qemu-devel, Anup Patel
This series adds multi-socket support for the RISC-V virt machine and
the RISC-V spike machine. Multi-socket support will help us improve
various RISC-V operating systems, firmware, and bootloaders to
support RISC-V NUMA systems.
These patches can be found in the riscv_multi_socket_v1 branch at:
https://github.com/avpatel/qemu.git
To try these patches, you will need:
1. OpenSBI multi-PLIC and multi-CLINT support, which can be found in the
multi_plic_clint_v1 branch at:
https://github.com/avpatel/opensbi.git
2. Linux multi-PLIC improvements, which can be found in the plic_imp_v1
branch at:
https://github.com/avpatel/linux.git
Anup Patel (4):
hw/riscv: Allow creating multiple instances of CLINT
hw/riscv: spike: Allow creating multiple sockets
hw/riscv: Allow creating multiple instances of PLIC
hw/riscv: virt: Allow creating multiple sockets
hw/riscv/sifive_clint.c | 20 +-
hw/riscv/sifive_e.c | 4 +-
hw/riscv/sifive_plic.c | 24 +-
hw/riscv/sifive_u.c | 4 +-
hw/riscv/spike.c | 210 ++++++++------
hw/riscv/virt.c | 495 ++++++++++++++++++--------------
include/hw/riscv/sifive_clint.h | 7 +-
include/hw/riscv/sifive_plic.h | 12 +-
include/hw/riscv/spike.h | 8 +-
include/hw/riscv/virt.h | 12 +-
10 files changed, 458 insertions(+), 338 deletions(-)
--
2.25.1
^ permalink raw reply [flat|nested] 19+ messages in thread
* [PATCH 1/4] hw/riscv: Allow creating multiple instances of CLINT
2020-05-16 6:37 [PATCH 0/4] RISC-V multi-socket support Anup Patel
@ 2020-05-16 6:37 ` Anup Patel
2020-05-19 21:21 ` Alistair Francis
2020-05-21 20:16 ` Palmer Dabbelt
2020-05-16 6:37 ` [PATCH 2/4] hw/riscv: spike: Allow creating multiple sockets Anup Patel
` (4 subsequent siblings)
5 siblings, 2 replies; 19+ messages in thread
From: Anup Patel @ 2020-05-16 6:37 UTC (permalink / raw)
To: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar
Cc: Atish Patra, Anup Patel, qemu-riscv, qemu-devel, Anup Patel
We extend the CLINT emulation to allow multiple CLINT instances in a
QEMU RISC-V machine. To achieve this, we remove the assumption that the
first HART id is zero from the CLINT emulation and add a configurable
"hartid-base" property instead.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
---
hw/riscv/sifive_clint.c | 20 ++++++++++++--------
hw/riscv/sifive_e.c | 2 +-
hw/riscv/sifive_u.c | 2 +-
hw/riscv/spike.c | 6 +++---
hw/riscv/virt.c | 2 +-
include/hw/riscv/sifive_clint.h | 7 ++++---
6 files changed, 22 insertions(+), 17 deletions(-)
diff --git a/hw/riscv/sifive_clint.c b/hw/riscv/sifive_clint.c
index e933d35092..7d713fd743 100644
--- a/hw/riscv/sifive_clint.c
+++ b/hw/riscv/sifive_clint.c
@@ -78,7 +78,7 @@ static uint64_t sifive_clint_read(void *opaque, hwaddr addr, unsigned size)
SiFiveCLINTState *clint = opaque;
if (addr >= clint->sip_base &&
addr < clint->sip_base + (clint->num_harts << 2)) {
- size_t hartid = (addr - clint->sip_base) >> 2;
+ size_t hartid = clint->hartid_base + ((addr - clint->sip_base) >> 2);
CPUState *cpu = qemu_get_cpu(hartid);
CPURISCVState *env = cpu ? cpu->env_ptr : NULL;
if (!env) {
@@ -91,7 +91,8 @@ static uint64_t sifive_clint_read(void *opaque, hwaddr addr, unsigned size)
}
} else if (addr >= clint->timecmp_base &&
addr < clint->timecmp_base + (clint->num_harts << 3)) {
- size_t hartid = (addr - clint->timecmp_base) >> 3;
+ size_t hartid = clint->hartid_base +
+ ((addr - clint->timecmp_base) >> 3);
CPUState *cpu = qemu_get_cpu(hartid);
CPURISCVState *env = cpu ? cpu->env_ptr : NULL;
if (!env) {
@@ -128,7 +129,7 @@ static void sifive_clint_write(void *opaque, hwaddr addr, uint64_t value,
if (addr >= clint->sip_base &&
addr < clint->sip_base + (clint->num_harts << 2)) {
- size_t hartid = (addr - clint->sip_base) >> 2;
+ size_t hartid = clint->hartid_base + ((addr - clint->sip_base) >> 2);
CPUState *cpu = qemu_get_cpu(hartid);
CPURISCVState *env = cpu ? cpu->env_ptr : NULL;
if (!env) {
@@ -141,7 +142,8 @@ static void sifive_clint_write(void *opaque, hwaddr addr, uint64_t value,
return;
} else if (addr >= clint->timecmp_base &&
addr < clint->timecmp_base + (clint->num_harts << 3)) {
- size_t hartid = (addr - clint->timecmp_base) >> 3;
+ size_t hartid = clint->hartid_base +
+ ((addr - clint->timecmp_base) >> 3);
CPUState *cpu = qemu_get_cpu(hartid);
CPURISCVState *env = cpu ? cpu->env_ptr : NULL;
if (!env) {
@@ -185,6 +187,7 @@ static const MemoryRegionOps sifive_clint_ops = {
};
static Property sifive_clint_properties[] = {
+ DEFINE_PROP_UINT32("hartid-base", SiFiveCLINTState, hartid_base, 0),
DEFINE_PROP_UINT32("num-harts", SiFiveCLINTState, num_harts, 0),
DEFINE_PROP_UINT32("sip-base", SiFiveCLINTState, sip_base, 0),
DEFINE_PROP_UINT32("timecmp-base", SiFiveCLINTState, timecmp_base, 0),
@@ -226,13 +229,13 @@ type_init(sifive_clint_register_types)
/*
* Create CLINT device.
*/
-DeviceState *sifive_clint_create(hwaddr addr, hwaddr size, uint32_t num_harts,
- uint32_t sip_base, uint32_t timecmp_base, uint32_t time_base,
- bool provide_rdtime)
+DeviceState *sifive_clint_create(hwaddr addr, hwaddr size,
+ uint32_t hartid_base, uint32_t num_harts, uint32_t sip_base,
+ uint32_t timecmp_base, uint32_t time_base, bool provide_rdtime)
{
int i;
for (i = 0; i < num_harts; i++) {
- CPUState *cpu = qemu_get_cpu(i);
+ CPUState *cpu = qemu_get_cpu(hartid_base + i);
CPURISCVState *env = cpu ? cpu->env_ptr : NULL;
if (!env) {
continue;
@@ -246,6 +249,7 @@ DeviceState *sifive_clint_create(hwaddr addr, hwaddr size, uint32_t num_harts,
}
DeviceState *dev = qdev_create(NULL, TYPE_SIFIVE_CLINT);
+ qdev_prop_set_uint32(dev, "hartid-base", hartid_base);
qdev_prop_set_uint32(dev, "num-harts", num_harts);
qdev_prop_set_uint32(dev, "sip-base", sip_base);
qdev_prop_set_uint32(dev, "timecmp-base", timecmp_base);
diff --git a/hw/riscv/sifive_e.c b/hw/riscv/sifive_e.c
index b53109521e..1c3b37d0ba 100644
--- a/hw/riscv/sifive_e.c
+++ b/hw/riscv/sifive_e.c
@@ -163,7 +163,7 @@ static void riscv_sifive_e_soc_realize(DeviceState *dev, Error **errp)
SIFIVE_E_PLIC_CONTEXT_STRIDE,
memmap[SIFIVE_E_PLIC].size);
sifive_clint_create(memmap[SIFIVE_E_CLINT].base,
- memmap[SIFIVE_E_CLINT].size, ms->smp.cpus,
+ memmap[SIFIVE_E_CLINT].size, 0, ms->smp.cpus,
SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE, false);
create_unimplemented_device("riscv.sifive.e.aon",
memmap[SIFIVE_E_AON].base, memmap[SIFIVE_E_AON].size);
diff --git a/hw/riscv/sifive_u.c b/hw/riscv/sifive_u.c
index bed10fcfa8..22997fbf13 100644
--- a/hw/riscv/sifive_u.c
+++ b/hw/riscv/sifive_u.c
@@ -601,7 +601,7 @@ static void riscv_sifive_u_soc_realize(DeviceState *dev, Error **errp)
sifive_uart_create(system_memory, memmap[SIFIVE_U_UART1].base,
serial_hd(1), qdev_get_gpio_in(DEVICE(s->plic), SIFIVE_U_UART1_IRQ));
sifive_clint_create(memmap[SIFIVE_U_CLINT].base,
- memmap[SIFIVE_U_CLINT].size, ms->smp.cpus,
+ memmap[SIFIVE_U_CLINT].size, 0, ms->smp.cpus,
SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE, false);
object_property_set_bool(OBJECT(&s->prci), true, "realized", &err);
diff --git a/hw/riscv/spike.c b/hw/riscv/spike.c
index d0c4843712..d5e0103d89 100644
--- a/hw/riscv/spike.c
+++ b/hw/riscv/spike.c
@@ -253,7 +253,7 @@ static void spike_board_init(MachineState *machine)
/* Core Local Interruptor (timer and IPI) */
sifive_clint_create(memmap[SPIKE_CLINT].base, memmap[SPIKE_CLINT].size,
- smp_cpus, SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE,
+ 0, smp_cpus, SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE,
false);
}
@@ -343,7 +343,7 @@ static void spike_v1_10_0_board_init(MachineState *machine)
/* Core Local Interruptor (timer and IPI) */
sifive_clint_create(memmap[SPIKE_CLINT].base, memmap[SPIKE_CLINT].size,
- smp_cpus, SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE,
+ 0, smp_cpus, SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE,
false);
}
@@ -452,7 +452,7 @@ static void spike_v1_09_1_board_init(MachineState *machine)
/* Core Local Interruptor (timer and IPI) */
sifive_clint_create(memmap[SPIKE_CLINT].base, memmap[SPIKE_CLINT].size,
- smp_cpus, SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE,
+ 0, smp_cpus, SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE,
false);
g_free(config_string);
diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index daae3ebdbb..dcb8a83b35 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -596,7 +596,7 @@ static void riscv_virt_board_init(MachineState *machine)
VIRT_PLIC_CONTEXT_STRIDE,
memmap[VIRT_PLIC].size);
sifive_clint_create(memmap[VIRT_CLINT].base,
- memmap[VIRT_CLINT].size, smp_cpus,
+ memmap[VIRT_CLINT].size, 0, smp_cpus,
SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE, true);
sifive_test_create(memmap[VIRT_TEST].base);
diff --git a/include/hw/riscv/sifive_clint.h b/include/hw/riscv/sifive_clint.h
index 4a720bfece..9f5fb3d31d 100644
--- a/include/hw/riscv/sifive_clint.h
+++ b/include/hw/riscv/sifive_clint.h
@@ -33,6 +33,7 @@ typedef struct SiFiveCLINTState {
/*< public >*/
MemoryRegion mmio;
+ uint32_t hartid_base;
uint32_t num_harts;
uint32_t sip_base;
uint32_t timecmp_base;
@@ -40,9 +41,9 @@ typedef struct SiFiveCLINTState {
uint32_t aperture_size;
} SiFiveCLINTState;
-DeviceState *sifive_clint_create(hwaddr addr, hwaddr size, uint32_t num_harts,
- uint32_t sip_base, uint32_t timecmp_base, uint32_t time_base,
- bool provide_rdtime);
+DeviceState *sifive_clint_create(hwaddr addr, hwaddr size,
+ uint32_t hartid_base, uint32_t num_harts, uint32_t sip_base,
+ uint32_t timecmp_base, uint32_t time_base, bool provide_rdtime);
enum {
SIFIVE_SIP_BASE = 0x0,
--
2.25.1
* [PATCH 2/4] hw/riscv: spike: Allow creating multiple sockets
2020-05-16 6:37 [PATCH 0/4] RISC-V multi-socket support Anup Patel
2020-05-16 6:37 ` [PATCH 1/4] hw/riscv: Allow creating multiple instances of CLINT Anup Patel
@ 2020-05-16 6:37 ` Anup Patel
2020-05-21 20:16 ` Palmer Dabbelt
2020-05-16 6:37 ` [PATCH 3/4] hw/riscv: Allow creating multiple instances of PLIC Anup Patel
` (3 subsequent siblings)
5 siblings, 1 reply; 19+ messages in thread
From: Anup Patel @ 2020-05-16 6:37 UTC (permalink / raw)
To: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar
Cc: Atish Patra, Anup Patel, qemu-riscv, qemu-devel, Anup Patel
We extend the RISC-V spike machine to allow creating a multi-socket
machine. Each RISC-V spike machine socket is a set of HARTs and a CLINT
instance. Other peripherals are shared between all RISC-V spike machine
sockets. We also update the RISC-V spike machine device tree to treat
each socket as a NUMA node.
The number of sockets in the RISC-V spike machine can be specified using
the "sockets=" sub-option of the QEMU "-smp" command-line option. By
default, a single-socket RISC-V spike machine will be created.
Currently, we only allow creating up to a maximum of 4 sockets with a
minimum of 2 HARTs per socket. These limits can be changed in the future.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
---
hw/riscv/spike.c | 206 ++++++++++++++++++++++++---------------
include/hw/riscv/spike.h | 8 +-
2 files changed, 133 insertions(+), 81 deletions(-)
diff --git a/hw/riscv/spike.c b/hw/riscv/spike.c
index d5e0103d89..f63c57a87c 100644
--- a/hw/riscv/spike.c
+++ b/hw/riscv/spike.c
@@ -64,9 +64,11 @@ static void create_fdt(SpikeState *s, const struct MemmapEntry *memmap,
uint64_t mem_size, const char *cmdline)
{
void *fdt;
- int cpu;
- uint32_t *cells;
- char *nodename;
+ int cpu, socket;
+ uint32_t *clint_cells;
+ unsigned long clint_addr;
+ uint32_t cpu_phandle, intc_phandle, phandle = 1;
+ char *name, *clint_name, *clust_name, *core_name, *cpu_name, *intc_name;
fdt = s->fdt = create_device_tree(&s->fdt_size);
if (!fdt) {
@@ -88,68 +90,85 @@ static void create_fdt(SpikeState *s, const struct MemmapEntry *memmap,
qemu_fdt_setprop_cell(fdt, "/soc", "#size-cells", 0x2);
qemu_fdt_setprop_cell(fdt, "/soc", "#address-cells", 0x2);
- nodename = g_strdup_printf("/memory@%lx",
- (long)memmap[SPIKE_DRAM].base);
- qemu_fdt_add_subnode(fdt, nodename);
- qemu_fdt_setprop_cells(fdt, nodename, "reg",
+ name = g_strdup_printf("/memory@%lx", (long)memmap[SPIKE_DRAM].base);
+ qemu_fdt_add_subnode(fdt, name);
+ qemu_fdt_setprop_cells(fdt, name, "reg",
memmap[SPIKE_DRAM].base >> 32, memmap[SPIKE_DRAM].base,
mem_size >> 32, mem_size);
- qemu_fdt_setprop_string(fdt, nodename, "device_type", "memory");
- g_free(nodename);
+ qemu_fdt_setprop_string(fdt, name, "device_type", "memory");
+ g_free(name);
qemu_fdt_add_subnode(fdt, "/cpus");
qemu_fdt_setprop_cell(fdt, "/cpus", "timebase-frequency",
SIFIVE_CLINT_TIMEBASE_FREQ);
qemu_fdt_setprop_cell(fdt, "/cpus", "#size-cells", 0x0);
qemu_fdt_setprop_cell(fdt, "/cpus", "#address-cells", 0x1);
+ qemu_fdt_add_subnode(fdt, "/cpus/cpu-map");
- for (cpu = s->soc.num_harts - 1; cpu >= 0; cpu--) {
- nodename = g_strdup_printf("/cpus/cpu@%d", cpu);
- char *intc = g_strdup_printf("/cpus/cpu@%d/interrupt-controller", cpu);
- char *isa = riscv_isa_string(&s->soc.harts[cpu]);
- qemu_fdt_add_subnode(fdt, nodename);
+ for (socket = (s->num_socs - 1); socket >= 0; socket--) {
+ clust_name = g_strdup_printf("/cpus/cpu-map/cluster0%d", socket);
+ qemu_fdt_add_subnode(fdt, clust_name);
+
+ clint_cells = g_new0(uint32_t, s->soc[socket].num_harts * 4);
+
+ for (cpu = s->soc[socket].num_harts - 1; cpu >= 0; cpu--) {
+ cpu_phandle = phandle++;
+
+ cpu_name = g_strdup_printf("/cpus/cpu@%d",
+ s->soc[socket].hartid_base + cpu);
+ qemu_fdt_add_subnode(fdt, cpu_name);
#if defined(TARGET_RISCV32)
- qemu_fdt_setprop_string(fdt, nodename, "mmu-type", "riscv,sv32");
+ qemu_fdt_setprop_string(fdt, cpu_name, "mmu-type", "riscv,sv32");
#else
- qemu_fdt_setprop_string(fdt, nodename, "mmu-type", "riscv,sv48");
+ qemu_fdt_setprop_string(fdt, cpu_name, "mmu-type", "riscv,sv48");
#endif
- qemu_fdt_setprop_string(fdt, nodename, "riscv,isa", isa);
- qemu_fdt_setprop_string(fdt, nodename, "compatible", "riscv");
- qemu_fdt_setprop_string(fdt, nodename, "status", "okay");
- qemu_fdt_setprop_cell(fdt, nodename, "reg", cpu);
- qemu_fdt_setprop_string(fdt, nodename, "device_type", "cpu");
- qemu_fdt_add_subnode(fdt, intc);
- qemu_fdt_setprop_cell(fdt, intc, "phandle", 1);
- qemu_fdt_setprop_string(fdt, intc, "compatible", "riscv,cpu-intc");
- qemu_fdt_setprop(fdt, intc, "interrupt-controller", NULL, 0);
- qemu_fdt_setprop_cell(fdt, intc, "#interrupt-cells", 1);
- g_free(isa);
- g_free(intc);
- g_free(nodename);
- }
+ name = riscv_isa_string(&s->soc[socket].harts[cpu]);
+ qemu_fdt_setprop_string(fdt, cpu_name, "riscv,isa", name);
+ g_free(name);
+ qemu_fdt_setprop_string(fdt, cpu_name, "compatible", "riscv");
+ qemu_fdt_setprop_string(fdt, cpu_name, "status", "okay");
+ qemu_fdt_setprop_cell(fdt, cpu_name, "reg",
+ s->soc[socket].hartid_base + cpu);
+ qemu_fdt_setprop_string(fdt, cpu_name, "device_type", "cpu");
+ qemu_fdt_setprop_cell(fdt, cpu_name, "phandle", cpu_phandle);
+
+ intc_name = g_strdup_printf("%s/interrupt-controller", cpu_name);
+ qemu_fdt_add_subnode(fdt, intc_name);
+ intc_phandle = phandle++;
+ qemu_fdt_setprop_cell(fdt, intc_name, "phandle", intc_phandle);
+ qemu_fdt_setprop_string(fdt, intc_name, "compatible",
+ "riscv,cpu-intc");
+ qemu_fdt_setprop(fdt, intc_name, "interrupt-controller", NULL, 0);
+ qemu_fdt_setprop_cell(fdt, intc_name, "#interrupt-cells", 1);
+
+ clint_cells[cpu * 4 + 0] = cpu_to_be32(intc_phandle);
+ clint_cells[cpu * 4 + 1] = cpu_to_be32(IRQ_M_SOFT);
+ clint_cells[cpu * 4 + 2] = cpu_to_be32(intc_phandle);
+ clint_cells[cpu * 4 + 3] = cpu_to_be32(IRQ_M_TIMER);
+
+ core_name = g_strdup_printf("%s/core%d", clust_name, cpu);
+ qemu_fdt_add_subnode(fdt, core_name);
+ qemu_fdt_setprop_cell(fdt, core_name, "cpu", cpu_phandle);
+
+ g_free(core_name);
+ g_free(intc_name);
+ g_free(cpu_name);
+ }
- cells = g_new0(uint32_t, s->soc.num_harts * 4);
- for (cpu = 0; cpu < s->soc.num_harts; cpu++) {
- nodename =
- g_strdup_printf("/cpus/cpu@%d/interrupt-controller", cpu);
- uint32_t intc_phandle = qemu_fdt_get_phandle(fdt, nodename);
- cells[cpu * 4 + 0] = cpu_to_be32(intc_phandle);
- cells[cpu * 4 + 1] = cpu_to_be32(IRQ_M_SOFT);
- cells[cpu * 4 + 2] = cpu_to_be32(intc_phandle);
- cells[cpu * 4 + 3] = cpu_to_be32(IRQ_M_TIMER);
- g_free(nodename);
+ clint_addr = memmap[SPIKE_CLINT].base +
+ (memmap[SPIKE_CLINT].size * socket);
+ clint_name = g_strdup_printf("/soc/clint@%lx", clint_addr);
+ qemu_fdt_add_subnode(fdt, clint_name);
+ qemu_fdt_setprop_string(fdt, clint_name, "compatible", "riscv,clint0");
+ qemu_fdt_setprop_cells(fdt, clint_name, "reg",
+ 0x0, clint_addr, 0x0, memmap[SPIKE_CLINT].size);
+ qemu_fdt_setprop(fdt, clint_name, "interrupts-extended",
+ clint_cells, s->soc[socket].num_harts * sizeof(uint32_t) * 4);
+
+ g_free(clint_name);
+ g_free(clint_cells);
+ g_free(clust_name);
}
- nodename = g_strdup_printf("/soc/clint@%lx",
- (long)memmap[SPIKE_CLINT].base);
- qemu_fdt_add_subnode(fdt, nodename);
- qemu_fdt_setprop_string(fdt, nodename, "compatible", "riscv,clint0");
- qemu_fdt_setprop_cells(fdt, nodename, "reg",
- 0x0, memmap[SPIKE_CLINT].base,
- 0x0, memmap[SPIKE_CLINT].size);
- qemu_fdt_setprop(fdt, nodename, "interrupts-extended",
- cells, s->soc.num_harts * sizeof(uint32_t) * 4);
- g_free(cells);
- g_free(nodename);
if (cmdline) {
qemu_fdt_add_subnode(fdt, "/chosen");
@@ -160,23 +179,51 @@ static void create_fdt(SpikeState *s, const struct MemmapEntry *memmap,
static void spike_board_init(MachineState *machine)
{
const struct MemmapEntry *memmap = spike_memmap;
-
SpikeState *s = g_new0(SpikeState, 1);
MemoryRegion *system_memory = get_system_memory();
MemoryRegion *main_mem = g_new(MemoryRegion, 1);
MemoryRegion *mask_rom = g_new(MemoryRegion, 1);
int i;
+ char *soc_name;
unsigned int smp_cpus = machine->smp.cpus;
-
- /* Initialize SOC */
- object_initialize_child(OBJECT(machine), "soc", &s->soc, sizeof(s->soc),
- TYPE_RISCV_HART_ARRAY, &error_abort, NULL);
- object_property_set_str(OBJECT(&s->soc), machine->cpu_type, "cpu-type",
- &error_abort);
- object_property_set_int(OBJECT(&s->soc), smp_cpus, "num-harts",
- &error_abort);
- object_property_set_bool(OBJECT(&s->soc), true, "realized",
- &error_abort);
+ unsigned int base_hartid, cpus_per_socket;
+
+ s->num_socs = machine->smp.sockets;
+
+ /* Ensure minimum required CPUs per socket */
+ if ((smp_cpus / s->num_socs) < SPIKE_CPUS_PER_SOCKET_MIN)
+ s->num_socs = 1;
+
+ /* Limit the number of sockets */
+ if (SPIKE_SOCKETS_MAX < s->num_socs)
+ s->num_socs = SPIKE_SOCKETS_MAX;
+
+ /* Initialize socket */
+ for (i = 0; i < s->num_socs; i++) {
+ base_hartid = i * (smp_cpus / s->num_socs);
+ if (i == (s->num_socs - 1))
+ cpus_per_socket = smp_cpus - base_hartid;
+ else
+ cpus_per_socket = smp_cpus / s->num_socs;
+ soc_name = g_strdup_printf("soc%d", i);
+ object_initialize_child(OBJECT(machine), soc_name, &s->soc[i],
+ sizeof(s->soc[i]), TYPE_RISCV_HART_ARRAY, &error_abort, NULL);
+ g_free(soc_name);
+ object_property_set_str(OBJECT(&s->soc[i]),
+ machine->cpu_type, "cpu-type", &error_abort);
+ object_property_set_int(OBJECT(&s->soc[i]),
+ base_hartid, "hartid-base", &error_abort);
+ object_property_set_int(OBJECT(&s->soc[i]),
+ cpus_per_socket, "num-harts", &error_abort);
+ object_property_set_bool(OBJECT(&s->soc[i]),
+ true, "realized", &error_abort);
+
+ /* Core Local Interruptor (timer and IPI) for each socket */
+ sifive_clint_create(
+ memmap[SPIKE_CLINT].base + i * memmap[SPIKE_CLINT].size,
+ memmap[SPIKE_CLINT].size, base_hartid, cpus_per_socket,
+ SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE, false);
+ }
/* register system main memory (actual RAM) */
memory_region_init_ram(main_mem, NULL, "riscv.spike.ram",
@@ -249,12 +296,7 @@ static void spike_board_init(MachineState *machine)
&address_space_memory);
/* initialize HTIF using symbols found in load_kernel */
- htif_mm_init(system_memory, mask_rom, &s->soc.harts[0].env, serial_hd(0));
-
- /* Core Local Interruptor (timer and IPI) */
- sifive_clint_create(memmap[SPIKE_CLINT].base, memmap[SPIKE_CLINT].size,
- 0, smp_cpus, SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE,
- false);
+ htif_mm_init(system_memory, mask_rom, &s->soc[0].harts[0].env, serial_hd(0));
}
static void spike_v1_10_0_board_init(MachineState *machine)
@@ -268,6 +310,8 @@ static void spike_v1_10_0_board_init(MachineState *machine)
int i;
unsigned int smp_cpus = machine->smp.cpus;
+ s->num_socs = 1;
+
if (!qtest_enabled()) {
info_report("The Spike v1.10.0 machine has been deprecated. "
"Please use the generic spike machine and specify the ISA "
@@ -275,13 +319,13 @@ static void spike_v1_10_0_board_init(MachineState *machine)
}
/* Initialize SOC */
- object_initialize_child(OBJECT(machine), "soc", &s->soc, sizeof(s->soc),
+ object_initialize_child(OBJECT(machine), "soc", &s->soc[0], sizeof(s->soc[0]),
TYPE_RISCV_HART_ARRAY, &error_abort, NULL);
- object_property_set_str(OBJECT(&s->soc), SPIKE_V1_10_0_CPU, "cpu-type",
+ object_property_set_str(OBJECT(&s->soc[0]), SPIKE_V1_10_0_CPU, "cpu-type",
&error_abort);
- object_property_set_int(OBJECT(&s->soc), smp_cpus, "num-harts",
+ object_property_set_int(OBJECT(&s->soc[0]), smp_cpus, "num-harts",
&error_abort);
- object_property_set_bool(OBJECT(&s->soc), true, "realized",
+ object_property_set_bool(OBJECT(&s->soc[0]), true, "realized",
&error_abort);
/* register system main memory (actual RAM) */
@@ -339,7 +383,7 @@ static void spike_v1_10_0_board_init(MachineState *machine)
&address_space_memory);
/* initialize HTIF using symbols found in load_kernel */
- htif_mm_init(system_memory, mask_rom, &s->soc.harts[0].env, serial_hd(0));
+ htif_mm_init(system_memory, mask_rom, &s->soc[0].harts[0].env, serial_hd(0));
/* Core Local Interruptor (timer and IPI) */
sifive_clint_create(memmap[SPIKE_CLINT].base, memmap[SPIKE_CLINT].size,
@@ -358,6 +402,8 @@ static void spike_v1_09_1_board_init(MachineState *machine)
int i;
unsigned int smp_cpus = machine->smp.cpus;
+ s->num_socs = 1;
+
if (!qtest_enabled()) {
info_report("The Spike v1.09.1 machine has been deprecated. "
"Please use the generic spike machine and specify the ISA "
@@ -365,13 +411,13 @@ static void spike_v1_09_1_board_init(MachineState *machine)
}
/* Initialize SOC */
- object_initialize_child(OBJECT(machine), "soc", &s->soc, sizeof(s->soc),
+ object_initialize_child(OBJECT(machine), "soc", &s->soc[0], sizeof(s->soc[0]),
TYPE_RISCV_HART_ARRAY, &error_abort, NULL);
- object_property_set_str(OBJECT(&s->soc), SPIKE_V1_09_1_CPU, "cpu-type",
+ object_property_set_str(OBJECT(&s->soc[0]), SPIKE_V1_09_1_CPU, "cpu-type",
&error_abort);
- object_property_set_int(OBJECT(&s->soc), smp_cpus, "num-harts",
+ object_property_set_int(OBJECT(&s->soc[0]), smp_cpus, "num-harts",
&error_abort);
- object_property_set_bool(OBJECT(&s->soc), true, "realized",
+ object_property_set_bool(OBJECT(&s->soc[0]), true, "realized",
&error_abort);
/* register system main memory (actual RAM) */
@@ -425,7 +471,7 @@ static void spike_v1_09_1_board_init(MachineState *machine)
"};\n";
/* build config string with supplied memory size */
- char *isa = riscv_isa_string(&s->soc.harts[0]);
+ char *isa = riscv_isa_string(&s->soc[0].harts[0]);
char *config_string = g_strdup_printf(config_string_tmpl,
(uint64_t)memmap[SPIKE_CLINT].base + SIFIVE_TIME_BASE,
(uint64_t)memmap[SPIKE_DRAM].base,
@@ -448,7 +494,7 @@ static void spike_v1_09_1_board_init(MachineState *machine)
&address_space_memory);
/* initialize HTIF using symbols found in load_kernel */
- htif_mm_init(system_memory, mask_rom, &s->soc.harts[0].env, serial_hd(0));
+ htif_mm_init(system_memory, mask_rom, &s->soc[0].harts[0].env, serial_hd(0));
/* Core Local Interruptor (timer and IPI) */
sifive_clint_create(memmap[SPIKE_CLINT].base, memmap[SPIKE_CLINT].size,
@@ -476,7 +522,7 @@ static void spike_machine_init(MachineClass *mc)
{
mc->desc = "RISC-V Spike Board";
mc->init = spike_board_init;
- mc->max_cpus = 8;
+ mc->max_cpus = SPIKE_CPUS_MAX;
mc->is_default = true;
mc->default_cpu_type = SPIKE_V1_10_0_CPU;
}
diff --git a/include/hw/riscv/spike.h b/include/hw/riscv/spike.h
index dc770421bc..04a9f593b5 100644
--- a/include/hw/riscv/spike.h
+++ b/include/hw/riscv/spike.h
@@ -22,12 +22,18 @@
#include "hw/riscv/riscv_hart.h"
#include "hw/sysbus.h"
+#define SPIKE_SOCKETS_MAX 4
+#define SPIKE_CPUS_PER_SOCKET_MIN 2
+#define SPIKE_CPUS_PER_SOCKET_MAX 4
+#define SPIKE_CPUS_MAX (SPIKE_SOCKETS_MAX * SPIKE_CPUS_PER_SOCKET_MAX)
+
typedef struct {
/*< private >*/
SysBusDevice parent_obj;
/*< public >*/
- RISCVHartArrayState soc;
+ unsigned int num_socs;
+ RISCVHartArrayState soc[SPIKE_SOCKETS_MAX];
void *fdt;
int fdt_size;
} SpikeState;
--
2.25.1
* [PATCH 3/4] hw/riscv: Allow creating multiple instances of PLIC
2020-05-16 6:37 [PATCH 0/4] RISC-V multi-socket support Anup Patel
2020-05-16 6:37 ` [PATCH 1/4] hw/riscv: Allow creating multiple instances of CLINT Anup Patel
2020-05-16 6:37 ` [PATCH 2/4] hw/riscv: spike: Allow creating multiple sockets Anup Patel
@ 2020-05-16 6:37 ` Anup Patel
2020-05-21 20:16 ` Palmer Dabbelt
2020-05-21 21:59 ` Alistair Francis
2020-05-16 6:37 ` [PATCH 4/4] hw/riscv: virt: Allow creating multiple sockets Anup Patel
` (2 subsequent siblings)
5 siblings, 2 replies; 19+ messages in thread
From: Anup Patel @ 2020-05-16 6:37 UTC (permalink / raw)
To: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar
Cc: Atish Patra, Anup Patel, qemu-riscv, qemu-devel, Anup Patel
We extend the PLIC emulation to allow multiple PLIC instances in a
QEMU RISC-V machine. To achieve this, we remove the assumption that the
first HART id is zero from the PLIC emulation and add a configurable
"hartid-base" property instead.
Signed-off-by: Anup Patel <anup.patel@wdc.com>
---
hw/riscv/sifive_e.c | 2 +-
hw/riscv/sifive_plic.c | 24 +++++++++++++-----------
hw/riscv/sifive_u.c | 2 +-
hw/riscv/virt.c | 2 +-
include/hw/riscv/sifive_plic.h | 12 +++++++-----
5 files changed, 23 insertions(+), 19 deletions(-)
diff --git a/hw/riscv/sifive_e.c b/hw/riscv/sifive_e.c
index 1c3b37d0ba..bd122e71ae 100644
--- a/hw/riscv/sifive_e.c
+++ b/hw/riscv/sifive_e.c
@@ -152,7 +152,7 @@ static void riscv_sifive_e_soc_realize(DeviceState *dev, Error **errp)
/* MMIO */
s->plic = sifive_plic_create(memmap[SIFIVE_E_PLIC].base,
- (char *)SIFIVE_E_PLIC_HART_CONFIG,
+ (char *)SIFIVE_E_PLIC_HART_CONFIG, 0,
SIFIVE_E_PLIC_NUM_SOURCES,
SIFIVE_E_PLIC_NUM_PRIORITIES,
SIFIVE_E_PLIC_PRIORITY_BASE,
diff --git a/hw/riscv/sifive_plic.c b/hw/riscv/sifive_plic.c
index c1e04cbb98..f88bb48053 100644
--- a/hw/riscv/sifive_plic.c
+++ b/hw/riscv/sifive_plic.c
@@ -352,6 +352,7 @@ static const MemoryRegionOps sifive_plic_ops = {
static Property sifive_plic_properties[] = {
DEFINE_PROP_STRING("hart-config", SiFivePLICState, hart_config),
+ DEFINE_PROP_UINT32("hartid-base", SiFivePLICState, hartid_base, 0),
DEFINE_PROP_UINT32("num-sources", SiFivePLICState, num_sources, 0),
DEFINE_PROP_UINT32("num-priorities", SiFivePLICState, num_priorities, 0),
DEFINE_PROP_UINT32("priority-base", SiFivePLICState, priority_base, 0),
@@ -400,10 +401,12 @@ static void parse_hart_config(SiFivePLICState *plic)
}
hartid++;
- /* store hart/mode combinations */
plic->num_addrs = addrid;
+ plic->num_harts = hartid;
+
+ /* store hart/mode combinations */
plic->addr_config = g_new(PLICAddr, plic->num_addrs);
- addrid = 0, hartid = 0;
+ addrid = 0, hartid = plic->hartid_base;
p = plic->hart_config;
while ((c = *p++)) {
if (c == ',') {
@@ -429,8 +432,6 @@ static void sifive_plic_irq_request(void *opaque, int irq, int level)
static void sifive_plic_realize(DeviceState *dev, Error **errp)
{
- MachineState *ms = MACHINE(qdev_get_machine());
- unsigned int smp_cpus = ms->smp.cpus;
SiFivePLICState *plic = SIFIVE_PLIC(dev);
int i;
@@ -451,8 +452,8 @@ static void sifive_plic_realize(DeviceState *dev, Error **errp)
* lost a interrupt in the case a PLIC is attached. The SEIP bit must be
* hardware controlled when a PLIC is attached.
*/
- for (i = 0; i < smp_cpus; i++) {
- RISCVCPU *cpu = RISCV_CPU(qemu_get_cpu(i));
+ for (i = 0; i < plic->num_harts; i++) {
+ RISCVCPU *cpu = RISCV_CPU(qemu_get_cpu(plic->hartid_base + i));
if (riscv_cpu_claim_interrupts(cpu, MIP_SEIP) < 0) {
error_report("SEIP already claimed");
exit(1);
@@ -488,16 +489,17 @@ type_init(sifive_plic_register_types)
* Create PLIC device.
*/
DeviceState *sifive_plic_create(hwaddr addr, char *hart_config,
- uint32_t num_sources, uint32_t num_priorities,
- uint32_t priority_base, uint32_t pending_base,
- uint32_t enable_base, uint32_t enable_stride,
- uint32_t context_base, uint32_t context_stride,
- uint32_t aperture_size)
+ uint32_t hartid_base, uint32_t num_sources,
+ uint32_t num_priorities, uint32_t priority_base,
+ uint32_t pending_base, uint32_t enable_base,
+ uint32_t enable_stride, uint32_t context_base,
+ uint32_t context_stride, uint32_t aperture_size)
{
DeviceState *dev = qdev_create(NULL, TYPE_SIFIVE_PLIC);
assert(enable_stride == (enable_stride & -enable_stride));
assert(context_stride == (context_stride & -context_stride));
qdev_prop_set_string(dev, "hart-config", hart_config);
+ qdev_prop_set_uint32(dev, "hartid-base", hartid_base);
qdev_prop_set_uint32(dev, "num-sources", num_sources);
qdev_prop_set_uint32(dev, "num-priorities", num_priorities);
qdev_prop_set_uint32(dev, "priority-base", priority_base);
diff --git a/hw/riscv/sifive_u.c b/hw/riscv/sifive_u.c
index 22997fbf13..69dbd7980b 100644
--- a/hw/riscv/sifive_u.c
+++ b/hw/riscv/sifive_u.c
@@ -585,7 +585,7 @@ static void riscv_sifive_u_soc_realize(DeviceState *dev, Error **errp)
/* MMIO */
s->plic = sifive_plic_create(memmap[SIFIVE_U_PLIC].base,
- plic_hart_config,
+ plic_hart_config, 0,
SIFIVE_U_PLIC_NUM_SOURCES,
SIFIVE_U_PLIC_NUM_PRIORITIES,
SIFIVE_U_PLIC_PRIORITY_BASE,
diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index dcb8a83b35..f40efcb193 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -585,7 +585,7 @@ static void riscv_virt_board_init(MachineState *machine)
/* MMIO */
s->plic = sifive_plic_create(memmap[VIRT_PLIC].base,
- plic_hart_config,
+ plic_hart_config, 0,
VIRT_PLIC_NUM_SOURCES,
VIRT_PLIC_NUM_PRIORITIES,
VIRT_PLIC_PRIORITY_BASE,
diff --git a/include/hw/riscv/sifive_plic.h b/include/hw/riscv/sifive_plic.h
index 4421e81249..ace76d0f1b 100644
--- a/include/hw/riscv/sifive_plic.h
+++ b/include/hw/riscv/sifive_plic.h
@@ -48,6 +48,7 @@ typedef struct SiFivePLICState {
/*< public >*/
MemoryRegion mmio;
uint32_t num_addrs;
+ uint32_t num_harts;
uint32_t bitfield_words;
PLICAddr *addr_config;
uint32_t *source_priority;
@@ -58,6 +59,7 @@ typedef struct SiFivePLICState {
/* config */
char *hart_config;
+ uint32_t hartid_base;
uint32_t num_sources;
uint32_t num_priorities;
uint32_t priority_base;
@@ -70,10 +72,10 @@ typedef struct SiFivePLICState {
} SiFivePLICState;
DeviceState *sifive_plic_create(hwaddr addr, char *hart_config,
- uint32_t num_sources, uint32_t num_priorities,
- uint32_t priority_base, uint32_t pending_base,
- uint32_t enable_base, uint32_t enable_stride,
- uint32_t context_base, uint32_t context_stride,
- uint32_t aperture_size);
+ uint32_t hartid_base, uint32_t num_sources,
+ uint32_t num_priorities, uint32_t priority_base,
+ uint32_t pending_base, uint32_t enable_base,
+ uint32_t enable_stride, uint32_t context_base,
+ uint32_t context_stride, uint32_t aperture_size);
#endif
--
2.25.1
* [PATCH 4/4] hw/riscv: virt: Allow creating multiple sockets
2020-05-16 6:37 [PATCH 0/4] RISC-V multi-socket support Anup Patel
` (2 preceding siblings ...)
2020-05-16 6:37 ` [PATCH 3/4] hw/riscv: Allow creating multiple instances of PLIC Anup Patel
@ 2020-05-16 6:37 ` Anup Patel
2020-05-21 20:16 ` Palmer Dabbelt
2020-05-16 11:58 ` [PATCH 0/4] RISC-V multi-socket support no-reply
2020-05-19 21:20 ` Alistair Francis
5 siblings, 1 reply; 19+ messages in thread
From: Anup Patel @ 2020-05-16 6:37 UTC (permalink / raw)
To: Peter Maydell, Palmer Dabbelt, Alistair Francis, Sagar Karandikar
Cc: Atish Patra, Anup Patel, qemu-riscv, qemu-devel, Anup Patel
We extend the RISC-V virt machine to allow creating a multi-socket machine.
Each RISC-V virt machine socket is a set of HARTs, a CLINT instance,
and a PLIC instance. Other peripherals are shared between all RISC-V
virt machine sockets. We also update the RISC-V virt machine device tree
to treat each socket as a NUMA node.
The number of sockets in the RISC-V virt machine can be specified using
the "sockets=" sub-option of the QEMU "-smp" command-line option. By
default, a single-socket RISC-V virt machine is created.
Currently, we only allow creating up to a maximum of 4 sockets with a
minimum of 2 HARTs per socket. These limits can be changed in the future.
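The socket clamping and hart-partitioning behaviour described above can be
sketched as a small Python helper (a hypothetical illustration, not part of
the patch; the constants mirror VIRT_SOCKETS_MAX and
VIRT_CPUS_PER_SOCKET_MIN, and the remainder-to-last-socket split mirrors
the realize loop in riscv_virt_board_init):

```python
VIRT_SOCKETS_MAX = 4
VIRT_CPUS_PER_SOCKET_MIN = 2

def partition_harts(smp_cpus, requested_sockets):
    """Clamp the socket count, then split harts across sockets.

    Returns a list of (base_hartid, num_harts) tuples, one per socket.
    The last socket absorbs any remainder harts.
    """
    num_socs = requested_sockets
    # Fall back to one socket if each socket would get too few harts.
    if smp_cpus // num_socs < VIRT_CPUS_PER_SOCKET_MIN:
        num_socs = 1
    # Cap the number of sockets.
    if num_socs > VIRT_SOCKETS_MAX:
        num_socs = VIRT_SOCKETS_MAX
    sockets = []
    for i in range(num_socs):
        base_hartid = i * (smp_cpus // num_socs)
        if i == num_socs - 1:
            num_harts = smp_cpus - base_hartid
        else:
            num_harts = smp_cpus // num_socs
        sockets.append((base_hartid, num_harts))
    return sockets
```

For example, under these assumed semantics "-smp 8,sockets=2" would yield
two sockets of four harts each, with hartid bases 0 and 4.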
Signed-off-by: Anup Patel <anup.patel@wdc.com>
---
hw/riscv/virt.c | 495 ++++++++++++++++++++++------------------
include/hw/riscv/virt.h | 12 +-
2 files changed, 283 insertions(+), 224 deletions(-)
diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
index f40efcb193..205224c01c 100644
--- a/hw/riscv/virt.c
+++ b/hw/riscv/virt.c
@@ -60,7 +60,7 @@ static const struct MemmapEntry {
[VIRT_TEST] = { 0x100000, 0x1000 },
[VIRT_RTC] = { 0x101000, 0x1000 },
[VIRT_CLINT] = { 0x2000000, 0x10000 },
- [VIRT_PLIC] = { 0xc000000, 0x4000000 },
+ [VIRT_PLIC] = { 0xc000000, VIRT_PLIC_SIZE(VIRT_CPUS_MAX*2) },
[VIRT_UART0] = { 0x10000000, 0x100 },
[VIRT_VIRTIO] = { 0x10001000, 0x1000 },
[VIRT_FLASH] = { 0x20000000, 0x4000000 },
@@ -183,10 +183,15 @@ static void create_fdt(RISCVVirtState *s, const struct MemmapEntry *memmap,
uint64_t mem_size, const char *cmdline)
{
void *fdt;
- int cpu, i;
- uint32_t *cells;
- char *nodename;
- uint32_t plic_phandle, test_phandle, phandle = 1;
+ int i, cpu, socket;
+ uint32_t *clint_cells, *plic_cells;
+ unsigned long clint_addr, plic_addr;
+ uint32_t plic_phandle[VIRT_SOCKETS_MAX];
+ uint32_t cpu_phandle, intc_phandle, test_phandle;
+ uint32_t phandle = 1, plic_mmio_phandle = 1;
+ uint32_t plic_pcie_phandle = 1, plic_virtio_phandle = 1;
+ char *name, *cpu_name, *core_name, *intc_name;
+ char *clint_name, *plic_name, *clust_name;
hwaddr flashsize = virt_memmap[VIRT_FLASH].size / 2;
hwaddr flashbase = virt_memmap[VIRT_FLASH].base;
@@ -207,231 +212,231 @@ static void create_fdt(RISCVVirtState *s, const struct MemmapEntry *memmap,
qemu_fdt_setprop_cell(fdt, "/soc", "#size-cells", 0x2);
qemu_fdt_setprop_cell(fdt, "/soc", "#address-cells", 0x2);
- nodename = g_strdup_printf("/memory@%lx",
+ name = g_strdup_printf("/memory@%lx",
(long)memmap[VIRT_DRAM].base);
- qemu_fdt_add_subnode(fdt, nodename);
- qemu_fdt_setprop_cells(fdt, nodename, "reg",
+ qemu_fdt_add_subnode(fdt, name);
+ qemu_fdt_setprop_cells(fdt, name, "reg",
memmap[VIRT_DRAM].base >> 32, memmap[VIRT_DRAM].base,
mem_size >> 32, mem_size);
- qemu_fdt_setprop_string(fdt, nodename, "device_type", "memory");
- g_free(nodename);
+ qemu_fdt_setprop_string(fdt, name, "device_type", "memory");
+ g_free(name);
qemu_fdt_add_subnode(fdt, "/cpus");
qemu_fdt_setprop_cell(fdt, "/cpus", "timebase-frequency",
SIFIVE_CLINT_TIMEBASE_FREQ);
qemu_fdt_setprop_cell(fdt, "/cpus", "#size-cells", 0x0);
qemu_fdt_setprop_cell(fdt, "/cpus", "#address-cells", 0x1);
+ qemu_fdt_add_subnode(fdt, "/cpus/cpu-map");
+
+ for (socket = (s->num_socs - 1); socket >= 0; socket--) {
+ clust_name = g_strdup_printf("/cpus/cpu-map/cluster0%d", socket);
+ qemu_fdt_add_subnode(fdt, clust_name);
+
+ plic_cells = g_new0(uint32_t, s->soc[socket].num_harts * 4);
+ clint_cells = g_new0(uint32_t, s->soc[socket].num_harts * 4);
+
+ for (cpu = s->soc[socket].num_harts - 1; cpu >= 0; cpu--) {
+ cpu_phandle = phandle++;
- for (cpu = s->soc.num_harts - 1; cpu >= 0; cpu--) {
- int cpu_phandle = phandle++;
- int intc_phandle;
- nodename = g_strdup_printf("/cpus/cpu@%d", cpu);
- char *intc = g_strdup_printf("/cpus/cpu@%d/interrupt-controller", cpu);
- char *isa = riscv_isa_string(&s->soc.harts[cpu]);
- qemu_fdt_add_subnode(fdt, nodename);
+ cpu_name = g_strdup_printf("/cpus/cpu@%d",
+ s->soc[socket].hartid_base + cpu);
+ qemu_fdt_add_subnode(fdt, cpu_name);
#if defined(TARGET_RISCV32)
- qemu_fdt_setprop_string(fdt, nodename, "mmu-type", "riscv,sv32");
+ qemu_fdt_setprop_string(fdt, cpu_name, "mmu-type", "riscv,sv32");
#else
- qemu_fdt_setprop_string(fdt, nodename, "mmu-type", "riscv,sv48");
+ qemu_fdt_setprop_string(fdt, cpu_name, "mmu-type", "riscv,sv48");
#endif
- qemu_fdt_setprop_string(fdt, nodename, "riscv,isa", isa);
- qemu_fdt_setprop_string(fdt, nodename, "compatible", "riscv");
- qemu_fdt_setprop_string(fdt, nodename, "status", "okay");
- qemu_fdt_setprop_cell(fdt, nodename, "reg", cpu);
- qemu_fdt_setprop_string(fdt, nodename, "device_type", "cpu");
- qemu_fdt_setprop_cell(fdt, nodename, "phandle", cpu_phandle);
- intc_phandle = phandle++;
- qemu_fdt_add_subnode(fdt, intc);
- qemu_fdt_setprop_cell(fdt, intc, "phandle", intc_phandle);
- qemu_fdt_setprop_string(fdt, intc, "compatible", "riscv,cpu-intc");
- qemu_fdt_setprop(fdt, intc, "interrupt-controller", NULL, 0);
- qemu_fdt_setprop_cell(fdt, intc, "#interrupt-cells", 1);
- g_free(isa);
- g_free(intc);
- g_free(nodename);
- }
+ name = riscv_isa_string(&s->soc[socket].harts[cpu]);
+ qemu_fdt_setprop_string(fdt, cpu_name, "riscv,isa", name);
+ g_free(name);
+ qemu_fdt_setprop_string(fdt, cpu_name, "compatible", "riscv");
+ qemu_fdt_setprop_string(fdt, cpu_name, "status", "okay");
+ qemu_fdt_setprop_cell(fdt, cpu_name, "reg",
+ s->soc[socket].hartid_base + cpu);
+ qemu_fdt_setprop_string(fdt, cpu_name, "device_type", "cpu");
+ qemu_fdt_setprop_cell(fdt, cpu_name, "phandle", cpu_phandle);
+
+ intc_name = g_strdup_printf("%s/interrupt-controller", cpu_name);
+ qemu_fdt_add_subnode(fdt, intc_name);
+ intc_phandle = phandle++;
+ qemu_fdt_setprop_cell(fdt, intc_name, "phandle", intc_phandle);
+ qemu_fdt_setprop_string(fdt, intc_name, "compatible",
+ "riscv,cpu-intc");
+ qemu_fdt_setprop(fdt, intc_name, "interrupt-controller", NULL, 0);
+ qemu_fdt_setprop_cell(fdt, intc_name, "#interrupt-cells", 1);
+
+ clint_cells[cpu * 4 + 0] = cpu_to_be32(intc_phandle);
+ clint_cells[cpu * 4 + 1] = cpu_to_be32(IRQ_M_SOFT);
+ clint_cells[cpu * 4 + 2] = cpu_to_be32(intc_phandle);
+ clint_cells[cpu * 4 + 3] = cpu_to_be32(IRQ_M_TIMER);
+
+ plic_cells[cpu * 4 + 0] = cpu_to_be32(intc_phandle);
+ plic_cells[cpu * 4 + 1] = cpu_to_be32(IRQ_M_EXT);
+ plic_cells[cpu * 4 + 2] = cpu_to_be32(intc_phandle);
+ plic_cells[cpu * 4 + 3] = cpu_to_be32(IRQ_S_EXT);
+
+ core_name = g_strdup_printf("%s/core%d", clust_name, cpu);
+ qemu_fdt_add_subnode(fdt, core_name);
+ qemu_fdt_setprop_cell(fdt, core_name, "cpu", cpu_phandle);
+
+ g_free(core_name);
+ g_free(intc_name);
+ g_free(cpu_name);
+ }
- /* Add cpu-topology node */
- qemu_fdt_add_subnode(fdt, "/cpus/cpu-map");
- qemu_fdt_add_subnode(fdt, "/cpus/cpu-map/cluster0");
- for (cpu = s->soc.num_harts - 1; cpu >= 0; cpu--) {
- char *core_nodename = g_strdup_printf("/cpus/cpu-map/cluster0/core%d",
- cpu);
- char *cpu_nodename = g_strdup_printf("/cpus/cpu@%d", cpu);
- uint32_t intc_phandle = qemu_fdt_get_phandle(fdt, cpu_nodename);
- qemu_fdt_add_subnode(fdt, core_nodename);
- qemu_fdt_setprop_cell(fdt, core_nodename, "cpu", intc_phandle);
- g_free(core_nodename);
- g_free(cpu_nodename);
+ clint_addr = memmap[VIRT_CLINT].base +
+ (memmap[VIRT_CLINT].size * socket);
+ clint_name = g_strdup_printf("/soc/clint@%lx", clint_addr);
+ qemu_fdt_add_subnode(fdt, clint_name);
+ qemu_fdt_setprop_string(fdt, clint_name, "compatible", "riscv,clint0");
+ qemu_fdt_setprop_cells(fdt, clint_name, "reg",
+ 0x0, clint_addr, 0x0, memmap[VIRT_CLINT].size);
+ qemu_fdt_setprop(fdt, clint_name, "interrupts-extended",
+ clint_cells, s->soc[socket].num_harts * sizeof(uint32_t) * 4);
+ g_free(clint_name);
+
+ plic_phandle[socket] = phandle++;
+ plic_addr = memmap[VIRT_PLIC].base + (memmap[VIRT_PLIC].size * socket);
+ plic_name = g_strdup_printf("/soc/plic@%lx", plic_addr);
+ qemu_fdt_add_subnode(fdt, plic_name);
+ qemu_fdt_setprop_cell(fdt, plic_name,
+ "#address-cells", FDT_PLIC_ADDR_CELLS);
+ qemu_fdt_setprop_cell(fdt, plic_name,
+ "#interrupt-cells", FDT_PLIC_INT_CELLS);
+ qemu_fdt_setprop_string(fdt, plic_name, "compatible", "riscv,plic0");
+ qemu_fdt_setprop(fdt, plic_name, "interrupt-controller", NULL, 0);
+ qemu_fdt_setprop(fdt, plic_name, "interrupts-extended",
+ plic_cells, s->soc[socket].num_harts * sizeof(uint32_t) * 4);
+ qemu_fdt_setprop_cells(fdt, plic_name, "reg",
+ 0x0, plic_addr, 0x0, memmap[VIRT_PLIC].size);
+ qemu_fdt_setprop_cell(fdt, plic_name, "riscv,ndev", VIRTIO_NDEV);
+ qemu_fdt_setprop_cell(fdt, plic_name, "phandle", plic_phandle[socket]);
+ g_free(plic_name);
+
+ g_free(clint_cells);
+ g_free(plic_cells);
+ g_free(clust_name);
}
- cells = g_new0(uint32_t, s->soc.num_harts * 4);
- for (cpu = 0; cpu < s->soc.num_harts; cpu++) {
- nodename =
- g_strdup_printf("/cpus/cpu@%d/interrupt-controller", cpu);
- uint32_t intc_phandle = qemu_fdt_get_phandle(fdt, nodename);
- cells[cpu * 4 + 0] = cpu_to_be32(intc_phandle);
- cells[cpu * 4 + 1] = cpu_to_be32(IRQ_M_SOFT);
- cells[cpu * 4 + 2] = cpu_to_be32(intc_phandle);
- cells[cpu * 4 + 3] = cpu_to_be32(IRQ_M_TIMER);
- g_free(nodename);
- }
- nodename = g_strdup_printf("/soc/clint@%lx",
- (long)memmap[VIRT_CLINT].base);
- qemu_fdt_add_subnode(fdt, nodename);
- qemu_fdt_setprop_string(fdt, nodename, "compatible", "riscv,clint0");
- qemu_fdt_setprop_cells(fdt, nodename, "reg",
- 0x0, memmap[VIRT_CLINT].base,
- 0x0, memmap[VIRT_CLINT].size);
- qemu_fdt_setprop(fdt, nodename, "interrupts-extended",
- cells, s->soc.num_harts * sizeof(uint32_t) * 4);
- g_free(cells);
- g_free(nodename);
-
- plic_phandle = phandle++;
- cells = g_new0(uint32_t, s->soc.num_harts * 4);
- for (cpu = 0; cpu < s->soc.num_harts; cpu++) {
- nodename =
- g_strdup_printf("/cpus/cpu@%d/interrupt-controller", cpu);
- uint32_t intc_phandle = qemu_fdt_get_phandle(fdt, nodename);
- cells[cpu * 4 + 0] = cpu_to_be32(intc_phandle);
- cells[cpu * 4 + 1] = cpu_to_be32(IRQ_M_EXT);
- cells[cpu * 4 + 2] = cpu_to_be32(intc_phandle);
- cells[cpu * 4 + 3] = cpu_to_be32(IRQ_S_EXT);
- g_free(nodename);
+ for (socket = 0; socket < s->num_socs; socket++) {
+ if (socket == 0) {
+ plic_mmio_phandle = plic_phandle[socket];
+ plic_virtio_phandle = plic_phandle[socket];
+ plic_pcie_phandle = plic_phandle[socket];
+ }
+ if (socket == 1) {
+ plic_virtio_phandle = plic_phandle[socket];
+ plic_pcie_phandle = plic_phandle[socket];
+ }
+ if (socket == 2) {
+ plic_pcie_phandle = plic_phandle[socket];
+ }
}
- nodename = g_strdup_printf("/soc/interrupt-controller@%lx",
- (long)memmap[VIRT_PLIC].base);
- qemu_fdt_add_subnode(fdt, nodename);
- qemu_fdt_setprop_cell(fdt, nodename, "#address-cells",
- FDT_PLIC_ADDR_CELLS);
- qemu_fdt_setprop_cell(fdt, nodename, "#interrupt-cells",
- FDT_PLIC_INT_CELLS);
- qemu_fdt_setprop_string(fdt, nodename, "compatible", "riscv,plic0");
- qemu_fdt_setprop(fdt, nodename, "interrupt-controller", NULL, 0);
- qemu_fdt_setprop(fdt, nodename, "interrupts-extended",
- cells, s->soc.num_harts * sizeof(uint32_t) * 4);
- qemu_fdt_setprop_cells(fdt, nodename, "reg",
- 0x0, memmap[VIRT_PLIC].base,
- 0x0, memmap[VIRT_PLIC].size);
- qemu_fdt_setprop_cell(fdt, nodename, "riscv,ndev", VIRTIO_NDEV);
- qemu_fdt_setprop_cell(fdt, nodename, "phandle", plic_phandle);
- plic_phandle = qemu_fdt_get_phandle(fdt, nodename);
- g_free(cells);
- g_free(nodename);
for (i = 0; i < VIRTIO_COUNT; i++) {
- nodename = g_strdup_printf("/virtio_mmio@%lx",
+ name = g_strdup_printf("/soc/virtio_mmio@%lx",
(long)(memmap[VIRT_VIRTIO].base + i * memmap[VIRT_VIRTIO].size));
- qemu_fdt_add_subnode(fdt, nodename);
- qemu_fdt_setprop_string(fdt, nodename, "compatible", "virtio,mmio");
- qemu_fdt_setprop_cells(fdt, nodename, "reg",
+ qemu_fdt_add_subnode(fdt, name);
+ qemu_fdt_setprop_string(fdt, name, "compatible", "virtio,mmio");
+ qemu_fdt_setprop_cells(fdt, name, "reg",
0x0, memmap[VIRT_VIRTIO].base + i * memmap[VIRT_VIRTIO].size,
0x0, memmap[VIRT_VIRTIO].size);
- qemu_fdt_setprop_cell(fdt, nodename, "interrupt-parent", plic_phandle);
- qemu_fdt_setprop_cell(fdt, nodename, "interrupts", VIRTIO_IRQ + i);
- g_free(nodename);
+ qemu_fdt_setprop_cell(fdt, name, "interrupt-parent", plic_virtio_phandle);
+ qemu_fdt_setprop_cell(fdt, name, "interrupts", VIRTIO_IRQ + i);
+ g_free(name);
}
- nodename = g_strdup_printf("/soc/pci@%lx",
+ name = g_strdup_printf("/soc/pci@%lx",
(long) memmap[VIRT_PCIE_ECAM].base);
- qemu_fdt_add_subnode(fdt, nodename);
- qemu_fdt_setprop_cell(fdt, nodename, "#address-cells",
- FDT_PCI_ADDR_CELLS);
- qemu_fdt_setprop_cell(fdt, nodename, "#interrupt-cells",
- FDT_PCI_INT_CELLS);
- qemu_fdt_setprop_cell(fdt, nodename, "#size-cells", 0x2);
- qemu_fdt_setprop_string(fdt, nodename, "compatible",
- "pci-host-ecam-generic");
- qemu_fdt_setprop_string(fdt, nodename, "device_type", "pci");
- qemu_fdt_setprop_cell(fdt, nodename, "linux,pci-domain", 0);
- qemu_fdt_setprop_cells(fdt, nodename, "bus-range", 0,
- memmap[VIRT_PCIE_ECAM].size /
- PCIE_MMCFG_SIZE_MIN - 1);
- qemu_fdt_setprop(fdt, nodename, "dma-coherent", NULL, 0);
- qemu_fdt_setprop_cells(fdt, nodename, "reg", 0, memmap[VIRT_PCIE_ECAM].base,
- 0, memmap[VIRT_PCIE_ECAM].size);
- qemu_fdt_setprop_sized_cells(fdt, nodename, "ranges",
+ qemu_fdt_add_subnode(fdt, name);
+ qemu_fdt_setprop_cell(fdt, name, "#address-cells", FDT_PCI_ADDR_CELLS);
+ qemu_fdt_setprop_cell(fdt, name, "#interrupt-cells", FDT_PCI_INT_CELLS);
+ qemu_fdt_setprop_cell(fdt, name, "#size-cells", 0x2);
+ qemu_fdt_setprop_string(fdt, name, "compatible", "pci-host-ecam-generic");
+ qemu_fdt_setprop_string(fdt, name, "device_type", "pci");
+ qemu_fdt_setprop_cell(fdt, name, "linux,pci-domain", 0);
+ qemu_fdt_setprop_cells(fdt, name, "bus-range", 0,
+ memmap[VIRT_PCIE_ECAM].size / PCIE_MMCFG_SIZE_MIN - 1);
+ qemu_fdt_setprop(fdt, name, "dma-coherent", NULL, 0);
+ qemu_fdt_setprop_cells(fdt, name, "reg", 0,
+ memmap[VIRT_PCIE_ECAM].base, 0, memmap[VIRT_PCIE_ECAM].size);
+ qemu_fdt_setprop_sized_cells(fdt, name, "ranges",
1, FDT_PCI_RANGE_IOPORT, 2, 0,
2, memmap[VIRT_PCIE_PIO].base, 2, memmap[VIRT_PCIE_PIO].size,
1, FDT_PCI_RANGE_MMIO,
2, memmap[VIRT_PCIE_MMIO].base,
2, memmap[VIRT_PCIE_MMIO].base, 2, memmap[VIRT_PCIE_MMIO].size);
- create_pcie_irq_map(fdt, nodename, plic_phandle);
- g_free(nodename);
+ create_pcie_irq_map(fdt, name, plic_pcie_phandle);
+ g_free(name);
test_phandle = phandle++;
- nodename = g_strdup_printf("/test@%lx",
+ name = g_strdup_printf("/soc/test@%lx",
(long)memmap[VIRT_TEST].base);
- qemu_fdt_add_subnode(fdt, nodename);
+ qemu_fdt_add_subnode(fdt, name);
{
const char compat[] = "sifive,test1\0sifive,test0\0syscon";
- qemu_fdt_setprop(fdt, nodename, "compatible", compat, sizeof(compat));
+ qemu_fdt_setprop(fdt, name, "compatible", compat, sizeof(compat));
}
- qemu_fdt_setprop_cells(fdt, nodename, "reg",
+ qemu_fdt_setprop_cells(fdt, name, "reg",
0x0, memmap[VIRT_TEST].base,
0x0, memmap[VIRT_TEST].size);
- qemu_fdt_setprop_cell(fdt, nodename, "phandle", test_phandle);
- test_phandle = qemu_fdt_get_phandle(fdt, nodename);
- g_free(nodename);
-
- nodename = g_strdup_printf("/reboot");
- qemu_fdt_add_subnode(fdt, nodename);
- qemu_fdt_setprop_string(fdt, nodename, "compatible", "syscon-reboot");
- qemu_fdt_setprop_cell(fdt, nodename, "regmap", test_phandle);
- qemu_fdt_setprop_cell(fdt, nodename, "offset", 0x0);
- qemu_fdt_setprop_cell(fdt, nodename, "value", FINISHER_RESET);
- g_free(nodename);
-
- nodename = g_strdup_printf("/poweroff");
- qemu_fdt_add_subnode(fdt, nodename);
- qemu_fdt_setprop_string(fdt, nodename, "compatible", "syscon-poweroff");
- qemu_fdt_setprop_cell(fdt, nodename, "regmap", test_phandle);
- qemu_fdt_setprop_cell(fdt, nodename, "offset", 0x0);
- qemu_fdt_setprop_cell(fdt, nodename, "value", FINISHER_PASS);
- g_free(nodename);
-
- nodename = g_strdup_printf("/uart@%lx",
- (long)memmap[VIRT_UART0].base);
- qemu_fdt_add_subnode(fdt, nodename);
- qemu_fdt_setprop_string(fdt, nodename, "compatible", "ns16550a");
- qemu_fdt_setprop_cells(fdt, nodename, "reg",
+ qemu_fdt_setprop_cell(fdt, name, "phandle", test_phandle);
+ test_phandle = qemu_fdt_get_phandle(fdt, name);
+ g_free(name);
+
+ name = g_strdup_printf("/soc/reboot");
+ qemu_fdt_add_subnode(fdt, name);
+ qemu_fdt_setprop_string(fdt, name, "compatible", "syscon-reboot");
+ qemu_fdt_setprop_cell(fdt, name, "regmap", test_phandle);
+ qemu_fdt_setprop_cell(fdt, name, "offset", 0x0);
+ qemu_fdt_setprop_cell(fdt, name, "value", FINISHER_RESET);
+ g_free(name);
+
+ name = g_strdup_printf("/soc/poweroff");
+ qemu_fdt_add_subnode(fdt, name);
+ qemu_fdt_setprop_string(fdt, name, "compatible", "syscon-poweroff");
+ qemu_fdt_setprop_cell(fdt, name, "regmap", test_phandle);
+ qemu_fdt_setprop_cell(fdt, name, "offset", 0x0);
+ qemu_fdt_setprop_cell(fdt, name, "value", FINISHER_PASS);
+ g_free(name);
+
+ name = g_strdup_printf("/soc/uart@%lx", (long)memmap[VIRT_UART0].base);
+ qemu_fdt_add_subnode(fdt, name);
+ qemu_fdt_setprop_string(fdt, name, "compatible", "ns16550a");
+ qemu_fdt_setprop_cells(fdt, name, "reg",
0x0, memmap[VIRT_UART0].base,
0x0, memmap[VIRT_UART0].size);
- qemu_fdt_setprop_cell(fdt, nodename, "clock-frequency", 3686400);
- qemu_fdt_setprop_cell(fdt, nodename, "interrupt-parent", plic_phandle);
- qemu_fdt_setprop_cell(fdt, nodename, "interrupts", UART0_IRQ);
+ qemu_fdt_setprop_cell(fdt, name, "clock-frequency", 3686400);
+ qemu_fdt_setprop_cell(fdt, name, "interrupt-parent", plic_mmio_phandle);
+ qemu_fdt_setprop_cell(fdt, name, "interrupts", UART0_IRQ);
qemu_fdt_add_subnode(fdt, "/chosen");
- qemu_fdt_setprop_string(fdt, "/chosen", "stdout-path", nodename);
+ qemu_fdt_setprop_string(fdt, "/chosen", "stdout-path", name);
if (cmdline) {
qemu_fdt_setprop_string(fdt, "/chosen", "bootargs", cmdline);
}
- g_free(nodename);
-
- nodename = g_strdup_printf("/rtc@%lx",
- (long)memmap[VIRT_RTC].base);
- qemu_fdt_add_subnode(fdt, nodename);
- qemu_fdt_setprop_string(fdt, nodename, "compatible",
- "google,goldfish-rtc");
- qemu_fdt_setprop_cells(fdt, nodename, "reg",
+ g_free(name);
+
+ name = g_strdup_printf("/soc/rtc@%lx", (long)memmap[VIRT_RTC].base);
+ qemu_fdt_add_subnode(fdt, name);
+ qemu_fdt_setprop_string(fdt, name, "compatible", "google,goldfish-rtc");
+ qemu_fdt_setprop_cells(fdt, name, "reg",
0x0, memmap[VIRT_RTC].base,
0x0, memmap[VIRT_RTC].size);
- qemu_fdt_setprop_cell(fdt, nodename, "interrupt-parent", plic_phandle);
- qemu_fdt_setprop_cell(fdt, nodename, "interrupts", RTC_IRQ);
- g_free(nodename);
-
- nodename = g_strdup_printf("/flash@%" PRIx64, flashbase);
- qemu_fdt_add_subnode(s->fdt, nodename);
- qemu_fdt_setprop_string(s->fdt, nodename, "compatible", "cfi-flash");
- qemu_fdt_setprop_sized_cells(s->fdt, nodename, "reg",
+ qemu_fdt_setprop_cell(fdt, name, "interrupt-parent", plic_mmio_phandle);
+ qemu_fdt_setprop_cell(fdt, name, "interrupts", RTC_IRQ);
+ g_free(name);
+
+ name = g_strdup_printf("/soc/flash@%" PRIx64, flashbase);
+ qemu_fdt_add_subnode(s->fdt, name);
+ qemu_fdt_setprop_string(s->fdt, name, "compatible", "cfi-flash");
+ qemu_fdt_setprop_sized_cells(s->fdt, name, "reg",
2, flashbase, 2, flashsize,
2, flashbase + flashsize, 2, flashsize);
- qemu_fdt_setprop_cell(s->fdt, nodename, "bank-width", 4);
- g_free(nodename);
+ qemu_fdt_setprop_cell(s->fdt, name, "bank-width", 4);
+ g_free(name);
}
-
static inline DeviceState *gpex_pcie_init(MemoryRegion *sys_mem,
hwaddr ecam_base, hwaddr ecam_size,
hwaddr mmio_base, hwaddr mmio_size,
@@ -479,21 +484,93 @@ static void riscv_virt_board_init(MachineState *machine)
MemoryRegion *system_memory = get_system_memory();
MemoryRegion *main_mem = g_new(MemoryRegion, 1);
MemoryRegion *mask_rom = g_new(MemoryRegion, 1);
- char *plic_hart_config;
+ char *plic_hart_config, *soc_name;
size_t plic_hart_config_len;
target_ulong start_addr = memmap[VIRT_DRAM].base;
- int i;
+ int i, j;
unsigned int smp_cpus = machine->smp.cpus;
+ unsigned int base_hartid, cpus_per_socket;
+ DeviceState *mmio_plic, *virtio_plic, *pcie_plic;
+
+ s->num_socs = machine->smp.sockets;
+
+ /* Ensure minimum required CPUs per socket */
+ if ((smp_cpus / s->num_socs) < VIRT_CPUS_PER_SOCKET_MIN)
+ s->num_socs = 1;
+
+ /* Limit the number of sockets */
+ if (VIRT_SOCKETS_MAX < s->num_socs)
+ s->num_socs = VIRT_SOCKETS_MAX;
/* Initialize SOC */
- object_initialize_child(OBJECT(machine), "soc", &s->soc, sizeof(s->soc),
- TYPE_RISCV_HART_ARRAY, &error_abort, NULL);
- object_property_set_str(OBJECT(&s->soc), machine->cpu_type, "cpu-type",
- &error_abort);
- object_property_set_int(OBJECT(&s->soc), smp_cpus, "num-harts",
- &error_abort);
- object_property_set_bool(OBJECT(&s->soc), true, "realized",
- &error_abort);
+ mmio_plic = virtio_plic = pcie_plic = NULL;
+ for (i = 0; i < s->num_socs; i++) {
+ base_hartid = i * (smp_cpus / s->num_socs);
+ if (i == (s->num_socs - 1))
+ cpus_per_socket = smp_cpus - base_hartid;
+ else
+ cpus_per_socket = smp_cpus / s->num_socs;
+ soc_name = g_strdup_printf("soc%d", i);
+ object_initialize_child(OBJECT(machine), soc_name, &s->soc[i],
+ sizeof(s->soc[i]), TYPE_RISCV_HART_ARRAY, &error_abort, NULL);
+ g_free(soc_name);
+ object_property_set_str(OBJECT(&s->soc[i]),
+ machine->cpu_type, "cpu-type", &error_abort);
+ object_property_set_int(OBJECT(&s->soc[i]),
+ base_hartid, "hartid-base", &error_abort);
+ object_property_set_int(OBJECT(&s->soc[i]),
+ cpus_per_socket, "num-harts", &error_abort);
+ object_property_set_bool(OBJECT(&s->soc[i]),
+ true, "realized", &error_abort);
+
+ /* Per-socket CLINT */
+ sifive_clint_create(
+ memmap[VIRT_CLINT].base + i * memmap[VIRT_CLINT].size,
+ memmap[VIRT_CLINT].size, base_hartid, cpus_per_socket,
+ SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE, true);
+
+ /* Per-socket PLIC hart topology configuration string */
+ plic_hart_config_len =
+ (strlen(VIRT_PLIC_HART_CONFIG) + 1) * cpus_per_socket;
+ plic_hart_config = g_malloc0(plic_hart_config_len);
+ for (j = 0; j < cpus_per_socket; j++) {
+ if (j != 0) {
+ strncat(plic_hart_config, ",", plic_hart_config_len);
+ }
+ strncat(plic_hart_config, VIRT_PLIC_HART_CONFIG,
+ plic_hart_config_len);
+ plic_hart_config_len -= (strlen(VIRT_PLIC_HART_CONFIG) + 1);
+ }
+
+ /* Per-socket PLIC */
+ s->plic[i] = sifive_plic_create(
+ memmap[VIRT_PLIC].base + i * memmap[VIRT_PLIC].size,
+ plic_hart_config, base_hartid,
+ VIRT_PLIC_NUM_SOURCES,
+ VIRT_PLIC_NUM_PRIORITIES,
+ VIRT_PLIC_PRIORITY_BASE,
+ VIRT_PLIC_PENDING_BASE,
+ VIRT_PLIC_ENABLE_BASE,
+ VIRT_PLIC_ENABLE_STRIDE,
+ VIRT_PLIC_CONTEXT_BASE,
+ VIRT_PLIC_CONTEXT_STRIDE,
+ memmap[VIRT_PLIC].size);
+ g_free(plic_hart_config);
+
+ /* Try to use different PLIC instance based device type */
+ if (i == 0) {
+ mmio_plic = s->plic[i];
+ virtio_plic = s->plic[i];
+ pcie_plic = s->plic[i];
+ }
+ if (i == 1) {
+ virtio_plic = s->plic[i];
+ pcie_plic = s->plic[i];
+ }
+ if (i == 2) {
+ pcie_plic = s->plic[i];
+ }
+ }
/* register system main memory (actual RAM) */
memory_region_init_ram(main_mem, NULL, "riscv_virt_board.ram",
@@ -572,38 +649,14 @@ static void riscv_virt_board_init(MachineState *machine)
memmap[VIRT_MROM].base + sizeof(reset_vec),
&address_space_memory);
- /* create PLIC hart topology configuration string */
- plic_hart_config_len = (strlen(VIRT_PLIC_HART_CONFIG) + 1) * smp_cpus;
- plic_hart_config = g_malloc0(plic_hart_config_len);
- for (i = 0; i < smp_cpus; i++) {
- if (i != 0) {
- strncat(plic_hart_config, ",", plic_hart_config_len);
- }
- strncat(plic_hart_config, VIRT_PLIC_HART_CONFIG, plic_hart_config_len);
- plic_hart_config_len -= (strlen(VIRT_PLIC_HART_CONFIG) + 1);
- }
-
- /* MMIO */
- s->plic = sifive_plic_create(memmap[VIRT_PLIC].base,
- plic_hart_config, 0,
- VIRT_PLIC_NUM_SOURCES,
- VIRT_PLIC_NUM_PRIORITIES,
- VIRT_PLIC_PRIORITY_BASE,
- VIRT_PLIC_PENDING_BASE,
- VIRT_PLIC_ENABLE_BASE,
- VIRT_PLIC_ENABLE_STRIDE,
- VIRT_PLIC_CONTEXT_BASE,
- VIRT_PLIC_CONTEXT_STRIDE,
- memmap[VIRT_PLIC].size);
- sifive_clint_create(memmap[VIRT_CLINT].base,
- memmap[VIRT_CLINT].size, 0, smp_cpus,
- SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE, true);
+ /* SiFive Test MMIO device */
sifive_test_create(memmap[VIRT_TEST].base);
+ /* VirtIO MMIO devices */
for (i = 0; i < VIRTIO_COUNT; i++) {
sysbus_create_simple("virtio-mmio",
memmap[VIRT_VIRTIO].base + i * memmap[VIRT_VIRTIO].size,
- qdev_get_gpio_in(DEVICE(s->plic), VIRTIO_IRQ + i));
+ qdev_get_gpio_in(DEVICE(virtio_plic), VIRTIO_IRQ + i));
}
gpex_pcie_init(system_memory,
@@ -612,14 +665,14 @@ static void riscv_virt_board_init(MachineState *machine)
memmap[VIRT_PCIE_MMIO].base,
memmap[VIRT_PCIE_MMIO].size,
memmap[VIRT_PCIE_PIO].base,
- DEVICE(s->plic), true);
+ DEVICE(pcie_plic), true);
serial_mm_init(system_memory, memmap[VIRT_UART0].base,
- 0, qdev_get_gpio_in(DEVICE(s->plic), UART0_IRQ), 399193,
+ 0, qdev_get_gpio_in(DEVICE(mmio_plic), UART0_IRQ), 399193,
serial_hd(0), DEVICE_LITTLE_ENDIAN);
sysbus_create_simple("goldfish_rtc", memmap[VIRT_RTC].base,
- qdev_get_gpio_in(DEVICE(s->plic), RTC_IRQ));
+ qdev_get_gpio_in(DEVICE(mmio_plic), RTC_IRQ));
virt_flash_create(s);
@@ -629,8 +682,6 @@ static void riscv_virt_board_init(MachineState *machine)
drive_get(IF_PFLASH, 0, i));
}
virt_flash_map(s, system_memory);
-
- g_free(plic_hart_config);
}
static void riscv_virt_machine_instance_init(Object *obj)
@@ -643,7 +694,7 @@ static void riscv_virt_machine_class_init(ObjectClass *oc, void *data)
mc->desc = "RISC-V VirtIO board";
mc->init = riscv_virt_board_init;
- mc->max_cpus = 8;
+ mc->max_cpus = VIRT_CPUS_MAX;
mc->default_cpu_type = VIRT_CPU;
mc->pci_allow_0_address = true;
}
diff --git a/include/hw/riscv/virt.h b/include/hw/riscv/virt.h
index e69355efaf..333d1edbc2 100644
--- a/include/hw/riscv/virt.h
+++ b/include/hw/riscv/virt.h
@@ -23,6 +23,11 @@
#include "hw/sysbus.h"
#include "hw/block/flash.h"
+#define VIRT_SOCKETS_MAX 4
+#define VIRT_CPUS_PER_SOCKET_MIN 2
+#define VIRT_CPUS_PER_SOCKET_MAX 4
+#define VIRT_CPUS_MAX (VIRT_SOCKETS_MAX * VIRT_CPUS_PER_SOCKET_MAX)
+
#define TYPE_RISCV_VIRT_MACHINE MACHINE_TYPE_NAME("virt")
#define RISCV_VIRT_MACHINE(obj) \
OBJECT_CHECK(RISCVVirtState, (obj), TYPE_RISCV_VIRT_MACHINE)
@@ -32,8 +37,9 @@ typedef struct {
MachineState parent;
/*< public >*/
- RISCVHartArrayState soc;
- DeviceState *plic;
+ unsigned int num_socs;
+ RISCVHartArrayState soc[VIRT_SOCKETS_MAX];
+ DeviceState *plic[VIRT_SOCKETS_MAX];
PFlashCFI01 *flash[2];
void *fdt;
@@ -74,6 +80,8 @@ enum {
#define VIRT_PLIC_ENABLE_STRIDE 0x80
#define VIRT_PLIC_CONTEXT_BASE 0x200000
#define VIRT_PLIC_CONTEXT_STRIDE 0x1000
+#define VIRT_PLIC_SIZE(__num_context) \
+ (VIRT_PLIC_CONTEXT_BASE + (__num_context) * VIRT_PLIC_CONTEXT_STRIDE)
#define FDT_PCI_ADDR_CELLS 3
#define FDT_PCI_INT_CELLS 1
--
2.25.1
^ permalink raw reply related [flat|nested] 19+ messages in thread
* Re: [PATCH 0/4] RISC-V multi-socket support
2020-05-16 6:37 [PATCH 0/4] RISC-V multi-socket support Anup Patel
` (3 preceding siblings ...)
2020-05-16 6:37 ` [PATCH 4/4] hw/riscv: virt: Allow creating multiple sockets Anup Patel
@ 2020-05-16 11:58 ` no-reply
2020-05-19 21:20 ` Alistair Francis
5 siblings, 0 replies; 19+ messages in thread
From: no-reply @ 2020-05-16 11:58 UTC (permalink / raw)
To: anup.patel
Cc: peter.maydell, qemu-riscv, sagark, anup, anup.patel, qemu-devel,
atish.patra, Alistair.Francis, palmer
Patchew URL: https://patchew.org/QEMU/20200516063746.18296-1-anup.patel@wdc.com/
Hi,
This series seems to have some coding style problems. See output below for
more information:
Message-id: 20200516063746.18296-1-anup.patel@wdc.com
Subject: [PATCH 0/4] RISC-V multi-socket support
Type: series
=== TEST SCRIPT BEGIN ===
#!/bin/bash
git rev-parse base > /dev/null || exit 0
git config --local diff.renamelimit 0
git config --local diff.renames True
git config --local diff.algorithm histogram
./scripts/checkpatch.pl --mailback base..
=== TEST SCRIPT END ===
Updating 3c8cf5a9c21ff8782164d1def7f44bd888713384
Switched to a new branch 'test'
9031755 hw/riscv: virt: Allow creating multiple sockets
67e9547 hw/riscv: Allow creating multiple instances of PLIC
2999a11 hw/riscv: spike: Allow creating multiple sockets
b563a80 hw/riscv: Allow creating multiple instances of CLINT
=== OUTPUT BEGIN ===
1/4 Checking commit b563a8089a7a (hw/riscv: Allow creating multiple instances of CLINT)
2/4 Checking commit 2999a1101f27 (hw/riscv: spike: Allow creating multiple sockets)
ERROR: braces {} are necessary for all arms of this statement
#202: FILE: hw/riscv/spike.c:194:
+ if ((smp_cpus / s->num_socs) < SPIKE_CPUS_PER_SOCKET_MIN)
[...]
ERROR: braces {} are necessary for all arms of this statement
#206: FILE: hw/riscv/spike.c:198:
+ if (SPIKE_SOCKETS_MAX < s->num_socs)
[...]
ERROR: braces {} are necessary for all arms of this statement
#212: FILE: hw/riscv/spike.c:204:
+ if (i == (s->num_socs - 1))
[...]
+ else
[...]
WARNING: line over 80 characters
#248: FILE: hw/riscv/spike.c:299:
+ htif_mm_init(system_memory, mask_rom, &s->soc[0].harts[0].env, serial_hd(0));
WARNING: line over 80 characters
#266: FILE: hw/riscv/spike.c:322:
+ object_initialize_child(OBJECT(machine), "soc", &s->soc[0], sizeof(s->soc[0]),
WARNING: line over 80 characters
#284: FILE: hw/riscv/spike.c:386:
+ htif_mm_init(system_memory, mask_rom, &s->soc[0].harts[0].env, serial_hd(0));
WARNING: line over 80 characters
#302: FILE: hw/riscv/spike.c:414:
+ object_initialize_child(OBJECT(machine), "soc", &s->soc[0], sizeof(s->soc[0]),
WARNING: line over 80 characters
#329: FILE: hw/riscv/spike.c:497:
+ htif_mm_init(system_memory, mask_rom, &s->soc[0].harts[0].env, serial_hd(0));
total: 3 errors, 5 warnings, 322 lines checked
Patch 2/4 has style problems, please review. If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
3/4 Checking commit 67e95477fcbe (hw/riscv: Allow creating multiple instances of PLIC)
4/4 Checking commit 90317551d9da (hw/riscv: virt: Allow creating multiple sockets)
ERROR: spaces required around that '*' (ctx:VxV)
#32: FILE: hw/riscv/virt.c:63:
+ [VIRT_PLIC] = { 0xc000000, VIRT_PLIC_SIZE(VIRT_CPUS_MAX*2) },
^
WARNING: line over 80 characters
#295: FILE: hw/riscv/virt.c:343:
+ qemu_fdt_setprop_cell(fdt, name, "interrupt-parent", plic_virtio_phandle);
ERROR: braces {} are necessary for all arms of this statement
#478: FILE: hw/riscv/virt.c:497:
+ if ((smp_cpus / s->num_socs) < VIRT_CPUS_PER_SOCKET_MIN)
[...]
ERROR: braces {} are necessary for all arms of this statement
#482: FILE: hw/riscv/virt.c:501:
+ if (VIRT_SOCKETS_MAX < s->num_socs)
[...]
ERROR: braces {} are necessary for all arms of this statement
#497: FILE: hw/riscv/virt.c:508:
+ if (i == (s->num_socs - 1))
[...]
+ else
[...]
total: 4 errors, 1 warnings, 638 lines checked
Patch 4/4 has style problems, please review. If any of these errors
are false positives report them to the maintainer, see
CHECKPATCH in MAINTAINERS.
=== OUTPUT END ===
Test command exited with code: 1
The full log is available at
http://patchew.org/logs/20200516063746.18296-1-anup.patel@wdc.com/testing.checkpatch/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH 0/4] RISC-V multi-socket support
2020-05-16 6:37 [PATCH 0/4] RISC-V multi-socket support Anup Patel
` (4 preceding siblings ...)
2020-05-16 11:58 ` [PATCH 0/4] RISC-V multi-socket support no-reply
@ 2020-05-19 21:20 ` Alistair Francis
2020-05-20 8:50 ` Anup Patel
5 siblings, 1 reply; 19+ messages in thread
From: Alistair Francis @ 2020-05-19 21:20 UTC (permalink / raw)
To: Anup Patel
Cc: Peter Maydell, open list:RISC-V, Sagar Karandikar, Anup Patel,
qemu-devel@nongnu.org Developers, Atish Patra, Alistair Francis,
Palmer Dabbelt
On Fri, May 15, 2020 at 11:40 PM Anup Patel <anup.patel@wdc.com> wrote:
>
> This series adds multi-socket support for RISC-V virt machine and
> RISC-V spike machine. The multi-socket support will help us improve
> various RISC-V operating systems, firmwares, and bootloader to
> support RISC-V NUMA systems.
>
> These patches can be found in the riscv_multi_socket_v1 branch at:
> https://github.com/avpatel/qemu.git
>
> To try these patches, we will need:
> 1. OpenSBI multi-PLIC and multi-CLINT support which can be found in
> multi_plic_clint_v1 branch at:
> https://github.com/avpatel/opensbi.git
> 2. Linux multi-PLIC improvements support which can be found in
> plic_imp_v1 branch at:
> https://github.com/avpatel/linux.git
>
> Anup Patel (4):
> hw/riscv: Allow creating multiple instances of CLINT
> hw/riscv: spike: Allow creating multiple sockets
> hw/riscv: Allow creating multiple instances of PLIC
> hw/riscv: virt: Allow creating multiple sockets
Can you make sure all the patches pass checkpatch?
Alistair
>
> hw/riscv/sifive_clint.c | 20 +-
> hw/riscv/sifive_e.c | 4 +-
> hw/riscv/sifive_plic.c | 24 +-
> hw/riscv/sifive_u.c | 4 +-
> hw/riscv/spike.c | 210 ++++++++------
> hw/riscv/virt.c | 495 ++++++++++++++++++--------------
> include/hw/riscv/sifive_clint.h | 7 +-
> include/hw/riscv/sifive_plic.h | 12 +-
> include/hw/riscv/spike.h | 8 +-
> include/hw/riscv/virt.h | 12 +-
> 10 files changed, 458 insertions(+), 338 deletions(-)
>
> --
> 2.25.1
>
>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH 1/4] hw/riscv: Allow creating multiple instances of CLINT
2020-05-16 6:37 ` [PATCH 1/4] hw/riscv: Allow creating multiple instances of CLINT Anup Patel
@ 2020-05-19 21:21 ` Alistair Francis
2020-05-21 20:16 ` Palmer Dabbelt
1 sibling, 0 replies; 19+ messages in thread
From: Alistair Francis @ 2020-05-19 21:21 UTC (permalink / raw)
To: Anup Patel
Cc: Peter Maydell, open list:RISC-V, Sagar Karandikar, Anup Patel,
qemu-devel@nongnu.org Developers, Atish Patra, Alistair Francis,
Palmer Dabbelt
On Fri, May 15, 2020 at 11:39 PM Anup Patel <anup.patel@wdc.com> wrote:
>
> We extend the CLINT emulation to allow multiple instances of the CLINT
> in a QEMU RISC-V machine. To achieve this, we remove the assumption
> that the first HART id is zero from the CLINT emulation.
>
> Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Alistair
> ---
> hw/riscv/sifive_clint.c | 20 ++++++++++++--------
> hw/riscv/sifive_e.c | 2 +-
> hw/riscv/sifive_u.c | 2 +-
> hw/riscv/spike.c | 6 +++---
> hw/riscv/virt.c | 2 +-
> include/hw/riscv/sifive_clint.h | 7 ++++---
> 6 files changed, 22 insertions(+), 17 deletions(-)
>
> diff --git a/hw/riscv/sifive_clint.c b/hw/riscv/sifive_clint.c
> index e933d35092..7d713fd743 100644
> --- a/hw/riscv/sifive_clint.c
> +++ b/hw/riscv/sifive_clint.c
> @@ -78,7 +78,7 @@ static uint64_t sifive_clint_read(void *opaque, hwaddr addr, unsigned size)
> SiFiveCLINTState *clint = opaque;
> if (addr >= clint->sip_base &&
> addr < clint->sip_base + (clint->num_harts << 2)) {
> - size_t hartid = (addr - clint->sip_base) >> 2;
> + size_t hartid = clint->hartid_base + ((addr - clint->sip_base) >> 2);
> CPUState *cpu = qemu_get_cpu(hartid);
> CPURISCVState *env = cpu ? cpu->env_ptr : NULL;
> if (!env) {
> @@ -91,7 +91,8 @@ static uint64_t sifive_clint_read(void *opaque, hwaddr addr, unsigned size)
> }
> } else if (addr >= clint->timecmp_base &&
> addr < clint->timecmp_base + (clint->num_harts << 3)) {
> - size_t hartid = (addr - clint->timecmp_base) >> 3;
> + size_t hartid = clint->hartid_base +
> + ((addr - clint->timecmp_base) >> 3);
> CPUState *cpu = qemu_get_cpu(hartid);
> CPURISCVState *env = cpu ? cpu->env_ptr : NULL;
> if (!env) {
> @@ -128,7 +129,7 @@ static void sifive_clint_write(void *opaque, hwaddr addr, uint64_t value,
>
> if (addr >= clint->sip_base &&
> addr < clint->sip_base + (clint->num_harts << 2)) {
> - size_t hartid = (addr - clint->sip_base) >> 2;
> + size_t hartid = clint->hartid_base + ((addr - clint->sip_base) >> 2);
> CPUState *cpu = qemu_get_cpu(hartid);
> CPURISCVState *env = cpu ? cpu->env_ptr : NULL;
> if (!env) {
> @@ -141,7 +142,8 @@ static void sifive_clint_write(void *opaque, hwaddr addr, uint64_t value,
> return;
> } else if (addr >= clint->timecmp_base &&
> addr < clint->timecmp_base + (clint->num_harts << 3)) {
> - size_t hartid = (addr - clint->timecmp_base) >> 3;
> + size_t hartid = clint->hartid_base +
> + ((addr - clint->timecmp_base) >> 3);
> CPUState *cpu = qemu_get_cpu(hartid);
> CPURISCVState *env = cpu ? cpu->env_ptr : NULL;
> if (!env) {
> @@ -185,6 +187,7 @@ static const MemoryRegionOps sifive_clint_ops = {
> };
>
> static Property sifive_clint_properties[] = {
> + DEFINE_PROP_UINT32("hartid-base", SiFiveCLINTState, hartid_base, 0),
> DEFINE_PROP_UINT32("num-harts", SiFiveCLINTState, num_harts, 0),
> DEFINE_PROP_UINT32("sip-base", SiFiveCLINTState, sip_base, 0),
> DEFINE_PROP_UINT32("timecmp-base", SiFiveCLINTState, timecmp_base, 0),
> @@ -226,13 +229,13 @@ type_init(sifive_clint_register_types)
> /*
> * Create CLINT device.
> */
> -DeviceState *sifive_clint_create(hwaddr addr, hwaddr size, uint32_t num_harts,
> - uint32_t sip_base, uint32_t timecmp_base, uint32_t time_base,
> - bool provide_rdtime)
> +DeviceState *sifive_clint_create(hwaddr addr, hwaddr size,
> + uint32_t hartid_base, uint32_t num_harts, uint32_t sip_base,
> + uint32_t timecmp_base, uint32_t time_base, bool provide_rdtime)
> {
> int i;
> for (i = 0; i < num_harts; i++) {
> - CPUState *cpu = qemu_get_cpu(i);
> + CPUState *cpu = qemu_get_cpu(hartid_base + i);
> CPURISCVState *env = cpu ? cpu->env_ptr : NULL;
> if (!env) {
> continue;
> @@ -246,6 +249,7 @@ DeviceState *sifive_clint_create(hwaddr addr, hwaddr size, uint32_t num_harts,
> }
>
> DeviceState *dev = qdev_create(NULL, TYPE_SIFIVE_CLINT);
> + qdev_prop_set_uint32(dev, "hartid-base", hartid_base);
> qdev_prop_set_uint32(dev, "num-harts", num_harts);
> qdev_prop_set_uint32(dev, "sip-base", sip_base);
> qdev_prop_set_uint32(dev, "timecmp-base", timecmp_base);
> diff --git a/hw/riscv/sifive_e.c b/hw/riscv/sifive_e.c
> index b53109521e..1c3b37d0ba 100644
> --- a/hw/riscv/sifive_e.c
> +++ b/hw/riscv/sifive_e.c
> @@ -163,7 +163,7 @@ static void riscv_sifive_e_soc_realize(DeviceState *dev, Error **errp)
> SIFIVE_E_PLIC_CONTEXT_STRIDE,
> memmap[SIFIVE_E_PLIC].size);
> sifive_clint_create(memmap[SIFIVE_E_CLINT].base,
> - memmap[SIFIVE_E_CLINT].size, ms->smp.cpus,
> + memmap[SIFIVE_E_CLINT].size, 0, ms->smp.cpus,
> SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE, false);
> create_unimplemented_device("riscv.sifive.e.aon",
> memmap[SIFIVE_E_AON].base, memmap[SIFIVE_E_AON].size);
> diff --git a/hw/riscv/sifive_u.c b/hw/riscv/sifive_u.c
> index bed10fcfa8..22997fbf13 100644
> --- a/hw/riscv/sifive_u.c
> +++ b/hw/riscv/sifive_u.c
> @@ -601,7 +601,7 @@ static void riscv_sifive_u_soc_realize(DeviceState *dev, Error **errp)
> sifive_uart_create(system_memory, memmap[SIFIVE_U_UART1].base,
> serial_hd(1), qdev_get_gpio_in(DEVICE(s->plic), SIFIVE_U_UART1_IRQ));
> sifive_clint_create(memmap[SIFIVE_U_CLINT].base,
> - memmap[SIFIVE_U_CLINT].size, ms->smp.cpus,
> + memmap[SIFIVE_U_CLINT].size, 0, ms->smp.cpus,
> SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE, false);
>
> object_property_set_bool(OBJECT(&s->prci), true, "realized", &err);
> diff --git a/hw/riscv/spike.c b/hw/riscv/spike.c
> index d0c4843712..d5e0103d89 100644
> --- a/hw/riscv/spike.c
> +++ b/hw/riscv/spike.c
> @@ -253,7 +253,7 @@ static void spike_board_init(MachineState *machine)
>
> /* Core Local Interruptor (timer and IPI) */
> sifive_clint_create(memmap[SPIKE_CLINT].base, memmap[SPIKE_CLINT].size,
> - smp_cpus, SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE,
> + 0, smp_cpus, SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE,
> false);
> }
>
> @@ -343,7 +343,7 @@ static void spike_v1_10_0_board_init(MachineState *machine)
>
> /* Core Local Interruptor (timer and IPI) */
> sifive_clint_create(memmap[SPIKE_CLINT].base, memmap[SPIKE_CLINT].size,
> - smp_cpus, SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE,
> + 0, smp_cpus, SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE,
> false);
> }
>
> @@ -452,7 +452,7 @@ static void spike_v1_09_1_board_init(MachineState *machine)
>
> /* Core Local Interruptor (timer and IPI) */
> sifive_clint_create(memmap[SPIKE_CLINT].base, memmap[SPIKE_CLINT].size,
> - smp_cpus, SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE,
> + 0, smp_cpus, SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE,
> false);
>
> g_free(config_string);
> diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
> index daae3ebdbb..dcb8a83b35 100644
> --- a/hw/riscv/virt.c
> +++ b/hw/riscv/virt.c
> @@ -596,7 +596,7 @@ static void riscv_virt_board_init(MachineState *machine)
> VIRT_PLIC_CONTEXT_STRIDE,
> memmap[VIRT_PLIC].size);
> sifive_clint_create(memmap[VIRT_CLINT].base,
> - memmap[VIRT_CLINT].size, smp_cpus,
> + memmap[VIRT_CLINT].size, 0, smp_cpus,
> SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE, true);
> sifive_test_create(memmap[VIRT_TEST].base);
>
> diff --git a/include/hw/riscv/sifive_clint.h b/include/hw/riscv/sifive_clint.h
> index 4a720bfece..9f5fb3d31d 100644
> --- a/include/hw/riscv/sifive_clint.h
> +++ b/include/hw/riscv/sifive_clint.h
> @@ -33,6 +33,7 @@ typedef struct SiFiveCLINTState {
>
> /*< public >*/
> MemoryRegion mmio;
> + uint32_t hartid_base;
> uint32_t num_harts;
> uint32_t sip_base;
> uint32_t timecmp_base;
> @@ -40,9 +41,9 @@ typedef struct SiFiveCLINTState {
> uint32_t aperture_size;
> } SiFiveCLINTState;
>
> -DeviceState *sifive_clint_create(hwaddr addr, hwaddr size, uint32_t num_harts,
> - uint32_t sip_base, uint32_t timecmp_base, uint32_t time_base,
> - bool provide_rdtime);
> +DeviceState *sifive_clint_create(hwaddr addr, hwaddr size,
> + uint32_t hartid_base, uint32_t num_harts, uint32_t sip_base,
> + uint32_t timecmp_base, uint32_t time_base, bool provide_rdtime);
>
> enum {
> SIFIVE_SIP_BASE = 0x0,
> --
> 2.25.1
>
>
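The core of the patch above is the address decode: each msip register is 4 bytes and each timecmp register is 8 bytes, so the register offset now selects a hart relative to the CLINT's hartid-base instead of always relative to hart 0. A standalone sketch of the sip-side decode, assuming the same layout as the diff (the function name is illustrative, not the QEMU function):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative sketch of the decode in sifive_clint_read() after this
 * patch: msip registers are 4 bytes wide, so the offset into the sip
 * region picks a hart relative to the CLINT's first hart id. */
static size_t sip_addr_to_hartid(uint64_t addr, uint64_t sip_base,
                                 uint32_t hartid_base)
{
    return hartid_base + (size_t)((addr - sip_base) >> 2);
}
```

With hartid_base 0 this reduces to the old behaviour; a second CLINT instance with hartid_base 4 maps the same offsets onto harts 4 and up.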
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: [PATCH 0/4] RISC-V multi-socket support
2020-05-19 21:20 ` Alistair Francis
@ 2020-05-20 8:50 ` Anup Patel
0 siblings, 0 replies; 19+ messages in thread
From: Anup Patel @ 2020-05-20 8:50 UTC (permalink / raw)
To: Alistair Francis
Cc: Peter Maydell, open list:RISC-V, Sagar Karandikar, Anup Patel,
qemu-devel@nongnu.org Developers, Atish Patra, Alistair Francis,
Palmer Dabbelt
> -----Original Message-----
> From: Alistair Francis <alistair23@gmail.com>
> Sent: 20 May 2020 02:50
> To: Anup Patel <Anup.Patel@wdc.com>
> Cc: Peter Maydell <peter.maydell@linaro.org>; Palmer Dabbelt
> <palmer@dabbelt.com>; Alistair Francis <Alistair.Francis@wdc.com>; Sagar
> Karandikar <sagark@eecs.berkeley.edu>; Atish Patra <Atish.Patra@wdc.com>;
> open list:RISC-V <qemu-riscv@nongnu.org>; qemu-devel@nongnu.org
> Developers <qemu-devel@nongnu.org>; Anup Patel <anup@brainfault.org>
> Subject: Re: [PATCH 0/4] RISC-V multi-socket support
>
> On Fri, May 15, 2020 at 11:40 PM Anup Patel <anup.patel@wdc.com> wrote:
> >
> > This series adds multi-socket support for the RISC-V virt machine and
> > the RISC-V spike machine. The multi-socket support will help us improve
> > various RISC-V operating systems, firmwares, and bootloaders to support
> > RISC-V NUMA systems.
> >
> > These patches can be found in the riscv_multi_socket_v1 branch at:
> > https://github.com/avpatel/qemu.git
> >
> > To try these patches, we will need:
> > 1. OpenSBI multi-PLIC and multi-CLINT support which can be found in
> > multi_plic_clint_v1 branch at:
> > https://github.com/avpatel/opensbi.git
> > 2. Linux multi-PLIC improvements support which can be found in
> > plic_imp_v1 branch at:
> > https://github.com/avpatel/linux.git
> >
> > Anup Patel (4):
> > hw/riscv: Allow creating multiple instances of CLINT
> > hw/riscv: spike: Allow creating multiple sockets
> > hw/riscv: Allow creating multiple instances of PLIC
> > hw/riscv: virt: Allow creating multiple sockets
>
> Can you make sure all the patches pass checkpatch?
My bad, I forgot to run checkpatch on this series.
I will update in v2.
Regards,
Anup
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH 1/4] hw/riscv: Allow creating multiple instances of CLINT
2020-05-16 6:37 ` [PATCH 1/4] hw/riscv: Allow creating multiple instances of CLINT Anup Patel
2020-05-19 21:21 ` Alistair Francis
@ 2020-05-21 20:16 ` Palmer Dabbelt
1 sibling, 0 replies; 19+ messages in thread
From: Palmer Dabbelt @ 2020-05-21 20:16 UTC (permalink / raw)
To: Anup Patel
Cc: Peter Maydell, qemu-riscv, sagark, anup, Anup Patel, qemu-devel,
Atish Patra, Alistair Francis
On Fri, 15 May 2020 23:37:43 PDT (-0700), Anup Patel wrote:
> We extend the CLINT emulation to allow multiple instances of the CLINT
> in a QEMU RISC-V machine. To achieve this, we remove the assumption
> that the first HART id is zero from the CLINT emulation.
>
> Signed-off-by: Anup Patel <anup.patel@wdc.com>
> ---
> hw/riscv/sifive_clint.c | 20 ++++++++++++--------
> hw/riscv/sifive_e.c | 2 +-
> hw/riscv/sifive_u.c | 2 +-
> hw/riscv/spike.c | 6 +++---
> hw/riscv/virt.c | 2 +-
> include/hw/riscv/sifive_clint.h | 7 ++++---
> 6 files changed, 22 insertions(+), 17 deletions(-)
>
> diff --git a/hw/riscv/sifive_clint.c b/hw/riscv/sifive_clint.c
> index e933d35092..7d713fd743 100644
> --- a/hw/riscv/sifive_clint.c
> +++ b/hw/riscv/sifive_clint.c
> @@ -78,7 +78,7 @@ static uint64_t sifive_clint_read(void *opaque, hwaddr addr, unsigned size)
> SiFiveCLINTState *clint = opaque;
> if (addr >= clint->sip_base &&
> addr < clint->sip_base + (clint->num_harts << 2)) {
> - size_t hartid = (addr - clint->sip_base) >> 2;
> + size_t hartid = clint->hartid_base + ((addr - clint->sip_base) >> 2);
> CPUState *cpu = qemu_get_cpu(hartid);
> CPURISCVState *env = cpu ? cpu->env_ptr : NULL;
> if (!env) {
> @@ -91,7 +91,8 @@ static uint64_t sifive_clint_read(void *opaque, hwaddr addr, unsigned size)
> }
> } else if (addr >= clint->timecmp_base &&
> addr < clint->timecmp_base + (clint->num_harts << 3)) {
> - size_t hartid = (addr - clint->timecmp_base) >> 3;
> + size_t hartid = clint->hartid_base +
> + ((addr - clint->timecmp_base) >> 3);
> CPUState *cpu = qemu_get_cpu(hartid);
> CPURISCVState *env = cpu ? cpu->env_ptr : NULL;
> if (!env) {
> @@ -128,7 +129,7 @@ static void sifive_clint_write(void *opaque, hwaddr addr, uint64_t value,
>
> if (addr >= clint->sip_base &&
> addr < clint->sip_base + (clint->num_harts << 2)) {
> - size_t hartid = (addr - clint->sip_base) >> 2;
> + size_t hartid = clint->hartid_base + ((addr - clint->sip_base) >> 2);
> CPUState *cpu = qemu_get_cpu(hartid);
> CPURISCVState *env = cpu ? cpu->env_ptr : NULL;
> if (!env) {
> @@ -141,7 +142,8 @@ static void sifive_clint_write(void *opaque, hwaddr addr, uint64_t value,
> return;
> } else if (addr >= clint->timecmp_base &&
> addr < clint->timecmp_base + (clint->num_harts << 3)) {
> - size_t hartid = (addr - clint->timecmp_base) >> 3;
> + size_t hartid = clint->hartid_base +
> + ((addr - clint->timecmp_base) >> 3);
> CPUState *cpu = qemu_get_cpu(hartid);
> CPURISCVState *env = cpu ? cpu->env_ptr : NULL;
> if (!env) {
> @@ -185,6 +187,7 @@ static const MemoryRegionOps sifive_clint_ops = {
> };
>
> static Property sifive_clint_properties[] = {
> + DEFINE_PROP_UINT32("hartid-base", SiFiveCLINTState, hartid_base, 0),
> DEFINE_PROP_UINT32("num-harts", SiFiveCLINTState, num_harts, 0),
> DEFINE_PROP_UINT32("sip-base", SiFiveCLINTState, sip_base, 0),
> DEFINE_PROP_UINT32("timecmp-base", SiFiveCLINTState, timecmp_base, 0),
> @@ -226,13 +229,13 @@ type_init(sifive_clint_register_types)
> /*
> * Create CLINT device.
> */
> -DeviceState *sifive_clint_create(hwaddr addr, hwaddr size, uint32_t num_harts,
> - uint32_t sip_base, uint32_t timecmp_base, uint32_t time_base,
> - bool provide_rdtime)
> +DeviceState *sifive_clint_create(hwaddr addr, hwaddr size,
> + uint32_t hartid_base, uint32_t num_harts, uint32_t sip_base,
> + uint32_t timecmp_base, uint32_t time_base, bool provide_rdtime)
> {
> int i;
> for (i = 0; i < num_harts; i++) {
> - CPUState *cpu = qemu_get_cpu(i);
> + CPUState *cpu = qemu_get_cpu(hartid_base + i);
> CPURISCVState *env = cpu ? cpu->env_ptr : NULL;
> if (!env) {
> continue;
> @@ -246,6 +249,7 @@ DeviceState *sifive_clint_create(hwaddr addr, hwaddr size, uint32_t num_harts,
> }
>
> DeviceState *dev = qdev_create(NULL, TYPE_SIFIVE_CLINT);
> + qdev_prop_set_uint32(dev, "hartid-base", hartid_base);
> qdev_prop_set_uint32(dev, "num-harts", num_harts);
> qdev_prop_set_uint32(dev, "sip-base", sip_base);
> qdev_prop_set_uint32(dev, "timecmp-base", timecmp_base);
> diff --git a/hw/riscv/sifive_e.c b/hw/riscv/sifive_e.c
> index b53109521e..1c3b37d0ba 100644
> --- a/hw/riscv/sifive_e.c
> +++ b/hw/riscv/sifive_e.c
> @@ -163,7 +163,7 @@ static void riscv_sifive_e_soc_realize(DeviceState *dev, Error **errp)
> SIFIVE_E_PLIC_CONTEXT_STRIDE,
> memmap[SIFIVE_E_PLIC].size);
> sifive_clint_create(memmap[SIFIVE_E_CLINT].base,
> - memmap[SIFIVE_E_CLINT].size, ms->smp.cpus,
> + memmap[SIFIVE_E_CLINT].size, 0, ms->smp.cpus,
> SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE, false);
> create_unimplemented_device("riscv.sifive.e.aon",
> memmap[SIFIVE_E_AON].base, memmap[SIFIVE_E_AON].size);
> diff --git a/hw/riscv/sifive_u.c b/hw/riscv/sifive_u.c
> index bed10fcfa8..22997fbf13 100644
> --- a/hw/riscv/sifive_u.c
> +++ b/hw/riscv/sifive_u.c
> @@ -601,7 +601,7 @@ static void riscv_sifive_u_soc_realize(DeviceState *dev, Error **errp)
> sifive_uart_create(system_memory, memmap[SIFIVE_U_UART1].base,
> serial_hd(1), qdev_get_gpio_in(DEVICE(s->plic), SIFIVE_U_UART1_IRQ));
> sifive_clint_create(memmap[SIFIVE_U_CLINT].base,
> - memmap[SIFIVE_U_CLINT].size, ms->smp.cpus,
> + memmap[SIFIVE_U_CLINT].size, 0, ms->smp.cpus,
> SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE, false);
>
> object_property_set_bool(OBJECT(&s->prci), true, "realized", &err);
> diff --git a/hw/riscv/spike.c b/hw/riscv/spike.c
> index d0c4843712..d5e0103d89 100644
> --- a/hw/riscv/spike.c
> +++ b/hw/riscv/spike.c
> @@ -253,7 +253,7 @@ static void spike_board_init(MachineState *machine)
>
> /* Core Local Interruptor (timer and IPI) */
> sifive_clint_create(memmap[SPIKE_CLINT].base, memmap[SPIKE_CLINT].size,
> - smp_cpus, SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE,
> + 0, smp_cpus, SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE,
> false);
> }
>
> @@ -343,7 +343,7 @@ static void spike_v1_10_0_board_init(MachineState *machine)
>
> /* Core Local Interruptor (timer and IPI) */
> sifive_clint_create(memmap[SPIKE_CLINT].base, memmap[SPIKE_CLINT].size,
> - smp_cpus, SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE,
> + 0, smp_cpus, SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE,
> false);
> }
>
> @@ -452,7 +452,7 @@ static void spike_v1_09_1_board_init(MachineState *machine)
>
> /* Core Local Interruptor (timer and IPI) */
> sifive_clint_create(memmap[SPIKE_CLINT].base, memmap[SPIKE_CLINT].size,
> - smp_cpus, SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE,
> + 0, smp_cpus, SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE,
> false);
>
> g_free(config_string);
> diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
> index daae3ebdbb..dcb8a83b35 100644
> --- a/hw/riscv/virt.c
> +++ b/hw/riscv/virt.c
> @@ -596,7 +596,7 @@ static void riscv_virt_board_init(MachineState *machine)
> VIRT_PLIC_CONTEXT_STRIDE,
> memmap[VIRT_PLIC].size);
> sifive_clint_create(memmap[VIRT_CLINT].base,
> - memmap[VIRT_CLINT].size, smp_cpus,
> + memmap[VIRT_CLINT].size, 0, smp_cpus,
> SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE, true);
> sifive_test_create(memmap[VIRT_TEST].base);
>
> diff --git a/include/hw/riscv/sifive_clint.h b/include/hw/riscv/sifive_clint.h
> index 4a720bfece..9f5fb3d31d 100644
> --- a/include/hw/riscv/sifive_clint.h
> +++ b/include/hw/riscv/sifive_clint.h
> @@ -33,6 +33,7 @@ typedef struct SiFiveCLINTState {
>
> /*< public >*/
> MemoryRegion mmio;
> + uint32_t hartid_base;
> uint32_t num_harts;
> uint32_t sip_base;
> uint32_t timecmp_base;
> @@ -40,9 +41,9 @@ typedef struct SiFiveCLINTState {
> uint32_t aperture_size;
> } SiFiveCLINTState;
>
> -DeviceState *sifive_clint_create(hwaddr addr, hwaddr size, uint32_t num_harts,
> - uint32_t sip_base, uint32_t timecmp_base, uint32_t time_base,
> - bool provide_rdtime);
> +DeviceState *sifive_clint_create(hwaddr addr, hwaddr size,
> + uint32_t hartid_base, uint32_t num_harts, uint32_t sip_base,
> + uint32_t timecmp_base, uint32_t time_base, bool provide_rdtime);
>
> enum {
> SIFIVE_SIP_BASE = 0x0,
Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH 2/4] hw/riscv: spike: Allow creating multiple sockets
2020-05-16 6:37 ` [PATCH 2/4] hw/riscv: spike: Allow creating multiple sockets Anup Patel
@ 2020-05-21 20:16 ` Palmer Dabbelt
2020-05-22 10:09 ` Anup Patel
0 siblings, 1 reply; 19+ messages in thread
From: Palmer Dabbelt @ 2020-05-21 20:16 UTC (permalink / raw)
To: Anup Patel
Cc: Peter Maydell, qemu-riscv, sagark, anup, Anup Patel, qemu-devel,
Atish Patra, Alistair Francis
On Fri, 15 May 2020 23:37:44 PDT (-0700), Anup Patel wrote:
> We extend the RISC-V spike machine to allow creating a multi-socket
> machine. Each spike machine socket is a set of HARTs and a CLINT
> instance. Other peripherals are shared between all spike machine
> sockets. We also update the spike machine device tree to treat each
> socket as a NUMA node.
>
> The number of sockets in the RISC-V spike machine can be specified
> using the "sockets=" sub-option of the QEMU "-smp" command-line
> option. By default, a single-socket spike machine is created.
>
> Currently, we only allow creating up to a maximum of 4 sockets, with a
> minimum of 2 HARTs per socket. In the future, these limits can be
> changed.
>
> Signed-off-by: Anup Patel <anup.patel@wdc.com>
> ---
> hw/riscv/spike.c | 206 ++++++++++++++++++++++++---------------
> include/hw/riscv/spike.h | 8 +-
> 2 files changed, 133 insertions(+), 81 deletions(-)
>
> diff --git a/hw/riscv/spike.c b/hw/riscv/spike.c
> index d5e0103d89..f63c57a87c 100644
> --- a/hw/riscv/spike.c
> +++ b/hw/riscv/spike.c
> @@ -64,9 +64,11 @@ static void create_fdt(SpikeState *s, const struct MemmapEntry *memmap,
> uint64_t mem_size, const char *cmdline)
> {
> void *fdt;
> - int cpu;
> - uint32_t *cells;
> - char *nodename;
> + int cpu, socket;
> + uint32_t *clint_cells;
> + unsigned long clint_addr;
> + uint32_t cpu_phandle, intc_phandle, phandle = 1;
> + char *name, *clint_name, *clust_name, *core_name, *cpu_name, *intc_name;
>
> fdt = s->fdt = create_device_tree(&s->fdt_size);
> if (!fdt) {
> @@ -88,68 +90,85 @@ static void create_fdt(SpikeState *s, const struct MemmapEntry *memmap,
> qemu_fdt_setprop_cell(fdt, "/soc", "#size-cells", 0x2);
> qemu_fdt_setprop_cell(fdt, "/soc", "#address-cells", 0x2);
>
> - nodename = g_strdup_printf("/memory@%lx",
> - (long)memmap[SPIKE_DRAM].base);
> - qemu_fdt_add_subnode(fdt, nodename);
> - qemu_fdt_setprop_cells(fdt, nodename, "reg",
> + name = g_strdup_printf("/memory@%lx", (long)memmap[SPIKE_DRAM].base);
> + qemu_fdt_add_subnode(fdt, name);
> + qemu_fdt_setprop_cells(fdt, name, "reg",
> memmap[SPIKE_DRAM].base >> 32, memmap[SPIKE_DRAM].base,
> mem_size >> 32, mem_size);
> - qemu_fdt_setprop_string(fdt, nodename, "device_type", "memory");
> - g_free(nodename);
> + qemu_fdt_setprop_string(fdt, name, "device_type", "memory");
> + g_free(name);
>
> qemu_fdt_add_subnode(fdt, "/cpus");
> qemu_fdt_setprop_cell(fdt, "/cpus", "timebase-frequency",
> SIFIVE_CLINT_TIMEBASE_FREQ);
> qemu_fdt_setprop_cell(fdt, "/cpus", "#size-cells", 0x0);
> qemu_fdt_setprop_cell(fdt, "/cpus", "#address-cells", 0x1);
> + qemu_fdt_add_subnode(fdt, "/cpus/cpu-map");
>
> - for (cpu = s->soc.num_harts - 1; cpu >= 0; cpu--) {
> - nodename = g_strdup_printf("/cpus/cpu@%d", cpu);
> - char *intc = g_strdup_printf("/cpus/cpu@%d/interrupt-controller", cpu);
> - char *isa = riscv_isa_string(&s->soc.harts[cpu]);
> - qemu_fdt_add_subnode(fdt, nodename);
> + for (socket = (s->num_socs - 1); socket >= 0; socket--) {
> + clust_name = g_strdup_printf("/cpus/cpu-map/cluster0%d", socket);
> + qemu_fdt_add_subnode(fdt, clust_name);
> +
> + clint_cells = g_new0(uint32_t, s->soc[socket].num_harts * 4);
> +
> + for (cpu = s->soc[socket].num_harts - 1; cpu >= 0; cpu--) {
> + cpu_phandle = phandle++;
> +
> + cpu_name = g_strdup_printf("/cpus/cpu@%d",
> + s->soc[socket].hartid_base + cpu);
> + qemu_fdt_add_subnode(fdt, cpu_name);
> #if defined(TARGET_RISCV32)
> - qemu_fdt_setprop_string(fdt, nodename, "mmu-type", "riscv,sv32");
> + qemu_fdt_setprop_string(fdt, cpu_name, "mmu-type", "riscv,sv32");
> #else
> - qemu_fdt_setprop_string(fdt, nodename, "mmu-type", "riscv,sv48");
> + qemu_fdt_setprop_string(fdt, cpu_name, "mmu-type", "riscv,sv48");
> #endif
> - qemu_fdt_setprop_string(fdt, nodename, "riscv,isa", isa);
> - qemu_fdt_setprop_string(fdt, nodename, "compatible", "riscv");
> - qemu_fdt_setprop_string(fdt, nodename, "status", "okay");
> - qemu_fdt_setprop_cell(fdt, nodename, "reg", cpu);
> - qemu_fdt_setprop_string(fdt, nodename, "device_type", "cpu");
> - qemu_fdt_add_subnode(fdt, intc);
> - qemu_fdt_setprop_cell(fdt, intc, "phandle", 1);
> - qemu_fdt_setprop_string(fdt, intc, "compatible", "riscv,cpu-intc");
> - qemu_fdt_setprop(fdt, intc, "interrupt-controller", NULL, 0);
> - qemu_fdt_setprop_cell(fdt, intc, "#interrupt-cells", 1);
> - g_free(isa);
> - g_free(intc);
> - g_free(nodename);
> - }
> + name = riscv_isa_string(&s->soc[socket].harts[cpu]);
> + qemu_fdt_setprop_string(fdt, cpu_name, "riscv,isa", name);
> + g_free(name);
> + qemu_fdt_setprop_string(fdt, cpu_name, "compatible", "riscv");
> + qemu_fdt_setprop_string(fdt, cpu_name, "status", "okay");
> + qemu_fdt_setprop_cell(fdt, cpu_name, "reg",
> + s->soc[socket].hartid_base + cpu);
> + qemu_fdt_setprop_string(fdt, cpu_name, "device_type", "cpu");
> + qemu_fdt_setprop_cell(fdt, cpu_name, "phandle", cpu_phandle);
> +
> + intc_name = g_strdup_printf("%s/interrupt-controller", cpu_name);
> + qemu_fdt_add_subnode(fdt, intc_name);
> + intc_phandle = phandle++;
> + qemu_fdt_setprop_cell(fdt, intc_name, "phandle", intc_phandle);
> + qemu_fdt_setprop_string(fdt, intc_name, "compatible",
> + "riscv,cpu-intc");
> + qemu_fdt_setprop(fdt, intc_name, "interrupt-controller", NULL, 0);
> + qemu_fdt_setprop_cell(fdt, intc_name, "#interrupt-cells", 1);
> +
> + clint_cells[cpu * 4 + 0] = cpu_to_be32(intc_phandle);
> + clint_cells[cpu * 4 + 1] = cpu_to_be32(IRQ_M_SOFT);
> + clint_cells[cpu * 4 + 2] = cpu_to_be32(intc_phandle);
> + clint_cells[cpu * 4 + 3] = cpu_to_be32(IRQ_M_TIMER);
> +
> + core_name = g_strdup_printf("%s/core%d", clust_name, cpu);
> + qemu_fdt_add_subnode(fdt, core_name);
> + qemu_fdt_setprop_cell(fdt, core_name, "cpu", cpu_phandle);
> +
> + g_free(core_name);
> + g_free(intc_name);
> + g_free(cpu_name);
> + }
>
> - cells = g_new0(uint32_t, s->soc.num_harts * 4);
> - for (cpu = 0; cpu < s->soc.num_harts; cpu++) {
> - nodename =
> - g_strdup_printf("/cpus/cpu@%d/interrupt-controller", cpu);
> - uint32_t intc_phandle = qemu_fdt_get_phandle(fdt, nodename);
> - cells[cpu * 4 + 0] = cpu_to_be32(intc_phandle);
> - cells[cpu * 4 + 1] = cpu_to_be32(IRQ_M_SOFT);
> - cells[cpu * 4 + 2] = cpu_to_be32(intc_phandle);
> - cells[cpu * 4 + 3] = cpu_to_be32(IRQ_M_TIMER);
> - g_free(nodename);
> + clint_addr = memmap[SPIKE_CLINT].base +
> + (memmap[SPIKE_CLINT].size * socket);
> + clint_name = g_strdup_printf("/soc/clint@%lx", clint_addr);
> + qemu_fdt_add_subnode(fdt, clint_name);
> + qemu_fdt_setprop_string(fdt, clint_name, "compatible", "riscv,clint0");
> + qemu_fdt_setprop_cells(fdt, clint_name, "reg",
> + 0x0, clint_addr, 0x0, memmap[SPIKE_CLINT].size);
> + qemu_fdt_setprop(fdt, clint_name, "interrupts-extended",
> + clint_cells, s->soc[socket].num_harts * sizeof(uint32_t) * 4);
> +
> + g_free(clint_name);
> + g_free(clint_cells);
> + g_free(clust_name);
> }
> - nodename = g_strdup_printf("/soc/clint@%lx",
> - (long)memmap[SPIKE_CLINT].base);
> - qemu_fdt_add_subnode(fdt, nodename);
> - qemu_fdt_setprop_string(fdt, nodename, "compatible", "riscv,clint0");
> - qemu_fdt_setprop_cells(fdt, nodename, "reg",
> - 0x0, memmap[SPIKE_CLINT].base,
> - 0x0, memmap[SPIKE_CLINT].size);
> - qemu_fdt_setprop(fdt, nodename, "interrupts-extended",
> - cells, s->soc.num_harts * sizeof(uint32_t) * 4);
> - g_free(cells);
> - g_free(nodename);
>
> if (cmdline) {
> qemu_fdt_add_subnode(fdt, "/chosen");
> @@ -160,23 +179,51 @@ static void create_fdt(SpikeState *s, const struct MemmapEntry *memmap,
> static void spike_board_init(MachineState *machine)
> {
> const struct MemmapEntry *memmap = spike_memmap;
> -
> SpikeState *s = g_new0(SpikeState, 1);
> MemoryRegion *system_memory = get_system_memory();
> MemoryRegion *main_mem = g_new(MemoryRegion, 1);
> MemoryRegion *mask_rom = g_new(MemoryRegion, 1);
> int i;
> + char *soc_name;
> unsigned int smp_cpus = machine->smp.cpus;
> -
> - /* Initialize SOC */
> - object_initialize_child(OBJECT(machine), "soc", &s->soc, sizeof(s->soc),
> - TYPE_RISCV_HART_ARRAY, &error_abort, NULL);
> - object_property_set_str(OBJECT(&s->soc), machine->cpu_type, "cpu-type",
> - &error_abort);
> - object_property_set_int(OBJECT(&s->soc), smp_cpus, "num-harts",
> - &error_abort);
> - object_property_set_bool(OBJECT(&s->soc), true, "realized",
> - &error_abort);
> + unsigned int base_hartid, cpus_per_socket;
> +
> + s->num_socs = machine->smp.sockets;
> +
> +    /* Ensure minimum required CPUs per socket */
> + if ((smp_cpus / s->num_socs) < SPIKE_CPUS_PER_SOCKET_MIN)
> + s->num_socs = 1;
Why? It seems like creating single-hart sockets would be a good test case, and
I'm pretty sure it's a configuration that we had in embedded systems.
> + /* Limit the number of sockets */
> + if (SPIKE_SOCKETS_MAX < s->num_socs)
> + s->num_socs = SPIKE_SOCKETS_MAX;
> +
> + /* Initialize socket */
> + for (i = 0; i < s->num_socs; i++) {
> + base_hartid = i * (smp_cpus / s->num_socs);
> + if (i == (s->num_socs - 1))
> + cpus_per_socket = smp_cpus - base_hartid;
> + else
> + cpus_per_socket = smp_cpus / s->num_socs;
> + soc_name = g_strdup_printf("soc%d", i);
> + object_initialize_child(OBJECT(machine), soc_name, &s->soc[i],
> + sizeof(s->soc[i]), TYPE_RISCV_HART_ARRAY, &error_abort, NULL);
> + g_free(soc_name);
> + object_property_set_str(OBJECT(&s->soc[i]),
> + machine->cpu_type, "cpu-type", &error_abort);
> + object_property_set_int(OBJECT(&s->soc[i]),
> + base_hartid, "hartid-base", &error_abort);
> + object_property_set_int(OBJECT(&s->soc[i]),
> + cpus_per_socket, "num-harts", &error_abort);
> + object_property_set_bool(OBJECT(&s->soc[i]),
> + true, "realized", &error_abort);
> +
> + /* Core Local Interruptor (timer and IPI) for each socket */
> + sifive_clint_create(
> + memmap[SPIKE_CLINT].base + i * memmap[SPIKE_CLINT].size,
> + memmap[SPIKE_CLINT].size, base_hartid, cpus_per_socket,
> + SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE, false);
> + }
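For clarity, the loop above partitions smp_cpus evenly across the sockets, with the last socket absorbing any remainder. A standalone sketch of that arithmetic, assuming the same scheme as the diff (the struct and function names are illustrative, not QEMU code):

```c
#include <assert.h>

/* Illustrative sketch of the hart-to-socket split in spike_board_init():
 * each socket gets an equal share of smp_cpus, and the last socket also
 * takes whatever is left over from the integer division. */
typedef struct {
    unsigned int base_hartid;
    unsigned int cpus;
} SocketHarts;

static SocketHarts socket_harts(unsigned int smp_cpus,
                                unsigned int num_socs, unsigned int i)
{
    SocketHarts s;
    s.base_hartid = i * (smp_cpus / num_socs);
    if (i == num_socs - 1) {
        s.cpus = smp_cpus - s.base_hartid;   /* last socket takes the rest */
    } else {
        s.cpus = smp_cpus / num_socs;
    }
    return s;
}
```

For example, 7 CPUs over 2 sockets gives socket 0 harts 0..2 and socket 1 harts 3..6.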
>
> /* register system main memory (actual RAM) */
> memory_region_init_ram(main_mem, NULL, "riscv.spike.ram",
> @@ -249,12 +296,7 @@ static void spike_board_init(MachineState *machine)
> &address_space_memory);
>
> /* initialize HTIF using symbols found in load_kernel */
> - htif_mm_init(system_memory, mask_rom, &s->soc.harts[0].env, serial_hd(0));
> -
> - /* Core Local Interruptor (timer and IPI) */
> - sifive_clint_create(memmap[SPIKE_CLINT].base, memmap[SPIKE_CLINT].size,
> - 0, smp_cpus, SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE,
> - false);
> + htif_mm_init(system_memory, mask_rom, &s->soc[0].harts[0].env, serial_hd(0));
> }
>
> static void spike_v1_10_0_board_init(MachineState *machine)
> @@ -268,6 +310,8 @@ static void spike_v1_10_0_board_init(MachineState *machine)
> int i;
> unsigned int smp_cpus = machine->smp.cpus;
>
> + s->num_socs = 1;
> +
> if (!qtest_enabled()) {
> info_report("The Spike v1.10.0 machine has been deprecated. "
> "Please use the generic spike machine and specify the ISA "
> @@ -275,13 +319,13 @@ static void spike_v1_10_0_board_init(MachineState *machine)
> }
>
> /* Initialize SOC */
> - object_initialize_child(OBJECT(machine), "soc", &s->soc, sizeof(s->soc),
> + object_initialize_child(OBJECT(machine), "soc", &s->soc[0], sizeof(s->soc[0]),
> TYPE_RISCV_HART_ARRAY, &error_abort, NULL);
> - object_property_set_str(OBJECT(&s->soc), SPIKE_V1_10_0_CPU, "cpu-type",
> + object_property_set_str(OBJECT(&s->soc[0]), SPIKE_V1_10_0_CPU, "cpu-type",
> &error_abort);
> - object_property_set_int(OBJECT(&s->soc), smp_cpus, "num-harts",
> + object_property_set_int(OBJECT(&s->soc[0]), smp_cpus, "num-harts",
> &error_abort);
> - object_property_set_bool(OBJECT(&s->soc), true, "realized",
> + object_property_set_bool(OBJECT(&s->soc[0]), true, "realized",
> &error_abort);
>
> /* register system main memory (actual RAM) */
> @@ -339,7 +383,7 @@ static void spike_v1_10_0_board_init(MachineState *machine)
> &address_space_memory);
>
> /* initialize HTIF using symbols found in load_kernel */
> - htif_mm_init(system_memory, mask_rom, &s->soc.harts[0].env, serial_hd(0));
> + htif_mm_init(system_memory, mask_rom, &s->soc[0].harts[0].env, serial_hd(0));
>
> /* Core Local Interruptor (timer and IPI) */
> sifive_clint_create(memmap[SPIKE_CLINT].base, memmap[SPIKE_CLINT].size,
> @@ -358,6 +402,8 @@ static void spike_v1_09_1_board_init(MachineState *machine)
> int i;
> unsigned int smp_cpus = machine->smp.cpus;
>
> + s->num_socs = 1;
> +
> if (!qtest_enabled()) {
> info_report("The Spike v1.09.1 machine has been deprecated. "
> "Please use the generic spike machine and specify the ISA "
> @@ -365,13 +411,13 @@ static void spike_v1_09_1_board_init(MachineState *machine)
> }
>
> /* Initialize SOC */
> - object_initialize_child(OBJECT(machine), "soc", &s->soc, sizeof(s->soc),
> + object_initialize_child(OBJECT(machine), "soc", &s->soc[0], sizeof(s->soc[0]),
> TYPE_RISCV_HART_ARRAY, &error_abort, NULL);
> - object_property_set_str(OBJECT(&s->soc), SPIKE_V1_09_1_CPU, "cpu-type",
> + object_property_set_str(OBJECT(&s->soc[0]), SPIKE_V1_09_1_CPU, "cpu-type",
> &error_abort);
> - object_property_set_int(OBJECT(&s->soc), smp_cpus, "num-harts",
> + object_property_set_int(OBJECT(&s->soc[0]), smp_cpus, "num-harts",
> &error_abort);
> - object_property_set_bool(OBJECT(&s->soc), true, "realized",
> + object_property_set_bool(OBJECT(&s->soc[0]), true, "realized",
> &error_abort);
>
> /* register system main memory (actual RAM) */
> @@ -425,7 +471,7 @@ static void spike_v1_09_1_board_init(MachineState *machine)
> "};\n";
>
> /* build config string with supplied memory size */
> - char *isa = riscv_isa_string(&s->soc.harts[0]);
> + char *isa = riscv_isa_string(&s->soc[0].harts[0]);
> char *config_string = g_strdup_printf(config_string_tmpl,
> (uint64_t)memmap[SPIKE_CLINT].base + SIFIVE_TIME_BASE,
> (uint64_t)memmap[SPIKE_DRAM].base,
> @@ -448,7 +494,7 @@ static void spike_v1_09_1_board_init(MachineState *machine)
> &address_space_memory);
>
> /* initialize HTIF using symbols found in load_kernel */
> - htif_mm_init(system_memory, mask_rom, &s->soc.harts[0].env, serial_hd(0));
> + htif_mm_init(system_memory, mask_rom, &s->soc[0].harts[0].env, serial_hd(0));
>
> /* Core Local Interruptor (timer and IPI) */
> sifive_clint_create(memmap[SPIKE_CLINT].base, memmap[SPIKE_CLINT].size,
> @@ -476,7 +522,7 @@ static void spike_machine_init(MachineClass *mc)
> {
> mc->desc = "RISC-V Spike Board";
> mc->init = spike_board_init;
> - mc->max_cpus = 8;
> + mc->max_cpus = SPIKE_CPUS_MAX;
> mc->is_default = true;
> mc->default_cpu_type = SPIKE_V1_10_0_CPU;
> }
> diff --git a/include/hw/riscv/spike.h b/include/hw/riscv/spike.h
> index dc770421bc..04a9f593b5 100644
> --- a/include/hw/riscv/spike.h
> +++ b/include/hw/riscv/spike.h
> @@ -22,12 +22,18 @@
> #include "hw/riscv/riscv_hart.h"
> #include "hw/sysbus.h"
>
> +#define SPIKE_SOCKETS_MAX 4
> +#define SPIKE_CPUS_PER_SOCKET_MIN 2
> +#define SPIKE_CPUS_PER_SOCKET_MAX 4
> +#define SPIKE_CPUS_MAX (SPIKE_SOCKETS_MAX * SPIKE_CPUS_PER_SOCKET_MAX)
> +
> typedef struct {
> /*< private >*/
> SysBusDevice parent_obj;
>
> /*< public >*/
> - RISCVHartArrayState soc;
> + unsigned int num_socs;
> + RISCVHartArrayState soc[SPIKE_SOCKETS_MAX];
> void *fdt;
> int fdt_size;
> } SpikeState;
* Re: [PATCH 3/4] hw/riscv: Allow creating multiple instances of PLIC
2020-05-16 6:37 ` [PATCH 3/4] hw/riscv: Allow creating multiple instances of PLIC Anup Patel
@ 2020-05-21 20:16 ` Palmer Dabbelt
2020-05-21 21:59 ` Alistair Francis
1 sibling, 0 replies; 19+ messages in thread
From: Palmer Dabbelt @ 2020-05-21 20:16 UTC (permalink / raw)
To: Anup Patel
Cc: Peter Maydell, qemu-riscv, sagark, anup, Anup Patel, qemu-devel,
Atish Patra, Alistair Francis
On Fri, 15 May 2020 23:37:45 PDT (-0700), Anup Patel wrote:
> We extend the PLIC emulation to allow multiple instances of the PLIC
> in a QEMU RISC-V machine. To achieve this, we remove the assumption
> from the PLIC emulation that the first HART id is zero.
>
> Signed-off-by: Anup Patel <anup.patel@wdc.com>
> ---
> hw/riscv/sifive_e.c | 2 +-
> hw/riscv/sifive_plic.c | 24 +++++++++++++-----------
> hw/riscv/sifive_u.c | 2 +-
> hw/riscv/virt.c | 2 +-
> include/hw/riscv/sifive_plic.h | 12 +++++++-----
> 5 files changed, 23 insertions(+), 19 deletions(-)
>
> diff --git a/hw/riscv/sifive_e.c b/hw/riscv/sifive_e.c
> index 1c3b37d0ba..bd122e71ae 100644
> --- a/hw/riscv/sifive_e.c
> +++ b/hw/riscv/sifive_e.c
> @@ -152,7 +152,7 @@ static void riscv_sifive_e_soc_realize(DeviceState *dev, Error **errp)
>
> /* MMIO */
> s->plic = sifive_plic_create(memmap[SIFIVE_E_PLIC].base,
> - (char *)SIFIVE_E_PLIC_HART_CONFIG,
> + (char *)SIFIVE_E_PLIC_HART_CONFIG, 0,
> SIFIVE_E_PLIC_NUM_SOURCES,
> SIFIVE_E_PLIC_NUM_PRIORITIES,
> SIFIVE_E_PLIC_PRIORITY_BASE,
> diff --git a/hw/riscv/sifive_plic.c b/hw/riscv/sifive_plic.c
> index c1e04cbb98..f88bb48053 100644
> --- a/hw/riscv/sifive_plic.c
> +++ b/hw/riscv/sifive_plic.c
> @@ -352,6 +352,7 @@ static const MemoryRegionOps sifive_plic_ops = {
>
> static Property sifive_plic_properties[] = {
> DEFINE_PROP_STRING("hart-config", SiFivePLICState, hart_config),
> + DEFINE_PROP_UINT32("hartid-base", SiFivePLICState, hartid_base, 0),
> DEFINE_PROP_UINT32("num-sources", SiFivePLICState, num_sources, 0),
> DEFINE_PROP_UINT32("num-priorities", SiFivePLICState, num_priorities, 0),
> DEFINE_PROP_UINT32("priority-base", SiFivePLICState, priority_base, 0),
> @@ -400,10 +401,12 @@ static void parse_hart_config(SiFivePLICState *plic)
> }
> hartid++;
>
> - /* store hart/mode combinations */
> plic->num_addrs = addrid;
> + plic->num_harts = hartid;
> +
> + /* store hart/mode combinations */
> plic->addr_config = g_new(PLICAddr, plic->num_addrs);
> - addrid = 0, hartid = 0;
> + addrid = 0, hartid = plic->hartid_base;
> p = plic->hart_config;
> while ((c = *p++)) {
> if (c == ',') {
> @@ -429,8 +432,6 @@ static void sifive_plic_irq_request(void *opaque, int irq, int level)
>
> static void sifive_plic_realize(DeviceState *dev, Error **errp)
> {
> - MachineState *ms = MACHINE(qdev_get_machine());
> - unsigned int smp_cpus = ms->smp.cpus;
> SiFivePLICState *plic = SIFIVE_PLIC(dev);
> int i;
>
> @@ -451,8 +452,8 @@ static void sifive_plic_realize(DeviceState *dev, Error **errp)
> * lost a interrupt in the case a PLIC is attached. The SEIP bit must be
> * hardware controlled when a PLIC is attached.
> */
> - for (i = 0; i < smp_cpus; i++) {
> - RISCVCPU *cpu = RISCV_CPU(qemu_get_cpu(i));
> + for (i = 0; i < plic->num_harts; i++) {
> + RISCVCPU *cpu = RISCV_CPU(qemu_get_cpu(plic->hartid_base + i));
> if (riscv_cpu_claim_interrupts(cpu, MIP_SEIP) < 0) {
> error_report("SEIP already claimed");
> exit(1);
> @@ -488,16 +489,17 @@ type_init(sifive_plic_register_types)
> * Create PLIC device.
> */
> DeviceState *sifive_plic_create(hwaddr addr, char *hart_config,
> - uint32_t num_sources, uint32_t num_priorities,
> - uint32_t priority_base, uint32_t pending_base,
> - uint32_t enable_base, uint32_t enable_stride,
> - uint32_t context_base, uint32_t context_stride,
> - uint32_t aperture_size)
> + uint32_t hartid_base, uint32_t num_sources,
> + uint32_t num_priorities, uint32_t priority_base,
> + uint32_t pending_base, uint32_t enable_base,
> + uint32_t enable_stride, uint32_t context_base,
> + uint32_t context_stride, uint32_t aperture_size)
> {
> DeviceState *dev = qdev_create(NULL, TYPE_SIFIVE_PLIC);
> assert(enable_stride == (enable_stride & -enable_stride));
> assert(context_stride == (context_stride & -context_stride));
> qdev_prop_set_string(dev, "hart-config", hart_config);
> + qdev_prop_set_uint32(dev, "hartid-base", hartid_base);
> qdev_prop_set_uint32(dev, "num-sources", num_sources);
> qdev_prop_set_uint32(dev, "num-priorities", num_priorities);
> qdev_prop_set_uint32(dev, "priority-base", priority_base);
> diff --git a/hw/riscv/sifive_u.c b/hw/riscv/sifive_u.c
> index 22997fbf13..69dbd7980b 100644
> --- a/hw/riscv/sifive_u.c
> +++ b/hw/riscv/sifive_u.c
> @@ -585,7 +585,7 @@ static void riscv_sifive_u_soc_realize(DeviceState *dev, Error **errp)
>
> /* MMIO */
> s->plic = sifive_plic_create(memmap[SIFIVE_U_PLIC].base,
> - plic_hart_config,
> + plic_hart_config, 0,
> SIFIVE_U_PLIC_NUM_SOURCES,
> SIFIVE_U_PLIC_NUM_PRIORITIES,
> SIFIVE_U_PLIC_PRIORITY_BASE,
> diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
> index dcb8a83b35..f40efcb193 100644
> --- a/hw/riscv/virt.c
> +++ b/hw/riscv/virt.c
> @@ -585,7 +585,7 @@ static void riscv_virt_board_init(MachineState *machine)
>
> /* MMIO */
> s->plic = sifive_plic_create(memmap[VIRT_PLIC].base,
> - plic_hart_config,
> + plic_hart_config, 0,
> VIRT_PLIC_NUM_SOURCES,
> VIRT_PLIC_NUM_PRIORITIES,
> VIRT_PLIC_PRIORITY_BASE,
> diff --git a/include/hw/riscv/sifive_plic.h b/include/hw/riscv/sifive_plic.h
> index 4421e81249..ace76d0f1b 100644
> --- a/include/hw/riscv/sifive_plic.h
> +++ b/include/hw/riscv/sifive_plic.h
> @@ -48,6 +48,7 @@ typedef struct SiFivePLICState {
> /*< public >*/
> MemoryRegion mmio;
> uint32_t num_addrs;
> + uint32_t num_harts;
> uint32_t bitfield_words;
> PLICAddr *addr_config;
> uint32_t *source_priority;
> @@ -58,6 +59,7 @@ typedef struct SiFivePLICState {
>
> /* config */
> char *hart_config;
> + uint32_t hartid_base;
> uint32_t num_sources;
> uint32_t num_priorities;
> uint32_t priority_base;
> @@ -70,10 +72,10 @@ typedef struct SiFivePLICState {
> } SiFivePLICState;
>
> DeviceState *sifive_plic_create(hwaddr addr, char *hart_config,
> - uint32_t num_sources, uint32_t num_priorities,
> - uint32_t priority_base, uint32_t pending_base,
> - uint32_t enable_base, uint32_t enable_stride,
> - uint32_t context_base, uint32_t context_stride,
> - uint32_t aperture_size);
> + uint32_t hartid_base, uint32_t num_sources,
> + uint32_t num_priorities, uint32_t priority_base,
> + uint32_t pending_base, uint32_t enable_base,
> + uint32_t enable_stride, uint32_t context_base,
> + uint32_t context_stride, uint32_t aperture_size);
>
> #endif
Reviewed-by: Palmer Dabbelt <palmerdabbelt@google.com>
* Re: [PATCH 4/4] hw/riscv: virt: Allow creating multiple sockets
2020-05-16 6:37 ` [PATCH 4/4] hw/riscv: virt: Allow creating multiple sockets Anup Patel
@ 2020-05-21 20:16 ` Palmer Dabbelt
0 siblings, 0 replies; 19+ messages in thread
From: Palmer Dabbelt @ 2020-05-21 20:16 UTC (permalink / raw)
To: Anup Patel
Cc: Peter Maydell, qemu-riscv, sagark, anup, Anup Patel, qemu-devel,
Atish Patra, Alistair Francis
On Fri, 15 May 2020 23:37:46 PDT (-0700), Anup Patel wrote:
> We extend the RISC-V virt machine to allow creating a multi-socket machine.
> Each RISC-V virt machine socket is a set of HARTs, a CLINT instance,
> and a PLIC instance. Other peripherals are shared between all RISC-V
> virt machine sockets. We also update RISC-V virt machine device tree
> to treat each socket as a NUMA node.
>
> The number of sockets in the RISC-V virt machine can be specified using
> the "sockets=" sub-option of the QEMU "-smp" command-line option. By
> default, a single-socket RISC-V virt machine will be created.
>
> Currently, we only allow creating up to a maximum of 4 sockets, with a
> minimum of 2 HARTs per socket. In the future, these limits can be changed.
>
> Signed-off-by: Anup Patel <anup.patel@wdc.com>
> ---
> hw/riscv/virt.c | 495 ++++++++++++++++++++++------------------
> include/hw/riscv/virt.h | 12 +-
> 2 files changed, 283 insertions(+), 224 deletions(-)
>
> diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
> index f40efcb193..205224c01c 100644
> --- a/hw/riscv/virt.c
> +++ b/hw/riscv/virt.c
> @@ -60,7 +60,7 @@ static const struct MemmapEntry {
> [VIRT_TEST] = { 0x100000, 0x1000 },
> [VIRT_RTC] = { 0x101000, 0x1000 },
> [VIRT_CLINT] = { 0x2000000, 0x10000 },
> - [VIRT_PLIC] = { 0xc000000, 0x4000000 },
> + [VIRT_PLIC] = { 0xc000000, VIRT_PLIC_SIZE(VIRT_CPUS_MAX*2) },
> [VIRT_UART0] = { 0x10000000, 0x100 },
> [VIRT_VIRTIO] = { 0x10001000, 0x1000 },
> [VIRT_FLASH] = { 0x20000000, 0x4000000 },
> @@ -183,10 +183,15 @@ static void create_fdt(RISCVVirtState *s, const struct MemmapEntry *memmap,
> uint64_t mem_size, const char *cmdline)
> {
> void *fdt;
> - int cpu, i;
> - uint32_t *cells;
> - char *nodename;
> - uint32_t plic_phandle, test_phandle, phandle = 1;
> + int i, cpu, socket;
> + uint32_t *clint_cells, *plic_cells;
> + unsigned long clint_addr, plic_addr;
> + uint32_t plic_phandle[VIRT_SOCKETS_MAX];
> + uint32_t cpu_phandle, intc_phandle, test_phandle;
> + uint32_t phandle = 1, plic_mmio_phandle = 1;
> + uint32_t plic_pcie_phandle = 1, plic_virtio_phandle = 1;
> + char *name, *cpu_name, *core_name, *intc_name;
> + char *clint_name, *plic_name, *clust_name;
> hwaddr flashsize = virt_memmap[VIRT_FLASH].size / 2;
> hwaddr flashbase = virt_memmap[VIRT_FLASH].base;
>
> @@ -207,231 +212,231 @@ static void create_fdt(RISCVVirtState *s, const struct MemmapEntry *memmap,
> qemu_fdt_setprop_cell(fdt, "/soc", "#size-cells", 0x2);
> qemu_fdt_setprop_cell(fdt, "/soc", "#address-cells", 0x2);
>
> - nodename = g_strdup_printf("/memory@%lx",
> + name = g_strdup_printf("/memory@%lx",
> (long)memmap[VIRT_DRAM].base);
> - qemu_fdt_add_subnode(fdt, nodename);
> - qemu_fdt_setprop_cells(fdt, nodename, "reg",
> + qemu_fdt_add_subnode(fdt, name);
> + qemu_fdt_setprop_cells(fdt, name, "reg",
> memmap[VIRT_DRAM].base >> 32, memmap[VIRT_DRAM].base,
> mem_size >> 32, mem_size);
> - qemu_fdt_setprop_string(fdt, nodename, "device_type", "memory");
> - g_free(nodename);
> + qemu_fdt_setprop_string(fdt, name, "device_type", "memory");
> + g_free(name);
>
> qemu_fdt_add_subnode(fdt, "/cpus");
> qemu_fdt_setprop_cell(fdt, "/cpus", "timebase-frequency",
> SIFIVE_CLINT_TIMEBASE_FREQ);
> qemu_fdt_setprop_cell(fdt, "/cpus", "#size-cells", 0x0);
> qemu_fdt_setprop_cell(fdt, "/cpus", "#address-cells", 0x1);
> + qemu_fdt_add_subnode(fdt, "/cpus/cpu-map");
> +
> + for (socket = (s->num_socs - 1); socket >= 0; socket--) {
> + clust_name = g_strdup_printf("/cpus/cpu-map/cluster0%d", socket);
> + qemu_fdt_add_subnode(fdt, clust_name);
> +
> + plic_cells = g_new0(uint32_t, s->soc[socket].num_harts * 4);
> + clint_cells = g_new0(uint32_t, s->soc[socket].num_harts * 4);
> +
> + for (cpu = s->soc[socket].num_harts - 1; cpu >= 0; cpu--) {
> + cpu_phandle = phandle++;
>
> - for (cpu = s->soc.num_harts - 1; cpu >= 0; cpu--) {
> - int cpu_phandle = phandle++;
> - int intc_phandle;
> - nodename = g_strdup_printf("/cpus/cpu@%d", cpu);
> - char *intc = g_strdup_printf("/cpus/cpu@%d/interrupt-controller", cpu);
> - char *isa = riscv_isa_string(&s->soc.harts[cpu]);
> - qemu_fdt_add_subnode(fdt, nodename);
> + cpu_name = g_strdup_printf("/cpus/cpu@%d",
> + s->soc[socket].hartid_base + cpu);
> + qemu_fdt_add_subnode(fdt, cpu_name);
> #if defined(TARGET_RISCV32)
> - qemu_fdt_setprop_string(fdt, nodename, "mmu-type", "riscv,sv32");
> + qemu_fdt_setprop_string(fdt, cpu_name, "mmu-type", "riscv,sv32");
> #else
> - qemu_fdt_setprop_string(fdt, nodename, "mmu-type", "riscv,sv48");
> + qemu_fdt_setprop_string(fdt, cpu_name, "mmu-type", "riscv,sv48");
> #endif
> - qemu_fdt_setprop_string(fdt, nodename, "riscv,isa", isa);
> - qemu_fdt_setprop_string(fdt, nodename, "compatible", "riscv");
> - qemu_fdt_setprop_string(fdt, nodename, "status", "okay");
> - qemu_fdt_setprop_cell(fdt, nodename, "reg", cpu);
> - qemu_fdt_setprop_string(fdt, nodename, "device_type", "cpu");
> - qemu_fdt_setprop_cell(fdt, nodename, "phandle", cpu_phandle);
> - intc_phandle = phandle++;
> - qemu_fdt_add_subnode(fdt, intc);
> - qemu_fdt_setprop_cell(fdt, intc, "phandle", intc_phandle);
> - qemu_fdt_setprop_string(fdt, intc, "compatible", "riscv,cpu-intc");
> - qemu_fdt_setprop(fdt, intc, "interrupt-controller", NULL, 0);
> - qemu_fdt_setprop_cell(fdt, intc, "#interrupt-cells", 1);
> - g_free(isa);
> - g_free(intc);
> - g_free(nodename);
> - }
> + name = riscv_isa_string(&s->soc[socket].harts[cpu]);
> + qemu_fdt_setprop_string(fdt, cpu_name, "riscv,isa", name);
> + g_free(name);
> + qemu_fdt_setprop_string(fdt, cpu_name, "compatible", "riscv");
> + qemu_fdt_setprop_string(fdt, cpu_name, "status", "okay");
> + qemu_fdt_setprop_cell(fdt, cpu_name, "reg",
> + s->soc[socket].hartid_base + cpu);
> + qemu_fdt_setprop_string(fdt, cpu_name, "device_type", "cpu");
> + qemu_fdt_setprop_cell(fdt, cpu_name, "phandle", cpu_phandle);
> +
> + intc_name = g_strdup_printf("%s/interrupt-controller", cpu_name);
> + qemu_fdt_add_subnode(fdt, intc_name);
> + intc_phandle = phandle++;
> + qemu_fdt_setprop_cell(fdt, intc_name, "phandle", intc_phandle);
> + qemu_fdt_setprop_string(fdt, intc_name, "compatible",
> + "riscv,cpu-intc");
> + qemu_fdt_setprop(fdt, intc_name, "interrupt-controller", NULL, 0);
> + qemu_fdt_setprop_cell(fdt, intc_name, "#interrupt-cells", 1);
> +
> + clint_cells[cpu * 4 + 0] = cpu_to_be32(intc_phandle);
> + clint_cells[cpu * 4 + 1] = cpu_to_be32(IRQ_M_SOFT);
> + clint_cells[cpu * 4 + 2] = cpu_to_be32(intc_phandle);
> + clint_cells[cpu * 4 + 3] = cpu_to_be32(IRQ_M_TIMER);
> +
> + plic_cells[cpu * 4 + 0] = cpu_to_be32(intc_phandle);
> + plic_cells[cpu * 4 + 1] = cpu_to_be32(IRQ_M_EXT);
> + plic_cells[cpu * 4 + 2] = cpu_to_be32(intc_phandle);
> + plic_cells[cpu * 4 + 3] = cpu_to_be32(IRQ_S_EXT);
> +
> + core_name = g_strdup_printf("%s/core%d", clust_name, cpu);
> + qemu_fdt_add_subnode(fdt, core_name);
> + qemu_fdt_setprop_cell(fdt, core_name, "cpu", cpu_phandle);
> +
> + g_free(core_name);
> + g_free(intc_name);
> + g_free(cpu_name);
> + }
>
> - /* Add cpu-topology node */
> - qemu_fdt_add_subnode(fdt, "/cpus/cpu-map");
> - qemu_fdt_add_subnode(fdt, "/cpus/cpu-map/cluster0");
> - for (cpu = s->soc.num_harts - 1; cpu >= 0; cpu--) {
> - char *core_nodename = g_strdup_printf("/cpus/cpu-map/cluster0/core%d",
> - cpu);
> - char *cpu_nodename = g_strdup_printf("/cpus/cpu@%d", cpu);
> - uint32_t intc_phandle = qemu_fdt_get_phandle(fdt, cpu_nodename);
> - qemu_fdt_add_subnode(fdt, core_nodename);
> - qemu_fdt_setprop_cell(fdt, core_nodename, "cpu", intc_phandle);
> - g_free(core_nodename);
> - g_free(cpu_nodename);
> + clint_addr = memmap[VIRT_CLINT].base +
> + (memmap[VIRT_CLINT].size * socket);
> + clint_name = g_strdup_printf("/soc/clint@%lx", clint_addr);
> + qemu_fdt_add_subnode(fdt, clint_name);
> + qemu_fdt_setprop_string(fdt, clint_name, "compatible", "riscv,clint0");
> + qemu_fdt_setprop_cells(fdt, clint_name, "reg",
> + 0x0, clint_addr, 0x0, memmap[VIRT_CLINT].size);
> + qemu_fdt_setprop(fdt, clint_name, "interrupts-extended",
> + clint_cells, s->soc[socket].num_harts * sizeof(uint32_t) * 4);
> + g_free(clint_name);
> +
> + plic_phandle[socket] = phandle++;
> + plic_addr = memmap[VIRT_PLIC].base + (memmap[VIRT_PLIC].size * socket);
> + plic_name = g_strdup_printf("/soc/plic@%lx", plic_addr);
> + qemu_fdt_add_subnode(fdt, plic_name);
> + qemu_fdt_setprop_cell(fdt, plic_name,
> + "#address-cells", FDT_PLIC_ADDR_CELLS);
> + qemu_fdt_setprop_cell(fdt, plic_name,
> + "#interrupt-cells", FDT_PLIC_INT_CELLS);
> + qemu_fdt_setprop_string(fdt, plic_name, "compatible", "riscv,plic0");
> + qemu_fdt_setprop(fdt, plic_name, "interrupt-controller", NULL, 0);
> + qemu_fdt_setprop(fdt, plic_name, "interrupts-extended",
> + plic_cells, s->soc[socket].num_harts * sizeof(uint32_t) * 4);
> + qemu_fdt_setprop_cells(fdt, plic_name, "reg",
> + 0x0, plic_addr, 0x0, memmap[VIRT_PLIC].size);
> + qemu_fdt_setprop_cell(fdt, plic_name, "riscv,ndev", VIRTIO_NDEV);
> + qemu_fdt_setprop_cell(fdt, plic_name, "phandle", plic_phandle[socket]);
> + g_free(plic_name);
> +
> + g_free(clint_cells);
> + g_free(plic_cells);
> + g_free(clust_name);
> }
>
> - cells = g_new0(uint32_t, s->soc.num_harts * 4);
> - for (cpu = 0; cpu < s->soc.num_harts; cpu++) {
> - nodename =
> - g_strdup_printf("/cpus/cpu@%d/interrupt-controller", cpu);
> - uint32_t intc_phandle = qemu_fdt_get_phandle(fdt, nodename);
> - cells[cpu * 4 + 0] = cpu_to_be32(intc_phandle);
> - cells[cpu * 4 + 1] = cpu_to_be32(IRQ_M_SOFT);
> - cells[cpu * 4 + 2] = cpu_to_be32(intc_phandle);
> - cells[cpu * 4 + 3] = cpu_to_be32(IRQ_M_TIMER);
> - g_free(nodename);
> - }
> - nodename = g_strdup_printf("/soc/clint@%lx",
> - (long)memmap[VIRT_CLINT].base);
> - qemu_fdt_add_subnode(fdt, nodename);
> - qemu_fdt_setprop_string(fdt, nodename, "compatible", "riscv,clint0");
> - qemu_fdt_setprop_cells(fdt, nodename, "reg",
> - 0x0, memmap[VIRT_CLINT].base,
> - 0x0, memmap[VIRT_CLINT].size);
> - qemu_fdt_setprop(fdt, nodename, "interrupts-extended",
> - cells, s->soc.num_harts * sizeof(uint32_t) * 4);
> - g_free(cells);
> - g_free(nodename);
> -
> - plic_phandle = phandle++;
> - cells = g_new0(uint32_t, s->soc.num_harts * 4);
> - for (cpu = 0; cpu < s->soc.num_harts; cpu++) {
> - nodename =
> - g_strdup_printf("/cpus/cpu@%d/interrupt-controller", cpu);
> - uint32_t intc_phandle = qemu_fdt_get_phandle(fdt, nodename);
> - cells[cpu * 4 + 0] = cpu_to_be32(intc_phandle);
> - cells[cpu * 4 + 1] = cpu_to_be32(IRQ_M_EXT);
> - cells[cpu * 4 + 2] = cpu_to_be32(intc_phandle);
> - cells[cpu * 4 + 3] = cpu_to_be32(IRQ_S_EXT);
> - g_free(nodename);
> + for (socket = 0; socket < s->num_socs; socket++) {
> + if (socket == 0) {
> + plic_mmio_phandle = plic_phandle[socket];
> + plic_virtio_phandle = plic_phandle[socket];
> + plic_pcie_phandle = plic_phandle[socket];
> + }
> + if (socket == 1) {
> + plic_virtio_phandle = plic_phandle[socket];
> + plic_pcie_phandle = plic_phandle[socket];
> + }
> + if (socket == 2) {
> + plic_pcie_phandle = plic_phandle[socket];
> + }
> }
> - nodename = g_strdup_printf("/soc/interrupt-controller@%lx",
> - (long)memmap[VIRT_PLIC].base);
> - qemu_fdt_add_subnode(fdt, nodename);
> - qemu_fdt_setprop_cell(fdt, nodename, "#address-cells",
> - FDT_PLIC_ADDR_CELLS);
> - qemu_fdt_setprop_cell(fdt, nodename, "#interrupt-cells",
> - FDT_PLIC_INT_CELLS);
> - qemu_fdt_setprop_string(fdt, nodename, "compatible", "riscv,plic0");
> - qemu_fdt_setprop(fdt, nodename, "interrupt-controller", NULL, 0);
> - qemu_fdt_setprop(fdt, nodename, "interrupts-extended",
> - cells, s->soc.num_harts * sizeof(uint32_t) * 4);
> - qemu_fdt_setprop_cells(fdt, nodename, "reg",
> - 0x0, memmap[VIRT_PLIC].base,
> - 0x0, memmap[VIRT_PLIC].size);
> - qemu_fdt_setprop_cell(fdt, nodename, "riscv,ndev", VIRTIO_NDEV);
> - qemu_fdt_setprop_cell(fdt, nodename, "phandle", plic_phandle);
> - plic_phandle = qemu_fdt_get_phandle(fdt, nodename);
> - g_free(cells);
> - g_free(nodename);
>
> for (i = 0; i < VIRTIO_COUNT; i++) {
> - nodename = g_strdup_printf("/virtio_mmio@%lx",
> + name = g_strdup_printf("/soc/virtio_mmio@%lx",
> (long)(memmap[VIRT_VIRTIO].base + i * memmap[VIRT_VIRTIO].size));
> - qemu_fdt_add_subnode(fdt, nodename);
> - qemu_fdt_setprop_string(fdt, nodename, "compatible", "virtio,mmio");
> - qemu_fdt_setprop_cells(fdt, nodename, "reg",
> + qemu_fdt_add_subnode(fdt, name);
> + qemu_fdt_setprop_string(fdt, name, "compatible", "virtio,mmio");
> + qemu_fdt_setprop_cells(fdt, name, "reg",
> 0x0, memmap[VIRT_VIRTIO].base + i * memmap[VIRT_VIRTIO].size,
> 0x0, memmap[VIRT_VIRTIO].size);
> - qemu_fdt_setprop_cell(fdt, nodename, "interrupt-parent", plic_phandle);
> - qemu_fdt_setprop_cell(fdt, nodename, "interrupts", VIRTIO_IRQ + i);
> - g_free(nodename);
> + qemu_fdt_setprop_cell(fdt, name, "interrupt-parent", plic_virtio_phandle);
> + qemu_fdt_setprop_cell(fdt, name, "interrupts", VIRTIO_IRQ + i);
> + g_free(name);
> }
>
> - nodename = g_strdup_printf("/soc/pci@%lx",
> + name = g_strdup_printf("/soc/pci@%lx",
> (long) memmap[VIRT_PCIE_ECAM].base);
> - qemu_fdt_add_subnode(fdt, nodename);
> - qemu_fdt_setprop_cell(fdt, nodename, "#address-cells",
> - FDT_PCI_ADDR_CELLS);
> - qemu_fdt_setprop_cell(fdt, nodename, "#interrupt-cells",
> - FDT_PCI_INT_CELLS);
> - qemu_fdt_setprop_cell(fdt, nodename, "#size-cells", 0x2);
> - qemu_fdt_setprop_string(fdt, nodename, "compatible",
> - "pci-host-ecam-generic");
> - qemu_fdt_setprop_string(fdt, nodename, "device_type", "pci");
> - qemu_fdt_setprop_cell(fdt, nodename, "linux,pci-domain", 0);
> - qemu_fdt_setprop_cells(fdt, nodename, "bus-range", 0,
> - memmap[VIRT_PCIE_ECAM].size /
> - PCIE_MMCFG_SIZE_MIN - 1);
> - qemu_fdt_setprop(fdt, nodename, "dma-coherent", NULL, 0);
> - qemu_fdt_setprop_cells(fdt, nodename, "reg", 0, memmap[VIRT_PCIE_ECAM].base,
> - 0, memmap[VIRT_PCIE_ECAM].size);
> - qemu_fdt_setprop_sized_cells(fdt, nodename, "ranges",
> + qemu_fdt_add_subnode(fdt, name);
> + qemu_fdt_setprop_cell(fdt, name, "#address-cells", FDT_PCI_ADDR_CELLS);
> + qemu_fdt_setprop_cell(fdt, name, "#interrupt-cells", FDT_PCI_INT_CELLS);
> + qemu_fdt_setprop_cell(fdt, name, "#size-cells", 0x2);
> + qemu_fdt_setprop_string(fdt, name, "compatible", "pci-host-ecam-generic");
> + qemu_fdt_setprop_string(fdt, name, "device_type", "pci");
> + qemu_fdt_setprop_cell(fdt, name, "linux,pci-domain", 0);
> + qemu_fdt_setprop_cells(fdt, name, "bus-range", 0,
> + memmap[VIRT_PCIE_ECAM].size / PCIE_MMCFG_SIZE_MIN - 1);
> + qemu_fdt_setprop(fdt, name, "dma-coherent", NULL, 0);
> + qemu_fdt_setprop_cells(fdt, name, "reg", 0,
> + memmap[VIRT_PCIE_ECAM].base, 0, memmap[VIRT_PCIE_ECAM].size);
> + qemu_fdt_setprop_sized_cells(fdt, name, "ranges",
> 1, FDT_PCI_RANGE_IOPORT, 2, 0,
> 2, memmap[VIRT_PCIE_PIO].base, 2, memmap[VIRT_PCIE_PIO].size,
> 1, FDT_PCI_RANGE_MMIO,
> 2, memmap[VIRT_PCIE_MMIO].base,
> 2, memmap[VIRT_PCIE_MMIO].base, 2, memmap[VIRT_PCIE_MMIO].size);
> - create_pcie_irq_map(fdt, nodename, plic_phandle);
> - g_free(nodename);
> + create_pcie_irq_map(fdt, name, plic_pcie_phandle);
> + g_free(name);
>
> test_phandle = phandle++;
> - nodename = g_strdup_printf("/test@%lx",
> + name = g_strdup_printf("/soc/test@%lx",
> (long)memmap[VIRT_TEST].base);
> - qemu_fdt_add_subnode(fdt, nodename);
> + qemu_fdt_add_subnode(fdt, name);
> {
> const char compat[] = "sifive,test1\0sifive,test0\0syscon";
> - qemu_fdt_setprop(fdt, nodename, "compatible", compat, sizeof(compat));
> + qemu_fdt_setprop(fdt, name, "compatible", compat, sizeof(compat));
> }
> - qemu_fdt_setprop_cells(fdt, nodename, "reg",
> + qemu_fdt_setprop_cells(fdt, name, "reg",
> 0x0, memmap[VIRT_TEST].base,
> 0x0, memmap[VIRT_TEST].size);
> - qemu_fdt_setprop_cell(fdt, nodename, "phandle", test_phandle);
> - test_phandle = qemu_fdt_get_phandle(fdt, nodename);
> - g_free(nodename);
> -
> - nodename = g_strdup_printf("/reboot");
> - qemu_fdt_add_subnode(fdt, nodename);
> - qemu_fdt_setprop_string(fdt, nodename, "compatible", "syscon-reboot");
> - qemu_fdt_setprop_cell(fdt, nodename, "regmap", test_phandle);
> - qemu_fdt_setprop_cell(fdt, nodename, "offset", 0x0);
> - qemu_fdt_setprop_cell(fdt, nodename, "value", FINISHER_RESET);
> - g_free(nodename);
> -
> - nodename = g_strdup_printf("/poweroff");
> - qemu_fdt_add_subnode(fdt, nodename);
> - qemu_fdt_setprop_string(fdt, nodename, "compatible", "syscon-poweroff");
> - qemu_fdt_setprop_cell(fdt, nodename, "regmap", test_phandle);
> - qemu_fdt_setprop_cell(fdt, nodename, "offset", 0x0);
> - qemu_fdt_setprop_cell(fdt, nodename, "value", FINISHER_PASS);
> - g_free(nodename);
> -
> - nodename = g_strdup_printf("/uart@%lx",
> - (long)memmap[VIRT_UART0].base);
> - qemu_fdt_add_subnode(fdt, nodename);
> - qemu_fdt_setprop_string(fdt, nodename, "compatible", "ns16550a");
> - qemu_fdt_setprop_cells(fdt, nodename, "reg",
> + qemu_fdt_setprop_cell(fdt, name, "phandle", test_phandle);
> + test_phandle = qemu_fdt_get_phandle(fdt, name);
> + g_free(name);
> +
> + name = g_strdup_printf("/soc/reboot");
> + qemu_fdt_add_subnode(fdt, name);
> + qemu_fdt_setprop_string(fdt, name, "compatible", "syscon-reboot");
> + qemu_fdt_setprop_cell(fdt, name, "regmap", test_phandle);
> + qemu_fdt_setprop_cell(fdt, name, "offset", 0x0);
> + qemu_fdt_setprop_cell(fdt, name, "value", FINISHER_RESET);
> + g_free(name);
> +
> + name = g_strdup_printf("/soc/poweroff");
> + qemu_fdt_add_subnode(fdt, name);
> + qemu_fdt_setprop_string(fdt, name, "compatible", "syscon-poweroff");
> + qemu_fdt_setprop_cell(fdt, name, "regmap", test_phandle);
> + qemu_fdt_setprop_cell(fdt, name, "offset", 0x0);
> + qemu_fdt_setprop_cell(fdt, name, "value", FINISHER_PASS);
> + g_free(name);
> +
> + name = g_strdup_printf("/soc/uart@%lx", (long)memmap[VIRT_UART0].base);
> + qemu_fdt_add_subnode(fdt, name);
> + qemu_fdt_setprop_string(fdt, name, "compatible", "ns16550a");
> + qemu_fdt_setprop_cells(fdt, name, "reg",
> 0x0, memmap[VIRT_UART0].base,
> 0x0, memmap[VIRT_UART0].size);
> - qemu_fdt_setprop_cell(fdt, nodename, "clock-frequency", 3686400);
> - qemu_fdt_setprop_cell(fdt, nodename, "interrupt-parent", plic_phandle);
> - qemu_fdt_setprop_cell(fdt, nodename, "interrupts", UART0_IRQ);
> + qemu_fdt_setprop_cell(fdt, name, "clock-frequency", 3686400);
> + qemu_fdt_setprop_cell(fdt, name, "interrupt-parent", plic_mmio_phandle);
> + qemu_fdt_setprop_cell(fdt, name, "interrupts", UART0_IRQ);
>
> qemu_fdt_add_subnode(fdt, "/chosen");
> - qemu_fdt_setprop_string(fdt, "/chosen", "stdout-path", nodename);
> + qemu_fdt_setprop_string(fdt, "/chosen", "stdout-path", name);
> if (cmdline) {
> qemu_fdt_setprop_string(fdt, "/chosen", "bootargs", cmdline);
> }
> - g_free(nodename);
> -
> - nodename = g_strdup_printf("/rtc@%lx",
> - (long)memmap[VIRT_RTC].base);
> - qemu_fdt_add_subnode(fdt, nodename);
> - qemu_fdt_setprop_string(fdt, nodename, "compatible",
> - "google,goldfish-rtc");
> - qemu_fdt_setprop_cells(fdt, nodename, "reg",
> + g_free(name);
> +
> + name = g_strdup_printf("/soc/rtc@%lx", (long)memmap[VIRT_RTC].base);
> + qemu_fdt_add_subnode(fdt, name);
> + qemu_fdt_setprop_string(fdt, name, "compatible", "google,goldfish-rtc");
> + qemu_fdt_setprop_cells(fdt, name, "reg",
> 0x0, memmap[VIRT_RTC].base,
> 0x0, memmap[VIRT_RTC].size);
> - qemu_fdt_setprop_cell(fdt, nodename, "interrupt-parent", plic_phandle);
> - qemu_fdt_setprop_cell(fdt, nodename, "interrupts", RTC_IRQ);
> - g_free(nodename);
> -
> - nodename = g_strdup_printf("/flash@%" PRIx64, flashbase);
> - qemu_fdt_add_subnode(s->fdt, nodename);
> - qemu_fdt_setprop_string(s->fdt, nodename, "compatible", "cfi-flash");
> - qemu_fdt_setprop_sized_cells(s->fdt, nodename, "reg",
> + qemu_fdt_setprop_cell(fdt, name, "interrupt-parent", plic_mmio_phandle);
> + qemu_fdt_setprop_cell(fdt, name, "interrupts", RTC_IRQ);
> + g_free(name);
> +
> + name = g_strdup_printf("/soc/flash@%" PRIx64, flashbase);
> + qemu_fdt_add_subnode(s->fdt, name);
> + qemu_fdt_setprop_string(s->fdt, name, "compatible", "cfi-flash");
> + qemu_fdt_setprop_sized_cells(s->fdt, name, "reg",
> 2, flashbase, 2, flashsize,
> 2, flashbase + flashsize, 2, flashsize);
> - qemu_fdt_setprop_cell(s->fdt, nodename, "bank-width", 4);
> - g_free(nodename);
> + qemu_fdt_setprop_cell(s->fdt, name, "bank-width", 4);
> + g_free(name);
> }
>
> -
> static inline DeviceState *gpex_pcie_init(MemoryRegion *sys_mem,
> hwaddr ecam_base, hwaddr ecam_size,
> hwaddr mmio_base, hwaddr mmio_size,
> @@ -479,21 +484,93 @@ static void riscv_virt_board_init(MachineState *machine)
> MemoryRegion *system_memory = get_system_memory();
> MemoryRegion *main_mem = g_new(MemoryRegion, 1);
> MemoryRegion *mask_rom = g_new(MemoryRegion, 1);
> - char *plic_hart_config;
> + char *plic_hart_config, *soc_name;
> size_t plic_hart_config_len;
> target_ulong start_addr = memmap[VIRT_DRAM].base;
> - int i;
> + int i, j;
> unsigned int smp_cpus = machine->smp.cpus;
> + unsigned int base_hartid, cpus_per_socket;
> + DeviceState *mmio_plic, *virtio_plic, *pcie_plic;
> +
> + s->num_socs = machine->smp.sockets;
> +
> + /* Ensure minumum required CPUs per socket */
> + if ((smp_cpus / s->num_socs) < VIRT_CPUS_PER_SOCKET_MIN)
> + s->num_socs = 1;
> +
> + /* Limit the number of sockets */
> + if (VIRT_SOCKETS_MAX < s->num_socs)
> + s->num_socs = VIRT_SOCKETS_MAX;
>
> /* Initialize SOC */
> - object_initialize_child(OBJECT(machine), "soc", &s->soc, sizeof(s->soc),
> - TYPE_RISCV_HART_ARRAY, &error_abort, NULL);
> - object_property_set_str(OBJECT(&s->soc), machine->cpu_type, "cpu-type",
> - &error_abort);
> - object_property_set_int(OBJECT(&s->soc), smp_cpus, "num-harts",
> - &error_abort);
> - object_property_set_bool(OBJECT(&s->soc), true, "realized",
> - &error_abort);
> + mmio_plic = virtio_plic = pcie_plic = NULL;
> + for (i = 0; i < s->num_socs; i++) {
> + base_hartid = i * (smp_cpus / s->num_socs);
> + if (i == (s->num_socs - 1))
> + cpus_per_socket = smp_cpus - base_hartid;
> + else
> + cpus_per_socket = smp_cpus / s->num_socs;
> + soc_name = g_strdup_printf("soc%d", i);
> + object_initialize_child(OBJECT(machine), soc_name, &s->soc[i],
> + sizeof(s->soc[i]), TYPE_RISCV_HART_ARRAY, &error_abort, NULL);
> + g_free(soc_name);
> + object_property_set_str(OBJECT(&s->soc[i]),
> + machine->cpu_type, "cpu-type", &error_abort);
> + object_property_set_int(OBJECT(&s->soc[i]),
> + base_hartid, "hartid-base", &error_abort);
> + object_property_set_int(OBJECT(&s->soc[i]),
> + cpus_per_socket, "num-harts", &error_abort);
> + object_property_set_bool(OBJECT(&s->soc[i]),
> + true, "realized", &error_abort);
> +
> + /* Per-socket CLINT */
> + sifive_clint_create(
> + memmap[VIRT_CLINT].base + i * memmap[VIRT_CLINT].size,
> + memmap[VIRT_CLINT].size, base_hartid, cpus_per_socket,
> + SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE, true);
> +
> + /* Per-socket PLIC hart topology configuration string */
> + plic_hart_config_len =
> + (strlen(VIRT_PLIC_HART_CONFIG) + 1) * cpus_per_socket;
> + plic_hart_config = g_malloc0(plic_hart_config_len);
> + for (j = 0; j < cpus_per_socket; j++) {
> + if (j != 0) {
> + strncat(plic_hart_config, ",", plic_hart_config_len);
> + }
> + strncat(plic_hart_config, VIRT_PLIC_HART_CONFIG,
> + plic_hart_config_len);
> + plic_hart_config_len -= (strlen(VIRT_PLIC_HART_CONFIG) + 1);
> + }
> +
> + /* Per-socket PLIC */
> + s->plic[i] = sifive_plic_create(
> + memmap[VIRT_PLIC].base + i * memmap[VIRT_PLIC].size,
> + plic_hart_config, base_hartid,
> + VIRT_PLIC_NUM_SOURCES,
> + VIRT_PLIC_NUM_PRIORITIES,
> + VIRT_PLIC_PRIORITY_BASE,
> + VIRT_PLIC_PENDING_BASE,
> + VIRT_PLIC_ENABLE_BASE,
> + VIRT_PLIC_ENABLE_STRIDE,
> + VIRT_PLIC_CONTEXT_BASE,
> + VIRT_PLIC_CONTEXT_STRIDE,
> + memmap[VIRT_PLIC].size);
> + g_free(plic_hart_config);
> +
> + /* Try to use different PLIC instance based device type */
> + if (i == 0) {
> + mmio_plic = s->plic[i];
> + virtio_plic = s->plic[i];
> + pcie_plic = s->plic[i];
> + }
> + if (i == 1) {
> + virtio_plic = s->plic[i];
> + pcie_plic = s->plic[i];
> + }
> + if (i == 2) {
> + pcie_plic = s->plic[i];
> + }
> + }
>
> /* register system main memory (actual RAM) */
> memory_region_init_ram(main_mem, NULL, "riscv_virt_board.ram",
> @@ -572,38 +649,14 @@ static void riscv_virt_board_init(MachineState *machine)
> memmap[VIRT_MROM].base + sizeof(reset_vec),
> &address_space_memory);
>
> - /* create PLIC hart topology configuration string */
> - plic_hart_config_len = (strlen(VIRT_PLIC_HART_CONFIG) + 1) * smp_cpus;
> - plic_hart_config = g_malloc0(plic_hart_config_len);
> - for (i = 0; i < smp_cpus; i++) {
> - if (i != 0) {
> - strncat(plic_hart_config, ",", plic_hart_config_len);
> - }
> - strncat(plic_hart_config, VIRT_PLIC_HART_CONFIG, plic_hart_config_len);
> - plic_hart_config_len -= (strlen(VIRT_PLIC_HART_CONFIG) + 1);
> - }
> -
> - /* MMIO */
> - s->plic = sifive_plic_create(memmap[VIRT_PLIC].base,
> - plic_hart_config, 0,
> - VIRT_PLIC_NUM_SOURCES,
> - VIRT_PLIC_NUM_PRIORITIES,
> - VIRT_PLIC_PRIORITY_BASE,
> - VIRT_PLIC_PENDING_BASE,
> - VIRT_PLIC_ENABLE_BASE,
> - VIRT_PLIC_ENABLE_STRIDE,
> - VIRT_PLIC_CONTEXT_BASE,
> - VIRT_PLIC_CONTEXT_STRIDE,
> - memmap[VIRT_PLIC].size);
> - sifive_clint_create(memmap[VIRT_CLINT].base,
> - memmap[VIRT_CLINT].size, 0, smp_cpus,
> - SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE, true);
> + /* SiFive Test MMIO device */
> sifive_test_create(memmap[VIRT_TEST].base);
>
> + /* VirtIO MMIO devices */
> for (i = 0; i < VIRTIO_COUNT; i++) {
> sysbus_create_simple("virtio-mmio",
> memmap[VIRT_VIRTIO].base + i * memmap[VIRT_VIRTIO].size,
> - qdev_get_gpio_in(DEVICE(s->plic), VIRTIO_IRQ + i));
> + qdev_get_gpio_in(DEVICE(virtio_plic), VIRTIO_IRQ + i));
> }
>
> gpex_pcie_init(system_memory,
> @@ -612,14 +665,14 @@ static void riscv_virt_board_init(MachineState *machine)
> memmap[VIRT_PCIE_MMIO].base,
> memmap[VIRT_PCIE_MMIO].size,
> memmap[VIRT_PCIE_PIO].base,
> - DEVICE(s->plic), true);
> + DEVICE(pcie_plic), true);
>
> serial_mm_init(system_memory, memmap[VIRT_UART0].base,
> - 0, qdev_get_gpio_in(DEVICE(s->plic), UART0_IRQ), 399193,
> + 0, qdev_get_gpio_in(DEVICE(mmio_plic), UART0_IRQ), 399193,
> serial_hd(0), DEVICE_LITTLE_ENDIAN);
>
> sysbus_create_simple("goldfish_rtc", memmap[VIRT_RTC].base,
> - qdev_get_gpio_in(DEVICE(s->plic), RTC_IRQ));
> + qdev_get_gpio_in(DEVICE(mmio_plic), RTC_IRQ));
>
> virt_flash_create(s);
>
> @@ -629,8 +682,6 @@ static void riscv_virt_board_init(MachineState *machine)
> drive_get(IF_PFLASH, 0, i));
> }
> virt_flash_map(s, system_memory);
> -
> - g_free(plic_hart_config);
> }
>
> static void riscv_virt_machine_instance_init(Object *obj)
> @@ -643,7 +694,7 @@ static void riscv_virt_machine_class_init(ObjectClass *oc, void *data)
>
> mc->desc = "RISC-V VirtIO board";
> mc->init = riscv_virt_board_init;
> - mc->max_cpus = 8;
> + mc->max_cpus = VIRT_CPUS_MAX;
> mc->default_cpu_type = VIRT_CPU;
> mc->pci_allow_0_address = true;
> }
> diff --git a/include/hw/riscv/virt.h b/include/hw/riscv/virt.h
> index e69355efaf..333d1edbc2 100644
> --- a/include/hw/riscv/virt.h
> +++ b/include/hw/riscv/virt.h
> @@ -23,6 +23,11 @@
> #include "hw/sysbus.h"
> #include "hw/block/flash.h"
>
> +#define VIRT_SOCKETS_MAX 4
> +#define VIRT_CPUS_PER_SOCKET_MIN 2
> +#define VIRT_CPUS_PER_SOCKET_MAX 4
> +#define VIRT_CPUS_MAX (VIRT_SOCKETS_MAX * VIRT_CPUS_PER_SOCKET_MAX)
> +
> #define TYPE_RISCV_VIRT_MACHINE MACHINE_TYPE_NAME("virt")
> #define RISCV_VIRT_MACHINE(obj) \
> OBJECT_CHECK(RISCVVirtState, (obj), TYPE_RISCV_VIRT_MACHINE)
> @@ -32,8 +37,9 @@ typedef struct {
> MachineState parent;
>
> /*< public >*/
> - RISCVHartArrayState soc;
> - DeviceState *plic;
> + unsigned int num_socs;
> + RISCVHartArrayState soc[VIRT_SOCKETS_MAX];
> + DeviceState *plic[VIRT_SOCKETS_MAX];
> PFlashCFI01 *flash[2];
>
> void *fdt;
> @@ -74,6 +80,8 @@ enum {
> #define VIRT_PLIC_ENABLE_STRIDE 0x80
> #define VIRT_PLIC_CONTEXT_BASE 0x200000
> #define VIRT_PLIC_CONTEXT_STRIDE 0x1000
> +#define VIRT_PLIC_SIZE(__num_context) \
> + (VIRT_PLIC_CONTEXT_BASE + (__num_context) * VIRT_PLIC_CONTEXT_STRIDE)
>
> #define FDT_PCI_ADDR_CELLS 3
> #define FDT_PCI_INT_CELLS 1
This has the same "minimum two CPUs per socket" issue.
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH 3/4] hw/riscv: Allow creating multiple instances of PLIC
2020-05-16 6:37 ` [PATCH 3/4] hw/riscv: Allow creating multiple instances of PLIC Anup Patel
2020-05-21 20:16 ` Palmer Dabbelt
@ 2020-05-21 21:59 ` Alistair Francis
1 sibling, 0 replies; 19+ messages in thread
From: Alistair Francis @ 2020-05-21 21:59 UTC (permalink / raw)
To: Anup Patel
Cc: Peter Maydell, open list:RISC-V, Sagar Karandikar, Anup Patel,
qemu-devel@nongnu.org Developers, Atish Patra, Alistair Francis,
Palmer Dabbelt
On Fri, May 15, 2020 at 11:39 PM Anup Patel <anup.patel@wdc.com> wrote:
>
> We extend PLIC emulation to allow multiple instances of PLIC in
> a QEMU RISC-V machine. To achieve this, we remove first HART id
> zero assumption from PLIC emulation.
>
> Signed-off-by: Anup Patel <anup.patel@wdc.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
Alistair
> ---
> hw/riscv/sifive_e.c | 2 +-
> hw/riscv/sifive_plic.c | 24 +++++++++++++-----------
> hw/riscv/sifive_u.c | 2 +-
> hw/riscv/virt.c | 2 +-
> include/hw/riscv/sifive_plic.h | 12 +++++++-----
> 5 files changed, 23 insertions(+), 19 deletions(-)
>
> diff --git a/hw/riscv/sifive_e.c b/hw/riscv/sifive_e.c
> index 1c3b37d0ba..bd122e71ae 100644
> --- a/hw/riscv/sifive_e.c
> +++ b/hw/riscv/sifive_e.c
> @@ -152,7 +152,7 @@ static void riscv_sifive_e_soc_realize(DeviceState *dev, Error **errp)
>
> /* MMIO */
> s->plic = sifive_plic_create(memmap[SIFIVE_E_PLIC].base,
> - (char *)SIFIVE_E_PLIC_HART_CONFIG,
> + (char *)SIFIVE_E_PLIC_HART_CONFIG, 0,
> SIFIVE_E_PLIC_NUM_SOURCES,
> SIFIVE_E_PLIC_NUM_PRIORITIES,
> SIFIVE_E_PLIC_PRIORITY_BASE,
> diff --git a/hw/riscv/sifive_plic.c b/hw/riscv/sifive_plic.c
> index c1e04cbb98..f88bb48053 100644
> --- a/hw/riscv/sifive_plic.c
> +++ b/hw/riscv/sifive_plic.c
> @@ -352,6 +352,7 @@ static const MemoryRegionOps sifive_plic_ops = {
>
> static Property sifive_plic_properties[] = {
> DEFINE_PROP_STRING("hart-config", SiFivePLICState, hart_config),
> + DEFINE_PROP_UINT32("hartid-base", SiFivePLICState, hartid_base, 0),
> DEFINE_PROP_UINT32("num-sources", SiFivePLICState, num_sources, 0),
> DEFINE_PROP_UINT32("num-priorities", SiFivePLICState, num_priorities, 0),
> DEFINE_PROP_UINT32("priority-base", SiFivePLICState, priority_base, 0),
> @@ -400,10 +401,12 @@ static void parse_hart_config(SiFivePLICState *plic)
> }
> hartid++;
>
> - /* store hart/mode combinations */
> plic->num_addrs = addrid;
> + plic->num_harts = hartid;
> +
> + /* store hart/mode combinations */
> plic->addr_config = g_new(PLICAddr, plic->num_addrs);
> - addrid = 0, hartid = 0;
> + addrid = 0, hartid = plic->hartid_base;
> p = plic->hart_config;
> while ((c = *p++)) {
> if (c == ',') {
> @@ -429,8 +432,6 @@ static void sifive_plic_irq_request(void *opaque, int irq, int level)
>
> static void sifive_plic_realize(DeviceState *dev, Error **errp)
> {
> - MachineState *ms = MACHINE(qdev_get_machine());
> - unsigned int smp_cpus = ms->smp.cpus;
> SiFivePLICState *plic = SIFIVE_PLIC(dev);
> int i;
>
> @@ -451,8 +452,8 @@ static void sifive_plic_realize(DeviceState *dev, Error **errp)
> * lost a interrupt in the case a PLIC is attached. The SEIP bit must be
> * hardware controlled when a PLIC is attached.
> */
> - for (i = 0; i < smp_cpus; i++) {
> - RISCVCPU *cpu = RISCV_CPU(qemu_get_cpu(i));
> + for (i = 0; i < plic->num_harts; i++) {
> + RISCVCPU *cpu = RISCV_CPU(qemu_get_cpu(plic->hartid_base + i));
> if (riscv_cpu_claim_interrupts(cpu, MIP_SEIP) < 0) {
> error_report("SEIP already claimed");
> exit(1);
> @@ -488,16 +489,17 @@ type_init(sifive_plic_register_types)
> * Create PLIC device.
> */
> DeviceState *sifive_plic_create(hwaddr addr, char *hart_config,
> - uint32_t num_sources, uint32_t num_priorities,
> - uint32_t priority_base, uint32_t pending_base,
> - uint32_t enable_base, uint32_t enable_stride,
> - uint32_t context_base, uint32_t context_stride,
> - uint32_t aperture_size)
> + uint32_t hartid_base, uint32_t num_sources,
> + uint32_t num_priorities, uint32_t priority_base,
> + uint32_t pending_base, uint32_t enable_base,
> + uint32_t enable_stride, uint32_t context_base,
> + uint32_t context_stride, uint32_t aperture_size)
> {
> DeviceState *dev = qdev_create(NULL, TYPE_SIFIVE_PLIC);
> assert(enable_stride == (enable_stride & -enable_stride));
> assert(context_stride == (context_stride & -context_stride));
> qdev_prop_set_string(dev, "hart-config", hart_config);
> + qdev_prop_set_uint32(dev, "hartid-base", hartid_base);
> qdev_prop_set_uint32(dev, "num-sources", num_sources);
> qdev_prop_set_uint32(dev, "num-priorities", num_priorities);
> qdev_prop_set_uint32(dev, "priority-base", priority_base);
> diff --git a/hw/riscv/sifive_u.c b/hw/riscv/sifive_u.c
> index 22997fbf13..69dbd7980b 100644
> --- a/hw/riscv/sifive_u.c
> +++ b/hw/riscv/sifive_u.c
> @@ -585,7 +585,7 @@ static void riscv_sifive_u_soc_realize(DeviceState *dev, Error **errp)
>
> /* MMIO */
> s->plic = sifive_plic_create(memmap[SIFIVE_U_PLIC].base,
> - plic_hart_config,
> + plic_hart_config, 0,
> SIFIVE_U_PLIC_NUM_SOURCES,
> SIFIVE_U_PLIC_NUM_PRIORITIES,
> SIFIVE_U_PLIC_PRIORITY_BASE,
> diff --git a/hw/riscv/virt.c b/hw/riscv/virt.c
> index dcb8a83b35..f40efcb193 100644
> --- a/hw/riscv/virt.c
> +++ b/hw/riscv/virt.c
> @@ -585,7 +585,7 @@ static void riscv_virt_board_init(MachineState *machine)
>
> /* MMIO */
> s->plic = sifive_plic_create(memmap[VIRT_PLIC].base,
> - plic_hart_config,
> + plic_hart_config, 0,
> VIRT_PLIC_NUM_SOURCES,
> VIRT_PLIC_NUM_PRIORITIES,
> VIRT_PLIC_PRIORITY_BASE,
> diff --git a/include/hw/riscv/sifive_plic.h b/include/hw/riscv/sifive_plic.h
> index 4421e81249..ace76d0f1b 100644
> --- a/include/hw/riscv/sifive_plic.h
> +++ b/include/hw/riscv/sifive_plic.h
> @@ -48,6 +48,7 @@ typedef struct SiFivePLICState {
> /*< public >*/
> MemoryRegion mmio;
> uint32_t num_addrs;
> + uint32_t num_harts;
> uint32_t bitfield_words;
> PLICAddr *addr_config;
> uint32_t *source_priority;
> @@ -58,6 +59,7 @@ typedef struct SiFivePLICState {
>
> /* config */
> char *hart_config;
> + uint32_t hartid_base;
> uint32_t num_sources;
> uint32_t num_priorities;
> uint32_t priority_base;
> @@ -70,10 +72,10 @@ typedef struct SiFivePLICState {
> } SiFivePLICState;
>
> DeviceState *sifive_plic_create(hwaddr addr, char *hart_config,
> - uint32_t num_sources, uint32_t num_priorities,
> - uint32_t priority_base, uint32_t pending_base,
> - uint32_t enable_base, uint32_t enable_stride,
> - uint32_t context_base, uint32_t context_stride,
> - uint32_t aperture_size);
> + uint32_t hartid_base, uint32_t num_sources,
> + uint32_t num_priorities, uint32_t priority_base,
> + uint32_t pending_base, uint32_t enable_base,
> + uint32_t enable_stride, uint32_t context_base,
> + uint32_t context_stride, uint32_t aperture_size);
>
> #endif
> --
> 2.25.1
>
>
* RE: [PATCH 2/4] hw/riscv: spike: Allow creating multiple sockets
2020-05-21 20:16 ` Palmer Dabbelt
@ 2020-05-22 10:09 ` Anup Patel
2020-05-27 0:38 ` Alistair Francis
0 siblings, 1 reply; 19+ messages in thread
From: Anup Patel @ 2020-05-22 10:09 UTC (permalink / raw)
To: Palmer Dabbelt
Cc: Peter Maydell, qemu-riscv, sagark, anup, qemu-devel, Atish Patra,
Alistair Francis
> -----Original Message-----
> From: Palmer Dabbelt <palmer@dabbelt.com>
> Sent: 22 May 2020 01:46
> To: Anup Patel <Anup.Patel@wdc.com>
> Cc: Peter Maydell <peter.maydell@linaro.org>; Alistair Francis
> <Alistair.Francis@wdc.com>; sagark@eecs.berkeley.edu; Atish Patra
> <Atish.Patra@wdc.com>; anup@brainfault.org; qemu-riscv@nongnu.org;
> qemu-devel@nongnu.org; Anup Patel <Anup.Patel@wdc.com>
> Subject: Re: [PATCH 2/4] hw/riscv: spike: Allow creating multiple sockets
>
> On Fri, 15 May 2020 23:37:44 PDT (-0700), Anup Patel wrote:
> > We extend RISC-V spike machine to allow creating a multi-socket machine.
> > Each RISC-V spike machine socket is a set of HARTs and a CLINT instance.
> > Other peripherals are shared between all RISC-V spike machine sockets.
> > We also update RISC-V spike machine device tree to treat each socket
> > as a NUMA node.
> >
> > The number of sockets in RISC-V spike machine can be specified using
> > the "sockets=" sub-option of QEMU "-smp" command-line option. By
> > default, only one socket RISC-V spike machine will be created.
> >
> > Currently, we only allow creating upto maximum 4 sockets with minimum
> > 2 HARTs per socket. In future, this limits can be changed.
> >
> > Signed-off-by: Anup Patel <anup.patel@wdc.com>
> > ---
> > hw/riscv/spike.c | 206 ++++++++++++++++++++++++---------------
> > include/hw/riscv/spike.h | 8 +-
> > 2 files changed, 133 insertions(+), 81 deletions(-)
> >
> > diff --git a/hw/riscv/spike.c b/hw/riscv/spike.c index
> > d5e0103d89..f63c57a87c 100644
> > --- a/hw/riscv/spike.c
> > +++ b/hw/riscv/spike.c
> > @@ -64,9 +64,11 @@ static void create_fdt(SpikeState *s, const struct
> MemmapEntry *memmap,
> > uint64_t mem_size, const char *cmdline) {
> > void *fdt;
> > - int cpu;
> > - uint32_t *cells;
> > - char *nodename;
> > + int cpu, socket;
> > + uint32_t *clint_cells;
> > + unsigned long clint_addr;
> > + uint32_t cpu_phandle, intc_phandle, phandle = 1;
> > + char *name, *clint_name, *clust_name, *core_name, *cpu_name,
> > + *intc_name;
> >
> > fdt = s->fdt = create_device_tree(&s->fdt_size);
> > if (!fdt) {
> > @@ -88,68 +90,85 @@ static void create_fdt(SpikeState *s, const struct
> MemmapEntry *memmap,
> > qemu_fdt_setprop_cell(fdt, "/soc", "#size-cells", 0x2);
> > qemu_fdt_setprop_cell(fdt, "/soc", "#address-cells", 0x2);
> >
> > - nodename = g_strdup_printf("/memory@%lx",
> > - (long)memmap[SPIKE_DRAM].base);
> > - qemu_fdt_add_subnode(fdt, nodename);
> > - qemu_fdt_setprop_cells(fdt, nodename, "reg",
> > + name = g_strdup_printf("/memory@%lx",
> (long)memmap[SPIKE_DRAM].base);
> > + qemu_fdt_add_subnode(fdt, name);
> > + qemu_fdt_setprop_cells(fdt, name, "reg",
> > memmap[SPIKE_DRAM].base >> 32, memmap[SPIKE_DRAM].base,
> > mem_size >> 32, mem_size);
> > - qemu_fdt_setprop_string(fdt, nodename, "device_type", "memory");
> > - g_free(nodename);
> > + qemu_fdt_setprop_string(fdt, name, "device_type", "memory");
> > + g_free(name);
> >
> > qemu_fdt_add_subnode(fdt, "/cpus");
> > qemu_fdt_setprop_cell(fdt, "/cpus", "timebase-frequency",
> > SIFIVE_CLINT_TIMEBASE_FREQ);
> > qemu_fdt_setprop_cell(fdt, "/cpus", "#size-cells", 0x0);
> > qemu_fdt_setprop_cell(fdt, "/cpus", "#address-cells", 0x1);
> > + qemu_fdt_add_subnode(fdt, "/cpus/cpu-map");
> >
> > - for (cpu = s->soc.num_harts - 1; cpu >= 0; cpu--) {
> > - nodename = g_strdup_printf("/cpus/cpu@%d", cpu);
> > - char *intc = g_strdup_printf("/cpus/cpu@%d/interrupt-controller", cpu);
> > - char *isa = riscv_isa_string(&s->soc.harts[cpu]);
> > - qemu_fdt_add_subnode(fdt, nodename);
> > + for (socket = (s->num_socs - 1); socket >= 0; socket--) {
> > + clust_name = g_strdup_printf("/cpus/cpu-map/cluster0%d", socket);
> > + qemu_fdt_add_subnode(fdt, clust_name);
> > +
> > + clint_cells = g_new0(uint32_t, s->soc[socket].num_harts *
> > + 4);
> > +
> > + for (cpu = s->soc[socket].num_harts - 1; cpu >= 0; cpu--) {
> > + cpu_phandle = phandle++;
> > +
> > + cpu_name = g_strdup_printf("/cpus/cpu@%d",
> > + s->soc[socket].hartid_base + cpu);
> > + qemu_fdt_add_subnode(fdt, cpu_name);
> > #if defined(TARGET_RISCV32)
> > - qemu_fdt_setprop_string(fdt, nodename, "mmu-type", "riscv,sv32");
> > + qemu_fdt_setprop_string(fdt, cpu_name, "mmu-type",
> > + "riscv,sv32");
> > #else
> > - qemu_fdt_setprop_string(fdt, nodename, "mmu-type", "riscv,sv48");
> > + qemu_fdt_setprop_string(fdt, cpu_name, "mmu-type",
> > + "riscv,sv48");
> > #endif
> > - qemu_fdt_setprop_string(fdt, nodename, "riscv,isa", isa);
> > - qemu_fdt_setprop_string(fdt, nodename, "compatible", "riscv");
> > - qemu_fdt_setprop_string(fdt, nodename, "status", "okay");
> > - qemu_fdt_setprop_cell(fdt, nodename, "reg", cpu);
> > - qemu_fdt_setprop_string(fdt, nodename, "device_type", "cpu");
> > - qemu_fdt_add_subnode(fdt, intc);
> > - qemu_fdt_setprop_cell(fdt, intc, "phandle", 1);
> > - qemu_fdt_setprop_string(fdt, intc, "compatible", "riscv,cpu-intc");
> > - qemu_fdt_setprop(fdt, intc, "interrupt-controller", NULL, 0);
> > - qemu_fdt_setprop_cell(fdt, intc, "#interrupt-cells", 1);
> > - g_free(isa);
> > - g_free(intc);
> > - g_free(nodename);
> > - }
> > + name = riscv_isa_string(&s->soc[socket].harts[cpu]);
> > + qemu_fdt_setprop_string(fdt, cpu_name, "riscv,isa", name);
> > + g_free(name);
> > + qemu_fdt_setprop_string(fdt, cpu_name, "compatible", "riscv");
> > + qemu_fdt_setprop_string(fdt, cpu_name, "status", "okay");
> > + qemu_fdt_setprop_cell(fdt, cpu_name, "reg",
> > + s->soc[socket].hartid_base + cpu);
> > + qemu_fdt_setprop_string(fdt, cpu_name, "device_type", "cpu");
> > + qemu_fdt_setprop_cell(fdt, cpu_name, "phandle",
> > + cpu_phandle);
> > +
> > + intc_name = g_strdup_printf("%s/interrupt-controller", cpu_name);
> > + qemu_fdt_add_subnode(fdt, intc_name);
> > + intc_phandle = phandle++;
> > + qemu_fdt_setprop_cell(fdt, intc_name, "phandle", intc_phandle);
> > + qemu_fdt_setprop_string(fdt, intc_name, "compatible",
> > + "riscv,cpu-intc");
> > + qemu_fdt_setprop(fdt, intc_name, "interrupt-controller", NULL, 0);
> > + qemu_fdt_setprop_cell(fdt, intc_name, "#interrupt-cells",
> > + 1);
> > +
> > + clint_cells[cpu * 4 + 0] = cpu_to_be32(intc_phandle);
> > + clint_cells[cpu * 4 + 1] = cpu_to_be32(IRQ_M_SOFT);
> > + clint_cells[cpu * 4 + 2] = cpu_to_be32(intc_phandle);
> > + clint_cells[cpu * 4 + 3] = cpu_to_be32(IRQ_M_TIMER);
> > +
> > + core_name = g_strdup_printf("%s/core%d", clust_name, cpu);
> > + qemu_fdt_add_subnode(fdt, core_name);
> > + qemu_fdt_setprop_cell(fdt, core_name, "cpu",
> > + cpu_phandle);
> > +
> > + g_free(core_name);
> > + g_free(intc_name);
> > + g_free(cpu_name);
> > + }
> >
> > - cells = g_new0(uint32_t, s->soc.num_harts * 4);
> > - for (cpu = 0; cpu < s->soc.num_harts; cpu++) {
> > - nodename =
> > - g_strdup_printf("/cpus/cpu@%d/interrupt-controller", cpu);
> > - uint32_t intc_phandle = qemu_fdt_get_phandle(fdt, nodename);
> > - cells[cpu * 4 + 0] = cpu_to_be32(intc_phandle);
> > - cells[cpu * 4 + 1] = cpu_to_be32(IRQ_M_SOFT);
> > - cells[cpu * 4 + 2] = cpu_to_be32(intc_phandle);
> > - cells[cpu * 4 + 3] = cpu_to_be32(IRQ_M_TIMER);
> > - g_free(nodename);
> > + clint_addr = memmap[SPIKE_CLINT].base +
> > + (memmap[SPIKE_CLINT].size * socket);
> > + clint_name = g_strdup_printf("/soc/clint@%lx", clint_addr);
> > + qemu_fdt_add_subnode(fdt, clint_name);
> > + qemu_fdt_setprop_string(fdt, clint_name, "compatible", "riscv,clint0");
> > + qemu_fdt_setprop_cells(fdt, clint_name, "reg",
> > + 0x0, clint_addr, 0x0, memmap[SPIKE_CLINT].size);
> > + qemu_fdt_setprop(fdt, clint_name, "interrupts-extended",
> > + clint_cells, s->soc[socket].num_harts * sizeof(uint32_t)
> > + * 4);
> > +
> > + g_free(clint_name);
> > + g_free(clint_cells);
> > + g_free(clust_name);
> > }
> > - nodename = g_strdup_printf("/soc/clint@%lx",
> > - (long)memmap[SPIKE_CLINT].base);
> > - qemu_fdt_add_subnode(fdt, nodename);
> > - qemu_fdt_setprop_string(fdt, nodename, "compatible", "riscv,clint0");
> > - qemu_fdt_setprop_cells(fdt, nodename, "reg",
> > - 0x0, memmap[SPIKE_CLINT].base,
> > - 0x0, memmap[SPIKE_CLINT].size);
> > - qemu_fdt_setprop(fdt, nodename, "interrupts-extended",
> > - cells, s->soc.num_harts * sizeof(uint32_t) * 4);
> > - g_free(cells);
> > - g_free(nodename);
> >
> > if (cmdline) {
> > qemu_fdt_add_subnode(fdt, "/chosen"); @@ -160,23 +179,51 @@
> > static void create_fdt(SpikeState *s, const struct MemmapEntry
> > *memmap, static void spike_board_init(MachineState *machine) {
> > const struct MemmapEntry *memmap = spike_memmap;
> > -
> > SpikeState *s = g_new0(SpikeState, 1);
> > MemoryRegion *system_memory = get_system_memory();
> > MemoryRegion *main_mem = g_new(MemoryRegion, 1);
> > MemoryRegion *mask_rom = g_new(MemoryRegion, 1);
> > int i;
> > + char *soc_name;
> > unsigned int smp_cpus = machine->smp.cpus;
> > -
> > - /* Initialize SOC */
> > - object_initialize_child(OBJECT(machine), "soc", &s->soc, sizeof(s->soc),
> > - TYPE_RISCV_HART_ARRAY, &error_abort, NULL);
> > - object_property_set_str(OBJECT(&s->soc), machine->cpu_type, "cpu-
> type",
> > - &error_abort);
> > - object_property_set_int(OBJECT(&s->soc), smp_cpus, "num-harts",
> > - &error_abort);
> > - object_property_set_bool(OBJECT(&s->soc), true, "realized",
> > - &error_abort);
> > + unsigned int base_hartid, cpus_per_socket;
> > +
> > + s->num_socs = machine->smp.sockets;
> > +
> > + /* Ensure minumum required CPUs per socket */
> > + if ((smp_cpus / s->num_socs) < SPIKE_CPUS_PER_SOCKET_MIN)
> > + s->num_socs = 1;
>
> Why? It seems like creating single-hart sockets would be a good test case, and
> I'm pretty sure it's a configuration that we had in embedded systems.
Yes, single-hart sockets are sensible for testing software.
When the "sockets=" sub-option is not provided in the "-smp" command-line
option, machine->smp.sockets is set equal to machine->smp.cpus by the
smp_parse() function in hw/core/machine.c. This means that, by default,
we always get one hart per socket; in other words, "-smp 4" gives
4 CPUs spread across 4 sockets. This is counter-intuitive for users:
when "sockets=" is not provided, we should default to a single socket
irrespective of the number of CPUs.
I added SPIKE_CPUS_PER_SOCKET_MIN to handle the default case where
no "sockets=" sub-option is provided.
An alternative approach would be to:
1. Add more members to struct CpuTopology in include/hw/boards.h
so we know whether the "sockets=" option was passed or not
2. Update smp_parse() for the new members in struct CpuTopology
3. Assume a single-socket machine in the QEMU RISC-V virt and
RISC-V spike machines when the "sockets=" option was not passed
Suggestions?
>
> > + /* Limit the number of sockets */
> > + if (SPIKE_SOCKETS_MAX < s->num_socs)
> > + s->num_socs = SPIKE_SOCKETS_MAX;
> > +
> > + /* Initialize socket */
> > + for (i = 0; i < s->num_socs; i++) {
> > + base_hartid = i * (smp_cpus / s->num_socs);
> > + if (i == (s->num_socs - 1))
> > + cpus_per_socket = smp_cpus - base_hartid;
> > + else
> > + cpus_per_socket = smp_cpus / s->num_socs;
> > + soc_name = g_strdup_printf("soc%d", i);
> > + object_initialize_child(OBJECT(machine), soc_name, &s->soc[i],
> > + sizeof(s->soc[i]), TYPE_RISCV_HART_ARRAY, &error_abort, NULL);
> > + g_free(soc_name);
> > + object_property_set_str(OBJECT(&s->soc[i]),
> > + machine->cpu_type, "cpu-type", &error_abort);
> > + object_property_set_int(OBJECT(&s->soc[i]),
> > + base_hartid, "hartid-base", &error_abort);
> > + object_property_set_int(OBJECT(&s->soc[i]),
> > + cpus_per_socket, "num-harts", &error_abort);
> > + object_property_set_bool(OBJECT(&s->soc[i]),
> > + true, "realized", &error_abort);
> > +
> > + /* Core Local Interruptor (timer and IPI) for each socket */
> > + sifive_clint_create(
> > + memmap[SPIKE_CLINT].base + i * memmap[SPIKE_CLINT].size,
> > + memmap[SPIKE_CLINT].size, base_hartid, cpus_per_socket,
> > + SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE, SIFIVE_TIME_BASE, false);
> > + }
> >
> > /* register system main memory (actual RAM) */
> > memory_region_init_ram(main_mem, NULL, "riscv.spike.ram", @@
> > -249,12 +296,7 @@ static void spike_board_init(MachineState *machine)
> > &address_space_memory);
> >
> > /* initialize HTIF using symbols found in load_kernel */
> > - htif_mm_init(system_memory, mask_rom, &s->soc.harts[0].env,
> serial_hd(0));
> > -
> > - /* Core Local Interruptor (timer and IPI) */
> > - sifive_clint_create(memmap[SPIKE_CLINT].base,
> memmap[SPIKE_CLINT].size,
> > - 0, smp_cpus, SIFIVE_SIP_BASE, SIFIVE_TIMECMP_BASE,
> SIFIVE_TIME_BASE,
> > - false);
> > + htif_mm_init(system_memory, mask_rom, &s->soc[0].harts[0].env,
> > + serial_hd(0));
> > }
> >
> > static void spike_v1_10_0_board_init(MachineState *machine) @@ -268,6
> > +310,8 @@ static void spike_v1_10_0_board_init(MachineState *machine)
> > int i;
> > unsigned int smp_cpus = machine->smp.cpus;
> >
> > + s->num_socs = 1;
> > +
> > if (!qtest_enabled()) {
> > info_report("The Spike v1.10.0 machine has been deprecated. "
> > "Please use the generic spike machine and specify the ISA "
> > @@ -275,13 +319,13 @@ static void
> spike_v1_10_0_board_init(MachineState *machine)
> > }
> >
> > /* Initialize SOC */
> > - object_initialize_child(OBJECT(machine), "soc", &s->soc, sizeof(s->soc),
> > + object_initialize_child(OBJECT(machine), "soc", &s->soc[0],
> > + sizeof(s->soc[0]),
> > TYPE_RISCV_HART_ARRAY, &error_abort, NULL);
> > - object_property_set_str(OBJECT(&s->soc), SPIKE_V1_10_0_CPU, "cpu-
> type",
> > + object_property_set_str(OBJECT(&s->soc[0]), SPIKE_V1_10_0_CPU,
> > + "cpu-type",
> > &error_abort);
> > - object_property_set_int(OBJECT(&s->soc), smp_cpus, "num-harts",
> > + object_property_set_int(OBJECT(&s->soc[0]), smp_cpus,
> > + "num-harts",
> > &error_abort);
> > - object_property_set_bool(OBJECT(&s->soc), true, "realized",
> > + object_property_set_bool(OBJECT(&s->soc[0]), true, "realized",
> > &error_abort);
> >
> > /* register system main memory (actual RAM) */ @@ -339,7 +383,7
> > @@ static void spike_v1_10_0_board_init(MachineState *machine)
> > &address_space_memory);
> >
> > /* initialize HTIF using symbols found in load_kernel */
> > - htif_mm_init(system_memory, mask_rom, &s->soc.harts[0].env,
> serial_hd(0));
> > + htif_mm_init(system_memory, mask_rom, &s->soc[0].harts[0].env,
> > + serial_hd(0));
> >
> > /* Core Local Interruptor (timer and IPI) */
> > sifive_clint_create(memmap[SPIKE_CLINT].base,
> > memmap[SPIKE_CLINT].size, @@ -358,6 +402,8 @@ static void
> spike_v1_09_1_board_init(MachineState *machine)
> > int i;
> > unsigned int smp_cpus = machine->smp.cpus;
> >
> > + s->num_socs = 1;
> > +
> > if (!qtest_enabled()) {
> > info_report("The Spike v1.09.1 machine has been deprecated. "
> > "Please use the generic spike machine and specify the ISA "
> > @@ -365,13 +411,13 @@ static void
> spike_v1_09_1_board_init(MachineState *machine)
> > }
> >
> > /* Initialize SOC */
> > - object_initialize_child(OBJECT(machine), "soc", &s->soc, sizeof(s->soc),
> > + object_initialize_child(OBJECT(machine), "soc", &s->soc[0],
> > + sizeof(s->soc[0]),
> > TYPE_RISCV_HART_ARRAY, &error_abort, NULL);
> > - object_property_set_str(OBJECT(&s->soc), SPIKE_V1_09_1_CPU, "cpu-
> type",
> > + object_property_set_str(OBJECT(&s->soc[0]), SPIKE_V1_09_1_CPU,
> > + "cpu-type",
> > &error_abort);
> > - object_property_set_int(OBJECT(&s->soc), smp_cpus, "num-harts",
> > + object_property_set_int(OBJECT(&s->soc[0]), smp_cpus,
> > + "num-harts",
> > &error_abort);
> > - object_property_set_bool(OBJECT(&s->soc), true, "realized",
> > + object_property_set_bool(OBJECT(&s->soc[0]), true, "realized",
> > &error_abort);
> >
> > /* register system main memory (actual RAM) */ @@ -425,7 +471,7
> > @@ static void spike_v1_09_1_board_init(MachineState *machine)
> > "};\n";
> >
> > /* build config string with supplied memory size */
> > - char *isa = riscv_isa_string(&s->soc.harts[0]);
> > + char *isa = riscv_isa_string(&s->soc[0].harts[0]);
> > char *config_string = g_strdup_printf(config_string_tmpl,
> > (uint64_t)memmap[SPIKE_CLINT].base + SIFIVE_TIME_BASE,
> > (uint64_t)memmap[SPIKE_DRAM].base,
> > @@ -448,7 +494,7 @@ static void spike_v1_09_1_board_init(MachineState
> *machine)
> > &address_space_memory);
> >
> > /* initialize HTIF using symbols found in load_kernel */
> > - htif_mm_init(system_memory, mask_rom, &s->soc.harts[0].env,
> serial_hd(0));
> > + htif_mm_init(system_memory, mask_rom, &s->soc[0].harts[0].env,
> > + serial_hd(0));
> >
> > /* Core Local Interruptor (timer and IPI) */
> > sifive_clint_create(memmap[SPIKE_CLINT].base,
> > memmap[SPIKE_CLINT].size, @@ -476,7 +522,7 @@ static void
> > spike_machine_init(MachineClass *mc) {
> > mc->desc = "RISC-V Spike Board";
> > mc->init = spike_board_init;
> > - mc->max_cpus = 8;
> > + mc->max_cpus = SPIKE_CPUS_MAX;
> > mc->is_default = true;
> > mc->default_cpu_type = SPIKE_V1_10_0_CPU; } diff --git
> > a/include/hw/riscv/spike.h b/include/hw/riscv/spike.h index
> > dc770421bc..04a9f593b5 100644
> > --- a/include/hw/riscv/spike.h
> > +++ b/include/hw/riscv/spike.h
> > @@ -22,12 +22,18 @@
> > #include "hw/riscv/riscv_hart.h"
> > #include "hw/sysbus.h"
> >
> > +#define SPIKE_SOCKETS_MAX 4
> > +#define SPIKE_CPUS_PER_SOCKET_MIN 2
> > +#define SPIKE_CPUS_PER_SOCKET_MAX 4
> > +#define SPIKE_CPUS_MAX (SPIKE_SOCKETS_MAX *
> > +SPIKE_CPUS_PER_SOCKET_MAX)
> > +
> > typedef struct {
> > /*< private >*/
> > SysBusDevice parent_obj;
> >
> > /*< public >*/
> > - RISCVHartArrayState soc;
> > + unsigned int num_socs;
> > + RISCVHartArrayState soc[SPIKE_SOCKETS_MAX];
> > void *fdt;
> > int fdt_size;
> > } SpikeState;
Regards,
Anup
* Re: [PATCH 2/4] hw/riscv: spike: Allow creating multiple sockets
2020-05-22 10:09 ` Anup Patel
@ 2020-05-27 0:38 ` Alistair Francis
2020-05-27 2:54 ` Anup Patel
0 siblings, 1 reply; 19+ messages in thread
From: Alistair Francis @ 2020-05-27 0:38 UTC (permalink / raw)
To: Anup Patel
Cc: Peter Maydell, qemu-riscv, sagark, anup, qemu-devel, Atish Patra,
Palmer Dabbelt, Alistair Francis
On Fri, May 22, 2020 at 3:10 AM Anup Patel <Anup.Patel@wdc.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Palmer Dabbelt <palmer@dabbelt.com>
> > Sent: 22 May 2020 01:46
> > To: Anup Patel <Anup.Patel@wdc.com>
> > Cc: Peter Maydell <peter.maydell@linaro.org>; Alistair Francis
> > <Alistair.Francis@wdc.com>; sagark@eecs.berkeley.edu; Atish Patra
> > <Atish.Patra@wdc.com>; anup@brainfault.org; qemu-riscv@nongnu.org;
> > qemu-devel@nongnu.org; Anup Patel <Anup.Patel@wdc.com>
> > Subject: Re: [PATCH 2/4] hw/riscv: spike: Allow creating multiple sockets
> >
> > On Fri, 15 May 2020 23:37:44 PDT (-0700), Anup Patel wrote:
> > > We extend RISC-V spike machine to allow creating a multi-socket machine.
> > > Each RISC-V spike machine socket is a set of HARTs and a CLINT instance.
> > > Other peripherals are shared between all RISC-V spike machine sockets.
> > > We also update RISC-V spike machine device tree to treat each socket
> > > as a NUMA node.
> > >
> > > The number of sockets in RISC-V spike machine can be specified using
> > > the "sockets=" sub-option of QEMU "-smp" command-line option. By
> > > default, only one socket RISC-V spike machine will be created.
> > >
> > > Currently, we only allow creating up to a maximum of 4 sockets with a
> > > minimum of 2 HARTs per socket. In the future, these limits can be changed.
> > >
> > > Signed-off-by: Anup Patel <anup.patel@wdc.com>
> > > ---
> > > hw/riscv/spike.c | 206 ++++++++++++++++++++++++---------------
> > > include/hw/riscv/spike.h | 8 +-
> > > 2 files changed, 133 insertions(+), 81 deletions(-)
> > >
> > > diff --git a/hw/riscv/spike.c b/hw/riscv/spike.c index
> > > d5e0103d89..f63c57a87c 100644
> > > --- a/hw/riscv/spike.c
> > > +++ b/hw/riscv/spike.c
> > > @@ -64,9 +64,11 @@ static void create_fdt(SpikeState *s, const struct
> > MemmapEntry *memmap,
> > > uint64_t mem_size, const char *cmdline) {
> > > void *fdt;
> > > - int cpu;
> > > - uint32_t *cells;
> > > - char *nodename;
> > > + int cpu, socket;
> > > + uint32_t *clint_cells;
> > > + unsigned long clint_addr;
> > > + uint32_t cpu_phandle, intc_phandle, phandle = 1;
> > > + char *name, *clint_name, *clust_name, *core_name, *cpu_name,
> > > + *intc_name;
> > >
> > > fdt = s->fdt = create_device_tree(&s->fdt_size);
> > > if (!fdt) {
> > > @@ -88,68 +90,85 @@ static void create_fdt(SpikeState *s, const struct
> > MemmapEntry *memmap,
> > > qemu_fdt_setprop_cell(fdt, "/soc", "#size-cells", 0x2);
> > > qemu_fdt_setprop_cell(fdt, "/soc", "#address-cells", 0x2);
> > >
> > > - nodename = g_strdup_printf("/memory@%lx",
> > > - (long)memmap[SPIKE_DRAM].base);
> > > - qemu_fdt_add_subnode(fdt, nodename);
> > > - qemu_fdt_setprop_cells(fdt, nodename, "reg",
> > > + name = g_strdup_printf("/memory@%lx",
> > (long)memmap[SPIKE_DRAM].base);
> > > + qemu_fdt_add_subnode(fdt, name);
> > > + qemu_fdt_setprop_cells(fdt, name, "reg",
> > > memmap[SPIKE_DRAM].base >> 32, memmap[SPIKE_DRAM].base,
> > > mem_size >> 32, mem_size);
> > > - qemu_fdt_setprop_string(fdt, nodename, "device_type", "memory");
> > > - g_free(nodename);
> > > + qemu_fdt_setprop_string(fdt, name, "device_type", "memory");
> > > + g_free(name);
> > >
> > > qemu_fdt_add_subnode(fdt, "/cpus");
> > > qemu_fdt_setprop_cell(fdt, "/cpus", "timebase-frequency",
> > > SIFIVE_CLINT_TIMEBASE_FREQ);
> > > qemu_fdt_setprop_cell(fdt, "/cpus", "#size-cells", 0x0);
> > > qemu_fdt_setprop_cell(fdt, "/cpus", "#address-cells", 0x1);
> > > + qemu_fdt_add_subnode(fdt, "/cpus/cpu-map");
> > >
> > > - for (cpu = s->soc.num_harts - 1; cpu >= 0; cpu--) {
> > > - nodename = g_strdup_printf("/cpus/cpu@%d", cpu);
> > > - char *intc = g_strdup_printf("/cpus/cpu@%d/interrupt-controller", cpu);
> > > - char *isa = riscv_isa_string(&s->soc.harts[cpu]);
> > > - qemu_fdt_add_subnode(fdt, nodename);
> > > + for (socket = (s->num_socs - 1); socket >= 0; socket--) {
> > > + clust_name = g_strdup_printf("/cpus/cpu-map/cluster0%d", socket);
> > > + qemu_fdt_add_subnode(fdt, clust_name);
> > > +
> > > + clint_cells = g_new0(uint32_t, s->soc[socket].num_harts *
> > > + 4);
> > > +
> > > + for (cpu = s->soc[socket].num_harts - 1; cpu >= 0; cpu--) {
> > > + cpu_phandle = phandle++;
> > > +
> > > + cpu_name = g_strdup_printf("/cpus/cpu@%d",
> > > + s->soc[socket].hartid_base + cpu);
> > > + qemu_fdt_add_subnode(fdt, cpu_name);
> > > #if defined(TARGET_RISCV32)
> > > - qemu_fdt_setprop_string(fdt, nodename, "mmu-type", "riscv,sv32");
> > > + qemu_fdt_setprop_string(fdt, cpu_name, "mmu-type",
> > > + "riscv,sv32");
> > > #else
> > > - qemu_fdt_setprop_string(fdt, nodename, "mmu-type", "riscv,sv48");
> > > + qemu_fdt_setprop_string(fdt, cpu_name, "mmu-type",
> > > + "riscv,sv48");
> > > #endif
> > > - qemu_fdt_setprop_string(fdt, nodename, "riscv,isa", isa);
> > > - qemu_fdt_setprop_string(fdt, nodename, "compatible", "riscv");
> > > - qemu_fdt_setprop_string(fdt, nodename, "status", "okay");
> > > - qemu_fdt_setprop_cell(fdt, nodename, "reg", cpu);
> > > - qemu_fdt_setprop_string(fdt, nodename, "device_type", "cpu");
> > > - qemu_fdt_add_subnode(fdt, intc);
> > > - qemu_fdt_setprop_cell(fdt, intc, "phandle", 1);
> > > - qemu_fdt_setprop_string(fdt, intc, "compatible", "riscv,cpu-intc");
> > > - qemu_fdt_setprop(fdt, intc, "interrupt-controller", NULL, 0);
> > > - qemu_fdt_setprop_cell(fdt, intc, "#interrupt-cells", 1);
> > > - g_free(isa);
> > > - g_free(intc);
> > > - g_free(nodename);
> > > - }
> > > + name = riscv_isa_string(&s->soc[socket].harts[cpu]);
> > > + qemu_fdt_setprop_string(fdt, cpu_name, "riscv,isa", name);
> > > + g_free(name);
> > > + qemu_fdt_setprop_string(fdt, cpu_name, "compatible", "riscv");
> > > + qemu_fdt_setprop_string(fdt, cpu_name, "status", "okay");
> > > + qemu_fdt_setprop_cell(fdt, cpu_name, "reg",
> > > + s->soc[socket].hartid_base + cpu);
> > > + qemu_fdt_setprop_string(fdt, cpu_name, "device_type", "cpu");
> > > + qemu_fdt_setprop_cell(fdt, cpu_name, "phandle",
> > > + cpu_phandle);
> > > +
> > > + intc_name = g_strdup_printf("%s/interrupt-controller", cpu_name);
> > > + qemu_fdt_add_subnode(fdt, intc_name);
> > > + intc_phandle = phandle++;
> > > + qemu_fdt_setprop_cell(fdt, intc_name, "phandle", intc_phandle);
> > > + qemu_fdt_setprop_string(fdt, intc_name, "compatible",
> > > + "riscv,cpu-intc");
> > > + qemu_fdt_setprop(fdt, intc_name, "interrupt-controller", NULL, 0);
> > > + qemu_fdt_setprop_cell(fdt, intc_name, "#interrupt-cells",
> > > + 1);
> > > +
> > > + clint_cells[cpu * 4 + 0] = cpu_to_be32(intc_phandle);
> > > + clint_cells[cpu * 4 + 1] = cpu_to_be32(IRQ_M_SOFT);
> > > + clint_cells[cpu * 4 + 2] = cpu_to_be32(intc_phandle);
> > > + clint_cells[cpu * 4 + 3] = cpu_to_be32(IRQ_M_TIMER);
> > > +
> > > + core_name = g_strdup_printf("%s/core%d", clust_name, cpu);
> > > + qemu_fdt_add_subnode(fdt, core_name);
> > > + qemu_fdt_setprop_cell(fdt, core_name, "cpu",
> > > + cpu_phandle);
> > > +
> > > + g_free(core_name);
> > > + g_free(intc_name);
> > > + g_free(cpu_name);
> > > + }
> > >
> > > - cells = g_new0(uint32_t, s->soc.num_harts * 4);
> > > - for (cpu = 0; cpu < s->soc.num_harts; cpu++) {
> > > - nodename =
> > > - g_strdup_printf("/cpus/cpu@%d/interrupt-controller", cpu);
> > > - uint32_t intc_phandle = qemu_fdt_get_phandle(fdt, nodename);
> > > - cells[cpu * 4 + 0] = cpu_to_be32(intc_phandle);
> > > - cells[cpu * 4 + 1] = cpu_to_be32(IRQ_M_SOFT);
> > > - cells[cpu * 4 + 2] = cpu_to_be32(intc_phandle);
> > > - cells[cpu * 4 + 3] = cpu_to_be32(IRQ_M_TIMER);
> > > - g_free(nodename);
> > > + clint_addr = memmap[SPIKE_CLINT].base +
> > > + (memmap[SPIKE_CLINT].size * socket);
> > > + clint_name = g_strdup_printf("/soc/clint@%lx", clint_addr);
> > > + qemu_fdt_add_subnode(fdt, clint_name);
> > > + qemu_fdt_setprop_string(fdt, clint_name, "compatible", "riscv,clint0");
> > > + qemu_fdt_setprop_cells(fdt, clint_name, "reg",
> > > + 0x0, clint_addr, 0x0, memmap[SPIKE_CLINT].size);
> > > + qemu_fdt_setprop(fdt, clint_name, "interrupts-extended",
> > > + clint_cells, s->soc[socket].num_harts * sizeof(uint32_t)
> > > + * 4);
> > > +
> > > + g_free(clint_name);
> > > + g_free(clint_cells);
> > > + g_free(clust_name);
> > > }
> > > - nodename = g_strdup_printf("/soc/clint@%lx",
> > > - (long)memmap[SPIKE_CLINT].base);
> > > - qemu_fdt_add_subnode(fdt, nodename);
> > > - qemu_fdt_setprop_string(fdt, nodename, "compatible", "riscv,clint0");
> > > - qemu_fdt_setprop_cells(fdt, nodename, "reg",
> > > - 0x0, memmap[SPIKE_CLINT].base,
> > > - 0x0, memmap[SPIKE_CLINT].size);
> > > - qemu_fdt_setprop(fdt, nodename, "interrupts-extended",
> > > - cells, s->soc.num_harts * sizeof(uint32_t) * 4);
> > > - g_free(cells);
> > > - g_free(nodename);
> > >
> > > if (cmdline) {
> > > qemu_fdt_add_subnode(fdt, "/chosen"); @@ -160,23 +179,51 @@
> > > static void create_fdt(SpikeState *s, const struct MemmapEntry
> > > *memmap, static void spike_board_init(MachineState *machine) {
> > > const struct MemmapEntry *memmap = spike_memmap;
> > > -
> > > SpikeState *s = g_new0(SpikeState, 1);
> > > MemoryRegion *system_memory = get_system_memory();
> > > MemoryRegion *main_mem = g_new(MemoryRegion, 1);
> > > MemoryRegion *mask_rom = g_new(MemoryRegion, 1);
> > > int i;
> > > + char *soc_name;
> > > unsigned int smp_cpus = machine->smp.cpus;
> > > -
> > > - /* Initialize SOC */
> > > - object_initialize_child(OBJECT(machine), "soc", &s->soc, sizeof(s->soc),
> > > - TYPE_RISCV_HART_ARRAY, &error_abort, NULL);
> > > - object_property_set_str(OBJECT(&s->soc), machine->cpu_type, "cpu-
> > type",
> > > - &error_abort);
> > > - object_property_set_int(OBJECT(&s->soc), smp_cpus, "num-harts",
> > > - &error_abort);
> > > - object_property_set_bool(OBJECT(&s->soc), true, "realized",
> > > - &error_abort);
> > > + unsigned int base_hartid, cpus_per_socket;
> > > +
> > > + s->num_socs = machine->smp.sockets;
> > > +
> > > + /* Ensure minimum required CPUs per socket */
> > > + if ((smp_cpus / s->num_socs) < SPIKE_CPUS_PER_SOCKET_MIN)
> > > + s->num_socs = 1;
> >
> > Why? It seems like creating single-hart sockets would be a good test case, and
> > I'm pretty sure it's a configuration that we had in embedded systems.
>
> Yes, single-hart sockets are sensible for testing software.
>
> When the "sockets=" sub-option is not provided in the "-smp" command-line
> options, machine->smp.sockets is set to the same value as machine->smp.cpus
> by the smp_parse() function in hw/core/machine.c. This means that by default
> we will always get a single hart per socket. In other words, "-smp 4" will
> give 4 cpus and 4 sockets. This is counter-intuitive for users because when
> "sockets=" is not provided we should default to a single socket irrespective
> of the number of cpus.
>
> I added SPIKE_CPUS_PER_SOCKET_MIN to handle the default case where
> no "sockets=" sub-option is provided.
>
> An alternative approach would be:
> 1. Add more members to struct CpuTopology in include/hw/boards.h
> to let us know whether the "sockets=" option was passed or not
> 2. Update smp_parse() for the new members in struct CpuTopology
> 3. Assume a single-socket machine in the QEMU RISC-V virt and QEMU
> RISC-V spike machines when the "sockets=" option was not passed
>
> Suggestions ??
>
I think it makes sense to just stick to what smp_parse() does. That's
what QEMU users are used to, so we should follow that.
I agree it is strange that specifying `-smp x' will give you max_cpus
sockets with the CPUs split across them, but that's what every other
board (besides x86) does.
Alistair
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: [PATCH 2/4] hw/riscv: spike: Allow creating multiple sockets
2020-05-27 0:38 ` Alistair Francis
@ 2020-05-27 2:54 ` Anup Patel
2020-05-27 3:30 ` Alistair Francis
0 siblings, 1 reply; 19+ messages in thread
From: Anup Patel @ 2020-05-27 2:54 UTC (permalink / raw)
To: Alistair Francis
Cc: Peter Maydell, qemu-riscv, sagark, anup, qemu-devel, Atish Patra,
Palmer Dabbelt, Alistair Francis
> -----Original Message-----
> From: Alistair Francis <alistair23@gmail.com>
> Sent: 27 May 2020 06:08
> To: Anup Patel <Anup.Patel@wdc.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>; Peter Maydell
> <peter.maydell@linaro.org>; qemu-riscv@nongnu.org;
> sagark@eecs.berkeley.edu; anup@brainfault.org; qemu-devel@nongnu.org;
> Atish Patra <Atish.Patra@wdc.com>; Alistair Francis
> <Alistair.Francis@wdc.com>
> Subject: Re: [PATCH 2/4] hw/riscv: spike: Allow creating multiple sockets
>
> On Fri, May 22, 2020 at 3:10 AM Anup Patel <Anup.Patel@wdc.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Palmer Dabbelt <palmer@dabbelt.com>
> > > Sent: 22 May 2020 01:46
> > > To: Anup Patel <Anup.Patel@wdc.com>
> > > Cc: Peter Maydell <peter.maydell@linaro.org>; Alistair Francis
> > > <Alistair.Francis@wdc.com>; sagark@eecs.berkeley.edu; Atish Patra
> > > <Atish.Patra@wdc.com>; anup@brainfault.org; qemu-riscv@nongnu.org;
> > > qemu-devel@nongnu.org; Anup Patel <Anup.Patel@wdc.com>
> > > Subject: Re: [PATCH 2/4] hw/riscv: spike: Allow creating multiple
> > > sockets
> > >
> > > On Fri, 15 May 2020 23:37:44 PDT (-0700), Anup Patel wrote:
> > > > We extend RISC-V spike machine to allow creating a multi-socket
> machine.
> > > > Each RISC-V spike machine socket is a set of HARTs and a CLINT instance.
> > > > Other peripherals are shared between all RISC-V spike machine sockets.
> > > > We also update RISC-V spike machine device tree to treat each
> > > > socket as a NUMA node.
> > > >
> > > > The number of sockets in RISC-V spike machine can be specified
> > > > using the "sockets=" sub-option of QEMU "-smp" command-line
> > > > option. By default, only one socket RISC-V spike machine will be created.
> > > >
> > > > Currently, we only allow creating up to a maximum of 4 sockets
> > > > with a minimum of 2 HARTs per socket. In the future, these limits
> > > > can be changed.
> > > >
> > > > Signed-off-by: Anup Patel <anup.patel@wdc.com>
> > > > ---
> > > > hw/riscv/spike.c | 206 ++++++++++++++++++++++++---------------
> > > > include/hw/riscv/spike.h | 8 +-
> > > > 2 files changed, 133 insertions(+), 81 deletions(-)
> > > >
> > > > [...]
> > > > + unsigned int base_hartid, cpus_per_socket;
> > > > +
> > > > + s->num_socs = machine->smp.sockets;
> > > > +
> > > > + /* Ensure minimum required CPUs per socket */
> > > > + if ((smp_cpus / s->num_socs) < SPIKE_CPUS_PER_SOCKET_MIN)
> > > > + s->num_socs = 1;
> > >
> > > Why? It seems like creating single-hart sockets would be a good
> > > test case, and I'm pretty sure it's a configuration that we had in embedded
> systems.
> >
> > Yes, single-hart sockets are sensible for testing software.
> >
> > When the "sockets=" sub-option is not provided in the "-smp" command-line
> > options, machine->smp.sockets is set to the same value as machine->smp.cpus
> > by the smp_parse() function in hw/core/machine.c. This means that by default
> > we will always get a single hart per socket. In other words, "-smp 4" will
> > give 4 cpus and 4 sockets. This is counter-intuitive for users because
> > when "sockets=" is not provided we should default to a single socket
> > irrespective of the number of cpus.
> >
> > I added SPIKE_CPUS_PER_SOCKET_MIN to handle the default case where no
> > "sockets=" sub-option is provided.
> >
> > An alternative approach would be:
> > 1. Add more members to struct CpuTopology in include/hw/boards.h
> > to let us know whether the "sockets=" option was passed or not
> > 2. Update smp_parse() for the new members in struct CpuTopology
> > 3. Assume a single-socket machine in the QEMU RISC-V virt and QEMU
> > RISC-V spike machines when the "sockets=" option was not passed
> >
> > Suggestions ??
> >
>
> I think it makes sense to just stick to what smp_parse() does. That's what QEMU
> users are used to, so we should follow that.
>
> I agree it is strange that specifying `-smp x' will give you max_cpus sockets
> with the CPUs split across them, but that's what every other board (besides
> x86) does.
So we are fine with SPIKE_CPUS_PER_SOCKET_MIN=2 for now, right ??
Regards,
Anup
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [PATCH 2/4] hw/riscv: spike: Allow creating multiple sockets
2020-05-27 2:54 ` Anup Patel
@ 2020-05-27 3:30 ` Alistair Francis
2020-05-27 3:59 ` Anup Patel
0 siblings, 1 reply; 19+ messages in thread
From: Alistair Francis @ 2020-05-27 3:30 UTC (permalink / raw)
To: Anup Patel
Cc: Peter Maydell, qemu-riscv, sagark, anup, qemu-devel, Atish Patra,
Palmer Dabbelt, Alistair Francis
at all?
Alistair

On Tue, May 26, 2020 at 7:55 PM Anup Patel <Anup.Patel@wdc.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Alistair Francis <alistair23@gmail.com>
> > Sent: 27 May 2020 06:08
> > To: Anup Patel <Anup.Patel@wdc.com>
> > Cc: Palmer Dabbelt <palmer@dabbelt.com>; Peter Maydell
> > <peter.maydell@linaro.org>; qemu-riscv@nongnu.org;
> > sagark@eecs.berkeley.edu; anup@brainfault.org; qemu-devel@nongnu.org;
> > Atish Patra <Atish.Patra@wdc.com>; Alistair Francis
> > <Alistair.Francis@wdc.com>
> > Subject: Re: [PATCH 2/4] hw/riscv: spike: Allow creating multiple sockets
> >
> > On Fri, May 22, 2020 at 3:10 AM Anup Patel <Anup.Patel@wdc.com> wrote:
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Palmer Dabbelt <palmer@dabbelt.com>
> > > > Sent: 22 May 2020 01:46
> > > > To: Anup Patel <Anup.Patel@wdc.com>
> > > > Cc: Peter Maydell <peter.maydell@linaro.org>; Alistair Francis
> > > > <Alistair.Francis@wdc.com>; sagark@eecs.berkeley.edu; Atish Patra
> > > > <Atish.Patra@wdc.com>; anup@brainfault.org; qemu-riscv@nongnu.org;
> > > > qemu-devel@nongnu.org; Anup Patel <Anup.Patel@wdc.com>
> > > > Subject: Re: [PATCH 2/4] hw/riscv: spike: Allow creating multiple
> > > > sockets
> > > >
> > > > On Fri, 15 May 2020 23:37:44 PDT (-0700), Anup Patel wrote:
> > > > > We extend RISC-V spike machine to allow creating a multi-socket
> > machine.
> > > > > Each RISC-V spike machine socket is a set of HARTs and a CLINT instance.
> > > > > Other peripherals are shared between all RISC-V spike machine sockets.
> > > > > We also update RISC-V spike machine device tree to treat each
> > > > > socket as a NUMA node.
> > > > >
> > > > > The number of sockets in RISC-V spike machine can be specified
> > > > > using the "sockets=" sub-option of QEMU "-smp" command-line
> > > > > option. By default, only one socket RISC-V spike machine will be created.
> > > > >
> > > > > Currently, we only allow creating up to a maximum of 4 sockets
> > > > > with a minimum of 2 HARTs per socket. In the future, these limits
> > > > > can be changed.
> > > > >
> > > > > Signed-off-by: Anup Patel <anup.patel@wdc.com>
> > > > > ---
> > > > > hw/riscv/spike.c | 206 ++++++++++++++++++++++++---------------
> > > > > include/hw/riscv/spike.h | 8 +-
> > > > > 2 files changed, 133 insertions(+), 81 deletions(-)
> > > > >
> > > > > diff --git a/hw/riscv/spike.c b/hw/riscv/spike.c index
> > > > > d5e0103d89..f63c57a87c 100644
> > > > > --- a/hw/riscv/spike.c
> > > > > +++ b/hw/riscv/spike.c
> > > > > @@ -64,9 +64,11 @@ static void create_fdt(SpikeState *s, const
> > > > > struct
> > > > MemmapEntry *memmap,
> > > > > uint64_t mem_size, const char *cmdline) {
> > > > > void *fdt;
> > > > > - int cpu;
> > > > > - uint32_t *cells;
> > > > > - char *nodename;
> > > > > + int cpu, socket;
> > > > > + uint32_t *clint_cells;
> > > > > + unsigned long clint_addr;
> > > > > + uint32_t cpu_phandle, intc_phandle, phandle = 1;
> > > > > + char *name, *clint_name, *clust_name, *core_name, *cpu_name,
> > > > > + *intc_name;
> > > > >
> > > > > fdt = s->fdt = create_device_tree(&s->fdt_size);
> > > > > if (!fdt) {
> > > > > @@ -88,68 +90,85 @@ static void create_fdt(SpikeState *s, const
> > > > > struct
> > > > MemmapEntry *memmap,
> > > > > qemu_fdt_setprop_cell(fdt, "/soc", "#size-cells", 0x2);
> > > > > qemu_fdt_setprop_cell(fdt, "/soc", "#address-cells", 0x2);
> > > > >
> > > > > - nodename = g_strdup_printf("/memory@%lx",
> > > > > - (long)memmap[SPIKE_DRAM].base);
> > > > > - qemu_fdt_add_subnode(fdt, nodename);
> > > > > - qemu_fdt_setprop_cells(fdt, nodename, "reg",
> > > > > + name = g_strdup_printf("/memory@%lx",
> > > > (long)memmap[SPIKE_DRAM].base);
> > > > > + qemu_fdt_add_subnode(fdt, name);
> > > > > + qemu_fdt_setprop_cells(fdt, name, "reg",
> > > > > memmap[SPIKE_DRAM].base >> 32, memmap[SPIKE_DRAM].base,
> > > > > mem_size >> 32, mem_size);
> > > > > - qemu_fdt_setprop_string(fdt, nodename, "device_type", "memory");
> > > > > - g_free(nodename);
> > > > > + qemu_fdt_setprop_string(fdt, name, "device_type", "memory");
> > > > > + g_free(name);
> > > > >
> > > > > qemu_fdt_add_subnode(fdt, "/cpus");
> > > > > qemu_fdt_setprop_cell(fdt, "/cpus", "timebase-frequency",
> > > > > SIFIVE_CLINT_TIMEBASE_FREQ);
> > > > > qemu_fdt_setprop_cell(fdt, "/cpus", "#size-cells", 0x0);
> > > > > qemu_fdt_setprop_cell(fdt, "/cpus", "#address-cells", 0x1);
> > > > > + qemu_fdt_add_subnode(fdt, "/cpus/cpu-map");
> > > > >
> > > > > - for (cpu = s->soc.num_harts - 1; cpu >= 0; cpu--) {
> > > > > - nodename = g_strdup_printf("/cpus/cpu@%d", cpu);
> > > > > - char *intc = g_strdup_printf("/cpus/cpu@%d/interrupt-controller",
> > cpu);
> > > > > - char *isa = riscv_isa_string(&s->soc.harts[cpu]);
> > > > > - qemu_fdt_add_subnode(fdt, nodename);
> > > > > + for (socket = (s->num_socs - 1); socket >= 0; socket--) {
> > > > > + clust_name = g_strdup_printf("/cpus/cpu-map/cluster0%d", socket);
> > > > > + qemu_fdt_add_subnode(fdt, clust_name);
> > > > > +
> > > > > + clint_cells = g_new0(uint32_t, s->soc[socket].num_harts
> > > > > + * 4);
> > > > > +
> > > > > + for (cpu = s->soc[socket].num_harts - 1; cpu >= 0; cpu--) {
> > > > > + cpu_phandle = phandle++;
> > > > > +
> > > > > + cpu_name = g_strdup_printf("/cpus/cpu@%d",
> > > > > + s->soc[socket].hartid_base + cpu);
> > > > > + qemu_fdt_add_subnode(fdt, cpu_name);
> > > > > #if defined(TARGET_RISCV32)
> > > > > - qemu_fdt_setprop_string(fdt, nodename, "mmu-type", "riscv,sv32");
> > > > > + qemu_fdt_setprop_string(fdt, cpu_name, "mmu-type",
> > > > > + "riscv,sv32");
> > > > > #else
> > > > > - qemu_fdt_setprop_string(fdt, nodename, "mmu-type", "riscv,sv48");
> > > > > + qemu_fdt_setprop_string(fdt, cpu_name, "mmu-type",
> > > > > + "riscv,sv48");
> > > > > #endif
> > > > > - qemu_fdt_setprop_string(fdt, nodename, "riscv,isa", isa);
> > > > > - qemu_fdt_setprop_string(fdt, nodename, "compatible", "riscv");
> > > > > - qemu_fdt_setprop_string(fdt, nodename, "status", "okay");
> > > > > - qemu_fdt_setprop_cell(fdt, nodename, "reg", cpu);
> > > > > - qemu_fdt_setprop_string(fdt, nodename, "device_type", "cpu");
> > > > > - qemu_fdt_add_subnode(fdt, intc);
> > > > > - qemu_fdt_setprop_cell(fdt, intc, "phandle", 1);
> > > > > - qemu_fdt_setprop_string(fdt, intc, "compatible", "riscv,cpu-intc");
> > > > > - qemu_fdt_setprop(fdt, intc, "interrupt-controller", NULL, 0);
> > > > > - qemu_fdt_setprop_cell(fdt, intc, "#interrupt-cells", 1);
> > > > > - g_free(isa);
> > > > > - g_free(intc);
> > > > > - g_free(nodename);
> > > > > - }
> > > > > + name = riscv_isa_string(&s->soc[socket].harts[cpu]);
> > > > > + qemu_fdt_setprop_string(fdt, cpu_name, "riscv,isa", name);
> > > > > + g_free(name);
> > > > > + qemu_fdt_setprop_string(fdt, cpu_name, "compatible", "riscv");
> > > > > + qemu_fdt_setprop_string(fdt, cpu_name, "status", "okay");
> > > > > + qemu_fdt_setprop_cell(fdt, cpu_name, "reg",
> > > > > + s->soc[socket].hartid_base + cpu);
> > > > > + qemu_fdt_setprop_string(fdt, cpu_name, "device_type", "cpu");
> > > > > + qemu_fdt_setprop_cell(fdt, cpu_name, "phandle",
> > > > > + cpu_phandle);
> > > > > +
> > > > > + intc_name = g_strdup_printf("%s/interrupt-controller", cpu_name);
> > > > > + qemu_fdt_add_subnode(fdt, intc_name);
> > > > > + intc_phandle = phandle++;
> > > > > + qemu_fdt_setprop_cell(fdt, intc_name, "phandle", intc_phandle);
> > > > > + qemu_fdt_setprop_string(fdt, intc_name, "compatible",
> > > > > + "riscv,cpu-intc");
> > > > > + qemu_fdt_setprop(fdt, intc_name, "interrupt-controller", NULL, 0);
> > > > > + qemu_fdt_setprop_cell(fdt, intc_name,
> > > > > + "#interrupt-cells", 1);
> > > > > +
> > > > > + clint_cells[cpu * 4 + 0] = cpu_to_be32(intc_phandle);
> > > > > + clint_cells[cpu * 4 + 1] = cpu_to_be32(IRQ_M_SOFT);
> > > > > + clint_cells[cpu * 4 + 2] = cpu_to_be32(intc_phandle);
> > > > > + clint_cells[cpu * 4 + 3] = cpu_to_be32(IRQ_M_TIMER);
> > > > > +
> > > > > + core_name = g_strdup_printf("%s/core%d", clust_name, cpu);
> > > > > + qemu_fdt_add_subnode(fdt, core_name);
> > > > > + qemu_fdt_setprop_cell(fdt, core_name, "cpu",
> > > > > + cpu_phandle);
> > > > > +
> > > > > + g_free(core_name);
> > > > > + g_free(intc_name);
> > > > > + g_free(cpu_name);
> > > > > + }
> > > > >
> > > > > - cells = g_new0(uint32_t, s->soc.num_harts * 4);
> > > > > - for (cpu = 0; cpu < s->soc.num_harts; cpu++) {
> > > > > - nodename =
> > > > > - g_strdup_printf("/cpus/cpu@%d/interrupt-controller", cpu);
> > > > > - uint32_t intc_phandle = qemu_fdt_get_phandle(fdt, nodename);
> > > > > - cells[cpu * 4 + 0] = cpu_to_be32(intc_phandle);
> > > > > - cells[cpu * 4 + 1] = cpu_to_be32(IRQ_M_SOFT);
> > > > > - cells[cpu * 4 + 2] = cpu_to_be32(intc_phandle);
> > > > > - cells[cpu * 4 + 3] = cpu_to_be32(IRQ_M_TIMER);
> > > > > - g_free(nodename);
> > > > > + clint_addr = memmap[SPIKE_CLINT].base +
> > > > > + (memmap[SPIKE_CLINT].size * socket);
> > > > > + clint_name = g_strdup_printf("/soc/clint@%lx", clint_addr);
> > > > > + qemu_fdt_add_subnode(fdt, clint_name);
> > > > > + qemu_fdt_setprop_string(fdt, clint_name, "compatible", "riscv,clint0");
> > > > > + qemu_fdt_setprop_cells(fdt, clint_name, "reg",
> > > > > + 0x0, clint_addr, 0x0, memmap[SPIKE_CLINT].size);
> > > > > + qemu_fdt_setprop(fdt, clint_name, "interrupts-extended",
> > > > > + clint_cells, s->soc[socket].num_harts * sizeof(uint32_t) * 4);
> > > > > +
> > > > > + g_free(clint_name);
> > > > > + g_free(clint_cells);
> > > > > + g_free(clust_name);
> > > > > }
> > > > > - nodename = g_strdup_printf("/soc/clint@%lx",
> > > > > - (long)memmap[SPIKE_CLINT].base);
> > > > > - qemu_fdt_add_subnode(fdt, nodename);
> > > > > - qemu_fdt_setprop_string(fdt, nodename, "compatible", "riscv,clint0");
> > > > > - qemu_fdt_setprop_cells(fdt, nodename, "reg",
> > > > > - 0x0, memmap[SPIKE_CLINT].base,
> > > > > - 0x0, memmap[SPIKE_CLINT].size);
> > > > > - qemu_fdt_setprop(fdt, nodename, "interrupts-extended",
> > > > > - cells, s->soc.num_harts * sizeof(uint32_t) * 4);
> > > > > - g_free(cells);
> > > > > - g_free(nodename);
> > > > >
> > > > > if (cmdline) {
> > > > > qemu_fdt_add_subnode(fdt, "/chosen");
> > > > > @@ -160,23 +179,51 @@ static void create_fdt(SpikeState *s, const struct MemmapEntry *memmap,
> > > > > static void spike_board_init(MachineState *machine) {
> > > > > const struct MemmapEntry *memmap = spike_memmap;
> > > > > -
> > > > > SpikeState *s = g_new0(SpikeState, 1);
> > > > > MemoryRegion *system_memory = get_system_memory();
> > > > > MemoryRegion *main_mem = g_new(MemoryRegion, 1);
> > > > > MemoryRegion *mask_rom = g_new(MemoryRegion, 1);
> > > > > int i;
> > > > > + char *soc_name;
> > > > > unsigned int smp_cpus = machine->smp.cpus;
> > > > > -
> > > > > - /* Initialize SOC */
> > > > > - object_initialize_child(OBJECT(machine), "soc", &s->soc, sizeof(s->soc),
> > > > > - TYPE_RISCV_HART_ARRAY, &error_abort, NULL);
> > > > > - object_property_set_str(OBJECT(&s->soc), machine->cpu_type, "cpu-type",
> > > > > - &error_abort);
> > > > > - object_property_set_int(OBJECT(&s->soc), smp_cpus, "num-harts",
> > > > > - &error_abort);
> > > > > - object_property_set_bool(OBJECT(&s->soc), true, "realized",
> > > > > - &error_abort);
> > > > > + unsigned int base_hartid, cpus_per_socket;
> > > > > +
> > > > > + s->num_socs = machine->smp.sockets;
> > > > > +
> > > > > + /* Ensure minimum required CPUs per socket */
> > > > > + if ((smp_cpus / s->num_socs) < SPIKE_CPUS_PER_SOCKET_MIN)
> > > > > + s->num_socs = 1;
> > > >
> > > > Why? It seems like creating single-hart sockets would be a good
> > > > test case, and I'm pretty sure it's a configuration that we had in embedded
> > systems.
> > >
> > > Yes, single-hart sockets are sensible for testing software.
> > >
> > > When the "sockets=" sub-option is not provided in the "-smp" command line
> > > options, machine->smp.sockets is set to the same value as machine->smp.cpus
> > > by the smp_parse() function in hw/core/machine.c. This means that by default
> > > we will always get a single hart per socket. In other words, "-smp 4" will
> > > be 4 cpus and 4 sockets. This is counter-intuitive for users because
> > > when "sockets=" is not provided we should default to a single socket
> > > irrespective of the number of cpus.
> > >
> > > I had added SPIKE_CPUS_PER_SOCKET_MIN to handle the default case when
> > > no "sockets=" sub-option is provided.
> > >
> > > Alternate approach will be:
> > > 1. Add more members in struct CpuTopology of include/hw/boards.h
> > > to help us know whether "sockets=" option was passed or not 2.
> > > Update smp_parse() for new members in struct CpuTopology 3. Assume
> > > single-socket machine in QEMU RISC-V virt and QEMU
> > > RISC-V spike machines when "sockets=" option was not passed
> > >
> > > Suggestions ??
> > >
> >
> > I think it makes sense to just stick to what smp_parse() does. That's what QEMU
> > users are used to so we should follow that.
> >
> > I agree it is strange that specifying `-smp x' will get you max_cpus number of
> > sockets with the CPUs split across them, but that's what every other board (besides
> > x86) does.
>
> So we are fine with SPIKE_CPUS_PER_SOCKET_MIN=2 for now, right ??
Why do we need SPIKE_CPUS_PER_SOCKET_MIN at all?
Alistair
>
> Regards,
> Anup
^ permalink raw reply [flat|nested] 19+ messages in thread
* RE: [PATCH 2/4] hw/riscv: spike: Allow creating multiple sockets
2020-05-27 3:30 ` Alistair Francis
@ 2020-05-27 3:59 ` Anup Patel
0 siblings, 0 replies; 19+ messages in thread
From: Anup Patel @ 2020-05-27 3:59 UTC (permalink / raw)
To: Alistair Francis
Cc: Peter Maydell, qemu-riscv, sagark, anup, qemu-devel, Atish Patra,
Palmer Dabbelt, Alistair Francis
> -----Original Message-----
> From: Alistair Francis <alistair23@gmail.com>
> Sent: 27 May 2020 09:00
> To: Anup Patel <Anup.Patel@wdc.com>
> Cc: Palmer Dabbelt <palmer@dabbelt.com>; Peter Maydell
> <peter.maydell@linaro.org>; qemu-riscv@nongnu.org;
> sagark@eecs.berkeley.edu; anup@brainfault.org; qemu-devel@nongnu.org;
> Atish Patra <Atish.Patra@wdc.com>; Alistair Francis
> <Alistair.Francis@wdc.com>
> Subject: Re: [PATCH 2/4] hw/riscv: spike: Allow creating multiple sockets
>
> Why do we need SPIKE_CPUS_PER_SOCKET_MIN at all?
>
> Alistair
>
> On Tue, May 26, 2020 at 7:55 PM Anup Patel <Anup.Patel@wdc.com>
> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: Alistair Francis <alistair23@gmail.com>
> > > Sent: 27 May 2020 06:08
> > > To: Anup Patel <Anup.Patel@wdc.com>
> > > Cc: Palmer Dabbelt <palmer@dabbelt.com>; Peter Maydell
> > > <peter.maydell@linaro.org>; qemu-riscv@nongnu.org;
> > > sagark@eecs.berkeley.edu; anup@brainfault.org;
> > > qemu-devel@nongnu.org; Atish Patra <Atish.Patra@wdc.com>; Alistair
> > > Francis <Alistair.Francis@wdc.com>
> > > Subject: Re: [PATCH 2/4] hw/riscv: spike: Allow creating multiple
> > > sockets
> > >
> > > On Fri, May 22, 2020 at 3:10 AM Anup Patel <Anup.Patel@wdc.com> wrote:
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Palmer Dabbelt <palmer@dabbelt.com>
> > > > > Sent: 22 May 2020 01:46
> > > > > To: Anup Patel <Anup.Patel@wdc.com>
> > > > > Cc: Peter Maydell <peter.maydell@linaro.org>; Alistair Francis
> > > > > <Alistair.Francis@wdc.com>; sagark@eecs.berkeley.edu; Atish
> > > > > Patra <Atish.Patra@wdc.com>; anup@brainfault.org;
> > > > > qemu-riscv@nongnu.org; qemu-devel@nongnu.org; Anup Patel
> > > > > <Anup.Patel@wdc.com>
> > > > > Subject: Re: [PATCH 2/4] hw/riscv: spike: Allow creating
> > > > > multiple sockets
> > > > >
> > > > > On Fri, 15 May 2020 23:37:44 PDT (-0700), Anup Patel wrote:
> > > > > > We extend RISC-V spike machine to allow creating a multi-socket machine.
> > > > > > Each RISC-V spike machine socket is a set of HARTs and a CLINT instance.
> > > > > > Other peripherals are shared between all RISC-V spike machine sockets.
> > > > > > We also update RISC-V spike machine device tree to treat each
> > > > > > socket as a NUMA node.
> > > > > >
> > > > > > The number of sockets in RISC-V spike machine can be specified
> > > > > > using the "sockets=" sub-option of the QEMU "-smp" command-line
> > > > > > option. By default, only a single-socket RISC-V spike machine will be created.
> > > > > >
> > > > > > Currently, we only allow creating up to a maximum of 4 sockets with a
> > > > > > minimum of 2 HARTs per socket. In future, these limits can be changed.
> > > > > >
> > > > > > Signed-off-by: Anup Patel <anup.patel@wdc.com>
> > > > > > ---
> > > > > > hw/riscv/spike.c | 206 ++++++++++++++++++++++++---------------
> > > > > > include/hw/riscv/spike.h | 8 +-
> > > > > > 2 files changed, 133 insertions(+), 81 deletions(-)
> > > > > >
> > > > > > diff --git a/hw/riscv/spike.c b/hw/riscv/spike.c index
> > > > > > d5e0103d89..f63c57a87c 100644
> > > > > > --- a/hw/riscv/spike.c
> > > > > > +++ b/hw/riscv/spike.c
> > > > > > @@ -64,9 +64,11 @@ static void create_fdt(SpikeState *s, const
> > > > > > struct
> > > > > MemmapEntry *memmap,
> > > > > > uint64_t mem_size, const char *cmdline) {
> > > > > > void *fdt;
> > > > > > - int cpu;
> > > > > > - uint32_t *cells;
> > > > > > - char *nodename;
> > > > > > + int cpu, socket;
> > > > > > + uint32_t *clint_cells;
> > > > > > + unsigned long clint_addr;
> > > > > > + uint32_t cpu_phandle, intc_phandle, phandle = 1;
> > > > > > + char *name, *clint_name, *clust_name, *core_name,
> > > > > > + *cpu_name, *intc_name;
> > > > > >
> > > > > > fdt = s->fdt = create_device_tree(&s->fdt_size);
> > > > > > if (!fdt) {
> > > > > > @@ -88,68 +90,85 @@ static void create_fdt(SpikeState *s,
> > > > > > const struct
> > > > > MemmapEntry *memmap,
> > > > > > qemu_fdt_setprop_cell(fdt, "/soc", "#size-cells", 0x2);
> > > > > > qemu_fdt_setprop_cell(fdt, "/soc", "#address-cells",
> > > > > > 0x2);
> > > > > >
> > > > > > - nodename = g_strdup_printf("/memory@%lx",
> > > > > > - (long)memmap[SPIKE_DRAM].base);
> > > > > > - qemu_fdt_add_subnode(fdt, nodename);
> > > > > > - qemu_fdt_setprop_cells(fdt, nodename, "reg",
> > > > > > + name = g_strdup_printf("/memory@%lx",
> > > > > (long)memmap[SPIKE_DRAM].base);
> > > > > > + qemu_fdt_add_subnode(fdt, name);
> > > > > > + qemu_fdt_setprop_cells(fdt, name, "reg",
> > > > > > memmap[SPIKE_DRAM].base >> 32,
> memmap[SPIKE_DRAM].base,
> > > > > > mem_size >> 32, mem_size);
> > > > > > - qemu_fdt_setprop_string(fdt, nodename, "device_type",
> "memory");
> > > > > > - g_free(nodename);
> > > > > > + qemu_fdt_setprop_string(fdt, name, "device_type", "memory");
> > > > > > + g_free(name);
> > > > > >
> > > > > > qemu_fdt_add_subnode(fdt, "/cpus");
> > > > > > qemu_fdt_setprop_cell(fdt, "/cpus", "timebase-frequency",
> > > > > > SIFIVE_CLINT_TIMEBASE_FREQ);
> > > > > > qemu_fdt_setprop_cell(fdt, "/cpus", "#size-cells", 0x0);
> > > > > > qemu_fdt_setprop_cell(fdt, "/cpus", "#address-cells",
> > > > > > 0x1);
> > > > > > + qemu_fdt_add_subnode(fdt, "/cpus/cpu-map");
> > > > > >
> > > > > > - for (cpu = s->soc.num_harts - 1; cpu >= 0; cpu--) {
> > > > > > - nodename = g_strdup_printf("/cpus/cpu@%d", cpu);
> > > > > > - char *intc = g_strdup_printf("/cpus/cpu@%d/interrupt-controller", cpu);
> > > > > > - char *isa = riscv_isa_string(&s->soc.harts[cpu]);
> > > > > > - qemu_fdt_add_subnode(fdt, nodename);
> > > > > > + for (socket = (s->num_socs - 1); socket >= 0; socket--) {
> > > > > > + clust_name = g_strdup_printf("/cpus/cpu-map/cluster0%d", socket);
> > > > > > + qemu_fdt_add_subnode(fdt, clust_name);
> > > > > > +
> > > > > > + clint_cells = g_new0(uint32_t, s->soc[socket].num_harts * 4);
> > > > > > +
> > > > > > + for (cpu = s->soc[socket].num_harts - 1; cpu >= 0; cpu--) {
> > > > > > + cpu_phandle = phandle++;
> > > > > > +
> > > > > > + cpu_name = g_strdup_printf("/cpus/cpu@%d",
> > > > > > + s->soc[socket].hartid_base + cpu);
> > > > > > + qemu_fdt_add_subnode(fdt, cpu_name);
> > > > > > #if defined(TARGET_RISCV32)
> > > > > > - qemu_fdt_setprop_string(fdt, nodename, "mmu-type", "riscv,sv32");
> > > > > > + qemu_fdt_setprop_string(fdt, cpu_name,
> > > > > > + "mmu-type", "riscv,sv32");
> > > > > > #else
> > > > > > - qemu_fdt_setprop_string(fdt, nodename, "mmu-type", "riscv,sv48");
> > > > > > + qemu_fdt_setprop_string(fdt, cpu_name,
> > > > > > + "mmu-type", "riscv,sv48");
> > > > > > #endif
> > > > > > - qemu_fdt_setprop_string(fdt, nodename, "riscv,isa", isa);
> > > > > > - qemu_fdt_setprop_string(fdt, nodename, "compatible", "riscv");
> > > > > > - qemu_fdt_setprop_string(fdt, nodename, "status", "okay");
> > > > > > - qemu_fdt_setprop_cell(fdt, nodename, "reg", cpu);
> > > > > > - qemu_fdt_setprop_string(fdt, nodename, "device_type", "cpu");
> > > > > > - qemu_fdt_add_subnode(fdt, intc);
> > > > > > - qemu_fdt_setprop_cell(fdt, intc, "phandle", 1);
> > > > > > - qemu_fdt_setprop_string(fdt, intc, "compatible", "riscv,cpu-intc");
> > > > > > - qemu_fdt_setprop(fdt, intc, "interrupt-controller", NULL, 0);
> > > > > > - qemu_fdt_setprop_cell(fdt, intc, "#interrupt-cells", 1);
> > > > > > - g_free(isa);
> > > > > > - g_free(intc);
> > > > > > - g_free(nodename);
> > > > > > - }
> > > > > > + name = riscv_isa_string(&s->soc[socket].harts[cpu]);
> > > > > > + qemu_fdt_setprop_string(fdt, cpu_name, "riscv,isa", name);
> > > > > > + g_free(name);
> > > > > > + qemu_fdt_setprop_string(fdt, cpu_name, "compatible", "riscv");
> > > > > > + qemu_fdt_setprop_string(fdt, cpu_name, "status", "okay");
> > > > > > + qemu_fdt_setprop_cell(fdt, cpu_name, "reg",
> > > > > > + s->soc[socket].hartid_base + cpu);
> > > > > > + qemu_fdt_setprop_string(fdt, cpu_name, "device_type", "cpu");
> > > > > > + qemu_fdt_setprop_cell(fdt, cpu_name, "phandle",
> > > > > > + cpu_phandle);
> > > > > > +
> > > > > > + intc_name = g_strdup_printf("%s/interrupt-controller", cpu_name);
> > > > > > + qemu_fdt_add_subnode(fdt, intc_name);
> > > > > > + intc_phandle = phandle++;
> > > > > > + qemu_fdt_setprop_cell(fdt, intc_name, "phandle", intc_phandle);
> > > > > > + qemu_fdt_setprop_string(fdt, intc_name, "compatible",
> > > > > > + "riscv,cpu-intc");
> > > > > > + qemu_fdt_setprop(fdt, intc_name, "interrupt-controller", NULL, 0);
> > > > > > + qemu_fdt_setprop_cell(fdt, intc_name,
> > > > > > + "#interrupt-cells", 1);
> > > > > > +
> > > > > > + clint_cells[cpu * 4 + 0] = cpu_to_be32(intc_phandle);
> > > > > > + clint_cells[cpu * 4 + 1] = cpu_to_be32(IRQ_M_SOFT);
> > > > > > + clint_cells[cpu * 4 + 2] = cpu_to_be32(intc_phandle);
> > > > > > + clint_cells[cpu * 4 + 3] = cpu_to_be32(IRQ_M_TIMER);
> > > > > > +
> > > > > > + core_name = g_strdup_printf("%s/core%d", clust_name, cpu);
> > > > > > + qemu_fdt_add_subnode(fdt, core_name);
> > > > > > + qemu_fdt_setprop_cell(fdt, core_name, "cpu",
> > > > > > + cpu_phandle);
> > > > > > +
> > > > > > + g_free(core_name);
> > > > > > + g_free(intc_name);
> > > > > > + g_free(cpu_name);
> > > > > > + }
> > > > > >
> > > > > > - cells = g_new0(uint32_t, s->soc.num_harts * 4);
> > > > > > - for (cpu = 0; cpu < s->soc.num_harts; cpu++) {
> > > > > > - nodename =
> > > > > > - g_strdup_printf("/cpus/cpu@%d/interrupt-controller", cpu);
> > > > > > - uint32_t intc_phandle = qemu_fdt_get_phandle(fdt, nodename);
> > > > > > - cells[cpu * 4 + 0] = cpu_to_be32(intc_phandle);
> > > > > > - cells[cpu * 4 + 1] = cpu_to_be32(IRQ_M_SOFT);
> > > > > > - cells[cpu * 4 + 2] = cpu_to_be32(intc_phandle);
> > > > > > - cells[cpu * 4 + 3] = cpu_to_be32(IRQ_M_TIMER);
> > > > > > - g_free(nodename);
> > > > > > + clint_addr = memmap[SPIKE_CLINT].base +
> > > > > > + (memmap[SPIKE_CLINT].size * socket);
> > > > > > + clint_name = g_strdup_printf("/soc/clint@%lx", clint_addr);
> > > > > > + qemu_fdt_add_subnode(fdt, clint_name);
> > > > > > + qemu_fdt_setprop_string(fdt, clint_name, "compatible", "riscv,clint0");
> > > > > > + qemu_fdt_setprop_cells(fdt, clint_name, "reg",
> > > > > > + 0x0, clint_addr, 0x0, memmap[SPIKE_CLINT].size);
> > > > > > + qemu_fdt_setprop(fdt, clint_name, "interrupts-extended",
> > > > > > + clint_cells, s->soc[socket].num_harts * sizeof(uint32_t) * 4);
> > > > > > +
> > > > > > + g_free(clint_name);
> > > > > > + g_free(clint_cells);
> > > > > > + g_free(clust_name);
> > > > > > }
> > > > > > - nodename = g_strdup_printf("/soc/clint@%lx",
> > > > > > - (long)memmap[SPIKE_CLINT].base);
> > > > > > - qemu_fdt_add_subnode(fdt, nodename);
> > > > > > - qemu_fdt_setprop_string(fdt, nodename, "compatible", "riscv,clint0");
> > > > > > - qemu_fdt_setprop_cells(fdt, nodename, "reg",
> > > > > > - 0x0, memmap[SPIKE_CLINT].base,
> > > > > > - 0x0, memmap[SPIKE_CLINT].size);
> > > > > > - qemu_fdt_setprop(fdt, nodename, "interrupts-extended",
> > > > > > - cells, s->soc.num_harts * sizeof(uint32_t) * 4);
> > > > > > - g_free(cells);
> > > > > > - g_free(nodename);
> > > > > >
> > > > > > if (cmdline) {
> > > > > > qemu_fdt_add_subnode(fdt, "/chosen");
> > > > > > @@ -160,23 +179,51 @@ static void create_fdt(SpikeState *s, const struct MemmapEntry *memmap,
> > > > > > static void spike_board_init(MachineState *machine) {
> > > > > > const struct MemmapEntry *memmap = spike_memmap;
> > > > > > -
> > > > > > SpikeState *s = g_new0(SpikeState, 1);
> > > > > > MemoryRegion *system_memory = get_system_memory();
> > > > > > MemoryRegion *main_mem = g_new(MemoryRegion, 1);
> > > > > > MemoryRegion *mask_rom = g_new(MemoryRegion, 1);
> > > > > > int i;
> > > > > > + char *soc_name;
> > > > > > unsigned int smp_cpus = machine->smp.cpus;
> > > > > > -
> > > > > > - /* Initialize SOC */
> > > > > > - object_initialize_child(OBJECT(machine), "soc", &s->soc, sizeof(s->soc),
> > > > > > - TYPE_RISCV_HART_ARRAY, &error_abort, NULL);
> > > > > > - object_property_set_str(OBJECT(&s->soc), machine->cpu_type, "cpu-type",
> > > > > > - &error_abort);
> > > > > > - object_property_set_int(OBJECT(&s->soc), smp_cpus, "num-harts",
> > > > > > - &error_abort);
> > > > > > - object_property_set_bool(OBJECT(&s->soc), true, "realized",
> > > > > > - &error_abort);
> > > > > > + unsigned int base_hartid, cpus_per_socket;
> > > > > > +
> > > > > > + s->num_socs = machine->smp.sockets;
> > > > > > +
> > > > > > + /* Ensure minimum required CPUs per socket */
> > > > > > + if ((smp_cpus / s->num_socs) < SPIKE_CPUS_PER_SOCKET_MIN)
> > > > > > + s->num_socs = 1;
> > > > >
> > > > > Why? It seems like creating single-hart sockets would be a good
> > > > > test case, and I'm pretty sure it's a configuration that we had
> > > > > in embedded
> > > systems.
> > > >
> > > > Yes, single-hart sockets are sensible for testing software.
> > > >
> > > > When the "sockets=" sub-option is not provided in the "-smp" command
> > > > line options, machine->smp.sockets is set to the same value as
> > > > machine->smp.cpus by the smp_parse() function in hw/core/machine.c.
> > > > This means that by default we will always get a single hart per
> > > > socket. In other words, "-smp 4" will be 4 cpus and 4 sockets. This
> > > > is counter-intuitive for users because when "sockets=" is not
> > > > provided we should default to a single socket irrespective of the
> > > > number of cpus.
> > > >
> > > > I had added SPIKE_CPUS_PER_SOCKET_MIN to handle the default case
> > > > when no "sockets=" sub-option is provided.
> > > >
> > > > Alternate approach will be:
> > > > 1. Add more members in struct CpuTopology of include/hw/boards.h
> > > > to help us know whether "sockets=" option was passed or not 2.
> > > > Update smp_parse() for new members in struct CpuTopology 3. Assume
> > > > single-socket machine in QEMU RISC-V virt and QEMU
> > > > RISC-V spike machines when "sockets=" option was not passed
> > > >
> > > > Suggestions ??
> > > >
> > >
> > > I think it makes sense to just stick to what smp_parse() does.
> > > That's what QEMU users are used to so we should follow that.
> > >
> > > I agree it is strange that specifying `-smp x' will get you max_cpus
> > > number of sockets with the CPUs split across them, but that's what
> > > every other board (besides x86) does.
> >
> > So we are fine with SPIKE_CPUS_PER_SOCKET_MIN=2 for now, right ??
>
> Why do we need SPIKE_CPUS_PER_SOCKET_MIN at all?
We are creating a separate CLINT and PLIC for each socket but the
VirtIO and other devices can only connect to one PLIC instance.
Now if we have one CPU/HART per socket (by default), then only
one CPU can take all the VirtIO interrupts, and we can't even
change the irq affinity.
The SPIKE_CPUS_PER_SOCKET_MIN=2 check ensures that when
smp_parse() gives us one CPU/HART per socket (by default),
we ignore the "sockets" value and force just one socket. In
other words, SPIKE_CPUS_PER_SOCKET_MIN is a work-around for
the non-intuitive behavior of smp_parse().
My previous suggestion was to add a "sockets_available" field to
"struct CpuTopology". The other fields of "struct CpuTopology"
will remain as-is (no change in semantics) so it will work fine
for other architectures. The "sockets_available" field will tell
us whether the "sockets" sub-option was specified on the command
line (or not). Using the "sockets_available" field we can remove
the SPIKE_CPUS_PER_SOCKET_MIN=2 work-around. In fact, we can
remove SPIKE_CPUS_PER_SOCKET_MIN entirely.
Regards,
Anup
end of thread, other threads:[~2020-05-27 4:01 UTC | newest]
Thread overview: 19+ messages
2020-05-16 6:37 [PATCH 0/4] RISC-V multi-socket support Anup Patel
2020-05-16 6:37 ` [PATCH 1/4] hw/riscv: Allow creating multiple instances of CLINT Anup Patel
2020-05-19 21:21 ` Alistair Francis
2020-05-21 20:16 ` Palmer Dabbelt
2020-05-16 6:37 ` [PATCH 2/4] hw/riscv: spike: Allow creating multiple sockets Anup Patel
2020-05-21 20:16 ` Palmer Dabbelt
2020-05-22 10:09 ` Anup Patel
2020-05-27 0:38 ` Alistair Francis
2020-05-27 2:54 ` Anup Patel
2020-05-27 3:30 ` Alistair Francis
2020-05-27 3:59 ` Anup Patel
2020-05-16 6:37 ` [PATCH 3/4] hw/riscv: Allow creating multiple instances of PLIC Anup Patel
2020-05-21 20:16 ` Palmer Dabbelt
2020-05-21 21:59 ` Alistair Francis
2020-05-16 6:37 ` [PATCH 4/4] hw/riscv: virt: Allow creating multiple sockets Anup Patel
2020-05-21 20:16 ` Palmer Dabbelt
2020-05-16 11:58 ` [PATCH 0/4] RISC-V multi-socket support no-reply
2020-05-19 21:20 ` Alistair Francis
2020-05-20 8:50 ` Anup Patel