From mboxrd@z Thu Jan  1 00:00:00 1970
Subject: Re: [RFC PATCH 03/11] hw/intc: Add CLIC device
From: LIU Zhiwei <zhiwei_liu@c-sky.com>
To: Frank Chang
Cc: Palmer Dabbelt, Alistair Francis, "open list:RISC-V", wxy194768@alibaba-inc.com, "qemu-devel@nongnu.org Developers"
Date: Mon, 28 Jun 2021 16:11:18 +0800
Message-ID: <26909592-51e0-df55-dff2-40f5dbc90085@c-sky.com>
In-Reply-To: <6c9c894c-6f85-c6bb-a372-d69e15896571@c-sky.com>
References: <20210409074857.166082-1-zhiwei_liu@c-sky.com> <20210409074857.166082-4-zhiwei_liu@c-sky.com> <52225a77-c509-9999-9d8a-942ea407f44d@c-sky.com> <27879b9f-bffa-96c9-a8b2-033eb0a0be4c@c-sky.com> <6c9c894c-6f85-c6bb-a372-d69e15896571@c-sky.com>
List-Id: qemu-devel@nongnu.org


On 2021/6/28 4:07 PM, Frank Chang wrote:
LIU Zhiwei <zhiwei_liu@c-sky.com> wrote on Monday, June 28, 2021 at 4:03 PM:


On 2021/6/28 3:49 PM, Frank Chang wrote:
LIU Zhiwei <zhiwei_liu@c-sky.com> wrote on Monday, June 28, 2021 at 3:40 PM:


On 2021/6/28 3:23 PM, Frank Chang wrote:
LIU Zhiwei <zhiwei_liu@c-sky.com> wrote on Monday, June 28, 2021 at 3:17 PM:


On 2021/6/26 8:56 PM, Frank Chang wrote:
On Wed, Jun 16, 2021 at 10:56 AM LIU Zhiwei <zhiwei_liu@c-sky.com> wrote:


On 6/13/21 6:10 PM, Frank Chang wrote:
LIU Zhiwei <zhiwei_liu@c-sky.com> wrote on Friday, April 9, 2021 at 3:57 PM:

+static void riscv_clic_realize(DeviceState *dev, Error **errp)
+{
+    RISCVCLICState *clic = RISCV_CLIC(dev);
+    size_t harts_x_sources = clic->num_harts * clic->num_sources;
+    int irqs, i;
+
+    if (clic->prv_s && clic->prv_u) {
+        irqs = 3 * harts_x_sources;
+    } else if (clic->prv_s || clic->prv_u) {
+        irqs = 2 * harts_x_sources;
+    } else {
+        irqs = harts_x_sources;
+    }
+
+    clic->clic_size = irqs * 4 + 0x1000;
+    memory_region_init_io(&clic->mmio, OBJECT(dev), &riscv_clic_ops, clic,
+                          TYPE_RISCV_CLIC, clic->clic_size);
+
+    clic->clicintip = g_new0(uint8_t, irqs);
+    clic->clicintie = g_new0(uint8_t, irqs);
+    clic->clicintattr = g_new0(uint8_t, irqs);
+    clic->clicintctl = g_new0(uint8_t, irqs);
+    clic->active_list = g_new0(CLICActiveInterrupt, irqs);
+    clic->active_count = g_new0(size_t, clic->num_harts);
+    clic->exccode = g_new0(uint32_t, clic->num_harts);
+    clic->cpu_irqs = g_new0(qemu_irq, clic->num_harts);
+    sysbus_init_mmio(SYS_BUS_DEVICE(dev), &clic->mmio);
+
+    /* Allocate irq through gpio, so that we can use qtest */
+    qdev_init_gpio_in(dev, riscv_clic_set_irq, irqs);
+    qdev_init_gpio_out(dev, clic->cpu_irqs, clic->num_harts);
+
+    for (i = 0; i < clic->num_harts; i++) {
+        RISCVCPU *cpu = RISCV_CPU(qemu_get_cpu(i));

As the spec says:
  Smaller single-core systems might have only a CLIC,
  while multicore systems might have a CLIC per-core and a single shared PLIC.
  The PLIC xeip signals are treated as hart-local interrupt sources by the CLIC at each core.

It looks like it's possible to have one CLIC instance per core.

If you want to deliver an interrupt to one hart, you should encode the IRQ from the interrupt number,
the hart number, and the interrupt target privilege, then set the IRQ.

I think how to calculate the hart number is the task of the PLIC, and it can make use of "hartid-base"
to calculate it.

Thanks,
Zhiwei


Hi Zhiwei,

What I mean is if there are multiple CLIC instances, each per core (CLIC spec allows that).
If you always bind the CLIC to CPUs starting from index 0,
it is impossible for a CLIC instance to bind CPUs starting from any other index.

For example, in a 4-core system, it's possible to have 4 CLIC instances:
  * CLIC 0 binds to CPU 0
  * CLIC 1 binds to CPU 1
  * CLIC 2 binds to CPU 2
  * CLIC 3 binds to CPU 3

and that's why I said it may be worth passing an extra "hartid-base" property, just like the PLIC.
I know most hartids are calculated from the requesting address, so most hartid usages should be fine.
But I saw two places using qemu_get_cpu(),
which may cause a problem for the scenario I describe above:
i.e. riscv_clic_next_interrupt() and riscv_clic_realize(), as in my original reply.

So what's the problem here?

Currently all cores share the same CLIC instance. Do you want to give each core a CLIC instance?

Yes, that's what I mean; it should be supported, as the spec says[1]:
  The CLIC complements the PLIC. Smaller single-core systems might have only a CLIC,
  while multicore systems might have a CLIC per-core and a single shared PLIC.
  The PLIC xeip signals are treated as hart-local interrupt sources by the CLIC at each core.


Thanks,
Frank Chang
 

If we give each core a CLIC instance, it is not convenient to access the shared memory region, such as 0x0-0x1000.
Which CLIC instance should contain this memory region?

What do you mean by: "access the shared memory" here?

It means the cliccfg or clicinfo, which should be shared by all CLIC instances.

If there are multiple CLIC instances, shouldn't they have their own base addresses?
So I do not understand how cliccfg and clicinfo would be shared by all CLIC instances. (Or they should?)

We once had a discussion on tech-fast-interrupt. The reply from the chair of the fast interrupt task group was:

"The first part (address 0x0000-0x0FFF) which contains cliccfg/clicinfo/clicinttrig should be global since only one copy of the configuration setting is enough.
On the other hand, the latter part (0x1000-0x4FFF) which contains control bits for individual interrupt should be one copy per hart"

Thanks,
Zhiwei

Each CLIC instance will manage its own cliccfg and clicinfo.

Thanks,
Frank Chang

Thanks,
Zhiwei

I thought the memory region is defined during CLIC's creation?
So it should depend on the platform that creates CLIC instances.

Thanks,
Frank Chang
 

Thanks,
Zhiwei


Thanks,
Zhiwei

Regards,
Frank Chang
 

However, if you bind the CPU references starting from index i = 0,
it's not possible for each per-core CLIC to bind its own CPU instance in a multicore system,
as they all have to bind starting from CPU 0.

I'm not sure whether adding a new "hartid-base" property, like the one the SiFive PLIC
implements, would be a good idea or not.


Regards,
Frank Chang
 
+        qemu_irq irq = qemu_allocate_irq(riscv_clic_cpu_irq_handler,
+                                         &cpu->env, 1);
+        qdev_connect_gpio_out(dev, i, irq);
+        cpu->env.clic = clic;
+    }
+}
+


--------------B7E2ABE138837C53364851DF--