From: Frank Chang <frank.chang@sifive.com>
Date: Mon, 28 Jun 2021 16:19:30 +0800
Subject: Re: [RFC PATCH 03/11] hw/intc: Add CLIC device
To: LIU Zhiwei <zhiwei_liu@c-sky.com>
Cc: "open list:RISC-V" <qemu-riscv@nongnu.org>, "qemu-devel@nongnu.org Developers", wxy194768@alibaba-inc.com, Alistair Francis, Palmer Dabbelt
LIU Zhiwei wrote on Mon, 28 Jun 2021 at 16:12:

>
> On 2021/6/28 at 16:07, Frank Chang wrote:
>
> LIU Zhiwei wrote on Mon, 28 Jun 2021 at 16:03:
>
>>
>> On 2021/6/28 at 15:49, Frank Chang wrote:
>>
>> LIU Zhiwei wrote on Mon, 28 Jun 2021 at 15:40:
>>
>>>
>>> On 2021/6/28 at 15:23, Frank Chang wrote:
>>>
>>> LIU Zhiwei wrote on Mon, 28 Jun 2021 at 15:17:
>>>
>>>>
>>>> On 2021/6/26 at 20:56, Frank Chang wrote:
>>>>
>>>> On Wed, Jun 16, 2021 at 10:56 AM LIU Zhiwei wrote:
>>>>
>>>>>
>>>>> On 6/13/21 6:10 PM, Frank Chang wrote:
>>>>>
>>>>> LIU Zhiwei wrote on Fri, 9 Apr 2021 at 15:57:
>>>>>
>>>>>> +static void riscv_clic_realize(DeviceState *dev, Error **errp)
>>>>>> +{
>>>>>> +    RISCVCLICState *clic = RISCV_CLIC(dev);
>>>>>> +    size_t harts_x_sources = clic->num_harts * clic->num_sources;
>>>>>> +    int irqs, i;
>>>>>> +
>>>>>> +    if (clic->prv_s && clic->prv_u) {
>>>>>> +        irqs = 3 * harts_x_sources;
>>>>>> +    } else if (clic->prv_s || clic->prv_u) {
>>>>>> +        irqs = 2 * harts_x_sources;
>>>>>> +    } else {
>>>>>> +        irqs = harts_x_sources;
>>>>>> +    }
>>>>>> +
>>>>>> +    clic->clic_size = irqs * 4 + 0x1000;
>>>>>> +    memory_region_init_io(&clic->mmio, OBJECT(dev), &riscv_clic_ops, clic,
>>>>>> +                          TYPE_RISCV_CLIC, clic->clic_size);
>>>>>> +
>>>>>> +    clic->clicintip = g_new0(uint8_t, irqs);
>>>>>> +    clic->clicintie = g_new0(uint8_t, irqs);
>>>>>> +    clic->clicintattr = g_new0(uint8_t, irqs);
>>>>>> +    clic->clicintctl = g_new0(uint8_t, irqs);
>>>>>> +    clic->active_list = g_new0(CLICActiveInterrupt, irqs);
>>>>>> +    clic->active_count = g_new0(size_t, clic->num_harts);
>>>>>> +    clic->exccode = g_new0(uint32_t, clic->num_harts);
>>>>>> +    clic->cpu_irqs = g_new0(qemu_irq, clic->num_harts);
>>>>>> +    sysbus_init_mmio(SYS_BUS_DEVICE(dev), &clic->mmio);
>>>>>> +
>>>>>> +    /* Allocate irq through gpio, so that we can use qtest */
>>>>>> +    qdev_init_gpio_in(dev, riscv_clic_set_irq, irqs);
>>>>>> +    qdev_init_gpio_out(dev, clic->cpu_irqs, clic->num_harts);
>>>>>> +
>>>>>> +    for (i = 0; i < clic->num_harts; i++) {
>>>>>> +        RISCVCPU *cpu = RISCV_CPU(qemu_get_cpu(i));
>>>>>
>>>>> As the spec says:
>>>>>   Smaller single-core systems might have only a CLIC,
>>>>>   while multicore systems might have a CLIC per-core and a single shared PLIC.
>>>>>   The PLIC xeip signals are treated as hart-local interrupt sources by the CLIC at each core.
>>>>>
>>>>> It looks like it's possible to have one CLIC instance per core.
>>>>>
>>>>> If you want to deliver an interrupt to one hart, you should encode the IRQ
>>>>> with the interrupt number, the hart number, and the interrupt target
>>>>> privilege, then set the irq.
>>>>>
>>>>> I think how to calculate the hart number is the task of the PLIC, and it
>>>>> can make use of "hartid-base" to calculate it.
>>>>>
>>>>> Thanks,
>>>>> Zhiwei
>>>>
>>>> Hi Zhiwei,
>>>>
>>>> What I mean is: if there are multiple CLIC instances, one per core
>>>> (the CLIC spec allows that),
>>>> and you bind the CLIC to CPU indices starting from 0,
>>>> it will be impossible for a CLIC instance to bind a CPU at an index other
>>>> than 0.
>>>>
>>>> For example, in a 4-core system, it's possible to have 4 CLIC instances:
>>>>   * CLIC 0 binds to CPU 0
>>>>   * CLIC 1 binds to CPU 1
>>>>   * CLIC 2 binds to CPU 2
>>>>   * CLIC 3 binds to CPU 3
>>>>
>>>> and that's why I said it's possible to pass an extra "hartid-base", just
>>>> like the PLIC.
>>>> I know most hart IDs are calculated from the requesting address, so most
>>>> hartid usages should be fine.
>>>> But I saw two places using qemu_get_cpu(),
>>>> which may cause a problem for the scenario I describe above:
>>>> i.e. riscv_clic_next_interrupt() and riscv_clic_realize(), as in my
>>>> original reply.
>>>>
>>>> So what's the problem here?
>>>>
>>>> Currently all cores share the same CLIC instance. Do you want to give
>>>> each core a CLIC instance?
>>>
>>> Yes, that's what I mean, which should be supported as the spec says [1]:
>>>   The CLIC complements the PLIC. Smaller single-core systems might have
>>>   only a CLIC,
>>>   while multicore systems might have *a CLIC per-core* and a single
>>>   shared PLIC.
>>>   The PLIC xeip signals are treated as hart-local interrupt sources by
>>>   the CLIC at each core.
>>>
>>> [1] https://github.com/riscv/riscv-fast-interrupt/blob/646310a5e4ae055964b4680f12c1c04a7cc0dd56/clic.adoc#12-clic-versus-plic
>>>
>>> Thanks,
>>> Frank Chang
>>>
>>> If we give each core a CLIC instance, it is not convenient to access the
>>> shared memory, such as 0x0-0x1000.
>>> Which CLIC instance should contain this memory region?
>>
>> What do you mean by "access the shared memory" here?
>>
>> It means the cliccfg or clicinfo, which should be shared by all CLIC
>> instances.
>
> If there are multiple CLIC instances, shouldn't they have their own base
> addresses?
> So I do not understand how cliccfg and clicinfo would be shared by all
> CLIC instances. (Or should they be?)
>
> We once had a discussion on tech-fast-interrupt. The reply from the chair
> of the fast-interrupt group was:
>
> "The first part (address 0x0000-0x0FFF), which contains
> cliccfg/clicinfo/clicinttrig, should be global, since only one copy of the
> configuration setting is enough.
> On the other hand, the latter part (0x1000-0x4FFF), which contains control
> bits for individual interrupts, should be one copy per hart."

Hmm... interesting, that's probably something I have missed,
and they didn't document this statement in the spec :(

But I think this statement contradicts the multi-CLIC-instance system
described in the spec.
Does it imply that either:
  * I can only have one CLIC in the system, or
  * all CLIC instances must have the same configuration in the system?

Do you have the link to this statement? I would like to take a look.

Thanks,
Frank Chang

> Thanks,
> Zhiwei
>
> Each CLIC instance will manage its own cliccfg and clicinfo.
>
> Thanks,
> Frank Chang
>
>> Thanks,
>> Zhiwei
>>
>> I thought the memory region is defined during the CLIC's creation?
>> So it should depend on the platform that creates the CLIC instances.
>>
>> Thanks,
>> Frank Chang
>>
>>> Thanks,
>>> Zhiwei
>>>
>>>> Thanks,
>>>> Zhiwei
>>>>
>>>> Regards,
>>>> Frank Chang
>>>>
>>>>> However, if you bind the CPU references starting from index i = 0,
>>>>> it's not possible for each per-core CLIC to bind its own CPU instance
>>>>> in a multicore system,
>>>>> as they all have to bind from CPU 0.
>>>>>
>>>>> I'm not sure whether adding a new "hartid-base" property, just like
>>>>> what the SiFive PLIC implements, would be a good idea or not.
>>>>>
>>>>> Regards,
>>>>> Frank Chang
>>>>>
>>>>>> +        qemu_irq irq = qemu_allocate_irq(riscv_clic_cpu_irq_handler,
>>>>>> +                                         &cpu->env, 1);
>>>>>> +        qdev_connect_gpio_out(dev, i, irq);
>>>>>> +        cpu->env.clic = clic;
>>>>>> +    }
>>>>>> +}
>>>>>> +
LIU Zhiwei <zhiwei_liu@c-sky.com> =E6=96=BC 2021=E5=B9=B46=E6=9C= =8828=E6=97=A5 =E9=80=B1=E4=B8=80 =E4=B8=8B=E5=8D=884:12=E5=AF=AB=E9=81=93= =EF=BC=9A
=20 =20 =20


On 2021/6/28 =E4=B8=8B=E5=8D=884:07, Frank Chang wrote:
=20
LIU Zhiwei <zhiwei_liu@c-sky.com> =E6=96=BC 2021=E5=B9=B46=E6=9C=8828=E6=97=A5 =E9=80=B1=E4=B8=80 = =E4=B8=8B=E5=8D=884:03=E5=AF=AB=E9=81=93=EF=BC=9A


On 2021/6/28 =E4=B8=8B=E5=8D=883:49, Frank Chang wrote:<= br>
LIU Zhiwei <zhiwei_liu@c-sky.com> =E6=96=BC 2021=E5=B9=B46=E6=9C=8828=E6=97=A5 =E9=80=B1= =E4=B8=80 =E4=B8=8B=E5=8D=883:40=E5=AF=AB=E9=81=93=EF=BC=9A


On 2021/6/28 =E4=B8=8B=E5=8D=883:23, Frank Cha= ng wrote:
LIU Zhiwei <zhiwei_liu@c-sky.com> =E6=96=BC 2021=E5=B9=B46=E6=9C=8828=E6=97=A5 = =E9=80=B1=E4=B8=80 =E4=B8=8B=E5=8D=883:17=E5=AF=AB=E9=81=93=EF=BC=9A


On 2021/6/26 =E4=B8=8B=E5=8D=888:56,= Frank Chang wrote:
On Wed, Jun 16, 2021 at 10:56 AM LIU Zhiwei <zhiwei_liu@c-sky.com= > wrote:


On 6/13/21 6:10 PM, Frank Chang wrote:
LIU Zhiwei <zhiwei_liu@c-sky.com> =E6=96=BC 2021=E5=B9=B44= =E6=9C=889=E6=97=A5 =E9=80=B1=E4=BA=94 =E4=B8=8B=E5=8D=883:57=E5= =AF=AB=E9=81=93=EF=BC=9A
=
+static void riscv_clic_realize(Devi= ceState *dev, Error **errp)
+{
+=C2=A0 =C2=A0 RISCVCLI= CState *clic =3D RISCV_CLIC(dev);
+=C2=A0 =C2=A0 size_t harts_x_sources =3D clic->num_harts * clic->num_sources;
+=C2=A0 =C2=A0 int irqs= , i;
+
+=C2=A0 =C2=A0 if (clic->prv_s && clic->prv_u) {
+=C2=A0 =C2=A0 =C2=A0 = =C2=A0 irqs =3D 3 * harts_x_sources;
+=C2=A0 =C2=A0 } else i= f (clic->prv_s || clic->prv_u) {
+=C2=A0 =C2=A0 =C2=A0 = =C2=A0 irqs =3D 2 * harts_x_sources;
+=C2=A0 =C2=A0 } else {=
+=C2=A0 =C2=A0 =C2=A0 = =C2=A0 irqs =3D harts_x_sources;
+=C2=A0 =C2=A0 }
+
+=C2=A0 =C2=A0 clic->clic_size =3D irqs * 4 + 0x1000;
+=C2=A0 =C2=A0 memory_region_init_io(&= amp;clic->mmio, OBJECT(dev), &riscv_clic_ops, clic,
+=C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 TYPE_RISCV_CLIC, clic->clic_size); +
+=C2=A0 =C2=A0 clic->clicintip =3D g_new0(uint8_t, irqs);
+=C2=A0 =C2=A0 clic->clicintie =3D g_new0(uint8_t, irqs);
+=C2=A0 =C2=A0 clic->clicintattr =3D g_new0(uint8_t, irqs);
+=C2=A0 =C2=A0 clic->clicintctl =3D g_new0(uint8_t, irqs);
+=C2=A0 =C2=A0 clic->active_list =3D g_new0(CLICActiveInterr= upt, irqs);
+=C2=A0 =C2=A0 clic->active_count =3D g_new0(size_t, clic->num_harts); +=C2=A0 =C2=A0 clic->exccode =3D g_new0(uint32_t, clic->num_harts); +=C2=A0 =C2=A0 clic->cpu_irqs =3D g_new0(qemu_irq, clic->num_harts); +=C2=A0 =C2=A0 sysbus_init_mmio(SYS_BU= S_DEVICE(dev), &clic->mmio); +
+=C2=A0 =C2=A0 /* Alloc= ate irq through gpio, so that we can use qtest */
+=C2=A0 =C2=A0 qdev_init_gpio_in(dev, riscv_clic_set_irq, irqs);
+=C2=A0 =C2=A0 qdev_init_gpio_out(dev, clic->cpu_irqs, clic->num_harts); +
+=C2=A0 =C2=A0 for (i = =3D 0; i < clic->num_harts; i++) {
+=C2=A0 =C2=A0 =C2=A0 = =C2=A0 RISCVCPU *cpu =3D RISCV_CPU(qemu_get_cpu(= i));

As spec says:
=C2=A0 Smaller single-core systems might have only a CLIC,
=C2=A0 while multico= re systems might have a CLIC per-core and a single shared PLIC.
=C2=A0 The PLIC xeip signals are treated as hart-local interrupt sources by the CLIC at each core.

It looks like it's possible to have one CLIC instance per core.

If you want to delivery an interrupt to one hart, you should encode the IRQ by the interrupt number
, the hart number and the interrupt target privilege, then set the irq.

I think how to calculate the hart number is the task of PLIC and it can make use of "hartid-base= "
to calculate it.

Thanks,
Zhiwei


Hi Zhiwei,

What I mean is if there are multiple CLIC instances, each per core (CLIC spec allows that).
If you try to bind CLIC with CPU index start from 0,
it will be impossible for CLIC instance=C2=A0to bind CPU fr= om index other than 0.

For example, for 4 cores system, it's possible to have 4 CLIC instances:
=C2=A0 * CLIC 0 binds to CPU 0=
=C2=A0 * CLIC 1 binds to CPU 1=
=C2=A0 * CLIC 2 binds to CPU 2=
=C2=A0 * CLIC 3 binds to CPU 3=

and that's why I said it&#= 39;s possible to pass an extra "hartid-base" just like= PLIC.
I know most of hardid are calculated by the requesing address, so most hartid=C2=A0usag= es should be fine.
But I saw two places using=C2=A0qemu_get_cpu(),
which may cause the problem for the scenario I describe above:
i.e. riscv_clic_next_interrupt() and riscv_clic_realize() as my original reply.

So what's the problem here?

Currently all cores share the same CLIC instance. Do you want to give each core=C2=A0 a CLIC instance?

Yes, that's what I mean, which should be supported as what spec says[1]:
=C2=A0=C2=A0The CLIC complements the P= LIC. Smaller single-core systems might have only a CLIC,
=C2=A0 while multicore systems might have a CLIC per-core and a single shared PLIC.
=C2=A0 The PLIC xeip signals are treat= ed as hart-local interrupt sources by the CLIC at each core.


Thanks,
Frank Chang
=C2=A0

If we give each core a CLIC instance, it is not convenient to access the shared memory, such as 0x0-0x1000.
Which CLIC instance should contain this memory region?

What do you mean by: "access the shared memor= y" here?

It means the cliccfg or clicinfo which=C2=A0 should be shared by all CLIC instances.

If there are multiple CLIC instances, shouldn't they hav= e their own base addresses?
So I do not understand how cliccfg and clicinfo would be shared by all CLIC instances. (Or they should?)

Once we have a talk on tech-fast-interrupt. The chair of fast interrupt reply is:

"The first part (address 0x0000-0x0FFF) which contains cliccfg/clicinfo/clicinttrig should be global since only one copy of the configuration setting is enough.
On the other hand, the latter part (0x1000-0x4FFF) which contains control bits for individual interrupt should be one copy per hart"

Hmm... interesti= ng, that's probably something I have missed.
and they didn= 9;t document this statement in the spec :(

But I t= hink this statement has a contradiction against the system with multi-CLIC = instances described in spec.
Does it imply that either:
=C2=A0 * I can only have one CLIC in the system, or
=C2=A0 * All= CLIC instances must have the same configuration in the system.
<= br>
Do you have the link to this statement? I would like to take = a look.

Thanks,
Frank Chang


Thanks,
Zhiwei

Each CLIC instance will manage its own cliccfg and clicinfo.

Thanks,
Frank Chang

Thanks,
Zhiwei

I thought the memory region is defined during CLIC's creation?
So it should depend on the platform that creates CLIC instances.

Thanks,
Frank Chang
=C2=A0

Thanks,
Zhiwei


Thanks,
Zhiwei

Regards,
Frank Chang
=C2=A0

However if you try to bind CPU reference start from index i =3D 0.
It's not possibl= e for each per-core CLIC to bind their own CPU instance in multicore system
as they have to bind from CPU 0.

I'm not sure if we add a new "hartid-base" property just like what SiFive PLIC is
implemented would be a good idea or not.


Regards,
Frank Chang
=C2=A0
+=C2=A0 =C2=A0 =C2=A0 = =C2=A0 qemu_irq irq =3D qemu_allocate_irq(riscv= _clic_cpu_irq_handler,
+=C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 = =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0 =C2=A0&cpu->env, 1);
+=C2=A0 =C2=A0 =C2=A0 = =C2=A0 qdev_connect_gpio_out(d= ev, i, irq);
+=C2=A0 =C2=A0 =C2=A0 = =C2=A0 cpu->env.clic =3D clic;
+=C2=A0 =C2=A0 }
+}
+


--000000000000b1288205c5cf26b8-- From mboxrd@z Thu Jan 1 00:00:00 1970 Received: from list by lists.gnu.org with archive (Exim 4.90_1) id 1lxmUf-0005Ge-0s for mharc-qemu-riscv@gnu.org; Mon, 28 Jun 2021 04:19:49 -0400 Received: from eggs.gnu.org ([2001:470:142:3::10]:60586) by lists.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.90_1) (envelope-from ) id 1lxmUc-0005Er-SO for qemu-riscv@nongnu.org; Mon, 28 Jun 2021 04:19:46 -0400 Received: from mail-pl1-x62f.google.com ([2607:f8b0:4864:20::62f]:41735) by eggs.gnu.org with esmtps (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.90_1) (envelope-from ) id 1lxmUZ-0007Ft-VI for qemu-riscv@nongnu.org; Mon, 28 Jun 2021 04:19:46 -0400 Received: by mail-pl1-x62f.google.com with SMTP id y13so8478570plc.8 for ; Mon, 28 Jun 2021 01:19:43 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=sifive.com; s=google; h=mime-version:references:in-reply-to:from:date:message-id:subject:to :cc; bh=NAXf566XAGyVBAy8iZXRYSfpuIHPLr4RyYxsm4vCtAc=; b=iKk4JELv1dhKbRYUfyfmo5zZHC/h3/38r8gd3YBQstzkgypaFMhFGKEwtt6bjsCWu8 LqT+pptxbQ7tDkkOadMWPifco+2Vfg0Z85EphoDE2Auv6/yrpaW899hs54RqENfuBo7O agisprzkQFDSV/Zg8IobWfUY2PEQHlsN13R13R1RULrm7k5h1OVblEsz71hHMAnqs3iy XdaZSnmmsZeDi7Tfq7IABx8w8vzD0kAsem+K5K1N5aVPe5BQTfIsLUXA4aE9cPgNl8v9 wwdyQumOMEK5QxAGYLlabLlyfmyaTZEu/IMbLXl6rRUGwTJ2odjEyTRHPGu87/DQIF7E Pf+g== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:mime-version:references:in-reply-to:from:date :message-id:subject:to:cc; bh=NAXf566XAGyVBAy8iZXRYSfpuIHPLr4RyYxsm4vCtAc=; b=R7NfWJ3NwW1eKISKVgV2C3P7A6nfIKuzbaY8Q9noz4XtqMfRcCTcZoOIjvEHtmrILA ufM/jcY1zcEZODmdZLfftcQpgUJxridjZorpWuAV+s6rgoiBScy08u82qYsEyhH4hJ6P y2t6N4fB3rNjat833MdhiSNrw7waxtoqrASiw9t3Jny+N95OeefYmgDSyNqiITAfXVtg teEEcNA+fPveg6jwwLAapQ/dZY4/jepiW0pw9pdZ50X5Gk66r+495A2KFSM6QJo204h5 rs8EG9eAQbftG25vgmrtpxPGYCNul94Mbt8WyFWVyhQ6loDHLdAvbwLVXLpQ5cFV669K QOww== X-Gm-Message-State: 
AOAM5333UMHOGlUqOoCR3VFhaEaAGSL0kiESA0i2QGHbF8/aPA+GXlw4 N12cxJlEmgYsGFBfgP973px5jyr3iYl9cgRZ X-Google-Smtp-Source: ABdhPJzqLUsAN/aYPMc7U0RXFX/Y+aW5onWINE3tMPjTNsffFnjsjaMnvo26klWqk4LB548yxW9SUA== X-Received: by 2002:a17:902:7b87:b029:128:345d:f596 with SMTP id w7-20020a1709027b87b0290128345df596mr16206888pll.36.1624868382301; Mon, 28 Jun 2021 01:19:42 -0700 (PDT) Received: from mail-pf1-f177.google.com (mail-pf1-f177.google.com. [209.85.210.177]) by smtp.gmail.com with ESMTPSA id cs1sm3971394pjb.56.2021.06.28.01.19.41 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128); Mon, 28 Jun 2021 01:19:41 -0700 (PDT) Received: by mail-pf1-f177.google.com with SMTP id a127so13443901pfa.10; Mon, 28 Jun 2021 01:19:41 -0700 (PDT) X-Received: by 2002:a63:5b0e:: with SMTP id p14mr22021416pgb.110.1624868381598; Mon, 28 Jun 2021 01:19:41 -0700 (PDT) MIME-Version: 1.0 References: <20210409074857.166082-1-zhiwei_liu@c-sky.com> <20210409074857.166082-4-zhiwei_liu@c-sky.com> <52225a77-c509-9999-9d8a-942ea407f44d@c-sky.com> <27879b9f-bffa-96c9-a8b2-033eb0a0be4c@c-sky.com> <6c9c894c-6f85-c6bb-a372-d69e15896571@c-sky.com> <26909592-51e0-df55-dff2-40f5dbc90085@c-sky.com> In-Reply-To: <26909592-51e0-df55-dff2-40f5dbc90085@c-sky.com> From: Frank Chang Date: Mon, 28 Jun 2021 16:19:30 +0800 X-Gmail-Original-Message-ID: Message-ID: Subject: Re: [RFC PATCH 03/11] hw/intc: Add CLIC device To: LIU Zhiwei Cc: Frank Chang , Palmer Dabbelt , Alistair Francis , "open list:RISC-V" , wxy194768@alibaba-inc.com, "qemu-devel@nongnu.org Developers" Content-Type: multipart/alternative; boundary="000000000000b1288205c5cf26b8" Received-SPF: pass client-ip=2607:f8b0:4864:20::62f; envelope-from=frank.chang@sifive.com; helo=mail-pl1-x62f.google.com X-Spam_score_int: -20 X-Spam_score: -2.1 X-Spam_bar: -- X-Spam_report: (-2.1 / 5.0 requ) BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1, DKIM_VALID_AU=-0.1, DKIM_VALID_EF=-0.1, HTML_MESSAGE=0.001, RCVD_IN_DNSWL_NONE=-0.0001, SPF_HELO_NONE=0.001, 
SPF_PASS=-0.001 autolearn=ham autolearn_force=no X-Spam_action: no action X-BeenThere: qemu-riscv@nongnu.org X-Mailman-Version: 2.1.23 Precedence: list List-Id: List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , X-List-Received-Date: Mon, 28 Jun 2021 08:19:47 -0000 --000000000000b1288205c5cf26b8 Content-Type: text/plain; charset="UTF-8" Content-Transfer-Encoding: quoted-printable LIU Zhiwei =E6=96=BC 2021=E5=B9=B46=E6=9C=8828=E6=97= =A5 =E9=80=B1=E4=B8=80 =E4=B8=8B=E5=8D=884:12=E5=AF=AB=E9=81=93=EF=BC=9A > > On 2021/6/28 =E4=B8=8B=E5=8D=884:07, Frank Chang wrote: > > LIU Zhiwei =E6=96=BC 2021=E5=B9=B46=E6=9C=8828=E6= =97=A5 =E9=80=B1=E4=B8=80 =E4=B8=8B=E5=8D=884:03=E5=AF=AB=E9=81=93=EF=BC=9A > >> >> On 2021/6/28 =E4=B8=8B=E5=8D=883:49, Frank Chang wrote: >> >> LIU Zhiwei =E6=96=BC 2021=E5=B9=B46=E6=9C=8828=E6= =97=A5 =E9=80=B1=E4=B8=80 =E4=B8=8B=E5=8D=883:40=E5=AF=AB=E9=81=93=EF=BC=9A >> >>> >>> On 2021/6/28 =E4=B8=8B=E5=8D=883:23, Frank Chang wrote: >>> >>> LIU Zhiwei =E6=96=BC 2021=E5=B9=B46=E6=9C=8828= =E6=97=A5 =E9=80=B1=E4=B8=80 =E4=B8=8B=E5=8D=883:17=E5=AF=AB=E9=81=93=EF=BC= =9A >>> >>>> >>>> On 2021/6/26 =E4=B8=8B=E5=8D=888:56, Frank Chang wrote: >>>> >>>> On Wed, Jun 16, 2021 at 10:56 AM LIU Zhiwei >>>> wrote: >>>> >>>>> >>>>> On 6/13/21 6:10 PM, Frank Chang wrote: >>>>> >>>>> LIU Zhiwei =E6=96=BC 2021=E5=B9=B44=E6=9C=889= =E6=97=A5 =E9=80=B1=E4=BA=94 =E4=B8=8B=E5=8D=883:57=E5=AF=AB=E9=81=93=EF=BC= =9A >>>>> >>>>> +static void riscv_clic_realize(DeviceState *dev, Error **errp) >>>>>> +{ >>>>>> + RISCVCLICState *clic =3D RISCV_CLIC(dev); >>>>>> + size_t harts_x_sources =3D clic->num_harts * clic->num_sources; >>>>>> + int irqs, i; >>>>>> + >>>>>> + if (clic->prv_s && clic->prv_u) { >>>>>> + irqs =3D 3 * harts_x_sources; >>>>>> + } else if (clic->prv_s || clic->prv_u) { >>>>>> + irqs =3D 2 * harts_x_sources; >>>>>> + } else { >>>>>> + irqs =3D harts_x_sources; >>>>>> + } >>>>>> + >>>>>> + clic->clic_size =3D irqs * 4 + 0x1000; >>>>>> + 
memory_region_init_io(&clic->mmio, OBJECT(dev), &riscv_clic_ops= , >>>>>> clic, >>>>>> + TYPE_RISCV_CLIC, clic->clic_size); >>>>>> + >>>>>> + clic->clicintip =3D g_new0(uint8_t, irqs); >>>>>> + clic->clicintie =3D g_new0(uint8_t, irqs); >>>>>> + clic->clicintattr =3D g_new0(uint8_t, irqs); >>>>>> + clic->clicintctl =3D g_new0(uint8_t, irqs); >>>>>> + clic->active_list =3D g_new0(CLICActiveInterrupt, irqs); >>>>>> + clic->active_count =3D g_new0(size_t, clic->num_harts); >>>>>> + clic->exccode =3D g_new0(uint32_t, clic->num_harts); >>>>>> + clic->cpu_irqs =3D g_new0(qemu_irq, clic->num_harts); >>>>>> + sysbus_init_mmio(SYS_BUS_DEVICE(dev), &clic->mmio); >>>>>> + >>>>>> + /* Allocate irq through gpio, so that we can use qtest */ >>>>>> + qdev_init_gpio_in(dev, riscv_clic_set_irq, irqs); >>>>>> + qdev_init_gpio_out(dev, clic->cpu_irqs, clic->num_harts); >>>>>> + >>>>>> + for (i =3D 0; i < clic->num_harts; i++) { >>>>>> + RISCVCPU *cpu =3D RISCV_CPU(qemu_get_cpu(i)); >>>>>> >>>>> >>>>> As spec says: >>>>> Smaller single-core systems might have only a CLIC, >>>>> while multicore systems might have a CLIC per-core and a single >>>>> shared PLIC. >>>>> The PLIC xeip signals are treated as hart-local interrupt sources b= y >>>>> the CLIC at each core. >>>>> >>>>> It looks like it's possible to have one CLIC instance per core. >>>>> >>>>> If you want to delivery an interrupt to one hart, you should encode >>>>> the IRQ by the interrupt number >>>>> , the hart number and the interrupt target privilege, then set the ir= q. >>>>> >>>>> I think how to calculate the hart number is the task of PLIC and it >>>>> can make use of "hartid-base" >>>>> to calculate it. >>>>> >>>>> Thanks, >>>>> Zhiwei >>>>> >>>> >>>> Hi Zhiwei, >>>> >>>> What I mean is if there are multiple CLIC instances, each per core >>>> (CLIC spec allows that). >>>> If you try to bind CLIC with CPU index start from 0, >>>> it will be impossible for CLIC instance to bind CPU from index other >>>> than 0. 
>>>> >>>> For example, for 4 cores system, it's possible to have 4 CLIC instance= s: >>>> * CLIC 0 binds to CPU 0 >>>> * CLIC 1 binds to CPU 1 >>>> * CLIC 2 binds to CPU 2 >>>> * CLIC 3 binds to CPU 3 >>>> >>>> and that's why I said it's possible to pass an extra "hartid-base" jus= t >>>> like PLIC. >>>> I know most of hardid are calculated by the requesing address, so most >>>> hartid usages should be fine. >>>> But I saw two places using qemu_get_cpu(), >>>> >>>> which may cause the problem for the scenario I describe above: >>>> i.e. riscv_clic_next_interrupt() and riscv_clic_realize() as my >>>> original reply. >>>> >>>> So what's the problem here? >>>> >>>> Currently all cores share the same CLIC instance. Do you want to give >>>> each core a CLIC instance? >>>> >>> Yes, that's what I mean, which should be supported as what spec says[1]= : >>> The CLIC complements the PLIC. Smaller single-core systems might have >>> only a CLIC, >>> while multicore systems might have *a CLIC per-core* and a single >>> shared PLIC. >>> The PLIC xeip signals are treated as hart-local interrupt sources by >>> the CLIC at each core. >>> >>> [1] >>> https://github.com/riscv/riscv-fast-interrupt/blob/646310a5e4ae055964b4= 680f12c1c04a7cc0dd56/clic.adoc#12-clic-versus-plic >>> >>> Thanks, >>> Frank Chang >>> >>> >>> If we give each core a CLIC instance, it is not convenient to access th= e >>> shared memory, such as 0x0-0x1000. >>> Which CLIC instance should contain this memory region? >>> >> What do you mean by: "access the shared memory" here? >> >> It means the cliccfg or clicinfo which should be shared by all CLIC >> instances. >> > If there are multiple CLIC instances, shouldn't they have their own base > addresses? > So I do not understand how cliccfg and clicinfo would be shared by all > CLIC instances. (Or they should?) > > Once we have a talk on tech-fast-interrupt. 
The chair of fast interrupt > reply is: > > *"The first part (address 0x0000-0x0FFF) which contains > cliccfg/clicinfo/clicinttrig should be global since only one copy of the > configuration setting is enough.* > *On the other hand, the latter part (0x1000-0x4FFF) which contains contro= l > bits for individual interrupt should be one copy per hart"* > Hmm... interesting, that's probably something I have missed. and they didn't document this statement in the spec :( But I think this statement has a contradiction against the system with multi-CLIC instances described in spec. Does it imply that either: * I can only have one CLIC in the system, or * All CLIC instances must have the same configuration in the system. Do you have the link to this statement? I would like to take a look. Thanks, Frank Chang > Thanks, > Zhiwei > > Each CLIC instance will manage its own cliccfg and clicinfo. > > Thanks, > Frank Chang > > Thanks, >> Zhiwei >> >> I thought the memory region is defined during CLIC's creation? >> So it should depend on the platform that creates CLIC instances. >> >> Thanks, >> Frank Chang >> >> >>> Thanks, >>> Zhiwei >>> >>> >>>> Thanks, >>>> Zhiwei >>>> >>> Regards, >>>> Frank Chang >>>> >>>> >>>>> However if you try to bind CPU reference start from index i =3D 0. >>>>> It's not possible for each per-core CLIC to bind their own CPU >>>>> instance in multicore system >>>>> as they have to bind from CPU 0. >>>>> >>>>> I'm not sure if we add a new "hartid-base" property just like what >>>>> SiFive PLIC is >>>>> implemented would be a good idea or not. >>>>> >>>>> >>>>> Regards, >>>>> Frank Chang >>>>> >>>>> >>>>>> + qemu_irq irq =3D qemu_allocate_irq(riscv_clic_cpu_irq_handl= er, >>>>>> + &cpu->env, 1); >>>>>> + qdev_connect_gpio_out(dev, i, irq); >>>>>> + cpu->env.clic =3D clic; >>>>>> + } >>>>>> +} >>>>>> + >>>>>> >>>>>> >>>>>> --000000000000b1288205c5cf26b8 Content-Type: text/html; charset="UTF-8" Content-Transfer-Encoding: quoted-printable
LIU Zhiwei <zhiwei_liu@c-sky.com> wrote on Mon, Jun 28, 2021 at 4:12 PM:

On 2021/6/28 4:07 PM, Frank Chang wrote:

LIU Zhiwei <zhiwei_liu@c-sky.com> wrote on Mon, Jun 28, 2021 at 4:03 PM:

On 2021/6/28 3:49 PM, Frank Chang wrote:

LIU Zhiwei <zhiwei_liu@c-sky.com> wrote on Mon, Jun 28, 2021 at 3:40 PM:

On 2021/6/28 3:23 PM, Frank Chang wrote:

LIU Zhiwei <zhiwei_liu@c-sky.com> wrote on Mon, Jun 28, 2021 at 3:17 PM:

On 2021/6/26 8:56 PM, Frank Chang wrote:

On Wed, Jun 16, 2021 at 10:56 AM LIU Zhiwei <zhiwei_liu@c-sky.com> wrote:

On 6/13/21 6:10 PM, Frank Chang wrote:

LIU Zhiwei <zhiwei_liu@c-sky.com> wrote on Fri, Apr 9, 2021 at 3:57 PM:
+static void riscv_clic_realize(DeviceState *dev, Error **errp)
+{
+    RISCVCLICState *clic = RISCV_CLIC(dev);
+    size_t harts_x_sources = clic->num_harts * clic->num_sources;
+    int irqs, i;
+
+    if (clic->prv_s && clic->prv_u) {
+        irqs = 3 * harts_x_sources;
+    } else if (clic->prv_s || clic->prv_u) {
+        irqs = 2 * harts_x_sources;
+    } else {
+        irqs = harts_x_sources;
+    }
+
+    clic->clic_size = irqs * 4 + 0x1000;
+    memory_region_init_io(&clic->mmio, OBJECT(dev), &riscv_clic_ops, clic,
+                          TYPE_RISCV_CLIC, clic->clic_size);
+
+    clic->clicintip = g_new0(uint8_t, irqs);
+    clic->clicintie = g_new0(uint8_t, irqs);
+    clic->clicintattr = g_new0(uint8_t, irqs);
+    clic->clicintctl = g_new0(uint8_t, irqs);
+    clic->active_list = g_new0(CLICActiveInterrupt, irqs);
+    clic->active_count = g_new0(size_t, clic->num_harts);
+    clic->exccode = g_new0(uint32_t, clic->num_harts);
+    clic->cpu_irqs = g_new0(qemu_irq, clic->num_harts);
+    sysbus_init_mmio(SYS_BUS_DEVICE(dev), &clic->mmio);
+
+    /* Allocate irq through gpio, so that we can use qtest */
+    qdev_init_gpio_in(dev, riscv_clic_set_irq, irqs);
+    qdev_init_gpio_out(dev, clic->cpu_irqs, clic->num_harts);
+
+    for (i = 0; i < clic->num_harts; i++) {
+        RISCVCPU *cpu = RISCV_CPU(qemu_get_cpu(i));

As the spec says:

  Smaller single-core systems might have only a CLIC,
  while multicore systems might have a CLIC per-core and a single shared PLIC.
  The PLIC xeip signals are treated as hart-local interrupt sources by the CLIC at each core.

It looks like it's possible to have one CLIC instance per core.

If you want to deliver an interrupt to one hart, you should encode the IRQ
with the interrupt number, the hart number, and the interrupt target privilege,
then set the irq.

I think calculating the hart number is the task of the PLIC, and it can make
use of "hartid-base" to calculate it.

Thanks,
Zhiwei


Hi Zhiwei,

What I mean is: there can be multiple CLIC instances, one per core (the CLIC spec allows that).
If you bind CLIC to CPUs with indexes starting from 0,
it will be impossible for a CLIC instance to bind a CPU at any index other than 0.

For example, for a 4-core system, it's possible to have 4 CLIC instances:
  * CLIC 0 binds to CPU 0
  * CLIC 1 binds to CPU 1
  * CLIC 2 binds to CPU 2
  * CLIC 3 binds to CPU 3

and that's why I said it's possible to pass an extra "hartid-base" just like the PLIC.
I know most hartids are calculated from the requesting address, so most hartid usages
should be fine.
But I saw two places using qemu_get_cpu(),
which may cause a problem for the scenario I describe above:
i.e. riscv_clic_next_interrupt() and riscv_clic_realize(), as in my original reply.

So what's the problem here?

Currently all cores share the same CLIC instance. Do you want to give each core a CLIC instance?

Yes, that's what I mean, and it should be supported, as the spec says[1]:

  The CLIC complements the PLIC. Smaller single-core systems might have only a CLIC,
  while multicore systems might have a CLIC per-core and a single shared PLIC.
  The PLIC xeip signals are treated as hart-local interrupt sources by the CLIC at each core.

[1] https://github.com/riscv/riscv-fast-interrupt/blob/646310a5e4ae055964b4680f12c1c04a7cc0dd56/clic.adoc#12-clic-versus-plic

Thanks,
Frank Chang

If we give each core a CLIC instance, it is not convenient to access the shared memory, such as 0x0-0x1000.
Which CLIC instance should contain this memory region?

What do you mean by: "access the shared memory" here?

It means the cliccfg or clicinfo, which should be shared by all CLIC instances.

If there are multiple CLIC instances, shouldn't they have their own base addresses?
So I do not understand how cliccfg and clicinfo would be shared by all CLIC instances. (Or they should?)

We once had a talk on tech-fast-interrupt. The chair of the fast-interrupt group replied:

"The first part (address 0x0000-0x0FFF) which contains cliccfg/clicinfo/clicinttrig should be global since only one copy of the configuration setting is enough.
On the other hand, the latter part (0x1000-0x4FFF) which contains control bits for individual interrupt should be one copy per hart"

Hmm... interesting, that's probably something I have missed,
and they didn't document this statement in the spec :(

But I think this statement contradicts the multi-CLIC-instance system described in the spec.
Does it imply that either:
  * I can only have one CLIC in the system, or
  * all CLIC instances must have the same configuration in the system?

Do you have the link to this statement? I would like to take a look.

Thanks,
Frank Chang


Thanks,
Zhiwei

Each CLIC instance will manage its own cliccfg and clicinfo.

Thanks,
Frank Chang

Thanks,
Zhiwei

I thought the memory region is defined during CLIC's creation?
So it should depend on the platform that creates CLIC instances.

Thanks,
Frank Chang

Thanks,
Zhiwei


Thanks,
Zhiwei

Regards,
Frank Chang

However, if you try to bind the CPU references starting from index i = 0,
it's not possible for each per-core CLIC to bind its own CPU instance in a multicore system,
as they all have to bind from CPU 0.

I'm not sure whether adding a new "hartid-base" property, just like the one
the SiFive PLIC implements, would be a good idea or not.


Regards,
Frank Chang
+        qemu_irq irq = qemu_allocate_irq(riscv_clic_cpu_irq_handler,
+                                         &cpu->env, 1);
+        qdev_connect_gpio_out(dev, i, irq);
+        cpu->env.clic = clic;
+    }
+}
+

