From mboxrd@z Thu Jan 1 00:00:00 1970
From: Frank Chang <frank.chang@sifive.com>
Date: Mon, 28 Jun 2021 15:49:05 +0800
Subject: Re: [RFC PATCH 03/11] hw/intc: Add CLIC device
To: LIU Zhiwei <zhiwei_liu@c-sky.com>
Cc: "open list:RISC-V", "qemu-devel@nongnu.org Developers", wxy194768@alibaba-inc.com, Alistair Francis, Palmer Dabbelt
In-Reply-To: <6c9c894c-6f85-c6bb-a372-d69e15896571@c-sky.com>

LIU Zhiwei <zhiwei_liu@c-sky.com>
wrote on Mon, Jun 28, 2021 at 3:40 PM:

>
> On 2021/6/28 3:23 PM, Frank Chang wrote:
>
> LIU Zhiwei wrote on Mon, Jun 28, 2021 at 3:17 PM:
>
>>
>> On 2021/6/26 8:56 PM, Frank Chang wrote:
>>
>> On Wed, Jun 16, 2021 at 10:56 AM LIU Zhiwei wrote:
>>
>>>
>>> On 6/13/21 6:10 PM, Frank Chang wrote:
>>>
>>> LIU Zhiwei wrote on Fri, Apr 9, 2021 at 3:57 PM:
>>>
>>>> +static void riscv_clic_realize(DeviceState *dev, Error **errp)
>>>> +{
>>>> +    RISCVCLICState *clic = RISCV_CLIC(dev);
>>>> +    size_t harts_x_sources = clic->num_harts * clic->num_sources;
>>>> +    int irqs, i;
>>>> +
>>>> +    if (clic->prv_s && clic->prv_u) {
>>>> +        irqs = 3 * harts_x_sources;
>>>> +    } else if (clic->prv_s || clic->prv_u) {
>>>> +        irqs = 2 * harts_x_sources;
>>>> +    } else {
>>>> +        irqs = harts_x_sources;
>>>> +    }
>>>> +
>>>> +    clic->clic_size = irqs * 4 + 0x1000;
>>>> +    memory_region_init_io(&clic->mmio, OBJECT(dev), &riscv_clic_ops, clic,
>>>> +                          TYPE_RISCV_CLIC, clic->clic_size);
>>>> +
>>>> +    clic->clicintip = g_new0(uint8_t, irqs);
>>>> +    clic->clicintie = g_new0(uint8_t, irqs);
>>>> +    clic->clicintattr = g_new0(uint8_t, irqs);
>>>> +    clic->clicintctl = g_new0(uint8_t, irqs);
>>>> +    clic->active_list = g_new0(CLICActiveInterrupt, irqs);
>>>> +    clic->active_count = g_new0(size_t, clic->num_harts);
>>>> +    clic->exccode = g_new0(uint32_t, clic->num_harts);
>>>> +    clic->cpu_irqs = g_new0(qemu_irq, clic->num_harts);
>>>> +    sysbus_init_mmio(SYS_BUS_DEVICE(dev), &clic->mmio);
>>>> +
>>>> +    /* Allocate irq through gpio, so that we can use qtest */
>>>> +    qdev_init_gpio_in(dev, riscv_clic_set_irq, irqs);
>>>> +    qdev_init_gpio_out(dev, clic->cpu_irqs, clic->num_harts);
>>>> +
>>>> +    for (i = 0; i < clic->num_harts; i++) {
>>>> +        RISCVCPU *cpu = RISCV_CPU(qemu_get_cpu(i));
>>>
>>> As the spec says:
>>>   Smaller single-core systems might have only a CLIC,
>>>   while multicore systems might have a CLIC per-core and a single shared
>>>   PLIC.
>>>   The PLIC xeip signals are treated as hart-local interrupt sources by
>>>   the CLIC at each core.
>>>
>>> It looks like it's possible to have one CLIC instance per core.
>>>
>>> If you want to deliver an interrupt to one hart, you should encode the
>>> IRQ with the interrupt number, the hart number, and the interrupt target
>>> privilege, then set the irq.
>>>
>>> I think calculating the hart number is the task of the PLIC, and it can
>>> make use of "hartid-base" to do so.
>>>
>>> Thanks,
>>> Zhiwei
>>>
>>
>> Hi Zhiwei,
>>
>> What I mean is: if there are multiple CLIC instances, one per core (the
>> CLIC spec allows that), and the CLIC always binds CPUs starting from
>> index 0, it will be impossible for a CLIC instance to bind a CPU at any
>> other index.
>>
>> For example, a 4-core system could have 4 CLIC instances:
>>   * CLIC 0 binds to CPU 0
>>   * CLIC 1 binds to CPU 1
>>   * CLIC 2 binds to CPU 2
>>   * CLIC 3 binds to CPU 3
>>
>> That's why I said it may make sense to pass an extra "hartid-base"
>> property, just like the PLIC does.
>> I know most hartids are calculated from the requesting address, so most
>> hartid usages should be fine. But I saw two places using qemu_get_cpu(),
>> which may cause a problem for the scenario I describe above:
>> i.e. riscv_clic_next_interrupt() and riscv_clic_realize(), as in my
>> original reply.
>>
>> So what's the problem here?
>>
>> Currently all cores share the same CLIC instance. Do you want to give
>> each core a CLIC instance?
>>
> Yes, that's what I mean, which should be supported as the spec says[1]:
>   The CLIC complements the PLIC. Smaller single-core systems might have
>   only a CLIC,
>   while multicore systems might have *a CLIC per-core* and a single
>   shared PLIC.
>   The PLIC xeip signals are treated as hart-local interrupt sources by
>   the CLIC at each core.
>
> [1]
> https://github.com/riscv/riscv-fast-interrupt/blob/646310a5e4ae055964b4680f12c1c04a7cc0dd56/clic.adoc#12-clic-versus-plic
>
> Thanks,
> Frank Chang
>
>
> If we give each core a CLIC instance, it is not convenient to access the
> shared memory, such as 0x0-0x1000.
> Which CLIC instance should contain this memory region?
>

What do you mean by "access the shared memory" here?
I thought the memory region is defined during the CLIC's creation?
So it should depend on the platform that creates the CLIC instances.

Thanks,
Frank Chang

> Thanks,
> Zhiwei
>
>
>> Thanks,
>> Zhiwei
>>
>> Regards,
>> Frank Chang
>>
>>
>>> However, if you try to bind CPU references starting from index i = 0,
>>> it's not possible for each per-core CLIC to bind its own CPU instance
>>> in a multicore system, as they all have to bind from CPU 0.
>>>
>>> I'm not sure whether adding a new "hartid-base" property, like the one
>>> the SiFive PLIC implements, would be a good idea or not.
>>>
>>>
>>> Regards,
>>> Frank Chang
>>>
>>>
>>>> +        qemu_irq irq = qemu_allocate_irq(riscv_clic_cpu_irq_handler,
>>>> +                                         &cpu->env, 1);
>>>> +        qdev_connect_gpio_out(dev, i, irq);
>>>> +        cpu->env.clic = clic;
>>>> +    }
>>>> +}
>>>> +