linux-mips.vger.kernel.org archive mirror
From: Huacai Chen <chenhuacai@kernel.org>
To: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Cc: Yu Zhao <yuzhao@google.com>,
	aleksandar.qemu.devel@gmail.com, jiaxun.yang@flygoat.com,
	kvm@vger.kernel.org, linux-mips@vger.kernel.org,
	pbonzini@redhat.com, robh+dt@kernel.org, zhangfx@lemote.com
Subject: Re: [PATCH 5/5] MAINTAINERS: Update KVM/MIPS maintainers
Date: Fri, 16 Jun 2023 16:31:13 +0800
Message-ID: <CAAhV-H54aipF6jXAiGpcjzpDy06Q28hmx-p4msvxgASHU5Z+cw@mail.gmail.com>
In-Reply-To: <20230616082322.GA7323@alpha.franken.de>

On Fri, Jun 16, 2023 at 4:23 PM Thomas Bogendoerfer
<tsbogend@alpha.franken.de> wrote:
>
> On Fri, Jun 16, 2023 at 01:18:31AM -0600, Yu Zhao wrote:
> > On Tue, Jul 28, 2020 at 11:58:20PM -0700, Huacai Chen wrote:
> > > James Hogan has been inactive for a long time, leaving KVM for MIPS
> > > orphaned. I'm working on KVM/Loongson and attempting to upstream it in
> > > both the kernel and QEMU, while Aleksandar Markovic is already a
> > > maintainer of QEMU/MIPS. We are both interested in QEMU/KVM/MIPS and
> > > have already made contributions to both the kernel and QEMU. If
> > > possible, we would like to take over the KVM/MIPS maintainership.
> > >
> > > Reviewed-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
> > > Reviewed-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
> > > Signed-off-by: Huacai Chen <chenhc@lemote.com>
> > > ---
> > >  MAINTAINERS | 4 +++-
> > >  1 file changed, 3 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/MAINTAINERS b/MAINTAINERS
> > > index bddc79a..5f9c2fd 100644
> > > --- a/MAINTAINERS
> > > +++ b/MAINTAINERS
> > > @@ -9441,9 +9441,11 @@ F:   arch/arm64/kvm/
> > >  F: include/kvm/arm_*
> > >
> > >  KERNEL VIRTUAL MACHINE FOR MIPS (KVM/mips)
> > > +M: Huacai Chen <chenhc@lemote.com>
> > > +M: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
> > >  L: linux-mips@vger.kernel.org
> > >  L: kvm@vger.kernel.org
> > > -S: Orphan
> > > +S: Maintained
> > >  F: arch/mips/include/asm/kvm*
> > >  F: arch/mips/include/uapi/asm/kvm*
> > >  F: arch/mips/kvm/
> >
> > Hi,
> >
> > Is kvm/mips still maintained? Thanks.
> >
> > I tried v6.4-rc6 and hit the following crash. It seems it has been broken since
> >
> >   commit 45c7e8af4a5e3f0bea4ac209eea34118dd57ac64
> >   Author: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
> >   Date:   Mon Mar 1 16:29:57 2021 +0100
> >
> >       MIPS: Remove KVM_TE support
>
> ok, I see what I missed when removing TE support, d'oh. Does the patch
> below fix the issue for you?
>
> Thomas.
>
> diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
> index 957121a495f0..04cedf9f8811 100644
> --- a/arch/mips/include/asm/kvm_host.h
> +++ b/arch/mips/include/asm/kvm_host.h
> @@ -317,7 +317,7 @@ struct kvm_vcpu_arch {
>         unsigned int aux_inuse;
>
>         /* COP0 State */
> -       struct mips_coproc *cop0;
> +       struct mips_coproc cop0;
Maybe keeping it as a pointer and allocating it in kvm_arch_vcpu_create()
would be better? (a smaller modification)

Huacai
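
For illustration, a rough sketch of that alternative (not part of Thomas's
patch): keep cop0 as a pointer in struct kvm_vcpu_arch and pair an allocation
and free in the existing kvm_arch_vcpu_create()/kvm_arch_vcpu_destroy() hooks,
assuming the usual kzalloc()/kfree() pattern; ordering against the rest of the
create path and error unwinding are elided here.

    /* Sketch only: keep "struct mips_coproc *cop0;" in kvm_vcpu_arch and
     * give it its own allocation, instead of embedding the struct.
     */
    int kvm_arch_vcpu_create(struct kvm_vcpu *vcpu)
    {
            vcpu->arch.cop0 = kzalloc(sizeof(*vcpu->arch.cop0), GFP_KERNEL);
            if (!vcpu->arch.cop0)
                    return -ENOMEM;

            /* ... existing vcpu setup continues here ... */
            return 0;
    }

    void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
    {
            /* ... existing teardown ... */
            kfree(vcpu->arch.cop0);
            vcpu->arch.cop0 = NULL;
    }

The trade-off is an extra allocation and pointer dereference per access versus
the larger (but mechanical) diff of embedding the struct, as Thomas's patch does.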
>
>         /* Resume PC after MMIO completion */
>         unsigned long io_pc;
> @@ -698,7 +698,7 @@ static inline bool kvm_mips_guest_can_have_fpu(struct kvm_vcpu_arch *vcpu)
>  static inline bool kvm_mips_guest_has_fpu(struct kvm_vcpu_arch *vcpu)
>  {
>         return kvm_mips_guest_can_have_fpu(vcpu) &&
> -               kvm_read_c0_guest_config1(vcpu->cop0) & MIPS_CONF1_FP;
> +               kvm_read_c0_guest_config1(&vcpu->cop0) & MIPS_CONF1_FP;
>  }
>
>  static inline bool kvm_mips_guest_can_have_msa(struct kvm_vcpu_arch *vcpu)
> @@ -710,7 +710,7 @@ static inline bool kvm_mips_guest_can_have_msa(struct kvm_vcpu_arch *vcpu)
>  static inline bool kvm_mips_guest_has_msa(struct kvm_vcpu_arch *vcpu)
>  {
>         return kvm_mips_guest_can_have_msa(vcpu) &&
> -               kvm_read_c0_guest_config3(vcpu->cop0) & MIPS_CONF3_MSA;
> +               kvm_read_c0_guest_config3(&vcpu->cop0) & MIPS_CONF3_MSA;
>  }
>
>  struct kvm_mips_callbacks {
> diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
> index edaec93a1a1f..e64372b8f66a 100644
> --- a/arch/mips/kvm/emulate.c
> +++ b/arch/mips/kvm/emulate.c
> @@ -312,7 +312,7 @@ int kvm_get_badinstrp(u32 *opc, struct kvm_vcpu *vcpu, u32 *out)
>   */
>  int kvm_mips_count_disabled(struct kvm_vcpu *vcpu)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>
>         return  (vcpu->arch.count_ctl & KVM_REG_MIPS_COUNT_CTL_DC) ||
>                 (kvm_read_c0_guest_cause(cop0) & CAUSEF_DC);
> @@ -384,7 +384,7 @@ static inline ktime_t kvm_mips_count_time(struct kvm_vcpu *vcpu)
>   */
>  static u32 kvm_mips_read_count_running(struct kvm_vcpu *vcpu, ktime_t now)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>         ktime_t expires, threshold;
>         u32 count, compare;
>         int running;
> @@ -444,7 +444,7 @@ static u32 kvm_mips_read_count_running(struct kvm_vcpu *vcpu, ktime_t now)
>   */
>  u32 kvm_mips_read_count(struct kvm_vcpu *vcpu)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>
>         /* If count disabled just read static copy of count */
>         if (kvm_mips_count_disabled(vcpu))
> @@ -502,7 +502,7 @@ ktime_t kvm_mips_freeze_hrtimer(struct kvm_vcpu *vcpu, u32 *count)
>  static void kvm_mips_resume_hrtimer(struct kvm_vcpu *vcpu,
>                                     ktime_t now, u32 count)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>         u32 compare;
>         u64 delta;
>         ktime_t expire;
> @@ -603,7 +603,7 @@ int kvm_mips_restore_hrtimer(struct kvm_vcpu *vcpu, ktime_t before,
>   */
>  void kvm_mips_write_count(struct kvm_vcpu *vcpu, u32 count)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>         ktime_t now;
>
>         /* Calculate bias */
> @@ -649,7 +649,7 @@ void kvm_mips_init_count(struct kvm_vcpu *vcpu, unsigned long count_hz)
>   */
>  int kvm_mips_set_count_hz(struct kvm_vcpu *vcpu, s64 count_hz)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>         int dc;
>         ktime_t now;
>         u32 count;
> @@ -696,7 +696,7 @@ int kvm_mips_set_count_hz(struct kvm_vcpu *vcpu, s64 count_hz)
>   */
>  void kvm_mips_write_compare(struct kvm_vcpu *vcpu, u32 compare, bool ack)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>         int dc;
>         u32 old_compare = kvm_read_c0_guest_compare(cop0);
>         s32 delta = compare - old_compare;
> @@ -779,7 +779,7 @@ void kvm_mips_write_compare(struct kvm_vcpu *vcpu, u32 compare, bool ack)
>   */
>  static ktime_t kvm_mips_count_disable(struct kvm_vcpu *vcpu)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>         u32 count;
>         ktime_t now;
>
> @@ -806,7 +806,7 @@ static ktime_t kvm_mips_count_disable(struct kvm_vcpu *vcpu)
>   */
>  void kvm_mips_count_disable_cause(struct kvm_vcpu *vcpu)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>
>         kvm_set_c0_guest_cause(cop0, CAUSEF_DC);
>         if (!(vcpu->arch.count_ctl & KVM_REG_MIPS_COUNT_CTL_DC))
> @@ -826,7 +826,7 @@ void kvm_mips_count_disable_cause(struct kvm_vcpu *vcpu)
>   */
>  void kvm_mips_count_enable_cause(struct kvm_vcpu *vcpu)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>         u32 count;
>
>         kvm_clear_c0_guest_cause(cop0, CAUSEF_DC);
> @@ -852,7 +852,7 @@ void kvm_mips_count_enable_cause(struct kvm_vcpu *vcpu)
>   */
>  int kvm_mips_set_count_ctl(struct kvm_vcpu *vcpu, s64 count_ctl)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>         s64 changed = count_ctl ^ vcpu->arch.count_ctl;
>         s64 delta;
>         ktime_t expire, now;
> diff --git a/arch/mips/kvm/mips.c b/arch/mips/kvm/mips.c
> index 884be4ef99dc..aa5583a7b05b 100644
> --- a/arch/mips/kvm/mips.c
> +++ b/arch/mips/kvm/mips.c
> @@ -649,7 +649,7 @@ static int kvm_mips_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices)
>  static int kvm_mips_get_reg(struct kvm_vcpu *vcpu,
>                             const struct kvm_one_reg *reg)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>         struct mips_fpu_struct *fpu = &vcpu->arch.fpu;
>         int ret;
>         s64 v;
> @@ -761,7 +761,7 @@ static int kvm_mips_get_reg(struct kvm_vcpu *vcpu,
>  static int kvm_mips_set_reg(struct kvm_vcpu *vcpu,
>                             const struct kvm_one_reg *reg)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>         struct mips_fpu_struct *fpu = &vcpu->arch.fpu;
>         s64 v;
>         s64 vs[2];
> @@ -1086,7 +1086,7 @@ int kvm_vm_ioctl_check_extension(struct kvm *kvm, long ext)
>  int kvm_cpu_has_pending_timer(struct kvm_vcpu *vcpu)
>  {
>         return kvm_mips_pending_timer(vcpu) ||
> -               kvm_read_c0_guest_cause(vcpu->arch.cop0) & C_TI;
> +               kvm_read_c0_guest_cause(&vcpu->arch.cop0) & C_TI;
>  }
>
>  int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu)
> @@ -1110,7 +1110,7 @@ int kvm_arch_vcpu_dump_regs(struct kvm_vcpu *vcpu)
>         kvm_debug("\thi: 0x%08lx\n", vcpu->arch.hi);
>         kvm_debug("\tlo: 0x%08lx\n", vcpu->arch.lo);
>
> -       cop0 = vcpu->arch.cop0;
> +       cop0 = &vcpu->arch.cop0;
>         kvm_debug("\tStatus: 0x%08x, Cause: 0x%08x\n",
>                   kvm_read_c0_guest_status(cop0),
>                   kvm_read_c0_guest_cause(cop0));
> @@ -1232,7 +1232,7 @@ static int __kvm_mips_handle_exit(struct kvm_vcpu *vcpu)
>
>         case EXCCODE_TLBS:
>                 kvm_debug("TLB ST fault:  cause %#x, status %#x, PC: %p, BadVaddr: %#lx\n",
> -                         cause, kvm_read_c0_guest_status(vcpu->arch.cop0), opc,
> +                         cause, kvm_read_c0_guest_status(&vcpu->arch.cop0), opc,
>                           badvaddr);
>
>                 ++vcpu->stat.tlbmiss_st_exits;
> @@ -1304,7 +1304,7 @@ static int __kvm_mips_handle_exit(struct kvm_vcpu *vcpu)
>                 kvm_get_badinstr(opc, vcpu, &inst);
>                 kvm_err("Exception Code: %d, not yet handled, @ PC: %p, inst: 0x%08x  BadVaddr: %#lx Status: %#x\n",
>                         exccode, opc, inst, badvaddr,
> -                       kvm_read_c0_guest_status(vcpu->arch.cop0));
> +                       kvm_read_c0_guest_status(&vcpu->arch.cop0));
>                 kvm_arch_vcpu_dump_regs(vcpu);
>                 run->exit_reason = KVM_EXIT_INTERNAL_ERROR;
>                 ret = RESUME_HOST;
> @@ -1377,7 +1377,7 @@ int noinstr kvm_mips_handle_exit(struct kvm_vcpu *vcpu)
>  /* Enable FPU for guest and restore context */
>  void kvm_own_fpu(struct kvm_vcpu *vcpu)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>         unsigned int sr, cfg5;
>
>         preempt_disable();
> @@ -1421,7 +1421,7 @@ void kvm_own_fpu(struct kvm_vcpu *vcpu)
>  /* Enable MSA for guest and restore context */
>  void kvm_own_msa(struct kvm_vcpu *vcpu)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>         unsigned int sr, cfg5;
>
>         preempt_disable();
> diff --git a/arch/mips/kvm/vz.c b/arch/mips/kvm/vz.c
> index 3d21cbfa7443..99d5a71e4300 100644
> --- a/arch/mips/kvm/vz.c
> +++ b/arch/mips/kvm/vz.c
> @@ -422,7 +422,7 @@ static void _kvm_vz_restore_htimer(struct kvm_vcpu *vcpu,
>   */
>  static void kvm_vz_restore_timer(struct kvm_vcpu *vcpu)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>         u32 cause, compare;
>
>         compare = kvm_read_sw_gc0_compare(cop0);
> @@ -517,7 +517,7 @@ static void _kvm_vz_save_htimer(struct kvm_vcpu *vcpu,
>   */
>  static void kvm_vz_save_timer(struct kvm_vcpu *vcpu)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>         u32 gctl0, compare, cause;
>
>         gctl0 = read_c0_guestctl0();
> @@ -863,7 +863,7 @@ static unsigned long mips_process_maar(unsigned int op, unsigned long val)
>
>  static void kvm_write_maari(struct kvm_vcpu *vcpu, unsigned long val)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>
>         val &= MIPS_MAARI_INDEX;
>         if (val == MIPS_MAARI_INDEX)
> @@ -876,7 +876,7 @@ static enum emulation_result kvm_vz_gpsi_cop0(union mips_instruction inst,
>                                               u32 *opc, u32 cause,
>                                               struct kvm_vcpu *vcpu)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>         enum emulation_result er = EMULATE_DONE;
>         u32 rt, rd, sel;
>         unsigned long curr_pc;
> @@ -1911,7 +1911,7 @@ static int kvm_vz_get_one_reg(struct kvm_vcpu *vcpu,
>                               const struct kvm_one_reg *reg,
>                               s64 *v)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>         unsigned int idx;
>
>         switch (reg->id) {
> @@ -2081,7 +2081,7 @@ static int kvm_vz_get_one_reg(struct kvm_vcpu *vcpu,
>         case KVM_REG_MIPS_CP0_MAARI:
>                 if (!cpu_guest_has_maar || cpu_guest_has_dyn_maar)
>                         return -EINVAL;
> -               *v = kvm_read_sw_gc0_maari(vcpu->arch.cop0);
> +               *v = kvm_read_sw_gc0_maari(&vcpu->arch.cop0);
>                 break;
>  #ifdef CONFIG_64BIT
>         case KVM_REG_MIPS_CP0_XCONTEXT:
> @@ -2135,7 +2135,7 @@ static int kvm_vz_set_one_reg(struct kvm_vcpu *vcpu,
>                               const struct kvm_one_reg *reg,
>                               s64 v)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>         unsigned int idx;
>         int ret = 0;
>         unsigned int cur, change;
> @@ -2562,7 +2562,7 @@ static void kvm_vz_vcpu_load_tlb(struct kvm_vcpu *vcpu, int cpu)
>
>  static int kvm_vz_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>         bool migrated, all;
>
>         /*
> @@ -2704,7 +2704,7 @@ static int kvm_vz_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
>
>  static int kvm_vz_vcpu_put(struct kvm_vcpu *vcpu, int cpu)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>
>         if (current->flags & PF_VCPU)
>                 kvm_vz_vcpu_save_wired(vcpu);
> @@ -3076,7 +3076,7 @@ static void kvm_vz_vcpu_uninit(struct kvm_vcpu *vcpu)
>
>  static int kvm_vz_vcpu_setup(struct kvm_vcpu *vcpu)
>  {
> -       struct mips_coproc *cop0 = vcpu->arch.cop0;
> +       struct mips_coproc *cop0 = &vcpu->arch.cop0;
>         unsigned long count_hz = 100*1000*1000; /* default to 100 MHz */
>
>         /*
>
> --
> Crap can work. Given enough thrust pigs will fly, but it's not necessarily a
> good idea.                                                [ RFC1925, 2.3 ]

Thread overview: 23+ messages
2020-07-29  6:58 [PATCH 1/5] PCI: Move PCI_VENDOR_ID_REDHAT definition to pci_ids.h Huacai Chen
2020-07-29  6:58 ` [PATCH 2/5] dt-bindings: mips: Document Loongson kvm guest board Huacai Chen
2020-07-29  9:52   ` Thomas Bogendoerfer
2020-07-29 10:08     ` Huacai Chen
2020-07-31  3:01       ` Huacai Chen
2020-07-31  8:32         ` Thomas Bogendoerfer
2020-07-31 11:56           ` Jiaxun Yang
2020-08-03  6:05             ` Huacai Chen
2020-08-03  9:13               ` Jiaxun Yang
2020-08-04  7:47                 ` Thomas Bogendoerfer
2020-07-29  6:58 ` [PATCH 3/5] MIPS: KVM: Add kvm guest support for Loongson-3 Huacai Chen
2020-07-29  6:58 ` [PATCH 4/5] MIPS: Update default config file " Huacai Chen
2020-07-29  6:58 ` [PATCH 5/5] MAINTAINERS: Update KVM/MIPS maintainers Huacai Chen
2023-06-16  7:18   ` Yu Zhao
2023-06-16  7:55     ` Huacai Chen
2023-06-16  8:23     ` Thomas Bogendoerfer
2023-06-16  8:31       ` Huacai Chen [this message]
2023-06-18  7:55         ` Thomas Bogendoerfer
2023-06-17  1:52       ` Yu Zhao
2023-06-18  2:13         ` Huacai Chen
2023-06-18  2:51           ` Yu Zhao
2023-06-26  8:00             ` Huacai Chen
2023-06-27 18:58               ` Yu Zhao
