From: Tamas K Lengyel <tamas@tklengyel.com>
Subject: Re: [PATCH] vm_event: Implement ARM SMC events
Date: Tue, 12 Apr 2016 09:22:19 -0600
To: Konrad Rzeszutek Wilk
Cc: Wei Liu, Keir Fraser, Razvan Cojocaru, Stefano Stabellini,
 Andrew Cooper, Ian Jackson, Julien Grall, Jan Beulich, Xen-devel
List-Id: xen-devel@lists.xenproject.org

On Apr 12, 2016 08:58, "Konrad Rzeszutek Wilk" <konrad.wilk@oracle.com> wrote:
>
> On Mon, Apr 11, 2016 at 01:47:22PM -0600, Tamas K Lengyel wrote:
> > From: Tamas K Lengyel <tklengyel@sec.in.tum.de>
> >
> > The ARM SMC instructions are already configured to trap to Xen by default.
> > In this patch we allow a user-space process in a privileged domain to
> > receive notification when such an event happens through the vm_event
> > subsystem.
> >
> > This patch will likely need to be broken up into several smaller
> > patches. Right now this patch adds (and could be broken into smaller
> > patches accordingly):
> >     - Implement monitor_op domctl handler for SOFTWARE_BREAKPOINTs on ARM
> >     - Implement vm_event register fill/set routines for ARM. This required
> >       removing the function from common as the function prototype now
> >       differs on the two archs.
> >     - Sending notification as SOFTWARE_BREAKPOINT vm_event from the SMC
> >       trap handlers.
> >     - Extend the xen-access test tool to receive SMC notification and
> >       step the PC manually in the reply.
> >
> > I'm sending it as an RFC to gather feedback on what has been overlooked
> > in this revision. This patch has been tested on a Cubietruck board and
> > works fine, but would probably not work on 64-bit boards.
>
> I only have some small nitpicking.
>
> > +++ b/xen/arch/arm/traps.c
> > @@ -41,6 +41,7 @@
> >  #include <asm/mmio.h>
> >  #include <asm/cpufeature.h>
> >  #include <asm/flushtlb.h>
> > +#include <asm/vm_event.h>
> >
> >  #include "decode.h"
> >  #include "vtimer.h"
> > @@ -2449,6 +2450,21 @@ bad_data_abort:
> >      inject_dabt_exception(regs, info.gva, hsr.len);
> >  }
> >
> > +static void do_trap_smc(struct cpu_user_regs *regs)
> > +{
> > +    int rc = 0;
>
> Newline here

Ack.

> > +    if ( current->domain->arch.monitor.software_breakpoint_enabled )
> > +    {
> > +        rc = vm_event_smc(regs);
> > +    }
> > +
> > +    if ( rc != 1 )
> > +    {
> > +        GUEST_BUG_ON(!psr_mode_is_32bit(regs->cpsr));
>
> This differs a bit from below. Should there be a comment explaining why
> we expect it to be only in 32-bit mode?
>
> > +        inject_undef32_exception(regs);
>
> Below you do inject_undef64_exception?
>
> Perhaps there should be a check if it is 32 or 64-bit?

Yes, indeed there should be.
>
> > +    }
> > +}
> > +
> >  static void enter_hypervisor_head(struct cpu_user_regs *regs)
> >  {
> >      if ( guest_mode(regs) )
> > @@ -2524,7 +2540,7 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
> >           */
> >          GUEST_BUG_ON(!psr_mode_is_32bit(regs->cpsr));
> >          perfc_incr(trap_smc32);
> > -        inject_undef32_exception(regs);
> > +        do_trap_smc(regs);
> >          break;
> >      case HSR_EC_HVC32:
> >          GUEST_BUG_ON(!psr_mode_is_32bit(regs->cpsr));
> > @@ -2557,7 +2573,7 @@ asmlinkage void do_trap_hypervisor(struct cpu_user_regs *regs)
> >           */
> >          GUEST_BUG_ON(psr_mode_is_32bit(regs->cpsr));
> >          perfc_incr(trap_smc64);
> > -        inject_undef64_exception(regs, hsr.len);
> > +        do_trap_smc(regs);
>
> As in here.. Now it will call inject_undef32_exception. That can't
> be healthy?

Ack.

>
> >          break;
> >      case HSR_EC_SYSREG:
> >          GUEST_BUG_ON(psr_mode_is_32bit(regs->cpsr));
> > diff --git a/xen/arch/arm/vm_event.c b/xen/arch/arm/vm_event.c
> > new file mode 100644
> > index 0000000..d997313
> > --- /dev/null
> > +++ b/xen/arch/arm/vm_event.c
> > @@ -0,0 +1,95 @@
> > +/*
> > + * arch/arm/vm_event.c
> > + *
> > + * Architecture-specific vm_event handling routines
> > + *
> > + * Copyright (c) 2015 Tamas K Lengyel (tamas@tklengyel.com)
>
> 2016?

Yeap.

> Also .. shouldn't the company be attributed as well? I see BitDefender
> on some of them so not sure what relationship you have with them.

I'm not affiliated with BitDefender in any way and at the moment I'm just
doing this on my own as I'm no longer with Novetta either.

>
> > + *
> > + * This program is free software; you can redistribute it and/or
> > + * modify it under the terms of the GNU General Public
> > + * License v2 as published by the Free Software Foundation.
> > + *
> > + * This program is distributed in the hope that it will be useful,
> > + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> > + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
> > + * See the GNU
> > + * General Public License for more details.
> > + *
> > + * You should have received a copy of the GNU General Public
> > + * License along with this program; If not, see <http://www.gnu.org/licenses/>.
> > + */
> > +
> > +#include <xen/sched.h>
> > +#include <asm/vm_event.h>
> > +
> > +static inline
> > +void vm_event_fill_regs(vm_event_request_t *req,
> > +                        const struct cpu_user_regs *regs)
> > +{
> > +    req->data.regs.arm.r0 = regs->r0;
> > +    req->data.regs.arm.r1 = regs->r1;
> > +    req->data.regs.arm.r2 = regs->r2;
> > +    req->data.regs.arm.r3 = regs->r3;
> > +    req->data.regs.arm.r4 = regs->r4;
> > +    req->data.regs.arm.r5 = regs->r5;
> > +    req->data.regs.arm.r6 = regs->r6;
> > +    req->data.regs.arm.r7 = regs->r7;
> > +    req->data.regs.arm.r8 = regs->r8;
> > +    req->data.regs.arm.r9 = regs->r9;
> > +    req->data.regs.arm.r10 = regs->r10;
> > +    req->data.regs.arm.r11 = regs->r11;
> > +    req->data.regs.arm.r12 = regs->r12;
> > +    req->data.regs.arm.sp = regs->sp;
> > +    req->data.regs.arm.lr = regs->lr;
> > +    req->data.regs.arm.pc = regs->pc;
> > +    req->data.regs.arm.cpsr = regs->cpsr;
> > +    req->data.regs.arm.ttbr0 = READ_SYSREG64(TTBR0_EL1);
> > +    req->data.regs.arm.ttbr1 = READ_SYSREG64(TTBR1_EL1);
> > +}
> > +
> > +void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp)
> > +{
> > +    v->arch.cpu_info->guest_cpu_user_regs.r0 = rsp->data.regs.arm.r0;
> > +    v->arch.cpu_info->guest_cpu_user_regs.r1 = rsp->data.regs.arm.r1;
> > +    v->arch.cpu_info->guest_cpu_user_regs.r2 = rsp->data.regs.arm.r2;
> > +    v->arch.cpu_info->guest_cpu_user_regs.r3 = rsp->data.regs.arm.r3;
> > +    v->arch.cpu_info->guest_cpu_user_regs.r4 = rsp->data.regs.arm.r4;
> > +    v->arch.cpu_info->guest_cpu_user_regs.r5 = rsp->data.regs.arm.r5;
> > +    v->arch.cpu_info->guest_cpu_user_regs.r6 = rsp->data.regs.arm.r6;
> > +    v->arch.cpu_info->guest_cpu_user_regs.r7 = rsp->data.regs.arm.r7;
> > +    v->arch.cpu_info->guest_cpu_user_regs.r8 = rsp->data.regs.arm.r8;
> > +    v->arch.cpu_info->guest_cpu_user_regs.r9 = rsp->data.regs.arm.r9;
> > +    v->arch.cpu_info->guest_cpu_user_regs.r10 = rsp->data.regs.arm.r10;
> > +    v->arch.cpu_info->guest_cpu_user_regs.r11 = rsp->data.regs.arm.r11;
> > +    v->arch.cpu_info->guest_cpu_user_regs.r12 = rsp->data.regs.arm.r12;
> > +    v->arch.cpu_info->guest_cpu_user_regs.sp = rsp->data.regs.arm.sp;
> > +    v->arch.cpu_info->guest_cpu_user_regs.lr = rsp->data.regs.arm.lr;
> > +    v->arch.cpu_info->guest_cpu_user_regs.pc = rsp->data.regs.arm.pc;
> > +    v->arch.cpu_info->guest_cpu_user_regs.cpsr = rsp->data.regs.arm.cpsr;
> > +    v->arch.ttbr0 = rsp->data.regs.arm.ttbr0;
> > +    v->arch.ttbr1 = rsp->data.regs.arm.ttbr1;
> > +}
> > +
> > +int vm_event_smc(const struct cpu_user_regs *regs) {
> > +    struct vcpu *curr = current;
> > +    struct arch_domain *ad = &curr->domain->arch;
> > +    vm_event_request_t req = { 0 };
> > +
> > +    if ( !ad->monitor.software_breakpoint_enabled )
> > +        return 0;
> > +
> > +    req.reason = VM_EVENT_REASON_SOFTWARE_BREAKPOINT;
> > +    req.vcpu_id = curr->vcpu_id;
> > +
> > +    vm_event_fill_regs(&req, regs);
> > +
> > +    return vm_event_monitor_traps(curr, 1, &req);
>
> Perhaps a comment right past 1 explaining what this mystical
> 1 value means?

The function prototype is pretty self-explanatory IMHO, while the call
itself may not be. It's a boolean that determines whether the trap is
synchronous, i.e. whether the vCPU should be paused afterwards. I can add
a comment for it but I don't think it's necessary. That just reminds me
though that MAINTAINERS needs to be updated to add this file to the
vm_event stack as well.

Thanks!
Tamas