From: "Roger Pau Monné" <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"Andrew Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3 7/9] x86/PVH: actually show Dom0's stacks from debug key '0'
Date: Thu, 23 Sep 2021 12:31:14 +0200	[thread overview]
Message-ID: <YUxXcrMiPDiGkdw9@MacBook-Air-de-Roger.local> (raw)
In-Reply-To: <ca129fa5-7165-a9ef-4e57-75c43a708960@suse.com>

On Tue, Sep 21, 2021 at 09:20:00AM +0200, Jan Beulich wrote:
> show_guest_stack() does nothing for HVM. Introduce an HVM-specific
> dumping function, paralleling the 64- and 32-bit PV ones. We don't know
> the real stack size, so only dump up to the next page boundary.
> 
> Rather than adding a vcpu parameter to hvm_copy_from_guest_linear(),
> introduce hvm_copy_from_vcpu_linear(), which - for now at least - won't
> need a "pfinfo" parameter in return.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> TBD: The bypassing of the output interleaving avoidance isn't nice, but
>      I've not been able to think of an alternative. Avoiding the call to
>      hvm_vcpu_virtual_to_linear() would be in principle possible (adding
>      in the SS base directly), but one way or another we need to access
>      guest memory and hence can't sensibly avoid using the P2M layer.
>      However, commit 0996e0f38540 ("x86/traps: prevent interleaving of
>      concurrent cpu state dumps") introduced this logic here while
>      really only talking about show_execution_state().
>      vcpu_show_execution_state() is imo much less prone to interleaving
>      of its output: Its uses from the keyhandler are sequential already
>      anyway, and the only other use is from hvm_triple_fault(). Instead
>      of making the locking conditional, it may therefore be an option to
>      drop it again altogether.
> TBD: For now this dumps also user mode stacks. We may want to restrict
>      this.
> TBD: An alternative to putting this next to {,compat_}show_guest_stack()
>      is to put it in hvm.c, eliminating the need to introduce
>      hvm_copy_from_vcpu_linear(), but then requiring extra parameters to
>      be passed.
> TBD: Technically this makes unnecessary the earlier added entering/
>      leaving of the VMCS. Yet to avoid a series of non-trivial
>      enter/exit pairs, I think leaving that in is still beneficial. In
>      which case here perhaps merely the associated comment may want
>      tweaking.
> ---
> v3: New.
> 
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -3408,6 +3408,15 @@ enum hvm_translation_result hvm_copy_fro
>                        PFEC_page_present | pfec, pfinfo);
>  }
>  
> +enum hvm_translation_result hvm_copy_from_vcpu_linear(
> +    void *buf, unsigned long addr, unsigned int size, struct vcpu *v,
> +    unsigned int pfec)

Even if your current use case doesn't need it, would it be worth
adding a pagefault_info_t parameter?

> +{
> +    return __hvm_copy(buf, addr, size, v,
> +                      HVMCOPY_from_guest | HVMCOPY_linear,
> +                      PFEC_page_present | pfec, NULL);
> +}
> +
>  unsigned int copy_to_user_hvm(void *to, const void *from, unsigned int len)
>  {
>      int rc;
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -364,6 +364,71 @@ static void show_guest_stack(struct vcpu
>      printk("\n");
>  }
>  
> +static void show_hvm_stack(struct vcpu *v, const struct cpu_user_regs *regs)
> +{
> +#ifdef CONFIG_HVM
> +    unsigned long sp = regs->rsp, addr;
> +    unsigned int i, bytes, words_per_line, pfec = PFEC_page_present;
> +    struct segment_register ss, cs;
> +
> +    hvm_get_segment_register(v, x86_seg_ss, &ss);
> +    hvm_get_segment_register(v, x86_seg_cs, &cs);
> +
> +    if ( hvm_long_mode_active(v) && cs.l )
> +        i = 16, bytes = 8;
> +    else
> +    {
> +        sp = ss.db ? (uint32_t)sp : (uint16_t)sp;
> +        i = ss.db ? 8 : 4;
> +        bytes = cs.db ? 4 : 2;
> +    }
> +
> +    if ( bytes == 8 || (ss.db && !ss.base) )
> +        printk("Guest stack trace from sp=%0*lx:", i, sp);
> +    else
> +        printk("Guest stack trace from ss:sp=%04x:%0*lx:", ss.sel, i, sp);
> +
> +    if ( !hvm_vcpu_virtual_to_linear(v, x86_seg_ss, &ss, sp, bytes,
> +                                     hvm_access_read, &cs, &addr) )
> +    {
> +        printk(" Guest-inaccessible memory\n");
> +        return;
> +    }
> +
> +    if ( ss.dpl == 3 )
> +        pfec |= PFEC_user_mode;
> +
> +    words_per_line = stack_words_per_line * (sizeof(void *) / bytes);
> +    for ( i = 0; i < debug_stack_lines * words_per_line; )
> +    {
> +        unsigned long val = 0;
> +
> +        if ( (addr ^ (addr + bytes - 1)) & PAGE_SIZE )
> +            break;
> +
> +        if ( !(i++ % words_per_line) )
> +            printk("\n  ");
> +
> +        if ( hvm_copy_from_vcpu_linear(&val, addr, bytes, v,
> +                                       pfec) != HVMTRANS_okay )

I think I'm confused, but what about guests without paging enabled?
Don't you need to use hvm_copy_from_guest_phys (likely transformed
into hvm_copy_from_vcpu_phys)?

> +        {
> +            printk(" Fault while accessing guest memory.");
> +            break;
> +        }
> +
> +        printk(" %0*lx", 2 * bytes, val);
> +
> +        addr += bytes;
> +        if ( !(addr & (PAGE_SIZE - 1)) )
> +            break;
> +    }
> +
> +    if ( !i )
> +        printk(" Stack empty.");
> +    printk("\n");
> +#endif
> +}
> +
>  /*
>   * Notes for get_{stack,shstk}*_bottom() helpers
>   *
> @@ -629,7 +694,7 @@ void show_execution_state(const struct c
>  
>  void vcpu_show_execution_state(struct vcpu *v)
>  {
> -    unsigned long flags;
> +    unsigned long flags = 0;
>  
>      if ( test_bit(_VPF_down, &v->pause_flags) )
>      {
> @@ -663,14 +728,22 @@ void vcpu_show_execution_state(struct vc
>      }
>  #endif
>  
> -    /* Prevent interleaving of output. */
> -    flags = console_lock_recursive_irqsave();
> +    /*
> +     * Prevent interleaving of output if possible. For HVM we can't do so, as
> +     * the necessary P2M lookups involve locking, which has to occur with IRQs
> +     * enabled.
> +     */
> +    if ( !is_hvm_vcpu(v) )
> +        flags = console_lock_recursive_irqsave();
>  
>      vcpu_show_registers(v);
> -    if ( guest_kernel_mode(v, &v->arch.user_regs) )
> +    if ( is_hvm_vcpu(v) )
> +        show_hvm_stack(v, &v->arch.user_regs);

Would it make sense to unlock in show_hvm_stack, and thus keep the
printing of vcpu_show_registers locked even when in HVM context?

TBH I've never found the guest stack dump to be helpful for debugging
purposes, but maybe others do.

Thanks, Roger.

