From: Santosh Jodh <Santosh.Jodh@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Cc: "wei.wang2@amd.com" <wei.wang2@amd.com>,
	"Tim (Xen.org)" <tim@xen.org>,
	"xiantao.zhang@intel.com" <xiantao.zhang@intel.com>,
	xen-devel <xen-devel@lists.xen.org>
Subject: Re: [PATCH] dump_p2m_table: For IOMMU
Date: Tue, 7 Aug 2012 09:16:59 -0700
Message-ID: <7914B38A4445B34AA16EB9F1352942F1012F0E1E8FC8@SJCPMAILBOX01.citrite.net>
In-Reply-To: <502155D80200007800093525@nat28.tlf.novell.com>

Honestly, I don't have an AMD machine to test my code - I just wrote it for completeness' sake. I based my code on deallocate_next_page_table() in the same file.

I agree that the map/unmap can be easily avoided.

Someone more familiar with AMD IOMMU might be able to comment more.

Thanks,
Santosh

-----Original Message-----
From: Jan Beulich [mailto:JBeulich@suse.com] 
Sent: Tuesday, August 07, 2012 8:52 AM
To: Santosh Jodh
Cc: wei.wang2@amd.com; xiantao.zhang@intel.com; xen-devel; Tim (Xen.org)
Subject: Re: [Xen-devel] [PATCH] dump_p2m_table: For IOMMU

>>> On 07.08.12 at 16:49, Santosh Jodh <santosh.jodh@citrix.com> wrote:
> --- a/xen/drivers/passthrough/amd/pci_amd_iommu.c	Thu Aug 02 11:49:37 2012 +0200
> +++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c	Tue Aug 07 07:46:14 2012 -0700
> @@ -22,6 +22,7 @@
>  #include <xen/pci.h>
>  #include <xen/pci_regs.h>
>  #include <xen/paging.h>
> +#include <xen/softirq.h>
>  #include <asm/hvm/iommu.h>
>  #include <asm/amd-iommu.h>
>  #include <asm/hvm/svm/amd-iommu-proto.h>
> @@ -512,6 +513,69 @@ static int amd_iommu_group_id(u16 seg, u
>  
>  #include <asm/io_apic.h>
>  
> +static void amd_dump_p2m_table_level(struct page_info* pg, int level,
> +                                     u64 gpa)
> +{
> +    u64 address;
> +    void *table_vaddr, *pde;
> +    u64 next_table_maddr;
> +    int index, next_level, present;
> +    u32 *entry;
> +
> +    table_vaddr = __map_domain_page(pg);
> +    if ( table_vaddr == NULL )
> +    {
> +        printk("Failed to map IOMMU domain page %-16lx\n", page_to_maddr(pg));
> +        return;
> +    }
> +
> +    if ( level > 1 )

As long as the top level call below can never pass <= 1 here and the recursive call gets gated accordingly, I don't see why you do it differently here than for VT-d, resulting in both unnecessarily deep indentation and a pointless map/unmap pair around the conditional.

Jan
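[Editorial note: the restructuring Jan describes - gate the recursion on the level at the call site, so the function body never needs an outer "level > 1" conditional and never maps a page only to unmap it again - can be sketched as a self-contained miniature. Everything here (struct table, ENTRIES, dump_level) is a hypothetical stand-in for the Xen page-table structures, not Xen code.]

```c
#include <stdio.h>
#include <stdint.h>

#define ENTRIES 4  /* tiny stand-in for PTE_PER_TABLE_SIZE */

/* Hypothetical miniature of a multi-level table: each entry either
 * points at a next-level table or is a leaf mapping. */
struct table {
    struct table *next[ENTRIES];   /* next-level table, or NULL */
    int present[ENTRIES];
};

/* Dumps present leaf mappings; returns how many were printed.
 * The caller guarantees t != NULL and level >= 1, so the body needs
 * no outer "level > 1" guard wrapped in a map/unmap pair - the
 * recursion is gated per entry instead. */
static int dump_level(const struct table *t, int level, uint64_t gpa)
{
    int printed = 0;

    for (int i = 0; i < ENTRIES; i++) {
        uint64_t addr = gpa | ((uint64_t)i << (2 * level));

        if (!t->present[i])
            continue;
        if (level > 1 && t->next[i]) {
            /* gate the recursion at the call site */
            printed += dump_level(t->next[i], level - 1, addr);
        } else {
            printf("gfn: %#llx\n", (unsigned long long)addr);
            printed++;
        }
    }
    return printed;
}
```

The same shape applied to the patch would drop the "if ( level > 1 )" block (and one indentation level) from amd_dump_p2m_table_level(), with amd_dump_p2m_table() and the recursive call each ensuring the level passed in is at least 1.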

> +    {
> +        for ( index = 0; index < PTE_PER_TABLE_SIZE; index++ )
> +        {
> +            if ( !(index % 2) )
> +                process_pending_softirqs();
> +
> +            pde = table_vaddr + (index * IOMMU_PAGE_TABLE_ENTRY_SIZE);
> +            next_table_maddr = amd_iommu_get_next_table_from_pte(pde);
> +            entry = (u32*)pde;
> +
> +            next_level = get_field_from_reg_u32(entry[0],
> +                                                IOMMU_PDE_NEXT_LEVEL_MASK,
> +                                                IOMMU_PDE_NEXT_LEVEL_SHIFT);
> +
> +            present = get_field_from_reg_u32(entry[0],
> +                                             IOMMU_PDE_PRESENT_MASK,
> +                                             IOMMU_PDE_PRESENT_SHIFT);
> +
> +            address = gpa + amd_offset_level_address(index, level);
> +            if ( (next_table_maddr != 0) && (next_level != 0)
> +                && present )
> +            {
> +                amd_dump_p2m_table_level(
> +                    maddr_to_page(next_table_maddr), level - 1, address);
> +            }
> +
> +            if ( present )
> +            {
> +                printk("gfn: %-16lx  mfn: %-16lx\n",
> +                       address, next_table_maddr);
> +            }
> +        }
> +    }
> +
> +    unmap_domain_page(table_vaddr);
> +}
> +
> +static void amd_dump_p2m_table(struct domain *d)
> +{
> +    struct hvm_iommu *hd  = domain_hvm_iommu(d);
> +
> +    if ( !hd->root_table ) 
> +        return;
> +
> +    amd_dump_p2m_table_level(hd->root_table, hd->paging_mode, 0);
> +}
> +
>  const struct iommu_ops amd_iommu_ops = {
>      .init = amd_iommu_domain_init,
>      .dom0_init = amd_iommu_dom0_init,
