From: Paolo Bonzini <pbonzini@redhat.com>
To: Stefano Stabellini <sstabellini@kernel.org>, anthony.perard@citrix.com
Cc: qemu-devel@nongnu.org, xen-devel@lists.xensource.com,
	crosthwaite.peter@gmail.com, rth@twiddle.net,
	hrgstephen@gmail.com, x1917x@gmail.com
Subject: Re: [Qemu-devel] [PATCH RFC] xen/mapcache: store dma information in revmapcache entries for debugging
Date: Wed, 3 May 2017 10:55:31 +0200	[thread overview]
Message-ID: <84b6b9c8-274c-5387-5bcd-d34584dd398c@redhat.com> (raw)
In-Reply-To: <alpine.DEB.2.10.1705021646310.8859@sstabellini-ThinkPad-X260>



On 03/05/2017 02:08, Stefano Stabellini wrote:
> The Xen mapcache is able to create long-term mappings, called "locked"
> mappings. The third parameter of the xen_map_cache call specifies
> whether a mapping is a "locked" mapping.
> 
> 
> From the QEMU point of view there are two kinds of long term mappings:
> 
> [a] device memory mappings, such as option roms and video memory
> [b] dma mappings, created by dma_memory_map & friends
> 
> After certain operations, ballooning a VM in particular, Xen kindly asks
> QEMU to destroy all mappings. However, [a] mappings are certainly
> present and cannot be removed. That's not a problem, as they are not
> affected by ballooning. The *real* problem is that if there are any
> mappings of type [b], any outstanding dma operations could fail. This is
> a known shortcoming. In other words, when Xen asks QEMU to destroy all
> mappings, it is an error if any [b] mappings exist.
> 
> However, today we have no way of distinguishing [a] from [b]. Because of
> that, we cannot even print a decent warning.
> 
> This patch introduces a new "dma" bool field to MapCacheRev entries, to
> remember whether a given mapping is for dma or is a long-term device
> memory mapping. When xen_invalidate_map_cache is called, we print a
> warning if any [b] mappings exist. We ignore [a] mappings.
> 
> Mappings created by qemu_map_ram_ptr are assumed to be [a], while
> mappings created by address_space_map->qemu_ram_ptr_length are assumed
> to be [b].
> 
> The goal of the patch is to make debugging and system understanding
> easier.

Sure, why not.

Acked-by: Paolo Bonzini <pbonzini@redhat.com>

Paolo

> Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> diff --git a/exec.c b/exec.c
> index eac6085..85769e1 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -2084,10 +2084,10 @@ void *qemu_map_ram_ptr(RAMBlock *ram_block, ram_addr_t addr)
>           * In that case just map until the end of the page.
>           */
>          if (block->offset == 0) {
> -            return xen_map_cache(addr, 0, 0);
> +            return xen_map_cache(addr, 0, 0, false);
>          }
>  
> -        block->host = xen_map_cache(block->offset, block->max_length, 1);
> +        block->host = xen_map_cache(block->offset, block->max_length, 1, false);
>      }
>      return ramblock_ptr(block, addr);
>  }
> @@ -2117,10 +2117,10 @@ static void *qemu_ram_ptr_length(RAMBlock *ram_block, ram_addr_t addr,
>           * In that case just map the requested area.
>           */
>          if (block->offset == 0) {
> -            return xen_map_cache(addr, *size, 1);
> +            return xen_map_cache(addr, *size, 1, true);
>          }
>  
> -        block->host = xen_map_cache(block->offset, block->max_length, 1);
> +        block->host = xen_map_cache(block->offset, block->max_length, 1, true);
>      }
>  
>      return ramblock_ptr(block, addr);
> diff --git a/hw/i386/xen/xen-mapcache.c b/hw/i386/xen/xen-mapcache.c
> index 31debdf..fa4282a 100644
> --- a/hw/i386/xen/xen-mapcache.c
> +++ b/hw/i386/xen/xen-mapcache.c
> @@ -62,6 +62,7 @@ typedef struct MapCacheRev {
>      hwaddr paddr_index;
>      hwaddr size;
>      QTAILQ_ENTRY(MapCacheRev) next;
> +    bool dma;
>  } MapCacheRev;
>  
>  typedef struct MapCache {
> @@ -202,7 +203,7 @@ static void xen_remap_bucket(MapCacheEntry *entry,
>  }
>  
>  static uint8_t *xen_map_cache_unlocked(hwaddr phys_addr, hwaddr size,
> -                                       uint8_t lock)
> +                                       uint8_t lock, bool dma)
>  {
>      MapCacheEntry *entry, *pentry = NULL;
>      hwaddr address_index;
> @@ -289,6 +290,7 @@ tryagain:
>      if (lock) {
>          MapCacheRev *reventry = g_malloc0(sizeof(MapCacheRev));
>          entry->lock++;
> +        reventry->dma = dma;
>          reventry->vaddr_req = mapcache->last_entry->vaddr_base + address_offset;
>          reventry->paddr_index = mapcache->last_entry->paddr_index;
>          reventry->size = entry->size;
> @@ -300,12 +302,12 @@ tryagain:
>  }
>  
>  uint8_t *xen_map_cache(hwaddr phys_addr, hwaddr size,
> -                       uint8_t lock)
> +                       uint8_t lock, bool dma)
>  {
>      uint8_t *p;
>  
>      mapcache_lock();
> -    p = xen_map_cache_unlocked(phys_addr, size, lock);
> +    p = xen_map_cache_unlocked(phys_addr, size, lock, dma);
>      mapcache_unlock();
>      return p;
>  }
> @@ -426,8 +428,10 @@ void xen_invalidate_map_cache(void)
>      mapcache_lock();
>  
>      QTAILQ_FOREACH(reventry, &mapcache->locked_entries, next) {
> -        DPRINTF("There should be no locked mappings at this time, "
> -                "but "TARGET_FMT_plx" -> %p is present\n",
> +        if (!reventry->dma)
> +            continue;
> +        fprintf(stderr, "Locked DMA mapping while invalidating mapcache!"
> +                " "TARGET_FMT_plx" -> %p is present\n",
>                  reventry->paddr_index, reventry->vaddr_req);
>      }
>  
> diff --git a/include/sysemu/xen-mapcache.h b/include/sysemu/xen-mapcache.h
> index b8c93b9..01daaad 100644
> --- a/include/sysemu/xen-mapcache.h
> +++ b/include/sysemu/xen-mapcache.h
> @@ -17,7 +17,7 @@ typedef hwaddr (*phys_offset_to_gaddr_t)(hwaddr start_addr,
>  void xen_map_cache_init(phys_offset_to_gaddr_t f,
>                          void *opaque);
>  uint8_t *xen_map_cache(hwaddr phys_addr, hwaddr size,
> -                       uint8_t lock);
> +                       uint8_t lock, bool dma);
>  ram_addr_t xen_ram_addr_from_mapcache(void *ptr);
>  void xen_invalidate_map_cache_entry(uint8_t *buffer);
>  void xen_invalidate_map_cache(void);
> @@ -31,7 +31,8 @@ static inline void xen_map_cache_init(phys_offset_to_gaddr_t f,
>  
>  static inline uint8_t *xen_map_cache(hwaddr phys_addr,
>                                       hwaddr size,
> -                                     uint8_t lock)
> +                                     uint8_t lock,
> +                                     bool dma)
>  {
>      abort();
>  }
> 


