* A problem of Intel IOMMU hardware ?
@ 2021-03-17  3:16 ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
  0 siblings, 0 replies; 50+ messages in thread
From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.) @ 2021-03-17  3:16 UTC (permalink / raw)
  To: dwmw2, baolu.lu, joro, will, alex.williamson
  Cc: iommu, LKML, Gonglei (Arei), chenjiashang, longpeng2

[-- Attachment #1: Type: text/plain, Size: 2241 bytes --]

Hi guys,

We have found that the Intel IOMMU cache (i.e. the IOTLB) may misbehave in a
specific situation, causing DMA to fail or to return wrong data.

The reproducer (based on Alex's vfio test suite [1]) is attached; it can
reproduce the problem with high probability (~50%).

The machine we used is:
processor	: 47
vendor_id	: GenuineIntel
cpu family	: 6
model		: 85
model name	: Intel(R) Xeon(R) Gold 6146 CPU @ 3.20GHz
stepping	: 4
microcode	: 0x2000069

And the iommu capability reported is:
ver 1:0 cap 8d2078c106f0466 ecap f020df
(caching mode = 0, page-selective invalidation = 1)
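
For reference, these two fields can be decoded from the raw cap value using the
same bit positions as the Linux driver's cap_caching_mode()/cap_pgsel_inv()
macros (a minimal stand-alone sketch, not part of the reproducer):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        uint64_t cap = 0x8d2078c106f0466ULL;

        /* CAP_REG bit 7: Caching Mode (CM) */
        printf("caching mode = %llu\n",
               (unsigned long long)((cap >> 7) & 1));
        /* CAP_REG bit 39: Page-Selective Invalidation (PSI) */
        printf("page-selective invalidation = %llu\n",
               (unsigned long long)((cap >> 39) & 1));
        return 0;
    }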

(The problem is also on 'Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz' and
'Intel(R) Xeon(R) Platinum 8378A CPU @ 3.00GHz')

We run the reproducer on Linux 4.18, and it works as follows:

Step 1. Allocate 4G of *2M-hugetlb* memory (N.B. there is no problem with 4K-page mappings)
Step 2. DMA-map the whole 4G of memory
Step 3.
    while (1) {
        {UNMAP, 0x0, 0xa0000}, ------------------------------------ (a)
        {UNMAP, 0xc0000, 0xbff40000},
        {MAP,   0x0, 0xc0000000}, --------------------------------- (b)
                use GDB to pause here, then issue a DMA read at IOVA=0:
                sometimes the DMA succeeds (as expected),
                but sometimes it fails (the IOMMU reports not-present).
        {UNMAP, 0x0, 0xc0000000}, --------------------------------- (c)
        {MAP,   0x0, 0xa0000},
        {MAP,   0xc0000, 0xbff40000},
    }

The DMA read operations should succeed between (b) and (c); at the very least
they should NOT report not-present!

After analyzing the problem, we suspect the Intel IOMMU IOTLB: it seems the
DMA remapping hardware still uses IOTLB entries (or other caches) left over
from before (a).

When we do the DMA unmap at (a), the IOTLB is flushed:
    intel_iommu_unmap
        domain_unmap
            iommu_flush_iotlb_psi

When we do the DMA map at (b), there is no need to flush the IOTLB according
to this IOMMU's capabilities:
    intel_iommu_map
        domain_pfn_mapping
            domain_mapping
                __mapping_notify_one
                    if (cap_caching_mode(iommu->cap)) // FALSE
                        iommu_flush_iotlb_psi
But the problem disappears if we FORCE a flush here, so we suspect the IOMMU
hardware.
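
For reference, the FORCE flush we tested is essentially the following change in
__mapping_notify_one() (a sketch against our 4.18 sources; names and signatures
may differ in other trees):

    static void __mapping_notify_one(struct intel_iommu *iommu, ...)
    {
            /*
             * Workaround: always do the page-selective flush, instead of
             * doing it only in caching mode. With this, the problem is gone.
             */
            if (1 /* was: cap_caching_mode(iommu->cap) */)
                    iommu_flush_iotlb_psi(iommu, domain, pfn, pages, 0, 1);
            else
                    iommu_flush_write_buffer(iommu);
    }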

Do you have any suggestions?








[-- Attachment #2: vfiotest.c --]
[-- Type: text/plain, Size: 19318 bytes --]

/*
 * VFIO API definition
 *
 * Copyright (C) 2012 Red Hat, Inc.  All rights reserved.
 *     Author: Alex Williamson <alex.williamson@redhat.com>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License version 2 as
 * published by the Free Software Foundation.
 */
#ifndef _UAPIVFIO_H
#define _UAPIVFIO_H

#include <linux/types.h>
#include <linux/ioctl.h>

#define VFIO_API_VERSION	0


/* Kernel & User level defines for VFIO IOCTLs. */

/* Extensions */

#define VFIO_TYPE1_IOMMU		1

/*
 * The IOCTL interface is designed for extensibility by embedding the
 * structure length (argsz) and flags into structures passed between
 * kernel and userspace.  We therefore use the _IO() macro for these
 * defines to avoid implicitly embedding a size into the ioctl request.
 * As structure fields are added, argsz will increase to match and flag
 * bits will be defined to indicate additional fields with valid data.
 * It's *always* the caller's responsibility to indicate the size of
 * the structure passed by setting argsz appropriately.
 */

#define VFIO_TYPE	(';')
#define VFIO_BASE	100

/* -------- IOCTLs for VFIO file descriptor (/dev/vfio/vfio) -------- */

/**
 * VFIO_GET_API_VERSION - _IO(VFIO_TYPE, VFIO_BASE + 0)
 *
 * Report the version of the VFIO API.  This allows us to bump the entire
 * API version should we later need to add or change features in incompatible
 * ways.
 * Return: VFIO_API_VERSION
 * Availability: Always
 */
#define VFIO_GET_API_VERSION		_IO(VFIO_TYPE, VFIO_BASE + 0)

/**
 * VFIO_CHECK_EXTENSION - _IOW(VFIO_TYPE, VFIO_BASE + 1, __u32)
 *
 * Check whether an extension is supported.
 * Return: 0 if not supported, 1 (or some other positive integer) if supported.
 * Availability: Always
 */
#define VFIO_CHECK_EXTENSION		_IO(VFIO_TYPE, VFIO_BASE + 1)

/**
 * VFIO_SET_IOMMU - _IOW(VFIO_TYPE, VFIO_BASE + 2, __s32)
 *
 * Set the iommu to the given type.  The type must be supported by an
 * iommu driver as verified by calling CHECK_EXTENSION using the same
 * type.  A group must be set to this file descriptor before this
 * ioctl is available.  The IOMMU interfaces enabled by this call are
 * specific to the value set.
 * Return: 0 on success, -errno on failure
 * Availability: When VFIO group attached
 */
#define VFIO_SET_IOMMU			_IO(VFIO_TYPE, VFIO_BASE + 2)

/* -------- IOCTLs for GROUP file descriptors (/dev/vfio/$GROUP) -------- */

/**
 * VFIO_GROUP_GET_STATUS - _IOR(VFIO_TYPE, VFIO_BASE + 3,
 *						struct vfio_group_status)
 *
 * Retrieve information about the group.  Fills in provided
 * struct vfio_group_info.  Caller sets argsz.
 * Return: 0 on success, -errno on failure.
 * Availability: Always
 */
struct vfio_group_status {
	__u32	argsz;
	__u32	flags;
#define VFIO_GROUP_FLAGS_VIABLE		(1 << 0)
#define VFIO_GROUP_FLAGS_CONTAINER_SET	(1 << 1)
};
#define VFIO_GROUP_GET_STATUS		_IO(VFIO_TYPE, VFIO_BASE + 3)

/**
 * VFIO_GROUP_SET_CONTAINER - _IOW(VFIO_TYPE, VFIO_BASE + 4, __s32)
 *
 * Set the container for the VFIO group to the open VFIO file
 * descriptor provided.  Groups may only belong to a single
 * container.  Containers may, at their discretion, support multiple
 * groups.  Only when a container is set are all of the interfaces
 * of the VFIO file descriptor and the VFIO group file descriptor
 * available to the user.
 * Return: 0 on success, -errno on failure.
 * Availability: Always
 */
#define VFIO_GROUP_SET_CONTAINER	_IO(VFIO_TYPE, VFIO_BASE + 4)

/**
 * VFIO_GROUP_UNSET_CONTAINER - _IO(VFIO_TYPE, VFIO_BASE + 5)
 *
 * Remove the group from the attached container.  This is the
 * opposite of the SET_CONTAINER call and returns the group to
 * an initial state.  All device file descriptors must be released
 * prior to calling this interface.  When removing the last group
 * from a container, the IOMMU will be disabled and all state lost,
 * effectively also returning the VFIO file descriptor to an initial
 * state.
 * Return: 0 on success, -errno on failure.
 * Availability: When attached to container
 */
#define VFIO_GROUP_UNSET_CONTAINER	_IO(VFIO_TYPE, VFIO_BASE + 5)

/**
 * VFIO_GROUP_GET_DEVICE_FD - _IOW(VFIO_TYPE, VFIO_BASE + 6, char)
 *
 * Return a new file descriptor for the device object described by
 * the provided string.  The string should match a device listed in
 * the devices subdirectory of the IOMMU group sysfs entry.  The
 * group containing the device must already be added to this context.
 * Return: new file descriptor on success, -errno on failure.
 * Availability: When attached to container
 */
#define VFIO_GROUP_GET_DEVICE_FD	_IO(VFIO_TYPE, VFIO_BASE + 6)

/* --------------- IOCTLs for DEVICE file descriptors --------------- */

/**
 * VFIO_DEVICE_GET_INFO - _IOR(VFIO_TYPE, VFIO_BASE + 7,
 *						struct vfio_device_info)
 *
 * Retrieve information about the device.  Fills in provided
 * struct vfio_device_info.  Caller sets argsz.
 * Return: 0 on success, -errno on failure.
 */
struct vfio_device_info {
	__u32	argsz;
	__u32	flags;
#define VFIO_DEVICE_FLAGS_RESET	(1 << 0)	/* Device supports reset */
#define VFIO_DEVICE_FLAGS_PCI	(1 << 1)	/* vfio-pci device */
	__u32	num_regions;	/* Max region index + 1 */
	__u32	num_irqs;	/* Max IRQ index + 1 */
};
#define VFIO_DEVICE_GET_INFO		_IO(VFIO_TYPE, VFIO_BASE + 7)

/**
 * VFIO_DEVICE_GET_REGION_INFO - _IOWR(VFIO_TYPE, VFIO_BASE + 8,
 *				       struct vfio_region_info)
 *
 * Retrieve information about a device region.  Caller provides
 * struct vfio_region_info with index value set.  Caller sets argsz.
 * Implementation of region mapping is bus driver specific.  This is
 * intended to describe MMIO, I/O port, as well as bus specific
 * regions (ex. PCI config space).  Zero sized regions may be used
 * to describe unimplemented regions (ex. unimplemented PCI BARs).
 * Return: 0 on success, -errno on failure.
 */
struct vfio_region_info {
	__u32	argsz;
	__u32	flags;
#define VFIO_REGION_INFO_FLAG_READ	(1 << 0) /* Region supports read */
#define VFIO_REGION_INFO_FLAG_WRITE	(1 << 1) /* Region supports write */
#define VFIO_REGION_INFO_FLAG_MMAP	(1 << 2) /* Region supports mmap */
	__u32	index;		/* Region index */
	__u32	resv;		/* Reserved for alignment */
	__u64	size;		/* Region size (bytes) */
	__u64	offset;		/* Region offset from start of device fd */
};
#define VFIO_DEVICE_GET_REGION_INFO	_IO(VFIO_TYPE, VFIO_BASE + 8)

/**
 * VFIO_DEVICE_GET_IRQ_INFO - _IOWR(VFIO_TYPE, VFIO_BASE + 9,
 *				    struct vfio_irq_info)
 *
 * Retrieve information about a device IRQ.  Caller provides
 * struct vfio_irq_info with index value set.  Caller sets argsz.
 * Implementation of IRQ mapping is bus driver specific.  Indexes
 * using multiple IRQs are primarily intended to support MSI-like
 * interrupt blocks.  Zero count irq blocks may be used to describe
 * unimplemented interrupt types.
 *
 * The EVENTFD flag indicates the interrupt index supports eventfd based
 * signaling.
 *
 * The MASKABLE flag indicates the index supports MASK and UNMASK
 * actions described below.
 *
 * AUTOMASKED indicates that after signaling, the interrupt line is
 * automatically masked by VFIO and the user needs to unmask the line
 * to receive new interrupts.  This is primarily intended to distinguish
 * level triggered interrupts.
 *
 * The NORESIZE flag indicates that the interrupt lines within the index
 * are setup as a set and new subindexes cannot be enabled without first
 * disabling the entire index.  This is used for interrupts like PCI MSI
 * and MSI-X where the driver may only use a subset of the available
 * indexes, but VFIO needs to enable a specific number of vectors
 * upfront.  In the case of MSI-X, where the user can enable MSI-X and
 * then add and unmask vectors, it's up to userspace to make the decision
 * whether to allocate the maximum supported number of vectors or tear
 * down setup and incrementally increase the vectors as each is enabled.
 */
struct vfio_irq_info {
	__u32	argsz;
	__u32	flags;
#define VFIO_IRQ_INFO_EVENTFD		(1 << 0)
#define VFIO_IRQ_INFO_MASKABLE		(1 << 1)
#define VFIO_IRQ_INFO_AUTOMASKED	(1 << 2)
#define VFIO_IRQ_INFO_NORESIZE		(1 << 3)
	__u32	index;		/* IRQ index */
	__u32	count;		/* Number of IRQs within this index */
};
#define VFIO_DEVICE_GET_IRQ_INFO	_IO(VFIO_TYPE, VFIO_BASE + 9)

/**
 * VFIO_DEVICE_SET_IRQS - _IOW(VFIO_TYPE, VFIO_BASE + 10, struct vfio_irq_set)
 *
 * Set signaling, masking, and unmasking of interrupts.  Caller provides
 * struct vfio_irq_set with all fields set.  'start' and 'count' indicate
 * the range of subindexes being specified.
 *
 * The DATA flags specify the type of data provided.  If DATA_NONE, the
 * operation performs the specified action immediately on the specified
 * interrupt(s).  For example, to unmask AUTOMASKED interrupt [0,0]:
 * flags = (DATA_NONE|ACTION_UNMASK), index = 0, start = 0, count = 1.
 *
 * DATA_BOOL allows sparse support for the same on arrays of interrupts.
 * For example, to mask interrupts [0,1] and [0,3] (but not [0,2]):
 * flags = (DATA_BOOL|ACTION_MASK), index = 0, start = 1, count = 3,
 * data = {1,0,1}
 *
 * DATA_EVENTFD binds the specified ACTION to the provided __s32 eventfd.
 * A value of -1 can be used to either de-assign interrupts if already
 * assigned or skip un-assigned interrupts.  For example, to set an eventfd
 * to be triggered for interrupts [0,0] and [0,2]:
 * flags = (DATA_EVENTFD|ACTION_TRIGGER), index = 0, start = 0, count = 3,
 * data = {fd1, -1, fd2}
 * If index [0,1] is previously set, two count = 1 ioctl calls would be
 * required to set [0,0] and [0,2] without changing [0,1].
 *
 * Once a signaling mechanism is set, DATA_BOOL or DATA_NONE can be used
 * with ACTION_TRIGGER to perform kernel level interrupt loopback testing
 * from userspace (ie. simulate hardware triggering).
 *
 * Setting of an event triggering mechanism to userspace for ACTION_TRIGGER
 * enables the interrupt index for the device.  Individual subindex interrupts
 * can be disabled using the -1 value for DATA_EVENTFD or the index can be
 * disabled as a whole with: flags = (DATA_NONE|ACTION_TRIGGER), count = 0.
 *
 * Note that ACTION_[UN]MASK specify user->kernel signaling (irqfds) while
 * ACTION_TRIGGER specifies kernel->user signaling.
 */
struct vfio_irq_set {
	__u32	argsz;
	__u32	flags;
#define VFIO_IRQ_SET_DATA_NONE		(1 << 0) /* Data not present */
#define VFIO_IRQ_SET_DATA_BOOL		(1 << 1) /* Data is bool (u8) */
#define VFIO_IRQ_SET_DATA_EVENTFD	(1 << 2) /* Data is eventfd (s32) */
#define VFIO_IRQ_SET_ACTION_MASK	(1 << 3) /* Mask interrupt */
#define VFIO_IRQ_SET_ACTION_UNMASK	(1 << 4) /* Unmask interrupt */
#define VFIO_IRQ_SET_ACTION_TRIGGER	(1 << 5) /* Trigger interrupt */
	__u32	index;
	__u32	start;
	__u32	count;
	__u8	data[];
};
#define VFIO_DEVICE_SET_IRQS		_IO(VFIO_TYPE, VFIO_BASE + 10)

#define VFIO_IRQ_SET_DATA_TYPE_MASK	(VFIO_IRQ_SET_DATA_NONE | \
					 VFIO_IRQ_SET_DATA_BOOL | \
					 VFIO_IRQ_SET_DATA_EVENTFD)
#define VFIO_IRQ_SET_ACTION_TYPE_MASK	(VFIO_IRQ_SET_ACTION_MASK | \
					 VFIO_IRQ_SET_ACTION_UNMASK | \
					 VFIO_IRQ_SET_ACTION_TRIGGER)
/**
 * VFIO_DEVICE_RESET - _IO(VFIO_TYPE, VFIO_BASE + 11)
 *
 * Reset a device.
 */
#define VFIO_DEVICE_RESET		_IO(VFIO_TYPE, VFIO_BASE + 11)

/*
 * The VFIO-PCI bus driver makes use of the following fixed region and
 * IRQ index mapping.  Unimplemented regions return a size of zero.
 * Unimplemented IRQ types return a count of zero.
 */

enum {
	VFIO_PCI_BAR0_REGION_INDEX,
	VFIO_PCI_BAR1_REGION_INDEX,
	VFIO_PCI_BAR2_REGION_INDEX,
	VFIO_PCI_BAR3_REGION_INDEX,
	VFIO_PCI_BAR4_REGION_INDEX,
	VFIO_PCI_BAR5_REGION_INDEX,
	VFIO_PCI_ROM_REGION_INDEX,
	VFIO_PCI_CONFIG_REGION_INDEX,
	/*
	 * Expose VGA regions defined for PCI base class 03, subclass 00.
	 * This includes I/O port ranges 0x3b0 to 0x3bb and 0x3c0 to 0x3df
	 * as well as the MMIO range 0xa0000 to 0xbffff.  Each implemented
	 * range is found at its identity-mapped offset from the region
	 * offset, for example 0x3b0 is region_info.offset + 0x3b0.  Areas
	 * between described ranges are unimplemented.
	 */
	VFIO_PCI_VGA_REGION_INDEX,
	VFIO_PCI_NUM_REGIONS
};

enum {
	VFIO_PCI_INTX_IRQ_INDEX,
	VFIO_PCI_MSI_IRQ_INDEX,
	VFIO_PCI_MSIX_IRQ_INDEX,
	VFIO_PCI_NUM_IRQS
};

/* -------- API for Type1 VFIO IOMMU -------- */

/**
 * VFIO_IOMMU_GET_INFO - _IOR(VFIO_TYPE, VFIO_BASE + 12, struct vfio_iommu_info)
 *
 * Retrieve information about the IOMMU object. Fills in provided
 * struct vfio_iommu_info. Caller sets argsz.
 *
 * XXX Should we do these by CHECK_EXTENSION too?
 */
struct vfio_iommu_type1_info {
	__u32	argsz;
	__u32	flags;
#define VFIO_IOMMU_INFO_PGSIZES (1 << 0)	/* supported page sizes info */
	__u64	iova_pgsizes;		/* Bitmap of supported page sizes */
};

#define VFIO_IOMMU_GET_INFO _IO(VFIO_TYPE, VFIO_BASE + 12)

/**
 * VFIO_IOMMU_MAP_DMA - _IOW(VFIO_TYPE, VFIO_BASE + 13, struct vfio_dma_map)
 *
 * Map process virtual addresses to IO virtual addresses using the
 * provided struct vfio_dma_map. Caller sets argsz. READ &/ WRITE required.
 */
struct vfio_iommu_type1_dma_map {
	__u32	argsz;
	__u32	flags;
#define VFIO_DMA_MAP_FLAG_READ (1 << 0)		/* readable from device */
#define VFIO_DMA_MAP_FLAG_WRITE (1 << 1)	/* writable from device */
	__u64	vaddr;				/* Process virtual address */
	__u64	iova;				/* IO virtual address */
	__u64	size;				/* Size of mapping (bytes) */
};

#define VFIO_IOMMU_MAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 13)

/**
 * VFIO_IOMMU_UNMAP_DMA - _IOWR(VFIO_TYPE, VFIO_BASE + 14,
 *							struct vfio_dma_unmap)
 *
 * Unmap IO virtual addresses using the provided struct vfio_dma_unmap.
 * Caller sets argsz.  The actual unmapped size is returned in the size
 * field.  No guarantee is made to the user that arbitrary unmaps of iova
 * or size different from those used in the original mapping call will
 * succeed.
 */
struct vfio_iommu_type1_dma_unmap {
	__u32	argsz;
	__u32	flags;
	__u64	iova;				/* IO virtual address */
	__u64	size;				/* Size of mapping (bytes) */
};

#define VFIO_IOMMU_UNMAP_DMA _IO(VFIO_TYPE, VFIO_BASE + 14)

#endif /* _UAPIVFIO_H */

#include <errno.h>
#include <libgen.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/types.h>

#include <linux/ioctl.h>

#define MAP_SIZE (4UL * 1024 * 1024 * 1024)
#define MAP_MAX 1024
#define DMA_CHUNK (2UL * 1024 * 1024)

#define DMA_CHUNK_640K		(640 * 1024)
#define DMA_CHUNK_640K_		(15744 * 1024)
#define DMA_CHUNK_16M		(16 * 1024 * 1024)
#define DMA_CHUNK_32M		(32 * 1024 * 1024)
#define DMA_CHUNK_64M		(64 * 1024 * 1024)
#define DMA_CHUNK_128M		(128 * 1024 * 1024)
#define DMA_CHUNK_256M		(256 * 1024 * 1024)
#define DMA_CHUNK_512M		(512 * 1024 * 1024)

#define MAP       1
#define UNMAP     2

typedef struct {
	int type;
	unsigned long iova;
	unsigned long size;
} IOMMU_TEST_TYPE;


/* One iteration of the Step 3 loop above; (a)/(b)/(c) match the mail text */
static IOMMU_TEST_TYPE test_val[] = {
        {UNMAP, 0x0, 0xa0000},          /* (a) */
        {UNMAP, 0xc0000, 0xbff40000},
        {MAP, 0x0, 0xc0000000},         /* (b): pause here in GDB, DMA read IOVA=0 */
        {UNMAP, 0x0, 0xc0000000},       /* (c) */
        {MAP, 0x0, 0xa0000},
        {MAP, 0xc0000, 0xbff40000},
};


void usage(char *name)
{
	printf("usage: %s ssss:bb:dd.f\n", name);
	printf("\tssss: PCI segment, ex. 0000\n");
	printf("\tbb:   PCI bus, ex. 01\n");
	printf("\tdd:   PCI device, ex. 06\n");
	printf("\tf:    PCI function, ex. 0\n");
}

static int test_map(int container, unsigned long iova,
		unsigned long vaddr, unsigned long size)
{
	int ret;
	struct vfio_iommu_type1_dma_map dma_map = {
		.argsz = sizeof(dma_map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
	};

	dma_map.iova = iova;
	dma_map.vaddr = vaddr;
	dma_map.size = size;

	ret = ioctl(container, VFIO_IOMMU_MAP_DMA, &dma_map);
	if (ret) {
		printf("Failed to map memory 0x%lx/0x%lx 0x%lx %s\n",
		       iova, vaddr, size, strerror(errno));
		return ret;
	}

	printf("Successfully mapped memory 0x%lx/0x%lx 0x%lx\n",
	       iova, vaddr, size);
	return 0;
}

static int test_unmap(int container, unsigned long iova,
		unsigned long size)
{
	int ret;
	struct vfio_iommu_type1_dma_unmap dma_unmap = {
		.argsz = sizeof(dma_unmap)
	};

	dma_unmap.iova = iova;
	dma_unmap.size = size;

	ret = ioctl(container, VFIO_IOMMU_UNMAP_DMA, &dma_unmap);
	if (ret) {
		printf("Failed to unmap memory 0x%lx/0x%lx (%s)\n",
		       iova, size, strerror(errno));
		return ret;
	}

	printf("Successfully unmapped memory 0x%lx/0x%lx\n",
	       iova, size);
	return 0;
}

int main(int argc, char **argv)
{
	int seg, bus, slot, func;
	int ret, container, group, groupid;
	char path[50], iommu_group_path[50], *group_name;
	struct stat st;
	ssize_t len;
	unsigned long i, j, vaddr, *val;
	struct vfio_group_status group_status = {
		.argsz = sizeof(group_status)
	};

	if (argc != 2) {
		usage(argv[0]);
		return -1;
	}

	ret = sscanf(argv[1], "%04x:%02x:%02x.%d", &seg, &bus, &slot, &func);
	if (ret != 4) {
		usage(argv[0]);
		return -1;
	}

	container = open("/dev/vfio/vfio", O_RDWR);
	if (container < 0) {
		printf("Failed to open /dev/vfio/vfio, %d (%s)\n",
		       container, strerror(errno));
		return container;
	}

	snprintf(path, sizeof(path),
		 "/sys/bus/pci/devices/%04x:%02x:%02x.%01x/",
		 seg, bus, slot, func);

	ret = stat(path, &st);
	if (ret < 0) {
		printf("No such device\n");
		return  ret;
	}

	strncat(path, "iommu_group", sizeof(path) - strlen(path) - 1);

	len = readlink(path, iommu_group_path, sizeof(iommu_group_path));
	if (len <= 0) {
		printf("No iommu_group for device\n");
		return -1;
	}

	iommu_group_path[len] = 0;
	group_name = basename(iommu_group_path);

	if (sscanf(group_name, "%d", &groupid) != 1) {
		printf("Unknown group\n");
		return -1;
	}

	snprintf(path, sizeof(path), "/dev/vfio/%d", groupid);
	group = open(path, O_RDWR);
	if (group < 0) {
		printf("Failed to open %s, %d (%s)\n",
		       path, group, strerror(errno));
		return group;
	}

	ret = ioctl(group, VFIO_GROUP_GET_STATUS, &group_status);
	if (ret) {
		printf("ioctl(VFIO_GROUP_GET_STATUS) failed\n");
		return ret;
	}

	if (!(group_status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
		printf("Group not viable, are all devices attached to vfio?\n");
		return -1;
	}

	ret = ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
	if (ret) {
		printf("Failed to set group container\n");
		return ret;
	}

	ret = ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);
	if (ret) {
		printf("Failed to set IOMMU\n");
		return ret;
	}

	vaddr = (unsigned long)mmap(0, MAP_SIZE, PROT_READ | PROT_WRITE,
				    MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, 0, 0);
	if ((void *)vaddr == MAP_FAILED) {
		/* mmap() returns MAP_FAILED, not NULL, on failure */
		printf("Failed to allocate memory (%s)\n", strerror(errno));
		return -1;
	}
	printf("%lx\n", vaddr);

	for (i = 0; i < (MAP_SIZE / 4096); i++) {
		val = (unsigned long *)(vaddr + 4096 * i);
		for (j = 0; j < (4096 / sizeof(unsigned long)); j++)
			val[j] = i;
	}
	printf("inited data\n");

	test_map(container, 0, vaddr, MAP_SIZE);
	while (1) {
		for (int i = 0; i < sizeof(test_val) / sizeof(IOMMU_TEST_TYPE); i++) {
			if (test_val[i].type == MAP) {
				test_map(container, test_val[i].iova, vaddr + test_val[i].iova, test_val[i].size);
				if (test_val[i].iova == 0 && test_val[i].size == 0xc0000000) {
					usleep(2000000);
				}
			} else if (test_val[i].type == UNMAP) {
				test_unmap(container, test_val[i].iova, test_val[i].size);
			} else {
				printf("unknown type\n");
			}
		}
	}
	
	return 0;
}
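
(To build and run the reproducer: reserve enough 2M hugepages, bind the device
to vfio-pci, and pass its PCI address. The address below is only an example,
and 2048 assumes a 2M default hugepage size:)

    echo 2048 > /proc/sys/vm/nr_hugepages    # 4G of 2M hugepages
    gcc -o vfiotest vfiotest.c
    ./vfiotest 0000:01:06.0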


* Re: A problem of Intel IOMMU hardware ?
  2021-03-17  3:16 ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
@ 2021-03-17  5:16   ` Lu Baolu
  -1 siblings, 0 replies; 50+ messages in thread
From: Lu Baolu @ 2021-03-17  5:16 UTC (permalink / raw)
  To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.),
	dwmw2, joro, will, alex.williamson
  Cc: baolu.lu, iommu, LKML, Gonglei (Arei), chenjiashang

Hi Longpeng,

On 3/17/21 11:16 AM, Longpeng (Mike, Cloud Infrastructure Service 
Product Dept.) wrote:
> Hi guys,
> 
> We have found that the Intel IOMMU cache (i.e. the IOTLB) may misbehave in a
> specific situation, causing DMA to fail or to return wrong data.
> 
> The reproducer (based on Alex's vfio test suite [1]) is attached; it can
> reproduce the problem with high probability (~50%).
> 
> The machine we used is:
> processor	: 47
> vendor_id	: GenuineIntel
> cpu family	: 6
> model		: 85
> model name	: Intel(R) Xeon(R) Gold 6146 CPU @ 3.20GHz
> stepping	: 4
> microcode	: 0x2000069
> 
> And the iommu capability reported is:
> ver 1:0 cap 8d2078c106f0466 ecap f020df
> (caching mode = 0 , page-selective invalidation = 1)
> 
> (The problem is also on 'Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz' and
> 'Intel(R) Xeon(R) Platinum 8378A CPU @ 3.00GHz')
> 
> We run the reproducer on Linux 4.18, and it works as follows:
> 
> Step 1. Allocate 4G of *2M-hugetlb* memory (N.B. there is no problem with 4K-page mappings)

I don't understand what 2M-hugetlb means here exactly. The IOMMU hardware
supports both 2M and 1G super pages. The physical memory being mapped is 4G.
Why couldn't it use 1G super pages?

> Step 2. DMA-map the whole 4G of memory
> Step 3.
>      while (1) {
>          {UNMAP, 0x0, 0xa0000}, ------------------------------------ (a)
>          {UNMAP, 0xc0000, 0xbff40000},

Have these two ranges been mapped before? Does the IOMMU driver
complain when you try to unmap a range that has never been
mapped? The IOMMU driver implicitly assumes that mapping and
unmapping are paired.

>          {MAP,   0x0, 0xc0000000}, --------------------------------- (b)
>                  use GDB to pause here, then issue a DMA read at IOVA=0:

IOVA 0 seems to be a special one. Have you verified with addresses other
than IOVA 0?

>                  sometimes the DMA succeeds (as expected),
>                  but sometimes it fails (the IOMMU reports not-present).
>          {UNMAP, 0x0, 0xc0000000}, --------------------------------- (c)
>          {MAP,   0x0, 0xa0000},
>          {MAP,   0xc0000, 0xbff40000},
>      }
> 
> The DMA read operations should succeed between (b) and (c); at the very least
> they should NOT report not-present!
> 
> After analyzing the problem, we suspect the Intel IOMMU IOTLB: it seems the
> DMA remapping hardware still uses IOTLB entries (or other caches) left over
> from before (a).
> 
> When we do the DMA unmap at (a), the IOTLB is flushed:
>      intel_iommu_unmap
>          domain_unmap
>              iommu_flush_iotlb_psi
> 
> When we do the DMA map at (b), there is no need to flush the IOTLB according
> to this IOMMU's capabilities:
>      intel_iommu_map
>          domain_pfn_mapping
>              domain_mapping
>                  __mapping_notify_one
>                      if (cap_caching_mode(iommu->cap)) // FALSE
>                          iommu_flush_iotlb_psi

That's true. IOTLB flushing is not needed when a PTE changes from
non-present to present, unless caching mode is in use.

> But the problem disappears if we FORCE a flush here, so we suspect the
> IOMMU hardware.
> 
> Do you have any suggestions?

Best regards,
baolu


* Re: A problem of Intel IOMMU hardware ?
  2021-03-17  3:16 ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
@ 2021-03-17  5:46   ` Nadav Amit
  -1 siblings, 0 replies; 50+ messages in thread
From: Nadav Amit @ 2021-03-17  5:46 UTC (permalink / raw)
  To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
  Cc: David Woodhouse, Lu Baolu, Joerg Roedel, will, alex.williamson,
	chenjiashang, iommu, Gonglei (Arei),
	LKML

[-- Attachment #1: Type: text/plain, Size: 865 bytes --]



> On Mar 16, 2021, at 8:16 PM, Longpeng (Mike, Cloud Infrastructure Service Product Dept.) <longpeng2@huawei.com> wrote:
> 
> Hi guys,
> 
> We have found that the Intel IOMMU cache (i.e. the IOTLB) may misbehave in a
> specific situation, causing DMA to fail or to return wrong data.
> 
> The reproducer (based on Alex's vfio test suite [1]) is attached; it can
> reproduce the problem with high probability (~50%).

I saw Lu replied, and he is much more knowledgeable than I am (I was just
intrigued by your email).

However, if I were you I would also try removing some “optimizations” to
look for the root cause (e.g., use domain-selective invalidations instead
of page-selective ones).

The first thing that comes to my mind is the invalidation hint (ih) in
iommu_flush_iotlb_psi(). I would remove it to see whether you get the
failure without it.


[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]


* RE: A problem of Intel IOMMU hardware ?
  2021-03-17  5:46   ` Nadav Amit
@ 2021-03-17  9:35     ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
  -1 siblings, 0 replies; 50+ messages in thread
From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.) @ 2021-03-17  9:35 UTC (permalink / raw)
  To: Nadav Amit
  Cc: David Woodhouse, Lu Baolu, Joerg Roedel, will, alex.williamson,
	chenjiashang, iommu, Gonglei (Arei),
	LKML

Hi Nadav,

> -----Original Message-----
> From: Nadav Amit [mailto:nadav.amit@gmail.com]
> Sent: Wednesday, March 17, 2021 1:46 PM
> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> <longpeng2@huawei.com>
> Cc: David Woodhouse <dwmw2@infradead.org>; Lu Baolu
> <baolu.lu@linux.intel.com>; Joerg Roedel <joro@8bytes.org>; will@kernel.org;
> alex.williamson@redhat.com; chenjiashang <chenjiashang@huawei.com>;
> iommu@lists.linux-foundation.org; Gonglei (Arei) <arei.gonglei@huawei.com>;
> LKML <linux-kernel@vger.kernel.org>
> Subject: Re: A problem of Intel IOMMU hardware ?
> 
> 
> 
> > On Mar 16, 2021, at 8:16 PM, Longpeng (Mike, Cloud Infrastructure Service
> Product Dept.) <longpeng2@huawei.com> wrote:
> >
> > Hi guys,
> >
> > We have found that the Intel IOMMU cache (i.e. the IOTLB) may misbehave
> > in a specific situation, causing DMA to fail or to return wrong data.
> >
> > The reproducer (based on Alex's vfio test suite [1]) is attached;
> > it can reproduce the problem with high probability (~50%).
> 
> I saw Lu replied, and he is much more knowledgeable than I am (I was just
> intrigued by your email).
> 
> However, if I were you I would also try removing some “optimizations” to look
> for the root cause (e.g., use domain-selective invalidations instead of
> page-selective ones).
> 

Good suggestion! We already tried that over the past few days, using coarser
(domain-selective) invalidations as follows:
		iommu->flush.flush_iotlb(iommu, did, 0, 0,
						DMA_TLB_DSI_FLUSH);
But that does not resolve the problem.
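(For completeness, the fully global variant, using the same flush_iotlb hook,
would be the sketch below; DMA_TLB_GLOBAL_FLUSH invalidates all IOTLB entries
regardless of domain-id:)
		iommu->flush.flush_iotlb(iommu, 0, 0, 0,
						DMA_TLB_GLOBAL_FLUSH);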

> The first thing that comes to my mind is the invalidation hint (ih) in
> iommu_flush_iotlb_psi(). I would remove it to see whether you get the failure
> without it.

We also noticed the IH, but IH is always ZERO in our case. As the spec says:
'''
Paging-structure-cache entries caching second-level mappings associated with the specified
domain-id and the second-level-input-address range are invalidated, if the Invalidation Hint
(IH) field is Clear.
'''

The software seems fine, so we have no choice but to suspect the hardware.

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: A problem of Intel IOMMU hardware ?
  2021-03-17  5:16   ` Lu Baolu
@ 2021-03-17  9:40     ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
  -1 siblings, 0 replies; 50+ messages in thread
From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.) @ 2021-03-17  9:40 UTC (permalink / raw)
  To: Lu Baolu, dwmw2, joro, will, alex.williamson
  Cc: iommu, LKML, Gonglei (Arei), chenjiashang

Hi Baolu,

> -----Original Message-----
> From: Lu Baolu [mailto:baolu.lu@linux.intel.com]
> Sent: Wednesday, March 17, 2021 1:17 PM
> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> <longpeng2@huawei.com>; dwmw2@infradead.org; joro@8bytes.org;
> will@kernel.org; alex.williamson@redhat.com
> Cc: baolu.lu@linux.intel.com; iommu@lists.linux-foundation.org; LKML
> <linux-kernel@vger.kernel.org>; Gonglei (Arei) <arei.gonglei@huawei.com>;
> chenjiashang <chenjiashang@huawei.com>
> Subject: Re: A problem of Intel IOMMU hardware ?
> 
> Hi Longpeng,
> 
> On 3/17/21 11:16 AM, Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> wrote:
> > Hi guys,
> >
> > We find the Intel iommu cache (i.e. iotlb) maybe works wrong in a
> > special situation, it would cause DMA fails or get wrong data.
> >
> > The reproducer (based on Alex's vfio testsuite[1]) is in attachment,
> > it can reproduce the problem with high probability (~50%).
> >
> > The machine we used is:
> > processor	: 47
> > vendor_id	: GenuineIntel
> > cpu family	: 6
> > model		: 85
> > model name	: Intel(R) Xeon(R) Gold 6146 CPU @ 3.20GHz
> > stepping	: 4
> > microcode	: 0x2000069
> >
> > And the iommu capability reported is:
> > ver 1:0 cap 8d2078c106f0466 ecap f020df (caching mode = 0 ,
> > page-selective invalidation = 1)
> >
> > (The problem is also on 'Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz'
> > and
> > 'Intel(R) Xeon(R) Platinum 8378A CPU @ 3.00GHz')
> >
> > We run the reproducer on Linux 4.18 and it works as follow:
> >
> > Step 1. alloc 4G *2M-hugetlb* memory (N.B. no problem with 4K-page
> > mapping)
> 
> I don't understand 2M-hugetlb here means exactly. The IOMMU hardware
> supports both 2M and 1G super page. The mapping physical memory is 4G.
> Why couldn't it use 1G super page?
> 

We use hugetlbfs (it supports both 1G and 2M; we chose 2M in our case) to request
the memory in userspace:
vaddr = (unsigned long)mmap(0, MAP_SIZE, PROT_READ | PROT_WRITE,
				    MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, 0, 0);

Yes, the IOMMU supports both 2M and 1G superpages; we just haven't tested the 1G
case yet, because our production systems use 2M hugetlbfs pages.

> > Step 2. DMA Map 4G memory
> > Step 3.
> >      while (1) {
> >          {UNMAP, 0x0, 0xa0000}, ------------------------------------ (a)
> >          {UNMAP, 0xc0000, 0xbff40000},
> 
> Have these two ranges been mapped before? Does the IOMMU driver complains
> when you trying to unmap a range which has never been mapped? The IOMMU
> driver implicitly assumes that mapping and unmapping are paired.
> 

Yes, of course; please see Step 2: we DMA-mapped all of the memory (4G) before the
while loop. The driver never complained during the MAP and UNMAP operations.

> >          {MAP,   0x0, 0xc0000000}, --------------------------------- (b)
> >                  use GDB to pause at here, and then DMA read IOVA=0,
> 
> IOVA 0 seems to be a special one. Have you verified with other addresses than
> IOVA 0?
> 

Yes, we also tested IOVA=0x1000, and it has the problem too.

But one of the differences between (0x0, 0xa0000) and (0x0, 0xc0000000) is that the
former can only use 4K mappings in the DMA page table while the latter uses 2M
mappings. Is it possible that the hardware cache management goes wrong in this case?
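
For context, a rough sketch of the alignment rule involved (a simplification of
the driver's superpage-level checks; the helper below is ours, not the driver's):

	#include <stdbool.h>

	#define SZ_2M	(2UL << 20)

	/* A 2M superpage entry requires the IOVA, the physical address and
	 * the remaining length to all be 2M-aligned: (0x0, 0xa0000) can
	 * therefore only use 4K PTEs, while (0x0, 0xc0000000) can use 2M. */
	static bool can_use_2m_superpage(unsigned long iova, unsigned long phys,
					 unsigned long len)
	{
		return !((iova | phys) & (SZ_2M - 1)) && len >= SZ_2M;
	}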

> >                  sometimes DMA success (as expected),
> >                  but sometimes DMA error (report not-present).
> >          {UNMAP, 0x0, 0xc0000000}, --------------------------------- (c)
> >          {MAP,   0x0, 0xa0000},
> >          {MAP,   0xc0000, 0xbff40000},
> >      }
> >
> > The DMA read operations sholud success between (b) and (c), it should
> > NOT report not-present at least!
> >
> > After analysis the problem, we think maybe it's caused by the Intel iommu iotlb.
> > It seems the DMA Remapping hardware still uses the IOTLB or other caches of
> (a).
> >
> > When do DMA unmap at (a), the iotlb will be flush:
> >      intel_iommu_unmap
> >          domain_unmap
> >              iommu_flush_iotlb_psi
> >
> > When do DMA map at (b), no need to flush the iotlb according to the
> > capability of this iommu:
> >      intel_iommu_map
> >          domain_pfn_mapping
> >              domain_mapping
> >                  __mapping_notify_one
> >                      if (cap_caching_mode(iommu->cap)) // FALSE
> >                          iommu_flush_iotlb_psi
> 
> That's true. The iotlb flushing is not needed in case of PTE been changed from
> non-present to present unless caching mode.
> 

Yes, I also think the driver code is correct. But it is confusing that the problem
disappears if we force a flush here.

> > But the problem will disappear if we FORCE flush here. So we suspect
> > the iommu hardware.
> >
> > Do you have any suggestion ?
> 
> Best regards,
> baolu

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: A problem of Intel IOMMU hardware ?
  2021-03-17  5:16   ` Lu Baolu
@ 2021-03-17 15:18     ` Alex Williamson
  -1 siblings, 0 replies; 50+ messages in thread
From: Alex Williamson @ 2021-03-17 15:18 UTC (permalink / raw)
  To: Lu Baolu
  Cc: Longpeng (Mike, Cloud Infrastructure Service Product Dept.),
	dwmw2, joro, will, iommu, LKML, Gonglei (Arei),
	chenjiashang

On Wed, 17 Mar 2021 13:16:58 +0800
Lu Baolu <baolu.lu@linux.intel.com> wrote:

> Hi Longpeng,
> 
> On 3/17/21 11:16 AM, Longpeng (Mike, Cloud Infrastructure Service 
> Product Dept.) wrote:
> > Hi guys,
> > 
> > We find the Intel iommu cache (i.e. iotlb) maybe works wrong in a special
> > situation, it would cause DMA fails or get wrong data.
> > 
> > The reproducer (based on Alex's vfio testsuite[1]) is in attachment, it can
> > reproduce the problem with high probability (~50%).
> > 
> > The machine we used is:
> > processor	: 47
> > vendor_id	: GenuineIntel
> > cpu family	: 6
> > model		: 85
> > model name	: Intel(R) Xeon(R) Gold 6146 CPU @ 3.20GHz
> > stepping	: 4
> > microcode	: 0x2000069
> > 
> > And the iommu capability reported is:
> > ver 1:0 cap 8d2078c106f0466 ecap f020df
> > (caching mode = 0 , page-selective invalidation = 1)
> > 
> > (The problem is also on 'Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz' and
> > 'Intel(R) Xeon(R) Platinum 8378A CPU @ 3.00GHz')
> > 
> > We run the reproducer on Linux 4.18 and it works as follow:
> > 
> > Step 1. alloc 4G *2M-hugetlb* memory (N.B. no problem with 4K-page mapping)  
> 
> I don't understand 2M-hugetlb here means exactly. The IOMMU hardware
> supports both 2M and 1G super page. The mapping physical memory is 4G.
> Why couldn't it use 1G super page?
> 
> > Step 2. DMA Map 4G memory
> > Step 3.
> >      while (1) {
> >          {UNMAP, 0x0, 0xa0000}, ------------------------------------ (a)
> >          {UNMAP, 0xc0000, 0xbff40000},  
> 
> Have these two ranges been mapped before? Does the IOMMU driver
> complains when you trying to unmap a range which has never been
> mapped? The IOMMU driver implicitly assumes that mapping and
> unmapping are paired.
>
> >          {MAP,   0x0, 0xc0000000}, --------------------------------- (b)
> >                  use GDB to pause at here, and then DMA read IOVA=0,  
> 
> IOVA 0 seems to be a special one. Have you verified with other addresses
> than IOVA 0?

It is???  That would be a problem.

> >                  sometimes DMA success (as expected),
> >                  but sometimes DMA error (report not-present).
> >          {UNMAP, 0x0, 0xc0000000}, --------------------------------- (c)
> >          {MAP,   0x0, 0xa0000},
> >          {MAP,   0xc0000, 0xbff40000},
> >      }

The interesting thing about this test sequence seems to be how it will
implicitly switch between super pages and regular pages.  Also note
that the test is using the original vfio type1 API rather than the v2
API that's more commonly used today.  This older API allows unmaps to
split mappings, but we don't really know how much the IOMMU is
unmapping without reading the unmap.size field returned by the ioctl.
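
For reference, a minimal sketch of reading that field back (assuming the
standard type1 UAPI; error handling trimmed):

	#include <sys/ioctl.h>
	#include <linux/vfio.h>

	/* Returns how much was actually unmapped, which with the v1 API can
	 * be more than requested when the unmap splits a superpage mapping. */
	static unsigned long long vfio_unmap(int container,
					     unsigned long long iova,
					     unsigned long long size)
	{
		struct vfio_iommu_type1_dma_unmap unmap = {
			.argsz = sizeof(unmap),
			.iova  = iova,
			.size  = size,
		};

		if (ioctl(container, VFIO_IOMMU_UNMAP_DMA, &unmap) < 0)
			return 0;

		return unmap.size;
	}
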
What I expect to happen is that the IOMMU will make use of superpages
when mapping the full range.  When we unmap {0-bffff}, that's likely
going to be covered by a 2M (or more) superpage, therefore the unmap
will actually unmap {0-1fffff}.  The subsequent unmap starting at
0xc0000 might already have {a0000-1fffff} unmapped.  However, when we
then map {0 - bffff} the IOMMU will (should) switch back to 4K pages.
The mapping at 0xc0000 should use 4K pages up through 0x1fffff, then
might switch to 2M or 1G pages depending on physical memory layout.  So
the {0-2MB} IOVA range could be switching back and forth between a
superpage mapping and 4K mapping, and I can certainly imagine that
could lead to page table, if not cache management bugs.  Thanks,

Alex


> > 
> > The DMA read operations sholud success between (b) and (c), it should NOT report
> > not-present at least!
> > 
> > After analysis the problem, we think maybe it's caused by the Intel iommu iotlb.
> > It seems the DMA Remapping hardware still uses the IOTLB or other caches of (a).
> > 
> > When do DMA unmap at (a), the iotlb will be flush:
> >      intel_iommu_unmap
> >          domain_unmap
> >              iommu_flush_iotlb_psi
> > 
> > When do DMA map at (b), no need to flush the iotlb according to the capability
> > of this iommu:
> >      intel_iommu_map
> >          domain_pfn_mapping
> >              domain_mapping
> >                  __mapping_notify_one
> >                      if (cap_caching_mode(iommu->cap)) // FALSE
> >                          iommu_flush_iotlb_psi  
> 
> That's true. The iotlb flushing is not needed in case of PTE been
> changed from non-present to present unless caching mode.
> 
> > But the problem will disappear if we FORCE flush here. So we suspect the iommu
> > hardware.
> > 
> > Do you have any suggestion ?  
> 
> Best regards,
> baolu
> 


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: A problem of Intel IOMMU hardware ?
  2021-03-17  9:35     ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
@ 2021-03-17 18:12       ` Nadav Amit
  -1 siblings, 0 replies; 50+ messages in thread
From: Nadav Amit @ 2021-03-17 18:12 UTC (permalink / raw)
  To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
  Cc: David Woodhouse, Lu Baolu, Joerg Roedel, will, alex.williamson,
	chenjiashang, iommu, Gonglei (Arei),
	LKML

[-- Attachment #1: Type: text/plain, Size: 2000 bytes --]



> On Mar 17, 2021, at 2:35 AM, Longpeng (Mike, Cloud Infrastructure Service Product Dept.) <longpeng2@huawei.com> wrote:
> 
> Hi Nadav,
> 
>> -----Original Message-----
>> From: Nadav Amit [mailto:nadav.amit@gmail.com]
>>>  reproduce the problem with high probability (~50%).
>> 
>> I saw Lu replied, and he is much more knowledgable than I am (I was just intrigued
>> by your email).
>> 
>> However, if I were you I would try also to remove some “optimizations” to look for
>> the root-cause (e.g., use domain specific invalidations instead of page-specific).
>> 
> 
> Good suggestion! But we did it these days, we tried to use global invalidations as follow:
> 		iommu->flush.flush_iotlb(iommu, did, 0, 0,
> 						DMA_TLB_DSI_FLUSH);
> But can not resolve the problem.
> 
>> The first thing that comes to my mind is the invalidation hint (ih) in
>> iommu_flush_iotlb_psi(). I would remove it to see whether you get the failure
>> without it.
> 
> We also notice the IH, but the IH is always ZERO in our case, as the spec says:
> '''
> Paging-structure-cache entries caching second-level mappings associated with the specified
> domain-id and the second-level-input-address range are invalidated, if the Invalidation Hint
> (IH) field is Clear.
> '''
> 
> It seems the software is everything fine, so we've no choice but to suspect the hardware.

Ok, I am pretty much out of ideas. I have two more suggestions, but
they are much less likely to help. Yet, they can further help to rule
out software bugs:

1. dma_clear_pte() seems to be wrong IMHO. It should have used WRITE_ONCE()
to prevent split-write, which might potentially cause “invalid” (partially
cleared) PTE to be stored in the TLB. Having said that, the subsequent
IOTLB flush should have prevented the problem.

2. Consider ensuring that the problem is not somehow related to queued
invalidations. Try to use __iommu_flush_iotlb() instead of
qi_flush_iotlb().
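
For (1), a minimal sketch of what I have in mind (shapes assumed from the
driver's struct dma_pte { u64 val; }; the read side would change similarly):

	/* Sketch: make PTE stores and loads single, non-torn accesses so a
	 * concurrent hardware walk never observes a half-written value. */
	static inline void dma_clear_pte(struct dma_pte *pte)
	{
		WRITE_ONCE(pte->val, 0ULL);
	}

	static inline u64 dma_pte_val(struct dma_pte *pte)
	{
		return READ_ONCE(pte->val);
	}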

Regards,
Nadav

[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: A problem of Intel IOMMU hardware ?
  2021-03-17 15:18     ` Alex Williamson
@ 2021-03-18  2:58       ` Lu Baolu
  -1 siblings, 0 replies; 50+ messages in thread
From: Lu Baolu @ 2021-03-18  2:58 UTC (permalink / raw)
  To: Alex Williamson
  Cc: baolu.lu, Longpeng (Mike,
	Cloud Infrastructure Service Product Dept.),
	dwmw2, joro, will, iommu, LKML, Gonglei (Arei),
	chenjiashang

Hi Alex,

On 3/17/21 11:18 PM, Alex Williamson wrote:
>>>           {MAP,   0x0, 0xc0000000}, --------------------------------- (b)
>>>                   use GDB to pause at here, and then DMA read IOVA=0,
>> IOVA 0 seems to be a special one. Have you verified with other addresses
>> than IOVA 0?
> It is???  That would be a problem.
> 

No problem from the hardware point of view as far as I can see. I just
thought software might handle it specially.

Best regards,
baolu

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: A problem of Intel IOMMU hardware ?
  2021-03-17 18:12       ` Nadav Amit
@ 2021-03-18  3:03         ` Lu Baolu
  -1 siblings, 0 replies; 50+ messages in thread
From: Lu Baolu @ 2021-03-18  3:03 UTC (permalink / raw)
  To: Nadav Amit, Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
  Cc: baolu.lu, David Woodhouse, Joerg Roedel, will, alex.williamson,
	chenjiashang, iommu, Gonglei (Arei),
	LKML

Hi Nadav,

On 3/18/21 2:12 AM, Nadav Amit wrote:
> 
> 
>> On Mar 17, 2021, at 2:35 AM, Longpeng (Mike, Cloud Infrastructure Service Product Dept.) <longpeng2@huawei.com> wrote:
>>
>> Hi Nadav,
>>
>>> -----Original Message-----
>>> From: Nadav Amit [mailto:nadav.amit@gmail.com]
>>>>   reproduce the problem with high probability (~50%).
>>>
>>> I saw Lu replied, and he is much more knowledgable than I am (I was just intrigued
>>> by your email).
>>>
>>> However, if I were you I would try also to remove some “optimizations” to look for
>>> the root-cause (e.g., use domain specific invalidations instead of page-specific).
>>>
>>
>> Good suggestion! But we did it these days, we tried to use global invalidations as follow:
>> 		iommu->flush.flush_iotlb(iommu, did, 0, 0,
>> 						DMA_TLB_DSI_FLUSH);
>> But can not resolve the problem.
>>
>>> The first thing that comes to my mind is the invalidation hint (ih) in
>>> iommu_flush_iotlb_psi(). I would remove it to see whether you get the failure
>>> without it.
>>
>> We also notice the IH, but the IH is always ZERO in our case, as the spec says:
>> '''
>> Paging-structure-cache entries caching second-level mappings associated with the specified
>> domain-id and the second-level-input-address range are invalidated, if the Invalidation Hint
>> (IH) field is Clear.
>> '''
>>
>> It seems the software is everything fine, so we've no choice but to suspect the hardware.
> 
> Ok, I am pretty much out of ideas. I have two more suggestions, but
> they are much less likely to help. Yet, they can further help to rule
> out software bugs:
> 
> 1. dma_clear_pte() seems to be wrong IMHO. It should have used WRITE_ONCE()
> to prevent split-write, which might potentially cause “invalid” (partially
> cleared) PTE to be stored in the TLB. Having said that, the subsequent
> IOTLB flush should have prevented the problem.

Agreed. The pte read/write should use READ/WRITE_ONCE() instead.

> 
> 2. Consider ensuring that the problem is not somehow related to queued
> invalidations. Try to use __iommu_flush_iotlb() instead of
> qi_flush_iotlb().
> 
> Regards,
> Nadav
> 

Best regards,
baolu

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: A problem of Intel IOMMU hardware ?
  2021-03-18  2:58       ` Lu Baolu
@ 2021-03-18  4:46         ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
  -1 siblings, 0 replies; 50+ messages in thread
From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.) @ 2021-03-18  4:46 UTC (permalink / raw)
  To: Lu Baolu, Alex Williamson, Nadav Amit
  Cc: dwmw2, joro, will, iommu, LKML, Gonglei (Arei),
	chenjiashang, Subo (Subo,
	Cloud Infrastructure Service Product Dept.)

Hi guys,

Let me provide some more information; please see below.

> -----Original Message-----
> From: Lu Baolu [mailto:baolu.lu@linux.intel.com]
> Sent: Thursday, March 18, 2021 10:59 AM
> To: Alex Williamson <alex.williamson@redhat.com>
> Cc: baolu.lu@linux.intel.com; Longpeng (Mike, Cloud Infrastructure Service Product
> Dept.) <longpeng2@huawei.com>; dwmw2@infradead.org; joro@8bytes.org;
> will@kernel.org; iommu@lists.linux-foundation.org; LKML
> <linux-kernel@vger.kernel.org>; Gonglei (Arei) <arei.gonglei@huawei.com>;
> chenjiashang <chenjiashang@huawei.com>
> Subject: Re: A problem of Intel IOMMU hardware ?
> 
> Hi Alex,
> 
> On 3/17/21 11:18 PM, Alex Williamson wrote:
> >>>           {MAP,   0x0, 0xc0000000}, --------------------------------- (b)
> >>>                   use GDB to pause at here, and then DMA read
> >>> IOVA=0,
> >> IOVA 0 seems to be a special one. Have you verified with other
> >> addresses than IOVA 0?
> > It is???  That would be a problem.
> >
> 
> No problem from hardware point of view as far as I can see. Just thought about
> software might handle it specially.
> 

We simplified the reproducer; the following map/unmap sequence can also
reproduce the problem.

1. use 2M hugetlbfs to mmap 4G memory

2. run the while loop:
while (1) {
    DMA MAP   (0, 0xa0000)    - - - - - - - - - - - (a)
    DMA UNMAP (0, 0xa0000)    - - - - - - - - - - - (b)
        Operation-1: dump the DMAR table
    DMA MAP   (0, 0xc0000000) - - - - - - - - - - - (c)
        Operation-2:
            use GDB to pause here, then DMA read IOVA=0;
            sometimes the DMA succeeds (as expected),
            but sometimes it fails (reports not-present).
        Operation-3: dump the DMAR table
        Operation-4 (when the DMA fails): see below
    DMA UNMAP (0, 0xc0000000) - - - - - - - - - - - (d)
}
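
In vfio terms, one iteration of the loop is just the following (a sketch
against the standard type1 UAPI; the GDB pause and the table dumps are left
out, and the full reproducer is in the attachment of the first mail):

	#include <sys/ioctl.h>
	#include <linux/vfio.h>

	/* 'container' is an open, configured VFIO container fd and 'vaddr'
	 * the 2M-hugetlb backing allocated in step 1. */
	static void one_iteration(int container, unsigned long vaddr)
	{
		struct vfio_iommu_type1_dma_map map = {
			.argsz = sizeof(map),
			.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
			.vaddr = vaddr,
			.iova  = 0,
			.size  = 0xa0000,
		};
		struct vfio_iommu_type1_dma_unmap unmap = {
			.argsz = sizeof(unmap),
			.iova  = 0,
			.size  = 0xa0000,
		};

		ioctl(container, VFIO_IOMMU_MAP_DMA, &map);	/* (a) */
		ioctl(container, VFIO_IOMMU_UNMAP_DMA, &unmap);	/* (b) */

		map.size = unmap.size = 0xc0000000;
		ioctl(container, VFIO_IOMMU_MAP_DMA, &map);	/* (c) */
		ioctl(container, VFIO_IOMMU_UNMAP_DMA, &unmap);	/* (d) */
	}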

The DMAR table at Operation-1 is (showing only the entries for IOVA 0):

PML4: 0x      1a34fbb003
  PDPE: 0x      1a34fbb003
   PDE: 0x      1a34fbf003
    PTE: 0x               0

And the table of Operation-3 is:

PML4: 0x      1a34fbb003
  PDPE: 0x      1a34fbb003
   PDE: 0x       15ec00883 < - - 2M superpage

So we can see that IOVA 0 is mapped, but the DMA read still fails:

dmar_fault: 131757 callbacks suppressed
DRHD: handling fault status reg 402
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set
DRHD: handling fault status reg 600
DRHD: handling fault status reg 602
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set
[DMA Read] Request device [86:05.6] fault addr 0 [fault reason 06] PTE Read access is not set

NOTE, now the magical thing happens... (*Operation-4*) we write the PTE
from Operation-1 from 0 to 0x3 (i.e. Read/Write permitted), and then
we trigger the DMA read again: it succeeds and returns the data of HPA 0!!

Why would modifying the old page table make any difference? As we
discussed previously, the cache flush part of the driver is correct:
it calls flush_iotlb after (b), and no flush is needed after (c).
But the experiment shows that the old page table, or some old cache,
is still in effect.

Any ideas?

> Best regards,
> baolu

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: A problem of Intel IOMMU hardware ?
  2021-03-18  4:46         ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
@ 2021-03-18  7:48           ` Nadav Amit
  -1 siblings, 0 replies; 50+ messages in thread
From: Nadav Amit @ 2021-03-18  7:48 UTC (permalink / raw)
  To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
  Cc: Lu Baolu, Alex Williamson, dwmw2, joro, will, iommu, LKML,
	Gonglei (Arei),
	chenjiashang, Subo (Subo,
	Cloud Infrastructure Service Product Dept.)

[-- Attachment #1: Type: text/plain, Size: 785 bytes --]


> On Mar 17, 2021, at 9:46 PM, Longpeng (Mike, Cloud Infrastructure Service Product Dept.) <longpeng2@huawei.com> wrote:
> 

[Snip]

> 
> NOTE, the magical thing happen...(*Operation-4*) we write the PTE
> of Operation-1 from 0 to 0x3 which means can Read/Write, and then
> we trigger DMA read again, it success and return the data of HPA 0 !!
> 
> Why we modify the older page table would make sense ? As we
> have discussed previously, the cache flush part of the driver is correct,
> it call flush_iotlb after (b) and no need to flush after (c). But the result
> of the experiment shows the older page table or older caches is effective
> actually.
> 
> Any ideas ?

Interesting. Sounds as if there is some page-walk cache that was not
invalidated properly.


[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: A problem of Intel IOMMU hardware ?
  2021-03-17 18:12       ` Nadav Amit
@ 2021-03-18  8:20         ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
  -1 siblings, 0 replies; 50+ messages in thread
From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.) @ 2021-03-18  8:20 UTC (permalink / raw)
  To: Nadav Amit
  Cc: David Woodhouse, Lu Baolu, Joerg Roedel, will, alex.williamson,
	chenjiashang, iommu, Gonglei (Arei),
	LKML

Hi Nadav,

> -----Original Message-----
> From: Nadav Amit [mailto:nadav.amit@gmail.com]
> Sent: Thursday, March 18, 2021 2:13 AM
> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> <longpeng2@huawei.com>
> Cc: David Woodhouse <dwmw2@infradead.org>; Lu Baolu
> <baolu.lu@linux.intel.com>; Joerg Roedel <joro@8bytes.org>; will@kernel.org;
> alex.williamson@redhat.com; chenjiashang <chenjiashang@huawei.com>;
> iommu@lists.linux-foundation.org; Gonglei (Arei) <arei.gonglei@huawei.com>;
> LKML <linux-kernel@vger.kernel.org>
> Subject: Re: A problem of Intel IOMMU hardware ?
> 
> 
> 
> > On Mar 17, 2021, at 2:35 AM, Longpeng (Mike, Cloud Infrastructure Service
> Product Dept.) <longpeng2@huawei.com> wrote:
> >
> > Hi Nadav,
> >
> >> -----Original Message-----
> >> From: Nadav Amit [mailto:nadav.amit@gmail.com]
> >>>  reproduce the problem with high probability (~50%).
> >>
> >> I saw Lu replied, and he is much more knowledgable than I am (I was
> >> just intrigued by your email).
> >>
> >> However, if I were you I would try also to remove some
> >> “optimizations” to look for the root-cause (e.g., use domain specific
> invalidations instead of page-specific).
> >>
> >
> > Good suggestion! But we did it these days, we tried to use global invalidations as
> follow:
> > 		iommu->flush.flush_iotlb(iommu, did, 0, 0,
> > 						DMA_TLB_DSI_FLUSH);
> > But can not resolve the problem.
> >
> >> The first thing that comes to my mind is the invalidation hint (ih)
> >> in iommu_flush_iotlb_psi(). I would remove it to see whether you get
> >> the failure without it.
> >
> > We also notice the IH, but the IH is always ZERO in our case, as the spec says:
> > '''
> > Paging-structure-cache entries caching second-level mappings
> > associated with the specified domain-id and the
> > second-level-input-address range are invalidated, if the Invalidation
> > Hint
> > (IH) field is Clear.
> > '''
> >
> > It seems the software is everything fine, so we've no choice but to suspect the
> hardware.
> 
> Ok, I am pretty much out of ideas. I have two more suggestions, but they are much
> less likely to help. Yet, they can further help to rule out software bugs:
> 
> 1. dma_clear_pte() seems to be wrong IMHO. It should have used WRITE_ONCE()
> to prevent split-write, which might potentially cause “invalid” (partially
> cleared) PTE to be stored in the TLB. Having said that, the subsequent IOTLB flush
> should have prevented the problem.
> 

Yes, using WRITE_ONCE is much safer. However, I just tested the following code
and it did not resolve my problem.

static inline void dma_clear_pte(struct dma_pte *pte)
{
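        /* single, non-torn 64-bit store: hardware never sees a half-cleared PTE */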
        WRITE_ONCE(pte->val, 0ULL);
}

> 2. Consider ensuring that the problem is not somehow related to queued
> invalidations. Try to use __iommu_flush_iotlb() instead of qi_flush_iotlb().
> 

I tried to force the use of __iommu_flush_iotlb(), but something went wrong and
the system crashed, so I would rather deprioritize that experiment.

> Regards,
> Nadav

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: A problem of Intel IOMMU hardware ?
  2021-03-18  8:20         ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
@ 2021-03-18  8:27           ` Tian, Kevin
  -1 siblings, 0 replies; 50+ messages in thread
From: Tian, Kevin @ 2021-03-18  8:27 UTC (permalink / raw)
  To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.), Nadav Amit
  Cc: chenjiashang, David Woodhouse, iommu, LKML, alex.williamson,
	Gonglei (Arei),
	will

> From: iommu <iommu-bounces@lists.linux-foundation.org> On Behalf Of
> Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> 
> > 2. Consider ensuring that the problem is not somehow related to queued
> > invalidations. Try to use __iommu_flush_iotlb() instead of qi_flush_iotlb().
> >
> 
> I tried to force to use __iommu_flush_iotlb(), but maybe something wrong,
> the system crashed, so I prefer to lower the priority of this operation.
> 

The VT-d spec clearly says that register-based invalidation can be used
only when queued invalidation is not enabled. The Intel-IOMMU driver doesn't
provide an option to disable queued invalidation when the hardware is
capable of it, though. If you really want to try, tweak the code in
intel_iommu_init_qi.
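
Roughly like this (a sketch only; the 'force_reg_inval' knob is hypothetical,
while the function and flush-hook names follow the intel-iommu driver):

	/* Hypothetical debug switch: skip queued-invalidation setup so the
	 * driver falls back to register-based flushing. Per the spec, the
	 * register-based path is only legal while QI stays disabled. */
	static bool force_reg_inval = true;

	static void intel_iommu_init_qi(struct intel_iommu *iommu)
	{
		if (force_reg_inval || dmar_enable_qi(iommu)) {
			/* QI not enabled: register-based invalidation */
			iommu->flush.flush_context = __iommu_flush_context;
			iommu->flush.flush_iotlb = __iommu_flush_iotlb;
		} else {
			iommu->flush.flush_context = qi_flush_context;
			iommu->flush.flush_iotlb = qi_flush_iotlb;
		}
	}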

Thanks
Kevin

^ permalink raw reply	[flat|nested] 50+ messages in thread

* RE: A problem of Intel IOMMU hardware ?
  2021-03-18  8:27           ` Tian, Kevin
@ 2021-03-18  8:38             ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
  -1 siblings, 0 replies; 50+ messages in thread
From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.) @ 2021-03-18  8:38 UTC (permalink / raw)
  To: Tian, Kevin, Nadav Amit
  Cc: chenjiashang, David Woodhouse, iommu, LKML, alex.williamson,
	Gonglei (Arei),
	will



> -----Original Message-----
> From: Tian, Kevin [mailto:kevin.tian@intel.com]
> Sent: Thursday, March 18, 2021 4:27 PM
> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> <longpeng2@huawei.com>; Nadav Amit <nadav.amit@gmail.com>
> Cc: chenjiashang <chenjiashang@huawei.com>; David Woodhouse
> <dwmw2@infradead.org>; iommu@lists.linux-foundation.org; LKML
> <linux-kernel@vger.kernel.org>; alex.williamson@redhat.com; Gonglei (Arei)
> <arei.gonglei@huawei.com>; will@kernel.org
> Subject: RE: A problem of Intel IOMMU hardware ?
> 
> > From: iommu <iommu-bounces@lists.linux-foundation.org> On Behalf Of
> > Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> >
> > > 2. Consider ensuring that the problem is not somehow related to
> > > queued invalidations. Try to use __iommu_flush_iotlb() instead of
> qi_flush_iotlb().
> > >
> >
> > I tried to force to use __iommu_flush_iotlb(), but maybe something
> > wrong, the system crashed, so I prefer to lower the priority of this operation.
> >
> 
> The VT-d spec clearly says that register-based invalidation can be used only when
> queued-invalidations are not enabled. Intel-IOMMU driver doesn't provide an
> option to disable queued-invalidation though, when the hardware is capable. If you
> really want to try, tweak the code in intel_iommu_init_qi.
> 

Hi Kevin,

Thanks for pointing this out. Do you have any ideas about this problem? I tried
to describe the problem more clearly in my reply to Alex; I hope you can take a
look if you're interested.

> Thanks
> Kevin

* RE: A problem of Intel IOMMU hardware ?
  2021-03-18  8:38             ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
@ 2021-03-18  8:43               ` Tian, Kevin
  -1 siblings, 0 replies; 50+ messages in thread
From: Tian, Kevin @ 2021-03-18  8:43 UTC (permalink / raw)
  To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.), Nadav Amit
  Cc: chenjiashang, David Woodhouse, iommu, LKML, alex.williamson,
	Gonglei (Arei),
	will

> From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> <longpeng2@huawei.com>
> 
> 
> > -----Original Message-----
> > From: Tian, Kevin [mailto:kevin.tian@intel.com]
> > Sent: Thursday, March 18, 2021 4:27 PM
> > To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > <longpeng2@huawei.com>; Nadav Amit <nadav.amit@gmail.com>
> > Cc: chenjiashang <chenjiashang@huawei.com>; David Woodhouse
> > <dwmw2@infradead.org>; iommu@lists.linux-foundation.org; LKML
> > <linux-kernel@vger.kernel.org>; alex.williamson@redhat.com; Gonglei
> (Arei)
> > <arei.gonglei@huawei.com>; will@kernel.org
> > Subject: RE: A problem of Intel IOMMU hardware ?
> >
> > > From: iommu <iommu-bounces@lists.linux-foundation.org> On Behalf Of
> > > Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > >
> > > > 2. Consider ensuring that the problem is not somehow related to
> > > > queued invalidations. Try to use __iommu_flush_iotlb() instead of
> > qi_flush_iotlb().
> > > >
> > >
> > > I tried to force to use __iommu_flush_iotlb(), but maybe something
> > > wrong, the system crashed, so I prefer to lower the priority of this
> operation.
> > >
> >
> > The VT-d spec clearly says that register-based invalidation can be used only
> when
> > queued-invalidations are not enabled. Intel-IOMMU driver doesn't provide
> an
> > option to disable queued-invalidation though, when the hardware is
> capable. If you
> > really want to try, tweak the code in intel_iommu_init_qi.
> >
> 
> Hi Kevin,
> 
> Thanks to point out this. Do you have any ideas about this problem ? I tried
> to descript the problem much clear in my reply to Alex, hope you could have
> a look if you're interested.
> 

I agree with Nadav. This looks like some stale paging-structure cache entry
(e.g. a PMD) is not being invalidated properly. It would be better if Baolu
could reproduce this problem in his local environment and then do more
debugging to identify whether it's a software or a hardware defect.
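
Given the spec text quoted earlier in the thread (paging-structure-cache
entries are invalidated only when IH is clear), one hedged way to test the
stale-entry theory is to issue an explicit page-selective invalidation, with
IH = 0, right after the page-size-changing map. A sketch against the
4.18-era intel-iommu.c signatures, with start_pfn/nr_pages as hypothetical
placeholders for the remapped range:

        /* Diagnostic only: explicitly invalidate the remapped range.
         * ih = 0 means paging-structure-cache entries (e.g. a stale PMD)
         * are purged along with the IOTLB entries. */
        iommu_flush_iotlb_psi(iommu, domain, start_pfn, nr_pages,
                              0 /* ih */, 1 /* map */);

If the DMA error disappears with this in place, a stale paging-structure
entry is the likely culprit.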

btw what is the device under test? Does it support ATS?

Thanks
Kevin

* RE: A problem of Intel IOMMU hardware ?
  2021-03-18  8:43               ` Tian, Kevin
@ 2021-03-18  8:54                 ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
  -1 siblings, 0 replies; 50+ messages in thread
From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.) @ 2021-03-18  8:54 UTC (permalink / raw)
  To: Tian, Kevin, Nadav Amit
  Cc: chenjiashang, David Woodhouse, iommu, LKML, alex.williamson,
	Gonglei (Arei),
	will



> -----Original Message-----
> From: Tian, Kevin [mailto:kevin.tian@intel.com]
> Sent: Thursday, March 18, 2021 4:43 PM
> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> <longpeng2@huawei.com>; Nadav Amit <nadav.amit@gmail.com>
> Cc: chenjiashang <chenjiashang@huawei.com>; David Woodhouse
> <dwmw2@infradead.org>; iommu@lists.linux-foundation.org; LKML
> <linux-kernel@vger.kernel.org>; alex.williamson@redhat.com; Gonglei (Arei)
> <arei.gonglei@huawei.com>; will@kernel.org
> Subject: RE: A problem of Intel IOMMU hardware ?
> 
> > From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > <longpeng2@huawei.com>
> >
> >
> > > -----Original Message-----
> > > From: Tian, Kevin [mailto:kevin.tian@intel.com]
> > > Sent: Thursday, March 18, 2021 4:27 PM
> > > To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > > <longpeng2@huawei.com>; Nadav Amit <nadav.amit@gmail.com>
> > > Cc: chenjiashang <chenjiashang@huawei.com>; David Woodhouse
> > > <dwmw2@infradead.org>; iommu@lists.linux-foundation.org; LKML
> > > <linux-kernel@vger.kernel.org>; alex.williamson@redhat.com; Gonglei
> > (Arei)
> > > <arei.gonglei@huawei.com>; will@kernel.org
> > > Subject: RE: A problem of Intel IOMMU hardware ?
> > >
> > > > From: iommu <iommu-bounces@lists.linux-foundation.org> On Behalf
> > > > Of Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > > >
> > > > > 2. Consider ensuring that the problem is not somehow related to
> > > > > queued invalidations. Try to use __iommu_flush_iotlb() instead
> > > > > of
> > > qi_flush_iotlb().
> > > > >
> > > >
> > > > I tried to force to use __iommu_flush_iotlb(), but maybe something
> > > > wrong, the system crashed, so I prefer to lower the priority of
> > > > this
> > operation.
> > > >
> > >
> > > The VT-d spec clearly says that register-based invalidation can be
> > > used only
> > when
> > > queued-invalidations are not enabled. Intel-IOMMU driver doesn't
> > > provide
> > an
> > > option to disable queued-invalidation though, when the hardware is
> > capable. If you
> > > really want to try, tweak the code in intel_iommu_init_qi.
> > >
> >
> > Hi Kevin,
> >
> > Thanks to point out this. Do you have any ideas about this problem ? I
> > tried to descript the problem much clear in my reply to Alex, hope you
> > could have a look if you're interested.
> >
> 
> I agree with Nadav. Looks this implies some stale paging structure cache entry (e.g.
> PMD) is not invalidated properly. It's better if Baolu can reproduce this problem in
> his local environment and then do more debug to identify whether it's a software or
> hardware defect.
> 
> btw what is the device under test? Does it support ATS?
> 

The device is our offload card; it does not support the ATS capability.

> Thanks
> Kevin

* RE: A problem of Intel IOMMU hardware ?
  2021-03-18  8:38             ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
@ 2021-03-18  8:56               ` Tian, Kevin
  -1 siblings, 0 replies; 50+ messages in thread
From: Tian, Kevin @ 2021-03-18  8:56 UTC (permalink / raw)
  To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.), Nadav Amit
  Cc: chenjiashang, David Woodhouse, iommu, LKML, alex.williamson,
	Gonglei (Arei),
	will

> From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> <longpeng2@huawei.com>
> 
> > -----Original Message-----
> > From: Tian, Kevin [mailto:kevin.tian@intel.com]
> > Sent: Thursday, March 18, 2021 4:27 PM
> > To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > <longpeng2@huawei.com>; Nadav Amit <nadav.amit@gmail.com>
> > Cc: chenjiashang <chenjiashang@huawei.com>; David Woodhouse
> > <dwmw2@infradead.org>; iommu@lists.linux-foundation.org; LKML
> > <linux-kernel@vger.kernel.org>; alex.williamson@redhat.com; Gonglei
> (Arei)
> > <arei.gonglei@huawei.com>; will@kernel.org
> > Subject: RE: A problem of Intel IOMMU hardware ?
> >
> > > From: iommu <iommu-bounces@lists.linux-foundation.org> On Behalf Of
> > > Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > >
> > > > 2. Consider ensuring that the problem is not somehow related to
> > > > queued invalidations. Try to use __iommu_flush_iotlb() instead of
> > qi_flush_iotlb().
> > > >
> > >
> > > I tried to force to use __iommu_flush_iotlb(), but maybe something
> > > wrong, the system crashed, so I prefer to lower the priority of this
> operation.
> > >
> >
> > The VT-d spec clearly says that register-based invalidation can be used only
> when
> > queued-invalidations are not enabled. Intel-IOMMU driver doesn't provide
> an
> > option to disable queued-invalidation though, when the hardware is
> capable. If you
> > really want to try, tweak the code in intel_iommu_init_qi.
> >
> 
> Hi Kevin,
> 
> Thanks to point out this. Do you have any ideas about this problem ? I tried
> to descript the problem much clear in my reply to Alex, hope you could have
> a look if you're interested.
> 

btw I saw you used a 4.18 kernel in this test. What about the latest kernel?

Also, one way to separate a software bug from a hardware bug is to trace the
low-level interface (e.g., qi_flush_iotlb) that actually sends invalidation
descriptors to the IOMMU hardware. Check the window between b) and c) and see
whether the software does the right thing there, as expected.
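
A hedged sketch of such instrumentation (a hypothetical debug patch;
qi_flush_iotlb() lives in drivers/iommu/dmar.c, and this is its 4.18-era
signature):

void qi_flush_iotlb(struct intel_iommu *iommu, u16 did, u64 addr,
                    unsigned int size_order, u64 type)
{
        /* Log every IOTLB invalidation handed to the hardware, so the
         * window between b) and c) can be inspected afterwards. */
        trace_printk("qi_flush_iotlb: did=%u addr=0x%llx order=%u type=0x%llx\n",
                     did, addr, size_order, type);
        /* ... existing body unchanged ... */
}

The resulting entries show up in /sys/kernel/debug/tracing/trace.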

Thanks
Kevin

* RE: A problem of Intel IOMMU hardware ?
  2021-03-18  8:56               ` Tian, Kevin
@ 2021-03-18  9:25                 ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
  -1 siblings, 0 replies; 50+ messages in thread
From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.) @ 2021-03-18  9:25 UTC (permalink / raw)
  To: Tian, Kevin, Nadav Amit
  Cc: chenjiashang, David Woodhouse, iommu, LKML, alex.williamson,
	Gonglei (Arei),
	will



> -----Original Message-----
> From: Tian, Kevin [mailto:kevin.tian@intel.com]
> Sent: Thursday, March 18, 2021 4:56 PM
> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> <longpeng2@huawei.com>; Nadav Amit <nadav.amit@gmail.com>
> Cc: chenjiashang <chenjiashang@huawei.com>; David Woodhouse
> <dwmw2@infradead.org>; iommu@lists.linux-foundation.org; LKML
> <linux-kernel@vger.kernel.org>; alex.williamson@redhat.com; Gonglei (Arei)
> <arei.gonglei@huawei.com>; will@kernel.org
> Subject: RE: A problem of Intel IOMMU hardware ?
> 
> > From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > <longpeng2@huawei.com>
> >
> > > -----Original Message-----
> > > From: Tian, Kevin [mailto:kevin.tian@intel.com]
> > > Sent: Thursday, March 18, 2021 4:27 PM
> > > To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > > <longpeng2@huawei.com>; Nadav Amit <nadav.amit@gmail.com>
> > > Cc: chenjiashang <chenjiashang@huawei.com>; David Woodhouse
> > > <dwmw2@infradead.org>; iommu@lists.linux-foundation.org; LKML
> > > <linux-kernel@vger.kernel.org>; alex.williamson@redhat.com; Gonglei
> > (Arei)
> > > <arei.gonglei@huawei.com>; will@kernel.org
> > > Subject: RE: A problem of Intel IOMMU hardware ?
> > >
> > > > From: iommu <iommu-bounces@lists.linux-foundation.org> On Behalf
> > > > Of Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > > >
> > > > > 2. Consider ensuring that the problem is not somehow related to
> > > > > queued invalidations. Try to use __iommu_flush_iotlb() instead
> > > > > of
> > > qi_flush_iotlb().
> > > > >
> > > >
> > > > I tried to force to use __iommu_flush_iotlb(), but maybe something
> > > > wrong, the system crashed, so I prefer to lower the priority of
> > > > this
> > operation.
> > > >
> > >
> > > The VT-d spec clearly says that register-based invalidation can be
> > > used only
> > when
> > > queued-invalidations are not enabled. Intel-IOMMU driver doesn't
> > > provide
> > an
> > > option to disable queued-invalidation though, when the hardware is
> > capable. If you
> > > really want to try, tweak the code in intel_iommu_init_qi.
> > >
> >
> > Hi Kevin,
> >
> > Thanks to point out this. Do you have any ideas about this problem ? I
> > tried to descript the problem much clear in my reply to Alex, hope you
> > could have a look if you're interested.
> >
> 
> btw I saw you used 4.18 kernel in this test. What about latest kernel?
> 

Not tested yet; it's hard to upgrade the kernel in our environment.

> Also one way to separate sw/hw bug is to trace the low level interface (e.g.,
> qi_flush_iotlb) which actually sends invalidation descriptors to the IOMMU
> hardware. Check the window between b) and c) and see whether the software does
> the right thing as expected there.
> 

We added some logging to the iommu driver over the past few days, and the
software seems fine. But we haven't looked inside qi_submit_sync yet; I'll try
that tonight.

> Thanks
> Kevin

* Re: A problem of Intel IOMMU hardware ?
  2021-03-18  9:25                 ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
@ 2021-03-18 16:46                   ` Nadav Amit
  -1 siblings, 0 replies; 50+ messages in thread
From: Nadav Amit @ 2021-03-18 16:46 UTC (permalink / raw)
  To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
  Cc: Tian, Kevin, chenjiashang, David Woodhouse, iommu, LKML,
	alex.williamson, Gonglei (Arei),
	will

[-- Attachment #1: Type: text/plain, Size: 4240 bytes --]



> On Mar 18, 2021, at 2:25 AM, Longpeng (Mike, Cloud Infrastructure Service Product Dept.) <longpeng2@huawei.com> wrote:
> 
> 
> 
>> -----Original Message-----
>> From: Tian, Kevin [mailto:kevin.tian@intel.com]
>> Sent: Thursday, March 18, 2021 4:56 PM
>> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
>> <longpeng2@huawei.com>; Nadav Amit <nadav.amit@gmail.com>
>> Cc: chenjiashang <chenjiashang@huawei.com>; David Woodhouse
>> <dwmw2@infradead.org>; iommu@lists.linux-foundation.org; LKML
>> <linux-kernel@vger.kernel.org>; alex.williamson@redhat.com; Gonglei (Arei)
>> <arei.gonglei@huawei.com>; will@kernel.org
>> Subject: RE: A problem of Intel IOMMU hardware ?
>> 
>>> From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
>>> <longpeng2@huawei.com>
>>> 
>>>> -----Original Message-----
>>>> From: Tian, Kevin [mailto:kevin.tian@intel.com]
>>>> Sent: Thursday, March 18, 2021 4:27 PM
>>>> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
>>>> <longpeng2@huawei.com>; Nadav Amit <nadav.amit@gmail.com>
>>>> Cc: chenjiashang <chenjiashang@huawei.com>; David Woodhouse
>>>> <dwmw2@infradead.org>; iommu@lists.linux-foundation.org; LKML
>>>> <linux-kernel@vger.kernel.org>; alex.williamson@redhat.com; Gonglei
>>> (Arei)
>>>> <arei.gonglei@huawei.com>; will@kernel.org
>>>> Subject: RE: A problem of Intel IOMMU hardware ?
>>>> 
>>>>> From: iommu <iommu-bounces@lists.linux-foundation.org> On Behalf
>>>>> Of Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
>>>>> 
>>>>>> 2. Consider ensuring that the problem is not somehow related to
>>>>>> queued invalidations. Try to use __iommu_flush_iotlb() instead
>>>>>> of
>>>> qi_flush_iotlb().
>>>>>> 
>>>>> 
>>>>> I tried to force to use __iommu_flush_iotlb(), but maybe something
>>>>> wrong, the system crashed, so I prefer to lower the priority of
>>>>> this
>>> operation.
>>>>> 
>>>> 
>>>> The VT-d spec clearly says that register-based invalidation can be
>>>> used only
>>> when
>>>> queued-invalidations are not enabled. Intel-IOMMU driver doesn't
>>>> provide
>>> an
>>>> option to disable queued-invalidation though, when the hardware is
>>> capable. If you
>>>> really want to try, tweak the code in intel_iommu_init_qi.
>>>> 
>>> 
>>> Hi Kevin,
>>> 
>>> Thanks to point out this. Do you have any ideas about this problem ? I
>>> tried to descript the problem much clear in my reply to Alex, hope you
>>> could have a look if you're interested.
>>> 
>> 
>> btw I saw you used 4.18 kernel in this test. What about latest kernel?
>> 
> 
> Not test yet. It's hard to upgrade kernel in our environment.
> 
>> Also one way to separate sw/hw bug is to trace the low level interface (e.g.,
>> qi_flush_iotlb) which actually sends invalidation descriptors to the IOMMU
>> hardware. Check the window between b) and c) and see whether the software does
>> the right thing as expected there.
>> 
> 
> We add some log in iommu driver these days, the software seems fine. But we
> didn't look inside the qi_submit_sync yet, I'll try it tonight.

So here is my guess:

Intel probably used an implementation of some other (regular) TLB
design as the basis for the IOTLB.

Intel SDM says regarding TLBs (4.10.4.2 “Recommended Invalidation”):

"Software wishing to prevent this uncertainty should not write to
a paging-structure entry in a way that would change, for any linear
address, both the page size and either the page frame, access rights,
or other attributes.”


Now the aforementioned uncertainty is a bit different (multiple
*valid* translations of a single address). Yet, perhaps this is
yet another thing that might happen.

From a brief look at the handling of MMU (not IOMMU) hugepages
in Linux, indeed the PMD is first cleared and flushed before a
new valid PMD is set. This is possible for MMUs since they
allow the software to handle spurious page-faults gracefully.
This is not the case for the IOMMU though (without PRI).

Not sure this explains everything though. If that is the problem,
then during a mapping that changes page-sizes, a TLB flush is
needed, similarly to the one Longpeng did manually.
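
If that theory holds, the map path would need something like the following
(a sketch only; pde and new_2m_val are hypothetical placeholders for the
page-directory entry and the superpage PDE value):

        /* Replace the 4KB PTEs with a 2MB superpage PDE. */
        WRITE_ONCE(pde->val, new_2m_val);
        /* The extra flush suggested above: without it, an entry cached
         * under the old 4KB page size may coexist with the new 2MB
         * translation, the kind of ambiguity the SDM text warns about. */
        iommu_flush_iotlb_psi(iommu, domain, pfn, pages, 0 /* ih */,
                              1 /* map */);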



[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

* Re: A problem of Intel IOMMU hardware ?
  2021-03-18  8:56               ` Tian, Kevin
@ 2021-03-19  0:15                 ` Lu Baolu
  -1 siblings, 0 replies; 50+ messages in thread
From: Lu Baolu @ 2021-03-19  0:15 UTC (permalink / raw)
  To: Tian, Kevin, Longpeng (Mike,
	Cloud Infrastructure Service Product Dept.),
	Nadav Amit
  Cc: baolu.lu, chenjiashang, will, iommu, LKML, alex.williamson,
	Gonglei (Arei),
	David Woodhouse

On 3/18/21 4:56 PM, Tian, Kevin wrote:
>> From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
>> <longpeng2@huawei.com>
>>
>>> -----Original Message-----
>>> From: Tian, Kevin [mailto:kevin.tian@intel.com]
>>> Sent: Thursday, March 18, 2021 4:27 PM
>>> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
>>> <longpeng2@huawei.com>; Nadav Amit <nadav.amit@gmail.com>
>>> Cc: chenjiashang <chenjiashang@huawei.com>; David Woodhouse
>>> <dwmw2@infradead.org>; iommu@lists.linux-foundation.org; LKML
>>> <linux-kernel@vger.kernel.org>; alex.williamson@redhat.com; Gonglei
>> (Arei)
>>> <arei.gonglei@huawei.com>; will@kernel.org
>>> Subject: RE: A problem of Intel IOMMU hardware ?
>>>
>>>> From: iommu <iommu-bounces@lists.linux-foundation.org> On Behalf Of
>>>> Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
>>>>
>>>>> 2. Consider ensuring that the problem is not somehow related to
>>>>> queued invalidations. Try to use __iommu_flush_iotlb() instead of
>>> qi_flush_iotlb().
>>>>>
>>>>
>>>> I tried to force to use __iommu_flush_iotlb(), but maybe something
>>>> wrong, the system crashed, so I prefer to lower the priority of this
>> operation.
>>>>
>>>
>>> The VT-d spec clearly says that register-based invalidation can be used only
>> when
>>> queued-invalidations are not enabled. Intel-IOMMU driver doesn't provide
>> an
>>> option to disable queued-invalidation though, when the hardware is
>> capable. If you
>>> really want to try, tweak the code in intel_iommu_init_qi.
>>>
>>
>> Hi Kevin,
>>
>> Thanks to point out this. Do you have any ideas about this problem ? I tried
>> to descript the problem much clear in my reply to Alex, hope you could have
>> a look if you're interested.
>>
> 
> btw I saw you used 4.18 kernel in this test. What about latest kernel?
> 
> Also one way to separate sw/hw bug is to trace the low level interface (e.g.,
> qi_flush_iotlb) which actually sends invalidation descriptors to the IOMMU
> hardware. Check the window between b) and c) and see whether the
> software does the right thing as expected there.

Yes. It would be better if we could reproduce this with the latest kernel,
which has debugfs files that expose the page tables, the invalidation queues,
etc.

Best regards,
baolu

* RE: A problem of Intel IOMMU hardware ?
  2021-03-18 16:46                   ` Nadav Amit
@ 2021-03-21 23:51                     ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
  -1 siblings, 0 replies; 50+ messages in thread
From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.) @ 2021-03-21 23:51 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Tian, Kevin, chenjiashang, David Woodhouse, iommu, LKML,
	alex.williamson, Gonglei (Arei),
	will, Lu Baolu, Joerg Roedel

Hi Nadav,

> -----Original Message-----
> From: Nadav Amit [mailto:nadav.amit@gmail.com]
> Sent: Friday, March 19, 2021 12:46 AM
> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> <longpeng2@huawei.com>
> Cc: Tian, Kevin <kevin.tian@intel.com>; chenjiashang
> <chenjiashang@huawei.com>; David Woodhouse <dwmw2@infradead.org>;
> iommu@lists.linux-foundation.org; LKML <linux-kernel@vger.kernel.org>;
> alex.williamson@redhat.com; Gonglei (Arei) <arei.gonglei@huawei.com>;
> will@kernel.org
> Subject: Re: A problem of Intel IOMMU hardware ?
> 
> 
> 
> > On Mar 18, 2021, at 2:25 AM, Longpeng (Mike, Cloud Infrastructure Service
> Product Dept.) <longpeng2@huawei.com> wrote:
> >
> >
> >
> >> -----Original Message-----
> >> From: Tian, Kevin [mailto:kevin.tian@intel.com]
> >> Sent: Thursday, March 18, 2021 4:56 PM
> >> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> >> <longpeng2@huawei.com>; Nadav Amit <nadav.amit@gmail.com>
> >> Cc: chenjiashang <chenjiashang@huawei.com>; David Woodhouse
> >> <dwmw2@infradead.org>; iommu@lists.linux-foundation.org; LKML
> >> <linux-kernel@vger.kernel.org>; alex.williamson@redhat.com; Gonglei
> >> (Arei) <arei.gonglei@huawei.com>; will@kernel.org
> >> Subject: RE: A problem of Intel IOMMU hardware ?
> >>
> >>> From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> >>> <longpeng2@huawei.com>
> >>>
> >>>> -----Original Message-----
> >>>> From: Tian, Kevin [mailto:kevin.tian@intel.com]
> >>>> Sent: Thursday, March 18, 2021 4:27 PM
> >>>> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> >>>> <longpeng2@huawei.com>; Nadav Amit <nadav.amit@gmail.com>
> >>>> Cc: chenjiashang <chenjiashang@huawei.com>; David Woodhouse
> >>>> <dwmw2@infradead.org>; iommu@lists.linux-foundation.org; LKML
> >>>> <linux-kernel@vger.kernel.org>; alex.williamson@redhat.com; Gonglei
> >>> (Arei)
> >>>> <arei.gonglei@huawei.com>; will@kernel.org
> >>>> Subject: RE: A problem of Intel IOMMU hardware ?
> >>>>
> >>>>> From: iommu <iommu-bounces@lists.linux-foundation.org> On Behalf
> >>>>> Of Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> >>>>>
> >>>>>> 2. Consider ensuring that the problem is not somehow related to
> >>>>>> queued invalidations. Try to use __iommu_flush_iotlb() instead of
> >>>> qi_flush_iotlb().
> >>>>>>
> >>>>>
> >>>>> I tried to force to use __iommu_flush_iotlb(), but maybe something
> >>>>> wrong, the system crashed, so I prefer to lower the priority of
> >>>>> this
> >>> operation.
> >>>>>
> >>>>
> >>>> The VT-d spec clearly says that register-based invalidation can be
> >>>> used only
> >>> when
> >>>> queued-invalidations are not enabled. Intel-IOMMU driver doesn't
> >>>> provide
> >>> an
> >>>> option to disable queued-invalidation though, when the hardware is
> >>> capable. If you
> >>>> really want to try, tweak the code in intel_iommu_init_qi.
> >>>>
> >>>
> >>> Hi Kevin,
> >>>
> >>> Thanks to point out this. Do you have any ideas about this problem ?
> >>> I tried to descript the problem much clear in my reply to Alex, hope
> >>> you could have a look if you're interested.
> >>>
> >>
> >> btw I saw you used 4.18 kernel in this test. What about latest kernel?
> >>
> >
> > Not test yet. It's hard to upgrade kernel in our environment.
> >
> >> Also one way to separate sw/hw bug is to trace the low level
> >> interface (e.g.,
> >> qi_flush_iotlb) which actually sends invalidation descriptors to the
> >> IOMMU hardware. Check the window between b) and c) and see whether
> >> the software does the right thing as expected there.
> >>
> >
> > We add some log in iommu driver these days, the software seems fine.
> > But we didn't look inside the qi_submit_sync yet, I'll try it tonight.
> 
> So here is my guess:
> 
> Intel probably used as a basis for the IOTLB an implementation of some other
> (regular) TLB design.
> 
> Intel SDM says regarding TLBs (4.10.4.2 “Recommended Invalidation”):
> 
> "Software wishing to prevent this uncertainty should not write to a
> paging-structure entry in a way that would change, for any linear address, both the
> page size and either the page frame, access rights, or other attributes.”
> 
> 
> Now the aforementioned uncertainty is a bit different (multiple
> *valid* translations of a single address). Yet, perhaps this is yet another thing that
> might happen.
> 
> From a brief look on the handling of MMU (not IOMMU) hugepages in Linux, indeed
> the PMD is first cleared and flushed before a new valid PMD is set. This is possible
> for MMUs since they allow the software to handle spurious page-faults gracefully.
> This is not the case for the IOMMU though (without PRI).
> 

But in my case, flush_iotlb is called after the range (0x0, 0xa0000) is
unmapped, and I have no idea why that invalidation isn't effective; I haven't
looked inside the QI path yet, but there are no complaints from the driver.

Could you please point me to the MMU code you mentioned above? In the MMU code,
is it possible that all of the PTEs are not-present while the PMD entry is
still present?

*Page table after (0x0, 0xa0000) is unmapped:
PML4: 0x      1a34fbb003
  PDPE: 0x      1a34fbb003
   PDE: 0x      1a34fbf003
    PTE: 0x               0

*Page table after (0x0, 0xc0000000) is mapped:
PML4: 0x      1a34fbb003
  PDPE: 0x      1a34fbb003
   PDE: 0x       15ec00883

(Note that the new PDE has the PS bit (bit 7) set, i.e. it is now a 2MB
superpage leaf rather than a pointer to a PTE page: exactly the page-size
change discussed above.)

> Not sure this explains everything though. If that is the problem, then during a
> mapping that changes page-sizes, a TLB flush is needed, similarly to the one
> Longpeng did manually.
> 



* RE: A problem of Intel IOMMU hardware ?
  2021-03-18 16:46                   ` Nadav Amit
@ 2021-03-22  0:27                     ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
  -1 siblings, 0 replies; 50+ messages in thread
From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.) @ 2021-03-22  0:27 UTC (permalink / raw)
  To: Nadav Amit
  Cc: Tian, Kevin, chenjiashang, David Woodhouse, iommu, LKML,
	alex.williamson, Gonglei (Arei),
	will, Lu Baolu, Joerg Roedel



> -----Original Message-----
> From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> Sent: Monday, March 22, 2021 7:51 AM
> To: 'Nadav Amit' <nadav.amit@gmail.com>
> Cc: Tian, Kevin <kevin.tian@intel.com>; chenjiashang
> <chenjiashang@huawei.com>; David Woodhouse <dwmw2@infradead.org>;
> iommu@lists.linux-foundation.org; LKML <linux-kernel@vger.kernel.org>;
> alex.williamson@redhat.com; Gonglei (Arei) <arei.gonglei@huawei.com>;
> will@kernel.org; 'Lu Baolu' <baolu.lu@linux.intel.com>; 'Joerg Roedel'
> <joro@8bytes.org>
> Subject: RE: A problem of Intel IOMMU hardware ?
> 
> Hi Nadav,
> 
> > -----Original Message-----
> > From: Nadav Amit [mailto:nadav.amit@gmail.com]
> > Sent: Friday, March 19, 2021 12:46 AM
> > To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > <longpeng2@huawei.com>
> > Cc: Tian, Kevin <kevin.tian@intel.com>; chenjiashang
> > <chenjiashang@huawei.com>; David Woodhouse <dwmw2@infradead.org>;
> > iommu@lists.linux-foundation.org; LKML <linux-kernel@vger.kernel.org>;
> > alex.williamson@redhat.com; Gonglei (Arei) <arei.gonglei@huawei.com>;
> > will@kernel.org
> > Subject: Re: A problem of Intel IOMMU hardware ?
> >
> >
> >
> > > On Mar 18, 2021, at 2:25 AM, Longpeng (Mike, Cloud Infrastructure
> > > Service
> > Product Dept.) <longpeng2@huawei.com> wrote:
> > >
> > >
> > >
> > >> -----Original Message-----
> > >> From: Tian, Kevin [mailto:kevin.tian@intel.com]
> > >> Sent: Thursday, March 18, 2021 4:56 PM
> > >> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > >> <longpeng2@huawei.com>; Nadav Amit <nadav.amit@gmail.com>
> > >> Cc: chenjiashang <chenjiashang@huawei.com>; David Woodhouse
> > >> <dwmw2@infradead.org>; iommu@lists.linux-foundation.org; LKML
> > >> <linux-kernel@vger.kernel.org>; alex.williamson@redhat.com; Gonglei
> > >> (Arei) <arei.gonglei@huawei.com>; will@kernel.org
> > >> Subject: RE: A problem of Intel IOMMU hardware ?
> > >>
> > >>> From: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > >>> <longpeng2@huawei.com>
> > >>>
> > >>>> -----Original Message-----
> > >>>> From: Tian, Kevin [mailto:kevin.tian@intel.com]
> > >>>> Sent: Thursday, March 18, 2021 4:27 PM
> > >>>> To: Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > >>>> <longpeng2@huawei.com>; Nadav Amit <nadav.amit@gmail.com>
> > >>>> Cc: chenjiashang <chenjiashang@huawei.com>; David Woodhouse
> > >>>> <dwmw2@infradead.org>; iommu@lists.linux-foundation.org; LKML
> > >>>> <linux-kernel@vger.kernel.org>; alex.williamson@redhat.com;
> > >>>> Gonglei
> > >>> (Arei)
> > >>>> <arei.gonglei@huawei.com>; will@kernel.org
> > >>>> Subject: RE: A problem of Intel IOMMU hardware ?
> > >>>>
> > >>>>> From: iommu <iommu-bounces@lists.linux-foundation.org> On Behalf
> > >>>>> Of Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
> > >>>>>
> > >>>>>> 2. Consider ensuring that the problem is not somehow related to
> > >>>>>> queued invalidations. Try to use __iommu_flush_iotlb() instead
> > >>>>>> of
> > >>>> qi_flush_iotlb().
> > >>>>>>
> > >>>>>
> > >>>>> I tried to force to use __iommu_flush_iotlb(), but maybe
> > >>>>> something wrong, the system crashed, so I prefer to lower the
> > >>>>> priority of this
> > >>> operation.
> > >>>>>
> > >>>>
> > >>>> The VT-d spec clearly says that register-based invalidation can
> > >>>> be used only
> > >>> when
> > >>>> queued-invalidations are not enabled. Intel-IOMMU driver doesn't
> > >>>> provide
> > >>> an
> > >>>> option to disable queued-invalidation though, when the hardware
> > >>>> is
> > >>> capable. If you
> > >>>> really want to try, tweak the code in intel_iommu_init_qi.
> > >>>>
> > >>>
> > >>> Hi Kevin,
> > >>>
> > >>> Thanks to point out this. Do you have any ideas about this problem ?
> > >>> I tried to descript the problem much clear in my reply to Alex,
> > >>> hope you could have a look if you're interested.
> > >>>
> > >>
> > >> btw I saw you used 4.18 kernel in this test. What about latest kernel?
> > >>
> > >
> > > Not test yet. It's hard to upgrade kernel in our environment.
> > >
> > >> Also one way to separate sw/hw bug is to trace the low level
> > >> interface (e.g.,
> > >> qi_flush_iotlb) which actually sends invalidation descriptors to
> > >> the IOMMU hardware. Check the window between b) and c) and see
> > >> whether the software does the right thing as expected there.
> > >>
> > >
> > > We add some log in iommu driver these days, the software seems fine.
> > > But we didn't look inside the qi_submit_sync yet, I'll try it tonight.
> >
> > So here is my guess:
> >
> > Intel probably used as a basis for the IOTLB an implementation of some
> > other
> > (regular) TLB design.
> >
> > Intel SDM says regarding TLBs (4.10.4.2 “Recommended Invalidation”):
> >
> > "Software wishing to prevent this uncertainty should not write to a
> > paging-structure entry in a way that would change, for any linear
> > address, both the page size and either the page frame, access rights, or other
> attributes.”
> >
> >
> > Now the aforementioned uncertainty is a bit different (multiple
> > *valid* translations of a single address). Yet, perhaps this is yet
> > another thing that might happen.
> >
> > From a brief look on the handling of MMU (not IOMMU) hugepages in
> > Linux, indeed the PMD is first cleared and flushed before a new valid
> > PMD is set. This is possible for MMUs since they allow the software to handle
> spurious page-faults gracefully.
> > This is not the case for the IOMMU though (without PRI).
> >
> 
> But in my case, flush_iotlb is called after the range (0x0, 0xa0000) is unmapped.
> I have no idea why that invalidation isn't effective (I haven't looked inside the
> QI path yet), but there are no complaints from the driver.
> 
> Could you please point out the MMU code you mentioned above? In the MMU code, is it
> possible for all of the PTEs to be not-present while the PMD entry is still present?
> 

Oh, I see the following MMU code path:
unmap_pmd_range
  __unmap_pmd_range
    unmap_pte_range
      try_to_free_pmd_page (if all of the PTEs are none)

So the MMU code won't keep the PMD entry present if all of its PTEs are
not-present.
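
For comparison, the MMU side uses a clear-and-flush ("break before make")
order when it replaces a PMD. A rough sketch using real kernel helpers,
heavily simplified from what e.g. khugepaged does (mm, vma, addr, pmdp and
new_huge_pmd are assumed to be set up by the caller):

        pmd_t old;

        /* 1. Clear the PMD first; a racing CPU access now takes a page
         *    fault, which the MMU fault handler can tolerate. */
        old = pmdp_huge_get_and_clear(mm, addr, pmdp);

        /* 2. Flush the TLB so no stale small-page translation survives. */
        flush_tlb_range(vma, addr, addr + HPAGE_PMD_SIZE);

        /* 3. Only now install the new huge PMD. */
        set_pmd_at(mm, addr, pmdp, new_huge_pmd);

The IOMMU cannot safely use this order without PRI: a DMA that hits the
not-present window between steps 1 and 3 is a fatal fault rather than a
recoverable one.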


> *Page table after (0x0, 0xa0000) is unmapped:
> PML4: 0x      1a34fbb003
>   PDPE: 0x      1a34fbb003
>    PDE: 0x      1a34fbf003
>     PTE: 0x               0
> 
> *Page table after (0x0, 0xc0000000) is mapped:
> PML4: 0x      1a34fbb003
>   PDPE: 0x      1a34fbb003
>    PDE: 0x       15ec00883
> 
> > Not sure this explains everything though. If that is the problem, then
> > during a mapping that changes page-sizes, a TLB flush is needed,
> > similarly to the one Longpeng did manually.
> >
> 


^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: A problem of Intel IOMMU hardware ?
  2021-03-18 16:46                   ` Nadav Amit
@ 2021-03-27  2:31                     ` Lu Baolu
  -1 siblings, 0 replies; 50+ messages in thread
From: Lu Baolu @ 2021-03-27  2:31 UTC (permalink / raw)
  To: Nadav Amit, Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
  Cc: baolu.lu, chenjiashang, Tian, Kevin, will, iommu, LKML,
	alex.williamson, Gonglei (Arei),
	David Woodhouse

Hi Nadav,

On 3/19/21 12:46 AM, Nadav Amit wrote:
> So here is my guess:
> 
> Intel probably used as a basis for the IOTLB an implementation of
> some other (regular) TLB design.
> 
> Intel SDM says regarding TLBs (4.10.4.2 “Recommended Invalidation”):
> 
> "Software wishing to prevent this uncertainty should not write to
> a paging-structure entry in a way that would change, for any linear
> address, both the page size and either the page frame, access rights,
> or other attributes.”
> 
> 
> Now the aforementioned uncertainty is a bit different (multiple
> *valid*  translations of a single address). Yet, perhaps this is
> yet another thing that might happen.
> 
>  From a brief look on the handling of MMU (not IOMMU) hugepages
> in Linux, indeed the PMD is first cleared and flushed before a
> new valid PMD is set. This is possible for MMUs since they
> allow the software to handle spurious page-faults gracefully.
> This is not the case for the IOMMU though (without PRI).
> 
> Not sure this explains everything though. If that is the problem,
> then during a mapping that changes page-sizes, a TLB flush is
> needed, similarly to the one Longpeng did manually.

I have been working with Longpeng on this issue these days. It turned
out that your guess is right. The PMD is first cleared but not flushed
before a new valid one is set. The previous entry might still be cached
in the paging-structure caches, which leads to disaster.

In __domain_mapping():

2352                                 /*
2353                                  * Ensure that old small page tables are
2354                                  * removed to make room for superpage(s).
2355                                  * We're adding new large pages, so make sure
2356                                  * we don't remove their parent tables.
2357                                  */
2358                                 dma_pte_free_pagetable(domain, iov_pfn, end_pfn,
2359                                                        largepage_lvl + 1);

I guess adding a cache flush operation after PMD switching should solve
the problem.
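
Something along these lines, perhaps (an untested sketch against the code
quoted above, not a final patch):

        dma_pte_free_pagetable(domain, iov_pfn, end_pfn,
                               largepage_lvl + 1);

        /* Sketch: flush the IOTLB and paging-structure caches for the
         * range whose small-page tables were just freed, before the
         * superpage entry is written, so hardware cannot keep using
         * the stale intermediate entries. iommu_id is a loop index. */
        for_each_domain_iommu(iommu_id, domain)
                iommu_flush_iotlb_psi(g_iommus[iommu_id], domain, iov_pfn,
                                      end_pfn - iov_pfn + 1, 0, 0);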

I am still not clear about this comment:

"
  This is possible for MMUs since they allow the software to handle
  spurious page-faults gracefully. This is not the case for the IOMMU
  though (without PRI).
"

Can you please shed more light on this?

Best regards,
baolu

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: A problem of Intel IOMMU hardware ?
  2021-03-27  2:31                     ` Lu Baolu
@ 2021-03-27  4:36                       ` Nadav Amit
  -1 siblings, 0 replies; 50+ messages in thread
From: Nadav Amit @ 2021-03-27  4:36 UTC (permalink / raw)
  To: Lu Baolu
  Cc: Longpeng (Mike, Cloud Infrastructure Service Product Dept.),
	chenjiashang, Tian, Kevin, will, iommu, LKML, alex.williamson,
	Gonglei (Arei),
	David Woodhouse

[-- Attachment #1: Type: text/plain, Size: 3814 bytes --]



> On Mar 26, 2021, at 7:31 PM, Lu Baolu <baolu.lu@linux.intel.com> wrote:
> 
> Hi Nadav,
> 
> On 3/19/21 12:46 AM, Nadav Amit wrote:
>> So here is my guess:
>> Intel probably used as a basis for the IOTLB an implementation of
>> some other (regular) TLB design.
>> Intel SDM says regarding TLBs (4.10.4.2 “Recommended Invalidation”):
>> "Software wishing to prevent this uncertainty should not write to
>> a paging-structure entry in a way that would change, for any linear
>> address, both the page size and either the page frame, access rights,
>> or other attributes.”
>> Now the aforementioned uncertainty is a bit different (multiple
>> *valid*  translations of a single address). Yet, perhaps this is
>> yet another thing that might happen.
>> From a brief look on the handling of MMU (not IOMMU) hugepages
>> in Linux, indeed the PMD is first cleared and flushed before a
>> new valid PMD is set. This is possible for MMUs since they
>> allow the software to handle spurious page-faults gracefully.
>> This is not the case for the IOMMU though (without PRI).
>> Not sure this explains everything though. If that is the problem,
>> then during a mapping that changes page-sizes, a TLB flush is
>> needed, similarly to the one Longpeng did manually.
> 
> I have been working with Longpeng on this issue these days. It turned
> out that your guess is right. The PMD is first cleared but not flushed
> before a new valid one is set. The previous entry might still be cached
> in the paging-structure caches, which leads to disaster.
> 
> In __domain_mapping():
> 
> 2352                                 /*
> 2353                                  * Ensure that old small page tables are
> 2354                                  * removed to make room for superpage(s).
> 2355                                  * We're adding new large pages, so make sure
> 2356                                  * we don't remove their parent tables.
> 2357                                  */
> 2358                                 dma_pte_free_pagetable(domain, iov_pfn, end_pfn,
> 2359                                                        largepage_lvl + 1);
> 
> I guess adding a cache flush operation after PMD switching should solve
> the problem.
> 
> I am still not clear about this comment:
> 
> "
> This is possible for MMUs since they allow the software to handle
> spurious page-faults gracefully. This is not the case for the IOMMU
> though (without PRI).
> "
> 
> Can you please shed more light on this?

I was looking at the code in more detail, and apparently my concern
is incorrect.

I was under the assumption that the IOMMU map/unmap can merge/split
(specifically split) huge-pages. For instance, if you map 2MB and
then unmap 4KB out of the 2MB, you would split the hugepage and
keep the rest of the mappings alive. This is the way the MMU is
usually managed. In my defense, I also saw such partial unmappings
in Longpeng's first scenario.

If this were possible, then you would have a case in which, out of 2MB
(for instance), 4KB were unmapped, and you need to split the 2MB
hugepage into 4KB pages. If you try to clear the PMD, flush, and then
set the PMD to point to a table with live 4KB PTEs, you can have
an interim state in which the PMD is not present. DMAs that arrive
at this stage might fault, and without PRI (and device support)
you do not have a way of restarting the DMA after the hugepage split
is completed.

Anyhow, this concern is apparently not relevant. I guess I was too
naive to assume the IOMMU management is similar to the MMU. I now
see that there is a comment in intel_iommu_unmap() saying:

        /* Cope with horrid API which requires us to unmap more than the
           size argument if it happens to be a large-page mapping. */
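
In other words (a hedged illustration of the semantics that comment
describes, not actual driver code):

        /* Request a 4KB unmap that happens to land inside a 2MB
         * superpage mapping; with the behavior the comment describes,
         * the driver unmaps the whole superpage and the returned size
         * reflects what was actually unmapped. */
        size_t unmapped = iommu_unmap(domain, iova, SZ_4K);
        if (unmapped > SZ_4K)
                pr_info("asked for 4KB, actually unmapped %#zx bytes\n",
                        unmapped);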

Regards,
Nadav

[-- Attachment #2: Message signed with OpenPGP --]
[-- Type: application/pgp-signature, Size: 833 bytes --]

^ permalink raw reply	[flat|nested] 50+ messages in thread

* Re: A problem of Intel IOMMU hardware ?
  2021-03-27  4:36                       ` Nadav Amit
@ 2021-03-27  5:27                         ` Lu Baolu
  -1 siblings, 0 replies; 50+ messages in thread
From: Lu Baolu @ 2021-03-27  5:27 UTC (permalink / raw)
  To: Nadav Amit
  Cc: baolu.lu, Longpeng (Mike,
	Cloud Infrastructure Service Product Dept.),
	chenjiashang, Tian, Kevin, will, iommu, LKML, alex.williamson,
	Gonglei (Arei),
	David Woodhouse

Hi Nadav,

On 3/27/21 12:36 PM, Nadav Amit wrote:
> 
> 
>> On Mar 26, 2021, at 7:31 PM, Lu Baolu <baolu.lu@linux.intel.com> wrote:
>>
>> Hi Nadav,
>>
>> On 3/19/21 12:46 AM, Nadav Amit wrote:
>>> So here is my guess:
>>> Intel probably used as a basis for the IOTLB an implementation of
>>> some other (regular) TLB design.
>>> Intel SDM says regarding TLBs (4.10.4.2 “Recommended Invalidation”):
>>> "Software wishing to prevent this uncertainty should not write to
>>> a paging-structure entry in a way that would change, for any linear
>>> address, both the page size and either the page frame, access rights,
>>> or other attributes.”
>>> Now the aforementioned uncertainty is a bit different (multiple
>>> *valid*  translations of a single address). Yet, perhaps this is
>>> yet another thing that might happen.
>>>  From a brief look on the handling of MMU (not IOMMU) hugepages
>>> in Linux, indeed the PMD is first cleared and flushed before a
>>> new valid PMD is set. This is possible for MMUs since they
>>> allow the software to handle spurious page-faults gracefully.
>>> This is not the case for the IOMMU though (without PRI).
>>> Not sure this explains everything though. If that is the problem,
>>> then during a mapping that changes page-sizes, a TLB flush is
>>> needed, similarly to the one Longpeng did manually.
>>
>> I have been working with Longpeng on this issue these days. It turned
>> out that your guess is right. The PMD is first cleared but not flushed
>> before a new valid one is set. The previous entry might still be cached
>> in the paging-structure caches, which leads to disaster.
>>
>> In __domain_mapping():
>>
>> 2352                                 /*
>> 2353                                  * Ensure that old small page tables are
>> 2354                                  * removed to make room for superpage(s).
>> 2355                                  * We're adding new large pages, so make sure
>> 2356                                  * we don't remove their parent tables.
>> 2357                                  */
>> 2358                                 dma_pte_free_pagetable(domain, iov_pfn, end_pfn,
>> 2359                                                        largepage_lvl + 1);
>>
>> I guess adding a cache flush operation after PMD switching should solve
>> the problem.
>>
>> I am still not clear about this comment:
>>
>> "
>> This is possible for MMUs since they allow the software to handle
>> spurious page-faults gracefully. This is not the case for the IOMMU
>> though (without PRI).
>> "
>>
>> Can you please shed more light on this?
> 
> I was looking at the code in more detail, and apparently my concern
> is incorrect.
> 
> I was under the assumption that the IOMMU map/unmap can merge/split
> (specifically split) huge-pages. For instance, if you map 2MB and
> then unmap 4KB out of the 2MB, you would split the hugepage and
> keep the rest of the mappings alive. This is the way the MMU is
> usually managed. In my defense, I also saw such partial unmappings
> in Longpeng's first scenario.
> 
> If this were possible, then you would have a case in which, out of 2MB
> (for instance), 4KB were unmapped, and you need to split the 2MB
> hugepage into 4KB pages. If you try to clear the PMD, flush, and then
> set the PMD to point to a table with live 4KB PTEs, you can have
> an interim state in which the PMD is not present. DMAs that arrive
> at this stage might fault, and without PRI (and device support)
> you do not have a way of restarting the DMA after the hugepage split
> is completed.

Got it, and thanks a lot for sharing.

For current IOMMU usage, I can't see any case that splits a huge page
into 4KB pages, but in the near future we do have a need to split huge
pages. For example, when we want to use the A/D bits to track DMA-dirty
pages during VM migration, it's an optimization if we can split a huge
page into 4K ones. So far, the solution I have considered is (a rough
sketch follows below):

1) Prepare the split subtables in advance;
    [it's identical to the existing one only use 4k pages instead of huge
     page.]
2) Switch the super (huge) page's leaf entry;
    [at this point, hardware could use both subtables. I am not sure
     whether the hardware allows a dynamic switch of page table entry
     from one valid entry to another valid one.]
3) Flush the cache.
    [hardware will use the new subtable]

As long as we can make sure that the old subtable won't be used by
hardware, we can safely release the old table.
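
A rough sketch of steps 1)-3) (helper usage as in intel-iommu.c, but pde,
old_phys, pfn and pages are placeholders for illustration; whether hardware
tolerates the valid-to-valid switch in step 2 is exactly the open question):

        /* 1) Prepare the fully-populated 4K subtable in advance,
         *    covering the same 2MB range as the existing superpage. */
        struct dma_pte *pte_page = alloc_pgtable_page(domain->nid);
        int i;

        for (i = 0; i < 512; i++)
                pte_page[i].val = (old_phys + i * VTD_PAGE_SIZE) |
                                  DMA_PTE_READ | DMA_PTE_WRITE;

        /* 2) Switch the leaf entry with a single 64-bit store, so the
         *    PDE is never not-present in between. */
        WRITE_ONCE(pde->val, virt_to_phys(pte_page) |
                             DMA_PTE_READ | DMA_PTE_WRITE);

        /* 3) Flush so hardware drops the cached superpage translation
         *    and starts walking the new subtable. */
        iommu_flush_iotlb_psi(iommu, domain, pfn, pages, 0, 0);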

> 
> Anyhow, this concern is apparently not relevant. I guess I was too
> naive to assume the IOMMU management is similar to the MMU. I now
> see that there is a comment in intel_iommu_unmap() saying:
> 
>          /* Cope with horrid API which requires us to unmap more than the
>             size argument if it happens to be a large-page mapping. */
> 
> Regards,
> Nadav
> 

Best regards,
baolu

^ permalink raw reply	[flat|nested] 50+ messages in thread

end of thread, other threads:[~2021-03-27  5:39 UTC | newest]

Thread overview: 50+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-03-17  3:16 A problem of Intel IOMMU hardware ? Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-17  3:16 ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-17  5:16 ` Lu Baolu
2021-03-17  5:16   ` Lu Baolu
2021-03-17  9:40   ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-17  9:40     ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-17 15:18   ` Alex Williamson
2021-03-17 15:18     ` Alex Williamson
2021-03-18  2:58     ` Lu Baolu
2021-03-18  2:58       ` Lu Baolu
2021-03-18  4:46       ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-18  4:46         ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-18  7:48         ` Nadav Amit
2021-03-18  7:48           ` Nadav Amit
2021-03-17  5:46 ` Nadav Amit
2021-03-17  5:46   ` Nadav Amit
2021-03-17  9:35   ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-17  9:35     ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-17 18:12     ` Nadav Amit
2021-03-17 18:12       ` Nadav Amit
2021-03-18  3:03       ` Lu Baolu
2021-03-18  3:03         ` Lu Baolu
2021-03-18  8:20       ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-18  8:20         ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-18  8:27         ` Tian, Kevin
2021-03-18  8:27           ` Tian, Kevin
2021-03-18  8:38           ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-18  8:38             ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-18  8:43             ` Tian, Kevin
2021-03-18  8:43               ` Tian, Kevin
2021-03-18  8:54               ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-18  8:54                 ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-18  8:56             ` Tian, Kevin
2021-03-18  8:56               ` Tian, Kevin
2021-03-18  9:25               ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-18  9:25                 ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-18 16:46                 ` Nadav Amit
2021-03-18 16:46                   ` Nadav Amit
2021-03-21 23:51                   ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-21 23:51                     ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-22  0:27                   ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-22  0:27                     ` Longpeng (Mike, Cloud Infrastructure Service Product Dept.)
2021-03-27  2:31                   ` Lu Baolu
2021-03-27  2:31                     ` Lu Baolu
2021-03-27  4:36                     ` Nadav Amit
2021-03-27  4:36                       ` Nadav Amit
2021-03-27  5:27                       ` Lu Baolu
2021-03-27  5:27                         ` Lu Baolu
2021-03-19  0:15               ` Lu Baolu
2021-03-19  0:15                 ` Lu Baolu

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.