From: Maxim Levitsky @ 2019-03-19 14:41 UTC (permalink / raw)
To: linux-nvme
Cc: Maxim Levitsky, linux-kernel, kvm, Jens Axboe, Alex Williamson,
Keith Busch, Christoph Hellwig, Sagi Grimberg, Kirti Wankhede,
David S. Miller, Mauro Carvalho Chehab, Greg Kroah-Hartman,
Wolfram Sang, Nicolas Ferre, Paul E. McKenney,
Paolo Bonzini, Liang Cunming, Liu Changpeng, Fam Zheng,
Amnon Ilan, John Ferlan
Date: Tue, 19 Mar 2019 14:45:45 +0200
Subject: [PATCH 0/9] RFC: NVME VFIO mediated device
Hi everyone!
In this patch series I would like to present my take on the problem of making
storage virtualization as fast as possible, with an emphasis on low latency.
I implemented a kernel VFIO-based mediated device that
allows the user to pass through a partition and/or a whole namespace to a guest.
The idea behind this driver is similar to the one described in the paper at
https://www.usenix.org/conference/atc18/presentation/peng,
although note that I started the development independently, prior to reading
this paper.
In addition, the implementation is not based on the code used in the paper,
as that source was not available to me at the time.
***Key points about the implementation:***
* A polling kernel thread is used. Polling is stopped after a
predefined timeout (1/2 sec by default).
A fully interrupt-driven mode is planned; a proof of concept already shows promising results.
* The guest sees a standard NVMe device - this allows running guests with
unmodified drivers, for example Windows guests.
* The NVMe device is shared between host and guest.
That means that even a single namespace can be split between host
and guest based on different partitions.
* Simple configuration
*** Performance ***
Performance was tested on an Intel DC P3700 with a Xeon E5-2620 v2,
and both latency and throughput are very similar to SPDK.
Soon I will test this on a better server and NVMe device and provide
more formal performance numbers.
Latency numbers:
~80µs - SPDK with the fio plugin on the host
~84µs - nvme driver on the host
~87µs - mdev-nvme + nvme driver in the guest
Throughput followed a similar pattern.
* Configuration example
$ modprobe nvme mdev_queues=4
$ modprobe nvme-mdev
$ UUID=$(uuidgen)
$ DEVICE='device pci address'
$ echo $UUID > /sys/bus/pci/devices/$DEVICE/mdev_supported_types/nvme-2Q_V1/create
$ echo n1p3 > /sys/bus/mdev/devices/$UUID/namespaces/add_namespace #attach host namespace 1, partition 3
$ echo 11 > /sys/bus/mdev/devices/$UUID/settings/iothread_cpu #pin the io thread to cpu 11
Afterward, boot QEMU with
-device vfio-pci,sysfsdev=/sys/bus/mdev/devices/$UUID
No configuration is needed on the guest.
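For completeness, tearing the device down again goes through the same sysfs tree. This sketch relies only on the standard mdev `remove` attribute provided by the mediated-device framework; detach the device from QEMU before destroying it:

```shell
# Destroy the mediated device instance; 'remove' is the standard
# mdev sysfs attribute (writing 1 tears the instance down).
echo 1 > /sys/bus/mdev/devices/$UUID/remove
```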
*** FAQ ***
* Why do this in the kernel? Why is this better than SPDK?
-> It reuses the existing NVMe kernel driver on the host, and needs no new drivers in the guest.
-> The NVMe device is shared between host and guest.
Even in fully virtualized configurations,
some partitions of an NVMe device could be used by guests as block devices
while others are passed through with nvme-mdev, balancing
the full feature set of IO-stack emulation against performance.
-> NVME-MDEV is a bit faster because the in-kernel driver
can send interrupts to the guest directly, without a context
switch that can be expensive due to the Meltdown mitigation.
-> It is able to use interrupts to get reasonable performance.
This is currently only implemented
as a proof of concept and not included in these patches,
but the interrupt-driven mode already performs reasonably well.
-> This is a framework that can later be used to support NVMe devices
with more of the IO virtualization built in
(an IOMMU with PASID support coupled with a device that supports it).
* Why attach directly to the nvme-pci driver instead of using block-layer IO?
-> Direct attachment allows for better performance, but I will
look into using block IO as well, especially for the fabrics drivers.
*** Implementation notes ***
* All guest memory is mapped into the physical NVMe device,
but not 1:1 as vfio-pci would do it.
This allows very efficient DMA.
To support this, patch 2 adds the ability for an mdev device to listen to
the guest's memory-map events.
Any such memory is immediately pinned and then DMA mapped.
(Support for fabric drivers, where this is not possible, exists too;
in that case the fabric driver does its own DMA mapping.)
* The nvme core driver is modified to announce the appearance
and disappearance of NVMe controllers and namespaces;
the nvme-mdev driver subscribes to these events.
* The nvme-pci driver is modified to expose a raw interface for attaching to
and submitting/polling the IO queues.
This allows the mdev driver to submit and poll IO very efficiently.
By default, one host queue is used per mediated device.
(Support for other fabric-based host drivers is planned.)
* nvme-mdev does not assume the presence of KVM, so any VFIO user, including
SPDK or a QEMU running with TCG, can use this virtual device.
*** Testing ***
The device was tested with stock QEMU 3.0 on the host,
with the host running a 5.0 kernel with nvme-mdev added, on the following hardware:
* QEMU nvme virtual device (with nested guest)
* Intel DC P3700 on Xeon E5-2620 v2 server
* Samsung SM981 (in a Thunderbolt enclosure, with my laptop)
* Lenovo NVME device found in my laptop
The guest was tested with kernels 4.16, 4.18 and 4.20, and with
the same custom-compiled 5.0 kernel.
A Windows 10 guest was tested too, with both Microsoft's inbox driver and
the open-source community NVMe driver
(https://lists.openfabrics.org/pipermail/nvmewin/2016-December/001420.html)
Testing was mostly done on x86_64, but a 32-bit host/guest combination
was lightly tested too.
In addition, the virtual device was tested with a nested guest,
by passing the virtual device through to it
using PCI passthrough, the QEMU userspace NVMe driver, and SPDK.
PS: I used to contribute to the kernel as a hobby using the
maximlevitsky@gmail.com address
Maxim Levitsky (9):
vfio/mdev: add .request callback
nvme/core: add some more values from the spec
nvme/core: add NVME_CTRL_SUSPENDED controller state
nvme/pci: use the NVME_CTRL_SUSPENDED state
nvme/pci: add known admin effects to augment admin effects log page
nvme/pci: init shadow doorbell after each reset
nvme/core: add mdev interfaces
nvme/core: add nvme-mdev core driver
nvme/pci: implement the mdev external queue allocation interface
MAINTAINERS | 5 +
drivers/nvme/Kconfig | 1 +
drivers/nvme/Makefile | 1 +
drivers/nvme/host/core.c | 149 +++++-
drivers/nvme/host/nvme.h | 55 ++-
drivers/nvme/host/pci.c | 385 ++++++++++++++-
drivers/nvme/mdev/Kconfig | 16 +
drivers/nvme/mdev/Makefile | 5 +
drivers/nvme/mdev/adm.c | 873 ++++++++++++++++++++++++++++++++++
drivers/nvme/mdev/events.c | 142 ++++++
drivers/nvme/mdev/host.c | 491 +++++++++++++++++++
drivers/nvme/mdev/instance.c | 802 +++++++++++++++++++++++++++++++
drivers/nvme/mdev/io.c | 563 ++++++++++++++++++++++
drivers/nvme/mdev/irq.c | 264 ++++++++++
drivers/nvme/mdev/mdev.h | 56 +++
drivers/nvme/mdev/mmio.c | 591 +++++++++++++++++++++++
drivers/nvme/mdev/pci.c | 247 ++++++++++
drivers/nvme/mdev/priv.h | 700 +++++++++++++++++++++++++++
drivers/nvme/mdev/udata.c | 390 +++++++++++++++
drivers/nvme/mdev/vcq.c | 207 ++++++++
drivers/nvme/mdev/vctrl.c | 514 ++++++++++++++++++++
drivers/nvme/mdev/viommu.c | 322 +++++++++++++
drivers/nvme/mdev/vns.c | 356 ++++++++++++++
drivers/nvme/mdev/vsq.c | 178 +++++++
drivers/vfio/mdev/vfio_mdev.c | 11 +
include/linux/mdev.h | 4 +
include/linux/nvme.h | 88 +++-
27 files changed, 7375 insertions(+), 41 deletions(-)
create mode 100644 drivers/nvme/mdev/Kconfig
create mode 100644 drivers/nvme/mdev/Makefile
create mode 100644 drivers/nvme/mdev/adm.c
create mode 100644 drivers/nvme/mdev/events.c
create mode 100644 drivers/nvme/mdev/host.c
create mode 100644 drivers/nvme/mdev/instance.c
create mode 100644 drivers/nvme/mdev/io.c
create mode 100644 drivers/nvme/mdev/irq.c
create mode 100644 drivers/nvme/mdev/mdev.h
create mode 100644 drivers/nvme/mdev/mmio.c
create mode 100644 drivers/nvme/mdev/pci.c
create mode 100644 drivers/nvme/mdev/priv.h
create mode 100644 drivers/nvme/mdev/udata.c
create mode 100644 drivers/nvme/mdev/vcq.c
create mode 100644 drivers/nvme/mdev/vctrl.c
create mode 100644 drivers/nvme/mdev/viommu.c
create mode 100644 drivers/nvme/mdev/vns.c
create mode 100644 drivers/nvme/mdev/vsq.c
--
2.17.2
^ permalink raw reply [flat|nested] 3471+ messages in thread
* [PATCH 1/9] vfio/mdev: add .request callback
From: Maxim Levitsky @ 2019-03-19 14:41 UTC (permalink / raw)
To: linux-nvme
Cc: Maxim Levitsky, linux-kernel, kvm, Jens Axboe, Alex Williamson,
Keith Busch, Christoph Hellwig, Sagi Grimberg, Kirti Wankhede,
David S. Miller, Mauro Carvalho Chehab, Greg Kroah-Hartman,
Wolfram Sang, Nicolas Ferre, Paul E. McKenney,
Paolo Bonzini, Liang Cunming, Liu Changpeng, Fam Zheng,
Amnon Ilan, John Ferlan
This will allow hotplug to be enabled for mediated devices.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
drivers/vfio/mdev/vfio_mdev.c | 11 +++++++++++
include/linux/mdev.h | 4 ++++
2 files changed, 15 insertions(+)
diff --git a/drivers/vfio/mdev/vfio_mdev.c b/drivers/vfio/mdev/vfio_mdev.c
index d230620fe02d..17aa76de0764 100644
--- a/drivers/vfio/mdev/vfio_mdev.c
+++ b/drivers/vfio/mdev/vfio_mdev.c
@@ -101,6 +101,16 @@ static int vfio_mdev_mmap(void *device_data, struct vm_area_struct *vma)
return parent->ops->mmap(mdev, vma);
}
+static void vfio_mdev_request(void *device_data, unsigned int count)
+{
+ struct mdev_device *mdev = device_data;
+ struct mdev_parent *parent = mdev->parent;
+
+ if (unlikely(!parent->ops->request))
+ return;
+ parent->ops->request(mdev, count);
+}
+
static const struct vfio_device_ops vfio_mdev_dev_ops = {
.name = "vfio-mdev",
.open = vfio_mdev_open,
@@ -109,6 +119,7 @@ static const struct vfio_device_ops vfio_mdev_dev_ops = {
.read = vfio_mdev_read,
.write = vfio_mdev_write,
.mmap = vfio_mdev_mmap,
+ .request = vfio_mdev_request,
};
static int vfio_mdev_probe(struct device *dev)
diff --git a/include/linux/mdev.h b/include/linux/mdev.h
index b6e048e1045f..24887cd56962 100644
--- a/include/linux/mdev.h
+++ b/include/linux/mdev.h
@@ -13,6 +13,9 @@
#ifndef MDEV_H
#define MDEV_H
+#include <linux/uuid.h>
+#include <linux/device.h>
+
struct mdev_device;
/**
@@ -81,6 +84,7 @@ struct mdev_parent_ops {
long (*ioctl)(struct mdev_device *mdev, unsigned int cmd,
unsigned long arg);
int (*mmap)(struct mdev_device *mdev, struct vm_area_struct *vma);
+ void (*request)(struct mdev_device *mdev, unsigned int count);
};
/* interface for exporting mdev supported type attributes */
--
2.17.2
* [PATCH 2/9] nvme/core: add some more values from the spec
From: Maxim Levitsky @ 2019-03-19 14:41 UTC (permalink / raw)
To: linux-nvme
Cc: Maxim Levitsky, linux-kernel, kvm, Jens Axboe, Alex Williamson,
Keith Busch, Christoph Hellwig, Sagi Grimberg, Kirti Wankhede,
David S. Miller, Mauro Carvalho Chehab, Greg Kroah-Hartman,
Wolfram Sang, Nicolas Ferre, Paul E. McKenney,
Paolo Bonzini, Liang Cunming, Liu Changpeng, Fam Zheng,
Amnon Ilan, John Ferlan
This adds a few defines from the spec
that will be used in the nvme-mdev driver.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
include/linux/nvme.h | 88 ++++++++++++++++++++++++++++++++++----------
1 file changed, 68 insertions(+), 20 deletions(-)
diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index bbcc83886899..029162db31bb 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -152,32 +152,42 @@ enum {
#define NVME_NVM_IOCQES 4
enum {
- NVME_CC_ENABLE = 1 << 0,
- NVME_CC_CSS_NVM = 0 << 4,
NVME_CC_EN_SHIFT = 0,
+ NVME_CC_ENABLE = 1 << NVME_CC_EN_SHIFT,
+
NVME_CC_CSS_SHIFT = 4,
+ NVME_CC_CSS_NVM = 0 << NVME_CC_CSS_SHIFT,
+
NVME_CC_MPS_SHIFT = 7,
+ NVME_CC_MPS_MASK = 0xF << NVME_CC_MPS_SHIFT,
+
NVME_CC_AMS_SHIFT = 11,
- NVME_CC_SHN_SHIFT = 14,
- NVME_CC_IOSQES_SHIFT = 16,
- NVME_CC_IOCQES_SHIFT = 20,
NVME_CC_AMS_RR = 0 << NVME_CC_AMS_SHIFT,
NVME_CC_AMS_WRRU = 1 << NVME_CC_AMS_SHIFT,
NVME_CC_AMS_VS = 7 << NVME_CC_AMS_SHIFT,
+ NVME_CC_AMS_MASK = 0x7 << NVME_CC_AMS_SHIFT,
+
+ NVME_CC_SHN_SHIFT = 14,
NVME_CC_SHN_NONE = 0 << NVME_CC_SHN_SHIFT,
NVME_CC_SHN_NORMAL = 1 << NVME_CC_SHN_SHIFT,
NVME_CC_SHN_ABRUPT = 2 << NVME_CC_SHN_SHIFT,
NVME_CC_SHN_MASK = 3 << NVME_CC_SHN_SHIFT,
+
+ NVME_CC_IOSQES_SHIFT = 16,
+ NVME_CC_IOCQES_SHIFT = 20,
NVME_CC_IOSQES = NVME_NVM_IOSQES << NVME_CC_IOSQES_SHIFT,
NVME_CC_IOCQES = NVME_NVM_IOCQES << NVME_CC_IOCQES_SHIFT,
+
NVME_CSTS_RDY = 1 << 0,
NVME_CSTS_CFS = 1 << 1,
NVME_CSTS_NSSRO = 1 << 4,
NVME_CSTS_PP = 1 << 5,
- NVME_CSTS_SHST_NORMAL = 0 << 2,
- NVME_CSTS_SHST_OCCUR = 1 << 2,
- NVME_CSTS_SHST_CMPLT = 2 << 2,
- NVME_CSTS_SHST_MASK = 3 << 2,
+
+ NVME_CSTS_SHST_SHIFT = 2,
+ NVME_CSTS_SHST_NORMAL = 0 << NVME_CSTS_SHST_SHIFT,
+ NVME_CSTS_SHST_OCCUR = 1 << NVME_CSTS_SHST_SHIFT,
+ NVME_CSTS_SHST_CMPLT = 2 << NVME_CSTS_SHST_SHIFT,
+ NVME_CSTS_SHST_MASK = 3 << NVME_CSTS_SHST_SHIFT,
};
struct nvme_id_power_state {
@@ -404,6 +414,20 @@ enum {
NVME_NIDT_UUID = 0x03,
};
+struct nvme_err_log_entry {
+ __u8 err_count[8];
+ __le16 sqid;
+ __le16 cid;
+ __le16 status;
+ __le16 location;
+ __u8 lba[8];
+ __le32 ns;
+ __u8 vnd;
+ __u8 rsvd1[3];
+ __u8 cmd_specific[8];
+ __u8 rsvd2[24];
+};
+
struct nvme_smart_log {
__u8 critical_warning;
__u8 temperature[2];
@@ -491,13 +515,30 @@ enum {
NVME_AER_VS = 7,
};
-enum {
- NVME_AER_NOTICE_NS_CHANGED = 0x00,
- NVME_AER_NOTICE_FW_ACT_STARTING = 0x01,
- NVME_AER_NOTICE_ANA = 0x03,
- NVME_AER_NOTICE_DISC_CHANGED = 0xf0,
+enum nvme_async_event_type {
+ NVME_AER_TYPE_ERROR = 0,
+ NVME_AER_TYPE_SMART = 1,
+ NVME_AER_TYPE_NOTICE = 2,
+ NVME_AER_TYPE_MAX = 7,
};
+enum nvme_async_event {
+ NVME_AER_ERROR_INVALID_DB_REG = 0,
+ NVME_AER_ERROR_INVALID_DB_VALUE = 1,
+ NVME_AER_ERROR_DIAG_FAILURE = 2,
+ NVME_AER_ERROR_PERSISTENT_INT_ERR = 3,
+ NVME_AER_ERROR_TRANSIENT_INT_ERR = 4,
+ NVME_AER_ERROR_FW_IMAGE_LOAD_ERR = 5,
+
+ NVME_AER_SMART_SUBSYS_RELIABILITY = 0,
+ NVME_AER_SMART_TEMP_THRESH = 1,
+ NVME_AER_SMART_SPARE_BELOW_THRESH = 2,
+
+ NVME_AER_NOTICE_NS_CHANGED = 0,
+ NVME_AER_NOTICE_FW_ACT_STARTING = 1,
+ NVME_AER_NOTICE_ANA = 3,
+ NVME_AER_NOTICE_DISC_CHANGED = 0xf0,
+};
enum {
NVME_AEN_BIT_NS_ATTR = 8,
NVME_AEN_BIT_FW_ACT = 9,
@@ -548,12 +589,6 @@ struct nvme_reservation_status {
} regctl_ds[];
};
-enum nvme_async_event_type {
- NVME_AER_TYPE_ERROR = 0,
- NVME_AER_TYPE_SMART = 1,
- NVME_AER_TYPE_NOTICE = 2,
-};
-
/* I/O commands */
enum nvme_opcode {
@@ -705,10 +740,19 @@ enum {
NVME_RW_DSM_LATENCY_LOW = 3 << 4,
NVME_RW_DSM_SEQ_REQ = 1 << 6,
NVME_RW_DSM_COMPRESSED = 1 << 7,
+
+ NVME_WZ_DEAC = 1 << 9,
NVME_RW_PRINFO_PRCHK_REF = 1 << 10,
NVME_RW_PRINFO_PRCHK_APP = 1 << 11,
NVME_RW_PRINFO_PRCHK_GUARD = 1 << 12,
NVME_RW_PRINFO_PRACT = 1 << 13,
+
+ NVME_RW_PRINFO =
+ NVME_RW_PRINFO_PRCHK_REF |
+ NVME_RW_PRINFO_PRCHK_APP |
+ NVME_RW_PRINFO_PRCHK_GUARD |
+ NVME_RW_PRINFO_PRACT,
+
NVME_RW_DTYPE_STREAMS = 1 << 4,
};
@@ -809,6 +853,7 @@ enum {
NVME_SQ_PRIO_HIGH = (1 << 1),
NVME_SQ_PRIO_MEDIUM = (2 << 1),
NVME_SQ_PRIO_LOW = (3 << 1),
+ NVME_SQ_PRIO_MASK = (3 << 1),
NVME_FEAT_ARBITRATION = 0x01,
NVME_FEAT_POWER_MGMT = 0x02,
NVME_FEAT_LBA_RANGE = 0x03,
@@ -1146,6 +1191,7 @@ struct streams_directive_params {
struct nvme_command {
union {
+ __le32 dwords[16];
struct nvme_common_command common;
struct nvme_rw_command rw;
struct nvme_identify identify;
@@ -1217,6 +1263,8 @@ enum {
NVME_SC_SGL_INVALID_METADATA = 0x10,
NVME_SC_SGL_INVALID_TYPE = 0x11,
+ NVME_SC_PRP_OFFSET_INVALID = 0x13,
+
NVME_SC_SGL_INVALID_OFFSET = 0x16,
NVME_SC_SGL_INVALID_SUBTYPE = 0x17,
--
2.17.2
* [PATCH 2/9] nvme/core: add some more values from the spec
@ 2019-03-19 14:41 ` Maxim Levitsky
0 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-19 14:41 UTC (permalink / raw)
This adds few defines from the spec,
that will be used in the nvme-mdev driver
Signed-off-by: Maxim Levitsky <mlevitsk at redhat.com>
---
include/linux/nvme.h | 88 ++++++++++++++++++++++++++++++++++----------
1 file changed, 68 insertions(+), 20 deletions(-)
diff --git a/include/linux/nvme.h b/include/linux/nvme.h
index bbcc83886899..029162db31bb 100644
--- a/include/linux/nvme.h
+++ b/include/linux/nvme.h
@@ -152,32 +152,42 @@ enum {
#define NVME_NVM_IOCQES 4
enum {
- NVME_CC_ENABLE = 1 << 0,
- NVME_CC_CSS_NVM = 0 << 4,
NVME_CC_EN_SHIFT = 0,
+ NVME_CC_ENABLE = 1 << NVME_CC_EN_SHIFT,
+
NVME_CC_CSS_SHIFT = 4,
+ NVME_CC_CSS_NVM = 0 << NVME_CC_CSS_SHIFT,
+
NVME_CC_MPS_SHIFT = 7,
+ NVME_CC_MPS_MASK = 0xF << NVME_CC_MPS_SHIFT,
+
NVME_CC_AMS_SHIFT = 11,
- NVME_CC_SHN_SHIFT = 14,
- NVME_CC_IOSQES_SHIFT = 16,
- NVME_CC_IOCQES_SHIFT = 20,
NVME_CC_AMS_RR = 0 << NVME_CC_AMS_SHIFT,
NVME_CC_AMS_WRRU = 1 << NVME_CC_AMS_SHIFT,
NVME_CC_AMS_VS = 7 << NVME_CC_AMS_SHIFT,
+ NVME_CC_AMS_MASK = 0x7 << NVME_CC_AMS_SHIFT,
+
+ NVME_CC_SHN_SHIFT = 14,
NVME_CC_SHN_NONE = 0 << NVME_CC_SHN_SHIFT,
NVME_CC_SHN_NORMAL = 1 << NVME_CC_SHN_SHIFT,
NVME_CC_SHN_ABRUPT = 2 << NVME_CC_SHN_SHIFT,
NVME_CC_SHN_MASK = 3 << NVME_CC_SHN_SHIFT,
+
+ NVME_CC_IOSQES_SHIFT = 16,
+ NVME_CC_IOCQES_SHIFT = 20,
NVME_CC_IOSQES = NVME_NVM_IOSQES << NVME_CC_IOSQES_SHIFT,
NVME_CC_IOCQES = NVME_NVM_IOCQES << NVME_CC_IOCQES_SHIFT,
+
NVME_CSTS_RDY = 1 << 0,
NVME_CSTS_CFS = 1 << 1,
NVME_CSTS_NSSRO = 1 << 4,
NVME_CSTS_PP = 1 << 5,
- NVME_CSTS_SHST_NORMAL = 0 << 2,
- NVME_CSTS_SHST_OCCUR = 1 << 2,
- NVME_CSTS_SHST_CMPLT = 2 << 2,
- NVME_CSTS_SHST_MASK = 3 << 2,
+
+ NVME_CSTS_SHST_SHIFT = 2,
+ NVME_CSTS_SHST_NORMAL = 0 << NVME_CSTS_SHST_SHIFT,
+ NVME_CSTS_SHST_OCCUR = 1 << NVME_CSTS_SHST_SHIFT,
+ NVME_CSTS_SHST_CMPLT = 2 << NVME_CSTS_SHST_SHIFT,
+ NVME_CSTS_SHST_MASK = 3 << NVME_CSTS_SHST_SHIFT,
};
struct nvme_id_power_state {
@@ -404,6 +414,20 @@ enum {
NVME_NIDT_UUID = 0x03,
};
+struct nvme_err_log_entry {
+ __u8 err_count[8];
+ __le16 sqid;
+ __le16 cid;
+ __le16 status;
+ __le16 location;
+ __u8 lba[8];
+ __le32 ns;
+ __u8 vnd;
+ __u8 rsvd1[3];
+ __u8 cmd_specific[8];
+ __u8 rsvd2[24];
+};
+
struct nvme_smart_log {
__u8 critical_warning;
__u8 temperature[2];
@@ -491,13 +515,30 @@ enum {
NVME_AER_VS = 7,
};
-enum {
- NVME_AER_NOTICE_NS_CHANGED = 0x00,
- NVME_AER_NOTICE_FW_ACT_STARTING = 0x01,
- NVME_AER_NOTICE_ANA = 0x03,
- NVME_AER_NOTICE_DISC_CHANGED = 0xf0,
+enum nvme_async_event_type {
+ NVME_AER_TYPE_ERROR = 0,
+ NVME_AER_TYPE_SMART = 1,
+ NVME_AER_TYPE_NOTICE = 2,
+ NVME_AER_TYPE_MAX = 7,
};
+enum nvme_async_event {
+ NVME_AER_ERROR_INVALID_DB_REG = 0,
+ NVME_AER_ERROR_INVALID_DB_VALUE = 1,
+ NVME_AER_ERROR_DIAG_FAILURE = 2,
+ NVME_AER_ERROR_PERSISTENT_INT_ERR = 3,
+ NVME_AER_ERROR_TRANSIENT_INT_ERR = 4,
+ NVME_AER_ERROR_FW_IMAGE_LOAD_ERR = 5,
+
+ NVME_AER_SMART_SUBSYS_RELIABILITY = 0,
+ NVME_AER_SMART_TEMP_THRESH = 1,
+ NVME_AER_SMART_SPARE_BELOW_THRESH = 2,
+
+ NVME_AER_NOTICE_NS_CHANGED = 0,
+ NVME_AER_NOTICE_FW_ACT_STARTING = 1,
+ NVME_AER_NOTICE_ANA = 3,
+ NVME_AER_NOTICE_DISC_CHANGED = 0xf0,
+};
enum {
NVME_AEN_BIT_NS_ATTR = 8,
NVME_AEN_BIT_FW_ACT = 9,
@@ -548,12 +589,6 @@ struct nvme_reservation_status {
} regctl_ds[];
};
-enum nvme_async_event_type {
- NVME_AER_TYPE_ERROR = 0,
- NVME_AER_TYPE_SMART = 1,
- NVME_AER_TYPE_NOTICE = 2,
-};
-
/* I/O commands */
enum nvme_opcode {
@@ -705,10 +740,19 @@ enum {
NVME_RW_DSM_LATENCY_LOW = 3 << 4,
NVME_RW_DSM_SEQ_REQ = 1 << 6,
NVME_RW_DSM_COMPRESSED = 1 << 7,
+
+ NVME_WZ_DEAC = 1 << 9,
NVME_RW_PRINFO_PRCHK_REF = 1 << 10,
NVME_RW_PRINFO_PRCHK_APP = 1 << 11,
NVME_RW_PRINFO_PRCHK_GUARD = 1 << 12,
NVME_RW_PRINFO_PRACT = 1 << 13,
+
+ NVME_RW_PRINFO =
+ NVME_RW_PRINFO_PRCHK_REF |
+ NVME_RW_PRINFO_PRCHK_APP |
+ NVME_RW_PRINFO_PRCHK_GUARD |
+ NVME_RW_PRINFO_PRACT,
+
NVME_RW_DTYPE_STREAMS = 1 << 4,
};
@@ -809,6 +853,7 @@ enum {
NVME_SQ_PRIO_HIGH = (1 << 1),
NVME_SQ_PRIO_MEDIUM = (2 << 1),
NVME_SQ_PRIO_LOW = (3 << 1),
+ NVME_SQ_PRIO_MASK = (3 << 1),
NVME_FEAT_ARBITRATION = 0x01,
NVME_FEAT_POWER_MGMT = 0x02,
NVME_FEAT_LBA_RANGE = 0x03,
@@ -1146,6 +1191,7 @@ struct streams_directive_params {
struct nvme_command {
union {
+ __le32 dwords[16];
struct nvme_common_command common;
struct nvme_rw_command rw;
struct nvme_identify identify;
@@ -1217,6 +1263,8 @@ enum {
NVME_SC_SGL_INVALID_METADATA = 0x10,
NVME_SC_SGL_INVALID_TYPE = 0x11,
+ NVME_SC_PRP_OFFSET_INVALID = 0x13,
+
NVME_SC_SGL_INVALID_OFFSET = 0x16,
NVME_SC_SGL_INVALID_SUBTYPE = 0x17,
--
2.17.2
^ permalink raw reply related [flat|nested] 3471+ messages in thread
* [PATCH 3/9] nvme/core: add NVME_CTRL_SUSPENDED controller state
2019-03-19 14:41 ` No subject Maxim Levitsky
(?)
@ 2019-03-19 14:41 ` Maxim Levitsky
-1 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-19 14:41 UTC (permalink / raw)
To: linux-nvme
Cc: Maxim Levitsky, linux-kernel, kvm, Jens Axboe, Alex Williamson,
Keith Busch, Christoph Hellwig, Sagi Grimberg, Kirti Wankhede,
David S . Miller, Mauro Carvalho Chehab, Greg Kroah-Hartman,
Wolfram Sang, Nicolas Ferre, Paul E . McKenney ,
Paolo Bonzini, Liang Cunming, Liu Changpeng, Fam Zheng,
Amnon Ilan, John Ferlan
This state will be used by a controller that is entering the
suspended state; it will later be used by the mdev
framework to detect this and flush its queues
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
drivers/nvme/host/core.c | 15 +++++++++++++++
drivers/nvme/host/nvme.h | 1 +
2 files changed, 16 insertions(+)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 6a9dd68c0f4f..cf9de026cb93 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -320,6 +320,19 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
switch (old_state) {
case NVME_CTRL_NEW:
case NVME_CTRL_RESETTING:
+ case NVME_CTRL_CONNECTING:
+ case NVME_CTRL_SUSPENDED:
+ changed = true;
+ /* FALLTHRU */
+ default:
+ break;
+ }
+ break;
+ case NVME_CTRL_SUSPENDED:
+ switch (old_state) {
+ case NVME_CTRL_NEW:
+ case NVME_CTRL_LIVE:
+ case NVME_CTRL_RESETTING:
case NVME_CTRL_CONNECTING:
changed = true;
/* FALLTHRU */
@@ -332,6 +345,7 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
case NVME_CTRL_NEW:
case NVME_CTRL_LIVE:
case NVME_CTRL_ADMIN_ONLY:
+ case NVME_CTRL_SUSPENDED:
changed = true;
/* FALLTHRU */
default:
@@ -354,6 +368,7 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
case NVME_CTRL_ADMIN_ONLY:
case NVME_CTRL_RESETTING:
case NVME_CTRL_CONNECTING:
+ case NVME_CTRL_SUSPENDED:
changed = true;
/* FALLTHRU */
default:
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index c4a1bb41abf0..9320b0a87d79 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -142,6 +142,7 @@ static inline u16 nvme_req_qid(struct request *req)
enum nvme_ctrl_state {
NVME_CTRL_NEW,
NVME_CTRL_LIVE,
+ NVME_CTRL_SUSPENDED,
NVME_CTRL_ADMIN_ONLY, /* Only admin queue live */
NVME_CTRL_RESETTING,
NVME_CTRL_CONNECTING,
--
2.17.2
^ permalink raw reply related [flat|nested] 3471+ messages in thread
* [PATCH 4/9] nvme/pci: use the NVME_CTRL_SUSPENDED state
2019-03-19 14:41 ` No subject Maxim Levitsky
(?)
@ 2019-03-19 14:41 ` Maxim Levitsky
-1 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-19 14:41 UTC (permalink / raw)
To: linux-nvme
Cc: Maxim Levitsky, linux-kernel, kvm, Jens Axboe, Alex Williamson,
Keith Busch, Christoph Hellwig, Sagi Grimberg, Kirti Wankhede,
David S . Miller, Mauro Carvalho Chehab, Greg Kroah-Hartman,
Wolfram Sang, Nicolas Ferre, Paul E . McKenney ,
Paolo Bonzini, Liang Cunming, Liu Changpeng, Fam Zheng,
Amnon Ilan, John Ferlan
When enteriing low power state, the nvme
driver will now inform the core with the NVME_CTRL_SUSPENDED state
which will allow mdev driver to act on this information
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
drivers/nvme/host/pci.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 7fee665ec45e..a188ab6ffaf8 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2451,7 +2451,8 @@ static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown)
u32 csts = readl(dev->bar + NVME_REG_CSTS);
if (dev->ctrl.state == NVME_CTRL_LIVE ||
- dev->ctrl.state == NVME_CTRL_RESETTING)
+ dev->ctrl.state == NVME_CTRL_RESETTING ||
+ dev->ctrl.state == NVME_CTRL_SUSPENDED)
nvme_start_freeze(&dev->ctrl);
dead = !!((csts & NVME_CSTS_CFS) || !(csts & NVME_CSTS_RDY) ||
pdev->error_state != pci_channel_io_normal);
@@ -2897,6 +2898,9 @@ static int nvme_suspend(struct device *dev)
struct pci_dev *pdev = to_pci_dev(dev);
struct nvme_dev *ndev = pci_get_drvdata(pdev);
+ if (!nvme_change_ctrl_state(&ndev->ctrl, NVME_CTRL_SUSPENDED))
+ WARN_ON(1);
+
nvme_dev_disable(ndev, true);
return 0;
}
--
2.17.2
^ permalink raw reply related [flat|nested] 3471+ messages in thread
* Re: [PATCH 4/9] nvme/pci: use the NVME_CTRL_SUSPENDED state
2019-03-19 14:41 ` Maxim Levitsky
@ 2019-03-20 2:54 ` Fam Zheng
-1 siblings, 0 replies; 3471+ messages in thread
From: Fam Zheng @ 2019-03-20 2:54 UTC (permalink / raw)
To: Maxim Levitsky
Cc: linux-nvme, linux-kernel, kvm, Jens Axboe, Alex Williamson,
Keith Busch, Christoph Hellwig, Sagi Grimberg, Kirti Wankhede,
David S . Miller, Mauro Carvalho Chehab, Greg Kroah-Hartman,
Wolfram Sang, Nicolas Ferre, Paul E . McKenney ,
Paolo Bonzini, Liang Cunming, Liu Changpeng, Amnon Ilan,
John Ferlan
On Tue, 03/19 16:41, Maxim Levitsky wrote:
> When enteriing low power state, the nvme
Typo: "entering".
> driver will now inform the core with the NVME_CTRL_SUSPENDED state
> which will allow mdev driver to act on this information
[snip]
Fam
^ permalink raw reply [flat|nested] 3471+ messages in thread
* [PATCH 5/9] nvme/pci: add known admin effects to augment admin effects log page
2019-03-19 14:41 ` No subject Maxim Levitsky
(?)
@ 2019-03-19 14:41 ` Maxim Levitsky
-1 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-19 14:41 UTC (permalink / raw)
To: linux-nvme
Cc: Maxim Levitsky, linux-kernel, kvm, Jens Axboe, Alex Williamson,
Keith Busch, Christoph Hellwig, Sagi Grimberg, Kirti Wankhede,
David S . Miller, Mauro Carvalho Chehab, Greg Kroah-Hartman,
Wolfram Sang, Nicolas Ferre, Paul E . McKenney ,
Paolo Bonzini, Liang Cunming, Liu Changpeng, Fam Zheng,
Amnon Ilan, John Ferlan
Add known admin effects even if the hardware has a known admin effects page,
since the hardware can't ever be trusted to report sane values
(on my Intel DC P3700, it reports no side effects for namespace format).
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
drivers/nvme/host/core.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index cf9de026cb93..e1cef428c7e9 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1260,8 +1260,8 @@ static u32 nvme_passthru_start(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
if (ctrl->effects)
effects = le32_to_cpu(ctrl->effects->acs[opcode]);
- else
- effects = nvme_known_admin_effects(opcode);
+
+ effects |= nvme_known_admin_effects(opcode);
/*
* For simplicity, IO to all namespaces is quiesced even if the command
--
2.17.2
^ permalink raw reply related [flat|nested] 3471+ messages in thread
* [PATCH 6/9] nvme/pci: init shadow doorbell after each reset
2019-03-19 14:41 ` No subject Maxim Levitsky
(?)
@ 2019-03-19 14:41 ` Maxim Levitsky
-1 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-19 14:41 UTC (permalink / raw)
To: linux-nvme
Cc: Maxim Levitsky, linux-kernel, kvm, Jens Axboe, Alex Williamson,
Keith Busch, Christoph Hellwig, Sagi Grimberg, Kirti Wankhede,
David S . Miller, Mauro Carvalho Chehab, Greg Kroah-Hartman,
Wolfram Sang, Nicolas Ferre, Paul E . McKenney ,
Paolo Bonzini, Liang Cunming, Liu Changpeng, Fam Zheng,
Amnon Ilan, John Ferlan
The spec states:
"The settings are not retained across a Controller Level Reset"
Therefore the driver must enable the shadow doorbell
after each reset.
This was caught while testing the nvme driver with the
upcoming nvme-mdev device
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
drivers/nvme/host/pci.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index a188ab6ffaf8..806b551d3582 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -2347,8 +2347,6 @@ static int nvme_dev_add(struct nvme_dev *dev)
return ret;
}
dev->ctrl.tagset = &dev->tagset;
-
- nvme_dbbuf_set(dev);
} else {
blk_mq_update_nr_hw_queues(&dev->tagset, dev->online_queues - 1);
@@ -2356,6 +2354,7 @@ static int nvme_dev_add(struct nvme_dev *dev)
nvme_free_queues(dev, dev->online_queues);
}
+ nvme_dbbuf_set(dev);
return 0;
}
--
2.17.2
^ permalink raw reply related [flat|nested] 3471+ messages in thread
* [PATCH 7/9] nvme/core: add mdev interfaces
2019-03-19 14:41 ` No subject Maxim Levitsky
(?)
@ 2019-03-19 14:41 ` Maxim Levitsky
-1 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-19 14:41 UTC (permalink / raw)
To: linux-nvme
Cc: Maxim Levitsky, linux-kernel, kvm, Jens Axboe, Alex Williamson,
Keith Busch, Christoph Hellwig, Sagi Grimberg, Kirti Wankhede,
David S . Miller, Mauro Carvalho Chehab, Greg Kroah-Hartman,
Wolfram Sang, Nicolas Ferre, Paul E . McKenney ,
Paolo Bonzini, Liang Cunming, Liu Changpeng, Fam Zheng,
Amnon Ilan, John Ferlan
This adds infrastructure for the nvme-mdev driver
to attach to the core driver, so that it can
know which nvme controllers are present and which
namespaces they have.
It also adds an interface through which nvme device drivers
can expose their queues in a controlled manner
to the nvme mdev core driver. A driver must opt in
to this using a new flag, NVME_F_MDEV_SUPPORTED.
If the device driver also sets
NVME_F_MDEV_DMA_SUPPORTED, the mdev core will
DMA map all the guest memory into the nvme device,
so that the nvme device driver can use DMA addresses as passed
from the mdev core driver
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
drivers/nvme/host/core.c | 125 ++++++++++++++++++++++++++++++++++++++-
drivers/nvme/host/nvme.h | 54 +++++++++++++++--
2 files changed, 172 insertions(+), 7 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index e1cef428c7e9..90561973bce9 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -97,11 +97,111 @@ static dev_t nvme_chr_devt;
static struct class *nvme_class;
static struct class *nvme_subsys_class;
+static void nvme_ns_remove(struct nvme_ns *ns);
static int nvme_revalidate_disk(struct gendisk *disk);
static void nvme_put_subsystem(struct nvme_subsystem *subsys);
static void nvme_remove_invalid_namespaces(struct nvme_ctrl *ctrl,
unsigned nsid);
+#ifdef CONFIG_NVME_MDEV
+
+static struct nvme_mdev_driver *mdev_driver_interface;
+static DEFINE_MUTEX(mdev_ctrl_lock);
+static LIST_HEAD(mdev_ctrl_list);
+
+static bool nvme_ctrl_has_mdev(struct nvme_ctrl *ctrl)
+{
+ return (ctrl->ops->flags & NVME_F_MDEV_SUPPORTED) != 0;
+}
+
+static void nvme_mdev_add_ctrl(struct nvme_ctrl *ctrl)
+{
+ if (nvme_ctrl_has_mdev(ctrl)) {
+ mutex_lock(&mdev_ctrl_lock);
+ list_add_tail(&ctrl->link, &mdev_ctrl_list);
+ mutex_unlock(&mdev_ctrl_lock);
+ }
+}
+
+static void nvme_mdev_remove_ctrl(struct nvme_ctrl *ctrl)
+{
+ if (nvme_ctrl_has_mdev(ctrl)) {
+ mutex_lock(&mdev_ctrl_lock);
+ list_del_init(&ctrl->link);
+ mutex_unlock(&mdev_ctrl_lock);
+ }
+}
+
+int nvme_core_register_mdev_driver(struct nvme_mdev_driver *driver_ops)
+{
+ struct nvme_ctrl *ctrl;
+
+ if (mdev_driver_interface)
+ return -EEXIST;
+
+ mdev_driver_interface = driver_ops;
+
+ mutex_lock(&mdev_ctrl_lock);
+ list_for_each_entry(ctrl, &mdev_ctrl_list, link)
+ mdev_driver_interface->nvme_ctrl_state_changed(ctrl);
+
+ mutex_unlock(&mdev_ctrl_lock);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(nvme_core_register_mdev_driver);
+
+void nvme_core_unregister_mdev_driver(struct nvme_mdev_driver *driver_ops)
+{
+ if (WARN_ON(driver_ops != mdev_driver_interface))
+ return;
+ mdev_driver_interface = NULL;
+}
+EXPORT_SYMBOL_GPL(nvme_core_unregister_mdev_driver);
+
+static void nvme_mdev_ctrl_state_changed(struct nvme_ctrl *ctrl)
+{
+ if (!mdev_driver_interface || !nvme_ctrl_has_mdev(ctrl))
+ return;
+ if (!try_module_get(mdev_driver_interface->owner))
+ return;
+
+ mdev_driver_interface->nvme_ctrl_state_changed(ctrl);
+ module_put(mdev_driver_interface->owner);
+}
+
+static void nvme_mdev_ns_state_changed(struct nvme_ctrl *ctrl,
+ struct nvme_ns *ns, bool removed)
+{
+ if (!mdev_driver_interface || !nvme_ctrl_has_mdev(ctrl))
+ return;
+ if (!try_module_get(mdev_driver_interface->owner))
+ return;
+
+ mdev_driver_interface->nvme_ns_state_changed(ctrl,
+ ns->head->ns_id, removed);
+ module_put(mdev_driver_interface->owner);
+}
+
+#else
+static void nvme_mdev_ctrl_state_changed(struct nvme_ctrl *ctrl)
+{
+}
+
+static void nvme_mdev_ns_state_changed(struct nvme_ctrl *ctrl,
+ struct nvme_ns *ns, bool removed)
+{
+}
+
+static void nvme_mdev_add_ctrl(struct nvme_ctrl *ctrl)
+{
+}
+
+static void nvme_mdev_remove_ctrl(struct nvme_ctrl *ctrl)
+{
+}
+
+#endif
+
static void nvme_set_queue_dying(struct nvme_ns *ns)
{
/*
@@ -390,10 +490,13 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
if (changed)
ctrl->state = new_state;
-
spin_unlock_irqrestore(&ctrl->lock, flags);
+
+ nvme_mdev_ctrl_state_changed(ctrl);
+
if (changed && ctrl->state == NVME_CTRL_LIVE)
nvme_kick_requeue_lists(ctrl);
+
return changed;
}
EXPORT_SYMBOL_GPL(nvme_change_ctrl_state);
@@ -429,10 +532,11 @@ static void nvme_free_ns(struct kref *kref)
kfree(ns);
}
-static void nvme_put_ns(struct nvme_ns *ns)
+void nvme_put_ns(struct nvme_ns *ns)
{
kref_put(&ns->kref, nvme_free_ns);
}
+EXPORT_SYMBOL_GPL(nvme_put_ns);
static inline void nvme_clear_nvme_request(struct request *req)
{
@@ -1275,6 +1379,11 @@ static u32 nvme_passthru_start(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
return effects;
}
+static void nvme_update_ns(struct nvme_ctrl *ctrl, struct nvme_ns *ns)
+{
+ nvme_mdev_ns_state_changed(ctrl, ns, false);
+}
+
static void nvme_update_formats(struct nvme_ctrl *ctrl)
{
struct nvme_ns *ns;
@@ -1283,6 +1392,8 @@ static void nvme_update_formats(struct nvme_ctrl *ctrl)
list_for_each_entry(ns, &ctrl->namespaces, list)
if (ns->disk && nvme_revalidate_disk(ns->disk))
nvme_set_queue_dying(ns);
+ else
+ nvme_update_ns(ctrl, ns);
up_read(&ctrl->namespaces_rwsem);
nvme_remove_invalid_namespaces(ctrl, NVME_NSID_ALL);
@@ -3133,7 +3244,7 @@ static int ns_cmp(void *priv, struct list_head *a, struct list_head *b)
return nsa->head->ns_id - nsb->head->ns_id;
}
-static struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned nsid)
+struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned int nsid)
{
struct nvme_ns *ns, *ret = NULL;
@@ -3151,6 +3262,7 @@ static struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned nsid)
up_read(&ctrl->namespaces_rwsem);
return ret;
}
+EXPORT_SYMBOL_GPL(nvme_find_get_ns);
static int nvme_setup_streams_ns(struct nvme_ctrl *ctrl, struct nvme_ns *ns)
{
@@ -3271,6 +3383,8 @@ static void nvme_ns_remove(struct nvme_ns *ns)
if (test_and_set_bit(NVME_NS_REMOVING, &ns->flags))
return;
+ nvme_mdev_ns_state_changed(ns->ctrl, ns, true);
+
nvme_fault_inject_fini(ns);
if (ns->disk && ns->disk->flags & GENHD_FL_UP) {
del_gendisk(ns->disk);
@@ -3301,6 +3415,8 @@ static void nvme_validate_ns(struct nvme_ctrl *ctrl, unsigned nsid)
if (ns) {
if (ns->disk && revalidate_disk(ns->disk))
nvme_ns_remove(ns);
+ else
+ nvme_update_ns(ctrl, ns);
nvme_put_ns(ns);
} else
nvme_alloc_ns(ctrl, nsid);
@@ -3654,6 +3770,7 @@ static void nvme_free_ctrl(struct device *dev)
sysfs_remove_link(&subsys->dev.kobj, dev_name(ctrl->device));
}
+ nvme_mdev_remove_ctrl(ctrl);
ctrl->ops->free_ctrl(ctrl);
if (subsys)
@@ -3726,6 +3843,8 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
dev_pm_qos_update_user_latency_tolerance(ctrl->device,
min(default_ps_max_latency_us, (unsigned long)S32_MAX));
+ nvme_mdev_add_ctrl(ctrl);
+
return 0;
out_free_name:
kfree_const(ctrl->device->kobj.name);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 9320b0a87d79..2df6c9f0e1cc 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -151,6 +151,7 @@ enum nvme_ctrl_state {
};
struct nvme_ctrl {
+ struct list_head link;
bool comp_seen;
enum nvme_ctrl_state state;
bool identified;
@@ -253,6 +254,23 @@ struct nvme_ctrl {
unsigned long discard_page_busy;
};
+#ifdef CONFIG_NVME_MDEV
+/* Interface to the host driver */
+struct nvme_mdev_driver {
+ struct module *owner;
+
+ /* a controller's state has changed */
+ void (*nvme_ctrl_state_changed)(struct nvme_ctrl *ctrl);
+
+ /* a namespace was updated in some way (e.g. after a format) */
+ void (*nvme_ns_state_changed)(struct nvme_ctrl *ctrl,
+ u32 nsid, bool removed);
+};
+
+int nvme_core_register_mdev_driver(struct nvme_mdev_driver *driver_ops);
+void nvme_core_unregister_mdev_driver(struct nvme_mdev_driver *driver_ops);
+#endif
+
struct nvme_subsystem {
int instance;
struct device dev;
@@ -275,7 +293,7 @@ struct nvme_subsystem {
};
/*
- * Container structure for uniqueue namespace identifiers.
+ * Container structure for unique namespace identifiers.
*/
struct nvme_ns_ids {
u8 eui64[8];
@@ -351,13 +369,22 @@ struct nvme_ns {
};
+struct nvme_ext_data_iter;
+struct nvme_ext_cmd_result {
+ u32 tag;
+ u16 status;
+};
+
struct nvme_ctrl_ops {
const char *name;
struct module *module;
unsigned int flags;
-#define NVME_F_FABRICS (1 << 0)
-#define NVME_F_METADATA_SUPPORTED (1 << 1)
-#define NVME_F_PCI_P2PDMA (1 << 2)
+#define NVME_F_FABRICS BIT(0)
+#define NVME_F_METADATA_SUPPORTED BIT(1)
+#define NVME_F_PCI_P2PDMA BIT(2)
+#define NVME_F_MDEV_SUPPORTED BIT(3)
+#define NVME_F_MDEV_DMA_SUPPORTED BIT(4)
+
int (*reg_read32)(struct nvme_ctrl *ctrl, u32 off, u32 *val);
int (*reg_write32)(struct nvme_ctrl *ctrl, u32 off, u32 val);
int (*reg_read64)(struct nvme_ctrl *ctrl, u32 off, u64 *val);
@@ -366,6 +393,23 @@ struct nvme_ctrl_ops {
void (*delete_ctrl)(struct nvme_ctrl *ctrl);
int (*get_address)(struct nvme_ctrl *ctrl, char *buf, int size);
void (*stop_ctrl)(struct nvme_ctrl *ctrl);
+
+#ifdef CONFIG_NVME_MDEV
+ int (*ext_queues_available)(struct nvme_ctrl *ctrl);
+ int (*ext_queue_alloc)(struct nvme_ctrl *ctrl, u16 *qid);
+ void (*ext_queue_free)(struct nvme_ctrl *ctrl, u16 qid);
+
+ int (*ext_queue_submit)(struct nvme_ctrl *ctrl,
+ u16 qid, u32 tag,
+ struct nvme_command *command,
+ struct nvme_ext_data_iter *iter);
+
+ bool (*ext_queue_full)(struct nvme_ctrl *ctrl, u16 qid);
+
+ int (*ext_queue_poll)(struct nvme_ctrl *ctrl, u16 qid,
+ struct nvme_ext_cmd_result *results,
+ unsigned int max_len);
+#endif
};
#ifdef CONFIG_FAULT_INJECTION_DEBUG_FS
@@ -427,6 +471,8 @@ void nvme_stop_ctrl(struct nvme_ctrl *ctrl);
void nvme_put_ctrl(struct nvme_ctrl *ctrl);
int nvme_init_identify(struct nvme_ctrl *ctrl);
+struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned int nsid);
+void nvme_put_ns(struct nvme_ns *ns);
void nvme_remove_namespaces(struct nvme_ctrl *ctrl);
int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t len,
--
2.17.2
^ permalink raw reply related [flat|nested] 3471+ messages in thread
* [PATCH 7/9] nvme/core: add mdev interfaces
@ 2019-03-19 14:41 ` Maxim Levitsky
0 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-19 14:41 UTC (permalink / raw)
This adds infrastructure for nvme-mdev to attach to the
core driver, so that it can discover which nvme controllers
are present and which namespaces they have.
It also adds an interface through which nvme device drivers
can expose their queues in a controlled manner to the
nvme-mdev core driver. A driver must opt in to this using
the new flag NVME_F_MDEV_SUPPORTED.
If the device driver also sets NVME_F_MDEV_DMA_SUPPORTED,
the mdev core will DMA-map all the guest memory into the
nvme device, so that the nvme device driver can use the DMA
addresses exactly as passed from the mdev core driver.
Signed-off-by: Maxim Levitsky <mlevitsk at redhat.com>
---
drivers/nvme/host/core.c | 125 ++++++++++++++++++++++++++++++++++++++-
drivers/nvme/host/nvme.h | 54 +++++++++++++++--
2 files changed, 172 insertions(+), 7 deletions(-)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index e1cef428c7e9..90561973bce9 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -97,11 +97,111 @@ static dev_t nvme_chr_devt;
static struct class *nvme_class;
static struct class *nvme_subsys_class;
+static void nvme_ns_remove(struct nvme_ns *ns);
static int nvme_revalidate_disk(struct gendisk *disk);
static void nvme_put_subsystem(struct nvme_subsystem *subsys);
static void nvme_remove_invalid_namespaces(struct nvme_ctrl *ctrl,
unsigned nsid);
+#ifdef CONFIG_NVME_MDEV
+
+static struct nvme_mdev_driver *mdev_driver_interface;
+static DEFINE_MUTEX(mdev_ctrl_lock);
+static LIST_HEAD(mdev_ctrl_list);
+
+static bool nvme_ctrl_has_mdev(struct nvme_ctrl *ctrl)
+{
+ return (ctrl->ops->flags & NVME_F_MDEV_SUPPORTED) != 0;
+}
+
+static void nvme_mdev_add_ctrl(struct nvme_ctrl *ctrl)
+{
+ if (nvme_ctrl_has_mdev(ctrl)) {
+ mutex_lock(&mdev_ctrl_lock);
+ list_add_tail(&ctrl->link, &mdev_ctrl_list);
+ mutex_unlock(&mdev_ctrl_lock);
+ }
+}
+
+static void nvme_mdev_remove_ctrl(struct nvme_ctrl *ctrl)
+{
+ if (nvme_ctrl_has_mdev(ctrl)) {
+ mutex_lock(&mdev_ctrl_lock);
+ list_del_init(&ctrl->link);
+ mutex_unlock(&mdev_ctrl_lock);
+ }
+}
+
+int nvme_core_register_mdev_driver(struct nvme_mdev_driver *driver_ops)
+{
+ struct nvme_ctrl *ctrl;
+
+ if (mdev_driver_interface)
+ return -EEXIST;
+
+ mdev_driver_interface = driver_ops;
+
+ mutex_lock(&mdev_ctrl_lock);
+ list_for_each_entry(ctrl, &mdev_ctrl_list, link)
+ mdev_driver_interface->nvme_ctrl_state_changed(ctrl);
+
+ mutex_unlock(&mdev_ctrl_lock);
+ return 0;
+}
+EXPORT_SYMBOL_GPL(nvme_core_register_mdev_driver);
+
+void nvme_core_unregister_mdev_driver(struct nvme_mdev_driver *driver_ops)
+{
+ if (WARN_ON(driver_ops != mdev_driver_interface))
+ return;
+ mdev_driver_interface = NULL;
+}
+EXPORT_SYMBOL_GPL(nvme_core_unregister_mdev_driver);
+
+static void nvme_mdev_ctrl_state_changed(struct nvme_ctrl *ctrl)
+{
+ if (!mdev_driver_interface || !nvme_ctrl_has_mdev(ctrl))
+ return;
+ if (!try_module_get(mdev_driver_interface->owner))
+ return;
+
+ mdev_driver_interface->nvme_ctrl_state_changed(ctrl);
+ module_put(mdev_driver_interface->owner);
+}
+
+static void nvme_mdev_ns_state_changed(struct nvme_ctrl *ctrl,
+ struct nvme_ns *ns, bool removed)
+{
+ if (!mdev_driver_interface || !nvme_ctrl_has_mdev(ctrl))
+ return;
+ if (!try_module_get(mdev_driver_interface->owner))
+ return;
+
+ mdev_driver_interface->nvme_ns_state_changed(ctrl,
+ ns->head->ns_id, removed);
+ module_put(mdev_driver_interface->owner);
+}
+
+#else
+static void nvme_mdev_ctrl_state_changed(struct nvme_ctrl *ctrl)
+{
+}
+
+static void nvme_mdev_ns_state_changed(struct nvme_ctrl *ctrl,
+ struct nvme_ns *ns, bool removed)
+{
+}
+
+static void nvme_mdev_add_ctrl(struct nvme_ctrl *ctrl)
+{
+}
+
+static void nvme_mdev_remove_ctrl(struct nvme_ctrl *ctrl)
+{
+}
+
+#endif
+
static void nvme_set_queue_dying(struct nvme_ns *ns)
{
/*
@@ -390,10 +490,13 @@ bool nvme_change_ctrl_state(struct nvme_ctrl *ctrl,
if (changed)
ctrl->state = new_state;
-
spin_unlock_irqrestore(&ctrl->lock, flags);
+
+ nvme_mdev_ctrl_state_changed(ctrl);
+
if (changed && ctrl->state == NVME_CTRL_LIVE)
nvme_kick_requeue_lists(ctrl);
+
return changed;
}
EXPORT_SYMBOL_GPL(nvme_change_ctrl_state);
@@ -429,10 +532,11 @@ static void nvme_free_ns(struct kref *kref)
kfree(ns);
}
-static void nvme_put_ns(struct nvme_ns *ns)
+void nvme_put_ns(struct nvme_ns *ns)
{
kref_put(&ns->kref, nvme_free_ns);
}
+EXPORT_SYMBOL_GPL(nvme_put_ns);
static inline void nvme_clear_nvme_request(struct request *req)
{
@@ -1275,6 +1379,11 @@ static u32 nvme_passthru_start(struct nvme_ctrl *ctrl, struct nvme_ns *ns,
return effects;
}
+static void nvme_update_ns(struct nvme_ctrl *ctrl, struct nvme_ns *ns)
+{
+ nvme_mdev_ns_state_changed(ctrl, ns, false);
+}
+
static void nvme_update_formats(struct nvme_ctrl *ctrl)
{
struct nvme_ns *ns;
@@ -1283,6 +1392,8 @@ static void nvme_update_formats(struct nvme_ctrl *ctrl)
list_for_each_entry(ns, &ctrl->namespaces, list)
if (ns->disk && nvme_revalidate_disk(ns->disk))
nvme_set_queue_dying(ns);
+ else
+ nvme_update_ns(ctrl, ns);
up_read(&ctrl->namespaces_rwsem);
nvme_remove_invalid_namespaces(ctrl, NVME_NSID_ALL);
@@ -3133,7 +3244,7 @@ static int ns_cmp(void *priv, struct list_head *a, struct list_head *b)
return nsa->head->ns_id - nsb->head->ns_id;
}
-static struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned nsid)
+struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned int nsid)
{
struct nvme_ns *ns, *ret = NULL;
@@ -3151,6 +3262,7 @@ static struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned nsid)
up_read(&ctrl->namespaces_rwsem);
return ret;
}
+EXPORT_SYMBOL_GPL(nvme_find_get_ns);
static int nvme_setup_streams_ns(struct nvme_ctrl *ctrl, struct nvme_ns *ns)
{
@@ -3271,6 +3383,8 @@ static void nvme_ns_remove(struct nvme_ns *ns)
if (test_and_set_bit(NVME_NS_REMOVING, &ns->flags))
return;
+ nvme_mdev_ns_state_changed(ns->ctrl, ns, true);
+
nvme_fault_inject_fini(ns);
if (ns->disk && ns->disk->flags & GENHD_FL_UP) {
del_gendisk(ns->disk);
@@ -3301,6 +3415,8 @@ static void nvme_validate_ns(struct nvme_ctrl *ctrl, unsigned nsid)
if (ns) {
if (ns->disk && revalidate_disk(ns->disk))
nvme_ns_remove(ns);
+ else
+ nvme_update_ns(ctrl, ns);
nvme_put_ns(ns);
} else
nvme_alloc_ns(ctrl, nsid);
@@ -3654,6 +3770,7 @@ static void nvme_free_ctrl(struct device *dev)
sysfs_remove_link(&subsys->dev.kobj, dev_name(ctrl->device));
}
+ nvme_mdev_remove_ctrl(ctrl);
ctrl->ops->free_ctrl(ctrl);
if (subsys)
@@ -3726,6 +3843,8 @@ int nvme_init_ctrl(struct nvme_ctrl *ctrl, struct device *dev,
dev_pm_qos_update_user_latency_tolerance(ctrl->device,
min(default_ps_max_latency_us, (unsigned long)S32_MAX));
+ nvme_mdev_add_ctrl(ctrl);
+
return 0;
out_free_name:
kfree_const(ctrl->device->kobj.name);
diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
index 9320b0a87d79..2df6c9f0e1cc 100644
--- a/drivers/nvme/host/nvme.h
+++ b/drivers/nvme/host/nvme.h
@@ -151,6 +151,7 @@ enum nvme_ctrl_state {
};
struct nvme_ctrl {
+ struct list_head link;
bool comp_seen;
enum nvme_ctrl_state state;
bool identified;
@@ -253,6 +254,23 @@ struct nvme_ctrl {
unsigned long discard_page_busy;
};
+#ifdef CONFIG_NVME_MDEV
+/* Interface to the host driver */
+struct nvme_mdev_driver {
+ struct module *owner;
+
+ /* a controller's state has changed */
+ void (*nvme_ctrl_state_changed)(struct nvme_ctrl *ctrl);
+
+ /* a namespace was updated in some way (e.g. after a format) */
+ void (*nvme_ns_state_changed)(struct nvme_ctrl *ctrl,
+ u32 nsid, bool removed);
+};
+
+int nvme_core_register_mdev_driver(struct nvme_mdev_driver *driver_ops);
+void nvme_core_unregister_mdev_driver(struct nvme_mdev_driver *driver_ops);
+#endif
+
struct nvme_subsystem {
int instance;
struct device dev;
@@ -275,7 +293,7 @@ struct nvme_subsystem {
};
/*
- * Container structure for uniqueue namespace identifiers.
+ * Container structure for unique namespace identifiers.
*/
struct nvme_ns_ids {
u8 eui64[8];
@@ -351,13 +369,22 @@ struct nvme_ns {
};
+struct nvme_ext_data_iter;
+struct nvme_ext_cmd_result {
+ u32 tag;
+ u16 status;
+};
+
struct nvme_ctrl_ops {
const char *name;
struct module *module;
unsigned int flags;
-#define NVME_F_FABRICS (1 << 0)
-#define NVME_F_METADATA_SUPPORTED (1 << 1)
-#define NVME_F_PCI_P2PDMA (1 << 2)
+#define NVME_F_FABRICS BIT(0)
+#define NVME_F_METADATA_SUPPORTED BIT(1)
+#define NVME_F_PCI_P2PDMA BIT(2)
+#define NVME_F_MDEV_SUPPORTED BIT(3)
+#define NVME_F_MDEV_DMA_SUPPORTED BIT(4)
+
int (*reg_read32)(struct nvme_ctrl *ctrl, u32 off, u32 *val);
int (*reg_write32)(struct nvme_ctrl *ctrl, u32 off, u32 val);
int (*reg_read64)(struct nvme_ctrl *ctrl, u32 off, u64 *val);
@@ -366,6 +393,23 @@ struct nvme_ctrl_ops {
void (*delete_ctrl)(struct nvme_ctrl *ctrl);
int (*get_address)(struct nvme_ctrl *ctrl, char *buf, int size);
void (*stop_ctrl)(struct nvme_ctrl *ctrl);
+
+#ifdef CONFIG_NVME_MDEV
+ int (*ext_queues_available)(struct nvme_ctrl *ctrl);
+ int (*ext_queue_alloc)(struct nvme_ctrl *ctrl, u16 *qid);
+ void (*ext_queue_free)(struct nvme_ctrl *ctrl, u16 qid);
+
+ int (*ext_queue_submit)(struct nvme_ctrl *ctrl,
+ u16 qid, u32 tag,
+ struct nvme_command *command,
+ struct nvme_ext_data_iter *iter);
+
+ bool (*ext_queue_full)(struct nvme_ctrl *ctrl, u16 qid);
+
+ int (*ext_queue_poll)(struct nvme_ctrl *ctrl, u16 qid,
+ struct nvme_ext_cmd_result *results,
+ unsigned int max_len);
+#endif
};
#ifdef CONFIG_FAULT_INJECTION_DEBUG_FS
@@ -427,6 +471,8 @@ void nvme_stop_ctrl(struct nvme_ctrl *ctrl);
void nvme_put_ctrl(struct nvme_ctrl *ctrl);
int nvme_init_identify(struct nvme_ctrl *ctrl);
+struct nvme_ns *nvme_find_get_ns(struct nvme_ctrl *ctrl, unsigned int nsid);
+void nvme_put_ns(struct nvme_ns *ns);
void nvme_remove_namespaces(struct nvme_ctrl *ctrl);
int nvme_sec_submit(void *data, u16 spsp, u8 secp, void *buffer, size_t len,
--
2.17.2
^ permalink raw reply related [flat|nested] 3471+ messages in thread
* Re: [PATCH 7/9] nvme/core: add mdev interfaces
2019-03-19 14:41 ` Maxim Levitsky
@ 2019-03-20 11:46 ` Stefan Hajnoczi
-1 siblings, 0 replies; 3471+ messages in thread
From: Stefan Hajnoczi @ 2019-03-20 11:46 UTC (permalink / raw)
To: Maxim Levitsky
Cc: linux-nvme, linux-kernel, kvm, Jens Axboe, Alex Williamson,
Keith Busch, Christoph Hellwig, Sagi Grimberg, Kirti Wankhede,
David S . Miller, Mauro Carvalho Chehab, Greg Kroah-Hartman,
Wolfram Sang, Nicolas Ferre, Paul E . McKenney ,
Paolo Bonzini, Liang Cunming, Liu Changpeng, Fam Zheng,
Amnon Ilan, John Ferlan
On Tue, Mar 19, 2019 at 04:41:14PM +0200, Maxim Levitsky wrote:
> +int nvme_core_register_mdev_driver(struct nvme_mdev_driver *driver_ops)
> +{
> + struct nvme_ctrl *ctrl;
> +
> + if (mdev_driver_interface)
> + return -EEXIST;
> +
> + mdev_driver_interface = driver_ops;
Can mdev_driver_interface be accessed from two CPUs at the same time?
mdev_driver_interface isn't protected by the mutex. The state_changed
functions below also don't protect mdev_driver_interface.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: [PATCH 7/9] nvme/core: add mdev interfaces
2019-03-20 11:46 ` Stefan Hajnoczi
@ 2019-03-20 12:50 ` Maxim Levitsky
-1 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-20 12:50 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: linux-nvme, linux-kernel, kvm, Jens Axboe, Alex Williamson,
Keith Busch, Christoph Hellwig, Sagi Grimberg, Kirti Wankhede,
David S . Miller, Mauro Carvalho Chehab, Greg Kroah-Hartman,
Wolfram Sang, Nicolas Ferre, Paul E . McKenney, Paolo Bonzini,
Liang Cunming, Liu Changpeng, Fam Zheng, Amnon Ilan, John Ferlan
On Wed, 2019-03-20 at 11:46 +0000, Stefan Hajnoczi wrote:
> On Tue, Mar 19, 2019 at 04:41:14PM +0200, Maxim Levitsky wrote:
> > +int nvme_core_register_mdev_driver(struct nvme_mdev_driver *driver_ops)
> > +{
> > + struct nvme_ctrl *ctrl;
> > +
> > + if (mdev_driver_interface)
> > + return -EEXIST;
> > +
> > + mdev_driver_interface = driver_ops;
>
> Can mdev_driver_interface be accessed from two CPUs at the same time?
> mdev_driver_interface isn't protected by the mutex. The state_changed
> functions below also don't protect mdev_driver_interface.
It can, for sure.
However, the only time it is updated is in the mdev core module's
load/unload routines.
On module load the interface flips from NULL to a pointer inside the
module, so that direction should be safe. When the mdev module unloads,
its reference count is 0, so all callers first try to take a reference,
fail, and therefore never call through the interface.
I might still be wrong with this reasoning, though.
Best regards,
Maxim Levitsky
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: [PATCH 7/9] nvme/core: add mdev interfaces
@ 2019-03-20 12:50 ` Maxim Levitsky
0 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-20 12:50 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: linux-nvme, linux-kernel, kvm, Jens Axboe, Alex Williamson,
Keith Busch, Christoph Hellwig, Sagi Grimberg, Kirti Wankhede,
David S . Miller, Mauro Carvalho Chehab, Greg Kroah-Hartman,
Wolfram Sang, Nicolas Ferre, Paul E . McKenney, Paolo Bonzini,
Liang Cunming, Liu Changpeng, Fam Zheng, Amnon Ilan, John Ferlan
On Wed, 2019-03-20 at 11:46 +0000, Stefan Hajnoczi wrote:
> On Tue, Mar 19, 2019 at 04:41:14PM +0200, Maxim Levitsky wrote:
> > +int nvme_core_register_mdev_driver(struct nvme_mdev_driver *driver_ops)
> > +{
> > + struct nvme_ctrl *ctrl;
> > +
> > + if (mdev_driver_interface)
> > + return -EEXIST;
> > +
> > + mdev_driver_interface = driver_ops;
>
> Can mdev_driver_interface be accessed from two CPUs at the same time?
> mdev_driver_interface isn't protected by the mutex. The state_changed
> functions below also don't protect mdev_driver_interface.
It can be for sure.
However the only time it is updated is when the mdev core module load/unload
routines.
On module load the interface flips from NULL to a pointer to inside of the
module, so this should be safe, and when mdev module unloads, its reference
counter is 0, and all the callers first try to increase it and fail, they don't
call using this interface.
I might still be wrong with this reasoning though.
Best regards,
Maxim Levitsky
^ permalink raw reply [flat|nested] 3471+ messages in thread
* [PATCH 8/9] nvme/core: add nvme-mdev core driver
@ 2019-03-19 14:41 ` Maxim Levitsky
-1 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-19 14:41 UTC (permalink / raw)
To: linux-nvme
Cc: Maxim Levitsky, linux-kernel, kvm, Jens Axboe, Alex Williamson,
Keith Busch, Christoph Hellwig, Sagi Grimberg, Kirti Wankhede,
David S. Miller, Mauro Carvalho Chehab, Greg Kroah-Hartman,
Wolfram Sang, Nicolas Ferre, Paul E. McKenney,
Paolo Bonzini, Liang Cunming, Liu Changpeng, Fam Zheng,
Amnon Ilan, John Ferlan
This is the main commit in the series, adding the mediated NVMe
driver.
The idea behind this driver is based on the paper at
https://www.usenix.org/conference/atc18/presentation/peng,
but this is an independent implementation.
This mdev device exposes an NVMe 1.3 virtual device to any VFIO user;
the virtual device represents a partition (or a whole namespace) of a
host NVMe device.
Unlike the paper, the driver uses one polling thread per mediated
device and needs only one hardware queue per device to achieve
near-native performance. It can also use more than one hardware
queue, in which case it splits the hardware queues between the
virtual NVMe queues, supporting an n:m mapping between host hardware
queues and guest virtual queues.
nvme-mdev can't yet be used after this commit, as no NVMe device
driver supports mediation yet (support is added to nvme-pci in the
next commit).
The driver can use the optional NVMe shadow doorbell feature to stop
polling after a timeout.
Currently the device uses the Red Hat PCI vendor ID and PCI device ID
0x1234, which is only a placeholder until a real device ID is
allocated.
Use example:
# load the nvme-mdev driver
$ modprobe nvme-mdev
# load the nvme pci driver with 4 polling queues reserved
# (will work with the next patch)
$ modprobe nvme mdev_queues=4
# generate random UUID for the mediated device
$ UUID=$(uuidgen)
$ MDEV_DEVICE=/sys/bus/mdev/devices/$UUID
# the location of the real nvme device (replace with yours)
$ PCI_DEVICE=/sys/bus/pci/devices/0000:44:00.0
# create the mediated device using 2 host polling queues
$ echo $UUID > $PCI_DEVICE/mdev_supported_types/nvme-2Q_V1/create
# attach partition 1 of namespace 1 to a free virtual namespace
# (use n1 to attach whole namespace)
# you can attach up to 16 virtual namespaces for now
$ echo n1p1 > $MDEV_DEVICE/namespaces/add_namespace
# move the polling thread to cpu 11
$ echo 11 > $MDEV_DEVICE/settings/iothread_cpu
# now you can boot qemu with
# -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/$UUID
Note that you can attach and detach virtual namespaces even while the
guest is running; the device will send namespace-changed AEN
notifications to the guest prior to each attach/detach action.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
MAINTAINERS | 5 +
drivers/nvme/Kconfig | 1 +
drivers/nvme/Makefile | 1 +
drivers/nvme/host/core.c | 5 +-
drivers/nvme/mdev/Kconfig | 16 +
drivers/nvme/mdev/Makefile | 5 +
drivers/nvme/mdev/adm.c | 873 +++++++++++++++++++++++++++++++++++
drivers/nvme/mdev/events.c | 142 ++++++
drivers/nvme/mdev/host.c | 491 ++++++++++++++++++++
drivers/nvme/mdev/instance.c | 802 ++++++++++++++++++++++++++++++++
drivers/nvme/mdev/io.c | 563 ++++++++++++++++++++++
drivers/nvme/mdev/irq.c | 264 +++++++++++
drivers/nvme/mdev/mdev.h | 56 +++
drivers/nvme/mdev/mmio.c | 591 ++++++++++++++++++++++++
drivers/nvme/mdev/pci.c | 247 ++++++++++
drivers/nvme/mdev/priv.h | 700 ++++++++++++++++++++++++++++
drivers/nvme/mdev/udata.c | 390 ++++++++++++++++
drivers/nvme/mdev/vcq.c | 209 +++++++++
drivers/nvme/mdev/vctrl.c | 515 +++++++++++++++++++++
drivers/nvme/mdev/viommu.c | 322 +++++++++++++
drivers/nvme/mdev/vns.c | 356 ++++++++++++++
drivers/nvme/mdev/vsq.c | 181 ++++++++
22 files changed, 6733 insertions(+), 2 deletions(-)
create mode 100644 drivers/nvme/mdev/Kconfig
create mode 100644 drivers/nvme/mdev/Makefile
create mode 100644 drivers/nvme/mdev/adm.c
create mode 100644 drivers/nvme/mdev/events.c
create mode 100644 drivers/nvme/mdev/host.c
create mode 100644 drivers/nvme/mdev/instance.c
create mode 100644 drivers/nvme/mdev/io.c
create mode 100644 drivers/nvme/mdev/irq.c
create mode 100644 drivers/nvme/mdev/mdev.h
create mode 100644 drivers/nvme/mdev/mmio.c
create mode 100644 drivers/nvme/mdev/pci.c
create mode 100644 drivers/nvme/mdev/priv.h
create mode 100644 drivers/nvme/mdev/udata.c
create mode 100644 drivers/nvme/mdev/vcq.c
create mode 100644 drivers/nvme/mdev/vctrl.c
create mode 100644 drivers/nvme/mdev/viommu.c
create mode 100644 drivers/nvme/mdev/vns.c
create mode 100644 drivers/nvme/mdev/vsq.c
diff --git a/MAINTAINERS b/MAINTAINERS
index dce5c099f43c..d143e929d7ed 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -10896,6 +10896,11 @@ W: http://git.infradead.org/nvme.git
S: Supported
F: drivers/nvme/target/
+NVM EXPRESS MDEV DRIVER
+M: Maxim Levitsky <mlevitsk@redhat.com>
+S: Supported
+F: drivers/nvme/mdev/
+
NVMEM FRAMEWORK
M: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
S: Maintained
diff --git a/drivers/nvme/Kconfig b/drivers/nvme/Kconfig
index 04008e0bbe81..cbf867e6ac1e 100644
--- a/drivers/nvme/Kconfig
+++ b/drivers/nvme/Kconfig
@@ -2,5 +2,6 @@ menu "NVME Support"
source "drivers/nvme/host/Kconfig"
source "drivers/nvme/target/Kconfig"
+source "drivers/nvme/mdev/Kconfig"
endmenu
diff --git a/drivers/nvme/Makefile b/drivers/nvme/Makefile
index 0096a7fd1431..0458efc57aee 100644
--- a/drivers/nvme/Makefile
+++ b/drivers/nvme/Makefile
@@ -1,3 +1,4 @@
obj-y += host/
obj-y += target/
+obj-y += mdev/
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 90561973bce9..a835884fcbcd 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1687,6 +1687,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
if (ns->ms && !ns->ext &&
(ns->ctrl->ops->flags & NVME_F_METADATA_SUPPORTED))
nvme_init_integrity(disk, ns->ms, ns->pi_type);
+
if (ns->ms && !nvme_ns_has_pi(ns) && !blk_get_integrity(disk))
capacity = 0;
@@ -2302,7 +2303,7 @@ static void nvme_init_subnqn(struct nvme_subsystem *subsys, struct nvme_ctrl *ct
size_t nqnlen;
int off;
- if(!(ctrl->quirks & NVME_QUIRK_IGNORE_DEV_SUBNQN)) {
+ if (!(ctrl->quirks & NVME_QUIRK_IGNORE_DEV_SUBNQN)) {
nqnlen = strnlen(id->subnqn, NVMF_NQN_SIZE);
if (nqnlen > 0 && nqnlen < NVMF_NQN_SIZE) {
strlcpy(subsys->subnqn, id->subnqn, NVMF_NQN_SIZE);
@@ -3361,8 +3362,8 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
nvme_mpath_add_disk(ns, id);
nvme_fault_inject_init(ns);
- kfree(id);
+ kfree(id);
return;
out_put_disk:
put_disk(ns->disk);
diff --git a/drivers/nvme/mdev/Kconfig b/drivers/nvme/mdev/Kconfig
new file mode 100644
index 000000000000..7ebc66cdeac0
--- /dev/null
+++ b/drivers/nvme/mdev/Kconfig
@@ -0,0 +1,16 @@
+
+config NVME_MDEV
+ bool
+
+config NVME_MDEV_VFIO
+ tristate "NVME Mediated VFIO virtual device"
+ select NVME_MDEV
+ depends on BLOCK
+ depends on VFIO_MDEV
+ depends on NVME_CORE
+ help
+ This provides EXPERIMENTAL support for lightweight software
+ passthrough of a partition on an NVMe storage device to a
+ guest, as an NVMe namespace attached to a virtual NVMe
+ controller.
+ If unsure, say N.
diff --git a/drivers/nvme/mdev/Makefile b/drivers/nvme/mdev/Makefile
new file mode 100644
index 000000000000..114016c48476
--- /dev/null
+++ b/drivers/nvme/mdev/Makefile
@@ -0,0 +1,5 @@
+
+obj-$(CONFIG_NVME_MDEV_VFIO) += nvme-mdev.o
+
+nvme-mdev-y += adm.o events.o instance.o host.o io.o irq.o \
+ udata.o viommu.o vns.o vsq.o vcq.o vctrl.o mmio.o pci.o
diff --git a/drivers/nvme/mdev/adm.c b/drivers/nvme/mdev/adm.c
new file mode 100644
index 000000000000..39a7ad252c69
--- /dev/null
+++ b/drivers/nvme/mdev/adm.c
@@ -0,0 +1,873 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * NVMe admin command implementation
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include "priv.h"
+
+struct adm_ctx {
+ struct nvme_mdev_vctrl *vctrl;
+ struct nvme_mdev_hctrl *hctrl;
+ const struct nvme_command *in;
+ struct nvme_mdev_vns *ns;
+ struct nvme_ext_data_iter udatait;
+ unsigned int datalen;
+};
+
+/*Identify Controller */
+static int nvme_mdev_adm_handle_id_cntrl(struct adm_ctx *ctx)
+{
+ int ret;
+ const struct nvme_identify *in = &ctx->in->identify;
+ struct nvme_id_ctrl *id;
+
+ if (in->nsid != 0)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ id = kzalloc(sizeof(*id), GFP_KERNEL);
+ if (!id)
+ return NVME_SC_INTERNAL;
+
+ /** Controller Capabilities and Features ************************/
+ // PCI vendor ID
+ store_le16(&id->vid, NVME_MDEV_PCI_VENDOR_ID);
+ // PCI Subsystem Vendor ID
+ store_le16(&id->ssvid, NVME_MDEV_PCI_SUBVENDOR_ID);
+ // Serial Number
+ store_strsp(id->sn, ctx->vctrl->serial);
+ // Model Number
+ store_strsp(id->mn, "NVMe MDEV virtual device");
+ // Firmware Revision
+ store_strsp(id->fr, NVME_MDEV_FIRMWARE_VERSION);
+ // Recommended Arbitration Burst
+ id->rab = 6;
+ // IEEE OUI Identifier for the controller vendor
+ id->ieee[0] = 0;
+ // Controller Multi-Path I/O and Namespace Sharing Capabilities
+ id->cmic = 0;
+ // Maximum Data Transfer Size (power of two, in page size units)
+ id->mdts = ctx->hctrl->mdts;
+ // controller ID
+ id->cntlid = 0;
+ // NVME supported version
+ store_le32(&id->ver, NVME_MDEV_NVME_VER);
+ // RTD3 Resume Latency
+ id->rtd3r = 0;
+ //RTD3 Entry Latency
+ id->rtd3e = 0;
+ // Optional Asynchronous Events Supported
+ store_le32(&id->oaes, NVME_AEN_CFG_NS_ATTR);
+ // Controller Attributes (misc junk)
+ id->ctratt = 0;
+
+ /*Admin Command Set Attributes & Optional Controller Capabilities */
+ // Optional Admin Command Support
+ id->oacs = ctx->vctrl->mmio.shadow_db_supported ?
+ NVME_CTRL_OACS_DBBUF_SUPP : 0;
+ // Abort Command Limit (dummy, zero based)
+ id->acl = 3;
+ // Asynchronous Event Request Limit (zero based)
+ id->aerl = MAX_AER_COMMANDS - 1;
+ // Firmware Updates (dummy)
+ id->frmw = 3;
+ // Log Page Attributes
+ // (IMPLEMENT: bit for commands supported and effects)
+ id->lpa = 0;
+ // Error Log Page Entries
+ // (zero based, IMPLEMENT: dummy for now)
+ id->elpe = 0;
+ // Number of Power States Support
+ // (zero based, IMPLEMENT: dummy for now)
+ id->npss = 0;
+ // Admin Vendor Specific Command Configuration (junk)
+ id->avscc = 0;
+ // Autonomous Power State Transition Attributes
+ id->apsta = 0;
+ // Warning Composite Temperature Threshold (dummy)
+ id->wctemp = 0x157;
+ // Critical Composite Temperature Threshold (dummy)
+ id->cctemp = 0x175;
+ // Maximum Time for Firmware Activation (dummy)
+ id->mtfa = 0;
+ // Host Memory Buffer Preferred Size (dummy)
+ id->hmpre = 0;
+ // Host Memory Buffer Minimum Size (dummy)
+ id->hmmin = 0;
+ // Total NVM Capacity (not supported)
+ id->tnvmcap[0] = 0;
+ // Unallocated NVM Capacity (not supported for now)
+ id->unvmcap[0] = 0;
+ // Replay Protected Memory Block Support
+ id->rpmbs = 0;
+ // Extended Device Self-test Time (dummy)
+ id->edstt = 0;
+ // Device Self-test Options (dummy)
+ id->dsto = 0;
+ // Firmware Update Granularity (dummy)
+ id->fwug = 0;
+ // Keep Alive Support (not supported)
+ id->kas = 0;
+ // Host Controlled Thermal Management Attributes (not supported)
+ id->hctma = 0;
+ // Minimum Thermal Management Temperature (not supported)
+ id->mntmt = 0;
+ // Maximum Thermal Management Temperature (not supported)
+ id->mxtmt = 0;
+ // Sanitize capabilities (not supported)
+ id->sanicap = 0;
+
+ /****************** NVM Command Set Attributes ********************/
+ // Submission Queue Entry Size
+ id->sqes = (0x6 << 4) | 0x6;
+ // Completion Queue Entry Size
+ id->cqes = (0x4 << 4) | 0x4;
+ // Maximum Outstanding Commands
+ id->maxcmd = 0;
+ // Number of Namespaces
+ id->nn = MAX_VIRTUAL_NAMESPACES;
+ // Optional NVM Command Support
+ // (we add dsm and write zeros if host supports them)
+ id->oncs = ctx->hctrl->oncs;
+ // TODOLATER: IO: Fused Operation Support
+ id->fuses = 0;
+ // Format NVM Attributes (don't support)
+ id->fna = 0;
+ // Volatile Write Cache (tell that always exist)
+ id->vwc = 1;
+ // Atomic Write Unit Normal (zero based value in blocks)
+ id->awun = 0;
+ // Atomic Write Unit Power Fail (ditto)
+ id->awupf = 0;
+ // NVM Vendor Specific Command Configuration
+ id->nvscc = 0;
+ // Atomic Compare & Write Unit (zero based value in blocks)
+ id->acwu = 0;
+ // SGL Support
+ id->sgls = 0;
+ // NVM Subsystem NVMe Qualified Name
+ strncpy(id->subnqn, ctx->vctrl->subnqn, sizeof(id->subnqn));
+
+ /******************Power state descriptors ***********************/
+ store_le16(&id->psd[0].max_power, 0x9c4); // dummy
+ store_le32(&id->psd[0].entry_lat, 0x10);
+ store_le32(&id->psd[0].exit_lat, 0x4);
+
+ ret = nvme_mdev_write_to_udata(&ctx->udatait, id, sizeof(*id));
+ kfree(id);
+ return nvme_mdev_translate_error(ret);
+}
+
+/*Identify Namespace data structure for the specified NSID or common one */
+static int nvme_mdev_adm_handle_id_ns(struct adm_ctx *ctx)
+{
+ int ret;
+ struct nvme_id_ns *idns;
+ u32 nsid = le32_to_cpu(ctx->in->identify.nsid);
+
+ if (nsid == 0xffffffff || nsid == 0 || nsid > MAX_VIRTUAL_NAMESPACES)
+ return DNR(NVME_SC_INVALID_NS);
+
+ /* Allocate return structure*/
+ idns = kzalloc(NVME_IDENTIFY_DATA_SIZE, GFP_KERNEL);
+ if (!idns)
+ return NVME_SC_INTERNAL;
+
+ if (ctx->ns) {
+ //Namespace Size
+ store_le64(&idns->nsze, ctx->ns->ns_size);
+ // Namespace Capacity
+ store_le64(&idns->ncap, ctx->ns->ns_size);
+ // Namespace Utilization
+ store_le64(&idns->nuse, ctx->ns->ns_size);
+ // Namespace Features (nothing to set here yet)
+ idns->nsfeat = 0;
+ // Number of LBA Formats (dummy, zero based)
+ idns->nlbaf = 0;
+ // Formatted LBA Size (current LBA format in use)
+ // + external metadata bit
+ idns->flbas = 0;
+ // Metadata Capabilities
+ idns->mc = 0;
+ // End-to-end Data Protection Capabilities
+ idns->dpc = 0;
+ // End-to-end Data Protection Type Settings
+ idns->dps = 0;
+ // Namespace Multi-path I/O and Namespace Sharing Capabilities
+ idns->nmic = 0;
+ // Reservation Capabilities
+ idns->rescap = 0;
+ // Format Progress Indicator (dummy)
+ idns->fpi = 0;
+ // Namespace Atomic Write Unit Normal
+ idns->nawun = 0;
+ // Namespace Atomic Write Unit Power Fail
+ idns->nawupf = 0;
+ // Namespace Atomic Compare & Write Unit
+ idns->nacwu = 0;
+ // Namespace Atomic Boundary Size Normal
+ idns->nabsn = 0;
+ // Namespace Atomic Boundary Offset
+ idns->nabo = 0;
+ // Namespace Atomic Boundary Size Power Fail
+ idns->nabspf = 0;
+ // Namespace Optimal IO Boundary
+ idns->noiob = ctx->ns->noiob;
+ // NVM Capacity (another capacity but in bytes)
+ idns->nvmcap[0] = 0;
+
+ // TODOLATER: NS: support NGUID/EUI64
+ idns->nguid[0] = 0;
+ idns->eui64[0] = 0;
+ // format 0 metadata size
+ idns->lbaf[0].ms = 0;
+ // format 0 block size (in power of two)
+ idns->lbaf[0].ds = ctx->ns->blksize_shift;
+ // format 0 relative performance
+ idns->lbaf[0].rp = 0;
+ }
+
+ ret = nvme_mdev_write_to_udata(&ctx->udatait, idns,
+ NVME_IDENTIFY_DATA_SIZE);
+ kfree(idns);
+ return nvme_mdev_translate_error(ret);
+}
+
+/* Namespace Identification Descriptor list for the specified NSID.*/
+static int nvme_mdev_adm_handle_id_ns_desc(struct adm_ctx *ctx)
+{
+ struct ns_desc {
+ struct nvme_ns_id_desc uuid_desc;
+ uuid_t uuid;
+ struct nvme_ns_id_desc null_desc;
+ };
+
+ int ret;
+ struct ns_desc *id;
+
+ if (!ctx->ns)
+ return DNR(NVME_SC_INVALID_NS);
+
+ /* Allocate return structure */
+ id = kzalloc(NVME_IDENTIFY_DATA_SIZE, GFP_KERNEL);
+ if (!id)
+ return NVME_SC_INTERNAL;
+
+ id->uuid_desc.nidt = NVME_NIDT_UUID;
+ id->uuid_desc.nidl = NVME_NIDT_UUID_LEN;
+ memcpy(&id->uuid, &ctx->ns->uuid, sizeof(id->uuid));
+
+ ret = nvme_mdev_write_to_udata(&ctx->udatait, id,
+ NVME_IDENTIFY_DATA_SIZE);
+ kfree(id);
+ return nvme_mdev_translate_error(ret);
+}
+
+/*Active Namespace ID list */
+static int nvme_mdev_adm_handle_id_active_ns_list(struct adm_ctx *ctx)
+{
+ u32 nsid, start_nsid = le32_to_cpu(ctx->in->identify.nsid);
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+ int i = 0, ret;
+
+ __le32 *nslist;
+
+ if (start_nsid >= 0xfffffffe)
+ return DNR(NVME_SC_INVALID_NS);
+
+ nslist = kzalloc(NVME_IDENTIFY_DATA_SIZE, GFP_KERNEL);
+ if (!nslist)
+ return NVME_SC_INTERNAL;
+
+ for (nsid = start_nsid + 1; nsid <= MAX_VIRTUAL_NAMESPACES; nsid++)
+ if (nvme_mdev_vns_from_vnsid(vctrl, nsid))
+ nslist[i++] = cpu_to_le32(nsid);
+
+ ret = nvme_mdev_write_to_udata(&ctx->udatait, nslist,
+ NVME_IDENTIFY_DATA_SIZE);
+ kfree(nslist);
+ return nvme_mdev_translate_error(ret);
+}
+
+/* Handle Identify command*/
+static int nvme_mdev_adm_handle_id(struct adm_ctx *ctx)
+{
+ const struct nvme_identify *in = &ctx->in->identify;
+
+ int ret = nvme_mdev_udata_iter_set_dptr(&ctx->udatait,
+ &ctx->in->common.dptr,
+ NVME_IDENTIFY_DATA_SIZE);
+
+ u32 nsid = le32_to_cpu(in->nsid);
+
+ if (ret)
+ return nvme_mdev_translate_error(ret);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_MPTR | RSRV_DW11_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (in->ctrlid)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ ctx->ns = nvme_mdev_vns_from_vnsid(ctx->vctrl, nsid);
+
+ switch (ctx->in->identify.cns) {
+ case NVME_ID_CNS_CTRL:
+ _DBG(ctx->vctrl, "ADMINQ: IDENTIFY CTRL\n");
+ return nvme_mdev_adm_handle_id_cntrl(ctx);
+ case NVME_ID_CNS_NS_ACTIVE_LIST:
+ _DBG(ctx->vctrl, "ADMINQ: IDENTIFY ACTIVE_NS_LIST\n");
+ return nvme_mdev_adm_handle_id_active_ns_list(ctx);
+ case NVME_ID_CNS_NS:
+ _DBG(ctx->vctrl, "ADMINQ: IDENTIFY NS=0x%08x\n", nsid);
+ return nvme_mdev_adm_handle_id_ns(ctx);
+ case NVME_ID_CNS_NS_DESC_LIST:
+ _DBG(ctx->vctrl, "ADMINQ: IDENTIFY NS_DESC NS=0x%08x\n", nsid);
+ return nvme_mdev_adm_handle_id_ns_desc(ctx);
+ default:
+ return DNR(NVME_SC_INVALID_FIELD);
+ }
+}
+
+/* Error log for AER */
+static int nvme_mdev_adm_handle_get_log_page_err(struct adm_ctx *ctx)
+{
+ struct nvme_err_log_entry dummy_entry;
+ int ret;
+
+ // write one dummy entry with 0 error count
+ memset(&dummy_entry, 0, sizeof(dummy_entry));
+
+ ret = nvme_mdev_write_to_udata(&ctx->udatait,
+ &dummy_entry,
+ min((unsigned int)sizeof(dummy_entry),
+ ctx->datalen));
+
+ return nvme_mdev_translate_error(ret);
+}
+
+/* This log page allows to tell user about connected/disconnected namespaces */
+static int nvme_mdev_adm_handle_get_log_page_changed_ns(struct adm_ctx *ctx)
+{
+ unsigned int datasize = min(ctx->vctrl->ns_log_size * 4, ctx->datalen);
+
+ int ret = nvme_mdev_write_to_udata(&ctx->udatait,
+ &ctx->vctrl->ns_log, datasize);
+
+ nvme_mdev_vns_log_reset(ctx->vctrl);
+ return nvme_mdev_translate_error(ret);
+}
+
+/* S.M.A.R.T. log*/
+static int nvme_mdev_adm_handle_get_log_page_smart(struct adm_ctx *ctx)
+{
+ unsigned int datasize = min_t(unsigned int,
+ sizeof(struct nvme_smart_log), ctx->datalen);
+ int ret;
+ struct nvme_smart_log *log = kzalloc(sizeof(*log), GFP_KERNEL);
+
+ if (!log)
+ return NVME_SC_INTERNAL;
+
+ /* Some dummy values */
+ log->avail_spare = 100;
+ log->spare_thresh = 10;
+ store_le16(&log->temperature, 0x140);
+
+ ret = nvme_mdev_write_to_udata(&ctx->udatait, log, datasize);
+ kfree(log);
+ return nvme_mdev_translate_error(ret);
+}
+
+/* FW slot log - useless */
+static int nvme_mdev_adm_handle_get_log_page_fw_slot(struct adm_ctx *ctx)
+{
+ unsigned int datasize = min_t(unsigned int,
+ sizeof(struct nvme_fw_slot_info_log),
+ ctx->datalen);
+ int ret;
+ struct nvme_fw_slot_info_log *log = kzalloc(sizeof(*log), GFP_KERNEL);
+
+ if (!log)
+ return NVME_SC_INTERNAL;
+
+ ret = nvme_mdev_write_to_udata(&ctx->udatait, log, datasize);
+ kfree(log);
+ return nvme_mdev_translate_error(ret);
+}
+
+/* Response to GET LOG PAGE command */
+static int nvme_mdev_adm_handle_get_log_page(struct adm_ctx *ctx)
+{
+ const struct nvme_get_log_page_command *in = &ctx->in->get_log_page;
+ u8 log_page_id = ctx->in->get_log_page.lid;
+ int ret;
+
+ ctx->datalen = (le16_to_cpu(in->numdl) + 1) * 4;
+
+ /* We don't support extensions (NUMDU,LPOL,LPOU) */
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_MPTR | RSRV_DW11_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* Currently ignore the NSID in the command */
+
+ /* ACK the AER */
+ if ((in->lsp & 0x80) == 0)
+ nvme_mdev_event_process_ack(ctx->vctrl, log_page_id);
+
+ /* map data pointer */
+ ret = nvme_mdev_udata_iter_set_dptr(&ctx->udatait,
+ &in->dptr, ctx->datalen);
+ if (ret)
+ return nvme_mdev_translate_error(ret);
+
+ switch (log_page_id) {
+ case NVME_LOG_ERROR:
+ _DBG(ctx->vctrl, "ADMINQ: GET_LOG_PAGE : ERRLOG\n");
+ return nvme_mdev_adm_handle_get_log_page_err(ctx);
+ case NVME_LOG_CHANGED_NS:
+ _DBG(ctx->vctrl, "ADMINQ: GET_LOG_PAGE : CHANGED_NS\n");
+ return nvme_mdev_adm_handle_get_log_page_changed_ns(ctx);
+ case NVME_LOG_SMART:
+ _DBG(ctx->vctrl, "ADMINQ: GET_LOG_PAGE : SMART\n");
+ return nvme_mdev_adm_handle_get_log_page_smart(ctx);
+ case NVME_LOG_FW_SLOT:
+ _DBG(ctx->vctrl, "ADMINQ: GET_LOG_PAGE : FWSLOT\n");
+ return nvme_mdev_adm_handle_get_log_page_fw_slot(ctx);
+ default:
+ _DBG(ctx->vctrl, "ADMINQ: GET_LOG_PAGE : log page 0x%02x\n",
+ log_page_id);
+ return DNR(NVME_SC_INVALID_FIELD);
+ }
+}
+
+/* Response to CREATE CQ command */
+static int nvme_mdev_adm_handle_create_cq(struct adm_ctx *ctx)
+{
+ int irq = -1, ret;
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+ const struct nvme_create_cq *in = &ctx->in->create_cq;
+ u16 cqid = le16_to_cpu(in->cqid);
+ u16 qsize = le16_to_cpu(in->qsize);
+ u16 cq_flags = le16_to_cpu(in->cq_flags);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_NSID | RSRV_DW23 | RSRV_DPTR_PRP2 |
+ RSRV_MPTR | RSRV_DW12_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* QID checks*/
+ if (!cqid ||
+ cqid >= MAX_VIRTUAL_QUEUES || test_bit(cqid, vctrl->vcq_en))
+ return DNR(NVME_SC_QID_INVALID);
+
+ /* Queue size checks*/
+ if (qsize > (MAX_VIRTUAL_QUEUE_DEPTH - 1) || qsize < 1)
+ return DNR(NVME_SC_QUEUE_SIZE);
+
+ /* Queue flags checks */
+ if (cq_flags & ~(NVME_QUEUE_PHYS_CONTIG | NVME_CQ_IRQ_ENABLED))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (cq_flags & NVME_CQ_IRQ_ENABLED) {
+ irq = le16_to_cpu(in->irq_vector);
+ if (irq >= MAX_VIRTUAL_IRQS)
+ return DNR(NVME_SC_INVALID_VECTOR);
+ }
+
+ ret = nvme_mdev_vcq_init(ctx->vctrl, cqid,
+ le64_to_cpu(in->prp1),
+ cq_flags & NVME_QUEUE_PHYS_CONTIG,
+ qsize + 1, irq);
+
+ return nvme_mdev_translate_error(ret);
+}
+
+/* Response to DELETE CQ command */
+static int nvme_mdev_adm_handle_delete_cq(struct adm_ctx *ctx)
+{
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+ const struct nvme_delete_queue *in = &ctx->in->delete_queue;
+ u16 qid = le16_to_cpu(in->qid), sqid;
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_NSID | RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW11_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (!qid || qid >= MAX_VIRTUAL_QUEUES || !test_bit(qid, vctrl->vcq_en))
+ return DNR(NVME_SC_QID_INVALID);
+
+ for_each_set_bit(sqid, vctrl->vsq_en, MAX_VIRTUAL_QUEUES)
+ if (vctrl->vsqs[sqid].vcq == &vctrl->vcqs[qid])
+ return DNR(NVME_SC_INVALID_QUEUE);
+
+ nvme_mdev_vcq_delete(vctrl, qid);
+ return NVME_SC_SUCCESS;
+}
+
+/* Response to CREATE SQ command */
+static int nvme_mdev_adm_handle_create_sq(struct adm_ctx *ctx)
+{
+ const struct nvme_create_sq *in = &ctx->in->create_sq;
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+ int ret;
+
+ u16 sqid = le16_to_cpu(in->sqid);
+ u16 cqid = le16_to_cpu(in->cqid);
+ u16 qsize = le16_to_cpu(in->qsize);
+ u16 sq_flags = le16_to_cpu(in->sq_flags);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_NSID | RSRV_DW23 | RSRV_DPTR_PRP2 |
+ RSRV_MPTR | RSRV_DW12_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (!sqid ||
+ sqid >= MAX_VIRTUAL_QUEUES || test_bit(sqid, vctrl->vsq_en))
+ return DNR(NVME_SC_QID_INVALID);
+
+ if (!cqid || cqid >= MAX_VIRTUAL_QUEUES)
+ return DNR(NVME_SC_QID_INVALID);
+
+ if (!test_bit(cqid, vctrl->vcq_en))
+ return DNR(NVME_SC_CQ_INVALID);
+
+ /* Queue size checks */
+ if (qsize > (MAX_VIRTUAL_QUEUE_DEPTH - 1) || qsize < 1)
+ return DNR(NVME_SC_QUEUE_SIZE);
+
+ /* Queue flags checks */
+ if (sq_flags & ~(NVME_QUEUE_PHYS_CONTIG | NVME_SQ_PRIO_MASK))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ ret = nvme_mdev_vsq_init(ctx->vctrl, sqid,
+ le64_to_cpu(in->prp1),
+ sq_flags & NVME_QUEUE_PHYS_CONTIG,
+ qsize + 1, cqid);
+ if (ret)
+ goto error;
+
+ return NVME_SC_SUCCESS;
+error:
+ return nvme_mdev_translate_error(ret);
+}
+
+/* Response to DELETE SQ command */
+static int nvme_mdev_adm_handle_delete_sq(struct adm_ctx *ctx)
+{
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+ const struct nvme_delete_queue *in = &ctx->in->delete_queue;
+ u16 qid = le16_to_cpu(in->qid);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_NSID | RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW11_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (!qid || qid >= MAX_VIRTUAL_QUEUES || !test_bit(qid, vctrl->vsq_en))
+ return DNR(NVME_SC_QID_INVALID);
+
+ nvme_mdev_vsq_delete(ctx->vctrl, qid);
+ return NVME_SC_SUCCESS;
+}
+
+/* Set the shadow doorbell */
+static int nvme_mdev_adm_handle_dbbuf(struct adm_ctx *ctx)
+{
+ const struct nvme_dbbuf *in = &ctx->in->dbbuf;
+ int ret;
+
+ dma_addr_t sdb_iova = le64_to_cpu(in->prp1);
+ dma_addr_t eidx_iova = le64_to_cpu(in->prp2);
+
+ /* Check if we support the shadow doorbell */
+ if (!ctx->vctrl->mmio.shadow_db_supported)
+ return DNR(NVME_SC_INVALID_OPCODE);
+
+ /* Don't allow enabling the shadow doorbell more than once */
+ if (ctx->vctrl->mmio.shadow_db_en)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_NSID | RSRV_DW23 |
+ RSRV_MPTR | RSRV_DW10_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* check input buffers */
+ if ((OFFSET_IN_PAGE(sdb_iova) != 0) || (OFFSET_IN_PAGE(eidx_iova) != 0))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* switch to the new doorbell buffer */
+ ret = nvme_mdev_mmio_enable_dbs_shadow(ctx->vctrl, sdb_iova, eidx_iova);
+ return nvme_mdev_translate_error(ret);
+}
+
+/* Response to GET_FEATURES command */
+static int nvme_mdev_adm_handle_get_features(struct adm_ctx *ctx)
+{
+ u32 value = 0;
+ u32 irq;
+ const struct nvme_features *in = &ctx->in->features;
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+ unsigned int tmp;
+
+ u32 fid = le32_to_cpu(in->fid);
+ u16 cid = le16_to_cpu(in->command_id);
+
+ _DBG(ctx->vctrl, "ADMINQ: GET_FEATURES FID=0x%x\n", fid);
+
+ /* common reserved bits*/
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW12_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* reserved bits in dword10*/
+ if (fid > 0xFF)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* reserved bits in dword11*/
+ if (fid != NVME_FEAT_IRQ_CONFIG && in->dword11 != 0)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ switch (fid) {
+ /* Number of queues */
+ case NVME_FEAT_NUM_QUEUES:
+ value = (MAX_VIRTUAL_QUEUES - 1) |
+ ((MAX_VIRTUAL_QUEUES - 1) << 16);
+ goto out;
+
+ /* Arbitration */
+ case NVME_FEAT_ARBITRATION:
+ value = vctrl->arb_burst_shift & 0x7;
+ goto out;
+
+ /* Interrupt coalescing settings*/
+ case NVME_FEAT_IRQ_COALESCE:
+ tmp = vctrl->irqs.irq_coalesc_time_us;
+ do_div(tmp, 100);
+ value = (vctrl->irqs.irq_coalesc_max - 1) | (tmp << 8);
+ goto out;
+
+ /* Interrupt coalescing disable for a specific interrupt */
+ case NVME_FEAT_IRQ_CONFIG:
+ irq = le32_to_cpu(in->dword11);
+ if (irq >= MAX_VIRTUAL_IRQS)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ value = irq;
+ if (vctrl->irqs.vecs[irq].irq_coalesc_en)
+ value |= (1 << 16);
+ goto out;
+
+ /* Volatile write cache */
+ case NVME_FEAT_VOLATILE_WC:
+ /*we always report write cache due to mediation*/
+ value = 0x1;
+ goto out;
+
+ /* Limited error recovery */
+ case NVME_FEAT_ERR_RECOVERY:
+ value = 0;
+ break;
+
+ /* Workload hint + power state */
+ case NVME_FEAT_POWER_MGMT:
+ value = vctrl->worload_hint << 4;
+ break;
+
+ /* Temperature threshold */
+ case NVME_FEAT_TEMP_THRESH:
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* AEN permanent masking*/
+ case NVME_FEAT_ASYNC_EVENT:
+ value = nvme_mdev_event_read_aen_config(vctrl);
+ goto out;
+ default:
+ return DNR(NVME_SC_INVALID_FIELD);
+ }
+out:
+ nvme_mdev_vsq_cmd_done_adm(ctx->vctrl, value, cid, NVME_SC_SUCCESS);
+ return -1;
+}
+
+/* Response to SET_FEATURES command */
+static int nvme_mdev_adm_handle_set_features(struct adm_ctx *ctx)
+{
+ const struct nvme_features *in = &ctx->in->features;
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+
+ u32 value = le32_to_cpu(in->dword11);
+ u8 fid = le32_to_cpu(in->fid) & 0xFF;
+ u16 cid = le16_to_cpu(in->command_id);
+ u32 nsid = le32_to_cpu(in->nsid);
+
+ _DBG(ctx->vctrl, "ADMINQ: SET_FEATURES cmd. FID=0x%x\n", fid);
+
+ if (nsid != 0xffffffff && nsid != 0)
+ return DNR(NVME_SC_FEATURE_NOT_PER_NS);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW12_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ switch (fid) {
+ case NVME_FEAT_NUM_QUEUES:
+ /* need to return the value here as well */
+ value = (MAX_VIRTUAL_QUEUES - 1) |
+ ((MAX_VIRTUAL_QUEUES - 1) << 16);
+
+ nvme_mdev_vsq_cmd_done_adm(ctx->vctrl, value,
+ cid, NVME_SC_SUCCESS);
+ return -1;
+
+ case NVME_FEAT_ARBITRATION:
+ vctrl->arb_burst_shift = value & 0x7;
+ return NVME_SC_SUCCESS;
+
+ case NVME_FEAT_IRQ_COALESCE:
+ vctrl->irqs.irq_coalesc_max = (value & 0xFF) + 1;
+ vctrl->irqs.irq_coalesc_time_us = ((value >> 8) & 0xFF) * 100;
+ return NVME_SC_SUCCESS;
+
+ case NVME_FEAT_IRQ_CONFIG: {
+ u16 irq = value & 0xFFFF;
+
+ if (irq >= MAX_VIRTUAL_IRQS)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ vctrl->irqs.vecs[irq].irq_coalesc_en = (value & 0x10000) != 0;
+ return NVME_SC_SUCCESS;
+ }
+ case NVME_FEAT_VOLATILE_WC:
+ return (value != 0x1) ? DNR(NVME_SC_FEATURE_NOT_CHANGEABLE) :
+ NVME_SC_SUCCESS;
+
+ case NVME_FEAT_ERR_RECOVERY:
+ return (value != 0) ? DNR(NVME_SC_FEATURE_NOT_CHANGEABLE) :
+ NVME_SC_SUCCESS;
+ case NVME_FEAT_POWER_MGMT:
+ if (value & 0xFFFFFF0F)
+ return DNR(NVME_SC_INVALID_FIELD);
+ vctrl->worload_hint = value >> 4;
+ return NVME_SC_SUCCESS;
+
+ case NVME_FEAT_TEMP_THRESH:
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ case NVME_FEAT_ASYNC_EVENT:
+ nvme_mdev_event_set_aen_config(vctrl, value);
+ return NVME_SC_SUCCESS;
+ default:
+ return DNR(NVME_SC_INVALID_FIELD);
+ }
+}
+
+/* Response to AER command */
+static int nvme_mdev_adm_handle_async_event(struct adm_ctx *ctx)
+{
+ u16 cid = le16_to_cpu(ctx->in->common.command_id);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_NSID | RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW10_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ return nvme_mdev_event_request_receive(ctx->vctrl, cid);
+}
+
+/* (Dummy) response to the ABORT command */
+static int nvme_mdev_adm_handle_abort(struct adm_ctx *ctx)
+{
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_NSID | RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW10_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ return DNR(NVME_SC_ABORT_MISSING);
+}
+
+/* Process one new command in the admin queue */
+static int nvme_mdev_adm_handle_cmd(struct adm_ctx *ctx)
+{
+ u8 optcode = ctx->in->common.opcode;
+
+ ctx->ns = NULL;
+ ctx->datalen = 0;
+
+ if (ctx->in->common.flags != 0)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ switch (optcode) {
+ case nvme_admin_identify:
+ return nvme_mdev_adm_handle_id(ctx);
+ case nvme_admin_create_cq:
+ _DBG(ctx->vctrl, "ADMINQ: CREATE_CQ\n");
+ return nvme_mdev_adm_handle_create_cq(ctx);
+ case nvme_admin_create_sq:
+ _DBG(ctx->vctrl, "ADMINQ: CREATE_SQ\n");
+ return nvme_mdev_adm_handle_create_sq(ctx);
+ case nvme_admin_delete_sq:
+ _DBG(ctx->vctrl, "ADMINQ: DELETE_SQ\n");
+ return nvme_mdev_adm_handle_delete_sq(ctx);
+ case nvme_admin_delete_cq:
+ _DBG(ctx->vctrl, "ADMINQ: DELETE_CQ\n");
+ return nvme_mdev_adm_handle_delete_cq(ctx);
+ case nvme_admin_dbbuf:
+ _DBG(ctx->vctrl, "ADMINQ: DBBUF_CONFIG\n");
+ return nvme_mdev_adm_handle_dbbuf(ctx);
+ case nvme_admin_get_log_page:
+ return nvme_mdev_adm_handle_get_log_page(ctx);
+ case nvme_admin_get_features:
+ return nvme_mdev_adm_handle_get_features(ctx);
+ case nvme_admin_set_features:
+ return nvme_mdev_adm_handle_set_features(ctx);
+ case nvme_admin_async_event:
+ _DBG(ctx->vctrl, "ADMINQ: ASYNC_EVENT_REQ\n");
+ return nvme_mdev_adm_handle_async_event(ctx);
+ case nvme_admin_abort_cmd:
+ _DBG(ctx->vctrl, "ADMINQ: ABORT\n");
+ return nvme_mdev_adm_handle_abort(ctx);
+ default:
+ _DBG(ctx->vctrl, "ADMINQ: optcode 0x%04x\n", optcode);
+ return DNR(NVME_SC_INVALID_OPCODE);
+ }
+}
+
+/* Process all pending admin commands */
+void nvme_mdev_adm_process_sq(struct nvme_mdev_vctrl *vctrl)
+{
+ struct adm_ctx ctx;
+
+ lockdep_assert_held(&vctrl->lock);
+ memset(&ctx, 0, sizeof(struct adm_ctx));
+ ctx.vctrl = vctrl;
+ ctx.hctrl = vctrl->hctrl;
+ nvme_mdev_udata_iter_setup(&vctrl->viommu, &ctx.udatait);
+
+ nvme_mdev_io_pause(ctx.vctrl);
+
+ while (!(nvme_mdev_vctrl_is_dead(vctrl))) {
+ int ret;
+ u16 cid;
+
+ ctx.in = nvme_mdev_vsq_get_cmd(vctrl, &vctrl->vsqs[0]);
+ if (!ctx.in)
+ break;
+
+ cid = le16_to_cpu(ctx.in->common.command_id);
+ ret = nvme_mdev_adm_handle_cmd(&ctx);
+
+ if (ret == -1)
+ continue;
+
+ if (ret != 0)
+ _DBG(vctrl, "ADMINQ: CID 0x%x FAILED: status 0x%x\n",
+ cid, ret);
+ nvme_mdev_vsq_cmd_done_adm(vctrl, 0, cid, ret);
+ }
+ nvme_mdev_io_resume(ctx.vctrl);
+}
diff --git a/drivers/nvme/mdev/events.c b/drivers/nvme/mdev/events.c
new file mode 100644
index 000000000000..9854c1cabdcb
--- /dev/null
+++ b/drivers/nvme/mdev/events.c
@@ -0,0 +1,142 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * NVMe async events implementation (AER, changed namespace log)
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include "priv.h"
+
+/* Complete pending AER events on the admin queue */
+static void nvme_mdev_event_complete(struct nvme_mdev_vctrl *vctrl)
+{
+ u16 lid, cid;
+ u32 dw0;
+
+ for_each_set_bit(lid, vctrl->events.events_pending, MAX_LOG_PAGES) {
+		/* events are pending, but no AER requests are left */
+ if (vctrl->events.aer_cid_count == 0)
+ break;
+
+ if (!test_bit(lid, vctrl->events.events_enabled))
+ continue;
+
+ cid = vctrl->events.aer_cids[--vctrl->events.aer_cid_count];
+ dw0 = vctrl->events.event_values[lid];
+ clear_bit(lid, vctrl->events.events_pending);
+
+ _DBG(vctrl,
+ "AEN: replying to AER (CID=%d) with status 0x%08x\n",
+ cid, dw0);
+
+ nvme_mdev_vsq_cmd_done_adm(vctrl, dw0, cid, NVME_SC_SUCCESS);
+ }
+}
+
+/* Handle an Asynchronous Event Request received from the guest */
+int nvme_mdev_event_request_receive(struct nvme_mdev_vctrl *vctrl,
+ u16 cid)
+{
+ int cnt = vctrl->events.aer_cid_count;
+
+ if (cnt >= MAX_AER_COMMANDS)
+ return DNR(NVME_SC_ASYNC_LIMIT);
+
+	/* Don't queue more pending AERs than the admin completion queue
+	 * can hold, otherwise a completion could be lost permanently
+	 */
+ if ((cnt + 1) >= vctrl->vcqs[0].size - 1)
+ return DNR(NVME_SC_ASYNC_LIMIT);
+
+ vctrl->events.aer_cids[cnt++] = cid;
+ vctrl->events.aer_cid_count = cnt;
+
+ _DBG(vctrl, "AEN: received new request (cid=%d)\n", cid);
+ nvme_mdev_event_complete(vctrl);
+ return -1;
+}
+
+/* Raise an async event, completing a pending AER when possible */
+void nvme_mdev_event_send(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_async_event_type type,
+ enum nvme_async_event info)
+{
+ u8 log_page;
+ u32 event;
+
+	/* determine the log page for the event types that we support */
+ switch (type) {
+ case NVME_AER_TYPE_ERROR:
+ log_page = NVME_LOG_ERROR;
+ break;
+ case NVME_AER_TYPE_SMART:
+ log_page = NVME_LOG_SMART;
+ break;
+ case NVME_AER_TYPE_NOTICE:
+ WARN_ON(info != NVME_AER_NOTICE_NS_CHANGED);
+ log_page = NVME_LOG_CHANGED_NS;
+ break;
+ default:
+ WARN_ON(1);
+ return;
+ }
+
+ if (test_and_set_bit(log_page, vctrl->events.events_masked))
+ return;
+
+	event = (u32)type | ((u32)info << 8) | ((u32)log_page << 16);
+	vctrl->events.event_values[log_page] = event;
+	set_bit(log_page, vctrl->events.events_pending);
+ nvme_mdev_event_complete(vctrl);
+}
+
+u32 nvme_mdev_event_read_aen_config(struct nvme_mdev_vctrl *vctrl)
+{
+ u32 value = 0;
+
+ if (test_bit(NVME_LOG_CHANGED_NS, vctrl->events.events_enabled))
+ value |= NVME_AEN_CFG_NS_ATTR;
+ return value;
+}
+
+void nvme_mdev_event_set_aen_config(struct nvme_mdev_vctrl *vctrl, u32 value)
+{
+ _DBG(vctrl, "AEN: set config: 0x%04x\n", value);
+
+ if (value & NVME_AEN_CFG_NS_ATTR)
+ set_bit(NVME_LOG_CHANGED_NS, vctrl->events.events_enabled);
+ else
+ clear_bit(NVME_LOG_CHANGED_NS, vctrl->events.events_enabled);
+
+ nvme_mdev_event_complete(vctrl);
+}
+
+/* Called when the guest reads a log page, which unmasks its AER event */
+void nvme_mdev_event_process_ack(struct nvme_mdev_vctrl *vctrl, u8 log_page)
+{
+ lockdep_assert_held(&vctrl->lock);
+
+ _DBG(vctrl, "AEN: log page %d ACK\n", log_page);
+
+ if (log_page >= MAX_LOG_PAGES)
+ return;
+
+ clear_bit(log_page, vctrl->events.events_masked);
+ nvme_mdev_event_complete(vctrl);
+}
+
+/* Initialize event state, enabling the default events */
+void nvme_mdev_events_init(struct nvme_mdev_vctrl *vctrl)
+{
+ memset(&vctrl->events, 0, sizeof(vctrl->events));
+ set_bit(NVME_LOG_CHANGED_NS, vctrl->events.events_enabled);
+ set_bit(NVME_LOG_ERROR, vctrl->events.events_enabled);
+}
+
+/* Reset event state */
+void nvme_mdev_events_reset(struct nvme_mdev_vctrl *vctrl)
+{
+ memset(&vctrl->events, 0, sizeof(vctrl->events));
+}
+
diff --git a/drivers/nvme/mdev/host.c b/drivers/nvme/mdev/host.c
new file mode 100644
index 000000000000..d90275baf5f8
--- /dev/null
+++ b/drivers/nvme/mdev/host.c
@@ -0,0 +1,491 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * NVMe parent (host) device abstraction
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/nvme.h>
+#include <linux/mdev.h>
+#include <linux/module.h>
+#include "priv.h"
+
+static LIST_HEAD(nvme_mdev_hctrl_list);
+static DEFINE_MUTEX(nvme_mdev_hctrl_list_mutex);
+static struct nvme_mdev_inst_type **instance_types;
+
+unsigned int io_timeout_ms = 30000;
+module_param_named(io_timeout, io_timeout_ms, uint, 0644);
+MODULE_PARM_DESC(io_timeout,
+ "Maximum I/O command completion timeout (in msec)");
+
+unsigned int poll_timeout_ms = 500;
+module_param_named(poll_timeout, poll_timeout_ms, uint, 0644);
+MODULE_PARM_DESC(poll_timeout,
+ "Maximum idle time to keep polling (in msec) (0 - poll forever)");
+
+unsigned int admin_poll_rate_ms = 100;
+module_param_named(admin_poll_rate, admin_poll_rate_ms, uint, 0644);
+MODULE_PARM_DESC(admin_poll_rate,
+ "Admin queue polling rate (in msec) (used only when shadow doorbell is disabled)");
+
+bool use_shadow_doorbell = true;
+module_param(use_shadow_doorbell, bool, 0644);
+MODULE_PARM_DESC(use_shadow_doorbell,
+ "Enable the shadow doorbell NVMe extension");
+
+/* Create a new host controller */
+static struct nvme_mdev_hctrl *nvme_mdev_hctrl_create(struct nvme_ctrl *ctrl)
+{
+ struct nvme_mdev_hctrl *hctrl;
+ u32 max_lba_transfer;
+
+	/* TODOLATER: IO: support more page size configurations */
+ if (ctrl->page_size != PAGE_SIZE)
+ return NULL;
+
+ hctrl = kzalloc_node(sizeof(*hctrl), GFP_KERNEL,
+ dev_to_node(ctrl->dev));
+ if (!hctrl)
+ return NULL;
+
+ kref_init(&hctrl->ref);
+ mutex_init(&hctrl->lock);
+
+ hctrl->nvme_ctrl = ctrl;
+ nvme_get_ctrl(ctrl);
+
+ hctrl->oncs = ctrl->oncs &
+ (NVME_CTRL_ONCS_DSM | NVME_CTRL_ONCS_WRITE_ZEROES);
+
+ hctrl->id = ctrl->instance;
+ hctrl->node = dev_to_node(ctrl->dev);
+
+ max_lba_transfer = ctrl->max_hw_sectors >> (PAGE_SHIFT - 9);
+ hctrl->mdts = ilog2(__rounddown_pow_of_two(max_lba_transfer));
+
+ hctrl->nr_host_queues = ctrl->ops->ext_queues_available(ctrl);
+
+ mutex_lock(&nvme_mdev_hctrl_list_mutex);
+
+ dev_info(ctrl->dev,
+ "mediated nvme support enabled, using up to %d host queues\n",
+ hctrl->nr_host_queues);
+
+ list_add_tail(&hctrl->link, &nvme_mdev_hctrl_list);
+
+ mutex_unlock(&nvme_mdev_hctrl_list_mutex);
+
+	if (mdev_register_device(ctrl->dev, &mdev_fops) < 0) {
+		mutex_lock(&nvme_mdev_hctrl_list_mutex);
+		list_del(&hctrl->link);
+		mutex_unlock(&nvme_mdev_hctrl_list_mutex);
+		nvme_put_ctrl(ctrl);
+		kfree(hctrl);
+		return NULL;
+	}
+ return hctrl;
+}
+
+/* Release an unused host controller */
+static void nvme_mdev_hctrl_free(struct kref *ref)
+{
+ struct nvme_mdev_hctrl *hctrl =
+ container_of(ref, struct nvme_mdev_hctrl, ref);
+
+ dev_info(hctrl->nvme_ctrl->dev, "mediated nvme support disabled");
+
+ nvme_put_ctrl(hctrl->nvme_ctrl);
+ hctrl->nvme_ctrl = NULL;
+ kfree(hctrl);
+}
+
+/* Look up a host controller based on the mdev parent device */
+struct nvme_mdev_hctrl *nvme_mdev_hctrl_lookup_get(struct device *parent)
+{
+ struct nvme_mdev_hctrl *hctrl = NULL, *tmp;
+
+ mutex_lock(&nvme_mdev_hctrl_list_mutex);
+ list_for_each_entry(tmp, &nvme_mdev_hctrl_list, link) {
+ if (tmp->nvme_ctrl->dev == parent) {
+ hctrl = tmp;
+ kref_get(&hctrl->ref);
+ break;
+ }
+ }
+ mutex_unlock(&nvme_mdev_hctrl_list_mutex);
+ return hctrl;
+}
+
+/* Release a held reference to a host controller */
+void nvme_mdev_hctrl_put(struct nvme_mdev_hctrl *hctrl)
+{
+ kref_put(&hctrl->ref, nvme_mdev_hctrl_free);
+}
+
+/* Destroy a host controller. It may linger in a zombie state until the
+ * last reference to it is dropped
+ */
+static void nvme_mdev_hctrl_destroy(struct nvme_mdev_hctrl *hctrl)
+{
+ mutex_lock(&nvme_mdev_hctrl_list_mutex);
+ list_del(&hctrl->link);
+ mutex_unlock(&nvme_mdev_hctrl_list_mutex);
+
+ hctrl->removing = true;
+ mdev_unregister_device(hctrl->nvme_ctrl->dev);
+ nvme_mdev_hctrl_put(hctrl);
+}
+
+/* Check how many host queues are still available */
+int nvme_mdev_hctrl_hqs_available(struct nvme_mdev_hctrl *hctrl)
+{
+ int ret;
+
+ mutex_lock(&hctrl->lock);
+ ret = hctrl->nr_host_queues;
+ mutex_unlock(&hctrl->lock);
+ return ret;
+}
+
+/* Reserve N host IO queues, for later allocation to a specific user*/
+bool nvme_mdev_hctrl_hqs_reserve(struct nvme_mdev_hctrl *hctrl,
+ unsigned int n)
+{
+ mutex_lock(&hctrl->lock);
+
+ if (n > hctrl->nr_host_queues) {
+ mutex_unlock(&hctrl->lock);
+ return false;
+ }
+
+ hctrl->nr_host_queues -= n;
+ mutex_unlock(&hctrl->lock);
+ return true;
+}
+
+/* Free N host IO queues, for allocation for other users*/
+void nvme_mdev_hctrl_hqs_unreserve(struct nvme_mdev_hctrl *hctrl,
+ unsigned int n)
+{
+ mutex_lock(&hctrl->lock);
+ hctrl->nr_host_queues += n;
+ mutex_unlock(&hctrl->lock);
+}
+
+/* Allocate a host IO queue */
+int nvme_mdev_hctrl_hq_alloc(struct nvme_mdev_hctrl *hctrl)
+{
+ u16 qid = 0;
+ int ret = hctrl->nvme_ctrl->ops->ext_queue_alloc(hctrl->nvme_ctrl,
+ &qid);
+
+ if (ret)
+ return ret;
+ return qid;
+}
+
+/* Free a host IO queue */
+void nvme_mdev_hctrl_hq_free(struct nvme_mdev_hctrl *hctrl, u16 qid)
+{
+ hctrl->nvme_ctrl->ops->ext_queue_free(hctrl->nvme_ctrl, qid);
+}
+
+/* Check if we can submit another IO passthrough command */
+bool nvme_mdev_hctrl_hq_can_submit(struct nvme_mdev_hctrl *hctrl, u16 qid)
+{
+ return hctrl->nvme_ctrl->ops->ext_queue_full(hctrl->nvme_ctrl, qid);
+}
+
+/* Check if IO passthrough is supported for given IO optcode */
+bool nvme_mdev_hctrl_hq_check_op(struct nvme_mdev_hctrl *hctrl, u8 optcode)
+{
+ switch (optcode) {
+ case nvme_cmd_flush:
+ case nvme_cmd_read:
+ case nvme_cmd_write:
+ /* these are mandatory*/
+ return true;
+ case nvme_cmd_write_zeroes:
+ return (hctrl->oncs & NVME_CTRL_ONCS_WRITE_ZEROES);
+ case nvme_cmd_dsm:
+ return (hctrl->oncs & NVME_CTRL_ONCS_DSM);
+ default:
+ return false;
+ }
+}
+
+/* Submit an IO passthrough command */
+int nvme_mdev_hctrl_hq_submit(struct nvme_mdev_hctrl *hctrl,
+ u16 qid, u32 tag,
+ struct nvme_command *cmd,
+ struct nvme_ext_data_iter *datait)
+{
+ struct nvme_ctrl *ctrl = hctrl->nvme_ctrl;
+
+ return ctrl->ops->ext_queue_submit(ctrl, qid, tag, cmd, datait);
+}
+
+/* Poll for completion of IO passthrough commands */
+int nvme_mdev_hctrl_hq_poll(struct nvme_mdev_hctrl *hctrl,
+ u32 qid,
+ struct nvme_ext_cmd_result *results,
+ unsigned int max_len)
+{
+ struct nvme_ctrl *ctrl = hctrl->nvme_ctrl;
+
+ return ctrl->ops->ext_queue_poll(ctrl, qid, results, max_len);
+}
+
+/* Destroy all host controllers */
+void nvme_mdev_hctrl_destroy_all(void)
+{
+ struct nvme_mdev_hctrl *hctrl = NULL, *tmp;
+
+ list_for_each_entry_safe(hctrl, tmp, &nvme_mdev_hctrl_list, link) {
+ list_del(&hctrl->link);
+ hctrl->removing = true;
+ mdev_unregister_device(hctrl->nvme_ctrl->dev);
+ nvme_mdev_hctrl_put(hctrl);
+ }
+}
+
+/* Get the mdev instance type given its sysfs name */
+struct nvme_mdev_inst_type *nvme_mdev_inst_type_get(const char *name)
+{
+ int i;
+
+ for (i = 0; instance_types[i]; i++) {
+ const char *test =
+ name + strlen(name) - strlen(instance_types[i]->name);
+
+ if (strcmp(instance_types[i]->name, test) == 0)
+ return instance_types[i];
+ }
+ return NULL;
+}
+
+/* This shows name of the instance type */
+static ssize_t name_show(struct kobject *kobj, struct device *dev, char *buf)
+{
+ return sprintf(buf, "%s\n", kobj->name);
+}
+static MDEV_TYPE_ATTR_RO(name);
+
+/* This shows description of the instance type */
+static ssize_t description_show(struct kobject *kobj,
+ struct device *dev, char *buf)
+{
+ struct nvme_mdev_inst_type *type = nvme_mdev_inst_type_get(kobj->name);
+
+ return sprintf(buf,
+ "MDEV nvme device, using maximum %d hw submission queues\n",
+ type->max_hw_queues);
+}
+static MDEV_TYPE_ATTR_RO(description);
+
+/* This shows the device API of the instance type */
+static ssize_t device_api_show(struct kobject *kobj,
+ struct device *dev, char *buf)
+{
+ return sprintf(buf, "%s\n", VFIO_DEVICE_API_PCI_STRING);
+}
+static MDEV_TYPE_ATTR_RO(device_api);
+
+/* This shows how many instances of this instance type can be created */
+static ssize_t available_instances_show(struct kobject *kobj,
+ struct device *dev, char *buf)
+{
+ struct nvme_mdev_inst_type *type = nvme_mdev_inst_type_get(kobj->name);
+ struct nvme_mdev_hctrl *hctrl = nvme_mdev_hctrl_lookup_get(dev);
+ int count;
+
+ if (!hctrl)
+ return -ENODEV;
+
+ count = nvme_mdev_hctrl_hqs_available(hctrl);
+	count /= type->max_hw_queues;
+
+ nvme_mdev_hctrl_put(hctrl);
+ return sprintf(buf, "%d\n", count);
+}
+static MDEV_TYPE_ATTR_RO(available_instances);
+
+static struct attribute *nvme_mdev_types_attrs[] = {
+ &mdev_type_attr_name.attr,
+ &mdev_type_attr_description.attr,
+ &mdev_type_attr_device_api.attr,
+ &mdev_type_attr_available_instances.attr,
+ NULL,
+};
+
+/* Undo the creation of mdev array of instance types */
+static void nvme_mdev_instance_types_fini(struct mdev_parent_ops *ops)
+{
+ int i;
+
+ for (i = 0; instance_types[i]; i++) {
+ struct nvme_mdev_inst_type *type = instance_types[i];
+
+ kfree(type->attrgroup);
+ kfree(type);
+ }
+
+ kfree(instance_types);
+ instance_types = NULL;
+
+ kfree(ops->supported_type_groups);
+ ops->supported_type_groups = NULL;
+}
+
+/* Create the array of mdev instance types from our array of them */
+static int nvme_mdev_instance_types_init(struct mdev_parent_ops *ops)
+{
+ unsigned int i;
+ struct nvme_mdev_inst_type *type;
+ struct attribute_group *attrgroup;
+
+ ops->supported_type_groups = kzalloc(sizeof(struct attribute_group *)
+ * (MAX_HOST_QUEUES + 1), GFP_KERNEL);
+
+ if (!ops->supported_type_groups)
+ return -ENOMEM;
+
+	instance_types = kzalloc(sizeof(struct nvme_mdev_inst_type *)
+				 * (MAX_HOST_QUEUES + 1), GFP_KERNEL);
+
+ if (!instance_types) {
+ kfree(ops->supported_type_groups);
+ ops->supported_type_groups = NULL;
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < MAX_HOST_QUEUES; i++) {
+ type = kzalloc(sizeof(*type), GFP_KERNEL);
+ if (!type) {
+ nvme_mdev_instance_types_fini(ops);
+ return -ENOMEM;
+ }
+ snprintf(type->name, sizeof(type->name), "%dQ_V1", i + 1);
+ type->max_hw_queues = i + 1;
+
+ attrgroup = kzalloc(sizeof(*attrgroup), GFP_KERNEL);
+ if (!attrgroup) {
+ kfree(type);
+ nvme_mdev_instance_types_fini(ops);
+ return -ENOMEM;
+ }
+
+ attrgroup->attrs = nvme_mdev_types_attrs;
+ attrgroup->name = type->name;
+ type->attrgroup = attrgroup;
+ instance_types[i] = type;
+ ops->supported_type_groups[i] = attrgroup;
+ }
+ return 0;
+}
+
+/* Updates in host controller state*/
+static void nvme_mdev_nvme_ctrl_state_changed(struct nvme_ctrl *ctrl)
+{
+ struct nvme_mdev_hctrl *hctrl = nvme_mdev_hctrl_lookup_get(ctrl->dev);
+ struct nvme_mdev_vctrl *vctrl;
+
+ switch (ctrl->state) {
+ case NVME_CTRL_NEW:
+		/* do nothing, the new controller is not yet initialized */
+ break;
+
+	case NVME_CTRL_LIVE:
+		/* a new controller went live - create an mdev parent for it */
+		if (!hctrl) {
+			nvme_mdev_hctrl_create(ctrl);
+			return;
+		}
+		/* the controller is live again after reset/reconnect/suspend */
+		mutex_lock(&nvme_mdev_vctrl_list_mutex);
+		list_for_each_entry(vctrl, &nvme_mdev_vctrl_list, link)
+			if (vctrl->hctrl == hctrl)
+				nvme_mdev_vctrl_resume(vctrl);
+		mutex_unlock(&nvme_mdev_vctrl_list_mutex);
+		break;
+
+ case NVME_CTRL_RESETTING:
+ case NVME_CTRL_CONNECTING:
+ case NVME_CTRL_SUSPENDED:
+		/* controller is temporarily not usable, stop using its queues */
+ if (!hctrl)
+ return;
+
+ mutex_lock(&nvme_mdev_vctrl_list_mutex);
+ list_for_each_entry(vctrl, &nvme_mdev_vctrl_list, link)
+ if (vctrl->hctrl == hctrl)
+ nvme_mdev_vctrl_pause(vctrl);
+ mutex_unlock(&nvme_mdev_vctrl_list_mutex);
+ break;
+
+ case NVME_CTRL_DELETING:
+ case NVME_CTRL_DEAD:
+ case NVME_CTRL_ADMIN_ONLY:
+		/* host nvme controller is dead, remove it */
+ if (!hctrl)
+ return;
+ nvme_mdev_hctrl_destroy(hctrl);
+ break;
+ }
+	if (hctrl)
+		nvme_mdev_hctrl_put(hctrl);
+}
+
+/* A host namespace had its properties changed, or was removed */
+static void nvme_mdev_nvme_ctrl_ns_updated(struct nvme_ctrl *ctrl,
+ u32 nsid, bool removed)
+{
+ struct nvme_mdev_vctrl *vctrl;
+ struct nvme_mdev_hctrl *hctrl = nvme_mdev_hctrl_lookup_get(ctrl->dev);
+
+ if (!hctrl)
+ return;
+
+ mutex_lock(&nvme_mdev_vctrl_list_mutex);
+ list_for_each_entry(vctrl, &nvme_mdev_vctrl_list, link)
+ if (vctrl->hctrl == hctrl)
+ nvme_mdev_vns_host_ns_update(vctrl, nsid, removed);
+ mutex_unlock(&nvme_mdev_vctrl_list_mutex);
+ nvme_mdev_hctrl_put(hctrl);
+}
+
+static struct nvme_mdev_driver nvme_mdev_driver = {
+ .owner = THIS_MODULE,
+ .nvme_ctrl_state_changed = nvme_mdev_nvme_ctrl_state_changed,
+ .nvme_ns_state_changed = nvme_mdev_nvme_ctrl_ns_updated,
+};
+
+static int __init nvme_mdev_init(void)
+{
+ int ret;
+
+	ret = nvme_mdev_instance_types_init(&mdev_fops);
+	if (ret)
+		return ret;
+
+ ret = nvme_core_register_mdev_driver(&nvme_mdev_driver);
+ if (ret) {
+ nvme_mdev_instance_types_fini(&mdev_fops);
+ return ret;
+ }
+
+ pr_info("nvme_mdev " NVME_MDEV_FIRMWARE_VERSION " loaded\n");
+ return 0;
+}
+
+static void __exit nvme_mdev_exit(void)
+{
+ nvme_core_unregister_mdev_driver(&nvme_mdev_driver);
+ nvme_mdev_hctrl_destroy_all();
+ nvme_mdev_instance_types_fini(&mdev_fops);
+ pr_info("nvme_mdev unloaded\n");
+}
+
+MODULE_AUTHOR("Maxim Levitsky <mlevitsk@redhat.com>");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(NVME_MDEV_FIRMWARE_VERSION);
+
+module_init(nvme_mdev_init)
+module_exit(nvme_mdev_exit)
+
diff --git a/drivers/nvme/mdev/instance.c b/drivers/nvme/mdev/instance.c
new file mode 100644
index 000000000000..da523006aeda
--- /dev/null
+++ b/drivers/nvme/mdev/instance.c
@@ -0,0 +1,802 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Mediated NVMe instance VFIO code
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/vfio.h>
+#include <linux/sysfs.h>
+#include <linux/mdev.h>
+#include "priv.h"
+
+#define OFFSET_TO_REGION(offset) ((offset) >> 20)
+#define REGION_TO_OFFSET(nr) (((u64)nr) << 20)
+
+LIST_HEAD(nvme_mdev_vctrl_list);
+/*protects the list */
+DEFINE_MUTEX(nvme_mdev_vctrl_list_mutex);
+
+struct mdev_nvme_vfio_region_info {
+ struct vfio_region_info base;
+ struct vfio_region_info_cap_sparse_mmap mmap_cap;
+};
+
+/* User memory added */
+static int nvme_mdev_map_notifier(struct notifier_block *nb,
+ unsigned long action, void *data)
+{
+ struct vfio_iommu_type1_dma_map *map = data;
+ struct nvme_mdev_vctrl *vctrl =
+ container_of(nb, struct nvme_mdev_vctrl, vfio_map_notifier);
+
+ int ret = nvme_mdev_vctrl_viommu_map(vctrl, map->flags,
+ map->iova, map->size);
+	return notifier_from_errno(ret);
+}
+
+/* User memory removed */
+static int nvme_mdev_unmap_notifier(struct notifier_block *nb,
+ unsigned long action, void *data)
+{
+ struct nvme_mdev_vctrl *vctrl =
+ container_of(nb, struct nvme_mdev_vctrl, vfio_unmap_notifier);
+ struct vfio_iommu_type1_dma_unmap *unmap = data;
+
+ int ret = nvme_mdev_vctrl_viommu_unmap(vctrl, unmap->iova, unmap->size);
+
+ WARN_ON(ret <= 0);
+ return NOTIFY_OK;
+}
+
+/* Called when a new mediated device is created */
+static int nvme_mdev_ops_create(struct kobject *kobj, struct mdev_device *mdev)
+{
+ int ret = 0;
+ const struct nvme_mdev_inst_type *type = NULL;
+ struct nvme_mdev_vctrl *vctrl;
+ struct nvme_mdev_hctrl *hctrl = NULL;
+
+ hctrl = nvme_mdev_hctrl_lookup_get(mdev_parent_dev(mdev));
+ if (!hctrl)
+ return -ENODEV;
+
+	type = nvme_mdev_inst_type_get(kobj->name);
+	vctrl = nvme_mdev_vctrl_create(mdev, hctrl, type->max_hw_queues);
+
+	if (IS_ERR(vctrl)) {
+		nvme_mdev_hctrl_put(hctrl);
+		return PTR_ERR(vctrl);
+	}
+
+	mutex_lock(&nvme_mdev_vctrl_list_mutex);
+	list_add_tail(&vctrl->link, &nvme_mdev_vctrl_list);
+	mutex_unlock(&nvme_mdev_vctrl_list_mutex);
+
+	nvme_mdev_hctrl_put(hctrl);
+	return ret;
+}
+
+/* Called when a mediated device is removed */
+static int nvme_mdev_ops_remove(struct mdev_device *mdev)
+{
+ struct nvme_mdev_vctrl *vctrl = mdev_to_vctrl(mdev);
+
+ if (!vctrl)
+ return -ENODEV;
+ return nvme_mdev_vctrl_destroy(vctrl);
+}
+
+/* Called when a mediated device is opened by a user */
+static int nvme_mdev_ops_open(struct mdev_device *mdev)
+{
+ int ret;
+ unsigned long events;
+ struct nvme_mdev_vctrl *vctrl = mdev_to_vctrl(mdev);
+
+ if (!vctrl)
+ return -ENODEV;
+
+ ret = nvme_mdev_vctrl_open(vctrl);
+ if (ret)
+ return ret;
+
+ /* register unmap IOMMU notifier*/
+ vctrl->vfio_unmap_notifier.notifier_call = nvme_mdev_unmap_notifier;
+ events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
+
+ ret = vfio_register_notifier(mdev_dev(vctrl->mdev),
+ VFIO_IOMMU_NOTIFY, &events,
+ &vctrl->vfio_unmap_notifier);
+
+ if (ret != 0) {
+ nvme_mdev_vctrl_release(vctrl);
+ return ret;
+ }
+
+ /* register map IOMMU notifier*/
+ vctrl->vfio_map_notifier.notifier_call = nvme_mdev_map_notifier;
+ events = VFIO_IOMMU_NOTIFY_DMA_MAP;
+
+ ret = vfio_register_notifier(mdev_dev(vctrl->mdev),
+ VFIO_IOMMU_NOTIFY, &events,
+ &vctrl->vfio_map_notifier);
+
+ if (ret != 0) {
+ vfio_unregister_notifier(mdev_dev(vctrl->mdev),
+ VFIO_IOMMU_NOTIFY,
+ &vctrl->vfio_unmap_notifier);
+ nvme_mdev_vctrl_release(vctrl);
+ return ret;
+ }
+ return ret;
+}
+
+/* Called when a mediated device is closed (last close by the user) */
+static void nvme_mdev_ops_release(struct mdev_device *mdev)
+{
+ struct nvme_mdev_vctrl *vctrl = mdev_to_vctrl(mdev);
+ int ret;
+
+ ret = vfio_unregister_notifier(mdev_dev(vctrl->mdev),
+ VFIO_IOMMU_NOTIFY,
+ &vctrl->vfio_unmap_notifier);
+ WARN_ON(ret);
+
+ ret = vfio_unregister_notifier(mdev_dev(vctrl->mdev),
+ VFIO_IOMMU_NOTIFY,
+ &vctrl->vfio_map_notifier);
+ WARN_ON(ret);
+
+ nvme_mdev_vctrl_release(vctrl);
+}
+
+/* Helper function for bar/pci config read/write access */
+static ssize_t nvme_mdev_access(struct nvme_mdev_vctrl *vctrl,
+ char *buf, size_t count,
+ loff_t pos, bool is_write)
+{
+ int index = OFFSET_TO_REGION(pos);
+ int ret = -EINVAL;
+ unsigned int offset;
+
+ if (index >= VFIO_PCI_NUM_REGIONS || !vctrl->regions[index].rw)
+ goto out;
+
+ offset = pos - REGION_TO_OFFSET(index);
+ if (offset + count > vctrl->regions[index].size)
+ goto out;
+
+ ret = vctrl->regions[index].rw(vctrl, offset, buf, count, is_write);
+out:
+ return ret;
+}
+
+/* Called when read() is done on the device */
+static ssize_t nvme_mdev_ops_read(struct mdev_device *mdev, char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ unsigned int done = 0;
+ int ret;
+ struct nvme_mdev_vctrl *vctrl = mdev_to_vctrl(mdev);
+
+ if (!vctrl)
+ return -ENODEV;
+
+ while (count) {
+ size_t filled;
+
+ if (count >= 4 && !(*ppos % 4)) {
+ u32 val;
+
+ ret = nvme_mdev_access(vctrl, (char *)&val,
+ sizeof(val), *ppos, false);
+ if (ret <= 0)
+ goto read_err;
+
+ if (copy_to_user(buf, &val, sizeof(val)))
+ goto read_err;
+ filled = sizeof(val);
+ } else if (count >= 2 && !(*ppos % 2)) {
+ u16 val;
+
+ ret = nvme_mdev_access(vctrl, (char *)&val,
+ sizeof(val), *ppos, false);
+ if (ret <= 0)
+ goto read_err;
+ if (copy_to_user(buf, &val, sizeof(val)))
+ goto read_err;
+ filled = sizeof(val);
+ } else {
+ u8 val;
+
+ ret = nvme_mdev_access(vctrl, (char *)&val,
+ sizeof(val), *ppos, false);
+ if (ret <= 0)
+ goto read_err;
+ if (copy_to_user(buf, &val, sizeof(val)))
+ goto read_err;
+ filled = sizeof(val);
+ }
+
+ count -= filled;
+ done += filled;
+ *ppos += filled;
+ buf += filled;
+ }
+ return done;
+read_err:
+ return -EFAULT;
+}
+
+/* Called when write() is done on the device */
+static ssize_t nvme_mdev_ops_write(struct mdev_device *mdev,
+ const char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ unsigned int done = 0;
+ int ret;
+ struct nvme_mdev_vctrl *vctrl = mdev_to_vctrl(mdev);
+
+ if (!vctrl)
+ return -ENODEV;
+
+ while (count) {
+ size_t filled;
+
+ if (count >= 4 && !(*ppos % 4)) {
+ u32 val;
+
+ if (copy_from_user(&val, buf, sizeof(val)))
+ goto write_err;
+ ret = nvme_mdev_access(vctrl, (char *)&val,
+ sizeof(val), *ppos, true);
+ if (ret <= 0)
+ goto write_err;
+ filled = sizeof(val);
+ } else if (count >= 2 && !(*ppos % 2)) {
+ u16 val;
+
+ if (copy_from_user(&val, buf, sizeof(val)))
+ goto write_err;
+
+ ret = nvme_mdev_access(vctrl, (char *)&val,
+ sizeof(val), *ppos, true);
+ if (ret <= 0)
+ goto write_err;
+ filled = sizeof(val);
+ } else {
+ u8 val;
+
+ if (copy_from_user(&val, buf, sizeof(val)))
+ goto write_err;
+ ret = nvme_mdev_access(vctrl, (char *)&val,
+ sizeof(val), *ppos, true);
+ if (ret <= 0)
+ goto write_err;
+ filled = sizeof(val);
+ }
+ count -= filled;
+ done += filled;
+ *ppos += filled;
+ buf += filled;
+ }
+ return done;
+write_err:
+ return -EFAULT;
+}
+
+/* Helper for the VFIO IRQ count query */
+static int nvme_mdev_irq_counts(struct nvme_mdev_vctrl *vctrl,
+ unsigned int irq_type)
+{
+ switch (irq_type) {
+ case VFIO_PCI_INTX_IRQ_INDEX:
+ return 1;
+ case VFIO_PCI_MSIX_IRQ_INDEX:
+ return MAX_VIRTUAL_IRQS;
+ case VFIO_PCI_REQ_IRQ_INDEX:
+ return 1;
+ default:
+ return 0;
+ }
+}
+
+/* VFIO VFIO_IRQ_SET_ACTION_TRIGGER implementation */
+static int nvme_mdev_ioctl_set_irqs_trigger(struct nvme_mdev_vctrl *vctrl,
+ u32 flags,
+ unsigned int irq_type,
+ unsigned int start,
+ unsigned int count,
+ void *data)
+{
+ u32 data_type = flags & VFIO_IRQ_SET_DATA_TYPE_MASK;
+ u8 *bools = NULL;
+ unsigned int i;
+ int ret = -EINVAL;
+
+ /* Asked to disable the current interrupt mode*/
+ if (data_type == VFIO_IRQ_SET_DATA_NONE && count == 0) {
+ switch (irq_type) {
+ case VFIO_PCI_REQ_IRQ_INDEX:
+ nvme_mdev_irqs_set_unplug_trigger(vctrl, -1);
+ return 0;
+ case VFIO_PCI_INTX_IRQ_INDEX:
+ nvme_mdev_irqs_disable(vctrl, NVME_MDEV_IMODE_INTX);
+ return 0;
+ case VFIO_PCI_MSIX_IRQ_INDEX:
+ nvme_mdev_irqs_disable(vctrl, NVME_MDEV_IMODE_MSIX);
+ return 0;
+ default:
+ return -EINVAL;
+ }
+ }
+
+ if (start + count > nvme_mdev_irq_counts(vctrl, irq_type))
+ return -EINVAL;
+
+ switch (data_type) {
+ case VFIO_IRQ_SET_DATA_BOOL:
+ bools = (u8 *)data;
+ /*fallthrough*/
+ case VFIO_IRQ_SET_DATA_NONE:
+ if (irq_type == VFIO_PCI_REQ_IRQ_INDEX)
+ return -EINVAL;
+
+ for (i = 0 ; i < count ; i++) {
+ int index = start + i;
+
+ if (!bools || bools[i])
+ nvme_mdev_irq_trigger(vctrl, index);
+ }
+ return 0;
+
+ case VFIO_IRQ_SET_DATA_EVENTFD:
+ switch (irq_type) {
+ case VFIO_PCI_REQ_IRQ_INDEX:
+ return nvme_mdev_irqs_set_unplug_trigger(vctrl,
+ *(int32_t *)data);
+ case VFIO_PCI_INTX_IRQ_INDEX:
+ ret = nvme_mdev_irqs_enable(vctrl,
+ NVME_MDEV_IMODE_INTX);
+ break;
+ case VFIO_PCI_MSIX_IRQ_INDEX:
+ ret = nvme_mdev_irqs_enable(vctrl,
+ NVME_MDEV_IMODE_MSIX);
+ break;
+ default:
+ return -EINVAL;
+ }
+ if (ret)
+ return ret;
+
+ return nvme_mdev_irqs_set_triggers(vctrl, start,
+ count, (int32_t *)data);
+ default:
+ return -EINVAL;
+ }
+}
+
+/* VFIO_DEVICE_GET_INFO ioctl implementation */
+static int nvme_mdev_ioctl_get_info(struct nvme_mdev_vctrl *vctrl,
+ void __user *arg)
+{
+ struct vfio_device_info info;
+ unsigned int minsz = offsetofend(struct vfio_device_info, num_irqs);
+
+ if (copy_from_user(&info, (void __user *)arg, minsz))
+ return -EFAULT;
+ if (info.argsz < minsz)
+ return -EINVAL;
+
+ info.flags = VFIO_DEVICE_FLAGS_PCI | VFIO_DEVICE_FLAGS_RESET;
+ info.num_regions = VFIO_PCI_NUM_REGIONS;
+ info.num_irqs = VFIO_PCI_NUM_IRQS;
+
+ if (copy_to_user(arg, &info, minsz))
+ return -EFAULT;
+ return 0;
+}
+
+/* VFIO_DEVICE_GET_REGION_INFO ioctl implementation*/
+static int nvme_mdev_ioctl_get_reg_info(struct nvme_mdev_vctrl *vctrl,
+ void __user *arg)
+{
+ struct nvme_mdev_io_region *region;
+ struct mdev_nvme_vfio_region_info *info;
+ unsigned long minsz, outsz, maxsz;
+ int ret = 0;
+
+ minsz = offsetofend(struct vfio_region_info, offset);
+ maxsz = sizeof(struct mdev_nvme_vfio_region_info) +
+ sizeof(struct vfio_region_sparse_mmap_area);
+
+ info = kzalloc(maxsz, GFP_KERNEL);
+ if (!info)
+ return -ENOMEM;
+
+ if (copy_from_user(info, arg, minsz)) {
+ ret = -EFAULT;
+ goto out;
+ }
+
+ outsz = info->base.argsz;
+ if (outsz < minsz || outsz > maxsz) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ if (info->base.index >= VFIO_PCI_NUM_REGIONS) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ region = &vctrl->regions[info->base.index];
+ info->base.offset = REGION_TO_OFFSET(info->base.index);
+ info->base.argsz = maxsz;
+ info->base.size = region->size;
+
+ info->base.flags = VFIO_REGION_INFO_FLAG_READ |
+ VFIO_REGION_INFO_FLAG_WRITE;
+
+ if (region->mmap_ops) {
+ info->base.flags |= (VFIO_REGION_INFO_FLAG_MMAP |
+ VFIO_REGION_INFO_FLAG_CAPS);
+
+ info->base.cap_offset =
+ offsetof(struct mdev_nvme_vfio_region_info, mmap_cap);
+
+ info->mmap_cap.header.id = VFIO_REGION_INFO_CAP_SPARSE_MMAP;
+ info->mmap_cap.header.version = 1;
+ info->mmap_cap.header.next = 0;
+ info->mmap_cap.nr_areas = 1;
+ info->mmap_cap.areas[0].offset = region->mmap_area_start;
+ info->mmap_cap.areas[0].size = region->mmap_area_size;
+ }
+
+ if (copy_to_user(arg, info, outsz))
+ ret = -EFAULT;
+out:
+ kfree(info);
+ return ret;
+}
+
+/* VFIO_DEVICE_GET_IRQ_INFO ioctl implementation */
+static int nvme_mdev_ioctl_get_irq_info(struct nvme_mdev_vctrl *vctrl,
+ void __user *arg)
+{
+ struct vfio_irq_info info;
+ unsigned int minsz = offsetofend(struct vfio_irq_info, count);
+
+ if (copy_from_user(&info, arg, minsz))
+ return -EFAULT;
+ if (info.argsz < minsz)
+ return -EINVAL;
+
+ info.count = nvme_mdev_irq_counts(vctrl, info.index);
+ info.flags = VFIO_IRQ_INFO_EVENTFD;
+
+ if (info.index == VFIO_PCI_INTX_IRQ_INDEX)
+ info.flags |= VFIO_IRQ_INFO_MASKABLE | VFIO_IRQ_INFO_AUTOMASKED;
+
+ if (copy_to_user(arg, &info, minsz))
+ return -EFAULT;
+ return 0;
+}
+
+/* VFIO_DEVICE_SET_IRQS ioctl implementation */
+static int nvme_mdev_ioctl_set_irqs(struct nvme_mdev_vctrl *vctrl,
+ void __user *arg)
+{
+ int ret, irqcount;
+ struct vfio_irq_set hdr;
+ u8 *data = NULL;
+ size_t data_size = 0;
+ unsigned long minsz = offsetofend(struct vfio_irq_set, count);
+
+ if (copy_from_user(&hdr, arg, minsz))
+ return -EFAULT;
+
+ irqcount = nvme_mdev_irq_counts(vctrl, hdr.index);
+ ret = vfio_set_irqs_validate_and_prepare(&hdr,
+ irqcount,
+ VFIO_PCI_NUM_IRQS,
+ &data_size);
+ if (ret)
+ return ret;
+
+ if (data_size) {
+ data = memdup_user((arg + minsz), data_size);
+ if (IS_ERR(data))
+ return PTR_ERR(data);
+ }
+
+ ret = -ENOTTY;
+ switch (hdr.index) {
+ case VFIO_PCI_INTX_IRQ_INDEX:
+ case VFIO_PCI_MSIX_IRQ_INDEX:
+ case VFIO_PCI_REQ_IRQ_INDEX:
+ switch (hdr.flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
+ case VFIO_IRQ_SET_ACTION_MASK:
+ case VFIO_IRQ_SET_ACTION_UNMASK:
+ // pretend to support this (even with eventfd)
+ ret = hdr.index == VFIO_PCI_INTX_IRQ_INDEX ?
+ 0 : -EINVAL;
+ break;
+ case VFIO_IRQ_SET_ACTION_TRIGGER:
+ ret = nvme_mdev_ioctl_set_irqs_trigger(vctrl, hdr.flags,
+ hdr.index,
+ hdr.start,
+ hdr.count,
+ data);
+ break;
+ }
+ break;
+ }
+
+ kfree(data);
+ return ret;
+}
+
+/* ioctl() implementation */
+static long nvme_mdev_ops_ioctl(struct mdev_device *mdev, unsigned int cmd,
+ unsigned long arg)
+{
+ struct nvme_mdev_vctrl *vctrl = mdev_get_drvdata(mdev);
+
+ if (!vctrl)
+ return -ENODEV;
+
+ switch (cmd) {
+ case VFIO_DEVICE_GET_INFO:
+ return nvme_mdev_ioctl_get_info(vctrl, (void __user *)arg);
+ case VFIO_DEVICE_GET_REGION_INFO:
+ return nvme_mdev_ioctl_get_reg_info(vctrl, (void __user *)arg);
+ case VFIO_DEVICE_GET_IRQ_INFO:
+ return nvme_mdev_ioctl_get_irq_info(vctrl, (void __user *)arg);
+ case VFIO_DEVICE_SET_IRQS:
+ return nvme_mdev_ioctl_set_irqs(vctrl, (void __user *)arg);
+ case VFIO_DEVICE_RESET:
+ nvme_mdev_vctrl_reset(vctrl);
+ return 0;
+ default:
+ return -ENOTTY;
+ }
+}
+
+/* mmap() implementation (doorbell area) */
+static int nvme_mdev_ops_mmap(struct mdev_device *mdev,
+ struct vm_area_struct *vma)
+{
+ struct nvme_mdev_vctrl *vctrl = mdev_get_drvdata(mdev);
+ int index = OFFSET_TO_REGION((u64)vma->vm_pgoff << PAGE_SHIFT);
+ unsigned long size, start;
+
+ if (!vctrl)
+ return -EFAULT;
+
+ if (index >= VFIO_PCI_NUM_REGIONS || !vctrl->regions[index].mmap_ops)
+ return -EINVAL;
+
+ if (vma->vm_end < vma->vm_start)
+ return -EINVAL;
+
+ size = vma->vm_end - vma->vm_start;
+ start = vma->vm_pgoff << PAGE_SHIFT;
+
+ if (start < vctrl->regions[index].mmap_area_start)
+ return -EINVAL;
+ if (size > vctrl->regions[index].mmap_area_size)
+ return -EINVAL;
+
+ if ((vma->vm_flags & VM_SHARED) == 0)
+ return -EINVAL;
+
+ vma->vm_ops = vctrl->regions[index].mmap_ops;
+ vma->vm_private_data = vctrl;
+ return 0;
+}
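A note on the offset math used by mmap() and the region-info ioctl above: the REGION_TO_OFFSET()/OFFSET_TO_REGION() helpers are defined elsewhere in this patch, so as a rough sketch, here is the vfio-pci-style convention of packing the region index into the upper bits of the file offset. The 40-bit shift is an assumption borrowed from vfio-pci, not taken from this hunk.

```c
#include <stdint.h>

/* Sketch only: assumed vfio-pci-style encoding, the real
 * REGION_TO_OFFSET()/OFFSET_TO_REGION() macros live elsewhere in the patch.
 */
#define MDEV_REGION_SHIFT 40

static inline uint64_t region_to_offset(uint32_t index)
{
	/* region index occupies the bits above the per-region offset */
	return (uint64_t)index << MDEV_REGION_SHIFT;
}

static inline uint32_t offset_to_region(uint64_t offset)
{
	/* recover the region index from an mmap/file offset */
	return (uint32_t)(offset >> MDEV_REGION_SHIFT);
}
```

With this encoding, an offset anywhere inside a region still maps back to the owning region index.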
+
+/* Request removal of the device */
+static void nvme_mdev_ops_request(struct mdev_device *mdev, unsigned int count)
+{
+ struct nvme_mdev_vctrl *vctrl = mdev_get_drvdata(mdev);
+
+ if (vctrl)
+ nvme_mdev_irq_raise_unplug_event(vctrl, count);
+}
+
+/* Add a new namespace, given a host NS id and an optional partition
+ * number (e.g. n1p2 or n1)
+ */
+static ssize_t add_namespace_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct nvme_mdev_vctrl *vctrl = dev_to_vctrl(dev);
+ int ret;
+ unsigned long partno = 0, nsid;
+ char *buf_copy, *token, *tmp;
+
+ if (!vctrl)
+ return -ENODEV;
+
+ buf_copy = kstrdup(buf, GFP_KERNEL);
+ if (!buf_copy)
+ return -ENOMEM;
+
+ tmp = buf_copy;
+ if (tmp[0] != 'n') {
+ ret = -EINVAL;
+ goto out;
+ }
+ tmp++;
+
+ // read namespace ID (mandatory)
+ token = strsep(&tmp, "p");
+ if (!token) {
+ ret = -EINVAL;
+ goto out;
+ }
+ ret = kstrtoul(token, 10, &nsid);
+ if (ret)
+ goto out;
+
+ // read partition ID (optional)
+ if (tmp) {
+ ret = kstrtoul(tmp, 10, &partno);
+ if (ret)
+ goto out;
+ }
+
+ // create the user namespace
+ ret = nvme_mdev_vns_open(vctrl, nsid, partno);
+ if (ret)
+ goto out;
+ ret = count;
+out:
+ kfree(buf_copy);
+ return ret;
+}
+static DEVICE_ATTR_WO(add_namespace);
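The "n&lt;nsid&gt;[p&lt;partno&gt;]" syntax accepted by add_namespace_store() can be sketched in plain userspace C as follows. This is an illustrative re-implementation of the parsing rules only (strtoul instead of kstrtoul), not code from the patch.

```c
#include <stdlib.h>

/* Illustrative sketch of the sysfs syntax: "n<nsid>" with an optional
 * "p<partno>" suffix, e.g. "n1p2" or "n1". Returns 0 on success. */
static int parse_ns_spec(const char *s, unsigned long *nsid,
			 unsigned long *partno)
{
	char *end;

	*partno = 0;
	if (*s++ != 'n')		/* leading 'n' is mandatory */
		return -1;
	*nsid = strtoul(s, &end, 10);	/* namespace ID (mandatory) */
	if (end == s)
		return -1;
	if (*end == '\0')		/* no partition suffix */
		return 0;
	if (*end != 'p')
		return -1;
	s = end + 1;
	*partno = strtoul(s, &end, 10);	/* partition number (optional) */
	if (end == s || *end != '\0')
		return -1;
	return 0;
}
```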
+
+/* Remove a user namespace */
+static ssize_t remove_namespace_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ unsigned long user_nsid;
+ int ret;
+ struct nvme_mdev_vctrl *vctrl = dev_to_vctrl(dev);
+
+ if (!vctrl)
+ return -ENODEV;
+
+ ret = kstrtoul(buf, 10, &user_nsid);
+ if (ret)
+ return ret;
+
+ ret = nvme_mdev_vns_destroy(vctrl, user_nsid);
+ if (ret)
+ return ret;
+ return count;
+}
+static DEVICE_ATTR_WO(remove_namespace);
+
+/* Show list of user namespaces */
+static ssize_t namespaces_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvme_mdev_vctrl *vctrl = dev_to_vctrl(dev);
+
+ if (!vctrl)
+ return -ENODEV;
+ return nvme_mdev_vns_print_description(vctrl, buf, PAGE_SIZE - 1);
+}
+static DEVICE_ATTR_RO(namespaces);
+
+/* Change the CPU binding of the IO thread */
+static ssize_t iothread_cpu_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ unsigned long val;
+ int ret;
+ struct nvme_mdev_vctrl *vctrl = dev_to_vctrl(dev);
+
+ if (!vctrl)
+ return -ENODEV;
+ ret = kstrtoul(buf, 10, &val);
+ if (ret)
+ return ret;
+ nvme_mdev_vctrl_bind_iothread(vctrl, val);
+ return count;
+}
+
+/* Show the CPU binding of the IO thread */
+static ssize_t
+iothread_cpu_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct nvme_mdev_vctrl *vctrl = dev_to_vctrl(dev);
+
+ if (!vctrl)
+ return -ENODEV;
+ return sprintf(buf, "%d\n", vctrl->iothread_cpu);
+}
+static DEVICE_ATTR_RW(iothread_cpu);
+
+/* Enable/disable shadow doorbell support */
+static ssize_t shadow_doorbell_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ bool val;
+ int ret;
+ struct nvme_mdev_vctrl *vctrl = dev_to_vctrl(dev);
+
+ if (!vctrl)
+ return -ENODEV;
+ ret = kstrtobool(buf, &val);
+ if (ret)
+ return ret;
+ ret = nvme_mdev_vctrl_set_shadow_doorbell_supported(vctrl, val);
+ if (ret)
+ return ret;
+ return count;
+}
+
+/* Show whether shadow doorbell support is enabled */
+static ssize_t shadow_doorbell_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvme_mdev_vctrl *vctrl = dev_to_vctrl(dev);
+
+ if (!vctrl)
+ return -ENODEV;
+
+ return sprintf(buf, "%d\n", vctrl->mmio.shadow_db_supported ? 1 : 0);
+}
+static DEVICE_ATTR_RW(shadow_doorbell);
+
+static struct attribute *nvme_mdev_dev_ns_attributes[] = {
+ &dev_attr_add_namespace.attr,
+ &dev_attr_remove_namespace.attr,
+ &dev_attr_namespaces.attr,
+ NULL
+};
+
+static struct attribute *nvme_mdev_dev_settings_attributes[] = {
+ &dev_attr_iothread_cpu.attr,
+ &dev_attr_shadow_doorbell.attr,
+ NULL
+};
+
+static const struct attribute_group nvme_mdev_ns_attr_group = {
+ .name = "namespaces",
+ .attrs = nvme_mdev_dev_ns_attributes,
+};
+
+static const struct attribute_group nvme_mdev_setting_attr_group = {
+ .name = "settings",
+ .attrs = nvme_mdev_dev_settings_attributes,
+};
+
+static const struct attribute_group *nvme_mdev_dev_attribute_groups[] = {
+ &nvme_mdev_ns_attr_group,
+ &nvme_mdev_setting_attr_group,
+ NULL,
+};
+
+struct mdev_parent_ops mdev_fops = {
+ .owner = THIS_MODULE,
+ .create = nvme_mdev_ops_create,
+ .remove = nvme_mdev_ops_remove,
+ .open = nvme_mdev_ops_open,
+ .release = nvme_mdev_ops_release,
+ .read = nvme_mdev_ops_read,
+ .write = nvme_mdev_ops_write,
+ .mmap = nvme_mdev_ops_mmap,
+ .ioctl = nvme_mdev_ops_ioctl,
+ .request = nvme_mdev_ops_request,
+ .mdev_attr_groups = nvme_mdev_dev_attribute_groups,
+ .dev_attr_groups = NULL,
+};
+
diff --git a/drivers/nvme/mdev/io.c b/drivers/nvme/mdev/io.c
new file mode 100644
index 000000000000..a731196d0365
--- /dev/null
+++ b/drivers/nvme/mdev/io.c
@@ -0,0 +1,563 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * NVMe IO command translation and polling IO thread
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/kernel.h>
+#include <linux/kthread.h>
+#include <linux/slab.h>
+#include <linux/nvme.h>
+#include <linux/timekeeping.h>
+#include <linux/ktime.h>
+#include "priv.h"
+
+struct io_ctx {
+ struct nvme_mdev_hctrl *hctrl;
+ struct nvme_mdev_vctrl *vctrl;
+
+ const struct nvme_command *in;
+ struct nvme_command out;
+ struct nvme_mdev_vns *ns;
+ struct nvme_ext_data_iter udatait;
+ struct nvme_ext_data_iter *kdatait;
+
+ ktime_t last_io_t;
+ ktime_t last_admin_poll_time;
+ unsigned int idle_timeout_ms;
+ unsigned int admin_poll_rate_ms;
+ unsigned int arb_burst;
+};
+
+/* Handle a read/write command */
+static int nvme_mdev_io_translate_rw(struct io_ctx *ctx)
+{
+ int ret;
+ const struct nvme_rw_command *in = &ctx->in->rw;
+
+ u64 slba = le64_to_cpu(in->slba);
+ u64 length = le16_to_cpu(in->length) + 1;
+ u16 control = le16_to_cpu(in->control);
+
+ _DBG(ctx->vctrl, "IOQ: READ/WRITE\n");
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_MPTR | RSRV_DW14_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16, 0b1100000000111100))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (in->opcode == nvme_cmd_write && ctx->ns->readonly)
+ return DNR(NVME_SC_READ_ONLY);
+
+ if (!check_range(slba, length, ctx->ns->ns_size))
+ return DNR(NVME_SC_LBA_RANGE);
+
+ ctx->out.rw.slba = cpu_to_le64(slba + ctx->ns->host_lba_offset);
+ ctx->out.rw.length = in->length;
+
+ ret = nvme_mdev_udata_iter_set_dptr(&ctx->udatait, &in->dptr,
+ length << ctx->ns->blksize_shift);
+ if (ret)
+ return nvme_mdev_translate_error(ret);
+
+ ctx->kdatait = &ctx->udatait;
+ if (control & ~(NVME_RW_LR | NVME_RW_FUA))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ ctx->out.rw.control = in->control;
+ return -1;
+}
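The translation above hinges on two details: the NVMe NLB field is zero-based (length = NLB + 1), and the guest LBA must be shifted by the partition's start offset on the host. A minimal standalone sketch of that arithmetic (plain C, not code from the patch):

```c
#include <stdint.h>

struct rw_xlat {
	uint64_t host_slba;	/* LBA as submitted to the host controller */
	uint64_t nblocks;	/* real block count */
};

/* nlb0 is the zero-based NLB field from the command; ns_size is the
 * virtual namespace size in blocks. Returns -1 on LBA range violation. */
static int xlat_rw(uint64_t slba, uint16_t nlb0, uint64_t ns_size,
		   uint64_t host_lba_offset, struct rw_xlat *out)
{
	uint64_t nblocks = (uint64_t)nlb0 + 1;	/* NLB is zero-based */

	if (slba + nblocks < slba || slba + nblocks > ns_size)
		return -1;			/* NVME_SC_LBA_RANGE */
	out->host_slba = slba + host_lba_offset;
	out->nblocks = nblocks;
	return 0;
}
```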
+
+/* Handle a flush command */
+static int nvme_mdev_io_translate_flush(struct io_ctx *ctx)
+{
+ ctx->kdatait = NULL;
+
+ _DBG(ctx->vctrl, "IOQ: FLUSH\n");
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW10_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (ctx->ns->readonly)
+ return DNR(NVME_SC_READ_ONLY);
+
+ return -1;
+}
+
+/* Handle a write zeroes command */
+static int nvme_mdev_io_translate_write_zeros(struct io_ctx *ctx)
+{
+ const struct nvme_write_zeroes_cmd *in = &ctx->in->write_zeroes;
+ u64 slba = le64_to_cpu(in->slba);
+ u64 length = le16_to_cpu(in->length) + 1;
+ u16 control = le16_to_cpu(in->control);
+
+ _DBG(ctx->vctrl, "IOQ: WRITE_ZEROS\n");
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW13_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (!nvme_mdev_hctrl_hq_check_op(ctx->hctrl, in->opcode))
+ return DNR(NVME_SC_INVALID_OPCODE);
+
+ if (ctx->ns->readonly)
+ return DNR(NVME_SC_READ_ONLY);
+ ctx->kdatait = NULL;
+
+ if (!check_range(slba, length, ctx->ns->ns_size))
+ return DNR(NVME_SC_LBA_RANGE);
+
+ ctx->out.write_zeroes.slba =
+ cpu_to_le64(slba + ctx->ns->host_lba_offset);
+ ctx->out.write_zeroes.length = in->length;
+
+ if (control & ~(NVME_RW_LR | NVME_RW_FUA | NVME_WZ_DEAC))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ ctx->out.write_zeroes.control = in->control;
+ return -1;
+}
+
+/* Handle dataset management command */
+static int nvme_mdev_io_translate_dsm(struct io_ctx *ctx)
+{
+ unsigned int size, i, nr;
+ int ret;
+ const struct nvme_dsm_cmd *in = &ctx->in->dsm;
+ struct nvme_dsm_range *data_ptr;
+
+ _DBG(ctx->vctrl, "IOQ: DSM_MANAGEMENT\n");
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_MPTR | RSRV_DW12_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (le32_to_cpu(in->nr) & 0xFFFFFF00)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (!nvme_mdev_hctrl_hq_check_op(ctx->hctrl, in->opcode))
+ return DNR(NVME_SC_INVALID_OPCODE);
+
+ if (ctx->ns->readonly)
+ return DNR(NVME_SC_READ_ONLY);
+
+ nr = le32_to_cpu(in->nr) + 1;
+ size = nr * sizeof(struct nvme_dsm_range);
+
+ ctx->out.dsm.nr = in->nr;
+ ret = nvme_mdev_udata_iter_set_dptr(&ctx->udatait, &in->dptr, size);
+ if (ret)
+ goto error;
+
+ ctx->kdatait = nvme_mdev_kdata_iter_alloc(&ctx->vctrl->viommu, size);
+ if (!ctx->kdatait)
+ return NVME_SC_INTERNAL;
+
+ _DBG(ctx->vctrl, "IOQ: DSM_MANAGEMENT: NR=%d\n", nr);
+
+ ret = nvme_mdev_read_from_udata(ctx->kdatait->kmem.data, &ctx->udatait,
+ size);
+ if (ret)
+ goto error2;
+
+ data_ptr = (struct nvme_dsm_range *)ctx->kdatait->kmem.data;
+
+ for (i = 0 ; i < nr; i++) {
+ u64 slba = le64_to_cpu(data_ptr[i].slba);
+		/* unlike the NLB in read/write, this value is not zero-based */
+		u32 nlb = le32_to_cpu(data_ptr[i].nlb);
+
+		if (!check_range(slba, nlb, ctx->ns->ns_size)) {
+			ctx->kdatait->release(ctx->kdatait);
+			return DNR(NVME_SC_LBA_RANGE);
+		}
+
+ _DBG(ctx->vctrl, "IOQ: DSM_MANAGEMENT: RANGE 0x%llx-0x%x\n",
+ slba, nlb);
+
+ data_ptr[i].slba = cpu_to_le64(slba + ctx->ns->host_lba_offset);
+ }
+
+ ctx->out.dsm.attributes = in->attributes;
+ return -1;
+error2:
+ ctx->kdatait->release(ctx->kdatait);
+error:
+ return nvme_mdev_translate_error(ret);
+}
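Each 16-byte DSM range entry carries its own starting LBA, so the same bounds check and host-offset shift as in the read/write path is applied per range. A self-contained sketch of that per-range translation (illustrative only, host-endian fields instead of the on-wire little-endian ones):

```c
#include <stdint.h>

/* Host-endian stand-in for struct nvme_dsm_range (cattr, nlb, slba). */
struct dsm_range {
	uint32_t cattr;
	uint32_t nlb;	/* plain block count, not zero-based */
	uint64_t slba;
};

/* Validate one range against the virtual namespace and rebase its LBA
 * onto the host partition. Returns -1 on a range violation. */
static int dsm_xlat(struct dsm_range *r, uint64_t ns_size, uint64_t host_off)
{
	if (r->slba + r->nlb < r->slba || r->slba + r->nlb > ns_size)
		return -1;
	r->slba += host_off;
	return 0;
}
```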
+
+/* Process one new command in the IO queue */
+static int nvme_mdev_io_translate_cmd(struct io_ctx *ctx)
+{
+ memset(&ctx->out, 0, sizeof(ctx->out));
+ /* translate opcode */
+ ctx->out.common.opcode = ctx->in->common.opcode;
+
+ /* check flags */
+ if (ctx->in->common.flags != 0)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+	/* namespace */
+ ctx->ns = nvme_mdev_vns_from_vnsid(ctx->vctrl,
+ le32_to_cpu(ctx->in->rw.nsid));
+ if (!ctx->ns) {
+ _DBG(ctx->vctrl, "IOQ: invalid NSID\n");
+ return DNR(NVME_SC_INVALID_NS);
+ }
+
+ if (!ctx->ns->readonly && bdev_read_only(ctx->ns->host_part))
+ ctx->ns->readonly = true;
+
+ ctx->out.common.nsid = cpu_to_le32(ctx->ns->host_nsid);
+
+ switch (ctx->in->common.opcode) {
+ case nvme_cmd_flush:
+ return nvme_mdev_io_translate_flush(ctx);
+	case nvme_cmd_read:
+	case nvme_cmd_write:
+		return nvme_mdev_io_translate_rw(ctx);
+ case nvme_cmd_write_zeroes:
+ return nvme_mdev_io_translate_write_zeros(ctx);
+ case nvme_cmd_dsm:
+ return nvme_mdev_io_translate_dsm(ctx);
+ default:
+ return DNR(NVME_SC_INVALID_OPCODE);
+ }
+}
+
+static bool nvme_mdev_io_process_sq(struct io_ctx *ctx, u16 sqid)
+{
+ struct nvme_vsq *vsq = &ctx->vctrl->vsqs[sqid];
+ u16 ucid;
+ int ret;
+
+ /* If host queue is full, we can't process a command
+ * as a command will likely result in passthrough
+ */
+ if (!nvme_mdev_hctrl_hq_can_submit(ctx->hctrl, vsq->hsq))
+ return false;
+
+ /* read the command */
+ ctx->in = nvme_mdev_vsq_get_cmd(ctx->vctrl, vsq);
+ if (!ctx->in)
+ return false;
+ ucid = le16_to_cpu(ctx->in->common.command_id);
+
+ /* translate the command */
+ ret = nvme_mdev_io_translate_cmd(ctx);
+ if (ret != -1) {
+ _DBG(ctx->vctrl,
+ "IOQ: QID %d CID %d FAILED: status 0x%x (translate)\n",
+ sqid, ucid, ret);
+ nvme_mdev_vsq_cmd_done_io(ctx->vctrl, sqid, ucid, ret);
+ return true;
+ }
+
+	/* passthrough */
+ ret = nvme_mdev_hctrl_hq_submit(ctx->hctrl,
+ vsq->hsq,
+ (((u32)vsq->qid) << 16) | ((u32)ucid),
+ &ctx->out,
+ ctx->kdatait);
+ if (ret) {
+ ret = nvme_mdev_translate_error(ret);
+
+ _DBG(ctx->vctrl,
+ "IOQ: QID %d CID %d FAILED: status 0x%x (host submit)\n",
+ sqid, ucid, ret);
+
+ nvme_mdev_vsq_cmd_done_io(ctx->vctrl, sqid, ucid, ret);
+ }
+ return true;
+}
+
+/* process host replies to the passed through commands */
+static int nvme_mdev_io_process_hwq(struct io_ctx *ctx, u16 hwq)
+{
+ int n, i;
+ struct nvme_ext_cmd_result res[16];
+
+ /* process the completions from the hardware */
+ n = nvme_mdev_hctrl_hq_poll(ctx->hctrl, hwq, res, 16);
+ if (n == -1)
+ return -1;
+
+ for (i = 0; i < n; i++) {
+ u16 qid = res[i].tag >> 16;
+ u16 cid = res[i].tag & 0xFFFF;
+ u16 status = res[i].status;
+
+ if (status != 0)
+ _DBG(ctx->vctrl,
+ "IOQ: QID %d CID %d FAILED: status 0x%x (host response)\n",
+ qid, cid, status);
+
+ nvme_mdev_vsq_cmd_done_io(ctx->vctrl, qid, cid, status);
+ }
+ return n;
+}
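The submit and completion paths above round-trip the virtual (queue id, command id) pair through the host command's 32-bit tag, so a completion can be routed back to the right virtual queue. The packing can be sketched as:

```c
#include <stdint.h>

/* Pack the virtual SQ id and command id into one 32-bit host tag,
 * mirroring the (qid << 16) | cid expression in the submit path. */
static inline uint32_t io_tag_pack(uint16_t qid, uint16_t cid)
{
	return ((uint32_t)qid << 16) | cid;
}

static inline uint16_t io_tag_qid(uint32_t tag)
{
	return tag >> 16;
}

static inline uint16_t io_tag_cid(uint32_t tag)
{
	return tag & 0xFFFF;
}
```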
+
+/* Check if we need to read a command from the admin queue */
+static bool nvme_mdev_adm_needs_processing(struct io_ctx *ctx)
+{
+ if (!timeout(ctx->last_admin_poll_time,
+ ctx->vctrl->now, ctx->admin_poll_rate_ms))
+ return false;
+
+ if (nvme_mdev_vsq_has_data(ctx->vctrl, &ctx->vctrl->vsqs[0]))
+ return true;
+
+ ctx->last_admin_poll_time = ctx->vctrl->now;
+ return false;
+}
+
+/* do polling till one of events stops it */
+static void nvme_mdev_io_maintask(struct io_ctx *ctx)
+{
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+ u16 i, cqid, sqid, hsqcnt;
+ u16 hsqs[MAX_HOST_QUEUES];
+ bool idle = false;
+
+ hsqcnt = nvme_mdev_vctrl_hqs_list(vctrl, hsqs);
+ ctx->arb_burst = 1 << ctx->vctrl->arb_burst_shift;
+
+	/* polling can't be stopped when the shadow doorbell is not enabled */
+ ctx->idle_timeout_ms = vctrl->mmio.shadow_db_en ? poll_timeout_ms : 0;
+ ctx->admin_poll_rate_ms = admin_poll_rate_ms;
+
+ vctrl->now = ktime_get();
+ ctx->last_admin_poll_time = vctrl->now;
+ ctx->last_io_t = vctrl->now;
+
+ /* main loop */
+ while (!kthread_should_park()) {
+ vctrl->now = ktime_get();
+
+ /* check if we have to exit to support admin polling */
+ if (!vctrl->mmio.shadow_db_supported)
+ if (nvme_mdev_adm_needs_processing(ctx))
+ break;
+
+ /* process the submission queues*/
+ sqid = 1;
+ for_each_set_bit_from(sqid, vctrl->vsq_en, MAX_VIRTUAL_QUEUES)
+ for (i = 0 ; i < ctx->arb_burst ; i++)
+ if (!nvme_mdev_io_process_sq(ctx, sqid))
+ break;
+
+ /* process the completions from the guest*/
+ cqid = 1;
+ for_each_set_bit_from(cqid, vctrl->vcq_en, MAX_VIRTUAL_QUEUES)
+ nvme_mdev_vcq_process(vctrl, cqid, true);
+
+ /* process the completions from the hardware*/
+ for (i = 0 ; i < hsqcnt ; i++)
+ if (nvme_mdev_io_process_hwq(ctx, hsqs[i]) > 0)
+ ctx->last_io_t = vctrl->now;
+
+ /* Check if we need to stop polling*/
+ if (ctx->idle_timeout_ms) {
+ if (timeout(ctx->last_io_t,
+ vctrl->now, ctx->idle_timeout_ms)) {
+ idle = true;
+ break;
+ }
+ }
+ cond_resched();
+ }
+
+ /* Drain the host IO */
+ for (;;) {
+ bool pending_io = false;
+
+		vctrl->now = ktime_get();
+
+ if (nvme_mdev_vctrl_is_dead(vctrl) || ctx->hctrl->removing) {
+ idle = false;
+ break;
+ }
+
+ for (i = 0; i < hsqcnt; i++) {
+ int n = nvme_mdev_io_process_hwq(ctx, hsqs[i]);
+
+ if (n != -1)
+ pending_io = true;
+ if (n > 0)
+ ctx->last_io_t = vctrl->now;
+ }
+
+ if (!pending_io)
+ break;
+
+ cond_resched();
+
+ if (!timeout(ctx->last_io_t, vctrl->now, io_timeout_ms))
+ continue;
+
+ _WARN(ctx->vctrl, "IO: skipping flush - host IO timeout\n");
+ idle = false;
+ break;
+ }
+
+ /* Drain all the pending completion interrupts to the guest*/
+ cqid = 1;
+ for_each_set_bit_from(cqid, vctrl->vcq_en, MAX_VIRTUAL_QUEUES)
+ if (nvme_mdev_vcq_flush(vctrl, cqid))
+ idle = false;
+
+	/* Park the IO thread if IO is truly idle */
+ if (idle) {
+ /* don't bother going idle if someone holds the vctrl
+ * lock. It might try to park us, and thus
+ * cause a deadlock
+ */
+ if (!mutex_trylock(&vctrl->lock))
+ return;
+
+ sqid = 1;
+ for_each_set_bit_from(sqid, vctrl->vsq_en, MAX_VIRTUAL_QUEUES)
+ if (!nvme_mdev_vsq_suspend_io(vctrl, sqid)) {
+ idle = false;
+ break;
+ }
+
+ if (idle) {
+ _DBG(ctx->vctrl, "IO: self-parking\n");
+ vctrl->io_idle = true;
+ nvme_mdev_io_pause(vctrl);
+ }
+
+ mutex_unlock(&vctrl->lock);
+ }
+
+ /* Admin poll for cases when shadow doorbell is not supported */
+ if (!vctrl->mmio.shadow_db_supported) {
+ if (mutex_trylock(&vctrl->lock)) {
+ nvme_mdev_vcq_process(vctrl, 0, false);
+ nvme_mdev_adm_process_sq(ctx->vctrl);
+ ctx->last_admin_poll_time = vctrl->now;
+ mutex_unlock(&ctx->vctrl->lock);
+ }
+ }
+}
+
+/* the main IO thread */
+static int nvme_mdev_io_polling_thread(void *data)
+{
+ struct io_ctx ctx;
+
+ if (kthread_should_stop())
+ return 0;
+
+ memset(&ctx, 0, sizeof(struct io_ctx));
+ ctx.vctrl = (struct nvme_mdev_vctrl *)data;
+ ctx.hctrl = ctx.vctrl->hctrl;
+ nvme_mdev_udata_iter_setup(&ctx.vctrl->viommu, &ctx.udatait);
+
+ _DBG(ctx.vctrl, "IO: iothread started\n");
+
+ for (;;) {
+ if (kthread_should_park()) {
+ _DBG(ctx.vctrl, "IO: iothread parked\n");
+ kthread_parkme();
+ }
+
+ if (kthread_should_stop())
+ break;
+
+ nvme_mdev_io_maintask(&ctx);
+ }
+
+ _DBG(ctx.vctrl, "IO: iothread stopped\n");
+ return 0;
+}
+
+/* Kick the IO thread into the running state */
+void nvme_mdev_io_resume(struct nvme_mdev_vctrl *vctrl)
+{
+ lockdep_assert_held(&vctrl->lock);
+
+ if (!vctrl->iothread || !vctrl->iothread_parked)
+ return;
+ if (vctrl->io_idle || vctrl->vctrl_paused)
+ return;
+
+ vctrl->iothread_parked = false;
+	/* has memory barrier */
+ kthread_unpark(vctrl->iothread);
+}
+
+/* Pause the IO thread */
+void nvme_mdev_io_pause(struct nvme_mdev_vctrl *vctrl)
+{
+ lockdep_assert_held(&vctrl->lock);
+
+ if (!vctrl->iothread || vctrl->iothread_parked)
+ return;
+
+ vctrl->iothread_parked = true;
+ kthread_park(vctrl->iothread);
+}
+
+/* setup the main IO thread */
+int nvme_mdev_io_create(struct nvme_mdev_vctrl *vctrl, unsigned int cpu)
+{
+	/* TODOLATER: IO: better thread name */
+ char name[TASK_COMM_LEN];
+
+ _DBG(vctrl, "IO: creating the polling iothread\n");
+
+ if (WARN_ON(vctrl->iothread))
+ return -EINVAL;
+
+ snprintf(name, sizeof(name), "nvme%d_poll_io", vctrl->hctrl->id);
+
+ vctrl->iothread_cpu = cpu;
+ vctrl->iothread_parked = false;
+ vctrl->io_idle = true;
+
+ vctrl->iothread = kthread_create_on_node(nvme_mdev_io_polling_thread,
+ vctrl,
+ vctrl->hctrl->node,
+ name);
+	if (IS_ERR(vctrl->iothread)) {
+		int err = PTR_ERR(vctrl->iothread);
+
+		vctrl->iothread = NULL;
+		return err;
+	}
+
+ kthread_bind(vctrl->iothread, cpu);
+
+	/* io_idle is set above, so the thread starts parked */
+	vctrl->iothread_parked = true;
+	kthread_park(vctrl->iothread);
+	return 0;
+}
+
+/* End the main IO thread */
+void nvme_mdev_io_free(struct nvme_mdev_vctrl *vctrl)
+{
+ int ret;
+
+ _DBG(vctrl, "IO: destroying the polling iothread\n");
+
+ lockdep_assert_held(&vctrl->lock);
+ nvme_mdev_io_pause(vctrl);
+ ret = kthread_stop(vctrl->iothread);
+ WARN_ON(ret);
+ vctrl->iothread = NULL;
+}
+
+void nvme_mdev_assert_io_not_running(struct nvme_mdev_vctrl *vctrl)
+{
+ if (WARN_ON(vctrl->iothread && !vctrl->iothread_parked))
+ nvme_mdev_io_pause(vctrl);
+}
diff --git a/drivers/nvme/mdev/irq.c b/drivers/nvme/mdev/irq.c
new file mode 100644
index 000000000000..5809cdb4d84c
--- /dev/null
+++ b/drivers/nvme/mdev/irq.c
@@ -0,0 +1,264 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * NVMe virtual controller IRQ implementation (MSIx and INTx)
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include "priv.h"
+
+/* Setup the interrupt subsystem */
+void nvme_mdev_irqs_setup(struct nvme_mdev_vctrl *vctrl)
+{
+ vctrl->irqs.mode = NVME_MDEV_IMODE_NONE;
+ vctrl->irqs.irq_coalesc_max = 1;
+}
+
+/* Enable INTx or MSIx interrupts */
+static int __nvme_mdev_irqs_enable(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_mdev_irq_mode mode)
+{
+ if (vctrl->irqs.mode == mode)
+ return 0;
+ if (vctrl->irqs.mode != NVME_MDEV_IMODE_NONE)
+ return -EBUSY;
+
+ if (mode == NVME_MDEV_IMODE_INTX)
+ _DBG(vctrl, "IRQ: enable INTx interrupts\n");
+ else if (mode == NVME_MDEV_IMODE_MSIX)
+ _DBG(vctrl, "IRQ: enable MSIX interrupts\n");
+ else
+ WARN_ON(1);
+
+ nvme_mdev_io_pause(vctrl);
+ vctrl->irqs.mode = mode;
+ nvme_mdev_io_resume(vctrl);
+ return 0;
+}
+
+int nvme_mdev_irqs_enable(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_mdev_irq_mode mode)
+{
+ int retval = 0;
+
+ mutex_lock(&vctrl->lock);
+ retval = __nvme_mdev_irqs_enable(vctrl, mode);
+ mutex_unlock(&vctrl->lock);
+ return retval;
+}
+
+/* Disable INTx or MSIx interrupts */
+static void __nvme_mdev_irqs_disable(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_mdev_irq_mode mode)
+{
+ unsigned int i;
+
+ if (vctrl->irqs.mode == NVME_MDEV_IMODE_NONE)
+ return;
+ if (vctrl->irqs.mode != mode)
+ return;
+
+ if (vctrl->irqs.mode == NVME_MDEV_IMODE_INTX)
+ _DBG(vctrl, "IRQ: disable INTx interrupts\n");
+ else if (vctrl->irqs.mode == NVME_MDEV_IMODE_MSIX)
+ _DBG(vctrl, "IRQ: disable MSIX interrupts\n");
+ else
+ WARN_ON(1);
+
+ nvme_mdev_io_pause(vctrl);
+
+ for (i = 0; i < MAX_VIRTUAL_IRQS; i++) {
+ struct nvme_mdev_user_irq *vec = &vctrl->irqs.vecs[i];
+
+ if (vec->trigger) {
+ eventfd_ctx_put(vec->trigger);
+ vec->trigger = NULL;
+ }
+ vec->irq_pending_cnt = 0;
+ vec->irq_time = 0;
+ }
+ vctrl->irqs.mode = NVME_MDEV_IMODE_NONE;
+ nvme_mdev_io_resume(vctrl);
+}
+
+void nvme_mdev_irqs_disable(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_mdev_irq_mode mode)
+{
+ mutex_lock(&vctrl->lock);
+ __nvme_mdev_irqs_disable(vctrl, mode);
+ mutex_unlock(&vctrl->lock);
+}
+
+/* Set eventfd triggers for INTx or MSIx interrupts */
+int nvme_mdev_irqs_set_triggers(struct nvme_mdev_vctrl *vctrl,
+ int start, int count, int32_t *fds)
+{
+ unsigned int i;
+
+ mutex_lock(&vctrl->lock);
+ nvme_mdev_io_pause(vctrl);
+
+ for (i = 0; i < count; i++) {
+ int irqindex = start + i;
+ struct eventfd_ctx *trigger;
+ struct nvme_mdev_user_irq *irq = &vctrl->irqs.vecs[irqindex];
+
+ if (irq->trigger) {
+ eventfd_ctx_put(irq->trigger);
+ irq->trigger = NULL;
+ }
+
+ if (fds[i] < 0)
+ continue;
+
+		trigger = eventfd_ctx_fdget(fds[i]);
+		if (IS_ERR(trigger)) {
+			nvme_mdev_io_resume(vctrl);
+			mutex_unlock(&vctrl->lock);
+			return PTR_ERR(trigger);
+		}
+
+ irq->trigger = trigger;
+ }
+ nvme_mdev_io_resume(vctrl);
+ mutex_unlock(&vctrl->lock);
+ return 0;
+}
+
+/* Set eventfd trigger for unplug interrupt */
+static int __nvme_mdev_irqs_set_unplug_trigger(struct nvme_mdev_vctrl *vctrl,
+ int32_t fd)
+{
+ struct eventfd_ctx *trigger;
+
+ if (vctrl->irqs.request_trigger) {
+ _DBG(vctrl, "IRQ: clear hotplug trigger\n");
+ eventfd_ctx_put(vctrl->irqs.request_trigger);
+ vctrl->irqs.request_trigger = NULL;
+ }
+
+ if (fd < 0)
+ return 0;
+
+ _DBG(vctrl, "IRQ: set hotplug trigger\n");
+
+ trigger = eventfd_ctx_fdget(fd);
+ if (IS_ERR(trigger))
+ return PTR_ERR(trigger);
+
+ vctrl->irqs.request_trigger = trigger;
+ return 0;
+}
+
+int nvme_mdev_irqs_set_unplug_trigger(struct nvme_mdev_vctrl *vctrl,
+ int32_t fd)
+{
+ int retval;
+
+ mutex_lock(&vctrl->lock);
+ retval = __nvme_mdev_irqs_set_unplug_trigger(vctrl, fd);
+ mutex_unlock(&vctrl->lock);
+ return retval;
+}
+
+/* Reset the interrupts subsystem */
+void nvme_mdev_irqs_reset(struct nvme_mdev_vctrl *vctrl)
+{
+ int i;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ if (vctrl->irqs.mode != NVME_MDEV_IMODE_NONE)
+ __nvme_mdev_irqs_disable(vctrl, vctrl->irqs.mode);
+
+ __nvme_mdev_irqs_set_unplug_trigger(vctrl, -1);
+
+ for (i = 0; i < MAX_VIRTUAL_IRQS; i++) {
+ struct nvme_mdev_user_irq *vec = &vctrl->irqs.vecs[i];
+
+ vec->irq_coalesc_en = false;
+ vec->irq_pending_cnt = 0;
+ vec->irq_time = 0;
+ }
+
+ vctrl->irqs.irq_coalesc_time_us = 0;
+}
+
+/* Check if interrupt can be coalesced */
+static bool nvme_mdev_irq_coalesce(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_mdev_user_irq *irq)
+{
+ s64 delta;
+
+ if (!irq->irq_coalesc_en)
+ return false;
+
+ if (irq->irq_pending_cnt >= vctrl->irqs.irq_coalesc_max)
+ return false;
+
+ delta = ktime_us_delta(vctrl->now, irq->irq_time);
+ return (delta < vctrl->irqs.irq_coalesc_time_us);
+}
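The coalescing predicate above holds an interrupt back only while all three conditions are met: coalescing is enabled for the vector, fewer than irq_coalesc_max completions are pending, and less than the coalescing window has elapsed since the last delivery. The decision, extracted into a standalone sketch:

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative re-statement of nvme_mdev_irq_coalesce(): returns true when
 * the interrupt may be deferred. delta_us is the time since the interrupt
 * was last delivered; window_us is the configured coalescing window. */
static bool irq_can_coalesce(bool enabled, unsigned int pending,
			     unsigned int max_pending,
			     int64_t delta_us, int64_t window_us)
{
	if (!enabled)
		return false;
	if (pending >= max_pending)	/* too many completions queued up */
		return false;
	return delta_us < window_us;	/* still inside the quiet window */
}
```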
+
+void nvme_mdev_irq_raise_unplug_event(struct nvme_mdev_vctrl *vctrl,
+ unsigned int count)
+{
+ mutex_lock(&vctrl->lock);
+
+ if (vctrl->irqs.request_trigger) {
+ if (!(count % 10))
+ dev_notice_ratelimited(mdev_dev(vctrl->mdev),
+ "Relaying device request to user (#%u)\n",
+ count);
+
+ eventfd_signal(vctrl->irqs.request_trigger, 1);
+
+ } else if (count == 0) {
+ dev_notice(mdev_dev(vctrl->mdev),
+ "No device request channel registered, blocked until released by user\n");
+ }
+ mutex_unlock(&vctrl->lock);
+}
+
+/* Raise an interrupt */
+void nvme_mdev_irq_raise(struct nvme_mdev_vctrl *vctrl, unsigned int index)
+{
+ struct nvme_mdev_user_irq *irq = &vctrl->irqs.vecs[index];
+
+ irq->irq_pending_cnt++;
+}
+
+/* Unraise an interrupt */
+void nvme_mdev_irq_clear(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index)
+{
+ struct nvme_mdev_user_irq *irq = &vctrl->irqs.vecs[index];
+
+ irq->irq_time = vctrl->now;
+ irq->irq_pending_cnt = 0;
+}
+
+/* Directly trigger an interrupt without affecting irq coalescing settings */
+void nvme_mdev_irq_trigger(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index)
+{
+ struct nvme_mdev_user_irq *irq = &vctrl->irqs.vecs[index];
+
+ if (irq->trigger)
+ eventfd_signal(irq->trigger, 1);
+}
+
+/* Trigger previously raised interrupt */
+void nvme_mdev_irq_cond_trigger(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index)
+{
+ struct nvme_mdev_user_irq *irq = &vctrl->irqs.vecs[index];
+
+ if (irq->irq_pending_cnt == 0)
+ return;
+
+ if (!nvme_mdev_irq_coalesce(vctrl, irq)) {
+ nvme_mdev_irq_trigger(vctrl, index);
+ nvme_mdev_irq_clear(vctrl, index);
+ }
+}
diff --git a/drivers/nvme/mdev/mdev.h b/drivers/nvme/mdev/mdev.h
new file mode 100644
index 000000000000..d139e090520e
--- /dev/null
+++ b/drivers/nvme/mdev/mdev.h
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * NVME VFIO mediated driver
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+
+#ifndef _MDEV_NVME_MDEV_H
+#define _MDEV_NVME_MDEV_H
+
+#include <linux/kernel.h>
+#include <linux/byteorder/generic.h>
+#include <linux/nvme.h>
+
+struct page_map {
+ void *kmap;
+ struct page *page;
+ dma_addr_t iova;
+};
+
+struct user_prplist {
+ /* used by user data iterator*/
+ struct page_map page;
+ unsigned int index; /* index of current entry */
+};
+
+struct kernel_data {
+ /* used by kernel data iterator*/
+ void *data;
+ unsigned int size;
+ dma_addr_t dma_addr;
+};
+
+struct nvme_ext_data_iter {
+ /* private */
+ struct nvme_mdev_viommu *viommu;
+ union {
+ const union nvme_data_ptr *dptr;
+ struct user_prplist uprp;
+ struct kernel_data kmem;
+ };
+
+ /* user interface */
+ u64 count; /* number of data pages, yet to be covered */
+
+ phys_addr_t physical; /* iterator physical address value*/
+ dma_addr_t host_iova; /* iterator dma address value*/
+
+ /* moves iterator to the next item */
+ int (*next)(struct nvme_ext_data_iter *data_iter);
+
+ /* if != NULL, user should call this when it done with data
+ * pointed by the iterator
+ */
+ void (*release)(struct nvme_ext_data_iter *data_iter);
+};
+#endif
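The `count` field in the iterator above tracks how many data pages remain to be covered. Since an NVMe PRP entry maps exactly one memory page, the number of entries a transfer needs follows from the starting offset within the first page and the length. A small sketch of that arithmetic (4K page size assumed here for illustration):

```c
#include <stdint.h>

#define NVME_PAGE_SIZE 4096u	/* assumed; the real value comes from CC.MPS */

/* Number of PRP entries (pages) needed to cover `len` bytes starting at
 * byte `page_offset` within the first page. */
static unsigned int prp_count(uint32_t page_offset, uint32_t len)
{
	return (page_offset + len + NVME_PAGE_SIZE - 1) / NVME_PAGE_SIZE;
}
```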
diff --git a/drivers/nvme/mdev/mmio.c b/drivers/nvme/mdev/mmio.c
new file mode 100644
index 000000000000..cf03c1f22f4c
--- /dev/null
+++ b/drivers/nvme/mdev/mmio.c
@@ -0,0 +1,591 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * NVMe virtual controller MMIO implementation
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/kernel.h>
+#include <linux/highmem.h>
+#include "priv.h"
+
+#define DB_AREA_SIZE (MAX_VIRTUAL_QUEUES * 2 * (4 << DB_STRIDE_SHIFT))
+#define DB_MASK ((4 << DB_STRIDE_SHIFT) - 1)
+#define MMIO_BAR_SIZE __roundup_pow_of_two(NVME_REG_DBS + DB_AREA_SIZE)
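The DB_AREA_SIZE arithmetic above follows the standard NVMe doorbell layout: each queue pair owns a submission and a completion doorbell, spaced 4 << CAP.DSTRD bytes apart, starting at register offset 0x1000 (NVME_REG_DBS). As a standalone sketch of the per-queue offset:

```c
#include <stdint.h>

#define NVME_REG_DBS 0x1000u	/* first doorbell register, per the NVMe spec */

/* Byte offset of the doorbell for queue `qid`: even slots are SQ tail
 * doorbells, odd slots are CQ head doorbells; dstrd is CAP.DSTRD. */
static uint32_t doorbell_offset(uint16_t qid, int is_cq, unsigned int dstrd)
{
	return NVME_REG_DBS + (2u * qid + (is_cq ? 1 : 0)) * (4u << dstrd);
}
```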
+
+/* Put the controller into fatal error state. Only way out is reset */
+static void nvme_mdev_mmio_fatal_error(struct nvme_mdev_vctrl *vctrl)
+{
+ if (vctrl->mmio.csts & NVME_CSTS_CFS)
+ return;
+
+ vctrl->mmio.csts |= NVME_CSTS_CFS;
+ nvme_mdev_io_pause(vctrl);
+
+ if (vctrl->mmio.csts & NVME_CSTS_RDY)
+ nvme_mdev_vctrl_disable(vctrl);
+}
+
+/* Send a generic error notification to the user */
+static void nvme_mdev_mmio_error(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_async_event info)
+{
+ nvme_mdev_event_send(vctrl, NVME_AER_TYPE_ERROR, info);
+}
+
+/* Memory fault handler for the mmap area of the doorbells */
+static vm_fault_t nvme_mdev_mmio_dbs_mmap_fault(struct vm_fault *vmf)
+{
+ struct vm_area_struct *vma = vmf->vma;
+ struct nvme_mdev_vctrl *vctrl = vma->vm_private_data;
+
+ /* The DB area is just one page, starting at offset 4096 of the MMIO */
+ if (WARN_ON(vmf->pgoff != 1))
+ return VM_FAULT_SIGBUS;
+
+ get_page(vctrl->mmio.dbs_page);
+ vmf->page = vctrl->mmio.dbs_page;
+ return 0;
+}
+
+static const struct vm_operations_struct nvme_mdev_mmio_dbs_vm_ops = {
+ .fault = nvme_mdev_mmio_dbs_mmap_fault,
+};
+
+/* Check that a user DB write is valid; send an error if not */
+bool nvme_mdev_mmio_db_check(struct nvme_mdev_vctrl *vctrl,
+ u16 qid, u16 size, u16 db)
+{
+ if (get_current() != vctrl->iothread)
+ lockdep_assert_held(&vctrl->lock);
+
+ if (db < size)
+ return true;
+ if (qid == 0) {
+ _DBG(vctrl, "MMIO: invalid admin DB write - fatal error\n");
+ nvme_mdev_mmio_fatal_error(vctrl);
+ return false;
+ }
+
+ _DBG(vctrl, "MMIO: invalid DB value write qid=%d, size=%d, value=%d\n",
+ qid, size, db);
+
+ nvme_mdev_mmio_error(vctrl, NVME_AER_ERROR_INVALID_DB_VALUE);
+ return false;
+}
+
+/* handle submission queue doorbell write */
+static void nvme_mdev_mmio_db_write_sq(struct nvme_mdev_vctrl *vctrl,
+ u32 qid, u32 val)
+{
+ _DBG(vctrl, "MMIO: doorbell SQID %d, DB write %d\n", qid, val);
+
+ lockdep_assert_held(&vctrl->lock);
+ /* check if the db belongs to a valid queue */
+ if (qid >= MAX_VIRTUAL_QUEUES || !test_bit(qid, vctrl->vsq_en))
+ goto err_db;
+
+ /* emulate the shadow doorbell functionality */
+ if (!vctrl->mmio.shadow_db_en || qid == 0)
+ vctrl->mmio.dbs[qid].sqt = cpu_to_le32(val & 0x0000FFFF);
+
+ if (qid != 0)
+ vctrl->io_idle = false;
+
+ if (vctrl->vctrl_paused || !vctrl->mmio.shadow_db_supported)
+ return;
+
+ if (qid)
+ nvme_mdev_io_resume(vctrl);
+ else
+ nvme_mdev_adm_process_sq(vctrl);
+ return;
+err_db:
+
+ _DBG(vctrl, "MMIO: inactive/invalid SQ DB write qid=%d, value=%d\n",
+ qid, val);
+
+ nvme_mdev_mmio_error(vctrl, NVME_AER_ERROR_INVALID_DB_REG);
+}
+
+/* handle completion queue doorbell write */
+static void nvme_mdev_mmio_db_write_cq(struct nvme_mdev_vctrl *vctrl,
+ u32 qid, u32 val)
+{
+ _DBG(vctrl, "MMIO: doorbell CQID %d, DB write %d\n", qid, val);
+
+ lockdep_assert_held(&vctrl->lock);
+ /* check if the db belongs to a valid queue */
+ if (qid >= MAX_VIRTUAL_QUEUES || !test_bit(qid, vctrl->vcq_en))
+ goto err_db;
+
+ /* emulate the shadow doorbell functionality */
+ if (!vctrl->mmio.shadow_db_en || qid == 0)
+ vctrl->mmio.dbs[qid].cqh = cpu_to_le32(val & 0x0000FFFF);
+
+ if (vctrl->vctrl_paused || !vctrl->mmio.shadow_db_supported)
+ return;
+
+ if (qid == 0) {
+ nvme_mdev_vcq_process(vctrl, 0, false);
+ /* If the completion queue was full prior to this write, we
+ * might have some admin commands pending, and this is the
+ * last chance to process them
+ */
+ nvme_mdev_adm_process_sq(vctrl);
+ }
+ return;
+err_db:
+ _DBG(vctrl,
+ "MMIO: inactive/invalid CQ DB write qid=%d, value=%d\n",
+ qid, val);
+
+ nvme_mdev_mmio_error(vctrl, NVME_AER_ERROR_INVALID_DB_REG);
+}
+
+/* This is called when user enables the controller */
+static void nvme_mdev_mmio_cntrl_enable(struct nvme_mdev_vctrl *vctrl)
+{
+ u64 acq, asq;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ /* A dead controller must be reset before it can be enabled again */
+ if (nvme_mdev_vctrl_is_dead(vctrl))
+ goto error;
+
+ /* only NVME command set supported */
+ if (((vctrl->mmio.cc >> NVME_CC_CSS_SHIFT) & 0x7) != 0)
+ goto error;
+
+ /* Check the queue arbitration method */
+ if ((vctrl->mmio.cc & NVME_CC_AMS_MASK) != NVME_CC_AMS_RR)
+ goto error;
+
+ /* Check the page size */
+ if (((vctrl->mmio.cc >> NVME_CC_MPS_SHIFT) & 0xF) != (PAGE_SHIFT - 12))
+ goto error;
+
+ /* Read the admin queue addresses */
+ acq = vctrl->mmio.acql | ((u64)vctrl->mmio.acqh << 32);
+ asq = vctrl->mmio.asql | ((u64)vctrl->mmio.asqh << 32);
+
+ if (!nvme_mdev_vctrl_enable(vctrl, acq, asq, vctrl->mmio.aqa))
+ goto error;
+
+ /* Success! */
+ vctrl->mmio.csts |= NVME_CSTS_RDY;
+ return;
+error:
+ _DBG(vctrl, "MMIO: failure to enable the controller - fatal error\n");
+ nvme_mdev_mmio_fatal_error(vctrl);
+}
+
+/* This is called when the user sends a notification that the controller
+ * is about to be shut down
+ */
+static void nvme_mdev_mmio_cntrl_shutdown(struct nvme_mdev_vctrl *vctrl)
+{
+ lockdep_assert_held(&vctrl->lock);
+
+ /* clear shutdown notification bits */
+ vctrl->mmio.cc &= ~NVME_CC_SHN_MASK;
+
+ if (nvme_mdev_vctrl_is_dead(vctrl)) {
+ _DBG(vctrl, "MMIO: shutdown notification for dead ctrl\n");
+ return;
+ }
+
+ /* not enabled */
+ if (!(vctrl->mmio.csts & NVME_CSTS_RDY)) {
+ _DBG(vctrl, "MMIO: shutdown notification with CSTS.RDY==0\n");
+ nvme_mdev_assert_io_not_running(vctrl);
+ return;
+ }
+
+ nvme_mdev_io_pause(vctrl);
+ nvme_mdev_vctrl_disable(vctrl);
+ vctrl->mmio.csts |= NVME_CSTS_SHST_CMPLT;
+}
+
+/* MMIO BAR read/write */
+static int nvme_mdev_mmio_bar_access(struct nvme_mdev_vctrl *vctrl,
+ u16 offset, char *buf,
+ u32 count, bool is_write)
+{
+ u32 val, oldval;
+
+ mutex_lock(&vctrl->lock);
+
+ /* Drop reads/writes that are not DWORD-sized and DWORD-aligned
+ * (QWORD reads/writes are split by the caller)
+ */
+ if (count != 4 || (offset & 0x3))
+ goto drop;
+
+ val = is_write ? le32_to_cpu(*(__le32 *)buf) : 0;
+
+ switch (offset) {
+ case NVME_REG_CAP:
+ /* controller capabilities (low 32 bits) */
+ if (is_write)
+ goto drop;
+ store_le32(buf, vctrl->mmio.cap & 0xFFFFFFFF);
+ break;
+
+ case NVME_REG_CAP + 4:
+ /* controller capabilities (upper 32 bits) */
+ if (is_write)
+ goto drop;
+ store_le32(buf, vctrl->mmio.cap >> 32);
+ break;
+
+ case NVME_REG_VS:
+ if (is_write)
+ goto drop;
+ store_le32(buf, NVME_MDEV_NVME_VER);
+ break;
+
+ case NVME_REG_INTMS:
+ case NVME_REG_INTMC:
+ /* Interrupt Mask Set & Clear */
+ goto drop;
+
+ case NVME_REG_CC:
+ /* Controller Configuration */
+ if (!is_write) {
+ store_le32(buf, vctrl->mmio.cc);
+ break;
+ }
+
+ oldval = vctrl->mmio.cc;
+ vctrl->mmio.cc = val;
+
+ /* drop if reserved bits set */
+ if (vctrl->mmio.cc & 0xFF00000E) {
+ _DBG(vctrl,
+ "MMIO: reserved bits of CC set - fatal error\n");
+ nvme_mdev_mmio_fatal_error(vctrl);
+ goto drop;
+ }
+
+ /* CSS (command set), MPS (memory page size) and AMS (queue
+ * arbitration) must not be changed while the controller is running
+ */
+ if (vctrl->mmio.csts & NVME_CSTS_RDY) {
+ if ((vctrl->mmio.cc & 0x3FF0) != (oldval & 0x3FF0)) {
+ _DBG(vctrl,
+ "MMIO: attempt to change setting bits of CC while CC.EN=1 - fatal error\n");
+
+ nvme_mdev_mmio_fatal_error(vctrl);
+ goto drop;
+ }
+ }
+
+ if ((vctrl->mmio.cc & NVME_CC_SHN_MASK) != NVME_CC_SHN_NONE) {
+ _DBG(vctrl, "MMIO: CC.SHN != 0 - shutdown\n");
+ nvme_mdev_mmio_cntrl_shutdown(vctrl);
+ }
+
+ /* change in controller enabled state */
+ if ((val & NVME_CC_ENABLE) == (oldval & NVME_CC_ENABLE))
+ break;
+
+ if (vctrl->mmio.cc & NVME_CC_ENABLE) {
+ _DBG(vctrl, "MMIO: CC.EN<=1 - enable the controller\n");
+ nvme_mdev_mmio_cntrl_enable(vctrl);
+ } else {
+ _DBG(vctrl, "MMIO: CC.EN<=0 - reset controller\n");
+ __nvme_mdev_vctrl_reset(vctrl, false);
+ }
+
+ break;
+
+ case NVME_REG_CSTS:
+ /* Controller Status */
+ if (is_write)
+ goto drop;
+ store_le32(buf, vctrl->mmio.csts);
+ break;
+
+ case NVME_REG_AQA:
+ /* admin queue submission and completion sizes */
+ if (!is_write)
+ store_le32(buf, vctrl->mmio.aqa);
+ else if (!(vctrl->mmio.csts & NVME_CSTS_RDY))
+ vctrl->mmio.aqa = val;
+ else
+ goto drop;
+ break;
+
+ case NVME_REG_ASQ:
+ /* admin submission queue address (low 32 bits) */
+ if (!is_write)
+ store_le32(buf, vctrl->mmio.asql);
+ else if (!(vctrl->mmio.csts & NVME_CSTS_RDY))
+ vctrl->mmio.asql = val;
+ else
+ goto drop;
+ break;
+
+ case NVME_REG_ASQ + 4:
+ /* admin submission queue address (high 32 bits) */
+ if (!is_write)
+ store_le32(buf, vctrl->mmio.asqh);
+ else if (!(vctrl->mmio.csts & NVME_CSTS_RDY))
+ vctrl->mmio.asqh = val;
+ else
+ goto drop;
+ break;
+
+ case NVME_REG_ACQ:
+ /* admin completion queue address (low 32 bits) */
+ if (!is_write)
+ store_le32(buf, vctrl->mmio.acql);
+ else if (!(vctrl->mmio.csts & NVME_CSTS_RDY))
+ vctrl->mmio.acql = val;
+ else
+ goto drop;
+ break;
+
+ case NVME_REG_ACQ + 4:
+ /* admin completion queue address (high 32 bits) */
+ if (!is_write)
+ store_le32(buf, vctrl->mmio.acqh);
+ else if (!(vctrl->mmio.csts & NVME_CSTS_RDY))
+ vctrl->mmio.acqh = val;
+ else
+ goto drop;
+ break;
+
+ case NVME_REG_CMBLOC:
+ case NVME_REG_CMBSZ:
+ /* not supported - hardwired to 0 */
+ if (is_write)
+ goto drop;
+ store_le32(buf, 0);
+ break;
+
+ case NVME_REG_DBS ... (NVME_REG_DBS + DB_AREA_SIZE - 1): {
+ /* completion and submission doorbells */
+ u16 db_offset = offset - NVME_REG_DBS;
+ u16 index = db_offset >> (DB_STRIDE_SHIFT + 2);
+ u16 qid = index >> 1;
+ bool sq = (index & 0x1) == 0;
+
+ if (!is_write || (db_offset & DB_MASK))
+ goto drop;
+
+ if (!(vctrl->mmio.csts & NVME_CSTS_RDY))
+ goto drop;
+
+ if (nvme_mdev_vctrl_is_dead(vctrl))
+ goto drop;
+
+ if (sq)
+ nvme_mdev_mmio_db_write_sq(vctrl, qid, val);
+ else
+ nvme_mdev_mmio_db_write_cq(vctrl, qid, val);
+ break;
+ }
+ default:
+ goto drop;
+ }
+
+ mutex_unlock(&vctrl->lock);
+ return count;
+drop:
+ _DBG(vctrl, "MMIO: dropping access at 0x%x\n", offset);
+ mutex_unlock(&vctrl->lock);
+ return 0;
+}
+
+/* Called when the virtual controller is created */
+int nvme_mdev_mmio_create(struct nvme_mdev_vctrl *vctrl)
+{
+ int ret;
+
+ /* BAR0 */
+ nvme_mdev_pci_setup_bar(vctrl, PCI_BASE_ADDRESS_0,
+ MMIO_BAR_SIZE, nvme_mdev_mmio_bar_access);
+
+ /* Spec allows for maximum depth of 0x10000, but we limit
+ * it to 1 less to avoid various overflows
+ */
+ BUILD_BUG_ON(MAX_VIRTUAL_QUEUE_DEPTH > 0xFFFF);
+
+ /* CAP has 4 bits for the doorbell stride shift */
+ BUILD_BUG_ON(DB_STRIDE_SHIFT > 0xF);
+
+ /* Shadow doorbells limit the doorbell area to 1 page */
+ BUILD_BUG_ON(DB_AREA_SIZE > PAGE_SIZE);
+
+ /* Just in case... */
+ BUILD_BUG_ON((PAGE_SHIFT - 12) > 0xF);
+
+ vctrl->mmio.cap =
+ // MQES: maximum queue entries
+ ((u64)(MAX_VIRTUAL_QUEUE_DEPTH - 1) << 0) |
+ // CQR: physically contiguous queues - no
+ (0ULL << 16) |
+ // AMS: Queue arbitration.
+ // TODOLATER: IO: implement WRRU
+ (0ULL << 17) |
+ // TO: RDY timeout - 0 (done in sync)
+ (0ULL << 24) |
+ // DSTRD: doorbell stride
+ ((u64)DB_STRIDE_SHIFT << 32) |
+ // NSSRS: no support for nvme subsystem reset
+ (0ULL << 36) |
+ // CSS: NVM command set supported
+ (1ULL << 37) |
+ // BPS: no support for boot partition
+ (0ULL << 45) |
+ // MPSMIN: Minimum page size supported is PAGE_SIZE
+ ((u64)(PAGE_SHIFT - 12) << 48) |
+ // MPSMAX: Maximum page size is PAGE_SIZE as well
+ ((u64)(PAGE_SHIFT - 12) << 52);
+
+ /* Create the (regular) doorbell buffers */
+ vctrl->mmio.dbs_page = alloc_pages_node(vctrl->hctrl->node,
+ GFP_KERNEL | __GFP_ZERO, 0);
+
+ ret = -ENOMEM;
+
+ if (!vctrl->mmio.dbs_page)
+ goto error0;
+
+ vctrl->mmio.db_page_kmap = kmap(vctrl->mmio.dbs_page);
+ if (!vctrl->mmio.db_page_kmap)
+ goto error1;
+
+ vctrl->mmio.fake_eidx_page = alloc_pages_node(vctrl->hctrl->node,
+ GFP_KERNEL | __GFP_ZERO, 0);
+ if (!vctrl->mmio.fake_eidx_page)
+ goto error2;
+
+ vctrl->mmio.fake_eidx_kmap = kmap(vctrl->mmio.fake_eidx_page);
+ if (!vctrl->mmio.fake_eidx_kmap)
+ goto error3;
+ return 0;
+error3:
+ put_page(vctrl->mmio.fake_eidx_page);
+error2:
+ kunmap(vctrl->mmio.dbs_page);
+error1:
+ put_page(vctrl->mmio.dbs_page);
+error0:
+ return ret;
+}
+
+/* Called when the virtual controller is reset */
+void nvme_mdev_mmio_reset(struct nvme_mdev_vctrl *vctrl, bool pci_reset)
+{
+ vctrl->mmio.cc = 0;
+ vctrl->mmio.csts = 0;
+
+ if (pci_reset) {
+ vctrl->mmio.aqa = 0;
+ vctrl->mmio.asql = 0;
+ vctrl->mmio.asqh = 0;
+ vctrl->mmio.acql = 0;
+ vctrl->mmio.acqh = 0;
+ }
+}
+
+/* Called when the virtual controller is opened */
+void nvme_mdev_mmio_open(struct nvme_mdev_vctrl *vctrl)
+{
+ if (!vctrl->mmio.shadow_db_supported)
+ nvme_mdev_vctrl_region_set_mmap(vctrl,
+ VFIO_PCI_BAR0_REGION_INDEX,
+ NVME_REG_DBS, PAGE_SIZE,
+ &nvme_mdev_mmio_dbs_vm_ops);
+ else
+ nvme_mdev_vctrl_region_disable_mmap(vctrl,
+ VFIO_PCI_BAR0_REGION_INDEX);
+}
+
+/* Called when the virtual controller queues are enabled */
+int nvme_mdev_mmio_enable_dbs(struct nvme_mdev_vctrl *vctrl)
+{
+ if (WARN_ON(vctrl->mmio.shadow_db_en))
+ return -EINVAL;
+
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ /* set up the normal doorbells and reset them */
+ vctrl->mmio.dbs = vctrl->mmio.db_page_kmap;
+ vctrl->mmio.eidxs = vctrl->mmio.fake_eidx_kmap;
+ memset((void *)vctrl->mmio.dbs, 0, DB_AREA_SIZE);
+ memset((void *)vctrl->mmio.eidxs, 0, DB_AREA_SIZE);
+ return 0;
+}
+
+/* Called when the virtual controller shadow doorbell is enabled */
+int nvme_mdev_mmio_enable_dbs_shadow(struct nvme_mdev_vctrl *vctrl,
+ dma_addr_t sdb_iova,
+ dma_addr_t eidx_iova)
+{
+ int ret;
+
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ ret = nvme_mdev_viommu_create_kmap(&vctrl->viommu,
+ sdb_iova, &vctrl->mmio.sdb_map);
+ if (ret)
+ return ret;
+
+ ret = nvme_mdev_viommu_create_kmap(&vctrl->viommu,
+ eidx_iova, &vctrl->mmio.seidx_map);
+ if (ret) {
+ nvme_mdev_viommu_free_kmap(&vctrl->viommu,
+ &vctrl->mmio.sdb_map);
+ return ret;
+ }
+
+ vctrl->mmio.dbs = vctrl->mmio.sdb_map.kmap;
+ vctrl->mmio.eidxs = vctrl->mmio.seidx_map.kmap;
+
+ memcpy((void *)vctrl->mmio.dbs,
+ vctrl->mmio.db_page_kmap, DB_AREA_SIZE);
+
+ memcpy((void *)vctrl->mmio.eidxs,
+ vctrl->mmio.fake_eidx_kmap, DB_AREA_SIZE);
+
+ vctrl->mmio.shadow_db_en = true;
+ return 0;
+}
+
+/* Called on guest mapping update to
+ * verify that our mappings are still intact
+ */
+void nvme_mdev_mmio_viommu_update(struct nvme_mdev_vctrl *vctrl)
+{
+ nvme_mdev_assert_io_not_running(vctrl);
+ if (!vctrl->mmio.shadow_db_en)
+ return;
+
+ nvme_mdev_viommu_update_kmap(&vctrl->viommu, &vctrl->mmio.sdb_map);
+ nvme_mdev_viommu_update_kmap(&vctrl->viommu, &vctrl->mmio.seidx_map);
+
+ vctrl->mmio.dbs = vctrl->mmio.sdb_map.kmap;
+ vctrl->mmio.eidxs = vctrl->mmio.seidx_map.kmap;
+}
+
+/* Disable the doorbells */
+void nvme_mdev_mmio_disable_dbs(struct nvme_mdev_vctrl *vctrl)
+{
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ /* Free the shadow doorbells */
+ nvme_mdev_viommu_free_kmap(&vctrl->viommu, &vctrl->mmio.sdb_map);
+ nvme_mdev_viommu_free_kmap(&vctrl->viommu, &vctrl->mmio.seidx_map);
+
+ /* Clear the doorbells */
+ vctrl->mmio.dbs = NULL;
+ vctrl->mmio.eidxs = NULL;
+ vctrl->mmio.shadow_db_en = false;
+}
+
+/* Called when the virtual controller is about to be freed */
+void nvme_mdev_mmio_free(struct nvme_mdev_vctrl *vctrl)
+{
+ nvme_mdev_assert_io_not_running(vctrl);
+ kunmap(vctrl->mmio.dbs_page);
+ put_page(vctrl->mmio.dbs_page);
+ kunmap(vctrl->mmio.fake_eidx_page);
+ put_page(vctrl->mmio.fake_eidx_page);
+}
diff --git a/drivers/nvme/mdev/pci.c b/drivers/nvme/mdev/pci.c
new file mode 100644
index 000000000000..b7cdeaaf9c2e
--- /dev/null
+++ b/drivers/nvme/mdev/pci.c
@@ -0,0 +1,247 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * NVMe virtual controller minimal PCI/PCIe config space implementation
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include "priv.h"
+
+/* setup a 64-bit PCI BAR */
+void nvme_mdev_pci_setup_bar(struct nvme_mdev_vctrl *vctrl,
+ u8 bar,
+ unsigned int size,
+ region_access_fn access_fn)
+{
+ nvme_mdev_vctrl_add_region(vctrl,
+ VFIO_PCI_BAR0_REGION_INDEX +
+ ((bar - PCI_BASE_ADDRESS_0) >> 2),
+ size, access_fn);
+
+ store_le32(vctrl->pcicfg.wmask + bar, ~((u64)size - 1));
+ store_le32(vctrl->pcicfg.values + bar,
+ PCI_BASE_ADDRESS_SPACE_MEMORY |
+ PCI_BASE_ADDRESS_MEM_TYPE_64);
+}
+
+/* Allocate a PCI capability */
+static u8 nvme_mdev_pci_allocate_cap(struct nvme_mdev_vctrl *vctrl,
+ u8 id, u8 size)
+{
+ u8 *cfg = vctrl->pcicfg.values;
+ u8 newcap = vctrl->pcicfg.end;
+ u8 cap = cfg[PCI_CAPABILITY_LIST];
+
+ size = round_up(size, 4);
+ // only standard cfg space caps for now
+ WARN_ON(newcap + size > 256);
+
+ if (!cfg[PCI_CAPABILITY_LIST]) {
+ /* special case for the first capability */
+ u16 status = load_le16(cfg + PCI_STATUS);
+
+ status |= PCI_STATUS_CAP_LIST;
+ store_le16(cfg + PCI_STATUS, status);
+
+ cfg[PCI_CAPABILITY_LIST] = newcap;
+ goto setupcap;
+ }
+
+ while (cfg[cap + PCI_CAP_LIST_NEXT] != 0)
+ cap = cfg[cap + PCI_CAP_LIST_NEXT];
+
+ cfg[cap + PCI_CAP_LIST_NEXT] = newcap;
+
+setupcap:
+ cfg[newcap + PCI_CAP_LIST_ID] = id;
+ cfg[newcap + PCI_CAP_LIST_NEXT] = 0;
+ vctrl->pcicfg.end += size;
+ return newcap;
+}
+
+static void nvme_mdev_pci_setup_pm_cap(struct nvme_mdev_vctrl *vctrl)
+{
+ u8 *cfg = vctrl->pcicfg.values;
+ u8 *cfgm = vctrl->pcicfg.wmask;
+
+ u8 cap = nvme_mdev_pci_allocate_cap(vctrl,
+ PCI_CAP_ID_PM, PCI_PM_SIZEOF);
+
+ store_le16(cfg + cap + PCI_PM_PMC, 0x3);
+ store_le16(cfg + cap + PCI_PM_CTRL, PCI_PM_CTRL_NO_SOFT_RESET);
+ store_le16(cfgm + cap + PCI_PM_CTRL, 0x3);
+ vctrl->pcicfg.pmcap = cap;
+}
+
+static void nvme_mdev_pci_setup_msix_cap(struct nvme_mdev_vctrl *vctrl)
+{
+ u8 *cfg = vctrl->pcicfg.values;
+ u8 *cfgm = vctrl->pcicfg.wmask;
+ u8 cap = nvme_mdev_pci_allocate_cap(vctrl,
+ PCI_CAP_ID_MSIX,
+ PCI_CAP_MSIX_SIZEOF);
+
+ int msix_tbl_size = roundup(MAX_VIRTUAL_IRQS * 16, PAGE_SIZE);
+ int msix_pba_size = roundup(DIV_ROUND_UP(MAX_VIRTUAL_IRQS, 8),
+ PAGE_SIZE);
+
+ store_le16(cfg + cap + PCI_MSIX_FLAGS, MAX_VIRTUAL_IRQS - 1);
+ store_le16(cfgm + cap + PCI_MSIX_FLAGS,
+ PCI_MSIX_FLAGS_MASKALL | PCI_MSIX_FLAGS_ENABLE);
+
+ store_le32(cfg + cap + PCI_MSIX_TABLE, 0x2);
+ store_le32(cfg + cap + PCI_MSIX_PBA, msix_tbl_size | 0x2);
+
+ nvme_mdev_pci_setup_bar(vctrl, PCI_BASE_ADDRESS_2,
+ __roundup_pow_of_two(msix_tbl_size +
+ msix_pba_size), NULL);
+ vctrl->pcicfg.msixcap = cap;
+}
+
+static void nvme_mdev_pci_setup_pcie_cap(struct nvme_mdev_vctrl *vctrl)
+{
+ u8 *cfg = vctrl->pcicfg.values;
+ u8 cap = nvme_mdev_pci_allocate_cap(vctrl,
+ PCI_CAP_ID_EXP,
+ PCI_CAP_EXP_ENDPOINT_SIZEOF_V2);
+
+ store_le16(cfg + cap + PCI_EXP_FLAGS, 0x02 |
+ (PCI_EXP_TYPE_ENDPOINT << 4));
+
+ store_le32(cfg + cap + PCI_EXP_DEVCAP,
+ PCI_EXP_DEVCAP_RBER | PCI_EXP_DEVCAP_FLR);
+ store_le32(cfg + cap + PCI_EXP_LNKCAP,
+ PCI_EXP_LNKCAP_SLS_8_0GB | (4 << 4) /*4x*/);
+ store_le16(cfg + cap + PCI_EXP_LNKSTA,
+ PCI_EXP_LNKSTA_CLS_8_0GB | (4 << 4) /*4x*/);
+
+ store_le32(cfg + cap + PCI_EXP_LNKCAP2, PCI_EXP_LNKCAP2_SLS_8_0GB);
+ store_le16(cfg + cap + PCI_EXP_LNKCTL2, PCI_EXP_LNKCTL2_TLS_8_0GT);
+ vctrl->pcicfg.pciecap = cap;
+}
+
+/* This is called on PCI config read/write */
+static int nvme_mdev_pci_cfg_access(struct nvme_mdev_vctrl *vctrl,
+ u16 offset, char *buf,
+ u32 count, bool is_write)
+{
+ unsigned int i;
+
+ mutex_lock(&vctrl->lock);
+
+ if (!is_write) {
+ memcpy(buf, (vctrl->pcicfg.values + offset), count);
+ goto out;
+ }
+
+ for (i = 0; i < count; i++) {
+ u8 address = offset + i;
+ u8 value = buf[i];
+ u8 old_value = vctrl->pcicfg.values[address];
+ u8 wmask = vctrl->pcicfg.wmask[address];
+ u8 new_value = (value & wmask) | (old_value & ~wmask);
+
+ /* D3/D0 power control */
+ if (address == vctrl->pcicfg.pmcap + PCI_PM_CTRL) {
+ u8 state = new_value & 0x03;
+
+ if (state != 0 && state != 3)
+ new_value = old_value;
+
+ if (old_value != new_value) {
+ const char *s = state == 3 ? "D3" : "D0";
+
+ if (state == 3)
+ __nvme_mdev_vctrl_reset(vctrl, true);
+ _DBG(vctrl, "PCI: going to %s\n", s);
+ }
+ }
+
+ /* FLR reset */
+ if (address == vctrl->pcicfg.pciecap + PCI_EXP_DEVCTL + 1)
+ if (value & 0x80) {
+ _DBG(vctrl, "PCI: FLR reset\n");
+ __nvme_mdev_vctrl_reset(vctrl, true);
+ }
+ vctrl->pcicfg.values[address] = new_value;
+ }
+out:
+ mutex_unlock(&vctrl->lock);
+ return count;
+}
+
+/* setup pci configuration */
+int nvme_mdev_pci_create(struct nvme_mdev_vctrl *vctrl)
+{
+ u8 *cfg, *cfgm;
+
+ vctrl->pcicfg.values = kzalloc(PCI_CFG_SIZE, GFP_KERNEL);
+ if (!vctrl->pcicfg.values)
+ return -ENOMEM;
+
+ vctrl->pcicfg.wmask = kzalloc(PCI_CFG_SIZE, GFP_KERNEL);
+ if (!vctrl->pcicfg.wmask) {
+ kfree(vctrl->pcicfg.values);
+ return -ENOMEM;
+ }
+
+ cfg = vctrl->pcicfg.values;
+ cfgm = vctrl->pcicfg.wmask;
+
+ nvme_mdev_vctrl_add_region(vctrl,
+ VFIO_PCI_CONFIG_REGION_INDEX,
+ PCI_CFG_SIZE,
+ nvme_mdev_pci_cfg_access);
+
+ /* vendor information */
+ store_le16(cfg + PCI_VENDOR_ID, NVME_MDEV_PCI_VENDOR_ID);
+ store_le16(cfg + PCI_DEVICE_ID, NVME_MDEV_PCI_DEVICE_ID);
+
+ /* pci command register */
+ store_le16(cfgm + PCI_COMMAND,
+ PCI_COMMAND_INTX_DISABLE |
+ PCI_COMMAND_MEMORY |
+ PCI_COMMAND_MASTER);
+
+ /* pci status register */
+ store_le16(cfg + PCI_STATUS, PCI_STATUS_CAP_LIST);
+
+ /* subsystem information */
+ store_le16(cfg + PCI_SUBSYSTEM_VENDOR_ID, NVME_MDEV_PCI_SUBVENDOR_ID);
+ store_le16(cfg + PCI_SUBSYSTEM_ID, NVME_MDEV_PCI_SUBDEVICE_ID);
+ store_le8(cfg + PCI_CLASS_REVISION, NVME_MDEV_PCI_REVISION);
+
+ /* Programming Interface (NVM Express) */
+ store_le8(cfg + PCI_CLASS_PROG, 0x02);
+
+ /* Device class and subclass
+ * (Mass storage controller, Non-Volatile memory controller)
+ */
+ store_le16(cfg + PCI_CLASS_DEVICE, 0x0108);
+
+ /* dummy read/write */
+ store_le8(cfgm + PCI_CACHE_LINE_SIZE, 0xFF);
+
+ /* initial value */
+ store_le8(cfg + PCI_CAPABILITY_LIST, 0);
+ vctrl->pcicfg.end = 0x40;
+
+ nvme_mdev_pci_setup_pm_cap(vctrl);
+ nvme_mdev_pci_setup_msix_cap(vctrl);
+ nvme_mdev_pci_setup_pcie_cap(vctrl);
+
+ /* INTX IRQ number - info only for BIOS */
+ store_le8(cfgm + PCI_INTERRUPT_LINE, 0xFF);
+ store_le8(cfg + PCI_INTERRUPT_PIN, 0x01);
+
+ return 0;
+}
+
+/* teardown pci configuration */
+void nvme_mdev_pci_free(struct nvme_mdev_vctrl *vctrl)
+{
+ kfree(vctrl->pcicfg.values);
+ kfree(vctrl->pcicfg.wmask);
+ vctrl->pcicfg.values = NULL;
+ vctrl->pcicfg.wmask = NULL;
+}
diff --git a/drivers/nvme/mdev/priv.h b/drivers/nvme/mdev/priv.h
new file mode 100644
index 000000000000..9f65e46c1ab2
--- /dev/null
+++ b/drivers/nvme/mdev/priv.h
@@ -0,0 +1,700 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Driver private data structures and helper macros
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+
+#ifndef _MDEV_NVME_PRIV_H
+#define _MDEV_NVME_PRIV_H
+
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/rbtree.h>
+#include <linux/vfio.h>
+#include <linux/mdev.h>
+#include <linux/pci.h>
+#include <linux/eventfd.h>
+#include <linux/byteorder/generic.h>
+#include "../host/nvme.h"
+#include "mdev.h"
+
+#define NVME_MDEV_NVME_VER NVME_VS(0x01, 0x03, 0x00)
+#define NVME_MDEV_FIRMWARE_VERSION "1.0"
+
+#define NVME_MDEV_PCI_VENDOR_ID PCI_VENDOR_ID_REDHAT_QUMRANET
+#define NVME_MDEV_PCI_DEVICE_ID 0x1234
+#define NVME_MDEV_PCI_SUBVENDOR_ID PCI_SUBVENDOR_ID_REDHAT_QUMRANET
+#define NVME_MDEV_PCI_SUBDEVICE_ID 0
+#define NVME_MDEV_PCI_REVISION 0x0
+
+#define DB_STRIDE_SHIFT 4 /*4 = 1 cacheline */
+#define MAX_VIRTUAL_QUEUES 16
+#define MAX_VIRTUAL_QUEUE_DEPTH 0xFFFF
+#define MAX_VIRTUAL_NAMESPACES 16 /* NSID = 1..16*/
+#define MAX_VIRTUAL_IRQS 16
+
+#define MAX_HOST_QUEUES 4
+#define MAX_AER_COMMANDS 16
+#define MAX_LOG_PAGES 16
+
+extern bool use_shadow_doorbell;
+extern unsigned int io_timeout_ms;
+extern unsigned int poll_timeout_ms;
+extern unsigned int admin_poll_rate_ms;
+
+/* virtual submission queue */
+struct nvme_vsq {
+ u16 qid;
+ u16 size;
+ u16 head; /* next item to read */
+
+ struct nvme_command *data; /* the queue itself */
+ struct nvme_vcq *vcq; /* completion queue */
+
+ dma_addr_t iova;
+ bool cont;
+
+ u16 hsq;
+};
+
+/* virtual completion queue */
+struct nvme_vcq {
+ /* basic queue settings */
+ u16 qid;
+ u16 size;
+ u16 head;
+ u16 tail;
+ bool phase; /* current queue phase */
+
+ volatile struct nvme_completion *data;
+
+ /* number of items pending */
+ u16 pending;
+
+ /* IRQ settings */
+ int irq; /* -1 if disabled */
+
+ dma_addr_t iova;
+ bool cont;
+};
+
+/* A virtual namespace */
+struct nvme_mdev_vns {
+ /* host NVMe namespace that we are attached to */
+ struct nvme_ns *host_ns;
+
+ /* block device that corresponds to the partition of that namespace */
+ struct block_device *host_part;
+ fmode_t fmode;
+
+ u32 nsid;
+
+ /* NSID on the host */
+ u32 host_nsid;
+
+ /* host partition ID */
+ unsigned int host_partid;
+
+ /* Offset inside the host namespace (start of the partition) */
+ u64 host_lba_offset;
+
+ /* size of each block on the real namespace, same for host and guest */
+ u8 blksize_shift;
+
+ /* size of the namespace in LBAs */
+ u64 ns_size;
+
+ /* is the namespace read-only? */
+ bool readonly;
+
+ /* UUID of this namespace */
+ uuid_t uuid;
+
+ /* Optimal IO boundary */
+ u16 noiob;
+};
+
+/* Virtual IOMMU */
+struct nvme_mdev_viommu {
+ struct device *hw_dev;
+ struct device *sw_dev;
+
+ /* dma/prp bookkeeping */
+ struct rb_root_cached maps_tree;
+ struct list_head maps_list;
+ struct nvme_mdev_vctrl *vctrl;
+};
+
+struct doorbell {
+ volatile __le32 sqt;
+ u8 rsvd1[(4 << DB_STRIDE_SHIFT) - sizeof(__le32)];
+ volatile __le32 cqh;
+ u8 rsvd2[(4 << DB_STRIDE_SHIFT) - sizeof(__le32)];
+};
+
+/* MMIO state */
+struct nvme_mdev_user_ctrl_mmio {
+ u32 cc; /* controller configuration */
+ u32 csts; /* controller status */
+ u64 cap; /* controller capabilities */
+
+ /* admin queue location & size */
+ u32 aqa;
+ u32 asql;
+ u32 asqh;
+ u32 acql;
+ u32 acqh;
+
+ bool shadow_db_supported;
+ bool shadow_db_en;
+
+ /* Regular doorbells */
+ struct page *dbs_page;
+ struct page *fake_eidx_page;
+ void *db_page_kmap;
+ void *fake_eidx_kmap;
+
+ /* Shadow doorbells */
+ struct page_map sdb_map;
+ struct page_map seidx_map;
+
+ /* Current doorbell mappings */
+ volatile struct doorbell *dbs;
+ volatile struct doorbell *eidxs;
+};
+
+/* PCI configuration space of the device */
+#define PCI_CFG_SIZE 4096
+struct nvme_mdev_pci_cfg_space {
+ u8 *values;
+ u8 *wmask;
+
+ u8 pmcap;
+ u8 pciecap;
+ u8 msixcap;
+ u8 end;
+};
+
+/* IRQ state of the controller */
+struct nvme_mdev_user_irq {
+ struct eventfd_ctx *trigger;
+ /* IRQ coalescing */
+ bool irq_coalesc_en;
+ time_t irq_time;
+ unsigned int irq_pending_cnt;
+};
+
+enum nvme_mdev_irq_mode {
+ NVME_MDEV_IMODE_NONE,
+ NVME_MDEV_IMODE_INTX,
+ NVME_MDEV_IMODE_MSIX,
+};
+
+struct nvme_mdev_user_irqs {
+ /* one of VFIO_PCI_{INTX|MSI|MSIX}_IRQ_INDEX */
+ enum nvme_mdev_irq_mode mode;
+
+ struct nvme_mdev_user_irq vecs[MAX_VIRTUAL_IRQS];
+ /* user interrupt coalescing settings */
+ u8 irq_coalesc_max;
+ unsigned int irq_coalesc_time_us;
+ /* device removal trigger */
+ struct eventfd_ctx *request_trigger;
+};
+
+/* AER state */
+struct nvme_mdev_user_events {
+ /* async event request CIDs */
+ u16 aer_cids[MAX_AER_COMMANDS];
+ unsigned int aer_cid_count;
+
+ /* events that are enabled */
+ unsigned long events_enabled[BITS_TO_LONGS(MAX_LOG_PAGES)];
+
+ /* events that are masked until the next log page read */
+ unsigned long events_masked[BITS_TO_LONGS(MAX_LOG_PAGES)];
+
+ /* events pending to be sent when the user gives us an AER */
+ unsigned long events_pending[BITS_TO_LONGS(MAX_LOG_PAGES)];
+ u32 event_values[MAX_LOG_PAGES];
+};
+
+/* host IO queue */
+struct nvme_mdev_hq {
+ unsigned int usecount;
+ struct list_head link;
+ unsigned int hqid;
+};
+
+/* IO region abstraction (BARs, the PCI config space) */
+struct nvme_mdev_vctrl;
+typedef int (*region_access_fn) (struct nvme_mdev_vctrl *vctrl,
+ u16 offset, char *buf,
+ u32 size, bool is_write);
+
+struct nvme_mdev_io_region {
+ unsigned int size;
+ region_access_fn rw;
+
+ /* If != NULL, mmap_area_start/size specify the mmapped window
+ * of this region
+ */
+ const struct vm_operations_struct *mmap_ops;
+ unsigned int mmap_area_start;
+ unsigned int mmap_area_size;
+};
+
+/* Virtual NVMe controller state */
+struct nvme_mdev_vctrl {
+ struct kref ref;
+ struct mutex lock;
+ struct list_head link;
+
+ struct mdev_device *mdev;
+ struct nvme_mdev_hctrl *hctrl;
+ bool inuse;
+
+ struct nvme_mdev_io_region regions[VFIO_PCI_NUM_REGIONS];
+
+ /* virtual controller state */
+ struct nvme_mdev_user_ctrl_mmio mmio;
+ struct nvme_mdev_pci_cfg_space pcicfg;
+ struct nvme_mdev_user_irqs irqs;
+ struct nvme_mdev_user_events events;
+
+ /* emulated namespaces */
+ struct nvme_mdev_vns *namespaces[MAX_VIRTUAL_NAMESPACES];
+ __le32 ns_log[MAX_VIRTUAL_NAMESPACES];
+ unsigned int ns_log_size;
+
+ /* emulated submission queues */
+ struct nvme_vsq vsqs[MAX_VIRTUAL_QUEUES];
+ unsigned long vsq_en[BITS_TO_LONGS(MAX_VIRTUAL_QUEUES)];
+
+ /* emulated completion queues */
+ unsigned long vcq_en[BITS_TO_LONGS(MAX_VIRTUAL_QUEUES)];
+ struct nvme_vcq vcqs[MAX_VIRTUAL_QUEUES];
+
+ /* Host IO queues */
+ int max_host_hw_queues;
+ struct list_head host_hw_queues;
+
+ /* Interface to access user memory */
+ struct notifier_block vfio_map_notifier;
+ struct notifier_block vfio_unmap_notifier;
+ struct nvme_mdev_viommu viommu;
+
+ /* the IO thread */
+ struct task_struct *iothread;
+ bool iothread_parked;
+ bool io_idle;
+ ktime_t now;
+
+ /* Settings */
+ unsigned int arb_burst_shift;
+ u8 worload_hint;
+ unsigned int iothread_cpu;
+
+ /* Identification */
+ char subnqn[256];
+ char serial[9];
+
+ bool vctrl_paused; /* true when the host device paused our IO */
+};
+
+/* mdev instance type */
+struct nvme_mdev_inst_type {
+ unsigned int max_hw_queues;
+ char name[16];
+ struct attribute_group *attrgroup;
+};
+
+/* Abstraction of the host controller that we are connected to */
+struct nvme_mdev_hctrl {
+ struct mutex lock;
+
+ /* NUMA node of the host controller */
+ int node;
+
+ struct list_head link;
+ struct kref ref;
+ bool removing;
+
+ /* for reference counting */
+ struct nvme_ctrl *nvme_ctrl;
+
+ /* Host controller parameters */
+ u16 oncs;
+ u8 mdts;
+ unsigned int id;
+
+ /* bookkeeping for the number of host queues we can allocate */
+ unsigned int nr_host_queues;
+};
+
+/* vctrl.c */
+struct nvme_mdev_vctrl *nvme_mdev_vctrl_create(struct mdev_device *mdev,
+ struct nvme_mdev_hctrl *hctrl,
+ unsigned int max_host_queues);
+
+int nvme_mdev_vctrl_destroy(struct nvme_mdev_vctrl *vctrl);
+
+int nvme_mdev_vctrl_open(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_vctrl_release(struct nvme_mdev_vctrl *vctrl);
+
+void nvme_mdev_vctrl_pause(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_vctrl_resume(struct nvme_mdev_vctrl *vctrl);
+
+bool nvme_mdev_vctrl_enable(struct nvme_mdev_vctrl *vctrl,
+ dma_addr_t cqiova, dma_addr_t sqiova, u32 sizes);
+
+void nvme_mdev_vctrl_disable(struct nvme_mdev_vctrl *vctrl);
+
+void nvme_mdev_vctrl_reset(struct nvme_mdev_vctrl *vctrl);
+void __nvme_mdev_vctrl_reset(struct nvme_mdev_vctrl *vctrl, bool pci_reset);
+
+void nvme_mdev_vctrl_add_region(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index, unsigned int size,
+ region_access_fn access_fn);
+
+void nvme_mdev_vctrl_region_set_mmap(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index,
+ unsigned int offset,
+ unsigned int size,
+ const struct vm_operations_struct *ops);
+
+void nvme_mdev_vctrl_region_disable_mmap(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index);
+
+void nvme_mdev_vctrl_bind_iothread(struct nvme_mdev_vctrl *vctrl,
+ unsigned int cpu);
+
+int nvme_mdev_vctrl_set_shadow_doorbell_supported(struct nvme_mdev_vctrl *vctrl,
+ bool enable);
+
+int nvme_mdev_vctrl_hq_alloc(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_vctrl_hq_free(struct nvme_mdev_vctrl *vctrl, u16 qid);
+unsigned int nvme_mdev_vctrl_hqs_list(struct nvme_mdev_vctrl *vctrl, u16 *out);
+bool nvme_mdev_vctrl_is_dead(struct nvme_mdev_vctrl *vctrl);
+
+int nvme_mdev_vctrl_viommu_map(struct nvme_mdev_vctrl *vctrl, u32 flags,
+ dma_addr_t iova, u64 size);
+
+int nvme_mdev_vctrl_viommu_unmap(struct nvme_mdev_vctrl *vctrl,
+ dma_addr_t iova, u64 size);
+
+/* hctrl.c */
+struct nvme_mdev_inst_type *nvme_mdev_inst_type_get(const char *name);
+struct nvme_mdev_hctrl *nvme_mdev_hctrl_lookup_get(struct device *parent);
+void nvme_mdev_hctrl_put(struct nvme_mdev_hctrl *hctrl);
+
+int nvme_mdev_hctrl_hqs_available(struct nvme_mdev_hctrl *hctrl);
+
+bool nvme_mdev_hctrl_hqs_reserve(struct nvme_mdev_hctrl *hctrl,
+ unsigned int n);
+void nvme_mdev_hctrl_hqs_unreserve(struct nvme_mdev_hctrl *hctrl,
+ unsigned int n);
+
+int nvme_mdev_hctrl_hq_alloc(struct nvme_mdev_hctrl *hctrl);
+void nvme_mdev_hctrl_hq_free(struct nvme_mdev_hctrl *hctrl, u16 qid);
+bool nvme_mdev_hctrl_hq_can_submit(struct nvme_mdev_hctrl *hctrl, u16 qid);
+bool nvme_mdev_hctrl_hq_check_op(struct nvme_mdev_hctrl *hctrl, u8 optcode);
+
+int nvme_mdev_hctrl_hq_submit(struct nvme_mdev_hctrl *hctrl,
+ u16 qid, u32 tag,
+ struct nvme_command *cmd,
+ struct nvme_ext_data_iter *datait);
+
+int nvme_mdev_hctrl_hq_poll(struct nvme_mdev_hctrl *hctrl,
+ u32 qid,
+ struct nvme_ext_cmd_result *results,
+ unsigned int max_len);
+
+void nvme_mdev_hctrl_destroy_all(void);
+
+/* io.c */
+int nvme_mdev_io_create(struct nvme_mdev_vctrl *vctrl, unsigned int cpu);
+void nvme_mdev_io_free(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_io_pause(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_io_resume(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_assert_io_not_running(struct nvme_mdev_vctrl *vctrl);
+
+/* mmio.c */
+int nvme_mdev_mmio_create(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_mmio_open(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_mmio_reset(struct nvme_mdev_vctrl *vctrl, bool pci_reset);
+void nvme_mdev_mmio_free(struct nvme_mdev_vctrl *vctrl);
+
+int nvme_mdev_mmio_enable_dbs(struct nvme_mdev_vctrl *vctrl);
+int nvme_mdev_mmio_enable_dbs_shadow(struct nvme_mdev_vctrl *vctrl,
+ dma_addr_t sdb_iova, dma_addr_t eidx_iova);
+
+void nvme_mdev_mmio_viommu_update(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_mmio_disable_dbs(struct nvme_mdev_vctrl *vctrl);
+bool nvme_mdev_mmio_db_check(struct nvme_mdev_vctrl *vctrl,
+ u16 qid, u16 size, u16 db);
+
+/* pci.c */
+int nvme_mdev_pci_create(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_pci_free(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_pci_setup_bar(struct nvme_mdev_vctrl *vctrl,
+ u8 bar, unsigned int size,
+ region_access_fn access_fn);
+/* adm.c */
+void nvme_mdev_adm_process_sq(struct nvme_mdev_vctrl *vctrl);
+
+/* events.c */
+void nvme_mdev_events_init(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_events_reset(struct nvme_mdev_vctrl *vctrl);
+
+int nvme_mdev_event_request_receive(struct nvme_mdev_vctrl *vctrl, u16 cid);
+void nvme_mdev_event_process_ack(struct nvme_mdev_vctrl *vctrl, u8 log_page);
+
+void nvme_mdev_event_send(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_async_event_type type,
+ enum nvme_async_event info);
+
+u32 nvme_mdev_event_read_aen_config(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_event_set_aen_config(struct nvme_mdev_vctrl *vctrl, u32 value);
+
+/* irq.c */
+void nvme_mdev_irqs_setup(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_irqs_reset(struct nvme_mdev_vctrl *vctrl);
+
+int nvme_mdev_irqs_enable(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_mdev_irq_mode mode);
+void nvme_mdev_irqs_disable(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_mdev_irq_mode mode);
+
+int nvme_mdev_irqs_set_triggers(struct nvme_mdev_vctrl *vctrl,
+ int start, int count, int32_t *fds);
+int nvme_mdev_irqs_set_unplug_trigger(struct nvme_mdev_vctrl *vctrl,
+ int32_t fd);
+
+void nvme_mdev_irq_raise_unplug_event(struct nvme_mdev_vctrl *vctrl,
+ unsigned int count);
+void nvme_mdev_irq_raise(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index);
+void nvme_mdev_irq_trigger(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index);
+void nvme_mdev_irq_cond_trigger(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index);
+void nvme_mdev_irq_clear(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index);
+
+/* ns.c */
+int nvme_mdev_vns_open(struct nvme_mdev_vctrl *vctrl,
+ u32 host_nsid, unsigned int host_partid);
+int nvme_mdev_vns_destroy(struct nvme_mdev_vctrl *vctrl,
+ u32 user_nsid);
+void nvme_mdev_vns_destroy_all(struct nvme_mdev_vctrl *vctrl);
+
+struct nvme_mdev_vns *nvme_mdev_vns_from_vnsid(struct nvme_mdev_vctrl *vctrl,
+ u32 user_ns_id);
+
+int nvme_mdev_vns_print_description(struct nvme_mdev_vctrl *vctrl,
+ char *buf, unsigned int size);
+void nvme_mdev_vns_host_ns_update(struct nvme_mdev_vctrl *vctrl,
+ u32 host_nsid, bool removed);
+
+void nvme_mdev_vns_log_reset(struct nvme_mdev_vctrl *vctrl);
+
+/* vcq.c */
+int nvme_mdev_vcq_init(struct nvme_mdev_vctrl *vctrl, u16 qid,
+ dma_addr_t iova, bool cont, u16 size, int irq);
+
+int nvme_mdev_vcq_viommu_update(struct nvme_mdev_viommu *viommu,
+ struct nvme_vcq *q);
+
+void nvme_mdev_vcq_delete(struct nvme_mdev_vctrl *vctrl, u16 qid);
+void nvme_mdev_vcq_process(struct nvme_mdev_vctrl *vctrl, u16 qid,
+ bool trigger_irqs);
+
+bool nvme_mdev_vcq_flush(struct nvme_mdev_vctrl *vctrl, u16 qid);
+bool nvme_mdev_vcq_reserve_space(struct nvme_vcq *q);
+
+void nvme_mdev_vcq_write_io(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vcq *q, u16 sq_head,
+ u16 sqid, u16 cid, u16 status);
+
+void nvme_mdev_vcq_write_adm(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vcq *q, u32 dw0,
+ u16 sq_head, u16 cid, u16 status);
+/* vsq.c */
+int nvme_mdev_vsq_init(struct nvme_mdev_vctrl *vctrl, u16 qid,
+ dma_addr_t iova, bool cont, u16 size, u16 cqid);
+
+int nvme_mdev_vsq_viommu_update(struct nvme_mdev_viommu *viommu,
+ struct nvme_vsq *q);
+
+void nvme_mdev_vsq_delete(struct nvme_mdev_vctrl *vctrl, u16 qid);
+
+bool nvme_mdev_vsq_has_data(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vsq *q);
+
+const struct nvme_command *nvme_mdev_vsq_get_cmd(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vsq *q);
+
+void nvme_mdev_vsq_cmd_done_io(struct nvme_mdev_vctrl *vctrl,
+ u16 sqid, u16 cid, u16 status);
+void nvme_mdev_vsq_cmd_done_adm(struct nvme_mdev_vctrl *vctrl,
+ u32 dw0, u16 cid, u16 status);
+bool nvme_mdev_vsq_suspend_io(struct nvme_mdev_vctrl *vctrl, u16 sqid);
+
+/* udata.c */
+void nvme_mdev_udata_iter_setup(struct nvme_mdev_viommu *viommu,
+ struct nvme_ext_data_iter *iter);
+
+int nvme_mdev_udata_iter_set_dptr(struct nvme_ext_data_iter *it,
+ const union nvme_data_ptr *dptr, u64 size);
+
+struct nvme_ext_data_iter *
+nvme_mdev_kdata_iter_alloc(struct nvme_mdev_viommu *viommu, unsigned int size);
+
+int nvme_mdev_read_from_udata(void *dst, struct nvme_ext_data_iter *srcit,
+ u64 size);
+
+int nvme_mdev_write_to_udata(struct nvme_ext_data_iter *dstit, void *src,
+ u64 size);
+
+void *nvme_mdev_udata_queue_vmap(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova,
+ unsigned int size, bool cont);
+/* viommu.c */
+void nvme_mdev_viommu_init(struct nvme_mdev_viommu *viommu,
+ struct device *sw_dev,
+ struct device *hw_dev);
+
+int nvme_mdev_viommu_add(struct nvme_mdev_viommu *viommu, u32 flags,
+ dma_addr_t iova, u64 size);
+
+int nvme_mdev_viommu_remove(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova, u64 size);
+
+int nvme_mdev_viommu_translate(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova,
+ dma_addr_t *physical,
+ dma_addr_t *host_iova);
+
+int nvme_mdev_viommu_create_kmap(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova, struct page_map *page);
+
+void nvme_mdev_viommu_free_kmap(struct nvme_mdev_viommu *viommu,
+ struct page_map *page);
+
+void nvme_mdev_viommu_update_kmap(struct nvme_mdev_viommu *viommu,
+ struct page_map *page);
+
+void nvme_mdev_viommu_reset(struct nvme_mdev_viommu *viommu);
+
+/* some utilities */
+
+#define store_le64(address, value) (*((__le64 *)(address)) = cpu_to_le64(value))
+#define store_le32(address, value) (*((__le32 *)(address)) = cpu_to_le32(value))
+#define store_le16(address, value) (*((__le16 *)(address)) = cpu_to_le16(value))
+#define store_le8(address, value) (*((u8 *)(address)) = (value))
+
+#define load_le16(address) le16_to_cpu(*(__le16 *)(address))
+#define load_le32(address) le32_to_cpu(*(__le32 *)(address))
+
+#define store_strsp(dst, src) \
+ memcpy_and_pad(dst, sizeof(dst), src, sizeof(src) - 1, ' ')
+
+#define DNR(e) ((e) | NVME_SC_DNR)
+
+#define PAGE_ADDRESS(address) ((address) & PAGE_MASK)
+#define OFFSET_IN_PAGE(address) ((address) & ~(PAGE_MASK))
+
+#define _DBG(vctrl, fmt, ...) \
+ dev_dbg(mdev_dev((vctrl)->mdev), fmt, ##__VA_ARGS__)
+
+#define _INFO(vctrl, fmt, ...) \
+ dev_info(mdev_dev((vctrl)->mdev), fmt, ##__VA_ARGS__)
+
+#define _WARN(vctrl, fmt, ...) \
+ dev_warn(mdev_dev((vctrl)->mdev), fmt, ##__VA_ARGS__)
+
+#define mdev_to_vctrl(mdev) \
+ ((struct nvme_mdev_vctrl *)mdev_get_drvdata(mdev))
+
+#define dev_to_vctrl(dev) \
+ mdev_to_vctrl(mdev_from_dev(dev))
+
+#define RSRV_NSID (BIT(1))
+#define RSRV_DW23 (BIT(2) | BIT(3))
+#define RSRV_MPTR (BIT(4) | BIT(5))
+
+#define RSRV_DPTR (BIT(6) | BIT(7) | BIT(8) | BIT(9))
+#define RSRV_DPTR_PRP2 (BIT(8) | BIT(9))
+
+#define RSRV_DW10_15 (BIT(10) | BIT(11) | BIT(12) | BIT(13) | BIT(14) | BIT(15))
+#define RSRV_DW11_15 (BIT(11) | BIT(12) | BIT(13) | BIT(14) | BIT(15))
+#define RSRV_DW12_15 (BIT(12) | BIT(13) | BIT(14) | BIT(15))
+#define RSRV_DW13_15 (BIT(13) | BIT(14) | BIT(15))
+#define RSRV_DW14_15 (BIT(14) | BIT(15))
+
+static inline bool check_reserved_dwords(const u32 *dwords,
+ int count, unsigned long bitmask)
+{
+ int bit;
+
+ if (WARN_ON(count > BITS_PER_TYPE(long)))
+ return false;
+
+ for_each_set_bit(bit, &bitmask, count)
+ if (dwords[bit])
+ return false;
+ return true;
+}
+
+static inline bool check_range(u64 start, u64 size, u64 end)
+{
+ u64 test = start + size;
+
+ /* check for overflow */
+ if (test < start || test < size)
+ return false;
+ return test <= end;
+}
+
+/* Rough translation of internal errors to NVMe status codes */
+static inline int nvme_mdev_translate_error(int error)
+{
+ /* nvme status, including no error (NVME_SC_SUCCESS) */
+ if (error >= 0)
+ return error;
+
+ switch (error) {
+ case -ENOMEM:
+ /* no memory - truly an internal error */
+ return NVME_SC_INTERNAL;
+ case -ENOSPC:
+ /* Happens when the user sends a too large PRP list.
+ * The user shouldn't do this, since the maximum transfer size
+ * is specified in the controller caps
+ */
+ return DNR(NVME_SC_DATA_XFER_ERROR);
+ case -EFAULT:
+ /* Bad memory pointers in the prp lists */
+ return DNR(NVME_SC_DATA_XFER_ERROR);
+ case -EINVAL:
+ /* Bad prp offsets in the prp lists/command */
+ return DNR(NVME_SC_PRP_OFFSET_INVALID);
+ default:
+ /* Shouldn't happen */
+ WARN_ON_ONCE(true);
+ return NVME_SC_INTERNAL;
+ }
+}
+
+static inline bool timeout(ktime_t event, ktime_t now, unsigned long timeout_ms)
+{
+ return ktime_ms_delta(now, event) > (long)timeout_ms;
+}
+
+extern struct mdev_parent_ops mdev_fops;
+extern struct list_head nvme_mdev_vctrl_list;
+extern struct mutex nvme_mdev_vctrl_list_mutex;
+
+#endif /* _MDEV_NVME_H */
diff --git a/drivers/nvme/mdev/udata.c b/drivers/nvme/mdev/udata.c
new file mode 100644
index 000000000000..7af6b3f6d6aa
--- /dev/null
+++ b/drivers/nvme/mdev/udata.c
@@ -0,0 +1,390 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * User (guest) data access routines
+ * Implementation of PRP iterator in user memory
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/highmem.h>
+#include <linux/slab.h>
+#include <linux/mdev.h>
+#include <linux/nvme.h>
+#include "priv.h"
+
+#define MAX_PRP ((PAGE_SIZE / sizeof(__le64)) - 1)
+
+/* Set up a new PRP iterator */
+void nvme_mdev_udata_iter_setup(struct nvme_mdev_viommu *viommu,
+ struct nvme_ext_data_iter *iter)
+{
+ iter->viommu = viommu;
+ iter->count = 0;
+ iter->next = NULL;
+ iter->release = NULL;
+}
+
+/* Load a new prp list into the iterator. Internal */
+static int nvme_mdev_udata_iter_load_prplist(struct nvme_ext_data_iter *iter,
+ dma_addr_t iova)
+{
+ dma_addr_t data_iova;
+ int ret;
+ __le64 *map;
+
+ /* map the prp list */
+ ret = nvme_mdev_viommu_create_kmap(iter->viommu,
+ PAGE_ADDRESS(iova),
+ &iter->uprp.page);
+ if (ret)
+ return ret;
+
+ iter->uprp.index = OFFSET_IN_PAGE(iova) / (sizeof(__le64));
+
+ /* read its first entry and check its alignment */
+ map = iter->uprp.page.kmap;
+ data_iova = le64_to_cpu(map[iter->uprp.index]);
+
+ if (OFFSET_IN_PAGE(data_iova) != 0) {
+ nvme_mdev_viommu_free_kmap(iter->viommu, &iter->uprp.page);
+ return -EINVAL;
+ }
+
+ /* translate the entry to complete the setup */
+ ret = nvme_mdev_viommu_translate(iter->viommu, data_iova,
+ &iter->physical, &iter->host_iova);
+ if (ret)
+ nvme_mdev_viommu_free_kmap(iter->viommu, &iter->uprp.page);
+
+ return ret;
+}
+
+/* ->next function when iterator points to prp list */
+static int nvme_mdev_udata_iter_next_prplist(struct nvme_ext_data_iter *iter)
+{
+ dma_addr_t iova;
+ int ret;
+ __le64 *map = iter->uprp.page.kmap;
+
+ if (WARN_ON(iter->count <= 0))
+ return 0;
+
+ if (--iter->count == 0) {
+ nvme_mdev_viommu_free_kmap(iter->viommu, &iter->uprp.page);
+ return 0;
+ }
+
+ iter->uprp.index++;
+
+ if (iter->uprp.index < MAX_PRP || iter->count == 1) {
+ /* advance over next pointer in current prp list;
+ * these pointers must be page aligned
+ */
+ iova = le64_to_cpu(map[iter->uprp.index]);
+ if (OFFSET_IN_PAGE(iova) != 0)
+ return -EINVAL;
+
+ ret = nvme_mdev_viommu_translate(iter->viommu, iova,
+ &iter->physical,
+ &iter->host_iova);
+ if (ret)
+ nvme_mdev_viommu_free_kmap(iter->viommu,
+ &iter->uprp.page);
+ return ret;
+ }
+
+ /* switch to next prp list. It must be page aligned as well */
+ iova = le64_to_cpu(map[MAX_PRP]);
+
+ if (OFFSET_IN_PAGE(iova) != 0)
+ return -EINVAL;
+
+ nvme_mdev_viommu_free_kmap(iter->viommu, &iter->uprp.page);
+ return nvme_mdev_udata_iter_load_prplist(iter, iova);
+}
+
+/* ->next function when iterator points to user data pointer */
+static int nvme_mdev_udata_iter_next_dptr(struct nvme_ext_data_iter *iter)
+{
+ dma_addr_t iova;
+
+ if (WARN_ON(iter->count <= 0))
+ return 0;
+
+ if (--iter->count == 0)
+ return 0;
+
+ /* we will be called only once to deal with the second
+ * pointer in the data pointer
+ */
+ iova = le64_to_cpu(iter->dptr->prp2);
+
+ if (iter->count == 1) {
+ /* only need to read one more entry, meaning
+ * the 2nd entry of the dptr.
+ * It must be page aligned
+ */
+ if (OFFSET_IN_PAGE(iova) != 0)
+ return -EINVAL;
+ return nvme_mdev_viommu_translate(iter->viommu, iova,
+ &iter->physical,
+ &iter->host_iova);
+ } else {
+ /*
+ * Second dptr entry is prp pointer, and it might not
+ * be page aligned (but QWORD aligned at least)
+ */
+ if (iova & 0x7ULL)
+ return -EINVAL;
+ iter->next = nvme_mdev_udata_iter_next_prplist;
+ return nvme_mdev_udata_iter_load_prplist(iter, iova);
+ }
+}
+
+/* Set the PRP iterator to the data pointer found in an NVMe command */
+int nvme_mdev_udata_iter_set_dptr(struct nvme_ext_data_iter *it,
+ const union nvme_data_ptr *dptr, u64 size)
+{
+ int ret;
+ u64 prp1 = le64_to_cpu(dptr->prp1);
+ dma_addr_t iova = PAGE_ADDRESS(prp1);
+ unsigned int page_offset = OFFSET_IN_PAGE(prp1);
+
+ /* first dptr pointer must be at least DWORD aligned */
+ if (page_offset & 0x3)
+ return -EINVAL;
+
+ it->dptr = dptr;
+ it->next = nvme_mdev_udata_iter_next_dptr;
+ it->count = DIV_ROUND_UP_ULL(size + page_offset, PAGE_SIZE);
+
+ ret = nvme_mdev_viommu_translate(it->viommu, iova,
+ &it->physical, &it->host_iova);
+ if (ret)
+ return ret;
+
+ it->physical += page_offset;
+ it->host_iova += page_offset;
+ return 0;
+}
+
+/* ->next function when iterator points to kernel memory buffer */
+static int nvme_mdev_kdata_iter_next(struct nvme_ext_data_iter *it)
+{
+ if (WARN_ON(it->count <= 0))
+ return 0;
+
+ if (--it->count == 0)
+ return 0;
+
+ it->physical = PAGE_ADDRESS(it->physical) + PAGE_SIZE;
+ it->host_iova = PAGE_ADDRESS(it->host_iova) + PAGE_SIZE;
+ return 0;
+}
+
+/* ->release function for kdata iterator to free it after use */
+static void nvme_mdev_kdata_iter_free(struct nvme_ext_data_iter *it)
+{
+ struct device *dma_dev = it->viommu->hw_dev;
+
+ if (dma_dev)
+ dma_free_coherent(dma_dev, it->kmem.size,
+ it->kmem.data, it->kmem.dma_addr);
+ else
+ kfree(it->kmem.data);
+ kfree(it);
+}
+
+/* allocate a kernel data buffer with read iterator for nvme host device */
+struct nvme_ext_data_iter *
+nvme_mdev_kdata_iter_alloc(struct nvme_mdev_viommu *viommu, unsigned int size)
+{
+ struct nvme_ext_data_iter *it;
+
+ it = kzalloc(sizeof(*it), GFP_KERNEL);
+ if (!it)
+ return NULL;
+
+ it->viommu = viommu;
+ it->kmem.size = size;
+ if (viommu->hw_dev) {
+ it->kmem.data = dma_alloc_coherent(viommu->hw_dev, size,
+ &it->kmem.dma_addr,
+ GFP_KERNEL);
+ } else {
+ it->kmem.data = kzalloc(size, GFP_KERNEL);
+ it->kmem.dma_addr = 0;
+ }
+
+ if (!it->kmem.data) {
+ kfree(it);
+ return NULL;
+ }
+
+ it->physical = virt_to_phys(it->kmem.data);
+ it->host_iova = it->kmem.dma_addr;
+
+ it->count = DIV_ROUND_UP(size + OFFSET_IN_PAGE(it->physical),
+ PAGE_SIZE);
+
+ it->next = nvme_mdev_kdata_iter_next;
+ it->release = nvme_mdev_kdata_iter_free;
+ return it;
+}
+
+/* copy data from user data iterator to a kernel buffer */
+int nvme_mdev_read_from_udata(void *dst, struct nvme_ext_data_iter *srcit,
+ u64 size)
+{
+ int ret;
+ unsigned int srcoffset, chunk_size;
+
+ while (srcit->count && size > 0) {
+ struct page *page = pfn_to_page(PHYS_PFN(srcit->physical));
+ void *src = kmap(page);
+
+ if (!src)
+ return -ENOMEM;
+
+ srcoffset = OFFSET_IN_PAGE(srcit->physical);
+ chunk_size = min(size, (u64)PAGE_SIZE - srcoffset);
+
+ memcpy(dst, src + srcoffset, chunk_size);
+ dst += chunk_size;
+ size -= chunk_size;
+ kunmap(page);
+
+ ret = srcit->next(srcit);
+ if (ret)
+ return ret;
+ }
+ WARN_ON(size > 0);
+ return 0;
+}
+
+/* copy data from kernel buffer to user data iterator */
+int nvme_mdev_write_to_udata(struct nvme_ext_data_iter *dstit, void *src,
+ u64 size)
+{
+ int ret;
+ unsigned int dstoffset, chunk_size;
+
+ while (dstit->count && size > 0) {
+ struct page *page = pfn_to_page(PHYS_PFN(dstit->physical));
+ void *dst = kmap(page);
+
+ if (!dst)
+ return -ENOMEM;
+
+ dstoffset = OFFSET_IN_PAGE(dstit->physical);
+ chunk_size = min(size, (u64)PAGE_SIZE - dstoffset);
+
+ memcpy(dst + dstoffset, src, chunk_size);
+ src += chunk_size;
+ size -= chunk_size;
+ kunmap(page);
+
+ ret = dstit->next(dstit);
+ if (ret)
+ return ret;
+ }
+ WARN_ON(size > 0);
+ return 0;
+}
+
+/* Set prp list iterator to point to prp list found in create queue command */
+static int
+nvme_mdev_udata_iter_set_queue_prplist(struct nvme_mdev_viommu *viommu,
+ struct nvme_ext_data_iter *iter,
+ dma_addr_t iova, unsigned int size)
+{
+ if (iova & ~PAGE_MASK)
+ return -EINVAL;
+
+ nvme_mdev_udata_iter_setup(viommu, iter);
+ iter->count = DIV_ROUND_UP(size, PAGE_SIZE);
+ iter->next = nvme_mdev_udata_iter_next_prplist;
+ return nvme_mdev_udata_iter_load_prplist(iter, iova);
+}
+
+/* Map an SQ/CQ queue (contiguous in guest physical memory) */
+static int nvme_mdev_queue_getpages_contiguous(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova,
+ struct page **pages,
+ unsigned int npages)
+{
+ int ret;
+ unsigned int i;
+ dma_addr_t host_page_iova;
+ phys_addr_t physical;
+
+ for (i = 0; i < npages; i++) {
+ ret = nvme_mdev_viommu_translate(viommu, iova + (PAGE_SIZE * i),
+ &physical,
+ &host_page_iova);
+ if (ret)
+ return ret;
+ pages[i] = pfn_to_page(PHYS_PFN(physical));
+ }
+ return 0;
+}
+
+/* Map an SQ/CQ queue (non-contiguous in guest physical memory) */
+static int nvme_mdev_queue_getpages_prplist(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova,
+ struct page **pages,
+ unsigned int npages)
+{
+ int ret, i = 0;
+ struct nvme_ext_data_iter uprpit;
+
+ ret = nvme_mdev_udata_iter_set_queue_prplist(viommu,
+ &uprpit, iova,
+ npages * PAGE_SIZE);
+ if (ret)
+ return ret;
+
+ while (uprpit.count && i < npages) {
+ pages[i++] = pfn_to_page(PHYS_PFN(uprpit.physical));
+ ret = uprpit.next(&uprpit);
+ if (ret)
+ return ret;
+ }
+ return 0;
+}
+
+/* Map an SQ/CQ queue into the host kernel address space */
+void *nvme_mdev_udata_queue_vmap(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova,
+ unsigned int size,
+ bool cont)
+{
+ int ret;
+ unsigned int npages;
+ void *map;
+ struct page **pages;
+
+ /* queue must be page aligned */
+ if (OFFSET_IN_PAGE(iova) != 0)
+ return ERR_PTR(-EINVAL);
+
+ npages = DIV_ROUND_UP(size, PAGE_SIZE);
+ pages = kcalloc(npages, sizeof(struct page *), GFP_KERNEL);
+ if (!pages)
+ return ERR_PTR(-ENOMEM);
+
+ ret = cont ?
+ nvme_mdev_queue_getpages_contiguous(viommu, iova, pages, npages)
+ : nvme_mdev_queue_getpages_prplist(viommu, iova, pages, npages);
+
+ if (ret) {
+ map = ERR_PTR(ret);
+ goto out;
+ }
+
+ map = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
+ if (!map)
+ map = ERR_PTR(-ENOMEM);
+out:
+ kfree(pages);
+ return map;
+}
diff --git a/drivers/nvme/mdev/vcq.c b/drivers/nvme/mdev/vcq.c
new file mode 100644
index 000000000000..7702137eb8bc
--- /dev/null
+++ b/drivers/nvme/mdev/vcq.c
@@ -0,0 +1,209 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Virtual NVMe completion queue implementation
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include "priv.h"
+
+/* Create new virtual completion queue */
+int nvme_mdev_vcq_init(struct nvme_mdev_vctrl *vctrl, u16 qid,
+ dma_addr_t iova, bool cont, u16 size, int irq)
+{
+ struct nvme_vcq *q = &vctrl->vcqs[qid];
+ int ret;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ q->iova = iova;
+ q->cont = cont;
+ q->data = NULL;
+ q->qid = qid;
+ q->size = size;
+ q->tail = 0;
+ q->phase = true;
+ q->irq = irq;
+ q->pending = 0;
+ q->head = 0;
+
+ ret = nvme_mdev_vcq_viommu_update(&vctrl->viommu, q);
+ if (ret && (ret != -EFAULT))
+ return ret;
+
+ _DBG(vctrl, "VCQ: create qid=%d contig=%d depth=%d irq=%d\n",
+ qid, cont, size, irq);
+
+ set_bit(qid, vctrl->vcq_en);
+
+ vctrl->mmio.dbs[q->qid].cqh = 0;
+ vctrl->mmio.eidxs[q->qid].cqh = 0;
+ return 0;
+}
+
+/* Update the kernel mapping of the queue */
+int nvme_mdev_vcq_viommu_update(struct nvme_mdev_viommu *viommu,
+ struct nvme_vcq *q)
+{
+ void *data;
+
+ if (q->data)
+ vunmap((void *)q->data);
+
+ data = nvme_mdev_udata_queue_vmap(viommu, q->iova,
+ (unsigned int)q->size *
+ sizeof(struct nvme_completion),
+ q->cont);
+
+ q->data = IS_ERR(data) ? NULL : data;
+ return IS_ERR(data) ? PTR_ERR(data) : 0;
+}
+
+/* Delete a virtual completion queue */
+void nvme_mdev_vcq_delete(struct nvme_mdev_vctrl *vctrl, u16 qid)
+{
+ struct nvme_vcq *q = &vctrl->vcqs[qid];
+
+ lockdep_assert_held(&vctrl->lock);
+
+ if (q->data)
+ vunmap((void *)q->data);
+ q->data = NULL;
+
+ _DBG(vctrl, "VCQ: delete qid=%d\n", q->qid);
+ clear_bit(qid, vctrl->vcq_en);
+}
+
+/* Move queue tail one item forward */
+static void nvme_mdev_vcq_advance_tail(struct nvme_vcq *q)
+{
+ if (++q->tail == q->size) {
+ q->tail = 0;
+ q->phase = !q->phase;
+ }
+}
+
+/* Move queue head one item forward */
+static void nvme_mdev_vcq_advance_head(struct nvme_vcq *q)
+{
+ q->head++;
+ if (q->head == q->size)
+ q->head = 0;
+}
+
+/* Process a virtual completion queue */
+void nvme_mdev_vcq_process(struct nvme_mdev_vctrl *vctrl, u16 qid,
+ bool trigger_irqs)
+{
+ struct nvme_vcq *q = &vctrl->vcqs[qid];
+ u16 new_head;
+ u32 eidx;
+
+ if (!vctrl->mmio.dbs || !vctrl->mmio.eidxs)
+ return;
+
+ new_head = le32_to_cpu(vctrl->mmio.dbs[qid].cqh);
+
+ if (new_head != q->head) {
+ /* bad head value - can't process */
+ if (!nvme_mdev_mmio_db_check(vctrl, q->qid, q->size, new_head))
+ return;
+
+ while (q->head != new_head) {
+ nvme_mdev_vcq_advance_head(q);
+ WARN_ON_ONCE(q->pending == 0);
+ if (q->pending > 0)
+ q->pending--;
+ }
+
+ eidx = q->head + (q->size >> 1);
+ if (eidx >= q->size)
+ eidx -= q->size;
+ vctrl->mmio.eidxs[q->qid].cqh = cpu_to_le32(eidx);
+ }
+
+ if (q->irq != -1 && trigger_irqs) {
+ if (q->tail != new_head)
+ nvme_mdev_irq_cond_trigger(vctrl, q->irq);
+ else
+ nvme_mdev_irq_clear(vctrl, q->irq);
+ }
+}
+
+/* flush interrupts on a completion queue */
+bool nvme_mdev_vcq_flush(struct nvme_mdev_vctrl *vctrl, u16 qid)
+{
+ struct nvme_vcq *q = &vctrl->vcqs[qid];
+ u16 new_head = le32_to_cpu(vctrl->mmio.dbs[qid].cqh);
+
+ if (new_head == q->tail || q->irq == -1)
+ return false;
+
+ nvme_mdev_irq_trigger(vctrl, q->irq);
+ nvme_mdev_irq_clear(vctrl, q->irq);
+ return true;
+}
+
+/* Reserve space for one completion entry, that will be added later */
+bool nvme_mdev_vcq_reserve_space(struct nvme_vcq *q)
+{
+ /* TODOLATER: track passed-through commands.
+ * If we pass a command through to the host and never receive a
+ * response, we keep space reserved for it in the CQ, eventually
+ * stalling the CQ.
+ * In this case, the guest is still expected to recover by resetting
+ * our controller.
+ * This can be fixed by tracking all the commands that we send
+ * to the host.
+ */
+
+ if (q->pending == q->size - 1)
+ return false;
+ q->pending++;
+ return true;
+}
+
+/* Write a new item into the completion queue (IO version) */
+void nvme_mdev_vcq_write_io(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vcq *q, u16 sq_head,
+ u16 sqid, u16 cid, u16 status)
+{
+ volatile __le64 *qw = (volatile __le64 *)&q->data[q->tail];
+
+ u64 phase = q->phase ? (0x1ULL << 48) : 0;
+ u64 qw1 =
+ ((u64)sq_head) |
+ ((u64)sqid << 16) |
+ ((u64)cid << 32) |
+ ((u64)status << 49) | phase;
+
+ WRITE_ONCE(qw[1], cpu_to_le64(qw1));
+
+ nvme_mdev_vcq_advance_tail(q);
+ if (q->irq != -1)
+ nvme_mdev_irq_raise(vctrl, q->irq);
+}
+
+/* Write a new item into the completion queue (ADMIN version) */
+void nvme_mdev_vcq_write_adm(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vcq *q, u32 dw0,
+ u16 sq_head, u16 cid, u16 status)
+{
+ volatile __le64 *qw = (volatile __le64 *)&q->data[q->tail];
+
+ u64 phase = q->phase ? (0x1ULL << 48) : 0;
+ u64 qw1 =
+ ((u64)sq_head) |
+ ((u64)cid << 32) |
+ ((u64)status << 49) | phase;
+
+ WRITE_ONCE(qw[0], cpu_to_le64(dw0));
+ /* ensure that the guest sees the phase bit flip last */
+ wmb();
+ WRITE_ONCE(qw[1], cpu_to_le64(qw1));
+
+ nvme_mdev_vcq_advance_tail(q);
+ if (q->irq != -1)
+ nvme_mdev_irq_trigger(vctrl, q->irq);
+}
diff --git a/drivers/nvme/mdev/vctrl.c b/drivers/nvme/mdev/vctrl.c
new file mode 100644
index 000000000000..6f087b8fb2fc
--- /dev/null
+++ b/drivers/nvme/mdev/vctrl.c
@@ -0,0 +1,515 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Virtual NVMe controller implementation
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <linux/mdev.h>
+#include <linux/nvme.h>
+#include "priv.h"
+
+bool nvme_mdev_vctrl_is_dead(struct nvme_mdev_vctrl *vctrl)
+{
+ return (vctrl->mmio.csts & (NVME_CSTS_CFS | NVME_CSTS_SHST_MASK)) != 0;
+}
+
+/* Set up the controller GUID and serial number */
+static void nvme_mdev_vctrl_init_id(struct nvme_mdev_vctrl *vctrl)
+{
+ guid_t guid = mdev_uuid(vctrl->mdev);
+
+ snprintf(vctrl->subnqn, sizeof(vctrl->subnqn),
+ "nqn.2014-08.org.nvmexpress:uuid:%pUl", guid.b);
+
+ snprintf(vctrl->serial, sizeof(vctrl->serial), "%pUl", guid.b);
+}
+
+/* Change the IO thread CPU pinning */
+void nvme_mdev_vctrl_bind_iothread(struct nvme_mdev_vctrl *vctrl,
+ unsigned int cpu)
+{
+ mutex_lock(&vctrl->lock);
+
+ if (cpu == vctrl->iothread_cpu)
+ goto out;
+
+ nvme_mdev_io_free(vctrl);
+ nvme_mdev_io_create(vctrl, cpu);
+out:
+ mutex_unlock(&vctrl->lock);
+}
+
+/* Change the status of support for shadow doorbell */
+int nvme_mdev_vctrl_set_shadow_doorbell_supported(struct nvme_mdev_vctrl *vctrl,
+ bool enable)
+{
+ if (vctrl->inuse)
+ return -EBUSY;
+ vctrl->mmio.shadow_db_supported = enable;
+ return 0;
+}
+
+/* Called when memory mappings change. Propagate this to all kmap users */
+static void nvme_mdev_vctrl_viommu_update(struct nvme_mdev_vctrl *vctrl)
+{
+ u16 qid;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ if (!(vctrl->mmio.csts & NVME_CSTS_RDY))
+ return;
+
+ /* update mappings for submission and completion queues */
+ for_each_set_bit(qid, vctrl->vsq_en, MAX_VIRTUAL_QUEUES)
+ nvme_mdev_vsq_viommu_update(&vctrl->viommu, &vctrl->vsqs[qid]);
+
+ for_each_set_bit(qid, vctrl->vcq_en, MAX_VIRTUAL_QUEUES)
+ nvme_mdev_vcq_viommu_update(&vctrl->viommu, &vctrl->vcqs[qid]);
+
+ /* update mapping for the shadow doorbells */
+ nvme_mdev_mmio_viommu_update(vctrl);
+}
+
+/* Create a new virtual controller */
+struct nvme_mdev_vctrl *nvme_mdev_vctrl_create(struct mdev_device *mdev,
+ struct nvme_mdev_hctrl *hctrl,
+ unsigned int max_host_queues)
+{
+ int ret;
+ struct nvme_mdev_vctrl *vctrl = kzalloc_node(sizeof(*vctrl),
+ GFP_KERNEL, hctrl->node);
+ if (!vctrl)
+ return ERR_PTR(-ENOMEM);
+
+ /* Basic init */
+ vctrl->hctrl = hctrl;
+ vctrl->mdev = mdev;
+ vctrl->max_host_hw_queues = max_host_queues;
+ vctrl->viommu.vctrl = vctrl;
+
+ kref_init(&vctrl->ref);
+ mutex_init(&vctrl->lock);
+ nvme_mdev_vctrl_init_id(vctrl);
+ INIT_LIST_HEAD(&vctrl->host_hw_queues);
+
+ get_device(mdev_dev(mdev));
+ mdev_set_drvdata(mdev, vctrl);
+
+ /* reserve host IO queues */
+ if (!nvme_mdev_hctrl_hqs_reserve(hctrl, max_host_queues)) {
+ ret = -ENOSPC;
+ goto error1;
+ }
+
+ /* default feature values */
+ vctrl->arb_burst_shift = 3;
+ vctrl->mmio.shadow_db_supported = use_shadow_doorbell;
+
+ ret = nvme_mdev_pci_create(vctrl);
+ if (ret)
+ goto error2;
+
+ ret = nvme_mdev_mmio_create(vctrl);
+ if (ret)
+ goto error3;
+
+ nvme_mdev_irqs_setup(vctrl);
+
+ /* Create the IO thread */
+ /* TODOLATER: IO: smp_processor_id() is not an ideal pinning choice */
+ ret = nvme_mdev_io_create(vctrl, smp_processor_id());
+ if (ret)
+ goto error4;
+
+ _INFO(vctrl, "device created using %d host queues\n", max_host_queues);
+ return vctrl;
+error4:
+ nvme_mdev_mmio_free(vctrl);
+error3:
+ nvme_mdev_pci_free(vctrl);
+error2:
+ nvme_mdev_hctrl_hqs_unreserve(hctrl, max_host_queues);
+error1:
+ put_device(mdev_dev(mdev));
+ kfree(vctrl);
+ return ERR_PTR(ret);
+}
+
+/* Try to destroy a vctrl */
+int nvme_mdev_vctrl_destroy(struct nvme_mdev_vctrl *vctrl)
+{
+ mutex_lock(&vctrl->lock);
+
+ if (vctrl->inuse) {
+ /* vctrl has mdev users */
+ mutex_unlock(&vctrl->lock);
+ return -EBUSY;
+ }
+
+ _INFO(vctrl, "device is destroying\n");
+
+ mdev_set_drvdata(vctrl->mdev, NULL);
+ mutex_unlock(&vctrl->lock);
+
+ mutex_lock(&nvme_mdev_vctrl_list_mutex);
+ list_del_init(&vctrl->link);
+ mutex_unlock(&nvme_mdev_vctrl_list_mutex);
+
+ mutex_lock(&vctrl->lock); /* only for lockdep checks */
+ nvme_mdev_io_free(vctrl);
+ nvme_mdev_vns_destroy_all(vctrl);
+ __nvme_mdev_vctrl_reset(vctrl, true);
+
+ nvme_mdev_hctrl_hqs_unreserve(vctrl->hctrl, vctrl->max_host_hw_queues);
+
+ nvme_mdev_pci_free(vctrl);
+ nvme_mdev_mmio_free(vctrl);
+
+ mutex_unlock(&vctrl->lock);
+
+ put_device(mdev_dev(vctrl->mdev));
+ _INFO(vctrl, "device is destroyed\n");
+ kfree(vctrl);
+ return 0;
+}
+
+/* Suspend a running virtual controller
+ * Called when the host needs to regain full control of the device
+ */
+void nvme_mdev_vctrl_pause(struct nvme_mdev_vctrl *vctrl)
+{
+ mutex_lock(&vctrl->lock);
+ if (!vctrl->vctrl_paused) {
+ _INFO(vctrl, "pausing the virtual controller\n");
+ if (vctrl->mmio.csts & NVME_CSTS_RDY)
+ nvme_mdev_io_pause(vctrl);
+ vctrl->vctrl_paused = true;
+ }
+ mutex_unlock(&vctrl->lock);
+}
+
+/* Resume a virtual controller
+ * Called when the host is done with exclusive access and allows us
+ * to attach to the controller again
+ */
+void nvme_mdev_vctrl_resume(struct nvme_mdev_vctrl *vctrl)
+{
+ mutex_lock(&vctrl->lock);
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ if (vctrl->vctrl_paused) {
+ _INFO(vctrl, "resuming the virtual controller\n");
+
+ if (vctrl->mmio.csts & NVME_CSTS_RDY) {
+ /* handle all pending admin commands*/
+ nvme_mdev_adm_process_sq(vctrl);
+ /* start the IO thread again if it was stopped or
+ * if we had doorbell writes during the pause
+ */
+ nvme_mdev_io_resume(vctrl);
+ }
+ vctrl->vctrl_paused = false;
+ }
+ mutex_unlock(&vctrl->lock);
+}
+
+/* Called when emulator opens the virtual device */
+int nvme_mdev_vctrl_open(struct nvme_mdev_vctrl *vctrl)
+{
+ struct device *dma_dev = NULL;
+ int ret = 0;
+
+ mutex_lock(&vctrl->lock);
+
+ if (vctrl->hctrl->removing) {
+ ret = -ENODEV;
+ goto out;
+ }
+
+ if (vctrl->inuse) {
+ ret = -EBUSY;
+ goto out;
+ }
+
+ _INFO(vctrl, "device is opened\n");
+
+ if (vctrl->hctrl->nvme_ctrl->ops->flags & NVME_F_MDEV_DMA_SUPPORTED)
+ dma_dev = vctrl->hctrl->nvme_ctrl->dev;
+
+ nvme_mdev_viommu_init(&vctrl->viommu, mdev_dev(vctrl->mdev), dma_dev);
+
+ nvme_mdev_mmio_open(vctrl);
+ vctrl->inuse = true;
+out:
+ mutex_unlock(&vctrl->lock);
+ return ret;
+}
+
+/* Called when emulator closes the virtual device */
+void nvme_mdev_vctrl_release(struct nvme_mdev_vctrl *vctrl)
+{
+ mutex_lock(&vctrl->lock);
+ nvme_mdev_io_pause(vctrl);
+
+ /* Remove the guest DMA mappings - the next user that opens the
+ * device might be a different guest
+ */
+ nvme_mdev_viommu_reset(&vctrl->viommu);
+
+ /* Reset the controller to a clean state for a new user */
+ __nvme_mdev_vctrl_reset(vctrl, false);
+ nvme_mdev_irqs_reset(vctrl);
+ vctrl->inuse = false;
+ mutex_unlock(&vctrl->lock);
+
+ WARN_ON(!list_empty(&vctrl->host_hw_queues));
+
+ _INFO(vctrl, "device is released\n");
+
+ /* If we are released after a request to remove the host controller,
+ * we are dead and won't ever be opened again, so remove ourselves
+ */
+ if (vctrl->hctrl->removing)
+ nvme_mdev_vctrl_destroy(vctrl);
+}
+
+/* Called each time the controller is reset (CC.EN <= 0 or VM level reset) */
+void __nvme_mdev_vctrl_reset(struct nvme_mdev_vctrl *vctrl, bool pci_reset)
+{
+ lockdep_assert_held(&vctrl->lock);
+
+ if ((vctrl->mmio.csts & NVME_CSTS_RDY) &&
+ !(vctrl->mmio.csts & NVME_CSTS_SHST_MASK)) {
+ _DBG(vctrl, "unsafe reset (CSTS.RDY==1)\n");
+ nvme_mdev_io_pause(vctrl);
+ nvme_mdev_vctrl_disable(vctrl);
+ }
+ nvme_mdev_mmio_reset(vctrl, pci_reset);
+}
+
+/* Sets up the initial admin queues and doorbells */
+bool nvme_mdev_vctrl_enable(struct nvme_mdev_vctrl *vctrl,
+ dma_addr_t cqiova, dma_addr_t sqiova, u32 sizes)
+{
+ int ret;
+ u16 cqentries, sqentries;
+
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ lockdep_assert_held(&vctrl->lock);
+
+ sqentries = (sizes & 0xFFFF) + 1;
+ cqentries = (sizes >> 16) + 1;
+
+ if (cqentries > 4096 || cqentries < 2)
+ return false;
+ if (sqentries > 4096 || sqentries < 2)
+ return false;
+
+ ret = nvme_mdev_mmio_enable_dbs(vctrl);
+ if (ret)
+ goto error0;
+
+ ret = nvme_mdev_vcq_init(vctrl, 0, cqiova, true, cqentries, 0);
+ if (ret)
+ goto error1;
+
+ ret = nvme_mdev_vsq_init(vctrl, 0, sqiova, true, sqentries, 0);
+ if (ret)
+ goto error2;
+
+ nvme_mdev_events_init(vctrl);
+
+ if (!vctrl->mmio.shadow_db_supported) {
+ /* start polling right away to support admin queue */
+ vctrl->io_idle = false;
+ nvme_mdev_io_resume(vctrl);
+ }
+
+ return true;
+error2:
+ nvme_mdev_mmio_disable_dbs(vctrl);
+error1:
+ nvme_mdev_vcq_delete(vctrl, 0);
+error0:
+ return false;
+}
+
+/* destroy all io/admin queues on the controller */
+void nvme_mdev_vctrl_disable(struct nvme_mdev_vctrl *vctrl)
+{
+ u16 sqid, cqid;
+
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ lockdep_assert_held(&vctrl->lock);
+
+ nvme_mdev_events_reset(vctrl);
+ nvme_mdev_vns_log_reset(vctrl);
+
+ sqid = 1;
+ for_each_set_bit_from(sqid, vctrl->vsq_en, MAX_VIRTUAL_QUEUES)
+ nvme_mdev_vsq_delete(vctrl, sqid);
+
+ cqid = 1;
+ for_each_set_bit_from(cqid, vctrl->vcq_en, MAX_VIRTUAL_QUEUES)
+ nvme_mdev_vcq_delete(vctrl, cqid);
+
+ nvme_mdev_vsq_delete(vctrl, 0);
+ nvme_mdev_vcq_delete(vctrl, 0);
+
+ nvme_mdev_mmio_disable_dbs(vctrl);
+ vctrl->io_idle = true;
+}
+
+/* External reset */
+void nvme_mdev_vctrl_reset(struct nvme_mdev_vctrl *vctrl)
+{
+ mutex_lock(&vctrl->lock);
+ _INFO(vctrl, "reset\n");
+ __nvme_mdev_vctrl_reset(vctrl, true);
+ mutex_unlock(&vctrl->lock);
+}
+
+/* Add IO region*/
+void nvme_mdev_vctrl_add_region(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index, unsigned int size,
+ region_access_fn access_fn)
+{
+ struct nvme_mdev_io_region *region = &vctrl->regions[index];
+
+ region->size = size;
+ region->rw = access_fn;
+ region->mmap_ops = NULL;
+}
+
+/* Enable mmap window on an IO region */
+void nvme_mdev_vctrl_region_set_mmap(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index,
+ unsigned int offset,
+ unsigned int size,
+ const struct vm_operations_struct *ops)
+{
+ struct nvme_mdev_io_region *region = &vctrl->regions[index];
+
+ region->mmap_area_start = offset;
+ region->mmap_area_size = size;
+ region->mmap_ops = ops;
+}
+
+/* Disable mmap window on an IO region */
+void nvme_mdev_vctrl_region_disable_mmap(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index)
+{
+ struct nvme_mdev_io_region *region = &vctrl->regions[index];
+
+ region->mmap_area_start = 0;
+ region->mmap_area_size = 0;
+ region->mmap_ops = NULL;
+}
+
+/* Allocate a host IO queue */
+int nvme_mdev_vctrl_hq_alloc(struct nvme_mdev_vctrl *vctrl)
+{
+ struct nvme_mdev_hq *hq = NULL, *tmp;
+ int hwqcount = 0, ret;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ list_for_each_entry(tmp, &vctrl->host_hw_queues, link) {
+ if (!hq || tmp->usecount < hq->usecount)
+ hq = tmp;
+ hwqcount++;
+ }
+
+ if (hwqcount < vctrl->max_host_hw_queues) {
+ ret = nvme_mdev_hctrl_hq_alloc(vctrl->hctrl);
+ if (ret < 0)
+ return ret;
+
+ hq = kzalloc_node(sizeof(*hq), GFP_KERNEL, vctrl->hctrl->node);
+ if (!hq) {
+ nvme_mdev_hctrl_hq_free(vctrl->hctrl, ret);
+ return -ENOMEM;
+ }
+
+ hq->hqid = ret;
+ hq->usecount = 1;
+ list_add_tail(&hq->link, &vctrl->host_hw_queues);
+ } else {
+ hq->usecount++;
+ }
+ return hq->hqid;
+}
+
+/* Free a host IO queue */
+void nvme_mdev_vctrl_hq_free(struct nvme_mdev_vctrl *vctrl, u16 hqid)
+{
+ struct nvme_mdev_hq *hq;
+
+ lockdep_assert_held(&vctrl->lock);
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ list_for_each_entry(hq, &vctrl->host_hw_queues, link)
+ if (hq->hqid == hqid) {
+ if (--hq->usecount > 0)
+ return;
+ nvme_mdev_hctrl_hq_free(vctrl->hctrl, hq->hqid);
+ list_del(&hq->link);
+ kfree(hq);
+ return;
+ }
+ WARN_ON(1);
+}
+
+/* get current list of host queues */
+unsigned int nvme_mdev_vctrl_hqs_list(struct nvme_mdev_vctrl *vctrl, u16 *out)
+{
+ struct nvme_mdev_hq *q;
+ unsigned int i = 0;
+
+ list_for_each_entry(q, &vctrl->host_hw_queues, link) {
+ out[i++] = q->hqid;
+ if (WARN_ON(i > MAX_HOST_QUEUES))
+ break;
+ }
+ return i;
+}
+
+/* add a user memory mapping */
+int nvme_mdev_vctrl_viommu_map(struct nvme_mdev_vctrl *vctrl, u32 flags,
+ dma_addr_t iova, u64 size)
+{
+ int ret;
+
+ mutex_lock(&vctrl->lock);
+
+ nvme_mdev_io_pause(vctrl);
+ ret = nvme_mdev_viommu_add(&vctrl->viommu, flags, iova, size);
+ nvme_mdev_vctrl_viommu_update(vctrl);
+ nvme_mdev_io_resume(vctrl);
+
+ mutex_unlock(&vctrl->lock);
+ return ret;
+}
+
+/* remove a user memory mapping */
+int nvme_mdev_vctrl_viommu_unmap(struct nvme_mdev_vctrl *vctrl,
+ dma_addr_t iova, u64 size)
+{
+ int ret;
+
+ mutex_lock(&vctrl->lock);
+
+ nvme_mdev_io_pause(vctrl);
+ ret = nvme_mdev_viommu_remove(&vctrl->viommu, iova, size);
+ nvme_mdev_vctrl_viommu_update(vctrl);
+ nvme_mdev_io_resume(vctrl);
+
+ mutex_unlock(&vctrl->lock);
+ return ret;
+}
diff --git a/drivers/nvme/mdev/viommu.c b/drivers/nvme/mdev/viommu.c
new file mode 100644
index 000000000000..31b86e8f5768
--- /dev/null
+++ b/drivers/nvme/mdev/viommu.c
@@ -0,0 +1,322 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Virtual IOMMU - mapping user memory to the real device
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/highmem.h>
+#include <linux/slab.h>
+#include <linux/mdev.h>
+#include <linux/vmalloc.h>
+#include <linux/nvme.h>
+#include <linux/iommu.h>
+#include <linux/interval_tree_generic.h>
+#include "priv.h"
+
+struct mem_mapping {
+ struct rb_node rb;
+ struct list_head link;
+
+ dma_addr_t __subtree_last;
+ dma_addr_t iova_start; /* first iova in this mapping*/
+ dma_addr_t iova_last; /* last iova in this mapping*/
+
+ unsigned long pfn; /* physical address of this mapping */
+ dma_addr_t host_iova; /* dma mapping to the real device*/
+};
+
+#define map_len(m) (((m)->iova_last - (m)->iova_start) + 1ULL)
+#define map_pages(m) (map_len(m) >> PAGE_SHIFT)
+#define START(node) ((node)->iova_start)
+#define LAST(node) ((node)->iova_last)
+
+INTERVAL_TREE_DEFINE(struct mem_mapping, rb, dma_addr_t, __subtree_last,
+ START, LAST, static inline, viommu_int_tree);
+
+static void nvme_mdev_viommu_dbg_dma_range(struct nvme_mdev_viommu *viommu,
+ struct mem_mapping *map,
+ const char *action)
+{
+ dma_addr_t iova_start = map->iova_start;
+ dma_addr_t iova_end = map->iova_start + map_len(map) - 1;
+ dma_addr_t hiova_start = map->host_iova;
+ dma_addr_t hiova_end = map->host_iova + map_len(map) - 1;
+
+ _DBG(viommu->vctrl,
+ "vIOMMU: %s RW IOVA %pad-%pad -> DMA %pad-%pad\n",
+ action, &iova_start, &iova_end, &hiova_start, &hiova_end);
+}
+
+/* unpin N pages starting at given IOVA*/
+static void nvme_mdev_viommu_unpin_pages(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova, int n)
+{
+ int i;
+
+ for (i = 0; i < n; i++) {
+ unsigned long user_pfn = (iova >> PAGE_SHIFT) + i;
+ int ret = vfio_unpin_pages(viommu->sw_dev, &user_pfn, 1);
+
+ WARN_ON(ret != 1);
+ }
+}
+
+/* User memory init code*/
+void nvme_mdev_viommu_init(struct nvme_mdev_viommu *viommu,
+ struct device *sw_dev,
+ struct device *hw_dev)
+{
+ viommu->sw_dev = sw_dev;
+ viommu->hw_dev = hw_dev;
+ viommu->maps_tree = RB_ROOT_CACHED;
+ INIT_LIST_HEAD(&viommu->maps_list);
+}
+
+/* User memory teardown code */
+void nvme_mdev_viommu_reset(struct nvme_mdev_viommu *viommu)
+{
+ nvme_mdev_viommu_remove(viommu, 0, 0xFFFFFFFFFFFFFFFFULL);
+ WARN_ON(!list_empty(&viommu->maps_list));
+}
+
+/* Adds a new range of user memory*/
+int nvme_mdev_viommu_add(struct nvme_mdev_viommu *viommu,
+ u32 flags,
+ dma_addr_t iova,
+ u64 size)
+{
+ u64 offset;
+ dma_addr_t iova_end = iova + size - 1;
+ struct mem_mapping *map = NULL, *tmp;
+ LIST_HEAD(new_mappings_list);
+ int ret;
+
+ if (!(flags & VFIO_DMA_MAP_FLAG_READ) ||
+ !(flags & VFIO_DMA_MAP_FLAG_WRITE)) {
+ const char *type = "none";
+
+ if (flags & VFIO_DMA_MAP_FLAG_READ)
+ type = "RO";
+ else if (flags & VFIO_DMA_MAP_FLAG_WRITE)
+ type = "WO";
+
+ _DBG(viommu->vctrl, "vIOMMU: IGN %s IOVA %pad-%pad\n",
+ type, &iova, &iova_end);
+ return 0;
+ }
+
+ WARN_ON_ONCE(nvme_mdev_viommu_remove(viommu, iova, size) != 0);
+
+ if (WARN_ON_ONCE(size & ~PAGE_MASK))
+ return -EINVAL;
+
+ // VFIO pinning all the pages
+ for (offset = 0; offset < size; offset += PAGE_SIZE) {
+ unsigned long vapfn = ((iova + offset) >> PAGE_SHIFT), pa_pfn;
+
+ ret = vfio_pin_pages(viommu->sw_dev,
+ &vapfn, 1,
+ VFIO_DMA_MAP_FLAG_READ |
+ VFIO_DMA_MAP_FLAG_WRITE,
+ &pa_pfn);
+
+ if (ret != 1) {
+ /* sadly the mdev API doesn't return an error code */
+ ret = -EFAULT;
+
+ _DBG(viommu->vctrl,
+ "vIOMMU: ADD RW IOVA %pad - pin failed\n",
+ &iova);
+ goto unwind;
+ }
+
+ // new mapping needed
+ if (!map || map->pfn + map_pages(map) != pa_pfn) {
+ int node = viommu->hw_dev ?
+ dev_to_node(viommu->hw_dev) : NUMA_NO_NODE;
+
+ map = kzalloc_node(sizeof(*map), GFP_KERNEL, node);
+
+ if (WARN_ON(!map)) {
+ vfio_unpin_pages(viommu->sw_dev, &vapfn, 1);
+ ret = -ENOMEM;
+ goto unwind;
+ }
+ map->iova_start = iova + offset;
+ map->iova_last = iova + offset + PAGE_SIZE - 1ULL;
+ map->pfn = pa_pfn;
+ map->host_iova = 0;
+ list_add_tail(&map->link, &new_mappings_list);
+ } else {
+ // current map can be extended
+ map->iova_last += PAGE_SIZE;
+ }
+ }
+
+ // DMA mapping the pages
+ list_for_each_entry_safe(map, tmp, &new_mappings_list, link) {
+ if (viommu->hw_dev) {
+ map->host_iova =
+ dma_map_page(viommu->hw_dev,
+ pfn_to_page(map->pfn),
+ 0,
+ map_len(map),
+ DMA_BIDIRECTIONAL);
+
+ ret = dma_mapping_error(viommu->hw_dev, map->host_iova);
+ if (ret) {
+ _DBG(viommu->vctrl,
+ "vIOMMU: ADD RW IOVA %pad-%pad - DMA map failed\n",
+ &iova, &iova_end);
+ goto unwind;
+ }
+ }
+
+ nvme_mdev_viommu_dbg_dma_range(viommu, map, "ADD");
+ list_del(&map->link);
+ list_add_tail(&map->link, &viommu->maps_list);
+ viommu_int_tree_insert(map, &viommu->maps_tree);
+ }
+ return 0;
+unwind:
+ list_for_each_entry_safe(map, tmp, &new_mappings_list, link) {
+ nvme_mdev_viommu_unpin_pages(viommu, map->iova_start,
+ map_pages(map));
+
+ list_del(&map->link);
+ kfree(map);
+ }
+ nvme_mdev_viommu_remove(viommu, iova, size);
+ return ret;
+}
+
+/* Removes a range of user memory*/
+int nvme_mdev_viommu_remove(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova,
+ u64 size)
+{
+ struct mem_mapping *map = NULL, *tmp;
+ dma_addr_t last_iova = iova + (size) - 1ULL;
+ LIST_HEAD(remove_list);
+ int count = 0;
+
+ /* find out all the relevant ranges */
+ map = viommu_int_tree_iter_first(&viommu->maps_tree, iova, last_iova);
+ while (map) {
+ list_del(&map->link);
+ list_add_tail(&map->link, &remove_list);
+ map = viommu_int_tree_iter_next(map, iova, last_iova);
+ }
+
+ /* remove them */
+ list_for_each_entry_safe(map, tmp, &remove_list, link) {
+ count++;
+
+ nvme_mdev_viommu_dbg_dma_range(viommu, map, "DEL");
+ if (viommu->hw_dev)
+ dma_unmap_page(viommu->hw_dev, map->host_iova,
+ map_len(map), DMA_BIDIRECTIONAL);
+
+ nvme_mdev_viommu_unpin_pages(viommu, map->iova_start,
+ map_pages(map));
+
+ viommu_int_tree_remove(map, &viommu->maps_tree);
+ kfree(map);
+ }
+ return count;
+}
+
+/* Translate an IOVA to a physical address and read device bus address */
+int nvme_mdev_viommu_translate(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova,
+ dma_addr_t *physical,
+ dma_addr_t *host_iova)
+{
+ struct mem_mapping *mapping;
+ u64 offset;
+
+ if (WARN_ON_ONCE(OFFSET_IN_PAGE(iova) != 0))
+ return -EINVAL;
+
+ mapping = viommu_int_tree_iter_first(&viommu->maps_tree,
+ iova, iova + PAGE_SIZE - 1);
+ if (!mapping) {
+ _DBG(viommu->vctrl,
+ "vIOMMU: translation of IOVA %pad failed\n", &iova);
+ return -EFAULT;
+ }
+
+ WARN_ON(iova > mapping->iova_last);
+ WARN_ON(OFFSET_IN_PAGE(mapping->iova_start) != 0);
+
+ offset = iova - mapping->iova_start;
+ *physical = PFN_PHYS(mapping->pfn) + offset;
+ *host_iova = mapping->host_iova + offset;
+ return 0;
+}
+
+/* map an IOVA to kernel address space */
+int nvme_mdev_viommu_create_kmap(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova, struct page_map *page)
+{
+ dma_addr_t host_iova;
+ phys_addr_t physical;
+ struct page *new_page;
+ int ret;
+
+ page->iova = iova;
+
+ ret = nvme_mdev_viommu_translate(viommu, iova, &physical, &host_iova);
+ if (ret)
+ return ret;
+
+ new_page = pfn_to_page(PHYS_PFN(physical));
+
+ page->kmap = kmap(new_page);
+ if (!page->kmap)
+ return -ENOMEM;
+
+ page->page = new_page;
+ return 0;
+}
+
+/* update IOVA <-> kernel mapping. If it fails, remove the previous mapping */
+void nvme_mdev_viommu_update_kmap(struct nvme_mdev_viommu *viommu,
+ struct page_map *page)
+{
+ dma_addr_t host_iova;
+ phys_addr_t physical;
+ struct page *new_page;
+ int ret;
+
+ ret = nvme_mdev_viommu_translate(viommu, page->iova,
+ &physical, &host_iova);
+ if (ret) {
+ nvme_mdev_viommu_free_kmap(viommu, page);
+ return;
+ }
+
+ new_page = pfn_to_page(PHYS_PFN(physical));
+ if (new_page == page->page)
+ return;
+
+ nvme_mdev_viommu_free_kmap(viommu, page);
+
+ page->kmap = kmap(new_page);
+ if (!page->kmap)
+ return;
+ page->page = new_page;
+}
+
+/* unmap an IOVA from kernel address space */
+void nvme_mdev_viommu_free_kmap(struct nvme_mdev_viommu *viommu,
+ struct page_map *page)
+{
+ if (page->page) {
+ kunmap(page->page);
+ page->page = NULL;
+ page->kmap = NULL;
+ }
+}
diff --git a/drivers/nvme/mdev/vns.c b/drivers/nvme/mdev/vns.c
new file mode 100644
index 000000000000..42d4f8d7423b
--- /dev/null
+++ b/drivers/nvme/mdev/vns.c
@@ -0,0 +1,356 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Virtual NVMe namespace implementation
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/nvme.h>
+#include "priv.h"
+
+/* Reset the changed namespace log */
+void nvme_mdev_vns_log_reset(struct nvme_mdev_vctrl *vctrl)
+{
+ vctrl->ns_log_size = 0;
+}
+
+/* Add an entry to the changed-namespace log and send a notification to the user */
+static void nvme_mdev_vns_send_event(struct nvme_mdev_vctrl *vctrl, u32 ns)
+{
+ unsigned int i;
+ unsigned int log_size = vctrl->ns_log_size;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ _INFO(vctrl, "host namespace list rescanned\n");
+
+ if (WARN_ON(ns == 0 || ns > MAX_VIRTUAL_NAMESPACES))
+ return;
+
+ // check if the namespace ID is already in the log
+ if (log_size == MAX_VIRTUAL_NAMESPACES)
+ return;
+
+ for (i = 0; i < log_size; i++)
+ if (vctrl->ns_log[i] == cpu_to_le32(ns))
+ return;
+
+ vctrl->ns_log[log_size++] = cpu_to_le32(ns);
+ vctrl->ns_log_size++;
+ nvme_mdev_event_send(vctrl, NVME_AER_TYPE_NOTICE,
+ NVME_AER_NOTICE_NS_CHANGED);
+}
+
+/* Read host NS/partition parameters to update our virtual NS */
+static void nvme_mdev_vns_read_host_properties(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_mdev_vns *vns,
+ struct nvme_ns *host_ns)
+{
+ unsigned int sector_to_lba_shift;
+ u64 host_ns_size, start, nr, align_mask;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ /* read the namespace block size */
+ vns->blksize_shift = host_ns->lba_shift;
+
+ if (WARN_ON(vns->blksize_shift < 9)) {
+ _WARN(vctrl, "NS/create: device block size is bad\n");
+ goto error;
+ }
+
+ sector_to_lba_shift = vns->blksize_shift - 9;
+ align_mask = (1ULL << sector_to_lba_shift) - 1;
+
+ /* read the partition start and size*/
+ start = get_start_sect(vns->host_part);
+ nr = part_nr_sects_read(vns->host_part->bd_part);
+
+ /* check that partition is aligned on LBA size*/
+ if (sector_to_lba_shift != 0) {
+ if ((start & align_mask) || (nr & align_mask)) {
+ _WARN(vctrl, "NS/create: partition not aligned\n");
+ goto error;
+ }
+ }
+
+ vns->host_lba_offset = start >> sector_to_lba_shift;
+ vns->ns_size = nr >> sector_to_lba_shift;
+ host_ns_size = get_capacity(host_ns->disk) >> sector_to_lba_shift;
+
+ /*TODOLATER: NS: support metadata on host namespace */
+ if (host_ns->ms) {
+ _WARN(vctrl, "NS/create: no support for namespace metadata\n");
+ goto error;
+ }
+
+ if (vns->ns_size == 0) {
+ _WARN(vctrl, "NS/create: host namespace has size 0\n");
+ goto error;
+ }
+
+ /* sanity check that partition doesn't extend beyond the namespace */
+ if (!check_range(vns->host_lba_offset, vns->ns_size, host_ns_size)) {
+ _WARN(vctrl, "NS/create: host namespace size mismatch\n");
+ goto error;
+ }
+
+ /* check if namespace is readonly*/
+ if (!vns->readonly)
+ vns->readonly = get_disk_ro(host_ns->disk);
+
+ vns->noiob = host_ns->noiob;
+ if (vns->noiob != 0) {
+ u64 tmp = vns->host_lba_offset;
+
+ if (do_div(tmp, vns->noiob)) {
+ _WARN(vctrl,
+ "NS/create: host partition is not aligned on host optimum IO boundary, performance might suffer");
+ vns->noiob = 0;
+ }
+ }
+ return;
+error:
+ vns->ns_size = 0;
+}
+
+/* Open new reference to a host namespace */
+int nvme_mdev_vns_open(struct nvme_mdev_vctrl *vctrl,
+ u32 host_nsid, unsigned int host_partid)
+{
+ struct nvme_mdev_vns *vns;
+ u32 user_nsid;
+ int ret;
+
+ _INFO(vctrl, "open host_namespace=%u, partition=%u\n",
+ host_nsid, host_partid);
+
+ mutex_lock(&vctrl->lock);
+ ret = -ENODEV;
+ if (nvme_mdev_vctrl_is_dead(vctrl))
+ goto out;
+
+ /* create the namespace object */
+ ret = -ENOMEM;
+ vns = kzalloc_node(sizeof(*vns), GFP_KERNEL, vctrl->hctrl->node);
+ if (!vns)
+ goto out;
+
+ uuid_gen(&vns->uuid); // TODOLATER: NS: non random NS UUID
+ vns->host_nsid = host_nsid;
+ vns->host_partid = host_partid;
+
+ /* find the host namespace */
+ vns->host_ns = nvme_find_get_ns(vctrl->hctrl->nvme_ctrl, host_nsid);
+ if (!vns->host_ns) {
+ ret = -ENODEV;
+ goto error1;
+ }
+
+ if (test_bit(NVME_NS_DEAD, &vns->host_ns->flags) ||
+ test_bit(NVME_NS_REMOVING, &vns->host_ns->flags) ||
+ !vns->host_ns->disk) {
+ ret = -ENODEV;
+ goto error2;
+ }
+
+ /* get the block device for the partition that we will use */
+ vns->host_part = bdget_disk(vns->host_ns->disk, host_partid);
+ if (!vns->host_part) {
+ ret = -ENODEV;
+ goto error2;
+ }
+
+ /* get exclusive access to the block device (partition) */
+ vns->fmode = FMODE_READ | FMODE_EXCL;
+ if (!vns->readonly)
+ vns->fmode |= FMODE_WRITE;
+
+ ret = blkdev_get(vns->host_part, vns->fmode, vns);
+ if (ret)
+ goto error2;
+
+ /* read properties of the host namespace */
+ nvme_mdev_vns_read_host_properties(vctrl, vns, vns->host_ns);
+
+ /* Allocate a user namespace ID for this namespace */
+ ret = -ENOSPC;
+ for (user_nsid = 1; user_nsid <= MAX_VIRTUAL_NAMESPACES; user_nsid++)
+ if (!nvme_mdev_vns_from_vnsid(vctrl, user_nsid))
+ break;
+
+ if (user_nsid > MAX_VIRTUAL_NAMESPACES)
+ goto error3;
+
+ nvme_mdev_io_pause(vctrl);
+
+ vctrl->namespaces[user_nsid - 1] = vns;
+ vns->nsid = user_nsid;
+
+ /* Announce the new namespace to the user */
+ nvme_mdev_vns_send_event(vctrl, user_nsid);
+ nvme_mdev_io_resume(vctrl);
+ ret = 0;
+ goto out;
+error3:
+ blkdev_put(vns->host_part, vns->fmode);
+error2:
+ nvme_put_ns(vns->host_ns);
+error1:
+ kfree(vns);
+out:
+ mutex_unlock(&vctrl->lock);
+ return ret;
+}
+
+/* Re-open a reference to a host namespace, after a notification
+ * of a change in the host namespace
+ */
+static bool nvme_mdev_vns_reopen(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_mdev_vns *vns)
+{
+ struct nvme_ns *host_ns;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ _INFO(vctrl, "reopen host namespace %u, partition=%u\n",
+ vns->host_nsid, vns->host_partid);
+
+ /* namespace disappeared on the host - invalid*/
+ host_ns = nvme_find_get_ns(vctrl->hctrl->nvme_ctrl, vns->host_nsid);
+ if (!host_ns)
+ return false;
+
+ /* different namespace with same ID on the host - invalid*/
+ if (vns->host_ns != host_ns)
+ goto error1;
+
+ // basic checks on the namespace
+ if (test_bit(NVME_NS_DEAD, &host_ns->flags) ||
+ test_bit(NVME_NS_REMOVING, &host_ns->flags) ||
+ !host_ns->disk)
+ goto error1;
+
+ /* read properties of the host namespace */
+ nvme_mdev_io_pause(vctrl);
+ nvme_mdev_vns_read_host_properties(vctrl, vns, host_ns);
+ nvme_mdev_io_resume(vctrl);
+
+ nvme_put_ns(host_ns);
+ return true;
+error1:
+ nvme_put_ns(host_ns);
+ return false;
+}
+
+/* Destroy a virtual namespace*/
+static int __nvme_mdev_vns_destroy(struct nvme_mdev_vctrl *vctrl, u32 user_nsid)
+{
+ struct nvme_mdev_vns *vns;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ vns = nvme_mdev_vns_from_vnsid(vctrl, user_nsid);
+ if (!vns)
+ return -ENODEV;
+
+ nvme_mdev_vns_send_event(vctrl, user_nsid);
+ nvme_mdev_io_pause(vctrl);
+
+ vctrl->namespaces[user_nsid - 1] = NULL;
+ blkdev_put(vns->host_part, vns->fmode);
+ nvme_put_ns(vns->host_ns);
+ kfree(vns);
+ nvme_mdev_io_resume(vctrl);
+ return 0;
+}
+
+/* Destroy a virtual namespace (external interface) */
+int nvme_mdev_vns_destroy(struct nvme_mdev_vctrl *vctrl, u32 user_nsid)
+{
+ int ret;
+
+ mutex_lock(&vctrl->lock);
+ nvme_mdev_io_pause(vctrl);
+ ret = __nvme_mdev_vns_destroy(vctrl, user_nsid);
+ nvme_mdev_io_resume(vctrl);
+ mutex_unlock(&vctrl->lock);
+
+ return ret;
+}
+
+/* Destroy all virtual namespaces */
+void nvme_mdev_vns_destroy_all(struct nvme_mdev_vctrl *vctrl)
+{
+ u32 user_nsid;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ for (user_nsid = 1 ; user_nsid <= MAX_VIRTUAL_NAMESPACES ; user_nsid++)
+ __nvme_mdev_vns_destroy(vctrl, user_nsid);
+}
+
+/* Get a virtual namespace */
+struct nvme_mdev_vns *nvme_mdev_vns_from_vnsid(struct nvme_mdev_vctrl *vctrl,
+ u32 user_ns_id)
+{
+ if (user_ns_id == 0 || user_ns_id > MAX_VIRTUAL_NAMESPACES)
+ return NULL;
+ return vctrl->namespaces[user_ns_id - 1];
+}
+
+/* Print description of all virtual namespaces */
+int nvme_mdev_vns_print_description(struct nvme_mdev_vctrl *vctrl,
+ char *buf, unsigned int size)
+{
+ int nsid, ret = 0;
+
+ mutex_lock(&vctrl->lock);
+
+ for (nsid = 1; nsid <= MAX_VIRTUAL_NAMESPACES; nsid++) {
+ int n;
+ struct nvme_mdev_vns *vns = nvme_mdev_vns_from_vnsid(vctrl,
+ nsid);
+ if (!vns)
+ continue;
+
+ if (vns->host_partid == 0)
+ n = snprintf(buf, size, "VNS%d: nvme%dn%d\n",
+ nsid, vctrl->hctrl->id,
+ (int)vns->host_nsid);
+ else
+ n = snprintf(buf, size, "VNS%d: nvme%dn%dp%d\n",
+ nsid, vctrl->hctrl->id,
+ (int)vns->host_nsid,
+ (int)vns->host_partid);
+ if (n >= size) {
+ mutex_unlock(&vctrl->lock);
+ return -ENOMEM;
+ }
+ buf += n;
+ size -= n;
+ ret += n;
+ }
+ mutex_unlock(&vctrl->lock);
+ return ret;
+}
+
+/* Processes an update on the host namespace */
+void nvme_mdev_vns_host_ns_update(struct nvme_mdev_vctrl *vctrl,
+ u32 host_nsid, bool removed)
+{
+ int nsid;
+
+ mutex_lock(&vctrl->lock);
+
+ for (nsid = 1; nsid <= MAX_VIRTUAL_NAMESPACES; nsid++) {
+ struct nvme_mdev_vns *vns = nvme_mdev_vns_from_vnsid(vctrl,
+ nsid);
+ if (!vns || vns->host_nsid != host_nsid)
+ continue;
+
+ if (removed || !nvme_mdev_vns_reopen(vctrl, vns))
+ __nvme_mdev_vns_destroy(vctrl, nsid);
+ else
+ nvme_mdev_vns_send_event(vctrl, nsid);
+ }
+ mutex_unlock(&vctrl->lock);
+}
diff --git a/drivers/nvme/mdev/vsq.c b/drivers/nvme/mdev/vsq.c
new file mode 100644
index 000000000000..5b63081c144d
--- /dev/null
+++ b/drivers/nvme/mdev/vsq.c
@@ -0,0 +1,181 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Virtual NVMe submission queue implementation
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include "priv.h"
+
+/* Create a new virtual submission queue */
+int nvme_mdev_vsq_init(struct nvme_mdev_vctrl *vctrl,
+ u16 qid, dma_addr_t iova, bool cont, u16 size, u16 cqid)
+{
+ struct nvme_vsq *q = &vctrl->vsqs[qid];
+ int ret;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ q->iova = iova;
+ q->cont = cont;
+ q->qid = qid;
+ q->size = size;
+ q->head = 0;
+ q->vcq = &vctrl->vcqs[cqid];
+ q->data = NULL;
+ q->hsq = 0;
+
+ ret = nvme_mdev_vsq_viommu_update(&vctrl->viommu, q);
+ if (ret && (ret != -EFAULT))
+ return ret;
+
+ if (qid > 0) {
+ ret = nvme_mdev_vctrl_hq_alloc(vctrl);
+ if (ret < 0) {
+ vunmap(q->data);
+ return ret;
+ }
+ q->hsq = ret;
+ }
+
+ _DBG(vctrl, "VSQ: create qid=%d contig=%d, depth=%d cqid=%d\n",
+ qid, cont, size, cqid);
+
+ set_bit(qid, vctrl->vsq_en);
+
+ vctrl->mmio.dbs[q->qid].sqt = 0;
+ vctrl->mmio.eidxs[q->qid].sqt = 0;
+
+ return 0;
+}
+
+/* Update the kernel mapping of the queue */
+int nvme_mdev_vsq_viommu_update(struct nvme_mdev_viommu *viommu,
+ struct nvme_vsq *q)
+{
+ void *data;
+
+ if (q->data)
+ vunmap((void *)q->data);
+
+ data = nvme_mdev_udata_queue_vmap(viommu, q->iova,
+ (unsigned int)q->size *
+ sizeof(struct nvme_command),
+ q->cont);
+
+ q->data = IS_ERR(data) ? NULL : data;
+ return IS_ERR(data) ? PTR_ERR(data) : 0;
+}
+
+/* Delete a virtual submission queue */
+void nvme_mdev_vsq_delete(struct nvme_mdev_vctrl *vctrl, u16 qid)
+{
+ struct nvme_vsq *q = &vctrl->vsqs[qid];
+
+ lockdep_assert_held(&vctrl->lock);
+ _DBG(vctrl, "VSQ: delete qid=%d\n", q->qid);
+
+ if (q->data)
+ vunmap(q->data);
+ q->data = NULL;
+
+ if (q->hsq) {
+ nvme_mdev_vctrl_hq_free(vctrl, q->hsq);
+ q->hsq = 0;
+ }
+
+ clear_bit(qid, vctrl->vsq_en);
+}
+
+/* Move queue head one item forward */
+static void nvme_mdev_vsq_advance_head(struct nvme_vsq *q)
+{
+ q->head++;
+ if (q->head == q->size)
+ q->head = 0;
+}
+
+bool nvme_mdev_vsq_has_data(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vsq *q)
+{
+ u16 tail = le32_to_cpu(vctrl->mmio.dbs[q->qid].sqt);
+
+ if (!vctrl->mmio.dbs || !vctrl->mmio.eidxs || !q->data)
+ return false;
+
+ if (tail == q->head)
+ return false;
+
+ if (!nvme_mdev_mmio_db_check(vctrl, q->qid, q->size, tail))
+ return false;
+ return true;
+}
+
+/* get one command from a virtual submission queue */
+const struct nvme_command *nvme_mdev_vsq_get_cmd(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vsq *q)
+{
+ u16 oldhead = q->head;
+ u32 eidx;
+
+ if (!nvme_mdev_vsq_has_data(vctrl, q))
+ return NULL;
+ if (!nvme_mdev_vcq_reserve_space(q->vcq))
+ return NULL;
+ nvme_mdev_vsq_advance_head(q);
+
+ eidx = q->head + (q->size >> 1);
+ if (eidx >= q->size)
+ eidx -= q->size;
+
+ vctrl->mmio.eidxs[q->qid].sqt = cpu_to_le32(eidx);
+
+ return &q->data[oldhead];
+}
+
+bool nvme_mdev_vsq_suspend_io(struct nvme_mdev_vctrl *vctrl, u16 sqid)
+{
+ struct nvme_vsq *q = &vctrl->vsqs[sqid];
+ u16 tail = le32_to_cpu(vctrl->mmio.dbs[q->qid].sqt);
+
+ /* If the queue is not in a working state, don't allow the idle code
+ * to kick in
+ */
+ if (!vctrl->mmio.dbs || !vctrl->mmio.eidxs || !q->data)
+ return false;
+
+ /* queue has data - refuse idle*/
+ if (tail != q->head)
+ return false;
+
+ /* Write the event index to tell the user to ring the normal doorbell */
+ vctrl->mmio.eidxs[q->qid].sqt = cpu_to_le32(q->head);
+
+ /* memory barrier to ensure that the user has seen the eidx */
+ mb();
+
+ /* Check that the doorbell didn't move meanwhile */
+ tail = le32_to_cpu(vctrl->mmio.dbs[q->qid].sqt);
+ return (tail == q->head);
+}
+
+/* complete a command (IO version)*/
+void nvme_mdev_vsq_cmd_done_io(struct nvme_mdev_vctrl *vctrl,
+ u16 sqid, u16 cid, u16 status)
+{
+ struct nvme_vsq *q = &vctrl->vsqs[sqid];
+
+ nvme_mdev_vcq_write_io(vctrl, q->vcq, q->head, q->qid, cid, status);
+}
+
+/* complete a command (ADMIN version)*/
+void nvme_mdev_vsq_cmd_done_adm(struct nvme_mdev_vctrl *vctrl,
+ u32 dw0, u16 cid, u16 status)
+{
+ struct nvme_vsq *q = &vctrl->vsqs[0];
+
+ nvme_mdev_vcq_write_adm(vctrl, q->vcq, dw0, q->head, cid, status);
+}
--
2.17.2
* [PATCH 8/9] nvme/core: add nvme-mdev core driver
@ 2019-03-19 14:41 ` Maxim Levitsky
From: Maxim Levitsky @ 2019-03-19 14:41 UTC (permalink / raw)
This is the main commit in the series, adding the mediated NVMe
driver.
The idea behind this driver is based on the paper you can find at
https://www.usenix.org/conference/atc18/presentation/peng,
but this is an independent implementation.
This mdev device exposes an NVMe 1.3 virtual device to any VFIO user;
it represents a partition (or a whole namespace) of a host NVMe device.
Unlike the paper, the driver uses one polling thread per mediated
device, and needs only one hardware queue per device to achieve
near-native performance (it can use more than one hardware queue, in
which case it splits the host queues between the virtual NVMe queues,
thus supporting an n:m mapping between host hardware queues and guest
virtual queues).
nvme-mdev can't yet be used after this commit alone, as no NVMe device
driver supports mediation yet (support is added to nvme-pci in the
next commit).
The driver can use the optional NVMe shadow doorbell feature to stop
polling after a timeout.
Currently the device uses the Red Hat PCI vendor ID and PCI device ID
0x1234, which is only a placeholder until a real device ID is
allocated.
Use example:
# load the nvme-mdev driver
$ modprobe nvme-mdev
# load the nvme pci driver with 4 polling queues reserved
# (will work with the next patch)
$ modprobe nvme mdev_queues=4
# generate random UUID for the mediated device
$ UUID=$(uuidgen)
$ MDEV_DEVICE=/sys/bus/mdev/devices/$UUID
# the location of the real nvme device (replace with yours)
$ PCI_DEVICE=/sys/bus/pci/devices/0000:44:00.0
# create the mediated device using 2 host polling queues
$ echo $UUID > $PCI_DEVICE/mdev_supported_types/nvme-2Q_V1/create
# attach partition 1 of namespace 1 to a free virtual namespace
# (use n1 to attach whole namespace)
# you can attach up to 16 virtual namespaces for now
$ echo n1p1 > $MDEV_DEVICE/namespaces/add_namespace
# move the polling thread to cpu 11
$ echo 11 > $MDEV_DEVICE/settings/iothread_cpu
# now you can boot qemu with
# -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/$UUID
Note that you can attach and detach virtual namespaces even while the
guest is running; this makes the device send namespace-changed AEN
notifications to the guest as part of the attach/detach action.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
MAINTAINERS | 5 +
drivers/nvme/Kconfig | 1 +
drivers/nvme/Makefile | 1 +
drivers/nvme/host/core.c | 5 +-
drivers/nvme/mdev/Kconfig | 16 +
drivers/nvme/mdev/Makefile | 5 +
drivers/nvme/mdev/adm.c | 873 +++++++++++++++++++++++++++++++++++
drivers/nvme/mdev/events.c | 142 ++++++
drivers/nvme/mdev/host.c | 491 ++++++++++++++++++++
drivers/nvme/mdev/instance.c | 802 ++++++++++++++++++++++++++++++++
drivers/nvme/mdev/io.c | 563 ++++++++++++++++++++++
drivers/nvme/mdev/irq.c | 264 +++++++++++
drivers/nvme/mdev/mdev.h | 56 +++
drivers/nvme/mdev/mmio.c | 591 ++++++++++++++++++++++++
drivers/nvme/mdev/pci.c | 247 ++++++++++
drivers/nvme/mdev/priv.h | 700 ++++++++++++++++++++++++++++
drivers/nvme/mdev/udata.c | 390 ++++++++++++++++
drivers/nvme/mdev/vcq.c | 209 +++++++++
drivers/nvme/mdev/vctrl.c | 515 +++++++++++++++++++++
drivers/nvme/mdev/viommu.c | 322 +++++++++++++
drivers/nvme/mdev/vns.c | 356 ++++++++++++++
drivers/nvme/mdev/vsq.c | 181 ++++++++
22 files changed, 6733 insertions(+), 2 deletions(-)
create mode 100644 drivers/nvme/mdev/Kconfig
create mode 100644 drivers/nvme/mdev/Makefile
create mode 100644 drivers/nvme/mdev/adm.c
create mode 100644 drivers/nvme/mdev/events.c
create mode 100644 drivers/nvme/mdev/host.c
create mode 100644 drivers/nvme/mdev/instance.c
create mode 100644 drivers/nvme/mdev/io.c
create mode 100644 drivers/nvme/mdev/irq.c
create mode 100644 drivers/nvme/mdev/mdev.h
create mode 100644 drivers/nvme/mdev/mmio.c
create mode 100644 drivers/nvme/mdev/pci.c
create mode 100644 drivers/nvme/mdev/priv.h
create mode 100644 drivers/nvme/mdev/udata.c
create mode 100644 drivers/nvme/mdev/vcq.c
create mode 100644 drivers/nvme/mdev/vctrl.c
create mode 100644 drivers/nvme/mdev/viommu.c
create mode 100644 drivers/nvme/mdev/vns.c
create mode 100644 drivers/nvme/mdev/vsq.c
diff --git a/MAINTAINERS b/MAINTAINERS
index dce5c099f43c..d143e929d7ed 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -10896,6 +10896,11 @@ W: http://git.infradead.org/nvme.git
S: Supported
F: drivers/nvme/target/
+NVM EXPRESS MDEV DRIVER
+M: Maxim Levitsky <mlevitsk at redhat.com>
+S: Supported
+F: drivers/nvme/mdev/
+
NVMEM FRAMEWORK
M: Srinivas Kandagatla <srinivas.kandagatla at linaro.org>
S: Maintained
diff --git a/drivers/nvme/Kconfig b/drivers/nvme/Kconfig
index 04008e0bbe81..cbf867e6ac1e 100644
--- a/drivers/nvme/Kconfig
+++ b/drivers/nvme/Kconfig
@@ -2,5 +2,6 @@ menu "NVME Support"
source "drivers/nvme/host/Kconfig"
source "drivers/nvme/target/Kconfig"
+source "drivers/nvme/mdev/Kconfig"
endmenu
diff --git a/drivers/nvme/Makefile b/drivers/nvme/Makefile
index 0096a7fd1431..0458efc57aee 100644
--- a/drivers/nvme/Makefile
+++ b/drivers/nvme/Makefile
@@ -1,3 +1,4 @@
obj-y += host/
obj-y += target/
+obj-y += mdev/
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 90561973bce9..a835884fcbcd 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1687,6 +1687,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
if (ns->ms && !ns->ext &&
(ns->ctrl->ops->flags & NVME_F_METADATA_SUPPORTED))
nvme_init_integrity(disk, ns->ms, ns->pi_type);
+
if (ns->ms && !nvme_ns_has_pi(ns) && !blk_get_integrity(disk))
capacity = 0;
@@ -2302,7 +2303,7 @@ static void nvme_init_subnqn(struct nvme_subsystem *subsys, struct nvme_ctrl *ct
size_t nqnlen;
int off;
- if(!(ctrl->quirks & NVME_QUIRK_IGNORE_DEV_SUBNQN)) {
+ if (!(ctrl->quirks & NVME_QUIRK_IGNORE_DEV_SUBNQN)) {
nqnlen = strnlen(id->subnqn, NVMF_NQN_SIZE);
if (nqnlen > 0 && nqnlen < NVMF_NQN_SIZE) {
strlcpy(subsys->subnqn, id->subnqn, NVMF_NQN_SIZE);
@@ -3361,8 +3362,8 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
nvme_mpath_add_disk(ns, id);
nvme_fault_inject_init(ns);
- kfree(id);
+ kfree(id);
return;
out_put_disk:
put_disk(ns->disk);
diff --git a/drivers/nvme/mdev/Kconfig b/drivers/nvme/mdev/Kconfig
new file mode 100644
index 000000000000..7ebc66cdeac0
--- /dev/null
+++ b/drivers/nvme/mdev/Kconfig
@@ -0,0 +1,16 @@
+
+config NVME_MDEV
+ bool
+
+config NVME_MDEV_VFIO
+ tristate "NVME Mediated VFIO virtual device"
+ select NVME_MDEV
+ depends on BLOCK
+ depends on VFIO_MDEV
+ depends on NVME_CORE
+ help
+ This provides EXPERIMENTAL support for lightweight software
+ passthrough of a partition on an NVMe storage device to a
+ guest, exposed as an NVMe namespace attached to a virtual
+ NVMe controller.
+ If unsure, say N.
diff --git a/drivers/nvme/mdev/Makefile b/drivers/nvme/mdev/Makefile
new file mode 100644
index 000000000000..114016c48476
--- /dev/null
+++ b/drivers/nvme/mdev/Makefile
@@ -0,0 +1,5 @@
+
+obj-$(CONFIG_NVME_MDEV_VFIO) += nvme-mdev.o
+
+nvme-mdev-y += adm.o events.o instance.o host.o io.o irq.o \
+ udata.o viommu.o vns.o vsq.o vcq.o vctrl.o mmio.o pci.o
diff --git a/drivers/nvme/mdev/adm.c b/drivers/nvme/mdev/adm.c
new file mode 100644
index 000000000000..39a7ad252c69
--- /dev/null
+++ b/drivers/nvme/mdev/adm.c
@@ -0,0 +1,873 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * NVMe admin command implementation
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include "priv.h"
+
+struct adm_ctx {
+ struct nvme_mdev_vctrl *vctrl;
+ struct nvme_mdev_hctrl *hctrl;
+ const struct nvme_command *in;
+ struct nvme_mdev_vns *ns;
+ struct nvme_ext_data_iter udatait;
+ unsigned int datalen;
+};
+
+/*Identify Controller */
+static int nvme_mdev_adm_handle_id_cntrl(struct adm_ctx *ctx)
+{
+ int ret;
+ const struct nvme_identify *in = &ctx->in->identify;
+ struct nvme_id_ctrl *id;
+
+ if (in->nsid != 0)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ id = kzalloc(sizeof(*id), GFP_KERNEL);
+ if (!id)
+ return NVME_SC_INTERNAL;
+
+ /** Controller Capabilities and Features ************************/
+ // PCI vendor ID
+ store_le16(&id->vid, NVME_MDEV_PCI_VENDOR_ID);
+ // PCI Subsystem Vendor ID
+ store_le16(&id->ssvid, NVME_MDEV_PCI_SUBVENDOR_ID);
+ // Serial Number
+ store_strsp(id->sn, ctx->vctrl->serial);
+ // Model Number
+ store_strsp(id->mn, "NVMe MDEV virtual device");
+ // Firmware Revision
+ store_strsp(id->fr, NVME_MDEV_FIRMWARE_VERSION);
+ // Recommended Arbitration Burst
+ id->rab = 6;
+ // IEEE OUI Identifier for the controller vendor
+ id->ieee[0] = 0;
+ // Controller Multi-Path I/O and Namespace Sharing Capabilities
+ id->cmic = 0;
+ // Maximum Data Transfer Size (power of two, in page size units)
+ id->mdts = ctx->hctrl->mdts;
+ // controller ID
+ id->cntlid = 0;
+ // NVME supported version
+ store_le32(&id->ver, NVME_MDEV_NVME_VER);
+ // RTD3 Resume Latency
+ id->rtd3r = 0;
+ //RTD3 Entry Latency
+ id->rtd3e = 0;
+ // Optional Asynchronous Events Supported
+ store_le32(&id->oaes, NVME_AEN_CFG_NS_ATTR);
+ // Controller Attributes (misc junk)
+ id->ctratt = 0;
+
+ /*Admin Command Set Attributes & Optional Controller Capabilities */
+ // Optional Admin Command Support
+ id->oacs = ctx->vctrl->mmio.shadow_db_supported ?
+ NVME_CTRL_OACS_DBBUF_SUPP : 0;
+ // Abort Command Limit (dummy, zero based)
+ id->acl = 3;
+ // Asynchronous Event Request Limit (zero based)
+ id->aerl = MAX_AER_COMMANDS - 1;
+ // Firmware Updates (dummy)
+ id->frmw = 3;
+ // Log Page Attributes
+ // (IMPLEMENT: bit for commands supported and effects)
+ id->lpa = 0;
+ // Error Log Page Entries
+ // (zero based, IMPLEMENT: dummy for now)
+ id->elpe = 0;
+ // Number of Power States Support
+ // (zero based, IMPLEMENT: dummy for now)
+ id->npss = 0;
+ // Admin Vendor Specific Command Configuration (junk)
+ id->avscc = 0;
+ // Autonomous Power State Transition Attributes
+ id->apsta = 0;
+ // Warning Composite Temperature Threshold (dummy)
+ id->wctemp = 0x157;
+ // Critical Composite Temperature Threshold (dummy)
+ id->cctemp = 0x175;
+ // Maximum Time for Firmware Activation (dummy)
+ id->mtfa = 0;
+ // Host Memory Buffer Preferred Size (dummy)
+ id->hmpre = 0;
+ // Host Memory Buffer Minimum Size (dummy)
+ id->hmmin = 0;
+ // Total NVM Capacity (not supported)
+ id->tnvmcap[0] = 0;
+ // Unallocated NVM Capacity (not supported for now)
+ id->unvmcap[0] = 0;
+ // Replay Protected Memory Block Support
+ id->rpmbs = 0;
+ // Extended Device Self-test Time (dummy)
+ id->edstt = 0;
+ // Device Self-test Options (dummy)
+ id->dsto = 0;
+ // Firmware Update Granularity (dummy)
+ id->fwug = 0;
+ // Keep Alive Support (not supported)
+ id->kas = 0;
+ // Host Controlled Thermal Management Attributes (not supported)
+ id->hctma = 0;
+ // Minimum Thermal Management Temperature (not supported)
+ id->mntmt = 0;
+ // Maximum Thermal Management Temperature (not supported)
+ id->mxtmt = 0;
+ // Sanitize capabilities (not supported)
+ id->sanicap = 0;
+
+ /****************** NVM Command Set Attributes ********************/
+ // Submission Queue Entry Size
+ id->sqes = (0x6 << 4) | 0x6;
+ // Completion Queue Entry Size
+ id->cqes = (0x4 << 4) | 0x4;
+ // Maximum Outstanding Commands
+ id->maxcmd = 0;
+ // Number of Namespaces
+ id->nn = MAX_VIRTUAL_NAMESPACES;
+ // Optional NVM Command Support
+ // (we add DSM and Write Zeroes if the host supports them)
+ id->oncs = ctx->hctrl->oncs;
+ // TODOLATER: IO: Fused Operation Support
+ id->fuses = 0;
+ // Format NVM Attributes (don't support)
+ id->fna = 0;
+ // Volatile Write Cache (report as always present)
+ id->vwc = 1;
+ // Atomic Write Unit Normal (zero based value in blocks)
+ id->awun = 0;
+ // Atomic Write Unit Power Fail (ditto)
+ id->awupf = 0;
+ // NVM Vendor Specific Command Configuration
+ id->nvscc = 0;
+ // Atomic Compare & Write Unit (zero based value in blocks)
+ id->acwu = 0;
+ // SGL Support
+ id->sgls = 0;
+ // NVM Subsystem NVMe Qualified Name
+ strncpy(id->subnqn, ctx->vctrl->subnqn, sizeof(id->subnqn));
+
+ /******************Power state descriptors ***********************/
+ store_le16(&id->psd[0].max_power, 0x9c4); // dummy
+ store_le32(&id->psd[0].entry_lat, 0x10);
+ store_le32(&id->psd[0].exit_lat, 0x4);
+
+ ret = nvme_mdev_write_to_udata(&ctx->udatait, id, sizeof(*id));
+ kfree(id);
+ return nvme_mdev_translate_error(ret);
+}
+
+/*Identify Namespace data structure for the specified NSID or common one */
+static int nvme_mdev_adm_handle_id_ns(struct adm_ctx *ctx)
+{
+ int ret;
+ struct nvme_id_ns *idns;
+ u32 nsid = le32_to_cpu(ctx->in->identify.nsid);
+
+ if (nsid == 0xffffffff || nsid == 0 || nsid > MAX_VIRTUAL_NAMESPACES)
+ return DNR(NVME_SC_INVALID_NS);
+
+ /* Allocate return structure*/
+ idns = kzalloc(NVME_IDENTIFY_DATA_SIZE, GFP_KERNEL);
+ if (!idns)
+ return NVME_SC_INTERNAL;
+
+ if (ctx->ns) {
+ //Namespace Size
+ store_le64(&idns->nsze, ctx->ns->ns_size);
+ // Namespace Capacity
+ store_le64(&idns->ncap, ctx->ns->ns_size);
+ // Namespace Utilization
+ store_le64(&idns->nuse, ctx->ns->ns_size);
+ // Namespace Features (nothing to set here yet)
+ idns->nsfeat = 0;
+ // Number of LBA Formats (dummy, zero based)
+ idns->nlbaf = 0;
+ // Formatted LBA Size (current LBA format in use)
+ // + external metadata bit
+ idns->flbas = 0;
+ // Metadata Capabilities
+ idns->mc = 0;
+ // End-to-end Data Protection Capabilities
+ idns->dpc = 0;
+ // End-to-end Data Protection Type Settings
+ idns->dps = 0;
+ // Namespace Multi-path I/O and Namespace Sharing Capabilities
+ idns->nmic = 0;
+ // Reservation Capabilities
+ idns->rescap = 0;
+ // Format Progress Indicator (dummy)
+ idns->fpi = 0;
+ // Namespace Atomic Write Unit Normal
+ idns->nawun = 0;
+ // Namespace Atomic Write Unit Power Fail
+ idns->nawupf = 0;
+ // Namespace Atomic Compare & Write Unit
+ idns->nacwu = 0;
+ // Namespace Atomic Boundary Size Normal
+ idns->nabsn = 0;
+ // Namespace Atomic Boundary Offset
+ idns->nabo = 0;
+ // Namespace Atomic Boundary Size Power Fail
+ idns->nabspf = 0;
+ // Namespace Optimal IO Boundary
+ idns->noiob = ctx->ns->noiob;
+ // NVM Capacity (another capacity but in bytes)
+ idns->nvmcap[0] = 0;
+
+ // TODOLATER: NS: support NGUID/EUI64
+ idns->nguid[0] = 0;
+ idns->eui64[0] = 0;
+ // format 0 metadata size
+ idns->lbaf[0].ms = 0;
+ // format 0 block size (in power of two)
+ idns->lbaf[0].ds = ctx->ns->blksize_shift;
+ // format 0 relative performance
+ idns->lbaf[0].rp = 0;
+ }
+
+ ret = nvme_mdev_write_to_udata(&ctx->udatait, idns,
+ NVME_IDENTIFY_DATA_SIZE);
+ kfree(idns);
+ return nvme_mdev_translate_error(ret);
+}
+
+/* Namespace Identification Descriptor list for the specified NSID.*/
+static int nvme_mdev_adm_handle_id_ns_desc(struct adm_ctx *ctx)
+{
+ struct ns_desc {
+ struct nvme_ns_id_desc uuid_desc;
+ uuid_t uuid;
+ struct nvme_ns_id_desc null_desc;
+ };
+
+ int ret;
+ struct ns_desc *id;
+
+ if (!ctx->ns)
+ return DNR(NVME_SC_INVALID_NS);
+
+ /* Allocate return structure */
+ id = kzalloc(NVME_IDENTIFY_DATA_SIZE, GFP_KERNEL);
+ if (!id)
+ return NVME_SC_INTERNAL;
+
+ id->uuid_desc.nidt = NVME_NIDT_UUID;
+ id->uuid_desc.nidl = NVME_NIDT_UUID_LEN;
+ memcpy(&id->uuid, &ctx->ns->uuid, sizeof(id->uuid));
+
+ ret = nvme_mdev_write_to_udata(&ctx->udatait, id,
+ NVME_IDENTIFY_DATA_SIZE);
+ kfree(id);
+ return nvme_mdev_translate_error(ret);
+}
+
+/*Active Namespace ID list */
+static int nvme_mdev_adm_handle_id_active_ns_list(struct adm_ctx *ctx)
+{
+ u32 nsid, start_nsid = le32_to_cpu(ctx->in->identify.nsid);
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+ int i = 0, ret;
+
+ __le32 *nslist = kzalloc(NVME_IDENTIFY_DATA_SIZE, GFP_KERNEL);
+
+ if (!nslist)
+ return NVME_SC_INTERNAL;
+
+ if (start_nsid >= 0xfffffffe) {
+ kfree(nslist);
+ return DNR(NVME_SC_INVALID_NS);
+ }
+
+ for (nsid = start_nsid + 1; nsid <= MAX_VIRTUAL_NAMESPACES; nsid++)
+ if (nvme_mdev_vns_from_vnsid(vctrl, nsid))
+ nslist[i++] = cpu_to_le32(nsid);
+
+ ret = nvme_mdev_write_to_udata(&ctx->udatait, nslist,
+ NVME_IDENTIFY_DATA_SIZE);
+ kfree(nslist);
+ return nvme_mdev_translate_error(ret);
+}
+
+/* Handle Identify command*/
+static int nvme_mdev_adm_handle_id(struct adm_ctx *ctx)
+{
+ const struct nvme_identify *in = &ctx->in->identify;
+
+ int ret = nvme_mdev_udata_iter_set_dptr(&ctx->udatait,
+ &ctx->in->common.dptr,
+ NVME_IDENTIFY_DATA_SIZE);
+
+ u32 nsid = le32_to_cpu(in->nsid);
+
+ if (ret)
+ return nvme_mdev_translate_error(ret);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_MPTR | RSRV_DW11_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (in->ctrlid)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ ctx->ns = nvme_mdev_vns_from_vnsid(ctx->vctrl, nsid);
+
+ switch (ctx->in->identify.cns) {
+ case NVME_ID_CNS_CTRL:
+ _DBG(ctx->vctrl, "ADMINQ: IDENTIFY CTRL\n");
+ return nvme_mdev_adm_handle_id_cntrl(ctx);
+ case NVME_ID_CNS_NS_ACTIVE_LIST:
+ _DBG(ctx->vctrl, "ADMINQ: IDENTIFY ACTIVE_NS_LIST\n");
+ return nvme_mdev_adm_handle_id_active_ns_list(ctx);
+ case NVME_ID_CNS_NS:
+ _DBG(ctx->vctrl, "ADMINQ: IDENTIFY NS=0x%08x\n", nsid);
+ return nvme_mdev_adm_handle_id_ns(ctx);
+ case NVME_ID_CNS_NS_DESC_LIST:
+ _DBG(ctx->vctrl, "ADMINQ: IDENTIFY NS_DESC NS=0x%08x\n", nsid);
+ return nvme_mdev_adm_handle_id_ns_desc(ctx);
+ default:
+ return DNR(NVME_SC_INVALID_FIELD);
+ }
+}
+
+/* Error log for AER */
+static int nvme_mdev_adm_handle_get_log_page_err(struct adm_ctx *ctx)
+{
+ struct nvme_err_log_entry dummy_entry;
+ int ret;
+
+ // write one dummy entry with 0 error count
+ memset(&dummy_entry, 0, sizeof(dummy_entry));
+
+ ret = nvme_mdev_write_to_udata(&ctx->udatait,
+ &dummy_entry,
+ min((unsigned int)sizeof(dummy_entry),
+ ctx->datalen));
+
+ return nvme_mdev_translate_error(ret);
+}
+
+/* This log page reports attached/detached namespaces to the user */
+static int nvme_mdev_adm_handle_get_log_page_changed_ns(struct adm_ctx *ctx)
+{
+ unsigned int datasize = min(ctx->vctrl->ns_log_size * 4, ctx->datalen);
+
+ int ret = nvme_mdev_write_to_udata(&ctx->udatait,
+ &ctx->vctrl->ns_log, datasize);
+
+ nvme_mdev_vns_log_reset(ctx->vctrl);
+ return nvme_mdev_translate_error(ret);
+}
+
+/* S.M.A.R.T. log*/
+static int nvme_mdev_adm_handle_get_log_page_smart(struct adm_ctx *ctx)
+{
+ unsigned int datasize = min_t(unsigned int,
+ sizeof(struct nvme_smart_log), ctx->datalen);
+ int ret;
+ struct nvme_smart_log *log = kzalloc(sizeof(*log), GFP_KERNEL);
+
+ if (!log)
+ return NVME_SC_INTERNAL;
+
+ /* Some dummy values */
+ log->avail_spare = 100;
+ log->spare_thresh = 10;
+ store_le16(&log->temperature, 0x140);
+
+ ret = nvme_mdev_write_to_udata(&ctx->udatait, log, datasize);
+ kfree(log);
+ return nvme_mdev_translate_error(ret);
+}
+
+/* FW slot log - useless */
+static int nvme_mdev_adm_handle_get_log_page_fw_slot(struct adm_ctx *ctx)
+{
+ unsigned int datasize = min_t(unsigned int,
+ sizeof(struct nvme_fw_slot_info_log),
+ ctx->datalen);
+ int ret;
+ struct nvme_fw_slot_info_log *log = kzalloc(sizeof(*log), GFP_KERNEL);
+
+ if (!log)
+ return NVME_SC_INTERNAL;
+
+ ret = nvme_mdev_write_to_udata(&ctx->udatait, log, datasize);
+ kfree(log);
+ return nvme_mdev_translate_error(ret);
+}
+
+/* Response to GET LOG PAGE command */
+static int nvme_mdev_adm_handle_get_log_page(struct adm_ctx *ctx)
+{
+ const struct nvme_get_log_page_command *in = &ctx->in->get_log_page;
+ u8 log_page_id = ctx->in->get_log_page.lid;
+ int ret;
+
+ ctx->datalen = (le16_to_cpu(in->numdl) + 1) * 4;
+
+ /* We don't support extensions (NUMDU,LPOL,LPOU) */
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_MPTR | RSRV_DW11_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* Currently ignore the NSID in the command */
+
+ /* ACK the AER */
+ if ((in->lsp & 0x80) == 0)
+ nvme_mdev_event_process_ack(ctx->vctrl, log_page_id);
+
+ /* map data pointer */
+ ret = nvme_mdev_udata_iter_set_dptr(&ctx->udatait,
+ &in->dptr, ctx->datalen);
+ if (ret)
+ return nvme_mdev_translate_error(ret);
+
+ switch (log_page_id) {
+ case NVME_LOG_ERROR:
+ _DBG(ctx->vctrl, "ADMINQ: GET_LOG_PAGE : ERRLOG\n");
+ return nvme_mdev_adm_handle_get_log_page_err(ctx);
+ case NVME_LOG_CHANGED_NS:
+ _DBG(ctx->vctrl, "ADMINQ: GET_LOG_PAGE : CHANGED_NS\n");
+ return nvme_mdev_adm_handle_get_log_page_changed_ns(ctx);
+ case NVME_LOG_SMART:
+ _DBG(ctx->vctrl, "ADMINQ: GET_LOG_PAGE : SMART\n");
+ return nvme_mdev_adm_handle_get_log_page_smart(ctx);
+ case NVME_LOG_FW_SLOT:
+ _DBG(ctx->vctrl, "ADMINQ: GET_LOG_PAGE : FWSLOT\n");
+ return nvme_mdev_adm_handle_get_log_page_fw_slot(ctx);
+ default:
+ _DBG(ctx->vctrl, "ADMINQ: GET_LOG_PAGE : log page 0x%02x\n",
+ log_page_id);
+ return DNR(NVME_SC_INVALID_FIELD);
+ }
+}
+
+/* Response to CREATE CQ command */
+static int nvme_mdev_adm_handle_create_cq(struct adm_ctx *ctx)
+{
+ int irq = -1, ret;
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+ const struct nvme_create_cq *in = &ctx->in->create_cq;
+ u16 cqid = le16_to_cpu(in->cqid);
+ u16 qsize = le16_to_cpu(in->qsize);
+ u16 cq_flags = le16_to_cpu(in->cq_flags);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_NSID | RSRV_DW23 | RSRV_DPTR_PRP2 |
+ RSRV_MPTR | RSRV_DW12_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* QID checks*/
+ if (!cqid ||
+ cqid >= MAX_VIRTUAL_QUEUES || test_bit(cqid, vctrl->vcq_en))
+ return DNR(NVME_SC_QID_INVALID);
+
+ /* Queue size checks*/
+ if (qsize > (MAX_VIRTUAL_QUEUE_DEPTH - 1) || qsize < 1)
+ return DNR(NVME_SC_QUEUE_SIZE);
+
+ /* Queue flags checks */
+ if (cq_flags & ~(NVME_QUEUE_PHYS_CONTIG | NVME_CQ_IRQ_ENABLED))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (cq_flags & NVME_CQ_IRQ_ENABLED) {
+ irq = le16_to_cpu(in->irq_vector);
+ if (irq >= MAX_VIRTUAL_IRQS)
+ return DNR(NVME_SC_INVALID_VECTOR);
+ }
+
+ ret = nvme_mdev_vcq_init(ctx->vctrl, cqid,
+ le64_to_cpu(in->prp1),
+ cq_flags & NVME_QUEUE_PHYS_CONTIG,
+ qsize + 1, irq);
+
+ return nvme_mdev_translate_error(ret);
+}
+
+/* Response to DELETE CQ command */
+static int nvme_mdev_adm_handle_delete_cq(struct adm_ctx *ctx)
+{
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+ const struct nvme_delete_queue *in = &ctx->in->delete_queue;
+ u16 qid = le16_to_cpu(in->qid), sqid;
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_NSID | RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW11_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (!qid || qid >= MAX_VIRTUAL_QUEUES || !test_bit(qid, vctrl->vcq_en))
+ return DNR(NVME_SC_QID_INVALID);
+
+ for_each_set_bit(sqid, vctrl->vsq_en, MAX_VIRTUAL_QUEUES)
+ if (vctrl->vsqs[sqid].vcq == &vctrl->vcqs[qid])
+ return DNR(NVME_SC_INVALID_QUEUE);
+
+ nvme_mdev_vcq_delete(vctrl, qid);
+ return NVME_SC_SUCCESS;
+}
+
+/* Response to CREATE SQ command */
+static int nvme_mdev_adm_handle_create_sq(struct adm_ctx *ctx)
+{
+ const struct nvme_create_sq *in = &ctx->in->create_sq;
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+ int ret;
+
+ u16 sqid = le16_to_cpu(in->sqid);
+ u16 cqid = le16_to_cpu(in->cqid);
+ u16 qsize = le16_to_cpu(in->qsize);
+ u16 sq_flags = le16_to_cpu(in->sq_flags);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_NSID | RSRV_DW23 | RSRV_DPTR_PRP2 |
+ RSRV_MPTR | RSRV_DW12_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (!sqid ||
+ sqid >= MAX_VIRTUAL_QUEUES || test_bit(sqid, vctrl->vsq_en))
+ return DNR(NVME_SC_QID_INVALID);
+
+ if (!cqid || cqid >= MAX_VIRTUAL_QUEUES)
+ return DNR(NVME_SC_QID_INVALID);
+
+ if (!test_bit(cqid, vctrl->vcq_en))
+ return DNR(NVME_SC_CQ_INVALID);
+
+ /* Queue size checks */
+ if (qsize > (MAX_VIRTUAL_QUEUE_DEPTH - 1) || qsize < 1)
+ return DNR(NVME_SC_QUEUE_SIZE);
+
+ /* Queue flags checks */
+ if (sq_flags & ~(NVME_QUEUE_PHYS_CONTIG | NVME_SQ_PRIO_MASK))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ ret = nvme_mdev_vsq_init(ctx->vctrl, sqid,
+ le64_to_cpu(in->prp1),
+ sq_flags & NVME_QUEUE_PHYS_CONTIG,
+ qsize + 1, cqid);
+ if (ret)
+ goto error;
+
+ return NVME_SC_SUCCESS;
+error:
+ return nvme_mdev_translate_error(ret);
+}
+
+/* Response to DELETE SQ command */
+static int nvme_mdev_adm_handle_delete_sq(struct adm_ctx *ctx)
+{
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+ const struct nvme_delete_queue *in = &ctx->in->delete_queue;
+ u16 qid = le16_to_cpu(in->qid);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_NSID | RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW11_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (!qid || qid >= MAX_VIRTUAL_QUEUES || !test_bit(qid, vctrl->vsq_en))
+ return DNR(NVME_SC_QID_INVALID);
+
+ nvme_mdev_vsq_delete(ctx->vctrl, qid);
+ return NVME_SC_SUCCESS;
+}
+
+/* Set the shadow doorbell */
+static int nvme_mdev_adm_handle_dbbuf(struct adm_ctx *ctx)
+{
+ const struct nvme_dbbuf *in = &ctx->in->dbbuf;
+ int ret;
+
+ dma_addr_t sdb_iova = le64_to_cpu(in->prp1);
+ dma_addr_t eidx_iova = le64_to_cpu(in->prp2);
+
+ /* Check if we support the shadow doorbell */
+ if (!ctx->vctrl->mmio.shadow_db_supported)
+ return DNR(NVME_SC_INVALID_OPCODE);
+
+ /* Don't allow the shadow doorbell to be enabled more than once */
+ if (ctx->vctrl->mmio.shadow_db_en)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_NSID | RSRV_DW23 |
+ RSRV_MPTR | RSRV_DW10_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* check input buffers */
+ if ((OFFSET_IN_PAGE(sdb_iova) != 0) || (OFFSET_IN_PAGE(eidx_iova) != 0))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* switch to the new doorbell buffer */
+ ret = nvme_mdev_mmio_enable_dbs_shadow(ctx->vctrl, sdb_iova, eidx_iova);
+ return nvme_mdev_translate_error(ret);
+}
+
+/* Response to GET_FEATURES command */
+static int nvme_mdev_adm_handle_get_features(struct adm_ctx *ctx)
+{
+ u32 value = 0;
+ u32 irq;
+ const struct nvme_features *in = &ctx->in->features;
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+ unsigned int tmp;
+
+ u32 fid = le32_to_cpu(in->fid);
+ u16 cid = le16_to_cpu(in->command_id);
+
+ _DBG(ctx->vctrl, "ADMINQ: GET_FEATURES FID=0x%x\n", fid);
+
+ /* common reserved bits*/
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW12_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* reserved bits in dword10*/
+ if (fid > 0xFF)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* reserved bits in dword11*/
+ if (fid != NVME_FEAT_IRQ_CONFIG && in->dword11 != 0)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ switch (fid) {
+ /* Number of queues */
+ case NVME_FEAT_NUM_QUEUES:
+ value = (MAX_VIRTUAL_QUEUES - 1) |
+ ((MAX_VIRTUAL_QUEUES - 1) << 16);
+ goto out;
+
+ /* Arbitration */
+ case NVME_FEAT_ARBITRATION:
+ value = vctrl->arb_burst_shift & 0x7;
+ goto out;
+
+ /* Interrupt coalescing settings*/
+ case NVME_FEAT_IRQ_COALESCE:
+ tmp = vctrl->irqs.irq_coalesc_time_us;
+ do_div(tmp, 100);
+ value = (vctrl->irqs.irq_coalesc_max - 1) | (tmp << 8);
+ goto out;
+
+ /* Interrupt coalescing disable for a specific interrupt */
+ case NVME_FEAT_IRQ_CONFIG:
+ irq = le32_to_cpu(in->dword11);
+ if (irq >= MAX_VIRTUAL_IRQS)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ value = irq;
+ if (vctrl->irqs.vecs[irq].irq_coalesc_en)
+ value |= (1 << 16);
+ goto out;
+
+ /* Volatile write cache */
+ case NVME_FEAT_VOLATILE_WC:
+ /* we always report a write cache due to mediation */
+ value = 0x1;
+ goto out;
+
+ /* Limited error recovery */
+ case NVME_FEAT_ERR_RECOVERY:
+ value = 0;
+ break;
+
+ /* Workload hint + power state */
+ case NVME_FEAT_POWER_MGMT:
+ value = vctrl->worload_hint << 4;
+ break;
+
+ /* Temperature threshold */
+ case NVME_FEAT_TEMP_THRESH:
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* AEN permanent masking*/
+ case NVME_FEAT_ASYNC_EVENT:
+ value = nvme_mdev_event_read_aen_config(vctrl);
+ goto out;
+ default:
+ return DNR(NVME_SC_INVALID_FIELD);
+ }
+out:
+ nvme_mdev_vsq_cmd_done_adm(ctx->vctrl, value, cid, NVME_SC_SUCCESS);
+ return -1;
+}
+
+/* Response to SET_FEATURES command */
+static int nvme_mdev_adm_handle_set_features(struct adm_ctx *ctx)
+{
+ const struct nvme_features *in = &ctx->in->features;
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+
+ u32 value = le32_to_cpu(in->dword11);
+ u8 fid = le32_to_cpu(in->fid) & 0xFF;
+ u16 cid = le16_to_cpu(in->command_id);
+ u32 nsid = le32_to_cpu(in->nsid);
+
+ _DBG(ctx->vctrl, "ADMINQ: SET_FEATURES cmd. FID=0x%x\n", fid);
+
+ if (nsid != 0xffffffff && nsid != 0)
+ return DNR(NVME_SC_FEATURE_NOT_PER_NS);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW12_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ switch (fid) {
+ case NVME_FEAT_NUM_QUEUES:
+ /* need to return the value here as well */
+ value = (MAX_VIRTUAL_QUEUES - 1) |
+ ((MAX_VIRTUAL_QUEUES - 1) << 16);
+
+ nvme_mdev_vsq_cmd_done_adm(ctx->vctrl, value,
+ cid, NVME_SC_SUCCESS);
+ return -1;
+
+ case NVME_FEAT_ARBITRATION:
+ vctrl->arb_burst_shift = value & 0x7;
+ return NVME_SC_SUCCESS;
+
+ case NVME_FEAT_IRQ_COALESCE:
+ vctrl->irqs.irq_coalesc_max = (value & 0xFF) + 1;
+ vctrl->irqs.irq_coalesc_time_us = ((value >> 8) & 0xFF) * 100;
+ return NVME_SC_SUCCESS;
+
+ case NVME_FEAT_IRQ_CONFIG: {
+ u16 irq = value & 0xFFFF;
+
+ if (irq >= MAX_VIRTUAL_IRQS)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ vctrl->irqs.vecs[irq].irq_coalesc_en = (value & 0x10000) != 0;
+ return NVME_SC_SUCCESS;
+ }
+ case NVME_FEAT_VOLATILE_WC:
+ return (value != 0x1) ? DNR(NVME_SC_FEATURE_NOT_CHANGEABLE) :
+ NVME_SC_SUCCESS;
+
+ case NVME_FEAT_ERR_RECOVERY:
+ return (value != 0) ? DNR(NVME_SC_FEATURE_NOT_CHANGEABLE) :
+ NVME_SC_SUCCESS;
+ case NVME_FEAT_POWER_MGMT:
+ if (value & 0xFFFFFF0F)
+ return DNR(NVME_SC_INVALID_FIELD);
+ vctrl->worload_hint = value >> 4;
+ return NVME_SC_SUCCESS;
+
+ case NVME_FEAT_TEMP_THRESH:
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ case NVME_FEAT_ASYNC_EVENT:
+ nvme_mdev_event_set_aen_config(vctrl, value);
+ return NVME_SC_SUCCESS;
+ default:
+ return DNR(NVME_SC_INVALID_FIELD);
+ }
+}
+
+/* Response to AER command */
+static int nvme_mdev_adm_handle_async_event(struct adm_ctx *ctx)
+{
+ u16 cid = le16_to_cpu(ctx->in->common.command_id);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_NSID | RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW10_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ return nvme_mdev_event_request_receive(ctx->vctrl, cid);
+}
+
+/* (Dummy) response to ABORT command*/
+static int nvme_mdev_adm_handle_abort(struct adm_ctx *ctx)
+{
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_NSID | RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW10_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ return DNR(NVME_SC_ABORT_MISSING);
+}
+
+/* Process one new command in the admin queue*/
+static int nvme_mdev_adm_handle_cmd(struct adm_ctx *ctx)
+{
+ u8 optcode = ctx->in->common.opcode;
+
+ ctx->ns = NULL;
+ ctx->datalen = 0;
+
+ if (ctx->in->common.flags != 0)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ switch (optcode) {
+ case nvme_admin_identify:
+ return nvme_mdev_adm_handle_id(ctx);
+ case nvme_admin_create_cq:
+ _DBG(ctx->vctrl, "ADMINQ: CREATE_CQ\n");
+ return nvme_mdev_adm_handle_create_cq(ctx);
+ case nvme_admin_create_sq:
+ _DBG(ctx->vctrl, "ADMINQ: CREATE_SQ\n");
+ return nvme_mdev_adm_handle_create_sq(ctx);
+ case nvme_admin_delete_sq:
+ _DBG(ctx->vctrl, "ADMINQ: DELETE_SQ\n");
+ return nvme_mdev_adm_handle_delete_sq(ctx);
+ case nvme_admin_delete_cq:
+ _DBG(ctx->vctrl, "ADMINQ: DELETE_CQ\n");
+ return nvme_mdev_adm_handle_delete_cq(ctx);
+ case nvme_admin_dbbuf:
+ _DBG(ctx->vctrl, "ADMINQ: DBBUF_CONFIG\n");
+ return nvme_mdev_adm_handle_dbbuf(ctx);
+ case nvme_admin_get_log_page:
+ return nvme_mdev_adm_handle_get_log_page(ctx);
+ case nvme_admin_get_features:
+ return nvme_mdev_adm_handle_get_features(ctx);
+ case nvme_admin_set_features:
+ return nvme_mdev_adm_handle_set_features(ctx);
+ case nvme_admin_async_event:
+ _DBG(ctx->vctrl, "ADMINQ: ASYNC_EVENT_REQ\n");
+ return nvme_mdev_adm_handle_async_event(ctx);
+ case nvme_admin_abort_cmd:
+ _DBG(ctx->vctrl, "ADMINQ: ABORT\n");
+ return nvme_mdev_adm_handle_abort(ctx);
+ default:
+ _DBG(ctx->vctrl, "ADMINQ: optcode 0x%04x\n", optcode);
+ return DNR(NVME_SC_INVALID_OPCODE);
+ }
+}
+
+/* Process all pending admin commands */
+void nvme_mdev_adm_process_sq(struct nvme_mdev_vctrl *vctrl)
+{
+ struct adm_ctx ctx;
+
+ lockdep_assert_held(&vctrl->lock);
+ memset(&ctx, 0, sizeof(struct adm_ctx));
+ ctx.vctrl = vctrl;
+ ctx.hctrl = vctrl->hctrl;
+ nvme_mdev_udata_iter_setup(&vctrl->viommu, &ctx.udatait);
+
+ nvme_mdev_io_pause(ctx.vctrl);
+
+ while (!(nvme_mdev_vctrl_is_dead(vctrl))) {
+ int ret;
+ u16 cid;
+
+ ctx.in = nvme_mdev_vsq_get_cmd(vctrl, &vctrl->vsqs[0]);
+ if (!ctx.in)
+ break;
+
+ cid = le16_to_cpu(ctx.in->common.command_id);
+ ret = nvme_mdev_adm_handle_cmd(&ctx);
+
+ if (ret == -1)
+ continue;
+
+ if (ret != 0)
+ _DBG(vctrl, "ADMINQ: CID 0x%x FAILED: status 0x%x\n",
+ cid, ret);
+ nvme_mdev_vsq_cmd_done_adm(vctrl, 0, cid, ret);
+ }
+ nvme_mdev_io_resume(ctx.vctrl);
+}
diff --git a/drivers/nvme/mdev/events.c b/drivers/nvme/mdev/events.c
new file mode 100644
index 000000000000..9854c1cabdcb
--- /dev/null
+++ b/drivers/nvme/mdev/events.c
@@ -0,0 +1,142 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * NVMe async events implementation (AER, changed namespace log)
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include "priv.h"
+
+/* complete an AER event on the admin queue if it is pending*/
+static void nvme_mdev_event_complete(struct nvme_mdev_vctrl *vctrl)
+{
+ u16 lid, cid;
+ u32 dw0;
+
+ for_each_set_bit(lid, vctrl->events.events_pending, MAX_LOG_PAGES) {
+ /* we have pending events but no AER requests to complete */
+ if (vctrl->events.aer_cid_count == 0)
+ break;
+
+ if (!test_bit(lid, vctrl->events.events_enabled))
+ continue;
+
+ cid = vctrl->events.aer_cids[--vctrl->events.aer_cid_count];
+ dw0 = vctrl->events.event_values[lid];
+ clear_bit(lid, vctrl->events.events_pending);
+
+ _DBG(vctrl,
+ "AEN: replying to AER (CID=%d) with status 0x%08x\n",
+ cid, dw0);
+
+ nvme_mdev_vsq_cmd_done_adm(vctrl, dw0, cid, NVME_SC_SUCCESS);
+ }
+}
+
+/* deal with received async event request from the user*/
+int nvme_mdev_event_request_receive(struct nvme_mdev_vctrl *vctrl,
+ u16 cid)
+{
+ int cnt = vctrl->events.aer_cid_count;
+
+ if (cnt >= MAX_AER_COMMANDS)
+ return DNR(NVME_SC_ASYNC_LIMIT);
+
+ /* don't allow an AER to remain permanently pending when there
+ * is no space left in the completion queue
+ */
+ if ((cnt + 1) >= vctrl->vcqs[0].size - 1)
+ return DNR(NVME_SC_ASYNC_LIMIT);
+
+ vctrl->events.aer_cids[cnt++] = cid;
+ vctrl->events.aer_cid_count = cnt;
+
+ _DBG(vctrl, "AEN: received new request (cid=%d)\n", cid);
+ nvme_mdev_event_complete(vctrl);
+ return -1;
+}
+
+/* Send an async event request */
+void nvme_mdev_event_send(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_async_event_type type,
+ enum nvme_async_event info)
+{
+ u8 log_page;
+ u32 event;
+
+ // determine the log page for event types that we support
+ switch (type) {
+ case NVME_AER_TYPE_ERROR:
+ log_page = NVME_LOG_ERROR;
+ break;
+ case NVME_AER_TYPE_SMART:
+ log_page = NVME_LOG_SMART;
+ break;
+ case NVME_AER_TYPE_NOTICE:
+ WARN_ON(info != NVME_AER_NOTICE_NS_CHANGED);
+ log_page = NVME_LOG_CHANGED_NS;
+ break;
+ default:
+ WARN_ON(1);
+ return;
+ }
+
+ if (test_and_set_bit(log_page, vctrl->events.events_masked))
+ return;
+
+ event = (u32)type | ((u32)info << 8) | ((u32)log_page << 16);
+ vctrl->events.event_values[log_page] = event;
+ set_bit(log_page, vctrl->events.events_masked);
+ set_bit(log_page, vctrl->events.events_pending);
+ nvme_mdev_event_complete(vctrl);
+}
+
+u32 nvme_mdev_event_read_aen_config(struct nvme_mdev_vctrl *vctrl)
+{
+ u32 value = 0;
+
+ if (test_bit(NVME_LOG_CHANGED_NS, vctrl->events.events_enabled))
+ value |= NVME_AEN_CFG_NS_ATTR;
+ return value;
+}
+
+void nvme_mdev_event_set_aen_config(struct nvme_mdev_vctrl *vctrl, u32 value)
+{
+ _DBG(vctrl, "AEN: set config: 0x%04x\n", value);
+
+ if (value & NVME_AEN_CFG_NS_ATTR)
+ set_bit(NVME_LOG_CHANGED_NS, vctrl->events.events_enabled);
+ else
+ clear_bit(NVME_LOG_CHANGED_NS, vctrl->events.events_enabled);
+
+ nvme_mdev_event_complete(vctrl);
+}
+
+/* Called when the user acks a log page, which unmasks the corresponding AER event */
+void nvme_mdev_event_process_ack(struct nvme_mdev_vctrl *vctrl, u8 log_page)
+{
+ lockdep_assert_held(&vctrl->lock);
+
+ _DBG(vctrl, "AEN: log page %d ACK\n", log_page);
+
+ if (log_page >= MAX_LOG_PAGES)
+ return;
+
+ clear_bit(log_page, vctrl->events.events_masked);
+ nvme_mdev_event_complete(vctrl);
+}
+
+/* Initialize event state */
+void nvme_mdev_events_init(struct nvme_mdev_vctrl *vctrl)
+{
+ memset(&vctrl->events, 0, sizeof(vctrl->events));
+ set_bit(NVME_LOG_CHANGED_NS, vctrl->events.events_enabled);
+ set_bit(NVME_LOG_ERROR, vctrl->events.events_enabled);
+}
+
+/* Reset event state */
+void nvme_mdev_events_reset(struct nvme_mdev_vctrl *vctrl)
+{
+ memset(&vctrl->events, 0, sizeof(vctrl->events));
+}
+
diff --git a/drivers/nvme/mdev/host.c b/drivers/nvme/mdev/host.c
new file mode 100644
index 000000000000..d90275baf5f8
--- /dev/null
+++ b/drivers/nvme/mdev/host.c
@@ -0,0 +1,491 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * NVMe parent (host) device abstraction
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/nvme.h>
+#include <linux/mdev.h>
+#include <linux/module.h>
+#include "priv.h"
+
+static LIST_HEAD(nvme_mdev_hctrl_list);
+static DEFINE_MUTEX(nvme_mdev_hctrl_list_mutex);
+static struct nvme_mdev_inst_type **instance_types;
+
+unsigned int io_timeout_ms = 30000;
+module_param_named(io_timeout, io_timeout_ms, uint, 0644);
+MODULE_PARM_DESC(io_timeout,
+ "Maximum I/O command completion timeout (in msec)");
+
+unsigned int poll_timeout_ms = 500;
+module_param_named(poll_timeout, poll_timeout_ms, uint, 0644);
+MODULE_PARM_DESC(poll_timeout,
+ "Maximum idle time to keep polling (in msec) (0 - poll forever)");
+
+unsigned int admin_poll_rate_ms = 100;
+module_param_named(admin_poll_rate, admin_poll_rate_ms, uint, 0644);
+MODULE_PARM_DESC(admin_poll_rate,
+ "Admin queue polling rate (in msec) (used only when shadow doorbell is disabled)");
+
+bool use_shadow_doorbell = true;
+module_param(use_shadow_doorbell, bool, 0644);
+MODULE_PARM_DESC(use_shadow_doorbell,
+ "Enable the shadow doorbell NVMe extension");
+
+/* Create a new host controller */
+static struct nvme_mdev_hctrl *nvme_mdev_hctrl_create(struct nvme_ctrl *ctrl)
+{
+ struct nvme_mdev_hctrl *hctrl;
+ u32 max_lba_transfer;
+
+ /* TODOLATER: IO: support more page size configurations*/
+ if (ctrl->page_size != PAGE_SIZE)
+ return NULL;
+
+ hctrl = kzalloc_node(sizeof(*hctrl), GFP_KERNEL,
+ dev_to_node(ctrl->dev));
+ if (!hctrl)
+ return NULL;
+
+ kref_init(&hctrl->ref);
+ mutex_init(&hctrl->lock);
+
+ hctrl->nvme_ctrl = ctrl;
+ nvme_get_ctrl(ctrl);
+
+ hctrl->oncs = ctrl->oncs &
+ (NVME_CTRL_ONCS_DSM | NVME_CTRL_ONCS_WRITE_ZEROES);
+
+ hctrl->id = ctrl->instance;
+ hctrl->node = dev_to_node(ctrl->dev);
+
+ max_lba_transfer = ctrl->max_hw_sectors >> (PAGE_SHIFT - 9);
+ hctrl->mdts = ilog2(__rounddown_pow_of_two(max_lba_transfer));
+
+ hctrl->nr_host_queues = ctrl->ops->ext_queues_available(ctrl);
+
+ mutex_lock(&nvme_mdev_hctrl_list_mutex);
+
+ dev_info(ctrl->dev,
+ "mediated nvme support enabled, using up to %d host queues\n",
+ hctrl->nr_host_queues);
+
+ list_add_tail(&hctrl->link, &nvme_mdev_hctrl_list);
+
+ mutex_unlock(&nvme_mdev_hctrl_list_mutex);
+
+ if (mdev_register_device(ctrl->dev, &mdev_fops) < 0) {
+ nvme_put_ctrl(ctrl);
+ kfree(hctrl);
+ return NULL;
+ }
+ return hctrl;
+}
+
+/* Release an unused host controller*/
+static void nvme_mdev_hctrl_free(struct kref *ref)
+{
+ struct nvme_mdev_hctrl *hctrl =
+ container_of(ref, struct nvme_mdev_hctrl, ref);
+
+ dev_info(hctrl->nvme_ctrl->dev, "mediated nvme support disabled\n");
+
+ nvme_put_ctrl(hctrl->nvme_ctrl);
+ hctrl->nvme_ctrl = NULL;
+ kfree(hctrl);
+}
+
+/* Lookup a host controller based on mdev parent device*/
+struct nvme_mdev_hctrl *nvme_mdev_hctrl_lookup_get(struct device *parent)
+{
+ struct nvme_mdev_hctrl *hctrl = NULL, *tmp;
+
+ mutex_lock(&nvme_mdev_hctrl_list_mutex);
+ list_for_each_entry(tmp, &nvme_mdev_hctrl_list, link) {
+ if (tmp->nvme_ctrl->dev == parent) {
+ hctrl = tmp;
+ kref_get(&hctrl->ref);
+ break;
+ }
+ }
+ mutex_unlock(&nvme_mdev_hctrl_list_mutex);
+ return hctrl;
+}
+
+/* Release a held reference to a host controller*/
+void nvme_mdev_hctrl_put(struct nvme_mdev_hctrl *hctrl)
+{
+ kref_put(&hctrl->ref, nvme_mdev_hctrl_free);
+}
+
+/* Destroy a host controller. It may linger in a zombie state
+ * while references to it are still held
+ */
+static void nvme_mdev_hctrl_destroy(struct nvme_mdev_hctrl *hctrl)
+{
+ mutex_lock(&nvme_mdev_hctrl_list_mutex);
+ list_del(&hctrl->link);
+ mutex_unlock(&nvme_mdev_hctrl_list_mutex);
+
+ hctrl->removing = true;
+ mdev_unregister_device(hctrl->nvme_ctrl->dev);
+ nvme_mdev_hctrl_put(hctrl);
+}
+
+/* Check how many host queues are still available */
+int nvme_mdev_hctrl_hqs_available(struct nvme_mdev_hctrl *hctrl)
+{
+ int ret;
+
+ mutex_lock(&hctrl->lock);
+ ret = hctrl->nr_host_queues;
+ mutex_unlock(&hctrl->lock);
+ return ret;
+}
+
+/* Reserve N host IO queues, for later allocation to a specific user*/
+bool nvme_mdev_hctrl_hqs_reserve(struct nvme_mdev_hctrl *hctrl,
+ unsigned int n)
+{
+ mutex_lock(&hctrl->lock);
+
+ if (n > hctrl->nr_host_queues) {
+ mutex_unlock(&hctrl->lock);
+ return false;
+ }
+
+ hctrl->nr_host_queues -= n;
+ mutex_unlock(&hctrl->lock);
+ return true;
+}
+
+/* Free N host IO queues, for allocation for other users*/
+void nvme_mdev_hctrl_hqs_unreserve(struct nvme_mdev_hctrl *hctrl,
+ unsigned int n)
+{
+ mutex_lock(&hctrl->lock);
+ hctrl->nr_host_queues += n;
+ mutex_unlock(&hctrl->lock);
+}
+
+/* Allocate a host IO queue */
+int nvme_mdev_hctrl_hq_alloc(struct nvme_mdev_hctrl *hctrl)
+{
+ u16 qid = 0;
+ int ret = hctrl->nvme_ctrl->ops->ext_queue_alloc(hctrl->nvme_ctrl,
+ &qid);
+
+ if (ret)
+ return ret;
+ return qid;
+}
+
+/* Free a host IO queue */
+void nvme_mdev_hctrl_hq_free(struct nvme_mdev_hctrl *hctrl, u16 qid)
+{
+ hctrl->nvme_ctrl->ops->ext_queue_free(hctrl->nvme_ctrl, qid);
+}
+
+/* Check if we can submit another IO passthrough command */
+bool nvme_mdev_hctrl_hq_can_submit(struct nvme_mdev_hctrl *hctrl, u16 qid)
+{
+ return hctrl->nvme_ctrl->ops->ext_queue_full(hctrl->nvme_ctrl, qid);
+}
+
+/* Check if IO passthrough is supported for a given IO opcode */
+bool nvme_mdev_hctrl_hq_check_op(struct nvme_mdev_hctrl *hctrl, u8 opcode)
+{
+ switch (opcode) {
+ case nvme_cmd_flush:
+ case nvme_cmd_read:
+ case nvme_cmd_write:
+ /* these are mandatory*/
+ return true;
+ case nvme_cmd_write_zeroes:
+ return (hctrl->oncs & NVME_CTRL_ONCS_WRITE_ZEROES);
+ case nvme_cmd_dsm:
+ return (hctrl->oncs & NVME_CTRL_ONCS_DSM);
+ default:
+ return false;
+ }
+}
+
+/* Submit an IO passthrough command */
+int nvme_mdev_hctrl_hq_submit(struct nvme_mdev_hctrl *hctrl,
+ u16 qid, u32 tag,
+ struct nvme_command *cmd,
+ struct nvme_ext_data_iter *datait)
+{
+ struct nvme_ctrl *ctrl = hctrl->nvme_ctrl;
+
+ return ctrl->ops->ext_queue_submit(ctrl, qid, tag, cmd, datait);
+}
+
+/* Poll for completion of IO passthrough commands */
+int nvme_mdev_hctrl_hq_poll(struct nvme_mdev_hctrl *hctrl,
+ u32 qid,
+ struct nvme_ext_cmd_result *results,
+ unsigned int max_len)
+{
+ struct nvme_ctrl *ctrl = hctrl->nvme_ctrl;
+
+ return ctrl->ops->ext_queue_poll(ctrl, qid, results, max_len);
+}
+
+/* Destroy all host controllers */
+void nvme_mdev_hctrl_destroy_all(void)
+{
+ struct nvme_mdev_hctrl *hctrl = NULL, *tmp;
+
+ list_for_each_entry_safe(hctrl, tmp, &nvme_mdev_hctrl_list, link) {
+ list_del(&hctrl->link);
+ hctrl->removing = true;
+ mdev_unregister_device(hctrl->nvme_ctrl->dev);
+ nvme_mdev_hctrl_put(hctrl);
+ }
+}
+
+/* Get the mdev instance type given its sysfs name */
+struct nvme_mdev_inst_type *nvme_mdev_inst_type_get(const char *name)
+{
+ size_t nlen = strlen(name);
+ int i;
+
+ /* mdev type kobjects are named "<prefix>-<type name>",
+ * so match against the suffix of the given name
+ */
+ for (i = 0; instance_types[i]; i++) {
+ size_t tlen = strlen(instance_types[i]->name);
+
+ if (nlen < tlen)
+ continue;
+ if (strcmp(instance_types[i]->name, name + nlen - tlen) == 0)
+ return instance_types[i];
+ }
+ return NULL;
+}
+
+/* This shows name of the instance type */
+static ssize_t name_show(struct kobject *kobj, struct device *dev, char *buf)
+{
+ return sprintf(buf, "%s\n", kobj->name);
+}
+static MDEV_TYPE_ATTR_RO(name);
+
+/* This shows description of the instance type */
+static ssize_t description_show(struct kobject *kobj,
+ struct device *dev, char *buf)
+{
+ struct nvme_mdev_inst_type *type = nvme_mdev_inst_type_get(kobj->name);
+
+ return sprintf(buf,
+ "MDEV nvme device, using maximum %d hw submission queues\n",
+ type->max_hw_queues);
+}
+static MDEV_TYPE_ATTR_RO(description);
+
+/* This shows the device API of the instance type */
+static ssize_t device_api_show(struct kobject *kobj,
+ struct device *dev, char *buf)
+{
+ return sprintf(buf, "%s\n", VFIO_DEVICE_API_PCI_STRING);
+}
+static MDEV_TYPE_ATTR_RO(device_api);
+
+/* This shows how many instances of this instance type can be created */
+static ssize_t available_instances_show(struct kobject *kobj,
+ struct device *dev, char *buf)
+{
+ struct nvme_mdev_inst_type *type = nvme_mdev_inst_type_get(kobj->name);
+ struct nvme_mdev_hctrl *hctrl = nvme_mdev_hctrl_lookup_get(dev);
+ int count;
+
+ if (!hctrl)
+ return -ENODEV;
+
+ count = nvme_mdev_hctrl_hqs_available(hctrl);
+ count /= type->max_hw_queues;
+
+ nvme_mdev_hctrl_put(hctrl);
+ return sprintf(buf, "%d\n", count);
+}
+static MDEV_TYPE_ATTR_RO(available_instances);
+
+static struct attribute *nvme_mdev_types_attrs[] = {
+ &mdev_type_attr_name.attr,
+ &mdev_type_attr_description.attr,
+ &mdev_type_attr_device_api.attr,
+ &mdev_type_attr_available_instances.attr,
+ NULL,
+};
+
+/* Undo the creation of mdev array of instance types */
+static void nvme_mdev_instance_types_fini(struct mdev_parent_ops *ops)
+{
+ int i;
+
+ for (i = 0; instance_types[i]; i++) {
+ struct nvme_mdev_inst_type *type = instance_types[i];
+
+ kfree(type->attrgroup);
+ kfree(type);
+ }
+
+ kfree(instance_types);
+ instance_types = NULL;
+
+ kfree(ops->supported_type_groups);
+ ops->supported_type_groups = NULL;
+}
+
+/* Create the array of mdev instance types from our array of them */
+static int nvme_mdev_instance_types_init(struct mdev_parent_ops *ops)
+{
+ unsigned int i;
+ struct nvme_mdev_inst_type *type;
+ struct attribute_group *attrgroup;
+
+ ops->supported_type_groups = kzalloc(sizeof(struct attribute_group *)
+ * (MAX_HOST_QUEUES + 1), GFP_KERNEL);
+
+ if (!ops->supported_type_groups)
+ return -ENOMEM;
+
+ instance_types = kzalloc(sizeof(struct nvme_mdev_inst_type *)
+ * (MAX_HOST_QUEUES + 1), GFP_KERNEL);
+
+ if (!instance_types) {
+ kfree(ops->supported_type_groups);
+ ops->supported_type_groups = NULL;
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < MAX_HOST_QUEUES; i++) {
+ type = kzalloc(sizeof(*type), GFP_KERNEL);
+ if (!type) {
+ nvme_mdev_instance_types_fini(ops);
+ return -ENOMEM;
+ }
+ snprintf(type->name, sizeof(type->name), "%dQ_V1", i + 1);
+ type->max_hw_queues = i + 1;
+
+ attrgroup = kzalloc(sizeof(*attrgroup), GFP_KERNEL);
+ if (!attrgroup) {
+ kfree(type);
+ nvme_mdev_instance_types_fini(ops);
+ return -ENOMEM;
+ }
+
+ attrgroup->attrs = nvme_mdev_types_attrs;
+ attrgroup->name = type->name;
+ type->attrgroup = attrgroup;
+ instance_types[i] = type;
+ ops->supported_type_groups[i] = attrgroup;
+ }
+ return 0;
+}
+
+/* Handle changes in the host controller state */
+static void nvme_mdev_nvme_ctrl_state_changed(struct nvme_ctrl *ctrl)
+{
+ struct nvme_mdev_hctrl *hctrl = nvme_mdev_hctrl_lookup_get(ctrl->dev);
+ struct nvme_mdev_vctrl *vctrl;
+
+ switch (ctrl->state) {
+ case NVME_CTRL_NEW:
+ /* do nothing as new controller is not yet initialized*/
+ break;
+
+ case NVME_CTRL_LIVE:
+ if (!hctrl) {
+ /* a new controller went live - create mdev support for it */
+ nvme_mdev_hctrl_create(ctrl);
+ return;
+ }
+ /* the controller is live again after reset/reconnect/resume */
+ mutex_lock(&nvme_mdev_vctrl_list_mutex);
+ list_for_each_entry(vctrl, &nvme_mdev_vctrl_list, link)
+ if (vctrl->hctrl == hctrl)
+ nvme_mdev_vctrl_resume(vctrl);
+ mutex_unlock(&nvme_mdev_vctrl_list_mutex);
+ break;
+
+ case NVME_CTRL_RESETTING:
+ case NVME_CTRL_CONNECTING:
+ case NVME_CTRL_SUSPENDED:
+ /* controller is temporarily not usable, stop using its queues*/
+ if (!hctrl)
+ return;
+
+ mutex_lock(&nvme_mdev_vctrl_list_mutex);
+ list_for_each_entry(vctrl, &nvme_mdev_vctrl_list, link)
+ if (vctrl->hctrl == hctrl)
+ nvme_mdev_vctrl_pause(vctrl);
+ mutex_unlock(&nvme_mdev_vctrl_list_mutex);
+ break;
+
+ case NVME_CTRL_DELETING:
+ case NVME_CTRL_DEAD:
+ case NVME_CTRL_ADMIN_ONLY:
+ /* host nvme controller is dead, remove it*/
+ if (!hctrl)
+ return;
+ nvme_mdev_hctrl_destroy(hctrl);
+ break;
+ }
+ if (hctrl)
+ nvme_mdev_hctrl_put(hctrl);
+}
+
+/* A host namespace might have its properties changed/removed.*/
+static void nvme_mdev_nvme_ctrl_ns_updated(struct nvme_ctrl *ctrl,
+ u32 nsid, bool removed)
+{
+ struct nvme_mdev_vctrl *vctrl;
+ struct nvme_mdev_hctrl *hctrl = nvme_mdev_hctrl_lookup_get(ctrl->dev);
+
+ if (!hctrl)
+ return;
+
+ mutex_lock(&nvme_mdev_vctrl_list_mutex);
+ list_for_each_entry(vctrl, &nvme_mdev_vctrl_list, link)
+ if (vctrl->hctrl == hctrl)
+ nvme_mdev_vns_host_ns_update(vctrl, nsid, removed);
+ mutex_unlock(&nvme_mdev_vctrl_list_mutex);
+ nvme_mdev_hctrl_put(hctrl);
+}
+
+static struct nvme_mdev_driver nvme_mdev_driver = {
+ .owner = THIS_MODULE,
+ .nvme_ctrl_state_changed = nvme_mdev_nvme_ctrl_state_changed,
+ .nvme_ns_state_changed = nvme_mdev_nvme_ctrl_ns_updated,
+};
+
+static int __init nvme_mdev_init(void)
+{
+ int ret;
+
+ ret = nvme_mdev_instance_types_init(&mdev_fops);
+ if (ret)
+ return ret;
+
+ ret = nvme_core_register_mdev_driver(&nvme_mdev_driver);
+ if (ret) {
+ nvme_mdev_instance_types_fini(&mdev_fops);
+ return ret;
+ }
+
+ pr_info("nvme_mdev " NVME_MDEV_FIRMWARE_VERSION " loaded\n");
+ return 0;
+}
+
+static void __exit nvme_mdev_exit(void)
+{
+ nvme_core_unregister_mdev_driver(&nvme_mdev_driver);
+ nvme_mdev_hctrl_destroy_all();
+ nvme_mdev_instance_types_fini(&mdev_fops);
+ pr_info("nvme_mdev unloaded\n");
+}
+
+MODULE_AUTHOR("Maxim Levitsky <mlevitsk at redhat.com>");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(NVME_MDEV_FIRMWARE_VERSION);
+
+module_init(nvme_mdev_init)
+module_exit(nvme_mdev_exit)
+
diff --git a/drivers/nvme/mdev/instance.c b/drivers/nvme/mdev/instance.c
new file mode 100644
index 000000000000..da523006aeda
--- /dev/null
+++ b/drivers/nvme/mdev/instance.c
@@ -0,0 +1,802 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Mediated NVMe instance VFIO code
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/vfio.h>
+#include <linux/sysfs.h>
+#include <linux/mdev.h>
+#include "priv.h"
+
+#define OFFSET_TO_REGION(offset) ((offset) >> 20)
+#define REGION_TO_OFFSET(nr) (((u64)nr) << 20)
+
+LIST_HEAD(nvme_mdev_vctrl_list);
+/*protects the list */
+DEFINE_MUTEX(nvme_mdev_vctrl_list_mutex);
+
+struct mdev_nvme_vfio_region_info {
+ struct vfio_region_info base;
+ struct vfio_region_info_cap_sparse_mmap mmap_cap;
+};
+
+/* User memory added*/
+static int nvme_mdev_map_notifier(struct notifier_block *nb,
+ unsigned long action, void *data)
+{
+ struct vfio_iommu_type1_dma_map *map = data;
+ struct nvme_mdev_vctrl *vctrl =
+ container_of(nb, struct nvme_mdev_vctrl, vfio_map_notifier);
+
+ int ret = nvme_mdev_vctrl_viommu_map(vctrl, map->flags,
+ map->iova, map->size);
+
+ return ret ? notifier_from_errno(ret) : NOTIFY_OK;
+}
+
+/* User memory removed*/
+static int nvme_mdev_unmap_notifier(struct notifier_block *nb,
+ unsigned long action, void *data)
+{
+ struct nvme_mdev_vctrl *vctrl =
+ container_of(nb, struct nvme_mdev_vctrl, vfio_unmap_notifier);
+ struct vfio_iommu_type1_dma_unmap *unmap = data;
+
+ int ret = nvme_mdev_vctrl_viommu_unmap(vctrl, unmap->iova, unmap->size);
+
+ WARN_ON(ret <= 0);
+ return NOTIFY_OK;
+}
+
+/* Called when a new mediated device is created */
+static int nvme_mdev_ops_create(struct kobject *kobj, struct mdev_device *mdev)
+{
+ const struct nvme_mdev_inst_type *type;
+ struct nvme_mdev_vctrl *vctrl;
+ struct nvme_mdev_hctrl *hctrl;
+
+ hctrl = nvme_mdev_hctrl_lookup_get(mdev_parent_dev(mdev));
+ if (!hctrl)
+ return -ENODEV;
+
+ type = nvme_mdev_inst_type_get(kobj->name);
+ if (!type) {
+ nvme_mdev_hctrl_put(hctrl);
+ return -EINVAL;
+ }
+
+ vctrl = nvme_mdev_vctrl_create(mdev, hctrl, type->max_hw_queues);
+ if (IS_ERR(vctrl)) {
+ nvme_mdev_hctrl_put(hctrl);
+ return PTR_ERR(vctrl);
+ }
+
+ mutex_lock(&nvme_mdev_vctrl_list_mutex);
+ list_add_tail(&vctrl->link, &nvme_mdev_vctrl_list);
+ mutex_unlock(&nvme_mdev_vctrl_list_mutex);
+
+ nvme_mdev_hctrl_put(hctrl);
+ return 0;
+}
+
+/* Called when a mediated device is removed */
+static int nvme_mdev_ops_remove(struct mdev_device *mdev)
+{
+ struct nvme_mdev_vctrl *vctrl = mdev_to_vctrl(mdev);
+
+ if (!vctrl)
+ return -ENODEV;
+ return nvme_mdev_vctrl_destroy(vctrl);
+}
+
+/* Called when a mediated device is opened by a user */
+static int nvme_mdev_ops_open(struct mdev_device *mdev)
+{
+ int ret;
+ unsigned long events;
+ struct nvme_mdev_vctrl *vctrl = mdev_to_vctrl(mdev);
+
+ if (!vctrl)
+ return -ENODEV;
+
+ ret = nvme_mdev_vctrl_open(vctrl);
+ if (ret)
+ return ret;
+
+ /* register unmap IOMMU notifier*/
+ vctrl->vfio_unmap_notifier.notifier_call = nvme_mdev_unmap_notifier;
+ events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
+
+ ret = vfio_register_notifier(mdev_dev(vctrl->mdev),
+ VFIO_IOMMU_NOTIFY, &events,
+ &vctrl->vfio_unmap_notifier);
+
+ if (ret != 0) {
+ nvme_mdev_vctrl_release(vctrl);
+ return ret;
+ }
+
+ /* register map IOMMU notifier*/
+ vctrl->vfio_map_notifier.notifier_call = nvme_mdev_map_notifier;
+ events = VFIO_IOMMU_NOTIFY_DMA_MAP;
+
+ ret = vfio_register_notifier(mdev_dev(vctrl->mdev),
+ VFIO_IOMMU_NOTIFY, &events,
+ &vctrl->vfio_map_notifier);
+
+ if (ret != 0) {
+ vfio_unregister_notifier(mdev_dev(vctrl->mdev),
+ VFIO_IOMMU_NOTIFY,
+ &vctrl->vfio_unmap_notifier);
+ nvme_mdev_vctrl_release(vctrl);
+ return ret;
+ }
+ return ret;
+}
+
+/* Called when a mediated device is closed (last close by the user) */
+static void nvme_mdev_ops_release(struct mdev_device *mdev)
+{
+ struct nvme_mdev_vctrl *vctrl = mdev_to_vctrl(mdev);
+ int ret;
+
+ if (WARN_ON(!vctrl))
+ return;
+
+ ret = vfio_unregister_notifier(mdev_dev(vctrl->mdev),
+ VFIO_IOMMU_NOTIFY,
+ &vctrl->vfio_unmap_notifier);
+ WARN_ON(ret);
+
+ ret = vfio_unregister_notifier(mdev_dev(vctrl->mdev),
+ VFIO_IOMMU_NOTIFY,
+ &vctrl->vfio_map_notifier);
+ WARN_ON(ret);
+
+ nvme_mdev_vctrl_release(vctrl);
+}
+
+/* Helper function for bar/pci config read/write access */
+static ssize_t nvme_mdev_access(struct nvme_mdev_vctrl *vctrl,
+ char *buf, size_t count,
+ loff_t pos, bool is_write)
+{
+ int index = OFFSET_TO_REGION(pos);
+ int ret = -EINVAL;
+ unsigned int offset;
+
+ if (index >= VFIO_PCI_NUM_REGIONS || !vctrl->regions[index].rw)
+ goto out;
+
+ offset = pos - REGION_TO_OFFSET(index);
+ if (offset + count > vctrl->regions[index].size)
+ goto out;
+
+ ret = vctrl->regions[index].rw(vctrl, offset, buf, count, is_write);
+out:
+ return ret;
+}
+
+/* Called when read() is done on the device */
+static ssize_t nvme_mdev_ops_read(struct mdev_device *mdev, char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ unsigned int done = 0;
+ int ret;
+ struct nvme_mdev_vctrl *vctrl = mdev_to_vctrl(mdev);
+
+ if (!vctrl)
+ return -ENODEV;
+
+ while (count) {
+ size_t filled;
+
+ if (count >= 4 && !(*ppos % 4)) {
+ u32 val;
+
+ ret = nvme_mdev_access(vctrl, (char *)&val,
+ sizeof(val), *ppos, false);
+ if (ret <= 0)
+ goto read_err;
+
+ if (copy_to_user(buf, &val, sizeof(val)))
+ goto read_err;
+ filled = sizeof(val);
+ } else if (count >= 2 && !(*ppos % 2)) {
+ u16 val;
+
+ ret = nvme_mdev_access(vctrl, (char *)&val,
+ sizeof(val), *ppos, false);
+ if (ret <= 0)
+ goto read_err;
+ if (copy_to_user(buf, &val, sizeof(val)))
+ goto read_err;
+ filled = sizeof(val);
+ } else {
+ u8 val;
+
+ ret = nvme_mdev_access(vctrl, (char *)&val,
+ sizeof(val), *ppos, false);
+ if (ret <= 0)
+ goto read_err;
+ if (copy_to_user(buf, &val, sizeof(val)))
+ goto read_err;
+ filled = sizeof(val);
+ }
+
+ count -= filled;
+ done += filled;
+ *ppos += filled;
+ buf += filled;
+ }
+ return done;
+read_err:
+ return -EFAULT;
+}
+
+/* Called when write() is done on the device */
+static ssize_t nvme_mdev_ops_write(struct mdev_device *mdev,
+ const char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ unsigned int done = 0;
+ int ret;
+ struct nvme_mdev_vctrl *vctrl = mdev_to_vctrl(mdev);
+
+ if (!vctrl)
+ return -ENODEV;
+
+ while (count) {
+ size_t filled;
+
+ if (count >= 4 && !(*ppos % 4)) {
+ u32 val;
+
+ if (copy_from_user(&val, buf, sizeof(val)))
+ goto write_err;
+ ret = nvme_mdev_access(vctrl, (char *)&val,
+ sizeof(val), *ppos, true);
+ if (ret <= 0)
+ goto write_err;
+ filled = sizeof(val);
+ } else if (count >= 2 && !(*ppos % 2)) {
+ u16 val;
+
+ if (copy_from_user(&val, buf, sizeof(val)))
+ goto write_err;
+
+ ret = nvme_mdev_access(vctrl, (char *)&val,
+ sizeof(val), *ppos, true);
+ if (ret <= 0)
+ goto write_err;
+ filled = sizeof(val);
+ } else {
+ u8 val;
+
+ if (copy_from_user(&val, buf, sizeof(val)))
+ goto write_err;
+ ret = nvme_mdev_access(vctrl, (char *)&val,
+ sizeof(val), *ppos, true);
+ if (ret <= 0)
+ goto write_err;
+ filled = sizeof(val);
+ }
+ count -= filled;
+ done += filled;
+ *ppos += filled;
+ buf += filled;
+ }
+ return done;
+write_err:
+ return -EFAULT;
+}
+
+/* Helper for the VFIO IRQ count query */
+static int nvme_mdev_irq_counts(struct nvme_mdev_vctrl *vctrl,
+ unsigned int irq_type)
+{
+ switch (irq_type) {
+ case VFIO_PCI_INTX_IRQ_INDEX:
+ return 1;
+ case VFIO_PCI_MSIX_IRQ_INDEX:
+ return MAX_VIRTUAL_IRQS;
+ case VFIO_PCI_REQ_IRQ_INDEX:
+ return 1;
+ default:
+ return 0;
+ }
+}
+
+/* VFIO VFIO_IRQ_SET_ACTION_TRIGGER implementation */
+static int nvme_mdev_ioctl_set_irqs_trigger(struct nvme_mdev_vctrl *vctrl,
+ u32 flags,
+ unsigned int irq_type,
+ unsigned int start,
+ unsigned int count,
+ void *data)
+{
+ u32 data_type = flags & VFIO_IRQ_SET_DATA_TYPE_MASK;
+ u8 *bools = NULL;
+ unsigned int i;
+ int ret = -EINVAL;
+
+ /* Asked to disable the current interrupt mode*/
+ if (data_type == VFIO_IRQ_SET_DATA_NONE && count == 0) {
+ switch (irq_type) {
+ case VFIO_PCI_REQ_IRQ_INDEX:
+ nvme_mdev_irqs_set_unplug_trigger(vctrl, -1);
+ return 0;
+ case VFIO_PCI_INTX_IRQ_INDEX:
+ nvme_mdev_irqs_disable(vctrl, NVME_MDEV_IMODE_INTX);
+ return 0;
+ case VFIO_PCI_MSIX_IRQ_INDEX:
+ nvme_mdev_irqs_disable(vctrl, NVME_MDEV_IMODE_MSIX);
+ return 0;
+ default:
+ return -EINVAL;
+ }
+ }
+
+ if (start + count > nvme_mdev_irq_counts(vctrl, irq_type))
+ return -EINVAL;
+
+ switch (data_type) {
+ case VFIO_IRQ_SET_DATA_BOOL:
+ bools = (u8 *)data;
+ /*fallthrough*/
+ case VFIO_IRQ_SET_DATA_NONE:
+ if (irq_type == VFIO_PCI_REQ_IRQ_INDEX)
+ return -EINVAL;
+
+ for (i = 0 ; i < count ; i++) {
+ int index = start + i;
+
+ if (!bools || bools[i])
+ nvme_mdev_irq_trigger(vctrl, index);
+ }
+ return 0;
+
+ case VFIO_IRQ_SET_DATA_EVENTFD:
+ switch (irq_type) {
+ case VFIO_PCI_REQ_IRQ_INDEX:
+ return nvme_mdev_irqs_set_unplug_trigger(vctrl,
+ *(int32_t *)data);
+ case VFIO_PCI_INTX_IRQ_INDEX:
+ ret = nvme_mdev_irqs_enable(vctrl,
+ NVME_MDEV_IMODE_INTX);
+ break;
+ case VFIO_PCI_MSIX_IRQ_INDEX:
+ ret = nvme_mdev_irqs_enable(vctrl,
+ NVME_MDEV_IMODE_MSIX);
+ break;
+ default:
+ return -EINVAL;
+ }
+ if (ret)
+ return ret;
+
+ return nvme_mdev_irqs_set_triggers(vctrl, start,
+ count, (int32_t *)data);
+ default:
+ return -EINVAL;
+ }
+}
+
+/* VFIO_DEVICE_GET_INFO ioctl implementation */
+static int nvme_mdev_ioctl_get_info(struct nvme_mdev_vctrl *vctrl,
+ void __user *arg)
+{
+ struct vfio_device_info info;
+ unsigned int minsz = offsetofend(struct vfio_device_info, num_irqs);
+
+ if (copy_from_user(&info, (void __user *)arg, minsz))
+ return -EFAULT;
+ if (info.argsz < minsz)
+ return -EINVAL;
+
+ info.flags = VFIO_DEVICE_FLAGS_PCI | VFIO_DEVICE_FLAGS_RESET;
+ info.num_regions = VFIO_PCI_NUM_REGIONS;
+ info.num_irqs = VFIO_PCI_NUM_IRQS;
+
+ if (copy_to_user(arg, &info, minsz))
+ return -EFAULT;
+ return 0;
+}
+
+/* VFIO_DEVICE_GET_REGION_INFO ioctl implementation*/
+static int nvme_mdev_ioctl_get_reg_info(struct nvme_mdev_vctrl *vctrl,
+ void __user *arg)
+{
+ struct nvme_mdev_io_region *region;
+ struct mdev_nvme_vfio_region_info *info;
+ unsigned long minsz, outsz, maxsz;
+ int ret = 0;
+
+ minsz = offsetofend(struct vfio_region_info, offset);
+ maxsz = sizeof(struct mdev_nvme_vfio_region_info) +
+ sizeof(struct vfio_region_sparse_mmap_area);
+
+ info = kzalloc(maxsz, GFP_KERNEL);
+ if (!info)
+ return -ENOMEM;
+
+ if (copy_from_user(info, arg, minsz)) {
+ ret = -EFAULT;
+ goto out;
+ }
+
+ outsz = info->base.argsz;
+ if (outsz < minsz || outsz > maxsz) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ if (info->base.index >= VFIO_PCI_NUM_REGIONS) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ region = &vctrl->regions[info->base.index];
+ info->base.offset = REGION_TO_OFFSET(info->base.index);
+ info->base.argsz = maxsz;
+ info->base.size = region->size;
+
+ info->base.flags = VFIO_REGION_INFO_FLAG_READ |
+ VFIO_REGION_INFO_FLAG_WRITE;
+
+ if (region->mmap_ops) {
+ info->base.flags |= (VFIO_REGION_INFO_FLAG_MMAP |
+ VFIO_REGION_INFO_FLAG_CAPS);
+
+ info->base.cap_offset =
+ offsetof(struct mdev_nvme_vfio_region_info, mmap_cap);
+
+ info->mmap_cap.header.id = VFIO_REGION_INFO_CAP_SPARSE_MMAP;
+ info->mmap_cap.header.version = 1;
+ info->mmap_cap.header.next = 0;
+ info->mmap_cap.nr_areas = 1;
+ info->mmap_cap.areas[0].offset = region->mmap_area_start;
+ info->mmap_cap.areas[0].size = region->mmap_area_size;
+ }
+
+ if (copy_to_user(arg, info, outsz))
+ ret = -EFAULT;
+out:
+ kfree(info);
+ return ret;
+}
+
+/* VFIO_DEVICE_GET_IRQ_INFO ioctl implementation */
+static int nvme_mdev_ioctl_get_irq_info(struct nvme_mdev_vctrl *vctrl,
+ void __user *arg)
+{
+ struct vfio_irq_info info;
+ unsigned int minsz = offsetofend(struct vfio_irq_info, count);
+
+ if (copy_from_user(&info, arg, minsz))
+ return -EFAULT;
+ if (info.argsz < minsz)
+ return -EINVAL;
+
+ info.count = nvme_mdev_irq_counts(vctrl, info.index);
+ info.flags = VFIO_IRQ_INFO_EVENTFD;
+
+ if (info.index == VFIO_PCI_INTX_IRQ_INDEX)
+ info.flags |= VFIO_IRQ_INFO_MASKABLE | VFIO_IRQ_INFO_AUTOMASKED;
+
+ if (copy_to_user(arg, &info, minsz))
+ return -EFAULT;
+ return 0;
+}
+
+/* VFIO VFIO_DEVICE_SET_IRQS ioctl implementation */
+static int nvme_mdev_ioctl_set_irqs(struct nvme_mdev_vctrl *vctrl,
+ void __user *arg)
+{
+ int ret, irqcount;
+ struct vfio_irq_set hdr;
+ u8 *data = NULL;
+ size_t data_size = 0;
+ unsigned long minsz = offsetofend(struct vfio_irq_set, count);
+
+ if (copy_from_user(&hdr, arg, minsz))
+ return -EFAULT;
+
+ irqcount = nvme_mdev_irq_counts(vctrl, hdr.index);
+ ret = vfio_set_irqs_validate_and_prepare(&hdr,
+ irqcount,
+ VFIO_PCI_NUM_IRQS,
+ &data_size);
+ if (ret)
+ return ret;
+
+ if (data_size) {
+ data = memdup_user((arg + minsz), data_size);
+ if (IS_ERR(data))
+ return PTR_ERR(data);
+ }
+
+ ret = -ENOTTY;
+ switch (hdr.index) {
+ case VFIO_PCI_INTX_IRQ_INDEX:
+ case VFIO_PCI_MSIX_IRQ_INDEX:
+ case VFIO_PCI_REQ_IRQ_INDEX:
+ switch (hdr.flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
+ case VFIO_IRQ_SET_ACTION_MASK:
+ case VFIO_IRQ_SET_ACTION_UNMASK:
+ // pretend to support this (even with eventfd)
+ ret = hdr.index == VFIO_PCI_INTX_IRQ_INDEX ?
+ 0 : -EINVAL;
+ break;
+ case VFIO_IRQ_SET_ACTION_TRIGGER:
+ ret = nvme_mdev_ioctl_set_irqs_trigger(vctrl, hdr.flags,
+ hdr.index,
+ hdr.start,
+ hdr.count,
+ data);
+ break;
+ }
+ break;
+ }
+
+ kfree(data);
+ return ret;
+}
+
+/* ioctl() implementation */
+static long nvme_mdev_ops_ioctl(struct mdev_device *mdev, unsigned int cmd,
+ unsigned long arg)
+{
+ struct nvme_mdev_vctrl *vctrl = mdev_get_drvdata(mdev);
+
+ if (!vctrl)
+ return -ENODEV;
+
+ switch (cmd) {
+ case VFIO_DEVICE_GET_INFO:
+ return nvme_mdev_ioctl_get_info(vctrl, (void __user *)arg);
+ case VFIO_DEVICE_GET_REGION_INFO:
+ return nvme_mdev_ioctl_get_reg_info(vctrl, (void __user *)arg);
+ case VFIO_DEVICE_GET_IRQ_INFO:
+ return nvme_mdev_ioctl_get_irq_info(vctrl, (void __user *)arg);
+ case VFIO_DEVICE_SET_IRQS:
+ return nvme_mdev_ioctl_set_irqs(vctrl, (void __user *)arg);
+ case VFIO_DEVICE_RESET:
+ nvme_mdev_vctrl_reset(vctrl);
+ return 0;
+ default:
+ return -ENOTTY;
+ }
+}
+
+/* mmap() implementation (doorbell area) */
+static int nvme_mdev_ops_mmap(struct mdev_device *mdev,
+ struct vm_area_struct *vma)
+{
+ struct nvme_mdev_vctrl *vctrl = mdev_get_drvdata(mdev);
+ int index = OFFSET_TO_REGION((u64)vma->vm_pgoff << PAGE_SHIFT);
+ unsigned long size, start;
+
+ if (!vctrl)
+ return -EFAULT;
+
+ if (index >= VFIO_PCI_NUM_REGIONS || !vctrl->regions[index].mmap_ops)
+ return -EINVAL;
+
+ if (vma->vm_end < vma->vm_start)
+ return -EINVAL;
+
+ size = vma->vm_end - vma->vm_start;
+ start = vma->vm_pgoff << PAGE_SHIFT;
+
+ if (start < vctrl->regions[index].mmap_area_start)
+ return -EINVAL;
+ if (size > vctrl->regions[index].mmap_area_size)
+ return -EINVAL;
+
+ if ((vma->vm_flags & VM_SHARED) == 0)
+ return -EINVAL;
+
+ vma->vm_ops = vctrl->regions[index].mmap_ops;
+ vma->vm_private_data = vctrl;
+ return 0;
+}
+
+/* Request removal of the device*/
+static void nvme_mdev_ops_request(struct mdev_device *mdev, unsigned int count)
+{
+ struct nvme_mdev_vctrl *vctrl = mdev_get_drvdata(mdev);
+
+ if (vctrl)
+ nvme_mdev_irq_raise_unplug_event(vctrl, count);
+}
+
+/* Add a new namespace given a host NS id and partition ID (e.g. n1p2 or n1) */
+static ssize_t add_namespace_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct nvme_mdev_vctrl *vctrl = dev_to_vctrl(dev);
+ int ret;
+ unsigned long partno = 0, nsid;
+ char *buf_copy, *token, *tmp;
+
+ if (!vctrl)
+ return -ENODEV;
+
+ buf_copy = kstrdup(buf, GFP_KERNEL);
+ if (!buf_copy)
+ return -ENOMEM;
+
+ tmp = buf_copy;
+ if (tmp[0] != 'n') {
+ ret = -EINVAL;
+ goto out;
+ }
+ tmp++;
+
+ // read namespace ID (mandatory)
+ token = strsep(&tmp, "p");
+ if (!token) {
+ ret = -EINVAL;
+ goto out;
+ }
+ ret = kstrtoul(token, 10, &nsid);
+ if (ret)
+ goto out;
+
+ // read partition ID (optional)
+ if (tmp) {
+ ret = kstrtoul(tmp, 10, &partno);
+ if (ret)
+ goto out;
+ }
+
+ // create the user namespace
+ ret = nvme_mdev_vns_open(vctrl, nsid, partno);
+ if (ret)
+ goto out;
+ ret = count;
+out:
+ kfree(buf_copy);
+ return ret;
+}
+static DEVICE_ATTR_WO(add_namespace);
+
+/* Remove a user namespace */
+static ssize_t remove_namespace_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ unsigned long user_nsid;
+ int ret;
+ struct nvme_mdev_vctrl *vctrl = dev_to_vctrl(dev);
+
+ if (!vctrl)
+ return -ENODEV;
+
+ ret = kstrtoul(buf, 10, &user_nsid);
+ if (ret)
+ return ret;
+
+ ret = nvme_mdev_vns_destroy(vctrl, user_nsid);
+ if (ret)
+ return ret;
+ return count;
+}
+static DEVICE_ATTR_WO(remove_namespace);
+
+/* Show list of user namespaces */
+static ssize_t namespaces_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvme_mdev_vctrl *vctrl = dev_to_vctrl(dev);
+
+ if (!vctrl)
+ return -ENODEV;
+ return nvme_mdev_vns_print_description(vctrl, buf, PAGE_SIZE - 1);
+}
+static DEVICE_ATTR_RO(namespaces);
+
+/* change the cpu binding of the IO threads*/
+static ssize_t iothread_cpu_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ unsigned long val;
+ int ret;
+ struct nvme_mdev_vctrl *vctrl = dev_to_vctrl(dev);
+
+ if (!vctrl)
+ return -ENODEV;
+ ret = kstrtoul(buf, 10, &val);
+ if (ret)
+ return ret;
+ nvme_mdev_vctrl_bind_iothread(vctrl, val);
+ return count;
+}
+
+/* show the cpu binding of the IO threads */
+static ssize_t
+iothread_cpu_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct nvme_mdev_vctrl *vctrl = dev_to_vctrl(dev);
+
+ if (!vctrl)
+ return -ENODEV;
+ return sprintf(buf, "%d\n", vctrl->iothread_cpu);
+}
+static DEVICE_ATTR_RW(iothread_cpu);
+
+/* enable/disable the shadow doorbell NVMe extension */
+static ssize_t shadow_doorbell_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ bool val;
+ int ret;
+ struct nvme_mdev_vctrl *vctrl = dev_to_vctrl(dev);
+
+ if (!vctrl)
+ return -ENODEV;
+ ret = kstrtobool(buf, &val);
+ if (ret)
+ return ret;
+ ret = nvme_mdev_vctrl_set_shadow_doorbell_supported(vctrl, val);
+ if (ret)
+ return ret;
+ return count;
+}
+
+/* show whether the shadow doorbell is supported */
+static ssize_t shadow_doorbell_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvme_mdev_vctrl *vctrl = dev_to_vctrl(dev);
+
+ if (!vctrl)
+ return -ENODEV;
+
+ return sprintf(buf, "%d\n", vctrl->mmio.shadow_db_supported ? 1 : 0);
+}
+static DEVICE_ATTR_RW(shadow_doorbell);
+
+static struct attribute *nvme_mdev_dev_ns_attributes[] = {
+ &dev_attr_add_namespace.attr,
+ &dev_attr_remove_namespace.attr,
+ &dev_attr_namespaces.attr,
+ NULL
+};
+
+static struct attribute *nvme_mdev_dev_settings_attributes[] = {
+ &dev_attr_iothread_cpu.attr,
+ &dev_attr_shadow_doorbell.attr,
+ NULL
+};
+
+static const struct attribute_group nvme_mdev_ns_attr_group = {
+ .name = "namespaces",
+ .attrs = nvme_mdev_dev_ns_attributes,
+};
+
+static const struct attribute_group nvme_mdev_setting_attr_group = {
+ .name = "settings",
+ .attrs = nvme_mdev_dev_settings_attributes,
+};
+
+static const struct attribute_group *nvme_mdev_dev_attribute_groups[] = {
+ &nvme_mdev_ns_attr_group,
+ &nvme_mdev_setting_attr_group,
+ NULL,
+};
+
+struct mdev_parent_ops mdev_fops = {
+ .owner = THIS_MODULE,
+ .create = nvme_mdev_ops_create,
+ .remove = nvme_mdev_ops_remove,
+ .open = nvme_mdev_ops_open,
+ .release = nvme_mdev_ops_release,
+ .read = nvme_mdev_ops_read,
+ .write = nvme_mdev_ops_write,
+ .mmap = nvme_mdev_ops_mmap,
+ .ioctl = nvme_mdev_ops_ioctl,
+ .request = nvme_mdev_ops_request,
+ .mdev_attr_groups = nvme_mdev_dev_attribute_groups,
+ .dev_attr_groups = NULL,
+};
+
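The add_namespace store above parses a "nsid[,partno]" string with strsep() and kstrtoul(). A self-contained userspace sketch of the same parsing rules (parse_ns_spec is an illustrative name, not part of the driver):

```c
#include <assert.h>
#include <stdlib.h>

/* Parse "nsid[,partno]" the way add_namespace_store() does:
 * the namespace ID is mandatory, the partition ID optional
 * (defaults to 0, meaning the whole namespace).
 * Returns 0 on success, -1 on a parse error.
 */
static int parse_ns_spec(const char *spec, unsigned long *nsid,
			 unsigned long *partno)
{
	char *end;

	*nsid = strtoul(spec, &end, 10);
	if (end == spec || (*end && *end != ','))
		return -1;

	*partno = 0;
	if (*end == ',') {
		const char *p = end + 1;

		*partno = strtoul(p, &end, 10);
		if (end == p || *end)
			return -1;
	}
	return 0;
}
```

In the driver the same two values are then handed to nvme_mdev_vns_open() to attach the partition to the virtual controller.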
diff --git a/drivers/nvme/mdev/io.c b/drivers/nvme/mdev/io.c
new file mode 100644
index 000000000000..a731196d0365
--- /dev/null
+++ b/drivers/nvme/mdev/io.c
@@ -0,0 +1,563 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * NVMe IO command translation and polling IO thread
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/kernel.h>
+#include <linux/kthread.h>
+#include <linux/slab.h>
+#include <linux/nvme.h>
+#include <linux/timekeeping.h>
+#include <linux/ktime.h>
+#include "priv.h"
+
+struct io_ctx {
+ struct nvme_mdev_hctrl *hctrl;
+ struct nvme_mdev_vctrl *vctrl;
+
+ const struct nvme_command *in;
+ struct nvme_command out;
+ struct nvme_mdev_vns *ns;
+ struct nvme_ext_data_iter udatait;
+ struct nvme_ext_data_iter *kdatait;
+
+ ktime_t last_io_t;
+ ktime_t last_admin_poll_time;
+ unsigned int idle_timeout_ms;
+ unsigned int admin_poll_rate_ms;
+ unsigned int arb_burst;
+};
+
+/* Handle read/write command.*/
+static int nvme_mdev_io_translate_rw(struct io_ctx *ctx)
+{
+ int ret;
+ const struct nvme_rw_command *in = &ctx->in->rw;
+
+ u64 slba = le64_to_cpu(in->slba);
+ u64 length = le16_to_cpu(in->length) + 1;
+ u16 control = le16_to_cpu(in->control);
+
+ _DBG(ctx->vctrl, "IOQ: READ/WRITE\n");
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_MPTR | RSRV_DW14_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16, 0b1100000000111100))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (in->opcode == nvme_cmd_write && ctx->ns->readonly)
+ return DNR(NVME_SC_READ_ONLY);
+
+ if (!check_range(slba, length, ctx->ns->ns_size))
+ return DNR(NVME_SC_LBA_RANGE);
+
+ ctx->out.rw.slba = cpu_to_le64(slba + ctx->ns->host_lba_offset);
+ ctx->out.rw.length = in->length;
+
+ ret = nvme_mdev_udata_iter_set_dptr(&ctx->udatait, &in->dptr,
+ length << ctx->ns->blksize_shift);
+ if (ret)
+ return nvme_mdev_translate_error(ret);
+
+ ctx->kdatait = &ctx->udatait;
+ if (control & ~(NVME_RW_LR | NVME_RW_FUA))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ ctx->out.rw.control = in->control;
+ return -1;
+}
+
+/* Handle flush command */
+static int nvme_mdev_io_translate_flush(struct io_ctx *ctx)
+{
+ ctx->kdatait = NULL;
+
+ _DBG(ctx->vctrl, "IOQ: FLUSH\n");
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW10_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (ctx->ns->readonly)
+ return DNR(NVME_SC_READ_ONLY);
+
+ return -1;
+}
+
+/* Handle write zeros command */
+static int nvme_mdev_io_translate_write_zeros(struct io_ctx *ctx)
+{
+ const struct nvme_write_zeroes_cmd *in = &ctx->in->write_zeroes;
+ u64 slba = le64_to_cpu(in->slba);
+ u64 length = le16_to_cpu(in->length) + 1;
+ u16 control = le16_to_cpu(in->control);
+
+ _DBG(ctx->vctrl, "IOQ: WRITE_ZEROS\n");
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW13_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (!nvme_mdev_hctrl_hq_check_op(ctx->hctrl, in->opcode))
+ return DNR(NVME_SC_INVALID_OPCODE);
+
+ if (ctx->ns->readonly)
+ return DNR(NVME_SC_READ_ONLY);
+ ctx->kdatait = NULL;
+
+ if (!check_range(slba, length, ctx->ns->ns_size))
+ return DNR(NVME_SC_LBA_RANGE);
+
+ ctx->out.write_zeroes.slba =
+ cpu_to_le64(slba + ctx->ns->host_lba_offset);
+ ctx->out.write_zeroes.length = in->length;
+
+ if (control & ~(NVME_RW_LR | NVME_RW_FUA | NVME_WZ_DEAC))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ ctx->out.write_zeroes.control = in->control;
+ return -1;
+}
+
+/* Handle dataset management command */
+static int nvme_mdev_io_translate_dsm(struct io_ctx *ctx)
+{
+ unsigned int size, i, nr;
+ int ret;
+ const struct nvme_dsm_cmd *in = &ctx->in->dsm;
+ struct nvme_dsm_range *data_ptr;
+
+ _DBG(ctx->vctrl, "IOQ: DSM_MANAGEMENT\n");
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_MPTR | RSRV_DW12_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (le32_to_cpu(in->nr) & 0xFFFFFF00)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (!nvme_mdev_hctrl_hq_check_op(ctx->hctrl, in->opcode))
+ return DNR(NVME_SC_INVALID_OPCODE);
+
+ if (ctx->ns->readonly)
+ return DNR(NVME_SC_READ_ONLY);
+
+ nr = le32_to_cpu(in->nr) + 1;
+ size = nr * sizeof(struct nvme_dsm_range);
+
+ ctx->out.dsm.nr = in->nr;
+ ret = nvme_mdev_udata_iter_set_dptr(&ctx->udatait, &in->dptr, size);
+ if (ret)
+ goto error;
+
+ ctx->kdatait = nvme_mdev_kdata_iter_alloc(&ctx->vctrl->viommu, size);
+ if (!ctx->kdatait)
+ return NVME_SC_INTERNAL;
+
+ _DBG(ctx->vctrl, "IOQ: DSM_MANAGEMENT: NR=%d\n", nr);
+
+ ret = nvme_mdev_read_from_udata(ctx->kdatait->kmem.data, &ctx->udatait,
+ size);
+ if (ret)
+ goto error2;
+
+ data_ptr = (struct nvme_dsm_range *)ctx->kdatait->kmem.data;
+
+ for (i = 0 ; i < nr; i++) {
+ u64 slba = le64_to_cpu(data_ptr[i].slba);
+ /* unlike the RW command, NLB here is not a zero-based value */
+ u32 nlb = le32_to_cpu(data_ptr[i].nlb);
+
+ if (!check_range(slba, nlb, ctx->ns->ns_size)) {
+ ctx->kdatait->release(ctx->kdatait);
+ return DNR(NVME_SC_LBA_RANGE);
+ }
+
+ _DBG(ctx->vctrl, "IOQ: DSM_MANAGEMENT: RANGE 0x%llx-0x%x\n",
+ slba, nlb);
+
+ data_ptr[i].slba = cpu_to_le64(slba + ctx->ns->host_lba_offset);
+ }
+
+ ctx->out.dsm.attributes = in->attributes;
+ return -1;
+error2:
+ ctx->kdatait->release(ctx->kdatait);
+error:
+ return nvme_mdev_translate_error(ret);
+}
+
+/* Process one new command in the io queue*/
+static int nvme_mdev_io_translate_cmd(struct io_ctx *ctx)
+{
+ memset(&ctx->out, 0, sizeof(ctx->out));
+ /* translate opcode */
+ ctx->out.common.opcode = ctx->in->common.opcode;
+
+ /* check flags */
+ if (ctx->in->common.flags != 0)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* namespace*/
+ ctx->ns = nvme_mdev_vns_from_vnsid(ctx->vctrl,
+ le32_to_cpu(ctx->in->rw.nsid));
+ if (!ctx->ns) {
+ _DBG(ctx->vctrl, "IOQ: invalid NSID\n");
+ return DNR(NVME_SC_INVALID_NS);
+ }
+
+ if (!ctx->ns->readonly && bdev_read_only(ctx->ns->host_part))
+ ctx->ns->readonly = true;
+
+ ctx->out.common.nsid = cpu_to_le32(ctx->ns->host_nsid);
+
+ switch (ctx->in->common.opcode) {
+ case nvme_cmd_flush:
+ return nvme_mdev_io_translate_flush(ctx);
+ case nvme_cmd_read:
+ case nvme_cmd_write:
+ return nvme_mdev_io_translate_rw(ctx);
+ case nvme_cmd_write_zeroes:
+ return nvme_mdev_io_translate_write_zeros(ctx);
+ case nvme_cmd_dsm:
+ return nvme_mdev_io_translate_dsm(ctx);
+ default:
+ return DNR(NVME_SC_INVALID_OPCODE);
+ }
+}
+
+static bool nvme_mdev_io_process_sq(struct io_ctx *ctx, u16 sqid)
+{
+ struct nvme_vsq *vsq = &ctx->vctrl->vsqs[sqid];
+ u16 ucid;
+ int ret;
+
+ /* If the host queue is full, we can't process a command,
+ * as it would likely result in a passthrough submission
+ */
+ if (!nvme_mdev_hctrl_hq_can_submit(ctx->hctrl, vsq->hsq))
+ return false;
+
+ /* read the command */
+ ctx->in = nvme_mdev_vsq_get_cmd(ctx->vctrl, vsq);
+ if (!ctx->in)
+ return false;
+ ucid = le16_to_cpu(ctx->in->common.command_id);
+
+ /* translate the command */
+ ret = nvme_mdev_io_translate_cmd(ctx);
+ if (ret != -1) {
+ _DBG(ctx->vctrl,
+ "IOQ: QID %d CID %d FAILED: status 0x%x (translate)\n",
+ sqid, ucid, ret);
+ nvme_mdev_vsq_cmd_done_io(ctx->vctrl, sqid, ucid, ret);
+ return true;
+ }
+
+ /*passthrough*/
+ ret = nvme_mdev_hctrl_hq_submit(ctx->hctrl,
+ vsq->hsq,
+ (((u32)vsq->qid) << 16) | ((u32)ucid),
+ &ctx->out,
+ ctx->kdatait);
+ if (ret) {
+ ret = nvme_mdev_translate_error(ret);
+
+ _DBG(ctx->vctrl,
+ "IOQ: QID %d CID %d FAILED: status 0x%x (host submit)\n",
+ sqid, ucid, ret);
+
+ nvme_mdev_vsq_cmd_done_io(ctx->vctrl, sqid, ucid, ret);
+ }
+ return true;
+}
+
+/* process host replies to the passed through commands */
+static int nvme_mdev_io_process_hwq(struct io_ctx *ctx, u16 hwq)
+{
+ int n, i;
+ struct nvme_ext_cmd_result res[16];
+
+ /* process the completions from the hardware */
+ n = nvme_mdev_hctrl_hq_poll(ctx->hctrl, hwq, res, 16);
+ if (n == -1)
+ return -1;
+
+ for (i = 0; i < n; i++) {
+ u16 qid = res[i].tag >> 16;
+ u16 cid = res[i].tag & 0xFFFF;
+ u16 status = res[i].status;
+
+ if (status != 0)
+ _DBG(ctx->vctrl,
+ "IOQ: QID %d CID %d FAILED: status 0x%x (host response)\n",
+ qid, cid, status);
+
+ nvme_mdev_vsq_cmd_done_io(ctx->vctrl, qid, cid, status);
+ }
+ return n;
+}
+
+/* Check if we need to read a command from the admin queue */
+static bool nvme_mdev_adm_needs_processing(struct io_ctx *ctx)
+{
+ if (!timeout(ctx->last_admin_poll_time,
+ ctx->vctrl->now, ctx->admin_poll_rate_ms))
+ return false;
+
+ if (nvme_mdev_vsq_has_data(ctx->vctrl, &ctx->vctrl->vsqs[0]))
+ return true;
+
+ ctx->last_admin_poll_time = ctx->vctrl->now;
+ return false;
+}
+
+/* do polling till one of events stops it */
+static void nvme_mdev_io_maintask(struct io_ctx *ctx)
+{
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+ u16 i, cqid, sqid, hsqcnt;
+ u16 hsqs[MAX_HOST_QUEUES];
+ bool idle = false;
+
+ hsqcnt = nvme_mdev_vctrl_hqs_list(vctrl, hsqs);
+ ctx->arb_burst = 1 << ctx->vctrl->arb_burst_shift;
+
+ /* can't stop polling when shadow db not enabled */
+ ctx->idle_timeout_ms = vctrl->mmio.shadow_db_en ? poll_timeout_ms : 0;
+ ctx->admin_poll_rate_ms = admin_poll_rate_ms;
+
+ vctrl->now = ktime_get();
+ ctx->last_admin_poll_time = vctrl->now;
+ ctx->last_io_t = vctrl->now;
+
+ /* main loop */
+ while (!kthread_should_park()) {
+ vctrl->now = ktime_get();
+
+ /* check if we have to exit to support admin polling */
+ if (!vctrl->mmio.shadow_db_supported)
+ if (nvme_mdev_adm_needs_processing(ctx))
+ break;
+
+ /* process the submission queues*/
+ sqid = 1;
+ for_each_set_bit_from(sqid, vctrl->vsq_en, MAX_VIRTUAL_QUEUES)
+ for (i = 0 ; i < ctx->arb_burst ; i++)
+ if (!nvme_mdev_io_process_sq(ctx, sqid))
+ break;
+
+ /* process the completions from the guest*/
+ cqid = 1;
+ for_each_set_bit_from(cqid, vctrl->vcq_en, MAX_VIRTUAL_QUEUES)
+ nvme_mdev_vcq_process(vctrl, cqid, true);
+
+ /* process the completions from the hardware*/
+ for (i = 0 ; i < hsqcnt ; i++)
+ if (nvme_mdev_io_process_hwq(ctx, hsqs[i]) > 0)
+ ctx->last_io_t = vctrl->now;
+
+ /* Check if we need to stop polling*/
+ if (ctx->idle_timeout_ms) {
+ if (timeout(ctx->last_io_t,
+ vctrl->now, ctx->idle_timeout_ms)) {
+ idle = true;
+ break;
+ }
+ }
+ cond_resched();
+ }
+
+ /* Drain the host IO */
+ for (;;) {
+ bool pending_io = false;
+
+ vctrl->now = ktime_get();
+
+ if (nvme_mdev_vctrl_is_dead(vctrl) || ctx->hctrl->removing) {
+ idle = false;
+ break;
+ }
+
+ for (i = 0; i < hsqcnt; i++) {
+ int n = nvme_mdev_io_process_hwq(ctx, hsqs[i]);
+
+ if (n != -1)
+ pending_io = true;
+ if (n > 0)
+ ctx->last_io_t = vctrl->now;
+ }
+
+ if (!pending_io)
+ break;
+
+ cond_resched();
+
+ if (!timeout(ctx->last_io_t, vctrl->now, io_timeout_ms))
+ continue;
+
+ _WARN(ctx->vctrl, "IO: skipping flush - host IO timeout\n");
+ idle = false;
+ break;
+ }
+
+ /* Drain all the pending completion interrupts to the guest*/
+ cqid = 1;
+ for_each_set_bit_from(cqid, vctrl->vcq_en, MAX_VIRTUAL_QUEUES)
+ if (nvme_mdev_vcq_flush(vctrl, cqid))
+ idle = false;
+
+ /* Park IO thread if IO is truly idle*/
+ if (idle) {
+ /* don't bother going idle if someone holds the vctrl
+ * lock. It might try to park us, and thus
+ * cause a deadlock
+ */
+ if (!mutex_trylock(&vctrl->lock))
+ return;
+
+ sqid = 1;
+ for_each_set_bit_from(sqid, vctrl->vsq_en, MAX_VIRTUAL_QUEUES)
+ if (!nvme_mdev_vsq_suspend_io(vctrl, sqid)) {
+ idle = false;
+ break;
+ }
+
+ if (idle) {
+ _DBG(ctx->vctrl, "IO: self-parking\n");
+ vctrl->io_idle = true;
+ nvme_mdev_io_pause(vctrl);
+ }
+
+ mutex_unlock(&vctrl->lock);
+ }
+
+ /* Admin poll for cases when shadow doorbell is not supported */
+ if (!vctrl->mmio.shadow_db_supported) {
+ if (mutex_trylock(&vctrl->lock)) {
+ nvme_mdev_vcq_process(vctrl, 0, false);
+ nvme_mdev_adm_process_sq(ctx->vctrl);
+ ctx->last_admin_poll_time = vctrl->now;
+ mutex_unlock(&ctx->vctrl->lock);
+ }
+ }
+}
+
+/* the main IO thread */
+static int nvme_mdev_io_polling_thread(void *data)
+{
+ struct io_ctx ctx;
+
+ if (kthread_should_stop())
+ return 0;
+
+ memset(&ctx, 0, sizeof(struct io_ctx));
+ ctx.vctrl = (struct nvme_mdev_vctrl *)data;
+ ctx.hctrl = ctx.vctrl->hctrl;
+ nvme_mdev_udata_iter_setup(&ctx.vctrl->viommu, &ctx.udatait);
+
+ _DBG(ctx.vctrl, "IO: iothread started\n");
+
+ for (;;) {
+ if (kthread_should_park()) {
+ _DBG(ctx.vctrl, "IO: iothread parked\n");
+ kthread_parkme();
+ }
+
+ if (kthread_should_stop())
+ break;
+
+ nvme_mdev_io_maintask(&ctx);
+ }
+
+ _DBG(ctx.vctrl, "IO: iothread stopped\n");
+ return 0;
+}
+
+/* Kick the IO thread into running state*/
+void nvme_mdev_io_resume(struct nvme_mdev_vctrl *vctrl)
+{
+ lockdep_assert_held(&vctrl->lock);
+
+ if (!vctrl->iothread || !vctrl->iothread_parked)
+ return;
+ if (vctrl->io_idle || vctrl->vctrl_paused)
+ return;
+
+ vctrl->iothread_parked = false;
+ /* has memory barrier*/
+ kthread_unpark(vctrl->iothread);
+}
+
+/* Pause the IO thread */
+void nvme_mdev_io_pause(struct nvme_mdev_vctrl *vctrl)
+{
+ lockdep_assert_held(&vctrl->lock);
+
+ if (!vctrl->iothread || vctrl->iothread_parked)
+ return;
+
+ vctrl->iothread_parked = true;
+ kthread_park(vctrl->iothread);
+}
+
+/* setup the main IO thread */
+int nvme_mdev_io_create(struct nvme_mdev_vctrl *vctrl, unsigned int cpu)
+{
+ /*TODOLATER: IO: Better thread name*/
+ char name[TASK_COMM_LEN];
+
+ _DBG(vctrl, "IO: creating the polling iothread\n");
+
+ if (WARN_ON(vctrl->iothread))
+ return -EINVAL;
+
+ snprintf(name, sizeof(name), "nvme%d_poll_io", vctrl->hctrl->id);
+
+ vctrl->iothread_cpu = cpu;
+ vctrl->iothread_parked = false;
+ vctrl->io_idle = true;
+
+ vctrl->iothread = kthread_create_on_node(nvme_mdev_io_polling_thread,
+ vctrl,
+ vctrl->hctrl->node,
+ name);
+ if (IS_ERR(vctrl->iothread)) {
+ int err = PTR_ERR(vctrl->iothread);
+
+ vctrl->iothread = NULL;
+ return err;
+ }
+
+ kthread_bind(vctrl->iothread, cpu);
+
+ if (vctrl->io_idle) {
+ vctrl->iothread_parked = true;
+ kthread_park(vctrl->iothread);
+ return 0;
+ }
+
+ wake_up_process(vctrl->iothread);
+ return 0;
+}
+
+/* End the main IO thread */
+void nvme_mdev_io_free(struct nvme_mdev_vctrl *vctrl)
+{
+ int ret;
+
+ _DBG(vctrl, "IO: destroying the polling iothread\n");
+
+ lockdep_assert_held(&vctrl->lock);
+ nvme_mdev_io_pause(vctrl);
+ ret = kthread_stop(vctrl->iothread);
+ WARN_ON(ret);
+ vctrl->iothread = NULL;
+}
+
+void nvme_mdev_assert_io_not_running(struct nvme_mdev_vctrl *vctrl)
+{
+ if (WARN_ON(vctrl->iothread && !vctrl->iothread_parked))
+ nvme_mdev_io_pause(vctrl);
+}
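io.c round-trips a 32-bit tag through the host queue: nvme_mdev_io_process_sq() packs the virtual SQ ID into the high 16 bits and the guest command ID into the low 16 bits before nvme_mdev_hctrl_hq_submit(), and nvme_mdev_io_process_hwq() unpacks the completion the same way. A standalone sketch of that encoding (function names here are illustrative, not driver API):

```c
#include <assert.h>
#include <stdint.h>

/* Pack the virtual SQ ID and guest command ID into one host tag,
 * mirroring (((u32)vsq->qid) << 16) | ((u32)ucid) in the driver.
 */
static uint32_t pack_tag(uint16_t qid, uint16_t cid)
{
	return ((uint32_t)qid << 16) | (uint32_t)cid;
}

/* Recover both IDs from a completed host tag, mirroring
 * res[i].tag >> 16 and res[i].tag & 0xFFFF in the completion path.
 */
static void unpack_tag(uint32_t tag, uint16_t *qid, uint16_t *cid)
{
	*qid = tag >> 16;
	*cid = tag & 0xFFFF;
}
```

This keeps the passthrough path stateless: the completion alone carries enough information to route the status back to the right virtual queue and command.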
diff --git a/drivers/nvme/mdev/irq.c b/drivers/nvme/mdev/irq.c
new file mode 100644
index 000000000000..5809cdb4d84c
--- /dev/null
+++ b/drivers/nvme/mdev/irq.c
@@ -0,0 +1,264 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * NVMe virtual controller IRQ implementation (MSIx and INTx)
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include "priv.h"
+
+/* Setup the interrupt subsystem */
+void nvme_mdev_irqs_setup(struct nvme_mdev_vctrl *vctrl)
+{
+ vctrl->irqs.mode = NVME_MDEV_IMODE_NONE;
+ vctrl->irqs.irq_coalesc_max = 1;
+}
+
+/* Enable INTx or MSIx interrupts */
+static int __nvme_mdev_irqs_enable(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_mdev_irq_mode mode)
+{
+ if (vctrl->irqs.mode == mode)
+ return 0;
+ if (vctrl->irqs.mode != NVME_MDEV_IMODE_NONE)
+ return -EBUSY;
+
+ if (mode == NVME_MDEV_IMODE_INTX)
+ _DBG(vctrl, "IRQ: enable INTx interrupts\n");
+ else if (mode == NVME_MDEV_IMODE_MSIX)
+ _DBG(vctrl, "IRQ: enable MSIX interrupts\n");
+ else
+ WARN_ON(1);
+
+ nvme_mdev_io_pause(vctrl);
+ vctrl->irqs.mode = mode;
+ nvme_mdev_io_resume(vctrl);
+ return 0;
+}
+
+int nvme_mdev_irqs_enable(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_mdev_irq_mode mode)
+{
+ int retval = 0;
+
+ mutex_lock(&vctrl->lock);
+ retval = __nvme_mdev_irqs_enable(vctrl, mode);
+ mutex_unlock(&vctrl->lock);
+ return retval;
+}
+
+/* Disable INTx or MSIx interrupts */
+static void __nvme_mdev_irqs_disable(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_mdev_irq_mode mode)
+{
+ unsigned int i;
+
+ if (vctrl->irqs.mode == NVME_MDEV_IMODE_NONE)
+ return;
+ if (vctrl->irqs.mode != mode)
+ return;
+
+ if (vctrl->irqs.mode == NVME_MDEV_IMODE_INTX)
+ _DBG(vctrl, "IRQ: disable INTx interrupts\n");
+ else if (vctrl->irqs.mode == NVME_MDEV_IMODE_MSIX)
+ _DBG(vctrl, "IRQ: disable MSIX interrupts\n");
+ else
+ WARN_ON(1);
+
+ nvme_mdev_io_pause(vctrl);
+
+ for (i = 0; i < MAX_VIRTUAL_IRQS; i++) {
+ struct nvme_mdev_user_irq *vec = &vctrl->irqs.vecs[i];
+
+ if (vec->trigger) {
+ eventfd_ctx_put(vec->trigger);
+ vec->trigger = NULL;
+ }
+ vec->irq_pending_cnt = 0;
+ vec->irq_time = 0;
+ }
+ vctrl->irqs.mode = NVME_MDEV_IMODE_NONE;
+ nvme_mdev_io_resume(vctrl);
+}
+
+void nvme_mdev_irqs_disable(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_mdev_irq_mode mode)
+{
+ mutex_lock(&vctrl->lock);
+ __nvme_mdev_irqs_disable(vctrl, mode);
+ mutex_unlock(&vctrl->lock);
+}
+
+/* Set eventfd triggers for INTx or MSIx interrupts */
+int nvme_mdev_irqs_set_triggers(struct nvme_mdev_vctrl *vctrl,
+ int start, int count, int32_t *fds)
+{
+ unsigned int i;
+
+ mutex_lock(&vctrl->lock);
+ nvme_mdev_io_pause(vctrl);
+
+ for (i = 0; i < count; i++) {
+ int irqindex = start + i;
+ struct eventfd_ctx *trigger;
+ struct nvme_mdev_user_irq *irq = &vctrl->irqs.vecs[irqindex];
+
+ if (irq->trigger) {
+ eventfd_ctx_put(irq->trigger);
+ irq->trigger = NULL;
+ }
+
+ if (fds[i] < 0)
+ continue;
+
+ trigger = eventfd_ctx_fdget(fds[i]);
+ if (IS_ERR(trigger)) {
+ nvme_mdev_io_resume(vctrl);
+ mutex_unlock(&vctrl->lock);
+ return PTR_ERR(trigger);
+ }
+
+ irq->trigger = trigger;
+ }
+ nvme_mdev_io_resume(vctrl);
+ mutex_unlock(&vctrl->lock);
+ return 0;
+}
+
+/* Set eventfd trigger for unplug interrupt */
+static int __nvme_mdev_irqs_set_unplug_trigger(struct nvme_mdev_vctrl *vctrl,
+ int32_t fd)
+{
+ struct eventfd_ctx *trigger;
+
+ if (vctrl->irqs.request_trigger) {
+ _DBG(vctrl, "IRQ: clear hotplug trigger\n");
+ eventfd_ctx_put(vctrl->irqs.request_trigger);
+ vctrl->irqs.request_trigger = NULL;
+ }
+
+ if (fd < 0)
+ return 0;
+
+ _DBG(vctrl, "IRQ: set hotplug trigger\n");
+
+ trigger = eventfd_ctx_fdget(fd);
+ if (IS_ERR(trigger))
+ return PTR_ERR(trigger);
+
+ vctrl->irqs.request_trigger = trigger;
+ return 0;
+}
+
+int nvme_mdev_irqs_set_unplug_trigger(struct nvme_mdev_vctrl *vctrl,
+ int32_t fd)
+{
+ int retval;
+
+ mutex_lock(&vctrl->lock);
+ retval = __nvme_mdev_irqs_set_unplug_trigger(vctrl, fd);
+ mutex_unlock(&vctrl->lock);
+ return retval;
+}
+
+/* Reset the interrupts subsystem */
+void nvme_mdev_irqs_reset(struct nvme_mdev_vctrl *vctrl)
+{
+ int i;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ if (vctrl->irqs.mode != NVME_MDEV_IMODE_NONE)
+ __nvme_mdev_irqs_disable(vctrl, vctrl->irqs.mode);
+
+ __nvme_mdev_irqs_set_unplug_trigger(vctrl, -1);
+
+ for (i = 0; i < MAX_VIRTUAL_IRQS; i++) {
+ struct nvme_mdev_user_irq *vec = &vctrl->irqs.vecs[i];
+
+ vec->irq_coalesc_en = false;
+ vec->irq_pending_cnt = 0;
+ vec->irq_time = 0;
+ }
+
+ vctrl->irqs.irq_coalesc_time_us = 0;
+}
+
+/* Check if interrupt can be coalesced */
+static bool nvme_mdev_irq_coalesce(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_mdev_user_irq *irq)
+{
+ s64 delta;
+
+ if (!irq->irq_coalesc_en)
+ return false;
+
+ if (irq->irq_pending_cnt >= vctrl->irqs.irq_coalesc_max)
+ return false;
+
+ delta = ktime_us_delta(vctrl->now, irq->irq_time);
+ return (delta < vctrl->irqs.irq_coalesc_time_us);
+}
+
+void nvme_mdev_irq_raise_unplug_event(struct nvme_mdev_vctrl *vctrl,
+ unsigned int count)
+{
+ mutex_lock(&vctrl->lock);
+
+ if (vctrl->irqs.request_trigger) {
+ if (!(count % 10))
+ dev_notice_ratelimited(mdev_dev(vctrl->mdev),
+ "Relaying device request to user (#%u)\n",
+ count);
+
+ eventfd_signal(vctrl->irqs.request_trigger, 1);
+
+ } else if (count == 0) {
+ dev_notice(mdev_dev(vctrl->mdev),
+ "No device request channel registered, blocked until released by user\n");
+ }
+ mutex_unlock(&vctrl->lock);
+}
+
+/* Raise an interrupt */
+void nvme_mdev_irq_raise(struct nvme_mdev_vctrl *vctrl, unsigned int index)
+{
+ struct nvme_mdev_user_irq *irq = &vctrl->irqs.vecs[index];
+
+ irq->irq_pending_cnt++;
+}
+
+/* Unraise an interrupt */
+void nvme_mdev_irq_clear(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index)
+{
+ struct nvme_mdev_user_irq *irq = &vctrl->irqs.vecs[index];
+
+ irq->irq_time = vctrl->now;
+ irq->irq_pending_cnt = 0;
+}
+
+/* Directly trigger an interrupt without affecting irq coalescing settings */
+void nvme_mdev_irq_trigger(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index)
+{
+ struct nvme_mdev_user_irq *irq = &vctrl->irqs.vecs[index];
+
+ if (irq->trigger)
+ eventfd_signal(irq->trigger, 1);
+}
+
+/* Trigger previously raised interrupt */
+void nvme_mdev_irq_cond_trigger(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index)
+{
+ struct nvme_mdev_user_irq *irq = &vctrl->irqs.vecs[index];
+
+ if (irq->irq_pending_cnt == 0)
+ return;
+
+ if (!nvme_mdev_irq_coalesce(vctrl, irq)) {
+ nvme_mdev_irq_trigger(vctrl, index);
+ nvme_mdev_irq_clear(vctrl, index);
+ }
+}
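nvme_mdev_irq_coalesce() above holds an interrupt back only while coalescing is enabled, fewer than irq_coalesc_max completions are pending, and the coalescing window has not yet elapsed since the last delivery. A minimal userspace sketch of that decision, with plain microsecond integers standing in for the driver's ktime_t values (names are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Return true if an interrupt may be deferred: coalescing is on,
 * the pending count is still below the threshold, and less than
 * window_us microseconds passed since the last delivered interrupt.
 */
static bool irq_can_coalesce(bool enabled, unsigned int pending,
			     unsigned int max_pending,
			     int64_t now_us, int64_t last_irq_us,
			     int64_t window_us)
{
	if (!enabled)
		return false;
	if (pending >= max_pending)
		return false;
	return (now_us - last_irq_us) < window_us;
}
```

When this returns false, nvme_mdev_irq_cond_trigger() fires the eventfd and resets the pending count and timestamp, exactly as the trigger/clear pair above does.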
diff --git a/drivers/nvme/mdev/mdev.h b/drivers/nvme/mdev/mdev.h
new file mode 100644
index 000000000000..d139e090520e
--- /dev/null
+++ b/drivers/nvme/mdev/mdev.h
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * NVME VFIO mediated driver
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+
+#ifndef _MDEV_NVME_MDEV_H
+#define _MDEV_NVME_MDEV_H
+
+#include <linux/kernel.h>
+#include <linux/byteorder/generic.h>
+#include <linux/nvme.h>
+
+struct page_map {
+ void *kmap;
+ struct page *page;
+ dma_addr_t iova;
+};
+
+struct user_prplist {
+ /* used by user data iterator*/
+ struct page_map page;
+ unsigned int index; /* index of current entry */
+};
+
+struct kernel_data {
+ /* used by kernel data iterator*/
+ void *data;
+ unsigned int size;
+ dma_addr_t dma_addr;
+};
+
+struct nvme_ext_data_iter {
+ /* private */
+ struct nvme_mdev_viommu *viommu;
+ union {
+ const union nvme_data_ptr *dptr;
+ struct user_prplist uprp;
+ struct kernel_data kmem;
+ };
+
+ /* user interface */
+ u64 count; /* number of data pages, yet to be covered */
+
+ phys_addr_t physical; /* iterator physical address value*/
+ dma_addr_t host_iova; /* iterator dma address value*/
+
+ /* moves iterator to the next item */
+ int (*next)(struct nvme_ext_data_iter *data_iter);
+
+ /* if != NULL, user should call this when it done with data
+ * pointed by the iterator
+ */
+ void (*release)(struct nvme_ext_data_iter *data_iter);
+};
+#endif
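The nvme_ext_data_iter above is consumed by repeatedly calling ->next() until the remaining page count is exhausted, then calling ->release() when it is non-NULL. A mock iterator over a flat array illustrates the ->next() calling pattern (all names here are hypothetical, not driver API):

```c
#include <assert.h>

/* Toy stand-in for nvme_ext_data_iter: 'count' is the number of
 * items not yet consumed, 'cur' is the current position value.
 */
struct mock_iter {
	const int *pages;
	unsigned long count;	/* items not yet consumed */
	int cur;
};

/* Advance the iterator; returns 0 on success, -1 when exhausted,
 * mirroring the int return of the driver's ->next() callback.
 */
static int mock_next(struct mock_iter *it)
{
	if (it->count == 0)
		return -1;
	it->cur = *it->pages++;
	it->count--;
	return 0;
}

/* Typical consumer loop: drain the iterator to completion. */
static int sum_pages(struct mock_iter *it)
{
	int sum = 0;

	while (mock_next(it) == 0)
		sum += it->cur;
	return sum;
}
```

In the driver the same loop shape walks guest PRP lists (user iterator) or a kernel bounce buffer (kernel iterator) one DMA page at a time.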
diff --git a/drivers/nvme/mdev/mmio.c b/drivers/nvme/mdev/mmio.c
new file mode 100644
index 000000000000..cf03c1f22f4c
--- /dev/null
+++ b/drivers/nvme/mdev/mmio.c
@@ -0,0 +1,591 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * NVMe virtual controller MMIO implementation
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/kernel.h>
+#include <linux/highmem.h>
+#include "priv.h"
+
+#define DB_AREA_SIZE (MAX_VIRTUAL_QUEUES * 2 * (4 << DB_STRIDE_SHIFT))
+#define DB_MASK ((4 << DB_STRIDE_SHIFT) - 1)
+#define MMIO_BAR_SIZE __roundup_pow_of_two(NVME_REG_DBS + DB_AREA_SIZE)
+
+/* Put the controller into fatal error state. Only way out is reset */
+static void nvme_mdev_mmio_fatal_error(struct nvme_mdev_vctrl *vctrl)
+{
+ if (vctrl->mmio.csts & NVME_CSTS_CFS)
+ return;
+
+ vctrl->mmio.csts |= NVME_CSTS_CFS;
+ nvme_mdev_io_pause(vctrl);
+
+ if (vctrl->mmio.csts & NVME_CSTS_RDY)
+ nvme_mdev_vctrl_disable(vctrl);
+}
+
+/* Send a generic error notification to the user */
+static void nvme_mdev_mmio_error(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_async_event info)
+{
+ nvme_mdev_event_send(vctrl, NVME_AER_TYPE_ERROR, info);
+}
+
+/* This is the memory fault handler for the mmap area of the doorbells */
+static vm_fault_t nvme_mdev_mmio_dbs_mmap_fault(struct vm_fault *vmf)
+{
+ struct vm_area_struct *vma = vmf->vma;
+ struct nvme_mdev_vctrl *vctrl = vma->vm_private_data;
+
+ /* DB area is just one page, starting at offset 4096 of the mmio*/
+ if (WARN_ON(vmf->pgoff != 1))
+ return VM_FAULT_SIGBUS;
+
+ get_page(vctrl->mmio.dbs_page);
+ vmf->page = vctrl->mmio.dbs_page;
+ return 0;
+}
+
+static const struct vm_operations_struct nvme_mdev_mmio_dbs_vm_ops = {
+ .fault = nvme_mdev_mmio_dbs_mmap_fault,
+};
+
+/* check that user db write is valid and send an error if not*/
+bool nvme_mdev_mmio_db_check(struct nvme_mdev_vctrl *vctrl,
+ u16 qid, u16 size, u16 db)
+{
+ if (get_current() != vctrl->iothread)
+ lockdep_assert_held(&vctrl->lock);
+
+ if (db < size)
+ return true;
+ if (qid == 0) {
+ _DBG(vctrl, "MMIO: invalid admin DB write - fatal error\n");
+ nvme_mdev_mmio_fatal_error(vctrl);
+ return false;
+ }
+
+ _DBG(vctrl, "MMIO: invalid DB value write qid=%d, size=%d, value=%d\n",
+ qid, size, db);
+
+ nvme_mdev_mmio_error(vctrl, NVME_AER_ERROR_INVALID_DB_VALUE);
+ return false;
+}
+
+/* handle submission queue doorbell write */
+static void nvme_mdev_mmio_db_write_sq(struct nvme_mdev_vctrl *vctrl,
+ u32 qid, u32 val)
+{
+ _DBG(vctrl, "MMIO: doorbell SQID %d, DB write %d\n", qid, val);
+
+ lockdep_assert_held(&vctrl->lock);
+ /* check if the db belongs to a valid queue */
+ if (qid >= MAX_VIRTUAL_QUEUES || !test_bit(qid, vctrl->vsq_en))
+ goto err_db;
+
+ /* emulate the shadow doorbell functionality */
+ if (!vctrl->mmio.shadow_db_en || qid == 0)
+ vctrl->mmio.dbs[qid].sqt = cpu_to_le32(val & 0x0000FFFF);
+
+ if (qid != 0)
+ vctrl->io_idle = false;
+
+ if (vctrl->vctrl_paused || !vctrl->mmio.shadow_db_supported)
+ return;
+
+ qid ? nvme_mdev_io_resume(vctrl) : nvme_mdev_adm_process_sq(vctrl);
+ return;
+err_db:
+
+ _DBG(vctrl, "MMIO: inactive/invalid SQ DB write qid=%d, value=%d\n",
+ qid, val);
+
+ nvme_mdev_mmio_error(vctrl, NVME_AER_ERROR_INVALID_DB_REG);
+}
+
+/* handle doorbell write */
+static void nvme_mdev_mmio_db_write_cq(struct nvme_mdev_vctrl *vctrl,
+ u32 qid, u32 val)
+{
+ _DBG(vctrl, "MMIO: doorbell CQID %d, DB write %d\n", qid, val);
+
+ lockdep_assert_held(&vctrl->lock);
+ /* check if the db belongs to a valid queue */
+ if (qid >= MAX_VIRTUAL_QUEUES || !test_bit(qid, vctrl->vcq_en))
+ goto err_db;
+
+ /* emulate the shadow doorbell functionality */
+ if (!vctrl->mmio.shadow_db_en || qid == 0)
+ vctrl->mmio.dbs[qid].cqh = cpu_to_le32(val & 0x0000FFFF);
+
+ if (vctrl->vctrl_paused || !vctrl->mmio.shadow_db_supported)
+ return;
+
+ if (qid == 0) {
+ nvme_mdev_vcq_process(vctrl, 0, false);
+ /* if the completion queue was full prior to this write,
+ * some admin commands may still be pending, and this is
+ * the last chance to process them
+ */
+ nvme_mdev_adm_process_sq(vctrl);
+ }
+ return;
+err_db:
+ _DBG(vctrl,
+ "MMIO: inactive/invalid CQ DB write qid=%d, value=%d\n",
+ qid, val);
+
+ nvme_mdev_mmio_error(vctrl, NVME_AER_ERROR_INVALID_DB_REG);
+}
+
+/* This is called when user enables the controller */
+static void nvme_mdev_mmio_cntrl_enable(struct nvme_mdev_vctrl *vctrl)
+{
+ u64 acq, asq;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ // Controller must be reset from the dead state
+ if (nvme_mdev_vctrl_is_dead(vctrl))
+ goto error;
+
+ /* only NVME command set supported */
+ if (((vctrl->mmio.cc >> NVME_CC_CSS_SHIFT) & 0x7) != 0)
+ goto error;
+
+ /* Check the queue arbitration method*/
+ if ((vctrl->mmio.cc & NVME_CC_AMS_MASK) != NVME_CC_AMS_RR)
+ goto error;
+
+ /* Check the page size*/
+ if (((vctrl->mmio.cc >> NVME_CC_MPS_SHIFT) & 0xF) != (PAGE_SHIFT - 12))
+ goto error;
+
+ /* Start the admin completion queue*/
+ acq = vctrl->mmio.acql | ((u64)vctrl->mmio.acqh << 32);
+ asq = vctrl->mmio.asql | ((u64)vctrl->mmio.asqh << 32);
+
+ if (!nvme_mdev_vctrl_enable(vctrl, acq, asq, vctrl->mmio.aqa))
+ goto error;
+
+ /* Success! */
+ vctrl->mmio.csts |= NVME_CSTS_RDY;
+ return;
+error:
+ _DBG(vctrl, "MMIO: failure to enable the controller - fatal error\n");
+ nvme_mdev_mmio_fatal_error(vctrl);
+}
+
+/* This is called when user sends a notification that controller is
+ * about to be disabled
+ */
+static void nvme_mdev_mmio_cntrl_shutdown(struct nvme_mdev_vctrl *vctrl)
+{
+ lockdep_assert_held(&vctrl->lock);
+
+ /* clear shutdown notification bits */
+ vctrl->mmio.cc &= ~NVME_CC_SHN_MASK;
+
+ if (nvme_mdev_vctrl_is_dead(vctrl)) {
+ _DBG(vctrl, "MMIO: shutdown notification for dead ctrl\n");
+ return;
+ }
+
+ /* not enabled */
+ if (!(vctrl->mmio.csts & NVME_CSTS_RDY)) {
+ _DBG(vctrl, "MMIO: shutdown notification with CSTS.RDY==0\n");
+ nvme_mdev_assert_io_not_running(vctrl);
+ return;
+ }
+
+ nvme_mdev_io_pause(vctrl);
+ nvme_mdev_vctrl_disable(vctrl);
+ vctrl->mmio.csts |= NVME_CSTS_SHST_CMPLT;
+}
+
+/* MMIO BAR read/write */
+static int nvme_mdev_mmio_bar_access(struct nvme_mdev_vctrl *vctrl,
+ u16 offset, char *buf,
+ u32 count, bool is_write)
+{
+ u32 val, oldval;
+
+ mutex_lock(&vctrl->lock);
+
+ /* Drop non DWORD sized and aligned reads/writes
+ * (QWORD read/writes are split by the caller)
+ */
+ if (count != 4 || (offset & 0x3))
+ goto drop;
+
+ val = is_write ? le32_to_cpu(*(__le32 *)buf) : 0;
+
+ switch (offset) {
+ case NVME_REG_CAP:
+ /* controller capabilities (low 32 bit)*/
+ if (is_write)
+ goto drop;
+ store_le32(buf, vctrl->mmio.cap & 0xFFFFFFFF);
+ break;
+
+ case NVME_REG_CAP + 4:
+ /* controller capabilities (upper 32 bit)*/
+ if (is_write)
+ goto drop;
+ store_le32(buf, vctrl->mmio.cap >> 32);
+ break;
+
+ case NVME_REG_VS:
+ if (is_write)
+ goto drop;
+ store_le32(buf, NVME_MDEV_NVME_VER);
+ break;
+
+ case NVME_REG_INTMS:
+ case NVME_REG_INTMC:
+ /* Interrupt Mask Set & Clear */
+ goto drop;
+
+ case NVME_REG_CC:
+ /* Controller Configuration */
+ if (!is_write) {
+ store_le32(buf, vctrl->mmio.cc);
+ break;
+ }
+
+ oldval = vctrl->mmio.cc;
+ vctrl->mmio.cc = val;
+
+ /* drop if reserved bits set */
+ if (vctrl->mmio.cc & 0xFF00000E) {
+ _DBG(vctrl,
+ "MMIO: reserved bits of CC set - fatal error\n");
+ nvme_mdev_mmio_fatal_error(vctrl);
+ goto drop;
+ }
+
+ /* CSS(command set),MPS(memory page size),AMS(queue arbitration)
+ * must not be changed while controller is running
+ */
+ if (vctrl->mmio.csts & NVME_CSTS_RDY) {
+ if ((vctrl->mmio.cc & 0x3FF0) != (oldval & 0x3FF0)) {
+ _DBG(vctrl,
+ "MMIO: attempt to change setting bits of CC while CC.EN=1 - fatal error\n");
+
+ nvme_mdev_mmio_fatal_error(vctrl);
+ goto drop;
+ }
+ }
+
+ if ((vctrl->mmio.cc & NVME_CC_SHN_MASK) != NVME_CC_SHN_NONE) {
+ _DBG(vctrl, "MMIO: CC.SHN != 0 - shutdown\n");
+ nvme_mdev_mmio_cntrl_shutdown(vctrl);
+ }
+
+ /* change in controller enabled state */
+ if ((val & NVME_CC_ENABLE) == (oldval & NVME_CC_ENABLE))
+ break;
+
+ if (vctrl->mmio.cc & NVME_CC_ENABLE) {
+ _DBG(vctrl, "MMIO: CC.EN<=1 - enable the controller\n");
+ nvme_mdev_mmio_cntrl_enable(vctrl);
+ } else {
+ _DBG(vctrl, "MMIO: CC.EN<=0 - reset controller\n");
+ __nvme_mdev_vctrl_reset(vctrl, false);
+ }
+
+ break;
+
+ case NVME_REG_CSTS:
+ /* Controller Status */
+ if (is_write)
+ goto drop;
+ store_le32(buf, vctrl->mmio.csts);
+ break;
+
+ case NVME_REG_AQA:
+ /* admin queue submission and completion size*/
+ if (!is_write)
+ store_le32(buf, vctrl->mmio.aqa);
+ else if (!(vctrl->mmio.csts & NVME_CSTS_RDY))
+ vctrl->mmio.aqa = val;
+ else
+ goto drop;
+ break;
+
+ case NVME_REG_ASQ:
+ /* admin submission queue address (low 32 bit)*/
+ if (!is_write)
+ store_le32(buf, vctrl->mmio.asql);
+ else if (!(vctrl->mmio.csts & NVME_CSTS_RDY))
+ vctrl->mmio.asql = val;
+ else
+ goto drop;
+ break;
+
+ case NVME_REG_ASQ + 4:
+ /* admin submission queue address (high 32 bit)*/
+ if (!is_write)
+ store_le32(buf, vctrl->mmio.asqh);
+ else if (!(vctrl->mmio.csts & NVME_CSTS_RDY))
+ vctrl->mmio.asqh = val;
+ else
+ goto drop;
+ break;
+
+ case NVME_REG_ACQ:
+ /* admin completion queue address (low 32 bit)*/
+ if (!is_write)
+ store_le32(buf, vctrl->mmio.acql);
+ else if (!(vctrl->mmio.csts & NVME_CSTS_RDY))
+ vctrl->mmio.acql = val;
+ else
+ goto drop;
+ break;
+
+ case NVME_REG_ACQ + 4:
+ /* admin completion queue address (high 32 bit)*/
+ if (!is_write)
+ store_le32(buf, vctrl->mmio.acqh);
+ else if (!(vctrl->mmio.csts & NVME_CSTS_RDY))
+ vctrl->mmio.acqh = val;
+ else
+ goto drop;
+ break;
+
+ case NVME_REG_CMBLOC:
+ case NVME_REG_CMBSZ:
+ /* not supported - hardwired to 0*/
+ if (is_write)
+ goto drop;
+ store_le32(buf, 0);
+ break;
+
+ case NVME_REG_DBS ... (NVME_REG_DBS + DB_AREA_SIZE - 1): {
+ /* completion and submission doorbells */
+ u16 db_offset = offset - NVME_REG_DBS;
+ u16 index = db_offset >> (DB_STRIDE_SHIFT + 2);
+ u16 qid = index >> 1;
+ bool sq = (index & 0x1) == 0;
+
+ if (!is_write || (db_offset & DB_MASK))
+ goto drop;
+
+ if (!(vctrl->mmio.csts & NVME_CSTS_RDY))
+ goto drop;
+
+ if (nvme_mdev_vctrl_is_dead(vctrl))
+ goto drop;
+
+ if (sq)
+ nvme_mdev_mmio_db_write_sq(vctrl, qid, val);
+ else
+ nvme_mdev_mmio_db_write_cq(vctrl, qid, val);
+ break;
+ }
+ default:
+ goto drop;
+ }
+
+ mutex_unlock(&vctrl->lock);
+ return count;
+drop:
+ _DBG(vctrl, "MMIO: dropping access at 0x%x\n", offset);
+ mutex_unlock(&vctrl->lock);
+ return 0;
+}
+
+/* Called when the virtual controller is created */
+int nvme_mdev_mmio_create(struct nvme_mdev_vctrl *vctrl)
+{
+ int ret;
+
+ /* BAR0 */
+ nvme_mdev_pci_setup_bar(vctrl, PCI_BASE_ADDRESS_0,
+ MMIO_BAR_SIZE, nvme_mdev_mmio_bar_access);
+
+ /* Spec allows for maximum depth of 0x10000, but we limit
+ * it to 1 less to avoid various overflows
+ */
+ BUILD_BUG_ON(MAX_VIRTUAL_QUEUE_DEPTH > 0xFFFF);
+
+ /* CAP has 4 bits for the doorbell stride shift*/
+ BUILD_BUG_ON(DB_STRIDE_SHIFT > 0xF);
+
+ /* Shadow doorbell limits doorbells to 1 page*/
+ BUILD_BUG_ON(DB_AREA_SIZE > PAGE_SIZE);
+
+ /* Just in case...*/
+ BUILD_BUG_ON((PAGE_SHIFT - 12) > 0xF);
+
+ vctrl->mmio.cap =
+ // MQES: maximum queue entries
+ ((u64)(MAX_VIRTUAL_QUEUE_DEPTH - 1) << 0) |
+ // CQR: physically contiguous queues - no
+ (0ULL << 16) |
+ // AMS: Queue arbitration.
+ // TODOLATER: IO: implement WRRU
+ (0ULL << 17) |
+ // TO: RDY timeout - 0 (done in sync)
+ (0ULL << 24) |
+ // DSTRD: doorbell stride
+ ((u64)DB_STRIDE_SHIFT << 32) |
+ // NSSRS: no support for nvme subsystem reset
+ (0ULL << 36) |
+ // CSS: NVM command set supported
+ (1ULL << 37) |
+ // BPS: no support for boot partition
+ (0ULL << 45) |
+ // MPSMIN: Minimum page size supported is PAGE_SIZE
+ ((u64)(PAGE_SHIFT - 12) << 48) |
+ // MPSMAX: Maximum page size is PAGE_SIZE as well
+ ((u64)(PAGE_SHIFT - 12) << 52);
+
+ /* Create the (regular) doorbell buffers */
+ vctrl->mmio.dbs_page = alloc_pages_node(vctrl->hctrl->node,
+ GFP_KERNEL | __GFP_ZERO, 0);
+
+ ret = -ENOMEM;
+
+ if (!vctrl->mmio.dbs_page)
+ goto error0;
+
+ vctrl->mmio.db_page_kmap = kmap(vctrl->mmio.dbs_page);
+ if (!vctrl->mmio.db_page_kmap)
+ goto error1;
+
+ vctrl->mmio.fake_eidx_page = alloc_pages_node(vctrl->hctrl->node,
+ GFP_KERNEL | __GFP_ZERO, 0);
+ if (!vctrl->mmio.fake_eidx_page)
+ goto error2;
+
+ vctrl->mmio.fake_eidx_kmap = kmap(vctrl->mmio.fake_eidx_page);
+ if (!vctrl->mmio.fake_eidx_kmap)
+ goto error3;
+ return 0;
+error3:
+ put_page(vctrl->mmio.fake_eidx_page);
+error2:
+ kunmap(vctrl->mmio.dbs_page);
+error1:
+ put_page(vctrl->mmio.dbs_page);
+error0:
+ return ret;
+}
+
+/* Called when the virtual controller is reset */
+void nvme_mdev_mmio_reset(struct nvme_mdev_vctrl *vctrl, bool pci_reset)
+{
+ vctrl->mmio.cc = 0;
+ vctrl->mmio.csts = 0;
+
+ if (pci_reset) {
+ vctrl->mmio.aqa = 0;
+ vctrl->mmio.asql = 0;
+ vctrl->mmio.asqh = 0;
+ vctrl->mmio.acql = 0;
+ vctrl->mmio.acqh = 0;
+ }
+}
+
+/* Called when the virtual controller is opened */
+void nvme_mdev_mmio_open(struct nvme_mdev_vctrl *vctrl)
+{
+ if (!vctrl->mmio.shadow_db_supported)
+ nvme_mdev_vctrl_region_set_mmap(vctrl,
+ VFIO_PCI_BAR0_REGION_INDEX,
+ NVME_REG_DBS, PAGE_SIZE,
+ &nvme_mdev_mmio_dbs_vm_ops);
+ else
+ nvme_mdev_vctrl_region_disable_mmap(vctrl,
+ VFIO_PCI_BAR0_REGION_INDEX);
+}
+
+/* Called when the virtual controller queues are enabled */
+int nvme_mdev_mmio_enable_dbs(struct nvme_mdev_vctrl *vctrl)
+{
+ if (WARN_ON(vctrl->mmio.shadow_db_en))
+ return -EINVAL;
+
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ /* setup normal doorbells and reset them*/
+ vctrl->mmio.dbs = vctrl->mmio.db_page_kmap;
+ vctrl->mmio.eidxs = vctrl->mmio.fake_eidx_kmap;
+ memset((void *)vctrl->mmio.dbs, 0, DB_AREA_SIZE);
+ memset((void *)vctrl->mmio.eidxs, 0, DB_AREA_SIZE);
+ return 0;
+}
+
+/* Called when the virtual controller shadow doorbell is enabled */
+int nvme_mdev_mmio_enable_dbs_shadow(struct nvme_mdev_vctrl *vctrl,
+ dma_addr_t sdb_iova,
+ dma_addr_t eidx_iova)
+{
+ int ret;
+
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ ret = nvme_mdev_viommu_create_kmap(&vctrl->viommu,
+ sdb_iova, &vctrl->mmio.sdb_map);
+ if (ret)
+ return ret;
+
+ ret = nvme_mdev_viommu_create_kmap(&vctrl->viommu,
+ eidx_iova, &vctrl->mmio.seidx_map);
+ if (ret) {
+ nvme_mdev_viommu_free_kmap(&vctrl->viommu,
+ &vctrl->mmio.sdb_map);
+ return ret;
+ }
+
+ vctrl->mmio.dbs = vctrl->mmio.sdb_map.kmap;
+ vctrl->mmio.eidxs = vctrl->mmio.seidx_map.kmap;
+
+ memcpy((void *)vctrl->mmio.dbs,
+ vctrl->mmio.db_page_kmap, DB_AREA_SIZE);
+
+ memcpy((void *)vctrl->mmio.eidxs,
+ vctrl->mmio.fake_eidx_kmap, DB_AREA_SIZE);
+
+ vctrl->mmio.shadow_db_en = true;
+ return 0;
+}
+
+/* Called on guest mapping update to
+ * verify that our mappings are still intact
+ */
+void nvme_mdev_mmio_viommu_update(struct nvme_mdev_vctrl *vctrl)
+{
+ nvme_mdev_assert_io_not_running(vctrl);
+ if (!vctrl->mmio.shadow_db_en)
+ return;
+
+ nvme_mdev_viommu_update_kmap(&vctrl->viommu, &vctrl->mmio.sdb_map);
+ nvme_mdev_viommu_update_kmap(&vctrl->viommu, &vctrl->mmio.seidx_map);
+
+ vctrl->mmio.dbs = vctrl->mmio.sdb_map.kmap;
+ vctrl->mmio.eidxs = vctrl->mmio.seidx_map.kmap;
+}
+
+/* Disable the doorbells */
+void nvme_mdev_mmio_disable_dbs(struct nvme_mdev_vctrl *vctrl)
+{
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ /* Free the shadow doorbells */
+ nvme_mdev_viommu_free_kmap(&vctrl->viommu, &vctrl->mmio.sdb_map);
+ nvme_mdev_viommu_free_kmap(&vctrl->viommu, &vctrl->mmio.seidx_map);
+
+ /* Clear the doorbells */
+ vctrl->mmio.dbs = NULL;
+ vctrl->mmio.eidxs = NULL;
+ vctrl->mmio.shadow_db_en = false;
+}
+
+/* Called when the virtual controller is about to be freed */
+void nvme_mdev_mmio_free(struct nvme_mdev_vctrl *vctrl)
+{
+ nvme_mdev_assert_io_not_running(vctrl);
+ kunmap(vctrl->mmio.dbs_page);
+ put_page(vctrl->mmio.dbs_page);
+ kunmap(vctrl->mmio.fake_eidx_page);
+ put_page(vctrl->mmio.fake_eidx_page);
+}
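As a side note, the doorbell decoding in nvme_mdev_mmio_bar_access() above can be exercised in a small userspace sketch. DB_STRIDE_SHIFT and the doorbell base offset mirror the values used in this series; struct db_ref and decode_db() are illustrative names only, not part of the driver:

```c
/* Userspace sketch of the doorbell decode in nvme_mdev_mmio_bar_access().
 * Each doorbell slot is 4 << DB_STRIDE_SHIFT bytes wide; even slots are
 * submission queue tails, odd slots are completion queue heads.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define DB_STRIDE_SHIFT 4	/* as in priv.h: 1 cacheline per doorbell */
#define NVME_REG_DBS 0x1000	/* start of the doorbell area in BAR0 */

struct db_ref {
	uint16_t qid;
	bool sq;	/* true: SQ tail doorbell, false: CQ head doorbell */
};

static struct db_ref decode_db(uint16_t offset)
{
	uint16_t db_offset = offset - NVME_REG_DBS;
	uint16_t index = db_offset >> (DB_STRIDE_SHIFT + 2);
	struct db_ref ref = {
		.qid = index >> 1,
		.sq = (index & 0x1) == 0,
	};
	return ref;
}
```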
diff --git a/drivers/nvme/mdev/pci.c b/drivers/nvme/mdev/pci.c
new file mode 100644
index 000000000000..b7cdeaaf9c2e
--- /dev/null
+++ b/drivers/nvme/mdev/pci.c
@@ -0,0 +1,247 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * NVMe virtual controller minimal PCI/PCIe config space implementation
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include "priv.h"
+
+/* setup a 64 bit PCI bar */
+void nvme_mdev_pci_setup_bar(struct nvme_mdev_vctrl *vctrl,
+ u8 bar,
+ unsigned int size,
+ region_access_fn access_fn)
+{
+ nvme_mdev_vctrl_add_region(vctrl,
+ VFIO_PCI_BAR0_REGION_INDEX +
+ ((bar - PCI_BASE_ADDRESS_0) >> 2),
+ size, access_fn);
+
+ store_le32(vctrl->pcicfg.wmask + bar, ~((u64)size - 1));
+ store_le32(vctrl->pcicfg.values + bar,
+ PCI_BASE_ADDRESS_SPACE_MEMORY |
+ PCI_BASE_ADDRESS_MEM_TYPE_64);
+}
+
+/* Allocate a pci capability*/
+static u8 nvme_mdev_pci_allocate_cap(struct nvme_mdev_vctrl *vctrl,
+ u8 id, u8 size)
+{
+ u8 *cfg = vctrl->pcicfg.values;
+ u8 newcap = vctrl->pcicfg.end;
+ u8 cap = cfg[PCI_CAPABILITY_LIST];
+
+ size = round_up(size, 4);
+ // only standard cfg space caps for now
+ WARN_ON(newcap + size > 256);
+
+ if (!cfg[PCI_CAPABILITY_LIST]) {
+ /*special case for first capability*/
+ u16 status = load_le16(cfg + PCI_STATUS);
+
+ status |= PCI_STATUS_CAP_LIST;
+ store_le16(cfg + PCI_STATUS, status);
+
+ cfg[PCI_CAPABILITY_LIST] = newcap;
+ goto setupcap;
+ }
+
+ while (cfg[cap + PCI_CAP_LIST_NEXT] != 0)
+ cap = cfg[cap + PCI_CAP_LIST_NEXT];
+
+ cfg[cap + PCI_CAP_LIST_NEXT] = newcap;
+
+setupcap:
+ cfg[newcap + PCI_CAP_LIST_ID] = id;
+ cfg[newcap + PCI_CAP_LIST_NEXT] = 0;
+ vctrl->pcicfg.end += size;
+ return newcap;
+}
+
+static void nvme_mdev_pci_setup_pm_cap(struct nvme_mdev_vctrl *vctrl)
+{
+ u8 *cfg = vctrl->pcicfg.values;
+ u8 *cfgm = vctrl->pcicfg.wmask;
+
+ u8 cap = nvme_mdev_pci_allocate_cap(vctrl,
+ PCI_CAP_ID_PM, PCI_PM_SIZEOF);
+
+ store_le16(cfg + cap + PCI_PM_PMC, 0x3);
+ store_le16(cfg + cap + PCI_PM_CTRL, PCI_PM_CTRL_NO_SOFT_RESET);
+ store_le16(cfgm + cap + PCI_PM_CTRL, 0x3);
+ vctrl->pcicfg.pmcap = cap;
+}
+
+static void nvme_mdev_pci_setup_msix_cap(struct nvme_mdev_vctrl *vctrl)
+{
+ u8 *cfg = vctrl->pcicfg.values;
+ u8 *cfgm = vctrl->pcicfg.wmask;
+ u8 cap = nvme_mdev_pci_allocate_cap(vctrl,
+ PCI_CAP_ID_MSIX,
+ PCI_CAP_MSIX_SIZEOF);
+
+ int msix_tbl_size = roundup(MAX_VIRTUAL_IRQS * 16, PAGE_SIZE);
+ int msix_pba_size = roundup(DIV_ROUND_UP(MAX_VIRTUAL_IRQS, 8),
+ PAGE_SIZE);
+
+ store_le16(cfg + cap + PCI_MSIX_FLAGS, MAX_VIRTUAL_IRQS - 1);
+ store_le16(cfgm + cap + PCI_MSIX_FLAGS,
+ PCI_MSIX_FLAGS_MASKALL | PCI_MSIX_FLAGS_ENABLE);
+
+ store_le32(cfg + cap + PCI_MSIX_TABLE, 0x2);
+ store_le32(cfg + cap + PCI_MSIX_PBA, msix_tbl_size | 0x2);
+
+ nvme_mdev_pci_setup_bar(vctrl, PCI_BASE_ADDRESS_2,
+ __roundup_pow_of_two(msix_tbl_size +
+ msix_pba_size), NULL);
+ vctrl->pcicfg.msixcap = cap;
+}
+
+static void nvme_mdev_pci_setup_pcie_cap(struct nvme_mdev_vctrl *vctrl)
+{
+ u8 *cfg = vctrl->pcicfg.values;
+ u8 cap = nvme_mdev_pci_allocate_cap(vctrl,
+ PCI_CAP_ID_EXP,
+ PCI_CAP_EXP_ENDPOINT_SIZEOF_V2);
+
+ store_le16(cfg + cap + PCI_EXP_FLAGS, 0x02 |
+ (PCI_EXP_TYPE_ENDPOINT << 4));
+
+ store_le32(cfg + cap + PCI_EXP_DEVCAP,
+ PCI_EXP_DEVCAP_RBER | PCI_EXP_DEVCAP_FLR);
+ store_le32(cfg + cap + PCI_EXP_LNKCAP,
+ PCI_EXP_LNKCAP_SLS_8_0GB | (4 << 4) /*4x*/);
+ store_le16(cfg + cap + PCI_EXP_LNKSTA,
+ PCI_EXP_LNKSTA_CLS_8_0GB | (4 << 4) /*4x*/);
+
+ store_le32(cfg + cap + PCI_EXP_LNKCAP2, PCI_EXP_LNKCAP2_SLS_8_0GB);
+ store_le16(cfg + cap + PCI_EXP_LNKCTL2, PCI_EXP_LNKCTL2_TLS_8_0GT);
+ vctrl->pcicfg.pciecap = cap;
+}
+
+/* This is called on PCI config read/write */
+static int nvme_mdev_pci_cfg_access(struct nvme_mdev_vctrl *vctrl,
+ u16 offset, char *buf,
+ u32 count, bool is_write)
+{
+ unsigned int i;
+
+ mutex_lock(&vctrl->lock);
+
+ if (!is_write) {
+ memcpy(buf, (vctrl->pcicfg.values + offset), count);
+ goto out;
+ }
+
+ for (i = 0; i < count; i++) {
+ u16 address = offset + i;
+ u8 value = buf[i];
+ u8 old_value = vctrl->pcicfg.values[address];
+ u8 wmask = vctrl->pcicfg.wmask[address];
+ u8 new_value = (value & wmask) | (old_value & ~wmask);
+
+ /* D3/D0 power control */
+ if (address == vctrl->pcicfg.pmcap + PCI_PM_CTRL) {
+ u8 state = new_value & 0x03;
+
+ if (state != 0 && state != 3)
+ new_value = old_value;
+
+ if (old_value != new_value) {
+ const char *s = state == 3 ? "D3" : "D0";
+
+ if (state == 3)
+ __nvme_mdev_vctrl_reset(vctrl, true);
+ _DBG(vctrl, "PCI: going to %s\n", s);
+ }
+ }
+
+ /* FLR reset*/
+ if (address == vctrl->pcicfg.pciecap + PCI_EXP_DEVCTL + 1)
+ if (value & 0x80) {
+ _DBG(vctrl, "PCI: FLR reset\n");
+ __nvme_mdev_vctrl_reset(vctrl, true);
+ }
+ vctrl->pcicfg.values[offset + i] = new_value;
+ }
+out:
+ mutex_unlock(&vctrl->lock);
+ return count;
+}
+
+/* setup pci configuration */
+int nvme_mdev_pci_create(struct nvme_mdev_vctrl *vctrl)
+{
+ u8 *cfg, *cfgm;
+
+ vctrl->pcicfg.values = kzalloc(PCI_CFG_SIZE, GFP_KERNEL);
+ if (!vctrl->pcicfg.values)
+ return -ENOMEM;
+
+ vctrl->pcicfg.wmask = kzalloc(PCI_CFG_SIZE, GFP_KERNEL);
+ if (!vctrl->pcicfg.wmask) {
+ kfree(vctrl->pcicfg.values);
+ return -ENOMEM;
+ }
+
+ cfg = vctrl->pcicfg.values;
+ cfgm = vctrl->pcicfg.wmask;
+
+ nvme_mdev_vctrl_add_region(vctrl,
+ VFIO_PCI_CONFIG_REGION_INDEX,
+ PCI_CFG_SIZE,
+ nvme_mdev_pci_cfg_access);
+
+ /* vendor information */
+ store_le16(cfg + PCI_VENDOR_ID, NVME_MDEV_PCI_VENDOR_ID);
+ store_le16(cfg + PCI_DEVICE_ID, NVME_MDEV_PCI_DEVICE_ID);
+
+ /* pci command register */
+ store_le16(cfgm + PCI_COMMAND,
+ PCI_COMMAND_INTX_DISABLE |
+ PCI_COMMAND_MEMORY |
+ PCI_COMMAND_MASTER);
+
+ /* pci status register */
+ store_le16(cfg + PCI_STATUS, PCI_STATUS_CAP_LIST);
+
+ /* subsystem information */
+ store_le16(cfg + PCI_SUBSYSTEM_VENDOR_ID, NVME_MDEV_PCI_SUBVENDOR_ID);
+ store_le16(cfg + PCI_SUBSYSTEM_ID, NVME_MDEV_PCI_SUBDEVICE_ID);
+ store_le8(cfg + PCI_CLASS_REVISION, NVME_MDEV_PCI_REVISION);
+
+ /*Programming Interface (NVM Express) */
+ store_le8(cfg + PCI_CLASS_PROG, 0x02);
+
+ /* Device class and subclass
+ * (Mass storage controller, Non-Volatile memory controller)
+ */
+ store_le16(cfg + PCI_CLASS_DEVICE, 0x0108);
+
+ /* dummy read/write */
+ store_le8(cfgm + PCI_CACHE_LINE_SIZE, 0xFF);
+
+ /* initial value*/
+ store_le8(cfg + PCI_CAPABILITY_LIST, 0);
+ vctrl->pcicfg.end = 0x40;
+
+ nvme_mdev_pci_setup_pm_cap(vctrl);
+ nvme_mdev_pci_setup_msix_cap(vctrl);
+ nvme_mdev_pci_setup_pcie_cap(vctrl);
+
+ /* INTX IRQ number - info only for BIOS */
+ store_le8(cfgm + PCI_INTERRUPT_LINE, 0xFF);
+ store_le8(cfg + PCI_INTERRUPT_PIN, 0x01);
+
+ return 0;
+}
+
+/* teardown pci configuration */
+void nvme_mdev_pci_free(struct nvme_mdev_vctrl *vctrl)
+{
+ kfree(vctrl->pcicfg.values);
+ kfree(vctrl->pcicfg.wmask);
+ vctrl->pcicfg.values = NULL;
+ vctrl->pcicfg.wmask = NULL;
+}
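For reference, the byte-granular write merge used by nvme_mdev_pci_cfg_access() above can be checked in isolation: writable bits come from the guest value, read-only bits keep their old contents. This userspace sketch uses an illustrative helper name (cfg_merge_byte) that is not part of the driver:

```c
/* Userspace sketch of the per-byte config-space write merge from
 * nvme_mdev_pci_cfg_access(): new_value = (value & wmask) | (old & ~wmask).
 */
#include <stdint.h>

static uint8_t cfg_merge_byte(uint8_t old_value, uint8_t value, uint8_t wmask)
{
	/* bits set in wmask are guest-writable, the rest stay unchanged */
	return (uint8_t)((value & wmask) | (old_value & ~wmask));
}
```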
diff --git a/drivers/nvme/mdev/priv.h b/drivers/nvme/mdev/priv.h
new file mode 100644
index 000000000000..9f65e46c1ab2
--- /dev/null
+++ b/drivers/nvme/mdev/priv.h
@@ -0,0 +1,700 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Driver private data structures and helper macros
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+
+#ifndef _MDEV_NVME_PRIV_H
+#define _MDEV_NVME_PRIV_H
+
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/rbtree.h>
+#include <linux/vfio.h>
+#include <linux/mdev.h>
+#include <linux/pci.h>
+#include <linux/eventfd.h>
+#include <linux/byteorder/generic.h>
+#include "../host/nvme.h"
+#include "mdev.h"
+
+#define NVME_MDEV_NVME_VER NVME_VS(0x01, 0x03, 0x00)
+#define NVME_MDEV_FIRMWARE_VERSION "1.0"
+
+#define NVME_MDEV_PCI_VENDOR_ID PCI_VENDOR_ID_REDHAT_QUMRANET
+#define NVME_MDEV_PCI_DEVICE_ID 0x1234
+#define NVME_MDEV_PCI_SUBVENDOR_ID PCI_SUBVENDOR_ID_REDHAT_QUMRANET
+#define NVME_MDEV_PCI_SUBDEVICE_ID 0
+#define NVME_MDEV_PCI_REVISION 0x0
+
+#define DB_STRIDE_SHIFT 4 /*4 = 1 cacheline */
+#define MAX_VIRTUAL_QUEUES 16
+#define MAX_VIRTUAL_QUEUE_DEPTH 0xFFFF
+#define MAX_VIRTUAL_NAMESPACES 16 /* NSID = 1..16*/
+#define MAX_VIRTUAL_IRQS 16
+
+#define MAX_HOST_QUEUES 4
+#define MAX_AER_COMMANDS 16
+#define MAX_LOG_PAGES 16
+
+extern bool use_shadow_doorbell;
+extern unsigned int io_timeout_ms;
+extern unsigned int poll_timeout_ms;
+extern unsigned int admin_poll_rate_ms;
+
+/* virtual submission queue*/
+struct nvme_vsq {
+ u16 qid;
+ u16 size;
+ u16 head; /*next item to read */
+
+ struct nvme_command *data; /*the queue*/
+ struct nvme_vcq *vcq; /* completion queue*/
+
+ dma_addr_t iova;
+ bool cont;
+
+ u16 hsq;
+};
+
+/* virtual completion queue*/
+struct nvme_vcq {
+ /* basic queue settings */
+ u16 qid;
+ u16 size;
+ u16 head;
+ u16 tail;
+ bool phase; /* current queue phase */
+
+ volatile struct nvme_completion *data;
+
+ /* number of items pending*/
+ u16 pending;
+
+ /* IRQ settings */
+ int irq; /* -1 if disabled */
+
+ dma_addr_t iova;
+ bool cont;
+};
+
+/*A virtual namespace */
+struct nvme_mdev_vns {
+ /* host nvme namespace that we are attached to */
+ struct nvme_ns *host_ns;
+
+ /* block device that corresponds to the partition of that namespace */
+ struct block_device *host_part;
+ fmode_t fmode;
+
+ u32 nsid;
+
+ /* NSID on the host*/
+ u32 host_nsid;
+
+ /* host partition ID*/
+ unsigned int host_partid;
+
+ /* Offset inside the host namespace (start of the partition)*/
+ u64 host_lba_offset;
+
+ /* size of each block on the real namespace, same for host and guest */
+ u8 blksize_shift;
+
+ /* size of the namespace in lbas*/
+ u64 ns_size;
+
+ /* is the namespace read only?*/
+ bool readonly;
+
+ /* UUID of this namespace */
+ uuid_t uuid;
+
+ /* Optimal IO boundary*/
+ u16 noiob;
+};
+
+/* Virtual IOMMU */
+struct nvme_mdev_viommu {
+ struct device *hw_dev;
+ struct device *sw_dev;
+
+ /* dma/prp bookkeeping */
+ struct rb_root_cached maps_tree;
+ struct list_head maps_list;
+ struct nvme_mdev_vctrl *vctrl;
+};
+
+struct doorbell {
+ volatile __le32 sqt;
+ u8 rsvd1[(4 << DB_STRIDE_SHIFT) - sizeof(__le32)];
+ volatile __le32 cqh;
+ u8 rsvd2[(4 << DB_STRIDE_SHIFT) - sizeof(__le32)];
+};
+
+/* MMIO state */
+struct nvme_mdev_user_ctrl_mmio {
+ u32 cc; /* controller configuration */
+ u32 csts; /* controller status */
+ u64 cap; /* controller capabilities */
+
+ /* admin queue location & size */
+ u32 aqa;
+ u32 asql;
+ u32 asqh;
+ u32 acql;
+ u32 acqh;
+
+ bool shadow_db_supported;
+ bool shadow_db_en;
+
+ /* Regular doorbells */
+ struct page *dbs_page;
+ struct page *fake_eidx_page;
+ void *db_page_kmap;
+ void *fake_eidx_kmap;
+
+ /* Shadow doorbells */
+ struct page_map sdb_map;
+ struct page_map seidx_map;
+
+ /* Current doorbell mappings */
+ volatile struct doorbell *dbs;
+ volatile struct doorbell *eidxs;
+};
+
+/* pci configuration space of the device*/
+#define PCI_CFG_SIZE 4096
+struct nvme_mdev_pci_cfg_space {
+ u8 *values;
+ u8 *wmask;
+
+ u8 pmcap;
+ u8 pciecap;
+ u8 msixcap;
+ u8 end;
+};
+
+/*IRQ state of the controller */
+struct nvme_mdev_user_irq {
+ struct eventfd_ctx *trigger;
+ /* IRQ coalescing */
+ bool irq_coalesc_en;
+ time_t irq_time;
+ unsigned int irq_pending_cnt;
+};
+
+enum nvme_mdev_irq_mode {
+ NVME_MDEV_IMODE_NONE,
+ NVME_MDEV_IMODE_INTX,
+ NVME_MDEV_IMODE_MSIX,
+};
+
+struct nvme_mdev_user_irqs {
+ /* one of VFIO_PCI_{INTX|MSI|MSIX}_IRQ_INDEX */
+ enum nvme_mdev_irq_mode mode;
+
+ struct nvme_mdev_user_irq vecs[MAX_VIRTUAL_IRQS];
+ /* user interrupt coalescing settings */
+ u8 irq_coalesc_max;
+ unsigned int irq_coalesc_time_us;
+ /* device removal trigger*/
+ struct eventfd_ctx *request_trigger;
+};
+
+/*AER state */
+struct nvme_mdev_user_events {
+ /* async event request CIDs*/
+ u16 aer_cids[MAX_AER_COMMANDS];
+ unsigned int aer_cid_count;
+
+ /* events that are enabled */
+ unsigned long events_enabled[BITS_TO_LONGS(MAX_LOG_PAGES)];
+
+ /* events that are masked till next log page read*/
+ unsigned long events_masked[BITS_TO_LONGS(MAX_LOG_PAGES)];
+
+ /* events that are pending to be sent when user gives us an AER*/
+ unsigned long events_pending[BITS_TO_LONGS(MAX_LOG_PAGES)];
+ u32 event_values[MAX_LOG_PAGES];
+};
+
+/* host IO queue */
+struct nvme_mdev_hq {
+ unsigned int usecount;
+ struct list_head link;
+ unsigned int hqid;
+};
+
+ /* IO region abstraction (BARs, the PCI config space) */
+struct nvme_mdev_vctrl;
+typedef int (*region_access_fn) (struct nvme_mdev_vctrl *vctrl,
+ u16 offset, char *buf,
+ u32 size, bool is_write);
+
+struct nvme_mdev_io_region {
+ unsigned int size;
+ region_access_fn rw;
+
+ /* IF != NULL, the mmap_area_start/size specify the mmaped window
+ * of this region
+ */
+ const struct vm_operations_struct *mmap_ops;
+ unsigned int mmap_area_start;
+ unsigned int mmap_area_size;
+};
+
+/*Virtual NVME controller state */
+struct nvme_mdev_vctrl {
+ struct kref ref;
+ struct mutex lock;
+ struct list_head link;
+
+ struct mdev_device *mdev;
+ struct nvme_mdev_hctrl *hctrl;
+ bool inuse;
+
+ struct nvme_mdev_io_region regions[VFIO_PCI_NUM_REGIONS];
+
+ /* virtual controller state */
+ struct nvme_mdev_user_ctrl_mmio mmio;
+ struct nvme_mdev_pci_cfg_space pcicfg;
+ struct nvme_mdev_user_irqs irqs;
+ struct nvme_mdev_user_events events;
+
+ /* emulated namespaces */
+ struct nvme_mdev_vns *namespaces[MAX_VIRTUAL_NAMESPACES];
+ __le32 ns_log[MAX_VIRTUAL_NAMESPACES];
+ unsigned int ns_log_size;
+
+ /* emulated submission queues*/
+ struct nvme_vsq vsqs[MAX_VIRTUAL_QUEUES];
+ unsigned long vsq_en[BITS_TO_LONGS(MAX_VIRTUAL_QUEUES)];
+
+ /* emulated completion queues*/
+ unsigned long vcq_en[BITS_TO_LONGS(MAX_VIRTUAL_QUEUES)];
+ struct nvme_vcq vcqs[MAX_VIRTUAL_QUEUES];
+
+ /* Host IO queues*/
+ int max_host_hw_queues;
+ struct list_head host_hw_queues;
+
+ /* Interface to access user memory */
+ struct notifier_block vfio_map_notifier;
+ struct notifier_block vfio_unmap_notifier;
+ struct nvme_mdev_viommu viommu;
+
+ /* the IO thread */
+ struct task_struct *iothread;
+ bool iothread_parked;
+ bool io_idle;
+ ktime_t now;
+
+ /* Settings */
+ unsigned int arb_burst_shift;
+ u8 worload_hint;
+ unsigned int iothread_cpu;
+
+ /* Identification*/
+ char subnqn[256];
+ char serial[9];
+
+ bool vctrl_paused; /* true when the host device paused our IO */
+};
+
+/* mdev instance type*/
+struct nvme_mdev_inst_type {
+ unsigned int max_hw_queues;
+ char name[16];
+ struct attribute_group *attrgroup;
+};
+
+/*Abstraction of the host controller that we are connected to */
+struct nvme_mdev_hctrl {
+ struct mutex lock;
+
+ /* numa node of the host controller*/
+ int node;
+
+ struct list_head link;
+ struct kref ref;
+ bool removing;
+
+ /* for reference counting */
+ struct nvme_ctrl *nvme_ctrl;
+
+ /* Host area*/
+ u16 oncs;
+ u8 mdts;
+ unsigned int id;
+
+ /* book-keeping for number of host queues we can allocate*/
+ unsigned int nr_host_queues;
+};
+
+/* vctrl.c*/
+struct nvme_mdev_vctrl *nvme_mdev_vctrl_create(struct mdev_device *mdev,
+ struct nvme_mdev_hctrl *hctrl,
+ unsigned int max_host_queues);
+
+int nvme_mdev_vctrl_destroy(struct nvme_mdev_vctrl *vctrl);
+
+int nvme_mdev_vctrl_open(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_vctrl_release(struct nvme_mdev_vctrl *vctrl);
+
+void nvme_mdev_vctrl_pause(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_vctrl_resume(struct nvme_mdev_vctrl *vctrl);
+
+bool nvme_mdev_vctrl_enable(struct nvme_mdev_vctrl *vctrl,
+ dma_addr_t cqiova, dma_addr_t sqiova, u32 sizes);
+
+void nvme_mdev_vctrl_disable(struct nvme_mdev_vctrl *vctrl);
+
+void nvme_mdev_vctrl_reset(struct nvme_mdev_vctrl *vctrl);
+void __nvme_mdev_vctrl_reset(struct nvme_mdev_vctrl *vctrl, bool pci_reset);
+
+void nvme_mdev_vctrl_add_region(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index, unsigned int size,
+ region_access_fn access_fn);
+
+void nvme_mdev_vctrl_region_set_mmap(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index,
+ unsigned int offset,
+ unsigned int size,
+ const struct vm_operations_struct *ops);
+
+void nvme_mdev_vctrl_region_disable_mmap(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index);
+
+void nvme_mdev_vctrl_bind_iothread(struct nvme_mdev_vctrl *vctrl,
+ unsigned int cpu);
+
+int nvme_mdev_vctrl_set_shadow_doorbell_supported(struct nvme_mdev_vctrl *vctrl,
+ bool enable);
+
+int nvme_mdev_vctrl_hq_alloc(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_vctrl_hq_free(struct nvme_mdev_vctrl *vctrl, u16 qid);
+unsigned int nvme_mdev_vctrl_hqs_list(struct nvme_mdev_vctrl *vctrl, u16 *out);
+bool nvme_mdev_vctrl_is_dead(struct nvme_mdev_vctrl *vctrl);
+
+int nvme_mdev_vctrl_viommu_map(struct nvme_mdev_vctrl *vctrl, u32 flags,
+ dma_addr_t iova, u64 size);
+
+int nvme_mdev_vctrl_viommu_unmap(struct nvme_mdev_vctrl *vctrl,
+ dma_addr_t iova, u64 size);
+
+/* hctrl.c*/
+struct nvme_mdev_inst_type *nvme_mdev_inst_type_get(const char *name);
+struct nvme_mdev_hctrl *nvme_mdev_hctrl_lookup_get(struct device *parent);
+void nvme_mdev_hctrl_put(struct nvme_mdev_hctrl *hctrl);
+
+int nvme_mdev_hctrl_hqs_available(struct nvme_mdev_hctrl *hctrl);
+
+bool nvme_mdev_hctrl_hqs_reserve(struct nvme_mdev_hctrl *hctrl,
+ unsigned int n);
+void nvme_mdev_hctrl_hqs_unreserve(struct nvme_mdev_hctrl *hctrl,
+ unsigned int n);
+
+int nvme_mdev_hctrl_hq_alloc(struct nvme_mdev_hctrl *hctrl);
+void nvme_mdev_hctrl_hq_free(struct nvme_mdev_hctrl *hctrl, u16 qid);
+bool nvme_mdev_hctrl_hq_can_submit(struct nvme_mdev_hctrl *hctrl, u16 qid);
+bool nvme_mdev_hctrl_hq_check_op(struct nvme_mdev_hctrl *hctrl, u8 optcode);
+
+int nvme_mdev_hctrl_hq_submit(struct nvme_mdev_hctrl *hctrl,
+ u16 qid, u32 tag,
+ struct nvme_command *cmd,
+ struct nvme_ext_data_iter *datait);
+
+int nvme_mdev_hctrl_hq_poll(struct nvme_mdev_hctrl *hctrl,
+ u32 qid,
+ struct nvme_ext_cmd_result *results,
+ unsigned int max_len);
+
+void nvme_mdev_hctrl_destroy_all(void);
+
+/* io.c */
+int nvme_mdev_io_create(struct nvme_mdev_vctrl *vctrl, unsigned int cpu);
+void nvme_mdev_io_free(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_io_pause(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_io_resume(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_assert_io_not_running(struct nvme_mdev_vctrl *vctrl);
+
+/* mmio.c*/
+int nvme_mdev_mmio_create(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_mmio_open(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_mmio_reset(struct nvme_mdev_vctrl *vctrl, bool pci_reset);
+void nvme_mdev_mmio_free(struct nvme_mdev_vctrl *vctrl);
+
+int nvme_mdev_mmio_enable_dbs(struct nvme_mdev_vctrl *vctrl);
+int nvme_mdev_mmio_enable_dbs_shadow(struct nvme_mdev_vctrl *vctrl,
+ dma_addr_t sdb_iova, dma_addr_t eidx_iova);
+
+void nvme_mdev_mmio_viommu_update(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_mmio_disable_dbs(struct nvme_mdev_vctrl *vctrl);
+bool nvme_mdev_mmio_db_check(struct nvme_mdev_vctrl *vctrl,
+ u16 qid, u16 size, u16 db);
+
+/* pci.c*/
+int nvme_mdev_pci_create(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_pci_free(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_pci_setup_bar(struct nvme_mdev_vctrl *vctrl,
+ u8 bar, unsigned int size,
+ region_access_fn access_fn);
+/* adm.c*/
+void nvme_mdev_adm_process_sq(struct nvme_mdev_vctrl *vctrl);
+
+/* events.c */
+void nvme_mdev_events_init(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_events_reset(struct nvme_mdev_vctrl *vctrl);
+
+int nvme_mdev_event_request_receive(struct nvme_mdev_vctrl *vctrl, u16 cid);
+void nvme_mdev_event_process_ack(struct nvme_mdev_vctrl *vctrl, u8 log_page);
+
+void nvme_mdev_event_send(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_async_event_type type,
+ enum nvme_async_event info);
+
+u32 nvme_mdev_event_read_aen_config(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_event_set_aen_config(struct nvme_mdev_vctrl *vctrl, u32 value);
+
+/* irq.c*/
+void nvme_mdev_irqs_setup(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_irqs_reset(struct nvme_mdev_vctrl *vctrl);
+
+int nvme_mdev_irqs_enable(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_mdev_irq_mode mode);
+void nvme_mdev_irqs_disable(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_mdev_irq_mode mode);
+
+int nvme_mdev_irqs_set_triggers(struct nvme_mdev_vctrl *vctrl,
+ int start, int count, int32_t *fds);
+int nvme_mdev_irqs_set_unplug_trigger(struct nvme_mdev_vctrl *vctrl,
+ int32_t fd);
+
+void nvme_mdev_irq_raise_unplug_event(struct nvme_mdev_vctrl *vctrl,
+ unsigned int count);
+void nvme_mdev_irq_raise(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index);
+void nvme_mdev_irq_trigger(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index);
+void nvme_mdev_irq_cond_trigger(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index);
+void nvme_mdev_irq_clear(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index);
+
+/* ns.c*/
+int nvme_mdev_vns_open(struct nvme_mdev_vctrl *vctrl,
+ u32 host_nsid, unsigned int host_partid);
+int nvme_mdev_vns_destroy(struct nvme_mdev_vctrl *vctrl,
+ u32 user_nsid);
+void nvme_mdev_vns_destroy_all(struct nvme_mdev_vctrl *vctrl);
+
+struct nvme_mdev_vns *nvme_mdev_vns_from_vnsid(struct nvme_mdev_vctrl *vctrl,
+ u32 user_ns_id);
+
+int nvme_mdev_vns_print_description(struct nvme_mdev_vctrl *vctrl,
+ char *buf, unsigned int size);
+void nvme_mdev_vns_host_ns_update(struct nvme_mdev_vctrl *vctrl,
+ u32 host_nsid, bool removed);
+
+void nvme_mdev_vns_log_reset(struct nvme_mdev_vctrl *vctrl);
+
+/* vcq.c */
+int nvme_mdev_vcq_init(struct nvme_mdev_vctrl *vctrl, u16 qid,
+ dma_addr_t iova, bool cont, u16 size, int irq);
+
+int nvme_mdev_vcq_viommu_update(struct nvme_mdev_viommu *viommu,
+ struct nvme_vcq *q);
+
+void nvme_mdev_vcq_delete(struct nvme_mdev_vctrl *vctrl, u16 qid);
+void nvme_mdev_vcq_process(struct nvme_mdev_vctrl *vctrl, u16 qid,
+ bool trigger_irqs);
+
+bool nvme_mdev_vcq_flush(struct nvme_mdev_vctrl *vctrl, u16 qid);
+bool nvme_mdev_vcq_reserve_space(struct nvme_vcq *q);
+
+void nvme_mdev_vcq_write_io(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vcq *q, u16 sq_head,
+ u16 sqid, u16 cid, u16 status);
+
+void nvme_mdev_vcq_write_adm(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vcq *q, u32 dw0,
+ u16 sq_head, u16 cid, u16 status);
+/* vsq.c*/
+int nvme_mdev_vsq_init(struct nvme_mdev_vctrl *vctrl, u16 qid,
+ dma_addr_t iova, bool cont, u16 size, u16 cqid);
+
+int nvme_mdev_vsq_viommu_update(struct nvme_mdev_viommu *viommu,
+ struct nvme_vsq *q);
+
+void nvme_mdev_vsq_delete(struct nvme_mdev_vctrl *vctrl, u16 qid);
+
+bool nvme_mdev_vsq_has_data(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vsq *q);
+
+const struct nvme_command *nvme_mdev_vsq_get_cmd(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vsq *q);
+
+void nvme_mdev_vsq_cmd_done_io(struct nvme_mdev_vctrl *vctrl,
+ u16 sqid, u16 cid, u16 status);
+void nvme_mdev_vsq_cmd_done_adm(struct nvme_mdev_vctrl *vctrl,
+ u32 dw0, u16 cid, u16 status);
+bool nvme_mdev_vsq_suspend_io(struct nvme_mdev_vctrl *vctrl, u16 sqid);
+
+/* udata.c*/
+void nvme_mdev_udata_iter_setup(struct nvme_mdev_viommu *viommu,
+ struct nvme_ext_data_iter *iter);
+
+int nvme_mdev_udata_iter_set_dptr(struct nvme_ext_data_iter *it,
+ const union nvme_data_ptr *dptr, u64 size);
+
+struct nvme_ext_data_iter *
+nvme_mdev_kdata_iter_alloc(struct nvme_mdev_viommu *viommu, unsigned int size);
+
+int nvme_mdev_read_from_udata(void *dst, struct nvme_ext_data_iter *srcit,
+ u64 size);
+
+int nvme_mdev_write_to_udata(struct nvme_ext_data_iter *dstit, void *src,
+ u64 size);
+
+void *nvme_mdev_udata_queue_vmap(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova,
+ unsigned int size, bool cont);
+/* viommu.c */
+void nvme_mdev_viommu_init(struct nvme_mdev_viommu *viommu,
+ struct device *sw_dev,
+ struct device *hw_dev);
+
+int nvme_mdev_viommu_add(struct nvme_mdev_viommu *viommu, u32 flags,
+ dma_addr_t iova, u64 size);
+
+int nvme_mdev_viommu_remove(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova, u64 size);
+
+int nvme_mdev_viommu_translate(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova,
+ dma_addr_t *physical,
+ dma_addr_t *host_iova);
+
+int nvme_mdev_viommu_create_kmap(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova, struct page_map *page);
+
+void nvme_mdev_viommu_free_kmap(struct nvme_mdev_viommu *viommu,
+ struct page_map *page);
+
+void nvme_mdev_viommu_update_kmap(struct nvme_mdev_viommu *viommu,
+ struct page_map *page);
+
+void nvme_mdev_viommu_reset(struct nvme_mdev_viommu *viommu);
+
+/* some utilities */
+
+#define store_le64(address, value) (*((__le64 *)(address)) = cpu_to_le64(value))
+#define store_le32(address, value) (*((__le32 *)(address)) = cpu_to_le32(value))
+#define store_le16(address, value) (*((__le16 *)(address)) = cpu_to_le16(value))
+#define store_le8(address, value) (*((u8 *)(address)) = (value))
+
+#define load_le16(address) le16_to_cpu(*(__le16 *)(address))
+#define load_le32(address) le32_to_cpu(*(__le32 *)(address))
+
+#define store_strsp(dst, src) \
+ memcpy_and_pad(dst, sizeof(dst), src, sizeof(src) - 1, ' ')
+
+#define DNR(e) ((e) | NVME_SC_DNR)
+
+#define PAGE_ADDRESS(address) ((address) & PAGE_MASK)
+#define OFFSET_IN_PAGE(address) ((address) & ~(PAGE_MASK))
+
+#define _DBG(vctrl, fmt, ...) \
+ dev_dbg(mdev_dev((vctrl)->mdev), fmt, ##__VA_ARGS__)
+
+#define _INFO(vctrl, fmt, ...) \
+ dev_info(mdev_dev((vctrl)->mdev), fmt, ##__VA_ARGS__)
+
+#define _WARN(vctrl, fmt, ...) \
+ dev_warn(mdev_dev((vctrl)->mdev), fmt, ##__VA_ARGS__)
+
+#define mdev_to_vctrl(mdev) \
+ ((struct nvme_mdev_vctrl *)mdev_get_drvdata(mdev))
+
+#define dev_to_vctrl(mdev) \
+ mdev_to_vctrl(mdev_from_dev(dev))
+
+#define RSRV_NSID (BIT(1))
+#define RSRV_DW23 (BIT(2) | BIT(3))
+#define RSRV_MPTR (BIT(4) | BIT(5))
+
+#define RSRV_DPTR (BIT(6) | BIT(7) | BIT(8) | BIT(9))
+#define RSRV_DPTR_PRP2 (BIT(8) | BIT(9))
+
+#define RSRV_DW10_15 (BIT(10) | BIT(11) | BIT(12) | BIT(13) | BIT(14) | BIT(15))
+#define RSRV_DW11_15 (BIT(11) | BIT(12) | BIT(13) | BIT(14) | BIT(15))
+#define RSRV_DW12_15 (BIT(12) | BIT(13) | BIT(14) | BIT(15))
+#define RSRV_DW13_15 (BIT(13) | BIT(14) | BIT(15))
+#define RSRV_DW14_15 (BIT(14) | BIT(15))
+
+static inline bool check_reserved_dwords(const u32 *dwords,
+ int count, unsigned long bitmask)
+{
+ int bit;
+
+ if (WARN_ON(count > BITS_PER_TYPE(long)))
+ return false;
+
+ for_each_set_bit(bit, &bitmask, count)
+ if (dwords[bit])
+ return false;
+ return true;
+}
+
+static inline bool check_range(u64 start, u64 size, u64 end)
+{
+ u64 test = start + size;
+
+ /* check for overflow */
+ if (test < start || test < size)
+ return false;
+ return test <= end;
+}
+
+/* Rough translation of internal errors to the NVME errors */
+static inline int nvme_mdev_translate_error(int error)
+{
+ /* nvme status, including no error (NVME_SC_SUCCESS) */
+ if (error >= 0)
+ return error;
+
+ switch (error) {
+ case -ENOMEM:
+ /* no memory - truly an internal error */
+ return NVME_SC_INTERNAL;
+ case -ENOSPC:
+ /* Happens when the user sends too large a PRP list.
+ * The user shouldn't do this, since the maximum transfer size
+ * is specified in the controller caps
+ */
+ return DNR(NVME_SC_DATA_XFER_ERROR);
+ case -EFAULT:
+ /* Bad memory pointers in the prp lists */
+ return DNR(NVME_SC_DATA_XFER_ERROR);
+ case -EINVAL:
+ /* Bad prp offsets in the prp lists/command */
+ return DNR(NVME_SC_PRP_OFFSET_INVALID);
+ default:
+ /* Shouldn't happen */
+ WARN_ON_ONCE(true);
+ return NVME_SC_INTERNAL;
+ }
+}
+
+static inline bool timeout(ktime_t event, ktime_t now, unsigned long timeout_ms)
+{
+ return ktime_ms_delta(now, event) > (long)timeout_ms;
+}
+
+extern struct mdev_parent_ops mdev_fops;
+extern struct list_head nvme_mdev_vctrl_list;
+extern struct mutex nvme_mdev_vctrl_list_mutex;
+
+#endif // _MDEV_NVME_H
diff --git a/drivers/nvme/mdev/udata.c b/drivers/nvme/mdev/udata.c
new file mode 100644
index 000000000000..7af6b3f6d6aa
--- /dev/null
+++ b/drivers/nvme/mdev/udata.c
@@ -0,0 +1,390 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * User (guest) data access routines
+ * Implementation of PRP iterator in user memory
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/highmem.h>
+#include <linux/slab.h>
+#include <linux/mdev.h>
+#include <linux/nvme.h>
+#include "priv.h"
+
+#define MAX_PRP ((PAGE_SIZE / sizeof(__le64)) - 1)
+
+/* Set up a new PRP iterator */
+void nvme_mdev_udata_iter_setup(struct nvme_mdev_viommu *viommu,
+ struct nvme_ext_data_iter *iter)
+{
+ iter->viommu = viommu;
+ iter->count = 0;
+ iter->next = NULL;
+ iter->release = NULL;
+}
+
+/* Load a new prp list into the iterator. Internal */
+static int nvme_mdev_udata_iter_load_prplist(struct nvme_ext_data_iter *iter,
+ dma_addr_t iova)
+{
+ dma_addr_t data_iova;
+ int ret;
+ __le64 *map;
+
+ /* map the prp list */
+ ret = nvme_mdev_viommu_create_kmap(iter->viommu,
+ PAGE_ADDRESS(iova),
+ &iter->uprp.page);
+ if (ret)
+ return ret;
+
+ iter->uprp.index = OFFSET_IN_PAGE(iova) / (sizeof(__le64));
+
+ /* read its first entry and check its alignment */
+ map = iter->uprp.page.kmap;
+ data_iova = le64_to_cpu(map[iter->uprp.index]);
+
+ if (OFFSET_IN_PAGE(data_iova) != 0) {
+ nvme_mdev_viommu_free_kmap(iter->viommu, &iter->uprp.page);
+ return -EINVAL;
+ }
+
+ /* translate the entry to complete the setup */
+ ret = nvme_mdev_viommu_translate(iter->viommu, data_iova,
+ &iter->physical, &iter->host_iova);
+ if (ret)
+ nvme_mdev_viommu_free_kmap(iter->viommu, &iter->uprp.page);
+
+ return ret;
+}
+
+/* ->next function when the iterator points to a prp list */
+static int nvme_mdev_udata_iter_next_prplist(struct nvme_ext_data_iter *iter)
+{
+ dma_addr_t iova;
+ int ret;
+ __le64 *map = iter->uprp.page.kmap;
+
+ if (WARN_ON(iter->count <= 0))
+ return 0;
+
+ if (--iter->count == 0) {
+ nvme_mdev_viommu_free_kmap(iter->viommu, &iter->uprp.page);
+ return 0;
+ }
+
+ iter->uprp.index++;
+
+ if (iter->uprp.index < MAX_PRP || iter->count == 1) {
+ /* advance over the next pointer in the current prp list;
+ * these pointers must be page aligned
+ */
+ iova = le64_to_cpu(map[iter->uprp.index]);
+ if (OFFSET_IN_PAGE(iova) != 0)
+ return -EINVAL;
+
+ ret = nvme_mdev_viommu_translate(iter->viommu, iova,
+ &iter->physical,
+ &iter->host_iova);
+ if (ret)
+ nvme_mdev_viommu_free_kmap(iter->viommu,
+ &iter->uprp.page);
+ return ret;
+ }
+
+ /* switch to the next prp list; it must be page aligned as well */
+ iova = le64_to_cpu(map[MAX_PRP]);
+
+ if (OFFSET_IN_PAGE(iova) != 0)
+ return -EINVAL;
+
+ nvme_mdev_viommu_free_kmap(iter->viommu, &iter->uprp.page);
+ return nvme_mdev_udata_iter_load_prplist(iter, iova);
+}
+
+/* ->next function when the iterator points to a user data pointer */
+static int nvme_mdev_udata_iter_next_dptr(struct nvme_ext_data_iter *iter)
+{
+ dma_addr_t iova;
+
+ if (WARN_ON(iter->count <= 0))
+ return 0;
+
+ if (--iter->count == 0)
+ return 0;
+
+ /* we will be called only once to deal with the second
+ * pointer in the data pointer
+ */
+ iova = le64_to_cpu(iter->dptr->prp2);
+
+ if (iter->count == 1) {
+ /* only need to read one more entry, meaning
+ * the 2nd entry of the dptr.
+ * It must be page aligned
+ */
+ if (OFFSET_IN_PAGE(iova) != 0)
+ return -EINVAL;
+ return nvme_mdev_viommu_translate(iter->viommu, iova,
+ &iter->physical,
+ &iter->host_iova);
+ } else {
+ /*
+ * Second dptr entry is prp pointer, and it might not
+ * be page aligned (but QWORD aligned at least)
+ */
+ if (iova & 0x7ULL)
+ return -EINVAL;
+ iter->next = nvme_mdev_udata_iter_next_prplist;
+ return nvme_mdev_udata_iter_load_prplist(iter, iova);
+ }
+}
+
+/* Set prp list iterator to point to data pointer found in NVME command */
+int nvme_mdev_udata_iter_set_dptr(struct nvme_ext_data_iter *it,
+ const union nvme_data_ptr *dptr, u64 size)
+{
+ int ret;
+ u64 prp1 = le64_to_cpu(dptr->prp1);
+ dma_addr_t iova = PAGE_ADDRESS(prp1);
+ unsigned int page_offset = OFFSET_IN_PAGE(prp1);
+
+ /* first dptr pointer must be at least DWORD aligned*/
+ if (page_offset & 0x3)
+ return -EINVAL;
+
+ it->dptr = dptr;
+ it->next = nvme_mdev_udata_iter_next_dptr;
+ it->count = DIV_ROUND_UP_ULL(size + page_offset, PAGE_SIZE);
+
+ ret = nvme_mdev_viommu_translate(it->viommu, iova,
+ &it->physical, &it->host_iova);
+ if (ret)
+ return ret;
+
+ it->physical += page_offset;
+ it->host_iova += page_offset;
+ return 0;
+}
+
+/* ->next function when iterator points to kernel memory buffer */
+static int nvme_mdev_kdata_iter_next(struct nvme_ext_data_iter *it)
+{
+ if (WARN_ON(it->count <= 0))
+ return 0;
+
+ if (--it->count == 0)
+ return 0;
+
+ it->physical = PAGE_ADDRESS(it->physical) + PAGE_SIZE;
+ it->host_iova = PAGE_ADDRESS(it->host_iova) + PAGE_SIZE;
+ return 0;
+}
+
+/* ->release function for kdata iterator to free it after use */
+static void nvme_mdev_kdata_iter_free(struct nvme_ext_data_iter *it)
+{
+ struct device *dma_dev = it->viommu->hw_dev;
+
+ if (dma_dev)
+ dma_free_coherent(dma_dev, it->kmem.size,
+ it->kmem.data, it->kmem.dma_addr);
+ else
+ kfree(it->kmem.data);
+ kfree(it);
+}
+
+/* allocate a kernel data buffer with read iterator for nvme host device */
+struct nvme_ext_data_iter *
+nvme_mdev_kdata_iter_alloc(struct nvme_mdev_viommu *viommu, unsigned int size)
+{
+ struct nvme_ext_data_iter *it;
+
+ it = kzalloc(sizeof(*it), GFP_KERNEL);
+ if (!it)
+ return NULL;
+
+ it->viommu = viommu;
+ it->kmem.size = size;
+ if (viommu->hw_dev) {
+ it->kmem.data = dma_alloc_coherent(viommu->hw_dev, size,
+ &it->kmem.dma_addr,
+ GFP_KERNEL);
+ } else {
+ it->kmem.data = kzalloc(size, GFP_KERNEL);
+ it->kmem.dma_addr = 0;
+ }
+
+ if (!it->kmem.data) {
+ kfree(it);
+ return NULL;
+ }
+
+ it->physical = virt_to_phys(it->kmem.data);
+ it->host_iova = it->kmem.dma_addr;
+
+ it->count = DIV_ROUND_UP(size + OFFSET_IN_PAGE(it->physical),
+ PAGE_SIZE);
+
+ it->next = nvme_mdev_kdata_iter_next;
+ it->release = nvme_mdev_kdata_iter_free;
+ return it;
+}
+
+/* copy data from user data iterator to a kernel buffer */
+int nvme_mdev_read_from_udata(void *dst, struct nvme_ext_data_iter *srcit,
+ u64 size)
+{
+ int ret;
+ unsigned int srcoffset, chunk_size;
+
+ while (srcit->count && size > 0) {
+ struct page *page = pfn_to_page(PHYS_PFN(srcit->physical));
+ void *src = kmap(page);
+
+ if (!src)
+ return -ENOMEM;
+
+ srcoffset = OFFSET_IN_PAGE(srcit->physical);
+ chunk_size = min(size, (u64)PAGE_SIZE - srcoffset);
+
+ memcpy(dst, src + srcoffset, chunk_size);
+ dst += chunk_size;
+ size -= chunk_size;
+ kunmap(page);
+
+ ret = srcit->next(srcit);
+ if (ret)
+ return ret;
+ }
+ WARN_ON(size > 0);
+ return 0;
+}
+
+/* copy data from kernel buffer to user data iterator */
+int nvme_mdev_write_to_udata(struct nvme_ext_data_iter *dstit, void *src,
+ u64 size)
+{
+ int ret;
+ unsigned int dstoffset, chunk_size;
+
+ while (dstit->count && size > 0) {
+ struct page *page = pfn_to_page(PHYS_PFN(dstit->physical));
+ void *dst = kmap(page);
+
+ if (!dst)
+ return -ENOMEM;
+
+ dstoffset = OFFSET_IN_PAGE(dstit->physical);
+ chunk_size = min(size, (u64)PAGE_SIZE - dstoffset);
+
+ memcpy(dst + dstoffset, src, chunk_size);
+ src += chunk_size;
+ size -= chunk_size;
+ kunmap(page);
+
+ ret = dstit->next(dstit);
+ if (ret)
+ return ret;
+ }
+ WARN_ON(size > 0);
+ return 0;
+}
+
+/* Set prp list iterator to point to prp list found in create queue command */
+static int
+nvme_mdev_udata_iter_set_queue_prplist(struct nvme_mdev_viommu *viommu,
+ struct nvme_ext_data_iter *iter,
+ dma_addr_t iova, unsigned int size)
+{
+ if (iova & ~PAGE_MASK)
+ return -EINVAL;
+
+ nvme_mdev_udata_iter_setup(viommu, iter);
+ iter->count = DIV_ROUND_UP(size, PAGE_SIZE);
+ iter->next = nvme_mdev_udata_iter_next_prplist;
+ return nvme_mdev_udata_iter_load_prplist(iter, iova);
+}
+
+/* Map an SQ/CQ queue (contiguous in guest physical memory) */
+static int nvme_mdev_queue_getpages_contiguous(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova,
+ struct page **pages,
+ unsigned int npages)
+{
+ int ret;
+ unsigned int i;
+
+ dma_addr_t host_page_iova;
+ phys_addr_t physical;
+
+ for (i = 0; i < npages; i++) {
+ ret = nvme_mdev_viommu_translate(viommu, iova + (PAGE_SIZE * i),
+ &physical,
+ &host_page_iova);
+ if (ret)
+ return ret;
+ pages[i] = pfn_to_page(PHYS_PFN(physical));
+ }
+ return 0;
+}
+
+/* Map an SQ/CQ queue (non contiguous in guest physical memory) */
+static int nvme_mdev_queue_getpages_prplist(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova,
+ struct page **pages,
+ unsigned int npages)
+{
+ int ret, i = 0;
+ struct nvme_ext_data_iter uprpit;
+
+ ret = nvme_mdev_udata_iter_set_queue_prplist(viommu,
+ &uprpit, iova,
+ npages * PAGE_SIZE);
+ if (ret)
+ return ret;
+
+ while (uprpit.count && i < npages) {
+ pages[i++] = pfn_to_page(PHYS_PFN(uprpit.physical));
+ ret = uprpit.next(&uprpit);
+ if (ret)
+ return ret;
+ }
+ return 0;
+}
+
+/* map a SQ/CQ queue to host physical memory */
+void *nvme_mdev_udata_queue_vmap(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova,
+ unsigned int size,
+ bool cont)
+{
+ int ret;
+ unsigned int npages;
+ void *map;
+ struct page **pages;
+
+ /* queue must be page aligned */
+ if (OFFSET_IN_PAGE(iova) != 0)
+ return ERR_PTR(-EINVAL);
+
+ npages = DIV_ROUND_UP(size, PAGE_SIZE);
+ pages = kcalloc(npages, sizeof(struct page *), GFP_KERNEL);
+ if (!pages)
+ return ERR_PTR(-ENOMEM);
+
+ ret = cont ?
+ nvme_mdev_queue_getpages_contiguous(viommu, iova, pages, npages)
+ : nvme_mdev_queue_getpages_prplist(viommu, iova, pages, npages);
+
+ if (ret) {
+ map = ERR_PTR(ret);
+ goto out;
+ }
+
+ map = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
+ if (!map)
+ map = ERR_PTR(-ENOMEM);
+out:
+ kfree(pages);
+ return map;
+}
diff --git a/drivers/nvme/mdev/vcq.c b/drivers/nvme/mdev/vcq.c
new file mode 100644
index 000000000000..7702137eb8bc
--- /dev/null
+++ b/drivers/nvme/mdev/vcq.c
@@ -0,0 +1,209 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Virtual NVMe completion queue implementation
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include "priv.h"
+
+/* Create new virtual completion queue */
+int nvme_mdev_vcq_init(struct nvme_mdev_vctrl *vctrl, u16 qid,
+ dma_addr_t iova, bool cont, u16 size, int irq)
+{
+ struct nvme_vcq *q = &vctrl->vcqs[qid];
+ int ret;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ q->iova = iova;
+ q->cont = cont;
+ q->data = NULL;
+ q->qid = qid;
+ q->size = size;
+ q->tail = 0;
+ q->phase = true;
+ q->irq = irq;
+ q->pending = 0;
+ q->head = 0;
+
+ ret = nvme_mdev_vcq_viommu_update(&vctrl->viommu, q);
+ if (ret && (ret != -EFAULT))
+ return ret;
+
+ _DBG(vctrl, "VCQ: create qid=%d contig=%d depth=%d irq=%d\n",
+ qid, cont, size, irq);
+
+ set_bit(qid, vctrl->vcq_en);
+
+ vctrl->mmio.dbs[q->qid].cqh = 0;
+ vctrl->mmio.eidxs[q->qid].cqh = 0;
+ return 0;
+}
+
+/* Update the kernel mapping of the queue */
+int nvme_mdev_vcq_viommu_update(struct nvme_mdev_viommu *viommu,
+ struct nvme_vcq *q)
+{
+ void *data;
+
+ if (q->data)
+ vunmap((void *)q->data);
+
+ data = nvme_mdev_udata_queue_vmap(viommu, q->iova,
+ (unsigned int)q->size *
+ sizeof(struct nvme_completion),
+ q->cont);
+
+ q->data = IS_ERR(data) ? NULL : data;
+ return IS_ERR(data) ? PTR_ERR(data) : 0;
+}
+
+/* Delete a virtual completion queue */
+void nvme_mdev_vcq_delete(struct nvme_mdev_vctrl *vctrl, u16 qid)
+{
+ struct nvme_vcq *q = &vctrl->vcqs[qid];
+
+ lockdep_assert_held(&vctrl->lock);
+
+ if (q->data)
+ vunmap((void *)q->data);
+ q->data = NULL;
+
+ _DBG(vctrl, "VCQ: delete qid=%d\n", q->qid);
+ clear_bit(qid, vctrl->vcq_en);
+}
+
+/* Move queue tail one item forward */
+static void nvme_mdev_vcq_advance_tail(struct nvme_vcq *q)
+{
+ if (++q->tail == q->size) {
+ q->tail = 0;
+ q->phase = !q->phase;
+ }
+}
+
+/* Move queue head one item forward */
+static void nvme_mdev_vcq_advance_head(struct nvme_vcq *q)
+{
+ q->head++;
+ if (q->head == q->size)
+ q->head = 0;
+}
+
+/* Process a virtual completion queue */
+void nvme_mdev_vcq_process(struct nvme_mdev_vctrl *vctrl, u16 qid,
+ bool trigger_irqs)
+{
+ struct nvme_vcq *q = &vctrl->vcqs[qid];
+ u16 new_head;
+ u32 eidx;
+
+ if (!vctrl->mmio.dbs || !vctrl->mmio.eidxs)
+ return;
+
+ new_head = le32_to_cpu(vctrl->mmio.dbs[qid].cqh);
+
+ if (new_head != q->head) {
+ /* bad head doorbell value - can't process */
+ if (!nvme_mdev_mmio_db_check(vctrl, q->qid, q->size, new_head))
+ return;
+
+ while (q->head != new_head) {
+ nvme_mdev_vcq_advance_head(q);
+ WARN_ON_ONCE(q->pending == 0);
+ if (q->pending > 0)
+ q->pending--;
+ }
+
+ eidx = q->head + (q->size >> 1);
+ if (eidx >= q->size)
+ eidx -= q->size;
+ vctrl->mmio.eidxs[q->qid].cqh = cpu_to_le32(eidx);
+ }
+
+ if (q->irq != -1 && trigger_irqs) {
+ if (q->tail != new_head)
+ nvme_mdev_irq_cond_trigger(vctrl, q->irq);
+ else
+ nvme_mdev_irq_clear(vctrl, q->irq);
+ }
+}
+
+/* flush interrupts on a completion queue */
+bool nvme_mdev_vcq_flush(struct nvme_mdev_vctrl *vctrl, u16 qid)
+{
+ struct nvme_vcq *q = &vctrl->vcqs[qid];
+ u16 new_head = le32_to_cpu(vctrl->mmio.dbs[qid].cqh);
+
+ if (new_head == q->tail || q->irq == -1)
+ return false;
+
+ nvme_mdev_irq_trigger(vctrl, q->irq);
+ nvme_mdev_irq_clear(vctrl, q->irq);
+ return true;
+}
+
+/* Reserve space for one completion entry, that will be added later */
+bool nvme_mdev_vcq_reserve_space(struct nvme_vcq *q)
+{
+ /* TODOLATER: track passed-through commands.
+ * If we pass a command through to the host and never receive a
+ * response, we will keep space for the response in the CQ forever,
+ * eventually stalling the CQ.
+ * In this case, the guest is still expected to recover by resetting
+ * our controller.
+ * This can be fixed by tracking all the commands that we send
+ * to the host.
+ */
+
+ if (q->pending == q->size - 1)
+ return false;
+ q->pending++;
+ return true;
+}
+
+/* Write a new item into the completion queue (IO version) */
+void nvme_mdev_vcq_write_io(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vcq *q, u16 sq_head,
+ u16 sqid, u16 cid, u16 status)
+{
+ volatile __le64 *qw = (volatile __le64 *)(&q->data[q->tail]);
+
+ u64 phase = q->phase ? (0x1ULL << 48) : 0;
+ u64 qw1 =
+ ((u64)sq_head) |
+ ((u64)sqid << 16) |
+ ((u64)cid << 32) |
+ ((u64)status << 49) | phase;
+
+ WRITE_ONCE(qw[1], cpu_to_le64(qw1));
+
+ nvme_mdev_vcq_advance_tail(q);
+ if (q->irq != -1)
+ nvme_mdev_irq_raise(vctrl, q->irq);
+}
+
+/* Write a new item into the completion queue (ADMIN version) */
+void nvme_mdev_vcq_write_adm(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vcq *q, u32 dw0,
+ u16 sq_head, u16 cid, u16 status)
+{
+ volatile __le64 *qw = (volatile __le64 *)(&q->data[q->tail]);
+
+ u64 phase = q->phase ? (0x1ULL << 48) : 0;
+ u64 qw1 =
+ ((u64)sq_head) |
+ ((u64)cid << 32) |
+ ((u64)status << 49) | phase;
+
+ WRITE_ONCE(qw[0], cpu_to_le64(dw0));
+ /* ensure that the guest sees the phase bit flip last */
+ wmb();
+ WRITE_ONCE(qw[1], cpu_to_le64(qw1));
+
+ nvme_mdev_vcq_advance_tail(q);
+ if (q->irq != -1)
+ nvme_mdev_irq_trigger(vctrl, q->irq);
+}
diff --git a/drivers/nvme/mdev/vctrl.c b/drivers/nvme/mdev/vctrl.c
new file mode 100644
index 000000000000..6f087b8fb2fc
--- /dev/null
+++ b/drivers/nvme/mdev/vctrl.c
@@ -0,0 +1,515 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Virtual NVMe controller implementation
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <linux/mdev.h>
+#include <linux/nvme.h>
+#include "priv.h"
+
+bool nvme_mdev_vctrl_is_dead(struct nvme_mdev_vctrl *vctrl)
+{
+ return (vctrl->mmio.csts & (NVME_CSTS_CFS | NVME_CSTS_SHST_MASK)) != 0;
+}
+
+/* Setup the controller guid and serial */
+static void nvme_mdev_vctrl_init_id(struct nvme_mdev_vctrl *vctrl)
+{
+ guid_t guid = mdev_uuid(vctrl->mdev);
+
+ snprintf(vctrl->subnqn, sizeof(vctrl->subnqn),
+ "nqn.2014-08.org.nvmexpress:uuid:%pUl", guid.b);
+
+ snprintf(vctrl->serial, sizeof(vctrl->serial), "%pUl", guid.b);
+}
+
+/* Change the IO thread CPU pinning */
+void nvme_mdev_vctrl_bind_iothread(struct nvme_mdev_vctrl *vctrl,
+ unsigned int cpu)
+{
+ mutex_lock(&vctrl->lock);
+
+ if (cpu == vctrl->iothread_cpu)
+ goto out;
+
+ nvme_mdev_io_free(vctrl);
+ nvme_mdev_io_create(vctrl, cpu);
+out:
+ mutex_unlock(&vctrl->lock);
+}
+
+/* Change the status of support for shadow doorbell */
+int nvme_mdev_vctrl_set_shadow_doorbell_supported(struct nvme_mdev_vctrl *vctrl,
+ bool enable)
+{
+ if (vctrl->inuse)
+ return -EBUSY;
+ vctrl->mmio.shadow_db_supported = enable;
+ return 0;
+}
+
+/* Called when memory mapping are changed. Propagate this to all kmap users */
+static void nvme_mdev_vctrl_viommu_update(struct nvme_mdev_vctrl *vctrl)
+{
+ u16 qid;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ if (!(vctrl->mmio.csts & NVME_CSTS_RDY))
+ return;
+
+ /* update mappings for submission and completion queues */
+ for_each_set_bit(qid, vctrl->vsq_en, MAX_VIRTUAL_QUEUES)
+ nvme_mdev_vsq_viommu_update(&vctrl->viommu, &vctrl->vsqs[qid]);
+
+ for_each_set_bit(qid, vctrl->vcq_en, MAX_VIRTUAL_QUEUES)
+ nvme_mdev_vcq_viommu_update(&vctrl->viommu, &vctrl->vcqs[qid]);
+
+ /* update mapping for the shadow doorbells */
+ nvme_mdev_mmio_viommu_update(vctrl);
+}
+
+/* Create a new virtual controller */
+struct nvme_mdev_vctrl *nvme_mdev_vctrl_create(struct mdev_device *mdev,
+ struct nvme_mdev_hctrl *hctrl,
+ unsigned int max_host_queues)
+{
+ int ret;
+ struct nvme_mdev_vctrl *vctrl = kzalloc_node(sizeof(*vctrl),
+ GFP_KERNEL, hctrl->node);
+ if (!vctrl)
+ return ERR_PTR(-ENOMEM);
+
+ /* Basic init */
+ vctrl->hctrl = hctrl;
+ vctrl->mdev = mdev;
+ vctrl->max_host_hw_queues = max_host_queues;
+ vctrl->viommu.vctrl = vctrl;
+
+ kref_init(&vctrl->ref);
+ mutex_init(&vctrl->lock);
+ nvme_mdev_vctrl_init_id(vctrl);
+ INIT_LIST_HEAD(&vctrl->host_hw_queues);
+
+ get_device(mdev_dev(mdev));
+ mdev_set_drvdata(mdev, vctrl);
+
+ /* reserve host IO queues */
+ if (!nvme_mdev_hctrl_hqs_reserve(hctrl, max_host_queues)) {
+ ret = -ENOSPC;
+ goto error1;
+ }
+
+ /* default feature values*/
+ vctrl->arb_burst_shift = 3;
+ vctrl->mmio.shadow_db_supported = use_shadow_doorbell;
+
+ ret = nvme_mdev_pci_create(vctrl);
+ if (ret)
+ goto error2;
+
+ ret = nvme_mdev_mmio_create(vctrl);
+ if (ret)
+ goto error3;
+
+ nvme_mdev_irqs_setup(vctrl);
+
+ /* Create the IO thread */
+ /*TODOLATER: IO: smp_processor_id() is not an ideal pinning choice */
+ ret = nvme_mdev_io_create(vctrl, smp_processor_id());
+ if (ret)
+ goto error4;
+
+ _INFO(vctrl, "device created using %d host queues\n", max_host_queues);
+ return vctrl;
+error4:
+ nvme_mdev_mmio_free(vctrl);
+error3:
+ nvme_mdev_pci_free(vctrl);
+error2:
+ nvme_mdev_hctrl_hqs_unreserve(hctrl, max_host_queues);
+error1:
+ put_device(mdev_dev(mdev));
+ kfree(vctrl);
+ return ERR_PTR(ret);
+}
+
+/* Try to destroy a vctrl */
+int nvme_mdev_vctrl_destroy(struct nvme_mdev_vctrl *vctrl)
+{
+ mutex_lock(&vctrl->lock);
+
+ if (vctrl->inuse) {
+ /* vctrl has mdev users */
+ mutex_unlock(&vctrl->lock);
+ return -EBUSY;
+ }
+
+ _INFO(vctrl, "destroying the device\n");
+
+ mdev_set_drvdata(vctrl->mdev, NULL);
+ mutex_unlock(&vctrl->lock);
+
+ mutex_lock(&nvme_mdev_vctrl_list_mutex);
+ list_del_init(&vctrl->link);
+ mutex_unlock(&nvme_mdev_vctrl_list_mutex);
+
+ mutex_lock(&vctrl->lock); /* only for lockdep checks */
+ nvme_mdev_io_free(vctrl);
+ nvme_mdev_vns_destroy_all(vctrl);
+ __nvme_mdev_vctrl_reset(vctrl, true);
+
+ nvme_mdev_hctrl_hqs_unreserve(vctrl->hctrl, vctrl->max_host_hw_queues);
+
+ nvme_mdev_pci_free(vctrl);
+ nvme_mdev_mmio_free(vctrl);
+
+ mutex_unlock(&vctrl->lock);
+
+ put_device(mdev_dev(vctrl->mdev));
+ _INFO(vctrl, "device is destroyed\n");
+ kfree(vctrl);
+ return 0;
+}
+
+/* Suspend a running virtual controller.
+ * Called when the host needs to regain full control of the device
+ */
+void nvme_mdev_vctrl_pause(struct nvme_mdev_vctrl *vctrl)
+{
+ mutex_lock(&vctrl->lock);
+ if (!vctrl->vctrl_paused) {
+ _INFO(vctrl, "pausing the virtual controller\n");
+ if (vctrl->mmio.csts & NVME_CSTS_RDY)
+ nvme_mdev_io_pause(vctrl);
+ vctrl->vctrl_paused = true;
+ }
+ mutex_unlock(&vctrl->lock);
+}
+
+/* Resume a virtual controller.
+ * Called when the host is done with exclusive access and allows us
+ * to attach to the controller again
+ */
+void nvme_mdev_vctrl_resume(struct nvme_mdev_vctrl *vctrl)
+{
+ mutex_lock(&vctrl->lock);
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ if (vctrl->vctrl_paused) {
+ _INFO(vctrl, "resuming the virtual controller\n");
+
+ if (vctrl->mmio.csts & NVME_CSTS_RDY) {
+ /* handle all pending admin commands */
+ nvme_mdev_adm_process_sq(vctrl);
+ /* start the IO thread again if it was stopped or
+ * if we had doorbell writes during the pause
+ */
+ nvme_mdev_io_resume(vctrl);
+ }
+ vctrl->vctrl_paused = false;
+ }
+ mutex_unlock(&vctrl->lock);
+}
+
+/* Called when emulator opens the virtual device */
+int nvme_mdev_vctrl_open(struct nvme_mdev_vctrl *vctrl)
+{
+ struct device *dma_dev = NULL;
+ int ret = 0;
+
+ mutex_lock(&vctrl->lock);
+
+ if (vctrl->hctrl->removing) {
+ ret = -ENODEV;
+ goto out;
+ }
+
+ if (vctrl->inuse) {
+ ret = -EBUSY;
+ goto out;
+ }
+
+ _INFO(vctrl, "device is opened\n");
+
+ if (vctrl->hctrl->nvme_ctrl->ops->flags & NVME_F_MDEV_DMA_SUPPORTED)
+ dma_dev = vctrl->hctrl->nvme_ctrl->dev;
+
+ nvme_mdev_viommu_init(&vctrl->viommu, mdev_dev(vctrl->mdev), dma_dev);
+
+ nvme_mdev_mmio_open(vctrl);
+ vctrl->inuse = true;
+out:
+ mutex_unlock(&vctrl->lock);
+ return ret;
+}
+
+/* Called when emulator closes the virtual device */
+void nvme_mdev_vctrl_release(struct nvme_mdev_vctrl *vctrl)
+{
+ mutex_lock(&vctrl->lock);
+ nvme_mdev_io_pause(vctrl);
+
+ /* Remove the guest DMA mappings - a new user that opens the
+ * device might be a different guest
+ */
+ nvme_mdev_viommu_reset(&vctrl->viommu);
+
+ /* Reset the controller to a clean state for a new user */
+ __nvme_mdev_vctrl_reset(vctrl, false);
+ nvme_mdev_irqs_reset(vctrl);
+ vctrl->inuse = false;
+ mutex_unlock(&vctrl->lock);
+
+ WARN_ON(!list_empty(&vctrl->host_hw_queues));
+
+ _INFO(vctrl, "device is released\n");
+
+ /* If we are released after request to remove the host controller
+ * we are dead, won't be opened again ever, so remove ourselves
+ */
+ if (vctrl->hctrl->removing)
+ nvme_mdev_vctrl_destroy(vctrl);
+}
+
+/* Called each time the controller is reset (CC.EN <= 0 or VM level reset) */
+void __nvme_mdev_vctrl_reset(struct nvme_mdev_vctrl *vctrl, bool pci_reset)
+{
+ lockdep_assert_held(&vctrl->lock);
+
+ if ((vctrl->mmio.csts & NVME_CSTS_RDY) &&
+ !(vctrl->mmio.csts & NVME_CSTS_SHST_MASK)) {
+ _DBG(vctrl, "unsafe reset (CSTS.RDY==1)\n");
+ nvme_mdev_io_pause(vctrl);
+ nvme_mdev_vctrl_disable(vctrl);
+ }
+ nvme_mdev_mmio_reset(vctrl, pci_reset);
+}
+
+/* Set up the initial admin queues and doorbells */
+bool nvme_mdev_vctrl_enable(struct nvme_mdev_vctrl *vctrl,
+ dma_addr_t cqiova, dma_addr_t sqiova, u32 sizes)
+{
+ int ret;
+ u16 cqentries, sqentries;
+
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ lockdep_assert_held(&vctrl->lock);
+
+ sqentries = (sizes & 0xFFFF) + 1;
+ cqentries = (sizes >> 16) + 1;
+
+ if (cqentries > 4096 || cqentries < 2)
+ return false;
+ if (sqentries > 4096 || sqentries < 2)
+ return false;
+
+ ret = nvme_mdev_mmio_enable_dbs(vctrl);
+ if (ret)
+ goto error0;
+
+ ret = nvme_mdev_vcq_init(vctrl, 0, cqiova, true, cqentries, 0);
+ if (ret)
+ goto error1;
+
+ ret = nvme_mdev_vsq_init(vctrl, 0, sqiova, true, sqentries, 0);
+ if (ret)
+ goto error2;
+
+ nvme_mdev_events_init(vctrl);
+
+ if (!vctrl->mmio.shadow_db_supported) {
+ /* start polling right away to support admin queue */
+ vctrl->io_idle = false;
+ nvme_mdev_io_resume(vctrl);
+ }
+
+ return true;
+error2:
+ nvme_mdev_vcq_delete(vctrl, 0);
+error1:
+ nvme_mdev_mmio_disable_dbs(vctrl);
+error0:
+ return false;
+}
+
+/* Destroy all IO/admin queues on the controller */
+void nvme_mdev_vctrl_disable(struct nvme_mdev_vctrl *vctrl)
+{
+ u16 sqid, cqid;
+
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ lockdep_assert_held(&vctrl->lock);
+
+ nvme_mdev_events_reset(vctrl);
+ nvme_mdev_vns_log_reset(vctrl);
+
+ sqid = 1;
+ for_each_set_bit_from(sqid, vctrl->vsq_en, MAX_VIRTUAL_QUEUES)
+ nvme_mdev_vsq_delete(vctrl, sqid);
+
+ cqid = 1;
+ for_each_set_bit_from(cqid, vctrl->vcq_en, MAX_VIRTUAL_QUEUES)
+ nvme_mdev_vcq_delete(vctrl, cqid);
+
+ nvme_mdev_vsq_delete(vctrl, 0);
+ nvme_mdev_vcq_delete(vctrl, 0);
+
+ nvme_mdev_mmio_disable_dbs(vctrl);
+ vctrl->io_idle = true;
+}
+
+/* External reset */
+void nvme_mdev_vctrl_reset(struct nvme_mdev_vctrl *vctrl)
+{
+ mutex_lock(&vctrl->lock);
+ _INFO(vctrl, "reset\n");
+ __nvme_mdev_vctrl_reset(vctrl, true);
+ mutex_unlock(&vctrl->lock);
+}
+
+/* Add an IO region */
+void nvme_mdev_vctrl_add_region(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index, unsigned int size,
+ region_access_fn access_fn)
+{
+ struct nvme_mdev_io_region *region = &vctrl->regions[index];
+
+ region->size = size;
+ region->rw = access_fn;
+ region->mmap_ops = NULL;
+}
+
+/* Enable mmap window on an IO region */
+void nvme_mdev_vctrl_region_set_mmap(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index,
+ unsigned int offset,
+ unsigned int size,
+ const struct vm_operations_struct *ops)
+{
+ struct nvme_mdev_io_region *region = &vctrl->regions[index];
+
+ region->mmap_area_start = offset;
+ region->mmap_area_size = size;
+ region->mmap_ops = ops;
+}
+
+/* Disable mmap window on an IO region */
+void nvme_mdev_vctrl_region_disable_mmap(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index)
+{
+ struct nvme_mdev_io_region *region = &vctrl->regions[index];
+
+ region->mmap_area_start = 0;
+ region->mmap_area_size = 0;
+ region->mmap_ops = NULL;
+}
+
+/* Allocate a host IO queue */
+int nvme_mdev_vctrl_hq_alloc(struct nvme_mdev_vctrl *vctrl)
+{
+ struct nvme_mdev_hq *hq = NULL, *tmp;
+ int hwqcount = 0, ret;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ list_for_each_entry(tmp, &vctrl->host_hw_queues, link) {
+ if (!hq || tmp->usecount < hq->usecount)
+ hq = tmp;
+ hwqcount++;
+ }
+
+ if (hwqcount < vctrl->max_host_hw_queues) {
+ ret = nvme_mdev_hctrl_hq_alloc(vctrl->hctrl);
+ if (ret < 0)
+ return ret;
+
+ hq = kzalloc_node(sizeof(*hq), GFP_KERNEL, vctrl->hctrl->node);
+ if (!hq) {
+ nvme_mdev_hctrl_hq_free(vctrl->hctrl, ret);
+ return -ENOMEM;
+ }
+
+ hq->hqid = ret;
+ hq->usecount = 1;
+ list_add_tail(&hq->link, &vctrl->host_hw_queues);
+ } else {
+ hq->usecount++;
+ }
+ return hq->hqid;
+}
+
+/* Free a host IO queue */
+void nvme_mdev_vctrl_hq_free(struct nvme_mdev_vctrl *vctrl, u16 hqid)
+{
+ struct nvme_mdev_hq *hq;
+
+ lockdep_assert_held(&vctrl->lock);
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ list_for_each_entry(hq, &vctrl->host_hw_queues, link)
+ if (hq->hqid == hqid) {
+ if (--hq->usecount > 0)
+ return;
+ nvme_mdev_hctrl_hq_free(vctrl->hctrl, hq->hqid);
+ list_del(&hq->link);
+ kfree(hq);
+ return;
+ }
+ WARN_ON(1);
+}
+
+/* get current list of host queues */
+unsigned int nvme_mdev_vctrl_hqs_list(struct nvme_mdev_vctrl *vctrl, u16 *out)
+{
+ struct nvme_mdev_hq *q;
+ unsigned int i = 0;
+
+ list_for_each_entry(q, &vctrl->host_hw_queues, link) {
+ out[i++] = q->hqid;
+ if (WARN_ON(i > MAX_HOST_QUEUES))
+ break;
+ }
+ return i;
+}
+
+/* add a user memory mapping */
+int nvme_mdev_vctrl_viommu_map(struct nvme_mdev_vctrl *vctrl, u32 flags,
+ dma_addr_t iova, u64 size)
+{
+ int ret;
+
+ mutex_lock(&vctrl->lock);
+
+ nvme_mdev_io_pause(vctrl);
+ ret = nvme_mdev_viommu_add(&vctrl->viommu, flags, iova, size);
+ nvme_mdev_vctrl_viommu_update(vctrl);
+ nvme_mdev_io_resume(vctrl);
+
+ mutex_unlock(&vctrl->lock);
+ return ret;
+}
+
+/* remove a user memory mapping */
+int nvme_mdev_vctrl_viommu_unmap(struct nvme_mdev_vctrl *vctrl,
+ dma_addr_t iova, u64 size)
+{
+ int ret;
+
+ mutex_lock(&vctrl->lock);
+
+ nvme_mdev_io_pause(vctrl);
+ ret = nvme_mdev_viommu_remove(&vctrl->viommu, iova, size);
+ nvme_mdev_vctrl_viommu_update(vctrl);
+ nvme_mdev_io_resume(vctrl);
+
+ mutex_unlock(&vctrl->lock);
+ return ret;
+}
diff --git a/drivers/nvme/mdev/viommu.c b/drivers/nvme/mdev/viommu.c
new file mode 100644
index 000000000000..31b86e8f5768
--- /dev/null
+++ b/drivers/nvme/mdev/viommu.c
@@ -0,0 +1,322 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Virtual IOMMU - mapping user memory to the real device
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/highmem.h>
+#include <linux/slab.h>
+#include <linux/mdev.h>
+#include <linux/vmalloc.h>
+#include <linux/nvme.h>
+#include <linux/iommu.h>
+#include <linux/interval_tree_generic.h>
+#include "priv.h"
+
+struct mem_mapping {
+ struct rb_node rb;
+ struct list_head link;
+
+ dma_addr_t __subtree_last;
+	dma_addr_t iova_start; /* first iova in this mapping */
+	dma_addr_t iova_last; /* last iova in this mapping */
+
+	unsigned long pfn; /* first host PFN of this mapping */
+	dma_addr_t host_iova; /* DMA address of this mapping for the real device */
+};
+
+#define map_len(m) (((m)->iova_last - (m)->iova_start) + 1ULL)
+#define map_pages(m) (map_len(m) >> PAGE_SHIFT)
+#define START(node) ((node)->iova_start)
+#define LAST(node) ((node)->iova_last)
+
+INTERVAL_TREE_DEFINE(struct mem_mapping, rb, dma_addr_t, __subtree_last,
+ START, LAST, static inline, viommu_int_tree);
+
+static void nvme_mdev_viommu_dbg_dma_range(struct nvme_mdev_viommu *viommu,
+ struct mem_mapping *map,
+ const char *action)
+{
+ dma_addr_t iova_start = map->iova_start;
+ dma_addr_t iova_end = map->iova_start + map_len(map) - 1;
+ dma_addr_t hiova_start = map->host_iova;
+ dma_addr_t hiova_end = map->host_iova + map_len(map) - 1;
+
+ _DBG(viommu->vctrl,
+ "vIOMMU: %s RW IOVA %pad-%pad -> DMA %pad-%pad\n",
+ action, &iova_start, &iova_end, &hiova_start, &hiova_end);
+}
+
+/* unpin N pages starting at the given IOVA */
+static void nvme_mdev_viommu_unpin_pages(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova, int n)
+{
+ int i;
+
+ for (i = 0; i < n; i++) {
+ unsigned long user_pfn = (iova >> PAGE_SHIFT) + i;
+ int ret = vfio_unpin_pages(viommu->sw_dev, &user_pfn, 1);
+
+ WARN_ON(ret != 1);
+ }
+}
+
+/* User memory init code*/
+void nvme_mdev_viommu_init(struct nvme_mdev_viommu *viommu,
+ struct device *sw_dev,
+ struct device *hw_dev)
+{
+ viommu->sw_dev = sw_dev;
+ viommu->hw_dev = hw_dev;
+ viommu->maps_tree = RB_ROOT_CACHED;
+ INIT_LIST_HEAD(&viommu->maps_list);
+}
+
+/* User memory teardown code */
+void nvme_mdev_viommu_reset(struct nvme_mdev_viommu *viommu)
+{
+ nvme_mdev_viommu_remove(viommu, 0, 0xFFFFFFFFFFFFFFFFULL);
+ WARN_ON(!list_empty(&viommu->maps_list));
+}
+
+/* Adds a new range of user memory*/
+int nvme_mdev_viommu_add(struct nvme_mdev_viommu *viommu,
+ u32 flags,
+ dma_addr_t iova,
+ u64 size)
+{
+ u64 offset;
+ dma_addr_t iova_end = iova + size - 1;
+ struct mem_mapping *map = NULL, *tmp;
+ LIST_HEAD(new_mappings_list);
+ int ret;
+
+ if (!(flags & VFIO_DMA_MAP_FLAG_READ) ||
+ !(flags & VFIO_DMA_MAP_FLAG_WRITE)) {
+ const char *type = "none";
+
+ if (flags & VFIO_DMA_MAP_FLAG_READ)
+ type = "RO";
+ else if (flags & VFIO_DMA_MAP_FLAG_WRITE)
+ type = "WO";
+
+ _DBG(viommu->vctrl, "vIOMMU: IGN %s IOVA %pad-%pad\n",
+ type, &iova, &iova_end);
+ return 0;
+ }
+
+ WARN_ON_ONCE(nvme_mdev_viommu_remove(viommu, iova, size) != 0);
+
+ if (WARN_ON_ONCE(size & ~PAGE_MASK))
+ return -EINVAL;
+
+	// pin all the pages via VFIO
+ for (offset = 0; offset < size; offset += PAGE_SIZE) {
+ unsigned long vapfn = ((iova + offset) >> PAGE_SHIFT), pa_pfn;
+
+ ret = vfio_pin_pages(viommu->sw_dev,
+ &vapfn, 1,
+ VFIO_DMA_MAP_FLAG_READ |
+ VFIO_DMA_MAP_FLAG_WRITE,
+ &pa_pfn);
+
+ if (ret != 1) {
+			/* sadly, the mdev API doesn't return an error code */
+ ret = -EFAULT;
+
+ _DBG(viommu->vctrl,
+ "vIOMMU: ADD RW IOVA %pad - pin failed\n",
+ &iova);
+ goto unwind;
+ }
+
+ // new mapping needed
+ if (!map || map->pfn + map_pages(map) != pa_pfn) {
+ int node = viommu->hw_dev ?
+ dev_to_node(viommu->hw_dev) : NUMA_NO_NODE;
+
+ map = kzalloc_node(sizeof(*map), GFP_KERNEL, node);
+
+ if (WARN_ON(!map)) {
+ vfio_unpin_pages(viommu->sw_dev, &vapfn, 1);
+ ret = -ENOMEM;
+ goto unwind;
+ }
+ map->iova_start = iova + offset;
+ map->iova_last = iova + offset + PAGE_SIZE - 1ULL;
+ map->pfn = pa_pfn;
+ map->host_iova = 0;
+ list_add_tail(&map->link, &new_mappings_list);
+ } else {
+ // current map can be extended
+ map->iova_last += PAGE_SIZE;
+ }
+ }
+
+ // DMA mapping the pages
+ list_for_each_entry_safe(map, tmp, &new_mappings_list, link) {
+ if (viommu->hw_dev) {
+ map->host_iova =
+ dma_map_page(viommu->hw_dev,
+ pfn_to_page(map->pfn),
+ 0,
+ map_len(map),
+ DMA_BIDIRECTIONAL);
+
+ ret = dma_mapping_error(viommu->hw_dev, map->host_iova);
+ if (ret) {
+ _DBG(viommu->vctrl,
+ "vIOMMU: ADD RW IOVA %pad-%pad - DMA map failed\n",
+ &iova, &iova_end);
+ goto unwind;
+ }
+ }
+
+ nvme_mdev_viommu_dbg_dma_range(viommu, map, "ADD");
+ list_del(&map->link);
+ list_add_tail(&map->link, &viommu->maps_list);
+ viommu_int_tree_insert(map, &viommu->maps_tree);
+ }
+ return 0;
+unwind:
+ list_for_each_entry_safe(map, tmp, &new_mappings_list, link) {
+ nvme_mdev_viommu_unpin_pages(viommu, map->iova_start,
+ map_pages(map));
+
+ list_del(&map->link);
+ kfree(map);
+ }
+ nvme_mdev_viommu_remove(viommu, iova, size);
+ return ret;
+}
+
+/* Removes a range of user memory*/
+int nvme_mdev_viommu_remove(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova,
+ u64 size)
+{
+ struct mem_mapping *map = NULL, *tmp;
+ dma_addr_t last_iova = iova + (size) - 1ULL;
+ LIST_HEAD(remove_list);
+ int count = 0;
+
+ /* find out all the relevant ranges */
+ map = viommu_int_tree_iter_first(&viommu->maps_tree, iova, last_iova);
+ while (map) {
+ list_del(&map->link);
+ list_add_tail(&map->link, &remove_list);
+ map = viommu_int_tree_iter_next(map, iova, last_iova);
+ }
+
+ /* remove them */
+ list_for_each_entry_safe(map, tmp, &remove_list, link) {
+ count++;
+
+ nvme_mdev_viommu_dbg_dma_range(viommu, map, "DEL");
+ if (viommu->hw_dev)
+ dma_unmap_page(viommu->hw_dev, map->host_iova,
+ map_len(map), DMA_BIDIRECTIONAL);
+
+ nvme_mdev_viommu_unpin_pages(viommu, map->iova_start,
+ map_pages(map));
+
+ viommu_int_tree_remove(map, &viommu->maps_tree);
+ kfree(map);
+ }
+ return count;
+}
+
+/* Translate an IOVA to a physical address and read device bus address */
+int nvme_mdev_viommu_translate(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova,
+ dma_addr_t *physical,
+ dma_addr_t *host_iova)
+{
+ struct mem_mapping *mapping;
+ u64 offset;
+
+ if (WARN_ON_ONCE(OFFSET_IN_PAGE(iova) != 0))
+ return -EINVAL;
+
+ mapping = viommu_int_tree_iter_first(&viommu->maps_tree,
+ iova, iova + PAGE_SIZE - 1);
+ if (!mapping) {
+ _DBG(viommu->vctrl,
+ "vIOMMU: translation of IOVA %pad failed\n", &iova);
+ return -EFAULT;
+ }
+
+ WARN_ON(iova > mapping->iova_last);
+ WARN_ON(OFFSET_IN_PAGE(mapping->iova_start) != 0);
+
+ offset = iova - mapping->iova_start;
+ *physical = PFN_PHYS(mapping->pfn) + offset;
+ *host_iova = mapping->host_iova + offset;
+ return 0;
+}
+
+/* map an IOVA to kernel address space */
+int nvme_mdev_viommu_create_kmap(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova, struct page_map *page)
+{
+ dma_addr_t host_iova;
+ phys_addr_t physical;
+ struct page *new_page;
+ int ret;
+
+ page->iova = iova;
+
+ ret = nvme_mdev_viommu_translate(viommu, iova, &physical, &host_iova);
+ if (ret)
+ return ret;
+
+ new_page = pfn_to_page(PHYS_PFN(physical));
+
+ page->kmap = kmap(new_page);
+ if (!page->kmap)
+ return -ENOMEM;
+
+ page->page = new_page;
+ return 0;
+}
+
+/* update IOVA <-> kernel mapping. If fails, removes the previous mapping */
+void nvme_mdev_viommu_update_kmap(struct nvme_mdev_viommu *viommu,
+ struct page_map *page)
+{
+ dma_addr_t host_iova;
+ phys_addr_t physical;
+ struct page *new_page;
+ int ret;
+
+ ret = nvme_mdev_viommu_translate(viommu, page->iova,
+ &physical, &host_iova);
+ if (ret) {
+ nvme_mdev_viommu_free_kmap(viommu, page);
+ return;
+ }
+
+ new_page = pfn_to_page(PHYS_PFN(physical));
+ if (new_page == page->page)
+ return;
+
+ nvme_mdev_viommu_free_kmap(viommu, page);
+
+ page->kmap = kmap(new_page);
+ if (!page->kmap)
+ return;
+ page->page = new_page;
+}
+
+/* unmap an IOVA to kernel address space */
+void nvme_mdev_viommu_free_kmap(struct nvme_mdev_viommu *viommu,
+ struct page_map *page)
+{
+ if (page->page) {
+ kunmap(page->page);
+ page->page = NULL;
+ page->kmap = NULL;
+ }
+}
diff --git a/drivers/nvme/mdev/vns.c b/drivers/nvme/mdev/vns.c
new file mode 100644
index 000000000000..42d4f8d7423b
--- /dev/null
+++ b/drivers/nvme/mdev/vns.c
@@ -0,0 +1,356 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Virtual NVMe namespace implementation
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/nvme.h>
+#include "priv.h"
+
+/* Reset the changed namespace log */
+void nvme_mdev_vns_log_reset(struct nvme_mdev_vctrl *vctrl)
+{
+ vctrl->ns_log_size = 0;
+}
+
+/* Add an entry to the changed-namespaces log and send a notification to the user */
+static void nvme_mdev_vns_send_event(struct nvme_mdev_vctrl *vctrl, u32 ns)
+{
+ unsigned int i;
+ unsigned int log_size = vctrl->ns_log_size;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ _INFO(vctrl, "host namespace list rescanned\n");
+
+ if (WARN_ON(ns == 0 || ns > MAX_VIRTUAL_NAMESPACES))
+ return;
+
+	/* drop the event if the log is full */
+	if (log_size == MAX_VIRTUAL_NAMESPACES)
+		return;
+
+	/* check if the namespace ID is already in the log */
+	for (i = 0; i < log_size; i++)
+		if (vctrl->ns_log[i] == cpu_to_le32(ns))
+			return;
+
+	vctrl->ns_log[vctrl->ns_log_size++] = cpu_to_le32(ns);
+ nvme_mdev_event_send(vctrl, NVME_AER_TYPE_NOTICE,
+ NVME_AER_NOTICE_NS_CHANGED);
+}
+
+/* Read host NS/partition parameters to update our virtual NS */
+static void nvme_mdev_vns_read_host_properties(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_mdev_vns *vns,
+ struct nvme_ns *host_ns)
+{
+ unsigned int sector_to_lba_shift;
+ u64 host_ns_size, start, nr, align_mask;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ /* read the namespace block size */
+ vns->blksize_shift = host_ns->lba_shift;
+
+ if (WARN_ON(vns->blksize_shift < 9)) {
+ _WARN(vctrl, "NS/create: device block size is bad\n");
+ goto error;
+ }
+
+ sector_to_lba_shift = vns->blksize_shift - 9;
+ align_mask = (1ULL << sector_to_lba_shift) - 1;
+
+ /* read the partition start and size*/
+ start = get_start_sect(vns->host_part);
+ nr = part_nr_sects_read(vns->host_part->bd_part);
+
+ /* check that partition is aligned on LBA size*/
+ if (sector_to_lba_shift != 0) {
+ if ((start & align_mask) || (nr & align_mask)) {
+ _WARN(vctrl, "NS/create: partition not aligned\n");
+ goto error;
+ }
+ }
+
+ vns->host_lba_offset = start >> sector_to_lba_shift;
+ vns->ns_size = nr >> sector_to_lba_shift;
+ host_ns_size = get_capacity(host_ns->disk) >> sector_to_lba_shift;
+
+ /*TODOLATER: NS: support metadata on host namespace */
+ if (host_ns->ms) {
+ _WARN(vctrl, "NS/create: no support for namespace metadata\n");
+ goto error;
+ }
+
+ if (vns->ns_size == 0) {
+ _WARN(vctrl, "NS/create: host namespace has size 0\n");
+ goto error;
+ }
+
+ /* sanity check that partition doesn't extend beyond the namespace */
+ if (!check_range(vns->host_lba_offset, vns->ns_size, host_ns_size)) {
+ _WARN(vctrl, "NS/create: host namespace size mismatch\n");
+ goto error;
+ }
+
+ /* check if namespace is readonly*/
+ if (!vns->readonly)
+ vns->readonly = get_disk_ro(host_ns->disk);
+
+ vns->noiob = host_ns->noiob;
+ if (vns->noiob != 0) {
+ u64 tmp = vns->host_lba_offset;
+
+ if (do_div(tmp, vns->noiob)) {
+ _WARN(vctrl,
+ "NS/create: host partition is not aligned on host optimum IO boundary, performance might suffer");
+ vns->noiob = 0;
+ }
+ }
+ return;
+error:
+ vns->ns_size = 0;
+}
+
+/* Open new reference to a host namespace */
+int nvme_mdev_vns_open(struct nvme_mdev_vctrl *vctrl,
+ u32 host_nsid, unsigned int host_partid)
+{
+ struct nvme_mdev_vns *vns;
+ u32 user_nsid;
+ int ret;
+
+ _INFO(vctrl, "open host_namespace=%u, partition=%u\n",
+ host_nsid, host_partid);
+
+ mutex_lock(&vctrl->lock);
+ ret = -ENODEV;
+ if (nvme_mdev_vctrl_is_dead(vctrl))
+ goto out;
+
+ /* create the namespace object */
+ ret = -ENOMEM;
+ vns = kzalloc_node(sizeof(*vns), GFP_KERNEL, vctrl->hctrl->node);
+ if (!vns)
+ goto out;
+
+ uuid_gen(&vns->uuid); // TODOLATER: NS: non random NS UUID
+ vns->host_nsid = host_nsid;
+ vns->host_partid = host_partid;
+
+ /* find the host namespace */
+ vns->host_ns = nvme_find_get_ns(vctrl->hctrl->nvme_ctrl, host_nsid);
+ if (!vns->host_ns) {
+ ret = -ENODEV;
+ goto error1;
+ }
+
+ if (test_bit(NVME_NS_DEAD, &vns->host_ns->flags) ||
+ test_bit(NVME_NS_REMOVING, &vns->host_ns->flags) ||
+ !vns->host_ns->disk) {
+ ret = -ENODEV;
+ goto error2;
+ }
+
+ /* get the block device for the partition that we will use */
+ vns->host_part = bdget_disk(vns->host_ns->disk, host_partid);
+ if (!vns->host_part) {
+ ret = -ENODEV;
+ goto error2;
+ }
+
+ /* get exclusive access to the block device (partition) */
+ vns->fmode = FMODE_READ | FMODE_EXCL;
+ if (!vns->readonly)
+ vns->fmode |= FMODE_WRITE;
+
+ ret = blkdev_get(vns->host_part, vns->fmode, vns);
+ if (ret)
+ goto error2;
+
+ /* read properties of the host namespace */
+ nvme_mdev_vns_read_host_properties(vctrl, vns, vns->host_ns);
+
+ /* Allocate a user namespace ID for this namespace */
+ ret = -ENOSPC;
+ for (user_nsid = 1; user_nsid <= MAX_VIRTUAL_NAMESPACES; user_nsid++)
+ if (!nvme_mdev_vns_from_vnsid(vctrl, user_nsid))
+ break;
+
+ if (user_nsid > MAX_VIRTUAL_NAMESPACES)
+ goto error3;
+
+ nvme_mdev_io_pause(vctrl);
+
+ vctrl->namespaces[user_nsid - 1] = vns;
+ vns->nsid = user_nsid;
+
+ /* Announce the new namespace to the user */
+ nvme_mdev_vns_send_event(vctrl, user_nsid);
+ nvme_mdev_io_resume(vctrl);
+ ret = 0;
+ goto out;
+error3:
+ blkdev_put(vns->host_part, vns->fmode);
+error2:
+ nvme_put_ns(vns->host_ns);
+error1:
+ kfree(vns);
+out:
+ mutex_unlock(&vctrl->lock);
+ return ret;
+}
+
+/* Re-open new reference to a host namespace, after notification
+ * of change in the host namespace
+ */
+static bool nvme_mdev_vns_reopen(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_mdev_vns *vns)
+{
+ struct nvme_ns *host_ns;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ _INFO(vctrl, "reopen host namespace %u, partition=%u\n",
+ vns->host_nsid, vns->host_partid);
+
+ /* namespace disappeared on the host - invalid*/
+ host_ns = nvme_find_get_ns(vctrl->hctrl->nvme_ctrl, vns->host_nsid);
+ if (!host_ns)
+ return false;
+
+ /* different namespace with same ID on the host - invalid*/
+ if (vns->host_ns != host_ns)
+ goto error1;
+
+ // basic checks on the namespace
+ if (test_bit(NVME_NS_DEAD, &host_ns->flags) ||
+ test_bit(NVME_NS_REMOVING, &host_ns->flags) ||
+ !host_ns->disk)
+ goto error1;
+
+ /* read properties of the host namespace */
+ nvme_mdev_io_pause(vctrl);
+ nvme_mdev_vns_read_host_properties(vctrl, vns, host_ns);
+ nvme_mdev_io_resume(vctrl);
+
+ nvme_put_ns(host_ns);
+ return true;
+error1:
+ nvme_put_ns(host_ns);
+ return false;
+}
+
+/* Destroy a virtual namespace*/
+static int __nvme_mdev_vns_destroy(struct nvme_mdev_vctrl *vctrl, u32 user_nsid)
+{
+ struct nvme_mdev_vns *vns;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ vns = nvme_mdev_vns_from_vnsid(vctrl, user_nsid);
+ if (!vns)
+ return -ENODEV;
+
+ nvme_mdev_vns_send_event(vctrl, user_nsid);
+ nvme_mdev_io_pause(vctrl);
+
+ vctrl->namespaces[user_nsid - 1] = NULL;
+ blkdev_put(vns->host_part, vns->fmode);
+ nvme_put_ns(vns->host_ns);
+ kfree(vns);
+ nvme_mdev_io_resume(vctrl);
+ return 0;
+}
+
+/* Destroy a virtual namespace (external interface) */
+int nvme_mdev_vns_destroy(struct nvme_mdev_vctrl *vctrl, u32 user_nsid)
+{
+ int ret;
+
+ mutex_lock(&vctrl->lock);
+ nvme_mdev_io_pause(vctrl);
+ ret = __nvme_mdev_vns_destroy(vctrl, user_nsid);
+ nvme_mdev_io_resume(vctrl);
+ mutex_unlock(&vctrl->lock);
+
+ return ret;
+}
+
+/* Destroy all virtual namespaces */
+void nvme_mdev_vns_destroy_all(struct nvme_mdev_vctrl *vctrl)
+{
+ u32 user_nsid;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ for (user_nsid = 1 ; user_nsid <= MAX_VIRTUAL_NAMESPACES ; user_nsid++)
+ __nvme_mdev_vns_destroy(vctrl, user_nsid);
+}
+
+/* Get a virtual namespace */
+struct nvme_mdev_vns *nvme_mdev_vns_from_vnsid(struct nvme_mdev_vctrl *vctrl,
+ u32 user_ns_id)
+{
+ if (user_ns_id == 0 || user_ns_id > MAX_VIRTUAL_NAMESPACES)
+ return NULL;
+ return vctrl->namespaces[user_ns_id - 1];
+}
+
+/* Print a description of all virtual namespaces */
+int nvme_mdev_vns_print_description(struct nvme_mdev_vctrl *vctrl,
+ char *buf, unsigned int size)
+{
+ int nsid, ret = 0;
+
+ mutex_lock(&vctrl->lock);
+
+ for (nsid = 1; nsid <= MAX_VIRTUAL_NAMESPACES; nsid++) {
+ int n;
+ struct nvme_mdev_vns *vns = nvme_mdev_vns_from_vnsid(vctrl,
+ nsid);
+		if (!vns)
+			continue;
+
+		if (vns->host_partid == 0)
+ n = snprintf(buf, size, "VNS%d: nvme%dn%d\n",
+ nsid, vctrl->hctrl->id,
+ (int)vns->host_nsid);
+ else
+ n = snprintf(buf, size, "VNS%d: nvme%dn%dp%d\n",
+ nsid, vctrl->hctrl->id,
+ (int)vns->host_nsid,
+ (int)vns->host_partid);
+		if (n >= size) {
+			/* output truncated - don't leak the lock */
+			mutex_unlock(&vctrl->lock);
+			return -ENOMEM;
+		}
+ buf += n;
+ size -= n;
+ ret += n;
+ }
+ mutex_unlock(&vctrl->lock);
+ return ret;
+}
+
+/* Processes an update on the host namespace */
+void nvme_mdev_vns_host_ns_update(struct nvme_mdev_vctrl *vctrl,
+ u32 host_nsid, bool removed)
+{
+ int nsid;
+
+ mutex_lock(&vctrl->lock);
+
+ for (nsid = 1; nsid <= MAX_VIRTUAL_NAMESPACES; nsid++) {
+ struct nvme_mdev_vns *vns = nvme_mdev_vns_from_vnsid(vctrl,
+ nsid);
+ if (!vns || vns->host_nsid != host_nsid)
+ continue;
+
+ if (removed || !nvme_mdev_vns_reopen(vctrl, vns))
+ __nvme_mdev_vns_destroy(vctrl, nsid);
+ else
+ nvme_mdev_vns_send_event(vctrl, nsid);
+ }
+ mutex_unlock(&vctrl->lock);
+}
diff --git a/drivers/nvme/mdev/vsq.c b/drivers/nvme/mdev/vsq.c
new file mode 100644
index 000000000000..5b63081c144d
--- /dev/null
+++ b/drivers/nvme/mdev/vsq.c
@@ -0,0 +1,181 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Virtual NVMe submission queue implementation
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include "priv.h"
+
+/* Create a new virtual submission queue */
+int nvme_mdev_vsq_init(struct nvme_mdev_vctrl *vctrl,
+ u16 qid, dma_addr_t iova, bool cont, u16 size, u16 cqid)
+{
+ struct nvme_vsq *q = &vctrl->vsqs[qid];
+ int ret;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ q->iova = iova;
+ q->cont = cont;
+ q->qid = qid;
+ q->size = size;
+ q->head = 0;
+ q->vcq = &vctrl->vcqs[cqid];
+ q->data = NULL;
+ q->hsq = 0;
+
+ ret = nvme_mdev_vsq_viommu_update(&vctrl->viommu, q);
+ if (ret && (ret != -EFAULT))
+ return ret;
+
+ if (qid > 0) {
+ ret = nvme_mdev_vctrl_hq_alloc(vctrl);
+ if (ret < 0) {
+ vunmap(q->data);
+ return ret;
+ }
+ q->hsq = ret;
+ }
+
+ _DBG(vctrl, "VSQ: create qid=%d contig=%d, depth=%d cqid=%d\n",
+ qid, cont, size, cqid);
+
+ set_bit(qid, vctrl->vsq_en);
+
+ vctrl->mmio.dbs[q->qid].sqt = 0;
+ vctrl->mmio.eidxs[q->qid].sqt = 0;
+
+ return 0;
+}
+
+/* Update the kernel mapping of the queue */
+int nvme_mdev_vsq_viommu_update(struct nvme_mdev_viommu *viommu,
+ struct nvme_vsq *q)
+{
+ void *data;
+
+ if (q->data)
+ vunmap((void *)q->data);
+
+ data = nvme_mdev_udata_queue_vmap(viommu, q->iova,
+ (unsigned int)q->size *
+ sizeof(struct nvme_command),
+ q->cont);
+
+ q->data = IS_ERR(data) ? NULL : data;
+ return IS_ERR(data) ? PTR_ERR(data) : 0;
+}
+
+/* Delete a virtual submission queue */
+void nvme_mdev_vsq_delete(struct nvme_mdev_vctrl *vctrl, u16 qid)
+{
+ struct nvme_vsq *q = &vctrl->vsqs[qid];
+
+ lockdep_assert_held(&vctrl->lock);
+ _DBG(vctrl, "VSQ: delete qid=%d\n", q->qid);
+
+ if (q->data)
+ vunmap(q->data);
+ q->data = NULL;
+
+ if (q->hsq) {
+ nvme_mdev_vctrl_hq_free(vctrl, q->hsq);
+ q->hsq = 0;
+ }
+
+ clear_bit(qid, vctrl->vsq_en);
+}
+
+/* Move queue head one item forward */
+static void nvme_mdev_vsq_advance_head(struct nvme_vsq *q)
+{
+ q->head++;
+ if (q->head == q->size)
+ q->head = 0;
+}
+
+bool nvme_mdev_vsq_has_data(struct nvme_mdev_vctrl *vctrl,
+			    struct nvme_vsq *q)
+{
+	u16 tail;
+
+	/* check the doorbell buffers before dereferencing them */
+	if (!vctrl->mmio.dbs || !vctrl->mmio.eidxs || !q->data)
+		return false;
+
+	tail = le32_to_cpu(vctrl->mmio.dbs[q->qid].sqt);
+
+	if (tail == q->head)
+		return false;
+
+ if (!nvme_mdev_mmio_db_check(vctrl, q->qid, q->size, tail))
+ return false;
+ return true;
+}
+
+/* get one command from a virtual submission queue */
+const struct nvme_command *nvme_mdev_vsq_get_cmd(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vsq *q)
+{
+ u16 oldhead = q->head;
+ u32 eidx;
+
+ if (!nvme_mdev_vsq_has_data(vctrl, q))
+ return NULL;
+ if (!nvme_mdev_vcq_reserve_space(q->vcq))
+ return NULL;
+ nvme_mdev_vsq_advance_head(q);
+
+ eidx = q->head + (q->size >> 1);
+ if (eidx >= q->size)
+ eidx -= q->size;
+
+ vctrl->mmio.eidxs[q->qid].sqt = cpu_to_le32(eidx);
+
+ return &q->data[oldhead];
+}
+
+bool nvme_mdev_vsq_suspend_io(struct nvme_mdev_vctrl *vctrl, u16 sqid)
+{
+	struct nvme_vsq *q = &vctrl->vsqs[sqid];
+	u16 tail;
+
+	/* If the queue is not in a working state, don't allow the idle
+	 * code to kick in
+	 */
+	if (!vctrl->mmio.dbs || !vctrl->mmio.eidxs || !q->data)
+		return false;
+
+	tail = le32_to_cpu(vctrl->mmio.dbs[q->qid].sqt);
+
+	/* queue has data - refuse to idle */
+	if (tail != q->head)
+		return false;
+
+	/* Write the event index to tell the user to ring the real doorbell */
+ vctrl->mmio.eidxs[q->qid].sqt = cpu_to_le32(q->head);
+
+	/* memory barrier to ensure that the user has seen the eidx */
+ mb();
+
+	/* Check that the doorbell didn't move in the meantime */
+ tail = le32_to_cpu(vctrl->mmio.dbs[q->qid].sqt);
+ return (tail == q->head);
+}
+
+/* complete a command (IO version)*/
+void nvme_mdev_vsq_cmd_done_io(struct nvme_mdev_vctrl *vctrl,
+ u16 sqid, u16 cid, u16 status)
+{
+ struct nvme_vsq *q = &vctrl->vsqs[sqid];
+
+ nvme_mdev_vcq_write_io(vctrl, q->vcq, q->head, q->qid, cid, status);
+}
+
+/* complete a command (ADMIN version)*/
+void nvme_mdev_vsq_cmd_done_adm(struct nvme_mdev_vctrl *vctrl,
+ u32 dw0, u16 cid, u16 status)
+{
+ struct nvme_vsq *q = &vctrl->vsqs[0];
+
+ nvme_mdev_vcq_write_adm(vctrl, q->vcq, dw0, q->head, cid, status);
+}
--
2.17.2
^ permalink raw reply related [flat|nested] 3471+ messages in thread
* [PATCH 8/9] nvme/core: add nvme-mdev core driver
@ 2019-03-19 14:41 ` Maxim Levitsky
0 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-19 14:41 UTC (permalink / raw)
To: linux-nvme
Cc: Maxim Levitsky, linux-kernel, kvm, Jens Axboe, Alex Williamson,
Keith Busch, Christoph Hellwig, Sagi Grimberg, Kirti Wankhede,
David S . Miller, Mauro Carvalho Chehab, Greg Kroah-Hartman,
Wolfram Sang, Nicolas Ferre, Paul E . McKenney ,
Paolo Bonzini, Liang Cunming, Liu Changpeng, Fam Zheng,
Amnon Ilan, John
This is the main commit in the series, adding the mediated NVMe
driver.

The idea behind this driver is based on the paper you can find at
https://www.usenix.org/conference/atc18/presentation/peng,
but this is an independent implementation.
This mdev device exposes an NVMe 1.3 virtual device to any VFIO user;
it represents a partition (or a whole namespace) of a host NVMe device.

Unlike the paper, the driver uses one polling thread per mediated
device, and needs only one host hardware queue per device to achieve
near-native performance (it can use more than one hardware queue, in
which case it splits the virtual NVMe queues between them, supporting
an n:m mapping between host hardware queues and guest virtual queues).
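The n:m split described above can be sketched as a simple round-robin
assignment. This is a minimal illustration only; the function name and
the 1-based vsqid convention are assumptions for the sketch, not the
driver's actual code:

```c
#include <assert.h>

/* Hypothetical sketch of the n:m queue split: virtual submission
 * queues are distributed round-robin over the available host hardware
 * queues.  Queue 0 is the admin queue, so data queue IDs start at 1. */
static unsigned int vsq_to_host_queue(unsigned int vsqid,
				      unsigned int nr_host_queues)
{
	return (vsqid - 1) % nr_host_queues;
}
```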
nvme-mdev can't actually be used after this commit, as no NVMe device
driver supports mediation yet (support is added to nvme-pci in the
next commit).
The driver can use the optional NVMe shadow doorbell feature to stop
polling after a timeout.
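The shadow doorbell handshake that lets the polling thread go idle can
be sketched as follows. This is a simplified, self-contained sketch of
the logic in nvme_mdev_vsq_suspend_io(); the plain volatile pointers
stand in for the guest-shared doorbell and event-index buffers, and
the function name is illustrative:

```c
#include <stdbool.h>

/* Sketch of the idle handshake: publish an event index equal to the
 * current head so the guest will ring the real doorbell on its next
 * submission, then re-read the shadow tail to close the race with a
 * submission that arrived before the guest saw the event index. */
static bool vsq_try_idle(volatile unsigned int *shadow_tail,
			 volatile unsigned int *event_idx,
			 unsigned int head)
{
	if (*shadow_tail != head)
		return false;		/* queue still has work */
	*event_idx = head;		/* ask the guest to ring the doorbell */
	__sync_synchronize();		/* order the eidx store vs. the tail re-read */
	return *shadow_tail == head;	/* re-check for a racing submission */
}
```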
Currently the device uses the Red Hat PCI vendor ID and PCI device ID
0x1234, which is only a placeholder until a real device ID is
allocated.
Usage example:
# load the nvme-mdev driver
$ modprobe nvme-mdev
# load the nvme pci driver with 4 polling queues reserved
# (will work with the next patch)
$ modprobe nvme mdev_queues=4
# generate random UUID for the mediated device
$ UUID=$(uuidgen)
$ MDEV_DEVICE=/sys/bus/mdev/devices/$UUID
# the location of the real nvme device (replace with yours)
$ PCI_DEVICE=/sys/bus/pci/devices/0000:44:00.0
# create the mediated device using 2 host polling queues
$ echo $UUID > $PCI_DEVICE/mdev_supported_types/nvme-2Q_V1/create
# attach partition 1 of namespace 1 to a free virtual namespace
# (use n1 to attach whole namespace)
# you can attach up to 16 virtual namespaces for now
$ echo n1p1 > $MDEV_DEVICE/namespaces/add_namespace
# move the polling thread to cpu 11
$ echo 11 > $MDEV_DEVICE/settings/iothread_cpu
# now you can boot qemu with
# -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/$UUID
Note that you can attach and detach virtual namespaces even while the
guest is running; the device then sends namespace-changed AEN
notifications to the guest.
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
MAINTAINERS | 5 +
drivers/nvme/Kconfig | 1 +
drivers/nvme/Makefile | 1 +
drivers/nvme/host/core.c | 5 +-
drivers/nvme/mdev/Kconfig | 16 +
drivers/nvme/mdev/Makefile | 5 +
drivers/nvme/mdev/adm.c | 873 +++++++++++++++++++++++++++++++++++
drivers/nvme/mdev/events.c | 142 ++++++
drivers/nvme/mdev/host.c | 491 ++++++++++++++++++++
drivers/nvme/mdev/instance.c | 802 ++++++++++++++++++++++++++++++++
drivers/nvme/mdev/io.c | 563 ++++++++++++++++++++++
drivers/nvme/mdev/irq.c | 264 +++++++++++
drivers/nvme/mdev/mdev.h | 56 +++
drivers/nvme/mdev/mmio.c | 591 ++++++++++++++++++++++++
drivers/nvme/mdev/pci.c | 247 ++++++++++
drivers/nvme/mdev/priv.h | 700 ++++++++++++++++++++++++++++
drivers/nvme/mdev/udata.c | 390 ++++++++++++++++
drivers/nvme/mdev/vcq.c | 209 +++++++++
drivers/nvme/mdev/vctrl.c | 515 +++++++++++++++++++++
drivers/nvme/mdev/viommu.c | 322 +++++++++++++
drivers/nvme/mdev/vns.c | 356 ++++++++++++++
drivers/nvme/mdev/vsq.c | 181 ++++++++
22 files changed, 6733 insertions(+), 2 deletions(-)
create mode 100644 drivers/nvme/mdev/Kconfig
create mode 100644 drivers/nvme/mdev/Makefile
create mode 100644 drivers/nvme/mdev/adm.c
create mode 100644 drivers/nvme/mdev/events.c
create mode 100644 drivers/nvme/mdev/host.c
create mode 100644 drivers/nvme/mdev/instance.c
create mode 100644 drivers/nvme/mdev/io.c
create mode 100644 drivers/nvme/mdev/irq.c
create mode 100644 drivers/nvme/mdev/mdev.h
create mode 100644 drivers/nvme/mdev/mmio.c
create mode 100644 drivers/nvme/mdev/pci.c
create mode 100644 drivers/nvme/mdev/priv.h
create mode 100644 drivers/nvme/mdev/udata.c
create mode 100644 drivers/nvme/mdev/vcq.c
create mode 100644 drivers/nvme/mdev/vctrl.c
create mode 100644 drivers/nvme/mdev/viommu.c
create mode 100644 drivers/nvme/mdev/vns.c
create mode 100644 drivers/nvme/mdev/vsq.c
diff --git a/MAINTAINERS b/MAINTAINERS
index dce5c099f43c..d143e929d7ed 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -10896,6 +10896,11 @@ W: http://git.infradead.org/nvme.git
S: Supported
F: drivers/nvme/target/
+NVM EXPRESS MDEV DRIVER
+M: Maxim Levitsky <mlevitsk@redhat.com>
+S: Supported
+F: drivers/nvme/mdev/
+
NVMEM FRAMEWORK
M: Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
S: Maintained
diff --git a/drivers/nvme/Kconfig b/drivers/nvme/Kconfig
index 04008e0bbe81..cbf867e6ac1e 100644
--- a/drivers/nvme/Kconfig
+++ b/drivers/nvme/Kconfig
@@ -2,5 +2,6 @@ menu "NVME Support"
source "drivers/nvme/host/Kconfig"
source "drivers/nvme/target/Kconfig"
+source "drivers/nvme/mdev/Kconfig"
endmenu
diff --git a/drivers/nvme/Makefile b/drivers/nvme/Makefile
index 0096a7fd1431..0458efc57aee 100644
--- a/drivers/nvme/Makefile
+++ b/drivers/nvme/Makefile
@@ -1,3 +1,4 @@
obj-y += host/
obj-y += target/
+obj-y += mdev/
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 90561973bce9..a835884fcbcd 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -1687,6 +1687,7 @@ static void nvme_update_disk_info(struct gendisk *disk,
if (ns->ms && !ns->ext &&
(ns->ctrl->ops->flags & NVME_F_METADATA_SUPPORTED))
nvme_init_integrity(disk, ns->ms, ns->pi_type);
+
if (ns->ms && !nvme_ns_has_pi(ns) && !blk_get_integrity(disk))
capacity = 0;
@@ -2302,7 +2303,7 @@ static void nvme_init_subnqn(struct nvme_subsystem *subsys, struct nvme_ctrl *ct
size_t nqnlen;
int off;
- if(!(ctrl->quirks & NVME_QUIRK_IGNORE_DEV_SUBNQN)) {
+ if (!(ctrl->quirks & NVME_QUIRK_IGNORE_DEV_SUBNQN)) {
nqnlen = strnlen(id->subnqn, NVMF_NQN_SIZE);
if (nqnlen > 0 && nqnlen < NVMF_NQN_SIZE) {
strlcpy(subsys->subnqn, id->subnqn, NVMF_NQN_SIZE);
@@ -3361,8 +3362,8 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
nvme_mpath_add_disk(ns, id);
nvme_fault_inject_init(ns);
- kfree(id);
+ kfree(id);
return;
out_put_disk:
put_disk(ns->disk);
diff --git a/drivers/nvme/mdev/Kconfig b/drivers/nvme/mdev/Kconfig
new file mode 100644
index 000000000000..7ebc66cdeac0
--- /dev/null
+++ b/drivers/nvme/mdev/Kconfig
@@ -0,0 +1,16 @@
+
+config NVME_MDEV
+ bool
+
+config NVME_MDEV_VFIO
+ tristate "NVME Mediated VFIO virtual device"
+ select NVME_MDEV
+ depends on BLOCK
+ depends on VFIO_MDEV
+ depends on NVME_CORE
+ help
+	  This provides EXPERIMENTAL support for lightweight software
+	  passthrough of a partition (or a whole namespace) of an NVMe
+	  storage device to a guest, exposed as an NVMe namespace
+	  attached to a virtual NVMe controller.
+
+	  If unsure, say N.
diff --git a/drivers/nvme/mdev/Makefile b/drivers/nvme/mdev/Makefile
new file mode 100644
index 000000000000..114016c48476
--- /dev/null
+++ b/drivers/nvme/mdev/Makefile
@@ -0,0 +1,5 @@
+
+obj-$(CONFIG_NVME_MDEV_VFIO) += nvme-mdev.o
+
+nvme-mdev-y += adm.o events.o instance.o host.o io.o irq.o \
+ udata.o viommu.o vns.o vsq.o vcq.o vctrl.o mmio.o pci.o
diff --git a/drivers/nvme/mdev/adm.c b/drivers/nvme/mdev/adm.c
new file mode 100644
index 000000000000..39a7ad252c69
--- /dev/null
+++ b/drivers/nvme/mdev/adm.c
@@ -0,0 +1,873 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * NVMe admin command implementation
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include "priv.h"
+
+struct adm_ctx {
+ struct nvme_mdev_vctrl *vctrl;
+ struct nvme_mdev_hctrl *hctrl;
+ const struct nvme_command *in;
+ struct nvme_mdev_vns *ns;
+ struct nvme_ext_data_iter udatait;
+ unsigned int datalen;
+};
+
+/* Identify Controller */
+static int nvme_mdev_adm_handle_id_cntrl(struct adm_ctx *ctx)
+{
+ int ret;
+ const struct nvme_identify *in = &ctx->in->identify;
+ struct nvme_id_ctrl *id;
+
+ if (in->nsid != 0)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ id = kzalloc(sizeof(*id), GFP_KERNEL);
+ if (!id)
+ return NVME_SC_INTERNAL;
+
+ /** Controller Capabilities and Features ************************/
+ // PCI vendor ID
+ store_le16(&id->vid, NVME_MDEV_PCI_VENDOR_ID);
+ // PCI Subsystem Vendor ID
+ store_le16(&id->ssvid, NVME_MDEV_PCI_SUBVENDOR_ID);
+ // Serial Number
+ store_strsp(id->sn, ctx->vctrl->serial);
+ // Model Number
+ store_strsp(id->mn, "NVMe MDEV virtual device");
+ // Firmware Revision
+ store_strsp(id->fr, NVME_MDEV_FIRMWARE_VERSION);
+ // Recommended Arbitration Burst
+ id->rab = 6;
+ // IEEE OUI Identifier for the controller vendor
+ id->ieee[0] = 0;
+ // Controller Multi-Path I/O and Namespace Sharing Capabilities
+ id->cmic = 0;
+ // Maximum Data Transfer Size (power of two, in page size units)
+ id->mdts = ctx->hctrl->mdts;
+ // controller ID
+ id->cntlid = 0;
+ // NVME supported version
+ store_le32(&id->ver, NVME_MDEV_NVME_VER);
+ // RTD3 Resume Latency
+ id->rtd3r = 0;
+ //RTD3 Entry Latency
+ id->rtd3e = 0;
+ // Optional Asynchronous Events Supported
+ store_le32(&id->oaes, NVME_AEN_CFG_NS_ATTR);
+ // Controller Attributes (misc junk)
+ id->ctratt = 0;
+
+ /*Admin Command Set Attributes & Optional Controller Capabilities */
+ // Optional Admin Command Support
+ id->oacs = ctx->vctrl->mmio.shadow_db_supported ?
+ NVME_CTRL_OACS_DBBUF_SUPP : 0;
+ // Abort Command Limit (dummy, zero based)
+ id->acl = 3;
+ // Asynchronous Event Request Limit (zero based)
+ id->aerl = MAX_AER_COMMANDS - 1;
+ // Firmware Updates (dummy)
+ id->frmw = 3;
+ // Log Page Attributes
+ // (IMPLEMENT: bit for commands supported and effects)
+ id->lpa = 0;
+ // Error Log Page Entries
+ // (zero based, IMPLEMENT: dummy for now)
+ id->elpe = 0;
+ // Number of Power States Support
+ // (zero based, IMPLEMENT: dummy for now)
+ id->npss = 0;
+ // Admin Vendor Specific Command Configuration (junk)
+ id->avscc = 0;
+ // Autonomous Power State Transition Attributes
+ id->apsta = 0;
+ // Warning Composite Temperature Threshold (dummy)
+ id->wctemp = 0x157;
+ // Critical Composite Temperature Threshold (dummy)
+ id->cctemp = 0x175;
+ // Maximum Time for Firmware Activation (dummy)
+ id->mtfa = 0;
+ // Host Memory Buffer Preferred Size (dummy)
+ id->hmpre = 0;
+ // Host Memory Buffer Minimum Size (dummy)
+ id->hmmin = 0;
+ // Total NVM Capacity (not supported)
+ id->tnvmcap[0] = 0;
+ // Unallocated NVM Capacity (not supported for now)
+ id->unvmcap[0] = 0;
+ // Replay Protected Memory Block Support
+ id->rpmbs = 0;
+ // Extended Device Self-test Time (dummy)
+ id->edstt = 0;
+ // Device Self-test Options (dummy)
+ id->dsto = 0;
+ // Firmware Update Granularity (dummy)
+ id->fwug = 0;
+ // Keep Alive Support (not supported)
+ id->kas = 0;
+ // Host Controlled Thermal Management Attributes (not supported)
+ id->hctma = 0;
+ // Minimum Thermal Management Temperature (not supported)
+ id->mntmt = 0;
+ // Maximum Thermal Management Temperature (not supported)
+ id->mxtmt = 0;
+ // Sanitize capabilities (not supported)
+ id->sanicap = 0;
+
+ /****************** NVM Command Set Attributes ********************/
+ // Submission Queue Entry Size
+ id->sqes = (0x6 << 4) | 0x6;
+ // Completion Queue Entry Size
+ id->cqes = (0x4 << 4) | 0x4;
+ // Maximum Outstanding Commands
+ id->maxcmd = 0;
+ // Number of Namespaces
+ id->nn = MAX_VIRTUAL_NAMESPACES;
+ // Optional NVM Command Support
+	// (we advertise DSM and Write Zeroes only if the host supports them)
+ id->oncs = ctx->hctrl->oncs;
+ // TODOLATER: IO: Fused Operation Support
+ id->fuses = 0;
+ // Format NVM Attributes (don't support)
+ id->fna = 0;
+	// Volatile Write Cache (always reported as present due to mediation)
+ id->vwc = 1;
+ // Atomic Write Unit Normal (zero based value in blocks)
+ id->awun = 0;
+ // Atomic Write Unit Power Fail (ditto)
+ id->awupf = 0;
+ // NVM Vendor Specific Command Configuration
+ id->nvscc = 0;
+ // Atomic Compare & Write Unit (zero based value in blocks)
+ id->acwu = 0;
+ // SGL Support
+ id->sgls = 0;
+ // NVM Subsystem NVMe Qualified Name
+ strncpy(id->subnqn, ctx->vctrl->subnqn, sizeof(id->subnqn));
+
+	/****************** Power state descriptors **********************/
+ store_le16(&id->psd[0].max_power, 0x9c4); // dummy
+ store_le32(&id->psd[0].entry_lat, 0x10);
+ store_le32(&id->psd[0].exit_lat, 0x4);
+
+ ret = nvme_mdev_write_to_udata(&ctx->udatait, id, sizeof(*id));
+ kfree(id);
+ return nvme_mdev_translate_error(ret);
+}
+
+/* Identify Namespace data structure for the specified or common NSID */
+static int nvme_mdev_adm_handle_id_ns(struct adm_ctx *ctx)
+{
+ int ret;
+ struct nvme_id_ns *idns;
+ u32 nsid = le32_to_cpu(ctx->in->identify.nsid);
+
+ if (nsid == 0xffffffff || nsid == 0 || nsid > MAX_VIRTUAL_NAMESPACES)
+ return DNR(NVME_SC_INVALID_NS);
+
+ /* Allocate return structure*/
+ idns = kzalloc(NVME_IDENTIFY_DATA_SIZE, GFP_KERNEL);
+ if (!idns)
+ return NVME_SC_INTERNAL;
+
+ if (ctx->ns) {
+ //Namespace Size
+ store_le64(&idns->nsze, ctx->ns->ns_size);
+ // Namespace Capacity
+ store_le64(&idns->ncap, ctx->ns->ns_size);
+ // Namespace Utilization
+ store_le64(&idns->nuse, ctx->ns->ns_size);
+ // Namespace Features (nothing to set here yet)
+ idns->nsfeat = 0;
+ // Number of LBA Formats (dummy, zero based)
+ idns->nlbaf = 0;
+ // Formatted LBA Size (current LBA format in use)
+ // + external metadata bit
+ idns->flbas = 0;
+ // Metadata Capabilities
+ idns->mc = 0;
+ // End-to-end Data Protection Capabilities
+ idns->dpc = 0;
+ // End-to-end Data Protection Type Settings
+ idns->dps = 0;
+ // Namespace Multi-path I/O and Namespace Sharing Capabilities
+ idns->nmic = 0;
+ // Reservation Capabilities
+ idns->rescap = 0;
+ // Format Progress Indicator (dummy)
+ idns->fpi = 0;
+ // Namespace Atomic Write Unit Normal
+ idns->nawun = 0;
+ // Namespace Atomic Write Unit Power Fail
+ idns->nawupf = 0;
+ // Namespace Atomic Compare & Write Unit
+ idns->nacwu = 0;
+ // Namespace Atomic Boundary Size Normal
+ idns->nabsn = 0;
+ // Namespace Atomic Boundary Offset
+ idns->nabo = 0;
+ // Namespace Atomic Boundary Size Power Fail
+ idns->nabspf = 0;
+ // Namespace Optimal IO Boundary
+ idns->noiob = ctx->ns->noiob;
+ // NVM Capacity (another capacity but in bytes)
+ idns->nvmcap[0] = 0;
+
+ // TODOLATER: NS: support NGUID/EUI64
+ idns->nguid[0] = 0;
+ idns->eui64[0] = 0;
+ // format 0 metadata size
+ idns->lbaf[0].ms = 0;
+ // format 0 block size (in power of two)
+ idns->lbaf[0].ds = ctx->ns->blksize_shift;
+ // format 0 relative performance
+ idns->lbaf[0].rp = 0;
+ }
+
+ ret = nvme_mdev_write_to_udata(&ctx->udatait, idns,
+ NVME_IDENTIFY_DATA_SIZE);
+ kfree(idns);
+ return nvme_mdev_translate_error(ret);
+}
+
+/* Namespace Identification Descriptor list for the specified NSID.*/
+static int nvme_mdev_adm_handle_id_ns_desc(struct adm_ctx *ctx)
+{
+ struct ns_desc {
+ struct nvme_ns_id_desc uuid_desc;
+ uuid_t uuid;
+ struct nvme_ns_id_desc null_desc;
+ };
+
+ int ret;
+ struct ns_desc *id;
+
+ if (!ctx->ns)
+ return DNR(NVME_SC_INVALID_NS);
+
+ /* Allocate return structure */
+ id = kzalloc(NVME_IDENTIFY_DATA_SIZE, GFP_KERNEL);
+ if (!id)
+ return NVME_SC_INTERNAL;
+
+ id->uuid_desc.nidt = NVME_NIDT_UUID;
+ id->uuid_desc.nidl = NVME_NIDT_UUID_LEN;
+ memcpy(&id->uuid, &ctx->ns->uuid, sizeof(id->uuid));
+
+ ret = nvme_mdev_write_to_udata(&ctx->udatait, id,
+ NVME_IDENTIFY_DATA_SIZE);
+ kfree(id);
+ return nvme_mdev_translate_error(ret);
+}
+
+/* Active Namespace ID list */
+static int nvme_mdev_adm_handle_id_active_ns_list(struct adm_ctx *ctx)
+{
+	u32 nsid, start_nsid = le32_to_cpu(ctx->in->identify.nsid);
+	struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+	int i = 0, ret;
+	__le32 *nslist;
+
+	if (start_nsid >= 0xfffffffe)
+		return DNR(NVME_SC_INVALID_NS);
+
+	/* Allocate return structure */
+	nslist = kzalloc(NVME_IDENTIFY_DATA_SIZE, GFP_KERNEL);
+	if (!nslist)
+		return NVME_SC_INTERNAL;
+
+	for (nsid = start_nsid + 1; nsid <= MAX_VIRTUAL_NAMESPACES; nsid++)
+		if (nvme_mdev_vns_from_vnsid(vctrl, nsid))
+			nslist[i++] = cpu_to_le32(nsid);
+
+ ret = nvme_mdev_write_to_udata(&ctx->udatait, nslist,
+ NVME_IDENTIFY_DATA_SIZE);
+ kfree(nslist);
+ return nvme_mdev_translate_error(ret);
+}
+
+/* Handle the IDENTIFY command */
+static int nvme_mdev_adm_handle_id(struct adm_ctx *ctx)
+{
+ const struct nvme_identify *in = &ctx->in->identify;
+
+ int ret = nvme_mdev_udata_iter_set_dptr(&ctx->udatait,
+ &ctx->in->common.dptr,
+ NVME_IDENTIFY_DATA_SIZE);
+
+ u32 nsid = le32_to_cpu(in->nsid);
+
+ if (ret)
+ return nvme_mdev_translate_error(ret);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_MPTR | RSRV_DW11_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (in->ctrlid)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ ctx->ns = nvme_mdev_vns_from_vnsid(ctx->vctrl, nsid);
+
+ switch (ctx->in->identify.cns) {
+ case NVME_ID_CNS_CTRL:
+ _DBG(ctx->vctrl, "ADMINQ: IDENTIFY CTRL\n");
+ return nvme_mdev_adm_handle_id_cntrl(ctx);
+ case NVME_ID_CNS_NS_ACTIVE_LIST:
+ _DBG(ctx->vctrl, "ADMINQ: IDENTIFY ACTIVE_NS_LIST\n");
+ return nvme_mdev_adm_handle_id_active_ns_list(ctx);
+ case NVME_ID_CNS_NS:
+ _DBG(ctx->vctrl, "ADMINQ: IDENTIFY NS=0x%08x\n", nsid);
+ return nvme_mdev_adm_handle_id_ns(ctx);
+ case NVME_ID_CNS_NS_DESC_LIST:
+ _DBG(ctx->vctrl, "ADMINQ: IDENTIFY NS_DESC NS=0x%08x\n", nsid);
+ return nvme_mdev_adm_handle_id_ns_desc(ctx);
+ default:
+ return DNR(NVME_SC_INVALID_FIELD);
+ }
+}
+
+/* Error log for AER */
+static int nvme_mdev_adm_handle_get_log_page_err(struct adm_ctx *ctx)
+{
+ struct nvme_err_log_entry dummy_entry;
+ int ret;
+
+ // write one dummy entry with 0 error count
+ memset(&dummy_entry, 0, sizeof(dummy_entry));
+
+	ret = nvme_mdev_write_to_udata(&ctx->udatait, &dummy_entry,
+				       min_t(unsigned int,
+					     sizeof(dummy_entry),
+					     ctx->datalen));
+
+ return nvme_mdev_translate_error(ret);
+}
+
+/* This log page reports namespaces that were attached or detached */
+static int nvme_mdev_adm_handle_get_log_page_changed_ns(struct adm_ctx *ctx)
+{
+ unsigned int datasize = min(ctx->vctrl->ns_log_size * 4, ctx->datalen);
+
+ int ret = nvme_mdev_write_to_udata(&ctx->udatait,
+ &ctx->vctrl->ns_log, datasize);
+
+ nvme_mdev_vns_log_reset(ctx->vctrl);
+ return nvme_mdev_translate_error(ret);
+}
+
+/* S.M.A.R.T. log*/
+static int nvme_mdev_adm_handle_get_log_page_smart(struct adm_ctx *ctx)
+{
+ unsigned int datasize = min_t(unsigned int,
+ sizeof(struct nvme_smart_log), ctx->datalen);
+ int ret;
+ struct nvme_smart_log *log = kzalloc(sizeof(*log), GFP_KERNEL);
+
+ if (!log)
+ return NVME_SC_INTERNAL;
+
+ /* Some dummy values */
+ log->avail_spare = 100;
+ log->spare_thresh = 10;
+ store_le16(&log->temperature, 0x140);
+
+ ret = nvme_mdev_write_to_udata(&ctx->udatait, log, datasize);
+ kfree(log);
+ return nvme_mdev_translate_error(ret);
+}
+
+/* FW slot log (dummy) */
+static int nvme_mdev_adm_handle_get_log_page_fw_slot(struct adm_ctx *ctx)
+{
+ unsigned int datasize = min_t(unsigned int,
+ sizeof(struct nvme_fw_slot_info_log),
+ ctx->datalen);
+ int ret;
+ struct nvme_fw_slot_info_log *log = kzalloc(sizeof(*log), GFP_KERNEL);
+
+ if (!log)
+ return NVME_SC_INTERNAL;
+
+ ret = nvme_mdev_write_to_udata(&ctx->udatait, log, datasize);
+ kfree(log);
+ return nvme_mdev_translate_error(ret);
+}
+
+/* Response to GET LOG PAGE command */
+static int nvme_mdev_adm_handle_get_log_page(struct adm_ctx *ctx)
+{
+ const struct nvme_get_log_page_command *in = &ctx->in->get_log_page;
+ u8 log_page_id = ctx->in->get_log_page.lid;
+ int ret;
+
+ ctx->datalen = (le16_to_cpu(in->numdl) + 1) * 4;
+
+ /* We don't support extensions (NUMDU,LPOL,LPOU) */
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_MPTR | RSRV_DW11_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* Currently ignore the NSID in the command */
+
+	/* Complete (ACK) the AER, unless RAE (Retain Async Event) is set */
+	if ((in->lsp & 0x80) == 0)
+		nvme_mdev_event_process_ack(ctx->vctrl, log_page_id);
+
+ /* map data pointer */
+ ret = nvme_mdev_udata_iter_set_dptr(&ctx->udatait,
+ &in->dptr, ctx->datalen);
+ if (ret)
+ return nvme_mdev_translate_error(ret);
+
+ switch (log_page_id) {
+ case NVME_LOG_ERROR:
+ _DBG(ctx->vctrl, "ADMINQ: GET_LOG_PAGE : ERRLOG\n");
+ return nvme_mdev_adm_handle_get_log_page_err(ctx);
+ case NVME_LOG_CHANGED_NS:
+ _DBG(ctx->vctrl, "ADMINQ: GET_LOG_PAGE : CHANGED_NS\n");
+ return nvme_mdev_adm_handle_get_log_page_changed_ns(ctx);
+ case NVME_LOG_SMART:
+ _DBG(ctx->vctrl, "ADMINQ: GET_LOG_PAGE : SMART\n");
+ return nvme_mdev_adm_handle_get_log_page_smart(ctx);
+ case NVME_LOG_FW_SLOT:
+ _DBG(ctx->vctrl, "ADMINQ: GET_LOG_PAGE : FWSLOT\n");
+ return nvme_mdev_adm_handle_get_log_page_fw_slot(ctx);
+ default:
+ _DBG(ctx->vctrl, "ADMINQ: GET_LOG_PAGE : log page 0x%02x\n",
+ log_page_id);
+ return DNR(NVME_SC_INVALID_FIELD);
+ }
+}
+
+/* Response to CREATE CQ command */
+static int nvme_mdev_adm_handle_create_cq(struct adm_ctx *ctx)
+{
+ int irq = -1, ret;
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+ const struct nvme_create_cq *in = &ctx->in->create_cq;
+ u16 cqid = le16_to_cpu(in->cqid);
+ u16 qsize = le16_to_cpu(in->qsize);
+ u16 cq_flags = le16_to_cpu(in->cq_flags);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_NSID | RSRV_DW23 | RSRV_DPTR_PRP2 |
+ RSRV_MPTR | RSRV_DW12_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* QID checks*/
+ if (!cqid ||
+ cqid >= MAX_VIRTUAL_QUEUES || test_bit(cqid, vctrl->vcq_en))
+ return DNR(NVME_SC_QID_INVALID);
+
+ /* Queue size checks*/
+ if (qsize > (MAX_VIRTUAL_QUEUE_DEPTH - 1) || qsize < 1)
+ return DNR(NVME_SC_QUEUE_SIZE);
+
+ /* Queue flags checks */
+ if (cq_flags & ~(NVME_QUEUE_PHYS_CONTIG | NVME_CQ_IRQ_ENABLED))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (cq_flags & NVME_CQ_IRQ_ENABLED) {
+ irq = le16_to_cpu(in->irq_vector);
+ if (irq >= MAX_VIRTUAL_IRQS)
+ return DNR(NVME_SC_INVALID_VECTOR);
+ }
+
+ ret = nvme_mdev_vcq_init(ctx->vctrl, cqid,
+ le64_to_cpu(in->prp1),
+ cq_flags & NVME_QUEUE_PHYS_CONTIG,
+ qsize + 1, irq);
+
+ return nvme_mdev_translate_error(ret);
+}
+
+/* Response to DELETE CQ command */
+static int nvme_mdev_adm_handle_delete_cq(struct adm_ctx *ctx)
+{
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+ const struct nvme_delete_queue *in = &ctx->in->delete_queue;
+ u16 qid = le16_to_cpu(in->qid), sqid;
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_NSID | RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW11_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (!qid || qid >= MAX_VIRTUAL_QUEUES || !test_bit(qid, vctrl->vcq_en))
+ return DNR(NVME_SC_QID_INVALID);
+
+ for_each_set_bit(sqid, vctrl->vsq_en, MAX_VIRTUAL_QUEUES)
+ if (vctrl->vsqs[sqid].vcq == &vctrl->vcqs[qid])
+ return DNR(NVME_SC_INVALID_QUEUE);
+
+ nvme_mdev_vcq_delete(vctrl, qid);
+ return NVME_SC_SUCCESS;
+}
+
+/* Response to CREATE SQ command */
+static int nvme_mdev_adm_handle_create_sq(struct adm_ctx *ctx)
+{
+ const struct nvme_create_sq *in = &ctx->in->create_sq;
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+ int ret;
+
+ u16 sqid = le16_to_cpu(in->sqid);
+ u16 cqid = le16_to_cpu(in->cqid);
+ u16 qsize = le16_to_cpu(in->qsize);
+ u16 sq_flags = le16_to_cpu(in->sq_flags);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_NSID | RSRV_DW23 | RSRV_DPTR_PRP2 |
+ RSRV_MPTR | RSRV_DW12_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (!sqid ||
+ sqid >= MAX_VIRTUAL_QUEUES || test_bit(sqid, vctrl->vsq_en))
+ return DNR(NVME_SC_QID_INVALID);
+
+ if (!cqid || cqid >= MAX_VIRTUAL_QUEUES)
+ return DNR(NVME_SC_QID_INVALID);
+
+ if (!test_bit(cqid, vctrl->vcq_en))
+ return DNR(NVME_SC_CQ_INVALID);
+
+ /* Queue size checks */
+ if (qsize > (MAX_VIRTUAL_QUEUE_DEPTH - 1) || qsize < 1)
+ return DNR(NVME_SC_QUEUE_SIZE);
+
+ /* Queue flags checks */
+ if (sq_flags & ~(NVME_QUEUE_PHYS_CONTIG | NVME_SQ_PRIO_MASK))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+	ret = nvme_mdev_vsq_init(ctx->vctrl, sqid,
+				 le64_to_cpu(in->prp1),
+				 sq_flags & NVME_QUEUE_PHYS_CONTIG,
+				 qsize + 1, cqid);
+	return nvme_mdev_translate_error(ret);
+}
+
+/* Response to DELETE SQ command */
+static int nvme_mdev_adm_handle_delete_sq(struct adm_ctx *ctx)
+{
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+ const struct nvme_delete_queue *in = &ctx->in->delete_queue;
+ u16 qid = le16_to_cpu(in->qid);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_NSID | RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW11_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (!qid || qid >= MAX_VIRTUAL_QUEUES || !test_bit(qid, vctrl->vsq_en))
+ return DNR(NVME_SC_QID_INVALID);
+
+ nvme_mdev_vsq_delete(ctx->vctrl, qid);
+ return NVME_SC_SUCCESS;
+}
+
+/* Set the shadow doorbell */
+static int nvme_mdev_adm_handle_dbbuf(struct adm_ctx *ctx)
+{
+ const struct nvme_dbbuf *in = &ctx->in->dbbuf;
+ int ret;
+
+ dma_addr_t sdb_iova = le64_to_cpu(in->prp1);
+ dma_addr_t eidx_iova = le64_to_cpu(in->prp2);
+
+ /* Check if we support the shadow doorbell */
+ if (!ctx->vctrl->mmio.shadow_db_supported)
+ return DNR(NVME_SC_INVALID_OPCODE);
+
+	/* Don't allow the shadow doorbell to be enabled more than once */
+ if (ctx->vctrl->mmio.shadow_db_en)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_NSID | RSRV_DW23 |
+ RSRV_MPTR | RSRV_DW10_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* check input buffers */
+ if ((OFFSET_IN_PAGE(sdb_iova) != 0) || (OFFSET_IN_PAGE(eidx_iova) != 0))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* switch to the new doorbell buffer */
+ ret = nvme_mdev_mmio_enable_dbs_shadow(ctx->vctrl, sdb_iova, eidx_iova);
+ return nvme_mdev_translate_error(ret);
+}
+
+/* Response to GET_FEATURES command */
+static int nvme_mdev_adm_handle_get_features(struct adm_ctx *ctx)
+{
+ u32 value = 0;
+ u32 irq;
+ const struct nvme_features *in = &ctx->in->features;
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+ unsigned int tmp;
+
+ u32 fid = le32_to_cpu(in->fid);
+ u16 cid = le16_to_cpu(in->command_id);
+
+ _DBG(ctx->vctrl, "ADMINQ: GET_FEATURES FID=0x%x\n", fid);
+
+ /* common reserved bits*/
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW12_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* reserved bits in dword10*/
+ if (fid > 0xFF)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* reserved bits in dword11*/
+ if (fid != NVME_FEAT_IRQ_CONFIG && in->dword11 != 0)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ switch (fid) {
+ /* Number of queues */
+ case NVME_FEAT_NUM_QUEUES:
+ value = (MAX_VIRTUAL_QUEUES - 1) |
+ ((MAX_VIRTUAL_QUEUES - 1) << 16);
+ goto out;
+
+ /* Arbitration */
+ case NVME_FEAT_ARBITRATION:
+ value = vctrl->arb_burst_shift & 0x7;
+ goto out;
+
+ /* Interrupt coalescing settings*/
+ case NVME_FEAT_IRQ_COALESCE:
+ tmp = vctrl->irqs.irq_coalesc_time_us;
+ do_div(tmp, 100);
+ value = (vctrl->irqs.irq_coalesc_max - 1) | (tmp << 8);
+ goto out;
+
+ /* Interrupt coalescing disable for a specific interrupt */
+ case NVME_FEAT_IRQ_CONFIG:
+ irq = le32_to_cpu(in->dword11);
+ if (irq >= MAX_VIRTUAL_IRQS)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ value = irq;
+ if (vctrl->irqs.vecs[irq].irq_coalesc_en)
+ value |= (1 << 16);
+ goto out;
+
+ /* Volatile write cache */
+ case NVME_FEAT_VOLATILE_WC:
+		/* we always report a write cache due to the mediation layer */
+ value = 0x1;
+ goto out;
+
+ /* Limited error recovery */
+ case NVME_FEAT_ERR_RECOVERY:
+ value = 0;
+ break;
+
+ /* Workload hint + power state */
+ case NVME_FEAT_POWER_MGMT:
+ value = vctrl->worload_hint << 4;
+ break;
+
+ /* Temperature threshold */
+ case NVME_FEAT_TEMP_THRESH:
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* AEN permanent masking*/
+ case NVME_FEAT_ASYNC_EVENT:
+ value = nvme_mdev_event_read_aen_config(vctrl);
+ goto out;
+ default:
+ return DNR(NVME_SC_INVALID_FIELD);
+ }
+out:
+ nvme_mdev_vsq_cmd_done_adm(ctx->vctrl, value, cid, NVME_SC_SUCCESS);
+ return -1;
+}
+
+/* Response to SET_FEATURES command */
+static int nvme_mdev_adm_handle_set_features(struct adm_ctx *ctx)
+{
+ const struct nvme_features *in = &ctx->in->features;
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+
+ u32 value = le32_to_cpu(in->dword11);
+ u8 fid = le32_to_cpu(in->fid) & 0xFF;
+ u16 cid = le16_to_cpu(in->command_id);
+ u32 nsid = le32_to_cpu(in->nsid);
+
+ _DBG(ctx->vctrl, "ADMINQ: SET_FEATURES cmd. FID=0x%x\n", fid);
+
+ if (nsid != 0xffffffff && nsid != 0)
+ return DNR(NVME_SC_FEATURE_NOT_PER_NS);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW12_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ switch (fid) {
+ case NVME_FEAT_NUM_QUEUES:
+ /* need to return the value here as well */
+ value = (MAX_VIRTUAL_QUEUES - 1) |
+ ((MAX_VIRTUAL_QUEUES - 1) << 16);
+
+ nvme_mdev_vsq_cmd_done_adm(ctx->vctrl, value,
+ cid, NVME_SC_SUCCESS);
+ return -1;
+
+ case NVME_FEAT_ARBITRATION:
+ vctrl->arb_burst_shift = value & 0x7;
+ return NVME_SC_SUCCESS;
+
+ case NVME_FEAT_IRQ_COALESCE:
+ vctrl->irqs.irq_coalesc_max = (value & 0xFF) + 1;
+ vctrl->irqs.irq_coalesc_time_us = ((value >> 8) & 0xFF) * 100;
+ return NVME_SC_SUCCESS;
+
+ case NVME_FEAT_IRQ_CONFIG: {
+ u16 irq = value & 0xFFFF;
+
+ if (irq >= MAX_VIRTUAL_IRQS)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ vctrl->irqs.vecs[irq].irq_coalesc_en = (value & 0x10000) != 0;
+ return NVME_SC_SUCCESS;
+ }
+ case NVME_FEAT_VOLATILE_WC:
+ return (value != 0x1) ? DNR(NVME_SC_FEATURE_NOT_CHANGEABLE) :
+ NVME_SC_SUCCESS;
+
+ case NVME_FEAT_ERR_RECOVERY:
+ return (value != 0) ? DNR(NVME_SC_FEATURE_NOT_CHANGEABLE) :
+ NVME_SC_SUCCESS;
+ case NVME_FEAT_POWER_MGMT:
+ if (value & 0xFFFFFF0F)
+ return DNR(NVME_SC_INVALID_FIELD);
+ vctrl->worload_hint = value >> 4;
+ return NVME_SC_SUCCESS;
+
+ case NVME_FEAT_TEMP_THRESH:
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ case NVME_FEAT_ASYNC_EVENT:
+ nvme_mdev_event_set_aen_config(vctrl, value);
+ return NVME_SC_SUCCESS;
+ default:
+ return DNR(NVME_SC_INVALID_FIELD);
+ }
+}
+
+/* Response to AER command */
+static int nvme_mdev_adm_handle_async_event(struct adm_ctx *ctx)
+{
+ u16 cid = le16_to_cpu(ctx->in->common.command_id);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_NSID | RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW10_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ return nvme_mdev_event_request_receive(ctx->vctrl, cid);
+}
+
+/* (Dummy) response to ABORT command*/
+static int nvme_mdev_adm_handle_abort(struct adm_ctx *ctx)
+{
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_NSID | RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW10_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ return DNR(NVME_SC_ABORT_MISSING);
+}
+
+/* Process one new command in the admin queue*/
+static int nvme_mdev_adm_handle_cmd(struct adm_ctx *ctx)
+{
+ u8 optcode = ctx->in->common.opcode;
+
+ ctx->ns = NULL;
+ ctx->datalen = 0;
+
+ if (ctx->in->common.flags != 0)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ switch (optcode) {
+ case nvme_admin_identify:
+ return nvme_mdev_adm_handle_id(ctx);
+ case nvme_admin_create_cq:
+ _DBG(ctx->vctrl, "ADMINQ: CREATE_CQ\n");
+ return nvme_mdev_adm_handle_create_cq(ctx);
+ case nvme_admin_create_sq:
+ _DBG(ctx->vctrl, "ADMINQ: CREATE_SQ\n");
+ return nvme_mdev_adm_handle_create_sq(ctx);
+ case nvme_admin_delete_sq:
+ _DBG(ctx->vctrl, "ADMINQ: DELETE_SQ\n");
+ return nvme_mdev_adm_handle_delete_sq(ctx);
+ case nvme_admin_delete_cq:
+ _DBG(ctx->vctrl, "ADMINQ: DELETE_CQ\n");
+ return nvme_mdev_adm_handle_delete_cq(ctx);
+ case nvme_admin_dbbuf:
+ _DBG(ctx->vctrl, "ADMINQ: DBBUF_CONFIG\n");
+ return nvme_mdev_adm_handle_dbbuf(ctx);
+ case nvme_admin_get_log_page:
+ return nvme_mdev_adm_handle_get_log_page(ctx);
+ case nvme_admin_get_features:
+ return nvme_mdev_adm_handle_get_features(ctx);
+ case nvme_admin_set_features:
+ return nvme_mdev_adm_handle_set_features(ctx);
+ case nvme_admin_async_event:
+ _DBG(ctx->vctrl, "ADMINQ: ASYNC_EVENT_REQ\n");
+ return nvme_mdev_adm_handle_async_event(ctx);
+ case nvme_admin_abort_cmd:
+ _DBG(ctx->vctrl, "ADMINQ: ABORT\n");
+ return nvme_mdev_adm_handle_abort(ctx);
+ default:
+ _DBG(ctx->vctrl, "ADMINQ: optcode 0x%04x\n", optcode);
+ return DNR(NVME_SC_INVALID_OPCODE);
+ }
+}
+
+/* Process all pending admin commands */
+void nvme_mdev_adm_process_sq(struct nvme_mdev_vctrl *vctrl)
+{
+ struct adm_ctx ctx;
+
+ lockdep_assert_held(&vctrl->lock);
+ memset(&ctx, 0, sizeof(struct adm_ctx));
+ ctx.vctrl = vctrl;
+ ctx.hctrl = vctrl->hctrl;
+ nvme_mdev_udata_iter_setup(&vctrl->viommu, &ctx.udatait);
+
+ nvme_mdev_io_pause(ctx.vctrl);
+
+ while (!(nvme_mdev_vctrl_is_dead(vctrl))) {
+ int ret;
+ u16 cid;
+
+ ctx.in = nvme_mdev_vsq_get_cmd(vctrl, &vctrl->vsqs[0]);
+ if (!ctx.in)
+ break;
+
+ cid = le16_to_cpu(ctx.in->common.command_id);
+ ret = nvme_mdev_adm_handle_cmd(&ctx);
+
+ if (ret == -1)
+ continue;
+
+ if (ret != 0)
+ _DBG(vctrl, "ADMINQ: CID 0x%x FAILED: status 0x%x\n",
+ cid, ret);
+ nvme_mdev_vsq_cmd_done_adm(vctrl, 0, cid, ret);
+ }
+ nvme_mdev_io_resume(ctx.vctrl);
+}
diff --git a/drivers/nvme/mdev/events.c b/drivers/nvme/mdev/events.c
new file mode 100644
index 000000000000..9854c1cabdcb
--- /dev/null
+++ b/drivers/nvme/mdev/events.c
@@ -0,0 +1,142 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * NVMe async events implementation (AER, changed namespace log)
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include "priv.h"
+
+/* complete an AER event on the admin queue if it is pending*/
+static void nvme_mdev_event_complete(struct nvme_mdev_vctrl *vctrl)
+{
+ u16 lid, cid;
+ u32 dw0;
+
+ for_each_set_bit(lid, vctrl->events.events_pending, MAX_LOG_PAGES) {
+		/* we have pending events, but no AER requests left to complete */
+ if (vctrl->events.aer_cid_count == 0)
+ break;
+
+ if (!test_bit(lid, vctrl->events.events_enabled))
+ continue;
+
+ cid = vctrl->events.aer_cids[--vctrl->events.aer_cid_count];
+ dw0 = vctrl->events.event_values[lid];
+ clear_bit(lid, vctrl->events.events_pending);
+
+ _DBG(vctrl,
+ "AEN: replying to AER (CID=%d) with status 0x%08x\n",
+ cid, dw0);
+
+ nvme_mdev_vsq_cmd_done_adm(vctrl, dw0, cid, NVME_SC_SUCCESS);
+ }
+}
+
+/* deal with received async event request from the user*/
+int nvme_mdev_event_request_receive(struct nvme_mdev_vctrl *vctrl,
+ u16 cid)
+{
+ int cnt = vctrl->events.aer_cid_count;
+
+ if (cnt >= MAX_AER_COMMANDS)
+ return DNR(NVME_SC_ASYNC_LIMIT);
+
+	/* don't let AER completions permanently exhaust the space
+	 * in the admin completion queue
+	 */
+ if ((cnt + 1) >= vctrl->vcqs[0].size - 1)
+ return DNR(NVME_SC_ASYNC_LIMIT);
+
+ vctrl->events.aer_cids[cnt++] = cid;
+ vctrl->events.aer_cid_count = cnt;
+
+ _DBG(vctrl, "AEN: received new request (cid=%d)\n", cid);
+ nvme_mdev_event_complete(vctrl);
+ return -1;
+}
+
+/* Send an async event request */
+void nvme_mdev_event_send(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_async_event_type type,
+ enum nvme_async_event info)
+{
+ u8 log_page;
+ u32 event;
+
+ // determine the log page for event types that we support
+ switch (type) {
+ case NVME_AER_TYPE_ERROR:
+ log_page = NVME_LOG_ERROR;
+ break;
+ case NVME_AER_TYPE_SMART:
+ log_page = NVME_LOG_SMART;
+ break;
+ case NVME_AER_TYPE_NOTICE:
+ WARN_ON(info != NVME_AER_NOTICE_NS_CHANGED);
+ log_page = NVME_LOG_CHANGED_NS;
+ break;
+ default:
+ WARN_ON(1);
+ return;
+ }
+
+ if (test_and_set_bit(log_page, vctrl->events.events_masked))
+ return;
+
+ event = (u32)type | ((u32)info << 8) | ((u32)log_page << 16);
+ vctrl->events.event_values[log_page] = event;
+ set_bit(log_page, vctrl->events.events_pending);
+ nvme_mdev_event_complete(vctrl);
+}
+
+u32 nvme_mdev_event_read_aen_config(struct nvme_mdev_vctrl *vctrl)
+{
+ u32 value = 0;
+
+ if (test_bit(NVME_LOG_CHANGED_NS, vctrl->events.events_enabled))
+ value |= NVME_AEN_CFG_NS_ATTR;
+ return value;
+}
+
+void nvme_mdev_event_set_aen_config(struct nvme_mdev_vctrl *vctrl, u32 value)
+{
+ _DBG(vctrl, "AEN: set config: 0x%04x\n", value);
+
+ if (value & NVME_AEN_CFG_NS_ATTR)
+ set_bit(NVME_LOG_CHANGED_NS, vctrl->events.events_enabled);
+ else
+ clear_bit(NVME_LOG_CHANGED_NS, vctrl->events.events_enabled);
+
+ nvme_mdev_event_complete(vctrl);
+}
+
+/* called when the user acks a log page, which unmasks its AER event */
+void nvme_mdev_event_process_ack(struct nvme_mdev_vctrl *vctrl, u8 log_page)
+{
+ lockdep_assert_held(&vctrl->lock);
+
+ _DBG(vctrl, "AEN: log page %d ACK\n", log_page);
+
+ if (log_page >= MAX_LOG_PAGES)
+ return;
+
+ clear_bit(log_page, vctrl->events.events_masked);
+ nvme_mdev_event_complete(vctrl);
+}
+
+/* initialize event state */
+void nvme_mdev_events_init(struct nvme_mdev_vctrl *vctrl)
+{
+ memset(&vctrl->events, 0, sizeof(vctrl->events));
+ set_bit(NVME_LOG_CHANGED_NS, vctrl->events.events_enabled);
+ set_bit(NVME_LOG_ERROR, vctrl->events.events_enabled);
+}
+
+/* reset event state*/
+void nvme_mdev_events_reset(struct nvme_mdev_vctrl *vctrl)
+{
+ memset(&vctrl->events, 0, sizeof(vctrl->events));
+}
+
diff --git a/drivers/nvme/mdev/host.c b/drivers/nvme/mdev/host.c
new file mode 100644
index 000000000000..d90275baf5f8
--- /dev/null
+++ b/drivers/nvme/mdev/host.c
@@ -0,0 +1,491 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * NVMe parent (host) device abstraction
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/nvme.h>
+#include <linux/mdev.h>
+#include <linux/module.h>
+#include "priv.h"
+
+static LIST_HEAD(nvme_mdev_hctrl_list);
+static DEFINE_MUTEX(nvme_mdev_hctrl_list_mutex);
+static struct nvme_mdev_inst_type **instance_types;
+
+unsigned int io_timeout_ms = 30000;
+module_param_named(io_timeout, io_timeout_ms, uint, 0644);
+MODULE_PARM_DESC(io_timeout,
+ "Maximum I/O command completion timeout (in msec)");
+
+unsigned int poll_timeout_ms = 500;
+module_param_named(poll_timeout, poll_timeout_ms, uint, 0644);
+MODULE_PARM_DESC(poll_timeout,
+ "Maximum idle time to keep polling (in msec) (0 - poll forever)");
+
+unsigned int admin_poll_rate_ms = 100;
+module_param_named(admin_poll_rate, admin_poll_rate_ms, uint, 0644);
+MODULE_PARM_DESC(admin_poll_rate,
+ "Admin queue polling rate (in msec) (used only when shadow doorbell is disabled)");
+
+bool use_shadow_doorbell = true;
+module_param(use_shadow_doorbell, bool, 0644);
+MODULE_PARM_DESC(use_shadow_doorbell,
+ "Enable the shadow doorbell NVMe extension");
+
+/* Create a new host controller */
+static struct nvme_mdev_hctrl *nvme_mdev_hctrl_create(struct nvme_ctrl *ctrl)
+{
+ struct nvme_mdev_hctrl *hctrl;
+ u32 max_lba_transfer;
+
+ /* TODOLATER: IO: support more page size configurations*/
+ if (ctrl->page_size != PAGE_SIZE)
+ return NULL;
+
+ hctrl = kzalloc_node(sizeof(*hctrl), GFP_KERNEL,
+ dev_to_node(ctrl->dev));
+ if (!hctrl)
+ return NULL;
+
+ kref_init(&hctrl->ref);
+ mutex_init(&hctrl->lock);
+
+ hctrl->nvme_ctrl = ctrl;
+ nvme_get_ctrl(ctrl);
+
+ hctrl->oncs = ctrl->oncs &
+ (NVME_CTRL_ONCS_DSM | NVME_CTRL_ONCS_WRITE_ZEROES);
+
+ hctrl->id = ctrl->instance;
+ hctrl->node = dev_to_node(ctrl->dev);
+
+ max_lba_transfer = ctrl->max_hw_sectors >> (PAGE_SHIFT - 9);
+ hctrl->mdts = ilog2(__rounddown_pow_of_two(max_lba_transfer));
+
+ hctrl->nr_host_queues = ctrl->ops->ext_queues_available(ctrl);
+
+ mutex_lock(&nvme_mdev_hctrl_list_mutex);
+
+ dev_info(ctrl->dev,
+ "mediated nvme support enabled, using up to %d host queues\n",
+ hctrl->nr_host_queues);
+
+ list_add_tail(&hctrl->link, &nvme_mdev_hctrl_list);
+
+ mutex_unlock(&nvme_mdev_hctrl_list_mutex);
+
+	if (mdev_register_device(ctrl->dev, &mdev_fops) < 0) {
+		mutex_lock(&nvme_mdev_hctrl_list_mutex);
+		list_del(&hctrl->link);
+		mutex_unlock(&nvme_mdev_hctrl_list_mutex);
+		nvme_put_ctrl(ctrl);
+		kfree(hctrl);
+		return NULL;
+	}
+ return hctrl;
+}
+
+/* Release an unused host controller*/
+static void nvme_mdev_hctrl_free(struct kref *ref)
+{
+ struct nvme_mdev_hctrl *hctrl =
+ container_of(ref, struct nvme_mdev_hctrl, ref);
+
+ dev_info(hctrl->nvme_ctrl->dev, "mediated nvme support disabled");
+
+ nvme_put_ctrl(hctrl->nvme_ctrl);
+ hctrl->nvme_ctrl = NULL;
+ kfree(hctrl);
+}
+
+/* Lookup a host controller based on mdev parent device*/
+struct nvme_mdev_hctrl *nvme_mdev_hctrl_lookup_get(struct device *parent)
+{
+ struct nvme_mdev_hctrl *hctrl = NULL, *tmp;
+
+ mutex_lock(&nvme_mdev_hctrl_list_mutex);
+ list_for_each_entry(tmp, &nvme_mdev_hctrl_list, link) {
+ if (tmp->nvme_ctrl->dev == parent) {
+ hctrl = tmp;
+ kref_get(&hctrl->ref);
+ break;
+ }
+ }
+ mutex_unlock(&nvme_mdev_hctrl_list_mutex);
+ return hctrl;
+}
+
+/* Release a held reference to a host controller*/
+void nvme_mdev_hctrl_put(struct nvme_mdev_hctrl *hctrl)
+{
+ kref_put(&hctrl->ref, nvme_mdev_hctrl_free);
+}
+
+/* Destroy a host controller. It may linger in a zombie state
+ * as long as someone still holds a reference to it
+ */
+static void nvme_mdev_hctrl_destroy(struct nvme_mdev_hctrl *hctrl)
+{
+ mutex_lock(&nvme_mdev_hctrl_list_mutex);
+ list_del(&hctrl->link);
+ mutex_unlock(&nvme_mdev_hctrl_list_mutex);
+
+ hctrl->removing = true;
+ mdev_unregister_device(hctrl->nvme_ctrl->dev);
+ nvme_mdev_hctrl_put(hctrl);
+}
+
+/* Check how many host queues are still available */
+int nvme_mdev_hctrl_hqs_available(struct nvme_mdev_hctrl *hctrl)
+{
+ int ret;
+
+ mutex_lock(&hctrl->lock);
+ ret = hctrl->nr_host_queues;
+ mutex_unlock(&hctrl->lock);
+ return ret;
+}
+
+/* Reserve N host IO queues, for later allocation to a specific user*/
+bool nvme_mdev_hctrl_hqs_reserve(struct nvme_mdev_hctrl *hctrl,
+ unsigned int n)
+{
+ mutex_lock(&hctrl->lock);
+
+ if (n > hctrl->nr_host_queues) {
+ mutex_unlock(&hctrl->lock);
+ return false;
+ }
+
+ hctrl->nr_host_queues -= n;
+ mutex_unlock(&hctrl->lock);
+ return true;
+}
+
+/* Free N host IO queues, for allocation for other users*/
+void nvme_mdev_hctrl_hqs_unreserve(struct nvme_mdev_hctrl *hctrl,
+ unsigned int n)
+{
+ mutex_lock(&hctrl->lock);
+ hctrl->nr_host_queues += n;
+ mutex_unlock(&hctrl->lock);
+}
+
+/* Allocate a host IO queue */
+int nvme_mdev_hctrl_hq_alloc(struct nvme_mdev_hctrl *hctrl)
+{
+ u16 qid = 0;
+ int ret = hctrl->nvme_ctrl->ops->ext_queue_alloc(hctrl->nvme_ctrl,
+ &qid);
+
+ if (ret)
+ return ret;
+ return qid;
+}
+
+/* Free a host IO queue */
+void nvme_mdev_hctrl_hq_free(struct nvme_mdev_hctrl *hctrl, u16 qid)
+{
+ hctrl->nvme_ctrl->ops->ext_queue_free(hctrl->nvme_ctrl, qid);
+}
+
+/* Check if we can submit another IO passthrough command */
+bool nvme_mdev_hctrl_hq_can_submit(struct nvme_mdev_hctrl *hctrl, u16 qid)
+{
+ return hctrl->nvme_ctrl->ops->ext_queue_full(hctrl->nvme_ctrl, qid);
+}
+
+/* Check if IO passthrough is supported for a given IO opcode */
+bool nvme_mdev_hctrl_hq_check_op(struct nvme_mdev_hctrl *hctrl, u8 opcode)
+{
+	switch (opcode) {
+	case nvme_cmd_flush:
+	case nvme_cmd_read:
+	case nvme_cmd_write:
+		/* these are mandatory */
+		return true;
+	case nvme_cmd_write_zeroes:
+		return (hctrl->oncs & NVME_CTRL_ONCS_WRITE_ZEROES);
+	case nvme_cmd_dsm:
+		return (hctrl->oncs & NVME_CTRL_ONCS_DSM);
+	default:
+		return false;
+	}
+}
+
+/* Submit an IO passthrough command */
+int nvme_mdev_hctrl_hq_submit(struct nvme_mdev_hctrl *hctrl,
+ u16 qid, u32 tag,
+ struct nvme_command *cmd,
+ struct nvme_ext_data_iter *datait)
+{
+ struct nvme_ctrl *ctrl = hctrl->nvme_ctrl;
+
+ return ctrl->ops->ext_queue_submit(ctrl, qid, tag, cmd, datait);
+}
+
+/* Poll for completion of IO passthrough commands */
+int nvme_mdev_hctrl_hq_poll(struct nvme_mdev_hctrl *hctrl,
+ u32 qid,
+ struct nvme_ext_cmd_result *results,
+ unsigned int max_len)
+{
+ struct nvme_ctrl *ctrl = hctrl->nvme_ctrl;
+
+ return ctrl->ops->ext_queue_poll(ctrl, qid, results, max_len);
+}
+
+/* Destroy all host controllers */
+void nvme_mdev_hctrl_destroy_all(void)
+{
+ struct nvme_mdev_hctrl *hctrl = NULL, *tmp;
+
+ list_for_each_entry_safe(hctrl, tmp, &nvme_mdev_hctrl_list, link) {
+ list_del(&hctrl->link);
+ hctrl->removing = true;
+ mdev_unregister_device(hctrl->nvme_ctrl->dev);
+ nvme_mdev_hctrl_put(hctrl);
+ }
+}
+
+/* Get the mdev instance type, given its sysfs name */
+struct nvme_mdev_inst_type *nvme_mdev_inst_type_get(const char *name)
+{
+	size_t namelen = strlen(name);
+	int i;
+
+	for (i = 0; instance_types[i]; i++) {
+		size_t typelen = strlen(instance_types[i]->name);
+
+		if (namelen < typelen)
+			continue;
+		if (strcmp(instance_types[i]->name,
+			   name + namelen - typelen) == 0)
+			return instance_types[i];
+	}
+	return NULL;
+}
+
+/* This shows name of the instance type */
+static ssize_t name_show(struct kobject *kobj, struct device *dev, char *buf)
+{
+ return sprintf(buf, "%s\n", kobj->name);
+}
+static MDEV_TYPE_ATTR_RO(name);
+
+/* This shows the description of the instance type */
+static ssize_t description_show(struct kobject *kobj,
+				struct device *dev, char *buf)
+{
+	struct nvme_mdev_inst_type *type = nvme_mdev_inst_type_get(kobj->name);
+
+	if (!type)
+		return -EINVAL;
+
+	return sprintf(buf,
+		"MDEV nvme device, using maximum %d hw submission queues\n",
+		type->max_hw_queues);
+}
+static MDEV_TYPE_ATTR_RO(description);
+
+/* This shows the device API of the instance type */
+static ssize_t device_api_show(struct kobject *kobj,
+ struct device *dev, char *buf)
+{
+ return sprintf(buf, "%s\n", VFIO_DEVICE_API_PCI_STRING);
+}
+static MDEV_TYPE_ATTR_RO(device_api);
+
+/* This shows how many instances of this instance type can be created */
+static ssize_t available_instances_show(struct kobject *kobj,
+					struct device *dev, char *buf)
+{
+	struct nvme_mdev_inst_type *type = nvme_mdev_inst_type_get(kobj->name);
+	struct nvme_mdev_hctrl *hctrl;
+	int count;
+
+	if (!type)
+		return -EINVAL;
+
+	hctrl = nvme_mdev_hctrl_lookup_get(dev);
+	if (!hctrl)
+		return -ENODEV;
+
+	count = nvme_mdev_hctrl_hqs_available(hctrl) / type->max_hw_queues;
+
+	nvme_mdev_hctrl_put(hctrl);
+	return sprintf(buf, "%d\n", count);
+}
+static MDEV_TYPE_ATTR_RO(available_instances);
+
+static struct attribute *nvme_mdev_types_attrs[] = {
+ &mdev_type_attr_name.attr,
+ &mdev_type_attr_description.attr,
+ &mdev_type_attr_device_api.attr,
+ &mdev_type_attr_available_instances.attr,
+ NULL,
+};
+
+/* Undo the creation of mdev array of instance types */
+static void nvme_mdev_instance_types_fini(struct mdev_parent_ops *ops)
+{
+ int i;
+
+ for (i = 0; instance_types[i]; i++) {
+ struct nvme_mdev_inst_type *type = instance_types[i];
+
+ kfree(type->attrgroup);
+ kfree(type);
+ }
+
+ kfree(instance_types);
+ instance_types = NULL;
+
+ kfree(ops->supported_type_groups);
+ ops->supported_type_groups = NULL;
+}
+
+/* Create the array of mdev instance types from our array of them */
+static int nvme_mdev_instance_types_init(struct mdev_parent_ops *ops)
+{
+ unsigned int i;
+ struct nvme_mdev_inst_type *type;
+ struct attribute_group *attrgroup;
+
+ ops->supported_type_groups = kzalloc(sizeof(struct attribute_group *)
+ * (MAX_HOST_QUEUES + 1), GFP_KERNEL);
+
+ if (!ops->supported_type_groups)
+ return -ENOMEM;
+
+	instance_types = kzalloc(sizeof(struct nvme_mdev_inst_type *)
+		* (MAX_HOST_QUEUES + 1), GFP_KERNEL);
+
+ if (!instance_types) {
+ kfree(ops->supported_type_groups);
+ ops->supported_type_groups = NULL;
+ return -ENOMEM;
+ }
+
+ for (i = 0; i < MAX_HOST_QUEUES; i++) {
+ type = kzalloc(sizeof(*type), GFP_KERNEL);
+ if (!type) {
+ nvme_mdev_instance_types_fini(ops);
+ return -ENOMEM;
+ }
+ snprintf(type->name, sizeof(type->name), "%dQ_V1", i + 1);
+ type->max_hw_queues = i + 1;
+
+ attrgroup = kzalloc(sizeof(*attrgroup), GFP_KERNEL);
+ if (!attrgroup) {
+ kfree(type);
+ nvme_mdev_instance_types_fini(ops);
+ return -ENOMEM;
+ }
+
+ attrgroup->attrs = nvme_mdev_types_attrs;
+ attrgroup->name = type->name;
+ type->attrgroup = attrgroup;
+ instance_types[i] = type;
+ ops->supported_type_groups[i] = attrgroup;
+ }
+ return 0;
+}
+
+/* Handle changes in the host controller state */
+static void nvme_mdev_nvme_ctrl_state_changed(struct nvme_ctrl *ctrl)
+{
+ struct nvme_mdev_hctrl *hctrl = nvme_mdev_hctrl_lookup_get(ctrl->dev);
+ struct nvme_mdev_vctrl *vctrl;
+
+ switch (ctrl->state) {
+	case NVME_CTRL_NEW:
+		/* do nothing, as a new controller is not yet initialized */
+		break;
+
+	case NVME_CTRL_LIVE:
+		if (!hctrl) {
+			/* a new controller went live - create an mdev for it */
+			nvme_mdev_hctrl_create(ctrl);
+			return;
+		}
+		/* the controller is live again after reset/reconnect/suspend */
+		mutex_lock(&nvme_mdev_vctrl_list_mutex);
+		list_for_each_entry(vctrl, &nvme_mdev_vctrl_list, link)
+			if (vctrl->hctrl == hctrl)
+				nvme_mdev_vctrl_resume(vctrl);
+		mutex_unlock(&nvme_mdev_vctrl_list_mutex);
+ break;
+
+ case NVME_CTRL_RESETTING:
+ case NVME_CTRL_CONNECTING:
+ case NVME_CTRL_SUSPENDED:
+		/* controller is temporarily not usable, stop using its queues */
+ if (!hctrl)
+ return;
+
+ mutex_lock(&nvme_mdev_vctrl_list_mutex);
+ list_for_each_entry(vctrl, &nvme_mdev_vctrl_list, link)
+ if (vctrl->hctrl == hctrl)
+ nvme_mdev_vctrl_pause(vctrl);
+ mutex_unlock(&nvme_mdev_vctrl_list_mutex);
+ break;
+
+ case NVME_CTRL_DELETING:
+ case NVME_CTRL_DEAD:
+ case NVME_CTRL_ADMIN_ONLY:
+		/* the host controller is dead or unusable for IO - remove it */
+ if (!hctrl)
+ return;
+ nvme_mdev_hctrl_destroy(hctrl);
+ break;
+ }
+	if (hctrl)
+		nvme_mdev_hctrl_put(hctrl);
+}
+
+/* A host namespace had its properties changed, or was removed */
+static void nvme_mdev_nvme_ctrl_ns_updated(struct nvme_ctrl *ctrl,
+ u32 nsid, bool removed)
+{
+ struct nvme_mdev_vctrl *vctrl;
+ struct nvme_mdev_hctrl *hctrl = nvme_mdev_hctrl_lookup_get(ctrl->dev);
+
+ if (!hctrl)
+ return;
+
+ mutex_lock(&nvme_mdev_vctrl_list_mutex);
+ list_for_each_entry(vctrl, &nvme_mdev_vctrl_list, link)
+ if (vctrl->hctrl == hctrl)
+ nvme_mdev_vns_host_ns_update(vctrl, nsid, removed);
+ mutex_unlock(&nvme_mdev_vctrl_list_mutex);
+ nvme_mdev_hctrl_put(hctrl);
+}
+
+static struct nvme_mdev_driver nvme_mdev_driver = {
+ .owner = THIS_MODULE,
+ .nvme_ctrl_state_changed = nvme_mdev_nvme_ctrl_state_changed,
+ .nvme_ns_state_changed = nvme_mdev_nvme_ctrl_ns_updated,
+};
+
+static int __init nvme_mdev_init(void)
+{
+ int ret;
+
+	ret = nvme_mdev_instance_types_init(&mdev_fops);
+	if (ret)
+		return ret;
+
+	ret = nvme_core_register_mdev_driver(&nvme_mdev_driver);
+	if (ret) {
+		nvme_mdev_instance_types_fini(&mdev_fops);
+		return ret;
+	}
+
+ pr_info("nvme_mdev " NVME_MDEV_FIRMWARE_VERSION " loaded\n");
+ return 0;
+}
+
+static void __exit nvme_mdev_exit(void)
+{
+ nvme_core_unregister_mdev_driver(&nvme_mdev_driver);
+ nvme_mdev_hctrl_destroy_all();
+ nvme_mdev_instance_types_fini(&mdev_fops);
+ pr_info("nvme_mdev unloaded\n");
+}
+
+MODULE_AUTHOR("Maxim Levitsky <mlevitsk@redhat.com>");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(NVME_MDEV_FIRMWARE_VERSION);
+
+module_init(nvme_mdev_init)
+module_exit(nvme_mdev_exit)
+
diff --git a/drivers/nvme/mdev/instance.c b/drivers/nvme/mdev/instance.c
new file mode 100644
index 000000000000..da523006aeda
--- /dev/null
+++ b/drivers/nvme/mdev/instance.c
@@ -0,0 +1,802 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Mediated NVMe instance VFIO code
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/vfio.h>
+#include <linux/sysfs.h>
+#include <linux/mdev.h>
+#include "priv.h"
+
+#define OFFSET_TO_REGION(offset) ((offset) >> 20)
+#define REGION_TO_OFFSET(nr) (((u64)nr) << 20)
+
+LIST_HEAD(nvme_mdev_vctrl_list);
+/* Protects the list */
+DEFINE_MUTEX(nvme_mdev_vctrl_list_mutex);
+
+struct mdev_nvme_vfio_region_info {
+ struct vfio_region_info base;
+ struct vfio_region_info_cap_sparse_mmap mmap_cap;
+};
+
+/* User memory added */
+static int nvme_mdev_map_notifier(struct notifier_block *nb,
+				  unsigned long action, void *data)
+{
+	struct vfio_iommu_type1_dma_map *map = data;
+	struct nvme_mdev_vctrl *vctrl =
+		container_of(nb, struct nvme_mdev_vctrl, vfio_map_notifier);
+
+	int ret = nvme_mdev_vctrl_viommu_map(vctrl, map->flags,
+					     map->iova, map->size);
+
+	return ret ? notifier_from_errno(ret) : NOTIFY_OK;
+}
+
+/* User memory removed */
+static int nvme_mdev_unmap_notifier(struct notifier_block *nb,
+ unsigned long action, void *data)
+{
+ struct nvme_mdev_vctrl *vctrl =
+ container_of(nb, struct nvme_mdev_vctrl, vfio_unmap_notifier);
+ struct vfio_iommu_type1_dma_unmap *unmap = data;
+
+ int ret = nvme_mdev_vctrl_viommu_unmap(vctrl, unmap->iova, unmap->size);
+
+ WARN_ON(ret <= 0);
+ return NOTIFY_OK;
+}
+
+/* Called when a new mediated device is created */
+static int nvme_mdev_ops_create(struct kobject *kobj, struct mdev_device *mdev)
+{
+	const struct nvme_mdev_inst_type *type;
+	struct nvme_mdev_vctrl *vctrl;
+	struct nvme_mdev_hctrl *hctrl;
+
+	hctrl = nvme_mdev_hctrl_lookup_get(mdev_parent_dev(mdev));
+	if (!hctrl)
+		return -ENODEV;
+
+	type = nvme_mdev_inst_type_get(kobj->name);
+	if (!type) {
+		nvme_mdev_hctrl_put(hctrl);
+		return -EINVAL;
+	}
+
+	vctrl = nvme_mdev_vctrl_create(mdev, hctrl, type->max_hw_queues);
+	if (IS_ERR(vctrl)) {
+		nvme_mdev_hctrl_put(hctrl);
+		return PTR_ERR(vctrl);
+	}
+
+	mutex_lock(&nvme_mdev_vctrl_list_mutex);
+	list_add_tail(&vctrl->link, &nvme_mdev_vctrl_list);
+	mutex_unlock(&nvme_mdev_vctrl_list_mutex);
+
+	nvme_mdev_hctrl_put(hctrl);
+	return 0;
+}
+
+/* Called when a mediated device is removed */
+static int nvme_mdev_ops_remove(struct mdev_device *mdev)
+{
+ struct nvme_mdev_vctrl *vctrl = mdev_to_vctrl(mdev);
+
+ if (!vctrl)
+ return -ENODEV;
+ return nvme_mdev_vctrl_destroy(vctrl);
+}
+
+/* Called when a mediated device is opened by a user */
+static int nvme_mdev_ops_open(struct mdev_device *mdev)
+{
+ int ret;
+ unsigned long events;
+ struct nvme_mdev_vctrl *vctrl = mdev_to_vctrl(mdev);
+
+ if (!vctrl)
+ return -ENODEV;
+
+ ret = nvme_mdev_vctrl_open(vctrl);
+ if (ret)
+ return ret;
+
+	/* register the unmap IOMMU notifier */
+ vctrl->vfio_unmap_notifier.notifier_call = nvme_mdev_unmap_notifier;
+ events = VFIO_IOMMU_NOTIFY_DMA_UNMAP;
+
+ ret = vfio_register_notifier(mdev_dev(vctrl->mdev),
+ VFIO_IOMMU_NOTIFY, &events,
+ &vctrl->vfio_unmap_notifier);
+
+ if (ret != 0) {
+ nvme_mdev_vctrl_release(vctrl);
+ return ret;
+ }
+
+	/* register the map IOMMU notifier */
+ vctrl->vfio_map_notifier.notifier_call = nvme_mdev_map_notifier;
+ events = VFIO_IOMMU_NOTIFY_DMA_MAP;
+
+ ret = vfio_register_notifier(mdev_dev(vctrl->mdev),
+ VFIO_IOMMU_NOTIFY, &events,
+ &vctrl->vfio_map_notifier);
+
+ if (ret != 0) {
+ vfio_unregister_notifier(mdev_dev(vctrl->mdev),
+ VFIO_IOMMU_NOTIFY,
+ &vctrl->vfio_unmap_notifier);
+ nvme_mdev_vctrl_release(vctrl);
+ return ret;
+ }
+ return ret;
+}
+
+/* Called when a mediated device is closed (last close by the user) */
+static void nvme_mdev_ops_release(struct mdev_device *mdev)
+{
+	struct nvme_mdev_vctrl *vctrl = mdev_to_vctrl(mdev);
+	int ret;
+
+	if (!vctrl)
+		return;
+
+	ret = vfio_unregister_notifier(mdev_dev(vctrl->mdev),
+				       VFIO_IOMMU_NOTIFY,
+				       &vctrl->vfio_unmap_notifier);
+	WARN_ON(ret);
+
+ ret = vfio_unregister_notifier(mdev_dev(vctrl->mdev),
+ VFIO_IOMMU_NOTIFY,
+ &vctrl->vfio_map_notifier);
+ WARN_ON(ret);
+
+ nvme_mdev_vctrl_release(vctrl);
+}
+
+/* Helper function for bar/pci config read/write access */
+static ssize_t nvme_mdev_access(struct nvme_mdev_vctrl *vctrl,
+ char *buf, size_t count,
+ loff_t pos, bool is_write)
+{
+ int index = OFFSET_TO_REGION(pos);
+ int ret = -EINVAL;
+ unsigned int offset;
+
+ if (index >= VFIO_PCI_NUM_REGIONS || !vctrl->regions[index].rw)
+ goto out;
+
+ offset = pos - REGION_TO_OFFSET(index);
+ if (offset + count > vctrl->regions[index].size)
+ goto out;
+
+ ret = vctrl->regions[index].rw(vctrl, offset, buf, count, is_write);
+out:
+ return ret;
+}
+
+/* Called when read() is done on the device */
+static ssize_t nvme_mdev_ops_read(struct mdev_device *mdev, char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ unsigned int done = 0;
+ int ret;
+ struct nvme_mdev_vctrl *vctrl = mdev_to_vctrl(mdev);
+
+ if (!vctrl)
+ return -ENODEV;
+
+ while (count) {
+ size_t filled;
+
+ if (count >= 4 && !(*ppos % 4)) {
+ u32 val;
+
+ ret = nvme_mdev_access(vctrl, (char *)&val,
+ sizeof(val), *ppos, false);
+ if (ret <= 0)
+ goto read_err;
+
+ if (copy_to_user(buf, &val, sizeof(val)))
+ goto read_err;
+ filled = sizeof(val);
+ } else if (count >= 2 && !(*ppos % 2)) {
+ u16 val;
+
+ ret = nvme_mdev_access(vctrl, (char *)&val,
+ sizeof(val), *ppos, false);
+ if (ret <= 0)
+ goto read_err;
+ if (copy_to_user(buf, &val, sizeof(val)))
+ goto read_err;
+ filled = sizeof(val);
+ } else {
+ u8 val;
+
+ ret = nvme_mdev_access(vctrl, (char *)&val,
+ sizeof(val), *ppos, false);
+ if (ret <= 0)
+ goto read_err;
+ if (copy_to_user(buf, &val, sizeof(val)))
+ goto read_err;
+ filled = sizeof(val);
+ }
+
+ count -= filled;
+ done += filled;
+ *ppos += filled;
+ buf += filled;
+ }
+ return done;
+read_err:
+ return -EFAULT;
+}
+
+/* Called when write() is done on the device */
+static ssize_t nvme_mdev_ops_write(struct mdev_device *mdev,
+ const char __user *buf,
+ size_t count, loff_t *ppos)
+{
+ unsigned int done = 0;
+ int ret;
+ struct nvme_mdev_vctrl *vctrl = mdev_to_vctrl(mdev);
+
+ if (!vctrl)
+ return -ENODEV;
+
+ while (count) {
+ size_t filled;
+
+ if (count >= 4 && !(*ppos % 4)) {
+ u32 val;
+
+ if (copy_from_user(&val, buf, sizeof(val)))
+ goto write_err;
+ ret = nvme_mdev_access(vctrl, (char *)&val,
+ sizeof(val), *ppos, true);
+ if (ret <= 0)
+ goto write_err;
+ filled = sizeof(val);
+ } else if (count >= 2 && !(*ppos % 2)) {
+ u16 val;
+
+ if (copy_from_user(&val, buf, sizeof(val)))
+ goto write_err;
+
+ ret = nvme_mdev_access(vctrl, (char *)&val,
+ sizeof(val), *ppos, true);
+ if (ret <= 0)
+ goto write_err;
+ filled = sizeof(val);
+ } else {
+ u8 val;
+
+ if (copy_from_user(&val, buf, sizeof(val)))
+ goto write_err;
+ ret = nvme_mdev_access(vctrl, (char *)&val,
+ sizeof(val), *ppos, true);
+ if (ret <= 0)
+ goto write_err;
+ filled = sizeof(val);
+ }
+ count -= filled;
+ done += filled;
+ *ppos += filled;
+ buf += filled;
+ }
+ return done;
+write_err:
+ return -EFAULT;
+}
+
+/*Helper for IRQ number VFIO query */
+static int nvme_mdev_irq_counts(struct nvme_mdev_vctrl *vctrl,
+ unsigned int irq_type)
+{
+ switch (irq_type) {
+ case VFIO_PCI_INTX_IRQ_INDEX:
+ return 1;
+ case VFIO_PCI_MSIX_IRQ_INDEX:
+ return MAX_VIRTUAL_IRQS;
+ case VFIO_PCI_REQ_IRQ_INDEX:
+ return 1;
+ default:
+ return 0;
+ }
+}
+
+/* VFIO VFIO_IRQ_SET_ACTION_TRIGGER implementation */
+static int nvme_mdev_ioctl_set_irqs_trigger(struct nvme_mdev_vctrl *vctrl,
+ u32 flags,
+ unsigned int irq_type,
+ unsigned int start,
+ unsigned int count,
+ void *data)
+{
+ u32 data_type = flags & VFIO_IRQ_SET_DATA_TYPE_MASK;
+ u8 *bools = NULL;
+ unsigned int i;
+ int ret = -EINVAL;
+
+ /* Asked to disable the current interrupt mode*/
+ if (data_type == VFIO_IRQ_SET_DATA_NONE && count == 0) {
+ switch (irq_type) {
+ case VFIO_PCI_REQ_IRQ_INDEX:
+ nvme_mdev_irqs_set_unplug_trigger(vctrl, -1);
+ return 0;
+ case VFIO_PCI_INTX_IRQ_INDEX:
+ nvme_mdev_irqs_disable(vctrl, NVME_MDEV_IMODE_INTX);
+ return 0;
+ case VFIO_PCI_MSIX_IRQ_INDEX:
+ nvme_mdev_irqs_disable(vctrl, NVME_MDEV_IMODE_MSIX);
+ return 0;
+ default:
+ return -EINVAL;
+ }
+ }
+
+ if (start + count > nvme_mdev_irq_counts(vctrl, irq_type))
+ return -EINVAL;
+
+ switch (data_type) {
+ case VFIO_IRQ_SET_DATA_BOOL:
+ bools = (u8 *)data;
+ /*fallthrough*/
+ case VFIO_IRQ_SET_DATA_NONE:
+ if (irq_type == VFIO_PCI_REQ_IRQ_INDEX)
+ return -EINVAL;
+
+ for (i = 0 ; i < count ; i++) {
+ int index = start + i;
+
+ if (!bools || bools[i])
+ nvme_mdev_irq_trigger(vctrl, index);
+ }
+ return 0;
+
+ case VFIO_IRQ_SET_DATA_EVENTFD:
+ switch (irq_type) {
+ case VFIO_PCI_REQ_IRQ_INDEX:
+ return nvme_mdev_irqs_set_unplug_trigger(vctrl,
+ *(int32_t *)data);
+ case VFIO_PCI_INTX_IRQ_INDEX:
+ ret = nvme_mdev_irqs_enable(vctrl,
+ NVME_MDEV_IMODE_INTX);
+ break;
+ case VFIO_PCI_MSIX_IRQ_INDEX:
+ ret = nvme_mdev_irqs_enable(vctrl,
+ NVME_MDEV_IMODE_MSIX);
+ break;
+ default:
+ return -EINVAL;
+ }
+ if (ret)
+ return ret;
+
+ return nvme_mdev_irqs_set_triggers(vctrl, start,
+ count, (int32_t *)data);
+ default:
+ return -EINVAL;
+ }
+}
+
+/* VFIO_DEVICE_GET_INFO ioctl implementation */
+static int nvme_mdev_ioctl_get_info(struct nvme_mdev_vctrl *vctrl,
+ void __user *arg)
+{
+ struct vfio_device_info info;
+ unsigned int minsz = offsetofend(struct vfio_device_info, num_irqs);
+
+ if (copy_from_user(&info, (void __user *)arg, minsz))
+ return -EFAULT;
+ if (info.argsz < minsz)
+ return -EINVAL;
+
+ info.flags = VFIO_DEVICE_FLAGS_PCI | VFIO_DEVICE_FLAGS_RESET;
+ info.num_regions = VFIO_PCI_NUM_REGIONS;
+ info.num_irqs = VFIO_PCI_NUM_IRQS;
+
+ if (copy_to_user(arg, &info, minsz))
+ return -EFAULT;
+ return 0;
+}
+
+/* VFIO_DEVICE_GET_REGION_INFO ioctl implementation*/
+static int nvme_mdev_ioctl_get_reg_info(struct nvme_mdev_vctrl *vctrl,
+ void __user *arg)
+{
+ struct nvme_mdev_io_region *region;
+ struct mdev_nvme_vfio_region_info *info;
+ unsigned long minsz, outsz, maxsz;
+ int ret = 0;
+
+ minsz = offsetofend(struct vfio_region_info, offset);
+ maxsz = sizeof(struct mdev_nvme_vfio_region_info) +
+ sizeof(struct vfio_region_sparse_mmap_area);
+
+ info = kzalloc(maxsz, GFP_KERNEL);
+ if (!info)
+ return -ENOMEM;
+
+ if (copy_from_user(info, arg, minsz)) {
+ ret = -EFAULT;
+ goto out;
+ }
+
+ outsz = info->base.argsz;
+ if (outsz < minsz || outsz > maxsz) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ if (info->base.index >= VFIO_PCI_NUM_REGIONS) {
+ ret = -EINVAL;
+ goto out;
+ }
+
+ region = &vctrl->regions[info->base.index];
+ info->base.offset = REGION_TO_OFFSET(info->base.index);
+ info->base.argsz = maxsz;
+ info->base.size = region->size;
+
+ info->base.flags = VFIO_REGION_INFO_FLAG_READ |
+ VFIO_REGION_INFO_FLAG_WRITE;
+
+ if (region->mmap_ops) {
+ info->base.flags |= (VFIO_REGION_INFO_FLAG_MMAP |
+ VFIO_REGION_INFO_FLAG_CAPS);
+
+ info->base.cap_offset =
+ offsetof(struct mdev_nvme_vfio_region_info, mmap_cap);
+
+ info->mmap_cap.header.id = VFIO_REGION_INFO_CAP_SPARSE_MMAP;
+ info->mmap_cap.header.version = 1;
+ info->mmap_cap.header.next = 0;
+ info->mmap_cap.nr_areas = 1;
+ info->mmap_cap.areas[0].offset = region->mmap_area_start;
+ info->mmap_cap.areas[0].size = region->mmap_area_size;
+ }
+
+ if (copy_to_user(arg, info, outsz))
+ ret = -EFAULT;
+out:
+ kfree(info);
+ return ret;
+}
+
+/* VFIO_DEVICE_GET_IRQ_INFO ioctl implementation */
+static int nvme_mdev_ioctl_get_irq_info(struct nvme_mdev_vctrl *vctrl,
+ void __user *arg)
+{
+ struct vfio_irq_info info;
+ unsigned int minsz = offsetofend(struct vfio_irq_info, count);
+
+ if (copy_from_user(&info, arg, minsz))
+ return -EFAULT;
+ if (info.argsz < minsz)
+ return -EINVAL;
+
+ info.count = nvme_mdev_irq_counts(vctrl, info.index);
+ info.flags = VFIO_IRQ_INFO_EVENTFD;
+
+ if (info.index == VFIO_PCI_INTX_IRQ_INDEX)
+ info.flags |= VFIO_IRQ_INFO_MASKABLE | VFIO_IRQ_INFO_AUTOMASKED;
+
+ if (copy_to_user(arg, &info, minsz))
+ return -EFAULT;
+ return 0;
+}
+
+/* VFIO VFIO_DEVICE_SET_IRQS ioctl implementation */
+static int nvme_mdev_ioctl_set_irqs(struct nvme_mdev_vctrl *vctrl,
+ void __user *arg)
+{
+ int ret, irqcount;
+ struct vfio_irq_set hdr;
+ u8 *data = NULL;
+ size_t data_size = 0;
+ unsigned long minsz = offsetofend(struct vfio_irq_set, count);
+
+ if (copy_from_user(&hdr, arg, minsz))
+ return -EFAULT;
+
+ irqcount = nvme_mdev_irq_counts(vctrl, hdr.index);
+ ret = vfio_set_irqs_validate_and_prepare(&hdr,
+ irqcount,
+ VFIO_PCI_NUM_IRQS,
+ &data_size);
+ if (ret)
+ return ret;
+
+ if (data_size) {
+ data = memdup_user((arg + minsz), data_size);
+ if (IS_ERR(data))
+ return PTR_ERR(data);
+ }
+
+ ret = -ENOTTY;
+ switch (hdr.index) {
+ case VFIO_PCI_INTX_IRQ_INDEX:
+ case VFIO_PCI_MSIX_IRQ_INDEX:
+ case VFIO_PCI_REQ_IRQ_INDEX:
+ switch (hdr.flags & VFIO_IRQ_SET_ACTION_TYPE_MASK) {
+ case VFIO_IRQ_SET_ACTION_MASK:
+ case VFIO_IRQ_SET_ACTION_UNMASK:
+			/* pretend to support this (even with eventfd) */
+ ret = hdr.index == VFIO_PCI_INTX_IRQ_INDEX ?
+ 0 : -EINVAL;
+ break;
+ case VFIO_IRQ_SET_ACTION_TRIGGER:
+ ret = nvme_mdev_ioctl_set_irqs_trigger(vctrl, hdr.flags,
+ hdr.index,
+ hdr.start,
+ hdr.count,
+ data);
+ break;
+ }
+ break;
+ }
+
+ kfree(data);
+ return ret;
+}
+
+/* ioctl() implementation */
+static long nvme_mdev_ops_ioctl(struct mdev_device *mdev, unsigned int cmd,
+ unsigned long arg)
+{
+ struct nvme_mdev_vctrl *vctrl = mdev_get_drvdata(mdev);
+
+ if (!vctrl)
+ return -ENODEV;
+
+ switch (cmd) {
+ case VFIO_DEVICE_GET_INFO:
+ return nvme_mdev_ioctl_get_info(vctrl, (void __user *)arg);
+ case VFIO_DEVICE_GET_REGION_INFO:
+ return nvme_mdev_ioctl_get_reg_info(vctrl, (void __user *)arg);
+ case VFIO_DEVICE_GET_IRQ_INFO:
+ return nvme_mdev_ioctl_get_irq_info(vctrl, (void __user *)arg);
+ case VFIO_DEVICE_SET_IRQS:
+ return nvme_mdev_ioctl_set_irqs(vctrl, (void __user *)arg);
+ case VFIO_DEVICE_RESET:
+ nvme_mdev_vctrl_reset(vctrl);
+ return 0;
+ default:
+ return -ENOTTY;
+ }
+}
+
+/* mmap() implementation (doorbell area) */
+static int nvme_mdev_ops_mmap(struct mdev_device *mdev,
+ struct vm_area_struct *vma)
+{
+ struct nvme_mdev_vctrl *vctrl = mdev_get_drvdata(mdev);
+ int index = OFFSET_TO_REGION((u64)vma->vm_pgoff << PAGE_SHIFT);
+ unsigned long size, start;
+
+ if (!vctrl)
+ return -EFAULT;
+
+ if (index >= VFIO_PCI_NUM_REGIONS || !vctrl->regions[index].mmap_ops)
+ return -EINVAL;
+
+ if (vma->vm_end < vma->vm_start)
+ return -EINVAL;
+
+ size = vma->vm_end - vma->vm_start;
+ start = vma->vm_pgoff << PAGE_SHIFT;
+
+ if (start < vctrl->regions[index].mmap_area_start)
+ return -EINVAL;
+ if (size > vctrl->regions[index].mmap_area_size)
+ return -EINVAL;
+
+ if ((vma->vm_flags & VM_SHARED) == 0)
+ return -EINVAL;
+
+ vma->vm_ops = vctrl->regions[index].mmap_ops;
+ vma->vm_private_data = vctrl;
+ return 0;
+}
+
+/* Request removal of the device */
+static void nvme_mdev_ops_request(struct mdev_device *mdev, unsigned int count)
+{
+ struct nvme_mdev_vctrl *vctrl = mdev_get_drvdata(mdev);
+
+ if (vctrl)
+ nvme_mdev_irq_raise_unplug_event(vctrl, count);
+}
+
+/* Add a new namespace, given a host NS id and partition ID (e.g. n1p2 or n1) */
+static ssize_t add_namespace_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ struct nvme_mdev_vctrl *vctrl = dev_to_vctrl(dev);
+ int ret;
+ unsigned long partno = 0, nsid;
+ char *buf_copy, *token, *tmp;
+
+ if (!vctrl)
+ return -ENODEV;
+
+ buf_copy = kstrdup(buf, GFP_KERNEL);
+ if (!buf_copy)
+ return -ENOMEM;
+
+ tmp = buf_copy;
+ if (tmp[0] != 'n') {
+ ret = -EINVAL;
+ goto out;
+ }
+ tmp++;
+
+	/* read the namespace ID (mandatory) */
+ token = strsep(&tmp, "p");
+ if (!token) {
+ ret = -EINVAL;
+ goto out;
+ }
+ ret = kstrtoul(token, 10, &nsid);
+ if (ret)
+ goto out;
+
+	/* read the partition ID (optional) */
+ if (tmp) {
+ ret = kstrtoul(tmp, 10, &partno);
+ if (ret)
+ goto out;
+ }
+
+	/* create the user namespace */
+ ret = nvme_mdev_vns_open(vctrl, nsid, partno);
+ if (ret)
+ goto out;
+ ret = count;
+out:
+ kfree(buf_copy);
+ return ret;
+}
+static DEVICE_ATTR_WO(add_namespace);
+
+/* Remove a user namespace */
+static ssize_t remove_namespace_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ unsigned long user_nsid;
+ int ret;
+ struct nvme_mdev_vctrl *vctrl = dev_to_vctrl(dev);
+
+ if (!vctrl)
+ return -ENODEV;
+
+ ret = kstrtoul(buf, 10, &user_nsid);
+ if (ret)
+ return ret;
+
+ ret = nvme_mdev_vns_destroy(vctrl, user_nsid);
+ if (ret)
+ return ret;
+ return count;
+}
+static DEVICE_ATTR_WO(remove_namespace);
+
+/* Show list of user namespaces */
+static ssize_t namespaces_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvme_mdev_vctrl *vctrl = dev_to_vctrl(dev);
+
+ if (!vctrl)
+ return -ENODEV;
+ return nvme_mdev_vns_print_description(vctrl, buf, PAGE_SIZE - 1);
+}
+static DEVICE_ATTR_RO(namespaces);
+
+/* Change the cpu binding of the IO thread */
+static ssize_t iothread_cpu_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ unsigned long val;
+ int ret;
+ struct nvme_mdev_vctrl *vctrl = dev_to_vctrl(dev);
+
+ if (!vctrl)
+ return -ENODEV;
+ ret = kstrtoul(buf, 10, &val);
+ if (ret)
+ return ret;
+ nvme_mdev_vctrl_bind_iothread(vctrl, val);
+ return count;
+}
+
+/* Show the cpu binding of the IO thread */
+static ssize_t
+iothread_cpu_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+ struct nvme_mdev_vctrl *vctrl = dev_to_vctrl(dev);
+
+ if (!vctrl)
+ return -ENODEV;
+ return sprintf(buf, "%d\n", vctrl->iothread_cpu);
+}
+static DEVICE_ATTR_RW(iothread_cpu);
+
+/* Enable/disable shadow doorbell support */
+static ssize_t shadow_doorbell_store(struct device *dev,
+ struct device_attribute *attr,
+ const char *buf, size_t count)
+{
+ bool val;
+ int ret;
+ struct nvme_mdev_vctrl *vctrl = dev_to_vctrl(dev);
+
+ if (!vctrl)
+ return -ENODEV;
+ ret = kstrtobool(buf, &val);
+ if (ret)
+ return ret;
+ ret = nvme_mdev_vctrl_set_shadow_doorbell_supported(vctrl, val);
+ if (ret)
+ return ret;
+ return count;
+}
+
+/* Show whether shadow doorbell support is enabled */
+static ssize_t shadow_doorbell_show(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct nvme_mdev_vctrl *vctrl = dev_to_vctrl(dev);
+
+ if (!vctrl)
+ return -ENODEV;
+
+ return sprintf(buf, "%d\n", vctrl->mmio.shadow_db_supported ? 1 : 0);
+}
+static DEVICE_ATTR_RW(shadow_doorbell);
+
+static struct attribute *nvme_mdev_dev_ns_attributes[] = {
+	&dev_attr_add_namespace.attr,
+	&dev_attr_remove_namespace.attr,
+	&dev_attr_namespaces.attr,
+	NULL
+};
+
+static struct attribute *nvme_mdev_dev_settings_attributes[] = {
+	&dev_attr_iothread_cpu.attr,
+	&dev_attr_shadow_doorbell.attr,
+	NULL
+};
+
+static const struct attribute_group nvme_mdev_ns_attr_group = {
+	.name = "namespaces",
+	.attrs = nvme_mdev_dev_ns_attributes,
+};
+
+static const struct attribute_group nvme_mdev_setting_attr_group = {
+	.name = "settings",
+	.attrs = nvme_mdev_dev_settings_attributes,
+};
+
+static const struct attribute_group *nvme_mdev_dev_attribute_groups[] = {
+	&nvme_mdev_ns_attr_group,
+	&nvme_mdev_setting_attr_group,
+	NULL,
+};
+
+struct mdev_parent_ops mdev_fops = {
+	.owner = THIS_MODULE,
+	.create = nvme_mdev_ops_create,
+	.remove = nvme_mdev_ops_remove,
+	.open = nvme_mdev_ops_open,
+	.release = nvme_mdev_ops_release,
+	.read = nvme_mdev_ops_read,
+	.write = nvme_mdev_ops_write,
+	.mmap = nvme_mdev_ops_mmap,
+	.ioctl = nvme_mdev_ops_ioctl,
+	.request = nvme_mdev_ops_request,
+	.mdev_attr_groups = nvme_mdev_dev_attribute_groups,
+	.dev_attr_groups = NULL,
+};
+
diff --git a/drivers/nvme/mdev/io.c b/drivers/nvme/mdev/io.c
new file mode 100644
index 000000000000..a731196d0365
--- /dev/null
+++ b/drivers/nvme/mdev/io.c
@@ -0,0 +1,563 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * NVMe IO command translation and polling IO thread
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/kernel.h>
+#include <linux/kthread.h>
+#include <linux/slab.h>
+#include <linux/nvme.h>
+#include <linux/timekeeping.h>
+#include <linux/ktime.h>
+#include "priv.h"
+
+struct io_ctx {
+ struct nvme_mdev_hctrl *hctrl;
+ struct nvme_mdev_vctrl *vctrl;
+
+ const struct nvme_command *in;
+ struct nvme_command out;
+ struct nvme_mdev_vns *ns;
+ struct nvme_ext_data_iter udatait;
+ struct nvme_ext_data_iter *kdatait;
+
+ ktime_t last_io_t;
+ ktime_t last_admin_poll_time;
+ unsigned int idle_timeout_ms;
+ unsigned int admin_poll_rate_ms;
+ unsigned int arb_burst;
+};
+
+/* Handle a read/write command */
+static int nvme_mdev_io_translate_rw(struct io_ctx *ctx)
+{
+ int ret;
+ const struct nvme_rw_command *in = &ctx->in->rw;
+
+ u64 slba = le64_to_cpu(in->slba);
+ u64 length = le16_to_cpu(in->length) + 1;
+ u16 control = le16_to_cpu(in->control);
+
+ _DBG(ctx->vctrl, "IOQ: READ/WRITE\n");
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_MPTR | RSRV_DW14_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16, 0b1100000000111100))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (in->opcode == nvme_cmd_write && ctx->ns->readonly)
+ return DNR(NVME_SC_READ_ONLY);
+
+ if (!check_range(slba, length, ctx->ns->ns_size))
+ return DNR(NVME_SC_LBA_RANGE);
+
+ ctx->out.rw.slba = cpu_to_le64(slba + ctx->ns->host_lba_offset);
+ ctx->out.rw.length = in->length;
+
+ ret = nvme_mdev_udata_iter_set_dptr(&ctx->udatait, &in->dptr,
+ length << ctx->ns->blksize_shift);
+ if (ret)
+ return nvme_mdev_translate_error(ret);
+
+ ctx->kdatait = &ctx->udatait;
+ if (control & ~(NVME_RW_LR | NVME_RW_FUA))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ ctx->out.rw.control = in->control;
+ return -1;
+}
+
+/* Handle the flush command */
+static int nvme_mdev_io_translate_flush(struct io_ctx *ctx)
+{
+ ctx->kdatait = NULL;
+
+ _DBG(ctx->vctrl, "IOQ: FLUSH\n");
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW10_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (ctx->ns->readonly)
+ return DNR(NVME_SC_READ_ONLY);
+
+ return -1;
+}
+
+/* Handle the write zeroes command */
+static int nvme_mdev_io_translate_write_zeros(struct io_ctx *ctx)
+{
+ const struct nvme_write_zeroes_cmd *in = &ctx->in->write_zeroes;
+ u64 slba = le64_to_cpu(in->slba);
+ u64 length = le16_to_cpu(in->length) + 1;
+ u16 control = le16_to_cpu(in->control);
+
+ _DBG(ctx->vctrl, "IOQ: WRITE_ZEROS\n");
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_DPTR |
+ RSRV_MPTR | RSRV_DW13_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (!nvme_mdev_hctrl_hq_check_op(ctx->hctrl, in->opcode))
+ return DNR(NVME_SC_INVALID_OPCODE);
+
+ if (ctx->ns->readonly)
+ return DNR(NVME_SC_READ_ONLY);
+ ctx->kdatait = NULL;
+
+ if (!check_range(slba, length, ctx->ns->ns_size))
+ return DNR(NVME_SC_LBA_RANGE);
+
+ ctx->out.write_zeroes.slba =
+ cpu_to_le64(slba + ctx->ns->host_lba_offset);
+ ctx->out.write_zeroes.length = in->length;
+
+ if (control & ~(NVME_RW_LR | NVME_RW_FUA | NVME_WZ_DEAC))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ ctx->out.write_zeroes.control = in->control;
+ return -1;
+}
+
+/* Handle dataset management command */
+static int nvme_mdev_io_translate_dsm(struct io_ctx *ctx)
+{
+ unsigned int size, i, nr;
+ int ret;
+ const struct nvme_dsm_cmd *in = &ctx->in->dsm;
+ struct nvme_dsm_range *data_ptr;
+
+ _DBG(ctx->vctrl, "IOQ: DSM_MANAGEMENT\n");
+
+ if (!check_reserved_dwords(ctx->in->dwords, 16,
+ RSRV_DW23 | RSRV_MPTR | RSRV_DW12_15))
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (le32_to_cpu(in->nr) & 0xFFFFFF00)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ if (!nvme_mdev_hctrl_hq_check_op(ctx->hctrl, in->opcode))
+ return DNR(NVME_SC_INVALID_OPCODE);
+
+ if (ctx->ns->readonly)
+ return DNR(NVME_SC_READ_ONLY);
+
+ nr = le32_to_cpu(in->nr) + 1;
+ size = nr * sizeof(struct nvme_dsm_range);
+
+ ctx->out.dsm.nr = in->nr;
+ ret = nvme_mdev_udata_iter_set_dptr(&ctx->udatait, &in->dptr, size);
+ if (ret)
+ goto error;
+
+ ctx->kdatait = nvme_mdev_kdata_iter_alloc(&ctx->vctrl->viommu, size);
+ if (!ctx->kdatait)
+ return NVME_SC_INTERNAL;
+
+ _DBG(ctx->vctrl, "IOQ: DSM_MANAGEMENT: NR=%d\n", nr);
+
+ ret = nvme_mdev_read_from_udata(ctx->kdatait->kmem.data, &ctx->udatait,
+ size);
+ if (ret)
+ goto error2;
+
+ data_ptr = (struct nvme_dsm_range *)ctx->kdatait->kmem.data;
+
+ for (i = 0; i < nr; i++) {
+ u64 slba = le64_to_cpu(data_ptr[i].slba);
+ /* unlike in read/write commands, this is not a 0-based value */
+ u32 nlb = le32_to_cpu(data_ptr[i].nlb);
+
+ if (!check_range(slba, nlb, ctx->ns->ns_size)) {
+ ctx->kdatait->release(ctx->kdatait);
+ return DNR(NVME_SC_LBA_RANGE);
+ }
+
+ _DBG(ctx->vctrl, "IOQ: DSM_MANAGEMENT: RANGE 0x%llx-0x%x\n",
+ slba, nlb);
+
+ data_ptr[i].slba = cpu_to_le64(slba + ctx->ns->host_lba_offset);
+ }
+
+ ctx->out.dsm.attributes = in->attributes;
+ return -1;
+error2:
+ ctx->kdatait->release(ctx->kdatait);
+error:
+ return nvme_mdev_translate_error(ret);
+}
+
+/* Translate one new command from an IO queue */
+static int nvme_mdev_io_translate_cmd(struct io_ctx *ctx)
+{
+ memset(&ctx->out, 0, sizeof(ctx->out));
+ /* translate opcode */
+ ctx->out.common.opcode = ctx->in->common.opcode;
+
+ /* check flags */
+ if (ctx->in->common.flags != 0)
+ return DNR(NVME_SC_INVALID_FIELD);
+
+ /* namespace*/
+ ctx->ns = nvme_mdev_vns_from_vnsid(ctx->vctrl,
+ le32_to_cpu(ctx->in->rw.nsid));
+ if (!ctx->ns) {
+ _DBG(ctx->vctrl, "IOQ: invalid NSID\n");
+ return DNR(NVME_SC_INVALID_NS);
+ }
+
+ if (!ctx->ns->readonly && bdev_read_only(ctx->ns->host_part))
+ ctx->ns->readonly = true;
+
+ ctx->out.common.nsid = cpu_to_le32(ctx->ns->host_nsid);
+
+ switch (ctx->in->common.opcode) {
+ case nvme_cmd_flush:
+ return nvme_mdev_io_translate_flush(ctx);
+ case nvme_cmd_read:
+ case nvme_cmd_write:
+ return nvme_mdev_io_translate_rw(ctx);
+ case nvme_cmd_write_zeroes:
+ return nvme_mdev_io_translate_write_zeros(ctx);
+ case nvme_cmd_dsm:
+ return nvme_mdev_io_translate_dsm(ctx);
+ default:
+ return DNR(NVME_SC_INVALID_OPCODE);
+ }
+}
+
+static bool nvme_mdev_io_process_sq(struct io_ctx *ctx, u16 sqid)
+{
+ struct nvme_vsq *vsq = &ctx->vctrl->vsqs[sqid];
+ u16 ucid;
+ int ret;
+
+ /* If host queue is full, we can't process a command
+ * as a command will likely result in passthrough
+ */
+ if (!nvme_mdev_hctrl_hq_can_submit(ctx->hctrl, vsq->hsq))
+ return false;
+
+ /* read the command */
+ ctx->in = nvme_mdev_vsq_get_cmd(ctx->vctrl, vsq);
+ if (!ctx->in)
+ return false;
+ ucid = le16_to_cpu(ctx->in->common.command_id);
+
+ /* translate the command */
+ ret = nvme_mdev_io_translate_cmd(ctx);
+ if (ret != -1) {
+ _DBG(ctx->vctrl,
+ "IOQ: QID %d CID %d FAILED: status 0x%x (translate)\n",
+ sqid, ucid, ret);
+ nvme_mdev_vsq_cmd_done_io(ctx->vctrl, sqid, ucid, ret);
+ return true;
+ }
+
+ /*passthrough*/
+ ret = nvme_mdev_hctrl_hq_submit(ctx->hctrl,
+ vsq->hsq,
+ (((u32)vsq->qid) << 16) | ((u32)ucid),
+ &ctx->out,
+ ctx->kdatait);
+ if (ret) {
+ ret = nvme_mdev_translate_error(ret);
+
+ _DBG(ctx->vctrl,
+ "IOQ: QID %d CID %d FAILED: status 0x%x (host submit)\n",
+ sqid, ucid, ret);
+
+ nvme_mdev_vsq_cmd_done_io(ctx->vctrl, sqid, ucid, ret);
+ }
+ return true;
+}
+
+/* process host replies to the passed through commands */
+static int nvme_mdev_io_process_hwq(struct io_ctx *ctx, u16 hwq)
+{
+ int n, i;
+ struct nvme_ext_cmd_result res[16];
+
+ /* process the completions from the hardware */
+ n = nvme_mdev_hctrl_hq_poll(ctx->hctrl, hwq, res, ARRAY_SIZE(res));
+ if (n == -1)
+ return -1;
+
+ for (i = 0; i < n; i++) {
+ u16 qid = res[i].tag >> 16;
+ u16 cid = res[i].tag & 0xFFFF;
+ u16 status = res[i].status;
+
+ if (status != 0)
+ _DBG(ctx->vctrl,
+ "IOQ: QID %d CID %d FAILED: status 0x%x (host response)\n",
+ qid, cid, status);
+
+ nvme_mdev_vsq_cmd_done_io(ctx->vctrl, qid, cid, status);
+ }
+ return n;
+}
+
+/* Check if we need to read a command from the admin queue */
+static bool nvme_mdev_adm_needs_processing(struct io_ctx *ctx)
+{
+ if (!timeout(ctx->last_admin_poll_time,
+ ctx->vctrl->now, ctx->admin_poll_rate_ms))
+ return false;
+
+ if (nvme_mdev_vsq_has_data(ctx->vctrl, &ctx->vctrl->vsqs[0]))
+ return true;
+
+ ctx->last_admin_poll_time = ctx->vctrl->now;
+ return false;
+}
+
+/* Poll until one of the stop conditions is met */
+static void nvme_mdev_io_maintask(struct io_ctx *ctx)
+{
+ struct nvme_mdev_vctrl *vctrl = ctx->vctrl;
+ u16 i, cqid, sqid, hsqcnt;
+ u16 hsqs[MAX_HOST_QUEUES];
+ bool idle = false;
+
+ hsqcnt = nvme_mdev_vctrl_hqs_list(vctrl, hsqs);
+ ctx->arb_burst = 1 << ctx->vctrl->arb_burst_shift;
+
+ /* can't stop polling when shadow db not enabled */
+ ctx->idle_timeout_ms = vctrl->mmio.shadow_db_en ? poll_timeout_ms : 0;
+ ctx->admin_poll_rate_ms = admin_poll_rate_ms;
+
+ vctrl->now = ktime_get();
+ ctx->last_admin_poll_time = vctrl->now;
+ ctx->last_io_t = vctrl->now;
+
+ /* main loop */
+ while (!kthread_should_park()) {
+ vctrl->now = ktime_get();
+
+ /* check if we have to exit to support admin polling */
+ if (!vctrl->mmio.shadow_db_supported)
+ if (nvme_mdev_adm_needs_processing(ctx))
+ break;
+
+ /* process the submission queues*/
+ sqid = 1;
+ for_each_set_bit_from(sqid, vctrl->vsq_en, MAX_VIRTUAL_QUEUES)
+ for (i = 0; i < ctx->arb_burst; i++)
+ if (!nvme_mdev_io_process_sq(ctx, sqid))
+ break;
+
+ /* process the completions from the guest*/
+ cqid = 1;
+ for_each_set_bit_from(cqid, vctrl->vcq_en, MAX_VIRTUAL_QUEUES)
+ nvme_mdev_vcq_process(vctrl, cqid, true);
+
+ /* process the completions from the hardware*/
+ for (i = 0; i < hsqcnt; i++)
+ if (nvme_mdev_io_process_hwq(ctx, hsqs[i]) > 0)
+ ctx->last_io_t = vctrl->now;
+
+ /* Check if we need to stop polling*/
+ if (ctx->idle_timeout_ms) {
+ if (timeout(ctx->last_io_t,
+ vctrl->now, ctx->idle_timeout_ms)) {
+ idle = true;
+ break;
+ }
+ }
+ cond_resched();
+ }
+
+ /* Drain the host IO */
+ for (;;) {
+ bool pending_io = false;
+
+ vctrl->now = ktime_get();
+
+ if (nvme_mdev_vctrl_is_dead(vctrl) || ctx->hctrl->removing) {
+ idle = false;
+ break;
+ }
+
+ for (i = 0; i < hsqcnt; i++) {
+ int n = nvme_mdev_io_process_hwq(ctx, hsqs[i]);
+
+ if (n != -1)
+ pending_io = true;
+ if (n > 0)
+ ctx->last_io_t = vctrl->now;
+ }
+
+ if (!pending_io)
+ break;
+
+ cond_resched();
+
+ if (!timeout(ctx->last_io_t, vctrl->now, io_timeout_ms))
+ continue;
+
+ _WARN(ctx->vctrl, "IO: skipping flush - host IO timeout\n");
+ idle = false;
+ break;
+ }
+
+ /* Drain all the pending completion interrupts to the guest*/
+ cqid = 1;
+ for_each_set_bit_from(cqid, vctrl->vcq_en, MAX_VIRTUAL_QUEUES)
+ if (nvme_mdev_vcq_flush(vctrl, cqid))
+ idle = false;
+
+ /* Park IO thread if IO is truly idle*/
+ if (idle) {
+ /* don't bother going idle if someone holds the vctrl
+ * lock. It might try to park us, and thus
+ * cause a deadlock
+ */
+ if (!mutex_trylock(&vctrl->lock))
+ return;
+
+ sqid = 1;
+ for_each_set_bit_from(sqid, vctrl->vsq_en, MAX_VIRTUAL_QUEUES)
+ if (!nvme_mdev_vsq_suspend_io(vctrl, sqid)) {
+ idle = false;
+ break;
+ }
+
+ if (idle) {
+ _DBG(ctx->vctrl, "IO: self-parking\n");
+ vctrl->io_idle = true;
+ nvme_mdev_io_pause(vctrl);
+ }
+
+ mutex_unlock(&vctrl->lock);
+ }
+
+ /* Admin poll for cases when shadow doorbell is not supported */
+ if (!vctrl->mmio.shadow_db_supported) {
+ if (mutex_trylock(&vctrl->lock)) {
+ nvme_mdev_vcq_process(vctrl, 0, false);
+ nvme_mdev_adm_process_sq(ctx->vctrl);
+ ctx->last_admin_poll_time = vctrl->now;
+ mutex_unlock(&ctx->vctrl->lock);
+ }
+ }
+}
+
+/* the main IO thread */
+static int nvme_mdev_io_polling_thread(void *data)
+{
+ struct io_ctx ctx;
+
+ if (kthread_should_stop())
+ return 0;
+
+ memset(&ctx, 0, sizeof(struct io_ctx));
+ ctx.vctrl = (struct nvme_mdev_vctrl *)data;
+ ctx.hctrl = ctx.vctrl->hctrl;
+ nvme_mdev_udata_iter_setup(&ctx.vctrl->viommu, &ctx.udatait);
+
+ _DBG(ctx.vctrl, "IO: iothread started\n");
+
+ for (;;) {
+ if (kthread_should_park()) {
+ _DBG(ctx.vctrl, "IO: iothread parked\n");
+ kthread_parkme();
+ }
+
+ if (kthread_should_stop())
+ break;
+
+ nvme_mdev_io_maintask(&ctx);
+ }
+
+ _DBG(ctx.vctrl, "IO: iothread stopped\n");
+ return 0;
+}
+
+/* Kick the IO thread into running state*/
+void nvme_mdev_io_resume(struct nvme_mdev_vctrl *vctrl)
+{
+ lockdep_assert_held(&vctrl->lock);
+
+ if (!vctrl->iothread || !vctrl->iothread_parked)
+ return;
+ if (vctrl->io_idle || vctrl->vctrl_paused)
+ return;
+
+ vctrl->iothread_parked = false;
+ /* has memory barrier*/
+ kthread_unpark(vctrl->iothread);
+}
+
+/* Pause the IO thread */
+void nvme_mdev_io_pause(struct nvme_mdev_vctrl *vctrl)
+{
+ lockdep_assert_held(&vctrl->lock);
+
+ if (!vctrl->iothread || vctrl->iothread_parked)
+ return;
+
+ vctrl->iothread_parked = true;
+ kthread_park(vctrl->iothread);
+}
+
+/* setup the main IO thread */
+int nvme_mdev_io_create(struct nvme_mdev_vctrl *vctrl, unsigned int cpu)
+{
+ /*TODOLATER: IO: Better thread name*/
+ char name[TASK_COMM_LEN];
+
+ _DBG(vctrl, "IO: creating the polling iothread\n");
+
+ if (WARN_ON(vctrl->iothread))
+ return -EINVAL;
+
+ snprintf(name, sizeof(name), "nvme%d_poll_io", vctrl->hctrl->id);
+
+ vctrl->iothread_cpu = cpu;
+ vctrl->iothread_parked = false;
+ vctrl->io_idle = true;
+
+ vctrl->iothread = kthread_create_on_node(nvme_mdev_io_polling_thread,
+ vctrl,
+ vctrl->hctrl->node,
+ name);
+ if (IS_ERR(vctrl->iothread)) {
+ int ret = PTR_ERR(vctrl->iothread);
+
+ vctrl->iothread = NULL;
+ return ret;
+ }
+
+ kthread_bind(vctrl->iothread, cpu);
+
+ if (vctrl->io_idle) {
+ vctrl->iothread_parked = true;
+ kthread_park(vctrl->iothread);
+ return 0;
+ }
+
+ wake_up_process(vctrl->iothread);
+ return 0;
+}
+
+/* End the main IO thread */
+void nvme_mdev_io_free(struct nvme_mdev_vctrl *vctrl)
+{
+ int ret;
+
+ _DBG(vctrl, "IO: destroying the polling iothread\n");
+
+ lockdep_assert_held(&vctrl->lock);
+ nvme_mdev_io_pause(vctrl);
+ ret = kthread_stop(vctrl->iothread);
+ WARN_ON(ret);
+ vctrl->iothread = NULL;
+}
+
+void nvme_mdev_assert_io_not_running(struct nvme_mdev_vctrl *vctrl)
+{
+ if (WARN_ON(vctrl->iothread && !vctrl->iothread_parked))
+ nvme_mdev_io_pause(vctrl);
+}
diff --git a/drivers/nvme/mdev/irq.c b/drivers/nvme/mdev/irq.c
new file mode 100644
index 000000000000..5809cdb4d84c
--- /dev/null
+++ b/drivers/nvme/mdev/irq.c
@@ -0,0 +1,264 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * NVMe virtual controller IRQ implementation (MSI-X and INTx)
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+
+#include <linux/init.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include "priv.h"
+
+/* Setup the interrupt subsystem */
+void nvme_mdev_irqs_setup(struct nvme_mdev_vctrl *vctrl)
+{
+ vctrl->irqs.mode = NVME_MDEV_IMODE_NONE;
+ vctrl->irqs.irq_coalesc_max = 1;
+}
+
+/* Enable INTx or MSI-X interrupts */
+static int __nvme_mdev_irqs_enable(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_mdev_irq_mode mode)
+{
+ if (vctrl->irqs.mode == mode)
+ return 0;
+ if (vctrl->irqs.mode != NVME_MDEV_IMODE_NONE)
+ return -EBUSY;
+
+ if (mode == NVME_MDEV_IMODE_INTX)
+ _DBG(vctrl, "IRQ: enable INTx interrupts\n");
+ else if (mode == NVME_MDEV_IMODE_MSIX)
+ _DBG(vctrl, "IRQ: enable MSIX interrupts\n");
+ else
+ WARN_ON(1);
+
+ nvme_mdev_io_pause(vctrl);
+ vctrl->irqs.mode = mode;
+ nvme_mdev_io_resume(vctrl);
+ return 0;
+}
+
+int nvme_mdev_irqs_enable(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_mdev_irq_mode mode)
+{
+ int retval = 0;
+
+ mutex_lock(&vctrl->lock);
+ retval = __nvme_mdev_irqs_enable(vctrl, mode);
+ mutex_unlock(&vctrl->lock);
+ return retval;
+}
+
+/* Disable INTx or MSI-X interrupts */
+static void __nvme_mdev_irqs_disable(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_mdev_irq_mode mode)
+{
+ unsigned int i;
+
+ if (vctrl->irqs.mode == NVME_MDEV_IMODE_NONE)
+ return;
+ if (vctrl->irqs.mode != mode)
+ return;
+
+ if (vctrl->irqs.mode == NVME_MDEV_IMODE_INTX)
+ _DBG(vctrl, "IRQ: disable INTx interrupts\n");
+ else if (vctrl->irqs.mode == NVME_MDEV_IMODE_MSIX)
+ _DBG(vctrl, "IRQ: disable MSIX interrupts\n");
+ else
+ WARN_ON(1);
+
+ nvme_mdev_io_pause(vctrl);
+
+ for (i = 0; i < MAX_VIRTUAL_IRQS; i++) {
+ struct nvme_mdev_user_irq *vec = &vctrl->irqs.vecs[i];
+
+ if (vec->trigger) {
+ eventfd_ctx_put(vec->trigger);
+ vec->trigger = NULL;
+ }
+ vec->irq_pending_cnt = 0;
+ vec->irq_time = 0;
+ }
+ vctrl->irqs.mode = NVME_MDEV_IMODE_NONE;
+ nvme_mdev_io_resume(vctrl);
+}
+
+void nvme_mdev_irqs_disable(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_mdev_irq_mode mode)
+{
+ mutex_lock(&vctrl->lock);
+ __nvme_mdev_irqs_disable(vctrl, mode);
+ mutex_unlock(&vctrl->lock);
+}
+
+/* Set eventfd triggers for INTx or MSI-X interrupts */
+int nvme_mdev_irqs_set_triggers(struct nvme_mdev_vctrl *vctrl,
+ int start, int count, int32_t *fds)
+{
+ unsigned int i;
+
+ mutex_lock(&vctrl->lock);
+ nvme_mdev_io_pause(vctrl);
+
+ for (i = 0; i < count; i++) {
+ int irqindex = start + i;
+ struct eventfd_ctx *trigger;
+ struct nvme_mdev_user_irq *irq = &vctrl->irqs.vecs[irqindex];
+
+ if (irq->trigger) {
+ eventfd_ctx_put(irq->trigger);
+ irq->trigger = NULL;
+ }
+
+ if (fds[i] < 0)
+ continue;
+
+ trigger = eventfd_ctx_fdget(fds[i]);
+ if (IS_ERR(trigger)) {
+ nvme_mdev_io_resume(vctrl);
+ mutex_unlock(&vctrl->lock);
+ return PTR_ERR(trigger);
+ }
+
+ irq->trigger = trigger;
+ }
+ nvme_mdev_io_resume(vctrl);
+ mutex_unlock(&vctrl->lock);
+ return 0;
+}
+
+/* Set eventfd trigger for unplug interrupt */
+static int __nvme_mdev_irqs_set_unplug_trigger(struct nvme_mdev_vctrl *vctrl,
+ int32_t fd)
+{
+ struct eventfd_ctx *trigger;
+
+ if (vctrl->irqs.request_trigger) {
+ _DBG(vctrl, "IRQ: clear hotplug trigger\n");
+ eventfd_ctx_put(vctrl->irqs.request_trigger);
+ vctrl->irqs.request_trigger = NULL;
+ }
+
+ if (fd < 0)
+ return 0;
+
+ _DBG(vctrl, "IRQ: set hotplug trigger\n");
+
+ trigger = eventfd_ctx_fdget(fd);
+ if (IS_ERR(trigger))
+ return PTR_ERR(trigger);
+
+ vctrl->irqs.request_trigger = trigger;
+ return 0;
+}
+
+int nvme_mdev_irqs_set_unplug_trigger(struct nvme_mdev_vctrl *vctrl,
+ int32_t fd)
+{
+ int retval;
+
+ mutex_lock(&vctrl->lock);
+ retval = __nvme_mdev_irqs_set_unplug_trigger(vctrl, fd);
+ mutex_unlock(&vctrl->lock);
+ return retval;
+}
+
+/* Reset the interrupt subsystem */
+void nvme_mdev_irqs_reset(struct nvme_mdev_vctrl *vctrl)
+{
+ int i;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ if (vctrl->irqs.mode != NVME_MDEV_IMODE_NONE)
+ __nvme_mdev_irqs_disable(vctrl, vctrl->irqs.mode);
+
+ __nvme_mdev_irqs_set_unplug_trigger(vctrl, -1);
+
+ for (i = 0; i < MAX_VIRTUAL_IRQS; i++) {
+ struct nvme_mdev_user_irq *vec = &vctrl->irqs.vecs[i];
+
+ vec->irq_coalesc_en = false;
+ vec->irq_pending_cnt = 0;
+ vec->irq_time = 0;
+ }
+
+ vctrl->irqs.irq_coalesc_time_us = 0;
+}
+
+/* Check if interrupt can be coalesced */
+static bool nvme_mdev_irq_coalesce(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_mdev_user_irq *irq)
+{
+ s64 delta;
+
+ if (!irq->irq_coalesc_en)
+ return false;
+
+ if (irq->irq_pending_cnt >= vctrl->irqs.irq_coalesc_max)
+ return false;
+
+ delta = ktime_us_delta(vctrl->now, irq->irq_time);
+ return (delta < vctrl->irqs.irq_coalesc_time_us);
+}
+
+void nvme_mdev_irq_raise_unplug_event(struct nvme_mdev_vctrl *vctrl,
+ unsigned int count)
+{
+ mutex_lock(&vctrl->lock);
+
+ if (vctrl->irqs.request_trigger) {
+ if (!(count % 10))
+ dev_notice_ratelimited(mdev_dev(vctrl->mdev),
+ "Relaying device request to user (#%u)\n",
+ count);
+
+ eventfd_signal(vctrl->irqs.request_trigger, 1);
+
+ } else if (count == 0) {
+ dev_notice(mdev_dev(vctrl->mdev),
+ "No device request channel registered, blocked until released by user\n");
+ }
+ mutex_unlock(&vctrl->lock);
+}
+
+/* Raise an interrupt */
+void nvme_mdev_irq_raise(struct nvme_mdev_vctrl *vctrl, unsigned int index)
+{
+ struct nvme_mdev_user_irq *irq = &vctrl->irqs.vecs[index];
+
+ irq->irq_pending_cnt++;
+}
+
+/* Clear a previously raised interrupt */
+void nvme_mdev_irq_clear(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index)
+{
+ struct nvme_mdev_user_irq *irq = &vctrl->irqs.vecs[index];
+
+ irq->irq_time = vctrl->now;
+ irq->irq_pending_cnt = 0;
+}
+
+/* Directly trigger an interrupt without affecting irq coalescing settings */
+void nvme_mdev_irq_trigger(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index)
+{
+ struct nvme_mdev_user_irq *irq = &vctrl->irqs.vecs[index];
+
+ if (irq->trigger)
+ eventfd_signal(irq->trigger, 1);
+}
+
+/* Trigger previously raised interrupt */
+void nvme_mdev_irq_cond_trigger(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index)
+{
+ struct nvme_mdev_user_irq *irq = &vctrl->irqs.vecs[index];
+
+ if (irq->irq_pending_cnt == 0)
+ return;
+
+ if (!nvme_mdev_irq_coalesce(vctrl, irq)) {
+ nvme_mdev_irq_trigger(vctrl, index);
+ nvme_mdev_irq_clear(vctrl, index);
+ }
+}
diff --git a/drivers/nvme/mdev/mdev.h b/drivers/nvme/mdev/mdev.h
new file mode 100644
index 000000000000..d139e090520e
--- /dev/null
+++ b/drivers/nvme/mdev/mdev.h
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * NVME VFIO mediated driver
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+
+#ifndef _MDEV_NVME_MDEV_H
+#define _MDEV_NVME_MDEV_H
+
+#include <linux/kernel.h>
+#include <linux/byteorder/generic.h>
+#include <linux/nvme.h>
+
+struct page_map {
+ void *kmap;
+ struct page *page;
+ dma_addr_t iova;
+};
+
+struct user_prplist {
+ /* used by user data iterator*/
+ struct page_map page;
+ unsigned int index; /* index of current entry */
+};
+
+struct kernel_data {
+ /* used by kernel data iterator*/
+ void *data;
+ unsigned int size;
+ dma_addr_t dma_addr;
+};
+
+struct nvme_ext_data_iter {
+ /* private */
+ struct nvme_mdev_viommu *viommu;
+ union {
+ const union nvme_data_ptr *dptr;
+ struct user_prplist uprp;
+ struct kernel_data kmem;
+ };
+
+ /* user interface */
+ u64 count; /* number of data pages, yet to be covered */
+
+ phys_addr_t physical; /* iterator physical address value*/
+ dma_addr_t host_iova; /* iterator dma address value*/
+
+ /* moves iterator to the next item */
+ int (*next)(struct nvme_ext_data_iter *data_iter);
+
+ /* if != NULL, the user should call this when done with
+ * the data pointed to by the iterator
+ */
+ void (*release)(struct nvme_ext_data_iter *data_iter);
+};
+#endif
diff --git a/drivers/nvme/mdev/mmio.c b/drivers/nvme/mdev/mmio.c
new file mode 100644
index 000000000000..cf03c1f22f4c
--- /dev/null
+++ b/drivers/nvme/mdev/mmio.c
@@ -0,0 +1,591 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * NVMe virtual controller MMIO implementation
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/kernel.h>
+#include <linux/highmem.h>
+#include "priv.h"
+
+#define DB_AREA_SIZE (MAX_VIRTUAL_QUEUES * 2 * (4 << DB_STRIDE_SHIFT))
+#define DB_MASK ((4 << DB_STRIDE_SHIFT) - 1)
+#define MMIO_BAR_SIZE __roundup_pow_of_two(NVME_REG_DBS + DB_AREA_SIZE)
+
+/* Put the controller into fatal error state. Only way out is reset */
+static void nvme_mdev_mmio_fatal_error(struct nvme_mdev_vctrl *vctrl)
+{
+ if (vctrl->mmio.csts & NVME_CSTS_CFS)
+ return;
+
+ vctrl->mmio.csts |= NVME_CSTS_CFS;
+ nvme_mdev_io_pause(vctrl);
+
+ if (vctrl->mmio.csts & NVME_CSTS_RDY)
+ nvme_mdev_vctrl_disable(vctrl);
+}
+
+/* Send a generic error notification to the guest */
+static void nvme_mdev_mmio_error(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_async_event info)
+{
+ nvme_mdev_event_send(vctrl, NVME_AER_TYPE_ERROR, info);
+}
+
+/* Memory fault handler for the mmapped doorbell area */
+static vm_fault_t nvme_mdev_mmio_dbs_mmap_fault(struct vm_fault *vmf)
+{
+ struct vm_area_struct *vma = vmf->vma;
+ struct nvme_mdev_vctrl *vctrl = vma->vm_private_data;
+
+ /* DB area is just one page, starting at offset 4096 of the mmio*/
+ if (WARN_ON(vmf->pgoff != 1))
+ return VM_FAULT_SIGBUS;
+
+ get_page(vctrl->mmio.dbs_page);
+ vmf->page = vctrl->mmio.dbs_page;
+ return 0;
+}
+
+static const struct vm_operations_struct nvme_mdev_mmio_dbs_vm_ops = {
+ .fault = nvme_mdev_mmio_dbs_mmap_fault,
+};
+
+/* check that user db write is valid and send an error if not*/
+bool nvme_mdev_mmio_db_check(struct nvme_mdev_vctrl *vctrl,
+ u16 qid, u16 size, u16 db)
+{
+ if (get_current() != vctrl->iothread)
+ lockdep_assert_held(&vctrl->lock);
+
+ if (db < size)
+ return true;
+ if (qid == 0) {
+ _DBG(vctrl, "MMIO: invalid admin DB write - fatal error\n");
+ nvme_mdev_mmio_fatal_error(vctrl);
+ return false;
+ }
+
+ _DBG(vctrl, "MMIO: invalid DB value write qid=%d, size=%d, value=%d\n",
+ qid, size, db);
+
+ nvme_mdev_mmio_error(vctrl, NVME_AER_ERROR_INVALID_DB_VALUE);
+ return false;
+}
+
+/* handle submission queue doorbell write */
+static void nvme_mdev_mmio_db_write_sq(struct nvme_mdev_vctrl *vctrl,
+ u32 qid, u32 val)
+{
+ _DBG(vctrl, "MMIO: doorbell SQID %d, DB write %d\n", qid, val);
+
+ lockdep_assert_held(&vctrl->lock);
+ /* check if the db belongs to a valid queue */
+ if (qid >= MAX_VIRTUAL_QUEUES || !test_bit(qid, vctrl->vsq_en))
+ goto err_db;
+
+ /* emulate the shadow doorbell functionality */
+ if (!vctrl->mmio.shadow_db_en || qid == 0)
+ vctrl->mmio.dbs[qid].sqt = cpu_to_le32(val & 0x0000FFFF);
+
+ if (qid != 0)
+ vctrl->io_idle = false;
+
+ if (vctrl->vctrl_paused || !vctrl->mmio.shadow_db_supported)
+ return;
+
+ if (qid)
+ nvme_mdev_io_resume(vctrl);
+ else
+ nvme_mdev_adm_process_sq(vctrl);
+ return;
+err_db:
+
+ _DBG(vctrl, "MMIO: inactive/invalid SQ DB write qid=%d, value=%d\n",
+ qid, val);
+
+ nvme_mdev_mmio_error(vctrl, NVME_AER_ERROR_INVALID_DB_REG);
+}
+
+/* handle completion queue doorbell write */
+static void nvme_mdev_mmio_db_write_cq(struct nvme_mdev_vctrl *vctrl,
+ u32 qid, u32 val)
+{
+ _DBG(vctrl, "MMIO: doorbell CQID %d, DB write %d\n", qid, val);
+
+ lockdep_assert_held(&vctrl->lock);
+ /* check if the db belongs to a valid queue */
+ if (qid >= MAX_VIRTUAL_QUEUES || !test_bit(qid, vctrl->vcq_en))
+ goto err_db;
+
+ /* emulate the shadow doorbell functionality */
+ if (!vctrl->mmio.shadow_db_en || qid == 0)
+ vctrl->mmio.dbs[qid].cqh = cpu_to_le32(val & 0x0000FFFF);
+
+ if (vctrl->vctrl_paused || !vctrl->mmio.shadow_db_supported)
+ return;
+
+ if (qid == 0) {
+ nvme_mdev_vcq_process(vctrl, 0, false);
+ // if completion queue was full prior to that, we
+ // might have some admin commands pending,
+ // and this is the last chance to process them
+ nvme_mdev_adm_process_sq(vctrl);
+ }
+ return;
+err_db:
+ _DBG(vctrl,
+ "MMIO: inactive/invalid CQ DB write qid=%d, value=%d\n",
+ qid, val);
+
+ nvme_mdev_mmio_error(vctrl, NVME_AER_ERROR_INVALID_DB_REG);
+}
+
+/* Called when the guest enables the controller */
+static void nvme_mdev_mmio_cntrl_enable(struct nvme_mdev_vctrl *vctrl)
+{
+ u64 acq, asq;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ // Controller must be reset from the dead state
+ if (nvme_mdev_vctrl_is_dead(vctrl))
+ goto error;
+
+ /* only NVME command set supported */
+ if (((vctrl->mmio.cc >> NVME_CC_CSS_SHIFT) & 0x7) != 0)
+ goto error;
+
+ /* Check the queue arbitration method*/
+ if ((vctrl->mmio.cc & NVME_CC_AMS_MASK) != NVME_CC_AMS_RR)
+ goto error;
+
+ /* Check the page size*/
+ if (((vctrl->mmio.cc >> NVME_CC_MPS_SHIFT) & 0xF) != (PAGE_SHIFT - 12))
+ goto error;
+
+ /* Start the admin completion queue*/
+ acq = vctrl->mmio.acql | ((u64)vctrl->mmio.acqh << 32);
+ asq = vctrl->mmio.asql | ((u64)vctrl->mmio.asqh << 32);
+
+ if (!nvme_mdev_vctrl_enable(vctrl, acq, asq, vctrl->mmio.aqa))
+ goto error;
+
+ /* Success! */
+ vctrl->mmio.csts |= NVME_CSTS_RDY;
+ return;
+error:
+ _DBG(vctrl, "MMIO: failure to enable the controller - fatal error\n");
+ nvme_mdev_mmio_fatal_error(vctrl);
+}
+
+/* Called when the guest notifies the controller that it is
+ * about to be disabled
+ */
+static void nvme_mdev_mmio_cntrl_shutdown(struct nvme_mdev_vctrl *vctrl)
+{
+ lockdep_assert_held(&vctrl->lock);
+
+ /* clear shutdown notification bits */
+ vctrl->mmio.cc &= ~NVME_CC_SHN_MASK;
+
+ if (nvme_mdev_vctrl_is_dead(vctrl)) {
+ _DBG(vctrl, "MMIO: shutdown notification for dead ctrl\n");
+ return;
+ }
+
+ /* not enabled */
+ if (!(vctrl->mmio.csts & NVME_CSTS_RDY)) {
+ _DBG(vctrl, "MMIO: shutdown notification with CSTS.RDY==0\n");
+ nvme_mdev_assert_io_not_running(vctrl);
+ return;
+ }
+
+ nvme_mdev_io_pause(vctrl);
+ nvme_mdev_vctrl_disable(vctrl);
+ vctrl->mmio.csts |= NVME_CSTS_SHST_CMPLT;
+}
+
+/* MMIO BAR read/write */
+static int nvme_mdev_mmio_bar_access(struct nvme_mdev_vctrl *vctrl,
+ u16 offset, char *buf,
+ u32 count, bool is_write)
+{
+ u32 val, oldval;
+
+ mutex_lock(&vctrl->lock);
+
+ /* Drop non DWORD sized and aligned reads/writes
+ * (QWORD read/writes are split by the caller)
+ */
+ if (count != 4 || (offset & 0x3))
+ goto drop;
+
+ val = is_write ? le32_to_cpu(*(__le32 *)buf) : 0;
+
+ switch (offset) {
+ case NVME_REG_CAP:
+ /* controller capabilities (low 32 bit)*/
+ if (is_write)
+ goto drop;
+ store_le32(buf, vctrl->mmio.cap & 0xFFFFFFFF);
+ break;
+
+ case NVME_REG_CAP + 4:
+ /* controller capabilities (upper 32 bit)*/
+ if (is_write)
+ goto drop;
+ store_le32(buf, vctrl->mmio.cap >> 32);
+ break;
+
+ case NVME_REG_VS:
+ if (is_write)
+ goto drop;
+ store_le32(buf, NVME_MDEV_NVME_VER);
+ break;
+
+ case NVME_REG_INTMS:
+ case NVME_REG_INTMC:
+ /* Interrupt Mask Set & Clear */
+ goto drop;
+
+ case NVME_REG_CC:
+ /* Controller Configuration */
+ if (!is_write) {
+ store_le32(buf, vctrl->mmio.cc);
+ break;
+ }
+
+ oldval = vctrl->mmio.cc;
+ vctrl->mmio.cc = val;
+
+ /* drop if reserved bits set */
+ if (vctrl->mmio.cc & 0xFF00000E) {
+ _DBG(vctrl,
+ "MMIO: reserved bits of CC set - fatal error\n");
+ nvme_mdev_mmio_fatal_error(vctrl);
+ goto drop;
+ }
+
+ /* CSS(command set),MPS(memory page size),AMS(queue arbitration)
+ * must not be changed while controller is running
+ */
+ if (vctrl->mmio.csts & NVME_CSTS_RDY) {
+ if ((vctrl->mmio.cc & 0x3FF0) != (oldval & 0x3FF0)) {
+ _DBG(vctrl,
+ "MMIO: attempt to change setting bits of CC while CC.EN=1 - fatal error\n");
+
+ nvme_mdev_mmio_fatal_error(vctrl);
+ goto drop;
+ }
+ }
+
+ if ((vctrl->mmio.cc & NVME_CC_SHN_MASK) != NVME_CC_SHN_NONE) {
+ _DBG(vctrl, "MMIO: CC.SHN != 0 - shutdown\n");
+ nvme_mdev_mmio_cntrl_shutdown(vctrl);
+ }
+
+ /* change in controller enabled state */
+ if ((val & NVME_CC_ENABLE) == (oldval & NVME_CC_ENABLE))
+ break;
+
+ if (vctrl->mmio.cc & NVME_CC_ENABLE) {
+ _DBG(vctrl, "MMIO: CC.EN<=1 - enable the controller\n");
+ nvme_mdev_mmio_cntrl_enable(vctrl);
+ } else {
+ _DBG(vctrl, "MMIO: CC.EN<=0 - reset controller\n");
+ __nvme_mdev_vctrl_reset(vctrl, false);
+ }
+
+ break;
+
+ case NVME_REG_CSTS:
+ /* Controller Status */
+ if (is_write)
+ goto drop;
+ store_le32(buf, vctrl->mmio.csts);
+ break;
+
+ case NVME_REG_AQA:
+ /* admin queue submission and completion size*/
+ if (!is_write)
+ store_le32(buf, vctrl->mmio.aqa);
+ else if (!(vctrl->mmio.csts & NVME_CSTS_RDY))
+ vctrl->mmio.aqa = val;
+ else
+ goto drop;
+ break;
+
+ case NVME_REG_ASQ:
+ /* admin submission queue address (low 32 bit)*/
+ if (!is_write)
+ store_le32(buf, vctrl->mmio.asql);
+ else if (!(vctrl->mmio.csts & NVME_CSTS_RDY))
+ vctrl->mmio.asql = val;
+ else
+ goto drop;
+ break;
+
+ case NVME_REG_ASQ + 4:
+ /* admin submission queue address (high 32 bit)*/
+ if (!is_write)
+ store_le32(buf, vctrl->mmio.asqh);
+ else if (!(vctrl->mmio.csts & NVME_CSTS_RDY))
+ vctrl->mmio.asqh = val;
+ else
+ goto drop;
+ break;
+
+ case NVME_REG_ACQ:
+ /* admin completion queue address (low 32 bit)*/
+ if (!is_write)
+ store_le32(buf, vctrl->mmio.acql);
+ else if (!(vctrl->mmio.csts & NVME_CSTS_RDY))
+ vctrl->mmio.acql = val;
+ else
+ goto drop;
+ break;
+
+ case NVME_REG_ACQ + 4:
+ /* admin completion queue address (high 32 bit)*/
+ if (!is_write)
+ store_le32(buf, vctrl->mmio.acqh);
+ else if (!(vctrl->mmio.csts & NVME_CSTS_RDY))
+ vctrl->mmio.acqh = val;
+ else
+ goto drop;
+ break;
+
+ case NVME_REG_CMBLOC:
+ case NVME_REG_CMBSZ:
+ /* not supported - hardwired to 0*/
+ if (is_write)
+ goto drop;
+ store_le32(buf, 0);
+ break;
+
+ case NVME_REG_DBS ... (NVME_REG_DBS + DB_AREA_SIZE - 1): {
+ /* completion and submission doorbells */
+ u16 db_offset = offset - NVME_REG_DBS;
+ u16 index = db_offset >> (DB_STRIDE_SHIFT + 2);
+ u16 qid = index >> 1;
+ bool sq = (index & 0x1) == 0;
+
+ if (!is_write || (db_offset & DB_MASK))
+ goto drop;
+
+ if (!(vctrl->mmio.csts & NVME_CSTS_RDY))
+ goto drop;
+
+ if (nvme_mdev_vctrl_is_dead(vctrl))
+ goto drop;
+
+ if (sq)
+ nvme_mdev_mmio_db_write_sq(vctrl, qid, val);
+ else
+ nvme_mdev_mmio_db_write_cq(vctrl, qid, val);
+ break;
+ }
+ default:
+ goto drop;
+ }
+
+ mutex_unlock(&vctrl->lock);
+ return count;
+drop:
+ _DBG(vctrl, "MMIO: dropping write at 0x%x\n", offset);
+ mutex_unlock(&vctrl->lock);
+ return 0;
+}
+
+/* Called when the virtual controller is created */
+int nvme_mdev_mmio_create(struct nvme_mdev_vctrl *vctrl)
+{
+ int ret;
+
+ /* BAR0 */
+ nvme_mdev_pci_setup_bar(vctrl, PCI_BASE_ADDRESS_0,
+ MMIO_BAR_SIZE, nvme_mdev_mmio_bar_access);
+
+ /* Spec allows for maximum depth of 0x10000, but we limit
+ * it to 1 less to avoid various overflows
+ */
+ BUILD_BUG_ON(MAX_VIRTUAL_QUEUE_DEPTH > 0xFFFF);
+
+ /* CAP has 4 bits for the doorbell stride shift*/
+ BUILD_BUG_ON(DB_STRIDE_SHIFT > 0xF);
+
+ /* Shadow doorbell limits doorbells to 1 page*/
+ BUILD_BUG_ON(DB_AREA_SIZE > PAGE_SIZE);
+
+ /* Just in case...*/
+ BUILD_BUG_ON((PAGE_SHIFT - 12) > 0xF);
+
+ vctrl->mmio.cap =
+ // MQES: maximum queue entries
+ ((u64)(MAX_VIRTUAL_QUEUE_DEPTH - 1) << 0) |
+ // CQR: physically contiguous queues - no
+ (0ULL << 16) |
+ // AMS: Queue arbitration.
+ // TODOLATER: IO: implement WRRU
+ (0ULL << 17) |
+ // TO: RDY timeout - 0 (done in sync)
+ (0ULL << 24) |
+ // DSTRD: doorbell stride
+ ((u64)DB_STRIDE_SHIFT << 32) |
+ // NSSRS: no support for nvme subsystem reset
+ (0ULL << 36) |
+ // CSS: NVM command set supported
+ (1ULL << 37) |
+ // BPS: no support for boot partition
+ (0ULL << 45) |
+ // MPSMIN: Minimum page size supported is PAGE_SIZE
+ ((u64)(PAGE_SHIFT - 12) << 48) |
+ // MPSMAX: Maximum page size is PAGE_SIZE as well
+ ((u64)(PAGE_SHIFT - 12) << 52);
+
+ /* Create the (regular) doorbell buffers */
+ vctrl->mmio.dbs_page = alloc_pages_node(vctrl->hctrl->node,
+ GFP_KERNEL | __GFP_ZERO, 0);
+
+ ret = -ENOMEM;
+
+ if (!vctrl->mmio.dbs_page)
+ goto error0;
+
+ vctrl->mmio.db_page_kmap = kmap(vctrl->mmio.dbs_page);
+ if (!vctrl->mmio.db_page_kmap)
+ goto error1;
+
+ vctrl->mmio.fake_eidx_page = alloc_pages_node(vctrl->hctrl->node,
+ GFP_KERNEL | __GFP_ZERO, 0);
+ if (!vctrl->mmio.fake_eidx_page)
+ goto error2;
+
+ vctrl->mmio.fake_eidx_kmap = kmap(vctrl->mmio.fake_eidx_page);
+ if (!vctrl->mmio.fake_eidx_kmap)
+ goto error3;
+ return 0;
+error3:
+ put_page(vctrl->mmio.fake_eidx_page);
+error2:
+ kunmap(vctrl->mmio.dbs_page);
+error1:
+ put_page(vctrl->mmio.dbs_page);
+error0:
+ return ret;
+}
+
+/* Called when the virtual controller is reset */
+void nvme_mdev_mmio_reset(struct nvme_mdev_vctrl *vctrl, bool pci_reset)
+{
+ vctrl->mmio.cc = 0;
+ vctrl->mmio.csts = 0;
+
+ if (pci_reset) {
+ vctrl->mmio.aqa = 0;
+ vctrl->mmio.asql = 0;
+ vctrl->mmio.asqh = 0;
+ vctrl->mmio.acql = 0;
+ vctrl->mmio.acqh = 0;
+ }
+}
+
+/* Called when the virtual controller is opened */
+void nvme_mdev_mmio_open(struct nvme_mdev_vctrl *vctrl)
+{
+ if (!vctrl->mmio.shadow_db_supported)
+ nvme_mdev_vctrl_region_set_mmap(vctrl,
+ VFIO_PCI_BAR0_REGION_INDEX,
+ NVME_REG_DBS, PAGE_SIZE,
+ &nvme_mdev_mmio_dbs_vm_ops);
+ else
+ nvme_mdev_vctrl_region_disable_mmap(vctrl,
+ VFIO_PCI_BAR0_REGION_INDEX);
+}
+
+/* Called when the virtual controller queues are enabled */
+int nvme_mdev_mmio_enable_dbs(struct nvme_mdev_vctrl *vctrl)
+{
+ if (WARN_ON(vctrl->mmio.shadow_db_en))
+ return -EINVAL;
+
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ /* setup normal doorbells and reset them*/
+ vctrl->mmio.dbs = vctrl->mmio.db_page_kmap;
+ vctrl->mmio.eidxs = vctrl->mmio.fake_eidx_kmap;
+ memset((void *)vctrl->mmio.dbs, 0, DB_AREA_SIZE);
+ memset((void *)vctrl->mmio.eidxs, 0, DB_AREA_SIZE);
+ return 0;
+}
+
+/* Called when the virtual controller shadow doorbell is enabled */
+int nvme_mdev_mmio_enable_dbs_shadow(struct nvme_mdev_vctrl *vctrl,
+ dma_addr_t sdb_iova,
+ dma_addr_t eidx_iova)
+{
+ int ret;
+
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ ret = nvme_mdev_viommu_create_kmap(&vctrl->viommu,
+ sdb_iova, &vctrl->mmio.sdb_map);
+ if (ret)
+ return ret;
+
+ ret = nvme_mdev_viommu_create_kmap(&vctrl->viommu,
+ eidx_iova, &vctrl->mmio.seidx_map);
+ if (ret) {
+ nvme_mdev_viommu_free_kmap(&vctrl->viommu,
+ &vctrl->mmio.sdb_map);
+ return ret;
+ }
+
+ vctrl->mmio.dbs = vctrl->mmio.sdb_map.kmap;
+ vctrl->mmio.eidxs = vctrl->mmio.seidx_map.kmap;
+
+ memcpy((void *)vctrl->mmio.dbs,
+ vctrl->mmio.db_page_kmap, DB_AREA_SIZE);
+
+ memcpy((void *)vctrl->mmio.eidxs,
+ vctrl->mmio.fake_eidx_kmap, DB_AREA_SIZE);
+
+ vctrl->mmio.shadow_db_en = true;
+ return 0;
+}
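Once the shadow doorbells are live, the driver must decide whether a guest doorbell update actually requires action. `nvme_mdev_mmio_db_check` (declared in priv.h) is not shown in this hunk, but the NVMe 1.3 shadow-doorbell rules use the same eventidx comparison as virtio's `vring_need_event`. A minimal userspace sketch of that formulation (the function name here is illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* A doorbell write is needed when the event index lies in the
 * window [old, new), modulo 2^16 - hence the uint16_t casts.
 */
static int need_doorbell(uint16_t new_idx, uint16_t old_idx, uint16_t event_idx)
{
	return (uint16_t)(new_idx - event_idx - 1) <
	       (uint16_t)(new_idx - old_idx);
}
```

The unsigned wraparound makes the comparison correct even when the 16-bit indices overflow.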
+
+/* Called on guest mapping update to
+ * verify that our mappings are still intact
+ */
+void nvme_mdev_mmio_viommu_update(struct nvme_mdev_vctrl *vctrl)
+{
+ nvme_mdev_assert_io_not_running(vctrl);
+ if (!vctrl->mmio.shadow_db_en)
+ return;
+
+ nvme_mdev_viommu_update_kmap(&vctrl->viommu, &vctrl->mmio.sdb_map);
+ nvme_mdev_viommu_update_kmap(&vctrl->viommu, &vctrl->mmio.seidx_map);
+
+ vctrl->mmio.dbs = vctrl->mmio.sdb_map.kmap;
+ vctrl->mmio.eidxs = vctrl->mmio.seidx_map.kmap;
+}
+
+/* Disable the doorbells */
+void nvme_mdev_mmio_disable_dbs(struct nvme_mdev_vctrl *vctrl)
+{
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ /* Free the shadow doorbells */
+ nvme_mdev_viommu_free_kmap(&vctrl->viommu, &vctrl->mmio.sdb_map);
+ nvme_mdev_viommu_free_kmap(&vctrl->viommu, &vctrl->mmio.seidx_map);
+
+ /* Clear the doorbells */
+ vctrl->mmio.dbs = NULL;
+ vctrl->mmio.eidxs = NULL;
+ vctrl->mmio.shadow_db_en = false;
+}
+
+/* Called when the virtual controller is about to be freed */
+void nvme_mdev_mmio_free(struct nvme_mdev_vctrl *vctrl)
+{
+ nvme_mdev_assert_io_not_running(vctrl);
+ kunmap(vctrl->mmio.dbs_page);
+ put_page(vctrl->mmio.dbs_page);
+ kunmap(vctrl->mmio.fake_eidx_page);
+ put_page(vctrl->mmio.fake_eidx_page);
+}
diff --git a/drivers/nvme/mdev/pci.c b/drivers/nvme/mdev/pci.c
new file mode 100644
index 000000000000..b7cdeaaf9c2e
--- /dev/null
+++ b/drivers/nvme/mdev/pci.c
@@ -0,0 +1,247 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * NVMe virtual controller minimal PCI/PCIe config space implementation
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/kernel.h>
+#include <linux/pci.h>
+#include "priv.h"
+
+/* setup a 64 bit PCI bar */
+void nvme_mdev_pci_setup_bar(struct nvme_mdev_vctrl *vctrl,
+ u8 bar,
+ unsigned int size,
+ region_access_fn access_fn)
+{
+ nvme_mdev_vctrl_add_region(vctrl,
+ VFIO_PCI_BAR0_REGION_INDEX +
+ ((bar - PCI_BASE_ADDRESS_0) >> 2),
+ size, access_fn);
+
+ store_le32(vctrl->pcicfg.wmask + bar, ~((u64)size - 1));
+ store_le32(vctrl->pcicfg.values + bar,
+ PCI_BASE_ADDRESS_SPACE_MEMORY |
+ PCI_BASE_ADDRESS_MEM_TYPE_64);
+}
+
+/* Allocate a pci capability*/
+static u8 nvme_mdev_pci_allocate_cap(struct nvme_mdev_vctrl *vctrl,
+ u8 id, u8 size)
+{
+ u8 *cfg = vctrl->pcicfg.values;
+ u8 newcap = vctrl->pcicfg.end;
+ u8 cap = cfg[PCI_CAPABILITY_LIST];
+
+ size = round_up(size, 4);
+ /* only standard cfg space caps for now */
+ WARN_ON(newcap + size > 256);
+
+ if (!cfg[PCI_CAPABILITY_LIST]) {
+ /*special case for first capability*/
+ u16 status = load_le16(cfg + PCI_STATUS);
+
+ status |= PCI_STATUS_CAP_LIST;
+ store_le16(cfg + PCI_STATUS, status);
+
+ cfg[PCI_CAPABILITY_LIST] = newcap;
+ goto setupcap;
+ }
+
+ while (cfg[cap + PCI_CAP_LIST_NEXT] != 0)
+ cap = cfg[cap + PCI_CAP_LIST_NEXT];
+
+ cfg[cap + PCI_CAP_LIST_NEXT] = newcap;
+
+setupcap:
+ cfg[newcap + PCI_CAP_LIST_ID] = id;
+ cfg[newcap + PCI_CAP_LIST_NEXT] = 0;
+ vctrl->pcicfg.end += size;
+ return newcap;
+}
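The allocator above builds a standard PCI capability chain: the first capability sets PCI_STATUS_CAP_LIST and the head pointer at offset 0x34; later ones are appended by walking the next pointers to the tail. A userspace sketch of just that list manipulation, using the offsets from the PCI specification:

```c
#include <assert.h>
#include <stdint.h>

#define PCI_CAPABILITY_LIST 0x34 /* head pointer in the config header */
#define PCI_CAP_LIST_ID     0    /* capability ID byte */
#define PCI_CAP_LIST_NEXT   1    /* next-capability pointer byte */

/* Link a capability with the given 'id' at offset 'newcap' into the
 * chain inside the 256-byte config space image 'cfg'.
 */
static uint8_t cap_append(uint8_t *cfg, uint8_t newcap, uint8_t id)
{
	uint8_t cap = cfg[PCI_CAPABILITY_LIST];

	if (!cap) {
		/* first capability: install the head pointer */
		cfg[PCI_CAPABILITY_LIST] = newcap;
	} else {
		/* walk to the tail of the chain and link the new entry */
		while (cfg[cap + PCI_CAP_LIST_NEXT])
			cap = cfg[cap + PCI_CAP_LIST_NEXT];
		cfg[cap + PCI_CAP_LIST_NEXT] = newcap;
	}
	cfg[newcap + PCI_CAP_LIST_ID] = id;
	cfg[newcap + PCI_CAP_LIST_NEXT] = 0;
	return newcap;
}
```

This mirrors the walk-and-append logic only; the driver additionally rounds sizes, bumps `pcicfg.end`, and flips PCI_STATUS_CAP_LIST.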
+
+static void nvme_mdev_pci_setup_pm_cap(struct nvme_mdev_vctrl *vctrl)
+{
+ u8 *cfg = vctrl->pcicfg.values;
+ u8 *cfgm = vctrl->pcicfg.wmask;
+
+ u8 cap = nvme_mdev_pci_allocate_cap(vctrl,
+ PCI_CAP_ID_PM, PCI_PM_SIZEOF);
+
+ store_le16(cfg + cap + PCI_PM_PMC, 0x3);
+ store_le16(cfg + cap + PCI_PM_CTRL, PCI_PM_CTRL_NO_SOFT_RESET);
+ store_le16(cfgm + cap + PCI_PM_CTRL, 0x3);
+ vctrl->pcicfg.pmcap = cap;
+}
+
+static void nvme_mdev_pci_setup_msix_cap(struct nvme_mdev_vctrl *vctrl)
+{
+ u8 *cfg = vctrl->pcicfg.values;
+ u8 *cfgm = vctrl->pcicfg.wmask;
+ u8 cap = nvme_mdev_pci_allocate_cap(vctrl,
+ PCI_CAP_ID_MSIX,
+ PCI_CAP_MSIX_SIZEOF);
+
+ int msix_tbl_size = roundup(MAX_VIRTUAL_IRQS * 16, PAGE_SIZE);
+ int msix_pba_size = roundup(DIV_ROUND_UP(MAX_VIRTUAL_IRQS, 8),
+ PAGE_SIZE);
+
+ store_le16(cfg + cap + PCI_MSIX_FLAGS, MAX_VIRTUAL_IRQS - 1);
+ store_le16(cfgm + cap + PCI_MSIX_FLAGS,
+ PCI_MSIX_FLAGS_MASKALL | PCI_MSIX_FLAGS_ENABLE);
+
+ store_le32(cfg + cap + PCI_MSIX_TABLE, 0x2);
+ store_le32(cfg + cap + PCI_MSIX_PBA, msix_tbl_size | 0x2);
+
+ nvme_mdev_pci_setup_bar(vctrl, PCI_BASE_ADDRESS_2,
+ __roundup_pow_of_two(msix_tbl_size +
+ msix_pba_size), NULL);
+ vctrl->pcicfg.msixcap = cap;
+}
+
+static void nvme_mdev_pci_setup_pcie_cap(struct nvme_mdev_vctrl *vctrl)
+{
+ u8 *cfg = vctrl->pcicfg.values;
+ u8 cap = nvme_mdev_pci_allocate_cap(vctrl,
+ PCI_CAP_ID_EXP,
+ PCI_CAP_EXP_ENDPOINT_SIZEOF_V2);
+
+ store_le16(cfg + cap + PCI_EXP_FLAGS, 0x02 |
+ (PCI_EXP_TYPE_ENDPOINT << 4));
+
+ store_le32(cfg + cap + PCI_EXP_DEVCAP,
+ PCI_EXP_DEVCAP_RBER | PCI_EXP_DEVCAP_FLR);
+ store_le32(cfg + cap + PCI_EXP_LNKCAP,
+ PCI_EXP_LNKCAP_SLS_8_0GB | (4 << 4) /*4x*/);
+ store_le16(cfg + cap + PCI_EXP_LNKSTA,
+ PCI_EXP_LNKSTA_CLS_8_0GB | (4 << 4) /*4x*/);
+
+ store_le32(cfg + cap + PCI_EXP_LNKCAP2, PCI_EXP_LNKCAP2_SLS_8_0GB);
+ store_le16(cfg + cap + PCI_EXP_LNKCTL2, PCI_EXP_LNKCTL2_TLS_8_0GT);
+ vctrl->pcicfg.pciecap = cap;
+}
+
+/* This is called on PCI config read/write */
+static int nvme_mdev_pci_cfg_access(struct nvme_mdev_vctrl *vctrl,
+ u16 offset, char *buf,
+ u32 count, bool is_write)
+{
+ unsigned int i;
+
+ mutex_lock(&vctrl->lock);
+
+ if (!is_write) {
+ memcpy(buf, (vctrl->pcicfg.values + offset), count);
+ goto out;
+ }
+
+ for (i = 0; i < count; i++) {
+ u8 address = offset + i;
+ u8 value = buf[i];
+ u8 old_value = vctrl->pcicfg.values[address];
+ u8 wmask = vctrl->pcicfg.wmask[address];
+ u8 new_value = (value & wmask) | (old_value & ~wmask);
+
+ /* D3/D0 power control */
+ if (address == vctrl->pcicfg.pmcap + PCI_PM_CTRL) {
+ u8 state = new_value & 0x03;
+
+ if (state != 0 && state != 3)
+ new_value = old_value;
+
+ if (old_value != new_value) {
+ const char *s = state == 3 ? "D3" : "D0";
+
+ if (state == 3)
+ __nvme_mdev_vctrl_reset(vctrl, true);
+ _DBG(vctrl, "PCI: going to %s\n", s);
+ }
+ }
+
+ /* FLR reset*/
+ if (address == vctrl->pcicfg.pciecap + PCI_EXP_DEVCTL + 1)
+ if (value & 0x80) {
+ _DBG(vctrl, "PCI: FLR reset\n");
+ __nvme_mdev_vctrl_reset(vctrl, true);
+ }
+ vctrl->pcicfg.values[offset + i] = new_value;
+ }
+out:
+ mutex_unlock(&vctrl->lock);
+ return count;
+}
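The config-write path above merges each guest byte through a per-byte write mask, so the guest can only change bits the device declares writable. That merge step in isolation:

```c
#include <assert.h>
#include <stdint.h>

/* Merge a guest write into a register byte: only bits set in 'wmask'
 * take the written value; all other bits keep their current value.
 */
static uint8_t masked_write(uint8_t old, uint8_t val, uint8_t wmask)
{
	return (val & wmask) | (old & ~wmask);
}
```

Read-only registers simply get a zero mask, so guest writes to them become no-ops without any special casing.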
+
+/* setup pci configuration */
+int nvme_mdev_pci_create(struct nvme_mdev_vctrl *vctrl)
+{
+ u8 *cfg, *cfgm;
+
+ vctrl->pcicfg.values = kzalloc(PCI_CFG_SIZE, GFP_KERNEL);
+ if (!vctrl->pcicfg.values)
+ return -ENOMEM;
+
+ vctrl->pcicfg.wmask = kzalloc(PCI_CFG_SIZE, GFP_KERNEL);
+ if (!vctrl->pcicfg.wmask) {
+ kfree(vctrl->pcicfg.values);
+ return -ENOMEM;
+ }
+
+ cfg = vctrl->pcicfg.values;
+ cfgm = vctrl->pcicfg.wmask;
+
+ nvme_mdev_vctrl_add_region(vctrl,
+ VFIO_PCI_CONFIG_REGION_INDEX,
+ PCI_CFG_SIZE,
+ nvme_mdev_pci_cfg_access);
+
+ /* vendor information */
+ store_le16(cfg + PCI_VENDOR_ID, NVME_MDEV_PCI_VENDOR_ID);
+ store_le16(cfg + PCI_DEVICE_ID, NVME_MDEV_PCI_DEVICE_ID);
+
+ /* pci command register */
+ store_le16(cfgm + PCI_COMMAND,
+ PCI_COMMAND_INTX_DISABLE |
+ PCI_COMMAND_MEMORY |
+ PCI_COMMAND_MASTER);
+
+ /* pci status register */
+ store_le16(cfg + PCI_STATUS, PCI_STATUS_CAP_LIST);
+
+ /* subsystem information */
+ store_le16(cfg + PCI_SUBSYSTEM_VENDOR_ID, NVME_MDEV_PCI_SUBVENDOR_ID);
+ store_le16(cfg + PCI_SUBSYSTEM_ID, NVME_MDEV_PCI_SUBDEVICE_ID);
+ store_le8(cfg + PCI_CLASS_REVISION, NVME_MDEV_PCI_REVISION);
+
+ /*Programming Interface (NVM Express) */
+ store_le8(cfg + PCI_CLASS_PROG, 0x02);
+
+ /* Device class and subclass
+ * (Mass storage controller, Non-Volatile memory controller)
+ */
+ store_le16(cfg + PCI_CLASS_DEVICE, 0x0108);
+
+ /* dummy read/write */
+ store_le8(cfgm + PCI_CACHE_LINE_SIZE, 0xFF);
+
+ /* initial value*/
+ store_le8(cfg + PCI_CAPABILITY_LIST, 0);
+ vctrl->pcicfg.end = 0x40;
+
+ nvme_mdev_pci_setup_pm_cap(vctrl);
+ nvme_mdev_pci_setup_msix_cap(vctrl);
+ nvme_mdev_pci_setup_pcie_cap(vctrl);
+
+ /* INTX IRQ number - info only for BIOS */
+ store_le8(cfgm + PCI_INTERRUPT_LINE, 0xFF);
+ store_le8(cfg + PCI_INTERRUPT_PIN, 0x01);
+
+ return 0;
+}
+
+/* teardown pci configuration */
+void nvme_mdev_pci_free(struct nvme_mdev_vctrl *vctrl)
+{
+ kfree(vctrl->pcicfg.values);
+ kfree(vctrl->pcicfg.wmask);
+ vctrl->pcicfg.values = NULL;
+ vctrl->pcicfg.wmask = NULL;
+}
diff --git a/drivers/nvme/mdev/priv.h b/drivers/nvme/mdev/priv.h
new file mode 100644
index 000000000000..9f65e46c1ab2
--- /dev/null
+++ b/drivers/nvme/mdev/priv.h
@@ -0,0 +1,700 @@
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Driver private data structures and helper macros
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+
+#ifndef _MDEV_NVME_PRIV_H
+#define _MDEV_NVME_PRIV_H
+
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/rbtree.h>
+#include <linux/vfio.h>
+#include <linux/mdev.h>
+#include <linux/pci.h>
+#include <linux/eventfd.h>
+#include <linux/byteorder/generic.h>
+#include "../host/nvme.h"
+#include "mdev.h"
+
+#define NVME_MDEV_NVME_VER NVME_VS(0x01, 0x03, 0x00)
+#define NVME_MDEV_FIRMWARE_VERSION "1.0"
+
+#define NVME_MDEV_PCI_VENDOR_ID PCI_VENDOR_ID_REDHAT_QUMRANET
+#define NVME_MDEV_PCI_DEVICE_ID 0x1234
+#define NVME_MDEV_PCI_SUBVENDOR_ID PCI_SUBVENDOR_ID_REDHAT_QUMRANET
+#define NVME_MDEV_PCI_SUBDEVICE_ID 0
+#define NVME_MDEV_PCI_REVISION 0x0
+
+#define DB_STRIDE_SHIFT 4 /*4 = 1 cacheline */
+#define MAX_VIRTUAL_QUEUES 16
+#define MAX_VIRTUAL_QUEUE_DEPTH 0xFFFF
+#define MAX_VIRTUAL_NAMESPACES 16 /* NSID = 1..16*/
+#define MAX_VIRTUAL_IRQS 16
+
+#define MAX_HOST_QUEUES 4
+#define MAX_AER_COMMANDS 16
+#define MAX_LOG_PAGES 16
+
+extern bool use_shadow_doorbell;
+extern unsigned int io_timeout_ms;
+extern unsigned int poll_timeout_ms;
+extern unsigned int admin_poll_rate_ms;
+
+/* virtual submission queue*/
+struct nvme_vsq {
+ u16 qid;
+ u16 size;
+ u16 head; /*next item to read */
+
+ struct nvme_command *data; /*the queue*/
+ struct nvme_vcq *vcq; /* completion queue*/
+
+ dma_addr_t iova;
+ bool cont;
+
+ u16 hsq;
+};
+
+/* virtual completion queue*/
+struct nvme_vcq {
+ /* basic queue settings */
+ u16 qid;
+ u16 size;
+ u16 head;
+ u16 tail;
+ bool phase; /* current queue phase */
+
+ volatile struct nvme_completion *data;
+
+ /* number of items pending*/
+ u16 pending;
+
+ /* IRQ settings */
+ int irq; /* -1 if disabled */
+
+ dma_addr_t iova;
+ bool cont;
+};
+
+/*A virtual namespace */
+struct nvme_mdev_vns {
+ /* host nvme namespace that we are attached to */
+ struct nvme_ns *host_ns;
+
+ /* block device that corresponds to the partition of that namespace */
+ struct block_device *host_part;
+ fmode_t fmode;
+
+ u32 nsid;
+
+ /* NSID on the host*/
+ u32 host_nsid;
+
+ /* host partition ID*/
+ unsigned int host_partid;
+
+ /* Offset inside the host namespace (start of the partition)*/
+ u64 host_lba_offset;
+
+ /* size of each block on the real namespace, same for host and guest */
+ u8 blksize_shift;
+
+ /* size of the namespace in lbas*/
+ u64 ns_size;
+
+ /* is the namespace read only?*/
+ bool readonly;
+
+ /* UUID of this namespace */
+ uuid_t uuid;
+
+ /* Optimal IO boundary*/
+ u16 noiob;
+};
+
+/* Virtual IOMMU */
+struct nvme_mdev_viommu {
+ struct device *hw_dev;
+ struct device *sw_dev;
+
+ /* dma/prp bookkeeping */
+ struct rb_root_cached maps_tree;
+ struct list_head maps_list;
+ struct nvme_mdev_vctrl *vctrl;
+};
+
+struct doorbell {
+ volatile __le32 sqt;
+ u8 rsvd1[(4 << DB_STRIDE_SHIFT) - sizeof(__le32)];
+ volatile __le32 cqh;
+ u8 rsvd2[(4 << DB_STRIDE_SHIFT) - sizeof(__le32)];
+};
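The `struct doorbell` above packs one SQ-tail/CQ-head pair per queue, padded so that with `DB_STRIDE_SHIFT` = 4 each doorbell occupies its own 64-byte cache line. The BAR0 offsets follow the NVMe doorbell formula, `0x1000 + (2y + b) * (4 << CAP.DSTRD)`. A small sketch of those offsets (assuming the standard 0x1000 doorbell base):

```c
#include <assert.h>
#include <stdint.h>

#define DB_STRIDE_SHIFT 4 /* CAP.DSTRD: 4 => 64 bytes per doorbell */

/* BAR0 byte offset of the submission queue tail doorbell for 'qid' */
static uint32_t sq_tail_db_offset(uint16_t qid)
{
	return 0x1000 + (2 * qid) * (4 << DB_STRIDE_SHIFT);
}

/* BAR0 byte offset of the completion queue head doorbell for 'qid' */
static uint32_t cq_head_db_offset(uint16_t qid)
{
	return 0x1000 + (2 * qid + 1) * (4 << DB_STRIDE_SHIFT);
}
```

Spreading doorbells across cache lines keeps the polling iothread from false-sharing with guest vCPUs that write adjacent doorbells.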
+
+/* MMIO state */
+struct nvme_mdev_user_ctrl_mmio {
+ u32 cc; /* controller configuration */
+ u32 csts; /* controller status */
+ u64 cap; /* controller capabilities */
+
+ /* admin queue location & size */
+ u32 aqa;
+ u32 asql;
+ u32 asqh;
+ u32 acql;
+ u32 acqh;
+
+ bool shadow_db_supported;
+ bool shadow_db_en;
+
+ /* Regular doorbells */
+ struct page *dbs_page;
+ struct page *fake_eidx_page;
+ void *db_page_kmap;
+ void *fake_eidx_kmap;
+
+ /* Shadow doorbells */
+ struct page_map sdb_map;
+ struct page_map seidx_map;
+
+ /* Current doorbell mappings */
+ volatile struct doorbell *dbs;
+ volatile struct doorbell *eidxs;
+};
+
+/* pci configuration space of the device*/
+#define PCI_CFG_SIZE 4096
+struct nvme_mdev_pci_cfg_space {
+ u8 *values;
+ u8 *wmask;
+
+ u8 pmcap;
+ u8 pciecap;
+ u8 msixcap;
+ u8 end;
+};
+
+/*IRQ state of the controller */
+struct nvme_mdev_user_irq {
+ struct eventfd_ctx *trigger;
+ /* IRQ coalescing */
+ bool irq_coalesc_en;
+ time_t irq_time;
+ unsigned int irq_pending_cnt;
+};
+
+enum nvme_mdev_irq_mode {
+ NVME_MDEV_IMODE_NONE,
+ NVME_MDEV_IMODE_INTX,
+ NVME_MDEV_IMODE_MSIX,
+};
+
+struct nvme_mdev_user_irqs {
+ /* one of VFIO_PCI_{INTX|MSI|MSIX}_IRQ_INDEX */
+ enum nvme_mdev_irq_mode mode;
+
+ struct nvme_mdev_user_irq vecs[MAX_VIRTUAL_IRQS];
+ /* user interrupt coalescing settings */
+ u8 irq_coalesc_max;
+ unsigned int irq_coalesc_time_us;
+ /* device removal trigger*/
+ struct eventfd_ctx *request_trigger;
+};
+
+/*AER state */
+struct nvme_mdev_user_events {
+ /* async event request CIDs*/
+ u16 aer_cids[MAX_AER_COMMANDS];
+ unsigned int aer_cid_count;
+
+ /* events that are enabled */
+ unsigned long events_enabled[BITS_TO_LONGS(MAX_LOG_PAGES)];
+
+ /* events that are masked till next log page read*/
+ unsigned long events_masked[BITS_TO_LONGS(MAX_LOG_PAGES)];
+
+ /* events that are pending to be sent when user gives us an AER*/
+ unsigned long events_pending[BITS_TO_LONGS(MAX_LOG_PAGES)];
+ u32 event_values[MAX_LOG_PAGES];
+};
+
+/* host IO queue */
+struct nvme_mdev_hq {
+ unsigned int usecount;
+ struct list_head link;
+ unsigned int hqid;
+};
+
+/* IO region abstraction (BARs, the PCI config space) */
+struct nvme_mdev_vctrl;
+typedef int (*region_access_fn) (struct nvme_mdev_vctrl *vctrl,
+ u16 offset, char *buf,
+ u32 size, bool is_write);
+
+struct nvme_mdev_io_region {
+ unsigned int size;
+ region_access_fn rw;
+
+ /* If mmap_ops != NULL, mmap_area_start/size specify the mmapped window
+ * of this region
+ */
+ const struct vm_operations_struct *mmap_ops;
+ unsigned int mmap_area_start;
+ unsigned int mmap_area_size;
+};
+
+/*Virtual NVME controller state */
+struct nvme_mdev_vctrl {
+ struct kref ref;
+ struct mutex lock;
+ struct list_head link;
+
+ struct mdev_device *mdev;
+ struct nvme_mdev_hctrl *hctrl;
+ bool inuse;
+
+ struct nvme_mdev_io_region regions[VFIO_PCI_NUM_REGIONS];
+
+ /* virtual controller state */
+ struct nvme_mdev_user_ctrl_mmio mmio;
+ struct nvme_mdev_pci_cfg_space pcicfg;
+ struct nvme_mdev_user_irqs irqs;
+ struct nvme_mdev_user_events events;
+
+ /* emulated namespaces */
+ struct nvme_mdev_vns *namespaces[MAX_VIRTUAL_NAMESPACES];
+ __le32 ns_log[MAX_VIRTUAL_NAMESPACES];
+ unsigned int ns_log_size;
+
+ /* emulated submission queues*/
+ struct nvme_vsq vsqs[MAX_VIRTUAL_QUEUES];
+ unsigned long vsq_en[BITS_TO_LONGS(MAX_VIRTUAL_QUEUES)];
+
+ /* emulated completion queues*/
+ unsigned long vcq_en[BITS_TO_LONGS(MAX_VIRTUAL_QUEUES)];
+ struct nvme_vcq vcqs[MAX_VIRTUAL_QUEUES];
+
+ /* Host IO queues*/
+ int max_host_hw_queues;
+ struct list_head host_hw_queues;
+
+ /* Interface to access user memory */
+ struct notifier_block vfio_map_notifier;
+ struct notifier_block vfio_unmap_notifier;
+ struct nvme_mdev_viommu viommu;
+
+ /* the IO thread */
+ struct task_struct *iothread;
+ bool iothread_parked;
+ bool io_idle;
+ ktime_t now;
+
+ /* Settings */
+ unsigned int arb_burst_shift;
+ u8 worload_hint;
+ unsigned int iothread_cpu;
+
+ /* Identification*/
+ char subnqn[256];
+ char serial[9];
+
+ bool vctrl_paused; /* true when the host device paused our IO */
+};
+
+/* mdev instance type*/
+struct nvme_mdev_inst_type {
+ unsigned int max_hw_queues;
+ char name[16];
+ struct attribute_group *attrgroup;
+};
+
+/*Abstraction of the host controller that we are connected to */
+struct nvme_mdev_hctrl {
+ struct mutex lock;
+
+ /* numa node of the host controller*/
+ int node;
+
+ struct list_head link;
+ struct kref ref;
+ bool removing;
+
+ /* for reference counting */
+ struct nvme_ctrl *nvme_ctrl;
+
+ /* Host area*/
+ u16 oncs;
+ u8 mdts;
+ unsigned int id;
+
+ /* book-keeping for number of host queues we can allocate*/
+ unsigned int nr_host_queues;
+};
+
+/* vctrl.c*/
+struct nvme_mdev_vctrl *nvme_mdev_vctrl_create(struct mdev_device *mdev,
+ struct nvme_mdev_hctrl *hctrl,
+ unsigned int max_host_queues);
+
+int nvme_mdev_vctrl_destroy(struct nvme_mdev_vctrl *vctrl);
+
+int nvme_mdev_vctrl_open(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_vctrl_release(struct nvme_mdev_vctrl *vctrl);
+
+void nvme_mdev_vctrl_pause(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_vctrl_resume(struct nvme_mdev_vctrl *vctrl);
+
+bool nvme_mdev_vctrl_enable(struct nvme_mdev_vctrl *vctrl,
+ dma_addr_t cqiova, dma_addr_t sqiova, u32 sizes);
+
+void nvme_mdev_vctrl_disable(struct nvme_mdev_vctrl *vctrl);
+
+void nvme_mdev_vctrl_reset(struct nvme_mdev_vctrl *vctrl);
+void __nvme_mdev_vctrl_reset(struct nvme_mdev_vctrl *vctrl, bool pci_reset);
+
+void nvme_mdev_vctrl_add_region(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index, unsigned int size,
+ region_access_fn access_fn);
+
+void nvme_mdev_vctrl_region_set_mmap(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index,
+ unsigned int offset,
+ unsigned int size,
+ const struct vm_operations_struct *ops);
+
+void nvme_mdev_vctrl_region_disable_mmap(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index);
+
+void nvme_mdev_vctrl_bind_iothread(struct nvme_mdev_vctrl *vctrl,
+ unsigned int cpu);
+
+int nvme_mdev_vctrl_set_shadow_doorbell_supported(struct nvme_mdev_vctrl *vctrl,
+ bool enable);
+
+int nvme_mdev_vctrl_hq_alloc(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_vctrl_hq_free(struct nvme_mdev_vctrl *vctrl, u16 qid);
+unsigned int nvme_mdev_vctrl_hqs_list(struct nvme_mdev_vctrl *vctrl, u16 *out);
+bool nvme_mdev_vctrl_is_dead(struct nvme_mdev_vctrl *vctrl);
+
+int nvme_mdev_vctrl_viommu_map(struct nvme_mdev_vctrl *vctrl, u32 flags,
+ dma_addr_t iova, u64 size);
+
+int nvme_mdev_vctrl_viommu_unmap(struct nvme_mdev_vctrl *vctrl,
+ dma_addr_t iova, u64 size);
+
+/* hctrl.c*/
+struct nvme_mdev_inst_type *nvme_mdev_inst_type_get(const char *name);
+struct nvme_mdev_hctrl *nvme_mdev_hctrl_lookup_get(struct device *parent);
+void nvme_mdev_hctrl_put(struct nvme_mdev_hctrl *hctrl);
+
+int nvme_mdev_hctrl_hqs_available(struct nvme_mdev_hctrl *hctrl);
+
+bool nvme_mdev_hctrl_hqs_reserve(struct nvme_mdev_hctrl *hctrl,
+ unsigned int n);
+void nvme_mdev_hctrl_hqs_unreserve(struct nvme_mdev_hctrl *hctrl,
+ unsigned int n);
+
+int nvme_mdev_hctrl_hq_alloc(struct nvme_mdev_hctrl *hctrl);
+void nvme_mdev_hctrl_hq_free(struct nvme_mdev_hctrl *hctrl, u16 qid);
+bool nvme_mdev_hctrl_hq_can_submit(struct nvme_mdev_hctrl *hctrl, u16 qid);
+bool nvme_mdev_hctrl_hq_check_op(struct nvme_mdev_hctrl *hctrl, u8 optcode);
+
+int nvme_mdev_hctrl_hq_submit(struct nvme_mdev_hctrl *hctrl,
+ u16 qid, u32 tag,
+ struct nvme_command *cmd,
+ struct nvme_ext_data_iter *datait);
+
+int nvme_mdev_hctrl_hq_poll(struct nvme_mdev_hctrl *hctrl,
+ u32 qid,
+ struct nvme_ext_cmd_result *results,
+ unsigned int max_len);
+
+void nvme_mdev_hctrl_destroy_all(void);
+
+/* io.c */
+int nvme_mdev_io_create(struct nvme_mdev_vctrl *vctrl, unsigned int cpu);
+void nvme_mdev_io_free(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_io_pause(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_io_resume(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_assert_io_not_running(struct nvme_mdev_vctrl *vctrl);
+
+/* mmio.c*/
+int nvme_mdev_mmio_create(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_mmio_open(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_mmio_reset(struct nvme_mdev_vctrl *vctrl, bool pci_reset);
+void nvme_mdev_mmio_free(struct nvme_mdev_vctrl *vctrl);
+
+int nvme_mdev_mmio_enable_dbs(struct nvme_mdev_vctrl *vctrl);
+int nvme_mdev_mmio_enable_dbs_shadow(struct nvme_mdev_vctrl *vctrl,
+ dma_addr_t sdb_iova, dma_addr_t eidx_iova);
+
+void nvme_mdev_mmio_viommu_update(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_mmio_disable_dbs(struct nvme_mdev_vctrl *vctrl);
+bool nvme_mdev_mmio_db_check(struct nvme_mdev_vctrl *vctrl,
+ u16 qid, u16 size, u16 db);
+
+/* pci.c*/
+int nvme_mdev_pci_create(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_pci_free(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_pci_setup_bar(struct nvme_mdev_vctrl *vctrl,
+ u8 bar, unsigned int size,
+ region_access_fn access_fn);
+/* adm.c*/
+void nvme_mdev_adm_process_sq(struct nvme_mdev_vctrl *vctrl);
+
+/* events.c */
+void nvme_mdev_events_init(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_events_reset(struct nvme_mdev_vctrl *vctrl);
+
+int nvme_mdev_event_request_receive(struct nvme_mdev_vctrl *vctrl, u16 cid);
+void nvme_mdev_event_process_ack(struct nvme_mdev_vctrl *vctrl, u8 log_page);
+
+void nvme_mdev_event_send(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_async_event_type type,
+ enum nvme_async_event info);
+
+u32 nvme_mdev_event_read_aen_config(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_event_set_aen_config(struct nvme_mdev_vctrl *vctrl, u32 value);
+
+/* irq.c*/
+void nvme_mdev_irqs_setup(struct nvme_mdev_vctrl *vctrl);
+void nvme_mdev_irqs_reset(struct nvme_mdev_vctrl *vctrl);
+
+int nvme_mdev_irqs_enable(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_mdev_irq_mode mode);
+void nvme_mdev_irqs_disable(struct nvme_mdev_vctrl *vctrl,
+ enum nvme_mdev_irq_mode mode);
+
+int nvme_mdev_irqs_set_triggers(struct nvme_mdev_vctrl *vctrl,
+ int start, int count, int32_t *fds);
+int nvme_mdev_irqs_set_unplug_trigger(struct nvme_mdev_vctrl *vctrl,
+ int32_t fd);
+
+void nvme_mdev_irq_raise_unplug_event(struct nvme_mdev_vctrl *vctrl,
+ unsigned int count);
+void nvme_mdev_irq_raise(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index);
+void nvme_mdev_irq_trigger(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index);
+void nvme_mdev_irq_cond_trigger(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index);
+void nvme_mdev_irq_clear(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index);
+
+/* ns.c*/
+int nvme_mdev_vns_open(struct nvme_mdev_vctrl *vctrl,
+ u32 host_nsid, unsigned int host_partid);
+int nvme_mdev_vns_destroy(struct nvme_mdev_vctrl *vctrl,
+ u32 user_nsid);
+void nvme_mdev_vns_destroy_all(struct nvme_mdev_vctrl *vctrl);
+
+struct nvme_mdev_vns *nvme_mdev_vns_from_vnsid(struct nvme_mdev_vctrl *vctrl,
+ u32 user_ns_id);
+
+int nvme_mdev_vns_print_description(struct nvme_mdev_vctrl *vctrl,
+ char *buf, unsigned int size);
+void nvme_mdev_vns_host_ns_update(struct nvme_mdev_vctrl *vctrl,
+ u32 host_nsid, bool removed);
+
+void nvme_mdev_vns_log_reset(struct nvme_mdev_vctrl *vctrl);
+
+/* vcq.c */
+int nvme_mdev_vcq_init(struct nvme_mdev_vctrl *vctrl, u16 qid,
+ dma_addr_t iova, bool cont, u16 size, int irq);
+
+int nvme_mdev_vcq_viommu_update(struct nvme_mdev_viommu *viommu,
+ struct nvme_vcq *q);
+
+void nvme_mdev_vcq_delete(struct nvme_mdev_vctrl *vctrl, u16 qid);
+void nvme_mdev_vcq_process(struct nvme_mdev_vctrl *vctrl, u16 qid,
+ bool trigger_irqs);
+
+bool nvme_mdev_vcq_flush(struct nvme_mdev_vctrl *vctrl, u16 qid);
+bool nvme_mdev_vcq_reserve_space(struct nvme_vcq *q);
+
+void nvme_mdev_vcq_write_io(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vcq *q, u16 sq_head,
+ u16 sqid, u16 cid, u16 status);
+
+void nvme_mdev_vcq_write_adm(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vcq *q, u32 dw0,
+ u16 sq_head, u16 cid, u16 status);
+/* vsq.c*/
+int nvme_mdev_vsq_init(struct nvme_mdev_vctrl *vctrl, u16 qid,
+ dma_addr_t iova, bool cont, u16 size, u16 cqid);
+
+int nvme_mdev_vsq_viommu_update(struct nvme_mdev_viommu *viommu,
+ struct nvme_vsq *q);
+
+void nvme_mdev_vsq_delete(struct nvme_mdev_vctrl *vctrl, u16 qid);
+
+bool nvme_mdev_vsq_has_data(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vsq *q);
+
+const struct nvme_command *nvme_mdev_vsq_get_cmd(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vsq *q);
+
+void nvme_mdev_vsq_cmd_done_io(struct nvme_mdev_vctrl *vctrl,
+ u16 sqid, u16 cid, u16 status);
+void nvme_mdev_vsq_cmd_done_adm(struct nvme_mdev_vctrl *vctrl,
+ u32 dw0, u16 cid, u16 status);
+bool nvme_mdev_vsq_suspend_io(struct nvme_mdev_vctrl *vctrl, u16 sqid);
+
+/* udata.c*/
+void nvme_mdev_udata_iter_setup(struct nvme_mdev_viommu *viommu,
+ struct nvme_ext_data_iter *iter);
+
+int nvme_mdev_udata_iter_set_dptr(struct nvme_ext_data_iter *it,
+ const union nvme_data_ptr *dptr, u64 size);
+
+struct nvme_ext_data_iter *
+nvme_mdev_kdata_iter_alloc(struct nvme_mdev_viommu *viommu, unsigned int size);
+
+int nvme_mdev_read_from_udata(void *dst, struct nvme_ext_data_iter *srcit,
+ u64 size);
+
+int nvme_mdev_write_to_udata(struct nvme_ext_data_iter *dstit, void *src,
+ u64 size);
+
+void *nvme_mdev_udata_queue_vmap(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova,
+ unsigned int size, bool cont);
+/* viommu.c */
+void nvme_mdev_viommu_init(struct nvme_mdev_viommu *viommu,
+ struct device *sw_dev,
+ struct device *hw_dev);
+
+int nvme_mdev_viommu_add(struct nvme_mdev_viommu *viommu, u32 flags,
+ dma_addr_t iova, u64 size);
+
+int nvme_mdev_viommu_remove(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova, u64 size);
+
+int nvme_mdev_viommu_translate(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova,
+ dma_addr_t *physical,
+ dma_addr_t *host_iova);
+
+int nvme_mdev_viommu_create_kmap(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova, struct page_map *page);
+
+void nvme_mdev_viommu_free_kmap(struct nvme_mdev_viommu *viommu,
+ struct page_map *page);
+
+void nvme_mdev_viommu_update_kmap(struct nvme_mdev_viommu *viommu,
+ struct page_map *page);
+
+void nvme_mdev_viommu_reset(struct nvme_mdev_viommu *viommu);
+
+/* some utilities*/
+
+#define store_le64(address, value) (*((__le64 *)(address)) = cpu_to_le64(value))
+#define store_le32(address, value) (*((__le32 *)(address)) = cpu_to_le32(value))
+#define store_le16(address, value) (*((__le16 *)(address)) = cpu_to_le16(value))
+#define store_le8(address, value) (*((u8 *)(address)) = (value))
+
+#define load_le16(address) le16_to_cpu(*(__le16 *)(address))
+#define load_le32(address) le32_to_cpu(*(__le32 *)(address))
+
+#define store_strsp(dst, src) \
+ memcpy_and_pad(dst, sizeof(dst), src, sizeof(src) - 1, ' ')
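The `store_le*`/`load_le*` macros above write fixed little-endian values into the emulated config space regardless of host endianness, leaning on the kernel's `cpu_to_le*` helpers and on unaligned access being cheap. A portable userspace equivalent built from byte operations behaves the same way:

```c
#include <assert.h>
#include <stdint.h>

/* Endian-independent little-endian accessors over a byte buffer. */
static void store_le16(uint8_t *p, uint16_t v)
{
	p[0] = v & 0xff;
	p[1] = v >> 8;
}

static void store_le32(uint8_t *p, uint32_t v)
{
	store_le16(p, v & 0xffff);
	store_le16(p + 2, v >> 16);
}

static uint16_t load_le16(const uint8_t *p)
{
	return p[0] | ((uint16_t)p[1] << 8);
}

static uint32_t load_le32(const uint8_t *p)
{
	return load_le16(p) | ((uint32_t)load_le16(p + 2) << 16);
}
```

On a big-endian host the macro versions still emit little-endian bytes, which is what a PCI config space image requires.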
+
+#define DNR(e) ((e) | NVME_SC_DNR)
+
+#define PAGE_ADDRESS(address) ((address) & PAGE_MASK)
+#define OFFSET_IN_PAGE(address) ((address) & ~(PAGE_MASK))
+
+#define _DBG(vctrl, fmt, ...) \
+ dev_dbg(mdev_dev((vctrl)->mdev), fmt, ##__VA_ARGS__)
+
+#define _INFO(vctrl, fmt, ...) \
+ dev_info(mdev_dev((vctrl)->mdev), fmt, ##__VA_ARGS__)
+
+#define _WARN(vctrl, fmt, ...) \
+ dev_warn(mdev_dev((vctrl)->mdev), fmt, ##__VA_ARGS__)
+
+#define mdev_to_vctrl(mdev) \
+ ((struct nvme_mdev_vctrl *)mdev_get_drvdata(mdev))
+
+#define dev_to_vctrl(mdev) \
+ mdev_to_vctrl(mdev_from_dev(dev))
+
+#define RSRV_NSID (BIT(1))
+#define RSRV_DW23 (BIT(2) | BIT(3))
+#define RSRV_MPTR (BIT(4) | BIT(5))
+
+#define RSRV_DPTR (BIT(6) | BIT(7) | BIT(8) | BIT(9))
+#define RSRV_DPTR_PRP2 (BIT(8) | BIT(9))
+
+#define RSRV_DW10_15 (BIT(10) | BIT(11) | BIT(12) | BIT(13) | BIT(14) | BIT(15))
+#define RSRV_DW11_15 (BIT(11) | BIT(12) | BIT(13) | BIT(14) | BIT(15))
+#define RSRV_DW12_15 (BIT(12) | BIT(13) | BIT(14) | BIT(15))
+#define RSRV_DW13_15 (BIT(13) | BIT(14) | BIT(15))
+#define RSRV_DW14_15 (BIT(14) | BIT(15))
+
+static inline bool check_reserved_dwords(const u32 *dwords,
+ int count, unsigned long bitmask)
+{
+ int bit;
+
+ if (WARN_ON(count > BITS_PER_TYPE(long)))
+ return false;
+
+ for_each_set_bit(bit, &bitmask, count)
+ if (dwords[bit])
+ return false;
+ return true;
+}
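`check_reserved_dwords` pairs with the `RSRV_*` masks above: each set bit selects a command dword that must be zero, and a command with a nonzero reserved dword is rejected. A userspace sketch of the same check, with a plain loop standing in for the kernel's `for_each_set_bit`:

```c
#include <assert.h>
#include <stdint.h>

#define BIT(n) (1UL << (n))
#define RSRV_DW14_15 (BIT(14) | BIT(15))

/* Return 1 iff every command dword selected by 'mask' is zero. */
static int reserved_dwords_clear(const uint32_t *dwords, int count,
				 unsigned long mask)
{
	for (int bit = 0; bit < count; bit++)
		if ((mask & BIT(bit)) && dwords[bit])
			return 0;
	return 1;
}
```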
+
+static inline bool check_range(u64 start, u64 size, u64 end)
+{
+ u64 test = start + size;
+
+ /* check for overflow */
+ if (test < start || test < size)
+ return false;
+ return test <= end;
+}
+
+/* Rough translation of internal errors to the NVME errors */
+static inline int nvme_mdev_translate_error(int error)
+{
+ /* nvme status, including no error (NVME_SC_SUCCESS) */
+ if (error >= 0)
+ return error;
+
+ switch (error) {
+ case -ENOMEM:
+ /* no memory - truly an internal error */
+ return NVME_SC_INTERNAL;
+ case -ENOSPC:
+ /* Happens when the user sends too large a PRP list.
+ * The user shouldn't do this since the maximum transfer size
+ * is specified in the controller caps
+ */
+ return DNR(NVME_SC_DATA_XFER_ERROR);
+ case -EFAULT:
+ /* Bad memory pointers in the prp lists */
+ return DNR(NVME_SC_DATA_XFER_ERROR);
+ case -EINVAL:
+ /* Bad prp offsets in the prp lists/command */
+ return DNR(NVME_SC_PRP_OFFSET_INVALID);
+ default:
+ /* Shouldn't happen */
+ WARN_ON_ONCE(true);
+ return NVME_SC_INTERNAL;
+ }
+}
+
+static inline bool timeout(ktime_t event, ktime_t now, unsigned long timeout_ms)
+{
+ return ktime_ms_delta(now, event) > (long)timeout_ms;
+}
+
+extern struct mdev_parent_ops mdev_fops;
+extern struct list_head nvme_mdev_vctrl_list;
+extern struct mutex nvme_mdev_vctrl_list_mutex;
+
+#endif /* _MDEV_NVME_PRIV_H */
diff --git a/drivers/nvme/mdev/udata.c b/drivers/nvme/mdev/udata.c
new file mode 100644
index 000000000000..7af6b3f6d6aa
--- /dev/null
+++ b/drivers/nvme/mdev/udata.c
@@ -0,0 +1,390 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * User (guest) data access routines
+ * Implementation of PRP iterator in user memory
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/highmem.h>
+#include <linux/slab.h>
+#include <linux/mdev.h>
+#include <linux/nvme.h>
+#include "priv.h"
+
+#define MAX_PRP ((PAGE_SIZE / sizeof(__le64)) - 1)
+
+/* Setup up a new PRP iterator */
+void nvme_mdev_udata_iter_setup(struct nvme_mdev_viommu *viommu,
+ struct nvme_ext_data_iter *iter)
+{
+ iter->viommu = viommu;
+ iter->count = 0;
+ iter->next = NULL;
+ iter->release = NULL;
+}
+
+/* Load a new prp list into the iterator. Internal*/
+static int nvme_mdev_udata_iter_load_prplist(struct nvme_ext_data_iter *iter,
+ dma_addr_t iova)
+{
+ dma_addr_t data_iova;
+ int ret;
+ __le64 *map;
+
+ /* map the prp list*/
+ ret = nvme_mdev_viommu_create_kmap(iter->viommu,
+ PAGE_ADDRESS(iova),
+ &iter->uprp.page);
+ if (ret)
+ return ret;
+
+ iter->uprp.index = OFFSET_IN_PAGE(iova) / (sizeof(__le64));
+
+ /* read its first entry and check its alignment */
+ map = iter->uprp.page.kmap;
+ data_iova = le64_to_cpu(map[iter->uprp.index]);
+
+ if (OFFSET_IN_PAGE(data_iova) != 0) {
+ nvme_mdev_viommu_free_kmap(iter->viommu, &iter->uprp.page);
+ return -EINVAL;
+ }
+
+ /* translate the entry to complete the setup*/
+ ret = nvme_mdev_viommu_translate(iter->viommu, data_iova,
+ &iter->physical, &iter->host_iova);
+ if (ret)
+ nvme_mdev_viommu_free_kmap(iter->viommu, &iter->uprp.page);
+
+ return ret;
+}
+
+/* ->next function when iterator points to prp list*/
+static int nvme_mdev_udata_iter_next_prplist(struct nvme_ext_data_iter *iter)
+{
+ dma_addr_t iova;
+ int ret;
+ __le64 *map = iter->uprp.page.kmap;
+
+ if (WARN_ON(iter->count <= 0))
+ return 0;
+
+ if (--iter->count == 0) {
+ nvme_mdev_viommu_free_kmap(iter->viommu, &iter->uprp.page);
+ return 0;
+ }
+
+ iter->uprp.index++;
+
+ if (iter->uprp.index < MAX_PRP || iter->count == 1) {
+ /* advance to the next pointer in the current prp list;
+ * these pointers must be page aligned
+ */
+ iova = le64_to_cpu(map[iter->uprp.index]);
+ if (OFFSET_IN_PAGE(iova) != 0)
+ return -EINVAL;
+
+ ret = nvme_mdev_viommu_translate(iter->viommu, iova,
+ &iter->physical,
+ &iter->host_iova);
+ if (ret)
+ nvme_mdev_viommu_free_kmap(iter->viommu,
+ &iter->uprp.page);
+ return ret;
+ }
+
+ /* switch to next prp list. it must be page aligned as well*/
+ iova = le64_to_cpu(map[MAX_PRP]);
+
+ if (OFFSET_IN_PAGE(iova) != 0)
+ return -EINVAL;
+
+ nvme_mdev_viommu_free_kmap(iter->viommu, &iter->uprp.page);
+ return nvme_mdev_udata_iter_load_prplist(iter, iova);
+}
+
+/* ->next function when iterator points to user data pointer */
+static int nvme_mdev_udata_iter_next_dptr(struct nvme_ext_data_iter *iter)
+{
+ dma_addr_t iova;
+
+ if (WARN_ON(iter->count <= 0))
+ return 0;
+
+ if (--iter->count == 0)
+ return 0;
+
+ /* we will be called only once to deal with the second
+ * pointer in the data pointer
+ */
+ iova = le64_to_cpu(iter->dptr->prp2);
+
+ if (iter->count == 1) {
+ /* only need to read one more entry, meaning
+ * the 2nd entry of the dptr.
+ * It must be page aligned
+ */
+ if (OFFSET_IN_PAGE(iova) != 0)
+ return -EINVAL;
+ return nvme_mdev_viommu_translate(iter->viommu, iova,
+ &iter->physical,
+ &iter->host_iova);
+ } else {
+ /*
+ * Second dptr entry is prp pointer, and it might not
+ * be page aligned (but QWORD aligned at least)
+ */
+ if (iova & 0x7ULL)
+ return -EINVAL;
+ iter->next = nvme_mdev_udata_iter_next_prplist;
+ return nvme_mdev_udata_iter_load_prplist(iter, iova);
+ }
+}
+
+/* Set prp list iterator to point to data pointer found in NVME command */
+int nvme_mdev_udata_iter_set_dptr(struct nvme_ext_data_iter *it,
+ const union nvme_data_ptr *dptr, u64 size)
+{
+ int ret;
+ u64 prp1 = le64_to_cpu(dptr->prp1);
+ dma_addr_t iova = PAGE_ADDRESS(prp1);
+ unsigned int page_offset = OFFSET_IN_PAGE(prp1);
+
+	/* the first dptr pointer must be at least DWORD aligned */
+ if (page_offset & 0x3)
+ return -EINVAL;
+
+ it->dptr = dptr;
+ it->next = nvme_mdev_udata_iter_next_dptr;
+ it->count = DIV_ROUND_UP_ULL(size + page_offset, PAGE_SIZE);
+
+ ret = nvme_mdev_viommu_translate(it->viommu, iova,
+ &it->physical, &it->host_iova);
+ if (ret)
+ return ret;
+
+ it->physical += page_offset;
+ it->host_iova += page_offset;
+ return 0;
+}
+
+/* ->next function when iterator points to kernel memory buffer */
+static int nvme_mdev_kdata_iter_next(struct nvme_ext_data_iter *it)
+{
+ if (WARN_ON(it->count <= 0))
+ return 0;
+
+ if (--it->count == 0)
+ return 0;
+
+ it->physical = PAGE_ADDRESS(it->physical) + PAGE_SIZE;
+ it->host_iova = PAGE_ADDRESS(it->host_iova) + PAGE_SIZE;
+ return 0;
+}
+
+/* ->release function for kdata iterator to free it after use */
+static void nvme_mdev_kdata_iter_free(struct nvme_ext_data_iter *it)
+{
+ struct device *dma_dev = it->viommu->hw_dev;
+
+ if (dma_dev)
+ dma_free_coherent(dma_dev, it->kmem.size,
+ it->kmem.data, it->kmem.dma_addr);
+ else
+ kfree(it->kmem.data);
+ kfree(it);
+}
+
+/* allocate a kernel data buffer with read iterator for nvme host device */
+struct nvme_ext_data_iter *
+nvme_mdev_kdata_iter_alloc(struct nvme_mdev_viommu *viommu, unsigned int size)
+{
+ struct nvme_ext_data_iter *it;
+
+ it = kzalloc(sizeof(*it), GFP_KERNEL);
+ if (!it)
+ return NULL;
+
+ it->viommu = viommu;
+ it->kmem.size = size;
+ if (viommu->hw_dev) {
+ it->kmem.data = dma_alloc_coherent(viommu->hw_dev, size,
+ &it->kmem.dma_addr,
+ GFP_KERNEL);
+ } else {
+ it->kmem.data = kzalloc(size, GFP_KERNEL);
+ it->kmem.dma_addr = 0;
+ }
+
+ if (!it->kmem.data) {
+ kfree(it);
+ return NULL;
+ }
+
+ it->physical = virt_to_phys(it->kmem.data);
+ it->host_iova = it->kmem.dma_addr;
+
+ it->count = DIV_ROUND_UP(size + OFFSET_IN_PAGE(it->physical),
+ PAGE_SIZE);
+
+ it->next = nvme_mdev_kdata_iter_next;
+ it->release = nvme_mdev_kdata_iter_free;
+ return it;
+}
+
+/* copy data from user data iterator to a kernel buffer */
+int nvme_mdev_read_from_udata(void *dst, struct nvme_ext_data_iter *srcit,
+ u64 size)
+{
+ int ret;
+ unsigned int srcoffset, chunk_size;
+
+ while (srcit->count && size > 0) {
+ struct page *page = pfn_to_page(PHYS_PFN(srcit->physical));
+ void *src = kmap(page);
+
+ if (!src)
+ return -ENOMEM;
+
+ srcoffset = OFFSET_IN_PAGE(srcit->physical);
+ chunk_size = min(size, (u64)PAGE_SIZE - srcoffset);
+
+ memcpy(dst, src + srcoffset, chunk_size);
+ dst += chunk_size;
+ size -= chunk_size;
+ kunmap(page);
+
+ ret = srcit->next(srcit);
+ if (ret)
+ return ret;
+ }
+ WARN_ON(size > 0);
+ return 0;
+}
+
+/* copy data from kernel buffer to user data iterator */
+int nvme_mdev_write_to_udata(struct nvme_ext_data_iter *dstit, void *src,
+ u64 size)
+{
+	int ret;
+	unsigned int dstoffset, chunk_size;
+
+ while (dstit->count && size > 0) {
+ struct page *page = pfn_to_page(PHYS_PFN(dstit->physical));
+ void *dst = kmap(page);
+
+ if (!dst)
+ return -ENOMEM;
+
+ dstoffset = OFFSET_IN_PAGE(dstit->physical);
+ chunk_size = min(size, (u64)PAGE_SIZE - dstoffset);
+
+ memcpy(dst + dstoffset, src, chunk_size);
+ src += chunk_size;
+ size -= chunk_size;
+ kunmap(page);
+
+ ret = dstit->next(dstit);
+ if (ret)
+ return ret;
+ }
+ WARN_ON(size > 0);
+ return 0;
+}
+
+/* Set prp list iterator to point to prp list found in create queue command */
+static int
+nvme_mdev_udata_iter_set_queue_prplist(struct nvme_mdev_viommu *viommu,
+ struct nvme_ext_data_iter *iter,
+ dma_addr_t iova, unsigned int size)
+{
+ if (iova & ~PAGE_MASK)
+ return -EINVAL;
+
+ nvme_mdev_udata_iter_setup(viommu, iter);
+ iter->count = DIV_ROUND_UP(size, PAGE_SIZE);
+ iter->next = nvme_mdev_udata_iter_next_prplist;
+ return nvme_mdev_udata_iter_load_prplist(iter, iova);
+}
+
+/* Map an SQ/CQ queue (contiguous in guest physical memory) */
+static int nvme_mdev_queue_getpages_contiguous(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova,
+ struct page **pages,
+ unsigned int npages)
+{
+ int ret;
+ unsigned int i;
+
+ dma_addr_t host_page_iova;
+ phys_addr_t physical;
+
+ for (i = 0 ; i < npages; i++) {
+ ret = nvme_mdev_viommu_translate(viommu, iova + (PAGE_SIZE * i),
+ &physical,
+ &host_page_iova);
+ if (ret)
+ return ret;
+ pages[i] = pfn_to_page(PHYS_PFN(physical));
+ }
+ return 0;
+}
+
+/* Map an SQ/CQ queue (non-contiguous in guest physical memory) */
+static int nvme_mdev_queue_getpages_prplist(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova,
+ struct page **pages,
+ unsigned int npages)
+{
+ int ret, i = 0;
+ struct nvme_ext_data_iter uprpit;
+
+ ret = nvme_mdev_udata_iter_set_queue_prplist(viommu,
+ &uprpit, iova,
+ npages * PAGE_SIZE);
+ if (ret)
+ return ret;
+
+ while (uprpit.count && i < npages) {
+ pages[i++] = pfn_to_page(PHYS_PFN(uprpit.physical));
+ ret = uprpit.next(&uprpit);
+ if (ret)
+ return ret;
+ }
+ return 0;
+}
+
+/* map an SQ/CQ queue to host physical memory */
+void *nvme_mdev_udata_queue_vmap(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova,
+ unsigned int size,
+ bool cont)
+{
+ int ret;
+ unsigned int npages;
+ void *map;
+ struct page **pages;
+
+	/* the queue must be page aligned */
+ if (OFFSET_IN_PAGE(iova) != 0)
+ return ERR_PTR(-EINVAL);
+
+ npages = DIV_ROUND_UP(size, PAGE_SIZE);
+ pages = kcalloc(npages, sizeof(struct page *), GFP_KERNEL);
+ if (!pages)
+ return ERR_PTR(-ENOMEM);
+
+ ret = cont ?
+ nvme_mdev_queue_getpages_contiguous(viommu, iova, pages, npages)
+ : nvme_mdev_queue_getpages_prplist(viommu, iova, pages, npages);
+
+ if (ret) {
+ map = ERR_PTR(ret);
+ goto out;
+ }
+
+ map = vmap(pages, npages, VM_MAP, PAGE_KERNEL);
+ if (!map)
+ map = ERR_PTR(-ENOMEM);
+out:
+ kfree(pages);
+ return map;
+}
diff --git a/drivers/nvme/mdev/vcq.c b/drivers/nvme/mdev/vcq.c
new file mode 100644
index 000000000000..7702137eb8bc
--- /dev/null
+++ b/drivers/nvme/mdev/vcq.c
@@ -0,0 +1,209 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Virtual NVMe completion queue implementation
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include "priv.h"
+
+/* Create new virtual completion queue */
+int nvme_mdev_vcq_init(struct nvme_mdev_vctrl *vctrl, u16 qid,
+ dma_addr_t iova, bool cont, u16 size, int irq)
+{
+ struct nvme_vcq *q = &vctrl->vcqs[qid];
+ int ret;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ q->iova = iova;
+ q->cont = cont;
+ q->data = NULL;
+ q->qid = qid;
+ q->size = size;
+ q->tail = 0;
+ q->phase = true;
+ q->irq = irq;
+ q->pending = 0;
+ q->head = 0;
+
+ ret = nvme_mdev_vcq_viommu_update(&vctrl->viommu, q);
+ if (ret && (ret != -EFAULT))
+ return ret;
+
+ _DBG(vctrl, "VCQ: create qid=%d contig=%d depth=%d irq=%d\n",
+ qid, cont, size, irq);
+
+ set_bit(qid, vctrl->vcq_en);
+
+ vctrl->mmio.dbs[q->qid].cqh = 0;
+ vctrl->mmio.eidxs[q->qid].cqh = 0;
+ return 0;
+}
+
+/* Update the kernel mapping of the queue */
+int nvme_mdev_vcq_viommu_update(struct nvme_mdev_viommu *viommu,
+ struct nvme_vcq *q)
+{
+ void *data;
+
+ if (q->data)
+ vunmap((void *)q->data);
+
+ data = nvme_mdev_udata_queue_vmap(viommu, q->iova,
+ (unsigned int)q->size *
+ sizeof(struct nvme_completion),
+ q->cont);
+
+ q->data = IS_ERR(data) ? NULL : data;
+ return IS_ERR(data) ? PTR_ERR(data) : 0;
+}
+
+/* Delete a virtual completion queue */
+void nvme_mdev_vcq_delete(struct nvme_mdev_vctrl *vctrl, u16 qid)
+{
+ struct nvme_vcq *q = &vctrl->vcqs[qid];
+
+ lockdep_assert_held(&vctrl->lock);
+
+ if (q->data)
+ vunmap((void *)q->data);
+ q->data = NULL;
+
+ _DBG(vctrl, "VCQ: delete qid=%d\n", q->qid);
+ clear_bit(qid, vctrl->vcq_en);
+}
+
+/* Move queue tail one item forward */
+static void nvme_mdev_vcq_advance_tail(struct nvme_vcq *q)
+{
+ if (++q->tail == q->size) {
+ q->tail = 0;
+ q->phase = !q->phase;
+ }
+}
+
+/* Move queue head one item forward */
+static void nvme_mdev_vcq_advance_head(struct nvme_vcq *q)
+{
+ q->head++;
+ if (q->head == q->size)
+ q->head = 0;
+}
+
+/* Process a virtual completion queue */
+void nvme_mdev_vcq_process(struct nvme_mdev_vctrl *vctrl, u16 qid,
+ bool trigger_irqs)
+{
+ struct nvme_vcq *q = &vctrl->vcqs[qid];
+ u16 new_head;
+ u32 eidx;
+
+ if (!vctrl->mmio.dbs || !vctrl->mmio.eidxs)
+ return;
+
+ new_head = le32_to_cpu(vctrl->mmio.dbs[qid].cqh);
+
+ if (new_head != q->head) {
+		/* bad head - can't process */
+ if (!nvme_mdev_mmio_db_check(vctrl, q->qid, q->size, new_head))
+ return;
+
+ while (q->head != new_head) {
+ nvme_mdev_vcq_advance_head(q);
+ WARN_ON_ONCE(q->pending == 0);
+ if (q->pending > 0)
+ q->pending--;
+ }
+
+ eidx = q->head + (q->size >> 1);
+ if (eidx >= q->size)
+ eidx -= q->size;
+ vctrl->mmio.eidxs[q->qid].cqh = cpu_to_le32(eidx);
+ }
+
+ if (q->irq != -1 && trigger_irqs) {
+ if (q->tail != new_head)
+ nvme_mdev_irq_cond_trigger(vctrl, q->irq);
+ else
+ nvme_mdev_irq_clear(vctrl, q->irq);
+ }
+}
+
+/* flush interrupts on a completion queue */
+bool nvme_mdev_vcq_flush(struct nvme_mdev_vctrl *vctrl, u16 qid)
+{
+ struct nvme_vcq *q = &vctrl->vcqs[qid];
+ u16 new_head = le32_to_cpu(vctrl->mmio.dbs[qid].cqh);
+
+ if (new_head == q->tail || q->irq == -1)
+ return false;
+
+ nvme_mdev_irq_trigger(vctrl, q->irq);
+ nvme_mdev_irq_clear(vctrl, q->irq);
+ return true;
+}
+
+/* Reserve space for one completion entry, that will be added later */
+bool nvme_mdev_vcq_reserve_space(struct nvme_vcq *q)
+{
+	/* TODOLATER: track passed-through commands.
+	 * If we pass a command through to the host and never receive a
+	 * response, we will keep the space reserved for it in the CQ
+	 * forever, eventually stalling the queue.
+	 * In this case the guest is still expected to recover by
+	 * resetting our controller.
+	 * This can be fixed by tracking all the commands that we send
+	 * to the host.
+	 */
+
+ if (q->pending == q->size - 1)
+ return false;
+ q->pending++;
+ return true;
+}
+
+/* Write a new item into the completion queue (IO version) */
+void nvme_mdev_vcq_write_io(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vcq *q, u16 sq_head,
+ u16 sqid, u16 cid, u16 status)
+{
+	volatile __le64 *qw = (__le64 *)(&q->data[q->tail]);
+
+ u64 phase = q->phase ? (0x1ULL << 48) : 0;
+ u64 qw1 =
+ ((u64)sq_head) |
+ ((u64)sqid << 16) |
+ ((u64)cid << 32) |
+ ((u64)status << 49) | phase;
+
+ WRITE_ONCE(qw[1], cpu_to_le64(qw1));
+
+ nvme_mdev_vcq_advance_tail(q);
+ if (q->irq != -1)
+ nvme_mdev_irq_raise(vctrl, q->irq);
+}
+
+/* Write a new item into the completion queue (ADMIN version) */
+void nvme_mdev_vcq_write_adm(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vcq *q, u32 dw0,
+ u16 sq_head, u16 cid, u16 status)
+{
+	volatile __le64 *qw = (__le64 *)(&q->data[q->tail]);
+
+ u64 phase = q->phase ? (0x1ULL << 48) : 0;
+ u64 qw1 =
+ ((u64)sq_head) |
+ ((u64)cid << 32) |
+ ((u64)status << 49) | phase;
+
+ WRITE_ONCE(qw[0], cpu_to_le64(dw0));
+	/* ensure the guest sees the DW0 write before the phase bit flip */
+ wmb();
+ WRITE_ONCE(qw[1], cpu_to_le64(qw1));
+
+ nvme_mdev_vcq_advance_tail(q);
+ if (q->irq != -1)
+ nvme_mdev_irq_trigger(vctrl, q->irq);
+}
diff --git a/drivers/nvme/mdev/vctrl.c b/drivers/nvme/mdev/vctrl.c
new file mode 100644
index 000000000000..6f087b8fb2fc
--- /dev/null
+++ b/drivers/nvme/mdev/vctrl.c
@@ -0,0 +1,515 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Virtual NVMe controller implementation
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/kernel.h>
+#include <linux/device.h>
+#include <linux/slab.h>
+#include <linux/mdev.h>
+#include <linux/nvme.h>
+#include "priv.h"
+
+bool nvme_mdev_vctrl_is_dead(struct nvme_mdev_vctrl *vctrl)
+{
+ return (vctrl->mmio.csts & (NVME_CSTS_CFS | NVME_CSTS_SHST_MASK)) != 0;
+}
+
+/* Setup the controller guid and serial */
+static void nvme_mdev_vctrl_init_id(struct nvme_mdev_vctrl *vctrl)
+{
+ guid_t guid = mdev_uuid(vctrl->mdev);
+
+ snprintf(vctrl->subnqn, sizeof(vctrl->subnqn),
+ "nqn.2014-08.org.nvmexpress:uuid:%pUl", guid.b);
+
+ snprintf(vctrl->serial, sizeof(vctrl->serial), "%pUl", guid.b);
+}
+
+/* Change the IO thread CPU pinning */
+void nvme_mdev_vctrl_bind_iothread(struct nvme_mdev_vctrl *vctrl,
+ unsigned int cpu)
+{
+ mutex_lock(&vctrl->lock);
+
+ if (cpu == vctrl->iothread_cpu)
+ goto out;
+
+ nvme_mdev_io_free(vctrl);
+ nvme_mdev_io_create(vctrl, cpu);
+out:
+ mutex_unlock(&vctrl->lock);
+}
+
+/* Change the status of support for shadow doorbell */
+int nvme_mdev_vctrl_set_shadow_doorbell_supported(struct nvme_mdev_vctrl *vctrl,
+ bool enable)
+{
+ if (vctrl->inuse)
+ return -EBUSY;
+ vctrl->mmio.shadow_db_supported = enable;
+ return 0;
+}
+
+/* Called when memory mappings are changed. Propagate this to all kmap users */
+static void nvme_mdev_vctrl_viommu_update(struct nvme_mdev_vctrl *vctrl)
+{
+ u16 qid;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ if (!(vctrl->mmio.csts & NVME_CSTS_RDY))
+ return;
+
+ /* update mappings for submission and completion queues */
+ for_each_set_bit(qid, vctrl->vsq_en, MAX_VIRTUAL_QUEUES)
+ nvme_mdev_vsq_viommu_update(&vctrl->viommu, &vctrl->vsqs[qid]);
+
+ for_each_set_bit(qid, vctrl->vcq_en, MAX_VIRTUAL_QUEUES)
+ nvme_mdev_vcq_viommu_update(&vctrl->viommu, &vctrl->vcqs[qid]);
+
+ /* update mapping for the shadow doorbells */
+ nvme_mdev_mmio_viommu_update(vctrl);
+}
+
+/* Create a new virtual controller */
+struct nvme_mdev_vctrl *nvme_mdev_vctrl_create(struct mdev_device *mdev,
+ struct nvme_mdev_hctrl *hctrl,
+ unsigned int max_host_queues)
+{
+ int ret;
+ struct nvme_mdev_vctrl *vctrl = kzalloc_node(sizeof(*vctrl),
+ GFP_KERNEL, hctrl->node);
+ if (!vctrl)
+ return ERR_PTR(-ENOMEM);
+
+ /* Basic init */
+ vctrl->hctrl = hctrl;
+ vctrl->mdev = mdev;
+ vctrl->max_host_hw_queues = max_host_queues;
+ vctrl->viommu.vctrl = vctrl;
+
+ kref_init(&vctrl->ref);
+ mutex_init(&vctrl->lock);
+ nvme_mdev_vctrl_init_id(vctrl);
+ INIT_LIST_HEAD(&vctrl->host_hw_queues);
+
+ get_device(mdev_dev(mdev));
+ mdev_set_drvdata(mdev, vctrl);
+
+ /* reserve host IO queues */
+ if (!nvme_mdev_hctrl_hqs_reserve(hctrl, max_host_queues)) {
+ ret = -ENOSPC;
+ goto error1;
+ }
+
+	/* default feature values */
+ vctrl->arb_burst_shift = 3;
+ vctrl->mmio.shadow_db_supported = use_shadow_doorbell;
+
+ ret = nvme_mdev_pci_create(vctrl);
+ if (ret)
+ goto error2;
+
+ ret = nvme_mdev_mmio_create(vctrl);
+ if (ret)
+ goto error3;
+
+ nvme_mdev_irqs_setup(vctrl);
+
+ /* Create the IO thread */
+	/* TODOLATER: IO: smp_processor_id() is not an ideal pinning choice */
+ ret = nvme_mdev_io_create(vctrl, smp_processor_id());
+ if (ret)
+ goto error4;
+
+ _INFO(vctrl, "device created using %d host queues\n", max_host_queues);
+ return vctrl;
+error4:
+ nvme_mdev_mmio_free(vctrl);
+error3:
+ nvme_mdev_pci_free(vctrl);
+error2:
+ nvme_mdev_hctrl_hqs_unreserve(hctrl, max_host_queues);
+error1:
+ put_device(mdev_dev(mdev));
+ kfree(vctrl);
+ return ERR_PTR(ret);
+}
+
+/* Try to destroy a vctrl */
+int nvme_mdev_vctrl_destroy(struct nvme_mdev_vctrl *vctrl)
+{
+ mutex_lock(&vctrl->lock);
+
+ if (vctrl->inuse) {
+ /* vctrl has mdev users */
+ mutex_unlock(&vctrl->lock);
+ return -EBUSY;
+ }
+
+	_INFO(vctrl, "destroying the device\n");
+
+ mdev_set_drvdata(vctrl->mdev, NULL);
+ mutex_unlock(&vctrl->lock);
+
+ mutex_lock(&nvme_mdev_vctrl_list_mutex);
+ list_del_init(&vctrl->link);
+ mutex_unlock(&nvme_mdev_vctrl_list_mutex);
+
+ mutex_lock(&vctrl->lock); /*only for lockdep checks */
+ nvme_mdev_io_free(vctrl);
+ nvme_mdev_vns_destroy_all(vctrl);
+ __nvme_mdev_vctrl_reset(vctrl, true);
+
+ nvme_mdev_hctrl_hqs_unreserve(vctrl->hctrl, vctrl->max_host_hw_queues);
+
+ nvme_mdev_pci_free(vctrl);
+ nvme_mdev_mmio_free(vctrl);
+
+ mutex_unlock(&vctrl->lock);
+
+ put_device(mdev_dev(vctrl->mdev));
+ _INFO(vctrl, "device is destroyed\n");
+ kfree(vctrl);
+ return 0;
+}
+
+/* Suspend a running virtual controller.
+ * Called when the host needs to regain full control of the device.
+ */
+void nvme_mdev_vctrl_pause(struct nvme_mdev_vctrl *vctrl)
+{
+ mutex_lock(&vctrl->lock);
+ if (!vctrl->vctrl_paused) {
+ _INFO(vctrl, "pausing the virtual controller\n");
+ if (vctrl->mmio.csts & NVME_CSTS_RDY)
+ nvme_mdev_io_pause(vctrl);
+ vctrl->vctrl_paused = true;
+ }
+ mutex_unlock(&vctrl->lock);
+}
+
+/* Resume a virtual controller.
+ * Called when the host is done with exclusive access and allows us
+ * to attach to the controller again.
+ */
+void nvme_mdev_vctrl_resume(struct nvme_mdev_vctrl *vctrl)
+{
+ mutex_lock(&vctrl->lock);
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ if (vctrl->vctrl_paused) {
+ _INFO(vctrl, "resuming the virtual controller\n");
+
+ if (vctrl->mmio.csts & NVME_CSTS_RDY) {
+			/* handle all pending admin commands */
+ nvme_mdev_adm_process_sq(vctrl);
+ /* start the IO thread again if it was stopped or
+ * if we had doorbell writes during the pause
+ */
+ nvme_mdev_io_resume(vctrl);
+ }
+ vctrl->vctrl_paused = false;
+ }
+ mutex_unlock(&vctrl->lock);
+}
+
+/* Called when emulator opens the virtual device */
+int nvme_mdev_vctrl_open(struct nvme_mdev_vctrl *vctrl)
+{
+ struct device *dma_dev = NULL;
+ int ret = 0;
+
+ mutex_lock(&vctrl->lock);
+
+ if (vctrl->hctrl->removing) {
+ ret = -ENODEV;
+ goto out;
+ }
+
+ if (vctrl->inuse) {
+ ret = -EBUSY;
+ goto out;
+ }
+
+ _INFO(vctrl, "device is opened\n");
+
+ if (vctrl->hctrl->nvme_ctrl->ops->flags & NVME_F_MDEV_DMA_SUPPORTED)
+ dma_dev = vctrl->hctrl->nvme_ctrl->dev;
+
+ nvme_mdev_viommu_init(&vctrl->viommu, mdev_dev(vctrl->mdev), dma_dev);
+
+ nvme_mdev_mmio_open(vctrl);
+ vctrl->inuse = true;
+out:
+ mutex_unlock(&vctrl->lock);
+ return ret;
+}
+
+/* Called when emulator closes the virtual device */
+void nvme_mdev_vctrl_release(struct nvme_mdev_vctrl *vctrl)
+{
+ mutex_lock(&vctrl->lock);
+ nvme_mdev_io_pause(vctrl);
+
+	/* Remove the guest DMA mappings - a new user that opens the
+	 * device might be a different guest
+	 */
+ nvme_mdev_viommu_reset(&vctrl->viommu);
+
+ /* Reset the controller to a clean state for a new user */
+ __nvme_mdev_vctrl_reset(vctrl, false);
+ nvme_mdev_irqs_reset(vctrl);
+ vctrl->inuse = false;
+ mutex_unlock(&vctrl->lock);
+
+ WARN_ON(!list_empty(&vctrl->host_hw_queues));
+
+ _INFO(vctrl, "device is released\n");
+
+	/* If we are released after a request to remove the host
+	 * controller, we are dead and will never be opened again,
+	 * so remove ourselves
+	 */
+ if (vctrl->hctrl->removing)
+ nvme_mdev_vctrl_destroy(vctrl);
+}
+
+/* Called each time the controller is reset (CC.EN <= 0 or VM level reset) */
+void __nvme_mdev_vctrl_reset(struct nvme_mdev_vctrl *vctrl, bool pci_reset)
+{
+ lockdep_assert_held(&vctrl->lock);
+
+ if ((vctrl->mmio.csts & NVME_CSTS_RDY) &&
+ !(vctrl->mmio.csts & NVME_CSTS_SHST_MASK)) {
+ _DBG(vctrl, "unsafe reset (CSTS.RDY==1)\n");
+ nvme_mdev_io_pause(vctrl);
+ nvme_mdev_vctrl_disable(vctrl);
+ }
+ nvme_mdev_mmio_reset(vctrl, pci_reset);
+}
+
+/* Set up the initial admin queues and doorbells */
+bool nvme_mdev_vctrl_enable(struct nvme_mdev_vctrl *vctrl,
+ dma_addr_t cqiova, dma_addr_t sqiova, u32 sizes)
+{
+ int ret;
+ u16 cqentries, sqentries;
+
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ lockdep_assert_held(&vctrl->lock);
+
+ sqentries = (sizes & 0xFFFF) + 1;
+ cqentries = (sizes >> 16) + 1;
+
+ if (cqentries > 4096 || cqentries < 2)
+ return false;
+ if (sqentries > 4096 || sqentries < 2)
+ return false;
+
+ ret = nvme_mdev_mmio_enable_dbs(vctrl);
+ if (ret)
+ goto error0;
+
+ ret = nvme_mdev_vcq_init(vctrl, 0, cqiova, true, cqentries, 0);
+ if (ret)
+ goto error1;
+
+ ret = nvme_mdev_vsq_init(vctrl, 0, sqiova, true, sqentries, 0);
+ if (ret)
+ goto error2;
+
+ nvme_mdev_events_init(vctrl);
+
+ if (!vctrl->mmio.shadow_db_supported) {
+ /* start polling right away to support admin queue */
+ vctrl->io_idle = false;
+ nvme_mdev_io_resume(vctrl);
+ }
+
+ return true;
+error2:
+	nvme_mdev_vcq_delete(vctrl, 0);
+error1:
+	nvme_mdev_mmio_disable_dbs(vctrl);
+error0:
+ return false;
+}
+
+/* destroy all io/admin queues on the controller */
+void nvme_mdev_vctrl_disable(struct nvme_mdev_vctrl *vctrl)
+{
+ u16 sqid, cqid;
+
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ lockdep_assert_held(&vctrl->lock);
+
+ nvme_mdev_events_reset(vctrl);
+ nvme_mdev_vns_log_reset(vctrl);
+
+ sqid = 1;
+ for_each_set_bit_from(sqid, vctrl->vsq_en, MAX_VIRTUAL_QUEUES)
+ nvme_mdev_vsq_delete(vctrl, sqid);
+
+ cqid = 1;
+ for_each_set_bit_from(cqid, vctrl->vcq_en, MAX_VIRTUAL_QUEUES)
+ nvme_mdev_vcq_delete(vctrl, cqid);
+
+ nvme_mdev_vsq_delete(vctrl, 0);
+ nvme_mdev_vcq_delete(vctrl, 0);
+
+ nvme_mdev_mmio_disable_dbs(vctrl);
+ vctrl->io_idle = true;
+}
+
+/* External reset */
+void nvme_mdev_vctrl_reset(struct nvme_mdev_vctrl *vctrl)
+{
+ mutex_lock(&vctrl->lock);
+ _INFO(vctrl, "reset\n");
+ __nvme_mdev_vctrl_reset(vctrl, true);
+ mutex_unlock(&vctrl->lock);
+}
+
+/* Add an IO region */
+void nvme_mdev_vctrl_add_region(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index, unsigned int size,
+ region_access_fn access_fn)
+{
+ struct nvme_mdev_io_region *region = &vctrl->regions[index];
+
+ region->size = size;
+ region->rw = access_fn;
+ region->mmap_ops = NULL;
+}
+
+/* Enable mmap window on an IO region */
+void nvme_mdev_vctrl_region_set_mmap(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index,
+ unsigned int offset,
+ unsigned int size,
+ const struct vm_operations_struct *ops)
+{
+ struct nvme_mdev_io_region *region = &vctrl->regions[index];
+
+ region->mmap_area_start = offset;
+ region->mmap_area_size = size;
+ region->mmap_ops = ops;
+}
+
+/* Disable mmap window on an IO region */
+void nvme_mdev_vctrl_region_disable_mmap(struct nvme_mdev_vctrl *vctrl,
+ unsigned int index)
+{
+ struct nvme_mdev_io_region *region = &vctrl->regions[index];
+
+ region->mmap_area_start = 0;
+ region->mmap_area_size = 0;
+ region->mmap_ops = NULL;
+}
+
+/* Allocate a host IO queue */
+int nvme_mdev_vctrl_hq_alloc(struct nvme_mdev_vctrl *vctrl)
+{
+ struct nvme_mdev_hq *hq = NULL, *tmp;
+ int hwqcount = 0, ret;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ list_for_each_entry(tmp, &vctrl->host_hw_queues, link) {
+ if (!hq || tmp->usecount < hq->usecount)
+ hq = tmp;
+ hwqcount++;
+ }
+
+ if (hwqcount < vctrl->max_host_hw_queues) {
+ ret = nvme_mdev_hctrl_hq_alloc(vctrl->hctrl);
+ if (ret < 0)
+ return ret;
+
+ hq = kzalloc_node(sizeof(*hq), GFP_KERNEL, vctrl->hctrl->node);
+ if (!hq) {
+ nvme_mdev_hctrl_hq_free(vctrl->hctrl, ret);
+ return -ENOMEM;
+ }
+
+ hq->hqid = ret;
+ hq->usecount = 1;
+ list_add_tail(&hq->link, &vctrl->host_hw_queues);
+ } else {
+ hq->usecount++;
+ }
+ return hq->hqid;
+}
+
+/* Free a host IO queue */
+void nvme_mdev_vctrl_hq_free(struct nvme_mdev_vctrl *vctrl, u16 hqid)
+{
+ struct nvme_mdev_hq *hq;
+
+ lockdep_assert_held(&vctrl->lock);
+ nvme_mdev_assert_io_not_running(vctrl);
+
+ list_for_each_entry(hq, &vctrl->host_hw_queues, link)
+ if (hq->hqid == hqid) {
+ if (--hq->usecount > 0)
+ return;
+ nvme_mdev_hctrl_hq_free(vctrl->hctrl, hq->hqid);
+ list_del(&hq->link);
+ kfree(hq);
+ return;
+ }
+ WARN_ON(1);
+}
+
+/* get current list of host queues */
+unsigned int nvme_mdev_vctrl_hqs_list(struct nvme_mdev_vctrl *vctrl, u16 *out)
+{
+ struct nvme_mdev_hq *q;
+ unsigned int i = 0;
+
+ list_for_each_entry(q, &vctrl->host_hw_queues, link) {
+ out[i++] = q->hqid;
+ if (WARN_ON(i > MAX_HOST_QUEUES))
+ break;
+ }
+ return i;
+}
+
+/* add a user memory mapping */
+int nvme_mdev_vctrl_viommu_map(struct nvme_mdev_vctrl *vctrl, u32 flags,
+ dma_addr_t iova, u64 size)
+{
+ int ret;
+
+ mutex_lock(&vctrl->lock);
+
+ nvme_mdev_io_pause(vctrl);
+ ret = nvme_mdev_viommu_add(&vctrl->viommu, flags, iova, size);
+ nvme_mdev_vctrl_viommu_update(vctrl);
+ nvme_mdev_io_resume(vctrl);
+
+ mutex_unlock(&vctrl->lock);
+ return ret;
+}
+
+/* remove a user memory mapping */
+int nvme_mdev_vctrl_viommu_unmap(struct nvme_mdev_vctrl *vctrl,
+ dma_addr_t iova, u64 size)
+{
+ int ret;
+
+ mutex_lock(&vctrl->lock);
+
+ nvme_mdev_io_pause(vctrl);
+ ret = nvme_mdev_viommu_remove(&vctrl->viommu, iova, size);
+ nvme_mdev_vctrl_viommu_update(vctrl);
+ nvme_mdev_io_resume(vctrl);
+
+ mutex_unlock(&vctrl->lock);
+ return ret;
+}
diff --git a/drivers/nvme/mdev/viommu.c b/drivers/nvme/mdev/viommu.c
new file mode 100644
index 000000000000..31b86e8f5768
--- /dev/null
+++ b/drivers/nvme/mdev/viommu.c
@@ -0,0 +1,322 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Virtual IOMMU - mapping user memory to the real device
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/highmem.h>
+#include <linux/slab.h>
+#include <linux/mdev.h>
+#include <linux/vmalloc.h>
+#include <linux/nvme.h>
+#include <linux/iommu.h>
+#include <linux/interval_tree_generic.h>
+#include "priv.h"
+
+struct mem_mapping {
+ struct rb_node rb;
+ struct list_head link;
+
+ dma_addr_t __subtree_last;
+	dma_addr_t iova_start; /* first iova in this mapping */
+	dma_addr_t iova_last; /* last iova in this mapping */
+
+	unsigned long pfn; /* first pfn of this mapping */
+	dma_addr_t host_iova; /* dma mapping to the real device */
+};
+
+#define map_len(m) (((m)->iova_last - (m)->iova_start) + 1ULL)
+#define map_pages(m) (map_len(m) >> PAGE_SHIFT)
+#define START(node) ((node)->iova_start)
+#define LAST(node) ((node)->iova_last)
+
+INTERVAL_TREE_DEFINE(struct mem_mapping, rb, dma_addr_t, __subtree_last,
+ START, LAST, static inline, viommu_int_tree);
+
+static void nvme_mdev_viommu_dbg_dma_range(struct nvme_mdev_viommu *viommu,
+ struct mem_mapping *map,
+ const char *action)
+{
+ dma_addr_t iova_start = map->iova_start;
+ dma_addr_t iova_end = map->iova_start + map_len(map) - 1;
+ dma_addr_t hiova_start = map->host_iova;
+ dma_addr_t hiova_end = map->host_iova + map_len(map) - 1;
+
+ _DBG(viommu->vctrl,
+ "vIOMMU: %s RW IOVA %pad-%pad -> DMA %pad-%pad\n",
+ action, &iova_start, &iova_end, &hiova_start, &hiova_end);
+}
+
+/* unpin N pages starting at the given IOVA */
+static void nvme_mdev_viommu_unpin_pages(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova, int n)
+{
+ int i;
+
+ for (i = 0; i < n; i++) {
+ unsigned long user_pfn = (iova >> PAGE_SHIFT) + i;
+ int ret = vfio_unpin_pages(viommu->sw_dev, &user_pfn, 1);
+
+ WARN_ON(ret != 1);
+ }
+}
+
+/* User memory init code */
+void nvme_mdev_viommu_init(struct nvme_mdev_viommu *viommu,
+ struct device *sw_dev,
+ struct device *hw_dev)
+{
+ viommu->sw_dev = sw_dev;
+ viommu->hw_dev = hw_dev;
+ viommu->maps_tree = RB_ROOT_CACHED;
+ INIT_LIST_HEAD(&viommu->maps_list);
+}
+
+/* User memory teardown code */
+void nvme_mdev_viommu_reset(struct nvme_mdev_viommu *viommu)
+{
+ nvme_mdev_viommu_remove(viommu, 0, 0xFFFFFFFFFFFFFFFFULL);
+ WARN_ON(!list_empty(&viommu->maps_list));
+}
+
+/* Adds a new range of user memory */
+int nvme_mdev_viommu_add(struct nvme_mdev_viommu *viommu,
+ u32 flags,
+ dma_addr_t iova,
+ u64 size)
+{
+ u64 offset;
+ dma_addr_t iova_end = iova + size - 1;
+ struct mem_mapping *map = NULL, *tmp;
+ LIST_HEAD(new_mappings_list);
+ int ret;
+
+ if (!(flags & VFIO_DMA_MAP_FLAG_READ) ||
+ !(flags & VFIO_DMA_MAP_FLAG_WRITE)) {
+ const char *type = "none";
+
+ if (flags & VFIO_DMA_MAP_FLAG_READ)
+ type = "RO";
+ else if (flags & VFIO_DMA_MAP_FLAG_WRITE)
+ type = "WO";
+
+ _DBG(viommu->vctrl, "vIOMMU: IGN %s IOVA %pad-%pad\n",
+ type, &iova, &iova_end);
+ return 0;
+ }
+
+ WARN_ON_ONCE(nvme_mdev_viommu_remove(viommu, iova, size) != 0);
+
+ if (WARN_ON_ONCE(size & ~PAGE_MASK))
+ return -EINVAL;
+
+	/* pin all the pages via VFIO */
+ for (offset = 0; offset < size; offset += PAGE_SIZE) {
+ unsigned long vapfn = ((iova + offset) >> PAGE_SHIFT), pa_pfn;
+
+ ret = vfio_pin_pages(viommu->sw_dev,
+ &vapfn, 1,
+ VFIO_DMA_MAP_FLAG_READ |
+ VFIO_DMA_MAP_FLAG_WRITE,
+ &pa_pfn);
+
+ if (ret != 1) {
+			/* sadly the mdev API doesn't return a specific error */
+ ret = -EFAULT;
+
+ _DBG(viommu->vctrl,
+ "vIOMMU: ADD RW IOVA %pad - pin failed\n",
+ &iova);
+ goto unwind;
+ }
+
+		/* a new mapping is needed */
+ if (!map || map->pfn + map_pages(map) != pa_pfn) {
+ int node = viommu->hw_dev ?
+ dev_to_node(viommu->hw_dev) : NUMA_NO_NODE;
+
+ map = kzalloc_node(sizeof(*map), GFP_KERNEL, node);
+
+ if (WARN_ON(!map)) {
+ vfio_unpin_pages(viommu->sw_dev, &vapfn, 1);
+ ret = -ENOMEM;
+ goto unwind;
+ }
+ map->iova_start = iova + offset;
+ map->iova_last = iova + offset + PAGE_SIZE - 1ULL;
+ map->pfn = pa_pfn;
+ map->host_iova = 0;
+ list_add_tail(&map->link, &new_mappings_list);
+ } else {
+			/* the current map can be extended */
+ map->iova_last += PAGE_SIZE;
+ }
+ }
+
+	/* DMA map the pages */
+ list_for_each_entry_safe(map, tmp, &new_mappings_list, link) {
+ if (viommu->hw_dev) {
+ map->host_iova =
+ dma_map_page(viommu->hw_dev,
+ pfn_to_page(map->pfn),
+ 0,
+ map_len(map),
+ DMA_BIDIRECTIONAL);
+
+ ret = dma_mapping_error(viommu->hw_dev, map->host_iova);
+ if (ret) {
+ _DBG(viommu->vctrl,
+ "vIOMMU: ADD RW IOVA %pad-%pad - DMA map failed\n",
+ &iova, &iova_end);
+ goto unwind;
+ }
+ }
+
+ nvme_mdev_viommu_dbg_dma_range(viommu, map, "ADD");
+ list_del(&map->link);
+ list_add_tail(&map->link, &viommu->maps_list);
+ viommu_int_tree_insert(map, &viommu->maps_tree);
+ }
+ return 0;
+unwind:
+ list_for_each_entry_safe(map, tmp, &new_mappings_list, link) {
+ nvme_mdev_viommu_unpin_pages(viommu, map->iova_start,
+ map_pages(map));
+
+ list_del(&map->link);
+ kfree(map);
+ }
+ nvme_mdev_viommu_remove(viommu, iova, size);
+ return ret;
+}
+
+/* Removes a range of user memory */
+int nvme_mdev_viommu_remove(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova,
+ u64 size)
+{
+ struct mem_mapping *map = NULL, *tmp;
+	dma_addr_t last_iova = iova + size - 1ULL;
+ LIST_HEAD(remove_list);
+ int count = 0;
+
+ /* find out all the relevant ranges */
+ map = viommu_int_tree_iter_first(&viommu->maps_tree, iova, last_iova);
+ while (map) {
+ list_del(&map->link);
+ list_add_tail(&map->link, &remove_list);
+ map = viommu_int_tree_iter_next(map, iova, last_iova);
+ }
+
+ /* remove them */
+ list_for_each_entry_safe(map, tmp, &remove_list, link) {
+ count++;
+
+ nvme_mdev_viommu_dbg_dma_range(viommu, map, "DEL");
+ if (viommu->hw_dev)
+ dma_unmap_page(viommu->hw_dev, map->host_iova,
+ map_len(map), DMA_BIDIRECTIONAL);
+
+ nvme_mdev_viommu_unpin_pages(viommu, map->iova_start,
+ map_pages(map));
+
+ viommu_int_tree_remove(map, &viommu->maps_tree);
+ kfree(map);
+ }
+ return count;
+}
+
+/* Translate an IOVA to a physical address and read device bus address */
+int nvme_mdev_viommu_translate(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova,
+ dma_addr_t *physical,
+ dma_addr_t *host_iova)
+{
+ struct mem_mapping *mapping;
+ u64 offset;
+
+ if (WARN_ON_ONCE(OFFSET_IN_PAGE(iova) != 0))
+ return -EINVAL;
+
+ mapping = viommu_int_tree_iter_first(&viommu->maps_tree,
+ iova, iova + PAGE_SIZE - 1);
+ if (!mapping) {
+ _DBG(viommu->vctrl,
+ "vIOMMU: translation of IOVA %pad failed\n", &iova);
+ return -EFAULT;
+ }
+
+ WARN_ON(iova > mapping->iova_last);
+ WARN_ON(OFFSET_IN_PAGE(mapping->iova_start) != 0);
+
+ offset = iova - mapping->iova_start;
+ *physical = PFN_PHYS(mapping->pfn) + offset;
+ *host_iova = mapping->host_iova + offset;
+ return 0;
+}
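The translation itself is simple offset arithmetic once the covering mapping is found: physical = PFN base + (iova - mapping start). A minimal userspace model of that math, assuming 4K pages (the define and names are illustrative only):

```c
#include <assert.h>

#define PAGE_SHIFT 12	/* assume 4K pages for the sketch */

/* translate a guest IOVA to a host physical address, given the covering
 * mapping's base IOVA and first PFN (mirrors the offset math above) */
static unsigned long long viommu_xlat(unsigned long long iova,
				      unsigned long long iova_start,
				      unsigned long long pfn)
{
	unsigned long long offset = iova - iova_start;

	return (pfn << PAGE_SHIFT) + offset;
}
```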
+
+/* map an IOVA to kernel address space */
+int nvme_mdev_viommu_create_kmap(struct nvme_mdev_viommu *viommu,
+ dma_addr_t iova, struct page_map *page)
+{
+ dma_addr_t host_iova;
+ phys_addr_t physical;
+ struct page *new_page;
+ int ret;
+
+ page->iova = iova;
+
+ ret = nvme_mdev_viommu_translate(viommu, iova, &physical, &host_iova);
+ if (ret)
+ return ret;
+
+ new_page = pfn_to_page(PHYS_PFN(physical));
+
+ page->kmap = kmap(new_page);
+ if (!page->kmap)
+ return -ENOMEM;
+
+ page->page = new_page;
+ return 0;
+}
+
+/* update the IOVA <-> kernel mapping; on failure, remove the previous mapping */
+void nvme_mdev_viommu_update_kmap(struct nvme_mdev_viommu *viommu,
+ struct page_map *page)
+{
+ dma_addr_t host_iova;
+ phys_addr_t physical;
+ struct page *new_page;
+ int ret;
+
+ ret = nvme_mdev_viommu_translate(viommu, page->iova,
+ &physical, &host_iova);
+ if (ret) {
+ nvme_mdev_viommu_free_kmap(viommu, page);
+ return;
+ }
+
+ new_page = pfn_to_page(PHYS_PFN(physical));
+ if (new_page == page->page)
+ return;
+
+ nvme_mdev_viommu_free_kmap(viommu, page);
+
+ page->kmap = kmap(new_page);
+ if (!page->kmap)
+ return;
+ page->page = new_page;
+}
+
+/* unmap an IOVA from the kernel address space */
+void nvme_mdev_viommu_free_kmap(struct nvme_mdev_viommu *viommu,
+ struct page_map *page)
+{
+ if (page->page) {
+ kunmap(page->page);
+ page->page = NULL;
+ page->kmap = NULL;
+ }
+}
diff --git a/drivers/nvme/mdev/vns.c b/drivers/nvme/mdev/vns.c
new file mode 100644
index 000000000000..42d4f8d7423b
--- /dev/null
+++ b/drivers/nvme/mdev/vns.c
@@ -0,0 +1,356 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Virtual NVMe namespace implementation
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/nvme.h>
+#include "priv.h"
+
+/* Reset the changed namespace log */
+void nvme_mdev_vns_log_reset(struct nvme_mdev_vctrl *vctrl)
+{
+ vctrl->ns_log_size = 0;
+}
+
+/* Add an entry to the changed-NS log and send a notification to the user */
+static void nvme_mdev_vns_send_event(struct nvme_mdev_vctrl *vctrl, u32 ns)
+{
+ unsigned int i;
+ unsigned int log_size = vctrl->ns_log_size;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ _INFO(vctrl, "host namespace list rescanned\n");
+
+ if (WARN_ON(ns == 0 || ns > MAX_VIRTUAL_NAMESPACES))
+ return;
+
+ /* if the log is full, drop further entries */
+ if (log_size == MAX_VIRTUAL_NAMESPACES)
+ return;
+
+ /* check if the namespace ID is already in the log */
+ for (i = 0; i < log_size; i++)
+ if (vctrl->ns_log[i] == cpu_to_le32(ns))
+ return;
+
+ vctrl->ns_log[log_size] = cpu_to_le32(ns);
+ vctrl->ns_log_size++;
+ nvme_mdev_event_send(vctrl, NVME_AER_TYPE_NOTICE,
+ NVME_AER_NOTICE_NS_CHANGED);
+}
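The changed-namespace log behaves like a small dedicated set: duplicates are dropped, and once full no further entries are accepted. A self-contained userspace model of that policy (the struct and constant are hypothetical stand-ins for the driver's fields):

```c
#include <assert.h>

#define MAX_LOG_ENTRIES 16	/* stands in for MAX_VIRTUAL_NAMESPACES */

struct ns_log {
	unsigned int entries[MAX_LOG_ENTRIES];
	unsigned int size;
};

/* add an nsid to the changed-namespace log: duplicates are dropped and a
 * full log accepts no further entries; returns 1 if the entry was added */
static int ns_log_add(struct ns_log *log, unsigned int nsid)
{
	unsigned int i;

	if (log->size == MAX_LOG_ENTRIES)
		return 0;
	for (i = 0; i < log->size; i++)
		if (log->entries[i] == nsid)
			return 0;
	log->entries[log->size++] = nsid;
	return 1;
}
```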
+
+/* Read host NS/partition parameters to update our virtual NS */
+static void nvme_mdev_vns_read_host_properties(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_mdev_vns *vns,
+ struct nvme_ns *host_ns)
+{
+ unsigned int sector_to_lba_shift;
+ u64 host_ns_size, start, nr, align_mask;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ /* read the namespace block size */
+ vns->blksize_shift = host_ns->lba_shift;
+
+ if (WARN_ON(vns->blksize_shift < 9)) {
+ _WARN(vctrl, "NS/create: device block size is bad\n");
+ goto error;
+ }
+
+ sector_to_lba_shift = vns->blksize_shift - 9;
+ align_mask = (1ULL << sector_to_lba_shift) - 1;
+
+ /* read the partition start and size */
+ start = get_start_sect(vns->host_part);
+ nr = part_nr_sects_read(vns->host_part->bd_part);
+
+ /* check that the partition is aligned on the LBA size */
+ if (sector_to_lba_shift != 0) {
+ if ((start & align_mask) || (nr & align_mask)) {
+ _WARN(vctrl, "NS/create: partition not aligned\n");
+ goto error;
+ }
+ }
+
+ vns->host_lba_offset = start >> sector_to_lba_shift;
+ vns->ns_size = nr >> sector_to_lba_shift;
+ host_ns_size = get_capacity(host_ns->disk) >> sector_to_lba_shift;
+
+ /*TODOLATER: NS: support metadata on host namespace */
+ if (host_ns->ms) {
+ _WARN(vctrl, "NS/create: no support for namespace metadata\n");
+ goto error;
+ }
+
+ if (vns->ns_size == 0) {
+ _WARN(vctrl, "NS/create: host namespace has size 0\n");
+ goto error;
+ }
+
+ /* sanity check that partition doesn't extend beyond the namespace */
+ if (!check_range(vns->host_lba_offset, vns->ns_size, host_ns_size)) {
+ _WARN(vctrl, "NS/create: host namespace size mismatch\n");
+ goto error;
+ }
+
+ /* check if the namespace is readonly */
+ if (!vns->readonly)
+ vns->readonly = get_disk_ro(host_ns->disk);
+
+ vns->noiob = host_ns->noiob;
+ if (vns->noiob != 0) {
+ u64 tmp = vns->host_lba_offset;
+
+ if (do_div(tmp, vns->noiob)) {
+ _WARN(vctrl,
+ "NS/create: host partition is not aligned on host optimum IO boundary, performance might suffer");
+ vns->noiob = 0;
+ }
+ }
+ return;
+error:
+ vns->ns_size = 0;
+}
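The alignment check above converts the partition's 512-byte-sector geometry into namespace LBAs and rejects anything that doesn't divide evenly. A minimal sketch of that check, runnable in userspace (function name hypothetical):

```c
#include <assert.h>

/* check that a partition expressed in 512-byte sectors is aligned to the
 * namespace LBA size (blksize_shift >= 9), as the code above requires */
static int partition_aligned(unsigned long long start_sect,
			     unsigned long long nr_sects,
			     unsigned int blksize_shift)
{
	unsigned int sector_to_lba_shift = blksize_shift - 9;
	unsigned long long align_mask = (1ULL << sector_to_lba_shift) - 1;

	return !(start_sect & align_mask) && !(nr_sects & align_mask);
}
```

With 512-byte LBAs (shift 9) the mask is zero and everything passes; with 4K LBAs (shift 12) the start and length must be multiples of 8 sectors.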
+
+/* Open a new reference to a host namespace */
+int nvme_mdev_vns_open(struct nvme_mdev_vctrl *vctrl,
+ u32 host_nsid, unsigned int host_partid)
+{
+ struct nvme_mdev_vns *vns;
+ u32 user_nsid;
+ int ret;
+
+ _INFO(vctrl, "open host_namespace=%u, partition=%u\n",
+ host_nsid, host_partid);
+
+ mutex_lock(&vctrl->lock);
+ ret = -ENODEV;
+ if (nvme_mdev_vctrl_is_dead(vctrl))
+ goto out;
+
+ /* create the namespace object */
+ ret = -ENOMEM;
+ vns = kzalloc_node(sizeof(*vns), GFP_KERNEL, vctrl->hctrl->node);
+ if (!vns)
+ goto out;
+
+ uuid_gen(&vns->uuid); /* TODOLATER: NS: non-random NS UUID */
+ vns->host_nsid = host_nsid;
+ vns->host_partid = host_partid;
+
+ /* find the host namespace */
+ vns->host_ns = nvme_find_get_ns(vctrl->hctrl->nvme_ctrl, host_nsid);
+ if (!vns->host_ns) {
+ ret = -ENODEV;
+ goto error1;
+ }
+
+ if (test_bit(NVME_NS_DEAD, &vns->host_ns->flags) ||
+ test_bit(NVME_NS_REMOVING, &vns->host_ns->flags) ||
+ !vns->host_ns->disk) {
+ ret = -ENODEV;
+ goto error2;
+ }
+
+ /* get the block device for the partition that we will use */
+ vns->host_part = bdget_disk(vns->host_ns->disk, host_partid);
+ if (!vns->host_part) {
+ ret = -ENODEV;
+ goto error2;
+ }
+
+ /* get exclusive access to the block device (partition) */
+ vns->fmode = FMODE_READ | FMODE_EXCL;
+ if (!vns->readonly)
+ vns->fmode |= FMODE_WRITE;
+
+ ret = blkdev_get(vns->host_part, vns->fmode, vns);
+ if (ret)
+ goto error2;
+
+ /* read properties of the host namespace */
+ nvme_mdev_vns_read_host_properties(vctrl, vns, vns->host_ns);
+
+ /* Allocate a user namespace ID for this namespace */
+ ret = -ENOSPC;
+ for (user_nsid = 1; user_nsid <= MAX_VIRTUAL_NAMESPACES; user_nsid++)
+ if (!nvme_mdev_vns_from_vnsid(vctrl, user_nsid))
+ break;
+
+ if (user_nsid > MAX_VIRTUAL_NAMESPACES)
+ goto error3;
+
+ nvme_mdev_io_pause(vctrl);
+
+ vctrl->namespaces[user_nsid - 1] = vns;
+ vns->nsid = user_nsid;
+
+ /* Announce the new namespace to the user */
+ nvme_mdev_vns_send_event(vctrl, user_nsid);
+ nvme_mdev_io_resume(vctrl);
+ ret = 0;
+ goto out;
+error3:
+ blkdev_put(vns->host_part, vns->fmode);
+error2:
+ nvme_put_ns(vns->host_ns);
+error1:
+ kfree(vns);
+out:
+ mutex_unlock(&vctrl->lock);
+ return ret;
+}
+
+/* Re-open a reference to a host namespace, after a notification
+ * of a change in the host namespace
+ */
+static bool nvme_mdev_vns_reopen(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_mdev_vns *vns)
+{
+ struct nvme_ns *host_ns;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ _INFO(vctrl, "reopen host namespace %u, partition=%u\n",
+ vns->host_nsid, vns->host_partid);
+
+ /* namespace disappeared on the host - invalid */
+ host_ns = nvme_find_get_ns(vctrl->hctrl->nvme_ctrl, vns->host_nsid);
+ if (!host_ns)
+ return false;
+
+ /* different namespace with the same ID on the host - invalid */
+ if (vns->host_ns != host_ns)
+ goto error1;
+
+ /* basic checks on the namespace */
+ if (test_bit(NVME_NS_DEAD, &host_ns->flags) ||
+ test_bit(NVME_NS_REMOVING, &host_ns->flags) ||
+ !host_ns->disk)
+ goto error1;
+
+ /* read properties of the host namespace */
+ nvme_mdev_io_pause(vctrl);
+ nvme_mdev_vns_read_host_properties(vctrl, vns, host_ns);
+ nvme_mdev_io_resume(vctrl);
+
+ nvme_put_ns(host_ns);
+ return true;
+error1:
+ nvme_put_ns(host_ns);
+ return false;
+}
+
+/* Destroy a virtual namespace */
+static int __nvme_mdev_vns_destroy(struct nvme_mdev_vctrl *vctrl, u32 user_nsid)
+{
+ struct nvme_mdev_vns *vns;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ vns = nvme_mdev_vns_from_vnsid(vctrl, user_nsid);
+ if (!vns)
+ return -ENODEV;
+
+ nvme_mdev_vns_send_event(vctrl, user_nsid);
+ nvme_mdev_io_pause(vctrl);
+
+ vctrl->namespaces[user_nsid - 1] = NULL;
+ blkdev_put(vns->host_part, vns->fmode);
+ nvme_put_ns(vns->host_ns);
+ kfree(vns);
+ nvme_mdev_io_resume(vctrl);
+ return 0;
+}
+
+/* Destroy a virtual namespace (external interface) */
+int nvme_mdev_vns_destroy(struct nvme_mdev_vctrl *vctrl, u32 user_nsid)
+{
+ int ret;
+
+ mutex_lock(&vctrl->lock);
+ nvme_mdev_io_pause(vctrl);
+ ret = __nvme_mdev_vns_destroy(vctrl, user_nsid);
+ nvme_mdev_io_resume(vctrl);
+ mutex_unlock(&vctrl->lock);
+
+ return ret;
+}
+
+/* Destroy all virtual namespaces */
+void nvme_mdev_vns_destroy_all(struct nvme_mdev_vctrl *vctrl)
+{
+ u32 user_nsid;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ for (user_nsid = 1 ; user_nsid <= MAX_VIRTUAL_NAMESPACES ; user_nsid++)
+ __nvme_mdev_vns_destroy(vctrl, user_nsid);
+}
+
+/* Get a virtual namespace */
+struct nvme_mdev_vns *nvme_mdev_vns_from_vnsid(struct nvme_mdev_vctrl *vctrl,
+ u32 user_ns_id)
+{
+ if (user_ns_id == 0 || user_ns_id > MAX_VIRTUAL_NAMESPACES)
+ return NULL;
+ return vctrl->namespaces[user_ns_id - 1];
+}
+
+/* Print a description of all virtual namespaces */
+int nvme_mdev_vns_print_description(struct nvme_mdev_vctrl *vctrl,
+ char *buf, unsigned int size)
+{
+ int nsid, ret = 0;
+
+ mutex_lock(&vctrl->lock);
+
+ for (nsid = 1; nsid <= MAX_VIRTUAL_NAMESPACES; nsid++) {
+ int n;
+ struct nvme_mdev_vns *vns = nvme_mdev_vns_from_vnsid(vctrl,
+ nsid);
+ if (!vns)
+ continue;
+
+ if (vns->host_partid == 0)
+ n = snprintf(buf, size, "VNS%d: nvme%dn%d\n",
+ nsid, vctrl->hctrl->id,
+ (int)vns->host_nsid);
+ else
+ n = snprintf(buf, size, "VNS%d: nvme%dn%dp%d\n",
+ nsid, vctrl->hctrl->id,
+ (int)vns->host_nsid,
+ (int)vns->host_partid);
+ if (n >= size) {
+ mutex_unlock(&vctrl->lock);
+ return -ENOMEM;
+ }
+ buf += n;
+ size -= n;
+ ret += n;
+ }
+ mutex_unlock(&vctrl->lock);
+ return ret;
+}
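The description loop relies on snprintf's return value to detect when the buffer runs out (n >= size means truncation). A userspace sketch of one iteration's formatting and overflow check; the controller id 0 and the helper name are purely illustrative:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* format one namespace description into the remaining buffer space;
 * returns bytes written, or -1 when the output no longer fits */
static int describe_ns(char *buf, size_t size, int nsid, int partid)
{
	int n;

	if (partid == 0)
		n = snprintf(buf, size, "VNS%d: nvme0n%d\n", nsid, nsid);
	else
		n = snprintf(buf, size, "VNS%d: nvme0n%dp%d\n",
			     nsid, nsid, partid);
	return (n < 0 || (size_t)n >= size) ? -1 : n;
}
```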
+
+/* Process an update of a host namespace */
+void nvme_mdev_vns_host_ns_update(struct nvme_mdev_vctrl *vctrl,
+ u32 host_nsid, bool removed)
+{
+ int nsid;
+
+ mutex_lock(&vctrl->lock);
+
+ for (nsid = 1; nsid <= MAX_VIRTUAL_NAMESPACES; nsid++) {
+ struct nvme_mdev_vns *vns = nvme_mdev_vns_from_vnsid(vctrl,
+ nsid);
+ if (!vns || vns->host_nsid != host_nsid)
+ continue;
+
+ if (removed || !nvme_mdev_vns_reopen(vctrl, vns))
+ __nvme_mdev_vns_destroy(vctrl, nsid);
+ else
+ nvme_mdev_vns_send_event(vctrl, nsid);
+ }
+ mutex_unlock(&vctrl->lock);
+}
diff --git a/drivers/nvme/mdev/vsq.c b/drivers/nvme/mdev/vsq.c
new file mode 100644
index 000000000000..5b63081c144d
--- /dev/null
+++ b/drivers/nvme/mdev/vsq.c
@@ -0,0 +1,181 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Virtual NVMe submission queue implementation
+ * Copyright (c) 2019 - Maxim Levitsky
+ */
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include "priv.h"
+
+/* Create a new virtual submission queue */
+int nvme_mdev_vsq_init(struct nvme_mdev_vctrl *vctrl,
+ u16 qid, dma_addr_t iova, bool cont, u16 size, u16 cqid)
+{
+ struct nvme_vsq *q = &vctrl->vsqs[qid];
+ int ret;
+
+ lockdep_assert_held(&vctrl->lock);
+
+ q->iova = iova;
+ q->cont = cont;
+ q->qid = qid;
+ q->size = size;
+ q->head = 0;
+ q->vcq = &vctrl->vcqs[cqid];
+ q->data = NULL;
+ q->hsq = 0;
+
+ ret = nvme_mdev_vsq_viommu_update(&vctrl->viommu, q);
+ if (ret && (ret != -EFAULT))
+ return ret;
+
+ if (qid > 0) {
+ ret = nvme_mdev_vctrl_hq_alloc(vctrl);
+ if (ret < 0) {
+ vunmap(q->data);
+ return ret;
+ }
+ q->hsq = ret;
+ }
+
+ _DBG(vctrl, "VSQ: create qid=%d contig=%d, depth=%d cqid=%d\n",
+ qid, cont, size, cqid);
+
+ set_bit(qid, vctrl->vsq_en);
+
+ vctrl->mmio.dbs[q->qid].sqt = 0;
+ vctrl->mmio.eidxs[q->qid].sqt = 0;
+
+ return 0;
+}
+
+/* Update the kernel mapping of the queue */
+int nvme_mdev_vsq_viommu_update(struct nvme_mdev_viommu *viommu,
+ struct nvme_vsq *q)
+{
+ void *data;
+
+ if (q->data)
+ vunmap((void *)q->data);
+
+ data = nvme_mdev_udata_queue_vmap(viommu, q->iova,
+ (unsigned int)q->size *
+ sizeof(struct nvme_command),
+ q->cont);
+
+ q->data = IS_ERR(data) ? NULL : data;
+ return IS_ERR(data) ? PTR_ERR(data) : 0;
+}
+
+/* Delete a virtual submission queue */
+void nvme_mdev_vsq_delete(struct nvme_mdev_vctrl *vctrl, u16 qid)
+{
+ struct nvme_vsq *q = &vctrl->vsqs[qid];
+
+ lockdep_assert_held(&vctrl->lock);
+ _DBG(vctrl, "VSQ: delete qid=%d\n", q->qid);
+
+ if (q->data)
+ vunmap(q->data);
+ q->data = NULL;
+
+ if (q->hsq) {
+ nvme_mdev_vctrl_hq_free(vctrl, q->hsq);
+ q->hsq = 0;
+ }
+
+ clear_bit(qid, vctrl->vsq_en);
+}
+
+/* Move queue head one item forward */
+static void nvme_mdev_vsq_advance_head(struct nvme_vsq *q)
+{
+ q->head++;
+ if (q->head == q->size)
+ q->head = 0;
+}
+
+bool nvme_mdev_vsq_has_data(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vsq *q)
+{
+ u16 tail;
+
+ if (!vctrl->mmio.dbs || !vctrl->mmio.eidxs || !q->data)
+ return false;
+
+ tail = le32_to_cpu(vctrl->mmio.dbs[q->qid].sqt);
+
+ if (tail == q->head)
+ return false;
+
+ if (!nvme_mdev_mmio_db_check(vctrl, q->qid, q->size, tail))
+ return false;
+ return true;
+}
+
+/* get one command from a virtual submission queue */
+const struct nvme_command *nvme_mdev_vsq_get_cmd(struct nvme_mdev_vctrl *vctrl,
+ struct nvme_vsq *q)
+{
+ u16 oldhead = q->head;
+ u32 eidx;
+
+ if (!nvme_mdev_vsq_has_data(vctrl, q))
+ return NULL;
+ if (!nvme_mdev_vcq_reserve_space(q->vcq))
+ return NULL;
+ nvme_mdev_vsq_advance_head(q);
+
+ eidx = q->head + (q->size >> 1);
+ if (eidx >= q->size)
+ eidx -= q->size;
+
+ vctrl->mmio.eidxs[q->qid].sqt = cpu_to_le32(eidx);
+
+ return &q->data[oldhead];
+}
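The event-index update above keeps eidx half the queue depth ahead of the head, so a guest using shadow doorbells rings the real doorbell at most once per size/2 submissions. The wrap-around arithmetic, as a standalone sketch:

```c
#include <assert.h>

/* the event index is kept half the queue depth ahead of the head, so a
 * guest using shadow doorbells rings the real doorbell at most once per
 * size/2 submissions */
static unsigned int sq_event_idx(unsigned int head, unsigned int size)
{
	unsigned int eidx = head + (size >> 1);

	if (eidx >= size)
		eidx -= size;
	return eidx;
}
```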
+
+bool nvme_mdev_vsq_suspend_io(struct nvme_mdev_vctrl *vctrl, u16 sqid)
+{
+ struct nvme_vsq *q = &vctrl->vsqs[sqid];
+ u16 tail;
+
+ /* If the queue is not in a working state, don't allow the idle code
+ * to kick in
+ */
+ if (!vctrl->mmio.dbs || !vctrl->mmio.eidxs || !q->data)
+ return false;
+
+ /* queue has data - refuse idle */
+ tail = le32_to_cpu(vctrl->mmio.dbs[q->qid].sqt);
+ if (tail != q->head)
+ return false;
+
+ /* Write the event index to tell the user to ring the normal doorbell */
+ vctrl->mmio.eidxs[q->qid].sqt = cpu_to_le32(q->head);
+
+ /* memory barrier to ensure that the user has seen the eidx */
+ mb();
+
+ /* Check that the doorbell didn't move meanwhile */
+ tail = le32_to_cpu(vctrl->mmio.dbs[q->qid].sqt);
+ return (tail == q->head);
+}
+
+/* complete a command (IO version) */
+void nvme_mdev_vsq_cmd_done_io(struct nvme_mdev_vctrl *vctrl,
+ u16 sqid, u16 cid, u16 status)
+{
+ struct nvme_vsq *q = &vctrl->vsqs[sqid];
+
+ nvme_mdev_vcq_write_io(vctrl, q->vcq, q->head, q->qid, cid, status);
+}
+
+/* complete a command (ADMIN version) */
+void nvme_mdev_vsq_cmd_done_adm(struct nvme_mdev_vctrl *vctrl,
+ u32 dw0, u16 cid, u16 status)
+{
+ struct nvme_vsq *q = &vctrl->vsqs[0];
+
+ nvme_mdev_vcq_write_adm(vctrl, q->vcq, dw0, q->head, cid, status);
+}
--
2.17.2
* [PATCH 9/9] nvme/pci: implement the mdev external queue allocation interface
2019-03-19 14:41 ` No subject Maxim Levitsky
@ 2019-03-19 14:41 ` Maxim Levitsky
-1 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-19 14:41 UTC (permalink / raw)
To: linux-nvme
Cc: Maxim Levitsky, linux-kernel, kvm, Jens Axboe, Alex Williamson,
Keith Busch, Christoph Hellwig, Sagi Grimberg, Kirti Wankhede,
David S . Miller, Mauro Carvalho Chehab, Greg Kroah-Hartman,
Wolfram Sang, Nicolas Ferre, Paul E. McKenney,
Paolo Bonzini, Liang Cunming, Liu Changpeng, Fam Zheng,
Amnon Ilan, John Ferlan
Note that currently the number of hardware queues reserved for mdev
has to be predetermined at module load time.
(I used to allocate the queues dynamically on demand, but
recent changes to the allocation of polled/read queues made
this somewhat difficult, so I dropped this for now.)
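The fixed-at-load reservation can be modeled as a small pool: queues are reserved once, then handed out highest qid first (as the allocation loop in this patch does) and returned on free. A userspace toy model, with all names hypothetical:

```c
#include <assert.h>

/* toy model of the fixed pool: mdev host queues are reserved at module
 * load; allocation hands out the highest free qid first */
static int mdev_q_alloc(unsigned char *busy, int nqueues)
{
	int i;

	for (i = nqueues - 1; i >= 0; i--) {
		if (!busy[i]) {
			busy[i] = 1;
			return i;
		}
	}
	return -1;	/* the driver returns -ENOSPC here */
}

static void mdev_q_free(unsigned char *busy, int qid)
{
	busy[qid] = 0;
}
```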
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
---
drivers/nvme/host/pci.c | 376 +++++++++++++++++++++++++++++++++++++++-
1 file changed, 369 insertions(+), 7 deletions(-)
diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 806b551d3582..deb9e8de0fe8 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -31,6 +31,7 @@
#include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/sed-opal.h>
#include <linux/pci-p2pdma.h>
+#include "../mdev/mdev.h"
#include "trace.h"
#include "nvme.h"
@@ -40,6 +41,7 @@
#define SGES_PER_PAGE (PAGE_SIZE / sizeof(struct nvme_sgl_desc))
+#define USE_SMALL_PRP_POOL(nprps) ((nprps) < (256 / 8))
/*
* These can be higher, but we need to ensure that any command doesn't
* require an sg allocation that needs more than a page of data.
@@ -91,12 +93,24 @@ static int poll_queues = 0;
module_param_cb(poll_queues, &queue_count_ops, &poll_queues, 0644);
MODULE_PARM_DESC(poll_queues, "Number of queues to use for polled IO.");
+static int mdev_queues;
+#ifdef CONFIG_NVME_MDEV
+module_param_cb(mdev_queues, &queue_count_ops, &mdev_queues, 0644);
+MODULE_PARM_DESC(mdev_queues, "Number of queues to use for mediated VFIO");
+#endif
+
struct nvme_dev;
struct nvme_queue;
static void nvme_dev_disable(struct nvme_dev *dev, bool shutdown);
static bool __nvme_disable_io_queues(struct nvme_dev *dev, u8 opcode);
+#ifdef CONFIG_NVME_MDEV
+static void nvme_ext_queue_reset(struct nvme_dev *dev, u16 qid);
+#else
+static void nvme_ext_queue_reset(struct nvme_dev *dev, u16 qid) {}
+#endif
+
/*
* Represents an NVM Express device. Each nvme_dev is a PCI function.
*/
@@ -111,6 +125,7 @@ struct nvme_dev {
unsigned online_queues;
unsigned max_qid;
unsigned io_queues[HCTX_MAX_TYPES];
+ unsigned int mdev_queues;
unsigned int num_vecs;
int q_depth;
u32 db_stride;
@@ -118,6 +133,7 @@ struct nvme_dev {
unsigned long bar_mapped_size;
struct work_struct remove_work;
struct mutex shutdown_lock;
+ struct mutex ext_dev_lock;
bool subsystem;
u64 cmb_size;
bool cmb_use_sqes;
@@ -178,6 +194,16 @@ static inline struct nvme_dev *to_nvme_dev(struct nvme_ctrl *ctrl)
return container_of(ctrl, struct nvme_dev, ctrl);
}
+/* Simplified IO descriptor for MDEV use */
+struct nvme_ext_iod {
+ struct list_head link;
+ u32 user_tag;
+ int nprps;
+ struct nvme_ext_data_iter *saved_iter;
+ dma_addr_t first_prplist_dma;
+ __le64 *prpslists[NVME_MAX_SEGS];
+};
+
/*
* An NVM Express queue. Each device has at least two (one for admin
* commands and one for I/O commands).
@@ -203,14 +229,25 @@ struct nvme_queue {
u16 qid;
u8 cq_phase;
unsigned long flags;
+
#define NVMEQ_ENABLED 0
#define NVMEQ_SQ_CMB 1
#define NVMEQ_DELETE_ERROR 2
+#define NVMEQ_EXTERNAL 4
+
u32 *dbbuf_sq_db;
u32 *dbbuf_cq_db;
u32 *dbbuf_sq_ei;
u32 *dbbuf_cq_ei;
struct completion delete_done;
+
+ /* queue passthrough for external use */
+ struct {
+ int inflight;
+ struct nvme_ext_iod *iods;
+ struct list_head free_iods;
+ struct list_head used_iods;
+ } ext;
};
/*
@@ -255,7 +292,7 @@ static inline void _nvme_check_size(void)
static unsigned int max_io_queues(void)
{
- return num_possible_cpus() + write_queues + poll_queues;
+ return num_possible_cpus() + write_queues + poll_queues + mdev_queues;
}
static unsigned int max_queue_count(void)
@@ -1057,6 +1094,7 @@ static irqreturn_t nvme_irq(int irq, void *data)
* the irq handler, even if that was on another CPU.
*/
rmb();
+
if (nvmeq->cq_head != nvmeq->last_cq_head)
ret = IRQ_HANDLED;
nvme_process_cq(nvmeq, &start, &end, -1);
@@ -1167,6 +1205,7 @@ static int adapter_alloc_cq(struct nvme_dev *dev, u16 qid,
c.create_cq.cqid = cpu_to_le16(qid);
c.create_cq.qsize = cpu_to_le16(nvmeq->q_depth - 1);
c.create_cq.cq_flags = cpu_to_le16(flags);
+
if (vector != -1)
c.create_cq.irq_vector = cpu_to_le16(vector);
else
@@ -1551,7 +1590,11 @@ static void nvme_init_queue(struct nvme_queue *nvmeq, u16 qid)
memset((void *)nvmeq->cqes, 0, CQ_SIZE(nvmeq->q_depth));
nvme_dbbuf_init(dev, nvmeq, qid);
dev->online_queues++;
+
wmb(); /* ensure the first interrupt sees the initialization */
+
+ if (test_bit(NVMEQ_EXTERNAL, &nvmeq->flags))
+ nvme_ext_queue_reset(nvmeq->dev, qid);
}
static int nvme_create_queue(struct nvme_queue *nvmeq, int qid, bool polled)
@@ -1757,7 +1800,7 @@ static int nvme_create_io_queues(struct nvme_dev *dev)
}
max = min(dev->max_qid, dev->ctrl.queue_count - 1);
- if (max != 1 && dev->io_queues[HCTX_TYPE_POLL]) {
+ if (max != 1) {
rw_queues = dev->io_queues[HCTX_TYPE_DEFAULT] +
dev->io_queues[HCTX_TYPE_READ];
} else {
@@ -2094,14 +2137,23 @@ static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues)
* Poll queues don't need interrupts, but we need at least one IO
* queue left over for non-polled IO.
*/
- this_p_queues = poll_queues;
+ this_p_queues = poll_queues + mdev_queues;
if (this_p_queues >= nr_io_queues) {
this_p_queues = nr_io_queues - 1;
irq_queues = 1;
} else {
irq_queues = nr_io_queues - this_p_queues + 1;
}
+
+ if (mdev_queues > this_p_queues) {
+ mdev_queues = this_p_queues;
+ this_p_queues = 0;
+ } else {
+ this_p_queues -= mdev_queues;
+ }
+
dev->io_queues[HCTX_TYPE_POLL] = this_p_queues;
+ dev->mdev_queues = mdev_queues;
/*
* For irq sets, we have to ask for minvec == maxvec. This passes
@@ -2208,7 +2260,8 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
dev->num_vecs = result;
result = max(result - 1, 1);
- dev->max_qid = result + dev->io_queues[HCTX_TYPE_POLL];
+ dev->max_qid = result + dev->io_queues[HCTX_TYPE_POLL] +
+ dev->mdev_queues;
/*
* Should investigate if there's a performance win from allocating
@@ -2233,10 +2286,11 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
nvme_suspend_io_queues(dev);
goto retry;
}
- dev_info(dev->ctrl.device, "%d/%d/%d default/read/poll queues\n",
+ dev_info(dev->ctrl.device, "%d/%d/%d/%d default/read/poll/mdev queues\n",
dev->io_queues[HCTX_TYPE_DEFAULT],
dev->io_queues[HCTX_TYPE_READ],
- dev->io_queues[HCTX_TYPE_POLL]);
+ dev->io_queues[HCTX_TYPE_POLL],
+ dev->mdev_queues);
return 0;
}
@@ -2667,6 +2721,301 @@ static void nvme_remove_dead_ctrl_work(struct work_struct *work)
nvme_put_ctrl(&dev->ctrl);
}
+#ifdef CONFIG_NVME_MDEV
+static void nvme_ext_free_iod(struct nvme_dev *dev, struct nvme_ext_iod *iod)
+{
+ int i = 0, max_prp, nprps = iod->nprps;
+ dma_addr_t dma = iod->first_prplist_dma;
+
+ if (iod->saved_iter) {
+ iod->saved_iter->release(iod->saved_iter);
+ iod->saved_iter = NULL;
+ }
+
+ if (--nprps < 2) {
+ goto out;
+ } else if (USE_SMALL_PRP_POOL(nprps)) {
+ dma_pool_free(dev->prp_small_pool, iod->prpslists[0], dma);
+ goto out;
+ }
+
+ max_prp = (dev->ctrl.page_size >> 3) - 1;
+ while (nprps > 0) {
+ if (i > 0) {
+ dma = le64_to_cpu(iod->prpslists[i - 1][max_prp]);
+ if (nprps == 1)
+ break;
+ }
+ dma_pool_free(dev->prp_page_pool, iod->prpslists[i++], dma);
+ nprps -= max_prp;
+ }
+out:
+ iod->nprps = -1;
+ iod->first_prplist_dma = 0;
+ iod->user_tag = 0xDEADDEAD;
+}
+
+static int nvme_ext_setup_iod(struct nvme_dev *dev, struct nvme_ext_iod *iod,
+ struct nvme_common_command *cmd,
+ struct nvme_ext_data_iter *iter)
+{
+ int ret, i, j;
+ __le64 *prp_list;
+ dma_addr_t prp_dma;
+ struct dma_pool *pool;
+ int max_prp = (dev->ctrl.page_size >> 3) - 1;
+
+ iod->saved_iter = iter && iter->release ? iter : NULL;
+ iod->nprps = iter ? iter->count : 0;
+ cmd->dptr.prp1 = 0;
+ cmd->dptr.prp2 = 0;
+ cmd->metadata = 0;
+
+ if (!iter)
+ return 0;
+
+ /* put first pointer*/
+ cmd->dptr.prp1 = cpu_to_le64(iter->host_iova);
+ if (iter->count == 1)
+ return 0;
+
+ ret = iter->next(iter);
+ if (ret)
+ goto error;
+
+ /* if only have one more pointer, put it to second data pointer*/
+ if (iter->count == 1) {
+ cmd->dptr.prp2 = cpu_to_le64(iter->host_iova);
+ return 0;
+ }
+
+ pool = USE_SMALL_PRP_POOL(iter->count) ? dev->prp_small_pool :
+ dev->prp_page_pool;
+
+ /* Allocate prp lists as needed and fill them */
+ for (i = 0 ; i < NVME_MAX_SEGS && iter->count ; i++) {
+ prp_list = dma_pool_alloc(pool, GFP_ATOMIC, &prp_dma);
+ if (!prp_list) {
+ ret = -ENOMEM;
+ goto error;
+ }
+
+ iod->prpslists[i] = prp_list;
+
+ if (i == 0) {
+ iod->first_prplist_dma = prp_dma;
+ cmd->dptr.prp2 = cpu_to_le64(prp_dma);
+ j = 0;
+ } else {
+ /* move the last entry of the previous list here and
+ * replace it with a pointer chaining to this list
+ */
+ prp_list[0] = iod->prpslists[i - 1][max_prp];
+ iod->prpslists[i - 1][max_prp] = cpu_to_le64(prp_dma);
+ j = 1;
+ }
+
+ while (j <= max_prp && iter->count) {
+ prp_list[j++] = cpu_to_le64(iter->host_iova);
+ ret = iter->next(iter);
+ if (ret)
+ goto error;
+ }
+ }
+
+ if (iter->count) {
+ ret = -ENOSPC;
+ goto error;
+ }
+ return 0;
+error:
+ iod->nprps -= iter->count;
+ nvme_ext_free_iod(dev, iod);
+ return ret;
+}
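The PRP accounting here follows the standard NVMe scheme: the first data pointer rides in PRP1, a second lone pointer in PRP2, and anything beyond that needs a chain of PRP-list pages whose last slot links to the next list. A small userspace sketch of how many list pages an I/O needs under that scheme (assuming 4K pages, 8-byte entries; not code from the patch):

```c
#include <assert.h>

#define PRPS_PER_LIST 512	/* 4K page / 8-byte PRP entry */

/* number of PRP-list pages needed for an I/O with nprps page pointers:
 * PRP1 holds the first pointer, a lone second pointer goes into PRP2,
 * otherwise PRP2 points at a chain of lists whose last slot links to
 * the next list */
static int prp_list_pages(int nprps)
{
	int remaining = nprps - 1;	/* PRP1 holds the first pointer */
	int pages = 0;

	if (remaining <= 1)
		return 0;
	while (remaining > 0) {
		pages++;
		if (remaining <= PRPS_PER_LIST)
			break;
		remaining -= PRPS_PER_LIST - 1; /* last slot chains */
	}
	return pages;
}
```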
+
+static int nvme_ext_queues_available(struct nvme_ctrl *ctrl)
+{
+ struct nvme_dev *dev = to_nvme_dev(ctrl);
+ unsigned int ret = 0, qid;
+ unsigned int first_mdev_q = dev->online_queues - dev->mdev_queues;
+
+ for (qid = first_mdev_q; qid < dev->online_queues; qid++) {
+ struct nvme_queue *nvmeq = &dev->queues[qid];
+
+ if (!test_bit(NVMEQ_EXTERNAL, &nvmeq->flags))
+ ret++;
+ }
+ return ret;
+}
+
+static void nvme_ext_queue_reset(struct nvme_dev *dev, u16 qid)
+{
+ struct nvme_queue *nvmeq = &dev->queues[qid];
+ struct nvme_ext_iod *iod, *tmp;
+
+ list_for_each_entry_safe(iod, tmp, &nvmeq->ext.used_iods, link) {
+ if (iod->saved_iter && iod->saved_iter->release) {
+ iod->saved_iter->release(iod->saved_iter);
+ iod->saved_iter = NULL;
+ list_move(&iod->link, &nvmeq->ext.free_iods);
+ }
+ }
+
+ nvmeq->ext.inflight = 0;
+}
+
+static int nvme_ext_queue_alloc(struct nvme_ctrl *ctrl, u16 *ret_qid)
+{
+ struct nvme_dev *dev = to_nvme_dev(ctrl);
+ struct nvme_queue *nvmeq;
+ int ret = 0, qid, i;
+ unsigned int first_mdev_q = dev->online_queues - dev->mdev_queues;
+
+ mutex_lock(&dev->ext_dev_lock);
+
+ /* find a polled queue to allocate */
+ for (qid = dev->online_queues - 1 ; qid >= first_mdev_q ; qid--) {
+ nvmeq = &dev->queues[qid];
+ if (!test_bit(NVMEQ_EXTERNAL, &nvmeq->flags))
+ break;
+ }
+
+ if (qid < first_mdev_q) {
+ ret = -ENOSPC;
+ goto out;
+ }
+
+ INIT_LIST_HEAD(&nvmeq->ext.free_iods);
+ INIT_LIST_HEAD(&nvmeq->ext.used_iods);
+
+ nvmeq->ext.iods =
+ vzalloc_node(sizeof(struct nvme_ext_iod) * nvmeq->q_depth,
+ dev_to_node(dev->dev));
+
+ if (!nvmeq->ext.iods) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ for (i = 0 ; i < nvmeq->q_depth ; i++)
+ list_add_tail(&nvmeq->ext.iods[i].link, &nvmeq->ext.free_iods);
+
+ set_bit(NVMEQ_EXTERNAL, &nvmeq->flags);
+ *ret_qid = qid;
+out:
+ mutex_unlock(&dev->ext_dev_lock);
+ return ret;
+}
+
+static void nvme_ext_queue_free(struct nvme_ctrl *ctrl, u16 qid)
+{
+ struct nvme_dev *dev = to_nvme_dev(ctrl);
+ struct nvme_queue *nvmeq;
+
+ mutex_lock(&dev->ext_dev_lock);
+ nvmeq = &dev->queues[qid];
+
+ if (WARN_ON(!test_bit(NVMEQ_EXTERNAL, &nvmeq->flags))) {
+ mutex_unlock(&dev->ext_dev_lock);
+ return;
+ }
+
+ nvme_ext_queue_reset(dev, qid);
+
+ vfree(nvmeq->ext.iods);
+ nvmeq->ext.iods = NULL;
+ INIT_LIST_HEAD(&nvmeq->ext.free_iods);
+ INIT_LIST_HEAD(&nvmeq->ext.used_iods);
+
+ clear_bit(NVMEQ_EXTERNAL, &nvmeq->flags);
+ mutex_unlock(&dev->ext_dev_lock);
+}
+
+static int nvme_ext_queue_submit(struct nvme_ctrl *ctrl, u16 qid, u32 user_tag,
+ struct nvme_command *command,
+ struct nvme_ext_data_iter *iter)
+{
+ struct nvme_dev *dev = to_nvme_dev(ctrl);
+ struct nvme_queue *nvmeq = &dev->queues[qid];
+ struct nvme_ext_iod *iod;
+ int ret;
+
+ if (WARN_ON(!test_bit(NVMEQ_EXTERNAL, &nvmeq->flags)))
+ return -EINVAL;
+
+ if (list_empty(&nvmeq->ext.free_iods))
+ return -1;
+
+ iod = list_first_entry(&nvmeq->ext.free_iods,
+ struct nvme_ext_iod, link);
+
+ list_move(&iod->link, &nvmeq->ext.used_iods);
+
+ command->common.command_id = cpu_to_le16(iod - nvmeq->ext.iods);
+ iod->user_tag = user_tag;
+
+ ret = nvme_ext_setup_iod(dev, iod, &command->common, iter);
+ if (ret) {
+ list_move(&iod->link, &nvmeq->ext.free_iods);
+ return ret;
+ }
+
+ nvmeq->ext.inflight++;
+ nvme_submit_cmd(nvmeq, command, true);
+ return 0;
+}
+
+static int nvme_ext_queue_poll(struct nvme_ctrl *ctrl, u16 qid,
+ struct nvme_ext_cmd_result *results,
+ unsigned int max_len)
+{
+ struct nvme_dev *dev = to_nvme_dev(ctrl);
+ struct nvme_queue *nvmeq = &dev->queues[qid];
+ u16 old_head;
+ int i, j;
+
+ if (WARN_ON(!test_bit(NVMEQ_EXTERNAL, &nvmeq->flags)))
+ return -EINVAL;
+
+ if (nvmeq->ext.inflight == 0)
+ return -1;
+
+ old_head = nvmeq->cq_head;
+
+ for (i = 0 ; nvme_cqe_pending(nvmeq) && i < max_len ; i++) {
+ u16 status = le16_to_cpu(nvmeq->cqes[nvmeq->cq_head].status);
+ u16 tag = le16_to_cpu(nvmeq->cqes[nvmeq->cq_head].command_id);
+
+ results[i].status = status >> 1;
+ results[i].tag = (u32)tag;
+ nvme_update_cq_head(nvmeq);
+ }
+
+ if (old_head != nvmeq->cq_head)
+ nvme_ring_cq_doorbell(nvmeq);
+
+ for (j = 0 ; j < i ; j++) {
+ u16 tag = results[j].tag & 0xFFFF;
+ struct nvme_ext_iod *iod = &nvmeq->ext.iods[tag];
+
+ if (WARN_ON(tag >= nvmeq->q_depth || iod->nprps == -1))
+ continue;
+
+ results[j].tag = iod->user_tag;
+ nvme_ext_free_iod(dev, iod);
+ list_move(&iod->link, &nvmeq->ext.free_iods);
+ nvmeq->ext.inflight--;
+ }
+
+ WARN_ON(nvmeq->ext.inflight < 0);
+ return i;
+}
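The poll loop stores `status >> 1` because bit 0 of the CQE status field is the phase tag, not part of the status code. Extracted as a pair of trivial helpers (names illustrative):

```c
#include <assert.h>

/* bit 0 of the NVMe CQE status field is the phase tag; the status code
 * proper is the field shifted right by one, which is what the poll loop
 * above records in results[].status */
static unsigned int cqe_status_code(unsigned int status_field)
{
	return status_field >> 1;
}

static unsigned int cqe_phase(unsigned int status_field)
{
	return status_field & 1;
}
```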
+
+static bool nvme_ext_queue_full(struct nvme_ctrl *ctrl, u16 qid)
+{
+ struct nvme_dev *dev = to_nvme_dev(ctrl);
+ struct nvme_queue *nvmeq = &dev->queues[qid];
+
+ return nvmeq->ext.inflight >= nvmeq->q_depth - 1;
+}
+#endif
+
static int nvme_pci_reg_read32(struct nvme_ctrl *ctrl, u32 off, u32 *val)
{
*val = readl(to_nvme_dev(ctrl)->bar + off);
@@ -2696,13 +3045,25 @@ static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = {
.name = "pcie",
.module = THIS_MODULE,
.flags = NVME_F_METADATA_SUPPORTED |
- NVME_F_PCI_P2PDMA,
+ NVME_F_PCI_P2PDMA |
+ NVME_F_MDEV_SUPPORTED |
+ NVME_F_MDEV_DMA_SUPPORTED,
+
.reg_read32 = nvme_pci_reg_read32,
.reg_write32 = nvme_pci_reg_write32,
.reg_read64 = nvme_pci_reg_read64,
.free_ctrl = nvme_pci_free_ctrl,
.submit_async_event = nvme_pci_submit_async_event,
.get_address = nvme_pci_get_address,
+
+#ifdef CONFIG_NVME_MDEV
+ .ext_queues_available = nvme_ext_queues_available,
+ .ext_queue_alloc = nvme_ext_queue_alloc,
+ .ext_queue_free = nvme_ext_queue_free,
+ .ext_queue_submit = nvme_ext_queue_submit,
+ .ext_queue_poll = nvme_ext_queue_poll,
+ .ext_queue_full = nvme_ext_queue_full,
+#endif
};
static int nvme_dev_map(struct nvme_dev *dev)
@@ -2791,6 +3152,7 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
INIT_WORK(&dev->ctrl.reset_work, nvme_reset_work);
INIT_WORK(&dev->remove_work, nvme_remove_dead_ctrl_work);
mutex_init(&dev->shutdown_lock);
+ mutex_init(&dev->ext_dev_lock);
result = nvme_setup_prp_pools(dev);
if (result)
--
2.17.2
#define NVMEQ_SQ_CMB 1
#define NVMEQ_DELETE_ERROR 2
+#define NVMEQ_EXTERNAL 4
+
u32 *dbbuf_sq_db;
u32 *dbbuf_cq_db;
u32 *dbbuf_sq_ei;
u32 *dbbuf_cq_ei;
struct completion delete_done;
+
+ /* queue passthrough for external use */
+ struct {
+ int inflight;
+ struct nvme_ext_iod *iods;
+ struct list_head free_iods;
+ struct list_head used_iods;
+ } ext;
};
/*
@@ -255,7 +292,7 @@ static inline void _nvme_check_size(void)
static unsigned int max_io_queues(void)
{
- return num_possible_cpus() + write_queues + poll_queues;
+ return num_possible_cpus() + write_queues + poll_queues + mdev_queues;
}
static unsigned int max_queue_count(void)
@@ -1057,6 +1094,7 @@ static irqreturn_t nvme_irq(int irq, void *data)
* the irq handler, even if that was on another CPU.
*/
rmb();
+
if (nvmeq->cq_head != nvmeq->last_cq_head)
ret = IRQ_HANDLED;
nvme_process_cq(nvmeq, &start, &end, -1);
@@ -1167,6 +1205,7 @@ static int adapter_alloc_cq(struct nvme_dev *dev, u16 qid,
c.create_cq.cqid = cpu_to_le16(qid);
c.create_cq.qsize = cpu_to_le16(nvmeq->q_depth - 1);
c.create_cq.cq_flags = cpu_to_le16(flags);
+
if (vector != -1)
c.create_cq.irq_vector = cpu_to_le16(vector);
else
@@ -1551,7 +1590,11 @@ static void nvme_init_queue(struct nvme_queue *nvmeq, u16 qid)
memset((void *)nvmeq->cqes, 0, CQ_SIZE(nvmeq->q_depth));
nvme_dbbuf_init(dev, nvmeq, qid);
dev->online_queues++;
+
wmb(); /* ensure the first interrupt sees the initialization */
+
+ if (test_bit(NVMEQ_EXTERNAL, &nvmeq->flags))
+ nvme_ext_queue_reset(nvmeq->dev, qid);
}
static int nvme_create_queue(struct nvme_queue *nvmeq, int qid, bool polled)
@@ -1757,7 +1800,7 @@ static int nvme_create_io_queues(struct nvme_dev *dev)
}
max = min(dev->max_qid, dev->ctrl.queue_count - 1);
- if (max != 1 && dev->io_queues[HCTX_TYPE_POLL]) {
+ if (max != 1) {
rw_queues = dev->io_queues[HCTX_TYPE_DEFAULT] +
dev->io_queues[HCTX_TYPE_READ];
} else {
@@ -2094,14 +2137,23 @@ static int nvme_setup_irqs(struct nvme_dev *dev, unsigned int nr_io_queues)
* Poll queues don't need interrupts, but we need at least one IO
* queue left over for non-polled IO.
*/
- this_p_queues = poll_queues;
+ this_p_queues = poll_queues + mdev_queues;
if (this_p_queues >= nr_io_queues) {
this_p_queues = nr_io_queues - 1;
irq_queues = 1;
} else {
irq_queues = nr_io_queues - this_p_queues + 1;
}
+
+ if (mdev_queues > this_p_queues) {
+ mdev_queues = this_p_queues;
+ this_p_queues = 0;
+ } else {
+ this_p_queues -= mdev_queues;
+ }
+
dev->io_queues[HCTX_TYPE_POLL] = this_p_queues;
+ dev->mdev_queues = mdev_queues;
/*
* For irq sets, we have to ask for minvec == maxvec. This passes
@@ -2208,7 +2260,8 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
dev->num_vecs = result;
result = max(result - 1, 1);
- dev->max_qid = result + dev->io_queues[HCTX_TYPE_POLL];
+ dev->max_qid = result + dev->io_queues[HCTX_TYPE_POLL] +
+ dev->mdev_queues;
/*
* Should investigate if there's a performance win from allocating
@@ -2233,10 +2286,11 @@ static int nvme_setup_io_queues(struct nvme_dev *dev)
nvme_suspend_io_queues(dev);
goto retry;
}
- dev_info(dev->ctrl.device, "%d/%d/%d default/read/poll queues\n",
+ dev_info(dev->ctrl.device, "%d/%d/%d/%d default/read/poll/mdev queues\n",
dev->io_queues[HCTX_TYPE_DEFAULT],
dev->io_queues[HCTX_TYPE_READ],
- dev->io_queues[HCTX_TYPE_POLL]);
+ dev->io_queues[HCTX_TYPE_POLL],
+ dev->mdev_queues);
return 0;
}
@@ -2667,6 +2721,301 @@ static void nvme_remove_dead_ctrl_work(struct work_struct *work)
nvme_put_ctrl(&dev->ctrl);
}
+#ifdef CONFIG_NVME_MDEV
+static void nvme_ext_free_iod(struct nvme_dev *dev, struct nvme_ext_iod *iod)
+{
+ int i = 0, max_prp, nprps = iod->nprps;
+ dma_addr_t dma = iod->first_prplist_dma;
+
+ if (iod->saved_iter) {
+ iod->saved_iter->release(iod->saved_iter);
+ iod->saved_iter = NULL;
+ }
+
+ if (--nprps < 2) {
+ goto out;
+ } else if (USE_SMALL_PRP_POOL(nprps)) {
+ dma_pool_free(dev->prp_small_pool, iod->prpslists[0], dma);
+ goto out;
+ }
+
+ max_prp = (dev->ctrl.page_size >> 3) - 1;
+ while (nprps > 0) {
+ if (i > 0) {
+ dma = iod->prpslists[i - 1][max_prp];
+ if (nprps == 1)
+ break;
+ }
+ dma_pool_free(dev->prp_page_pool, iod->prpslists[i++], dma);
+ nprps -= max_prp;
+ }
+out:
+ iod->nprps = -1;
+ iod->first_prplist_dma = 0;
+ iod->user_tag = 0xDEADDEAD;
+}
+
+static int nvme_ext_setup_iod(struct nvme_dev *dev, struct nvme_ext_iod *iod,
+ struct nvme_common_command *cmd,
+ struct nvme_ext_data_iter *iter)
+{
+ int ret, i, j;
+ __le64 *prp_list;
+ dma_addr_t prp_dma;
+ struct dma_pool *pool;
+ int max_prp = (dev->ctrl.page_size >> 3) - 1;
+
+ iod->saved_iter = iter && iter->release ? iter : NULL;
+ iod->nprps = iter ? iter->count : 0;
+ cmd->dptr.prp1 = 0;
+ cmd->dptr.prp2 = 0;
+ cmd->metadata = 0;
+
+ if (!iter)
+ return 0;
+
+ /* put the first pointer */
+ cmd->dptr.prp1 = cpu_to_le64(iter->host_iova);
+ if (iter->count == 1)
+ return 0;
+
+ ret = iter->next(iter);
+ if (ret)
+ goto error;
+
+ /* if only one more pointer remains, put it in the second data pointer */
+ if (iter->count == 1) {
+ cmd->dptr.prp2 = cpu_to_le64(iter->host_iova);
+ return 0;
+ }
+
+ pool = USE_SMALL_PRP_POOL(iter->count) ? dev->prp_small_pool :
+ dev->prp_page_pool;
+
+ /* Allocate prp lists as needed and fill them */
+ for (i = 0 ; i < NVME_MAX_SEGS && iter->count ; i++) {
+ prp_list = dma_pool_alloc(pool, GFP_ATOMIC, &prp_dma);
+ if (!prp_list) {
+ ret = -ENOMEM;
+ goto error;
+ }
+
+ iod->prpslists[i] = prp_list;
+
+ if (i == 0) {
+ iod->first_prplist_dma = prp_dma;
+ cmd->dptr.prp2 = cpu_to_le64(prp_dma);
+ j = 0;
+ } else {
+ prp_list[0] = iod->prpslists[i - 1][max_prp];
+ iod->prpslists[i - 1][max_prp] = prp_dma;
+ j = 1;
+ }
+
+ while (j <= max_prp && iter->count) {
+ prp_list[j++] = iter->host_iova;
+ ret = iter->next(iter);
+ if (ret)
+ goto error;
+ }
+ }
+
+ if (iter->count) {
+ ret = -ENOSPC;
+ goto error;
+ }
+ return 0;
+error:
+ iod->nprps -= iter->count;
+ nvme_ext_free_iod(dev, iod);
+ return ret;
+}
+
+static int nvme_ext_queues_available(struct nvme_ctrl *ctrl)
+{
+ struct nvme_dev *dev = to_nvme_dev(ctrl);
+ unsigned int ret = 0, qid;
+ unsigned int first_mdev_q = dev->online_queues - dev->mdev_queues;
+
+ for (qid = first_mdev_q; qid < dev->online_queues; qid++) {
+ struct nvme_queue *nvmeq = &dev->queues[qid];
+
+ if (!test_bit(NVMEQ_EXTERNAL, &nvmeq->flags))
+ ret++;
+ }
+ return ret;
+}
+
+static void nvme_ext_queue_reset(struct nvme_dev *dev, u16 qid)
+{
+ struct nvme_queue *nvmeq = &dev->queues[qid];
+ struct nvme_ext_iod *iod, *tmp;
+
+ list_for_each_entry_safe(iod, tmp, &nvmeq->ext.used_iods, link) {
+ if (iod->saved_iter && iod->saved_iter->release) {
+ iod->saved_iter->release(iod->saved_iter);
+ iod->saved_iter = NULL;
+ list_move(&iod->link, &nvmeq->ext.free_iods);
+ }
+ }
+
+ nvmeq->ext.inflight = 0;
+}
+
+static int nvme_ext_queue_alloc(struct nvme_ctrl *ctrl, u16 *ret_qid)
+{
+ struct nvme_dev *dev = to_nvme_dev(ctrl);
+ struct nvme_queue *nvmeq;
+ int ret = 0, qid, i;
+ unsigned int first_mdev_q = dev->online_queues - dev->mdev_queues;
+
+ mutex_lock(&dev->ext_dev_lock);
+
+ /* find a polled queue to allocate */
+ for (qid = dev->online_queues - 1 ; qid >= first_mdev_q ; qid--) {
+ nvmeq = &dev->queues[qid];
+ if (!test_bit(NVMEQ_EXTERNAL, &nvmeq->flags))
+ break;
+ }
+
+ if (qid < first_mdev_q) {
+ ret = -ENOSPC;
+ goto out;
+ }
+
+ INIT_LIST_HEAD(&nvmeq->ext.free_iods);
+ INIT_LIST_HEAD(&nvmeq->ext.used_iods);
+
+ nvmeq->ext.iods =
+ vzalloc_node(sizeof(struct nvme_ext_iod) * nvmeq->q_depth,
+ dev_to_node(dev->dev));
+
+ if (!nvmeq->ext.iods) {
+ ret = -ENOMEM;
+ goto out;
+ }
+
+ for (i = 0 ; i < nvmeq->q_depth ; i++)
+ list_add_tail(&nvmeq->ext.iods[i].link, &nvmeq->ext.free_iods);
+
+ set_bit(NVMEQ_EXTERNAL, &nvmeq->flags);
+ *ret_qid = qid;
+out:
+ mutex_unlock(&dev->ext_dev_lock);
+ return ret;
+}
+
+static void nvme_ext_queue_free(struct nvme_ctrl *ctrl, u16 qid)
+{
+ struct nvme_dev *dev = to_nvme_dev(ctrl);
+ struct nvme_queue *nvmeq;
+
+ mutex_lock(&dev->ext_dev_lock);
+ nvmeq = &dev->queues[qid];
+
+ if (WARN_ON(!test_bit(NVMEQ_EXTERNAL, &nvmeq->flags))) {
+ mutex_unlock(&dev->ext_dev_lock);
+ return;
+ }
+
+ nvme_ext_queue_reset(dev, qid);
+
+ vfree(nvmeq->ext.iods);
+ nvmeq->ext.iods = NULL;
+ INIT_LIST_HEAD(&nvmeq->ext.free_iods);
+ INIT_LIST_HEAD(&nvmeq->ext.used_iods);
+
+ clear_bit(NVMEQ_EXTERNAL, &nvmeq->flags);
+ mutex_unlock(&dev->ext_dev_lock);
+}
+
+static int nvme_ext_queue_submit(struct nvme_ctrl *ctrl, u16 qid, u32 user_tag,
+ struct nvme_command *command,
+ struct nvme_ext_data_iter *iter)
+{
+ struct nvme_dev *dev = to_nvme_dev(ctrl);
+ struct nvme_queue *nvmeq = &dev->queues[qid];
+ struct nvme_ext_iod *iod;
+ int ret;
+
+ if (WARN_ON(!test_bit(NVMEQ_EXTERNAL, &nvmeq->flags)))
+ return -EINVAL;
+
+ if (list_empty(&nvmeq->ext.free_iods))
+ return -1;
+
+ iod = list_first_entry(&nvmeq->ext.free_iods,
+ struct nvme_ext_iod, link);
+
+ list_move(&iod->link, &nvmeq->ext.used_iods);
+
+ command->common.command_id = cpu_to_le16(iod - nvmeq->ext.iods);
+ iod->user_tag = user_tag;
+
+ ret = nvme_ext_setup_iod(dev, iod, &command->common, iter);
+ if (ret) {
+ list_move(&iod->link, &nvmeq->ext.free_iods);
+ return ret;
+ }
+
+ nvmeq->ext.inflight++;
+ nvme_submit_cmd(nvmeq, command, true);
+ return 0;
+}
+
+static int nvme_ext_queue_poll(struct nvme_ctrl *ctrl, u16 qid,
+ struct nvme_ext_cmd_result *results,
+ unsigned int max_len)
+{
+ struct nvme_dev *dev = to_nvme_dev(ctrl);
+ struct nvme_queue *nvmeq = &dev->queues[qid];
+ u16 old_head;
+ int i, j;
+
+ if (WARN_ON(!test_bit(NVMEQ_EXTERNAL, &nvmeq->flags)))
+ return -EINVAL;
+
+ if (nvmeq->ext.inflight == 0)
+ return -1;
+
+ old_head = nvmeq->cq_head;
+
+ for (i = 0 ; nvme_cqe_pending(nvmeq) && i < max_len ; i++) {
+ u16 status = le16_to_cpu(nvmeq->cqes[nvmeq->cq_head].status);
+ u16 tag = le16_to_cpu(nvmeq->cqes[nvmeq->cq_head].command_id);
+
+ results[i].status = status >> 1;
+ results[i].tag = (u32)tag;
+ nvme_update_cq_head(nvmeq);
+ }
+
+ if (old_head != nvmeq->cq_head)
+ nvme_ring_cq_doorbell(nvmeq);
+
+ for (j = 0 ; j < i ; j++) {
+ u16 tag = results[j].tag & 0xFFFF;
+ struct nvme_ext_iod *iod = &nvmeq->ext.iods[tag];
+
+ if (WARN_ON(tag >= nvmeq->q_depth || iod->nprps == -1))
+ continue;
+
+ results[j].tag = iod->user_tag;
+ nvme_ext_free_iod(dev, iod);
+ list_move(&iod->link, &nvmeq->ext.free_iods);
+ nvmeq->ext.inflight--;
+ }
+
+ WARN_ON(nvmeq->ext.inflight < 0);
+ return i;
+}
+
+static bool nvme_ext_queue_full(struct nvme_ctrl *ctrl, u16 qid)
+{
+ struct nvme_dev *dev = to_nvme_dev(ctrl);
+ struct nvme_queue *nvmeq = &dev->queues[qid];
+
+ return nvmeq->ext.inflight < nvmeq->q_depth - 1;
+}
+#endif
+
static int nvme_pci_reg_read32(struct nvme_ctrl *ctrl, u32 off, u32 *val)
{
*val = readl(to_nvme_dev(ctrl)->bar + off);
@@ -2696,13 +3045,25 @@ static const struct nvme_ctrl_ops nvme_pci_ctrl_ops = {
.name = "pcie",
.module = THIS_MODULE,
.flags = NVME_F_METADATA_SUPPORTED |
- NVME_F_PCI_P2PDMA,
+ NVME_F_PCI_P2PDMA |
+ NVME_F_MDEV_SUPPORTED |
+ NVME_F_MDEV_DMA_SUPPORTED,
+
.reg_read32 = nvme_pci_reg_read32,
.reg_write32 = nvme_pci_reg_write32,
.reg_read64 = nvme_pci_reg_read64,
.free_ctrl = nvme_pci_free_ctrl,
.submit_async_event = nvme_pci_submit_async_event,
.get_address = nvme_pci_get_address,
+
+#ifdef CONFIG_NVME_MDEV
+ .ext_queues_available = nvme_ext_queues_available,
+ .ext_queue_alloc = nvme_ext_queue_alloc,
+ .ext_queue_free = nvme_ext_queue_free,
+ .ext_queue_submit = nvme_ext_queue_submit,
+ .ext_queue_poll = nvme_ext_queue_poll,
+ .ext_queue_full = nvme_ext_queue_full,
+#endif
};
static int nvme_dev_map(struct nvme_dev *dev)
@@ -2791,6 +3152,7 @@ static int nvme_probe(struct pci_dev *pdev, const struct pci_device_id *id)
INIT_WORK(&dev->ctrl.reset_work, nvme_reset_work);
INIT_WORK(&dev->remove_work, nvme_remove_dead_ctrl_work);
mutex_init(&dev->shutdown_lock);
+ mutex_init(&dev->ext_dev_lock);
result = nvme_setup_prp_pools(dev);
if (result)
--
2.17.2
^ permalink raw reply related [flat|nested] 3471+ messages in thread
* [PATCH 0/9] RFC: NVME VFIO mediated device
2019-03-19 14:41 ` No subject Maxim Levitsky
@ 2019-03-19 14:58 ` Maxim Levitsky
-1 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-19 14:58 UTC (permalink / raw)
To: linux-nvme
Cc: linux-kernel, kvm, Jens Axboe, Alex Williamson, Keith Busch,
Christoph Hellwig, Sagi Grimberg, Kirti Wankhede,
David S . Miller, Mauro Carvalho Chehab, Greg Kroah-Hartman,
Wolfram Sang, Nicolas Ferre, Paul E . McKenney, Paolo Bonzini,
Liang Cunming, Liu Changpeng, Fam Zheng, Amnon Ilan, John Ferlan
Oops, I placed the subject in the wrong place.
Best regards,
Maxim Levitsky
On Tue, 2019-03-19 at 16:41 +0200, Maxim Levitsky wrote:
> Date: Tue, 19 Mar 2019 14:45:45 +0200
> Subject: [PATCH 0/9] RFC: NVME VFIO mediated device
>
> Hi everyone!
>
> In this patch series, I would like to introduce my take on the problem
> of making storage virtualization as fast as possible, with an emphasis
> on low latency.
>
> In this patch series I implemented a kernel VFIO-based mediated device
> that allows the user to pass through a partition and/or a whole
> namespace to a guest.
>
> The idea behind this driver is based on the paper you can find at
> https://www.usenix.org/conference/atc18/presentation/peng,
> although note that I started the development independently, prior to
> reading this paper.
>
> In addition, this implementation is not based on the code used in the
> paper, as I was not able at that time to get access to its source.
>
> ***Key points about the implementation:***
>
> * A polling kernel thread is used. The polling is stopped after a
> predefined timeout (1/2 sec by default).
> Support for a fully interrupt-driven mode is planned; a proof of
> concept already shows promising results.
>
> * The guest sees a standard NVMe device - this allows running guests
> with unmodified drivers, for example Windows guests.
>
> * The NVMe device is shared between host and guest.
> That means that even a single namespace can be split between host
> and guest based on different partitions.
>
> * Simple configuration
>
> *** Performance ***
>
> Performance was tested on an Intel DC P3700, with a Xeon E5-2620 v2,
> and both latency and throughput are very similar to SPDK.
>
> Soon I will test this on a better server and NVMe device and provide
> more formal performance numbers.
>
> Latency numbers:
> ~80us - spdk with fio plugin on the host.
> ~84us - nvme driver on the host
> ~87us - mdev-nvme + nvme driver in the guest
>
> Throughput followed a similar pattern as well.
>
> * Configuration example
> $ modprobe nvme mdev_queues=4
> $ modprobe nvme-mdev
>
> $ UUID=$(uuidgen)
> $ DEVICE='device pci address'
> $ echo $UUID > /sys/bus/pci/devices/$DEVICE/mdev_supported_types/nvme-2Q_V1/create
> $ echo n1p3 > /sys/bus/mdev/devices/$UUID/namespaces/add_namespace #attach
> host namespace 1 parition 3
> $ echo 11 > /sys/bus/mdev/devices/$UUID/settings/iothread_cpu #pin the io
> thread to cpu 11
>
> Afterward boot qemu with
> -device vfio-pci,sysfsdev=/sys/bus/mdev/devices/$UUID
>
> Zero configuration on the guest.
>
> *** FAQ ***
>
> * Why do this in the kernel? Why is this better than SPDK?
>
> -> Reuse the existing nvme kernel driver in the host. No new drivers in the
> guest.
>
> -> Share the NVMe device between host and guest.
> Even in fully virtualized configurations,
> some partitions of an NVMe device could be used by guests as block devices
> while others are passed through with nvme-mdev, to balance the features
> of full IO stack emulation against performance.
>
> -> NVME-MDEV is a bit faster because the in-kernel driver
> can send interrupts to the guest directly, without a context
> switch that can be expensive due to the Meltdown mitigation.
>
> -> It is able to use interrupts and still get reasonable performance.
> The interrupt-driven mode is only implemented
> as a proof of concept and not included in these patches,
> but it shows reasonable performance.
>
> -> This is a framework that can later be used to support NVMe devices
> with more of the IO virtualization built in
> (an IOMMU with PASID support coupled with a device that supports it)
>
> * Why attach directly to the nvme-pci driver and not use block layer IO?
> -> The direct attachment allows for better performance, but I will
> check the possibility of using block IO, especially for the fabrics drivers.
>
> *** Implementation notes ***
>
> * All guest memory is mapped into the physical nvme device,
> but not 1:1 as vfio-pci would do it.
> This allows very efficient DMA.
> To support this, patch 2 adds the ability for an mdev device to listen to the
> guest's memory map events.
> Any such memory is immediately pinned and then DMA mapped.
> (Support for fabric drivers, where this is not possible, exists too;
> in that case the fabric driver will do its own DMA mapping.)
>
> * The nvme core driver is modified to announce the appearance
> and disappearance of NVMe controllers and namespaces,
> to which the nvme-mdev driver subscribes.
>
> * The nvme-pci driver is modified to expose a raw interface for attaching to,
> submitting to, and polling the IO queues.
> This allows the mdev driver to submit and poll for IO very efficiently.
> By default one host queue is used per mediated device.
> (Support for other fabric-based host drivers is planned.)
>
> * nvme-mdev doesn't assume the presence of KVM, thus any VFIO user, including
> SPDK or a qemu running with TCG, can use this virtual device.
>
> *** Testing ***
>
> The device was tested with stock QEMU 3.0 on the host,
> the host running a 5.0 kernel with nvme-mdev added, and the following
> hardware:
> * QEMU nvme virtual device (with nested guest)
> * Intel DC P3700 on Xeon E5-2620 v2 server
> * Samsung SM981 (in a Thunderbolt enclosure, with my laptop)
> * Lenovo NVME device found in my laptop
>
> The guest was tested with kernels 4.16, 4.18, 4.20 and
> the same custom compiled 5.0 kernel.
> A Windows 10 guest was tested too, with both Microsoft's inbox driver and the
> open source community NVMe driver
> (https://lists.openfabrics.org/pipermail/nvmewin/2016-December/001420.html)
>
> Testing was mostly done on x86_64, but 32 bit host/guest combination
> was lightly tested too.
>
> In addition to that, the virtual device was tested with a nested guest,
> by passing the virtual device to it
> using pci passthrough, the qemu userspace nvme driver, and spdk.
>
>
> PS: I used to contribute to the kernel as a hobby using the
> maximlevitsky@gmail.com address
>
> Maxim Levitsky (9):
> vfio/mdev: add .request callback
> nvme/core: add some more values from the spec
> nvme/core: add NVME_CTRL_SUSPENDED controller state
> nvme/pci: use the NVME_CTRL_SUSPENDED state
> nvme/pci: add known admin effects to augument admin effects log page
> nvme/pci: init shadow doorbell after each reset
> nvme/core: add mdev interfaces
> nvme/core: add nvme-mdev core driver
> nvme/pci: implement the mdev external queue allocation interface
>
> MAINTAINERS | 5 +
> drivers/nvme/Kconfig | 1 +
> drivers/nvme/Makefile | 1 +
> drivers/nvme/host/core.c | 149 +++++-
> drivers/nvme/host/nvme.h | 55 ++-
> drivers/nvme/host/pci.c | 385 ++++++++++++++-
> drivers/nvme/mdev/Kconfig | 16 +
> drivers/nvme/mdev/Makefile | 5 +
> drivers/nvme/mdev/adm.c | 873 ++++++++++++++++++++++++++++++++++
> drivers/nvme/mdev/events.c | 142 ++++++
> drivers/nvme/mdev/host.c | 491 +++++++++++++++++++
> drivers/nvme/mdev/instance.c | 802 +++++++++++++++++++++++++++++++
> drivers/nvme/mdev/io.c | 563 ++++++++++++++++++++++
> drivers/nvme/mdev/irq.c | 264 ++++++++++
> drivers/nvme/mdev/mdev.h | 56 +++
> drivers/nvme/mdev/mmio.c | 591 +++++++++++++++++++++++
> drivers/nvme/mdev/pci.c | 247 ++++++++++
> drivers/nvme/mdev/priv.h | 700 +++++++++++++++++++++++++++
> drivers/nvme/mdev/udata.c | 390 +++++++++++++++
> drivers/nvme/mdev/vcq.c | 207 ++++++++
> drivers/nvme/mdev/vctrl.c | 514 ++++++++++++++++++++
> drivers/nvme/mdev/viommu.c | 322 +++++++++++++
> drivers/nvme/mdev/vns.c | 356 ++++++++++++++
> drivers/nvme/mdev/vsq.c | 178 +++++++
> drivers/vfio/mdev/vfio_mdev.c | 11 +
> include/linux/mdev.h | 4 +
> include/linux/nvme.h | 88 +++-
> 27 files changed, 7375 insertions(+), 41 deletions(-)
> create mode 100644 drivers/nvme/mdev/Kconfig
> create mode 100644 drivers/nvme/mdev/Makefile
> create mode 100644 drivers/nvme/mdev/adm.c
> create mode 100644 drivers/nvme/mdev/events.c
> create mode 100644 drivers/nvme/mdev/host.c
> create mode 100644 drivers/nvme/mdev/instance.c
> create mode 100644 drivers/nvme/mdev/io.c
> create mode 100644 drivers/nvme/mdev/irq.c
> create mode 100644 drivers/nvme/mdev/mdev.h
> create mode 100644 drivers/nvme/mdev/mmio.c
> create mode 100644 drivers/nvme/mdev/pci.c
> create mode 100644 drivers/nvme/mdev/priv.h
> create mode 100644 drivers/nvme/mdev/udata.c
> create mode 100644 drivers/nvme/mdev/vcq.c
> create mode 100644 drivers/nvme/mdev/vctrl.c
> create mode 100644 drivers/nvme/mdev/viommu.c
> create mode 100644 drivers/nvme/mdev/vns.c
> create mode 100644 drivers/nvme/mdev/vsq.c
>
^ permalink raw reply [flat|nested] 3471+ messages in thread
* [PATCH 0/9] RFC: NVME VFIO mediated device
@ 2019-03-19 14:58 ` Maxim Levitsky
0 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-19 14:58 UTC (permalink / raw)
Oops, I placed the subject in the wrong place.
Best regards,
Maxim Levitsky
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: [PATCH 0/9] RFC: NVME VFIO mediated device [BENCHMARKS]
2019-03-19 14:58 ` Maxim Levitsky
@ 2019-03-25 18:52 ` Maxim Levitsky
-1 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-25 18:52 UTC (permalink / raw)
To: linux-nvme
Cc: Fam Zheng, Keith Busch, Sagi Grimberg, kvm, Wolfram Sang,
Greg Kroah-Hartman, Liang Cunming, Nicolas Ferre, linux-kernel,
Kirti Wankhede, David S . Miller, Jens Axboe, Alex Williamson,
John Ferlan, Mauro Carvalho Chehab, Paolo Bonzini, Liu Changpeng,
Paul E . McKenney, Amnon Ilan, Christoph Hellwig
Hi,
This is the first round of benchmarks.
The system is an Intel(R) Xeon(R) Gold 6128 CPU @ 3.40GHz.
The system has 2 NUMA nodes, but only CPUs and memory from node 0 were used to
avoid NUMA noise.
The SSD is Intel® Optane™ SSD 900P Series, 280 GB version
https://ark.intel.com/content/www/us/en/ark/products/123628/intel-optane-ssd-900p-series-280gb-1-2-height-pcie-x4-20nm-3d-xpoint.html
** Latency benchmark with no interrupts at all **
spdk was compiled with the fio plugin in the host and in the guest.
spdk was first run in the host,
then the VM was started with one of spdk, pci passthrough, or mdev, and inside
the VM spdk was run with the fio plugin.
spdk was taken from my branch on gitlab, and fio was compiled from source from
the 3.4 branch, as needed by the spdk fio plugin.
The following spdk command line was used:
$WORK/fio/fio \
--name=job --runtime=40 --ramp_time=0 --time_based \
--filename="trtype=PCIe traddr=$DEVICE_FOR_FIO ns=1" --ioengine=spdk \
--direct=1 --rw=randread --bs=4K --cpus_allowed=0 \
--iodepth=1 --thread
The average values for slat (submission latency), clat (completion latency) and
their sum (slat+clat) were noted.
The results:
spdk fio host:
573 Mib/s - slat 112.00ns, clat 6.400us, lat 6.52ms
573 Mib/s - slat 111.50ns, clat 6.406us, lat 6.52ms
pci passthrough host/
spdk fio guest:
571 Mib/s - slat 124.56ns, clat 6.422us lat 6.55ms
571 Mib/s - slat 122.86ns, clat 6.410us lat 6.53ms
570 Mib/s - slat 124.95ns, clat 6.425us lat 6.55ms
spdk host/
spdk fio guest:
535 Mib/s - slat 125.00ns, clat 6.895us lat 7.02ms
534 Mib/s - slat 125.36ns, clat 6.896us lat 7.02ms
534 Mib/s - slat 125.82ns, clat 6.892us lat 7.02ms
mdev host/
spdk fio guest:
534 Mib/s - slat 128.04ns, clat 6.902us lat 7.03ms
535 Mib/s - slat 126.97ns, clat 6.900us lat 7.03ms
535 Mib/s - slat 127.00ns, clat 6.898us lat 7.03ms
As you see, native latency is 6.52ms; pci passthrough barely adds any latency,
while mdev/spdk each added about 0.51ms/0.50ms of latency (7.03/7.02 - 6.52).
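A quick sanity check of the arithmetic above can be sketched as follows (illustrative only, not part of the series; the slat/clat/lat triples are copied from the fio output above, and it assumes slat is reported in nanoseconds while clat and the total are in microseconds, which is what makes the reported sums line up):

```python
# Replay the fio numbers: total latency should be roughly slat + clat.
# Assumed units: slat in nanoseconds, clat and lat in microseconds.
results = {
    "spdk fio host":          (112.00, 6.400, 6.52),
    "pci passthrough, guest": (124.56, 6.422, 6.55),
    "spdk host, fio guest":   (125.00, 6.895, 7.02),
    "mdev host, fio guest":   (128.04, 6.902, 7.03),
}

for name, (slat_ns, clat_us, lat_us) in results.items():
    total_us = slat_ns / 1000.0 + clat_us
    # fio measures lat independently, so allow a small rounding slack.
    assert abs(total_us - lat_us) < 0.02, name

# Latency the mdev path adds over the bare-metal spdk baseline:
added_us = results["mdev host, fio guest"][2] - results["spdk fio host"][2]
```

Run as-is, the assertions pass and `added_us` comes out at about 0.51, matching the figure quoted above.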
In addition to that, I added a few 'rdtsc' reads to my mdev driver to
strategically capture the cycle count it takes to do 3 things:
1. translate a just-received command (until it is copied to the hardware
submission queue)
2. receive a completion (divided by the number of completions received in one
round of polling)
3. deliver an interrupt to the guest (the call to eventfd_signal)
This is not the whole latency: there is also latency between the point the
submission entry is written and when it becomes visible on the polling cpu,
plus latency until the polling cpu reaches the code which reads the submission
entry, and of course the latency of interrupt delivery; but the above
measurements mostly capture the latency I can control.
The results are:
commands translated : avg cycles: 459.844 avg time(usec): 0.135
commands completed : avg cycles: 354.61 avg time(usec): 0.104
interrupts sent : avg cycles: 590.227 avg time(usec): 0.174
avg time total: 0.413 usec
All measurements were done in the host kernel; the time was calculated using
the tsc_khz kernel variable.
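The cycles-to-microseconds conversion can be reproduced from those numbers (a sketch; tsc_khz = 3400000 is an assumption derived from the 3.40GHz base clock of the Xeon Gold 6128 mentioned above):

```python
TSC_KHZ = 3_400_000  # assumed tsc_khz value for a 3.40 GHz TSC

def cycles_to_usec(cycles, tsc_khz=TSC_KHZ):
    # tsc_khz is the TSC frequency in kHz, so cycles * 1000 / tsc_khz
    # gives the elapsed time in microseconds.
    return cycles / tsc_khz * 1000.0

measured_cycles = {
    "commands translated": 459.844,
    "commands completed":  354.61,
    "interrupts sent":     590.227,
}

usec = {k: cycles_to_usec(v) for k, v in measured_cycles.items()}
total_usec = sum(usec.values())
```

This reproduces the 0.135/0.104/0.174 usec figures and the 0.413 usec total quoted above.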
The biggest take-away from this is that both spdk and my driver are very fast,
and the overhead is just a thousand CPU cycles, give or take.
*** Throughput benchmarks ***
https://paste.fedoraproject.org/paste/ecijclLMG2B11MVCVIst-w
Here you can find the throughput benchmarks.
The biggest take-away is that when using no interrupts (spdk fio in the guest
or spdk fio in the host), the bottleneck is the device, and throughput is
about 2290 Mib/s.
With interrupts, mdev slightly wins over spdk, giving a throughput of about
2015 Mib/s versus about 2005 Mib/s for spdk,
mostly due to slightly different timings, as the latency of both is about the
same.
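One way to read those numbers is as the relative throughput cost of running interrupt driven instead of fully polled (a sketch using only the figures quoted above):

```python
polled_mibs = 2290.0  # no interrupts: bottlenecked by the device itself
mdev_mibs = 2015.0    # mdev, interrupt driven
spdk_mibs = 2005.0    # spdk, interrupt driven

# Fractional throughput cost of interrupt delivery, per driver:
mdev_cost_pct = (polled_mibs - mdev_mibs) / polled_mibs * 100.0
spdk_cost_pct = (polled_mibs - spdk_mibs) / polled_mibs * 100.0
```

Both come out around 12%, i.e. interrupt delivery costs either driver roughly an eighth of the polled throughput on this device.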
Disabling the Meltdown mitigation didn't have much effect on performance.
Best regards,
Maxim Levitsky
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: [PATCH 0/9] RFC: NVME VFIO mediated device [BENCHMARKS]
2019-03-25 18:52 ` Maxim Levitsky
(?)
@ 2019-03-26 9:38 ` Stefan Hajnoczi
-1 siblings, 0 replies; 3471+ messages in thread
From: Stefan Hajnoczi @ 2019-03-26 9:38 UTC (permalink / raw)
To: Maxim Levitsky
Cc: linux-nvme, Fam Zheng, Keith Busch, Sagi Grimberg, kvm,
Wolfram Sang, Greg Kroah-Hartman, Liang Cunming, Nicolas Ferre,
linux-kernel, Kirti Wankhede, David S . Miller, Jens Axboe,
Alex Williamson, John Ferlan, Mauro Carvalho Chehab,
Paolo Bonzini, Liu Changpeng, Paul E . McKenney, Amnon Ilan,
Christoph Hellwig
[-- Attachment #1: Type: text/plain, Size: 3718 bytes --]
On Mon, Mar 25, 2019 at 08:52:32PM +0200, Maxim Levitsky wrote:
> Hi
>
> This is first round of benchmarks.
>
> The system is Intel(R) Xeon(R) Gold 6128 CPU @ 3.40GHz
>
> The system has 2 numa nodes, but only cpus and memory from node 0 were used to
> avoid noise from numa.
>
> The SSD is Intel® Optane™ SSD 900P Series, 280 GB version
>
>
> https://ark.intel.com/content/www/us/en/ark/products/123628/intel-optane-ssd-900p-series-280gb-1-2-height-pcie-x4-20nm-3d-xpoint.html
>
>
> ** Latency benchmark with no interrupts at all **
>
> spdk was complited with fio plugin in the host and in the guest.
> spdk was first run in the host
> then vm was started with one of spdk,pci passthrough, mdev and inside the
> vm spdk was run with fio plugin.
>
> spdk was taken from my branch on gitlab, and fio was complied from source for
> 3.4 branch as needed by the spdk fio plugin.
>
> The following spdk command line was used:
>
> $WORK/fio/fio \
> --name=job --runtime=40 --ramp_time=0 --time_based \
> --filename="trtype=PCIe traddr=$DEVICE_FOR_FIO ns=1" --ioengine=spdk \
> --direct=1 --rw=randread --bs=4K --cpus_allowed=0 \
> --iodepth=1 --thread
>
> The average values for slat (submission latency), clat (completion latency) and
> its sum (slat+clat) were noted.
>
> The results:
>
> spdk fio host:
> 573 Mib/s - slat 112.00ns, clat 6.400us, lat 6.52ms
> 573 Mib/s - slat 111.50ns, clat 6.406us, lat 6.52ms
>
>
> pci passthough host/
> spdk fio guest
> 571 Mib/s - slat 124.56ns, clat 6.422us lat 6.55ms
> 571 Mib/s - slat 122.86ns, clat 6.410us lat 6.53ms
> 570 Mib/s - slat 124.95ns, clat 6.425us lat 6.55ms
>
> spdk host/
> spdk fio guest:
> 535 Mib/s - slat 125.00ns, clat 6.895us lat 7.02ms
> 534 Mib/s - slat 125.36ns, clat 6.896us lat 7.02ms
> 534 Mib/s - slat 125.82ns, clat 6.892us lat 7.02ms
>
> mdev host/
> spdk fio guest:
> 534 Mib/s - slat 128.04ns, clat 6.902us lat 7.03ms
> 535 Mib/s - slat 126.97ns, clat 6.900us lat 7.03ms
> 535 Mib/s - slat 127.00ns, clat 6.898us lat 7.03ms
>
>
> As you see, native latency is 6.52ms, pci passthrough barely adds any latency,
> while both mdev/spdk added about (7.03/2 - 6.52) - 0.51ms/0.50ms of latency.
Milliseconds is surprising. The SSD's spec says 10us read/write
latency. Did you mean microseconds?
>
> In addtion to that I added few 'rdtsc' into my mdev driver to strategically
> capture the cycle count it takes it to do 3 things:
>
> 1. translate a just received command (till it is copied to the hardware
> submission queue)
>
> 2. receive a completion (divided by the number of completion received in one
> round of polling)
>
> 3. deliver an interupt to the guest (call to eventfd_signal)
>
> This is not the whole latency as there is also a latency between the point the
> submission entry is written and till it is visible on the polling cpu, plus
> latency till polling cpu gets to the code which reads the submission entry,
> and of course latency of interrupt delivery, but the above measurements mostly
> capture the latency I can control.
>
> The results are:
>
> commands translated : avg cycles: 459.844 avg time(usec): 0.135
> commands completed : avg cycles: 354.61 avg time(usec): 0.104
> interrupts sent : avg cycles: 590.227 avg time(usec): 0.174
>
> avg time total: 0.413 usec
>
> All measurmenets done in the host kernel. the time calculated using tsc_khz
> kernel variable.
>
> The biggest take from this is that both spdk and my driver are very fast and
> overhead is just a thousand of cpu cycles give it or take.
Nice!
Stefan
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 455 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: [PATCH 0/9] RFC: NVME VFIO mediated device [BENCHMARKS]
2019-03-26 9:38 ` Stefan Hajnoczi
@ 2019-03-26 9:50 ` Maxim Levitsky
-1 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-26 9:50 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: linux-nvme, Fam Zheng, Keith Busch, Sagi Grimberg, kvm,
Wolfram Sang, Greg Kroah-Hartman, Liang Cunming, Nicolas Ferre,
linux-kernel, Kirti Wankhede, David S . Miller, Jens Axboe,
Alex Williamson, John Ferlan, Mauro Carvalho Chehab,
Paolo Bonzini, Liu Changpeng, Paul E . McKenney, Amnon Ilan,
Christoph Hellwig
On Tue, 2019-03-26 at 09:38 +0000, Stefan Hajnoczi wrote:
> On Mon, Mar 25, 2019 at 08:52:32PM +0200, Maxim Levitsky wrote:
> > Hi
> >
> > This is the first round of benchmarks.
> >
> > The system is Intel(R) Xeon(R) Gold 6128 CPU @ 3.40GHz
> >
> > The system has 2 NUMA nodes, but only CPUs and memory from node 0 were used to
> > avoid noise from NUMA.
> >
> > The SSD is Intel® Optane™ SSD 900P Series, 280 GB version
> >
> >
> > https://ark.intel.com/content/www/us/en/ark/products/123628/intel-optane-ssd-900p-series-280gb-1-2-height-pcie-x4-20nm-3d-xpoint.html
> >
> >
> > ** Latency benchmark with no interrupts at all **
> >
> > spdk was compiled with the fio plugin in the host and in the guest.
> > spdk was first run in the host,
> > then the vm was started with one of spdk, pci passthrough, or mdev, and inside the
> > vm spdk was run with the fio plugin.
> >
> > spdk was taken from my branch on gitlab, and fio was compiled from source from the
> > 3.4 branch, as needed by the spdk fio plugin.
> >
> > The following spdk command line was used:
> >
> > $WORK/fio/fio \
> > --name=job --runtime=40 --ramp_time=0 --time_based \
> > --filename="trtype=PCIe traddr=$DEVICE_FOR_FIO ns=1" --ioengine=spdk \
> > --direct=1 --rw=randread --bs=4K --cpus_allowed=0 \
> > --iodepth=1 --thread
> >
> > The average values for slat (submission latency), clat (completion latency) and
> > their sum (slat+clat) were noted.
> >
> > The results:
> >
> > spdk fio host:
> > 573 MiB/s - slat 112.00ns, clat 6.400us, lat 6.52ms
> > 573 MiB/s - slat 111.50ns, clat 6.406us, lat 6.52ms
> >
> >
> > pci passthrough host/
> > spdk fio guest:
> > 571 MiB/s - slat 124.56ns, clat 6.422us, lat 6.55ms
> > 571 MiB/s - slat 122.86ns, clat 6.410us, lat 6.53ms
> > 570 MiB/s - slat 124.95ns, clat 6.425us, lat 6.55ms
> >
> > spdk host/
> > spdk fio guest:
> > 535 MiB/s - slat 125.00ns, clat 6.895us, lat 7.02ms
> > 534 MiB/s - slat 125.36ns, clat 6.896us, lat 7.02ms
> > 534 MiB/s - slat 125.82ns, clat 6.892us, lat 7.02ms
> >
> > mdev host/
> > spdk fio guest:
> > 534 MiB/s - slat 128.04ns, clat 6.902us, lat 7.03ms
> > 535 MiB/s - slat 126.97ns, clat 6.900us, lat 7.03ms
> > 535 MiB/s - slat 127.00ns, clat 6.898us, lat 7.03ms
> >
> >
> > As you see, native latency is 6.52ms, pci passthrough barely adds any latency,
> > while both mdev/spdk added about (7.03/7.02 - 6.52) = 0.51ms/0.50ms of latency.
>
> Milliseconds is surprising. The SSD's spec says 10us read/write
> latency. Did you mean microseconds?
Yeah, this is a typo - all of this is microseconds.
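Spelled out in microseconds per that correction, the overhead each path adds over native is just the difference of the quoted lat averages; this is purely the arithmetic from the summary above, not new data:

```python
# Overhead of each virtualization path over the native (host) latency,
# using the lat averages quoted above, all in microseconds.
native = 6.52          # spdk fio run directly on the host
paths = {"pci passthrough": 6.55, "spdk host": 7.02, "mdev host": 7.03}

for name, lat in paths.items():
    print(f"{name}: +{lat - native:.2f} usec over native")
```

So passthrough costs about 0.03 usec, while the spdk and mdev mediation paths each cost about half a microsecond.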
>
> >
> > In addition to that I added a few 'rdtsc' reads into my mdev driver to strategically
> > capture the cycle count it takes to do 3 things:
> >
> > 1. translate a just received command (till it is copied to the hardware
> > submission queue)
> >
> > 2. receive a completion (divided by the number of completions received in one
> > round of polling)
> >
> > 3. deliver an interrupt to the guest (the call to eventfd_signal)
> >
> > This is not the whole latency, as there is also latency between the point the
> > submission entry is written and when it becomes visible to the polling CPU, plus
> > the latency until the polling CPU reaches the code which reads the submission entry,
> > and of course the latency of interrupt delivery; but the above measurements mostly
> > capture the latency I can control.
> >
> > The results are:
> >
> > commands translated : avg cycles: 459.844 avg time(usec): 0.135
> > commands completed : avg cycles: 354.61 avg time(usec): 0.104
> > interrupts sent : avg cycles: 590.227 avg time(usec): 0.174
> >
> > avg time total: 0.413 usec
> >
> > All measurements were done in the host kernel; the time was calculated using the
> > tsc_khz kernel variable.
> >
> > The biggest takeaway from this is that both spdk and my driver are very fast and
> > the overhead is just a thousand CPU cycles, give or take.
>
> Nice!
>
> Stefan
Best regards,
Maxim Levitsky
* Re: your mail
2019-03-19 14:41 ` No subject Maxim Levitsky
@ 2019-03-19 15:22 ` Keith Busch
-1 siblings, 0 replies; 3471+ messages in thread
From: Keith Busch @ 2019-03-19 15:22 UTC (permalink / raw)
To: Maxim Levitsky
Cc: linux-nvme, linux-kernel, kvm, Jens Axboe, Alex Williamson,
Keith Busch, Christoph Hellwig, Sagi Grimberg, Kirti Wankhede,
David S . Miller, Mauro Carvalho Chehab, Greg Kroah-Hartman,
Wolfram Sang, Nicolas Ferre, Paul E . McKenney ,
Paolo Bonzini, Liang Cunming, Liu Changpeng, Fam Zheng,
Amnon Ilan, John Ferlan
On Tue, Mar 19, 2019 at 04:41:07PM +0200, Maxim Levitsky wrote:
> -> Share the NVMe device between host and guest.
> Even in fully virtualized configurations,
> some partitions of nvme device could be used by guests as block devices
> while others passed through with nvme-mdev to achieve balance between
> all features of full IO stack emulation and performance.
>
> -> NVME-MDEV is a bit faster due to the fact that in-kernel driver
> can send interrupts to the guest directly without a context
> switch that can be expensive due to meltdown mitigation.
>
> -> Is able to utilize interrupts to get reasonable performance.
> This is only implemented
> as a proof of concept and not included in the patches,
> but interrupt driven mode shows reasonable performance
>
> -> This is a framework that later can be used to support NVMe devices
> with more of the IO virtualization built-in
> (IOMMU with PASID support coupled with device that supports it)
Would be very interested to see the PASID support. You wouldn't even
need to mediate the IO doorbells or translations if assigning entire
namespaces, and should be much faster than the shadow doorbells.
I think you should send 6/9 "nvme/pci: init shadow doorbell after each
reset" separately for immediate inclusion.
I like the idea in principle, but it will take me a little time to get
through reviewing your implementation. I would have guessed we could
have leveraged something from the existing nvme/target for the mediating
controller register access and admin commands. Maybe even start with
implementing an nvme passthrough namespace target type (we currently
have block and file).
* Re: your mail
2019-03-19 15:22 ` Keith Busch
@ 2019-03-19 23:49 ` Chaitanya Kulkarni
-1 siblings, 0 replies; 3471+ messages in thread
From: Chaitanya Kulkarni @ 2019-03-19 23:49 UTC (permalink / raw)
To: Keith Busch, Maxim Levitsky
Cc: Fam Zheng, Keith Busch, Sagi Grimberg, kvm, Wolfram Sang,
Greg Kroah-Hartman, Liang Cunming, Nicolas Ferre, linux-kernel,
linux-nvme, David S . Miller, Jens Axboe, Alex Williamson,
Kirti Wankhede, Mauro Carvalho Chehab, Paolo Bonzini,
Liu Changpeng, Paul E . McKenney ,
Amnon Ilan, Christoph Hellwig, John Ferlan
Hi Keith,
On 03/19/2019 08:21 AM, Keith Busch wrote:
> On Tue, Mar 19, 2019 at 04:41:07PM +0200, Maxim Levitsky wrote:
>> -> Share the NVMe device between host and guest.
>> Even in fully virtualized configurations,
>> some partitions of nvme device could be used by guests as block devices
>> while others passed through with nvme-mdev to achieve balance between
>> all features of full IO stack emulation and performance.
>>
>> -> NVME-MDEV is a bit faster due to the fact that in-kernel driver
>> can send interrupts to the guest directly without a context
>> switch that can be expensive due to meltdown mitigation.
>>
>> -> Is able to utilize interrupts to get reasonable performance.
>> This is only implemented
>> as a proof of concept and not included in the patches,
>> but interrupt driven mode shows reasonable performance
>>
>> -> This is a framework that later can be used to support NVMe devices
>> with more of the IO virtualization built-in
>> (IOMMU with PASID support coupled with device that supports it)
>
> Would be very interested to see the PASID support. You wouldn't even
> need to mediate the IO doorbells or translations if assigning entire
> namespaces, and should be much faster than the shadow doorbells.
>
> I think you should send 6/9 "nvme/pci: init shadow doorbell after each
> reset" separately for immediate inclusion.
>
> I like the idea in principle, but it will take me a little time to get
> through reviewing your implementation. I would have guessed we could
> have leveraged something from the existing nvme/target for the mediating
> controller register access and admin commands. Maybe even start with
> implementing an nvme passthrough namespace target type (we currently
> have block and file).
I have the code for the NVMeOF target passthru-ctrl; I think we can use
that as-is if you are looking for the passthru for NVMeOF.
I'll post a patch series based on the latest code base soon.
>
> _______________________________________________
> Linux-nvme mailing list
> Linux-nvme@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-nvme
>
* Re: your mail
2019-03-19 23:49 ` Chaitanya Kulkarni
@ 2019-03-20 16:44 ` Maxim Levitsky
-1 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-20 16:44 UTC (permalink / raw)
To: Chaitanya Kulkarni, Keith Busch
Cc: Fam Zheng, Jens Axboe, Sagi Grimberg, kvm, Wolfram Sang,
Greg Kroah-Hartman, Liang Cunming, Nicolas Ferre, linux-nvme,
linux-kernel, Keith Busch, Alex Williamson, Christoph Hellwig,
Kirti Wankhede, Mauro Carvalho Chehab, Paolo Bonzini,
Liu Changpeng, Paul E . McKenney, Amnon Ilan, David S . Miller,
John Ferlan
On Tue, 2019-03-19 at 23:49 +0000, Chaitanya Kulkarni wrote:
> Hi Keith,
> On 03/19/2019 08:21 AM, Keith Busch wrote:
> > On Tue, Mar 19, 2019 at 04:41:07PM +0200, Maxim Levitsky wrote:
> > > -> Share the NVMe device between host and guest.
> > > Even in fully virtualized configurations,
> > > some partitions of nvme device could be used by guests as block
> > > devices
> > > while others passed through with nvme-mdev to achieve balance
> > > between
> > > all features of full IO stack emulation and performance.
> > >
> > > -> NVME-MDEV is a bit faster due to the fact that in-kernel driver
> > > can send interrupts to the guest directly without a context
> > > switch that can be expensive due to meltdown mitigation.
> > >
> > > -> Is able to utilize interrupts to get reasonable performance.
> > > This is only implemented
> > > as a proof of concept and not included in the patches,
> > > but interrupt driven mode shows reasonable performance
> > >
> > > -> This is a framework that later can be used to support NVMe devices
> > > with more of the IO virtualization built-in
> > > (IOMMU with PASID support coupled with device that supports it)
> >
> > Would be very interested to see the PASID support. You wouldn't even
> > need to mediate the IO doorbells or translations if assigning entire
> > namespaces, and should be much faster than the shadow doorbells.
> >
> > I think you should send 6/9 "nvme/pci: init shadow doorbell after each
> > reset" separately for immediate inclusion.
> >
> > I like the idea in principle, but it will take me a little time to get
> > through reviewing your implementation. I would have guessed we could
> > have leveraged something from the existing nvme/target for the mediating
> > controller register access and admin commands. Maybe even start with
> > implementing an nvme passthrough namespace target type (we currently
> > have block and file).
>
> I have the code for the NVMeOf target passthru-ctrl, I think we can use
> that as it is if you are looking for the passthru for NVMeOF.
>
> I'll post patch-series based on the latest code base soon.
I am very interested in this code.
Could you explain how your NVMeOF target passthrough works?
Which components of the NVMe stack does it involve?
Best regards,
Maxim Levitsky
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: your mail
2019-03-19 15:22 ` Keith Busch
@ 2019-03-20 16:30 ` Maxim Levitsky
-1 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-20 16:30 UTC (permalink / raw)
To: Keith Busch
Cc: Fam Zheng, Keith Busch, Sagi Grimberg, kvm, Wolfram Sang,
Greg Kroah-Hartman, Liang Cunming, Nicolas Ferre, linux-kernel,
linux-nvme, David S . Miller, Jens Axboe, Alex Williamson,
Kirti Wankhede, Mauro Carvalho Chehab, Paolo Bonzini,
Liu Changpeng, Paul E . McKenney, Amnon Ilan, Christoph Hellwig,
John Ferlan
On Tue, 2019-03-19 at 09:22 -0600, Keith Busch wrote:
> On Tue, Mar 19, 2019 at 04:41:07PM +0200, Maxim Levitsky wrote:
> > -> Share the NVMe device between host and guest.
> > Even in fully virtualized configurations,
> > some partitions of nvme device could be used by guests as block
> > devices
> > while others passed through with nvme-mdev to achieve balance between
> > all features of full IO stack emulation and performance.
> >
> > -> NVME-MDEV is a bit faster due to the fact that in-kernel driver
> > can send interrupts to the guest directly without a context
> > switch that can be expensive due to meltdown mitigation.
> >
> > -> Is able to utilize interrupts to get reasonable performance.
> > This is only implemented
> > as a proof of concept and not included in the patches,
> > but interrupt driven mode shows reasonable performance
> >
> > -> This is a framework that later can be used to support NVMe devices
> > with more of the IO virtualization built-in
> > (IOMMU with PASID support coupled with device that supports it)
>
> Would be very interested to see the PASID support. You wouldn't even
> need to mediate the IO doorbells or translations if assigning entire
> namespaces, and should be much faster than the shadow doorbells.
I fully agree with that.
Note that for PASID support to materialize, two things have to happen first:
1. Mature IOMMU support for PASID. On the Intel side, as far as I know, only
the spec has been released, and the kernel bits to support it are currently
being put in place. I still don't know when a product actually supporting this
spec will be released. For the other vendors (ARM/AMD) I haven't yet researched
the state of PASID-based IOMMU support on their platforms.
2. The NVMe spec has to be extended to support PASID. At minimum we need the
ability to assign a PASID to an sq/cq queue pair, and the ability to relocate
the doorbells, so that each guest gets its own (hardware-backed) MMIO page with
its own doorbells. And of course the hardware vendors have to embrace the spec.
I guess these two things will happen in a collaborative manner.
>
> I think you should send 6/9 "nvme/pci: init shadow doorbell after each
> reset" separately for immediate inclusion.
I'll do this soon.
Patch 5/9, 'nvme/pci: add known admin effects to augment admin effects log
page', can be considered for immediate inclusion as well: it works around
controllers whose admin effects log page is badly done, and it should have no
side effects (pun intended) for spec-compliant controllers, I believe.
This could be done with a quirk instead if you prefer, though.
>
> I like the idea in principle, but it will take me a little time to get
> through reviewing your implementation. I would have guessed we could
> have leveraged something from the existing nvme/target for the mediating
> controller register access and admin commands. Maybe even start with
> implementing an nvme passthrough namespace target type (we currently
> have block and file).
I fully agree that I could have reused some of the nvme/target code, and I am
planning to do so eventually.
For that I would need to make my driver one of the target drivers, and add
another target back end, as you said, to let my target driver talk directly to
the NVMe hardware, bypassing the block layer. Alternatively I could use the
block back end (but note that the block back end currently doesn't support
polling, which is critical for performance).
Switching to the target code might have some (probably minor) performance
impact, as it would lengthen the critical code path a bit (for instance, I
might need to translate the PRP lists I get from the virtual controller to a
scatter-gather list and back).
This is why I did it the way I did; but knowing now that I can probably afford
to lose a bit of performance, I can look into doing that.
Best regards,
Thanks in advance for the review,
Maxim Levitsky
PS:
For reference, the IO path currently looks more or less like this:
My IO thread notices a doorbell write, reads a command from a submission queue,
translates it (without even looking at the data pointer) and sends it to the
nvme pci driver together with a pointer to a data iterator.
The nvme pci driver calls the data iterator N times; each call makes the
iterator translate and fetch the DMA addresses at which the data is already
mapped for the pci nvme device (the mdev driver maps all of the guest memory
to the nvme pci device).
The nvme pci driver uses the addresses it receives to build a PRP list, which
it puts into the data pointer.
The nvme pci driver also allocates a free command ID from a list, puts it into
the command ID field, and sends the command to the real hardware.
Later, the IO thread calls into the nvme pci driver to poll the queue; when
completions arrive, the nvme pci driver returns them to the IO thread.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: your mail
2019-03-20 16:30 ` Maxim Levitsky
@ 2019-03-20 17:03 ` Keith Busch
-1 siblings, 0 replies; 3471+ messages in thread
From: Keith Busch @ 2019-03-20 17:03 UTC (permalink / raw)
To: Maxim Levitsky
Cc: Fam Zheng, Keith Busch, Sagi Grimberg, kvm, Wolfram Sang,
Greg Kroah-Hartman, Liang Cunming, Nicolas Ferre, linux-kernel,
linux-nvme, David S . Miller, Jens Axboe, Alex Williamson,
Kirti Wankhede, Mauro Carvalho Chehab, Paolo Bonzini,
Liu Changpeng, Paul E . McKenney, Amnon Ilan, Christoph Hellwig,
John Ferlan
On Wed, Mar 20, 2019 at 06:30:29PM +0200, Maxim Levitsky wrote:
> Or instead I can use the block backend,
> (but note that currently the block back-end doesn't support polling which is
> critical for the performance).
Oh, I think you can do polling through there. For reference, fs/io_uring.c
has a pretty good implementation that aligns with how you could use it.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: your mail
2019-03-20 17:03 ` Keith Busch
@ 2019-03-20 17:33 ` Maxim Levitsky
-1 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-20 17:33 UTC (permalink / raw)
To: Keith Busch
Cc: Fam Zheng, Keith Busch, Sagi Grimberg, kvm, Wolfram Sang,
Greg Kroah-Hartman, Liang Cunming, Nicolas Ferre, linux-kernel,
linux-nvme, David S . Miller, Jens Axboe, Alex Williamson,
Kirti Wankhede, Mauro Carvalho Chehab, Paolo Bonzini,
Liu Changpeng, Paul E . McKenney, Amnon Ilan, Christoph Hellwig,
John Ferlan
On Wed, 2019-03-20 at 11:03 -0600, Keith Busch wrote:
> On Wed, Mar 20, 2019 at 06:30:29PM +0200, Maxim Levitsky wrote:
> > Or instead I can use the block backend,
> > (but note that currently the block back-end doesn't support polling which is
> > critical for the performance).
>
> Oh, I think you can do polling through there. For reference, fs/io_uring.c
> has a pretty good implementation that aligns with how you could use it.
That is exactly my thought. Polling recently got a lot of improvements in the
block layer, which might make this feasible.
I will give it a try.
Best regards,
Maxim Levitsky
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: your mail
2019-03-19 15:22 ` Keith Busch
@ 2019-04-08 10:04 ` Maxim Levitsky
-1 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-04-08 10:04 UTC (permalink / raw)
To: Keith Busch
Cc: Fam Zheng, Keith Busch, Sagi Grimberg, kvm, Wolfram Sang,
Greg Kroah-Hartman, Liang Cunming, Nicolas Ferre, linux-kernel,
linux-nvme, David S . Miller, Jens Axboe, Alex Williamson,
Kirti Wankhede, Mauro Carvalho Chehab, Paolo Bonzini,
Liu Changpeng, Paul E . McKenney, Amnon Ilan, Christoph Hellwig,
John Ferlan
On Tue, 2019-03-19 at 09:22 -0600, Keith Busch wrote:
> On Tue, Mar 19, 2019 at 04:41:07PM +0200, Maxim Levitsky wrote:
> > -> Share the NVMe device between host and guest.
> > Even in fully virtualized configurations,
> > some partitions of nvme device could be used by guests as block
> > devices
> > while others passed through with nvme-mdev to achieve balance between
> > all features of full IO stack emulation and performance.
> >
> > -> NVME-MDEV is a bit faster due to the fact that in-kernel driver
> > can send interrupts to the guest directly without a context
> > switch that can be expensive due to meltdown mitigation.
> >
> > -> Is able to utilize interrupts to get reasonable performance.
> > This is only implemented
> > as a proof of concept and not included in the patches,
> > but interrupt driven mode shows reasonable performance
> >
> > -> This is a framework that later can be used to support NVMe devices
> > with more of the IO virtualization built-in
> > (IOMMU with PASID support coupled with device that supports it)
>
> Would be very interested to see the PASID support. You wouldn't even
> need to mediate the IO doorbells or translations if assigning entire
> namespaces, and should be much faster than the shadow doorbells.
>
> I think you should send 6/9 "nvme/pci: init shadow doorbell after each
> reset" separately for immediate inclusion.
>
> I like the idea in principle, but it will take me a little time to get
> through reviewing your implementation. I would have guessed we could
> have leveraged something from the existing nvme/target for the mediating
> controller register access and admin commands. Maybe even start with
> implementing an nvme passthrough namespace target type (we currently
> have block and file).
Hi!
Sorry to bother you, but any update?
I was somewhat sick for the last week, but I am finally back in shape to continue
working on this and other tasks.
I am now studying the nvme target code and io_uring to evaluate the
difficulty of using something similar to talk to the block device instead of,
or in addition to, the direct connection I implemented.
I would be glad to hear more feedback on this project.
I will also post the few fixes separately soon, as you suggested.
Best regards,
Maxim Levitsky
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re:
2019-03-19 14:41 ` No subject Maxim Levitsky
@ 2019-03-20 11:03 ` Felipe Franciosi
-1 siblings, 0 replies; 3471+ messages in thread
From: Felipe Franciosi @ 2019-03-20 11:03 UTC (permalink / raw)
To: Maxim Levitsky
Cc: linux-nvme, linux-kernel, kvm, Jens Axboe, Alex Williamson,
Keith Busch, Christoph Hellwig, Sagi Grimberg, Kirti Wankhede,
David S . Miller, Mauro Carvalho Chehab, Greg Kroah-Hartman,
Wolfram Sang, Nicolas Ferre, Paul E . McKenney, Paolo Bonzini,
Liang Cunming, Liu Changpeng, Fam Zheng, Amnon Ilan, John Ferlan,
Stefan Hajnoczi, Harris, James R, Thanos Makatos
> On Mar 19, 2019, at 2:41 PM, Maxim Levitsky <mlevitsk@redhat.com> wrote:
>
> Date: Tue, 19 Mar 2019 14:45:45 +0200
> Subject: [PATCH 0/9] RFC: NVME VFIO mediated device
>
> Hi everyone!
>
> In this patch series, I would like to introduce my take on the problem of doing
> as fast as possible virtualization of storage with emphasis on low latency.
>
> In this patch series I implemented a kernel vfio based, mediated device that
> allows the user to pass through a partition and/or whole namespace to a guest.
Hey Maxim!
I'm really excited to see this series, as it aligns to some extent with what we discussed in last year's KVM Forum VFIO BoF.
There's no arguing that we need a better story to efficiently virtualise NVMe devices. So far, for Qemu-based VMs, Changpeng's vhost-user-nvme is the best attempt at that. However, I seem to recall there was some pushback from qemu-devel in the sense that they would rather see investment in virtio-blk. I'm not sure what's the latest on that work and what are the next steps.
The pushback drove the discussion towards pursuing an mdev approach, which is why I'm excited to see your patches.
What I'm thinking is that passing through namespaces or partitions is very restrictive. It leaves no room to implement more elaborate virtualisation stacks like replicating data across multiple devices (local or remote), storage migration, software-managed thin provisioning, encryption, deduplication, compression, etc. In summary, anything that requires software intervention in the datapath. (Worth noting: vhost-user-nvme allows all of that to be easily done in SPDK's bdev layer.)
These complicated stacks should probably not be implemented in the kernel, though. So I'm wondering whether we could talk about mechanisms to allow efficient and performant userspace datapath intervention in your approach or pursue a mechanism to completely offload the device emulation to userspace (and align with what SPDK has to offer).
Thoughts welcome!
Felipe
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re:
2019-03-20 11:03 ` Felipe Franciosi
@ 2019-03-20 19:08 ` Maxim Levitsky
-1 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-20 19:08 UTC (permalink / raw)
To: Felipe Franciosi
Cc: Fam Zheng, kvm, Wolfram Sang, linux-nvme, linux-kernel,
Keith Busch, Kirti Wankhede, Mauro Carvalho Chehab,
Paul E . McKenney, Christoph Hellwig, Sagi Grimberg, Harris,
James R, Liang Cunming, Jens Axboe, Alex Williamson,
Stefan Hajnoczi, Thanos Makatos, John Ferlan, Liu Changpeng,
Greg Kroah-Hartman, Nicolas Ferre, Paolo Bonzini, Amnon Ilan,
David S . Miller
On Wed, 2019-03-20 at 11:03 +0000, Felipe Franciosi wrote:
> > On Mar 19, 2019, at 2:41 PM, Maxim Levitsky <mlevitsk@redhat.com> wrote:
> >
> > Date: Tue, 19 Mar 2019 14:45:45 +0200
> > Subject: [PATCH 0/9] RFC: NVME VFIO mediated device
> >
> > Hi everyone!
> >
> > In this patch series, I would like to introduce my take on the problem of
> > doing
> > as fast as possible virtualization of storage with emphasis on low latency.
> >
> > In this patch series I implemented a kernel vfio based, mediated device
> > that
> > allows the user to pass through a partition and/or whole namespace to a
> > guest.
>
> Hey Maxim!
>
> I'm really excited to see this series, as it aligns to some extent with what
> we discussed in last year's KVM Forum VFIO BoF.
>
> There's no arguing that we need a better story to efficiently virtualise NVMe
> devices. So far, for Qemu-based VMs, Changpeng's vhost-user-nvme is the best
> attempt at that. However, I seem to recall there was some pushback from qemu-
> devel in the sense that they would rather see investment in virtio-blk. I'm
> not sure what's the latest on that work and what are the next steps.
I agree with that. All my benchmarks were against his vhost-user-nvme driver, and
I am able to get pretty much the same throughput and latency.
The SSD I tested on died just recently (Murphy's law), not due to a bug in my
driver but to some internal fault (even though most of my tests were reads, plus
the occasional 'nvme format').
We are in the process of buying a replacement.
>
> The pushback drove the discussion towards pursuing an mdev approach, which is
> why I'm excited to see your patches.
>
> What I'm thinking is that passing through namespaces or partitions is very
> restrictive. It leaves no room to implement more elaborate virtualisation
> stacks like replicating data across multiple devices (local or remote),
> storage migration, software-managed thin provisioning, encryption,
> deduplication, compression, etc. In summary, anything that requires software
> intervention in the datapath. (Worth noting: vhost-user-nvme allows all of
> that to be easily done in SPDK's bdev layer.)
Hi Felipe!
I guess that my driver is not geared toward the more complicated use cases you
mentioned; instead it is focused on getting the best possible performance for
the common case.
One thing that I can do which would solve several of the above problems is to
accept a map between virtual and real logical blocks, pretty much the same way
EPT does it.
Then userspace can map any portion of the device anywhere, while still keeping
the dataplane in the kernel with minimal overhead.
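The block remapping described above could look something like the sketch below: a linear extent lookup from guest-visible LBAs to real device LBAs. All names here are hypothetical, and a real implementation would use a radix tree or interval tree rather than a linear scan.

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical extent map from guest-visible ("virtual") LBAs to real
 * device LBAs, analogous to how EPT maps guest-physical pages to
 * host-physical pages.  Layout is illustrative only.
 */
struct lba_extent {
    uint64_t virt_start;  /* first virtual LBA of the extent */
    uint64_t real_start;  /* corresponding real LBA on the device */
    uint64_t nr_blocks;   /* length of the extent in blocks */
};

#define BAD_LBA UINT64_MAX

/* Translate one virtual LBA; BAD_LBA means "not mapped" (fail the I/O). */
static uint64_t translate_lba(const struct lba_extent *map, size_t n,
                              uint64_t virt_lba)
{
    for (size_t i = 0; i < n; i++) {
        if (virt_lba >= map[i].virt_start &&
            virt_lba < map[i].virt_start + map[i].nr_blocks)
            return map[i].real_start + (virt_lba - map[i].virt_start);
    }
    return BAD_LBA;
}
```

With a table like this, userspace can hand the kernel an arbitrary scatter of device regions (thin provisioning, partitions, etc.) while the per-I/O translation stays a cheap in-kernel lookup.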
On top of that, note that the direction of IO virtualization is to do the
dataplane in hardware, which will probably give you even worse partition
granularity / features but will be the fastest option available; for instance
SR-IOV, which already exists and only allows splitting by namespaces, without
any finer-grained control.
Think of nvme-mdev as a very low level driver, which currently uses polling but
will eventually use a PASID-based IOMMU to provide the guest with a raw PCI
device. Userspace / qemu can build on top of that with various software layers.
On top of that, I am thinking of solving the problem of migration in Qemu by
creating a 'vfio-nvme' driver which would bind vfio to the device exposed by
the kernel and pass all the doorbells and queues through to the guest while
intercepting the admin queue. I think such a driver can be made to support
migration while being able to run on top of an SR-IOV device, on top of my
nvme-mdev (albeit with double admin queue emulation; it's a bit ugly but won't
affect performance at all), and even on top of a regular NVMe device assigned
to the guest with vfio.
Best regards,
Maxim Levitsky
>
> These complicated stacks should probably not be implemented in the kernel,
> though. So I'm wondering whether we could talk about mechanisms to allow
> efficient and performant userspace datapath intervention in your approach or
> pursue a mechanism to completely offload the device emulation to userspace
> (and align with what SPDK has to offer).
>
> Thoughts welcome!
> Felipe
> _______________________________________________
> Linux-nvme mailing list
> Linux-nvme@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-nvme
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re:
2019-03-20 19:08 ` Maxim Levitsky
@ 2019-03-21 16:12 ` Stefan Hajnoczi
-1 siblings, 0 replies; 3471+ messages in thread
From: Stefan Hajnoczi @ 2019-03-21 16:12 UTC (permalink / raw)
To: Maxim Levitsky
Cc: Felipe Franciosi, Fam Zheng, kvm, Wolfram Sang, linux-nvme,
linux-kernel, Keith Busch, Kirti Wankhede, Mauro Carvalho Chehab,
Paul E . McKenney, Christoph Hellwig, Sagi Grimberg, Harris,
James R, Liang Cunming, Jens Axboe, Alex Williamson,
Thanos Makatos, John Ferlan, Liu Changpeng, Greg Kroah-Hartman,
Nicolas Ferre, Paolo Bonzini, Amnon Ilan, David S . Miller
On Wed, Mar 20, 2019 at 09:08:37PM +0200, Maxim Levitsky wrote:
> On Wed, 2019-03-20 at 11:03 +0000, Felipe Franciosi wrote:
> > > On Mar 19, 2019, at 2:41 PM, Maxim Levitsky <mlevitsk@redhat.com> wrote:
> > >
> > > Date: Tue, 19 Mar 2019 14:45:45 +0200
> > > Subject: [PATCH 0/9] RFC: NVME VFIO mediated device
> > >
> > > Hi everyone!
> > >
> > > In this patch series, I would like to introduce my take on the problem of
> > > doing
> > > as fast as possible virtualization of storage with emphasis on low latency.
> > >
> > > In this patch series I implemented a kernel vfio based, mediated device
> > > that
> > > allows the user to pass through a partition and/or whole namespace to a
> > > guest.
> >
> > Hey Maxim!
> >
> > I'm really excited to see this series, as it aligns to some extent with what
> > we discussed in last year's KVM Forum VFIO BoF.
> >
> > There's no arguing that we need a better story to efficiently virtualise NVMe
> > devices. So far, for Qemu-based VMs, Changpeng's vhost-user-nvme is the best
> > attempt at that. However, I seem to recall there was some pushback from qemu-
> > devel in the sense that they would rather see investment in virtio-blk. I'm
> > not sure what's the latest on that work and what are the next steps.
> I agree with that. All my benchmarks were against his vhost-user-nvme driver, and
> I am able to get pretty much the same throughput and latency.
>
> The SSD I tested on died just recently (Murphy's law), not due to a bug in my
> driver but to some internal fault (even though most of my tests were reads, plus
> the occasional 'nvme format').
> We are in the process of buying a replacement.
>
> >
> > The pushback drove the discussion towards pursuing an mdev approach, which is
> > why I'm excited to see your patches.
> >
> > What I'm thinking is that passing through namespaces or partitions is very
> > restrictive. It leaves no room to implement more elaborate virtualisation
> > stacks like replicating data across multiple devices (local or remote),
> > storage migration, software-managed thin provisioning, encryption,
> > deduplication, compression, etc. In summary, anything that requires software
> > intervention in the datapath. (Worth noting: vhost-user-nvme allows all of
> > that to be easily done in SPDK's bdev layer.)
>
> Hi Felipe!
>
> I guess that my driver is not geared toward the more complicated use cases you
> mentioned; instead it is focused on getting the best possible performance for
> the common case.
>
> One thing that I can do which would solve several of the above problems is to
> accept a map between virtual and real logical blocks, pretty much the same way
> EPT does it.
> Then userspace can map any portion of the device anywhere, while still keeping
> the dataplane in the kernel with minimal overhead.
>
> On top of that, note that the direction of IO virtualization is to do the
> dataplane in hardware, which will probably give you even worse partition
> granularity / features but will be the fastest option available; for instance
> SR-IOV, which already exists and only allows splitting by namespaces, without
> any finer-grained control.
>
> Think of nvme-mdev as a very low level driver, which currently uses polling but
> will eventually use a PASID-based IOMMU to provide the guest with a raw PCI
> device. Userspace / qemu can build on top of that with various software layers.
>
> On top of that, I am thinking of solving the problem of migration in Qemu by
> creating a 'vfio-nvme' driver which would bind vfio to the device exposed by
> the kernel and pass all the doorbells and queues through to the guest while
> intercepting the admin queue. I think such a driver can be made to support
> migration while being able to run on top of an SR-IOV device, on top of my
> nvme-mdev (albeit with double admin queue emulation; it's a bit ugly but won't
> affect performance at all), and even on top of a regular NVMe device assigned
> to the guest with vfio.
mdev-nvme seems like a duplication of SPDK. The performance is not
better and the features are more limited, so why focus on this approach?
One argument might be that the kernel NVMe subsystem wants to offer this
functionality, and loading a kernel module is more convenient than
managing SPDK for some users.
Thoughts?
Stefan
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re:
2019-03-21 16:12 ` Stefan Hajnoczi
(?)
@ 2019-03-21 16:21 ` Keith Busch
-1 siblings, 0 replies; 3471+ messages in thread
From: Keith Busch @ 2019-03-21 16:21 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: Maxim Levitsky, Fam Zheng, kvm, Wolfram Sang, linux-nvme,
linux-kernel, Keith Busch, Kirti Wankhede, Mauro Carvalho Chehab,
Paul E . McKenney, Christoph Hellwig, Sagi Grimberg, Harris,
James R, Felipe Franciosi, Liang Cunming, Jens Axboe,
Alex Williamson, Thanos Makatos, John Ferlan, Liu Changpeng,
Greg Kroah-Hartman, Nicolas Ferre, Paolo Bonzini, Amnon Ilan,
David S . Miller
On Thu, Mar 21, 2019 at 04:12:39PM +0000, Stefan Hajnoczi wrote:
> mdev-nvme seems like a duplication of SPDK. The performance is not
> better and the features are more limited, so why focus on this approach?
>
> One argument might be that the kernel NVMe subsystem wants to offer this
> functionality and loading the kernel module is more convenient than
> managing SPDK to some users.
>
> Thoughts?
Doesn't SPDK bind a controller to a single process? mdev binds to
namespaces (or their partitions), so you could have many mdevs assigned
to many VMs accessing a single controller.
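The namespace/partition carve-up Keith describes amounts to giving each mediated device its own LBA window on shared media. A minimal Python sketch of that idea (illustrative only; the class, field names, and block numbers are invented here, not taken from the patch series):

```python
from dataclasses import dataclass

@dataclass
class MdevWindow:
    """One mediated device: a contiguous LBA range on a shared namespace."""
    namespace_id: int
    start_lba: int    # first host LBA owned by this guest
    num_blocks: int   # size of the guest-visible window

    def to_host_lba(self, guest_lba: int) -> int:
        """Translate a guest LBA to a host LBA, rejecting out-of-range access."""
        if not 0 <= guest_lba < self.num_blocks:
            raise ValueError(f"guest LBA {guest_lba} outside mdev window")
        return self.start_lba + guest_lba

# One controller, one namespace, several consumers: the host keeps
# LBAs [0, 1000); guest A gets [1000, 3000); guest B gets [3000, 4000).
guest_a = MdevWindow(namespace_id=1, start_lba=1000, num_blocks=2000)
guest_b = MdevWindow(namespace_id=1, start_lba=3000, num_blocks=1000)
```

Each guest sees a zero-based namespace while the translation confines it to its own slice, which is what allows many mdevs to share one controller.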
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re:
2019-03-21 16:21 ` Keith Busch
(?)
@ 2019-03-21 16:41 ` Felipe Franciosi
-1 siblings, 0 replies; 3471+ messages in thread
From: Felipe Franciosi @ 2019-03-21 16:41 UTC (permalink / raw)
To: Keith Busch
Cc: Stefan Hajnoczi, Maxim Levitsky, Fam Zheng, kvm, Wolfram Sang,
linux-nvme, linux-kernel, Keith Busch, Kirti Wankhede,
Mauro Carvalho Chehab, Paul E . McKenney, Christoph Hellwig,
Sagi Grimberg, Harris, James R, Liang Cunming, Jens Axboe,
Alex Williamson, Thanos Makatos, John Ferlan, Liu Changpeng,
Greg Kroah-Hartman, Nicolas Ferre, Paolo Bonzini, Amnon Ilan,
David S . Miller
> On Mar 21, 2019, at 4:21 PM, Keith Busch <kbusch@kernel.org> wrote:
>
> On Thu, Mar 21, 2019 at 04:12:39PM +0000, Stefan Hajnoczi wrote:
>> mdev-nvme seems like a duplication of SPDK. The performance is not
>> better and the features are more limited, so why focus on this approach?
>>
>> One argument might be that the kernel NVMe subsystem wants to offer this
>> functionality and loading the kernel module is more convenient than
>> managing SPDK to some users.
>>
>> Thoughts?
>
> Doesn't SPDK bind a controller to a single process? mdev binds to
> namespaces (or their partitions), so you could have many mdev's assigned
> to many VMs accessing a single controller.
Yes, it binds to a single process which can drive the datapath of multiple virtual controllers for multiple VMs (similar to what you described for mdev). You can therefore efficiently poll multiple VM submission queues (and multiple device completion queues) from a single physical CPU.
The same could be done in the kernel, but the code gets complicated as you add more functionality to it. As this is a direct interface with an untrusted front-end (the guest), it's also arguably safer to do in userspace.
Worth noting: you can eventually have a single physical core polling all sorts of virtual devices (e.g. virtual storage or network controllers) very efficiently. And this is quite configurable, too. In the interest of fairness, performance or efficiency, you can choose to dynamically add or remove queues to the poll thread or spawn more threads and redistribute the work.
F.
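The single-core, multi-queue polling with runtime rebalancing that Felipe describes can be sketched roughly like this (a Python stand-in for illustration only: real submission queues are NVMe rings in guest memory, while here each "queue" is just a deque of commands):

```python
from collections import deque

class QueuePoller:
    """One logical CPU polling several VM submission queues fairly."""

    def __init__(self):
        self.queues = {}  # queue id -> deque of pending commands

    def add_queue(self, qid):
        """Attach a queue to this poll set (e.g. when a VM starts)."""
        self.queues[qid] = deque()

    def remove_queue(self, qid):
        """Detach a queue, e.g. to rebalance it onto another poll thread."""
        return self.queues.pop(qid, None)

    def submit(self, qid, cmd):
        self.queues[qid].append(cmd)

    def poll_once(self, handler):
        """One fair pass: dequeue at most one command from each queue."""
        done = 0
        for qid, q in list(self.queues.items()):
            if q:
                handler(qid, q.popleft())
                done += 1
        return done

poller = QueuePoller()
poller.add_queue("vm1-sq0")
poller.add_queue("vm2-sq0")
poller.submit("vm1-sq0", "read")
poller.submit("vm1-sq0", "write")
poller.submit("vm2-sq0", "flush")
order = []
while poller.poll_once(lambda qid, cmd: order.append((qid, cmd))):
    pass
```

Because each pass takes at most one command per queue, a busy queue cannot starve a quiet one; rebalancing is just moving a queue from one poller to another.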
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re:
2019-03-21 16:41 ` Felipe Franciosi
(?)
@ 2019-03-21 17:04 ` Maxim Levitsky
-1 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-21 17:04 UTC (permalink / raw)
To: Felipe Franciosi, Keith Busch
Cc: Stefan Hajnoczi, Fam Zheng, kvm, Wolfram Sang, linux-nvme,
linux-kernel, Keith Busch, Kirti Wankhede, Mauro Carvalho Chehab,
Paul E . McKenney, Christoph Hellwig, Sagi Grimberg, Harris,
James R, Liang Cunming, Jens Axboe, Alex Williamson,
Thanos Makatos, John Ferlan, Liu Changpeng, Greg Kroah-Hartman,
Nicolas Ferre, Paolo Bonzini, Amnon Ilan, David S . Miller
On Thu, 2019-03-21 at 16:41 +0000, Felipe Franciosi wrote:
> > On Mar 21, 2019, at 4:21 PM, Keith Busch <kbusch@kernel.org> wrote:
> >
> > On Thu, Mar 21, 2019 at 04:12:39PM +0000, Stefan Hajnoczi wrote:
> > > mdev-nvme seems like a duplication of SPDK. The performance is not
> > > better and the features are more limited, so why focus on this approach?
> > >
> > > One argument might be that the kernel NVMe subsystem wants to offer this
> > > functionality and loading the kernel module is more convenient than
> > > managing SPDK to some users.
> > >
> > > Thoughts?
> >
> > Doesn't SPDK bind a controller to a single process? mdev binds to
> > namespaces (or their partitions), so you could have many mdev's assigned
> > to many VMs accessing a single controller.
>
> Yes, it binds to a single process which can drive the datapath of multiple
> virtual controllers for multiple VMs (similar to what you described for mdev).
> You can therefore efficiently poll multiple VM submission queues (and multiple
> device completion queues) from a single physical CPU.
>
> The same could be done in the kernel, but the code gets complicated as you add
> more functionality to it. As this is a direct interface with an untrusted
> front-end (the guest), it's also arguably safer to do in userspace.
>
> Worth noting: you can eventually have a single physical core polling all sorts
> of virtual devices (eg. virtual storage or network controllers) very
> efficiently. And this is quite configurable, too. In the interest of fairness,
> performance or efficiency, you can choose to dynamically add or remove queues
> to the poll thread or spawn more threads and redistribute the work.
>
> F.
Note, though, that SPDK doesn't support sharing the device between the host and
the guests: it takes over the NVMe device, which makes the kernel nvme driver
unbind from it.
My driver creates a polling thread per guest, but it's trivial to add an option
to use the same polling thread for many guests if there is a need for that.
Best regards,
Maxim Levitsky
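Maxim's point that the same poll loop can serve either one guest or several is easy to see in a sketch (Python threads standing in for the kernel polling threads; all names are invented for illustration):

```python
import queue
import threading
import time

def poll_guests(guest_queues, results, stop):
    """Shared poll loop: drain commands from every guest's queue until stopped."""
    while not stop.is_set():
        for name, q in guest_queues.items():
            try:
                results.append((name, q.get_nowait()))
            except queue.Empty:
                pass
        time.sleep(0.001)  # sketch only; a real poller spins or adapts its rate

# Per-guest threads vs. one shared thread is just a matter of how many
# queues each poller is handed. Here one thread polls two guests:
qa, qb = queue.Queue(), queue.Queue()
results, stop = [], threading.Event()
shared = threading.Thread(
    target=poll_guests, args=({"guest-a": qa, "guest-b": qb}, results, stop))
shared.start()
qa.put("read lba 0")
qb.put("write lba 8")
deadline = time.time() + 2.0
while len(results) < 2 and time.time() < deadline:
    time.sleep(0.01)
stop.set()
shared.join()
```

Starting one such thread per guest instead, each with a single queue, gives the per-guest layout the driver currently uses.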
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re:
2019-03-21 17:04 ` Maxim Levitsky
(?)
@ 2019-03-22 7:54 ` Felipe Franciosi
-1 siblings, 0 replies; 3471+ messages in thread
From: Felipe Franciosi @ 2019-03-22 7:54 UTC (permalink / raw)
To: Maxim Levitsky
Cc: Keith Busch, Stefan Hajnoczi, Fam Zheng, kvm, Wolfram Sang,
linux-nvme, linux-kernel, Keith Busch, Kirti Wankhede,
Mauro Carvalho Chehab, Paul E . McKenney, Christoph Hellwig,
Sagi Grimberg, Harris, James R, Liang Cunming, Jens Axboe,
Alex Williamson, Thanos Makatos, John Ferlan, Liu Changpeng,
Greg Kroah-Hartman, Nicolas Ferre, Paolo Bonzini, Amnon Ilan,
David S . Miller
> On Mar 21, 2019, at 5:04 PM, Maxim Levitsky <mlevitsk@redhat.com> wrote:
>
> On Thu, 2019-03-21 at 16:41 +0000, Felipe Franciosi wrote:
>>> On Mar 21, 2019, at 4:21 PM, Keith Busch <kbusch@kernel.org> wrote:
>>>
>>> On Thu, Mar 21, 2019 at 04:12:39PM +0000, Stefan Hajnoczi wrote:
>>>> mdev-nvme seems like a duplication of SPDK. The performance is not
>>>> better and the features are more limited, so why focus on this approach?
>>>>
>>>> One argument might be that the kernel NVMe subsystem wants to offer this
>>>> functionality and loading the kernel module is more convenient than
>>>> managing SPDK to some users.
>>>>
>>>> Thoughts?
>>>
>>> Doesn't SPDK bind a controller to a single process? mdev binds to
>>> namespaces (or their partitions), so you could have many mdev's assigned
>>> to many VMs accessing a single controller.
>>
>> Yes, it binds to a single process which can drive the datapath of multiple
>> virtual controllers for multiple VMs (similar to what you described for mdev).
>> You can therefore efficiently poll multiple VM submission queues (and multiple
>> device completion queues) from a single physical CPU.
>>
>> The same could be done in the kernel, but the code gets complicated as you add
>> more functionality to it. As this is a direct interface with an untrusted
>> front-end (the guest), it's also arguably safer to do in userspace.
>>
>> Worth noting: you can eventually have a single physical core polling all sorts
>> of virtual devices (eg. virtual storage or network controllers) very
>> efficiently. And this is quite configurable, too. In the interest of fairness,
>> performance or efficiency, you can choose to dynamically add or remove queues
>> to the poll thread or spawn more threads and redistribute the work.
>>
>> F.
>
> Note though that SPDK doesn't support sharing the device between host and the
> guests, it takes over the nvme device, thus it makes the kernel nvme driver
> unbind from it.
That is absolutely true. However, I find it not to be a problem in practice.
Hypervisor products, especially those caring about performance, efficiency and fairness, will dedicate NVMe devices to a particular purpose (e.g. vDisk storage, cache, metadata) and will not share these devices with other use cases. That's because these products want to deterministically control the performance aspects of the device, which you just cannot do if you are sharing the device with a subsystem you do not control.
For scenarios where the device must be shared and such fine-grained control is not required, using the kernel driver with io_uring looks like it offers very good performance with flexibility.
Cheers,
Felipe
^ permalink raw reply [flat|nested] 3471+ messages in thread
* No subject
@ 2019-03-22 7:54 ` Felipe Franciosi
0 siblings, 0 replies; 3471+ messages in thread
From: Felipe Franciosi @ 2019-03-22 7:54 UTC (permalink / raw)
> On Mar 21, 2019,@5:04 PM, Maxim Levitsky <mlevitsk@redhat.com> wrote:
>
> On Thu, 2019-03-21@16:41 +0000, Felipe Franciosi wrote:
>>> On Mar 21, 2019,@4:21 PM, Keith Busch <kbusch@kernel.org> wrote:
>>>
>>> On Thu, Mar 21, 2019@04:12:39PM +0000, Stefan Hajnoczi wrote:
>>>> mdev-nvme seems like a duplication of SPDK. The performance is not
>>>> better and the features are more limited, so why focus on this approach?
>>>>
>>>> One argument might be that the kernel NVMe subsystem wants to offer this
>>>> functionality and loading the kernel module is more convenient than
>>>> managing SPDK to some users.
>>>>
>>>> Thoughts?
>>>
>>> Doesn't SPDK bind a controller to a single process? mdev binds to
>>> namespaces (or their partitions), so you could have many mdev's assigned
>>> to many VMs accessing a single controller.
>>
>> Yes, it binds to a single process which can drive the datapath of multiple
>> virtual controllers for multiple VMs (similar to what you described for mdev).
>> You can therefore efficiently poll multiple VM submission queues (and multiple
>> device completion queues) from a single physical CPU.
>>
>> The same could be done in the kernel, but the code gets complicated as you add
>> more functionality to it. As this is a direct interface with an untrusted
>> front-end (the guest), it's also arguably safer to do in userspace.
>>
>> Worth noting: you can eventually have a single physical core polling all sorts
>> of virtual devices (eg. virtual storage or network controllers) very
>> efficiently. And this is quite configurable, too. In the interest of fairness,
>> performance or efficiency, you can choose to dynamically add or remove queues
>> to the poll thread or spawn more threads and redistribute the work.
>>
>> F.
>
> Note though that SPDK doesn't support sharing the device between host and the
> guests, it takes over the nvme device, thus it makes the kernel nvme driver
> unbind from it.
That is absolutely true. However, I find it not to be a problem in practice.
Hypervisor products, specially those caring about performance, efficiency and fairness, will dedicate NVMe devices for a particular purpose (eg. vDisk storage, cache, metadata) and will not share these devices for other use cases. That's because these products want to deterministically control the performance aspects of the device, which you just cannot do if you are sharing the device with a subsystem you do not control.
For scenarios where the device must be shared and such fine grained control is not required, it looks like using the kernel driver with io_uring offers very good performance with flexibility.
Cheers,
Felipe
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re:
2019-03-22 7:54 ` Felipe Franciosi
@ 2019-03-22 10:32 ` Maxim Levitsky
-1 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-22 10:32 UTC (permalink / raw)
To: Felipe Franciosi
Cc: Keith Busch, Stefan Hajnoczi, Fam Zheng, kvm, Wolfram Sang,
linux-nvme, linux-kernel, Keith Busch, Kirti Wankhede,
Mauro Carvalho Chehab, Paul E . McKenney, Christoph Hellwig,
Sagi Grimberg, Harris, James R, Liang Cunming, Jens Axboe,
Alex Williamson, Thanos Makatos, John Ferlan, Liu Changpeng,
Greg Kroah-Hartman, Nicolas Ferre, Paolo Bonzini, Amnon Ilan,
David S . Miller
On Fri, 2019-03-22 at 07:54 +0000, Felipe Franciosi wrote:
> > On Mar 21, 2019, at 5:04 PM, Maxim Levitsky <mlevitsk@redhat.com> wrote:
> >
> > On Thu, 2019-03-21 at 16:41 +0000, Felipe Franciosi wrote:
> > > > On Mar 21, 2019, at 4:21 PM, Keith Busch <kbusch@kernel.org> wrote:
> > > >
> > > > On Thu, Mar 21, 2019 at 04:12:39PM +0000, Stefan Hajnoczi wrote:
> > > > > mdev-nvme seems like a duplication of SPDK. The performance is not
> > > > > better and the features are more limited, so why focus on this
> > > > > approach?
> > > > >
> > > > > One argument might be that the kernel NVMe subsystem wants to offer
> > > > > this
> > > > > functionality and loading the kernel module is more convenient than
> > > > > managing SPDK to some users.
> > > > >
> > > > > Thoughts?
> > > >
> > > > Doesn't SPDK bind a controller to a single process? mdev binds to
> > > > namespaces (or their partitions), so you could have many mdev's assigned
> > > > to many VMs accessing a single controller.
> > >
> > > Yes, it binds to a single process which can drive the datapath of multiple
> > > virtual controllers for multiple VMs (similar to what you described for
> > > mdev).
> > > You can therefore efficiently poll multiple VM submission queues (and
> > > multiple
> > > device completion queues) from a single physical CPU.
> > >
> > > The same could be done in the kernel, but the code gets complicated as you
> > > add
> > > more functionality to it. As this is a direct interface with an untrusted
> > > front-end (the guest), it's also arguably safer to do in userspace.
> > >
> > > Worth noting: you can eventually have a single physical core polling all
> > > sorts
> > > of virtual devices (eg. virtual storage or network controllers) very
> > > efficiently. And this is quite configurable, too. In the interest of
> > > fairness,
> > > performance or efficiency, you can choose to dynamically add or remove
> > > queues
> > > to the poll thread or spawn more threads and redistribute the work.
> > >
> > > F.
> >
> > Note though that SPDK doesn't support sharing the device between host and
> > the
> > guests, it takes over the nvme device, thus it makes the kernel nvme driver
> > unbind from it.
>
> That is absolutely true. However, I find it not to be a problem in practice.
>
> Hypervisor products, specially those caring about performance, efficiency and
> fairness, will dedicate NVMe devices for a particular purpose (eg. vDisk
> storage, cache, metadata) and will not share these devices for other use
> cases. That's because these products want to deterministically control the
> performance aspects of the device, which you just cannot do if you are sharing
> the device with a subsystem you do not control.
>
> For scenarios where the device must be shared and such fine grained control is
> not required, it looks like using the kernel driver with io_uring offers very
> good performance with flexibility
I see the host/guest partition split in the following way:
Guest-assigned partitions are for guests that need the lowest possible latency, and between these guests it is possible to guarantee a good enough level of fairness in my driver.
For example, in the current implementation of my driver, each guest gets its own
host submission queue.
Host-assigned partitions, on the other hand, are for significantly higher-latency IO with no guarantees, and/or for guests that need the more advanced features of full IO virtualization, for instance snapshots, thin provisioning, or replication/backup over the network.
io_uring can be used there to speed things up, but it won't reach nvme-mdev's levels of latency.
Furthermore, on NVMe drives that support WRRU (weighted round robin with an urgent priority class), it is possible to place the queues of guest-assigned partitions in the high priority class and let the host queues use the regular medium/low priority classes.
For drives that don't support WRRU, the IO throttling can be done in software on the host queues.
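To illustrate the software fallback, throttling the host queues could follow a simple token-bucket scheme. This is only a sketch of the idea in user-space Python, not code from the driver; the class name, rate, and burst values are made up for the example:

```python
import time

class TokenBucket:
    """Token bucket limiting host-queue submissions to `rate` IOs per second."""
    def __init__(self, rate, burst):
        self.rate = rate                  # tokens (IOs) replenished per second
        self.tokens = burst               # current submission budget
        self.burst = burst                # maximum budget (burst size)
        self.last = time.monotonic()      # last replenish timestamp

    def try_submit(self):
        """Return True if one IO may be submitted now, else False."""
        now = time.monotonic()
        # Replenish tokens for the elapsed time, capped at the burst size.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1000, burst=32)
# When IOs arrive much faster than the rate, only roughly the initial
# burst is admitted immediately; the rest would have to wait.
admitted = sum(bucket.try_submit() for _ in range(100))
```

A real implementation would charge tokens per command (or per byte) in the host-queue submission path and delay, rather than drop, throttled IOs.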
Host-assigned partitions also don't need polling, which allows polling to be used only for the guests that actually need low-latency IO.
This reduces the number of cores that would otherwise be lost to polling: the less work the polling core does, the less latency it contributes overall, so with fewer users you can use fewer cores to achieve the same levels of latency.
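The stop-after-timeout polling described in the cover letter can be modelled roughly as below. This is a hedged user-space sketch, not the actual kernel thread; `poll_queue` and the deque-based queue are illustrative stand-ins, and the real driver would fall back to interrupt-driven mode once polling stops:

```python
import time
from collections import deque

def poll_queue(queue, idle_timeout=0.5):
    """Busy-poll `queue` and return the list of items processed.

    Polling stops once no new work has arrived for `idle_timeout`
    seconds (the cover letter's default is 1/2 sec); a real driver
    would then rearm polling on the next doorbell write.
    """
    processed = []
    last_work = time.monotonic()
    while time.monotonic() - last_work < idle_timeout:
        if queue:
            processed.append(queue.popleft())
            last_work = time.monotonic()   # saw work: reset the idle clock
    return processed

q = deque(range(8))
done = poll_queue(q, idle_timeout=0.05)   # short timeout to keep the demo quick
```

The trade-off the thread discusses is visible here: the loop burns a CPU while idle, which is why polling is worth spending only on latency-sensitive guests.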
As for Stefan's argument, we can look at it in a slightly different way too:
while nvme-mdev can be seen as a duplication of SPDK, SPDK can equally be seen as a duplication of existing kernel functionality, which nvme-mdev can reuse for free.
Best regards,
Maxim Levitsky
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re:
2019-03-22 7:54 ` Felipe Franciosi
@ 2019-03-22 15:30 ` Keith Busch
-1 siblings, 0 replies; 3471+ messages in thread
From: Keith Busch @ 2019-03-22 15:30 UTC (permalink / raw)
To: Felipe Franciosi
Cc: Maxim Levitsky, Stefan Hajnoczi, Fam Zheng, kvm, Wolfram Sang,
linux-nvme, linux-kernel, Keith Busch, Kirti Wankhede,
Mauro Carvalho Chehab, Paul E . McKenney, Christoph Hellwig,
Sagi Grimberg, Harris, James R, Liang Cunming, Jens Axboe,
Alex Williamson, Thanos Makatos, John Ferlan, Liu Changpeng,
Greg Kroah-Hartman, Nicolas Ferre, Paolo Bonzini, Amnon Ilan,
David S . Miller
On Fri, Mar 22, 2019 at 07:54:50AM +0000, Felipe Franciosi wrote:
> >
> > Note though that SPDK doesn't support sharing the device between host and the
> > guests, it takes over the nvme device, thus it makes the kernel nvme driver
> > unbind from it.
>
> That is absolutely true. However, I find it not to be a problem in practice.
>
> Hypervisor products, specially those caring about performance, efficiency and fairness, will dedicate NVMe devices for a particular purpose (eg. vDisk storage, cache, metadata) and will not share these devices for other use cases. That's because these products want to deterministically control the performance aspects of the device, which you just cannot do if you are sharing the device with a subsystem you do not control.
I don't know, it sounds like you've traded kernel syscalls for IPC,
and I don't think one performs better than the other.
> For scenarios where the device must be shared and such fine grained control is not required, it looks like using the kernel driver with io_uring offers very good performance with flexibility.
NVMe's IO Determinism features provide fine grained control for shared
devices. It's still uncommon to find hardware supporting that, though.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re:
2019-03-22 15:30 ` Keith Busch
@ 2019-03-25 15:44 ` Felipe Franciosi
-1 siblings, 0 replies; 3471+ messages in thread
From: Felipe Franciosi @ 2019-03-25 15:44 UTC (permalink / raw)
To: Keith Busch
Cc: Maxim Levitsky, Stefan Hajnoczi, Fam Zheng, kvm, Wolfram Sang,
linux-nvme, linux-kernel, Keith Busch, Kirti Wankhede,
Mauro Carvalho Chehab, Paul E . McKenney, Christoph Hellwig,
Sagi Grimberg, Harris, James R, Liang Cunming, Jens Axboe,
Alex Williamson, Thanos Makatos, John Ferlan, Liu Changpeng,
Greg Kroah-Hartman, Nicolas Ferre, Paolo Bonzini, Amnon Ilan,
David S . Miller
Hi Keith,
> On Mar 22, 2019, at 3:30 PM, Keith Busch <kbusch@kernel.org> wrote:
>
> On Fri, Mar 22, 2019 at 07:54:50AM +0000, Felipe Franciosi wrote:
>>>
>>> Note though that SPDK doesn't support sharing the device between host and the
>>> guests, it takes over the nvme device, thus it makes the kernel nvme driver
>>> unbind from it.
>>
>> That is absolutely true. However, I find it not to be a problem in practice.
>>
>> Hypervisor products, specially those caring about performance, efficiency and fairness, will dedicate NVMe devices for a particular purpose (eg. vDisk storage, cache, metadata) and will not share these devices for other use cases. That's because these products want to deterministically control the performance aspects of the device, which you just cannot do if you are sharing the device with a subsystem you do not control.
>
> I don't know, it sounds like you've traded kernel syscalls for IPC,
> and I don't think one performs better than the other.
Sorry, I'm not sure I understand. My point is that if you are packaging a distro to be a hypervisor and you want to use a storage device for VM data, you _most likely_ won't be using that device for anything else. To that end, driving the device directly from your application definitely gives you more deterministic control.
>
>> For scenarios where the device must be shared and such fine grained control is not required, it looks like using the kernel driver with io_uring offers very good performance with flexibility.
>
> NVMe's IO Determinism features provide fine grained control for shared
> devices. It's still uncommon to find hardware supporting that, though.
Sure, but then your hypervisor needs to certify devices that support that, which will limit your HCL (hardware compatibility list). Moreover, unless the feature is solid, well established, and works reliably on all devices you support, it's arguably preferable to have an architecture that gives you that control in software.
Cheers,
Felipe
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: [PATCH 0/9] RFC: NVME VFIO mediated device
2019-03-19 14:41 ` No subject Maxim Levitsky
@ 2019-03-20 15:08 ` Bart Van Assche
-1 siblings, 0 replies; 3471+ messages in thread
From: Bart Van Assche @ 2019-03-20 15:08 UTC (permalink / raw)
To: Maxim Levitsky, linux-nvme
Cc: Fam Zheng, Keith Busch, Sagi Grimberg, kvm, David S . Miller,
Greg Kroah-Hartman, Liang Cunming, Wolfram Sang, linux-kernel,
Kirti Wankhede, Jens Axboe, Alex Williamson, John Ferlan,
Mauro Carvalho Chehab, Paolo Bonzini, Liu Changpeng,
Paul E . McKenney, Amnon Ilan, Christoph Hellwig, Nicolas Ferre
On Tue, 2019-03-19 at 16:41 +0200, Maxim Levitsky wrote:
> * Polling kernel thread is used. The polling is stopped after a
> predefined timeout (1/2 sec by default).
> Support for all interrupt driven mode is planned, and it shows promising results.
Which cgroup will the CPU cycles used for polling be attributed to? Can the
polling code be moved into user space such that it becomes easy to identify
which process needs most CPU cycles for polling and such that the polling
CPU cycles are attributed to the proper cgroup?
Thanks,
Bart.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: [PATCH 0/9] RFC: NVME VFIO mediated device
2019-03-20 15:08 ` Bart Van Assche
@ 2019-03-20 16:48 ` Maxim Levitsky
-1 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-20 16:48 UTC (permalink / raw)
To: Bart Van Assche, linux-nvme
Cc: Fam Zheng, Keith Busch, Sagi Grimberg, kvm, David S . Miller,
Greg Kroah-Hartman, Liang Cunming, Wolfram Sang, linux-kernel,
Kirti Wankhede, Jens Axboe, Alex Williamson, John Ferlan,
Mauro Carvalho Chehab, Paolo Bonzini, Liu Changpeng,
Paul E . McKenney, Amnon Ilan, Christoph Hellwig, Nicolas Ferre
On Wed, 2019-03-20 at 08:08 -0700, Bart Van Assche wrote:
> On Tue, 2019-03-19 at 16:41 +0200, Maxim Levitsky wrote:
> > * Polling kernel thread is used. The polling is stopped after a
> > predefined timeout (1/2 sec by default).
> > Support for all interrupt driven mode is planned, and it shows promising
> > results.
>
> Which cgroup will the CPU cycles used for polling be attributed to? Can the
> polling code be moved into user space such that it becomes easy to identify
> which process needs most CPU cycles for polling and such that the polling
> CPU cycles are attributed to the proper cgroup?
Currently there is a single IO thread per virtual controller instance.
I would prefer to keep the whole driver in the kernel, but I think I can make it
cgroup aware, in a similar way to how this is done in vhost-net and vhost-scsi.
Best regards,
Maxim Levitsky
> Thanks,
>
> Bart.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: [PATCH 0/9] RFC: NVME VFIO mediated device
2019-03-19 14:41 ` No subject Maxim Levitsky
@ 2019-03-20 15:28 ` Bart Van Assche
-1 siblings, 0 replies; 3471+ messages in thread
From: Bart Van Assche @ 2019-03-20 15:28 UTC (permalink / raw)
To: Maxim Levitsky, linux-nvme
Cc: Fam Zheng, Keith Busch, Sagi Grimberg, kvm, David S . Miller,
Greg Kroah-Hartman, Liang Cunming, Wolfram Sang, linux-kernel,
Kirti Wankhede, Jens Axboe, Alex Williamson, John Ferlan,
Mauro Carvalho Chehab, Paolo Bonzini, Liu Changpeng,
Paul E . McKenney, Amnon Ilan, Christoph Hellwig, Nicolas Ferre
On Tue, 2019-03-19 at 16:41 +0200, Maxim Levitsky wrote:
> * All guest memory is mapped into the physical nvme device
> but not 1:1 as vfio-pci would do this.
> This allows very efficient DMA.
> To support this, patch 2 adds ability for a mdev device to listen on
> guest's memory map events.
> Any such memory is immediately pinned and then DMA mapped.
> (Support for fabric drivers where this is not possible exits too,
> in which case the fabric driver will do its own DMA mapping)
Does this mean that all guest memory is pinned all the time? If so, are you
sure that's acceptable?
Additionally, what is the performance overhead of the IOMMU notifier added
by patch 8/9? How often was that notifier called per second in your tests
and how much time was spent per call in the notifier callbacks?
Thanks,
Bart.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: [PATCH 0/9] RFC: NVME VFIO mediated device
2019-03-20 15:28 ` Bart Van Assche
@ 2019-03-20 16:42 ` Maxim Levitsky
-1 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-20 16:42 UTC (permalink / raw)
To: Bart Van Assche, linux-nvme
Cc: Fam Zheng, Jens Axboe, Alex Williamson, Sagi Grimberg, kvm,
Wolfram Sang, Greg Kroah-Hartman, Liang Cunming, Nicolas Ferre,
linux-kernel, Liu Changpeng, Keith Busch, Kirti Wankhede,
Christoph Hellwig, Paolo Bonzini, Mauro Carvalho Chehab,
John Ferlan, Paul E . McKenney, Amnon Ilan, David S . Miller
On Wed, 2019-03-20 at 08:28 -0700, Bart Van Assche wrote:
> On Tue, 2019-03-19 at 16:41 +0200, Maxim Levitsky wrote:
> > * All guest memory is mapped into the physical nvme device
> > but not 1:1 as vfio-pci would do this.
> > This allows very efficient DMA.
> > To support this, patch 2 adds ability for a mdev device to listen on
> > guest's memory map events.
> > Any such memory is immediately pinned and then DMA mapped.
> > (Support for fabric drivers where this is not possible exits too,
> > in which case the fabric driver will do its own DMA mapping)
>
> Does this mean that all guest memory is pinned all the time? If so, are you
> sure that's acceptable?
I think so. VFIO PCI passthrough also pins all the guest memory.
SPDK also pins and DMA maps all the guest memory.
I agree that this is not an ideal solution, but it is the fastest and simplest
solution possible.
>
> Additionally, what is the performance overhead of the IOMMU notifier added
> by patch 8/9? How often was that notifier called per second in your tests
> and how much time was spent per call in the notifier callbacks?
To be honest I haven't optimized my IOMMU notifier at all, so when it is called,
it stops the IO thread, does its work and then restarts it, which is very slow.
Fortunately it is not called at all during normal operation, as VFIO dma map/unmap
events are really rare and happen only at guest boot.
The same is true even for nested guests: nested guest startup causes a wave
of map/unmap events while the shadow IOMMU is updated, but after that the guest
just uses these mappings without changing them.
The only case when performance is really bad is when you boot a guest with
iommu=on intel_iommu=on and then use the nvme driver there. In this case, the
driver in the guest does its own IOMMU maps/unmaps (on the virtual IOMMU), and for
each such event my VFIO map/unmap callback is called.
This could be optimized considerably, for example with some kind of queued
invalidation in my driver. Meanwhile, iommu=pt in the guest avoids the issue.
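The queued-invalidation idea amounts to batching: the notifier path only records ranges, and the expensive quiesce of the IO thread is paid once per batch instead of once per unmap. A minimal sketch, with all names hypothetical rather than taken from the series:

```c
#define INVAL_QUEUE_LEN 64

struct inval_range { unsigned long long iova; unsigned long long len; };

static struct inval_range inval_queue[INVAL_QUEUE_LEN];
static unsigned int inval_count;
int io_thread_restarts;          /* counts the expensive stop/start cycles */

static void io_thread_stop(void)  { /* quiesce the polling thread */ }
static void io_thread_start(void) { io_thread_restarts++; }
static void unmap_one(struct inval_range r) { (void)r; /* drop the DMA mapping */ }

/* One quiesce per batch instead of one per range. */
void flush_invalidations(void)
{
    unsigned int i;

    if (inval_count == 0)
        return;
    io_thread_stop();
    for (i = 0; i < inval_count; i++)
        unmap_one(inval_queue[i]);
    inval_count = 0;
    io_thread_start();
}

/* Notifier path: cheap, just records the range. */
void queue_invalidation(unsigned long long iova, unsigned long long len)
{
    if (inval_count == INVAL_QUEUE_LEN)
        flush_invalidations();
    inval_queue[inval_count].iova = iova;
    inval_queue[inval_count].len = len;
    inval_count++;
}
```

With this shape, a guest driver doing per-IO map/unmap on the virtual IOMMU would trigger at most one IO-thread restart per batch of invalidations rather than one per event.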
Best regards,
Maxim Levitsky
>
> Thanks,
>
> Bart.
>
> _______________________________________________
> Linux-nvme mailing list
> Linux-nvme@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-nvme
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: [PATCH 0/9] RFC: NVME VFIO mediated device
2019-03-20 16:42 ` Maxim Levitsky
(?)
@ 2019-03-20 17:03 ` Alex Williamson
-1 siblings, 0 replies; 3471+ messages in thread
From: Alex Williamson @ 2019-03-20 17:03 UTC (permalink / raw)
To: Maxim Levitsky
Cc: Bart Van Assche, linux-nvme, Fam Zheng, Jens Axboe,
Sagi Grimberg, kvm, Wolfram Sang, Greg Kroah-Hartman,
Liang Cunming, Nicolas Ferre, linux-kernel, Liu Changpeng,
Keith Busch, Kirti Wankhede, Christoph Hellwig, Paolo Bonzini,
Mauro Carvalho Chehab, John Ferlan, Paul E . McKenney,
Amnon Ilan, David S . Miller
On Wed, 20 Mar 2019 18:42:02 +0200
Maxim Levitsky <mlevitsk@redhat.com> wrote:
> On Wed, 2019-03-20 at 08:28 -0700, Bart Van Assche wrote:
> > On Tue, 2019-03-19 at 16:41 +0200, Maxim Levitsky wrote:
> > > * All guest memory is mapped into the physical nvme device
> > > but not 1:1 as vfio-pci would do this.
> > > This allows very efficient DMA.
> > > To support this, patch 2 adds ability for a mdev device to listen on
> > > guest's memory map events.
> > > Any such memory is immediately pinned and then DMA mapped.
> > > (Support for fabric drivers where this is not possible exits too,
> > > in which case the fabric driver will do its own DMA mapping)
> >
> > Does this mean that all guest memory is pinned all the time? If so, are you
> > sure that's acceptable?
> I think so. The VFIO pci passthrough also pins all the guest memory.
> SPDK also does this (pins and dma maps) all the guest memory.
>
> I agree that this is not an ideal solution but this is a fastest and simplest
> solution possible.
FWIW, pinned memory requests made up through the vfio iommu driver count
against the user's locked memory limit, if that's the concern. Thanks,
Alex
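The accounting Alex describes can be illustrated from the process side: pages the kernel pins on a user's behalf are charged against RLIMIT_MEMLOCK. A hedged sketch of the limit arithmetic (illustrative helpers, not VFIO code):

```c
#include <sys/resource.h>
#include <stddef.h>

/* Pure policy check: would pinning `len` more bytes, with `pinned` bytes
 * already accounted, exceed a locked-memory limit of `lim` bytes? */
int exceeds_memlock(rlim_t lim, size_t pinned, size_t len)
{
    if (lim == RLIM_INFINITY)
        return 0;           /* no limit configured */
    if (pinned >= lim)
        return 1;           /* already at or over the limit */
    return len > lim - pinned;
}

/* How a user-space process could check its own limit before asking the
 * kernel to pin guest memory on its behalf: */
int can_pin(size_t pinned, size_t len)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0)
        return 0;
    return !exceeds_memlock(rl.rlim_cur, pinned, len);
}
```

In practice a VMM that pins all guest RAM this way needs its locked-memory limit raised to at least the guest memory size, which is the usual setup for vfio-pci passthrough as well.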
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: your mail
2019-03-19 14:41 ` No subject Maxim Levitsky
(?)
@ 2019-03-21 16:13 ` Stefan Hajnoczi
-1 siblings, 0 replies; 3471+ messages in thread
From: Stefan Hajnoczi @ 2019-03-21 16:13 UTC (permalink / raw)
To: Maxim Levitsky
Cc: linux-nvme, linux-kernel, kvm, Jens Axboe, Alex Williamson,
Keith Busch, Christoph Hellwig, Sagi Grimberg, Kirti Wankhede,
David S . Miller, Mauro Carvalho Chehab, Greg Kroah-Hartman,
Wolfram Sang, Nicolas Ferre, Paul E . McKenney ,
Paolo Bonzini, Liang Cunming, Liu Changpeng, Fam Zheng,
Amnon Ilan, John Ferlan
[-- Attachment #1: Type: text/plain, Size: 2018 bytes --]
On Tue, Mar 19, 2019 at 04:41:07PM +0200, Maxim Levitsky wrote:
> Date: Tue, 19 Mar 2019 14:45:45 +0200
> Subject: [PATCH 0/9] RFC: NVME VFIO mediated device
>
> Hi everyone!
>
> In this patch series, I would like to introduce my take on the problem of doing
> as fast as possible virtualization of storage with emphasis on low latency.
>
> In this patch series I implemented a kernel vfio based, mediated device that
> allows the user to pass through a partition and/or whole namespace to a guest.
>
> The idea behind this driver is based on paper you can find at
> https://www.usenix.org/conference/atc18/presentation/peng,
>
> Although note that I stared the development prior to reading this paper,
> independently.
>
> In addition to that implementation is not based on code used in the paper as
> I wasn't being able at that time to make the source available to me.
>
> ***Key points about the implementation:***
>
> * Polling kernel thread is used. The polling is stopped after a
> predefined timeout (1/2 sec by default).
> Support for all interrupt driven mode is planned, and it shows promising results.
>
> * Guest sees a standard NVME device - this allows to run guest with
> unmodified drivers, for example windows guests.
>
> * The NVMe device is shared between host and guest.
> That means that even a single namespace can be split between host
> and guest based on different partitions.
>
> * Simple configuration
>
> *** Performance ***
>
> Performance was tested on Intel DC P3700, With Xeon E5-2620 v2
> and both latency and throughput is very similar to SPDK.
>
> Soon I will test this on a better server and nvme device and provide
> more formal performance numbers.
>
> Latency numbers:
> ~80ms - spdk with fio plugin on the host.
> ~84ms - nvme driver on the host
> ~87ms - mdev-nvme + nvme driver in the guest
You mentioned the spdk numbers are with vhost-user-nvme. Have you
measured SPDK's vhost-user-blk?
Stefan
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 455 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: your mail
2019-03-21 16:13 ` Stefan Hajnoczi
(?)
@ 2019-03-21 17:07 ` Maxim Levitsky
-1 siblings, 0 replies; 3471+ messages in thread
From: Maxim Levitsky @ 2019-03-21 17:07 UTC (permalink / raw)
To: Stefan Hajnoczi
Cc: linux-nvme, linux-kernel, kvm, Jens Axboe, Alex Williamson,
Keith Busch, Christoph Hellwig, Sagi Grimberg, Kirti Wankhede,
David S . Miller, Mauro Carvalho Chehab, Greg Kroah-Hartman,
Wolfram Sang, Nicolas Ferre, Paul E . McKenney, Paolo Bonzini,
Liang Cunming, Liu Changpeng, Fam Zheng, Amnon Ilan, John Ferlan
On Thu, 2019-03-21 at 16:13 +0000, Stefan Hajnoczi wrote:
> On Tue, Mar 19, 2019 at 04:41:07PM +0200, Maxim Levitsky wrote:
> > Date: Tue, 19 Mar 2019 14:45:45 +0200
> > Subject: [PATCH 0/9] RFC: NVME VFIO mediated device
> >
> > Hi everyone!
> >
> > In this patch series, I would like to introduce my take on the problem of
> > doing
> > as fast as possible virtualization of storage with emphasis on low latency.
> >
> > In this patch series I implemented a kernel vfio based, mediated device
> > that
> > allows the user to pass through a partition and/or whole namespace to a
> > guest.
> >
> > The idea behind this driver is based on paper you can find at
> > https://www.usenix.org/conference/atc18/presentation/peng,
> >
> > Although note that I stared the development prior to reading this paper,
> > independently.
> >
> > In addition to that implementation is not based on code used in the paper
> > as
> > I wasn't being able at that time to make the source available to me.
> >
> > ***Key points about the implementation:***
> >
> > * Polling kernel thread is used. The polling is stopped after a
> > predefined timeout (1/2 sec by default).
> > Support for all interrupt driven mode is planned, and it shows promising
> > results.
> >
> > * Guest sees a standard NVME device - this allows to run guest with
> > unmodified drivers, for example windows guests.
> >
> > * The NVMe device is shared between host and guest.
> > That means that even a single namespace can be split between host
> > and guest based on different partitions.
> >
> > * Simple configuration
> >
> > *** Performance ***
> >
> > Performance was tested on Intel DC P3700, With Xeon E5-2620 v2
> > and both latency and throughput is very similar to SPDK.
> >
> > Soon I will test this on a better server and nvme device and provide
> > more formal performance numbers.
> >
> > Latency numbers:
> > ~80ms - spdk with fio plugin on the host.
> > ~84ms - nvme driver on the host
> > ~87ms - mdev-nvme + nvme driver in the guest
>
> You mentioned the spdk numbers are with vhost-user-nvme. Have you
> measured SPDK's vhost-user-blk?
I have done a lot of measurements of vhost-user-blk vs vhost-user-nvme.
vhost-user-nvme was always a bit faster, but only a bit.
Thus I don't think it makes sense to benchmark against vhost-user-blk.
Best regards,
Maxim Levitsky
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: your mail
2019-03-21 17:07 ` Maxim Levitsky
(?)
@ 2019-03-25 16:46 ` Stefan Hajnoczi
-1 siblings, 0 replies; 3471+ messages in thread
From: Stefan Hajnoczi @ 2019-03-25 16:46 UTC (permalink / raw)
To: Maxim Levitsky
Cc: linux-nvme, linux-kernel, kvm, Jens Axboe, Alex Williamson,
Keith Busch, Christoph Hellwig, Sagi Grimberg, Kirti Wankhede,
David S . Miller, Mauro Carvalho Chehab, Greg Kroah-Hartman,
Wolfram Sang, Nicolas Ferre, Paul E . McKenney, Paolo Bonzini,
Liang Cunming, Liu Changpeng, Fam Zheng, Amnon Ilan, John Ferlan
[-- Attachment #1: Type: text/plain, Size: 2913 bytes --]
On Thu, Mar 21, 2019 at 07:07:38PM +0200, Maxim Levitsky wrote:
> On Thu, 2019-03-21 at 16:13 +0000, Stefan Hajnoczi wrote:
> > On Tue, Mar 19, 2019 at 04:41:07PM +0200, Maxim Levitsky wrote:
> > > Date: Tue, 19 Mar 2019 14:45:45 +0200
> > > Subject: [PATCH 0/9] RFC: NVME VFIO mediated device
> > >
> > > Hi everyone!
> > >
> > > In this patch series, I would like to introduce my take on the problem of
> > > virtualizing storage as fast as possible, with an emphasis on low latency.
> > >
> > > In this patch series I implemented a kernel VFIO-based mediated device
> > > that allows the user to pass through a partition and/or a whole namespace
> > > to a guest.
> > >
> > > The idea behind this driver is based on the paper you can find at
> > > https://www.usenix.org/conference/atc18/presentation/peng,
> > > although note that I started the development independently, prior to
> > > reading this paper.
> > >
> > > In addition, the implementation is not based on the code used in the
> > > paper, as its source was not available to me at that time.
> > >
> > > ***Key points about the implementation:***
> > >
> > > * A polling kernel thread is used. Polling is stopped after a
> > > predefined idle timeout (1/2 sec by default).
> > > Support for a fully interrupt-driven mode is planned, and early results
> > > are promising.
> > >
> > > * The guest sees a standard NVMe device - this allows running guests with
> > > unmodified drivers, for example Windows guests.
> > >
> > > * The NVMe device is shared between host and guest.
> > > That means that even a single namespace can be split between host
> > > and guest based on different partitions.
> > >
> > > * Simple configuration
> > >
> > > *** Performance ***
> > >
> > > Performance was tested on an Intel DC P3700 with a Xeon E5-2620 v2;
> > > both latency and throughput are very similar to SPDK.
> > >
> > > Soon I will test this on a better server and NVMe device and provide
> > > more formal performance numbers.
> > >
> > > Latency numbers:
> > > ~80µs - SPDK with the fio plugin on the host
> > > ~84µs - NVMe driver on the host
> > > ~87µs - mdev-nvme + NVMe driver in the guest
> >
> > You mentioned the spdk numbers are with vhost-user-nvme. Have you
> > measured SPDK's vhost-user-blk?
>
> I have done a lot of measurements of vhost-user-blk vs vhost-user-nvme.
> vhost-user-nvme was always a bit faster, but only a bit.
> Thus I don't think it makes sense to benchmark against vhost-user-blk.
It's interesting because mdev-nvme is closest to the hardware while
vhost-user-blk is closest to software. Doing things at the NVMe level
isn't buying much performance because it's still going through a
software path comparable to vhost-user-blk.
From what you say it sounds like there isn't much to optimize away :(.
Stefan
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 455 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2020-06-30 17:56 Vasiliy Kupriakov
0 siblings, 0 replies; 3471+ messages in thread
From: Vasiliy Kupriakov @ 2020-06-30 17:56 UTC (permalink / raw)
To: Corentin Chary, Darren Hart, Andy Shevchenko
Cc: Vasiliy Kupriakov,
open list:ASUS NOTEBOOKS AND EEEPC ACPI/WMI EXTRAS DRIVERS,
open list:ASUS NOTEBOOKS AND EEEPC ACPI/WMI EXTRAS DRIVERS,
open list
Subject: [PATCH] platform/x86: asus-wmi: allow BAT1 battery name
The battery on my laptop ASUS TUF Gaming FX706II is named BAT1.
This patch allows the battery extension to load for it.
Signed-off-by: Vasiliy Kupriakov <rublag-ns@yandex.ru>
---
drivers/platform/x86/asus-wmi.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/platform/x86/asus-wmi.c b/drivers/platform/x86/asus-wmi.c
index 877aade19497..8f4acdc06b13 100644
--- a/drivers/platform/x86/asus-wmi.c
+++ b/drivers/platform/x86/asus-wmi.c
@@ -441,6 +441,7 @@ static int asus_wmi_battery_add(struct power_supply *battery)
* battery is named BATT.
*/
if (strcmp(battery->desc->name, "BAT0") != 0 &&
+ strcmp(battery->desc->name, "BAT1") != 0 &&
strcmp(battery->desc->name, "BATT") != 0)
return -ENODEV;
--
2.27.0
^ permalink raw reply related [flat|nested] 3471+ messages in thread
* (unknown)
@ 2020-03-27 9:20 chenanqing
0 siblings, 0 replies; 3471+ messages in thread
From: chenanqing @ 2020-03-27 9:20 UTC (permalink / raw)
To: chenanqing, linux-kernel, linux-scsi, open-iscsi, ceph-devel,
martin.petersen, jejb, cleech, lduncan
From: Chen Anqing <chenanqing@oppo.com>
To: Lee Duncan <lduncan@suse.com>
Cc: Chris Leech <cleech@redhat.com>,
"James E . J . Bottomley" <jejb@linux.ibm.com>,
"Martin K . Petersen" <martin.petersen@oracle.com>,
ceph-devel@vger.kernel.org,
open-iscsi@googlegroups.com,
linux-scsi@vger.kernel.org,
linux-kernel@vger.kernel.org,
chenanqing@oppo.com
Subject: [PATCH] scsi: libiscsi: we should take compound page into account also
Date: Fri, 27 Mar 2020 05:20:01 -0400
Message-Id: <20200327092001.56879-1-chenanqing@oppo.com>
X-Mailer: git-send-email 2.18.2
This patch addresses a real crash in which the slab object came from a
compound page, so we need to take compound pages into account as well.

Fixes: 08b11eaccfcf ("scsi: libiscsi: fall back to sendmsg for slab pages")
Signed-off-by: Chen Anqing <chenanqing@oppo.com>
---
drivers/scsi/libiscsi_tcp.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/scsi/libiscsi_tcp.c b/drivers/scsi/libiscsi_tcp.c
index 6ef93c7af954..98304e5e1f6f 100644
--- a/drivers/scsi/libiscsi_tcp.c
+++ b/drivers/scsi/libiscsi_tcp.c
@@ -128,7 +128,8 @@ static void iscsi_tcp_segment_map(struct iscsi_segment *segment, int recv)
* coalescing neighboring slab objects into a single frag which
* triggers one of hardened usercopy checks.
*/
- if (!recv && page_count(sg_page(sg)) >= 1 && !PageSlab(sg_page(sg)))
+ if (!recv && page_count(sg_page(sg)) >= 1 &&
+ !PageSlab(compound_head(sg_page(sg))))
return;
if (recv) {
--
2.18.2
________________________________
OPPO
This e-mail and its attachments contain confidential information from OPPO, which is intended only for the person or entity whose address is listed above. Any use of the information contained herein in any way (including, but not limited to, total or partial disclosure, reproduction, or dissemination) by persons other than the intended recipient(s) is prohibited. If you receive this e-mail in error, please notify the sender by phone or email immediately and delete it!
^ permalink raw reply related [flat|nested] 3471+ messages in thread
* (unknown)
@ 2020-03-27 8:36 chenanqing
0 siblings, 0 replies; 3471+ messages in thread
From: chenanqing @ 2020-03-27 8:36 UTC (permalink / raw)
To: chenanqing, linux-kernel, netdev, ceph-devel, kuba, sage,
jlayton, idryomov
From: Chen Anqing <chenanqing@oppo.com>
To: Ilya Dryomov <idryomov@gmail.com>
Cc: Jeff Layton <jlayton@kernel.org>,
Sage Weil <sage@redhat.com>,
Jakub Kicinski <kuba@kernel.org>,
ceph-devel@vger.kernel.org,
netdev@vger.kernel.org,
linux-kernel@vger.kernel.org,
chenanqing@oppo.com
Subject: [PATCH] libceph: we should take compound page into account also
Date: Fri, 27 Mar 2020 04:36:30 -0400
Message-Id: <20200327083630.36296-1-chenanqing@oppo.com>
X-Mailer: git-send-email 2.18.2
This patch addresses a real crash in which the slab object came from a
compound page, so we need to take compound pages into account as well.

Fixes: 7e241f647dc7 ("libceph: fall back to sendmsg for slab pages")
Signed-off-by: Chen Anqing <chenanqing@oppo.com>
---
net/ceph/messenger.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/net/ceph/messenger.c b/net/ceph/messenger.c
index f8ca5edc5f2c..e08c1c334cd9 100644
--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -582,7 +582,7 @@ static int ceph_tcp_sendpage(struct socket *sock, struct page *page,
* coalescing neighboring slab objects into a single frag which
* triggers one of hardened usercopy checks.
*/
- if (page_count(page) >= 1 && !PageSlab(page))
+ if (page_count(page) >= 1 && !PageSlab(compound_head(page)))
sendpage = sock->ops->sendpage;
else
sendpage = sock_no_sendpage;
--
2.18.2
^ permalink raw reply related [flat|nested] 3471+ messages in thread
* (unknown)
@ 2020-03-17 0:11 David Ibe
0 siblings, 0 replies; 3471+ messages in thread
From: David Ibe @ 2020-03-17 0:11 UTC (permalink / raw)
Good Day,
I am Mr. David Ibe, I work with the International Standards on Auditing, I have seen on records, that several times people has divert your funds into their own personal accounts.
Now I am writing to you in respect of the amount which I have been able to send to you through our International United Nations accredited and approved Diplomat, who has arrived Africa, I want you to know that the diplomat would deliver the funds which I have packaged as a diplomatic compensation to you and the amount in the consignment is $10,000,000.00 United State Dollars.
I did not disclose the contents to the diplomat, but I told him that it is your compensation from the Auditing Corporate Governance and Stewardship, Auditing and Assurance Standards Board. I want you to know that these funds would help with your financial status as I have seen in records that you have spent a lot trying to receive these funds and I am not demanding so much from you but only 30% for my stress and logistics.
I would like you to get back to me with your personal contact details, so that I can give you the contact information's of the diplomat who has arrived Africa and has been waiting to get your details so that he can proceed with the delivery to you.
Yours Sincerely,
Kindly forward your details to: mrdavidibe966@gmail.com
Mr. David Ibe
International Auditor,
Corporate Governance and Stewardship
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2020-03-09 8:43 Michael J. Weirsky
0 siblings, 0 replies; 3471+ messages in thread
From: Michael J. Weirsky @ 2020-03-09 8:43 UTC (permalink / raw)
--
My name is Michael J. Weirsky. I'm an unemployed handyman and the winner of a
$273 million jackpot on March 8, 2019. I donate $1,000,000.00 to you.
Contact me via email: micjsky@aol.com for info / claim.
Continue reading:
https://abcnews.go.com/WNT/video/jersey-handyman-forward-273m-lottery-winner-61544244
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2020-03-05 10:47 Juanito S. Galang
0 siblings, 0 replies; 3471+ messages in thread
From: Juanito S. Galang @ 2020-03-05 10:47 UTC (permalink / raw)
Congratulations, dear beneficiary. You are receiving this e-mail from the Robert Bailey Foundation. I am a retired government employee from Harlem and a winner of the Powerball lottery jackpot worth 343.8 million US dollars. I am the biggest jackpot winner in the history of the New York lottery in the United States of America. I won this lottery on October 27, 2018, and would like to inform you that Google, in cooperation with Microsoft, submitted your "e-mail address" in response to my request to pay out a donation of 3,000,000.00 euros. I am donating these 3 million euros to you to help the charity homes and poor people in your community, so that we can make the world better for everyone. For more information, and so that you are not skeptical about this donation of EUR 3 million, see the following website:
https://nypost.com/2018/11/14/meet-the-winner-of-the-biggest-lottery-jackpot-in-new-york-history/
You can also watch my YouTube video for further confirmation:
https://www.youtube.com/watch?v=H5vT18Ysavc
Please note that all replies should be sent to (robertdonation7@gmail.com) so that we can proceed with transferring the donated money to you. E-mail: robertdonation7@gmail.com
Kind regards,
Robert Bailey
* * * * * * * * * * * * * * * *
Powerball jackpot winner
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2020-03-05 2:33 Maria Alessandra Filippi
0 siblings, 0 replies; 3471+ messages in thread
From: Maria Alessandra Filippi @ 2020-03-05 2:33 UTC (permalink / raw)
Hello,
I am Mrs. Maria Elisabeth Schaeffler, a German business magnate, investor, and philanthropist. I am the chairman of Wipro Limited. I have spent 25 percent of my personal wealth on charitable causes, and I have also promised to give away the remaining 25% to individuals this year, 2020. I have decided to donate 1,000,000.00 euros to you. If you are interested in my donation, contact me for more information.
You can also read more about me via the link below:
https://en.wikipedia.org/wiki/Maria-Elisabeth_Schaeffler
Kind regards,
Managing Director, Wipro Limited
Maria-Elisabeth_Schaeffler
E-Mail: mrsmariaelisabethschaeffler11@gmail.com
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2020-03-04 9:42 Julie Leach
0 siblings, 0 replies; 3471+ messages in thread
From: Julie Leach @ 2020-03-04 9:42 UTC (permalink / raw)
--
Hello dear, I have a donation of 3,000,000.00 euros, which I have set aside
for you as charity to help the less privileged and orphans in your
community. Please reply via: julieleeach@gmail.com
^ permalink raw reply [flat|nested] 3471+ messages in thread
* kernel panic: audit: backlog limit exceeded
@ 2020-02-24 8:18 syzbot
2020-02-24 22:38 ` Paul Moore
0 siblings, 1 reply; 3471+ messages in thread
From: syzbot @ 2020-02-24 8:18 UTC (permalink / raw)
To: a, b.a.t.m.a.n, dan.carpenter, davem, eparis, fzago, gregkh,
john.hammond, linux-audit, linux-kernel, mareklindner, netdev,
paul, sw, syzkaller-bugs
Hello,
syzbot found the following crash on:
HEAD commit: 36a44bcd Merge branch 'bnxt_en-shutdown-and-kexec-kdump-re..
git tree: net
console output: https://syzkaller.appspot.com/x/log.txt?x=148bfdd9e00000
kernel config: https://syzkaller.appspot.com/x/.config?x=768cc3d3e277cc16
dashboard link: https://syzkaller.appspot.com/bug?extid=9a5e789e4725b9ef1316
compiler: gcc (GCC) 9.0.0 20181231 (experimental)
syz repro: https://syzkaller.appspot.com/x/repro.syz?x=151b1109e00000
C reproducer: https://syzkaller.appspot.com/x/repro.c?x=128bfdd9e00000
The bug was bisected to:
commit 0c1b9970ddd4cc41002321c3877e7f91aacb896d
Author: Dan Carpenter <dan.carpenter@oracle.com>
Date: Fri Jul 28 14:42:27 2017 +0000
staging: lustre: lustre: Off by two in lmv_fid2path()
bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=17e6c3e9e00000
final crash: https://syzkaller.appspot.com/x/report.txt?x=1416c3e9e00000
console output: https://syzkaller.appspot.com/x/log.txt?x=1016c3e9e00000
IMPORTANT: if you fix the bug, please add the following tag to the commit:
Reported-by: syzbot+9a5e789e4725b9ef1316@syzkaller.appspotmail.com
Fixes: 0c1b9970ddd4 ("staging: lustre: lustre: Off by two in lmv_fid2path()")
audit: audit_backlog=13 > audit_backlog_limit=7
audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=7
Kernel panic - not syncing: audit: backlog limit exceeded
CPU: 1 PID: 9913 Comm: syz-executor024 Not tainted 5.6.0-rc1-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x197/0x210 lib/dump_stack.c:118
panic+0x2e3/0x75c kernel/panic.c:221
audit_panic.cold+0x32/0x32 kernel/audit.c:307
audit_log_lost kernel/audit.c:377 [inline]
audit_log_lost+0x8b/0x180 kernel/audit.c:349
audit_log_start kernel/audit.c:1788 [inline]
audit_log_start+0x70e/0x7c0 kernel/audit.c:1745
audit_log+0x95/0x120 kernel/audit.c:2345
xt_replace_table+0x61d/0x830 net/netfilter/x_tables.c:1413
__do_replace+0x1da/0x950 net/ipv6/netfilter/ip6_tables.c:1084
do_replace net/ipv6/netfilter/ip6_tables.c:1157 [inline]
do_ip6t_set_ctl+0x33a/0x4c8 net/ipv6/netfilter/ip6_tables.c:1681
nf_sockopt net/netfilter/nf_sockopt.c:106 [inline]
nf_setsockopt+0x77/0xd0 net/netfilter/nf_sockopt.c:115
ipv6_setsockopt net/ipv6/ipv6_sockglue.c:949 [inline]
ipv6_setsockopt+0x147/0x180 net/ipv6/ipv6_sockglue.c:933
tcp_setsockopt net/ipv4/tcp.c:3165 [inline]
tcp_setsockopt+0x8f/0xe0 net/ipv4/tcp.c:3159
sock_common_setsockopt+0x94/0xd0 net/core/sock.c:3149
__sys_setsockopt+0x261/0x4c0 net/socket.c:2130
__do_sys_setsockopt net/socket.c:2146 [inline]
__se_sys_setsockopt net/socket.c:2143 [inline]
__x64_sys_setsockopt+0xbe/0x150 net/socket.c:2143
do_syscall_64+0xfa/0x790 arch/x86/entry/common.c:294
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x44720a
Code: 49 89 ca b8 37 00 00 00 0f 05 48 3d 01 f0 ff ff 0f 83 1a e0 fb ff c3 66 0f 1f 84 00 00 00 00 00 49 89 ca b8 36 00 00 00 0f 05 <48> 3d 01 f0 ff ff 0f 83 fa df fb ff c3 66 0f 1f 84 00 00 00 00 00
RSP: 002b:00007ffd032dec78 EFLAGS: 00000286 ORIG_RAX: 0000000000000036
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 000000000044720a
RDX: 0000000000000040 RSI: 0000000000000029 RDI: 0000000000000003
RBP: 00007ffd032deda0 R08: 00000000000003b8 R09: 0000000000004000
R10: 00000000006d7b40 R11: 0000000000000286 R12: 00007ffd032deca0
R13: 00000000006d9d60 R14: 0000000000000029 R15: 00000000006d7ba0
Kernel Offset: disabled
Rebooting in 86400 seconds..
---
This bug is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkaller@googlegroups.com.
syzbot will keep track of this bug report. See:
https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
For information about bisection process see: https://goo.gl/tpsmEJ#bisection
syzbot can test patches for this bug, for details see:
https://goo.gl/tpsmEJ#testing-patches
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: kernel panic: audit: backlog limit exceeded
2020-02-24 8:18 kernel panic: audit: backlog limit exceeded syzbot
@ 2020-02-24 22:38 ` Paul Moore
2020-02-24 22:43 ` Eric Paris
0 siblings, 1 reply; 3471+ messages in thread
From: Paul Moore @ 2020-02-24 22:38 UTC (permalink / raw)
To: syzbot
Cc: a, b.a.t.m.a.n, dan.carpenter, davem, Eric Paris, fzago, gregkh,
john.hammond, linux-audit, linux-kernel, mareklindner, netdev,
sw, syzkaller-bugs
On Mon, Feb 24, 2020 at 3:18 AM syzbot
<syzbot+9a5e789e4725b9ef1316@syzkaller.appspotmail.com> wrote:
>
> Hello,
>
> syzbot found the following crash on:
>
> HEAD commit: 36a44bcd Merge branch 'bnxt_en-shutdown-and-kexec-kdump-re..
> git tree: net
> console output: https://syzkaller.appspot.com/x/log.txt?x=148bfdd9e00000
> kernel config: https://syzkaller.appspot.com/x/.config?x=768cc3d3e277cc16
> dashboard link: https://syzkaller.appspot.com/bug?extid=9a5e789e4725b9ef1316
> compiler: gcc (GCC) 9.0.0 20181231 (experimental)
> syz repro: https://syzkaller.appspot.com/x/repro.syz?x=151b1109e00000
> C reproducer: https://syzkaller.appspot.com/x/repro.c?x=128bfdd9e00000
>
> The bug was bisected to:
>
> commit 0c1b9970ddd4cc41002321c3877e7f91aacb896d
> Author: Dan Carpenter <dan.carpenter@oracle.com>
> Date: Fri Jul 28 14:42:27 2017 +0000
>
> staging: lustre: lustre: Off by two in lmv_fid2path()
>
> bisection log: https://syzkaller.appspot.com/x/bisect.txt?x=17e6c3e9e00000
> final crash: https://syzkaller.appspot.com/x/report.txt?x=1416c3e9e00000
> console output: https://syzkaller.appspot.com/x/log.txt?x=1016c3e9e00000
>
> IMPORTANT: if you fix the bug, please add the following tag to the commit:
> Reported-by: syzbot+9a5e789e4725b9ef1316@syzkaller.appspotmail.com
> Fixes: 0c1b9970ddd4 ("staging: lustre: lustre: Off by two in lmv_fid2path()")
>
> audit: audit_backlog=13 > audit_backlog_limit=7
> audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=7
> Kernel panic - not syncing: audit: backlog limit exceeded
> CPU: 1 PID: 9913 Comm: syz-executor024 Not tainted 5.6.0-rc1-syzkaller #0
> Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
> Call Trace:
> __dump_stack lib/dump_stack.c:77 [inline]
> dump_stack+0x197/0x210 lib/dump_stack.c:118
> panic+0x2e3/0x75c kernel/panic.c:221
> audit_panic.cold+0x32/0x32 kernel/audit.c:307
> audit_log_lost kernel/audit.c:377 [inline]
> audit_log_lost+0x8b/0x180 kernel/audit.c:349
> audit_log_start kernel/audit.c:1788 [inline]
> audit_log_start+0x70e/0x7c0 kernel/audit.c:1745
> audit_log+0x95/0x120 kernel/audit.c:2345
> xt_replace_table+0x61d/0x830 net/netfilter/x_tables.c:1413
> __do_replace+0x1da/0x950 net/ipv6/netfilter/ip6_tables.c:1084
> do_replace net/ipv6/netfilter/ip6_tables.c:1157 [inline]
> do_ip6t_set_ctl+0x33a/0x4c8 net/ipv6/netfilter/ip6_tables.c:1681
> nf_sockopt net/netfilter/nf_sockopt.c:106 [inline]
> nf_setsockopt+0x77/0xd0 net/netfilter/nf_sockopt.c:115
> ipv6_setsockopt net/ipv6/ipv6_sockglue.c:949 [inline]
> ipv6_setsockopt+0x147/0x180 net/ipv6/ipv6_sockglue.c:933
> tcp_setsockopt net/ipv4/tcp.c:3165 [inline]
> tcp_setsockopt+0x8f/0xe0 net/ipv4/tcp.c:3159
> sock_common_setsockopt+0x94/0xd0 net/core/sock.c:3149
> __sys_setsockopt+0x261/0x4c0 net/socket.c:2130
> __do_sys_setsockopt net/socket.c:2146 [inline]
> __se_sys_setsockopt net/socket.c:2143 [inline]
> __x64_sys_setsockopt+0xbe/0x150 net/socket.c:2143
> do_syscall_64+0xfa/0x790 arch/x86/entry/common.c:294
> entry_SYSCALL_64_after_hwframe+0x49/0xbe
> RIP: 0033:0x44720a
> Code: 49 89 ca b8 37 00 00 00 0f 05 48 3d 01 f0 ff ff 0f 83 1a e0 fb ff c3 66 0f 1f 84 00 00 00 00 00 49 89 ca b8 36 00 00 00 0f 05 <48> 3d 01 f0 ff ff 0f 83 fa df fb ff c3 66 0f 1f 84 00 00 00 00 00
> RSP: 002b:00007ffd032dec78 EFLAGS: 00000286 ORIG_RAX: 0000000000000036
> RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 000000000044720a
> RDX: 0000000000000040 RSI: 0000000000000029 RDI: 0000000000000003
> RBP: 00007ffd032deda0 R08: 00000000000003b8 R09: 0000000000004000
> R10: 00000000006d7b40 R11: 0000000000000286 R12: 00007ffd032deca0
> R13: 00000000006d9d60 R14: 0000000000000029 R15: 00000000006d7ba0
> Kernel Offset: disabled
> Rebooting in 86400 seconds..
>
>
> ---
> This bug is generated by a bot. It may contain errors.
> See https://goo.gl/tpsmEJ for more information about syzbot.
> syzbot engineers can be reached at syzkaller@googlegroups.com.
>
> syzbot will keep track of this bug report. See:
> https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
> For information about bisection process see: https://goo.gl/tpsmEJ#bisection
> syzbot can test patches for this bug, for details see:
> https://goo.gl/tpsmEJ#testing-patches
Similar to syzbot report 72461ac44b36c98f58e5, see my comments there.
--
paul moore
www.paul-moore.com
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: kernel panic: audit: backlog limit exceeded
2020-02-24 22:38 ` Paul Moore
@ 2020-02-24 22:43 ` Eric Paris
2020-02-24 22:46 ` Paul Moore
0 siblings, 1 reply; 3471+ messages in thread
From: Eric Paris @ 2020-02-24 22:43 UTC (permalink / raw)
To: Paul Moore, syzbot
Cc: a, b.a.t.m.a.n, dan.carpenter, davem, fzago, gregkh,
john.hammond, linux-audit, linux-kernel, mareklindner, netdev,
sw, syzkaller-bugs
https://syzkaller.appspot.com/x/repro.syz?x=151b1109e00000 (the
reproducer listed) looks like it is literally fuzzing AUDIT_SET,
which suggests this is working as designed if it is setting the
failure mode to 2.
On Mon, 2020-02-24 at 17:38 -0500, Paul Moore wrote:
> On Mon, Feb 24, 2020 at 3:18 AM syzbot
> <syzbot+9a5e789e4725b9ef1316@syzkaller.appspotmail.com> wrote:
> > Hello,
> >
> > syzbot found the following crash on:
> >
> > HEAD commit: 36a44bcd Merge branch 'bnxt_en-shutdown-and-kexec-
> > kdump-re..
> > git tree: net
> > console output:
> > https://syzkaller.appspot.com/x/log.txt?x=148bfdd9e00000
> > kernel config:
> > https://syzkaller.appspot.com/x/.config?x=768cc3d3e277cc16
> > dashboard link:
> > https://syzkaller.appspot.com/bug?extid=9a5e789e4725b9ef1316
> > compiler: gcc (GCC) 9.0.0 20181231 (experimental)
> > syz repro:
> > https://syzkaller.appspot.com/x/repro.syz?x=151b1109e00000
> > C reproducer:
> > https://syzkaller.appspot.com/x/repro.c?x=128bfdd9e00000
> >
> > The bug was bisected to:
> >
> > commit 0c1b9970ddd4cc41002321c3877e7f91aacb896d
> > Author: Dan Carpenter <dan.carpenter@oracle.com>
> > Date: Fri Jul 28 14:42:27 2017 +0000
> >
> > staging: lustre: lustre: Off by two in lmv_fid2path()
> >
> > bisection log:
> > https://syzkaller.appspot.com/x/bisect.txt?x=17e6c3e9e00000
> > final crash:
> > https://syzkaller.appspot.com/x/report.txt?x=1416c3e9e00000
> > console output:
> > https://syzkaller.appspot.com/x/log.txt?x=1016c3e9e00000
> >
> > IMPORTANT: if you fix the bug, please add the following tag to the
> > commit:
> > Reported-by: syzbot+9a5e789e4725b9ef1316@syzkaller.appspotmail.com
> > Fixes: 0c1b9970ddd4 ("staging: lustre: lustre: Off by two in
> > lmv_fid2path()")
> >
> > audit: audit_backlog=13 > audit_backlog_limit=7
> > audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=7
> > Kernel panic - not syncing: audit: backlog limit exceeded
> > CPU: 1 PID: 9913 Comm: syz-executor024 Not tainted 5.6.0-rc1-
> > syzkaller #0
> > Hardware name: Google Google Compute Engine/Google Compute Engine,
> > BIOS Google 01/01/2011
> > Call Trace:
> > __dump_stack lib/dump_stack.c:77 [inline]
> > dump_stack+0x197/0x210 lib/dump_stack.c:118
> > panic+0x2e3/0x75c kernel/panic.c:221
> > audit_panic.cold+0x32/0x32 kernel/audit.c:307
> > audit_log_lost kernel/audit.c:377 [inline]
> > audit_log_lost+0x8b/0x180 kernel/audit.c:349
> > audit_log_start kernel/audit.c:1788 [inline]
> > audit_log_start+0x70e/0x7c0 kernel/audit.c:1745
> > audit_log+0x95/0x120 kernel/audit.c:2345
> > xt_replace_table+0x61d/0x830 net/netfilter/x_tables.c:1413
> > __do_replace+0x1da/0x950 net/ipv6/netfilter/ip6_tables.c:1084
> > do_replace net/ipv6/netfilter/ip6_tables.c:1157 [inline]
> > do_ip6t_set_ctl+0x33a/0x4c8 net/ipv6/netfilter/ip6_tables.c:1681
> > nf_sockopt net/netfilter/nf_sockopt.c:106 [inline]
> > nf_setsockopt+0x77/0xd0 net/netfilter/nf_sockopt.c:115
> > ipv6_setsockopt net/ipv6/ipv6_sockglue.c:949 [inline]
> > ipv6_setsockopt+0x147/0x180 net/ipv6/ipv6_sockglue.c:933
> > tcp_setsockopt net/ipv4/tcp.c:3165 [inline]
> > tcp_setsockopt+0x8f/0xe0 net/ipv4/tcp.c:3159
> > sock_common_setsockopt+0x94/0xd0 net/core/sock.c:3149
> > __sys_setsockopt+0x261/0x4c0 net/socket.c:2130
> > __do_sys_setsockopt net/socket.c:2146 [inline]
> > __se_sys_setsockopt net/socket.c:2143 [inline]
> > __x64_sys_setsockopt+0xbe/0x150 net/socket.c:2143
> > do_syscall_64+0xfa/0x790 arch/x86/entry/common.c:294
> > entry_SYSCALL_64_after_hwframe+0x49/0xbe
> > RIP: 0033:0x44720a
> > Code: 49 89 ca b8 37 00 00 00 0f 05 48 3d 01 f0 ff ff 0f 83 1a e0
> > fb ff c3 66 0f 1f 84 00 00 00 00 00 49 89 ca b8 36 00 00 00 0f 05
> > <48> 3d 01 f0 ff ff 0f 83 fa df fb ff c3 66 0f 1f 84 00 00 00 00 00
> > RSP: 002b:00007ffd032dec78 EFLAGS: 00000286 ORIG_RAX:
> > 0000000000000036
> > RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 000000000044720a
> > RDX: 0000000000000040 RSI: 0000000000000029 RDI: 0000000000000003
> > RBP: 00007ffd032deda0 R08: 00000000000003b8 R09: 0000000000004000
> > R10: 00000000006d7b40 R11: 0000000000000286 R12: 00007ffd032deca0
> > R13: 00000000006d9d60 R14: 0000000000000029 R15: 00000000006d7ba0
> > Kernel Offset: disabled
> > Rebooting in 86400 seconds..
> >
> >
> > ---
> > This bug is generated by a bot. It may contain errors.
> > See https://goo.gl/tpsmEJ for more information about syzbot.
> > syzbot engineers can be reached at syzkaller@googlegroups.com.
> >
> > syzbot will keep track of this bug report. See:
> > https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
> > For information about bisection process see:
> > https://goo.gl/tpsmEJ#bisection
> > syzbot can test patches for this bug, for details see:
> > https://goo.gl/tpsmEJ#testing-patches
>
> Similar to syzbot report 72461ac44b36c98f58e5, see my comments there.
>
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: kernel panic: audit: backlog limit exceeded
2020-02-24 22:43 ` Eric Paris
@ 2020-02-24 22:46 ` Paul Moore
[not found] ` <CAHC9VhQnbdJprbdTa_XcgUJaiwhzbnGMWJqHczU54UMk0AFCtw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
0 siblings, 1 reply; 3471+ messages in thread
From: Paul Moore @ 2020-02-24 22:46 UTC (permalink / raw)
To: Eric Paris
Cc: syzbot, a, b.a.t.m.a.n, dan.carpenter, davem, fzago, gregkh,
john.hammond, linux-audit, linux-kernel, mareklindner, netdev,
sw, syzkaller-bugs
On Mon, Feb 24, 2020 at 5:43 PM Eric Paris <eparis@redhat.com> wrote:
> https://syzkaller.appspot.com/x/repro.syz?x=151b1109e00000 (the
> reproducer listed) looks like it is literally fuzzing the AUDIT_SET.
> Which seems like this is working as designed if it is setting the
> failure mode to 2.
So it is, good catch :) I saw the panic and instinctively chalked
that up to a mistaken config, not expecting that it was what was being
tested.
> On Mon, 2020-02-24 at 17:38 -0500, Paul Moore wrote:
> > On Mon, Feb 24, 2020 at 3:18 AM syzbot
> > <syzbot+9a5e789e4725b9ef1316@syzkaller.appspotmail.com> wrote:
> > > Hello,
> > >
> > > syzbot found the following crash on:
> > >
> > > HEAD commit: 36a44bcd Merge branch 'bnxt_en-shutdown-and-kexec-
> > > kdump-re..
> > > git tree: net
> > > console output:
> > > https://syzkaller.appspot.com/x/log.txt?x=148bfdd9e00000
> > > kernel config:
> > > https://syzkaller.appspot.com/x/.config?x=768cc3d3e277cc16
> > > dashboard link:
> > > https://syzkaller.appspot.com/bug?extid=9a5e789e4725b9ef1316
> > > compiler: gcc (GCC) 9.0.0 20181231 (experimental)
> > > syz repro:
> > > https://syzkaller.appspot.com/x/repro.syz?x=151b1109e00000
> > > C reproducer:
> > > https://syzkaller.appspot.com/x/repro.c?x=128bfdd9e00000
> > >
> > > The bug was bisected to:
> > >
> > > commit 0c1b9970ddd4cc41002321c3877e7f91aacb896d
> > > Author: Dan Carpenter <dan.carpenter@oracle.com>
> > > Date: Fri Jul 28 14:42:27 2017 +0000
> > >
> > > staging: lustre: lustre: Off by two in lmv_fid2path()
> > >
> > > bisection log:
> > > https://syzkaller.appspot.com/x/bisect.txt?x=17e6c3e9e00000
> > > final crash:
> > > https://syzkaller.appspot.com/x/report.txt?x=1416c3e9e00000
> > > console output:
> > > https://syzkaller.appspot.com/x/log.txt?x=1016c3e9e00000
> > >
> > > IMPORTANT: if you fix the bug, please add the following tag to the
> > > commit:
> > > Reported-by: syzbot+9a5e789e4725b9ef1316@syzkaller.appspotmail.com
> > > Fixes: 0c1b9970ddd4 ("staging: lustre: lustre: Off by two in
> > > lmv_fid2path()")
> > >
> > > audit: audit_backlog=13 > audit_backlog_limit=7
> > > audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=7
> > > Kernel panic - not syncing: audit: backlog limit exceeded
> > > CPU: 1 PID: 9913 Comm: syz-executor024 Not tainted 5.6.0-rc1-
> > > syzkaller #0
> > > Hardware name: Google Google Compute Engine/Google Compute Engine,
> > > BIOS Google 01/01/2011
> > > Call Trace:
> > > __dump_stack lib/dump_stack.c:77 [inline]
> > > dump_stack+0x197/0x210 lib/dump_stack.c:118
> > > panic+0x2e3/0x75c kernel/panic.c:221
> > > audit_panic.cold+0x32/0x32 kernel/audit.c:307
> > > audit_log_lost kernel/audit.c:377 [inline]
> > > audit_log_lost+0x8b/0x180 kernel/audit.c:349
> > > audit_log_start kernel/audit.c:1788 [inline]
> > > audit_log_start+0x70e/0x7c0 kernel/audit.c:1745
> > > audit_log+0x95/0x120 kernel/audit.c:2345
> > > xt_replace_table+0x61d/0x830 net/netfilter/x_tables.c:1413
> > > __do_replace+0x1da/0x950 net/ipv6/netfilter/ip6_tables.c:1084
> > > do_replace net/ipv6/netfilter/ip6_tables.c:1157 [inline]
> > > do_ip6t_set_ctl+0x33a/0x4c8 net/ipv6/netfilter/ip6_tables.c:1681
> > > nf_sockopt net/netfilter/nf_sockopt.c:106 [inline]
> > > nf_setsockopt+0x77/0xd0 net/netfilter/nf_sockopt.c:115
> > > ipv6_setsockopt net/ipv6/ipv6_sockglue.c:949 [inline]
> > > ipv6_setsockopt+0x147/0x180 net/ipv6/ipv6_sockglue.c:933
> > > tcp_setsockopt net/ipv4/tcp.c:3165 [inline]
> > > tcp_setsockopt+0x8f/0xe0 net/ipv4/tcp.c:3159
> > > sock_common_setsockopt+0x94/0xd0 net/core/sock.c:3149
> > > __sys_setsockopt+0x261/0x4c0 net/socket.c:2130
> > > __do_sys_setsockopt net/socket.c:2146 [inline]
> > > __se_sys_setsockopt net/socket.c:2143 [inline]
> > > __x64_sys_setsockopt+0xbe/0x150 net/socket.c:2143
> > > do_syscall_64+0xfa/0x790 arch/x86/entry/common.c:294
> > > entry_SYSCALL_64_after_hwframe+0x49/0xbe
> > > RIP: 0033:0x44720a
> > > Code: 49 89 ca b8 37 00 00 00 0f 05 48 3d 01 f0 ff ff 0f 83 1a e0
> > > fb ff c3 66 0f 1f 84 00 00 00 00 00 49 89 ca b8 36 00 00 00 0f 05
> > > <48> 3d 01 f0 ff ff 0f 83 fa df fb ff c3 66 0f 1f 84 00 00 00 00 00
> > > RSP: 002b:00007ffd032dec78 EFLAGS: 00000286 ORIG_RAX:
> > > 0000000000000036
> > > RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 000000000044720a
> > > RDX: 0000000000000040 RSI: 0000000000000029 RDI: 0000000000000003
> > > RBP: 00007ffd032deda0 R08: 00000000000003b8 R09: 0000000000004000
> > > R10: 00000000006d7b40 R11: 0000000000000286 R12: 00007ffd032deca0
> > > R13: 00000000006d9d60 R14: 0000000000000029 R15: 00000000006d7ba0
> > > Kernel Offset: disabled
> > > Rebooting in 86400 seconds..
> > >
> > >
> > > ---
> > > This bug is generated by a bot. It may contain errors.
> > > See https://goo.gl/tpsmEJ for more information about syzbot.
> > > syzbot engineers can be reached at syzkaller@googlegroups.com.
> > >
> > > syzbot will keep track of this bug report. See:
> > > https://goo.gl/tpsmEJ#status for how to communicate with syzbot.
> > > For information about bisection process see:
> > > https://goo.gl/tpsmEJ#bisection
> > > syzbot can test patches for this bug, for details see:
> > > https://goo.gl/tpsmEJ#testing-patches
> >
> > Similar to syzbot report 72461ac44b36c98f58e5, see my comments there.
> >
>
--

paul moore
www.paul-moore.com
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2020-02-11 22:34 Rajat Jain
0 siblings, 0 replies; 3471+ messages in thread
From: Rajat Jain @ 2020-02-11 22:34 UTC (permalink / raw)
To: Daniel Mack, Haojian Zhuang, Robert Jarzmik, Mark Brown,
linux-arm-kernel-IAPFreCvJWM7uuMidbF8XUB+6BGkLq7r,
linux-spi-u79uwXL29TY76Z2rM5mHXA,
linux-kernel-u79uwXL29TY76Z2rM5mHXA
Cc: Evan Green, rajatja-hpIqsD4AKlfQT0dZR+AlfA,
rajatxjain-Re5JQEeQqe8AvxtiuMwx3w,
evgreen-hpIqsD4AKlfQT0dZR+AlfA,
shobhit.srivastava-ral2JQCrhuEAvxtiuMwx3w,
porselvan.muthukrishnan-ral2JQCrhuEAvxtiuMwx3w
From: Evan Green <evgreen-F7+t8E8rja9g9hUCZPvPmw@public.gmane.org>
Date: Wed, 29 Jan 2020 13:54:16 -0800
Subject: [PATCH] spi: pxa2xx: Add CS control clock quirk
In some circumstances on Intel LPSS controllers, toggling the LPSS
CS control register doesn't actually cause the CS line to toggle.
This seems to be a failure of dynamic clock gating that occurs after
a suspend/resume transition, where the controller is sent through a
reset. This ruins SPI transactions
that either rely on delay_usecs, or toggle the CS line without
sending data.
Whenever CS is toggled, momentarily set the clock gating register
to "Force On" to poke the controller into acting on CS.
Signed-off-by: Evan Green <evgreen-F7+t8E8rja9g9hUCZPvPmw@public.gmane.org>
Signed-off-by: Rajat Jain <rajatja-hpIqsD4AKlfQT0dZR+AlfA@public.gmane.org>
---
drivers/spi/spi-pxa2xx.c | 23 +++++++++++++++++++++++
1 file changed, 23 insertions(+)
diff --git a/drivers/spi/spi-pxa2xx.c b/drivers/spi/spi-pxa2xx.c
index 4c7a71f0fb3e..2e318158fca9 100644
--- a/drivers/spi/spi-pxa2xx.c
+++ b/drivers/spi/spi-pxa2xx.c
@@ -70,6 +70,10 @@ MODULE_ALIAS("platform:pxa2xx-spi");
#define LPSS_CAPS_CS_EN_SHIFT 9
#define LPSS_CAPS_CS_EN_MASK (0xf << LPSS_CAPS_CS_EN_SHIFT)
+#define LPSS_PRIV_CLOCK_GATE 0x38
+#define LPSS_PRIV_CLOCK_GATE_CLK_CTL_MASK 0x3
+#define LPSS_PRIV_CLOCK_GATE_CLK_CTL_FORCE_ON 0x3
+
struct lpss_config {
/* LPSS offset from drv_data->ioaddr */
unsigned offset;
@@ -86,6 +90,8 @@ struct lpss_config {
unsigned cs_sel_shift;
unsigned cs_sel_mask;
unsigned cs_num;
+ /* Quirks */
+ unsigned cs_clk_stays_gated : 1;
};
/* Keep these sorted with enum pxa_ssp_type */
@@ -156,6 +162,7 @@ static const struct lpss_config lpss_platforms[] = {
.tx_threshold_hi = 56,
.cs_sel_shift = 8,
.cs_sel_mask = 3 << 8,
+ .cs_clk_stays_gated = true,
},
};
@@ -383,6 +390,22 @@ static void lpss_ssp_cs_control(struct spi_device *spi, bool enable)
else
value |= LPSS_CS_CONTROL_CS_HIGH;
__lpss_ssp_write_priv(drv_data, config->reg_cs_ctrl, value);
+ if (config->cs_clk_stays_gated) {
+ u32 clkgate;
+
+ /*
+ * Changing CS alone when dynamic clock gating is on won't
+ * actually flip CS at that time. This ruins SPI transfers
+ * that specify delays, or have no data. Toggle the clock mode
+ * to force on briefly to poke the CS pin to move.
+ */
+ clkgate = __lpss_ssp_read_priv(drv_data, LPSS_PRIV_CLOCK_GATE);
+ value = (clkgate & ~LPSS_PRIV_CLOCK_GATE_CLK_CTL_MASK) |
+ LPSS_PRIV_CLOCK_GATE_CLK_CTL_FORCE_ON;
+
+ __lpss_ssp_write_priv(drv_data, LPSS_PRIV_CLOCK_GATE, value);
+ __lpss_ssp_write_priv(drv_data, LPSS_PRIV_CLOCK_GATE, clkgate);
+ }
}
static void cs_assert(struct spi_device *spi)
--
2.25.0.225.g125e21ebc7-goog
^ permalink raw reply related [flat|nested] 3471+ messages in thread
* (unknown)
@ 2019-12-12 15:50 周琰杰 (Zhou Yanjie)
0 siblings, 0 replies; 3471+ messages in thread
From: 周琰杰 (Zhou Yanjie) @ 2019-12-12 15:50 UTC (permalink / raw)
To: linux-mips
Cc: linux-kernel, linux-i2c, devicetree, robh+dt, mark.rutland, paul,
paul.burton, paulburton, sernia.zhou, zhenwenjin
Add support for probing the I2C controller driver on the X1000 SoC
from Ingenic. Select the corresponding FIFO parameters according to
the device model obtained from the devicetree.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2019-09-12 8:09 Gene Chen
0 siblings, 0 replies; 3471+ messages in thread
From: Gene Chen @ 2019-09-12 8:09 UTC (permalink / raw)
To: matthias.bgg, gene_chen, Wilma.Wu
Cc: linux-arm-kernel, linux-mediatek, linux-kernel
From 66208ef7fcdb4176bf63cd130b3e3197086ac4b3 Mon Sep 17 00:00:00 2001
From: Gene Chen <gene_chen@mediatek.corp-partner.google.com>
Date: Thu, 22 Aug 2019 14:21:03 +0800
Subject: [PATCH] mfd: mt6360: add pmic mt6360 driver
---
drivers/mfd/Kconfig | 12 ++
drivers/mfd/Makefile | 1 +
drivers/mfd/mt6360-core.c | 463 ++++++++++++++++++++++++++++++++++++++++++++++
3 files changed, 476 insertions(+)
create mode 100644 drivers/mfd/mt6360-core.c
diff --git a/drivers/mfd/Kconfig b/drivers/mfd/Kconfig
index f129f96..a422c76 100644
--- a/drivers/mfd/Kconfig
+++ b/drivers/mfd/Kconfig
@@ -862,6 +862,18 @@ config MFD_MAX8998
additional drivers must be enabled in order to use the functionality
of the device.
+config MFD_MT6360
+ tristate "Mediatek MT6360 SubPMIC"
+ select MFD_CORE
+ select REGMAP_I2C
+ select REGMAP_IRQ
+ depends on I2C
+ help
+ Say Y here to enable MT6360 PMU/PMIC/LDO functional support.
+	  The PMU part includes a charger, flashlight and RGB LED.
+	  The PMIC part includes 2-channel BUCKs and 2-channel LDOs.
+	  The LDO part includes 4-channel LDOs.
+
config MFD_MT6397
tristate "MediaTek MT6397 PMIC Support"
select MFD_CORE
diff --git a/drivers/mfd/Makefile b/drivers/mfd/Makefile
index f026ada..77a8f0b 100644
--- a/drivers/mfd/Makefile
+++ b/drivers/mfd/Makefile
@@ -241,6 +241,7 @@ obj-$(CONFIG_INTEL_SOC_PMIC) += intel-soc-pmic.o
obj-$(CONFIG_INTEL_SOC_PMIC_BXTWC) += intel_soc_pmic_bxtwc.o
obj-$(CONFIG_INTEL_SOC_PMIC_CHTWC) += intel_soc_pmic_chtwc.o
obj-$(CONFIG_INTEL_SOC_PMIC_CHTDC_TI) += intel_soc_pmic_chtdc_ti.o
+obj-$(CONFIG_MFD_MT6360) += mt6360-core.o
obj-$(CONFIG_MFD_MT6397) += mt6397-core.o
obj-$(CONFIG_MFD_ALTERA_A10SR) += altera-a10sr.o
diff --git a/drivers/mfd/mt6360-core.c b/drivers/mfd/mt6360-core.c
new file mode 100644
index 0000000..d3580618
--- /dev/null
+++ b/drivers/mfd/mt6360-core.c
@@ -0,0 +1,463 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (c) 2019 MediaTek Inc.
+ */
+
+#include <linux/i2c.h>
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/mfd/core.h>
+#include <linux/module.h>
+#include <linux/of_irq.h>
+#include <linux/of_platform.h>
+#include <linux/version.h>
+
+#include <linux/mfd/mt6360.h>
+#include <linux/mfd/mt6360-private.h>
+
+/* reg 0 -> 0 ~ 7 */
+#define MT6360_CHG_TREG_EVT (4)
+#define MT6360_CHG_AICR_EVT (5)
+#define MT6360_CHG_MIVR_EVT (6)
+#define MT6360_PWR_RDY_EVT (7)
+/* REG 1 -> 8 ~ 15 */
+#define MT6360_CHG_BATSYSUV_EVT (9)
+#define MT6360_FLED_CHG_VINOVP_EVT (11)
+#define MT6360_CHG_VSYSUV_EVT (12)
+#define MT6360_CHG_VSYSOV_EVT (13)
+#define MT6360_CHG_VBATOV_EVT (14)
+#define MT6360_CHG_VBUSOV_EVT (15)
+/* REG 2 -> 16 ~ 23 */
+/* REG 3 -> 24 ~ 31 */
+#define MT6360_WD_PMU_DET (25)
+#define MT6360_WD_PMU_DONE (26)
+#define MT6360_CHG_TMRI (27)
+#define MT6360_CHG_ADPBADI (29)
+#define MT6360_CHG_RVPI (30)
+#define MT6360_OTPI (31)
+/* REG 4 -> 32 ~ 39 */
+#define MT6360_CHG_AICCMEASL (32)
+#define MT6360_CHGDET_DONEI (34)
+#define MT6360_WDTMRI (35)
+#define MT6360_SSFINISHI (36)
+#define MT6360_CHG_RECHGI (37)
+#define MT6360_CHG_TERMI (38)
+#define MT6360_CHG_IEOCI (39)
+/* REG 5 -> 40 ~ 47 */
+#define MT6360_PUMPX_DONEI (40)
+#define MT6360_BAT_OVP_ADC_EVT (41)
+#define MT6360_TYPEC_OTP_EVT (42)
+#define MT6360_ADC_WAKEUP_EVT (43)
+#define MT6360_ADC_DONEI (44)
+#define MT6360_BST_BATUVI (45)
+#define MT6360_BST_VBUSOVI (46)
+#define MT6360_BST_OLPI (47)
+/* REG 6 -> 48 ~ 55 */
+#define MT6360_ATTACH_I (48)
+#define MT6360_DETACH_I (49)
+#define MT6360_QC30_STPDONE (51)
+#define MT6360_QC_VBUSDET_DONE (52)
+#define MT6360_HVDCP_DET (53)
+#define MT6360_CHGDETI (54)
+#define MT6360_DCDTI (55)
+/* REG 7 -> 56 ~ 63 */
+#define MT6360_FOD_DONE_EVT (56)
+#define MT6360_FOD_OV_EVT (57)
+#define MT6360_CHRDET_UVP_EVT (58)
+#define MT6360_CHRDET_OVP_EVT (59)
+#define MT6360_CHRDET_EXT_EVT (60)
+#define MT6360_FOD_LR_EVT (61)
+#define MT6360_FOD_HR_EVT (62)
+#define MT6360_FOD_DISCHG_FAIL_EVT (63)
+/* REG 8 -> 64 ~ 71 */
+#define MT6360_USBID_EVT (64)
+#define MT6360_APWDTRST_EVT (65)
+#define MT6360_EN_EVT (66)
+#define MT6360_QONB_RST_EVT (67)
+#define MT6360_MRSTB_EVT (68)
+#define MT6360_OTP_EVT (69)
+#define MT6360_VDDAOV_EVT (70)
+#define MT6360_SYSUV_EVT (71)
+/* REG 9 -> 72 ~ 79 */
+#define MT6360_FLED_STRBPIN_EVT (72)
+#define MT6360_FLED_TORPIN_EVT (73)
+#define MT6360_FLED_TX_EVT (74)
+#define MT6360_FLED_LVF_EVT (75)
+#define MT6360_FLED2_SHORT_EVT (78)
+#define MT6360_FLED1_SHORT_EVT (79)
+/* REG 10 -> 80 ~ 87 */
+#define MT6360_FLED2_STRB_EVT (80)
+#define MT6360_FLED1_STRB_EVT (81)
+#define MT6360_FLED2_STRB_TO_EVT (82)
+#define MT6360_FLED1_STRB_TO_EVT (83)
+#define MT6360_FLED2_TOR_EVT (84)
+#define MT6360_FLED1_TOR_EVT (85)
+/* REG 11 -> 88 ~ 95 */
+/* REG 12 -> 96 ~ 103 */
+#define MT6360_BUCK1_PGB_EVT (96)
+#define MT6360_BUCK1_OC_EVT (100)
+#define MT6360_BUCK1_OV_EVT (101)
+#define MT6360_BUCK1_UV_EVT (102)
+/* REG 13 -> 104 ~ 111 */
+#define MT6360_BUCK2_PGB_EVT (104)
+#define MT6360_BUCK2_OC_EVT (108)
+#define MT6360_BUCK2_OV_EVT (109)
+#define MT6360_BUCK2_UV_EVT (110)
+/* REG 14 -> 112 ~ 119 */
+#define MT6360_LDO1_OC_EVT (113)
+#define MT6360_LDO2_OC_EVT (114)
+#define MT6360_LDO3_OC_EVT (115)
+#define MT6360_LDO5_OC_EVT (117)
+#define MT6360_LDO6_OC_EVT (118)
+#define MT6360_LDO7_OC_EVT (119)
+/* REG 15 -> 120 ~ 127 */
+#define MT6360_LDO1_PGB_EVT (121)
+#define MT6360_LDO2_PGB_EVT (122)
+#define MT6360_LDO3_PGB_EVT (123)
+#define MT6360_LDO5_PGB_EVT (125)
+#define MT6360_LDO6_PGB_EVT (126)
+#define MT6360_LDO7_PGB_EVT (127)
+
+#define MT6360_REGMAP_IRQ_REG(_irq_evt) \
+ REGMAP_IRQ_REG(_irq_evt, (_irq_evt) / 8, BIT((_irq_evt) % 8))
+
+#define MT6360_MFD_CELL(_name) \
+ { \
+ .name = #_name, \
+ .of_compatible = "mediatek," #_name, \
+ .num_resources = ARRAY_SIZE(_name##_resources), \
+ .resources = _name##_resources, \
+ }
+
+static const struct regmap_irq mt6360_pmu_irqs[] = {
+ MT6360_REGMAP_IRQ_REG(MT6360_CHG_TREG_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_CHG_AICR_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_CHG_MIVR_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_PWR_RDY_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_CHG_BATSYSUV_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_FLED_CHG_VINOVP_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_CHG_VSYSUV_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_CHG_VSYSOV_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_CHG_VBATOV_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_CHG_VBUSOV_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_WD_PMU_DET),
+ MT6360_REGMAP_IRQ_REG(MT6360_WD_PMU_DONE),
+ MT6360_REGMAP_IRQ_REG(MT6360_CHG_TMRI),
+ MT6360_REGMAP_IRQ_REG(MT6360_CHG_ADPBADI),
+ MT6360_REGMAP_IRQ_REG(MT6360_CHG_RVPI),
+ MT6360_REGMAP_IRQ_REG(MT6360_OTPI),
+ MT6360_REGMAP_IRQ_REG(MT6360_CHG_AICCMEASL),
+ MT6360_REGMAP_IRQ_REG(MT6360_CHGDET_DONEI),
+ MT6360_REGMAP_IRQ_REG(MT6360_WDTMRI),
+ MT6360_REGMAP_IRQ_REG(MT6360_SSFINISHI),
+ MT6360_REGMAP_IRQ_REG(MT6360_CHG_RECHGI),
+ MT6360_REGMAP_IRQ_REG(MT6360_CHG_TERMI),
+ MT6360_REGMAP_IRQ_REG(MT6360_CHG_IEOCI),
+ MT6360_REGMAP_IRQ_REG(MT6360_PUMPX_DONEI),
+ MT6360_REGMAP_IRQ_REG(MT6360_CHG_TREG_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_BAT_OVP_ADC_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_TYPEC_OTP_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_ADC_WAKEUP_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_ADC_DONEI),
+ MT6360_REGMAP_IRQ_REG(MT6360_BST_BATUVI),
+ MT6360_REGMAP_IRQ_REG(MT6360_BST_VBUSOVI),
+ MT6360_REGMAP_IRQ_REG(MT6360_BST_OLPI),
+ MT6360_REGMAP_IRQ_REG(MT6360_ATTACH_I),
+ MT6360_REGMAP_IRQ_REG(MT6360_DETACH_I),
+ MT6360_REGMAP_IRQ_REG(MT6360_QC30_STPDONE),
+ MT6360_REGMAP_IRQ_REG(MT6360_QC_VBUSDET_DONE),
+ MT6360_REGMAP_IRQ_REG(MT6360_HVDCP_DET),
+ MT6360_REGMAP_IRQ_REG(MT6360_CHGDETI),
+ MT6360_REGMAP_IRQ_REG(MT6360_DCDTI),
+ MT6360_REGMAP_IRQ_REG(MT6360_FOD_DONE_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_FOD_OV_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_CHRDET_UVP_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_CHRDET_OVP_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_CHRDET_EXT_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_FOD_LR_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_FOD_HR_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_FOD_DISCHG_FAIL_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_USBID_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_APWDTRST_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_EN_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_QONB_RST_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_MRSTB_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_OTP_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_VDDAOV_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_SYSUV_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_FLED_STRBPIN_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_FLED_TORPIN_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_FLED_TX_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_FLED_LVF_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_FLED2_SHORT_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_FLED1_SHORT_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_FLED2_STRB_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_FLED1_STRB_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_FLED2_STRB_TO_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_FLED1_STRB_TO_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_FLED2_TOR_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_FLED1_TOR_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_BUCK1_PGB_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_BUCK1_OC_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_BUCK1_OV_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_BUCK1_UV_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_BUCK2_PGB_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_BUCK2_OC_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_BUCK2_OV_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_BUCK2_UV_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_LDO1_OC_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_LDO2_OC_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_LDO3_OC_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_LDO5_OC_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_LDO6_OC_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_LDO7_OC_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_LDO1_PGB_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_LDO2_PGB_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_LDO3_PGB_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_LDO5_PGB_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_LDO6_PGB_EVT),
+ MT6360_REGMAP_IRQ_REG(MT6360_LDO7_PGB_EVT),
+};
+
+static int mt6360_pmu_handle_post_irq(void *irq_drv_data)
+{
+ struct mt6360_pmu_info *mpi = irq_drv_data;
+
+ return regmap_update_bits(mpi->regmap,
+ MT6360_PMU_IRQ_SET, MT6360_IRQ_RETRIG, MT6360_IRQ_RETRIG);
+}
+
+static const struct regmap_irq_chip mt6360_pmu_irq_chip = {
+ .irqs = mt6360_pmu_irqs,
+ .num_irqs = ARRAY_SIZE(mt6360_pmu_irqs),
+ .num_regs = MT6360_PMU_IRQ_REGNUM,
+ .mask_base = MT6360_PMU_CHG_MASK1,
+ .status_base = MT6360_PMU_CHG_IRQ1,
+ .ack_base = MT6360_PMU_CHG_IRQ1,
+ .init_ack_masked = true,
+ .use_ack = true,
+ .handle_post_irq = mt6360_pmu_handle_post_irq,
+};
+
+static const struct regmap_config mt6360_pmu_regmap_config = {
+ .reg_bits = 8,
+ .val_bits = 8,
+ .max_register = MT6360_PMU_MAXREG,
+};
+
+static const struct resource mt6360_adc_resources[] = {
+ DEFINE_RES_IRQ_NAMED(MT6360_ADC_DONEI, "adc_donei"),
+};
+
+static const struct resource mt6360_chg_resources[] = {
+ DEFINE_RES_IRQ_NAMED(MT6360_CHG_TREG_EVT, "chg_treg_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_PWR_RDY_EVT, "pwr_rdy_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_CHG_BATSYSUV_EVT, "chg_batsysuv_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_CHG_VSYSUV_EVT, "chg_vsysuv_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_CHG_VSYSOV_EVT, "chg_vsysov_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_CHG_VBATOV_EVT, "chg_vbatov_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_CHG_VBUSOV_EVT, "chg_vbusov_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_CHG_AICCMEASL, "chg_aiccmeasl"),
+ DEFINE_RES_IRQ_NAMED(MT6360_WDTMRI, "wdtmri"),
+ DEFINE_RES_IRQ_NAMED(MT6360_CHG_RECHGI, "chg_rechgi"),
+ DEFINE_RES_IRQ_NAMED(MT6360_CHG_TERMI, "chg_termi"),
+ DEFINE_RES_IRQ_NAMED(MT6360_CHG_IEOCI, "chg_ieoci"),
+ DEFINE_RES_IRQ_NAMED(MT6360_PUMPX_DONEI, "pumpx_donei"),
+ DEFINE_RES_IRQ_NAMED(MT6360_ATTACH_I, "attach_i"),
+ DEFINE_RES_IRQ_NAMED(MT6360_CHRDET_EXT_EVT, "chrdet_ext_evt"),
+};
+
+static const struct resource mt6360_led_resources[] = {
+ DEFINE_RES_IRQ_NAMED(MT6360_FLED_CHG_VINOVP_EVT, "fled_chg_vinovp_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_FLED_LVF_EVT, "fled_lvf_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_FLED2_SHORT_EVT, "fled2_short_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_FLED1_SHORT_EVT, "fled1_short_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_FLED2_STRB_TO_EVT, "fled2_strb_to_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_FLED1_STRB_TO_EVT, "fled1_strb_to_evt"),
+};
+
+static const struct resource mt6360_pmic_resources[] = {
+ DEFINE_RES_IRQ_NAMED(MT6360_BUCK1_PGB_EVT, "buck1_pgb_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_BUCK1_OC_EVT, "buck1_oc_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_BUCK1_OV_EVT, "buck1_ov_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_BUCK1_UV_EVT, "buck1_uv_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_BUCK2_PGB_EVT, "buck2_pgb_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_BUCK2_OC_EVT, "buck2_oc_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_BUCK2_OV_EVT, "buck2_ov_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_BUCK2_UV_EVT, "buck2_uv_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_LDO6_OC_EVT, "ldo6_oc_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_LDO7_OC_EVT, "ldo7_oc_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_LDO6_PGB_EVT, "ldo6_pgb_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_LDO7_PGB_EVT, "ldo7_pgb_evt"),
+};
+
+static const struct resource mt6360_ldo_resources[] = {
+ DEFINE_RES_IRQ_NAMED(MT6360_LDO1_OC_EVT, "ldo1_oc_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_LDO2_OC_EVT, "ldo2_oc_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_LDO3_OC_EVT, "ldo3_oc_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_LDO5_OC_EVT, "ldo5_oc_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_LDO1_PGB_EVT, "ldo1_pgb_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_LDO2_PGB_EVT, "ldo2_pgb_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_LDO3_PGB_EVT, "ldo3_pgb_evt"),
+ DEFINE_RES_IRQ_NAMED(MT6360_LDO5_PGB_EVT, "ldo5_pgb_evt"),
+};
+
+static const struct mfd_cell mt6360_devs[] = {
+ MT6360_MFD_CELL(mt6360_adc),
+ MT6360_MFD_CELL(mt6360_chg),
+ MT6360_MFD_CELL(mt6360_led),
+ MT6360_MFD_CELL(mt6360_pmic),
+ MT6360_MFD_CELL(mt6360_ldo),
+ /* tcpc dev */
+ {
+ .name = "mt6360_tcpc",
+ .of_compatible = "mediatek,mt6360_tcpc",
+ },
+};
+
+static const unsigned short mt6360_slave_addr[MT6360_SLAVE_MAX] = {
+ MT6360_PMU_SLAVEID,
+ MT6360_PMIC_SLAVEID,
+ MT6360_LDO_SLAVEID,
+ MT6360_TCPC_SLAVEID,
+};
+
+static int mt6360_pmu_probe(struct i2c_client *client,
+ const struct i2c_device_id *id)
+{
+ struct mt6360_pmu_info *mpi;
+ unsigned int reg_data = 0;
+ int i, ret;
+
+ mpi = devm_kzalloc(&client->dev, sizeof(*mpi), GFP_KERNEL);
+ if (!mpi)
+ return -ENOMEM;
+ mpi->dev = &client->dev;
+ i2c_set_clientdata(client, mpi);
+
+	/* regmap register */
+ mpi->regmap = devm_regmap_init_i2c(client, &mt6360_pmu_regmap_config);
+ if (IS_ERR(mpi->regmap)) {
+ dev_err(&client->dev, "regmap register fail\n");
+ return PTR_ERR(mpi->regmap);
+ }
+ /* chip id check */
+	ret = regmap_read(mpi->regmap, MT6360_PMU_DEV_INFO, &reg_data);
+ if (ret < 0) {
+ dev_err(&client->dev, "device not found\n");
+ return ret;
+ }
+ if ((reg_data & CHIP_VEN_MASK) != CHIP_VEN_MT6360) {
+ dev_err(&client->dev, "not mt6360 chip\n");
+ return -ENODEV;
+ }
+ mpi->chip_rev = reg_data & CHIP_REV_MASK;
+ /* irq register */
+ memcpy(&mpi->irq_chip, &mt6360_pmu_irq_chip, sizeof(mpi->irq_chip));
+ mpi->irq_chip.name = dev_name(&client->dev);
+ mpi->irq_chip.irq_drv_data = mpi;
+ ret = devm_regmap_add_irq_chip(&client->dev, mpi->regmap, client->irq,
+ IRQF_TRIGGER_FALLING, 0, &mpi->irq_chip,
+ &mpi->irq_data);
+ if (ret < 0) {
+ dev_err(&client->dev, "regmap irq chip add fail\n");
+ return ret;
+ }
+ /* new i2c slave device */
+ for (i = 0; i < MT6360_SLAVE_MAX; i++) {
+ if (mt6360_slave_addr[i] == client->addr) {
+ mpi->i2c[i] = client;
+ continue;
+ }
+ mpi->i2c[i] = i2c_new_dummy(client->adapter,
+ mt6360_slave_addr[i]);
+ if (!mpi->i2c[i]) {
+ dev_err(&client->dev, "new i2c dev [%d] fail\n", i);
+ ret = -ENODEV;
+ goto out;
+ }
+ i2c_set_clientdata(mpi->i2c[i], mpi);
+ }
+ /* mfd cell register */
+ ret = devm_mfd_add_devices(&client->dev, PLATFORM_DEVID_AUTO,
+ mt6360_devs, ARRAY_SIZE(mt6360_devs), NULL,
+ 0, regmap_irq_get_domain(mpi->irq_data));
+ if (ret < 0) {
+ dev_err(&client->dev, "mfd add cells fail\n");
+ goto out;
+ }
+ dev_info(&client->dev, "Successfully probed\n");
+ return 0;
+out:
+ while (--i >= 0) {
+ if (mpi->i2c[i]->addr == client->addr)
+ continue;
+ i2c_unregister_device(mpi->i2c[i]);
+ }
+ return ret;
+}
+
+static int mt6360_pmu_remove(struct i2c_client *client)
+{
+ struct mt6360_pmu_info *mpi = i2c_get_clientdata(client);
+ int i;
+
+ for (i = 0; i < MT6360_SLAVE_MAX; i++) {
+ if (mpi->i2c[i]->addr == client->addr)
+ continue;
+ i2c_unregister_device(mpi->i2c[i]);
+ }
+ return 0;
+}
+
+static int __maybe_unused mt6360_pmu_suspend(struct device *dev)
+{
+ struct i2c_client *i2c = to_i2c_client(dev);
+
+ if (device_may_wakeup(dev))
+ enable_irq_wake(i2c->irq);
+ return 0;
+}
+
+static int __maybe_unused mt6360_pmu_resume(struct device *dev)
+{
+
+ struct i2c_client *i2c = to_i2c_client(dev);
+
+ if (device_may_wakeup(dev))
+ disable_irq_wake(i2c->irq);
+ return 0;
+}
+
+static SIMPLE_DEV_PM_OPS(mt6360_pmu_pm_ops,
+ mt6360_pmu_suspend, mt6360_pmu_resume);
+
+static const struct of_device_id __maybe_unused mt6360_pmu_of_id[] = {
+ { .compatible = "mediatek,mt6360_pmu", },
+ {},
+};
+MODULE_DEVICE_TABLE(of, mt6360_pmu_of_id);
+
+static const struct i2c_device_id mt6360_pmu_id[] = {
+ { "mt6360_pmu", 0 },
+ {},
+};
+MODULE_DEVICE_TABLE(i2c, mt6360_pmu_id);
+
+static struct i2c_driver mt6360_pmu_driver = {
+ .driver = {
+ .name = "mt6360_pmu",
+ .owner = THIS_MODULE,
+ .pm = &mt6360_pmu_pm_ops,
+ .of_match_table = of_match_ptr(mt6360_pmu_of_id),
+ },
+ .probe = mt6360_pmu_probe,
+ .remove = mt6360_pmu_remove,
+ .id_table = mt6360_pmu_id,
+};
+module_i2c_driver(mt6360_pmu_driver);
+
+MODULE_AUTHOR("CY_Huang <cy_huang@richtek.com>");
+MODULE_DESCRIPTION("MT6360 PMU I2C Driver");
+MODULE_LICENSE("GPL");
+MODULE_VERSION("1.0.0");
--
1.9.1
^ permalink raw reply related [flat|nested] 3471+ messages in thread
* (unknown)
@ 2019-08-23 2:12 Rob Herring
0 siblings, 0 replies; 3471+ messages in thread
From: Rob Herring @ 2019-08-23 2:12 UTC (permalink / raw)
To: dri-devel
Cc: Maxime Ripard, Tomeu Vizoso, David Airlie, Sean Paul,
Steven Price, Boris Brezillon, Alyssa Rosenzweig, Robin Murphy
Subject: [PATCH v2 0/8] panfrost: Locking and runtime PM fixes
With further testing of recent changes with lockdep and other locking
checks enabled, we've found several bugs in the shrinker code and one
sleep while atomic in panfrost_gem_open(). This series addresses those
issues.
Delaying the unmapping of pages turns out to be a bad idea. Instead we
need to rework panfrost_mmu_unmap() to not do a runtime PM resume which
takes several locks and causes more lockdep warnings. Unfortunately,
there initially appeared to be some mismatches between the runtime PM
state and the h/w. The result is several fixes to the runtime PM
initialization and handling in jobs. With this, the changes to
panfrost_mmu_unmap() are working correctly.
v2:
- Drop already applied 'drm/panfrost: Fix sleeping while atomic in
panfrost_gem_open'
- Runtime PM clean-ups
- Keep panfrost_gem_purge and use mutex_trylock there
- Rework panfrost_mmu_unmap runtime PM
Rob
Rob Herring (8):
drm/panfrost: Fix possible suspend in panfrost_remove
drm/panfrost: Rework runtime PM initialization
drm/panfrost: Hold runtime PM reference until jobs complete
drm/shmem: Do dma_unmap_sg before purging pages
drm/shmem: Use mutex_trylock in drm_gem_shmem_purge
drm/panfrost: Use mutex_trylock in panfrost_gem_purge
drm/panfrost: Rework page table flushing and runtime PM interaction
drm/panfrost: Remove unnecessary flushing from tlb_inv_context
drivers/gpu/drm/drm_gem_shmem_helper.c | 13 ++++-
drivers/gpu/drm/panfrost/panfrost_device.c | 9 ----
drivers/gpu/drm/panfrost/panfrost_drv.c | 16 ++++---
.../gpu/drm/panfrost/panfrost_gem_shrinker.c | 11 +++--
drivers/gpu/drm/panfrost/panfrost_job.c | 16 ++++---
drivers/gpu/drm/panfrost/panfrost_mmu.c | 47 +++++++++----------
include/drm/drm_gem_shmem_helper.h | 2 +-
7 files changed, 59 insertions(+), 55 deletions(-)
--
2.20.1
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2019-06-07 0:54 Dave Airlie
0 siblings, 0 replies; 3471+ messages in thread
From: Dave Airlie @ 2019-06-07 0:54 UTC (permalink / raw)
To: Linus Torvalds, Daniel Vetter; +Cc: dri-devel, LKML
Hey Linus,
A small bit more lively this week but not majorly so. I'm away in
Japan next week for family holiday, so I'll be pretty disconnected,
I've asked Daniel to do fixes for the week while I'm out.
core:
- Allow fb changes in async commits (drivers as well)
udmabuf:
- Unmap scatterlist when unmapping udmabuf
komeda:
- oops, dma mapping and warning fixes
arm-hdlcd:
- clock fixes,
- mode validation fix
i915:
- Add a missing Icelake workaround
- GVT - DMA map fault fix and enforcement fixes
Dave.
amdgpu:
- DCE resume fix
- New raven variation updates
drm-fixes-2019-06-07:
drm i915, amdgpu, arm display, atomic update fixes
The following changes since commit f2c7c76c5d0a443053e94adb9f0918fa2fb85c3a:
Linux 5.2-rc3 (2019-06-02 13:55:33 -0700)
are available in the Git repository at:
git://anongit.freedesktop.org/drm/drm tags/drm-fixes-2019-06-07
for you to fetch changes up to e659b4122cf9e0938b80215de6c06823fb4cf796:
Merge tag 'drm-intel-fixes-2019-06-06' of
git://anongit.freedesktop.org/drm/drm-intel into drm-fixes (2019-06-07
10:41:33 +1000)
----------------------------------------------------------------
drm i915, amdgpu, arm display, atomic update fixes
----------------------------------------------------------------
Aleksei Gimbitskii (2):
drm/i915/gvt: Check if cur_pt_type is valid
drm/i915/gvt: Assign NULL to the pointer after memory free.
Chengming Gui (1):
drm/amd/powerplay: add set_power_profile_mode for raven1_refresh
Colin Xu (3):
drm/i915/gvt: Update force-to-nonpriv register whitelist
drm/i915/gvt: Fix GFX_MODE handling
drm/i915/gvt: Fix vGPU CSFE_CHICKEN1_REG mmio handler
Dan Carpenter (1):
drm/komeda: Potential error pointer dereference
Dave Airlie (5):
Merge tag 'drm-intel-fixes-2019-06-03' of
git://anongit.freedesktop.org/drm/drm-intel into drm-fixes
Merge branch 'drm-fixes-5.2' of
git://people.freedesktop.org/~agd5f/linux into drm-fixes
Merge tag 'drm-misc-fixes-2019-06-05' of
git://anongit.freedesktop.org/drm/drm-misc into drm-fixes
Merge branch 'malidp-fixes' of git://linux-arm.org/linux-ld into drm-fixes
Merge tag 'drm-intel-fixes-2019-06-06' of
git://anongit.freedesktop.org/drm/drm-intel into drm-fixes
Gao, Fred (1):
drm/i915/gvt: Fix cmd length of VEB_DI_IECP
Helen Koike (5):
drm/rockchip: fix fb references in async update
drm/amd: fix fb references in async update
drm/msm: fix fb references in async update
drm/vc4: fix fb references in async update
drm: don't block fb changes for async plane updates
Joonas Lahtinen (2):
Merge tag 'gvt-fixes-2019-05-30' of
https://github.com/intel/gvt-linux into drm-intel-fixes
Merge tag 'gvt-fixes-2019-06-05' of
https://github.com/intel/gvt-linux into drm-intel-fixes
Louis Li (1):
drm/amdgpu: fix ring test failure issue during s3 in vce 3.0 (V2)
Lowry Li (Arm Technology China) (1):
drm/komeda: fixing of DMA mapping sg segment warning
Lucas Stach (1):
udmabuf: actually unmap the scatterlist
Prike Liang (1):
drm/amd/amdgpu: add RLC firmware to support raven1 refresh
Robin Murphy (2):
drm/arm/hdlcd: Actually validate CRTC modes
drm/arm/hdlcd: Allow a bit of clock tolerance
Tina Zhang (1):
drm/i915/gvt: Initialize intel_gvt_gtt_entry in stack
Tvrtko Ursulin (1):
drm/i915/icl: Add WaDisableBankHangMode
Weinan Li (1):
drm/i915/gvt: add F_CMD_ACCESS flag for wa regs
Wen He (1):
drm/arm/mali-dp: Add a loop around the second set CVAL and try 5 times
Xiaolin Zhang (1):
drm/i915/gvt: save RING_HEAD into vreg when vgpu switched out
Xiong Zhang (1):
drm/i915/gvt: refine ggtt range validation
YueHaibing (1):
drm/komeda: remove set but not used variable 'kcrtc'
james qian wang (Arm Technology China) (1):
drm/komeda: Constify the usage of komeda_component/pipeline/dev_funcs
drivers/dma-buf/udmabuf.c | 1 +
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 12 ++---
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c | 15 +++++++
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.h | 1 +
drivers/gpu/drm/amd/amdgpu/amdgpu_vce.c | 4 +-
drivers/gpu/drm/amd/amdgpu/gfx_v9_0.c | 12 ++++-
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 3 +-
drivers/gpu/drm/amd/powerplay/hwmgr/hwmgr.c | 1 +
drivers/gpu/drm/amd/powerplay/hwmgr/smu10_hwmgr.c | 31 +++++++++++--
drivers/gpu/drm/amd/powerplay/inc/hwmgr.h | 1 +
.../gpu/drm/arm/display/komeda/d71/d71_component.c | 8 ++--
drivers/gpu/drm/arm/display/komeda/d71/d71_dev.c | 4 +-
drivers/gpu/drm/arm/display/komeda/komeda_crtc.c | 2 +-
drivers/gpu/drm/arm/display/komeda/komeda_dev.c | 6 ++-
drivers/gpu/drm/arm/display/komeda/komeda_dev.h | 8 ++--
.../gpu/drm/arm/display/komeda/komeda_pipeline.c | 4 +-
.../gpu/drm/arm/display/komeda/komeda_pipeline.h | 10 ++---
drivers/gpu/drm/arm/display/komeda/komeda_plane.c | 4 +-
drivers/gpu/drm/arm/hdlcd_crtc.c | 14 +++---
drivers/gpu/drm/arm/malidp_drv.c | 13 +++++-
drivers/gpu/drm/drm_atomic_helper.c | 22 +++++-----
drivers/gpu/drm/i915/gvt/cmd_parser.c | 2 +-
drivers/gpu/drm/i915/gvt/gtt.c | 38 +++++++++++-----
drivers/gpu/drm/i915/gvt/handlers.c | 49 ++++++++++++++++++---
drivers/gpu/drm/i915/gvt/reg.h | 2 +
drivers/gpu/drm/i915/gvt/scheduler.c | 25 +++++++++++
drivers/gpu/drm/i915/gvt/scheduler.h | 1 +
drivers/gpu/drm/i915/i915_reg.h | 3 ++
drivers/gpu/drm/i915/intel_workarounds.c | 6 +++
drivers/gpu/drm/msm/disp/mdp5/mdp5_plane.c | 4 ++
drivers/gpu/drm/rockchip/rockchip_drm_vop.c | 51 +++++++++++-----------
drivers/gpu/drm/vc4/vc4_plane.c | 2 +-
include/drm/drm_modeset_helper_vtables.h | 8 ++++
33 files changed, 268 insertions(+), 99 deletions(-)
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2019-05-26 11:51 Thomas Meyer
0 siblings, 0 replies; 3471+ messages in thread
From: Thomas Meyer @ 2019-05-26 11:51 UTC (permalink / raw)
From thomas@m3y3r.de Sun May 26 13:49:04 2019
Subject: [PATCH] drm/omap: Make sure device_id tables are NULL terminated
To: tomi.valkeinen@ti.com, airlied@linux.ie, daniel@ffwll.ch,
dri-devel@lists.freedesktop.org, linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"
Mime-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patch: Cocci
X-Mailer: DiffSplit
Message-ID: <1558871364611-249425076-1-diffsplit-thomas@m3y3r.de>
References: <1558871364605-1026448693-0-diffsplit-thomas@m3y3r.de>
In-Reply-To: <1558871364605-1026448693-0-diffsplit-thomas@m3y3r.de>
X-Serial-No: 1
Make sure (of/i2c/platform)_device_id tables are NULL terminated.
Signed-off-by: Thomas Meyer <thomas@m3y3r.de>
---
diff -u -p a/drivers/gpu/drm/omapdrm/dss/omapdss-boot-init.c b/drivers/gpu/drm/omapdrm/dss/omapdss-boot-init.c
--- a/drivers/gpu/drm/omapdrm/dss/omapdss-boot-init.c
+++ b/drivers/gpu/drm/omapdrm/dss/omapdss-boot-init.c
@@ -198,6 +198,7 @@ static const struct of_device_id omapdss
{ .compatible = "toppoly,td028ttec1" },
{ .compatible = "tpo,td028ttec1" },
{ .compatible = "tpo,td043mtea1" },
+ {},
};
static int __init omapdss_boot_init(void)
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2019-05-16 3:48 Mail Delivery Subsystem
0 siblings, 0 replies; 3471+ messages in thread
From: Mail Delivery Subsystem @ 2019-05-16 3:48 UTC (permalink / raw)
To: linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw
This Message was undeliverable due to the following reason:
Your message was not delivered because the destination computer was
not reachable within the allowed queue period. The amount of time
a message is queued before it is returned depends on local configuration
parameters.
Most likely there is a network problem that prevented delivery, but
it is also possible that the computer is turned off, or does not
have a mail system running right now.
Your message was not delivered within 2 days:
Host 81.249.114.149 is not responding.
The following recipients did not receive this message:
<linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org>
Please reply to postmaster-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org
if you feel this message to be in error.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (no subject)
@ 2019-04-10 11:17 Norbert Lange
2019-04-10 14:15 ` (unknown) Jan Kiszka
0 siblings, 1 reply; 3471+ messages in thread
From: Norbert Lange @ 2019-04-10 11:17 UTC (permalink / raw)
To: xenomai
V2 of the patchset. Fixed checkstyle issues, improved indentation,
and added casts to silence (false) pedantic warnings.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: (unknown)
2019-04-10 11:17 Norbert Lange
@ 2019-04-10 14:15 ` Jan Kiszka
0 siblings, 0 replies; 3471+ messages in thread
From: Jan Kiszka @ 2019-04-10 14:15 UTC (permalink / raw)
To: Norbert Lange, xenomai
On 10.04.19 13:17, Norbert Lange via Xenomai wrote:
> V2 of the patchset. Fixed checkstyle issues, improved indentation,
> and added casts to silence (false) pedantic warnings.
>
Both applied to next, thanks.
You probably want to edit your cover letter subject as well - or use git
format-patch.
Jan
--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (no subject)
@ 2019-04-10 11:14 Norbert Lange
2019-04-10 13:37 ` (unknown) Jan Kiszka
2019-04-10 14:36 ` (unknown) Jan Kiszka
0 siblings, 2 replies; 3471+ messages in thread
From: Norbert Lange @ 2019-04-10 11:14 UTC (permalink / raw)
To: xenomai
V3 of the patchset, corrected many checkstyle issues,
simplified condvar autoinit.
I did not use ARRAY_SIZE, as that would need another include.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: (unknown)
2019-04-10 11:14 Norbert Lange
@ 2019-04-10 13:37 ` Jan Kiszka
2019-04-10 14:36 ` (unknown) Jan Kiszka
1 sibling, 0 replies; 3471+ messages in thread
From: Jan Kiszka @ 2019-04-10 13:37 UTC (permalink / raw)
To: Norbert Lange, xenomai
On 10.04.19 13:14, Norbert Lange via Xenomai wrote:
> V3 of the patchset, corrected many checkstyle issues,
> simplified condvar autoinit.
>
Thanks for the update!
> I did not use ARRAY_SIZE, as that would need another include.
>
Ah, we do not have this construct in lib/ so far.
There are private ARRAY_LEN macros in lib/analogy/calibration.c and
utils/analogy/analogy_calibrate.h. Well, something that can be consolidated later.
Jan
--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux
^ permalink raw reply [flat|nested] 3471+ messages in thread
* Re: (unknown)
2019-04-10 11:14 Norbert Lange
2019-04-10 13:37 ` (unknown) Jan Kiszka
@ 2019-04-10 14:36 ` Jan Kiszka
[not found] ` <VI1PR05MB5917B5956F2E9365F10D6539F62E0@VI1PR05MB5917.eurprd05.prod.outlook.com>
1 sibling, 1 reply; 3471+ messages in thread
From: Jan Kiszka @ 2019-04-10 14:36 UTC (permalink / raw)
To: Norbert Lange, xenomai
On 10.04.19 13:14, Norbert Lange via Xenomai wrote:
> V3 of the patchset, corrected many checkstyle issues,
> simplified condvar autoinit.
>
> I did not use ARRAY_SIZE, as that would need another include.
>
All applied now. Patch 1 was not cleanly based on next, though. I think some
local style cleanup was missing.
Jan
--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2019-04-05 2:38 Changbin Du
0 siblings, 0 replies; 3471+ messages in thread
From: Changbin Du @ 2019-04-05 2:38 UTC (permalink / raw)
To: Jonathan Corbet
Cc: Rafael J. Wysocki, Changbin Du, Rafael J. Wysocki, Len Brown,
ACPI Devel Maling List, Linux Kernel Mailing List,
open list:DOCUMENTATION, Bjorn Helgaas
Bcc:
Subject: Re: [PATCH v2 00/24] Include linux ACPI docs into Sphinx TOC tree
Reply-To:
In-Reply-To: <20190403133613.13f3fd75@lwn.net>
+ Bjorn
On Wed, Apr 03, 2019 at 01:36:13PM -0600, Jonathan Corbet wrote:
> On Tue, 2 Apr 2019 10:25:23 +0200
> "Rafael J. Wysocki" <rafael@kernel.org> wrote:
>
> > There are ACPI-related documents currently in Documentation/acpi/ that
> > don't clearly fall under either driver-api or admin-guide. For
> > example, some of them describe various aspects of the ACPI support
> > subsystem operation and some document expectations with respect to the
> > ACPI tables provided by the firmware etc.
> >
> > Where would you recommend to put them after converting to .rst?
>
> OK, I've done some pondering on this. Maybe what we need is a new
> top-level "hardware guide" book meant to hold information on how the
> hardware works and what the kernel's expectations are. Architecture
> information could maybe go there too. Would this make sense?
>
> If so, I could see a division like this:
>
> Hardware guide
> acpi-lid
> aml-debugger (or maybe driver api?)
> debug (ditto)
> DSD-properties-rules
> gpio-properties
> i2c-muxes
>
> Admin guide
> cppc_sysfs
> initrd_table_override
>
> Driver-API
> enumeration
> scan_handlers
>
> other:
> dsdt-override: find another home for those five lines
>
Then, should we create dedicated sub-directories for these new chapters and
move the documents into the corresponding ones? Right now we have 'admin-guide' and
all admin-guide docs are under it; otherwise we will have references across different folders.
For example, the 'admin-guide/index.rst' will have:
...
../acpi/osi
...
which does not seem good.
> ...and so on. I've probably gotten at least one of those wrong, but that's
> the idea.
>
> Of course, then it would be nice to better integrate those documents so
> that they fit into a single coherent whole...a guy can dream...:)
>
I am not an administrator, so I don't know how administrators use this kernel
documentation. But as a kernel developer, I prefer all related documents
integrated into one chapter, because I would probably miss some useful sections
if the documents were distributed across several chapters. This is also usually
how a book is written (one chapter focuses on one topic and has sub-sections
such as overview, background knowledge, implementation details...),
but a book mostly targets hypothetical readers...
> Thanks,
>
> jon
--
Cheers,
Changbin Du
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2019-04-04 5:56 Mail Delivery Subsystem
0 siblings, 0 replies; 3471+ messages in thread
From: Mail Delivery Subsystem @ 2019-04-04 5:56 UTC (permalink / raw)
To: linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw
This Message was undeliverable due to the following reason:
Your message was not delivered because the destination computer was
not reachable within the allowed queue period. The amount of time
a message is queued before it is returned depends on local configuration
parameters.
Most likely there is a network problem that prevented delivery, but
it is also possible that the computer is turned off, or does not
have a mail system running right now.
Your message was not delivered within 1 day:
Host 66.24.245.203 is not responding.
The following recipients did not receive this message:
<linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org>
Please reply to postmaster-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org
if you feel this message to be in error.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2019-03-29 0:36 邀请函
0 siblings, 0 replies; 3471+ messages in thread
From: 邀请函 @ 2019-03-29 0:36 UTC (permalink / raw)
To: linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw
-------- Forwarded message --------
From: mivczrufz@exu.net
Date: 2019-3-29 8:36:49
To: linux-nvdimm@lists.01.org
Please see the attached file for details
_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2019-03-21 1:51 zhuchangchun
0 siblings, 0 replies; 3471+ messages in thread
From: zhuchangchun @ 2019-03-21 1:51 UTC (permalink / raw)
To: linux-kernel, linux-gpio
subscribe linux-kernel
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2019-03-04 3:42 Automatic Email Delivery Software
0 siblings, 0 replies; 3471+ messages in thread
From: Automatic Email Delivery Software @ 2019-03-04 3:42 UTC (permalink / raw)
To: linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw
[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain; charset=us-ascii, Size: 365 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2019-03-01 3:34 Automatic Email Delivery Software
0 siblings, 0 replies; 3471+ messages in thread
From: Automatic Email Delivery Software @ 2019-03-01 3:34 UTC (permalink / raw)
To: linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw
Message could not be delivered
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2019-02-28 3:36 Post Office
0 siblings, 0 replies; 3471+ messages in thread
From: Post Office @ 2019-02-28 3:36 UTC (permalink / raw)
To: linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw
This Message was undeliverable due to the following reason:
Your message was not delivered because the destination computer was
not reachable within the allowed queue period. The amount of time
a message is queued before it is returned depends on local configuration
parameters.
Most likely there is a network problem that prevented delivery, but
it is also possible that the computer is turned off, or does not
have a mail system running right now.
Your message was not delivered within 1 day:
Host 203.212.202.129 is not responding.
The following recipients did not receive this message:
<linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org>
Please reply to postmaster-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org
if you feel this message to be in error.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2019-01-15 2:55 Jens Axboe
0 siblings, 0 replies; 3471+ messages in thread
From: Jens Axboe @ 2019-01-15 2:55 UTC (permalink / raw)
To: linux-fsdevel, linux-aio, linux-block, linux-arch; +Cc: hch, jmoyer, avi
Here's v4 of the io_uring interface. No user visible changes this
time, outside of bumping the io_uring_sqe submission entry to a
full 64-bytes. This aligns better with caches, and leaves us some
room to grow for future features. See the v3 posting for full
details on the API:
https://lore.kernel.org/linux-block/20190112213011.1439-1-axboe@kernel.dk/
What I neglected to mention in the v3 posting, is that the fixed
buffer and fixed file interfaces are available through the
io_uring_register() system call. This means they can be registered
(and unregistered) independently of the io_uring context setup.
Patches are against 5.0-rc2 and can also be found in my 'io_uring'
git branch:
git://git.kernel.dk/linux-block io_uring
Changes since v3:
- Clean up fixed buffer index validation
- Add IORING_OP_NOP for ring perf testing
- Drop struct io_kiocb ki_* variable prefix, it clashes with struct
kiocb for no reason except to cause confusion
- Bump io_uring_sqe to 64 bytes. Cacheline sized and aligned
(on x86-64), and more future proof
- Use kmalloc_array()
- Make the page mlock rlimit incremental and not for root / CAP_IPC_LOCK
- Ensure io_uring_register() can't race with fops->release()
- Simplify and improve iopoll implementation
- Use FOLL_WRITE instead of open-coding it
- Fix 32-bit vs 64-bit sizing for the io_uring_register() structs
- Added x86 32-bit system calls
- Added 32-bit compat mode
- Rebased on 5.0-rc2
Documentation/filesystems/vfs.txt | 3 +
arch/x86/entry/syscalls/syscall_32.tbl | 3 +
arch/x86/entry/syscalls/syscall_64.tbl | 3 +
block/bio.c | 59 +-
fs/Makefile | 1 +
fs/block_dev.c | 19 +-
fs/file.c | 15 +-
fs/file_table.c | 9 +-
fs/gfs2/file.c | 2 +
fs/io_uring.c | 2072 ++++++++++++++++++++++++
fs/iomap.c | 48 +-
fs/xfs/xfs_file.c | 1 +
include/linux/bio.h | 14 +
include/linux/blk_types.h | 1 +
include/linux/file.h | 2 +
include/linux/fs.h | 6 +-
include/linux/iomap.h | 1 +
include/linux/sched/user.h | 2 +-
include/linux/syscalls.h | 7 +
include/uapi/linux/io_uring.h | 155 ++
init/Kconfig | 9 +
kernel/sys_ni.c | 3 +
22 files changed, 2395 insertions(+), 40 deletions(-)
--
Jens Axboe
--
To unsubscribe, send a message with 'unsubscribe linux-aio' in
the body to majordomo@kvack.org. For more info on Linux AIO,
see: http://www.kvack.org/aio/
Don't email: aart@kvack.org
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2019-01-02 12:25 Frank Wunderlich
0 siblings, 0 replies; 3471+ messages in thread
From: Frank Wunderlich @ 2019-01-02 12:25 UTC (permalink / raw)
To: linux-arm-kernel, linux-mediatek
Resending to the mailing list because the previous attempt was blocked by a
non-existent blacklist filter: https://www.dnsbl.info/dnsbl-njabl-org.php
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-11-27 0:07 Offer
0 siblings, 0 replies; 3471+ messages in thread
From: Offer @ 2018-11-27 0:07 UTC (permalink / raw)
Good Day, We are a registered private money lender. We give out loans
to firms, Individual who need to update their financial status all
over the world, with Minimal annual Interest Rates of 2%reply if
needed.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-11-18 20:40 Major Dennis Hornbeck
0 siblings, 0 replies; 3471+ messages in thread
From: Major Dennis Hornbeck @ 2018-11-18 20:40 UTC (permalink / raw)
I am in the military unit here in Afghanistan, we have some amount of funds that we want to move out of the country. My partners and I need a good partner someone we can trust. It is risk free and legal. Reply to this email: hornbeckmajordennis830@gmail.com
Regards,Major Dennis Hornbeck.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-11-18 9:11 Mrs. Maureen Hinckley
0 siblings, 0 replies; 3471+ messages in thread
From: Mrs. Maureen Hinckley @ 2018-11-18 9:11 UTC (permalink / raw)
I am Maureen Hinckley and my foundation is donating (Five hundred and fifty thousand USD) to you. Contact us via my email at (maurhinck1@gmail.com) for further details.
Best Regards, Mrs. Maureen Hinckley, Copyright ©2018 The Maureen Hinckley Foundation All Rights Reserved.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-11-11 8:05 Oliver Carter
0 siblings, 0 replies; 3471+ messages in thread
From: Oliver Carter @ 2018-11-11 8:05 UTC (permalink / raw)
To: netdev
Netdev https://goo.gl/pW8d8y Oliver Carter
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-10-31 0:38 Ubaithullah Masood
0 siblings, 0 replies; 3471+ messages in thread
From: Ubaithullah Masood @ 2018-10-31 0:38 UTC (permalink / raw)
This is Mr Ubaithullah Masood from Banco Santander Bank S A Hong Kong.
I got your contact during my private search on net..Would you be
interested in a business transaction to act as the beneficiary to
claim 9.8M USD funds of my deceased client who died intestate (
Without a Will)and my bank wants to confiscate the funds if the funds
are not claimed soon. Do get back for more details as this deal is
safe and all documentation will be done legally and we will share 50%
each.
Thanks.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-10-21 16:25 Michael Tirado
0 siblings, 0 replies; 3471+ messages in thread
From: Michael Tirado @ 2018-10-21 16:25 UTC (permalink / raw)
To: Airlied, dri-devel, LKML, kraxel, alexander.deucher,
christian.koenig, David1.zhou, Hongbo.He
Cc: seanpaul, Gustavo, maarten.lankhorst
[-- Attachment #1: Type: text/plain, Size: 5516 bytes --]
Mapping a drm "dumb" buffer fails on a 32-bit system (i686) from what
appears to be a truncated memory address that has been copied
throughout several files. The bug manifests as an -EINVAL when calling
mmap with the offset gathered from DRM_IOCTL_MODE_MAP_DUMB <--
DRM_IOCTL_MODE_ADDFB <-- DRM_IOCTL_MODE_CREATE_DUMB. I can provide
test code if needed.
The following patch will apply to 4.18 though I've only been able to
test through qemu bochs driver and nouveau. Intel driver worked
without any issues. I'm not sure if everyone is going to want to
share a constant, and the whitespace is screwed up from gmail's awful
javascript client, so let me know if I should resend this with any
specific changes. I have also attached the file with preserved
whitespace.
--- linux-4.13.8/drivers/gpu/drm/bochs/bochs.h 2017-10-18
07:38:33.000000000 +0000
+++ linux-4.13.8-modified/drivers/gpu/drm/bochs/bochs.h 2017-10-20
14:34:50.308633773 +0000
@@ -115,8 +115,6 @@
return container_of(gem, struct bochs_bo, gem);
}
-#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
-
static inline u64 bochs_bo_mmap_offset(struct bochs_bo *bo)
{
return drm_vma_node_offset_addr(&bo->bo.vma_node);
--- linux-4.13.8/drivers/gpu/drm/nouveau/nouveau_drv.h 2017-10-18
07:38:33.000000000 +0000
+++ linux-4.13.8-modified/drivers/gpu/drm/nouveau/nouveau_drv.h
2017-10-20 14:34:51.581633751 +0000
@@ -57,8 +57,6 @@
struct nouveau_channel;
struct platform_device;
-#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
-
#include "nouveau_fence.h"
#include "nouveau_bios.h"
--- linux-4.13.8/drivers/gpu/drm/ast/ast_drv.h 2017-10-18
07:38:33.000000000 +0000
+++ linux-4.13.8-modified/drivers/gpu/drm/ast/ast_drv.h 2017-10-20
14:34:50.289633773 +0000
@@ -356,8 +356,6 @@
uint32_t handle,
uint64_t *offset);
-#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
-
int ast_mm_init(struct ast_private *ast);
void ast_mm_fini(struct ast_private *ast);
--- linux-4.13.8/drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c
2017-10-18 07:38:33.000000000 +0000
+++ linux-4.13.8-modified/drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c
2017-10-20 14:34:50.644633767 +0000
@@ -21,8 +21,6 @@
#include "hibmc_drm_drv.h"
-#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
-
static inline struct hibmc_drm_private *
hibmc_bdev(struct ttm_bo_device *bd)
{
--- linux-4.13.8/drivers/gpu/drm/virtio/virtgpu_ttm.c 2017-10-18
07:38:33.000000000 +0000
+++ linux-4.13.8-modified/drivers/gpu/drm/virtio/virtgpu_ttm.c
2017-10-20 14:34:53.055633725 +0000
@@ -37,8 +37,6 @@
#include <linux/delay.h>
-#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
-
static struct
virtio_gpu_device *virtio_gpu_get_vgdev(struct ttm_bo_device *bdev)
{
--- linux-4.13.8/drivers/gpu/drm/qxl/qxl_drv.h 2017-10-18
07:38:33.000000000 +0000
+++ linux-4.13.8-modified/drivers/gpu/drm/qxl/qxl_drv.h 2017-10-20
14:34:52.072633742 +0000
@@ -88,9 +88,6 @@
} \
} while (0)
-#define DRM_FILE_OFFSET 0x100000000ULL
-#define DRM_FILE_PAGE_OFFSET (DRM_FILE_OFFSET >> PAGE_SHIFT)
-
#define QXL_INTERRUPT_MASK (\
QXL_INTERRUPT_DISPLAY |\
QXL_INTERRUPT_CURSOR |\
--- linux-4.13.8/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
2017-10-18 07:38:33.000000000 +0000
+++ linux-4.13.8-modified/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
2017-10-20 14:34:43.264633895 +0000
@@ -48,3 +48,1 @@
-#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
-
--- linux-4.13.8/drivers/gpu/drm/mgag200/mgag200_drv.h 2017-10-18
07:38:33.000000000 +0000
+++ linux-4.13.8-modified/drivers/gpu/drm/mgag200/mgag200_drv.h
2017-10-20 14:34:51.404633754 +0000
@@ -276,7 +276,6 @@
struct mga_i2c_chan *mgag200_i2c_create(struct drm_device *dev);
void mgag200_i2c_destroy(struct mga_i2c_chan *i2c);
-#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
void mgag200_ttm_placement(struct mgag200_bo *bo, int domain);
static inline int mgag200_bo_reserve(struct mgag200_bo *bo, bool no_wait)
--- linux-4.13.8/drivers/gpu/drm/radeon/radeon_ttm.c 2017-10-18
07:38:33.000000000 +0000
+++ linux-4.13.8-modified/drivers/gpu/drm/radeon/radeon_ttm.c
2017-10-20 14:34:52.588633733 +0000
@@ -45,8 +45,6 @@
#include "radeon_reg.h"
#include "radeon.h"
-#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
-
static int radeon_ttm_debugfs_init(struct radeon_device *rdev);
static void radeon_ttm_debugfs_fini(struct radeon_device *rdev);
--- linux-4.13.8/drivers/gpu/drm/cirrus/cirrus_drv.h 2017-10-18
07:38:33.000000000 +0000
+++ linux-4.13.8-modified/drivers/gpu/drm/cirrus/cirrus_drv.h
2017-10-20 14:34:50.333633772 +0000
@@ -178,7 +178,6 @@
#define to_cirrus_obj(x) container_of(x, struct cirrus_gem_object, base)
-#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
/* cirrus_mode.c */
void cirrus_crtc_fb_gamma_set(struct drm_crtc *crtc, u16 red, u16 green,
--- linux-4.13.8/include/drm/drmP.h 2017-10-18 07:38:33.000000000 +0000
+++ linux-4.13.8-modified/include/drm/drmP.h 2017-10-20
14:35:31.300633060 +0000
@@ -503,4 +503,10 @@
/* helper for handling conditionals in various for_each macros */
#define for_each_if(condition) if (!(condition)) {} else
+#if BITS_PER_LONG == 64
+#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
+#else
+#define DRM_FILE_PAGE_OFFSET (0x10000000ULL >> PAGE_SHIFT)
+#endif
+
#endif
[-- Attachment #2: drm_file_offset.patch --]
[-- Type: application/octet-stream, Size: 4581 bytes --]
--- linux-4.13.8/drivers/gpu/drm/bochs/bochs.h 2017-10-18 07:38:33.000000000 +0000
+++ linux-4.13.8-modified/drivers/gpu/drm/bochs/bochs.h 2017-10-20 14:34:50.308633773 +0000
@@ -115,8 +115,6 @@
return container_of(gem, struct bochs_bo, gem);
}
-#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
-
static inline u64 bochs_bo_mmap_offset(struct bochs_bo *bo)
{
return drm_vma_node_offset_addr(&bo->bo.vma_node);
--- linux-4.13.8/drivers/gpu/drm/nouveau/nouveau_drv.h 2017-10-18 07:38:33.000000000 +0000
+++ linux-4.13.8-modified/drivers/gpu/drm/nouveau/nouveau_drv.h 2017-10-20 14:34:51.581633751 +0000
@@ -57,8 +57,6 @@
struct nouveau_channel;
struct platform_device;
-#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
-
#include "nouveau_fence.h"
#include "nouveau_bios.h"
--- linux-4.13.8/drivers/gpu/drm/ast/ast_drv.h 2017-10-18 07:38:33.000000000 +0000
+++ linux-4.13.8-modified/drivers/gpu/drm/ast/ast_drv.h 2017-10-20 14:34:50.289633773 +0000
@@ -356,8 +356,6 @@
uint32_t handle,
uint64_t *offset);
-#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
-
int ast_mm_init(struct ast_private *ast);
void ast_mm_fini(struct ast_private *ast);
--- linux-4.13.8/drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c 2017-10-18 07:38:33.000000000 +0000
+++ linux-4.13.8-modified/drivers/gpu/drm/hisilicon/hibmc/hibmc_ttm.c 2017-10-20 14:34:50.644633767 +0000
@@ -21,8 +21,6 @@
#include "hibmc_drm_drv.h"
-#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
-
static inline struct hibmc_drm_private *
hibmc_bdev(struct ttm_bo_device *bd)
{
--- linux-4.13.8/drivers/gpu/drm/virtio/virtgpu_ttm.c 2017-10-18 07:38:33.000000000 +0000
+++ linux-4.13.8-modified/drivers/gpu/drm/virtio/virtgpu_ttm.c 2017-10-20 14:34:53.055633725 +0000
@@ -37,8 +37,6 @@
#include <linux/delay.h>
-#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
-
static struct
virtio_gpu_device *virtio_gpu_get_vgdev(struct ttm_bo_device *bdev)
{
--- linux-4.13.8/drivers/gpu/drm/qxl/qxl_drv.h 2017-10-18 07:38:33.000000000 +0000
+++ linux-4.13.8-modified/drivers/gpu/drm/qxl/qxl_drv.h 2017-10-20 14:34:52.072633742 +0000
@@ -88,9 +88,6 @@
} \
} while (0)
-#define DRM_FILE_OFFSET 0x100000000ULL
-#define DRM_FILE_PAGE_OFFSET (DRM_FILE_OFFSET >> PAGE_SHIFT)
-
#define QXL_INTERRUPT_MASK (\
QXL_INTERRUPT_DISPLAY |\
QXL_INTERRUPT_CURSOR |\
--- linux-4.13.8/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 2017-10-18 07:38:33.000000000 +0000
+++ linux-4.13.8-modified/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c 2017-10-20 14:34:43.264633895 +0000
@@ -48,3 +48,1 @@
-#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
-
--- linux-4.13.8/drivers/gpu/drm/mgag200/mgag200_drv.h 2017-10-18 07:38:33.000000000 +0000
+++ linux-4.13.8-modified/drivers/gpu/drm/mgag200/mgag200_drv.h 2017-10-20 14:34:51.404633754 +0000
@@ -276,7 +276,6 @@
struct mga_i2c_chan *mgag200_i2c_create(struct drm_device *dev);
void mgag200_i2c_destroy(struct mga_i2c_chan *i2c);
-#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
void mgag200_ttm_placement(struct mgag200_bo *bo, int domain);
static inline int mgag200_bo_reserve(struct mgag200_bo *bo, bool no_wait)
--- linux-4.13.8/drivers/gpu/drm/radeon/radeon_ttm.c 2017-10-18 07:38:33.000000000 +0000
+++ linux-4.13.8-modified/drivers/gpu/drm/radeon/radeon_ttm.c 2017-10-20 14:34:52.588633733 +0000
@@ -45,8 +45,6 @@
#include "radeon_reg.h"
#include "radeon.h"
-#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
-
static int radeon_ttm_debugfs_init(struct radeon_device *rdev);
static void radeon_ttm_debugfs_fini(struct radeon_device *rdev);
--- linux-4.13.8/drivers/gpu/drm/cirrus/cirrus_drv.h 2017-10-18 07:38:33.000000000 +0000
+++ linux-4.13.8-modified/drivers/gpu/drm/cirrus/cirrus_drv.h 2017-10-20 14:34:50.333633772 +0000
@@ -178,7 +178,6 @@
#define to_cirrus_obj(x) container_of(x, struct cirrus_gem_object, base)
-#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
/* cirrus_mode.c */
void cirrus_crtc_fb_gamma_set(struct drm_crtc *crtc, u16 red, u16 green,
--- linux-4.13.8/include/drm/drmP.h 2017-10-18 07:38:33.000000000 +0000
+++ linux-4.13.8-modified/include/drm/drmP.h 2017-10-20 14:35:31.300633060 +0000
@@ -503,4 +503,10 @@
/* helper for handling conditionals in various for_each macros */
#define for_each_if(condition) if (!(condition)) {} else
+#if BITS_PER_LONG == 64
+#define DRM_FILE_PAGE_OFFSET (0x100000000ULL >> PAGE_SHIFT)
+#else
+#define DRM_FILE_PAGE_OFFSET (0x10000000ULL >> PAGE_SHIFT)
+#endif
+
#endif
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-10-19 14:40 David Howells
2018-10-19 17:46 ` (unknown) David Miller
2018-10-19 20:51 ` (unknown) David Howells
0 siblings, 2 replies; 3471+ messages in thread
From: David Howells @ 2018-10-19 14:40 UTC (permalink / raw)
To: David Miller; +Cc: dhowells, netdev, linux-afs
Hi Dave,
Is there going to be a merge of net into net-next before the merge window
opens? Or do you have a sample merge that I can rebase my afs-next branch on?
The problem I have is that there's a really necessary patch in net that's not
in net-next:
d7b4c24f45d2efe51b8f213da4593fefd49240ba
rxrpc: Fix an uninitialised variable
(it fixes a fix that went in net just before you last merged it into
net-next).
So I would like to base my branch on both net and net-next, but the merge is
non-trivial, and I'd rather not hand Linus a merge that conflicts with yours.
The issues are:
(*) net/sched/cls_api.c
I think nlmsg_parse() needs to take both rtm_tca_policy and cb->extack as
its last two arguments. Each branch fills in one argument and leaves the
other NULL.
(*) net/ipv4/ipmr_base.c
mr_rtm_dumproute() got a piece abstracted out and modified in one branch,
but the unabstracted branch has a fix in the same area. I think the
thing to do is to apply the fix (removing the same two lines) from the
abstracted-out branch.
Thanks,
David
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
2018-10-19 14:40 (unknown), David Howells
@ 2018-10-19 17:46 ` David Miller
2018-10-19 20:51 ` (unknown) David Howells
1 sibling, 0 replies; 3471+ messages in thread
From: David Miller @ 2018-10-19 17:46 UTC (permalink / raw)
To: dhowells; +Cc: netdev, linux-afs
From: David Howells <dhowells@redhat.com>
Date: Fri, 19 Oct 2018 15:40:53 +0100
> Is there going to be a merge of net into net-next before the merge
> window opens? Or do you have a sample merge that I can rebase my
> afs-next branch on?
I'll be doing a net to net-next merge some time today.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
2018-10-19 14:40 (unknown), David Howells
2018-10-19 17:46 ` (unknown) David Miller
@ 2018-10-19 20:51 ` David Howells
2018-10-19 20:58 ` (unknown) David Miller
1 sibling, 1 reply; 3471+ messages in thread
From: David Howells @ 2018-10-19 20:51 UTC (permalink / raw)
To: David Miller; +Cc: dhowells, netdev, linux-afs
David Miller <davem@davemloft.net> wrote:
> > Is there going to be a merge of net into net-next before the merge
> > window opens? Or do you have a sample merge that I can rebase my
> > afs-next branch on?
>
> I'll be doing a net to net-next merge some time today.
Excellent, thanks!
David
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
2018-10-19 20:51 ` (unknown) David Howells
@ 2018-10-19 20:58 ` David Miller
0 siblings, 0 replies; 3471+ messages in thread
From: David Miller @ 2018-10-19 20:58 UTC (permalink / raw)
To: dhowells; +Cc: netdev, linux-afs
From: David Howells <dhowells@redhat.com>
Date: Fri, 19 Oct 2018 21:51:35 +0100
> David Miller <davem@davemloft.net> wrote:
>
>> > Is there going to be a merge of net into net-next before the merge
>> > window opens? Or do you have a sample merge that I can rebase my
>> > afs-next branch on?
>>
>> I'll be doing a net to net-next merge some time today.
>
> Excellent, thanks!
And this is now complete.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-10-09 15:55 Oliver Carter
0 siblings, 0 replies; 3471+ messages in thread
From: Oliver Carter @ 2018-10-09 15:55 UTC (permalink / raw)
To: netdev
Netdev https://goo.gl/Gf1b7B Oliver
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-09-19 19:57 Saif Hasan
0 siblings, 0 replies; 3471+ messages in thread
From: Saif Hasan @ 2018-09-19 19:57 UTC (permalink / raw)
To: netdev; +Cc: David S. Miller, Saif Hasan, Calvin Owens
From e4f144286efe0f298c11efe58e17b1ab91c7ee3f Mon Sep 17 00:00:00 2001
From: Saif Hasan <has@fb.com>
Date: Mon, 17 Sep 2018 16:28:54 -0700
Subject: [PATCH] mpls: allow routes on ip6gre devices
Summary:
This appears to be the necessary and sufficient change to enable `MPLS` on
`ip6gre` tunnels (RFC 4023).
This diff allows IP6GRE devices to be recognized by the MPLS kernel module,
so users can configure an interface to accept packets with MPLS
headers as well as set up MPLS routes on them.
Test Plan:
The test plan consists of multiple containers connected via GRE-v6 tunnels,
then carrying out the testing steps below.
- Carry out necessary sysctl settings on all containers
```
sysctl -w net.mpls.platform_labels=65536
sysctl -w net.mpls.ip_ttl_propagate=1
sysctl -w net.mpls.conf.lo.input=1
```
- Establish IP6GRE tunnels
```
ip -6 tunnel add name if_1_2_1 mode ip6gre \
local 2401:db00:21:6048:feed:0::1 \
remote 2401:db00:21:6048:feed:0::2 key 1
ip link set dev if_1_2_1 up
sysctl -w net.mpls.conf.if_1_2_1.input=1
ip -4 addr add 169.254.0.2/31 dev if_1_2_1 scope link
ip -6 tunnel add name if_1_3_1 mode ip6gre \
local 2401:db00:21:6048:feed:0::1 \
remote 2401:db00:21:6048:feed:0::3 key 1
ip link set dev if_1_3_1 up
sysctl -w net.mpls.conf.if_1_3_1.input=1
ip -4 addr add 169.254.0.4/31 dev if_1_3_1 scope link
```
- Install MPLS encap rules on node-1 towards node-2
```
ip route add 192.168.0.11/32 nexthop encap mpls 32/64 \
via inet 169.254.0.3 dev if_1_2_1
```
- Install MPLS forwarding rules on node-2 and node-3
```
// node2
ip -f mpls route add 32 via inet 169.254.0.7 dev if_2_4_1
// node3
ip -f mpls route add 64 via inet 169.254.0.12 dev if_4_3_1
```
- Ping 192.168.0.11 (node4) from 192.168.0.1 (node1) (where routing
towards 192.168.0.1 is via IP route directly towards node1 from node4)
```
ping 192.168.0.11
```
- tcpdump on the interface captures ping packets wrapped within an MPLS
header, which in turn is wrapped within an IP6GRE header
```
16:43:41.121073 IP6
2401:db00:21:6048:feed::1 > 2401:db00:21:6048:feed::2:
DSTOPT GREv0, key=0x1, length 100:
MPLS (label 32, exp 0, ttl 255) (label 64, exp 0, [S], ttl 255)
IP 192.168.0.1 > 192.168.0.11:
ICMP echo request, id 1208, seq 45, length 64
0x0000: 6000 2cdb 006c 3c3f 2401 db00 0021 6048 `.,..l<?$....!`H
0x0010: feed 0000 0000 0001 2401 db00 0021 6048 ........$....!`H
0x0020: feed 0000 0000 0002 2f00 0401 0401 0100 ......../.......
0x0030: 2000 8847 0000 0001 0002 00ff 0004 01ff ...G............
0x0040: 4500 0054 3280 4000 ff01 c7cb c0a8 0001 E..T2.@.........
0x0050: c0a8 000b 0800 a8d7 04b8 002d 2d3c a05b ...........--<.[
0x0060: 0000 0000 bcd8 0100 0000 0000 1011 1213 ................
0x0070: 1415 1617 1819 1a1b 1c1d 1e1f 2021 2223 .............!"#
0x0080: 2425 2627 2829 2a2b 2c2d 2e2f 3031 3233 $%&'()*+,-./0123
0x0090: 3435 3637 4567
```
Signed-off-by: Saif Hasan <has@fb.com>
---
net/mpls/af_mpls.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/net/mpls/af_mpls.c b/net/mpls/af_mpls.c
index 7a4de6d618b1..aeb5bf2f7595 100644
--- a/net/mpls/af_mpls.c
+++ b/net/mpls/af_mpls.c
@@ -1533,10 +1533,11 @@ static int mpls_dev_notify(struct notifier_block *this, unsigned long event,
unsigned int flags;
if (event == NETDEV_REGISTER) {
- /* For now just support Ethernet, IPGRE, SIT and IPIP devices */
+ /* For now just support Ethernet, IPGRE, IP6GRE, SIT and IPIP devices */
if (dev->type == ARPHRD_ETHER ||
dev->type == ARPHRD_LOOPBACK ||
dev->type == ARPHRD_IPGRE ||
+ dev->type == ARPHRD_IP6GRE ||
dev->type == ARPHRD_SIT ||
dev->type == ARPHRD_TUNNEL) {
mdev = mpls_add_dev(dev);
^ permalink raw reply related [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-09-16 13:39 iluminati
0 siblings, 0 replies; 3471+ messages in thread
From: iluminati @ 2018-09-16 13:39 UTC (permalink / raw)
--
join the Illuminati secret brotherhood and get $3,000,000.00
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-08-27 14:50 Christoph Hellwig
0 siblings, 0 replies; 3471+ messages in thread
From: Christoph Hellwig @ 2018-08-27 14:50 UTC (permalink / raw)
To: iommu
Cc: Marek Szyprowski, Robin Murphy, Paul Burton, Greg Kroah-Hartman,
linux-mips, linux-kernel
Subject: [RFC] merge dma_direct_ops and dma_noncoherent_ops
While most architectures are either always or never dma coherent for a
given build, the arm, arm64, mips and soon arc architectures can have
different dma coherent settings on a per-device basis. Additionally
some mips builds can decide at boot time if dma is coherent or not.
I've started to look into handling noncoherent dma in swiotlb, and
moving the dma-iommu ops into common code [1], and for that we need a
generic way to check if a given device is coherent or not. Moving
this flag into struct device also simplifies the conditionally coherent
architecture implementations.
These patches are also available in a git tree given that they have
a few previous posted dependencies:
git://git.infradead.org/users/hch/misc.git dma-direct-noncoherent-merge
Gitweb:
http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/dma-direct-noncoherent-merge
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-08-24 4:59 Dave Airlie
0 siblings, 0 replies; 3471+ messages in thread
From: Dave Airlie @ 2018-08-24 4:59 UTC (permalink / raw)
To: Linus Torvalds; +Cc: LKML, dri-devel
Hi Linus,
Just a couple of fixes PRs for rc1,
One MAINTAINERS address change, two panels fixes, and set of amdgpu
fixes (build fixes, display fixes and some others).
Thanks
Dave.
drm-next-2018-08-24:
amdgpu and panel/misc fixes.
The following changes since commit 3d63a3c14741ed015948943076f3c6a2f2cd7b27:
Merge tag 'drm-msm-next-2018-08-10' of
git://people.freedesktop.org/~robclark/linux into drm-next (2018-08-17
10:46:51 +1000)
are available in the Git repository at:
git://anongit.freedesktop.org/drm/drm tags/drm-next-2018-08-24
for you to fetch changes up to 3e20e97c2d55fb18e4b06d16478edc757483b7db:
Merge tag 'drm-misc-next-fixes-2018-08-23-1' of
git://anongit.freedesktop.org/drm/drm-misc into drm-next (2018-08-24
13:41:03 +1000)
----------------------------------------------------------------
amdgpu and panel/misc fixes.
----------------------------------------------------------------
Alex Deucher (1):
drm/amdgpu/display: disable eDP fast boot optimization on DCE8
Christian König (3):
drm/amdgpu: fix incorrect use of fcheck
drm/amdgpu: fix incorrect use of drm_file->pid
drm/amdgpu: fix amdgpu_amdkfd_remove_eviction_fence v3
Dave Airlie (3):
Merge tag 'drm-misc-next-fixes-2018-08-22' of
git://anongit.freedesktop.org/drm/drm-misc into drm-next
Merge branch 'drm-next-4.19' of
git://people.freedesktop.org/~agd5f/linux into drm-next
Merge tag 'drm-misc-next-fixes-2018-08-23-1' of
git://anongit.freedesktop.org/drm/drm-misc into drm-next
Dmytro Laktyushkin (3):
drm/amd/display: fix dp_ss_control vbios flag parsing
drm/amd/display: make dp_ss_off optional
drm/amd/display: fix dentist did ranges
Evan Quan (1):
drm/amdgpu: set correct base for THM/NBIF/MP1 IP
Kai-Heng Feng (1):
drm/edid: Add 6 bpc quirk for SDC panel in Lenovo B50-80
Leo (Sunpeng) Li (2):
Revert "drm/amdgpu/display: Replace CONFIG_DRM_AMD_DC_DCN1_0
with CONFIG_X86"
drm/amd/display: Don't build DCN1 when kcov is enabled
Samson Tam (1):
drm/amd/display: Do not retain link settings
Sean Paul (2):
drm/panel: simple: tv123wam: Add unprepare delay
MAINTAINERS: drm-misc: Change seanpaul's email address
Yintian Tao (2):
drm/amdgpu: access register without KIQ
drm/powerplay: enable dpm under pass-through
MAINTAINERS | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_amdkfd_gpuvm.c | 103 +++++++++------------
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 2 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_sched.c | 21 ++---
drivers/gpu/drm/amd/amdgpu/vega20_reg_init.c | 3 +
drivers/gpu/drm/amd/amdgpu/vi.c | 4 +-
drivers/gpu/drm/amd/display/Kconfig | 6 ++
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 10 +-
drivers/gpu/drm/amd/display/dc/Makefile | 2 +-
.../amd/display/dc/bios/command_table_helper2.c | 2 +-
drivers/gpu/drm/amd/display/dc/calcs/Makefile | 2 +-
drivers/gpu/drm/amd/display/dc/core/dc.c | 21 ++++-
drivers/gpu/drm/amd/display/dc/core/dc_debug.c | 2 +-
drivers/gpu/drm/amd/display/dc/core/dc_link.c | 6 +-
drivers/gpu/drm/amd/display/dc/core/dc_resource.c | 12 +--
drivers/gpu/drm/amd/display/dc/dc.h | 2 +-
.../gpu/drm/amd/display/dc/dce/dce_clock_source.c | 6 +-
.../gpu/drm/amd/display/dc/dce/dce_clock_source.h | 2 +-
drivers/gpu/drm/amd/display/dc/dce/dce_clocks.c | 18 ++--
drivers/gpu/drm/amd/display/dc/dce/dce_clocks.h | 2 +-
drivers/gpu/drm/amd/display/dc/dce/dce_dmcu.c | 6 +-
.../drm/amd/display/dc/dce/dce_stream_encoder.c | 20 ++--
.../amd/display/dc/dce110/dce110_hw_sequencer.c | 10 +-
drivers/gpu/drm/amd/display/dc/gpio/Makefile | 2 +-
drivers/gpu/drm/amd/display/dc/gpio/hw_factory.c | 4 +-
drivers/gpu/drm/amd/display/dc/gpio/hw_translate.c | 4 +-
drivers/gpu/drm/amd/display/dc/i2caux/Makefile | 2 +-
drivers/gpu/drm/amd/display/dc/i2caux/i2caux.c | 4 +-
drivers/gpu/drm/amd/display/dc/inc/core_types.h | 7 +-
drivers/gpu/drm/amd/display/dc/irq/Makefile | 2 +-
drivers/gpu/drm/amd/display/dc/irq/irq_service.c | 2 +-
drivers/gpu/drm/amd/display/dc/os_types.h | 2 +-
.../gpu/drm/amd/powerplay/hwmgr/hardwaremanager.c | 4 +-
drivers/gpu/drm/drm_edid.c | 3 +
drivers/gpu/drm/panel/panel-simple.c | 3 +
35 files changed, 161 insertions(+), 142 deletions(-)
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-08-22 9:07 system administrator
0 siblings, 0 replies; 3471+ messages in thread
From: system administrator @ 2018-08-22 9:07 UTC (permalink / raw)
--
Attention:
Your messages have exceeded the memory limit of 5 GB set by the administrator; your mailbox currently stands at 10.9 GB. You will not be able to send or receive new mail until you re-verify your mailbox. To restore your mailbox, send the following information:
Name:
Username:
Password:
Confirm password:
Email address:
Phone:
If you fail to re-verify your messages, your mailbox will be disabled!
We apologize for the inconvenience.
Verification code: EN: Ru...776774990..2018
Mail technical support ©2018
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-08-09 9:23 system administrator
0 siblings, 0 replies; 3471+ messages in thread
From: system administrator @ 2018-08-09 9:23 UTC (permalink / raw)
Attention:
Your messages have exceeded the memory limit of 5 GB set by the administrator; your mailbox currently stands at 10.9 GB. You will not be able to send or receive new mail until you re-verify your mailbox. To restore your mailbox, send the following information:
Name:
Username:
Password:
Confirm password:
Email address:
Phone:
If you fail to re-verify your messages, your mailbox will be disabled!
We apologize for the inconvenience.
Verification code: EN: 006524
Mail technical support © 2018
Thank you
System administrator
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2018-07-29 9:58 Sumitomo Rubber
0 siblings, 0 replies; 3471+ messages in thread
From: Sumitomo Rubber @ 2018-07-29 9:58 UTC (permalink / raw)
--
Did you receive our representative email ?
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-07-28 10:46 Andrew Martinez
0 siblings, 0 replies; 3471+ messages in thread
From: Andrew Martinez @ 2018-07-28 10:46 UTC (permalink / raw)
[-- Attachment #1: Type: text/plain, Size: 252 bytes --]
Do you need a loan? If yes, email us now for more information
Do you need a loan of any kind? If Yes email us now for more info
[-- Attachment #2: Type: text/html, Size: 519 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-07-28 10:14 Andrew Martinez
0 siblings, 0 replies; 3471+ messages in thread
From: Andrew Martinez @ 2018-07-28 10:14 UTC (permalink / raw)
[-- Attachment #1: Type: text/plain, Size: 158 bytes --]
Do you need a loan? If yes, email us now for more information
Do you need a loan of any kind? If Yes email us now for more info
[-- Attachment #2: Type: text/html, Size: 399 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-07-06 1:26 Dave Airlie
0 siblings, 0 replies; 3471+ messages in thread
From: Dave Airlie @ 2018-07-06 1:26 UTC (permalink / raw)
To: Linus Torvalds; +Cc: LKML, dri-devel
Hi Linus, (apologies for blank body pull earlier)
This is the drm fixes pull for rc4. It's a bit larger than I'd like, but the
exynos cleanups are pretty mechanical, and I'd rather have them in
sooner rather than later so we can avoid too many conflicts around
them. The non-mechanical exynos changes are mostly fixes for new
features recently introduced.
i915:
GVT and GGTT mapping fixes
amdgpu:
HDMI2.0 4K@60 Hz regression
Hotplug fixes for dual-GPU laptops to make power management better
Misc vega12 bios fixes, a race fix and some typos.
sii8620 bridge: small fixes around mode setting
core: use kvzalloc to allocate blob property memory.
If the exynos changes are too much, I'm happy to push back, and the
blank pull was thanks to baby-induced sleep deprivation, fat fingers
and gmail.
Thanks,
Dave.
drm-fixes-2018-07-06:
amdgpu, i915, exynos, udl, sii8620 and core fixes
The following changes since commit 021c91791a5e7e85c567452f1be3e4c2c6cb6063:
Linux 4.18-rc3 (2018-07-01 16:04:53 -0700)
are available in the Git repository at:
git://anongit.freedesktop.org/drm/drm tags/drm-fixes-2018-07-06
for you to fetch changes up to c78d1f9d95a9f2cd5546c64f5315f54681dd6055:
Merge tag 'exynos-drm-fixes-for-v4.18-rc4' of
git://git.kernel.org/pub/scm/linux/kernel/git/daeinki/drm-exynos into
drm-fixes (2018-07-06 10:47:02 +1000)
----------------------------------------------------------------
amdgpu, i915, exynos, udl, sii8620 and core fixes
----------------------------------------------------------------
Alex Deucher (2):
drm/amdgpu: fix swapped emit_ib_size in vce3
drm/amdgpu/pm: fix display count in non-DC path
Andrzej Pietrasiewicz (1):
drm/exynos: scaler: Reset hardware before starting the operation
Chris Wilson (1):
drm/i915: Try GGTT mmapping whole object as partial
Dave Airlie (4):
Merge tag 'drm-misc-fixes-2018-07-05' of
git://anongit.freedesktop.org/drm/drm-misc into drm-fixes
Merge tag 'drm-intel-fixes-2018-07-05' of
git://anongit.freedesktop.org/drm/drm-intel into drm-fixes
Merge branch 'drm-fixes-4.18' of
git://people.freedesktop.org/~agd5f/linux into drm-fixes
Merge tag 'exynos-drm-fixes-for-v4.18-rc4' of
git://git.kernel.org/.../daeinki/drm-exynos into drm-fixes
Evan Quan (3):
drm/amd/powerplay: correct vega12 thermal support as true
drm/amd/powerplay: correct vega12 bootup values settings
drm/amd/powerplay: smc_dpm_info structure change
Jani Nikula (1):
Merge tag 'gvt-fixes-2018-07-03' of
https://github.com/intel/gvt-linux into drm-intel-fixes
Lyude Paul (3):
drm/amdgpu: Make struct amdgpu_atif private to amdgpu_acpi.c
drm/amdgpu: Add amdgpu_atpx_get_dhandle()
drm/amdgpu: Dynamically probe for ATIF handle (v2)
Maciej Purski (3):
drm/bridge/sii8620: Send AVI infoframe in all MHL versions
drm/bridge/sii8620: Fix display of packed pixel modes
drm/bridge/sii8620: Fix link mode selection
Marek Szyprowski (10):
drm/exynos: ipp: Rework checking for the correct buffer formats
drm/exynos: rotator: Fix DRM_MODE_REFLECT_{X,Y} interpretation
drm/exynos: scaler: Fix support for YUV420, YUV422 and YUV444 modes
drm/exynos: gsc: Use real buffer width for configuring the hardware
drm/exynos: gsc: Increase Exynos5433 buffer width alignment to 16 pixels
drm/exynos: gsc: Fix DRM_MODE_REFLECT_{X,Y} interpretation
drm/exynos: gsc: Fix support for NV16/61, YUV420/YVU420 and YUV422 modes
drm/exynos: fimc: Use real buffer width for configuring the hardware
drm/exynos: decon5433: Fix per-plane global alpha for XRGB modes
drm/exynos: decon5433: Fix WINCONx reset value
Michel Dänzer (1):
drm: Use kvzalloc for allocating blob property memory
Mikita Lipski (2):
drm/amd/display: adding ycbcr420 pixel encoding for hdmi
drm/amd/display: add a check for display depth validity
Mikulas Patocka (1):
drm/udl: fix display corruption of the last line
Nicolai Hähnle (1):
drm/amdgpu: fix user fence write race condition
Stefan Agner (1):
drm/exynos: ipp: use correct enum type
Thomas Zimmermann (3):
drm/exynos: Replace drm_framebuffer_{un/reference} with put,get functions
drm/exynos: Replace drm_gem_object_unreference_unlocked with put function
drm/exynos: Replace drm_dev_unref with drm_dev_put
Xiaolin Zhang (1):
drm/i915/gvt: changed DDI mode emulation type
Zhao Yan (1):
drm/i915/gvt: fix a bug of partially write ggtt enties
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 46 ++------
drivers/gpu/drm/amd/amdgpu/amdgpu_acpi.c | 131 +++++++++++++++++----
drivers/gpu/drm/amd/amdgpu/amdgpu_atpx_handler.c | 6 +
drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 12 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_pm.c | 2 +-
drivers/gpu/drm/amd/amdgpu/vce_v3_0.c | 4 +-
drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 49 +++++++-
drivers/gpu/drm/amd/include/atomfirmware.h | 5 +-
drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.c | 96 +++++++++++++--
drivers/gpu/drm/amd/powerplay/hwmgr/ppatomfwctrl.h | 5 +
drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.c | 4 +
drivers/gpu/drm/amd/powerplay/hwmgr/vega12_hwmgr.h | 3 +
.../amd/powerplay/hwmgr/vega12_processpptables.c | 2 +
.../drm/amd/powerplay/inc/vega12/smu9_driver_if.h | 5 +-
drivers/gpu/drm/bridge/sil-sii8620.c | 86 +++++++++-----
drivers/gpu/drm/drm_property.c | 6 +-
drivers/gpu/drm/exynos/exynos5433_drm_decon.c | 6 +-
drivers/gpu/drm/exynos/exynos_drm_drv.c | 4 +-
drivers/gpu/drm/exynos/exynos_drm_fb.c | 2 +-
drivers/gpu/drm/exynos/exynos_drm_fimc.c | 17 +--
drivers/gpu/drm/exynos/exynos_drm_gem.c | 10 +-
drivers/gpu/drm/exynos/exynos_drm_gsc.c | 51 ++++----
drivers/gpu/drm/exynos/exynos_drm_ipp.c | 110 +++++++++--------
drivers/gpu/drm/exynos/exynos_drm_plane.c | 2 +-
drivers/gpu/drm/exynos/exynos_drm_rotator.c | 4 +-
drivers/gpu/drm/exynos/exynos_drm_scaler.c | 44 +++++--
drivers/gpu/drm/exynos/regs-gsc.h | 1 +
drivers/gpu/drm/i915/gvt/display.c | 6 +-
drivers/gpu/drm/i915/gvt/gtt.c | 58 +++++++++
drivers/gpu/drm/i915/gvt/gtt.h | 2 +
drivers/gpu/drm/i915/i915_gem.c | 28 +++--
drivers/gpu/drm/i915/i915_vma.c | 2 +-
drivers/gpu/drm/udl/udl_fb.c | 5 +-
drivers/gpu/drm/udl/udl_transfer.c | 11 +-
34 files changed, 583 insertions(+), 242 deletions(-)
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-07-05 10:36 rosdi ablatiff
0 siblings, 0 replies; 3471+ messages in thread
From: rosdi ablatiff @ 2018-07-05 10:36 UTC (permalink / raw)
To: dri-devel
[-- Attachment #1.1: Type: text/plain, Size: 1 bytes --]
[-- Attachment #1.2: Type: text/html, Size: 1 bytes --]
[-- Attachment #2: Type: text/plain, Size: 160 bytes --]
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-06-23 21:08 David Lechner
0 siblings, 0 replies; 3471+ messages in thread
From: David Lechner @ 2018-06-23 21:08 UTC (permalink / raw)
To: linux-remoteproc, devicetree, linux-omap, linux-arm-kernel
Cc: David Lechner, Ohad Ben-Cohen, Bjorn Andersson, Rob Herring,
Mark Rutland, Benoît Cousson, Tony Lindgren, Sekhar Nori,
Kevin Hilman, linux-kernel
Date: Sat, 23 Jun 2018 15:43:59 -0500
Subject: [PATCH 0/8] New remoteproc driver for TI PRU
This series adds a new remoteproc driver for the TI Programmable Real-Time Unit
(PRU) that is present in some TI Sitara processors. This code has been tested
and works on AM1808 (LEGO MINDSTORMS EV3) and AM3358 (BeagleBone Green).
There are a couple of quirks that had to be worked around in order to get this
working. The PRU units have multiple memory maps. Notably, both the instruction
RAM and data RAM are at address 0x0. This caused the da_to_va callback to not
work because the same address could refer to two different locations. To work
around this, the first two patches add a "map" parameter to the da_to_va
callbacks so that we have an extra bit of information to make this distinction.
Also, on AM38xx we have to use pdata for accessing the reset since there is
no reset controller. Several other devices already do this, so this seems
the best way for now.
For anyone else who would like to test, I used the rpmsg-client-sample driver.
Just enable it in your kernel config. Then grab the appropriate firmware[1]
and put it in /lib/firmware/. Use sysfs to start and stop the PRU:
echo start > /sys/class/remoteproc<n>/state
echo stop > /sys/class/remoteproc<n>/state
[1]: firmware downloads:
AM18XX: https://github.com/ev3dev/ev3dev-pru-firmware/releases/download/mainline-kernel-testing/AM18xx-PRU-rpmsg-client-sample.zip
AM335X: https://github.com/ev3dev/ev3dev-pru-firmware/releases/download/mainline-kernel-testing/AM335x-PRU-rpmsg-client-sample.zip
David Lechner (8):
remoteproc: add map parameter to da_to_va
remoteproc: add page lookup for TI PRU to ELF loader
ARM: OMAP2+: add pdata quirks for PRUSS reset
dt-bindings: add bindings for TI PRU as remoteproc
remoteproc: new driver for TI PRU
ARM: davinci_all_defconfig: enable PRU remoteproc module
ARM: dts: da850: add node for PRUSS
ARM: dts: am33xx: add node for PRU remoteproc
.../bindings/remoteproc/ti_pru_rproc.txt | 51 ++
MAINTAINERS | 5 +
arch/arm/boot/dts/am33xx.dtsi | 9 +
arch/arm/boot/dts/da850.dtsi | 8 +
arch/arm/configs/davinci_all_defconfig | 2 +
arch/arm/mach-omap2/pdata-quirks.c | 9 +
drivers/remoteproc/Kconfig | 7 +
drivers/remoteproc/Makefile | 1 +
drivers/remoteproc/imx_rproc.c | 2 +-
drivers/remoteproc/keystone_remoteproc.c | 3 +-
drivers/remoteproc/qcom_adsp_pil.c | 2 +-
drivers/remoteproc/qcom_q6v5_pil.c | 2 +-
drivers/remoteproc/qcom_wcnss.c | 2 +-
drivers/remoteproc/remoteproc_core.c | 10 +-
drivers/remoteproc/remoteproc_elf_loader.c | 117 +++-
drivers/remoteproc/remoteproc_internal.h | 2 +-
drivers/remoteproc/st_slim_rproc.c | 2 +-
drivers/remoteproc/ti_pru_rproc.c | 660 ++++++++++++++++++
drivers/remoteproc/wkup_m3_rproc.c | 3 +-
include/linux/platform_data/ti-pruss.h | 18 +
include/linux/remoteproc.h | 2 +-
include/uapi/linux/elf-em.h | 1 +
22 files changed, 899 insertions(+), 19 deletions(-)
create mode 100644 Documentation/devicetree/bindings/remoteproc/ti_pru_rproc.txt
create mode 100644 drivers/remoteproc/ti_pru_rproc.c
create mode 100644 include/linux/platform_data/ti-pruss.h
--
2.17.1
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2018-06-16 8:15 Mrs Mavis Wanczyk
0 siblings, 0 replies; 3471+ messages in thread
From: Mrs Mavis Wanczyk @ 2018-06-16 8:15 UTC (permalink / raw)
--
This is the second time I am sending you this mail.
I, Mavis Wanczyk, am donating $5 million from part of my Powerball
jackpot lottery of $758 million; respond with your details
to make a claim.
I await your earliest response and God Bless you
Good luck.
Mrs Mavis L. Wanczyk
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-06-13 15:48 Ubaithullah Masood
0 siblings, 0 replies; 3471+ messages in thread
From: Ubaithullah Masood @ 2018-06-13 15:48 UTC (permalink / raw)
Could you act as the beneficiary to claim 9.8M USD that my bank wants
to confiscate? I will give you 50%, and all documentation would be
put in place.
Mr Ubaithullah from Hong Kong.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-05-31 17:11 Adam Richter via Containers
0 siblings, 0 replies; 3471+ messages in thread
From: Adam Richter via Containers @ 2018-05-31 17:11 UTC (permalink / raw)
To: FRoss Perry, alexander deucher, containers, sca38018, westglen
http://voice.promang.net
Adam Richter
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-05-29 7:26 administrator
0 siblings, 0 replies; 3471+ messages in thread
From: administrator @ 2018-05-29 7:26 UTC (permalink / raw)
Webmail user,
Please note that 95% of the emails you have received since you last needed to update your webmail server in our database have been deferred. To receive and send your messages regularly, our email technical team will update your account within 3 business days. If you do not do this, your account will be temporarily suspended by our services. To check your mailbox again, send the following information:
Name:
Username:
Password:
Confirm password:
Warning!! If the accounts are not updated within two
days of receiving this email, the email account owners
will permanently lose their accounts.
Thank you for your cooperation!
Copyright © 2017-2018 WebMail Technical Support Service, Inc.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-05-25 3:26 Bounced mail
0 siblings, 0 replies; 3471+ messages in thread
From: Bounced mail @ 2018-05-25 3:26 UTC (permalink / raw)
To: linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw
The original message was received at Fri, 25 May 2018 11:26:13 +0800
from lists.01.org [137.8.247.250]
----- The following addresses had permanent fatal errors -----
<linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org>
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2018-05-18 12:04 DaeRyong Jeong
0 siblings, 0 replies; 3471+ messages in thread
From: DaeRyong Jeong @ 2018-05-18 12:04 UTC (permalink / raw)
To: davem, kuznet, yoshfuji
Cc: netdev, linux-kernel, byoungyoung, kt0755, bammanag
Bcc:
Subject: WARNING in ip_recv_error
Reply-To:
We report the crash: WARNING in ip_recv_error
This crash has been found in v4.17-rc1 using RaceFuzzer (a modified
version of Syzkaller), which we describe more at the end of this
report. Our analysis shows that the race occurs when invoking two
syscalls concurrently, do_ipv6_setsockopt and inet_recvmsg.
Diagnosis:
We think the concurrent execution of do_ipv6_setsockopt() with optname
IPV6_ADDRFORM and inet_recvmsg() causes the crash. do_ipv6_setsockopt()
can update sk->sk_prot to &udp_prot and sk->sk_family to PF_INET, but
inet_recvmsg() can execute sk->sk_prot->recvmsg() right after
sk->sk_prot has been updated and before sk->sk_family is updated by
do_ipv6_setsockopt(). This triggers the WARN_ON in ip_recv_error().
Thread interleaving:
CPU0 (do_ipv6_setsockopt)                CPU1 (inet_recvmsg)
=====                                    =====
struct proto *prot = &udp_prot;
...
sk->sk_prot = prot;
sk->sk_socket->ops = &inet_dgram_ops;
                                         err = sk->sk_prot->recvmsg(sk, msg, size,
                                             flags & MSG_DONTWAIT,
                                             flags & ~MSG_DONTWAIT, &addr_len);
                                         (in udp_recvmsg)
                                         if (flags & MSG_ERRQUEUE)
                                             return ip_recv_error(sk, msg, len, addr_len);
                                         (in ip_recv_error)
                                         WARN_ON_ONCE(sk->sk_family == AF_INET6);
sk->sk_family = PF_INET;
Call Sequence:
CPU0
=====
udpv6_setsockopt
ipv6_setsockopt
do_ipv6_setsockopt
CPU1
=====
sock_recvmsg
sock_recvmsg_nosec
inet_recvmsg
udp_recvmsg
==================================================================
WARNING: CPU: 1 PID: 32600 at /home/daeryong/workspace/new-race-fuzzer/kernels_repo/kernel_v4.17-rc1/net/ipv4/ip_sockglue.c:508 ip_recv_error+0x6f2/0x720 net/ipv4/ip_sockglue.c:508
Kernel panic - not syncing: panic_on_warn set ...
CPU: 1 PID: 32600 Comm: syz-executor0 Not tainted 4.17.0-rc1 #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.2-0-g33fbe13 by qemu-project.org 04/01/2014
Call Trace:
__dump_stack lib/dump_stack.c:77 [inline]
dump_stack+0x166/0x21c lib/dump_stack.c:113
panic+0x1a0/0x3a7 kernel/panic.c:184
__warn+0x191/0x1a0 kernel/panic.c:536
report_bug+0x132/0x1b0 lib/bug.c:186
fixup_bug.part.11+0x28/0x50 arch/x86/kernel/traps.c:178
fixup_bug arch/x86/kernel/traps.c:247 [inline]
do_error_trap+0x28b/0x2d0 arch/x86/kernel/traps.c:296
do_invalid_op+0x1b/0x20 arch/x86/kernel/traps.c:315
invalid_op+0x14/0x20 arch/x86/entry/entry_64.S:992
RIP: 0010:ip_recv_error+0x6f2/0x720 net/ipv4/ip_sockglue.c:508
RSP: 0018:ffff8801dadff630 EFLAGS: 00010212
RAX: 0000000000040000 RBX: 0000000000002002 RCX: ffffffff8327de12
RDX: 000000000000008a RSI: ffffc90001a0c000 RDI: ffff8801be615010
RBP: ffff8801dadff720 R08: 0000000000002002 R09: ffff8801dadff918
R10: ffff8801dadff738 R11: ffff8801dadffaff R12: ffff8801be615000
R13: ffff8801dadffd50 R14: 1ffff1003b5bfece R15: ffff8801dadffb90
udp_recvmsg+0x834/0xa10 net/ipv4/udp.c:1571
inet_recvmsg+0x121/0x420 net/ipv4/af_inet.c:830
sock_recvmsg_nosec net/socket.c:802 [inline]
sock_recvmsg+0x7f/0xa0 net/socket.c:809
___sys_recvmsg+0x1f0/0x430 net/socket.c:2279
__sys_recvmsg+0xfc/0x1c0 net/socket.c:2328
__do_sys_recvmsg net/socket.c:2338 [inline]
__se_sys_recvmsg net/socket.c:2335 [inline]
__x64_sys_recvmsg+0x48/0x50 net/socket.c:2335
do_syscall_64+0x15f/0x4a0 arch/x86/entry/common.c:287
entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x4563f9
RSP: 002b:00007f24f6927b28 EFLAGS: 00000246 ORIG_RAX: 000000000000002f
RAX: ffffffffffffffda RBX: 000000000072bfa0 RCX: 00000000004563f9
RDX: 0000000000002002 RSI: 0000000020000240 RDI: 0000000000000016
RBP: 00000000000004e4 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007f24f69286d4
R13: 00000000ffffffff R14: 00000000006fc600 R15: 0000000000000000
Dumping ftrace buffer:
(ftrace buffer empty)
Kernel Offset: disabled
Rebooting in 86400 seconds..
==================================================================
= About RaceFuzzer
RaceFuzzer is a customized version of Syzkaller, specifically tailored
to find race condition bugs in the Linux kernel. While we leverage
many different techniques, the notable feature of RaceFuzzer is its
use of a custom hypervisor (QEMU/KVM) to interleave the scheduling.
In particular, we modified the hypervisor to intentionally stall
per-core execution, similar to supporting per-core breakpoint
functionality. This allows RaceFuzzer to force the kernel to
deterministically trigger race conditions (which may rarely happen
in practice due to randomness in scheduling).
RaceFuzzer's C repro always pinpoints two racy syscalls. Since the C
repro's scheduling synchronization must be performed in user space,
its reproducibility is limited (reproduction may take from 1 second
to 10 minutes, or even more, depending on the bug). This is because,
while RaceFuzzer precisely interleaves the scheduling at the kernel's
instruction level when finding this bug, the C repro cannot fully
utilize such a feature. Please disregard all code related to
"should_hypercall" in the C repro, as it exists only for our debugging
purposes using our own hypervisor.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-05-14 17:30 Jessica
0 siblings, 0 replies; 3471+ messages in thread
From: Jessica @ 2018-05-14 17:30 UTC (permalink / raw)
Hello greetings, could you please urgently write back to me.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-05-14 6:33 системы администратор
0 siblings, 0 replies; 3471+ messages in thread
From: системы администратор @ 2018-05-14 6:33 UTC (permalink / raw)
Webmail user,
Please note that 95% of your emails received since the recent update of the webmail server in our database have been put on hold. Receive and send your messages regularly. Our webmail technical team will update your account within 3 working days. If you do not do this, your account will be temporarily suspended by our services. To re-confirm your mailbox, send the following information:
Usual:
Username:
Password:
Confirm Password:
Warning!! If this does not allow accounts to be updated within
two days of receiving this email, the webmail account owners
will permanently lose their accounts.
Thank you for your cooperation!
Copyright © 2017-2018 WebMail Technical Support Service, Inc.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-05-05 22:07 Shane Missler
0 siblings, 0 replies; 3471+ messages in thread
From: Shane Missler @ 2018-05-05 22:07 UTC (permalink / raw)
Hello, you have a donation of EUR 5,800,000.00. I am Shane Missler, the winner of the Powerball jackpot worth $451 million. I am donating part of my lottery winnings to five lucky people and charities in memory of my late mother, who died of cancer. Contact me for more information at: shanemissler84@gmail.com
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-05-04 15:21 Mark Henry
0 siblings, 0 replies; 3471+ messages in thread
From: Mark Henry @ 2018-05-04 15:21 UTC (permalink / raw)
Hello
My name is Gen. Henry Mark. I am a US military officer and would like
to get acquainted with you. I read your profile and I really wish to
indicate my interest. Please, I'll be glad if you get back to me so
that I can contact you and tell you more about myself. I wish to hear
from you soon.
Best regards,
Gen Henry Mark
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2018-04-20 8:02 ` Christoph Hellwig
0 siblings, 0 replies; 3471+ messages in thread
From: Christoph Hellwig @ 2018-04-20 8:02 UTC (permalink / raw)
Cc: linux-arch, linux-xtensa, Michal Simek, Vincent Chen,
linux-c6x-dev, linux-parisc, linux-sh, linux-hexagon,
linux-kernel, linux-m68k, openrisc, Greentime Hu, linux-alpha,
sparclinux, nios2-dev, linux-snps-arc, linux-arm-kernel
To: iommu@lists.linux-foundation.org
Cc: linux-arch@vger.kernel.org
Cc: Michal Simek <monstr@monstr.eu>
Cc: Greentime Hu <green.hu@gmail.com>
Cc: Vincent Chen <deanbo422@gmail.com>
Cc: linux-alpha@vger.kernel.org
Cc: linux-snps-arc@lists.infradead.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-c6x-dev@linux-c6x.org
Cc: linux-hexagon@vger.kernel.org
Cc: linux-m68k@lists.linux-m68k.org
Cc: nios2-dev@lists.rocketboards.org
Cc: openrisc@lists.librecores.org
Cc: linux-parisc@vger.kernel.org
Cc: linux-sh@vger.kernel.org
Cc: sparclinux@vger.kernel.org
Cc: linux-xtensa@linux-xtensa.org
Cc: linux-kernel@vger.kernel.org
Subject: [RFC] common non-cache coherent direct dma mapping ops
Hi all,
this series continues consolidating the dma-mapping code, with a focus
on architectures that do not (always) provide cache coherence for DMA.
Three architectures (arm, mips and powerpc) are still left to be
converted later due to complexity of their dma ops selection.
The dma-noncoherent ops call the dma-direct ops for the actual
translation of streaming mappings and allow the architecture to provide
any cache flushing required for cpu to device and/or device to cpu
ownership transfers. The dma coherent allocator is for now still left
entirely to architecture-supplied implementations due to the amount of
variation. Hopefully we can do some consolidation for them later on
as well.
A lot of architectures are currently doing very questionable things
in their dma mapping routines, which are documented in the changelogs
for each patch. Please review them very carefully and correct me on
incorrect assumptions.
Because this series sits on top of two previously submitted series,
a git tree might be useful to actually test it. It is provided here:
git://git.infradead.org/users/hch/misc.git generic-dma-noncoherent
Gitweb:
http://git.infradead.org/users/hch/misc.git/shortlog/refs/heads/generic-dma-noncoherent
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-04-16 1:22 Andrew Worsley
0 siblings, 0 replies; 3471+ messages in thread
From: Andrew Worsley @ 2018-04-16 1:22 UTC (permalink / raw)
To: harinik, APANDEY, asarkar, soren.brinkmann, linux-arm-kernel, linux-i2c
This patch clears the remaining i2c buffer overrun problems that I see on my
hardware. When run at 200kHz over 2 days and 17 hours, there were *NO* faults seen
despite continuously accessing all the i2c devices. I believe the remaining issues
are related to the TPM not behaving properly at clock speeds of 285kHz or higher;
the other i2c hardware is fine up to the maximum of 400kHz. At these higher clock
speeds the TPM appears to fall behind: I see SDA held low after the TPM read, and
the driver reports bus arbitration lost errors. Eventually the TPM stops responding
entirely and SDA stays low. Accessing the other i2c hardware then generates more
i2c clock pulses, which lets SDA go high again, after which the other i2c devices
work without problems. This further confirms our thinking that the TPM is the
source of the remaining i2c problems.
With the additional i2c fixes in the attached patch, the Xilinx i2c driver
works without problems on our hardware. I recommend you consider adding these
changes, which apply on top of the previous fixes that I sent.
Thanks
Andrew Worsley
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-04-06 1:18 venkatvenkatsubra
0 siblings, 0 replies; 3471+ messages in thread
From: venkatvenkatsubra @ 2018-04-06 1:18 UTC (permalink / raw)
To: netdev
Hi Netdev https://goo.gl/5bDZtk
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-04-04 13:43 системы администратор
0 siblings, 0 replies; 3471+ messages in thread
From: системы администратор @ 2018-04-04 13:43 UTC (permalink / raw)
Webmail user,
Please note that 95% of your emails received since the recent update of the webmail server in our database have been put on hold. Receive and send your messages regularly. Our webmail technical team will update your account within 3 working days. If you do not do this, your account will be temporarily suspended by our services. To re-confirm your mailbox, send the following
information:
Usual:
Username:
Password:
Confirm Password:
Warning!! If this does not allow accounts to be updated within
two days of receiving this email, the webmail account owners
will permanently lose their accounts.
Thank you for your cooperation!
Copyright © 2017-2018 WebMail Technical Support Service, Inc.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-03-23 3:05 Mail Delivery Subsystem
0 siblings, 0 replies; 3471+ messages in thread
From: Mail Delivery Subsystem @ 2018-03-23 3:05 UTC (permalink / raw)
To: linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw
This Message was undeliverable due to the following reason:
Your message was not delivered because the destination computer was
not reachable within the allowed queue period. The amount of time
a message is queued before it is returned depends on local
configuration parameters.
Most likely there is a network problem that prevented delivery, but
it is also possible that the computer is turned off, or does not
have a mail system running right now.
Your message was not delivered within 3 days:
Host 160.236.182.197 is not responding.
The following recipients did not receive this message:
<linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org>
Please reply to postmaster-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org
if you feel this message to be in error.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-03-07 7:48 Solen win
0 siblings, 0 replies; 3471+ messages in thread
From: Solen win @ 2018-03-07 7:48 UTC (permalink / raw)
To: virtualization
[-- Attachment #1.1: Type: text/plain, Size: 30 bytes --]
Hi sir
solenwin2.zendesk.com
[-- Attachment #1.2: Type: text/html, Size: 145 bytes --]
[-- Attachment #2: Type: text/plain, Size: 183 bytes --]
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2018-03-05 17:06 Meghana Madhyastha
0 siblings, 0 replies; 3471+ messages in thread
From: Meghana Madhyastha @ 2018-03-05 17:06 UTC (permalink / raw)
To: Noralf Trønnes, Daniel Vetter, dri-devel
linux-spi@vger.kernel.org,Noralf Trønnes <noralf@tronnes.org>,Sean Paul <seanpaul@chromium.org>,kernel@martin.sperl.org
Cc:
Bcc:
Subject: Re: [PATCH v2 0/2] Chunk splitting of spi transfers
Reply-To:
In-Reply-To: <f6dbf3ca-4c1b-90cc-c4af-8889f7407180@tronnes.org>
On Sun, Mar 04, 2018 at 06:38:42PM +0100, Noralf Trønnes wrote:
>
> Den 02.03.2018 12.11, skrev Meghana Madhyastha:
> >On Sun, Feb 25, 2018 at 02:19:10PM +0100, Lukas Wunner wrote:
> >>[cc += linux-rpi-kernel@lists.infradead.org]
> >>
> >>On Sat, Feb 24, 2018 at 06:15:59PM +0000, Meghana Madhyastha wrote:
> >>>I've added bcm2835_spi_transfer_one_message in spi-bcm2835. This calls
> >>>spi_split_transfers_maxsize to split large chunks for spi dma transfers.
> >>>I then removed chunk splitting in the tinydrm spi helper (as now the core
> >>>is handling the chunk splitting). However, although the SPI HW should be
> >>>able to accomodate up to 65535 bytes for dma transfers, the splitting of
> >>>chunks to 65535 bytes results in a dma transfer time out error. However,
> >>>when the chunks are split to < 64 bytes it seems to work fine.
> >>Hm, that is really odd, how did you test this exactly, what did you
> >>use as SPI slave? It contradicts our own experience, we're using
> >>Micrel KSZ8851 Ethernet chips as SPI slave on spi0 of a BCM2837
> >>and can send/receive messages via DMA to the tune of several hundred
> >>bytes without any issues. In fact, for messages < 96 bytes, DMA is
> >>not used at all, so you've probably been using interrupt mode,
> >>see the BCM2835_SPI_DMA_MIN_LENGTH macro in spi-bcm2835.c.
> >Hi Lukas,
> >
> >I think you are right. I checked it and its not using the DMA mode which
> >is why its working with 64 bytes.
> >Noralf, that leaves us back to the
> >initial time out problem. I've tried doing the message splitting in
> >spi_sync as well as spi_pump_messages. Martin had explained that DMA
> >will wait for
> >the SPI HW to set the send_more_data line, but the SPI-HW itself will
> >stop triggering it when SPI_LEN is 0 causing DMA to wait forever. I
> >thought if we split it before itself, the SPI_LEN will not go to zero
> >thus preventing this problem, however it didn't work and started
> >hanging. So I'm a little uncertain as to how to proceed and debug what
> >exactly has caused the time out due to the asynchronous methods.
>
> I did a quick test and at least this is working:
>
> int tinydrm_spi_transfer(struct spi_device *spi, u32 speed_hz,
> struct spi_transfer *header, u8 bpw, const void *buf,
> size_t len)
> {
> struct spi_transfer tr = {
> .bits_per_word = bpw,
> .speed_hz = speed_hz,
> .tx_buf = buf,
> .len = len,
> };
> struct spi_message m;
> size_t maxsize;
> int ret;
>
> maxsize = tinydrm_spi_max_transfer_size(spi, 0);
>
> if (drm_debug & DRM_UT_DRIVER)
> pr_debug("[drm:%s] bpw=%u, maxsize=%zu, transfers:\n",
> __func__, bpw, maxsize);
>
> spi_message_init(&m);
> m.spi = spi;
> if (header)
> spi_message_add_tail(header, &m);
> spi_message_add_tail(&tr, &m);
>
> ret = spi_split_transfers_maxsize(spi->controller, &m, maxsize,
> GFP_KERNEL);
> if (ret)
> return ret;
>
> tinydrm_dbg_spi_message(spi, &m);
>
> return spi_sync(spi, &m);
> }
> EXPORT_SYMBOL(tinydrm_spi_transfer);
>
>
> Log:
> [ 39.015644] [drm:mipi_dbi_fb_dirty [mipi_dbi]] Flushing [FB:36] x1=0,
> x2=320, y1=0, y2=240
>
> [ 39.018079] [drm:mipi_dbi_typec3_command [mipi_dbi]] cmd=2a, par=00 00 01
> 3f
> [ 39.018129] [drm:tinydrm_spi_transfer] bpw=8, maxsize=65532, transfers:
> [ 39.018152] tr(0): speed=10MHz, bpw=8, len=1, tx_buf=[2a]
> [ 39.018231] [drm:tinydrm_spi_transfer] bpw=8, maxsize=65532, transfers:
> [ 39.018248] tr(0): speed=10MHz, bpw=8, len=4, tx_buf=[00 00 01 3f]
>
> [ 39.018330] [drm:mipi_dbi_typec3_command [mipi_dbi]] cmd=2b, par=00 00 00
> ef
> [ 39.018347] [drm:tinydrm_spi_transfer] bpw=8, maxsize=65532, transfers:
> [ 39.018362] tr(0): speed=10MHz, bpw=8, len=1, tx_buf=[2b]
> [ 39.018396] [drm:tinydrm_spi_transfer] bpw=8, maxsize=65532, transfers:
> [ 39.018428] tr(0): speed=10MHz, bpw=8, len=4, tx_buf=[00 00 00 ef]
>
> [ 39.018487] [drm:mipi_dbi_typec3_command [mipi_dbi]] cmd=2c, len=153600
> [ 39.018502] [drm:tinydrm_spi_transfer] bpw=8, maxsize=65532, transfers:
> [ 39.018517] tr(0): speed=10MHz, bpw=8, len=1, tx_buf=[2c]
> [ 39.018565] [drm:tinydrm_spi_transfer] bpw=8, maxsize=65532, transfers:
> [ 39.018594] tr(0): speed=48MHz, bpw=8, len=65532, tx_buf=[c6 18 c6 18
> c6 18 c6 18 c6 18 c6 18 c6 18 c6 18 ...]
> [ 39.018608] tr(1): speed=48MHz, bpw=8, len=65532, tx_buf=[06 18 06 18
> 06 18 06 18 06 18 06 18 06 18 06 18 ...]
> [ 39.018621] tr(2): speed=48MHz, bpw=8, len=22536, tx_buf=[10 82 10 82
> 10 82 10 82 10 82 10 82 18 e3 18 e3 ...]
Hi Noralf,
Yes, this works, but doing the splitting inside the SPI subsystem core
still doesn't. So spi_split_transfers_maxsize itself is working.
Should I just send in a patch with splitting done here in tinydrm? (I
had thought we wanted to avoid splitting in the tinydrm helper).
Thanks and regards,
Meghana
>
> Noralf.
>
>
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
^ permalink raw reply [flat|nested] 3471+ messages in thread
[parent not found: <[PATCH xf86-video-amdgpu 0/3] Add non-desktop and leasing support>]
* (unknown),
@ 2018-02-23 15:54 Adam Richter
0 siblings, 0 replies; 3471+ messages in thread
From: Adam Richter @ 2018-02-23 15:54 UTC (permalink / raw)
To: zh1001, FRoss Perry, alexander deucher, adam richter2004,
barrykendall, containers, ann zhang888, sca38018, westglen,
scott
http://add.chattanooga360.com
Adam Richter
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-02-17 15:29 Ahmed Soliman
0 siblings, 0 replies; 3471+ messages in thread
From: Ahmed Soliman @ 2018-02-17 15:29 UTC (permalink / raw)
To: kvm
subscribe kvm
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-02-17 8:41 Solen win
0 siblings, 0 replies; 3471+ messages in thread
From: Solen win @ 2018-02-17 8:41 UTC (permalink / raw)
To: Virtualization
[-- Attachment #1.1: Type: text/plain, Size: 8 bytes --]
Confirm
[-- Attachment #1.2: Type: text/html, Size: 30 bytes --]
[-- Attachment #2: Type: text/plain, Size: 183 bytes --]
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-02-17 1:45 Ryan Ellis
0 siblings, 0 replies; 3471+ messages in thread
From: Ryan Ellis @ 2018-02-17 1:45 UTC (permalink / raw)
Hi, I am Ryan. I consider myself an easy-going, honest and loving person. I am currently looking for a relationship in which I feel loved.
Please tell me more about yourself, if you do not mind.
Regards,
Ryan Ellis.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-02-13 22:59 Mitesh Shah
0 siblings, 0 replies; 3471+ messages in thread
From: Mitesh Shah @ 2018-02-13 22:59 UTC (permalink / raw)
To: Linux Sparse
hi Linux https://goo.gl/gg9bWT Mitesh Shah
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-02-13 22:57 Alfred Cheuk Chow
0 siblings, 0 replies; 3471+ messages in thread
From: Alfred Cheuk Chow @ 2018-02-13 22:57 UTC (permalink / raw)
Good Day,
I am Mr. Alfred Cheuk Yu Chow, the Director for Credit & Marketing Chong
Hing Bank, Hong Kong, Chong Hing Bank Center, 24 Des Voeux Road Central,
Hong Kong. I have a business proposal of $ 38,980,369.00.
All confirmable documents to back up the claims will be made available
to you prior to your acceptance and as soon as I receive your return
mail.
Best Regards,
Alfred Chow.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-02-13 12:43 mavis lilian wanczyk
0 siblings, 0 replies; 3471+ messages in thread
From: mavis lilian wanczyk @ 2018-02-13 12:43 UTC (permalink / raw)
This is the second time I am sending you this mail.
I, Mavis Wanczyk, am donating $5 million from part of my Powerball
jackpot lottery of $758 million; respond with your details
for claims.
I await your earliest response and God Bless you
Good luck.
Mavis Wanczyk
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-02-13 11:58 Solen win
0 siblings, 0 replies; 3471+ messages in thread
From: Solen win @ 2018-02-13 11:58 UTC (permalink / raw)
To: virtualization
[-- Attachment #1.1: Type: text/plain, Size: 22 bytes --]
solenwin2.zendesk.com
[-- Attachment #1.2: Type: text/html, Size: 139 bytes --]
[-- Attachment #2: Type: text/plain, Size: 183 bytes --]
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-02-12 1:39 Alfred Cheuk Chow
0 siblings, 0 replies; 3471+ messages in thread
From: Alfred Cheuk Chow @ 2018-02-12 1:39 UTC (permalink / raw)
Good Day,
I am Mr. Alfred Cheuk Yu Chow, the Director for Credit & Marketing Chong
Hing Bank, Hong Kong, Chong Hing Bank Center, 24 Des Voeux Road Central,
Hong Kong. I have a business proposal of $ 38,980,369.00.
All confirmable documents to back up the claims will be made available
to you prior to your acceptance and as soon as I receive your return
mail.
Best Regards,
Alfred Chow.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-02-11 16:07 glolariu
0 siblings, 0 replies; 3471+ messages in thread
From: glolariu @ 2018-02-11 16:07 UTC (permalink / raw)
To: linux man
[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain; charset=utf-8, Size: 183 bytes --]
Hello Linux https://goo.gl/54LwKT
Glolariu
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-02-11 7:19 Alfred Cheuk Chow
0 siblings, 0 replies; 3471+ messages in thread
From: Alfred Cheuk Chow @ 2018-02-11 7:19 UTC (permalink / raw)
Good Day,
I am Mr. Alfred Cheuk Yu Chow, the Director for Credit & Marketing Chong
Hing Bank, Hong Kong, Chong Hing Bank Center, 24 Des Voeux Road Central,
Hong Kong. I have a business proposal of $ 38,980,369.00.
All confirmable documents to back up the claims will be made available
to you prior to your acceptance and as soon as I receive your return
mail.
Best Regards,
Alfred Chow.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-02-08 14:40 Automatic Email Delivery Software
0 siblings, 0 replies; 3471+ messages in thread
From: Automatic Email Delivery Software @ 2018-02-08 14:40 UTC (permalink / raw)
To: linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw
The original message was received at Thu, 8 Feb 2018 22:40:15 +0800
from lists.01.org [63.188.95.85]
----- The following addresses had permanent fatal errors -----
<linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org>
----- Transcript of session follows -----
while talking to lists.01.org.:
>>> MAIL From:"Automatic Email Delivery Software" <postmaster-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org>
<<< 501 "Automatic Email Delivery Software" <postmaster-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org>... Refused
^ permalink raw reply [flat|nested] 3471+ messages in thread
[parent not found: <CALfDnQ8aCTywvhqOBkFv3qQOoME9wvTrKbQq8i8PCPOx2iBp=A@mail.gmail.com>]
* (unknown),
@ 2018-02-02 12:15 Robert Vasek
0 siblings, 0 replies; 3471+ messages in thread
From: Robert Vasek @ 2018-02-02 12:15 UTC (permalink / raw)
To: ceph-devel
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-01-29 17:17 Jones
0 siblings, 0 replies; 3471+ messages in thread
From: Jones @ 2018-01-29 17:17 UTC (permalink / raw)
This is in regards to an inheritance on your surname, reply back using your email address, stating your full name for more details. Reply to email for info. Email me here ( gertvm@dr.com )
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-01-29 16:55 Jones
0 siblings, 0 replies; 3471+ messages in thread
From: Jones @ 2018-01-29 16:55 UTC (permalink / raw)
This is in regards to an inheritance on your surname, reply back using your email address, stating your full name for more details. Reply to email for info. Email me here ( gertvm@dr.com )
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-01-29 16:30 Jones
0 siblings, 0 replies; 3471+ messages in thread
From: Jones @ 2018-01-29 16:30 UTC (permalink / raw)
This is in regards to an inheritance on your surname, reply back using your email address, stating your full name for more details. Reply to email for info. Email me here ( gertvm@dr.com )
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-01-29 14:17 Jones
0 siblings, 0 replies; 3471+ messages in thread
From: Jones @ 2018-01-29 14:17 UTC (permalink / raw)
This is in regards to an inheritance on your surname, reply back using your email address, stating your full name for more details. Reply to email for info. Email me here ( gertvm-w9fAFgjg1Hs@public.gmane.org )
--
To unsubscribe from this list: send the line "unsubscribe dwarves" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-01-28 17:06 whoisthis TG
0 siblings, 0 replies; 3471+ messages in thread
From: whoisthis TG @ 2018-01-28 17:06 UTC (permalink / raw)
To: containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA
He
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-01-28 17:01 whoisthis TG
0 siblings, 0 replies; 3471+ messages in thread
From: whoisthis TG @ 2018-01-28 17:01 UTC (permalink / raw)
To: containers-cunTk1MwBs9QetFLy7KEm3xJsTq8ys+cHZ5vskTnxNA
Do it
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-01-27 13:48 Jones
0 siblings, 0 replies; 3471+ messages in thread
From: Jones @ 2018-01-27 13:48 UTC (permalink / raw)
This is in regards to an inheritance on your surname, reply back using your email address, stating your full name for more details. Reply to email for info. Email me here ( gertvm@dr.com )
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-01-27 13:48 Jones
0 siblings, 0 replies; 3471+ messages in thread
From: Jones @ 2018-01-27 13:48 UTC (permalink / raw)
This is in regards to an inheritance on your surname, reply back using your email address, stating your full name for more details. Reply to email for info. Email me here ( gertvm@dr.com )
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-01-27 13:25 Jones
0 siblings, 0 replies; 3471+ messages in thread
From: Jones @ 2018-01-27 13:25 UTC (permalink / raw)
This is in regards to an inheritance on your surname, reply back using your email address, stating your full name for more details. Reply to email for info. Email me here ( gertvm-w9fAFgjg1Hs@public.gmane.org )
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-01-25 7:23 tirumalareddy marri
0 siblings, 0 replies; 3471+ messages in thread
From: tirumalareddy marri @ 2018-01-25 7:23 UTC (permalink / raw)
To: linux ext4
Greetings
https://goo.gl/zeTgBc
tirumalareddy marri
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-01-23 13:54 Mr Sheng Li Hung
0 siblings, 0 replies; 3471+ messages in thread
From: Mr Sheng Li Hung @ 2018-01-23 13:54 UTC (permalink / raw)
--
I am Mr.Sheng Li Hung I have a very profitable business proposition for you
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-01-23 13:36 Mr Sheng Li Hung
0 siblings, 0 replies; 3471+ messages in thread
From: Mr Sheng Li Hung @ 2018-01-23 13:36 UTC (permalink / raw)
--
I am Mr.Sheng Li Hung I have a very profitable business proposition for you
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2018-01-16 2:23 Jack.Ma
0 siblings, 0 replies; 3471+ messages in thread
From: Jack.Ma @ 2018-01-16 2:23 UTC (permalink / raw)
To: netfilter-devel
subscribe netfilter-devel
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2018-01-16 2:16 Jack.Ma
0 siblings, 0 replies; 3471+ messages in thread
From: Jack.Ma @ 2018-01-16 2:16 UTC (permalink / raw)
To: netfilter-devel
subscribe netdev
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-01-11 3:22 Active lender@
0 siblings, 0 replies; 3471+ messages in thread
From: Active lender@ @ 2018-01-11 3:22 UTC (permalink / raw)
Kind regards,
I am Mrs. Adrian Irene from the Active Lenders loan company, known as Active Lending Loan®. We offer all kinds of loans at a 1% interest rate. If you need a loan, please contact us with the following information.
Please fill in the form below and return it as quickly as possible.
Amount needed: .........
Loan term: ....
Reason for the loan: .....
We look forward to helping you. Contact us by email: contact@activeslendinggroup.com
Yours,
Mrs. Adrian Irene
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-01-10 10:27 TimGuo
0 siblings, 0 replies; 3471+ messages in thread
From: TimGuo @ 2018-01-10 10:27 UTC (permalink / raw)
To: tglx, mingo, hpa, mingo, x86, linux-pm, linux-kernel
Cc: brucechang, cooperyan, qiyuanwang, benjaminpan, TimGuo
From 812522018b0f1d9501fbdda4018be9a6fc9c21bf Mon Sep 17 00:00:00 2001
From: TimGuo <timguo@zhaoxin.com>
Date: Wed, 10 Jan 2018 18:16:33 +0800
Subject: [PATCH] x86/centaur: Mark TSC invariant
Centaur CPUs have a constant-frequency TSC, and that TSC does not stop
in C-states. But because the corresponding feature flags are not set for
these CPUs, the TSC is treated as having a non-constant frequency and is
assumed to stop in C-states, which makes it an unreliable and unusable
clock source.
Setting those flags tells the kernel that the TSC is usable, so it will
select it over the HPET. The effect is that reading time stamps (from
kernel or user space) is faster and more efficient.
Signed-off-by: TimGuo <timguo@zhaoxin.com>
Acked-by: tglx <tglx@linutronix.de>
---
arch/x86/kernel/cpu/centaur.c | 4 ++++
drivers/acpi/processor_idle.c | 1 +
2 files changed, 5 insertions(+)
diff --git a/arch/x86/kernel/cpu/centaur.c b/arch/x86/kernel/cpu/centaur.c
index 68bc6d9..c578cd2 100644
--- a/arch/x86/kernel/cpu/centaur.c
+++ b/arch/x86/kernel/cpu/centaur.c
@@ -106,6 +106,10 @@ static void early_init_centaur(struct cpuinfo_x86 *c)
#ifdef CONFIG_X86_64
set_cpu_cap(c, X86_FEATURE_SYSENTER32);
#endif
+ if (c->x86_power & (1 << 8)) {
+ set_cpu_cap(c, X86_FEATURE_CONSTANT_TSC);
+ set_cpu_cap(c, X86_FEATURE_NONSTOP_TSC);
+ }
}
static void init_centaur(struct cpuinfo_x86 *c)
diff --git a/drivers/acpi/processor_idle.c b/drivers/acpi/processor_idle.c
index d50a7b6..5f0071c 100644
--- a/drivers/acpi/processor_idle.c
+++ b/drivers/acpi/processor_idle.c
@@ -207,6 +207,7 @@ static void tsc_check_state(int state)
switch (boot_cpu_data.x86_vendor) {
case X86_VENDOR_AMD:
case X86_VENDOR_INTEL:
+ case X86_VENDOR_CENTAUR:
/*
* AMD Fam10h TSC will tick in all
* C/P/S0/S1 states when this bit is set.
--
1.9.1
CONFIDENTIAL NOTE:
This email contains confidential or legally privileged information and is for the sole use of its intended recipient. Any unauthorized review, use, copying or forwarding of this email or the content of this email is strictly prohibited.
^ permalink raw reply related [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-01-09 21:23 Emile Kenold
0 siblings, 0 replies; 3471+ messages in thread
From: Emile Kenold @ 2018-01-09 21:23 UTC (permalink / raw)
--
My name is Mrs. Emile Kenold from London. I was diagnosed of lung
cancer which had damaged my liver and my health is no longer
responding to medical treatments.
I have made up my mind to give a charity donation of $11 Million to
you and i pray you will be sincere to use this money for charity work
according to my will, to help orphans, widows and also build schools
for less privilege ones, please i need your sincere and urgent
response to entrust this money to you due to my current health
condition.
Regards
Emile.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2018-01-02 22:11 Mr Sheng Li Hung
0 siblings, 0 replies; 3471+ messages in thread
From: Mr Sheng Li Hung @ 2018-01-02 22:11 UTC (permalink / raw)
--
I am Mr.Sheng Li Hung, from china I got your information while search for
a reliable person, I have a very profitable business proposition for you
and i can assure you that you will not regret been part of this mutual
beneficial transaction after completion. Kindly get back to me for more
details on this email id: shengli19@hotmail.com
Thanks
Mr Sheng Li Hung
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-12-30 4:37 Adam Richter
0 siblings, 0 replies; 3471+ messages in thread
From: Adam Richter @ 2017-12-30 4:37 UTC (permalink / raw)
To: adam richter2004, barrykendall, containers, ann zhang888, sca38018
http://durable.daphnevy.com
Adam Richter
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-12-30 2:10 Arpit Patel
0 siblings, 0 replies; 3471+ messages in thread
From: Arpit Patel @ 2017-12-30 2:10 UTC (permalink / raw)
To: linux scsi
good afternoon Linux
https://goo.gl/P81Ven
Arpit
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-12-24 9:07 Solen win
0 siblings, 0 replies; 3471+ messages in thread
From: Solen win @ 2017-12-24 9:07 UTC (permalink / raw)
To: virtualization
[-- Attachment #1.1: Type: text/plain, Size: 23 bytes --]
Solenwin@freshdesk.com
[-- Attachment #1.2: Type: text/html, Size: 141 bytes --]
[-- Attachment #2: Type: text/plain, Size: 183 bytes --]
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-12-24 2:58 柯弼舜
0 siblings, 0 replies; 3471+ messages in thread
From: 柯弼舜 @ 2017-12-24 2:58 UTC (permalink / raw)
To: ceph-devel
subscribe ceph-devel
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-12-23 15:32 柯弼舜
0 siblings, 0 replies; 3471+ messages in thread
From: 柯弼舜 @ 2017-12-23 15:32 UTC (permalink / raw)
To: ceph-devel
subscribe ceph-devel
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-12-17 17:28 Solen win
0 siblings, 0 replies; 3471+ messages in thread
From: Solen win @ 2017-12-17 17:28 UTC (permalink / raw)
To: virtualization
[-- Attachment #1.1: Type: text/plain, Size: 23 bytes --]
Solenwin@freshdesk.com
[-- Attachment #1.2: Type: text/html, Size: 141 bytes --]
[-- Attachment #2: Type: text/plain, Size: 183 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-12-14 16:26 Solen win
0 siblings, 0 replies; 3471+ messages in thread
From: Solen win @ 2017-12-14 16:26 UTC (permalink / raw)
To: virtualization
[-- Attachment #1.1: Type: text/plain, Size: 23 bytes --]
Solenwin@freshdesk.com
[-- Attachment #1.2: Type: text/html, Size: 141 bytes --]
[-- Attachment #2: Type: text/plain, Size: 183 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-12-12 16:06 Solen win
0 siblings, 0 replies; 3471+ messages in thread
From: Solen win @ 2017-12-12 16:06 UTC (permalink / raw)
To: virtualization
[-- Attachment #1.1: Type: text/plain, Size: 23 bytes --]
Solenwin@freshdesk.com
[-- Attachment #1.2: Type: text/html, Size: 141 bytes --]
[-- Attachment #2: Type: text/plain, Size: 183 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-12-07 12:53 Sistemas administrador
0 siblings, 0 replies; 3471+ messages in thread
From: Sistemas administrador @ 2017-12-07 12:53 UTC (permalink / raw)
ATTENTION;
Your mailbox has exceeded the storage limit of 5 GB set by the administrator and is currently at 10.9 GB; you may not be able to send or receive new mail until you revalidate your mailbox. To revalidate your mailbox, send the following information below:
Name:
Username:
Password:
Confirm password:
E-mail:
Phone:
If you cannot revalidate your mailbox, it will be disabled!
Sorry for the inconvenience.
Verification code: es: 006524
Mail Technical Support © 2017
Thank you
Systems administrator
CONFIDENTIALITY CLAUSE: The content of this email and its attachments is confidential and must be used only by its intended recipient. SENESCYT assumes no responsibility for opinions or views contained in this e-mail.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-12-01 14:22 Rein Appeldoorn
0 siblings, 0 replies; 3471+ messages in thread
From: Rein Appeldoorn @ 2017-12-01 14:22 UTC (permalink / raw)
To: linux-can
unsubscribe linux-can
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-12-01 2:56 Post Office
0 siblings, 0 replies; 3471+ messages in thread
From: Post Office @ 2017-12-01 2:56 UTC (permalink / raw)
To: linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw
[-- Attachment #1: Type: text/plain, Size: 1403 bytes --]
Spam detection software, running on the system "blaine.gmane.org",
has identified this incoming email as possible spam. The original
message has been attached to this so you can view it or label
similar future email. If you have any questions, see
@@CONTACT_ADDRESS@@ for details.
Content preview: This Message was undeliverable due to the following reason:
Your message was not delivered because the destination computer was not reachable
within the allowed queue period. The amount of time a message is queued before
it is returned depends on local configuration parameters. [...]
Content analysis details: (5.7 points, 5.0 required)
pts rule name description
---- ---------------------- --------------------------------------------------
-0.0 RCVD_IN_DNSWL_NONE RBL: Sender listed at http://www.dnswl.org/, no
trust
[198.145.21.10 listed in list.dnswl.org]
-0.6 RP_MATCHES_RCVD Envelope sender domain matches handover relay domain
-1.9 BAYES_00 BODY: Bayes spam probability is 0 to 1%
[score: 0.0000]
1.4 PYZOR_CHECK Listed in Pyzor (http://pyzor.sf.net/)
2.2 AXB_XMAILER_MIMEOLE_OL_024C2 No description available.
2.6 MSOE_MID_WRONG_CASE No description available.
1.9 FORGED_MUA_OUTLOOK Forged mail pretending to be from MS Outlook
[-- Attachment #2: original message before SpamAssassin --]
[-- Type: message/rfc822, Size: 3458 bytes --]
From: "Post Office" <noreply-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org>
To: linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org
Subject:
Date: Fri, 1 Dec 2017 10:56:17 +0800
Message-ID: <20171201025154.D9EAF220F3C48-y27Ovi1pjclAfugRpC6u6w@public.gmane.org>
This Message was undeliverable due to the following reason:
Your message was not delivered because the destination computer was
not reachable within the allowed queue period. The amount of time
a message is queued before it is returned depends on local
configuration parameters.
Most likely there is a network problem that prevented delivery, but
it is also possible that the computer is turned off, or does not
have a mail system running right now.
Your message was not delivered within 4 days:
Host 44.159.81.28 is not responding.
The following recipients did not receive this message:
<linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org>
Please reply to postmaster-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org
if you feel this message to be in error.
_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org
https://lists.01.org/mailman/listinfo/linux-nvdimm
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-11-20 2:36 Robert Wang
0 siblings, 0 replies; 3471+ messages in thread
From: Robert Wang @ 2017-11-20 2:36 UTC (permalink / raw)
To: ceph-devel
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-11-19 20:07 Mitesh Shah
0 siblings, 0 replies; 3471+ messages in thread
From: Mitesh Shah @ 2017-11-19 20:07 UTC (permalink / raw)
To: Linux Sparse
Salutations Linux
http://bit.ly/2ATC9sN
Mitesh Shah
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-11-16 10:18 Michal Hocko
0 siblings, 0 replies; 3471+ messages in thread
From: Michal Hocko @ 2017-11-16 10:18 UTC (permalink / raw)
To: linux-api
Cc: Khalid Aziz, Michael Ellerman, Andrew Morton,
Russell King - ARM Linux, Andrea Arcangeli, linux-mm, LKML,
linux-arch, Abdul Haleem, Joel Stanley, Kees Cook, Michal Hocko
Hi,
this has started as a follow-up discussion [1][2] resulting in the
runtime failure caused by the hardening patch [3] which removes MAP_FIXED
from the elf loader, because MAP_FIXED is inherently dangerous: it
might silently clobber an existing underlying mapping (e.g. stack). The
reason for the failure is that some architectures enforce an alignment
for the given address hint even without MAP_FIXED (e.g. for shared or
file backed mappings).
One way around this would be excluding those archs which do alignment
tricks from the hardening [4]. The patch is really trivial but it has
been objected, rightfully so, that this screams for a more generic
solution. We basically want a non-destructive MAP_FIXED.
The first patch introduced MAP_FIXED_SAFE which enforces the given
address but unlike MAP_FIXED it fails with ENOMEM if the given range
conflicts with an existing one. The flag is introduced as a completely
new flag rather than a MAP_FIXED extension because of the backward
compatibility. We really want a never-clobber semantic even on older
kernels which do not recognize the flag. Unfortunately mmap sucks wrt.
flags evaluation because we do not EINVAL on unknown flags. On those
kernels we would simply use the traditional hint based semantic so the
caller can still get a different address (which sucks) but at least not
silently corrupt an existing mapping. I do not see a good way around
that, short of not exposing the new semantic to userspace at
all. It seems there are users who would like to have something like that
[5], though. Atomic address range probing in the multithreaded programs
sounds like an interesting thing to me as well, although I do not have
any specific usecase in mind.
The second patch simply replaces MAP_FIXED use in elf loader by
MAP_FIXED_SAFE. I believe other places which rely on MAP_FIXED should
follow. Actually, real MAP_FIXED usages should be documented properly
and they should be more of an exception.
Does anybody see any fundamental reasons why this is a wrong approach?
Diffstat says
arch/alpha/include/uapi/asm/mman.h | 2 ++
arch/metag/kernel/process.c | 6 +++++-
arch/mips/include/uapi/asm/mman.h | 2 ++
arch/parisc/include/uapi/asm/mman.h | 2 ++
arch/powerpc/include/uapi/asm/mman.h | 1 +
arch/sparc/include/uapi/asm/mman.h | 1 +
arch/tile/include/uapi/asm/mman.h | 1 +
arch/xtensa/include/uapi/asm/mman.h | 2 ++
fs/binfmt_elf.c | 12 ++++++++----
include/uapi/asm-generic/mman.h | 1 +
mm/mmap.c | 11 +++++++++++
11 files changed, 36 insertions(+), 5 deletions(-)
[1] http://lkml.kernel.org/r/20171107162217.382cd754@canb.auug.org.au
[2] http://lkml.kernel.org/r/1510048229.12079.7.camel@abdul.in.ibm.com
[3] http://lkml.kernel.org/r/20171023082608.6167-1-mhocko@kernel.org
[4] http://lkml.kernel.org/r/20171113094203.aofz2e7kueitk55y@dhcp22.suse.cz
[5] http://lkml.kernel.org/r/87efp1w7vy.fsf@concordia.ellerman.id.au
--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org. For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-11-15 14:44 Qing Chang
0 siblings, 0 replies; 3471+ messages in thread
From: Qing Chang @ 2017-11-15 14:44 UTC (permalink / raw)
To: linux fsdevel
hi Linux
http://bit.ly/2iXiosH
Qing
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2017-11-15 9:18 nanda_kishore_chinna
0 siblings, 0 replies; 3471+ messages in thread
From: nanda_kishore_chinna @ 2017-11-15 9:18 UTC (permalink / raw)
unsubscribe platform-driver-x86
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-11-13 3:13 Bounced mail
0 siblings, 0 replies; 3471+ messages in thread
From: Bounced mail @ 2017-11-13 3:13 UTC (permalink / raw)
To: linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw
The original message was received at Mon, 13 Nov 2017 11:13:14 +0800
from lists.01.org [217.132.172.246]
----- The following addresses had permanent fatal errors -----
<linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org>
----- Transcript of session follows -----
while talking to lists.01.org.:
>>> MAIL From:"Bounced mail" <MAILER-DAEMON-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org>
<<< 501 "Bounced mail" <MAILER-DAEMON-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org>... Refused
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-11-12 15:10 Mitesh Shah
0 siblings, 0 replies; 3471+ messages in thread
From: Mitesh Shah @ 2017-11-12 15:10 UTC (permalink / raw)
To: Linux Sparse
Hi Linux
http://bit.ly/2mgWPIO
;-)
Mitesh
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-11-12 15:09 Friedrich Mayrhofer
0 siblings, 0 replies; 3471+ messages in thread
From: Friedrich Mayrhofer @ 2017-11-12 15:09 UTC (permalink / raw)
This is the second time i am sending you this Email.
I, Friedrich Mayrhofer Donate $ 1,000,000.00 to You, Email Me
personally for more details.
Regards.
Friedrich Mayrhofer
----------------------------------------------------------------
This message was sent using IMP, the Internet Messaging Program.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-11-06 19:51 Qing Chang
0 siblings, 0 replies; 3471+ messages in thread
From: Qing Chang @ 2017-11-06 19:51 UTC (permalink / raw)
To: linux fsdevel
Hey Linux
http://bit.ly/2y5JOmP
Qing Chang
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-11-05 3:40 Solen win
0 siblings, 0 replies; 3471+ messages in thread
From: Solen win @ 2017-11-05 3:40 UTC (permalink / raw)
To: virtualization
[-- Attachment #1.1: Type: text/plain, Size: 9 bytes --]
--
null
[-- Attachment #1.2: Type: text/html, Size: 101 bytes --]
[-- Attachment #2: Type: text/plain, Size: 183 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-11-01 23:35 Roy Cockrum Foundation
0 siblings, 0 replies; 3471+ messages in thread
From: Roy Cockrum Foundation @ 2017-11-01 23:35 UTC (permalink / raw)
Hello, you are receiving a donation of EUR 4,800,000.00. I won the America Lotto in America, worth 259.9 million dollars, and I am giving part of it to five lucky people and charity homes in memory of my late wife, who died of cancer. Contact me for more information: roycockrum2009@gmail.com
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-29 9:46 Solen win
0 siblings, 0 replies; 3471+ messages in thread
From: Solen win @ 2017-10-29 9:46 UTC (permalink / raw)
To: virtualization
[-- Attachment #1.1: Type: text/plain, Size: 23 bytes --]
Solenwin@freshdesk.com
[-- Attachment #1.2: Type: text/html, Size: 141 bytes --]
[-- Attachment #2: Type: text/plain, Size: 183 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-25 12:10 EG
0 siblings, 0 replies; 3471+ messages in thread
From: EG @ 2017-10-25 12:10 UTC (permalink / raw)
To: Recipients
I am Ms.Ella Golan, I am the Executive Vice President Banking Division with
FIRST INTERNATIONAL BANK OF ISRAEL LTD (FIBI).
* (unknown),
@ 2017-10-09 7:37 Michael Lyle
0 siblings, 0 replies; 3471+ messages in thread
From: Michael Lyle @ 2017-10-09 7:37 UTC (permalink / raw)
To: linux-bcache, linux-block; +Cc: colyli
[PATCH v2 1/2] bcache: writeback rate shouldn't artifically clamp
[PATCH v2 2/2] bcache: rearrange writeback main thread ratelimit
This is a reroll of the previous "don't clamp" patch. It corrects
type issues where negative numbers were handled badly (mostly for
display in writeback_rate_debug).
It also adds a new, related patch: during scanning for dirty
blocks, don't reset the ratelimiting counter. This prevents
undershoots and overshoots of the target rate caused by scanning.
Mike
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-09 6:17 durrant
0 siblings, 0 replies; 3471+ messages in thread
From: durrant @ 2017-10-09 6:17 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: 593592994.zip --]
[-- Type: application/zip, Size: 7328 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-09 3:44 roeper
0 siblings, 0 replies; 3471+ messages in thread
From: roeper @ 2017-10-09 3:44 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: 417841081322055.zip --]
[-- Type: application/zip, Size: 7265 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-08 23:01 susan.christian
0 siblings, 0 replies; 3471+ messages in thread
From: susan.christian @ 2017-10-08 23:01 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: 7646074810541.zip --]
[-- Type: application/zip, Size: 7348 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-08 22:32 natasha.glauser
0 siblings, 0 replies; 3471+ messages in thread
From: natasha.glauser @ 2017-10-08 22:32 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 7210184386.zip --]
[-- Type: application/zip, Size: 7244 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-08 19:00 matthias.foerster
0 siblings, 0 replies; 3471+ messages in thread
From: matthias.foerster @ 2017-10-08 19:00 UTC (permalink / raw)
To: dash
[-- Attachment #1: 434071651431.zip --]
[-- Type: application/zip, Size: 7308 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-08 14:15 clasico082
0 siblings, 0 replies; 3471+ messages in thread
From: clasico082 @ 2017-10-08 14:15 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: 01777909.zip --]
[-- Type: application/zip, Size: 7221 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-08 11:08 nelcastellodicarta
0 siblings, 0 replies; 3471+ messages in thread
From: nelcastellodicarta @ 2017-10-08 11:08 UTC (permalink / raw)
To: linux-scsi
[-- Attachment #1: 354549.zip --]
[-- Type: application/zip, Size: 7203 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-08 9:52 marketing
0 siblings, 0 replies; 3471+ messages in thread
From: marketing @ 2017-10-08 9:52 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 0528473.zip --]
[-- Type: application/zip, Size: 7243 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-08 9:00 pekka.enne
0 siblings, 0 replies; 3471+ messages in thread
From: pekka.enne @ 2017-10-08 9:00 UTC (permalink / raw)
To: linux-m68k
[-- Attachment #1: 50948.zip --]
[-- Type: application/zip, Size: 7245 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-08 7:59 edo.hlaca
0 siblings, 0 replies; 3471+ messages in thread
From: edo.hlaca @ 2017-10-08 7:59 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: 7052236502429.zip --]
[-- Type: application/zip, Size: 7208 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-08 7:32 cl_luzcc
0 siblings, 0 replies; 3471+ messages in thread
From: cl_luzcc @ 2017-10-08 7:32 UTC (permalink / raw)
To: linux-next
[-- Attachment #1: 3534680.zip --]
[-- Type: application/zip, Size: 7324 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-08 1:26 redaccion
0 siblings, 0 replies; 3471+ messages in thread
From: redaccion @ 2017-10-08 1:26 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 9108707.zip --]
[-- Type: application/zip, Size: 7119 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-07 4:45 morice.diane
0 siblings, 0 replies; 3471+ messages in thread
From: morice.diane @ 2017-10-07 4:45 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 57361065.zip --]
[-- Type: application/zip, Size: 7308 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-07 3:40 agar2000
0 siblings, 0 replies; 3471+ messages in thread
From: agar2000 @ 2017-10-07 3:40 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 26521476.zip --]
[-- Type: application/zip, Size: 7189 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-07 0:31 carmen.croonquist
0 siblings, 0 replies; 3471+ messages in thread
From: carmen.croonquist @ 2017-10-07 0:31 UTC (permalink / raw)
To: linux-samsung-soc
[-- Attachment #1: 12905.zip --]
[-- Type: application/zip, Size: 7378 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-06 11:55 info
0 siblings, 0 replies; 3471+ messages in thread
From: info @ 2017-10-06 11:55 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: BUY-517182571linux-acpi.zip --]
[-- Type: application/zip, Size: 7281 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-06 8:31 smallgroups
0 siblings, 0 replies; 3471+ messages in thread
From: smallgroups @ 2017-10-06 8:31 UTC (permalink / raw)
To: linux-crypto
[-- Attachment #1: MICROSOFT-68816linux-crypto.zip --]
[-- Type: application/zip, Size: 7247 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-06 5:16 nelcastellodicarta
0 siblings, 0 replies; 3471+ messages in thread
From: nelcastellodicarta @ 2017-10-06 5:16 UTC (permalink / raw)
To: linux-scsi
[-- Attachment #1: OLGA-547702238973419linux-scsi.zip --]
[-- Type: application/zip, Size: 7273 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-06 2:19 sherrilyn
0 siblings, 0 replies; 3471+ messages in thread
From: sherrilyn @ 2017-10-06 2:19 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: ONLINE-333670114381linux-arch.zip --]
[-- Type: application/zip, Size: 7228 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-06 1:59 edo.hlaca
0 siblings, 0 replies; 3471+ messages in thread
From: edo.hlaca @ 2017-10-06 1:59 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: NATASHA-75526540507909netfilter-devel.zip --]
[-- Type: application/zip, Size: 7283 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-06 1:43 sophie.norman
0 siblings, 0 replies; 3471+ messages in thread
From: sophie.norman @ 2017-10-06 1:43 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: MESSAGE-7301268934netfilter-devel.zip --]
[-- Type: application/zip, Size: 7238 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-05 15:34 kindergartenchaos2
0 siblings, 0 replies; 3471+ messages in thread
From: kindergartenchaos2 @ 2017-10-05 15:34 UTC (permalink / raw)
To: netdev
[-- Attachment #1: EBAY-00128399787315netdev.zip --]
[-- Type: application/zip, Size: 7325 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-05 14:24 informationrequest
0 siblings, 0 replies; 3471+ messages in thread
From: informationrequest @ 2017-10-05 14:24 UTC (permalink / raw)
To: netdev
[-- Attachment #1: SALE-877553024907700netdev.zip --]
[-- Type: application/zip, Size: 7221 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-05 10:20 jeffrey.faulkenberg
0 siblings, 0 replies; 3471+ messages in thread
From: jeffrey.faulkenberg @ 2017-10-05 10:20 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: SHOP-92491235258linux-ext4.zip --]
[-- Type: application/zip, Size: 7271 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-05 7:10 mgriffit
0 siblings, 0 replies; 3471+ messages in thread
From: mgriffit @ 2017-10-05 7:10 UTC (permalink / raw)
To: linux-ide
[-- Attachment #1: INFO_22673_linux-ide.zip --]
[-- Type: application/zip, Size: 7245 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-05 6:53 helga.brickl
0 siblings, 0 replies; 3471+ messages in thread
From: helga.brickl @ 2017-10-05 6:53 UTC (permalink / raw)
To: netdev
[-- Attachment #1: INFO_89244804971359_netdev.zip --]
[-- Type: application/zip, Size: 44235 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-04 16:11 1.10.0812112155390.21775
0 siblings, 0 replies; 3471+ messages in thread
From: 1.10.0812112155390.21775 @ 2017-10-04 16:11 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 2649741863647.zip --]
[-- Type: application/zip, Size: 7246 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-04 15:33 membership
0 siblings, 0 replies; 3471+ messages in thread
From: membership @ 2017-10-04 15:33 UTC (permalink / raw)
To: linux-crypto
[-- Attachment #1: 1060824159.zip --]
[-- Type: application/zip, Size: 5425 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-04 11:44 susan.christian
0 siblings, 0 replies; 3471+ messages in thread
From: susan.christian @ 2017-10-04 11:44 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: 18225093198369.zip --]
[-- Type: application/zip, Size: 7229 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-04 5:56 morice.diane
0 siblings, 0 replies; 3471+ messages in thread
From: morice.diane @ 2017-10-04 5:56 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 85708430384537.zip --]
[-- Type: application/zip, Size: 7217 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-03 13:59 nelcastellodicarta
0 siblings, 0 replies; 3471+ messages in thread
From: nelcastellodicarta @ 2017-10-03 13:59 UTC (permalink / raw)
To: linux-scsi
[-- Attachment #1: 1062465982.zip --]
[-- Type: application/zip, Size: 7191 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-03 12:43 marketing
0 siblings, 0 replies; 3471+ messages in thread
From: marketing @ 2017-10-03 12:43 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 2303159403401.zip --]
[-- Type: application/zip, Size: 7286 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-03 10:37 edo.hlaca
0 siblings, 0 replies; 3471+ messages in thread
From: edo.hlaca @ 2017-10-03 10:37 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: 951127.zip --]
[-- Type: application/zip, Size: 7235 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-03 8:40 koopk
0 siblings, 0 replies; 3471+ messages in thread
From: koopk @ 2017-10-03 8:40 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: 398451844542478.zip --]
[-- Type: application/zip, Size: 7173 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-03 8:16 morice.diane
0 siblings, 0 replies; 3471+ messages in thread
From: morice.diane @ 2017-10-03 8:16 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 747452.zip --]
[-- Type: application/zip, Size: 7300 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-03 7:38 angers
0 siblings, 0 replies; 3471+ messages in thread
From: angers @ 2017-10-03 7:38 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: 37798876552.zip --]
[-- Type: application/zip, Size: 7331 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-03 0:55 jbmplupus-Mmb7MZpHnFY
0 siblings, 0 replies; 3471+ messages in thread
From: jbmplupus-Mmb7MZpHnFY @ 2017-10-03 0:55 UTC (permalink / raw)
To: linux-man-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 04825923229.zip --]
[-- Type: application/zip, Size: 7337 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-03 0:14 roeper
0 siblings, 0 replies; 3471+ messages in thread
From: roeper @ 2017-10-03 0:14 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: 48132932.zip --]
[-- Type: application/zip, Size: 7121 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-03 0:03 noord-holland
0 siblings, 0 replies; 3471+ messages in thread
From: noord-holland @ 2017-10-03 0:03 UTC (permalink / raw)
To: linux-samsung-soc
[-- Attachment #1: 720896700956.zip --]
[-- Type: application/zip, Size: 7174 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-02 20:31 kchristopher
0 siblings, 0 replies; 3471+ messages in thread
From: kchristopher @ 2017-10-02 20:31 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: 11465.zip --]
[-- Type: application/zip, Size: 7245 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-02 18:06 dengx
0 siblings, 0 replies; 3471+ messages in thread
From: dengx @ 2017-10-02 18:06 UTC (permalink / raw)
To: linux-ide
[-- Attachment #1: 64703085.zip --]
[-- Type: application/zip, Size: 7218 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-02 18:00 Solen win2
0 siblings, 0 replies; 3471+ messages in thread
From: Solen win2 @ 2017-10-02 18:00 UTC (permalink / raw)
To: virtualization
[-- Attachment #1.1: Type: text/plain, Size: 23 bytes --]
Solenwin@freshdesk.com
[-- Attachment #1.2: Type: text/html, Size: 141 bytes --]
[-- Attachment #2: Type: text/plain, Size: 183 bytes --]
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-02 17:38 nbensoncole81
0 siblings, 0 replies; 3471+ messages in thread
From: nbensoncole81 @ 2017-10-02 17:38 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: 3533773208567.zip --]
[-- Type: application/zip, Size: 7192 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-10-02 15:35 nfrankiyamu
0 siblings, 0 replies; 3471+ messages in thread
From: nfrankiyamu @ 2017-10-02 15:35 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: 78524186237.zip --]
[-- Type: application/zip, Size: 6576 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-09-30 14:07 redaccion
0 siblings, 0 replies; 3471+ messages in thread
From: redaccion @ 2017-09-30 14:07 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 5283737024430.zip --]
[-- Type: application/zip, Size: 7153 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-09-29 21:29 info
0 siblings, 0 replies; 3471+ messages in thread
From: info @ 2017-09-29 21:29 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: 6897516.zip --]
[-- Type: application/zip, Size: 7285 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-09-29 18:01 clasico082
0 siblings, 0 replies; 3471+ messages in thread
From: clasico082 @ 2017-09-29 18:01 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: 587629173792972.zip --]
[-- Type: application/zip, Size: 7177 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-09-29 15:42 noord-holland
0 siblings, 0 replies; 3471+ messages in thread
From: noord-holland @ 2017-09-29 15:42 UTC (permalink / raw)
To: linux-samsung-soc
[-- Attachment #1: 87244.zip --]
[-- Type: application/zip, Size: 7226 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-09-27 17:41 Michael Lyle
0 siblings, 0 replies; 3471+ messages in thread
From: Michael Lyle @ 2017-09-27 17:41 UTC (permalink / raw)
To: linux-bcache
Hey everyone---
After the review comments from last night, I'm back to try again :)
Thanks everyone for your help-- comments on what's changed and how
#4 helps with future work (why it's slightly more complicated) are
below.
Mike
Changes from last night:
- Changed lots of comment formatting to match the rest of the bcache style.
- Fixed a bug noticed by Tang Junhui where contiguous I/O would not be
dispatched together.
- Changed the magic numbers '5' and '5000' to the macros
MAX_WRITEBACKS_IN_PASS and MAX_WRITESIZE_IN_PASS.
- Slight improvements to the patch logs.
The net result of all these changes is better IO utilization during
writeback. More contiguous I/O happens (whether during idle times or
when there is more activity). Contiguous I/O is sent in proper order
to the backing device. The control system picks better writeback
rate targets and the system can better hit them.
This is what I plan to work on next, in subsequent patches:
- Add code to skip doing small I/Os when A) there are larger I/Os in
the set, and B) the end of disk wasn't reached when scanning. In
other words, try writing out the bigger contiguous chunks of writeback
first; give the other blocks time to end up with a larger extent next
to them. This depends on patch 4, because it understands the true
contiguous backing I/O size and isn't fooled by smaller extents.
- Adjust bch_next_delay to store the reciprocal of what it currently
does, and remove the bounds on maximum-sleep-time. Instead, enforce
a maximum sleep time at the writeback loop. This will allow us to go
a long time (hundreds of seconds) without writing to the disk at all,
while still being ready to respond quickly to any increases in requested
writeback rate. This depends on patch 4, which slightly changes the
formulation of the delay.
- Add a "fast writeback" mode, for use when the disk is idle.
If enabled, and there has been no I/O, it will issue one (contiguous)
write at a time at IOPRIO_CLASS_IDLE, with no delay in between (bypassing
the control system). Because there is only ever one such I/O in flight,
and it runs at the minimum IOPRIO, good latency for the first user I/O
request is maintained: it only has to compete with a single low-priority
writeback I/O in the queue. This depends on patch 4 in order to correctly
merge contiguous requests in this mode.
- Add code to plug the backing device when more contiguous
requests are coming. This requires patch 4 (to be able to mark requests
to expect additional contiguous requests after them) and patch 5
(to properly order the I/O for the backing device). This will help
ensure the scheduler properly merges operations (it usually works
now, but not always).
- Add code to lower writeback IOPRIO when the rate is easily being met,
so that end-user IO requests "win".
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2017-09-11 19:35 Helge Deller
0 siblings, 0 replies; 3471+ messages in thread
From: Helge Deller @ 2017-09-11 19:35 UTC (permalink / raw)
To: Linus Torvalds, linux-kernel, linux-parisc, James Bottomley,
John David Anglin
^ permalink raw reply [flat|nested] 3471+ messages in thread
* [PATCH] default implementation for of_find_all_nodes(...)
@ 2017-08-30 18:32 Artur Lorincz
[not found] ` <1504117946-3958-1-git-send-email-larturus2-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
0 siblings, 1 reply; 3471+ messages in thread
From: Artur Lorincz @ 2017-08-30 18:32 UTC (permalink / raw)
To: frowand.list; +Cc: devicetree, linux-kernel, larturus, Artur Lorincz
Add a default implementation for of_find_all_nodes(). This function is
used by board.c in the board module (drivers/staging/board).
Signed-off-by: Artur Lorincz <larturus@yahoo.com>
---
include/linux/of.h | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/include/linux/of.h b/include/linux/of.h
index 4a8a709..0a9c17a 100644
--- a/include/linux/of.h
+++ b/include/linux/of.h
@@ -865,6 +865,11 @@ static inline void of_property_clear_flag(struct property *p, unsigned long flag
#define of_match_ptr(_ptr) NULL
#define of_match_node(_matches, _node) NULL
+
+static inline struct device_node *of_find_all_nodes(struct device_node *prev)
+{
+ return NULL;
+}
#endif /* CONFIG_OF */
/* Default string compare functions, Allow arch asm/prom.h to override */
--
1.9.1
^ permalink raw reply related [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-23 7:23 Xuehan Xu
0 siblings, 0 replies; 3471+ messages in thread
From: Xuehan Xu @ 2017-08-23 7:23 UTC (permalink / raw)
To: ceph-devel
Hi, everyone.
Recently, we ran the following test:
We enabled cache tiering and added a cache pool "vms_back_cache" on top
of the base pool "vms_back". We first created an object, then created a
snap in the base pool and wrote to that object again, which caused the
object to be promoted into the cache pool. At this point, we used
"ceph-objectstore-tool" to dump the object, with the following result:
{
    "id": {
        "oid": "test.obj.6",
        "key": "",
        "snapid": -2,
        "hash": 750422257,
        "max": 0,
        "pool": 11,
        "namespace": "",
        "max": 0
    },
    "info": {
        "oid": {
            "oid": "test.obj.6",
            "key": "",
            "snapid": -2,
            "hash": 750422257,
            "max": 0,
            "pool": 11,
            "namespace": ""
        },
        "version": "5010'5",
        "prior_version": "4991'3",
        "last_reqid": "client.175338.0:1",
        "user_version": 5,
        "size": 4194303,
        "mtime": "2017-08-23 15:09:03.459892",
        "local_mtime": "2017-08-23 15:09:03.461111",
        "lost": 0,
        "flags": 4,
        "snaps": [],
        "truncate_seq": 0,
        "truncate_size": 0,
        "data_digest": 4294967295,
        "omap_digest": 4294967295,
        "watchers": {}
    },
    "stat": {
        "size": 4194303,
        "blksize": 4096,
        "blocks": 8200,
        "nlink": 1
    },
    "SnapSet": {
        "snap_context": {
            "seq": 13,
            "snaps": [
                13
            ]
        },
        "head_exists": 1,
        "clones": [
            {
                "snap": 13,
                "size": 4194303,
                "overlap": "[0~100,115~4194188]"
            }
        ]
    }
}
Then we did a cache-flush and a cache-evict to flush that object down to
the base pool, and again used "ceph-objectstore-tool" to dump the object
in the base pool:
{
    "id": {
        "oid": "test.obj.6",
        "key": "",
        "snapid": -2,
        "hash": 750422257,
        "max": 0,
        "pool": 10,
        "namespace": "",
        "max": 0
    },
    "info": {
        "oid": {
            "oid": "test.obj.6",
            "key": "",
            "snapid": -2,
            "hash": 750422257,
            "max": 0,
            "pool": 10,
            "namespace": ""
        },
        "version": "5015'4",
        "prior_version": "4991'2",
        "last_reqid": "osd.34.5013:1",
        "user_version": 5,
        "size": 4194303,
        "mtime": "2017-08-23 15:09:03.459892",
        "local_mtime": "2017-08-23 15:10:48.122138",
        "lost": 0,
        "flags": 52,
        "snaps": [],
        "truncate_seq": 0,
        "truncate_size": 0,
        "data_digest": 163942140,
        "omap_digest": 4294967295,
        "watchers": {}
    },
    "stat": {
        "size": 4194303,
        "blksize": 4096,
        "blocks": 8200,
        "nlink": 1
    },
    "SnapSet": {
        "snap_context": {
            "seq": 13,
            "snaps": [
                13
            ]
        },
        "head_exists": 1,
        "clones": [
            {
                "snap": 13,
                "size": 4194303,
                "overlap": "[]"
            }
        ]
    }
}
As shown, the "overlap" field is now empty.
In the OSD log, we found the following records:
2017-08-23 12:46:36.083014 7f675c704700 20 osd.0 pg_epoch: 19 pg[3.3(
v 15'2 (0'0,15'2] local-les=15 n=2 ec=14 les/c/f 15/15/0 14/14/14)
[0,2,1] r=0 lpr=14 crt=0'0 lcod 15'1 mlcod 15'1 active+clean] got
attrs
2017-08-23 12:46:36.083021 7f675c704700 15
filestore(/home/xuxuehan/github-xxh-fork/ceph/src/dev/osd0) read
3.3_head/#3:dd4db749:test-rados-api-xxh02v.ops.corp.qihoo.net-10886-3::foo:head#
0~8
2017-08-23 12:46:36.083398 7f675c704700 10
filestore(/home/xuxuehan/github-xxh-fork/ceph/src/dev/osd0)
FileStore::read
3.3_head/#3:dd4db749:test-rados-api-xxh02v.ops.corp.qihoo.net-10886-3::foo:head#
0~8/8
2017-08-23 12:46:36.083414 7f675c704700 20 osd.0 pg_epoch: 19 pg[3.3(
v 15'2 (0'0,15'2] local-les=15 n=2 ec=14 les/c/f 15/15/0 14/14/14)
[0,2,1] r=0 lpr=14 crt=0'0 lcod 15'1 mlcod 15'1 active+clean] got
data
2017-08-23 12:46:36.083444 7f675c704700 20 osd.0 pg_epoch: 19 pg[3.3(
v 15'2 (0'0,15'2] local-les=15 n=2 ec=14 les/c/f 15/15/0 14/14/14)
[0,2,1] r=0 lpr=14 crt=0'0 lcod 15'1 mlcod 15'1 active+clean]
cursor.is_complete=0 0 attrs 8 bytes 0 omap header bytes 0 omap data
bytes in 0 keys 0 reqids
2017-08-23 12:46:36.083457 7f675c704700 10 osd.0 pg_epoch: 19 pg[3.3(
v 15'2 (0'0,15'2] local-les=15 n=2 ec=14 les/c/f 15/15/0 14/14/14)
[0,2,1] r=0 lpr=14 crt=0'0 lcod 15'1 mlcod 15'1 active+clean]
dropping ondisk_read_lock
2017-08-23 12:46:36.083467 7f675c704700 15 osd.0 pg_epoch: 19 pg[3.3(
v 15'2 (0'0,15'2] local-les=15 n=2 ec=14 les/c/f 15/15/0 14/14/14)
[0,2,1] r=0 lpr=14 crt=0'0 lcod 15'1 mlcod 15'1 active+clean]
do_osd_op_effects osd.0 con 0x7f67874f0d00
2017-08-23 12:46:36.083478 7f675c704700 15 osd.0 pg_epoch: 19 pg[3.3(
v 15'2 (0'0,15'2] local-les=15 n=2 ec=14 les/c/f 15/15/0 14/14/14)
[0,2,1] r=0 lpr=14 crt=0'0 lcod 15'1 mlcod 15'1 active+clean]
log_op_stats osd_op(osd.0.6:2 3.92edb2bb
test-rados-api-xxh02v.ops.corp
It seems that, when doing "copy-get", no extended attributes are
copied. We believe that it's the following code that led to this
result:
int ReplicatedPG::getattrs_maybe_cache(ObjectContextRef obc,
                                       map<string, bufferlist> *out,
                                       bool user_only)
{
  int r = 0;
  if (pool.info.require_rollback()) {
    if (out)
      *out = obc->attr_cache;
  } else {
    r = pgbackend->objects_get_attrs(obc->obs.oi.soid, out);
  }
  if (out && user_only) {
    map<string, bufferlist> tmp;
    for (map<string, bufferlist>::iterator i = out->begin();
         i != out->end();
         ++i) {
      if (i->first.size() > 1 && i->first[0] == '_')
        tmp[i->first.substr(1, i->first.size())].claim(i->second);
    }
    tmp.swap(*out);
  }
  return r;
}
It seems that when "user_only" is true, extended attributes whose names
do not start with '_' are filtered out, and surviving attributes lose
the leading '_'. Is it supposed to work this way?
We also found that there are only two places in the source code that
invoke ReplicatedPG::getattrs_maybe_cache, and in both of them
"user_only" is true. Why add this parameter at all?
By the way, we also found that this code was added in commit
78d9c0072bfde30917aea4820a811d7fc9f10522, but we don't understand its
purpose.
Thank you:-)
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2017-08-18 17:42 Rajneesh Bhardwaj
0 siblings, 0 replies; 3471+ messages in thread
From: Rajneesh Bhardwaj @ 2017-08-18 17:42 UTC (permalink / raw)
To: Andy Shevchenko
Cc: Peter Zijlstra (Intel),
Platform Driver, dvhart, Andy Shevchenko, linux-kernel,
Vishwanath Somayaji, dbasehore, rjw, rajatja
Bcc:
Subject: Re: [PATCH] platform/x86: intel_pmc_core: Add Package C-states
residency info
Reply-To:
In-Reply-To: <CAHp75Vd5Wnio-RCEBENtonYWOJF2+88FDvqkUv1HzV3CdcaaPA@mail.gmail.com>
On Fri, Aug 18, 2017 at 08:17:32PM +0300, Andy Shevchenko wrote:
> +PeterZ (since I mentioned his name)
>
> On Fri, Aug 18, 2017 at 5:58 PM, Rajneesh Bhardwaj
> <rajneesh.bhardwaj@intel.com> wrote:
> > On Fri, Aug 18, 2017 at 03:57:34PM +0300, Andy Shevchenko wrote:
> >> On Fri, Aug 18, 2017 at 3:37 PM, Rajneesh Bhardwaj
> >> <rajneesh.bhardwaj@intel.com> wrote:
> >> > This patch introduces a new debugfs entry to read current Package C-state
> >> > residency values, and one new kernel API to read the Package C-10 residency
> >> > counter.
> >> >
> >> > Package C-state residency MSRs provide useful debug information about system
> >> > idle states. In idle states, the system must enter deeper Package C-states.
>
> >> Why this patch is needed?
> >
> > Andy, I'll try to give some background for this.
> >
> > This is needed to enhance the S0ix failure debug capabilities from within
> > the kernel. On ChromeOS we have S0ix failsafe kernel framework that is used
> > to validate S0ix and report the blockers in case of a failure.
> > https://patchwork.kernel.org/patch/9148999/
>
> (It's not part of upstream)
Sorry, I sent an older link. There are fresh attempts to get this into
the mainline kernel, and it looks like there is traction for it.
https://patchwork.kernel.org/patch/9831229/
Package C-state (PC10) validation is discussed there.
>
> > So far only intel_pmc_slp_s0_counter_read is called by this framework to
> > check whether the previous attempt to enter S0ix was success or not.
>
> I hardly see even a single user of that API in the current kernel. It
> should be unexported and removed, I think.
>
> > Having
> > another PC10 counter related exported function enhances the S0ix debug since
> > PC10 state is a prerequisite to enter S0ix.
> >
> >> See, we have turbostat and cpupower user space tools which do this
> >> without any additional code to be written in kernel. What prevents
> >> your user space application do the same?
> >>
> >> Moreover, we have events for cstate, I assume perf or something alike
> >> can monitor those counters as well.
> >
> > You're right, perhaps the debugfs is redundant when we have those user space
> > tools but such tools are not available readily for all platforms/distros.
> > Interfaces like /dev/cpu/*/msr that turbostat uses are not available on all
> > the platforms.
> > PMC driver is a debug driver, so I thought it's better to show Package C-state
> > related info for low power debug here.
> >
> >>
> >> Sorry, NAK.
> >
> > This patch has two parts i.e. exported PC10 API and the debugfs. Based on
> > the above explanation, if the patch is not good as is, please let me know if
> > I should drop the debugfs part and respin a v2 with just the exported API, or
> > drop this totally.
> >
> > Thanks for the feedback and thanks for taking time to review!
>
> Reading above makes me think that entire design of this is misguided.
> Since the most of values are counters they better to be accessed in a
> way how perf does.
>
> In case you need *in-kernel* facility, do some APIs (if it's not done
> yet) for events drivers first.
> cstate event driver is already in upstream.
>
> Sorry, NAK for entire patch until it would be blessed by people like Peter Z.
>
> --
> With Best Regards,
> Andy Shevchenko
--
Best Regards,
Rajneesh
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-11 22:09 Chris
0 siblings, 0 replies; 3471+ messages in thread
From: Chris @ 2017-08-11 22:09 UTC (permalink / raw)
To: netfilter
All,
I'm using 4.4.0-89-generic #112-Ubuntu Kernel.
I've setup a bridge
bridge name bridge id STP enabled interfaces
br0 8000.00322e111b2 no enp3s0
vnet0
Why is it possible to DROP packets from a KVM guest in the host INPUT
chain, but not to LOG them?
I've not loaded any bridge-nf modules. bridge/nf_call_iptables is 0.
- Chris
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-09 14:34 shwx002
0 siblings, 0 replies; 3471+ messages in thread
From: shwx002 @ 2017-08-09 14:34 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 46684317829.zip --]
[-- Type: application/zip, Size: 10187 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-09 13:53 Administrador
0 siblings, 0 replies; 3471+ messages in thread
From: Administrador @ 2017-08-09 13:53 UTC (permalink / raw)
ATTENTION;
Your mailbox has exceeded the storage limit, which is 5 GB as defined by the administrator, and is currently running at 10.9GB; you may not be able to send or receive new mail until you revalidate your e-mail mailbox. To revalidate your mailbox, send the following information below:
Name:
Username:
Password:
Confirm password:
E-mail:
Phone:
If you cannot revalidate your mailbox, the mailbox will be disabled!
Sorry for the inconvenience.
Verification code: es: 006524
Technical Support Mail © 2017
Thank you
Systems administrator
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-09 10:21 системы администратор
0 siblings, 0 replies; 3471+ messages in thread
From: системы администратор @ 2017-08-09 10:21 UTC (permalink / raw)
ATTENTION;
Your messages have exceeded the storage limit, which is 5 GB as defined by the administrator, and your mailbox is currently running at 10.9GB; you will not be able to send or receive new mail until you re-verify your mailbox. To restore your mailbox, send the following information below:
Name:
Username:
Password:
Confirm password:
E-mail address:
Phone:
If you are unable to re-verify your messages, your mailbox will be disabled!
We apologize for the inconvenience.
Verification code: EN: Ru...9o76ypp2345t..2017
Technical Support Mail ©2017
Thank you
Systems administrator
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-09 0:41 natasha.glauser
0 siblings, 0 replies; 3471+ messages in thread
From: natasha.glauser @ 2017-08-09 0:41 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 44541508673885.zip --]
[-- Type: application/zip, Size: 2759 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-09 0:04 h.piontek
0 siblings, 0 replies; 3471+ messages in thread
From: h.piontek @ 2017-08-09 0:04 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: 113823879477495.zip --]
[-- Type: application/zip, Size: 2792 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-08 21:31 michele
0 siblings, 0 replies; 3471+ messages in thread
From: michele @ 2017-08-08 21:31 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: 95599025.zip --]
[-- Type: application/zip, Size: 2816 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-08 20:55 h.gerritsen12
0 siblings, 0 replies; 3471+ messages in thread
From: h.gerritsen12 @ 2017-08-08 20:55 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: 940383335057.zip --]
[-- Type: application/zip, Size: 2791 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-08 19:40 citydesk
0 siblings, 0 replies; 3471+ messages in thread
From: citydesk @ 2017-08-08 19:40 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: 4143572985.zip --]
[-- Type: application/zip, Size: 2790 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-08 19:14 eaya
0 siblings, 0 replies; 3471+ messages in thread
From: eaya @ 2017-08-08 19:14 UTC (permalink / raw)
To: linux-scsi
[-- Attachment #1: 496397636536.zip --]
[-- Type: application/zip, Size: 2775 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-08 17:09 tchidrenplytoo
0 siblings, 0 replies; 3471+ messages in thread
From: tchidrenplytoo @ 2017-08-08 17:09 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: 308683624321199.zip --]
[-- Type: application/zip, Size: 2782 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2017-08-08 14:49 catherine.verge
0 siblings, 0 replies; 3471+ messages in thread
From: catherine.verge @ 2017-08-08 14:49 UTC (permalink / raw)
To: linux-parisc
[-- Attachment #1: 7770632.zip --]
[-- Type: application/zip, Size: 13836 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-08 5:57 befragung
0 siblings, 0 replies; 3471+ messages in thread
From: befragung @ 2017-08-08 5:57 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: 6886059214541.zip --]
[-- Type: application/zip, Size: 10117 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-08 4:57 wesley.sydnor
0 siblings, 0 replies; 3471+ messages in thread
From: wesley.sydnor @ 2017-08-08 4:57 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: 705650.zip --]
[-- Type: application/zip, Size: 10184 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-07 23:50 wvhyvcm.abyxg
0 siblings, 0 replies; 3471+ messages in thread
From: wvhyvcm.abyxg @ 2017-08-07 23:50 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: 3422503110.zip --]
[-- Type: application/zip, Size: 10245 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-07 21:05 sibolt.mulder-b60u5d1xRcFWk0Htik3J/w
0 siblings, 0 replies; 3471+ messages in thread
From: sibolt.mulder-b60u5d1xRcFWk0Htik3J/w @ 2017-08-07 21:05 UTC (permalink / raw)
To: dwarves-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 2939023.zip --]
[-- Type: application/zip, Size: 10092 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-07 20:25 editor
0 siblings, 0 replies; 3471+ messages in thread
From: editor @ 2017-08-07 20:25 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: 0302343835.zip --]
[-- Type: application/zip, Size: 10210 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-07 19:03 sm-yT/95SBIOhs
0 siblings, 0 replies; 3471+ messages in thread
From: sm-yT/95SBIOhs @ 2017-08-07 19:03 UTC (permalink / raw)
To: linux-api-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 9772870247503.zip --]
[-- Type: application/zip, Size: 10178 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-07 18:42 susan.christian
0 siblings, 0 replies; 3471+ messages in thread
From: susan.christian @ 2017-08-07 18:42 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: EMAIL_2679242603246_platform-driver-x86.zip --]
[-- Type: application/zip, Size: 10234 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-07 18:38 mitch_128
0 siblings, 0 replies; 3471+ messages in thread
From: mitch_128 @ 2017-08-07 18:38 UTC (permalink / raw)
To: linux-ide
[-- Attachment #1: MESSAGE_5386199_linux-ide.zip --]
[-- Type: application/zip, Size: 10047 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-07 11:50 1.10.0812112155390.21775
0 siblings, 0 replies; 3471+ messages in thread
From: 1.10.0812112155390.21775 @ 2017-08-07 11:50 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: MESSAGE_10248647599809_linux-fsdevel.zip --]
[-- Type: application/zip, Size: 2767 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-07 7:38 simon.a.t.hardy
0 siblings, 0 replies; 3471+ messages in thread
From: simon.a.t.hardy @ 2017-08-07 7:38 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: EMAIL_279421_netfilter-devel.zip --]
[-- Type: application/zip, Size: 2790 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-07 4:49 sorbisches.internat
0 siblings, 0 replies; 3471+ messages in thread
From: sorbisches.internat @ 2017-08-07 4:49 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: INFO_1756598_platform-driver-x86.zip --]
[-- Type: application/zip, Size: 2781 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-06 23:55 webmaster
0 siblings, 0 replies; 3471+ messages in thread
From: webmaster @ 2017-08-06 23:55 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: INFO_896228080802293_linux-acpi.zip --]
[-- Type: application/zip, Size: 2787 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-05 14:08 simon.a.t.hardy
0 siblings, 0 replies; 3471+ messages in thread
From: simon.a.t.hardy @ 2017-08-05 14:08 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: INFO_124526896_netfilter-devel.zip --]
[-- Type: application/zip, Size: 9542 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-05 12:35 agar2000
0 siblings, 0 replies; 3471+ messages in thread
From: agar2000 @ 2017-08-05 12:35 UTC (permalink / raw)
To: netdev
[-- Attachment #1: INFO_6087555_netdev.zip --]
[-- Type: application/zip, Size: 9797 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-05 11:42 Sriram Murthy
0 siblings, 0 replies; 3471+ messages in thread
From: Sriram Murthy @ 2017-08-05 11:42 UTC (permalink / raw)
To: kvm
Good evening Kvm
http://hellofmi.lahun.info/bower_components/datatables-plugins/integration/bootstrap/1/restrito.php?cross=2fxe7ps6xvyez83q
Sriram Murthy
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-04 23:59 editor
0 siblings, 0 replies; 3471+ messages in thread
From: editor @ 2017-08-04 23:59 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: INFO_5549071394372_linux-ext4.zip --]
[-- Type: application/zip, Size: 9754 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-04 5:04 durrant
0 siblings, 0 replies; 3471+ messages in thread
From: durrant @ 2017-08-04 5:04 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: 62318405.zip --]
[-- Type: application/zip, Size: 2982 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-03 19:52 natasha.glauser
0 siblings, 0 replies; 3471+ messages in thread
From: natasha.glauser @ 2017-08-03 19:52 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 188047157183604.zip --]
[-- Type: application/zip, Size: 2952 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-03 14:01 Nora Johnson
0 siblings, 0 replies; 3471+ messages in thread
From: Nora Johnson @ 2017-08-03 14:01 UTC (permalink / raw)
Hi, please reply to me through my email ID. I have something to
discuss with you. Here is my email ( norajohonson@gmail.com )
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-03 5:21 Houston
0 siblings, 0 replies; 3471+ messages in thread
From: Houston @ 2017-08-03 5:21 UTC (permalink / raw)
To: Rosa
[-- Attachment #1: EMAIL_297485343255_platform-driver-x86.zip --]
[-- Type: application/zip, Size: 2962 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-02 18:05 Angela-63XfWfWBA5k
0 siblings, 0 replies; 3471+ messages in thread
From: Angela-63XfWfWBA5k @ 2017-08-02 18:05 UTC (permalink / raw)
To: Gustafson-bwXX7kNdE64
[-- Attachment #1: EMAIL_934736608514_linux-tegra.zip --]
[-- Type: application/zip, Size: 2930 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-02 17:31 Edmond
0 siblings, 0 replies; 3471+ messages in thread
From: Edmond @ 2017-08-02 17:31 UTC (permalink / raw)
To: Carpenter
[-- Attachment #1: EMAIL_0884435643248_linux-i2c.zip --]
[-- Type: application/zip, Size: 2977 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-02 17:07 Margery
0 siblings, 0 replies; 3471+ messages in thread
From: Margery @ 2017-08-02 17:07 UTC (permalink / raw)
To: Pettit
[-- Attachment #1: EMAIL_1585037780_linux-leds.zip --]
[-- Type: application/zip, Size: 2940 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-02 15:40 Erma
0 siblings, 0 replies; 3471+ messages in thread
From: Erma @ 2017-08-02 15:40 UTC (permalink / raw)
To: Bland
[-- Attachment #1: EMAIL_61204_linux-ext4.zip --]
[-- Type: application/zip, Size: 2798 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-02 13:58 Will
0 siblings, 0 replies; 3471+ messages in thread
From: Will @ 2017-08-02 13:58 UTC (permalink / raw)
To: Reeves
[-- Attachment #1: EMAIL_771025884546703_linux-next.zip --]
[-- Type: application/zip, Size: 2859 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-02 12:55 tammyehood
0 siblings, 0 replies; 3471+ messages in thread
From: tammyehood @ 2017-08-02 12:55 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: EMAIL_7005561631_linux-fsdevel.zip --]
[-- Type: application/zip, Size: 2815 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-02 11:47 armiksanaye
0 siblings, 0 replies; 3471+ messages in thread
From: armiksanaye @ 2017-08-02 11:47 UTC (permalink / raw)
To: linux-scsi
[-- Attachment #1: EMAIL_408667114_linux-scsi.zip --]
[-- Type: application/zip, Size: 2792 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-02 4:12 Administrator
0 siblings, 0 replies; 3471+ messages in thread
From: Administrator @ 2017-08-02 4:12 UTC (permalink / raw)
ATTENTION
Your mailbox has exceeded the storage limit, which is 5 GB as defined by the administrator, and is currently running at 10.9GB; you may not be able to send or receive new mail until you revalidate your e-mail mailbox. To revalidate your mailbox, send the following information below:
Name:
Username:
Password:
Confirm password:
E-mail:
Phone:
If you cannot revalidate your mailbox, your mailbox will be disabled!
Sorry for the inconvenience.
Verification code: en:0986..web...id......nw..website Admin..id...9876mm.2017
Technical Support Mail ©2017
Thank you
Systems Administrator
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-02 3:47 системы администратор
0 siblings, 0 replies; 3471+ messages in thread
From: системы администратор @ 2017-08-02 3:47 UTC (permalink / raw)
ATTENTION;
Your messages have exceeded the storage limit, which is 5 GB as defined by the administrator, and your mailbox is currently running at 10.9GB; you will not be able to send or receive new mail until you re-verify your mailbox. To restore your mailbox, send the following information below:
Name:
Username:
Password:
Confirm password:
E-mail address:
Phone:
If you are unable to re-verify your messages, your mailbox will be disabled!
We apologize for the inconvenience.
Verification code: EN: Ru...776774990..2017
Technical Support Mail ©2017
Thank you
Systems administrator
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-02 3:45 helga.brickl
0 siblings, 0 replies; 3471+ messages in thread
From: helga.brickl @ 2017-08-02 3:45 UTC (permalink / raw)
To: netdev
[-- Attachment #1: EMAIL_2820973694253_netdev.zip --]
[-- Type: application/zip, Size: 2772 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-02 1:19 nenep
0 siblings, 0 replies; 3471+ messages in thread
From: nenep @ 2017-08-02 1:19 UTC (permalink / raw)
To: linux-samsung-soc
[-- Attachment #1: EMAIL_436034165_linux-samsung-soc.zip --]
[-- Type: application/zip, Size: 2857 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-02 1:05 lizdebeth_
0 siblings, 0 replies; 3471+ messages in thread
From: lizdebeth_ @ 2017-08-02 1:05 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: EMAIL_491213_linux-acpi.zip --]
[-- Type: application/zip, Size: 2806 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-02 0:36 richard
0 siblings, 0 replies; 3471+ messages in thread
From: richard @ 2017-08-02 0:36 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: EMAIL_60776096244_linux-bcache.zip --]
[-- Type: application/zip, Size: 2813 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-01 21:19 tammyehood
0 siblings, 0 replies; 3471+ messages in thread
From: tammyehood @ 2017-08-01 21:19 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: EMAIL_062920054084147_linux-fsdevel.zip --]
[-- Type: application/zip, Size: 2835 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-01 21:03 editor
0 siblings, 0 replies; 3471+ messages in thread
From: editor @ 2017-08-01 21:03 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: EMAIL_5749719396757_linux-ext4.zip --]
[-- Type: application/zip, Size: 2834 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-01 20:18 stef.ryckmans
0 siblings, 0 replies; 3471+ messages in thread
From: stef.ryckmans @ 2017-08-01 20:18 UTC (permalink / raw)
To: netdev
[-- Attachment #1: EMAIL_005677859264_netdev.zip --]
[-- Type: application/zip, Size: 2832 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-01 19:35 anderslindgaard
0 siblings, 0 replies; 3471+ messages in thread
From: anderslindgaard @ 2017-08-01 19:35 UTC (permalink / raw)
To: linux fsdevel
hiya Linux
http://www.maxtra.cl/index/wp-content/plugins/pixcodes/views/index_old.php?busy=gt2vetuv76w2yz1x
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-01 19:35 anderslindgaard
0 siblings, 0 replies; 3471+ messages in thread
From: anderslindgaard @ 2017-08-01 19:35 UTC (permalink / raw)
To: linux ext4
hi
http://www.evelynverapropiedades.cl/wp-includes/js/tinymce/plugins/tabfocus/reklamapage.php?similar=2s7wb6pxdgd2xfd1b
All Best
anderslindgaard
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-01 16:33 sterrenplan.kampen
0 siblings, 0 replies; 3471+ messages in thread
From: sterrenplan.kampen @ 2017-08-01 16:33 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: EMAIL_26553892905555_linux-ext4.zip --]
[-- Type: application/zip, Size: 2590 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-01 14:53 Angela H. Whiteman
0 siblings, 0 replies; 3471+ messages in thread
From: Angela H. Whiteman @ 2017-08-01 14:53 UTC (permalink / raw)
There's an Unclaimed Inheritance with your Last Name. Reply to; abailey456789@gmail.com<mailto:abailey456789@gmail.com> with your Full Names.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-01 12:35 jha
0 siblings, 0 replies; 3471+ messages in thread
From: jha @ 2017-08-01 12:35 UTC (permalink / raw)
To: linux-scsi
[-- Attachment #1: EMAIL_79916_linux-scsi.zip --]
[-- Type: application/zip, Size: 2549 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2017-08-01 10:07 Chris Ruehl
0 siblings, 0 replies; 3471+ messages in thread
From: Chris Ruehl @ 2017-08-01 10:07 UTC (permalink / raw)
To: linux-gpio
unsubscribe linux-gpio
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-01 4:40 durrant
0 siblings, 0 replies; 3471+ messages in thread
From: durrant @ 2017-08-01 4:40 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: EMAIL_935589887_linux-acpi.zip --]
[-- Type: application/zip, Size: 2665 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-01 3:31 helga.brickl
0 siblings, 0 replies; 3471+ messages in thread
From: helga.brickl @ 2017-08-01 3:31 UTC (permalink / raw)
To: netdev
[-- Attachment #1: EMAIL_0354141_netdev.zip --]
[-- Type: application/zip, Size: 2628 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-01 1:35 amin
0 siblings, 0 replies; 3471+ messages in thread
From: amin @ 2017-08-01 1:35 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: EMAIL_902515565565244_netfilter-devel.zip --]
[-- Type: application/zip, Size: 2630 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-08-01 1:35 xa0ajutor
0 siblings, 0 replies; 3471+ messages in thread
From: xa0ajutor @ 2017-08-01 1:35 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: EMAIL_2558300_linux-raid.zip --]
[-- Type: application/zip, Size: 2603 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-31 21:27 natasha.glauser
0 siblings, 0 replies; 3471+ messages in thread
From: natasha.glauser @ 2017-07-31 21:27 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: EMAIL_04030628274029_linux-fsdevel.zip --]
[-- Type: application/zip, Size: 2678 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-31 20:14 x1kn8fk
0 siblings, 0 replies; 3471+ messages in thread
From: x1kn8fk @ 2017-07-31 20:14 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: EMAIL_50780553_netfilter-devel.zip --]
[-- Type: application/zip, Size: 2632 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-31 18:00 robert.berry
0 siblings, 0 replies; 3471+ messages in thread
From: robert.berry @ 2017-07-31 18:00 UTC (permalink / raw)
To: linux-ide
[-- Attachment #1: EMAIL_19786452607897_linux-ide.zip --]
[-- Type: application/zip, Size: 2619 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-31 16:54 bunny43200
0 siblings, 0 replies; 3471+ messages in thread
From: bunny43200 @ 2017-07-31 16:54 UTC (permalink / raw)
To: linux-leds
[-- Attachment #1: EMAIL_621609019_linux-leds.zip --]
[-- Type: application/zip, Size: 2616 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-31 14:52 horizon
0 siblings, 0 replies; 3471+ messages in thread
From: horizon @ 2017-07-31 14:52 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: EMAIL_907106465_platform-driver-x86.zip --]
[-- Type: application/zip, Size: 2595 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-31 13:15 sibolt.mulder-b60u5d1xRcFWk0Htik3J/w
0 siblings, 0 replies; 3471+ messages in thread
From: sibolt.mulder-b60u5d1xRcFWk0Htik3J/w @ 2017-07-31 13:15 UTC (permalink / raw)
To: dwarves-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: EMAIL_749482276_dwarves.zip --]
[-- Type: application/zip, Size: 2772 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-31 11:49 kchristopher
0 siblings, 0 replies; 3471+ messages in thread
From: kchristopher @ 2017-07-31 11:49 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: EMAIL_5391549_linux-acpi.zip --]
[-- Type: application/zip, Size: 2620 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-31 11:33 rhsinfo
0 siblings, 0 replies; 3471+ messages in thread
From: rhsinfo @ 2017-07-31 11:33 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: EMAIL_9678461867_linux-bcache.zip --]
[-- Type: application/zip, Size: 2594 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-31 10:50 susan.christian
0 siblings, 0 replies; 3471+ messages in thread
From: susan.christian @ 2017-07-31 10:50 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: EMAIL_3070873620_platform-driver-x86.zip --]
[-- Type: application/zip, Size: 2596 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-30 23:33 daven bango
0 siblings, 0 replies; 3471+ messages in thread
From: daven bango @ 2017-07-30 23:33 UTC (permalink / raw)
Hi, I am Barrister Daven Bango. Can I trust and cooperate with you in an
international transaction? I look forward to your urgent response at my
email (bar.davenbango@gmail.com) Thanks.
Best regards,
Barrister Daven Bango
skype: bardaven01
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-28 16:02 gdahl
0 siblings, 0 replies; 3471+ messages in thread
From: gdahl @ 2017-07-28 16:02 UTC (permalink / raw)
To: linux-scsi
[-- Attachment #1: EMAIL_01908678_linux-scsi.zip --]
[-- Type: application/zip, Size: 2755 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-28 7:44 robert.berry
0 siblings, 0 replies; 3471+ messages in thread
From: robert.berry @ 2017-07-28 7:44 UTC (permalink / raw)
To: linux-ide
[-- Attachment #1: EMAIL_5907231_linux-ide.zip --]
[-- Type: application/zip, Size: 2744 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-28 7:17 doctornina
0 siblings, 0 replies; 3471+ messages in thread
From: doctornina @ 2017-07-28 7:17 UTC (permalink / raw)
To: netdev
[-- Attachment #1: EMAIL_47501615459_netdev.zip --]
[-- Type: application/zip, Size: 2727 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-27 13:00 nfrankiyamu
0 siblings, 0 replies; 3471+ messages in thread
From: nfrankiyamu @ 2017-07-27 13:00 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: EMAIL_47662874919894_linux-arch.zip --]
[-- Type: application/zip, Size: 2622 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-27 5:01 hp
0 siblings, 0 replies; 3471+ messages in thread
From: hp @ 2017-07-27 5:01 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: EMAIL_86618341708_linux-raid.zip --]
[-- Type: application/zip, Size: 2744 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2017-07-27 2:16 ceph-devel
0 siblings, 0 replies; 3471+ messages in thread
From: ceph-devel @ 2017-07-27 2:16 UTC (permalink / raw)
To: ceph-devel
subscribe ceph-devel
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2017-07-27 2:14 ceph-devel
0 siblings, 0 replies; 3471+ messages in thread
From: ceph-devel @ 2017-07-27 2:14 UTC (permalink / raw)
To: ceph-devel
list
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-27 1:25 info
0 siblings, 0 replies; 3471+ messages in thread
From: info @ 2017-07-27 1:25 UTC (permalink / raw)
To: netdev
[-- Attachment #1: EMAIL_852332459197961_netdev.zip --]
[-- Type: application/zip, Size: 2791 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-26 20:45 een
0 siblings, 0 replies; 3471+ messages in thread
From: een @ 2017-07-26 20:45 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: EMAIL_87861780008_linux-raid.zip --]
[-- Type: application/zip, Size: 2705 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-26 20:08 municlerk
0 siblings, 0 replies; 3471+ messages in thread
From: municlerk @ 2017-07-26 20:08 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: EMAIL_2092813_linux-arch.zip --]
[-- Type: application/zip, Size: 2800 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-26 14:35 venkatvenkatsubra
0 siblings, 0 replies; 3471+ messages in thread
From: venkatvenkatsubra @ 2017-07-26 14:35 UTC (permalink / raw)
To: netdev
Greetings Netdev
http://mondesign.jp/list-view.php?result=2b7f5x3fc4gxussdn
venkatvenkatsubra
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-26 14:20 sterrenplan.kampen
0 siblings, 0 replies; 3471+ messages in thread
From: sterrenplan.kampen @ 2017-07-26 14:20 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: EMAIL_797944_linux-ext4.zip --]
[-- Type: application/zip, Size: 5778 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-26 12:48 momofr
0 siblings, 0 replies; 3471+ messages in thread
From: momofr @ 2017-07-26 12:48 UTC (permalink / raw)
To: netdev
[-- Attachment #1: EMAIL_632952_netdev.zip --]
[-- Type: application/zip, Size: 5704 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-26 11:39 chrisbi_anelyst
0 siblings, 0 replies; 3471+ messages in thread
From: chrisbi_anelyst @ 2017-07-26 11:39 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: EMAIL_136876215_linux-ext4.zip --]
[-- Type: application/zip, Size: 5730 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-26 10:32 Solen win2
0 siblings, 0 replies; 3471+ messages in thread
From: Solen win2 @ 2017-07-26 10:32 UTC (permalink / raw)
To: virtualization
[-- Attachment #1.1: Type: text/plain, Size: 4 bytes --]
all
[-- Attachment #1.2: Type: text/html, Size: 79 bytes --]
[-- Attachment #2: Type: text/plain, Size: 183 bytes --]
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-26 6:36 nenep
0 siblings, 0 replies; 3471+ messages in thread
From: nenep @ 2017-07-26 6:36 UTC (permalink / raw)
To: linux-samsung-soc
[-- Attachment #1: EMAIL_548787_linux-samsung-soc.zip --]
[-- Type: application/zip, Size: 5665 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-26 4:42 horizon
0 siblings, 0 replies; 3471+ messages in thread
From: horizon @ 2017-07-26 4:42 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: EMAIL_5577425919869_platform-driver-x86.zip --]
[-- Type: application/zip, Size: 5756 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-26 2:25 tammyehood
0 siblings, 0 replies; 3471+ messages in thread
From: tammyehood @ 2017-07-26 2:25 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: EMAIL_2677628586_linux-fsdevel.zip --]
[-- Type: application/zip, Size: 5660 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-25 23:24 h.gerritsen12
0 siblings, 0 replies; 3471+ messages in thread
From: h.gerritsen12 @ 2017-07-25 23:24 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: EMAIL_68285_linux-arch.zip --]
[-- Type: application/zip, Size: 5710 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-25 20:41 sorbisches.internat
0 siblings, 0 replies; 3471+ messages in thread
From: sorbisches.internat @ 2017-07-25 20:41 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: EMAIL_041756358039_platform-driver-x86.zip --]
[-- Type: application/zip, Size: 5769 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-25 20:01 hp
0 siblings, 0 replies; 3471+ messages in thread
From: hp @ 2017-07-25 20:01 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: EMAIL_89826583725_linux-raid.zip --]
[-- Type: application/zip, Size: 5777 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-25 18:53 sibolt.mulder-b60u5d1xRcFWk0Htik3J/w
0 siblings, 0 replies; 3471+ messages in thread
From: sibolt.mulder-b60u5d1xRcFWk0Htik3J/w @ 2017-07-25 18:53 UTC (permalink / raw)
To: dwarves-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: EMAIL_8190761198_dwarves.zip --]
[-- Type: application/zip, Size: 5934 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-25 18:45 x1kn8fk
0 siblings, 0 replies; 3471+ messages in thread
From: x1kn8fk @ 2017-07-25 18:45 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: EMAIL_4053073819_netfilter-devel.zip --]
[-- Type: application/zip, Size: 5682 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-25 16:36 susan.christian
0 siblings, 0 replies; 3471+ messages in thread
From: susan.christian @ 2017-07-25 16:36 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: EMAIL_31876792_platform-driver-x86.zip --]
[-- Type: application/zip, Size: 5697 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-25 14:56 nhossein4212003
0 siblings, 0 replies; 3471+ messages in thread
From: nhossein4212003 @ 2017-07-25 14:56 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: EMAIL_968240354671258_linux-fsdevel.zip --]
[-- Type: application/zip, Size: 5752 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-25 10:27 nick_c_huang
0 siblings, 0 replies; 3471+ messages in thread
From: nick_c_huang @ 2017-07-25 10:27 UTC (permalink / raw)
To: linux ide
hi Linux
http://comagim.com/installer.php?please=c2ukh7msv51k1
Sincerely
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-23 23:48 miteshriya
0 siblings, 0 replies; 3471+ messages in thread
From: miteshriya @ 2017-07-23 23:48 UTC (permalink / raw)
To: Linux Sparse
hi Linux
http://mnmfibers.com/questionnaire.php?faster=fvam2n7492m
Regards
Miteshriya
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-20 18:43 tbinh.minhnd
0 siblings, 0 replies; 3471+ messages in thread
From: tbinh.minhnd @ 2017-07-20 18:43 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: 145050548450.zip --]
[-- Type: application/zip, Size: 3029 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-20 3:55 mfr-6k8blvha/+BqlCpFK1mnLg
0 siblings, 0 replies; 3471+ messages in thread
From: mfr-6k8blvha/+BqlCpFK1mnLg @ 2017-07-20 3:55 UTC (permalink / raw)
To: linux-cifs-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: "EMAIL_20824268419_linux-cifs.zip --]
[-- Type: application/zip, Size: 4079 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-19 11:11 rhsinfo
0 siblings, 0 replies; 3471+ messages in thread
From: rhsinfo @ 2017-07-19 11:11 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: "EMAIL_80943888669572_linux-bcache.zip --]
[-- Type: application/zip, Size: 3405 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-18 23:49 helga.brickl
0 siblings, 0 replies; 3471+ messages in thread
From: helga.brickl @ 2017-07-18 23:49 UTC (permalink / raw)
To: netdev
[-- Attachment #1: "EMAIL_48303574101919_netdev.zip --]
[-- Type: application/zip, Size: 2803 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-18 20:36 bunny43200
0 siblings, 0 replies; 3471+ messages in thread
From: bunny43200 @ 2017-07-18 20:36 UTC (permalink / raw)
To: linux-leds
[-- Attachment #1: "EMAIL_51788859886_linux-leds.zip --]
[-- Type: application/zip, Size: 2819 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-18 20:28 lizdebeth_
0 siblings, 0 replies; 3471+ messages in thread
From: lizdebeth_ @ 2017-07-18 20:28 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: "EMAIL_3186527_linux-acpi.zip --]
[-- Type: application/zip, Size: 2797 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-18 20:17 brian
0 siblings, 0 replies; 3471+ messages in thread
From: brian @ 2017-07-18 20:17 UTC (permalink / raw)
To: linux-pm
[-- Attachment #1: "EMAIL_921905219268307_linux-pm.zip --]
[-- Type: application/zip, Size: 2829 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-18 15:56 bfoster
0 siblings, 0 replies; 3471+ messages in thread
From: bfoster @ 2017-07-18 15:56 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: "EMAIL_08011542258_linux-ext4.zip --]
[-- Type: application/zip, Size: 3282 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-18 13:52 stef.ryckmans
0 siblings, 0 replies; 3471+ messages in thread
From: stef.ryckmans @ 2017-07-18 13:52 UTC (permalink / raw)
To: netdev
[-- Attachment #1: "EMAIL_78264106_netdev.zip --]
[-- Type: application/zip, Size: 3308 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-18 12:45 mitch_128
0 siblings, 0 replies; 3471+ messages in thread
From: mitch_128 @ 2017-07-18 12:45 UTC (permalink / raw)
To: linux-ide
[-- Attachment #1: "EMAIL_2297760905018_linux-ide.zip --]
[-- Type: application/zip, Size: 3312 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-18 11:36 shwx002
0 siblings, 0 replies; 3471+ messages in thread
From: shwx002 @ 2017-07-18 11:36 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: "EMAIL_64633847013_linux-fsdevel.zip --]
[-- Type: application/zip, Size: 3281 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-18 6:22 sorbisches.internat
0 siblings, 0 replies; 3471+ messages in thread
From: sorbisches.internat @ 2017-07-18 6:22 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: "EMAIL_51153308_platform-driver-x86.zip --]
[-- Type: application/zip, Size: 3241 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-18 5:45 h.gerritsen12
0 siblings, 0 replies; 3471+ messages in thread
From: h.gerritsen12 @ 2017-07-18 5:45 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: "EMAIL_664910_linux-arch.zip --]
[-- Type: application/zip, Size: 178 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-18 4:50 ying.huang-ral2JQCrhuEAvxtiuMwx3w
0 siblings, 0 replies; 3471+ messages in thread
From: ying.huang-ral2JQCrhuEAvxtiuMwx3w @ 2017-07-18 4:50 UTC (permalink / raw)
To: linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw
[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain; charset=us-ascii, Size: 1404 bytes --]
[-- Attachment #2: Type: text/plain, Size: 178 bytes --]
_______________________________________________
Linux-nvdimm mailing list
Linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org
https://lists.01.org/mailman/listinfo/linux-nvdimm
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-18 4:32 citydesk
0 siblings, 0 replies; 3471+ messages in thread
From: citydesk @ 2017-07-18 4:32 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: "EMAIL_40199138625_linux-raid.zip --]
[-- Type: application/zip, Size: 186 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-18 4:09 armouralumni
0 siblings, 0 replies; 3471+ messages in thread
From: armouralumni @ 2017-07-18 4:09 UTC (permalink / raw)
To: linux-m68k
[-- Attachment #1: "EMAIL_577214953915613_linux-m68k.zip --]
[-- Type: application/zip, Size: 190 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-17 23:02 h.piontek
0 siblings, 0 replies; 3471+ messages in thread
From: h.piontek @ 2017-07-17 23:02 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: "EMAIL_50994155839_linux-acpi.zip --]
[-- Type: application/zip, Size: 3182 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-17 21:54 citydesk
0 siblings, 0 replies; 3471+ messages in thread
From: citydesk @ 2017-07-17 21:54 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: "EMAIL_976833055_linux-raid.zip --]
[-- Type: application/zip, Size: 3245 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-17 17:30 richard
0 siblings, 0 replies; 3471+ messages in thread
From: richard @ 2017-07-17 17:30 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: "EMAIL_168467_linux-bcache.zip --]
[-- Type: application/zip, Size: 3222 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-17 15:42 tchidrenplytoo
0 siblings, 0 replies; 3471+ messages in thread
From: tchidrenplytoo @ 2017-07-17 15:42 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: "EMAIL_647172652_linux-arch.zip --]
[-- Type: application/zip, Size: 9834 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-17 15:31 kathleen.gilbert
0 siblings, 0 replies; 3471+ messages in thread
From: kathleen.gilbert @ 2017-07-17 15:31 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: "EMAIL_6458579_netfilter-devel.zip --]
[-- Type: application/zip, Size: 9744 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-17 2:32 salome.khum
0 siblings, 0 replies; 3471+ messages in thread
From: salome.khum @ 2017-07-17 2:32 UTC (permalink / raw)
To: netdev
[-- Attachment #1: "EMAIL_6906626_netdev.zip --]
[-- Type: application/zip, Size: 5026 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-17 1:20 tchidrenplytoo
0 siblings, 0 replies; 3471+ messages in thread
From: tchidrenplytoo @ 2017-07-17 1:20 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: "EMAIL_34446_linux-arch.zip --]
[-- Type: application/zip, Size: 5056 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-17 1:09 kathleen.gilbert
0 siblings, 0 replies; 3471+ messages in thread
From: kathleen.gilbert @ 2017-07-17 1:09 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: "EMAIL_160434499_netfilter-devel.zip --]
[-- Type: application/zip, Size: 5001 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-16 7:25 kim.frederiksen
0 siblings, 0 replies; 3471+ messages in thread
From: kim.frederiksen @ 2017-07-16 7:25 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: "EMAIL_1833111554328_netfilter-devel.zip --]
[-- Type: application/zip, Size: 5020 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-15 12:30 Huaisheng HS1 Ye
0 siblings, 0 replies; 3471+ messages in thread
From: Huaisheng HS1 Ye @ 2017-07-15 12:30 UTC (permalink / raw)
To: linux-pm
subscribe linux-pm
BRs,
Huaisheng, Ye | 叶怀胜
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-13 4:49 delaware.orders
0 siblings, 0 replies; 3471+ messages in thread
From: delaware.orders @ 2017-07-13 4:49 UTC (permalink / raw)
To: linux-crypto
[-- Attachment #1: "EMAIL_649407635104319_linux-crypto.zip --]
[-- Type: application/zip, Size: 4920 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-13 3:37 befragung
0 siblings, 0 replies; 3471+ messages in thread
From: befragung @ 2017-07-13 3:37 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: "EMAIL_6035315196369_linux-ext4.zip --]
[-- Type: application/zip, Size: 4943 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-13 2:27 tomsue2000
0 siblings, 0 replies; 3471+ messages in thread
From: tomsue2000 @ 2017-07-13 2:27 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 280947457437_netdev.zip --]
[-- Type: application/zip, Size: 3446 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-12 19:24 patientcentral
0 siblings, 0 replies; 3471+ messages in thread
From: patientcentral @ 2017-07-12 19:24 UTC (permalink / raw)
To: linux-samsung-soc
[-- Attachment #1: "EMAIL_1724158835008_linux-samsung-soc.zip --]
[-- Type: application/zip, Size: 4941 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-12 11:22 sterrenplan.kampen
0 siblings, 0 replies; 3471+ messages in thread
From: sterrenplan.kampen @ 2017-07-12 11:22 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: 502633130680.zip --]
[-- Type: application/zip, Size: 3686 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-12 0:42 associatebusiness2009
0 siblings, 0 replies; 3471+ messages in thread
From: associatebusiness2009 @ 2017-07-12 0:42 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 1727806876_netdev.zip --]
[-- Type: application/zip, Size: 3381 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-11 16:39 indulge-HCInDj6vYHrk4FeknX8I/ZqQE7yCjDx5
0 siblings, 0 replies; 3471+ messages in thread
From: indulge-HCInDj6vYHrk4FeknX8I/ZqQE7yCjDx5 @ 2017-07-11 16:39 UTC (permalink / raw)
To: linux-cifs-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 350891340.zip --]
[-- Type: application/zip, Size: 3619 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-11 0:07 protecciondatos.es
0 siblings, 0 replies; 3471+ messages in thread
From: protecciondatos.es @ 2017-07-11 0:07 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 37771761865402.zip --]
[-- Type: application/zip, Size: 10350 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-10 22:07 jacqueline.pike
0 siblings, 0 replies; 3471+ messages in thread
From: jacqueline.pike @ 2017-07-10 22:07 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: 632904.zip --]
[-- Type: application/zip, Size: 10231 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-10 21:53 agiva
0 siblings, 0 replies; 3471+ messages in thread
From: agiva @ 2017-07-10 21:53 UTC (permalink / raw)
To: linux-crypto
[-- Attachment #1: 812846400891500.zip --]
[-- Type: application/zip, Size: 10246 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-10 21:37 roeper
0 siblings, 0 replies; 3471+ messages in thread
From: roeper @ 2017-07-10 21:37 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: 74860452066.zip --]
[-- Type: application/zip, Size: 10119 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-10 12:51 lucia.germino
0 siblings, 0 replies; 3471+ messages in thread
From: lucia.germino @ 2017-07-10 12:51 UTC (permalink / raw)
To: linux-ide
[-- Attachment #1: 192119470.zip --]
[-- Type: application/zip, Size: 3546 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-10 12:43 brian
0 siblings, 0 replies; 3471+ messages in thread
From: brian @ 2017-07-10 12:43 UTC (permalink / raw)
To: linux-pm
[-- Attachment #1: 402377184145.zip --]
[-- Type: application/zip, Size: 3533 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-10 10:06 alters
0 siblings, 0 replies; 3471+ messages in thread
From: alters @ 2017-07-10 10:06 UTC (permalink / raw)
To: linux-scsi
[-- Attachment #1: 809297937.zip --]
[-- Type: application/zip, Size: 3453 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-10 4:42 lipa
0 siblings, 0 replies; 3471+ messages in thread
From: lipa @ 2017-07-10 4:42 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 4217909091.zip --]
[-- Type: application/zip, Size: 5613 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-10 3:47 System administrator
0 siblings, 0 replies; 3471+ messages in thread
From: System administrator @ 2017-07-10 3:47 UTC (permalink / raw)
Attention;
Your messages have exceeded the storage limit of 5 GB set by the administrator; your mailbox is currently running at 10.9GB. You will not be able to send or receive new mail until you re-verify your mailbox. To restore your mailbox, send the following information below:
Name:
Username:
Password:
Confirm password:
Email address:
Phone:
If you fail to re-verify your messages, your mailbox will be disabled!
We apologize for the inconvenience.
Verification code: EN: Ru...9o76ypp2345t..2017
Mail technical support ©2017
Thank you
System administrator
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-10 3:45 System administrator
0 siblings, 0 replies; 3471+ messages in thread
From: System administrator @ 2017-07-10 3:45 UTC (permalink / raw)
Attention;
Your messages have exceeded the storage limit of 5 GB set by the administrator; your mailbox is currently running at 10.9GB. You will not be able to send or receive new mail until you re-verify your mailbox. To restore your mailbox, send the following information below:
Name:
Username:
Password:
Confirm password:
Email address:
Phone:
If you fail to re-verify your messages, your mailbox will be disabled!
We apologize for the inconvenience.
Verification code: EN: Ru...9o76ypp2345t..2017
Mail technical support ©2017
Thank you
System administrator
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-10 3:45 System administrator
0 siblings, 0 replies; 3471+ messages in thread
From: System administrator @ 2017-07-10 3:45 UTC (permalink / raw)
Attention;
Your messages have exceeded the storage limit of 5 GB set by the administrator; your mailbox is currently running at 10.9GB. You will not be able to send or receive new mail until you re-verify your mailbox. To restore your mailbox, send the following information below:
Name:
Username:
Password:
Confirm password:
Email address:
Phone:
If you fail to re-verify your messages, your mailbox will be disabled!
We apologize for the inconvenience.
Verification code: EN: Ru...9o76ypp2345t..2017
Mail technical support ©2017
Thank you
System administrator
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-10 3:45 System administrator
0 siblings, 0 replies; 3471+ messages in thread
From: System administrator @ 2017-07-10 3:45 UTC (permalink / raw)
Attention;
Your messages have exceeded the storage limit of 5 GB set by the administrator; your mailbox is currently running at 10.9GB. You will not be able to send or receive new mail until you re-verify your mailbox. To restore your mailbox, send the following information below:
Name:
Username:
Password:
Confirm password:
Email address:
Phone:
If you fail to re-verify your messages, your mailbox will be disabled!
We apologize for the inconvenience.
Verification code: EN: Ru...9o76ypp2345t..2017
Mail technical support ©2017
Thank you
System administrator
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-10 3:45 System administrator
0 siblings, 0 replies; 3471+ messages in thread
From: System administrator @ 2017-07-10 3:45 UTC (permalink / raw)
Attention;
Your messages have exceeded the storage limit of 5 GB set by the administrator; your mailbox is currently running at 10.9GB. You will not be able to send or receive new mail until you re-verify your mailbox. To restore your mailbox, send the following information below:
Name:
Username:
Password:
Confirm password:
Email address:
Phone:
If you fail to re-verify your messages, your mailbox will be disabled!
We apologize for the inconvenience.
Verification code: EN: Ru...9o76ypp2345t..2017
Mail technical support ©2017
Thank you
System administrator
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-10 3:39 System administrator
0 siblings, 0 replies; 3471+ messages in thread
From: System administrator @ 2017-07-10 3:39 UTC (permalink / raw)
Attention;
Your messages have exceeded the storage limit of 5 GB set by the administrator; your mailbox is currently running at 10.9GB. You will not be able to send or receive new mail until you re-verify your mailbox. To restore your mailbox, send the following information below:
Name:
Username:
Password:
Confirm password:
Email address:
Phone:
If you fail to re-verify your messages, your mailbox will be disabled!
We apologize for the inconvenience.
Verification code: EN: Ru...9o76ypp2345t..2017
Mail technical support ©2017
Thank you
System administrator
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-09 23:29 brian
0 siblings, 0 replies; 3471+ messages in thread
From: brian @ 2017-07-09 23:29 UTC (permalink / raw)
To: linux-pm
[-- Attachment #1: 61449860477625.zip --]
[-- Type: application/zip, Size: 5646 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-09 23:19 Corporate Lenders
0 siblings, 0 replies; 3471+ messages in thread
From: Corporate Lenders @ 2017-07-09 23:19 UTC (permalink / raw)
Good day,
I am Thomas Walter, the financial agent of this company, known as Corporate Lenders. We lend money to individuals and companies that need financial assistance. Do you have bad credit, or do you need money to pay your bills? We use this medium to inform you that we can help you with any form of loan, such as refinancing, debt consolidation loans, personal loans, international loans, and business loans. We are pleased to offer you a loan at an interest rate as low as 3%.
Our mission is to provide our customers with a service that is fast, friendly, and stress-free. Normally, once we have all your information, it takes only one hour to approve the funding.
If you are interested, please fill out the loan application form.
Full name:
Gender:
Amount needed:
Duration:
Tel:
Do you speak English?
We await your reply.
You can reach us by email: info@corporatelendersonline.com
Kind regards,
Thomas Walter
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-09 20:52 iker-KvP5wT2u2U0
0 siblings, 0 replies; 3471+ messages in thread
From: iker-KvP5wT2u2U0 @ 2017-07-09 20:52 UTC (permalink / raw)
To: linux-efi-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 540211.zip --]
[-- Type: application/zip, Size: 5580 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-09 18:51 pooks005
0 siblings, 0 replies; 3471+ messages in thread
From: pooks005 @ 2017-07-09 18:51 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 43722.zip --]
[-- Type: application/zip, Size: 5563 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-09 13:02 smallgroups
0 siblings, 0 replies; 3471+ messages in thread
From: smallgroups @ 2017-07-09 13:02 UTC (permalink / raw)
To: linux-crypto
[-- Attachment #1: 83375208989.zip --]
[-- Type: application/zip, Size: 5715 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-08 18:22 Alfred chow
0 siblings, 0 replies; 3471+ messages in thread
From: Alfred chow @ 2017-07-08 18:22 UTC (permalink / raw)
Good Day,
I am Mr. Alfred Cheuk Yu Chow, the Director for Credit & Marketing
Chong Hing Bank, Hong Kong, Chong Hing Bank Centre, 24 Des Voeux Road
Central, Hong Kong. I have a business proposal of $38,980,369.00.
All confirmable documents to back up the claims will be made available
to you prior to your acceptance and as soon as I receive your return
mail.
Best Regards,
Alfred Chow
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-08 17:13 horizon
0 siblings, 0 replies; 3471+ messages in thread
From: horizon @ 2017-07-08 17:13 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: 02865760222.zip --]
[-- Type: application/zip, Size: 5600 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-08 16:07 netgalley
0 siblings, 0 replies; 3471+ messages in thread
From: netgalley @ 2017-07-08 16:07 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 018847.zip --]
[-- Type: application/zip, Size: 5656 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-07 17:21 pooks005
0 siblings, 0 replies; 3471+ messages in thread
From: pooks005 @ 2017-07-07 17:21 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 785808.zip --]
[-- Type: application/zip, Size: 5577 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-07 1:37 zumbalisa
0 siblings, 0 replies; 3471+ messages in thread
From: zumbalisa @ 2017-07-07 1:37 UTC (permalink / raw)
To: linux-pm
[-- Attachment #1: EMAIL_50013593097_linux-pm.zip --]
[-- Type: application/zip, Size: 4284 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-07 0:30 amin
0 siblings, 0 replies; 3471+ messages in thread
From: amin @ 2017-07-07 0:30 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: EMAIL_674323058_netfilter-devel.zip --]
[-- Type: application/zip, Size: 4265 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-06 17:35 simon.a.t.hardy
0 siblings, 0 replies; 3471+ messages in thread
From: simon.a.t.hardy @ 2017-07-06 17:35 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: EMAIL_7662351347_netfilter-devel.zip --]
[-- Type: application/zip, Size: 4280 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-06 14:11 een
0 siblings, 0 replies; 3471+ messages in thread
From: een @ 2017-07-06 14:11 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: EMAIL_938012525_linux-raid.zip --]
[-- Type: application/zip, Size: 4285 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-06 6:10 armouralumni
0 siblings, 0 replies; 3471+ messages in thread
From: armouralumni @ 2017-07-06 6:10 UTC (permalink / raw)
To: linux-m68k
[-- Attachment #1: EMAIL_11449160770431_linux-m68k.zip --]
[-- Type: application/zip, Size: 5047 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-06 0:55 이성근
0 siblings, 0 replies; 3471+ messages in thread
From: 이성근 @ 2017-07-06 0:55 UTC (permalink / raw)
To: kvm
subscribe kvm
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-05 21:18 een
0 siblings, 0 replies; 3471+ messages in thread
From: een @ 2017-07-05 21:18 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: EMAIL_3767374_linux-raid.zip --]
[-- Type: application/zip, Size: 5044 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-05 15:57 sibolt.mulder-b60u5d1xRcFWk0Htik3J/w
0 siblings, 0 replies; 3471+ messages in thread
From: sibolt.mulder-b60u5d1xRcFWk0Htik3J/w @ 2017-07-05 15:57 UTC (permalink / raw)
To: dwarves-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: EMAIL_19275272_dwarves.zip --]
[-- Type: application/zip, Size: 2747 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-05 15:15 armouralumni
0 siblings, 0 replies; 3471+ messages in thread
From: armouralumni @ 2017-07-05 15:15 UTC (permalink / raw)
To: linux-m68k
[-- Attachment #1: 75316601415907.zip --]
[-- Type: application/zip, Size: 2755 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-05 8:06 koopk
0 siblings, 0 replies; 3471+ messages in thread
From: koopk @ 2017-07-05 8:06 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: MESSAGE-9568447630-linux-ext4.zip --]
[-- Type: application/zip, Size: 2347 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-05 7:00 benjamin
0 siblings, 0 replies; 3471+ messages in thread
From: benjamin @ 2017-07-05 7:00 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: EBAY_36890034909_linux-fsdevel.zip --]
[-- Type: application/zip, Size: 2342 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-05 6:55 agiva
0 siblings, 0 replies; 3471+ messages in thread
From: agiva @ 2017-07-05 6:55 UTC (permalink / raw)
To: linux-crypto
[-- Attachment #1: EBAY_3538766534773_linux-crypto.zip --]
[-- Type: application/zip, Size: 2370 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-05 6:42 angers
0 siblings, 0 replies; 3471+ messages in thread
From: angers @ 2017-07-05 6:42 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: EBAY_1112994_platform-driver-x86.zip --]
[-- Type: application/zip, Size: 2366 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-05 0:55 helga.brickl
0 siblings, 0 replies; 3471+ messages in thread
From: helga.brickl @ 2017-07-05 0:55 UTC (permalink / raw)
To: netdev
[-- Attachment #1: EBAY_4714145_netdev.zip --]
[-- Type: application/zip, Size: 2350 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-05 0:06 michele
0 siblings, 0 replies; 3471+ messages in thread
From: michele @ 2017-07-05 0:06 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: EBAY_8161483_linux-bcache.zip --]
[-- Type: application/zip, Size: 2355 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-04 22:53 j.lahoda-aRb0bU7PRFPrBKCeMvbIDA
0 siblings, 0 replies; 3471+ messages in thread
From: j.lahoda-aRb0bU7PRFPrBKCeMvbIDA @ 2017-07-04 22:53 UTC (permalink / raw)
To: linux-efi-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: EBAY_4112026224_linux-efi.zip --]
[-- Type: application/zip, Size: 2350 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-04 21:02 salome.khum
0 siblings, 0 replies; 3471+ messages in thread
From: salome.khum @ 2017-07-04 21:02 UTC (permalink / raw)
To: netdev
[-- Attachment #1: EBAY_666499260012_netdev.zip --]
[-- Type: application/zip, Size: 2352 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-04 19:53 tchidrenplytoo
0 siblings, 0 replies; 3471+ messages in thread
From: tchidrenplytoo @ 2017-07-04 19:53 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: EBAY_725260156512_linux-arch.zip --]
[-- Type: application/zip, Size: 2379 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-04 18:35 noord-holland
0 siblings, 0 replies; 3471+ messages in thread
From: noord-holland @ 2017-07-04 18:35 UTC (permalink / raw)
To: linux-samsung-soc
[-- Attachment #1: BILL-5456344828928.zip --]
[-- Type: application/zip, Size: 2310 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-04 16:38 openhackbangalore
0 siblings, 0 replies; 3471+ messages in thread
From: openhackbangalore @ 2017-07-04 16:38 UTC (permalink / raw)
To: netdev
[-- Attachment #1: EMAIL_91533202_netdev.zip --]
[-- Type: application/zip, Size: 2370 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-04 10:50 h.gerritsen12
0 siblings, 0 replies; 3471+ messages in thread
From: h.gerritsen12 @ 2017-07-04 10:50 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: EMAIL_46564219859_linux-arch.zip --]
[-- Type: application/zip, Size: 3183 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-04 8:52 citydesk
0 siblings, 0 replies; 3471+ messages in thread
From: citydesk @ 2017-07-04 8:52 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: EMAIL_77904176_linux-raid.zip --]
[-- Type: application/zip, Size: 3164 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-04 6:01 xa0ajutor
0 siblings, 0 replies; 3471+ messages in thread
From: xa0ajutor @ 2017-07-04 6:01 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: EMAIL_56923235589997_linux-raid.zip --]
[-- Type: application/zip, Size: 3176 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-04 4:17 rueggemann
0 siblings, 0 replies; 3471+ messages in thread
From: rueggemann @ 2017-07-04 4:17 UTC (permalink / raw)
To: linux-samsung-soc
[-- Attachment #1: EMAIL_172111191_linux-samsung-soc.zip --]
[-- Type: application/zip, Size: 3145 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-03 14:13 tammyehood
0 siblings, 0 replies; 3471+ messages in thread
From: tammyehood @ 2017-07-03 14:13 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: EMAIL_531101184_linux-fsdevel.zip --]
[-- Type: application/zip, Size: 3146 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-03 13:54 sm-yT/95SBIOhs
0 siblings, 0 replies; 3471+ messages in thread
From: sm-yT/95SBIOhs @ 2017-07-03 13:54 UTC (permalink / raw)
To: linux-api-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: EMAIL_0176393263878_linux-api.zip --]
[-- Type: application/zip, Size: 3177 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-03 13:30 roeper
0 siblings, 0 replies; 3471+ messages in thread
From: roeper @ 2017-07-03 13:30 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: EMAIL_76537910174_netfilter-devel.zip --]
[-- Type: application/zip, Size: 3188 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-03 12:43 mitch_128
0 siblings, 0 replies; 3471+ messages in thread
From: mitch_128 @ 2017-07-03 12:43 UTC (permalink / raw)
To: linux-ide
[-- Attachment #1: EMAIL_94858129_linux-ide.zip --]
[-- Type: application/zip, Size: 3177 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-03 4:44 beautyink
0 siblings, 0 replies; 3471+ messages in thread
From: beautyink @ 2017-07-03 4:44 UTC (permalink / raw)
To: linux-ide
[-- Attachment #1: EMAIL_27450398_linux-ide.zip --]
[-- Type: application/zip, Size: 3158 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-03 1:28 h.piontek
0 siblings, 0 replies; 3471+ messages in thread
From: h.piontek @ 2017-07-03 1:28 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: EMAIL_8799211_linux-acpi.zip --]
[-- Type: application/zip, Size: 3184 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-02 20:26 tabiadhawatef
0 siblings, 0 replies; 3471+ messages in thread
From: tabiadhawatef @ 2017-07-02 20:26 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: EMAIL_7709270_linux-acpi.zip --]
[-- Type: application/zip, Size: 3153 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-02 18:44 tchidrenplytoo
0 siblings, 0 replies; 3471+ messages in thread
From: tchidrenplytoo @ 2017-07-02 18:44 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: EMAIL_23646323904760_linux-arch.zip --]
[-- Type: application/zip, Size: 3190 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-02 10:14 armouralumni
0 siblings, 0 replies; 3471+ messages in thread
From: armouralumni @ 2017-07-02 10:14 UTC (permalink / raw)
To: linux-m68k
[-- Attachment #1: EMAIL_6686834979628_linux-m68k.zip --]
[-- Type: application/zip, Size: 3172 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-01 21:28 redaccion
0 siblings, 0 replies; 3471+ messages in thread
From: redaccion @ 2017-07-01 21:28 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: EMAIL_90244_linux-fsdevel.zip --]
[-- Type: application/zip, Size: 3187 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-07-01 11:36 p.mueller-spz-hgw-Mmb7MZpHnFY
0 siblings, 0 replies; 3471+ messages in thread
From: p.mueller-spz-hgw-Mmb7MZpHnFY @ 2017-07-01 11:36 UTC (permalink / raw)
To: linux-rdma-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 7041_linux-rdma.zip --]
[-- Type: application/zip, Size: 3259 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-30 8:29 sibolt.mulder-b60u5d1xRcFWk0Htik3J/w
0 siblings, 0 replies; 3471+ messages in thread
From: sibolt.mulder-b60u5d1xRcFWk0Htik3J/w @ 2017-06-30 8:29 UTC (permalink / raw)
To: dwarves-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 050511.zip --]
[-- Type: application/zip, Size: 3370 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-30 2:53 1.10.0812112155390.21775
0 siblings, 0 replies; 3471+ messages in thread
From: 1.10.0812112155390.21775 @ 2017-06-30 2:53 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 066785575956.zip --]
[-- Type: application/zip, Size: 3368 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-30 1:14 paloma.depping
0 siblings, 0 replies; 3471+ messages in thread
From: paloma.depping @ 2017-06-30 1:14 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: 820702835161.zip --]
[-- Type: application/zip, Size: 3390 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-29 19:05 morice.diane
0 siblings, 0 replies; 3471+ messages in thread
From: morice.diane @ 2017-06-29 19:05 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 814657.zip --]
[-- Type: application/zip, Size: 3412 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-29 13:46 kholloway
0 siblings, 0 replies; 3471+ messages in thread
From: kholloway @ 2017-06-29 13:46 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: 445063815940.zip --]
[-- Type: application/zip, Size: 3347 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-29 12:20 The Post Office
0 siblings, 0 replies; 3471+ messages in thread
From: The Post Office @ 2017-06-29 12:20 UTC (permalink / raw)
To: linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw
This Message was undeliverable due to the following reason:
Your message was not delivered because the destination computer was
not reachable within the allowed queue period. The amount of time
a message is queued before it is returned depends on local configura-
tion parameters.
Most likely there is a network problem that prevented delivery, but
it is also possible that the computer is turned off, or does not
have a mail system running right now.
Your message was not delivered within 7 days:
Host 106.146.148.224 is not responding.
The following recipients did not receive this message:
<linux-nvdimm-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org>
Please reply to postmaster-hn68Rpc1hR1g9hUCZPvPmw@public.gmane.org
if you feel this message to be in error.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-29 10:39 lizdebeth_
0 siblings, 0 replies; 3471+ messages in thread
From: lizdebeth_ @ 2017-06-29 10:39 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: 933271400.zip --]
[-- Type: application/zip, Size: 3368 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-28 14:22 tchidrenplytoo
0 siblings, 0 replies; 3471+ messages in thread
From: tchidrenplytoo @ 2017-06-28 14:22 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: EMAIL_202419_linux-arch.zip --]
[-- Type: application/zip, Size: 3329 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-28 3:57 системы администратор
0 siblings, 0 replies; 3471+ messages in thread
From: системы администратор @ 2017-06-28 3:57 UTC (permalink / raw)
Attention;
Your messages have exceeded the memory limit of 5 GB set by the administrator; your mailbox is currently running at 10.9 GB. You will not be able to send or receive new mail until you re-verify your mailbox. To restore your mailbox, send the following information below:
Name:
Username:
Password:
Confirm password:
E-mail address:
Phone:
If you are unable to re-verify your messages, your mailbox will be disabled!
We apologize for the inconvenience.
Verification code: EN: Ru...776774990..2017
Technical Support Mail ©2017
Thank you
System administrator
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-28 3:22 Administrator
0 siblings, 0 replies; 3471+ messages in thread
From: Administrator @ 2017-06-28 3:22 UTC (permalink / raw)
ATTENTION
Your mailbox has exceeded the storage limit of 5 GB as defined by the administrator and is currently running at 10.9 GB. You may not be able to send or receive new mail until you re-validate your mailbox. To re-validate your mailbox, send the following information below:
Name:
Username:
Password:
Confirm password:
E-mail:
Phone:
If you cannot re-validate your mailbox, your mailbox will be disabled!
Sorry for the inconvenience.
Verification code: en:0009876...nw.na.website Admin..id...9876mm.2017
Technical Support Mail ©2017
Thank you
System Administrator
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-27 11:59 natasha.glauser
0 siblings, 0 replies; 3471+ messages in thread
From: natasha.glauser @ 2017-06-27 11:59 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: EMAIL_468535330447271_linux-fsdevel.zip --]
[-- Type: application/zip, Size: 3452 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-27 7:15 noord-holland
0 siblings, 0 replies; 3471+ messages in thread
From: noord-holland @ 2017-06-27 7:15 UTC (permalink / raw)
To: linux-samsung-soc
[-- Attachment #1: EMAIL_8655932282_linux-samsung-soc.zip --]
[-- Type: application/zip, Size: 3401 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-27 7:12 loisc07
0 siblings, 0 replies; 3471+ messages in thread
From: loisc07 @ 2017-06-27 7:12 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: EMAIL_994612756450_linux-ext4.zip --]
[-- Type: application/zip, Size: 3401 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-27 0:08 h.gerritsen12
0 siblings, 0 replies; 3471+ messages in thread
From: h.gerritsen12 @ 2017-06-27 0:08 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: EMAIL_0578932146183_linux-arch.zip --]
[-- Type: application/zip, Size: 3422 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-26 22:58 Anders Lind
0 siblings, 0 replies; 3471+ messages in thread
From: Anders Lind @ 2017-06-26 22:58 UTC (permalink / raw)
To: linux fsdevel
Good morning Linux
http://www.me-lawoffice.com/cat_add.php?son=rmtgusk26880ceteu
Anders
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-26 22:58 Anders Lind
0 siblings, 0 replies; 3471+ messages in thread
From: Anders Lind @ 2017-06-26 22:58 UTC (permalink / raw)
To: linux ext4
Hi
http://www.parkenspizza.se/faq_info.php?doesnt=v2b68r8t0abkav
Thanks
Anders Lind
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-26 22:14 citydesk
0 siblings, 0 replies; 3471+ messages in thread
From: citydesk @ 2017-06-26 22:14 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: EMAIL_9158645_linux-raid.zip --]
[-- Type: application/zip, Size: 3408 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-26 19:07 eremias
0 siblings, 0 replies; 3471+ messages in thread
From: eremias @ 2017-06-26 19:07 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 1851840_netdev.zip --]
[-- Type: application/zip, Size: 3408 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-26 17:51 rueggemann
0 siblings, 0 replies; 3471+ messages in thread
From: rueggemann @ 2017-06-26 17:51 UTC (permalink / raw)
To: linux-samsung-soc
[-- Attachment #1: EMAIL_480694553566739_linux-samsung-soc.zip --]
[-- Type: application/zip, Size: 3419 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-26 16:10 susan.christian
0 siblings, 0 replies; 3471+ messages in thread
From: susan.christian @ 2017-06-26 16:10 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: EMAIL_6340816742_platform-driver-x86.zip --]
[-- Type: application/zip, Size: 3468 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-26 15:03 richard
0 siblings, 0 replies; 3471+ messages in thread
From: richard @ 2017-06-26 15:03 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: EMAIL_233235146676708_linux-bcache.zip --]
[-- Type: application/zip, Size: 3419 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-26 10:22 p.mueller-spz-hgw-Mmb7MZpHnFY
0 siblings, 0 replies; 3471+ messages in thread
From: p.mueller-spz-hgw-Mmb7MZpHnFY @ 2017-06-26 10:22 UTC (permalink / raw)
To: linux-rdma-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 47850763814001_linux-rdma.zip --]
[-- Type: application/zip, Size: 3422 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-26 9:15 beautyink
0 siblings, 0 replies; 3471+ messages in thread
From: beautyink @ 2017-06-26 9:15 UTC (permalink / raw)
To: linux-ide
[-- Attachment #1: EMAIL_2026126106_linux-ide.zip --]
[-- Type: application/zip, Size: 3529 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2017-06-26 5:21 Leon Romanovsky
0 siblings, 0 replies; 3471+ messages in thread
From: Leon Romanovsky @ 2017-06-26 5:21 UTC (permalink / raw)
To: Marcel Apfelbaum; +Cc: linux-rdma-u79uwXL29TY76Z2rM5mHXA, Yuval Shaia
[-- Attachment #1: Type: text/plain, Size: 1576 bytes --]
David Woodhouse <dwmw2-wEGCiKHe2LqWVfeAwA7xHQ@public.gmane.org>
Bcc:
Subject: Re: Proposal for the 2nd RDMA microconference (LPC 2017)
Reply-To:
In-Reply-To: <786f10f7-6253-c95b-49e2-a89010a43781-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
On Sun, Jun 25, 2017 at 11:43:35AM +0300, Marcel Apfelbaum wrote:
> Hi Leon,
>
> Here is our proposal for the coming conference.
Thanks, Marcel, for sending the proposal; I'm looking forward to seeing you
and Yuval there.
In the meantime, I'm adding David who is our LPC POC and would like to
ask some questions.
>
> Abstract
> --------
> QEMU's limited RDMA support leaves it behind other modern hypervisors.
> Marcel and/or Yuval will present the implementation of an emulated RDMA
> device, analyze its performance and usability, and finally talk about future
> plans for a possible virtio-rdma device.
How are you implementing the different fabrics? Is it a completely SW
implementation, or does it require HW beneath, such as pvrdma? What about
namespaces and migration?
What are the expectations from the community?
>
> Audience
> --------
> The audience is developers interested in device emulation / RDMA.
> They can expect an interesting discussion of the difficulties of working
> with RDMA in virtual machines, and they will be welcome to share their
> ideas.
>
> Benefits to the Ecosystem
> -------------------------
> Knowing how to tackle RDMA under virtualization may give developers an easier
> start on adding RDMA support to QEMU, which in turn will let virtualized
> environments leverage modern RDMA cards.
>
>
> Thanks,
> Marcel & Yuval
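[Editorial note: the proposal above concerns a fully emulated RDMA device. As background, the core of any software-emulated device reduces to polling a guest-visible submission queue, executing each work request in software, and posting a completion. The sketch below is purely illustrative — all names are hypothetical and it is not QEMU's actual implementation.]

```python
# Hypothetical sketch of a software-emulated device loop: the guest posts
# work requests to a submission queue, the emulator drains the queue,
# performs each transfer in software (here a simple memory copy), and
# posts a completion carrying the guest-chosen work-request id.
from dataclasses import dataclass


@dataclass
class WorkRequest:
    wr_id: int        # guest-chosen identifier echoed in the completion
    src: bytearray    # "registered" source buffer
    dst: bytearray    # "registered" destination buffer


class EmulatedQueuePair:
    def __init__(self):
        self.sq = []  # submission queue (guest -> device)
        self.cq = []  # completion queue (device -> guest)

    def post_send(self, wr):
        """Guest side: post a work request to the submission queue."""
        self.sq.append(wr)

    def poll_and_execute(self):
        """Device side: drain the submission queue, emulating each
        transfer as a memcpy and posting one completion per request."""
        while self.sq:
            wr = self.sq.pop(0)
            wr.dst[:len(wr.src)] = wr.src  # emulate an RDMA write in SW
            self.cq.append((wr.wr_id, "success"))


qp = EmulatedQueuePair()
dst = bytearray(4)
qp.post_send(WorkRequest(wr_id=7, src=bytearray(b"data"), dst=dst))
qp.poll_and_execute()
print(qp.cq[0], bytes(dst))  # (7, 'success') b'data'
```

A hardware-backed design such as pvrdma differs in exactly this inner step: instead of the memcpy, the request is forwarded to a real HCA.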
[-- Attachment #2: signature.asc --]
[-- Type: application/pgp-signature, Size: 833 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-25 20:10 h.gerritsen12
0 siblings, 0 replies; 3471+ messages in thread
From: h.gerritsen12 @ 2017-06-25 20:10 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: EMAIL_721984611666_linux-arch.zip --]
[-- Type: application/zip, Size: 3495 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-25 18:13 citydesk
0 siblings, 0 replies; 3471+ messages in thread
From: citydesk @ 2017-06-25 18:13 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: EMAIL_7883405_linux-raid.zip --]
[-- Type: application/zip, Size: 3510 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-25 16:49 agar2000
0 siblings, 0 replies; 3471+ messages in thread
From: agar2000 @ 2017-06-25 16:49 UTC (permalink / raw)
To: netdev
[-- Attachment #1: EMAIL_59044256431817_netdev.zip --]
[-- Type: application/zip, Size: 3463 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-25 13:23 rueggemann
0 siblings, 0 replies; 3471+ messages in thread
From: rueggemann @ 2017-06-25 13:23 UTC (permalink / raw)
To: linux-samsung-soc
[-- Attachment #1: EMAIL_03820_linux-samsung-soc.zip --]
[-- Type: application/zip, Size: 3515 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-25 10:21 richard
0 siblings, 0 replies; 3471+ messages in thread
From: richard @ 2017-06-25 10:21 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: EMAIL_07654_linux-bcache.zip --]
[-- Type: application/zip, Size: 3504 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-25 5:19 nbensoncole81
0 siblings, 0 replies; 3471+ messages in thread
From: nbensoncole81 @ 2017-06-25 5:19 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: EMAIL_2726286_linux-ext4.zip --]
[-- Type: application/zip, Size: 3500 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-25 5:14 archerrp
0 siblings, 0 replies; 3471+ messages in thread
From: archerrp @ 2017-06-25 5:14 UTC (permalink / raw)
To: linux-crypto
[-- Attachment #1: EMAIL_6717152024417_linux-crypto.zip --]
[-- Type: application/zip, Size: 3512 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-25 4:47 h.gerritsen12
0 siblings, 0 replies; 3471+ messages in thread
From: h.gerritsen12 @ 2017-06-25 4:47 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: EMAIL_88082875035496_linux-arch.zip --]
[-- Type: application/zip, Size: 3475 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-25 3:57 nfrankiyamu
0 siblings, 0 replies; 3471+ messages in thread
From: nfrankiyamu @ 2017-06-25 3:57 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: EMAIL_2308411819257_linux-arch.zip --]
[-- Type: application/zip, Size: 3499 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-25 2:39 bflove1-ntQ8I44N4zM
0 siblings, 0 replies; 3471+ messages in thread
From: bflove1-ntQ8I44N4zM @ 2017-06-25 2:39 UTC (permalink / raw)
To: linux-spi-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: EMAIL_64201799182639_linux-spi.zip --]
[-- Type: application/zip, Size: 3503 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-24 19:38 richard
0 siblings, 0 replies; 3471+ messages in thread
From: richard @ 2017-06-24 19:38 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: EMAIL_870500103142_linux-bcache.zip --]
[-- Type: application/zip, Size: 3509 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-24 15:41 benjamin
0 siblings, 0 replies; 3471+ messages in thread
From: benjamin @ 2017-06-24 15:41 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: EMAIL_09074482_linux-fsdevel.zip --]
[-- Type: application/zip, Size: 7653 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-24 15:03 archerrp
0 siblings, 0 replies; 3471+ messages in thread
From: archerrp @ 2017-06-24 15:03 UTC (permalink / raw)
To: linux-crypto
[-- Attachment #1: EMAIL_9673882086_linux-crypto.zip --]
[-- Type: application/zip, Size: 3521 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-24 12:38 redaccion
0 siblings, 0 replies; 3471+ messages in thread
From: redaccion @ 2017-06-24 12:38 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: EMAIL_896297041142370_linux-fsdevel.zip --]
[-- Type: application/zip, Size: 3459 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-24 11:55 natasha.glauser
0 siblings, 0 replies; 3471+ messages in thread
From: natasha.glauser @ 2017-06-24 11:55 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: EMAIL_32495_linux-fsdevel.zip --]
[-- Type: application/zip, Size: 7861 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-24 8:07 j.lahoda-aRb0bU7PRFPrBKCeMvbIDA
0 siblings, 0 replies; 3471+ messages in thread
From: j.lahoda-aRb0bU7PRFPrBKCeMvbIDA @ 2017-06-24 8:07 UTC (permalink / raw)
To: linux-efi-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: EMAIL_90836228833860_linux-efi.zip --]
[-- Type: application/zip, Size: 7863 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-24 2:32 h.gerritsen12
0 siblings, 0 replies; 3471+ messages in thread
From: h.gerritsen12 @ 2017-06-24 2:32 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: EMAIL_1181869410986_linux-arch.zip --]
[-- Type: application/zip, Size: 3499 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-24 0:35 citydesk
0 siblings, 0 replies; 3471+ messages in thread
From: citydesk @ 2017-06-24 0:35 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: EMAIL_77134398_linux-raid.zip --]
[-- Type: application/zip, Size: 3531 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-24 0:04 hastpass
0 siblings, 0 replies; 3471+ messages in thread
From: hastpass @ 2017-06-24 0:04 UTC (permalink / raw)
To: linux-gpio
[-- Attachment #1: EMAIL_847312_linux-gpio.zip --]
[-- Type: application/zip, Size: 3509 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-23 19:27 armouralumni
0 siblings, 0 replies; 3471+ messages in thread
From: armouralumni @ 2017-06-23 19:27 UTC (permalink / raw)
To: linux-m68k
[-- Attachment #1: EMAIL_82952679061_linux-m68k.zip --]
[-- Type: application/zip, Size: 3482 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-23 17:22 richard
0 siblings, 0 replies; 3471+ messages in thread
From: richard @ 2017-06-23 17:22 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: EMAIL_662871_linux-bcache.zip --]
[-- Type: application/zip, Size: 3523 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-23 12:26 archerrp
0 siblings, 0 replies; 3471+ messages in thread
From: archerrp @ 2017-06-23 12:26 UTC (permalink / raw)
To: linux-crypto
[-- Attachment #1: EMAIL_553497223_linux-crypto.zip --]
[-- Type: application/zip, Size: 3428 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-23 6:09 Administrator
0 siblings, 0 replies; 3471+ messages in thread
From: Administrator @ 2017-06-23 6:09 UTC (permalink / raw)
ATTENTION
Your mailbox has exceeded the storage limit of 5 GB defined by the administrator and is currently running at 10.9 GB. You may be unable to send or receive new mail until you revalidate your mailbox. To revalidate your mailbox, send the following information below:
Name:
Username:
Password:
Confirm password:
E-mail:
Phone:
If you do not revalidate your mailbox, your mailbox will be disabled!
Sorry for the inconvenience.
Verification code: en:0986..web...id......nw..website Admin..id...9876mm.2017
Technical Support Mail ©2017
Thank you
System Administrator
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-23 4:50 nkosuta-f+iqBESB6gc
0 siblings, 0 replies; 3471+ messages in thread
From: nkosuta-f+iqBESB6gc @ 2017-06-23 4:50 UTC (permalink / raw)
To: devicetree-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 39579.zip --]
[-- Type: application/zip, Size: 3602 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-23 2:49 mdavis
0 siblings, 0 replies; 3471+ messages in thread
From: mdavis @ 2017-06-23 2:49 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: 5600669634007.zip --]
[-- Type: application/zip, Size: 5665 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-23 1:43 horizon
0 siblings, 0 replies; 3471+ messages in thread
From: horizon @ 2017-06-23 1:43 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: 38089400225.zip --]
[-- Type: application/zip, Size: 3418 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-22 20:24 koopk
0 siblings, 0 replies; 3471+ messages in thread
From: koopk @ 2017-06-22 20:24 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: 0031586918070.zip --]
[-- Type: application/zip, Size: 3419 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-22 20:22 junplzen
0 siblings, 0 replies; 3471+ messages in thread
From: junplzen @ 2017-06-22 20:22 UTC (permalink / raw)
To: linux-scsi
[-- Attachment #1: 4607810.zip --]
[-- Type: application/zip, Size: 3419 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-22 13:22 jeffrey.faulkenberg
0 siblings, 0 replies; 3471+ messages in thread
From: jeffrey.faulkenberg @ 2017-06-22 13:22 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: 475333248096240.zip --]
[-- Type: application/zip, Size: 2082 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-22 5:49 noord-holland
0 siblings, 0 replies; 3471+ messages in thread
From: noord-holland @ 2017-06-22 5:49 UTC (permalink / raw)
To: linux-samsung-soc
[-- Attachment #1: 3046358.zip --]
[-- Type: application/zip, Size: 3471 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-22 2:13 ecaterinasuciu09
0 siblings, 0 replies; 3471+ messages in thread
From: ecaterinasuciu09 @ 2017-06-22 2:13 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: 83837.zip --]
[-- Type: application/zip, Size: 2080 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-21 20:10 morice.diane
0 siblings, 0 replies; 3471+ messages in thread
From: morice.diane @ 2017-06-21 20:10 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 281270036887.zip --]
[-- Type: application/zip, Size: 3435 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-21 7:43 koopk
0 siblings, 0 replies; 3471+ messages in thread
From: koopk @ 2017-06-21 7:43 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: 170045.zip --]
[-- Type: application/zip, Size: 3532 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-21 7:32 tjcrewvolcoordinator-Re5JQEeQqe8AvxtiuMwx3w
0 siblings, 0 replies; 3471+ messages in thread
From: tjcrewvolcoordinator-Re5JQEeQqe8AvxtiuMwx3w @ 2017-06-21 7:32 UTC (permalink / raw)
To: linux-api-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 625752260604.zip --]
[-- Type: application/zip, Size: 3499 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-21 6:23 chrisbi_anelyst
0 siblings, 0 replies; 3471+ messages in thread
From: chrisbi_anelyst @ 2017-06-21 6:23 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: 100324712762.zip --]
[-- Type: application/zip, Size: 3475 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-21 6:16 angers
0 siblings, 0 replies; 3471+ messages in thread
From: angers @ 2017-06-21 6:16 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: 089372.zip --]
[-- Type: application/zip, Size: 3475 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-21 4:40 kholloway
0 siblings, 0 replies; 3471+ messages in thread
From: kholloway @ 2017-06-21 4:40 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: 5342408615.zip --]
[-- Type: application/zip, Size: 3506 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-20 22:49 redaccion
0 siblings, 0 replies; 3471+ messages in thread
From: redaccion @ 2017-06-20 22:49 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 3712936079.zip --]
[-- Type: application/zip, Size: 3453 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-20 18:45 roeper
0 siblings, 0 replies; 3471+ messages in thread
From: roeper @ 2017-06-20 18:45 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: 4870601.zip --]
[-- Type: application/zip, Size: 3508 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-20 17:50 editor
0 siblings, 0 replies; 3471+ messages in thread
From: editor @ 2017-06-20 17:50 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: 2566839.zip --]
[-- Type: application/zip, Size: 3483 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-20 16:31 nfrankiyamu
0 siblings, 0 replies; 3471+ messages in thread
From: nfrankiyamu @ 2017-06-20 16:31 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: 2590095223830.zip --]
[-- Type: application/zip, Size: 3501 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-20 6:29 xa0ajutor
0 siblings, 0 replies; 3471+ messages in thread
From: xa0ajutor @ 2017-06-20 6:29 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: 3786494.zip --]
[-- Type: application/zip, Size: 5133 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-20 0:47 durrant
0 siblings, 0 replies; 3471+ messages in thread
From: durrant @ 2017-06-20 0:47 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: 1142760447.zip --]
[-- Type: application/zip, Size: 3184 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-19 19:58 tjcrewvolcoordinator-Re5JQEeQqe8AvxtiuMwx3w
0 siblings, 0 replies; 3471+ messages in thread
From: tjcrewvolcoordinator-Re5JQEeQqe8AvxtiuMwx3w @ 2017-06-19 19:58 UTC (permalink / raw)
To: linux-api-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 324723303283412.zip --]
[-- Type: application/zip, Size: 3185 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-19 18:46 chrisbi_anelyst
0 siblings, 0 replies; 3471+ messages in thread
From: chrisbi_anelyst @ 2017-06-19 18:46 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: 9052001159285.zip --]
[-- Type: application/zip, Size: 3174 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-19 16:53 armouralumni
0 siblings, 0 replies; 3471+ messages in thread
From: armouralumni @ 2017-06-19 16:53 UTC (permalink / raw)
To: linux-m68k
[-- Attachment #1: 04319.zip --]
[-- Type: application/zip, Size: 5142 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-19 9:57 anita.traylor
0 siblings, 0 replies; 3471+ messages in thread
From: anita.traylor @ 2017-06-19 9:57 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 0868791.zip --]
[-- Type: application/zip, Size: 3197 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-19 9:36 susan.christian
0 siblings, 0 replies; 3471+ messages in thread
From: susan.christian @ 2017-06-19 9:36 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: 608217586713206.zip --]
[-- Type: application/zip, Size: 3197 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-18 14:27 xa0ajutor
0 siblings, 0 replies; 3471+ messages in thread
From: xa0ajutor @ 2017-06-18 14:27 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: 49828587.zip --]
[-- Type: application/zip, Size: 2024 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-18 13:58 membership
0 siblings, 0 replies; 3471+ messages in thread
From: membership @ 2017-06-18 13:58 UTC (permalink / raw)
To: linux-crypto
[-- Attachment #1: 5314437.zip --]
[-- Type: application/zip, Size: 2002 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-18 3:09 agar2000
0 siblings, 0 replies; 3471+ messages in thread
From: agar2000 @ 2017-06-18 3:09 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 665495.zip --]
[-- Type: application/zip, Size: 3182 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-17 22:46 rhsinfo
0 siblings, 0 replies; 3471+ messages in thread
From: rhsinfo @ 2017-06-17 22:46 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: 05702933099528.zip --]
[-- Type: application/zip, Size: 2009 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-16 22:37 kelley
0 siblings, 0 replies; 3471+ messages in thread
From: kelley @ 2017-06-16 22:37 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 01808124726107.zip --]
[-- Type: application/zip, Size: 2056 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-16 14:46 roeper
0 siblings, 0 replies; 3471+ messages in thread
From: roeper @ 2017-06-16 14:46 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: 110899877616.zip --]
[-- Type: application/zip, Size: 3197 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-15 17:35 jeffrey.faulkenberg
0 siblings, 0 replies; 3471+ messages in thread
From: jeffrey.faulkenberg @ 2017-06-15 17:35 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: 36611.zip --]
[-- Type: application/zip, Size: 5410 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-15 14:56 john.dahlberg
0 siblings, 0 replies; 3471+ messages in thread
From: john.dahlberg @ 2017-06-15 14:56 UTC (permalink / raw)
To: linux-scsi
[-- Attachment #1: 104973915087041.zip --]
[-- Type: application/zip, Size: 5387 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-15 13:50 pohut00
0 siblings, 0 replies; 3471+ messages in thread
From: pohut00 @ 2017-06-15 13:50 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 241385535172685.zip --]
[-- Type: application/zip, Size: 5327 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-15 8:37 ecaterinasuciu09
0 siblings, 0 replies; 3471+ messages in thread
From: ecaterinasuciu09 @ 2017-06-15 8:37 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: 57413395.zip --]
[-- Type: application/zip, Size: 4905 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-14 22:19 muirs
0 siblings, 0 replies; 3471+ messages in thread
From: muirs @ 2017-06-14 22:19 UTC (permalink / raw)
To: linux-next
[-- Attachment #1: 96799833.zip --]
[-- Type: application/zip, Size: 4850 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-14 21:25 koopk
0 siblings, 0 replies; 3471+ messages in thread
From: koopk @ 2017-06-14 21:25 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: 927328920469.zip --]
[-- Type: application/zip, Size: 3174 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-14 20:41 angers
0 siblings, 0 replies; 3471+ messages in thread
From: angers @ 2017-06-14 20:41 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: 1598538396047.zip --]
[-- Type: application/zip, Size: 3210 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-14 19:31 kholloway
0 siblings, 0 replies; 3471+ messages in thread
From: kholloway @ 2017-06-14 19:31 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: 153610094672.zip --]
[-- Type: application/zip, Size: 3186 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-14 16:39 nfrankiyamu
0 siblings, 0 replies; 3471+ messages in thread
From: nfrankiyamu @ 2017-06-14 16:39 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: 8121690.zip --]
[-- Type: application/zip, Size: 3179 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-14 12:27 board
0 siblings, 0 replies; 3471+ messages in thread
From: board @ 2017-06-14 12:27 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: 2117876976.zip --]
[-- Type: application/zip, Size: 4890 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-14 12:26 sibolt.mulder-b60u5d1xRcFWk0Htik3J/w
0 siblings, 0 replies; 3471+ messages in thread
From: sibolt.mulder-b60u5d1xRcFWk0Htik3J/w @ 2017-06-14 12:26 UTC (permalink / raw)
To: dwarves-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 426810907226614.zip --]
[-- Type: application/zip, Size: 4989 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-14 11:42 sophie.norman
0 siblings, 0 replies; 3471+ messages in thread
From: sophie.norman @ 2017-06-14 11:42 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: 297272492.zip --]
[-- Type: application/zip, Size: 4871 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-14 10:27 susan.christian
0 siblings, 0 replies; 3471+ messages in thread
From: susan.christian @ 2017-06-14 10:27 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: 92571547.zip --]
[-- Type: application/zip, Size: 3156 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-14 1:06 durrant
0 siblings, 0 replies; 3471+ messages in thread
From: durrant @ 2017-06-14 1:06 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: 394898621633.zip --]
[-- Type: application/zip, Size: 3509 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-13 21:38 douille.l
0 siblings, 0 replies; 3471+ messages in thread
From: douille.l @ 2017-06-13 21:38 UTC (permalink / raw)
To: linux-crypto
[-- Attachment #1: 771157718762760.zip --]
[-- Type: application/zip, Size: 5007 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-13 11:59 susan.christian
0 siblings, 0 replies; 3471+ messages in thread
From: susan.christian @ 2017-06-13 11:59 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: 78954.zip --]
[-- Type: application/zip, Size: 3506 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-13 10:15 nenep
0 siblings, 0 replies; 3471+ messages in thread
From: nenep @ 2017-06-13 10:15 UTC (permalink / raw)
To: linux-samsung-soc
[-- Attachment #1: 2624499375.zip --]
[-- Type: application/zip, Size: 3508 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-13 9:59 lizdebeth_
0 siblings, 0 replies; 3471+ messages in thread
From: lizdebeth_ @ 2017-06-13 9:59 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: 6313991237204.zip --]
[-- Type: application/zip, Size: 3459 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-13 9:35 wvhyvcm.abyxg
0 siblings, 0 replies; 3471+ messages in thread
From: wvhyvcm.abyxg @ 2017-06-13 9:35 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: 4561729095232.zip --]
[-- Type: application/zip, Size: 3497 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-13 8:14 horizon
0 siblings, 0 replies; 3471+ messages in thread
From: horizon @ 2017-06-13 8:14 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: 480167506794.zip --]
[-- Type: application/zip, Size: 3501 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-13 4:53 roeper
0 siblings, 0 replies; 3471+ messages in thread
From: roeper @ 2017-06-13 4:53 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: 36443008914.zip --]
[-- Type: application/zip, Size: 3460 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-13 4:35 ujagu8185-Re5JQEeQqe8AvxtiuMwx3w
0 siblings, 0 replies; 3471+ messages in thread
From: ujagu8185-Re5JQEeQqe8AvxtiuMwx3w @ 2017-06-13 4:35 UTC (permalink / raw)
To: linux-cifs-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 51151522.zip --]
[-- Type: application/zip, Size: 3491 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-13 4:22 mitch_128
0 siblings, 0 replies; 3471+ messages in thread
From: mitch_128 @ 2017-06-13 4:22 UTC (permalink / raw)
To: linux-ide
[-- Attachment #1: 142179.zip --]
[-- Type: application/zip, Size: 3471 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-12 21:36 nbensoncole81
0 siblings, 0 replies; 3471+ messages in thread
From: nbensoncole81 @ 2017-06-12 21:36 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: 28493054646.zip --]
[-- Type: application/zip, Size: 4963 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-12 19:12 nhossein4212003
0 siblings, 0 replies; 3471+ messages in thread
From: nhossein4212003 @ 2017-06-12 19:12 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 39025725073.zip --]
[-- Type: application/zip, Size: 3493 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-12 17:13 armiksanaye
0 siblings, 0 replies; 3471+ messages in thread
From: armiksanaye @ 2017-06-12 17:13 UTC (permalink / raw)
To: linux-scsi
[-- Attachment #1: 069816.zip --]
[-- Type: application/zip, Size: 3493 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-12 16:44 nfrankiyamu
0 siblings, 0 replies; 3471+ messages in thread
From: nfrankiyamu @ 2017-06-12 16:44 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: 309045088847053.zip --]
[-- Type: application/zip, Size: 4900 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-12 15:02 amin
0 siblings, 0 replies; 3471+ messages in thread
From: amin @ 2017-06-12 15:02 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: 311766356839.zip --]
[-- Type: application/zip, Size: 4807 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-12 10:50 sibolt.mulder-b60u5d1xRcFWk0Htik3J/w
0 siblings, 0 replies; 3471+ messages in thread
From: sibolt.mulder-b60u5d1xRcFWk0Htik3J/w @ 2017-06-12 10:50 UTC (permalink / raw)
To: dwarves-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 019402655217095.zip --]
[-- Type: application/zip, Size: 3588 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-12 7:28 webmaster
0 siblings, 0 replies; 3471+ messages in thread
From: webmaster @ 2017-06-12 7:28 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: 24874228775.zip --]
[-- Type: application/zip, Size: 4916 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-11 18:16 tammyehood
0 siblings, 0 replies; 3471+ messages in thread
From: tammyehood @ 2017-06-11 18:16 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 46536729943268.zip --]
[-- Type: application/zip, Size: 3145 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-11 16:35 mitch_128
0 siblings, 0 replies; 3471+ messages in thread
From: mitch_128 @ 2017-06-11 16:35 UTC (permalink / raw)
To: linux-ide
[-- Attachment #1: 689971074.zip --]
[-- Type: application/zip, Size: 3176 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-11 7:27 roeper
0 siblings, 0 replies; 3471+ messages in thread
From: roeper @ 2017-06-11 7:27 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: 700323777.zip --]
[-- Type: application/zip, Size: 3172 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-11 4:42 1.10.0812112155390.21775
0 siblings, 0 replies; 3471+ messages in thread
From: 1.10.0812112155390.21775 @ 2017-06-11 4:42 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 99741.zip --]
[-- Type: application/zip, Size: 3194 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-11 3:28 redaccion
0 siblings, 0 replies; 3471+ messages in thread
From: redaccion @ 2017-06-11 3:28 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 4748827.zip --]
[-- Type: application/zip, Size: 3180 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-11 2:29 energi
0 siblings, 0 replies; 3471+ messages in thread
From: energi @ 2017-06-11 2:29 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 16633582951.zip --]
[-- Type: application/zip, Size: 3178 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-11 0:20 service
0 siblings, 0 replies; 3471+ messages in thread
From: service @ 2017-06-11 0:20 UTC (permalink / raw)
To: linux-crypto
[-- Attachment #1: 68658800.zip --]
[-- Type: application/zip, Size: 3174 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-10 21:10 mbalhoff
0 siblings, 0 replies; 3471+ messages in thread
From: mbalhoff @ 2017-06-10 21:10 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: 436049145.zip --]
[-- Type: application/zip, Size: 4960 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-10 21:03 morice.diane
0 siblings, 0 replies; 3471+ messages in thread
From: morice.diane @ 2017-06-10 21:03 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 6681956269.zip --]
[-- Type: application/zip, Size: 3204 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-10 20:24 board
0 siblings, 0 replies; 3471+ messages in thread
From: board @ 2017-06-10 20:24 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: 549426.zip --]
[-- Type: application/zip, Size: 5031 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-10 14:34 kbennett
0 siblings, 0 replies; 3471+ messages in thread
From: kbennett @ 2017-06-10 14:34 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: 0630938972524.zip --]
[-- Type: application/zip, Size: 5025 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-10 13:33 iker-KvP5wT2u2U0
0 siblings, 0 replies; 3471+ messages in thread
From: iker-KvP5wT2u2U0 @ 2017-06-10 13:33 UTC (permalink / raw)
To: linux-efi-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 15583681942264.zip --]
[-- Type: application/zip, Size: 4964 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-10 8:23 kindergartenchaos2
0 siblings, 0 replies; 3471+ messages in thread
From: kindergartenchaos2 @ 2017-06-10 8:23 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 829628.zip --]
[-- Type: application/zip, Size: 2004 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-10 7:07 Youichi Kanno
0 siblings, 0 replies; 3471+ messages in thread
From: Youichi Kanno @ 2017-06-10 7:07 UTC (permalink / raw)
Sir/Madam
I am sorry to encroach on your privacy in this manner. I found you
listed in the Trade Center Chambers of Commerce directory here in
Japan. My name is Youichi Kanno and I work in an Audit & Credit
Supervisory role at The Norinchukin Bank. I need your assistance to
process the fund claim of $18,100,000.00 (Eighteen Million, One
Hundred Thousand USD) of a deceased client, Mr. Grigor Kassan. I
only pray at this time that your address is still valid. I want to
solicit your attention to receive this money on my behalf. The purpose
of my contacting you is that my status would not permit me to do this
alone.
I hope to hear from you soon so we can discuss the logistics of moving
the funds to a safe offshore bank.
Yours sincerely,
Youichi Kanno
Phone Number: +81345400962
--
To unsubscribe from this list: send the line "unsubscribe devicetree" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-10 5:53 jacqueline.pike
0 siblings, 0 replies; 3471+ messages in thread
From: jacqueline.pike @ 2017-06-10 5:53 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: 465348500411810.zip --]
[-- Type: application/zip, Size: 2050 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-10 5:29 agiva
0 siblings, 0 replies; 3471+ messages in thread
From: agiva @ 2017-06-10 5:29 UTC (permalink / raw)
To: linux-crypto
[-- Attachment #1: 32156204782.zip --]
[-- Type: application/zip, Size: 2054 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-09 19:04 armouralumni
0 siblings, 0 replies; 3471+ messages in thread
From: armouralumni @ 2017-06-09 19:04 UTC (permalink / raw)
To: linux-m68k
[-- Attachment #1: 735758270.zip --]
[-- Type: application/zip, Size: 3186 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-09 18:57 editor
0 siblings, 0 replies; 3471+ messages in thread
From: editor @ 2017-06-09 18:57 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: 93304718.zip --]
[-- Type: application/zip, Size: 3186 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-09 17:38 nfrankiyamu
0 siblings, 0 replies; 3471+ messages in thread
From: nfrankiyamu @ 2017-06-09 17:38 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: 6928391092.zip --]
[-- Type: application/zip, Size: 3180 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-09 12:45 Mrs Alice Walton
0 siblings, 0 replies; 3471+ messages in thread
From: Mrs Alice Walton @ 2017-06-09 12:45 UTC (permalink / raw)
To: Recipients
I have a charity proposal for you
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-09 10:47 tjcrewvolcoordinator-Re5JQEeQqe8AvxtiuMwx3w
0 siblings, 0 replies; 3471+ messages in thread
From: tjcrewvolcoordinator-Re5JQEeQqe8AvxtiuMwx3w @ 2017-06-09 10:47 UTC (permalink / raw)
To: linux-api-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 85927179231299.zip --]
[-- Type: application/zip, Size: 3189 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-09 8:02 kholloway
0 siblings, 0 replies; 3471+ messages in thread
From: kholloway @ 2017-06-09 8:02 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: 006882549.zip --]
[-- Type: application/zip, Size: 3185 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-09 4:30 citydesk
0 siblings, 0 replies; 3471+ messages in thread
From: citydesk @ 2017-06-09 4:30 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: 846894449555915.zip --]
[-- Type: application/zip, Size: 3190 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-09 3:35 office
0 siblings, 0 replies; 3471+ messages in thread
From: office @ 2017-06-09 3:35 UTC (permalink / raw)
To: linux-leds
[-- Attachment #1: 173662.zip --]
[-- Type: application/zip, Size: 3171 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-09 2:06 rueggemann
0 siblings, 0 replies; 3471+ messages in thread
From: rueggemann @ 2017-06-09 2:06 UTC (permalink / raw)
To: linux-samsung-soc
[-- Attachment #1: 6473227.zip --]
[-- Type: application/zip, Size: 3182 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-09 1:31 durrant
0 siblings, 0 replies; 3471+ messages in thread
From: durrant @ 2017-06-09 1:31 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: 48480.zip --]
[-- Type: application/zip, Size: 3163 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-09 0:39 susan.christian
0 siblings, 0 replies; 3471+ messages in thread
From: susan.christian @ 2017-06-09 0:39 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: 577788.zip --]
[-- Type: application/zip, Size: 3149 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-09 0:34 richard
0 siblings, 0 replies; 3471+ messages in thread
From: richard @ 2017-06-09 0:34 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: 251180749.zip --]
[-- Type: application/zip, Size: 3149 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-08 22:14 bcohen
0 siblings, 0 replies; 3471+ messages in thread
From: bcohen @ 2017-06-08 22:14 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 64017075.zip --]
[-- Type: application/zip, Size: 3148 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-08 18:00 beautyink
0 siblings, 0 replies; 3471+ messages in thread
From: beautyink @ 2017-06-08 18:00 UTC (permalink / raw)
To: linux-ide
[-- Attachment #1: 31571386.zip --]
[-- Type: application/zip, Size: 3183 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-08 17:59 kirola
0 siblings, 0 replies; 3471+ messages in thread
From: kirola @ 2017-06-08 17:59 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 87350019.zip --]
[-- Type: application/zip, Size: 3183 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-08 17:26 natasha.glauser
0 siblings, 0 replies; 3471+ messages in thread
From: natasha.glauser @ 2017-06-08 17:26 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 9078859657.zip --]
[-- Type: application/zip, Size: 3191 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-08 15:18 junplzen
0 siblings, 0 replies; 3471+ messages in thread
From: junplzen @ 2017-06-08 15:18 UTC (permalink / raw)
To: linux-scsi
[-- Attachment #1: 79062787448573.zip --]
[-- Type: application/zip, Size: 3183 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-08 14:09 service
0 siblings, 0 replies; 3471+ messages in thread
From: service @ 2017-06-08 14:09 UTC (permalink / raw)
To: linux-crypto
[-- Attachment #1: 7779404973035.zip --]
[-- Type: application/zip, Size: 3141 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2017-06-08 13:35 Yuval Shaia
0 siblings, 0 replies; 3471+ messages in thread
From: Yuval Shaia @ 2017-06-08 13:35 UTC (permalink / raw)
To: netdev
subscribe netdev
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-08 13:07 unsubscribe.me
0 siblings, 0 replies; 3471+ messages in thread
From: unsubscribe.me @ 2017-06-08 13:07 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 33904617.zip --]
[-- Type: application/zip, Size: 3131 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-08 12:51 koopk
0 siblings, 0 replies; 3471+ messages in thread
From: koopk @ 2017-06-08 12:51 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: 99695261699.zip --]
[-- Type: application/zip, Size: 3169 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-08 11:31 helga.brickl
0 siblings, 0 replies; 3471+ messages in thread
From: helga.brickl @ 2017-06-08 11:31 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 529462.zip --]
[-- Type: application/zip, Size: 4813 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-08 5:41 Oliver Carter
0 siblings, 0 replies; 3471+ messages in thread
From: Oliver Carter @ 2017-06-08 5:41 UTC (permalink / raw)
To: netdev
Hey Netdev
http://arc-protect.com/m7_gift_giver.php?isnt=pfcz272prn36hk
Oliver
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-08 5:00 noord-holland
0 siblings, 0 replies; 3471+ messages in thread
From: noord-holland @ 2017-06-08 5:00 UTC (permalink / raw)
To: linux-samsung-soc
[-- Attachment #1: 0286300581085.zip --]
[-- Type: application/zip, Size: 3154 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-08 3:14 agar2000
0 siblings, 0 replies; 3471+ messages in thread
From: agar2000 @ 2017-06-08 3:14 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 225162210782.zip --]
[-- Type: application/zip, Size: 3145 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-08 3:14 kgbok.kezyhumh
0 siblings, 0 replies; 3471+ messages in thread
From: kgbok.kezyhumh @ 2017-06-08 3:14 UTC (permalink / raw)
To: linux-pm
[-- Attachment #1: 522358304980.zip --]
[-- Type: application/zip, Size: 3145 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-07 22:30 tammyehood
0 siblings, 0 replies; 3471+ messages in thread
From: tammyehood @ 2017-06-07 22:30 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 2944259303743.zip --]
[-- Type: application/zip, Size: 3184 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-07 21:54 agar2000
0 siblings, 0 replies; 3471+ messages in thread
From: agar2000 @ 2017-06-07 21:54 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 99372.zip --]
[-- Type: application/zip, Size: 3195 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-07 14:00 1.10.0812112155390.21775
0 siblings, 0 replies; 3471+ messages in thread
From: 1.10.0812112155390.21775 @ 2017-06-07 14:00 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 470011811002.zip --]
[-- Type: application/zip, Size: 3194 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-07 11:43 nhossein4212003
0 siblings, 0 replies; 3471+ messages in thread
From: nhossein4212003 @ 2017-06-07 11:43 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 701241854906746.zip --]
[-- Type: application/zip, Size: 3106 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-07 7:42 morice.diane
0 siblings, 0 replies; 3471+ messages in thread
From: morice.diane @ 2017-06-07 7:42 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 284085067588.zip --]
[-- Type: application/zip, Size: 3199 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-07 3:19 lucia.germino
0 siblings, 0 replies; 3471+ messages in thread
From: lucia.germino @ 2017-06-07 3:19 UTC (permalink / raw)
To: linux-ide
[-- Attachment #1: 6949818.zip --]
[-- Type: application/zip, Size: 4774 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-06 23:46 mdavis
0 siblings, 0 replies; 3471+ messages in thread
From: mdavis @ 2017-06-06 23:46 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: 37913653393087.zip --]
[-- Type: application/zip, Size: 4706 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-06 20:36 dengx
0 siblings, 0 replies; 3471+ messages in thread
From: dengx @ 2017-06-06 20:36 UTC (permalink / raw)
To: linux-ide
[-- Attachment #1: 8658933842225.zip --]
[-- Type: application/zip, Size: 3176 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-06 7:19 From Lori J. Robinson
0 siblings, 0 replies; 3471+ messages in thread
From: From Lori J. Robinson @ 2017-06-06 7:19 UTC (permalink / raw)
Hello,
I am General Lori J. Robinson. I am presently in Afghanistan serving
the UN/NATO military assignment here. I have an important discussion
with you; kindly respond to me through my private box
lori_robinson.usa@hotmail.com so that we can know each other better. I
hope to read from you if you are also interested. Thanks and hoping
to hear from you soonest.
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-05 17:32 armouralumni
0 siblings, 0 replies; 3471+ messages in thread
From: armouralumni @ 2017-06-05 17:32 UTC (permalink / raw)
To: linux-m68k
[-- Attachment #1: 5980832698.zip --]
[-- Type: application/zip, Size: 3204 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-05 5:43 h.gerritsen12
0 siblings, 0 replies; 3471+ messages in thread
From: h.gerritsen12 @ 2017-06-05 5:43 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: 382993317.zip --]
[-- Type: application/zip, Size: 3207 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-05 4:30 citydesk
0 siblings, 0 replies; 3471+ messages in thread
From: citydesk @ 2017-06-05 4:30 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: 721224187.zip --]
[-- Type: application/zip, Size: 3190 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-05 1:08 rueggemann
0 siblings, 0 replies; 3471+ messages in thread
From: rueggemann @ 2017-06-05 1:08 UTC (permalink / raw)
To: linux-samsung-soc
[-- Attachment #1: 03423870618227.zip --]
[-- Type: application/zip, Size: 3182 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-05 0:03 nmckenna
0 siblings, 0 replies; 3471+ messages in thread
From: nmckenna @ 2017-06-05 0:03 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 12178296.zip --]
[-- Type: application/zip, Size: 3159 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-04 19:55 archerrp
0 siblings, 0 replies; 3471+ messages in thread
From: archerrp @ 2017-06-04 19:55 UTC (permalink / raw)
To: linux-crypto
[-- Attachment #1: 73310.zip --]
[-- Type: application/zip, Size: 3181 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-04 10:30 Yuval Mintz
0 siblings, 0 replies; 3471+ messages in thread
From: Yuval Mintz @ 2017-06-04 10:30 UTC (permalink / raw)
To: davem, netdev; +Cc: Yuval Mintz
Subject: [PATCH net-next 00/11] qed*: Support VF XDP attachment
Each driver queue [Rx, Tx, XDP-forwarding] requires an allocated HW/FW
connection and a configured queue-zone.
VF handling by the PF has several limitations that prevented adding the
capability to perform XDP at the driver level:
- The VF assumes there's a 1-to-1 correspondence between the VF queue and
the used connection, meaning q<x> is always going to use cid<x>,
whereas for its own queues the PF acquires a new cid for each new
queue.
- There's a 1-to-1 correspondence between the VF queues and the HW queue
zones. While this is necessary for Rx queues [as the queue-zone
contains the producer], transmission queues can share the underlying
queue-zone [the only shared configuration is coalescing].
But all VF<->PF communication mechanisms assume there's a single
identifier that identifies a queue [as queue-zone == queue], while
sharing queue-zones requires passing additional information.
- VFs currently don't try mapping a doorbell bar - there's a small
doorbell window in the regview allowing VFs to doorbell up to 16
connections, but this window isn't wide enough for the added XDP
forwarding queues.
This series adds the necessary infrastructure to finally let
our VFs support XDP, assuming both the PF and VF drivers are sufficiently
new [legacy support is retained both for older VFs and older PFs,
but both sides need the new support for this feature to work].
Basically, the various databases the driver maintains for its queue-cids
would be revised, and queue-cids would be identified using the
(queue-zone, unique index) pair. The TLV mechanism would then be
extended to allow VFs to communicate that unique index as well as the
already provided queue-zone. Finally, the VFs would try to map their
doorbell bar and inform their PF that they're using it.
Almost all the changes are in qed, with exception of #3 [which does some
cleanup in qede as well] and #11 that actually enables the feature.
Dave,
Please consider applying this series to 'net-next'.
Thanks,
Yuval
Yuval Mintz (11):
qed: Add bitmaps for VF CIDs
qed: Create L2 queue database
qed*: L2 interface to use the SB structures directly
qed: Pass vf_params when creating a queue-cid
qed: Assign a unique per-queue index to queue-cid
qed: Make VF legacy a bitfield
qed: IOV db support multiple queues per qzone
qed: Multiple qzone queues for VFs
qed: VFs to try utilizing the doorbell bar
qed: VF XDP support
qede: VF XDP support
drivers/net/ethernet/qlogic/qed/qed.h | 8 +
drivers/net/ethernet/qlogic/qed/qed_cxt.c | 230 ++++++++++----
drivers/net/ethernet/qlogic/qed/qed_cxt.h | 54 +++-
drivers/net/ethernet/qlogic/qed/qed_dev.c | 32 +-
drivers/net/ethernet/qlogic/qed/qed_l2.c | 298 +++++++++++++++---
drivers/net/ethernet/qlogic/qed/qed_l2.h | 79 ++++-
drivers/net/ethernet/qlogic/qed/qed_main.c | 24 +-
drivers/net/ethernet/qlogic/qed/qed_reg_addr.h | 1 +
drivers/net/ethernet/qlogic/qed/qed_sriov.c | 418 +++++++++++++++++++------
drivers/net/ethernet/qlogic/qed/qed_sriov.h | 20 +-
drivers/net/ethernet/qlogic/qed/qed_vf.c | 244 +++++++++++----
drivers/net/ethernet/qlogic/qed/qed_vf.h | 79 ++++-
drivers/net/ethernet/qlogic/qede/qede_main.c | 38 ++-
include/linux/qed/qed_eth_if.h | 6 +-
include/linux/qed/qed_if.h | 4 +
15 files changed, 1205 insertions(+), 330 deletions(-)
--
2.9.4
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-03 7:17 nbensoncole81
0 siblings, 0 replies; 3471+ messages in thread
From: nbensoncole81 @ 2017-06-03 7:17 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: 6713067025.zip --]
[-- Type: application/zip, Size: 3162 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-03 5:45 nfrankiyamu
0 siblings, 0 replies; 3471+ messages in thread
From: nfrankiyamu @ 2017-06-03 5:45 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: 34618930.zip --]
[-- Type: application/zip, Size: 3162 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-02 8:02 jessica.jones-PnMVE5gNl/Vkbu+0n/iG1Q
0 siblings, 0 replies; 3471+ messages in thread
From: jessica.jones-PnMVE5gNl/Vkbu+0n/iG1Q @ 2017-06-02 8:02 UTC (permalink / raw)
To: linux-cifs-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 6161255567811.zip --]
[-- Type: application/zip, Size: 3144 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-02 6:04 mari.kayhko
0 siblings, 0 replies; 3471+ messages in thread
From: mari.kayhko @ 2017-06-02 6:04 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 083544262110.zip --]
[-- Type: application/zip, Size: 3182 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-01 20:40 nbensoncole81
0 siblings, 0 replies; 3471+ messages in thread
From: nbensoncole81 @ 2017-06-01 20:40 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: 208405710.zip --]
[-- Type: application/zip, Size: 3155 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-01 2:26 Dave Airlie
0 siblings, 0 replies; 3471+ messages in thread
From: Dave Airlie @ 2017-06-01 2:26 UTC (permalink / raw)
To: Linus Torvalds; +Cc: LKML, dri-devel
Hi Linus,
This is the main set of fixes for rc4, one amdgpu fix, some exynos
regression fixes, some msm fixes and some i915 and GVT fixes.
I've got a second regression fix for some DP chips that might be a bit
large, but I think we'd like to land it now, I'll send it along
tomorrow, once you are happy with this set.
Dave.
The following changes since commit 5ed02dbb497422bf225783f46e6eadd237d23d6b:
Linux 4.12-rc3 (2017-05-28 17:20:53 -0700)
are available in the git repository at:
git://people.freedesktop.org/~airlied/linux tags/drm-fixes-for-v4.12-rc4
for you to fetch changes up to 400129f0a3ae989c30b37104bbc23b35c9d7a9a4:
Merge tag 'exynos-drm-fixes-for-v4.12' of
git://git.kernel.org/pub/scm/linux/kernel/git/daeinki/drm-exynos into
drm-fixes (2017-06-01 12:07:48 +1000)
----------------------------------------------------------------
msm/exynos/i915/amdgpu fixes
----------------------------------------------------------------
Changbin Du (1):
drm/i915/gvt: clean up unsubmited workloads before destroying kmem cache
Chris Wilson (1):
drm/i915/selftests: Silence compiler warning in igt_ctx_exec
Chuanxiao Dong (2):
drm/i915: set initialised only when init_context callback is NULL
drm/i915/gvt: Disable compression workaround for Gen9
Daniel Vetter (2):
Revert "drm/i915: Restore lost "Initialized i915" welcome message"
drm/exynos: Merge pre/postclose hooks
Dave Airlie (4):
Merge tag 'drm-intel-fixes-2017-05-29' of
git://anongit.freedesktop.org/git/drm-intel into drm-fixes
Merge branch 'msm-fixes-4.12-rc4' of
git://people.freedesktop.org/~robclark/linux into drm-fixes
Merge branch 'drm-fixes-4.12' of
git://people.freedesktop.org/~agd5f/linux into drm-fixes
Merge tag 'exynos-drm-fixes-for-v4.12' of
git://git.kernel.org/.../daeinki/drm-exynos into drm-fixes
Eric Anholt (2):
drm/msm: Expose our reservation object when exporting a dmabuf.
drm/msm: Reuse dma_fence_release.
Hans de Goede (1):
drm/i915: Fix new -Wint-in-bool-context gcc compiler warning
Hoegeun Kwon (2):
drm/exynos: dsi: Fix the parse_dt function
drm/exynos: dsi: Remove bridge node reference in removal
Inki Dae (1):
drm/exynos: clean up description of exynos_drm_crtc
Jani Nikula (1):
Merge tag 'gvt-fixes-2017-05-25' of
https://github.com/01org/gvt-linux into drm-intel-fixes
Joonas Lahtinen (1):
drm/i915: Do not sync RCU during shrinking
Jordan Crouse (2):
drm/msm: Take the mutex before calling msm_gem_new_impl
drm/msm: Fix the check for the command size
Leo Liu (1):
drm/amdgpu: Program ring for vce instance 1 at its register space
Matthew Auld (1):
drm/i915: use vma->size for appgtt allocate_va_range
Philipp Zabel (1):
drm/msm: for array in-fences, check if all backing fences are
from our own context before waiting
Rob Clark (4):
drm/msm: select PM_OPP
drm/msm/mdp5: use __drm_atomic_helper_plane_duplicate_state()
drm/msm/gpu: check legacy clk names in get_clocks()
drm/msm/mdp5: release hwpipe(s) for unused planes
Tobias Klauser (1):
drm/msm: constify irq_domain_ops
Ville Syrjälä (1):
drm/i915: Stop pretending to mask/unmask LPE audio interrupts
drivers/gpu/drm/amd/amdgpu/vce_v3_0.c | 95 ++++++++++++++++-------
drivers/gpu/drm/exynos/exynos_drm_drv.c | 8 +-
drivers/gpu/drm/exynos/exynos_drm_drv.h | 5 +-
drivers/gpu/drm/exynos/exynos_drm_dsi.c | 26 +++----
drivers/gpu/drm/i915/gvt/execlist.c | 30 ++++---
drivers/gpu/drm/i915/gvt/handlers.c | 30 ++++---
drivers/gpu/drm/i915/i915_drv.c | 4 -
drivers/gpu/drm/i915/i915_gem_gtt.c | 2 +-
drivers/gpu/drm/i915/i915_gem_shrinker.c | 5 --
drivers/gpu/drm/i915/i915_irq.c | 15 ++--
drivers/gpu/drm/i915/i915_reg.h | 2 +-
drivers/gpu/drm/i915/intel_lpe_audio.c | 36 ---------
drivers/gpu/drm/i915/intel_lrc.c | 2 +-
drivers/gpu/drm/i915/selftests/i915_gem_context.c | 8 +-
drivers/gpu/drm/msm/Kconfig | 1 +
drivers/gpu/drm/msm/mdp/mdp5/mdp5_mdss.c | 2 +-
drivers/gpu/drm/msm/mdp/mdp5/mdp5_plane.c | 9 ++-
drivers/gpu/drm/msm/msm_drv.c | 1 +
drivers/gpu/drm/msm/msm_drv.h | 1 +
drivers/gpu/drm/msm/msm_fence.c | 10 +--
drivers/gpu/drm/msm/msm_gem.c | 6 ++
drivers/gpu/drm/msm/msm_gem_prime.c | 7 ++
drivers/gpu/drm/msm/msm_gem_submit.c | 14 ++--
drivers/gpu/drm/msm/msm_gpu.c | 4 +-
24 files changed, 169 insertions(+), 154 deletions(-)
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-01 2:25 kbennett
0 siblings, 0 replies; 3471+ messages in thread
From: kbennett @ 2017-06-01 2:25 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: 652137435079.zip --]
[-- Type: application/zip, Size: 3170 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-01 1:55 cdevries
0 siblings, 0 replies; 3471+ messages in thread
From: cdevries @ 2017-06-01 1:55 UTC (permalink / raw)
To: linux-crypto
[-- Attachment #1: 3483483188.zip --]
[-- Type: application/zip, Size: 3174 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-06-01 0:43 armouralumni
0 siblings, 0 replies; 3471+ messages in thread
From: armouralumni @ 2017-06-01 0:43 UTC (permalink / raw)
To: linux-m68k
[-- Attachment #1: 382518.zip --]
[-- Type: application/zip, Size: 3175 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-31 14:53 tjcrewvolcoordinator-Re5JQEeQqe8AvxtiuMwx3w
0 siblings, 0 replies; 3471+ messages in thread
From: tjcrewvolcoordinator-Re5JQEeQqe8AvxtiuMwx3w @ 2017-05-31 14:53 UTC (permalink / raw)
To: linux-api-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 76390.zip --]
[-- Type: application/zip, Size: 3189 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-31 11:36 p.mueller-spz-hgw-Mmb7MZpHnFY
0 siblings, 0 replies; 3471+ messages in thread
From: p.mueller-spz-hgw-Mmb7MZpHnFY @ 2017-05-31 11:36 UTC (permalink / raw)
To: linux-rdma-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 775_linux-rdma.zip --]
[-- Type: application/zip, Size: 3334 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown)
@ 2017-05-26 16:33 Anderson McEnany <
0 siblings, 0 replies; 3471+ messages in thread
From: Anderson McEnany < @ 2017-05-26 16:33 UTC (permalink / raw)
[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain, Size: 2030 bytes --]
lisoch@veco.ru
>
Subject: Gegenseitiger Partnervorschlag
Date: Fri, 26 May 2017 18:15:45 +0200
MIME-Version: 1.0
Content-Type: text/plain;
charset="Windows-1251"
Content-Transfer-Encoding: 7bit
X-Priority: 3
X-MSMail-Priority: Normal
X-Mailer: Microsoft Outlook Express 6.00.2600.0000
X-MimeOLE: Produced By Microsoft MimeOLE V6.00.2600.0000
Dear friend,
Thank you for taking the time to read my mutual business proposal to you. My name is Mr. Anderson McEnany, Investment Manager at City Bank New York, USA.
My reason for writing you this proposal is that I have a European client whose investments in the Middle East I helped manage, with a deposit sum of USD 21 million dollars, until about five years ago, when he suddenly passed away. For almost 4 years I searched for his relatives and finally discovered that he had no living relatives.
My proposal to you is to work with you on the deal; I intend to present you as the next of kin, which will make it legal for you to receive the deposit as next of kin.
Both of us will be equal partners in this deal and I will depend on you to receive the total amount into your account. I will give you further information for transferring my share of the funds.
Finally, if you are interested, could you kindly send me your full names, address and direct phone numbers, and also tell me about yourself and what you do for a living, because this is a huge financial transaction and I want to be sure that you can handle this transaction at the bank where the funds are deposited.
Your urgent reply will be appreciated; please send your response message to this confidential e-mail address: Anderson_mcenany@gmx.com.
Kind regards,
Mr. Anderson McEnany
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-24 16:26 natasha.glauser
0 siblings, 0 replies; 3471+ messages in thread
From: natasha.glauser @ 2017-05-24 16:26 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 2605145.zip --]
[-- Type: application/zip, Size: 3160 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-24 0:12 bcohen
0 siblings, 0 replies; 3471+ messages in thread
From: bcohen @ 2017-05-24 0:12 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 07974248344583.zip --]
[-- Type: application/zip, Size: 3196 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-23 22:44 noord-holland
0 siblings, 0 replies; 3471+ messages in thread
From: noord-holland @ 2017-05-23 22:44 UTC (permalink / raw)
To: linux-samsung-soc
[-- Attachment #1: 6211920350.zip --]
[-- Type: application/zip, Size: 3191 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-23 16:29 benjamin
0 siblings, 0 replies; 3471+ messages in thread
From: benjamin @ 2017-05-23 16:29 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 73458503834697.zip --]
[-- Type: application/zip, Size: 3207 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-23 16:24 agiva
0 siblings, 0 replies; 3471+ messages in thread
From: agiva @ 2017-05-23 16:24 UTC (permalink / raw)
To: linux-crypto
[-- Attachment #1: 146902373.zip --]
[-- Type: application/zip, Size: 3207 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-23 9:36 bendis.michal
0 siblings, 0 replies; 3471+ messages in thread
From: bendis.michal @ 2017-05-23 9:36 UTC (permalink / raw)
To: linux-samsung-soc
[-- Attachment #1: 1443947.zip --]
[-- Type: application/zip, Size: 3175 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-23 8:42 delaware.orders
0 siblings, 0 replies; 3471+ messages in thread
From: delaware.orders @ 2017-05-23 8:42 UTC (permalink / raw)
To: linux-crypto
[-- Attachment #1: 0880469378.zip --]
[-- Type: application/zip, Size: 3197 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-23 7:38 scotte
0 siblings, 0 replies; 3471+ messages in thread
From: scotte @ 2017-05-23 7:38 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 615690.zip --]
[-- Type: application/zip, Size: 3201 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-23 4:53 nfrankiyamu
0 siblings, 0 replies; 3471+ messages in thread
From: nfrankiyamu @ 2017-05-23 4:53 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: 002602599.zip --]
[-- Type: application/zip, Size: 3215 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-23 2:19 mdavis
0 siblings, 0 replies; 3471+ messages in thread
From: mdavis @ 2017-05-23 2:19 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: 6775563555.zip --]
[-- Type: application/zip, Size: 3184 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-22 22:32 patientcentral
0 siblings, 0 replies; 3471+ messages in thread
From: patientcentral @ 2017-05-22 22:32 UTC (permalink / raw)
To: linux-samsung-soc
[-- Attachment #1: 8395290876929.zip --]
[-- Type: application/zip, Size: 3190 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-22 20:39 horizon
0 siblings, 0 replies; 3471+ messages in thread
From: horizon @ 2017-05-22 20:39 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: 66593863594.zip --]
[-- Type: application/zip, Size: 3156 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-22 16:10 mitch_128
0 siblings, 0 replies; 3471+ messages in thread
From: mitch_128 @ 2017-05-22 16:10 UTC (permalink / raw)
To: linux-ide
[-- Attachment #1: 5877262087.zip --]
[-- Type: application/zip, Size: 3116 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-22 0:57 mari.kayhko
0 siblings, 0 replies; 3471+ messages in thread
From: mari.kayhko @ 2017-05-22 0:57 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 58679201840822.zip --]
[-- Type: application/zip, Size: 3176 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-21 20:35 armiksanaye
0 siblings, 0 replies; 3471+ messages in thread
From: armiksanaye @ 2017-05-21 20:35 UTC (permalink / raw)
To: linux-scsi
[-- Attachment #1: 66504.zip --]
[-- Type: application/zip, Size: 3178 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-21 16:36 x1kn8fk
0 siblings, 0 replies; 3471+ messages in thread
From: x1kn8fk @ 2017-05-21 16:36 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: 697470622.zip --]
[-- Type: application/zip, Size: 39559 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-21 13:56 sibolt.mulder-b60u5d1xRcFWk0Htik3J/w
0 siblings, 0 replies; 3471+ messages in thread
From: sibolt.mulder-b60u5d1xRcFWk0Htik3J/w @ 2017-05-21 13:56 UTC (permalink / raw)
To: dwarves-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 295991102.zip --]
[-- Type: application/zip, Size: 39559 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-21 11:59 anita.traylor
0 siblings, 0 replies; 3471+ messages in thread
From: anita.traylor @ 2017-05-21 11:59 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 059670471941.zip --]
[-- Type: application/zip, Size: 39559 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-21 11:38 susan.christian
0 siblings, 0 replies; 3471+ messages in thread
From: susan.christian @ 2017-05-21 11:38 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: 0580348.zip --]
[-- Type: application/zip, Size: 39559 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-21 11:13 mariobronti
0 siblings, 0 replies; 3471+ messages in thread
From: mariobronti @ 2017-05-21 11:13 UTC (permalink / raw)
To: linux-ide
[-- Attachment #1: 37588996.zip --]
[-- Type: application/zip, Size: 39559 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-21 9:17 jacqueline.pike
0 siblings, 0 replies; 3471+ messages in thread
From: jacqueline.pike @ 2017-05-21 9:17 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: 544961.zip --]
[-- Type: application/zip, Size: 2917 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-21 8:55 benjamin
0 siblings, 0 replies; 3471+ messages in thread
From: benjamin @ 2017-05-21 8:55 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 087833.zip --]
[-- Type: application/zip, Size: 2855 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-21 8:55 agiva
0 siblings, 0 replies; 3471+ messages in thread
From: agiva @ 2017-05-21 8:55 UTC (permalink / raw)
To: linux-crypto
[-- Attachment #1: 5812262.zip --]
[-- Type: application/zip, Size: 2921 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-21 8:42 brucet
0 siblings, 0 replies; 3471+ messages in thread
From: brucet @ 2017-05-21 8:42 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: 00725.zip --]
[-- Type: application/zip, Size: 2874 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-20 21:16 h.gerritsen12
0 siblings, 0 replies; 3471+ messages in thread
From: h.gerritsen12 @ 2017-05-20 21:16 UTC (permalink / raw)
To: linux-arch
[-- Attachment #1: 31217786072134.zip --]
[-- Type: application/zip, Size: 2902 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-20 20:00 citydesk
0 siblings, 0 replies; 3471+ messages in thread
From: citydesk @ 2017-05-20 20:00 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: 39874.zip --]
[-- Type: application/zip, Size: 2821 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-20 18:58 office
0 siblings, 0 replies; 3471+ messages in thread
From: office @ 2017-05-20 18:58 UTC (permalink / raw)
To: linux-leds
[-- Attachment #1: 22687017334589.zip --]
[-- Type: application/zip, Size: 1271 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-20 17:45 counselling-30L6jp03H7UtpYsHHOQ6Llpr/1R2p/CL
0 siblings, 0 replies; 3471+ messages in thread
From: counselling-30L6jp03H7UtpYsHHOQ6Llpr/1R2p/CL @ 2017-05-20 17:45 UTC (permalink / raw)
To: linux-cifs-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 9301738364993.zip --]
[-- Type: application/zip, Size: 2890 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-20 16:22 alters
0 siblings, 0 replies; 3471+ messages in thread
From: alters @ 2017-05-20 16:22 UTC (permalink / raw)
To: linux-scsi
[-- Attachment #1: 4380970736.zip --]
[-- Type: application/zip, Size: 2837 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-20 14:29 cv
0 siblings, 0 replies; 3471+ messages in thread
From: cv @ 2017-05-20 14:29 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: 65318589.zip --]
[-- Type: application/zip, Size: 2878 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-20 12:27 ajae
0 siblings, 0 replies; 3471+ messages in thread
From: ajae @ 2017-05-20 12:27 UTC (permalink / raw)
To: linux-bcache
[-- Attachment #1: 1415432.zip --]
[-- Type: application/zip, Size: 2909 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-20 11:47 john.dahlberg
0 siblings, 0 replies; 3471+ messages in thread
From: john.dahlberg @ 2017-05-20 11:47 UTC (permalink / raw)
To: linux-scsi
[-- Attachment #1: 899954395.zip --]
[-- Type: application/zip, Size: 2920 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-20 11:03 pohut00
0 siblings, 0 replies; 3471+ messages in thread
From: pohut00 @ 2017-05-20 11:03 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 82304.zip --]
[-- Type: application/zip, Size: 2820 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-20 9:40 mgriffit
0 siblings, 0 replies; 3471+ messages in thread
From: mgriffit @ 2017-05-20 9:40 UTC (permalink / raw)
To: linux-ide
[-- Attachment #1: 64749729775761.zip --]
[-- Type: application/zip, Size: 2868 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-20 8:14 ecaterinasuciu09
0 siblings, 0 replies; 3471+ messages in thread
From: ecaterinasuciu09 @ 2017-05-20 8:14 UTC (permalink / raw)
To: linux-acpi
[-- Attachment #1: 0482411662588.zip --]
[-- Type: application/zip, Size: 2825 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-20 1:09 board
0 siblings, 0 replies; 3471+ messages in thread
From: board @ 2017-05-20 1:09 UTC (permalink / raw)
To: linux-ext4
[-- Attachment #1: 17734612411.zip --]
[-- Type: application/zip, Size: 2913 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-20 0:40 sophie.norman
0 siblings, 0 replies; 3471+ messages in thread
From: sophie.norman @ 2017-05-20 0:40 UTC (permalink / raw)
To: netfilter-devel
[-- Attachment #1: 260009073203663.zip --]
[-- Type: application/zip, Size: 2939 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-20 0:26 brian
0 siblings, 0 replies; 3471+ messages in thread
From: brian @ 2017-05-20 0:26 UTC (permalink / raw)
To: linux-pm
[-- Attachment #1: 843245271.zip --]
[-- Type: application/zip, Size: 2945 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-19 16:59 zumbalisa
0 siblings, 0 replies; 3471+ messages in thread
From: zumbalisa @ 2017-05-19 16:59 UTC (permalink / raw)
To: linux-pm
[-- Attachment #1: 032259416649.zip --]
[-- Type: application/zip, Size: 2894 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-19 15:35 susan.christian
0 siblings, 0 replies; 3471+ messages in thread
From: susan.christian @ 2017-05-19 15:35 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: 57847049147979.zip --]
[-- Type: application/zip, Size: 2923 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-19 14:51 citydesk
0 siblings, 0 replies; 3471+ messages in thread
From: citydesk @ 2017-05-19 14:51 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: 128734285588468.zip --]
[-- Type: application/zip, Size: 2883 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-19 13:31 office
0 siblings, 0 replies; 3471+ messages in thread
From: office @ 2017-05-19 13:31 UTC (permalink / raw)
To: linux-leds
[-- Attachment #1: 13476503689.zip --]
[-- Type: application/zip, Size: 2885 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-19 12:56 kindergartenchaos2
0 siblings, 0 replies; 3471+ messages in thread
From: kindergartenchaos2 @ 2017-05-19 12:56 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 012614448.zip --]
[-- Type: application/zip, Size: 2858 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-19 11:45 counselling-30L6jp03H7UtpYsHHOQ6Llpr/1R2p/CL
0 siblings, 0 replies; 3471+ messages in thread
From: counselling-30L6jp03H7UtpYsHHOQ6Llpr/1R2p/CL @ 2017-05-19 11:45 UTC (permalink / raw)
To: linux-cifs-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 8991909.zip --]
[-- Type: application/zip, Size: 2916 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-19 6:45 j.lahoda-aRb0bU7PRFPrBKCeMvbIDA
0 siblings, 0 replies; 3471+ messages in thread
From: j.lahoda-aRb0bU7PRFPrBKCeMvbIDA @ 2017-05-19 6:45 UTC (permalink / raw)
To: linux-efi-u79uwXL29TY76Z2rM5mHXA
[-- Attachment #1: 27843789264095.zip --]
[-- Type: application/zip, Size: 2870 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-19 4:32 archerrp
0 siblings, 0 replies; 3471+ messages in thread
From: archerrp @ 2017-05-19 4:32 UTC (permalink / raw)
To: linux-crypto
[-- Attachment #1: 6070538009916.zip --]
[-- Type: application/zip, Size: 2908 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-19 3:34 openhackbangalore
0 siblings, 0 replies; 3471+ messages in thread
From: openhackbangalore @ 2017-05-19 3:34 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 04104849287.zip --]
[-- Type: application/zip, Size: 2893 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-18 19:22 lucia.germino
0 siblings, 0 replies; 3471+ messages in thread
From: lucia.germino @ 2017-05-18 19:22 UTC (permalink / raw)
To: linux-ide
[-- Attachment #1: 6078544384.zip --]
[-- Type: application/zip, Size: 2928 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-18 16:47 susan.christian
0 siblings, 0 replies; 3471+ messages in thread
From: susan.christian @ 2017-05-18 16:47 UTC (permalink / raw)
To: platform-driver-x86
[-- Attachment #1: 40144986.zip --]
[-- Type: application/zip, Size: 4845 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-18 14:13 agiva
0 siblings, 0 replies; 3471+ messages in thread
From: agiva @ 2017-05-18 14:13 UTC (permalink / raw)
To: linux-crypto
[-- Attachment #1: 233627250363201.zip --]
[-- Type: application/zip, Size: 4829 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-18 13:41 alters
0 siblings, 0 replies; 3471+ messages in thread
From: alters @ 2017-05-18 13:41 UTC (permalink / raw)
To: linux-scsi
[-- Attachment #1: 449685.zip --]
[-- Type: application/zip, Size: 4787 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-18 13:40 hp
0 siblings, 0 replies; 3471+ messages in thread
From: hp @ 2017-05-18 13:40 UTC (permalink / raw)
To: linux-raid
[-- Attachment #1: 2518423.zip --]
[-- Type: application/zip, Size: 4661 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-17 18:42 stef.ryckmans
0 siblings, 0 replies; 3471+ messages in thread
From: stef.ryckmans @ 2017-05-17 18:42 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 08149217870.zip --]
[-- Type: application/zip, Size: 4646 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-17 13:39 J Walker
0 siblings, 0 replies; 3471+ messages in thread
From: J Walker @ 2017-05-17 13:39 UTC (permalink / raw)
To: netdev
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-17 10:59 anita.traylor
0 siblings, 0 replies; 3471+ messages in thread
From: anita.traylor @ 2017-05-17 10:59 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 55356090.zip --]
[-- Type: application/zip, Size: 3008 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-17 7:10 1.10.0812112155390.21775
0 siblings, 0 replies; 3471+ messages in thread
From: 1.10.0812112155390.21775 @ 2017-05-17 7:10 UTC (permalink / raw)
To: linux-fsdevel
[-- Attachment #1: 056305.zip --]
[-- Type: application/zip, Size: 2929 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-16 6:37 momofr
0 siblings, 0 replies; 3471+ messages in thread
From: momofr @ 2017-05-16 6:37 UTC (permalink / raw)
To: netdev
[-- Attachment #1: EMAIL_373084188081_netdev.zip --]
[-- Type: application/zip, Size: 2077 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-16 3:06 armiksanaye
0 siblings, 0 replies; 3471+ messages in thread
From: armiksanaye @ 2017-05-16 3:06 UTC (permalink / raw)
To: linux-scsi
[-- Attachment #1: EMAIL_94744_linux-scsi.zip --]
[-- Type: application/zip, Size: 2116 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-15 23:49 morice.diane
0 siblings, 0 replies; 3471+ messages in thread
From: morice.diane @ 2017-05-15 23:49 UTC (permalink / raw)
To: netdev
[-- Attachment #1: EMAIL_0461021_netdev.zip --]
[-- Type: application/zip, Size: 2062 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-15 23:19 bcohen
0 siblings, 0 replies; 3471+ messages in thread
From: bcohen @ 2017-05-15 23:19 UTC (permalink / raw)
To: netdev
[-- Attachment #1: EMAIL_94874074783512_netdev.zip --]
[-- Type: application/zip, Size: 2074 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-14 3:19 unixkeeper
0 siblings, 0 replies; 3471+ messages in thread
From: unixkeeper @ 2017-05-14 3:19 UTC (permalink / raw)
To: linux-bcache
reg
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-11 1:02 info
0 siblings, 0 replies; 3471+ messages in thread
From: info @ 2017-05-11 1:02 UTC (permalink / raw)
To: dri-devel
[-- Attachment #1: 898372657368076_dri-devel.zip --]
[-- Type: application/zip, Size: 2851 bytes --]
[-- Attachment #2: Type: text/plain, Size: 160 bytes --]
_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-10 7:23 kelley
0 siblings, 0 replies; 3471+ messages in thread
From: kelley @ 2017-05-10 7:23 UTC (permalink / raw)
To: netdev
[-- Attachment #1: 620_netdev.zip --]
[-- Type: application/zip, Size: 1923 bytes --]
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-05-04 13:20 Steve French
0 siblings, 0 replies; 3471+ messages in thread
From: Steve French @ 2017-05-04 13:20 UTC (permalink / raw)
To: Long Li, linux-cifs-u79uwXL29TY76Z2rM5mHXA, Pavel Shilovskiy
[-- Attachment #1: Type: text/plain, Size: 465 bytes --]
Any thoughts on removing the dependency on srv_mutex when deleting
completed requests (and freeing their memory)? Otherwise it can cause
problems with long-running socket writes (when sending new requests),
which don't get the benefit of finishing up response processing because
DeleteMidQEntry blocks on server->srv_mutex.
This should improve performance as well.
(Long Li was noticing this looking at RDMA with cifs.ko)
See attached patch
--
Thanks,
Steve
[-- Attachment #2: 0001-CIFS-Don-t-delay-freeing-mids-when-blocked-on-slow-s.patch --]
[-- Type: text/x-patch, Size: 4037 bytes --]
From 429bb0e9da0db34bf75d5335fe6b4011db8765ad Mon Sep 17 00:00:00 2001
From: Steve French <smfrench@gmail.com>
Date: Thu, 4 May 2017 07:54:04 -0500
Subject: [PATCH] [CIFS] Don't delay freeing mids when blocked on slow socket
write of request
When processing responses, and in particular when freeing mids (DeleteMidQEntry),
which is very important since it also frees the associated buffers (cifs_buf_release),
we can block for a long time if writes to the socket are slow due to low memory or
networking issues.
We can block in send (smb request) waiting for memory, and be blocked in processing
responses (which could free memory if we let it), since both paths grab the
server->srv_mutex.
In practice, in the DeleteMidQEntry case there is no reason we need to
grab the srv_mutex, so remove the locking around DeleteMidQEntry; this allows
us to free memory faster.
Signed-off-by: Steve French <steve.french@primarydata.com>
---
fs/cifs/cifssmb.c | 7 -------
fs/cifs/smb2pdu.c | 7 -------
fs/cifs/transport.c | 2 --
3 files changed, 16 deletions(-)
diff --git a/fs/cifs/cifssmb.c b/fs/cifs/cifssmb.c
index 205fd94..5245723 100644
--- a/fs/cifs/cifssmb.c
+++ b/fs/cifs/cifssmb.c
@@ -697,9 +697,7 @@ static int validate_t2(struct smb_t2_rsp *pSMB)
{
struct TCP_Server_Info *server = mid->callback_data;
- mutex_lock(&server->srv_mutex);
DeleteMidQEntry(mid);
- mutex_unlock(&server->srv_mutex);
add_credits(server, 1, CIFS_ECHO_OP);
}
@@ -1599,9 +1597,7 @@ static __u16 convert_disposition(int disposition)
}
queue_work(cifsiod_wq, &rdata->work);
- mutex_lock(&server->srv_mutex);
DeleteMidQEntry(mid);
- mutex_unlock(&server->srv_mutex);
add_credits(server, 1, 0);
}
@@ -2058,7 +2054,6 @@ struct cifs_writedata *
{
struct cifs_writedata *wdata = mid->callback_data;
struct cifs_tcon *tcon = tlink_tcon(wdata->cfile->tlink);
- struct TCP_Server_Info *server = tcon->ses->server;
unsigned int written;
WRITE_RSP *smb = (WRITE_RSP *)mid->resp_buf;
@@ -2095,9 +2090,7 @@ struct cifs_writedata *
}
queue_work(cifsiod_wq, &wdata->work);
- mutex_lock(&server->srv_mutex);
DeleteMidQEntry(mid);
- mutex_unlock(&server->srv_mutex);
add_credits(tcon->ses->server, 1, 0);
}
diff --git a/fs/cifs/smb2pdu.c b/fs/cifs/smb2pdu.c
index 0fd63f0..e4007ee 100644
--- a/fs/cifs/smb2pdu.c
+++ b/fs/cifs/smb2pdu.c
@@ -2172,9 +2172,7 @@ static inline void init_copy_chunk_defaults(struct cifs_tcon *tcon)
if (mid->mid_state == MID_RESPONSE_RECEIVED)
credits_received = le16_to_cpu(rsp->hdr.sync_hdr.CreditRequest);
- mutex_lock(&server->srv_mutex);
DeleteMidQEntry(mid);
- mutex_unlock(&server->srv_mutex);
add_credits(server, credits_received, CIFS_ECHO_OP);
}
@@ -2432,9 +2430,7 @@ void smb2_reconnect_server(struct work_struct *work)
cifs_stats_fail_inc(tcon, SMB2_READ_HE);
queue_work(cifsiod_wq, &rdata->work);
- mutex_lock(&server->srv_mutex);
DeleteMidQEntry(mid);
- mutex_unlock(&server->srv_mutex);
add_credits(server, credits_received, 0);
}
@@ -2593,7 +2589,6 @@ void smb2_reconnect_server(struct work_struct *work)
{
struct cifs_writedata *wdata = mid->callback_data;
struct cifs_tcon *tcon = tlink_tcon(wdata->cfile->tlink);
- struct TCP_Server_Info *server = tcon->ses->server;
unsigned int written;
struct smb2_write_rsp *rsp = (struct smb2_write_rsp *)mid->resp_buf;
unsigned int credits_received = 1;
@@ -2633,9 +2628,7 @@ void smb2_reconnect_server(struct work_struct *work)
cifs_stats_fail_inc(tcon, SMB2_WRITE_HE);
queue_work(cifsiod_wq, &wdata->work);
- mutex_lock(&server->srv_mutex);
DeleteMidQEntry(mid);
- mutex_unlock(&server->srv_mutex);
add_credits(tcon->ses->server, credits_received, 0);
}
diff --git a/fs/cifs/transport.c b/fs/cifs/transport.c
index 4d64b5b..de589d0 100644
--- a/fs/cifs/transport.c
+++ b/fs/cifs/transport.c
@@ -613,9 +613,7 @@ struct mid_q_entry *
}
spin_unlock(&GlobalMid_Lock);
- mutex_lock(&server->srv_mutex);
DeleteMidQEntry(mid);
- mutex_unlock(&server->srv_mutex);
return rc;
}
--
1.9.1
^ permalink raw reply related [flat|nested] 3471+ messages in thread
* Re: [PATCHv2 1/1] IB/ipoib: add get_settings in ethtool
@ 2017-05-01 18:59 Doug Ledford
[not found] ` <1493665155.3041.186.camel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
0 siblings, 1 reply; 3471+ messages in thread
From: Doug Ledford @ 2017-05-01 18:59 UTC (permalink / raw)
To: Thomas Bogendoerfer, Zhu Yanjun
Cc: sean.hefty-ral2JQCrhuEAvxtiuMwx3w,
hal.rosenstock-Re5JQEeQqe8AvxtiuMwx3w,
linux-rdma-u79uwXL29TY76Z2rM5mHXA,
yuval.shaia-QHcLZuEGTsvQT0dZR+AlfA,
haakon.bugge-QHcLZuEGTsvQT0dZR+AlfA,
wen.gang.wang-QHcLZuEGTsvQT0dZR+AlfA,
joe.jin-QHcLZuEGTsvQT0dZR+AlfA,
junxiao.bi-QHcLZuEGTsvQT0dZR+AlfA
On Wed, 2017-04-26 at 11:31 +0200, Thomas Bogendoerfer wrote:
> On Wed, 26 Apr 2017 05:02:34 -0400
> Zhu Yanjun <yanjun.zhu-QHcLZuEGTsvQT0dZR+AlfA@public.gmane.org> wrote:
>
> >
> > In order to let the bonding driver report the correct speed
> > of the underlying interfaces, when they are IPoIB, the ethtool
> > function get_settings() in the IPoIB driver is implemented.
>
> FYI get_settings is DEPRECATED. Wouldn't it make more sense to
> directly
> implement get_link_ksettings ?
Indeed, please redo this patchset using the preferred method of getting
link settings. Thanks.
--
Doug Ledford <dledford-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
GPG KeyID: B826A3330E572FDD
Key fingerprint = AE6B 1BDA 122B 23B4 265B 1274 B826 A333 0E57 2FDD
--
To unsubscribe from this list: send the line "unsubscribe linux-rdma" in
the body of a message to majordomo-u79uwXL29TY76Z2rM5mHXA@public.gmane.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-04-29 15:25 Dmitry Bazhenov
0 siblings, 0 replies; 3471+ messages in thread
From: Dmitry Bazhenov @ 2017-04-29 15:25 UTC (permalink / raw)
To: netdev
unsubscribe
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-04-28 9:09 администратор
0 siblings, 0 replies; 3471+ messages in thread
From: администратор @ 2017-04-28 9:09 UTC (permalink / raw)
Attention:
Your messages have exceeded the storage limit of 5 GB set by the administrator; your mailbox is currently at 10.9 GB. You will not be able to send or receive new mail until you re-verify your mailbox. To restore your mailbox, send the following information below:
Name:
Username:
Password:
Confirm password:
E-mail address:
Phone:
If you are unable to re-verify your messages, your mailbox will be disabled!
We apologize for the inconvenience.
Verification code: EN: Ru...635829wjxnxl....74990.RU.2017
Mail technical support ©2017
Thank you
System administrator
^ permalink raw reply [flat|nested] 3471+ messages in thread
* (unknown),
@ 2017-04-28 8:20 Anatolij Gustschin
0 siblings, 0 replies; 3471+ messages in thread
From: Anatolij Gustschin @ 2017-04-28 8:20 UTC (permalink / raw)
To: linus.walleij, gnurou; +Cc: andy.shevchenko, linux-gpio, linux-kernel
Subject: [PATCH v3] gpiolib: Add stubs for gpiod lookup table interface
Add stubs for gpiod_add_lookup_table() and gpiod_remove_lookup_table()
for the !GPIOLIB case to prevent build errors. Also add prototypes.
Signed-off-by: Anatolij Gustschin <agust@denx.de>
---
Changes in v3:
- add stubs for !GPIOLIB case. Drop prototypes, these are
already in gpio/machine.h
Changes in v2:
- move gpiod_lookup_table out of #ifdef
include/linux/gpio/consumer.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/include/linux/gpio/consumer.h b/include/linux/gpio/consumer.h
index 8f702fc..cf3fee2 100644
--- a/include/linux/gpio/consumer.h
+++ b/include/linux/gpio/consumer.h
@@ -41,6 +41,8 @@ enum gpiod_flags {
GPIOD_FLAGS_BIT_DIR_VAL,
};
+struct gpiod_lookup_table;
+
#ifdef CONFIG_GPIOLIB
/* Return the number of GPIOs associated with a device / function */
@@ -435,6 +437,12 @@ struct gpio_desc *devm_fwnode_get_index_gpiod_from_child(struct device *dev,
return ERR_PTR(-ENOSYS);
}
+static inline
+void gpiod_add_lookup_table(struct gpiod_lookup_table *table) {}
+
+static inline
+void gpiod_remove_lookup_table(struct gpiod_lookup_table *table) {}
+
#endif /* CONFIG_GPIOLIB */
static inline
--
2.7.4
^ permalink raw reply related [flat|nested] 3471+ messages in thread
end of thread, other threads:[~2020-07-22 5:40 UTC | newest]
Thread overview: 3471+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-03-19 14:41 (unknown) Maxim Levitsky
2019-03-19 14:41 ` No subject Maxim Levitsky
2019-03-19 14:41 ` [PATCH 1/9] vfio/mdev: add .request callback Maxim Levitsky
2019-03-19 14:41 ` Maxim Levitsky
2019-03-19 14:41 ` Maxim Levitsky
2019-03-19 14:41 ` [PATCH 2/9] nvme/core: add some more values from the spec Maxim Levitsky
2019-03-19 14:41 ` Maxim Levitsky
2019-03-19 14:41 ` Maxim Levitsky
2019-03-19 14:41 ` [PATCH 3/9] nvme/core: add NVME_CTRL_SUSPENDED controller state Maxim Levitsky
2019-03-19 14:41 ` Maxim Levitsky
2019-03-19 14:41 ` Maxim Levitsky
2019-03-19 14:41 ` [PATCH 4/9] nvme/pci: use the NVME_CTRL_SUSPENDED state Maxim Levitsky
2019-03-19 14:41 ` Maxim Levitsky
2019-03-19 14:41 ` Maxim Levitsky
2019-03-20 2:54 ` Fam Zheng
2019-03-20 2:54 ` Fam Zheng
2019-03-19 14:41 ` [PATCH 5/9] nvme/pci: add known admin effects to augument admin effects log page Maxim Levitsky
2019-03-19 14:41 ` Maxim Levitsky
2019-03-19 14:41 ` Maxim Levitsky
2019-03-19 14:41 ` [PATCH 6/9] nvme/pci: init shadow doorbell after each reset Maxim Levitsky
2019-03-19 14:41 ` Maxim Levitsky
2019-03-19 14:41 ` Maxim Levitsky
2019-03-19 14:41 ` [PATCH 7/9] nvme/core: add mdev interfaces Maxim Levitsky
2019-03-19 14:41 ` Maxim Levitsky
2019-03-19 14:41 ` Maxim Levitsky
2019-03-20 11:46 ` Stefan Hajnoczi
2019-03-20 11:46 ` Stefan Hajnoczi
2019-03-20 11:46 ` Stefan Hajnoczi
2019-03-20 12:50 ` Maxim Levitsky
2019-03-20 12:50 ` Maxim Levitsky
2019-03-20 12:50 ` Maxim Levitsky
2019-03-19 14:41 ` [PATCH 8/9] nvme/core: add nvme-mdev core driver Maxim Levitsky
2019-03-19 14:41 ` Maxim Levitsky
2019-03-19 14:41 ` Maxim Levitsky
2019-03-19 14:41 ` [PATCH 9/9] nvme/pci: implement the mdev external queue allocation interface Maxim Levitsky
2019-03-19 14:41 ` Maxim Levitsky
2019-03-19 14:41 ` Maxim Levitsky
2019-03-19 14:58 ` [PATCH 0/9] RFC: NVME VFIO mediated device Maxim Levitsky
2019-03-19 14:58 ` Maxim Levitsky
2019-03-25 18:52 ` [PATCH 0/9] RFC: NVME VFIO mediated device [BENCHMARKS] Maxim Levitsky
2019-03-25 18:52 ` Maxim Levitsky
2019-03-26 9:38 ` Stefan Hajnoczi
2019-03-26 9:38 ` Stefan Hajnoczi
2019-03-26 9:38 ` Stefan Hajnoczi
2019-03-26 9:50 ` Maxim Levitsky
2019-03-26 9:50 ` Maxim Levitsky
2019-03-26 9:50 ` Maxim Levitsky
2019-03-19 15:22 ` your mail Keith Busch
2019-03-19 15:22 ` Keith Busch
2019-03-19 15:22 ` Keith Busch
2019-03-19 23:49 ` Chaitanya Kulkarni
2019-03-19 23:49 ` Chaitanya Kulkarni
2019-03-19 23:49 ` Chaitanya Kulkarni
2019-03-20 16:44 ` Maxim Levitsky
2019-03-20 16:44 ` Maxim Levitsky
2019-03-20 16:44 ` Maxim Levitsky
2019-03-20 16:30 ` Maxim Levitsky
2019-03-20 16:30 ` Maxim Levitsky
2019-03-20 16:30 ` Maxim Levitsky
2019-03-20 17:03 ` Keith Busch
2019-03-20 17:03 ` Keith Busch
2019-03-20 17:03 ` Keith Busch
2019-03-20 17:33 ` Maxim Levitsky
2019-03-20 17:33 ` Maxim Levitsky
2019-03-20 17:33 ` Maxim Levitsky
2019-04-08 10:04 ` Maxim Levitsky
2019-04-08 10:04 ` Maxim Levitsky
2019-03-20 11:03 ` Felipe Franciosi
2019-03-20 11:03 ` No subject Felipe Franciosi
2019-03-20 11:03 ` Felipe Franciosi
2019-03-20 19:08 ` Re: Maxim Levitsky
2019-03-20 19:08 ` No subject Maxim Levitsky
2019-03-20 19:08 ` Maxim Levitsky
2019-03-21 16:12 ` Re: Stefan Hajnoczi
2019-03-21 16:12 ` No subject Stefan Hajnoczi
2019-03-21 16:12 ` Stefan Hajnoczi
2019-03-21 16:21 ` Re: Keith Busch
2019-03-21 16:21 ` No subject Keith Busch
2019-03-21 16:21 ` Keith Busch
2019-03-21 16:41 ` Re: Felipe Franciosi
2019-03-21 16:41 ` No subject Felipe Franciosi
2019-03-21 16:41 ` Felipe Franciosi
2019-03-21 17:04 ` Re: Maxim Levitsky
2019-03-21 17:04 ` No subject Maxim Levitsky
2019-03-21 17:04 ` Maxim Levitsky
2019-03-22 7:54 ` Re: Felipe Franciosi
2019-03-22 7:54 ` No subject Felipe Franciosi
2019-03-22 7:54 ` Felipe Franciosi
2019-03-22 10:32 ` Re: Maxim Levitsky
2019-03-22 10:32 ` No subject Maxim Levitsky
2019-03-22 10:32 ` Maxim Levitsky
2019-03-22 15:30 ` Re: Keith Busch
2019-03-22 15:30 ` No subject Keith Busch
2019-03-22 15:30 ` Keith Busch
2019-03-25 15:44 ` Re: Felipe Franciosi
2019-03-25 15:44 ` No subject Felipe Franciosi
2019-03-25 15:44 ` Felipe Franciosi
2019-03-20 15:08 ` [PATCH 0/9] RFC: NVME VFIO mediated device Bart Van Assche
2019-03-20 15:08 ` Bart Van Assche
2019-03-20 16:48 ` Maxim Levitsky
2019-03-20 16:48 ` Maxim Levitsky
2019-03-20 15:28 ` Bart Van Assche
2019-03-20 15:28 ` Bart Van Assche
2019-03-20 16:42 ` Maxim Levitsky
2019-03-20 16:42 ` Maxim Levitsky
2019-03-20 17:03 ` Alex Williamson
2019-03-20 17:03 ` Alex Williamson
2019-03-20 17:03 ` Alex Williamson
2019-03-21 16:13 ` your mail Stefan Hajnoczi
2019-03-21 16:13 ` Stefan Hajnoczi
2019-03-21 16:13 ` Stefan Hajnoczi
2019-03-21 17:07 ` Maxim Levitsky
2019-03-21 17:07 ` Maxim Levitsky
2019-03-21 17:07 ` Maxim Levitsky
2019-03-25 16:46 ` Stefan Hajnoczi
2019-03-25 16:46 ` Stefan Hajnoczi
2019-03-25 16:46 ` Stefan Hajnoczi
-- strict thread matches above, loose matches on Subject: below --
2020-07-22 5:32 (unknown) Darlehen Bedienung
2020-07-22 5:32 (unknown) Darlehen Bedienung
2020-07-22 5:32 (unknown) Darlehen Bedienung
2020-07-22 4:45 (unknown) Darlehen Bedienung
2020-07-02 19:43 (unknown) Barr Anthony Calder
2020-06-30 17:56 (unknown) Vasiliy Kupriakov
2020-06-27 21:58 (unknown) lookman joe
2020-06-27 21:58 (unknown) lookman joe
2020-06-27 21:58 (unknown) lookman joe
2020-06-27 21:54 (unknown) helen
2020-06-27 21:52 (unknown) helen
[not found] <1327230475.528260.1591750200327.ref@mail.yahoo.com>
2020-06-10 0:50 ` (unknown) Celine Marchand
2020-06-04 19:57 (unknown) David Shine
2020-05-08 23:51 (unknown) Barbara D Wilkins
2020-05-08 23:41 (unknown) Barbara D Wilkins
2020-05-08 22:58 (unknown) Barbara D Wilkins
2020-04-23 23:06 (unknown) Azim Hashim Premji
2020-04-23 23:06 (unknown) Azim Hashim Premji
2020-03-27 9:20 (unknown) chenanqing
2020-03-27 8:36 (unknown) chenanqing
2020-03-17 0:11 (unknown) David Ibe
2020-03-17 0:11 (unknown) David Ibe
2020-03-09 8:43 (unknown) Michael J. Weirsky
2020-03-09 7:37 (unknown) Michael J. Weirsky
2020-03-09 7:34 (unknown) Michael J. Weirsky
2020-03-09 7:34 (unknown) Michael J. Weirsky
2020-03-09 7:34 (unknown) Michael J. Weirsky
2020-03-09 7:34 (unknown) Michael J. Weirsky
2020-03-05 10:47 (unknown) Juanito S. Galang
2020-03-05 10:46 (unknown) Juanito S. Galang
2020-03-05 10:46 (unknown) Juanito S. Galang
2020-03-05 10:46 (unknown) Juanito S. Galang
2020-03-05 10:46 (unknown) Juanito S. Galang
2020-03-05 10:46 (unknown) Juanito S. Galang
2020-03-05 10:46 (unknown) Juanito S. Galang
2020-03-05 2:33 (unknown) Maria Alessandra Filippi
2020-03-05 0:26 (unknown) Maria Alessandra Filippi
2020-03-04 23:30 (unknown) Maria Alessandra Filippi
2020-03-04 9:42 (unknown) Julie Leach
2020-02-24 8:18 kernel panic: audit: backlog limit exceeded syzbot
2020-02-24 22:38 ` Paul Moore
2020-02-24 22:43 ` Eric Paris
2020-02-24 22:46 ` Paul Moore
[not found] ` <CAHC9VhQnbdJprbdTa_XcgUJaiwhzbnGMWJqHczU54UMk0AFCtw-JsoAwUIsXosN+BqQ9rBEUg@public.gmane.org>
2020-02-27 15:39 ` (unknown) Dmitry Vyukov via B.A.T.M.A.N
2020-02-15 3:25 (unknown) mprim37 alcorta
2020-02-11 22:34 (unknown) Rajat Jain
2020-02-05 8:23 (unknown) Frau Huan Jlaying
[not found] <1187667350.235001.1580574902701.ref@mail.yahoo.com>
2020-02-01 16:35 ` (unknown) Mrs. Maureen Hinckley
2019-12-12 15:50 (unknown) 周琰杰 (Zhou Yanjie)
2019-09-12 8:09 (unknown) Gene Chen
2019-08-23 2:12 (unknown) Rob Herring
2019-06-07 0:54 (unknown) Dave Airlie
2019-05-26 11:51 (unknown) Thomas Meyer
2019-05-16 3:48 (unknown) Mail Delivery Subsystem
2019-04-10 11:17 Norbert Lange
2019-04-10 14:15 ` (unknown) Jan Kiszka
2019-04-10 11:14 Norbert Lange
2019-04-10 13:37 ` (unknown) Jan Kiszka
2019-04-10 14:36 ` (unknown) Jan Kiszka
[not found] ` <VI1PR05MB5917B5956F2E9365F10D6539F62E0@VI1PR05MB5917.eurprd05.prod.outlook.com>
2019-04-10 14:47 ` (unknown) Jan Kiszka
2019-04-10 15:02 ` (unknown) Lange Norbert
2019-04-10 16:46 ` (unknown) Jan Kiszka
2019-04-05 2:38 (unknown) Changbin Du
2019-04-04 5:56 (unknown) Mail Delivery Subsystem
2019-03-29 0:36 (unknown) 邀请函
2019-03-21 1:51 (unknown) zhuchangchun
2019-03-04 3:42 (unknown) Automatic Email Delivery Software
2019-03-01 3:34 (unknown) Automatic Email Delivery Software
2019-02-28 3:36 (unknown) Post Office
2019-01-15 2:55 (unknown), Jens Axboe
2019-01-02 12:25 (unknown), Frank Wunderlich
2018-11-27 0:07 (unknown), Offer
2018-11-18 20:40 (unknown), Major Dennis Hornbeck
2018-11-18 9:11 (unknown), Mrs. Maureen Hinckley
2018-11-11 8:05 (unknown), Oliver Carter
2018-10-31 0:38 (unknown), Ubaithullah Masood
2018-10-21 16:25 (unknown), Michael Tirado
2018-10-19 14:40 (unknown), David Howells
2018-10-19 17:46 ` (unknown) David Miller
2018-10-19 20:51 ` (unknown) David Howells
2018-10-19 20:58 ` (unknown) David Miller
2018-10-09 15:55 (unknown), Oliver Carter
2018-09-19 19:57 (unknown), Saif Hasan
2018-09-16 13:39 (unknown), iluminati
2018-08-27 14:50 (unknown), Christoph Hellwig
2018-08-24 4:59 (unknown), Dave Airlie
2018-08-22 9:07 (unknown), системы администратор
2018-08-09 9:23 (unknown), системы администратор
2018-07-29 9:58 (unknown) Sumitomo Rubber
2018-07-28 10:46 (unknown), Andrew Martinez
2018-07-28 10:14 (unknown), Andrew Martinez
2018-07-06 1:26 (unknown), Dave Airlie
2018-07-05 10:36 (unknown), rosdi ablatiff
2018-06-23 21:08 (unknown), David Lechner
2018-06-16 8:15 (unknown) Mrs Mavis Wanczyk
2018-06-13 15:48 (unknown), Ubaithullah Masood
2018-05-31 17:11 (unknown), Adam Richter via Containers
2018-05-29 7:26 (unknown), администратор
2018-05-25 3:26 (unknown), Bounced mail
2018-05-18 12:04 (unknown) DaeRyong Jeong
2018-05-14 17:30 (unknown), Jessica
2018-05-14 6:33 (unknown), системы администратор
2018-05-05 22:07 (unknown), Shane Missler
2018-05-04 15:21 (unknown), Mark Henry
2018-04-20 8:02 (unknown) Christoph Hellwig
2018-04-20 8:02 ` (unknown), Christoph Hellwig
2018-04-16 1:22 (unknown), Andrew Worsley
2018-04-06 1:18 (unknown), venkatvenkatsubra
2018-04-04 13:43 (unknown), системы администратор
2018-03-23 3:05 (unknown), Mail Delivery Subsystem
2018-03-07 7:48 (unknown), Solen win
2018-03-05 17:06 (unknown) Meghana Madhyastha
[not found] <[PATCH xf86-video-amdgpu 0/3] Add non-desktop and leasing support>
2018-03-03 4:49 ` (unknown), Keith Packard
2018-02-23 15:54 (unknown), Adam Richter
2018-02-17 15:29 (unknown), Ahmed Soliman
2018-02-17 8:41 (unknown), Solen win
2018-02-17 1:45 (unknown), Ryan Ellis
2018-02-13 22:59 (unknown), Mitesh Shah
2018-02-13 22:57 (unknown), Alfred Cheuk Chow
2018-02-13 22:57 (unknown), Alfred Cheuk Chow
2018-02-13 22:57 (unknown), Alfred Cheuk Chow
2018-02-13 22:57 (unknown), Alfred Cheuk Chow
2018-02-13 22:56 (unknown), Alfred Cheuk Chow
2018-02-13 12:43 (unknown), mavis lilian wanczyk
2018-02-13 11:58 (unknown), Solen win
2018-02-12 1:39 (unknown), Alfred Cheuk Chow
2018-02-12 1:39 (unknown), Alfred Cheuk Chow
2018-02-12 1:39 (unknown), Alfred Cheuk Chow
2018-02-12 1:39 (unknown), Alfred Cheuk Chow
2018-02-11 16:07 (unknown), glolariu
2018-02-11 7:19 (unknown), Alfred Cheuk Chow
2018-02-08 14:40 (unknown), Automatic Email Delivery Software
[not found] <CALfDnQ8aCTywvhqOBkFv3qQOoME9wvTrKbQq8i8PCPOx2iBp=A@mail.gmail.com>
[not found] ` <CALfDnQ-NihbhS=8C+ZfiKepj5x+Zd5uS2zH82-VrwV40A55s0w@mail.gmail.com>
2018-02-07 10:50 ` (unknown), Solen win
2018-02-02 12:15 (unknown), Robert Vasek
2018-01-29 17:17 (unknown), Jones
2018-01-29 17:17 (unknown), Jones
2018-01-29 17:17 (unknown), Jones
2018-01-29 17:17 (unknown), Jones
2018-01-29 17:17 (unknown), Jones
2018-01-29 17:17 (unknown), Jones
2018-01-29 16:55 (unknown), Jones
2018-01-29 16:30 (unknown), Jones
2018-01-29 16:30 (unknown), Jones
2018-01-29 14:17 (unknown), Jones
2018-01-28 17:06 (unknown), whoisthis TG
2018-01-28 17:01 (unknown), whoisthis TG
2018-01-27 13:48 (unknown), Jones
2018-01-27 13:48 (unknown), Jones
2018-01-27 13:25 (unknown), Jones
2018-01-25 7:23 (unknown), tirumalareddy marri
2018-01-23 13:54 (unknown), Mr Sheng Li Hung
2018-01-23 13:36 (unknown), Mr Sheng Li Hung
2018-01-16 2:23 (unknown) Jack.Ma
2018-01-16 2:16 (unknown) Jack.Ma
2018-01-11 3:22 (unknown), Active lender@
2018-01-10 10:27 (unknown), TimGuo
2018-01-09 21:23 (unknown), Emile Kenold
2018-01-02 22:11 (unknown), Mr Sheng Li Hung
2017-12-30 4:37 (unknown), Adam Richter
2017-12-30 2:10 (unknown), Arpit Patel
2017-12-24 9:07 (unknown), Solen win
2017-12-24 2:58 (unknown), 柯弼舜
2017-12-23 15:32 (unknown), 柯弼舜
2017-12-17 17:28 (unknown), Solen win
2017-12-14 16:26 (unknown), Solen win
2017-12-12 16:06 (unknown), Solen win
2017-12-07 12:53 (unknown), Sistemas administrador
2017-12-01 14:22 (unknown), Rein Appeldoorn
2017-12-01 2:56 (unknown), Post Office
2017-11-20 2:36 (unknown), Robert Wang
2017-11-19 20:07 (unknown), Mitesh Shah
2017-11-16 10:18 (unknown), Michal Hocko
2017-11-15 14:44 (unknown), Qing Chang
2017-11-15 9:18 (unknown) nanda_kishore_chinna
2017-11-13 3:13 (unknown), Bounced mail
2017-11-12 15:10 (unknown), Mitesh Shah
2017-11-12 15:09 (unknown), Friedrich Mayrhofer
2017-11-12 15:09 (unknown), Friedrich Mayrhofer
2017-11-12 15:09 (unknown), Friedrich Mayrhofer
2017-11-06 19:51 (unknown), Qing Chang
2017-11-05 3:40 (unknown), Solen win
2017-11-01 23:35 (unknown), Roy Cockrum Foundation
2017-10-29 9:46 (unknown), Solen win
2017-10-25 12:10 (unknown), EG
2017-10-23 13:52 (unknown), Intl Agency
2017-10-20 8:42 (unknown), membership
2017-10-20 3:19 (unknown), dengx
2017-10-19 22:54 (unknown), armouralumni
2017-10-19 20:10 (unknown), pooks005
2017-10-17 20:28 (unknown), kelley
2017-10-17 12:14 (unknown), dengx
2017-10-17 7:00 (unknown), lswedroe
2017-10-17 0:33 (unknown), membership
2017-10-16 19:44 (unknown), iker-KvP5wT2u2U0
2017-10-16 11:30 (unknown), kindergartenchaos2
2017-10-16 1:23 (unknown), fwkz4811-DoVvmRvd3PAA2dtGD8cC2w
2017-10-15 22:07 (unknown), info
2017-10-15 18:29 (unknown), clasico082
2017-10-15 15:13 (unknown), nelcastellodicarta
2017-10-15 13:57 (unknown), marketing
2017-10-15 13:01 (unknown), pekka.enne
2017-10-15 12:17 (unknown), Solen win2
2017-10-15 12:04 (unknown), sherrilyn
2017-10-15 11:49 (unknown), edo.hlaca
2017-10-15 11:15 (unknown), cl_luzcc
2017-10-15 3:28 (unknown), redaccion
2017-10-14 6:44 (unknown), Ella Golan
2017-10-13 17:15 (unknown), susan.christian
2017-10-13 6:16 (unknown), nfrankiyamu
2017-10-12 14:09 (unknown), redaccion
2017-10-12 13:53 (unknown), Andrew Clement
2017-10-12 13:15 (unknown), mbalhoff
2017-10-12 11:46 (unknown), sophie.norman
2017-10-12 8:17 (unknown), armouralumni
2017-10-12 5:55 (unknown), xa0et.sirio
2017-10-12 3:08 (unknown), iker-KvP5wT2u2U0
2017-10-11 22:32 (unknown), fwkz4811-DoVvmRvd3PAA2dtGD8cC2w
2017-10-11 19:55 (unknown), kindergartenchaos2
2017-10-11 19:29 (unknown), info
2017-10-11 11:49 (unknown), nelcastellodicarta
2017-10-11 9:19 (unknown), pekka.enne
2017-10-11 8:20 (unknown), sherrilyn
2017-10-11 7:34 (unknown), cl_luzcc
2017-10-11 4:11 (unknown), morice.diane
2017-10-10 23:27 (unknown), editor
2017-10-09 15:06 (unknown), jha
2017-10-09 13:19 (unknown), carmen.croonquist
2017-10-09 7:37 (unknown), Michael Lyle
2017-10-09 6:17 (unknown), durrant
2017-10-09 3:44 (unknown), roeper
2017-10-08 23:01 (unknown), susan.christian
2017-10-08 22:32 (unknown), natasha.glauser
2017-10-08 19:00 (unknown), matthias.foerster
2017-10-08 14:15 (unknown), clasico082
2017-10-08 11:08 (unknown), nelcastellodicarta
2017-10-08 9:52 (unknown), marketing
2017-10-08 9:00 (unknown), pekka.enne
2017-10-08 7:59 (unknown), edo.hlaca
2017-10-08 7:32 (unknown), cl_luzcc
2017-10-08 1:26 (unknown), redaccion
2017-10-07 4:45 (unknown), morice.diane
2017-10-07 3:40 (unknown), agar2000
2017-10-07 0:31 (unknown), carmen.croonquist
2017-10-06 11:55 (unknown), info
2017-10-06 8:31 (unknown), smallgroups
2017-10-06 5:16 (unknown), nelcastellodicarta
2017-10-06 2:19 (unknown), sherrilyn
2017-10-06 1:59 (unknown), edo.hlaca
2017-10-06 1:43 (unknown), sophie.norman
2017-10-05 15:34 (unknown), kindergartenchaos2
2017-10-05 14:24 (unknown), informationrequest
2017-10-05 10:20 (unknown), jeffrey.faulkenberg
2017-10-05 7:10 (unknown), mgriffit
2017-10-05 6:53 (unknown), helga.brickl
2017-10-04 16:11 (unknown), 1.10.0812112155390.21775
2017-10-04 15:33 (unknown), membership
2017-10-04 11:44 (unknown), susan.christian
2017-10-04 5:56 (unknown), morice.diane
2017-10-03 13:59 (unknown), nelcastellodicarta
2017-10-03 12:43 (unknown), marketing
2017-10-03 10:37 (unknown), edo.hlaca
2017-10-03 8:40 (unknown), koopk
2017-10-03 8:16 (unknown), morice.diane
2017-10-03 7:38 (unknown), angers
2017-10-03 0:55 (unknown), jbmplupus-Mmb7MZpHnFY
2017-10-03 0:14 (unknown), roeper
2017-10-03 0:03 (unknown), noord-holland
2017-10-02 20:31 (unknown), kchristopher
2017-10-02 18:06 (unknown), dengx
2017-10-02 18:00 (unknown), Solen win2
2017-10-02 17:38 (unknown), nbensoncole81
2017-10-02 15:35 (unknown), nfrankiyamu
2017-09-30 14:07 (unknown), redaccion
2017-09-29 21:29 (unknown), info
2017-09-29 18:01 (unknown), clasico082
2017-09-29 15:42 (unknown), noord-holland
2017-09-29 15:21 (unknown), natasha.glauser
2017-09-29 14:47 (unknown), nelcastellodicarta
2017-09-29 13:49 (unknown), marketing
2017-09-29 11:49 (unknown), roeper
2017-09-29 11:28 (unknown), cl_luzcc
2017-09-29 7:44 (unknown), amin
2017-09-29 7:26 (unknown), kelley
2017-09-29 3:06 (unknown), jha
2017-09-29 2:48 (unknown), Tina Aaron
2017-09-28 22:59 (unknown), rlm85310
2017-09-28 15:08 (unknown), amin
2017-09-28 0:21 (unknown), natasha.glauser
2017-09-27 19:30 (unknown), nbensoncole81
2017-09-27 19:12 (unknown), rlm85310
2017-09-27 17:41 (unknown), Michael Lyle
2017-09-22 19:34 (unknown), John Michael
2017-09-22 8:41 (unknown), Adrian Gillian Bayford
2017-09-22 3:39 (unknown), service
2017-09-22 1:55 (unknown), dengx
2017-09-22 1:22 (unknown), unsubscribe.me
2017-09-21 7:47 (unknown), MAILER-DAEMON
2017-09-20 1:01 (unknown), ninfo
2017-09-19 7:47 (unknown), agar2000
2017-09-15 17:30 (unknown), noreply
2017-09-15 17:29 (unknown), noreply
2017-09-15 17:01 (unknown), noreply
2017-09-13 8:56 (unknown), kindergartenchaos2
2017-09-13 4:21 (unknown), natasha.glauser
2017-09-12 22:07 (unknown), marketing
2017-09-12 19:45 (unknown), edo.hlaca
2017-09-12 19:16 (unknown), cl_luzcc
2017-09-12 18:53 (unknown), pooks005
2017-09-11 20:10 (unknown), roeper
2017-09-11 19:35 (unknown) Helge Deller
2017-09-10 6:22 (unknown), Youichi Kanno
2017-09-07 7:05 (unknown), tabiadhawatef
2017-09-07 4:02 (unknown), dengx
2017-09-06 3:57 (unknown), informationrequest
2017-09-05 23:34 (unknown), kkaplanidou
2017-09-05 18:38 (unknown), john.dahlberg
2017-09-05 18:07 (unknown), bfoster
2017-09-05 16:31 (unknown), mgriffit
2017-09-05 14:02 (unknown), ecaterinasuciu09
2017-09-05 12:51 (unknown), ifalqi
2017-09-05 11:11 (unknown), inn
2017-09-05 2:43 (unknown), xb028930336
2017-09-05 1:51 (unknown), halinajan-4Uo9UdwAbX8
2017-09-04 23:46 (unknown), sterrenplan.kampen
2017-09-04 12:17 (unknown), noord-holland
2017-09-04 5:14 (unknown), nelcastellodicarta
2017-09-04 2:33 (unknown), marketing
2017-09-04 2:13 (unknown), x1kn8fk
2017-09-03 22:54 (unknown), sherrilyn
2017-09-03 21:51 (unknown), xb028930336
2017-09-03 21:26 (unknown), cl_luzcc
2017-09-02 23:56 (unknown), netgalley
2017-09-02 6:40 (unknown), simon.a.t.hardy
2017-09-02 2:47 (unknown), nbensoncole81
2017-09-02 2:39 (unknown), een
2017-09-02 2:35 (unknown), jbmplupus-Mmb7MZpHnFY
2017-09-02 1:59 (unknown), danielle.picarda2
2017-09-02 0:58 (unknown), smallgroups
2017-09-01 22:55 (unknown), redaccion
2017-09-01 22:51 (unknown), zumbalisa
2017-09-01 21:57 (unknown), umpvav-YDxpq3io04c
2017-09-01 21:32 (unknown), nenep
2017-09-01 20:58 (unknown), wvhyvcm.abyxg
2017-09-01 19:52 (unknown), sunaina
2017-09-01 15:30 (unknown), stef.ryckmans
2017-09-01 15:00 (unknown), ujagu8185-Re5JQEeQqe8AvxtiuMwx3w
2017-09-01 11:40 (unknown), witt.kohl
2017-09-01 8:16 (unknown), financialaid
2017-09-01 6:21 (unknown), zita.latex
2017-09-01 4:59 (unknown), adriix.addy
2017-09-01 4:05 (unknown), andrewf
2017-09-01 2:30 (unknown), robert.berry
2017-09-01 1:48 (unknown), doctornina
2017-09-01 1:48 (unknown), agar2000
2017-08-31 18:41 (unknown), helga.brickl
2017-08-31 15:40 (unknown), sterrenplan.kampen
2017-08-31 12:23 (unknown), mark.robinson
2017-08-31 9:54 (unknown), info
2017-08-31 8:20 (unknown), jessica.jones-PnMVE5gNl/Vkbu+0n/iG1Q
2017-08-31 4:52 (unknown), archerrp
2017-08-31 1:39 (unknown) m.wierczynska
2017-08-31 0:58 (unknown), info
2017-08-30 20:26 (unknown), anita.traylor
2017-08-30 19:49 (unknown), susan.christian
2017-08-30 18:32 [PATCH] default implementation for of_find_all_nodes(...) Artur Lorincz
[not found] ` <1504117946-3958-1-git-send-email-larturus2-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org>
2017-09-24 15:50 ` (unknown), Artur Lorincz
2017-10-06 19:31 ` (unknown), Artur Lorincz
2017-10-08 16:28 ` (unknown), Artur Lorincz
2017-08-30 1:37 (unknown), municlerk
2017-08-30 0:38 (unknown), ifalqi
2017-08-29 5:40 (unknown), morice.diane
2017-08-29 3:02 (unknown) catherine.verge
2017-08-28 17:29 (unknown), befragung
2017-08-28 13:22 (unknown), dengx
2017-08-28 6:48 (unknown), patientcentral
2017-08-27 10:55 (unknown), agar2000
2017-08-26 14:48 (unknown), nfrankiyamu
2017-08-26 5:43 (unknown), carol.dallstream-WaM/PvcBqAo
2017-08-25 0:32 (unknown), agiva
2017-08-23 7:23 (unknown), Xuehan Xu
2017-08-22 13:31 (unknown), vinnakota chaitanya
2017-08-20 2:58 (unknown), Solen win2
2017-08-18 17:42 (unknown) Rajneesh Bhardwaj
2017-08-17 21:36 (unknown), Adam Richter
2017-08-16 5:46 (unknown), kim.frederiksen
2017-08-16 2:03 (unknown), xa0ajutor
2017-08-15 17:31 (unknown), nnarroyo623
2017-08-15 17:30 (unknown), simon.a.t.hardy
2017-08-15 14:45 (unknown), een
2017-08-15 14:23 (unknown), helga.brickl
2017-08-15 11:16 (unknown), wvhyvcm.abyxg
2017-08-15 8:46 (unknown), ccc
2017-08-15 6:50 (unknown), demorton
2017-08-15 6:08 (unknown), eumann
2017-08-15 4:40 (unknown), mitch_128
2017-08-15 3:38 (unknown), rueggemann
2017-08-15 2:57 (unknown), nfrankiyamu
2017-08-15 1:55 (unknown), richard
2017-08-14 19:30 (unknown), sterrenplan.kampen
2017-08-14 17:38 (unknown), amin
2017-08-14 16:53 (unknown), durrant
2017-08-14 15:35 (unknown), agar2000
2017-08-14 14:57 (unknown), linwoodrvsales
2017-08-13 15:17 (unknown), bunny43200
2017-08-12 12:05 (unknown), agar2000
2017-08-12 1:27 (unknown), nenep
2017-08-12 1:11 (unknown), lizdebeth_
2017-08-11 22:09 (unknown), Chris
2017-08-11 20:11 (unknown), tammyehood
2017-08-11 17:28 (unknown), rhsinfo
2017-08-11 15:50 (unknown), 1.10.0812112155390.21775
2017-08-11 9:18 (unknown), jonathan.malihan
2017-08-11 8:54 (unknown), helga.brickl
2017-08-11 6:14 (unknown), администратор
2017-08-11 6:14 (unknown), администратор
2017-08-11 6:14 (unknown), администратор
2017-08-11 6:14 (unknown), администратор
2017-08-11 6:08 (unknown), администратор
2017-08-11 4:59 (unknown), Administrator
2017-08-11 4:57 (unknown), nenep
2017-08-11 4:42 (unknown), lizdebeth_
2017-08-10 22:02 (unknown), stef.ryckmans
2017-08-10 21:36 (unknown), shriyashah
2017-08-10 21:08 (unknown), mitch_128
2017-08-10 18:16 (unknown), simon.a.t.hardy
2017-08-10 9:38 (unknown), asn-request-tfHHCSmtYoI
2017-08-10 3:32 (unknown), kholloway
2017-08-10 0:03 (unknown), michele
2017-08-09 23:53 (unknown), nenep
2017-08-09 23:15 (unknown), wvhyvcm.abyxg
2017-08-09 23:06 (unknown), editor
2017-08-09 22:05 (unknown), helga.brickl
2017-08-09 21:55 (unknown), horizon
2017-08-09 20:25 (unknown), sterrenplan.kampen
2017-08-09 19:40 (unknown), tchidrenplytoo
2017-08-09 19:36 (unknown), tammyehood
2017-08-09 14:34 (unknown), shwx002
2017-08-09 13:53 (unknown), Administrador
2017-08-09 10:21 (unknown), системы администратор
2017-08-09 10:20 (unknown), системы администратор
2017-08-09 10:20 (unknown), системы администратор
2017-08-09 10:20 (unknown), системы администратор
2017-08-09 10:20 (unknown), системы администратор
2017-08-09 0:41 (unknown), natasha.glauser
2017-08-09 0:04 (unknown), h.piontek
2017-08-08 21:31 (unknown), michele
2017-08-08 20:55 (unknown), h.gerritsen12
2017-08-08 19:40 (unknown), citydesk
2017-08-08 19:14 (unknown), eaya
2017-08-08 17:09 (unknown), tchidrenplytoo
2017-08-08 14:49 (unknown) catherine.verge
2017-08-08 5:57 (unknown), befragung
2017-08-08 4:57 (unknown), wesley.sydnor
2017-08-07 23:50 (unknown), wvhyvcm.abyxg
2017-08-07 21:05 (unknown), sibolt.mulder-b60u5d1xRcFWk0Htik3J/w
2017-08-07 20:25 (unknown), editor
2017-08-07 19:03 (unknown), sm-yT/95SBIOhs
2017-08-07 18:42 (unknown), susan.christian
2017-08-07 18:38 (unknown), mitch_128
2017-08-07 11:50 (unknown), 1.10.0812112155390.21775
2017-08-07 7:38 (unknown), simon.a.t.hardy
2017-08-07 4:49 (unknown), sorbisches.internat
2017-08-06 23:55 (unknown), webmaster
2017-08-05 14:08 (unknown), simon.a.t.hardy
2017-08-05 12:35 (unknown), agar2000
2017-08-05 11:42 (unknown), Sriram Murthy
2017-08-04 23:59 (unknown), editor
2017-08-04 5:04 (unknown), durrant
2017-08-03 19:52 (unknown), natasha.glauser
2017-08-03 14:01 (unknown), Nora Johnson
2017-08-03 5:21 (unknown), Houston
2017-08-02 18:05 (unknown), Angela-63XfWfWBA5k
2017-08-02 17:31 (unknown), Edmond
2017-08-02 17:07 (unknown), Margery
2017-08-02 15:40 (unknown), Erma
2017-08-02 13:58 (unknown), Will
2017-08-02 12:55 (unknown), tammyehood
2017-08-02 11:47 (unknown), armiksanaye
2017-08-02 4:12 (unknown), Administrator
2017-08-02 3:47 (unknown), системы администратор
2017-08-02 3:45 (unknown), helga.brickl
2017-08-02 3:45 (unknown), системы администратор
2017-08-02 3:45 (unknown), системы администратор
2017-08-02 3:45 (unknown), системы администратор
2017-08-02 3:45 (unknown), системы администратор
2017-08-02 1:19 (unknown), nenep
2017-08-02 1:05 (unknown), lizdebeth_
2017-08-02 0:36 (unknown), richard
2017-08-01 21:19 (unknown), tammyehood
2017-08-01 21:03 (unknown), editor
2017-08-01 20:18 (unknown), stef.ryckmans
2017-08-01 19:35 (unknown), anderslindgaard
2017-08-01 19:35 (unknown), anderslindgaard
2017-08-01 16:33 (unknown), sterrenplan.kampen
2017-08-01 14:53 (unknown), Angela H. Whiteman
2017-08-01 12:35 (unknown), jha
2017-08-01 10:07 (unknown) Chris Ruehl
2017-08-01 4:40 (unknown), durrant
2017-08-01 3:31 (unknown), helga.brickl
2017-08-01 1:35 (unknown), amin
2017-08-01 1:35 (unknown), xa0ajutor
2017-07-31 21:27 (unknown), natasha.glauser
2017-07-31 20:14 (unknown), x1kn8fk
2017-07-31 18:00 (unknown), robert.berry
2017-07-31 16:54 (unknown), bunny43200
2017-07-31 14:52 (unknown), horizon
2017-07-31 13:15 (unknown), sibolt.mulder-b60u5d1xRcFWk0Htik3J/w
2017-07-31 11:49 (unknown), kchristopher
2017-07-31 11:33 (unknown), rhsinfo
2017-07-31 10:50 (unknown), susan.christian
2017-07-30 23:33 (unknown), daven bango
2017-07-28 16:02 (unknown), gdahl
2017-07-28 7:44 (unknown), robert.berry
2017-07-28 7:17 (unknown), doctornina
2017-07-27 13:00 (unknown), nfrankiyamu
2017-07-27 5:01 (unknown), hp
2017-07-27 2:16 (unknown) ceph-devel
2017-07-27 2:14 (unknown) ceph-devel
2017-07-27 1:25 (unknown), info
2017-07-26 20:45 (unknown), een
2017-07-26 20:08 (unknown), municlerk
2017-07-26 14:35 (unknown), venkatvenkatsubra
2017-07-26 14:20 (unknown), sterrenplan.kampen
2017-07-26 12:48 (unknown), momofr
2017-07-26 11:39 (unknown), chrisbi_anelyst
2017-07-26 10:32 (unknown), Solen win2
2017-07-26 6:36 (unknown), nenep
2017-07-26 4:42 (unknown), horizon
2017-07-26 2:25 (unknown), tammyehood
2017-07-25 23:24 (unknown), h.gerritsen12
2017-07-25 20:41 (unknown), sorbisches.internat
2017-07-25 20:01 (unknown), hp
2017-07-25 18:53 (unknown), sibolt.mulder-b60u5d1xRcFWk0Htik3J/w
2017-07-25 18:45 (unknown), x1kn8fk
2017-07-25 16:36 (unknown), susan.christian
2017-07-25 14:56 (unknown), nhossein4212003
2017-07-25 10:27 (unknown), nick_c_huang
2017-07-23 23:48 (unknown), miteshriya
2017-07-20 18:43 (unknown), tbinh.minhnd
2017-07-20 3:55 (unknown), mfr-6k8blvha/+BqlCpFK1mnLg
2017-07-19 11:11 (unknown), rhsinfo
2017-07-18 23:49 (unknown), helga.brickl
2017-07-18 20:36 (unknown), bunny43200
2017-07-18 20:28 (unknown), lizdebeth_
2017-07-18 20:17 (unknown), brian
2017-07-18 15:56 (unknown), bfoster
2017-07-18 13:52 (unknown), stef.ryckmans
2017-07-18 12:45 (unknown), mitch_128
2017-07-18 11:36 (unknown), shwx002
2017-07-18 6:22 (unknown), sorbisches.internat
2017-07-18 5:45 (unknown), h.gerritsen12
2017-07-18 4:50 (unknown), ying.huang-ral2JQCrhuEAvxtiuMwx3w
2017-07-18 4:32 (unknown), citydesk
2017-07-18 4:09 (unknown), armouralumni
2017-07-17 23:02 (unknown), h.piontek
2017-07-17 21:54 (unknown), citydesk
2017-07-17 17:30 (unknown), richard
2017-07-17 15:42 (unknown), tchidrenplytoo
2017-07-17 15:31 (unknown), kathleen.gilbert
2017-07-17 2:32 (unknown), salome.khum
2017-07-17 1:20 (unknown), tchidrenplytoo
2017-07-17 1:09 (unknown), kathleen.gilbert
2017-07-16 7:25 (unknown), kim.frederiksen
2017-07-15 12:30 (unknown), Huaisheng HS1 Ye
2017-07-13 4:49 (unknown), delaware.orders
2017-07-13 3:37 (unknown), befragung
2017-07-13 2:27 (unknown), tomsue2000
2017-07-12 19:24 (unknown), patientcentral
2017-07-12 11:22 (unknown), sterrenplan.kampen
2017-07-12 0:42 (unknown), associatebusiness2009
2017-07-11 16:39 (unknown), indulge-HCInDj6vYHrk4FeknX8I/ZqQE7yCjDx5
2017-07-11 0:07 (unknown), protecciondatos.es
2017-07-10 22:07 (unknown), jacqueline.pike
2017-07-10 21:53 (unknown), agiva
2017-07-10 21:37 (unknown), roeper
2017-07-10 12:51 (unknown), lucia.germino
2017-07-10 12:43 (unknown), brian
2017-07-10 10:06 (unknown), alters
2017-07-10 4:42 (unknown), lipa
2017-07-10 3:47 (unknown), системы администратор
2017-07-10 3:45 (unknown), системы администратор
2017-07-10 3:45 (unknown), системы администратор
2017-07-10 3:45 (unknown), системы администратор
2017-07-10 3:45 (unknown), системы администратор
2017-07-10 3:39 (unknown), системы администратор
2017-07-09 23:29 (unknown), brian
2017-07-09 23:19 (unknown), Corporate Lenders
2017-07-09 23:19 (unknown), Corporate Lenders
2017-07-09 20:52 (unknown), iker-KvP5wT2u2U0
2017-07-09 18:51 (unknown), pooks005
2017-07-09 13:02 (unknown), smallgroups
2017-07-08 18:22 (unknown), Alfred chow
2017-07-08 18:22 (unknown), Alfred chow
2017-07-08 17:13 (unknown), horizon
2017-07-08 16:07 (unknown), netgalley
2017-07-08 11:53 (unknown), Alfred chow
2017-07-07 17:21 (unknown), pooks005
2017-07-07 1:37 (unknown), zumbalisa
2017-07-07 0:30 (unknown), amin
2017-07-06 17:35 (unknown), simon.a.t.hardy
2017-07-06 14:11 (unknown), een
2017-07-06 6:10 (unknown), armouralumni
2017-07-06 0:55 (unknown), 이성근
2017-07-05 21:18 (unknown), een
2017-07-05 15:57 (unknown), sibolt.mulder-b60u5d1xRcFWk0Htik3J/w
2017-07-05 15:15 (unknown), armouralumni
2017-07-05 8:06 (unknown), koopk
2017-07-05 7:00 (unknown), benjamin
2017-07-05 6:55 (unknown), agiva
2017-07-05 6:42 (unknown), angers
2017-07-05 0:55 (unknown), helga.brickl
2017-07-05 0:06 (unknown), michele
2017-07-04 22:53 (unknown), j.lahoda-aRb0bU7PRFPrBKCeMvbIDA
2017-07-04 21:02 (unknown), salome.khum
2017-07-04 19:53 (unknown), tchidrenplytoo
2017-07-04 18:35 (unknown), noord-holland
2017-07-04 16:38 (unknown), openhackbangalore
2017-07-04 10:50 (unknown), h.gerritsen12
2017-07-04 8:52 (unknown), citydesk
2017-07-04 6:01 (unknown), xa0ajutor
2017-07-04 4:17 (unknown), rueggemann
2017-07-03 14:13 (unknown), tammyehood
2017-07-03 13:54 (unknown), sm-yT/95SBIOhs
2017-07-03 13:30 (unknown), roeper
2017-07-03 12:43 (unknown), mitch_128
2017-07-03 4:44 (unknown), beautyink
2017-07-03 1:28 (unknown), h.piontek
2017-07-02 20:26 (unknown), tabiadhawatef
2017-07-02 18:44 (unknown), tchidrenplytoo
2017-07-02 10:14 (unknown), armouralumni
2017-07-01 21:28 (unknown), redaccion
2017-07-01 11:36 (unknown), p.mueller-spz-hgw-Mmb7MZpHnFY
2017-06-30 8:29 (unknown), sibolt.mulder-b60u5d1xRcFWk0Htik3J/w
2017-06-30 2:53 (unknown), 1.10.0812112155390.21775
2017-06-30 1:14 (unknown), paloma.depping
2017-06-29 19:05 (unknown), morice.diane
2017-06-29 13:46 (unknown), kholloway
2017-06-29 12:20 (unknown), The Post Office
2017-06-29 10:39 (unknown), lizdebeth_
2017-06-28 14:22 (unknown), tchidrenplytoo
2017-06-28 3:57 (unknown), system administrator
2017-06-28 3:56 (unknown), system administrator
2017-06-28 3:56 (unknown), system administrator
2017-06-28 3:56 (unknown), system administrator
2017-06-28 3:56 (unknown), system administrator
2017-06-28 3:22 (unknown), Administrator
2017-06-27 11:59 (unknown), natasha.glauser
2017-06-27 7:15 (unknown), noord-holland
2017-06-27 7:12 (unknown), loisc07
2017-06-27 0:08 (unknown), h.gerritsen12
2017-06-26 22:58 (unknown), Anders Lind
2017-06-26 22:58 (unknown), Anders Lind
2017-06-26 22:14 (unknown), citydesk
2017-06-26 19:07 (unknown), eremias
2017-06-26 17:51 (unknown), rueggemann
2017-06-26 16:10 (unknown), susan.christian
2017-06-26 15:03 (unknown), richard
2017-06-26 10:22 (unknown), p.mueller-spz-hgw-Mmb7MZpHnFY
2017-06-26 9:15 (unknown), beautyink
2017-06-26 5:21 (unknown), Leon Romanovsky
2017-06-25 20:10 (unknown), h.gerritsen12
2017-06-25 18:13 (unknown), citydesk
2017-06-25 16:49 (unknown), agar2000
2017-06-25 13:23 (unknown), rueggemann
2017-06-25 10:21 (unknown), richard
2017-06-25 5:19 (unknown), nbensoncole81
2017-06-25 5:14 (unknown), archerrp
2017-06-25 4:47 (unknown), h.gerritsen12
2017-06-25 3:57 (unknown), nfrankiyamu
2017-06-25 2:39 (unknown), bflove1-ntQ8I44N4zM
2017-06-24 19:38 (unknown), richard
2017-06-24 15:41 (unknown), benjamin
2017-06-24 15:03 (unknown), archerrp
2017-06-24 12:38 (unknown), redaccion
2017-06-24 11:55 (unknown), natasha.glauser
2017-06-24 8:07 (unknown), j.lahoda-aRb0bU7PRFPrBKCeMvbIDA
2017-06-24 2:32 (unknown), h.gerritsen12
2017-06-24 0:35 (unknown), citydesk
2017-06-24 0:04 (unknown), hastpass
2017-06-23 19:27 (unknown), armouralumni
2017-06-23 17:22 (unknown), richard
2017-06-23 12:26 (unknown), archerrp
2017-06-23 6:09 (unknown), Administrator
2017-06-23 4:50 (unknown), nkosuta-f+iqBESB6gc
2017-06-23 2:49 (unknown), mdavis
2017-06-23 1:43 (unknown), horizon
2017-06-22 20:24 (unknown), koopk
2017-06-22 20:22 (unknown), junplzen
2017-06-22 13:22 (unknown), jeffrey.faulkenberg
2017-06-22 5:49 (unknown), noord-holland
2017-06-22 2:13 (unknown), ecaterinasuciu09
2017-06-21 20:10 (unknown), morice.diane
2017-06-21 7:43 (unknown), koopk
2017-06-21 7:32 (unknown), tjcrewvolcoordinator-Re5JQEeQqe8AvxtiuMwx3w
2017-06-21 6:23 (unknown), chrisbi_anelyst
2017-06-21 6:16 (unknown), angers
2017-06-21 4:40 (unknown), kholloway
2017-06-20 22:49 (unknown), redaccion
2017-06-20 18:45 (unknown), roeper
2017-06-20 17:50 (unknown), editor
2017-06-20 16:31 (unknown), nfrankiyamu
2017-06-20 6:29 (unknown), xa0ajutor
2017-06-20 0:47 (unknown), durrant
2017-06-19 19:58 (unknown), tjcrewvolcoordinator-Re5JQEeQqe8AvxtiuMwx3w
2017-06-19 18:46 (unknown), chrisbi_anelyst
2017-06-19 16:53 (unknown), armouralumni
2017-06-19 9:57 (unknown), anita.traylor
2017-06-19 9:36 (unknown), susan.christian
2017-06-18 14:27 (unknown), xa0ajutor
2017-06-18 13:58 (unknown), membership
2017-06-18 3:09 (unknown), agar2000
2017-06-17 22:46 (unknown), rhsinfo
2017-06-16 22:37 (unknown), kelley
2017-06-16 14:46 (unknown), roeper
2017-06-15 17:35 (unknown), jeffrey.faulkenberg
2017-06-15 14:56 (unknown), john.dahlberg
2017-06-15 13:50 (unknown), pohut00
2017-06-15 8:37 (unknown), ecaterinasuciu09
2017-06-14 22:19 (unknown), muirs
2017-06-14 21:25 (unknown), koopk
2017-06-14 20:41 (unknown), angers
2017-06-14 19:31 (unknown), kholloway
2017-06-14 16:39 (unknown), nfrankiyamu
2017-06-14 12:27 (unknown), board
2017-06-14 12:26 (unknown), sibolt.mulder-b60u5d1xRcFWk0Htik3J/w
2017-06-14 11:42 (unknown), sophie.norman
2017-06-14 10:27 (unknown), susan.christian
2017-06-14 1:06 (unknown), durrant
2017-06-13 21:38 (unknown), douille.l
2017-06-13 11:59 (unknown), susan.christian
2017-06-13 10:15 (unknown), nenep
2017-06-13 9:59 (unknown), lizdebeth_
2017-06-13 9:35 (unknown), wvhyvcm.abyxg
2017-06-13 8:14 (unknown), horizon
2017-06-13 4:53 (unknown), roeper
2017-06-13 4:35 (unknown), ujagu8185-Re5JQEeQqe8AvxtiuMwx3w
2017-06-13 4:22 (unknown), mitch_128
2017-06-12 21:36 (unknown), nbensoncole81
2017-06-12 19:12 (unknown), nhossein4212003
2017-06-12 17:13 (unknown), armiksanaye
2017-06-12 16:44 (unknown), nfrankiyamu
2017-06-12 15:02 (unknown), amin
2017-06-12 10:50 (unknown), sibolt.mulder-b60u5d1xRcFWk0Htik3J/w
2017-06-12 7:28 (unknown), webmaster
2017-06-11 18:16 (unknown), tammyehood
2017-06-11 16:35 (unknown), mitch_128
2017-06-11 7:27 (unknown), roeper
2017-06-11 4:42 (unknown), 1.10.0812112155390.21775
2017-06-11 3:28 (unknown), redaccion
2017-06-11 2:29 (unknown), energi
2017-06-11 0:20 (unknown), service
2017-06-10 21:10 (unknown), mbalhoff
2017-06-10 21:03 (unknown), morice.diane
2017-06-10 20:24 (unknown), board
2017-06-10 14:34 (unknown), kbennett
2017-06-10 13:33 (unknown), iker-KvP5wT2u2U0
2017-06-10 8:23 (unknown), kindergartenchaos2
2017-06-10 7:07 (unknown), Youichi Kanno
2017-06-10 5:53 (unknown), jacqueline.pike
2017-06-10 5:29 (unknown), agiva
2017-06-09 19:04 (unknown), armouralumni
2017-06-09 18:57 (unknown), editor
2017-06-09 17:38 (unknown), nfrankiyamu
2017-06-09 12:45 (unknown), Mrs Alice Walton
2017-06-09 10:47 (unknown), tjcrewvolcoordinator-Re5JQEeQqe8AvxtiuMwx3w
2017-06-09 8:02 (unknown), kholloway
2017-06-09 4:30 (unknown), citydesk
2017-06-09 3:35 (unknown), office
2017-06-09 2:06 (unknown), rueggemann
2017-06-09 1:31 (unknown), durrant
2017-06-09 0:39 (unknown), susan.christian
2017-06-09 0:34 (unknown), richard
2017-06-08 22:14 (unknown), bcohen
2017-06-08 18:00 (unknown), beautyink
2017-06-08 17:59 (unknown), kirola
2017-06-08 17:26 (unknown), natasha.glauser
2017-06-08 15:18 (unknown), junplzen
2017-06-08 14:09 (unknown), service
2017-06-08 13:35 (unknown), Yuval Shaia
2017-06-08 13:07 (unknown), unsubscribe.me
2017-06-08 12:51 (unknown), koopk
2017-06-08 11:31 (unknown), helga.brickl
2017-06-08 5:41 (unknown), Oliver Carter
2017-06-08 5:00 (unknown), noord-holland
2017-06-08 3:14 (unknown), agar2000
2017-06-08 3:14 (unknown), kgbok.kezyhumh
2017-06-07 22:30 (unknown), tammyehood
2017-06-07 21:54 (unknown), agar2000
2017-06-07 14:00 (unknown), 1.10.0812112155390.21775
2017-06-07 11:43 (unknown), nhossein4212003
2017-06-07 7:42 (unknown), morice.diane
2017-06-07 3:19 (unknown), lucia.germino
2017-06-06 23:46 (unknown), mdavis
2017-06-06 20:36 (unknown), dengx
2017-06-06 7:19 (unknown), From Lori J. Robinson
2017-06-06 7:19 (unknown), From Lori J. Robinson
2017-06-06 7:19 (unknown), From Lori J. Robinson
2017-06-06 7:19 (unknown), From Lori J. Robinson
2017-06-05 17:32 (unknown), armouralumni
2017-06-05 5:43 (unknown), h.gerritsen12
2017-06-05 4:30 (unknown), citydesk
2017-06-05 1:08 (unknown), rueggemann
2017-06-05 0:03 (unknown), nmckenna
2017-06-04 19:55 (unknown), archerrp
2017-06-04 10:30 (unknown), Yuval Mintz
2017-06-03 7:17 (unknown), nbensoncole81
2017-06-03 5:45 (unknown), nfrankiyamu
2017-06-02 8:02 (unknown), jessica.jones-PnMVE5gNl/Vkbu+0n/iG1Q
2017-06-02 6:04 (unknown), mari.kayhko
2017-06-01 20:40 (unknown), nbensoncole81
2017-06-01 2:26 (unknown), Dave Airlie
2017-06-01 2:25 (unknown), kbennett
2017-06-01 1:55 (unknown), cdevries
2017-06-01 0:43 (unknown), armouralumni
2017-05-31 14:53 (unknown), tjcrewvolcoordinator-Re5JQEeQqe8AvxtiuMwx3w
2017-05-31 11:36 (unknown), p.mueller-spz-hgw-Mmb7MZpHnFY
2017-05-26 16:33 (unknown), Anderson McEnany
2017-05-26 16:33 (unknown), Anderson McEnany
2017-05-24 16:26 (unknown), natasha.glauser
2017-05-24 0:12 (unknown), bcohen
2017-05-23 22:44 (unknown), noord-holland
2017-05-23 16:29 (unknown), benjamin
2017-05-23 16:24 (unknown), agiva
2017-05-23 9:36 (unknown), bendis.michal
2017-05-23 8:42 (unknown), delaware.orders
2017-05-23 7:38 (unknown), scotte
2017-05-23 4:53 (unknown), nfrankiyamu
2017-05-23 2:19 (unknown), mdavis
2017-05-22 22:32 (unknown), patientcentral
2017-05-22 20:39 (unknown), horizon
2017-05-22 16:10 (unknown), mitch_128
2017-05-22 0:57 (unknown), mari.kayhko
2017-05-21 20:35 (unknown), armiksanaye
2017-05-21 16:36 (unknown), x1kn8fk
2017-05-21 13:56 (unknown), sibolt.mulder-b60u5d1xRcFWk0Htik3J/w
2017-05-21 11:59 (unknown), anita.traylor
2017-05-21 11:38 (unknown), susan.christian
2017-05-21 11:13 (unknown), mariobronti
2017-05-21 9:17 (unknown), jacqueline.pike
2017-05-21 8:55 (unknown), benjamin
2017-05-21 8:55 (unknown), agiva
2017-05-21 8:42 (unknown), brucet
2017-05-20 21:16 (unknown), h.gerritsen12
2017-05-20 20:00 (unknown), citydesk
2017-05-20 18:58 (unknown), office
2017-05-20 17:45 (unknown), counselling-30L6jp03H7UtpYsHHOQ6Llpr/1R2p/CL
2017-05-20 16:22 (unknown), alters
2017-05-20 14:29 (unknown), cv
2017-05-20 12:27 (unknown), ajae
2017-05-20 11:47 (unknown), john.dahlberg
2017-05-20 11:03 (unknown), pohut00
2017-05-20 9:40 (unknown), mgriffit
2017-05-20 8:14 (unknown), ecaterinasuciu09
2017-05-20 1:09 (unknown), board
2017-05-20 0:40 (unknown), sophie.norman
2017-05-20 0:26 (unknown), brian
2017-05-19 16:59 (unknown), zumbalisa
2017-05-19 15:35 (unknown), susan.christian
2017-05-19 14:51 (unknown), citydesk
2017-05-19 13:31 (unknown), office
2017-05-19 12:56 (unknown), kindergartenchaos2
2017-05-19 11:45 (unknown), counselling-30L6jp03H7UtpYsHHOQ6Llpr/1R2p/CL
2017-05-19 6:45 (unknown), j.lahoda-aRb0bU7PRFPrBKCeMvbIDA
2017-05-19 4:32 (unknown), archerrp
2017-05-19 3:34 (unknown), openhackbangalore
2017-05-18 19:22 (unknown), lucia.germino
2017-05-18 16:47 (unknown), susan.christian
2017-05-18 14:13 (unknown), agiva
2017-05-18 13:41 (unknown), alters
2017-05-18 13:40 (unknown), hp
2017-05-17 18:42 (unknown), stef.ryckmans
2017-05-17 13:39 (unknown), J Walker
2017-05-17 10:59 (unknown), anita.traylor
2017-05-17 7:10 (unknown), 1.10.0812112155390.21775
2017-05-16 6:37 (unknown), momofr
2017-05-16 3:06 (unknown), armiksanaye
2017-05-15 23:49 (unknown), morice.diane
2017-05-15 23:19 (unknown), bcohen
2017-05-14 3:19 (unknown), unixkeeper
2017-05-11 1:02 (unknown), info
2017-05-10 7:23 (unknown), kelley
2017-05-04 13:20 (unknown), Steve French
2017-05-01 18:59 [PATCHv2 1/1] IB/ipoib: add get_settings in ethtool Doug Ledford
[not found] ` <1493665155.3041.186.camel-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
2017-05-04 5:24 ` (unknown), Zhu Yanjun
2017-04-29 15:25 (unknown), Dmitry Bazhenov
2017-04-28 9:09 (unknown), administrator
2017-04-28 8:36 (unknown), administrator
2017-04-28 8:36 (unknown), administrator
2017-04-28 8:36 (unknown), administrator
2017-04-28 8:36 (unknown), administrator
2017-04-28 8:20 (unknown), Anatolij Gustschin
This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.