From: Haozhong Zhang <haozhong.zhang@intel.com>
To: Konrad Rzeszutek Wilk <konrad@darnok.org>
Cc: Xiao Guangrong <guangrong.xiao@linux.intel.com>,
	Wei Liu <wei.liu2@citrix.com>,
	Ian Jackson <ian.jackson@eu.citrix.com>,
	xen-devel@lists.xen.org
Subject: Re: [RFC XEN PATCH 16/16] tools/libxl: initiate pmem mapping via qmp callback
Date: Wed, 8 Feb 2017 14:08:02 +0800
Message-ID: <20170208060802.xfledvqoy6yebcrr@hz-desktop>
In-Reply-To: <20170127221322.GL18581@localhost.localdomain>

On 01/27/17 17:13 -0500, Konrad Rzeszutek Wilk wrote:
>On Mon, Oct 10, 2016 at 08:32:35AM +0800, Haozhong Zhang wrote:
>> The QMP command 'query-nvdimms' is used by libxl to get the backend, the
>> guest SPA and the size of each vNVDIMM device; libxl then maps the
>> backend to the guest for each vNVDIMM device.
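
For context: the callback added below only relies on each element of the
reply being a map with 'slot', 'mem-path', 'spa' and 'length' members. An
illustrative exchange, with made-up values (the exact wire format is the
one defined by the companion QEMU patches), would be:

  -> { "execute": "query-nvdimms" }
  <- { "return": [
         { "slot": 0, "mem-path": "/dev/pmem0",
           "spa": 17179869184, "length": 4294967296 } ] }
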
>>
>> Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
>> ---
>> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
>> Cc: Wei Liu <wei.liu2@citrix.com>
>> ---
>>  tools/libxl/libxl_qmp.c | 64 +++++++++++++++++++++++++++++++++++++++++++++++++
>>  1 file changed, 64 insertions(+)
>>
>> diff --git a/tools/libxl/libxl_qmp.c b/tools/libxl/libxl_qmp.c
>> index f8addf9..02edd09 100644
>> --- a/tools/libxl/libxl_qmp.c
>> +++ b/tools/libxl/libxl_qmp.c
>> @@ -26,6 +26,7 @@
>>
>>  #include "_libxl_list.h"
>>  #include "libxl_internal.h"
>> +#include "libxl_nvdimm.h"
>>
>>  /* #define DEBUG_RECEIVED */
>>
>> @@ -1146,6 +1147,66 @@ out:
>>      return rc;
>>  }
>>
>> +static int qmp_register_nvdimm_callback(libxl__qmp_handler *qmp,
>> +                                        const libxl__json_object *o,
>> +                                        void *unused)
>> +{
>> +    GC_INIT(qmp->ctx);
>> +    const libxl__json_object *obj = NULL;
>> +    const libxl__json_object *sub_obj = NULL;
>> +    int i = 0;
>
>unsigned int.

will fix
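
Concretely, the change would just be (sketch, untested):

-    int i = 0;
+    unsigned int i = 0;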

Thanks,
Haozhong

>> +    const char *mem_path;
>> +    uint64_t slot, spa, length;
>> +    int ret = 0;
>> +
>> +    for (i = 0; (obj = libxl__json_array_get(o, i)); i++) {
>> +        if (!libxl__json_object_is_map(obj))
>> +            continue;
>> +
>> +        sub_obj = libxl__json_map_get("slot", obj, JSON_INTEGER);
>> +        slot = libxl__json_object_get_integer(sub_obj);
>> +
>> +        sub_obj = libxl__json_map_get("mem-path", obj, JSON_STRING);
>> +        mem_path = libxl__json_object_get_string(sub_obj);
>> +        if (!mem_path) {
>> +            LOG(ERROR, "No mem-path is specified for NVDIMM #%" PRId64, slot);
>> +            ret = -EINVAL;
>> +            goto out;
>> +        }
>> +
>> +        sub_obj = libxl__json_map_get("spa", obj, JSON_INTEGER);
>> +        spa = libxl__json_object_get_integer(sub_obj);
>> +
>> +        sub_obj = libxl__json_map_get("length", obj, JSON_INTEGER);
>> +        length = libxl__json_object_get_integer(sub_obj);
>> +
>> +        LOG(DEBUG,
>> +            "vNVDIMM #%" PRId64 ": %s, spa 0x%" PRIx64 ", length 0x%" PRIx64,
>> +            slot, mem_path, spa, length);
>> +
>> +        ret = libxl_nvdimm_add_device(gc, qmp->domid, mem_path, spa, length);
>> +        if (ret) {
>> +            LOG(ERROR,
>> +                "Failed to add NVDIMM #%" PRId64
>> +                "(mem_path %s, spa 0x%" PRIx64 ", length 0x%" PRIx64 ") "
>> +                "to domain %d (err = %d)",
>> +                slot, mem_path, spa, length, qmp->domid, ret);
>> +            goto out;
>> +        }
>> +    }
>> +
>> + out:
>> +    GC_FREE;
>> +    return ret;
>> +}
>> +
>> +static int libxl__qmp_query_nvdimms(libxl__qmp_handler *qmp)
>> +{
>> +    return qmp_synchronous_send(qmp, "query-nvdimms", NULL,
>> +                                qmp_register_nvdimm_callback,
>> +                                NULL, qmp->timeout);
>> +}
>> +
>>  int libxl__qmp_hmp(libxl__gc *gc, int domid, const char *command_line,
>>                     char **output)
>>  {
>> @@ -1187,6 +1248,9 @@ int libxl__qmp_initializations(libxl__gc *gc, uint32_t domid,
>>      if (!ret) {
>>          ret = qmp_query_vnc(qmp);
>>      }
>> +    if (!ret && guest_config->num_vnvdimms) {
>> +        ret = libxl__qmp_query_nvdimms(qmp);
>> +    }
>>      libxl__qmp_close(qmp);
>>      return ret;
>>  }
>> --
>> 2.10.1
>>

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
