xen-devel.lists.xenproject.org archive mirror
From: "Jan Beulich" <JBeulich@suse.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Keir Fraser <keir@xen.org>
Subject: [PATCH] x86/domctl: don't waste domain CPUID slot for all zero data
Date: Tue, 22 Mar 2016 06:56:23 -0600
Message-ID: <56F14F0702000078000DF2B4@prv-mh.provo.novell.com>

domain_cpuid() returns all zeroes anyway when it doesn't find a match, so
there's no need to explicitly store such a set of values.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -714,7 +714,7 @@ long arch_do_domctl(
 
     case XEN_DOMCTL_set_cpuid:
     {
-        xen_domctl_cpuid_t *ctl = &domctl->u.cpuid;
+        const xen_domctl_cpuid_t *ctl = &domctl->u.cpuid;
         cpuid_input_t *cpuid, *unused = NULL;
 
         if ( d == currd ) /* no domain_pause() */
@@ -742,7 +742,12 @@ long arch_do_domctl(
 
         domain_pause(d);
 
-        if ( i < MAX_CPUID_INPUT )
+        if ( !(ctl->eax | ctl->ebx | ctl->ecx | ctl->edx) )
+        {
+            if ( i < MAX_CPUID_INPUT )
+                cpuid->input[0] = XEN_CPUID_INPUT_UNUSED;
+        }
+        else if ( i < MAX_CPUID_INPUT )
             *cpuid = *ctl;
         else if ( unused )
             *unused = *ctl;
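
For context, here is a minimal, self-contained sketch (assumed behaviour for
illustration only, not the actual Xen implementation; every "sketch_" name and
constant below is invented) of the lookup property the commit message relies
on: when no stored entry matches, the caller already gets all zeroes back, so
storing an explicit all-zero entry would only consume a table slot without
changing what the guest observes.

/*
 * sketch_cpuid.c - stand-alone illustration, NOT Xen code.
 * All "sketch_" identifiers are invented for this example.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define SKETCH_MAX_INPUT     8
#define SKETCH_INPUT_UNUSED  0xffffffffu   /* stand-in for XEN_CPUID_INPUT_UNUSED */

struct sketch_leaf {
    uint32_t input[2];                     /* leaf, subleaf */
    uint32_t eax, ebx, ecx, edx;
};

static struct sketch_leaf sketch_table[SKETCH_MAX_INPUT];

/* A miss yields all zeroes - the fallback the commit message describes. */
static void sketch_domain_cpuid(uint32_t leaf, uint32_t subleaf,
                                uint32_t *eax, uint32_t *ebx,
                                uint32_t *ecx, uint32_t *edx)
{
    for ( unsigned int i = 0; i < SKETCH_MAX_INPUT; i++ )
    {
        const struct sketch_leaf *ent = &sketch_table[i];

        if ( ent->input[0] == leaf &&
             (ent->input[1] == SKETCH_INPUT_UNUSED || ent->input[1] == subleaf) )
        {
            *eax = ent->eax; *ebx = ent->ebx;
            *ecx = ent->ecx; *edx = ent->edx;
            return;
        }
    }

    *eax = *ebx = *ecx = *edx = 0;
}

int main(void)
{
    uint32_t a, b, c, d;

    /* Start with every slot marked unused, i.e. nothing stored. */
    for ( unsigned int i = 0; i < SKETCH_MAX_INPUT; i++ )
        sketch_table[i].input[0] = sketch_table[i].input[1] = SKETCH_INPUT_UNUSED;

    /*
     * No entry exists for leaf 7 / subleaf 0, yet the result is already all
     * zeroes - storing an explicit all-zero entry would merely waste a slot.
     */
    sketch_domain_cpuid(7, 0, &a, &b, &c, &d);
    printf("%08"PRIx32" %08"PRIx32" %08"PRIx32" %08"PRIx32"\n", a, b, c, d);

    return 0;
}

With the change above, all-zero data either stores nothing at all or marks a
previously matching slot as XEN_CPUID_INPUT_UNUSED, so the table only ever
holds entries that actually alter the defaults.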




Thread overview: 2+ messages
2016-03-22 12:56 Jan Beulich [this message]
2016-03-22 15:10 ` [PATCH] x86/domctl: don't waste domain CPUID slot for all zero data Andrew Cooper
