* network misbehaviour with gplpv and 2.6.30
@ 2009-07-18  3:42 James Harper
  2009-07-18 18:28 ` Andrew Lyon
                   ` (2 more replies)
  0 siblings, 3 replies; 25+ messages in thread
From: James Harper @ 2009-07-18  3:42 UTC (permalink / raw)
  To: xen-devel; +Cc: Andrew Lyon

With GPLPV under 2.6.30, GPLPV gets the following from the ring:

ring slot n (first buffer):
 status (length) = 54 bytes
 offset = 0
 flags = NETRXF_extra_info (possibly csum too but not relevant)
ring slot n + 1 (extra info)
 gso.size (mss) = 1460

Because NETRXF_extra_info is not set, that's all I get for that packet.
In the IP header though, the total length is 1544 (which in itself is a
little strange), but obviously I'm not getting a full packet, just the
ETH+IP+TCP header.
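
For reference, a minimal self-contained sketch of how a frontend consumes
these slots. The struct layout and flag values mirror Xen's public
io/netif.h header; the consumption logic is illustrative, not GPLPV or
Linux netfront source:

#include <stdint.h>
#include <stdio.h>

/* Flag values as in xen/include/public/io/netif.h */
#define NETRXF_data_validated (1 << 0)
#define NETRXF_csum_blank     (1 << 1)
#define NETRXF_more_data      (1 << 2)  /* further slots continue this packet */
#define NETRXF_extra_info     (1 << 3)  /* next slot is extra info, not data */

struct netif_rx_response {
    uint16_t id;
    uint16_t offset;   /* offset of the data within the granted page */
    uint16_t flags;
    int16_t  status;   /* byte count if >= 0, otherwise an error */
};

int main(void)
{
    /* The two slots above: a 54-byte first buffer flagged extra_info
     * but, crucially, not more_data. */
    struct netif_rx_response first = {
        .id = 0, .offset = 0, .flags = NETRXF_extra_info, .status = 54,
    };

    if (first.flags & NETRXF_extra_info)
        printf("slot n+1 carries gso.size (the MSS)\n");
    if (!(first.flags & NETRXF_more_data))
        printf("no continuation slots: packet ends at %d bytes\n",
               first.status);
    return 0;
}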

According to Andrew Lyon it works fine in previous versions, so this
problem only arises on 2.6.30. I don't know if netfront on Linux suffers
from a similar problem.

I can't identify any changes that could cause this, but if the problem
is in netback either the frags count isn't being set correctly, or
skb->cb (which appears to be used temporarily to hold nr_frags) is
becoming corrupt (set to 0) somehow, but the window where this could
occur is very small and I can't see where it could happen.
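
To make that suspicion concrete, here is a sketch of the pattern just
described. It is paraphrased from the behaviour above, not taken from
the 2.6.30 netback source, and the names are hypothetical:

#include <stdint.h>

#define NETRXF_more_data  (1 << 2)
#define NETRXF_extra_info (1 << 3)

/* Stand-in for the kernel's sk_buff; cb is scratch space owned by
 * whichever layer currently holds the skb. */
struct skb_stub {
    char cb[48];
    int  nr_frags;
    int  gso_size;
};

/* Enqueue side: stash the fragment count in cb for later. */
void stash_nr_frags(struct skb_stub *skb)
{
    *(int *)skb->cb = skb->nr_frags;
}

/* Response side: the first ring slot advertises continuation slots only
 * if the stashed count survived.  If anything zeroes cb in the window
 * between the two steps, more_data is never set and the guest sees just
 * the header slot -- exactly the symptom above. */
uint16_t first_slot_flags(const struct skb_stub *skb)
{
    uint16_t flags = skb->gso_size ? NETRXF_extra_info : 0;

    if (*(const int *)skb->cb > 0)
        flags |= NETRXF_more_data;
    return flags;
}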

Any suggestions as to where to start looking?

(one nice thing is that I have identified a crash that would occur when
the IP header lied about its length!)

Thanks

James


* Re: network misbehaviour with gplpv and 2.6.30
  2009-07-18  3:42 network misbehaviour with gplpv and 2.6.30 James Harper
@ 2009-07-18 18:28 ` Andrew Lyon
  2009-07-21  9:35 ` Paul Durrant
  2009-07-21 10:53 ` Nerijus Narmontas
  2 siblings, 0 replies; 25+ messages in thread
From: Andrew Lyon @ 2009-07-18 18:28 UTC (permalink / raw)
  To: James Harper; +Cc: xen-devel

On Sat, Jul 18, 2009 at 4:42 AM, James Harper <james.harper@bendigoit.com.au> wrote:
> With GPLPV under 2.6.30, GPLPV gets the following from the ring:
>
> ring slot n (first buffer):
>  status (length) = 54 bytes
>  offset = 0
>  flags = NETRXF_extra_info (possibly csum too but not relevant)
> ring slot n + 1 (extra info)
>  gso.size (mss) = 1460
>
> Because NETRXF_extra_info is not set, that's all I get for that packet.
> In the IP header though, the total length is 1544 (which in itself is a
> little strange), but obviously I'm not getting a full packet, just the
> ETH+IP+TCP header.
>
> According to Andrew Lyon it works fine in previous versions, so this
> problem only arises on 2.6.30. I don't know if netfront on Linux suffers
> from a similar problem.
>
> I can't identify any changes that could cause this, but if the problem
> is in netback either the frags count isn't being set correctly, or
> skb->cb (which appears to be used temporarily to hold nr_frags) is
> becoming corrupt (set to 0) somehow, but the window where this could
> occur is very small and I can't see where it could happen.
>
> Any suggestions as to where to start looking?
>
> (one nice thing is that I have identified a crash that would occur when
> the IP header lied about its length!)
>
> Thanks
>
> James
>
>

James,

I tried using the 2.6.29 netback.c with 2.6.30. I had to change a
couple of calls to __mod_timer to use mod_timer instead, but after that
it compiles and seems to work normally. It does not, however, get rid
of the problem.
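
For anyone repeating that backport: as I recall (worth verifying against
the two trees), 2.6.30 changed __mod_timer() to take a third argument and
it is no longer meant to be called by drivers, so the old two-argument
calls become plain mod_timer() calls. Illustrative, not the exact
netback.c lines:

/* 2.6.29-era call: */
__mod_timer(&net_timer, jiffies + HZ);

/* 2.6.30 equivalent: */
mod_timer(&net_timer, jiffies + HZ);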

I will keep trying to find the change that caused this problem.

Andy


* Re: network misbehaviour with gplpv and 2.6.30
  2009-07-18  3:42 network misbehaviour with gplpv and 2.6.30 James Harper
  2009-07-18 18:28 ` Andrew Lyon
@ 2009-07-21  9:35 ` Paul Durrant
  2009-07-21 10:05   ` James Harper
  2009-07-21 10:53 ` Nerijus Narmontas
  2 siblings, 1 reply; 25+ messages in thread
From: Paul Durrant @ 2009-07-21  9:35 UTC (permalink / raw)
  To: James Harper; +Cc: xen-devel, Lyon, Andrew

James Harper wrote:
> With GPLPV under 2.6.30, GPLPV gets the following from the ring:
> 
> ring slot n (first buffer):
>  status (length) = 54 bytes
>  offset = 0
>  flags = NETRXF_extra_info (possibly csum too but not relevant)
> ring slot n + 1 (extra info)
>  gso.size (mss) = 1460
> 
> Because NETRXF_extra_info is not set, that's all I get for that packet.

I assume you mean NETRXF_more_data here? Are you saying that ring slot n 
has only NETRXF_extra_info and *not* NETRXF_more_data?

> In the IP header though, the total length is 1544 (which in itself is a
> little strange), but obviously I'm not getting a full packet, just the
> ETH+IP+TCP header.
> 
> According to Andrew Lyon it works fine in previous versions, so this
> problem only arises on 2.6.30. I don't know if netfront on Linux suffers
> from a similar problem.
> 
> I can't identify any changes that could cause this, but if the problem
> is in netback either the frags count isn't being set correctly, or
> skb->cb (which appears to be used temporarily to hold nr_frags) is
> becoming corrupt (set to 0) somehow, but the window where this could
> occur is very small and I can't see where it could happen.
> 
> Any suggestions as to where to start looking?
> 
> (one nice thing is that I have identified a crash that would occur when
> the IP header lied about its length!)
> 
> Thanks
> 
> James
>


-- 
===============================
Paul Durrant, Software Engineer

Citrix Systems (R&D) Ltd.
First Floor, Building 101
Cambridge Science Park
Milton Road
Cambridge CB4 0FY
United Kingdom

TEL: x35957 (+44 1223 225957)
===============================


* RE: network misbehaviour with gplpv and 2.6.30
  2009-07-21  9:35 ` Paul Durrant
@ 2009-07-21 10:05   ` James Harper
  2009-07-21 10:13     ` Paul Durrant
  0 siblings, 1 reply; 25+ messages in thread
From: James Harper @ 2009-07-21 10:05 UTC (permalink / raw)
  To: Paul Durrant; +Cc: xen-devel, Andrew Lyon

> 
> James Harper wrote:
> > With GPLPV under 2.6.30, GPLPV gets the following from the ring:
> >
> > ring slot n (first buffer):
> >  status (length) = 54 bytes
> >  offset = 0
> >  flags = NETRXF_extra_info (possibly csum too but not relevant)
> > ring slot n + 1 (extra info)
> >  gso.size (mss) = 1460
> >
> > Because NETRXF_extra_info is not set, that's all I get for that packet.
> 
> I assume you mean NETRXF_more_data here?

Oops. Yes, that's exactly what I mean.

> Are you saying that ring slot n
> has only NETRXF_extra_info and *not* NETRXF_more_data?
> 

Yes. From the debug I have received from Andrew Lyon, NETRXF_more_data
is _never_ set.

From what Andrew tells me (and it's not unlikely that I misunderstood),
the packets in question come from a physical machine external to the
machine running xen. I can't quite understand how that could be as they
are 'large' packets (>1514 byte total packet length) which should only
be locally originated. Unless he's running with jumbo frames (are you
Andrew?).

I've asked for some more debug info but he's in a different timezone to
me and probably isn't awake yet. I'm less and less inclined to think
that this is actually a problem with GPLPV and more a problem with
netback (or a physical network driver) in 2.6.30, but a tcpdump in Dom0,
HVM without GPLPV and maybe in a Linux DomU should tell us more.

Thanks

James


* Re: network misbehaviour with gplpv and 2.6.30
  2009-07-21 10:05   ` James Harper
@ 2009-07-21 10:13     ` Paul Durrant
  2009-07-21 11:09       ` James Harper
  2009-07-29  9:48       ` Andrew Lyon
  0 siblings, 2 replies; 25+ messages in thread
From: Paul Durrant @ 2009-07-21 10:13 UTC (permalink / raw)
  To: James Harper; +Cc: xen-devel, Lyon, Andrew

James Harper wrote:
> 
>> Are you saying that ring slot n
>> has only NETRXF_extra_info and *not* NETRXF_more_data?
>>
> 
> Yes. From the debug I have received from Andrew Lyon, NETRXF_more_data
> is _never_ set.
> 
> From what Andrew tells me (and it's not unlikely that I misunderstood),
> the packets in question come from a physical machine external to the
> machine running xen. I can't quite understand how that could be as they
> are 'large' packets (>1514 byte total packet length) which should only
> be locally originated. Unless he's running with jumbo frames (are you
> Andrew?).
> 

It's not unusual for h/w drivers to support 'LRO', i.e. they re-assemble 
consecutive in-order TCP segments into a large packet before passing up 
the stack. I believe that these would manifest themselves as TSOs coming 
into the transmit side of netback, just as locally originated large 
packets would.
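
A sketch of that hand-off with stand-in types (the field names follow
Linux's skb_shared_info and Xen's netif_extra_info, but this is an
illustration, not kernel source):

#include <stdint.h>

struct shinfo_stub {
    uint16_t gso_size;   /* MSS of the large packet, e.g. the 1460 above */
    uint16_t gso_segs;
};

struct extra_info_stub {
    uint8_t type;                  /* marks the slot as GSO metadata */
    struct { uint16_t size; } gso;
};

/* Whether the large skb came from local TSO or from a NIC's LRO merge,
 * the transmit side of netback sees the same thing: one big skb with
 * gso_size set, which it advertises to the guest via the extra-info
 * ring slot. */
void fill_gso_slot(struct extra_info_stub *extra,
                   const struct shinfo_stub *shinfo)
{
    extra->type = 1;                     /* illustrative type value */
    extra->gso.size = shinfo->gso_size;  /* the gso.size = 1460 above */
}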

> I've asked for some more debug info but he's in a different timezone to
> me and probably isn't awake yet. I'm less and less inclined to think
> that this is actually a problem with GPLPV and more a problem with
> netback (or a physical network driver) in 2.6.30, but a tcpdump in Dom0,
> HVM without GPLPV and maybe in a Linux DomU should tell us more.
> 

Yes, a tcpdump of what's being passed into netback in dom0 should tell 
us what's happening.

   Paul

-- 
===============================
Paul Durrant, Software Engineer

Citrix Systems (R&D) Ltd.
First Floor, Building 101
Cambridge Science Park
Milton Road
Cambridge CB4 0FY
United Kingdom

TEL: x35957 (+44 1223 225957)
===============================


* Re: network misbehaviour with gplpv and 2.6.30
  2009-07-18  3:42 network misbehaviour with gplpv and 2.6.30 James Harper
  2009-07-18 18:28 ` Andrew Lyon
  2009-07-21  9:35 ` Paul Durrant
@ 2009-07-21 10:53 ` Nerijus Narmontas
  2009-07-21 11:01   ` dom0-cpus problem Pasi Kärkkäinen
  2 siblings, 1 reply; 25+ messages in thread
From: Nerijus Narmontas @ 2009-07-21 10:53 UTC (permalink / raw)
  To: xen-users; +Cc: xen-devel


Hello,
If I set (dom0-cpus 1) in /etc/xen/xend-config.sxp, after I gracefully
shutdown domU, the domain stays in ---s- state.

Is this fixed in 3.4.1-rc8?

Regards,
Nerijus N.


* Re: dom0-cpus problem
  2009-07-21 10:53 ` Nerijus Narmontas
@ 2009-07-21 11:01   ` Pasi Kärkkäinen
  2009-07-22 15:18     ` Nerijus Narmontas
  0 siblings, 1 reply; 25+ messages in thread
From: Pasi Kärkkäinen @ 2009-07-21 11:01 UTC (permalink / raw)
  To: Nerijus Narmontas; +Cc: xen-devel, xen-users

On Tue, Jul 21, 2009 at 01:53:25PM +0300, Nerijus Narmontas wrote:
> Hello,
> If I set (dom0-cpus 1) in /etc/xen/xend-config.sxp, after I gracefully
> shutdown domU, the domain stays in ---s- state.
> 
> Is this fixed in 3.4.1-rc8?
> 

Hello.

Please don't hijack threads - you replied to a thread about network problems
and gplpv drivers. Always start a new thread for new subjects.

What version are you seeing this behaviour with? Xen 3.4.0? What dom0
kernel version?

-- Pasi


* RE: network misbehaviour with gplpv and 2.6.30
  2009-07-21 10:13     ` Paul Durrant
@ 2009-07-21 11:09       ` James Harper
  2009-07-29  9:48       ` Andrew Lyon
  1 sibling, 0 replies; 25+ messages in thread
From: James Harper @ 2009-07-21 11:09 UTC (permalink / raw)
  To: Paul Durrant; +Cc: xen-devel, Andrew Lyon

> James Harper wrote:
> >
> >> Are you saying that ring slot n
> >> has only NETRXF_extra_info and *not* NETRXF_more_data?
> >>
> >
> > Yes. From the debug I have received from Andrew Lyon, NETRXF_more_data
> > is _never_ set.
> >
> > From what Andrew tells me (and it's not unlikely that I misunderstood),
> > the packets in question come from a physical machine external to the
> > machine running xen. I can't quite understand how that could be as they
> > are 'large' packets (>1514 byte total packet length) which should only
> > be locally originated. Unless he's running with jumbo frames (are you
> > Andrew?).
> >
> 
> It's not unusual for h/w drivers to support 'LRO', i.e. they re-assemble
> consecutive in-order TCP segments into a large packet before passing up
> the stack. I believe that these would manifest themselves as TSOs coming
> into the transmit side of netback, just as locally originated large
> packets would.
> 

Interesting. My work with the Windows NDIS framework suggested that this
must be very rare, as I couldn't find a way to make Windows accept 'large'
packets. GPLPV actually has to break up the packets and checksum them.
Checksum is another thing that Windows is very fussy about: the checksum
on rx has to be correct. There is no 'the data is good, don't worry
about the checksum' flag; Windows seems to check it anyway and drop the
packet if it is incorrect.
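
For the curious, the arithmetic involved when a driver segments and
checksums packets itself is the standard RFC 1071 ones'-complement sum.
A minimal sketch (illustrative, not GPLPV source):

#include <stddef.h>
#include <stdint.h>

/* RFC 1071 Internet checksum over a buffer in network byte order. */
uint16_t inet_checksum(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint32_t sum = 0;

    while (len > 1) {
        sum += ((uint32_t)p[0] << 8) | p[1];
        p += 2;
        len -= 2;
    }
    if (len)                       /* odd trailing byte */
        sum += (uint32_t)p[0] << 8;
    while (sum >> 16)              /* fold carries back into the low word */
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;         /* must be right, or Windows drops it */
}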

James 


* Re: dom0-cpus problem
  2009-07-21 11:01   ` dom0-cpus problem Pasi Kärkkäinen
@ 2009-07-22 15:18     ` Nerijus Narmontas
  2009-07-22 15:21       ` dom0-cpus problem with Xen 3.4.1-rc6 Pasi Kärkkäinen
  2009-07-23  9:39       ` [Xen-devel] dom0-cpus problem George Dunlap
  0 siblings, 2 replies; 25+ messages in thread
From: Nerijus Narmontas @ 2009-07-22 15:18 UTC (permalink / raw)
  To: Pasi Kärkkäinen; +Cc: xen-devel, xen-users



On Tue, Jul 21, 2009 at 2:01 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:

> On Tue, Jul 21, 2009 at 01:53:25PM +0300, Nerijus Narmontas wrote:
> > Hello,
> > If I set (dom0-cpus 1) in /etc/xen/xend-config.sxp, after I gracefully
> > shutdown domU, the domain stays in ---s- state.
> >
> > Is this fixed in 3.4.1-rc8?
> >
>
> Hello.
>
> Please don't hijack threads - you replied to a thread about network problems
> and gplpv drivers. Always start a new thread for new subjects.
>
> What version are you seeing this behaviour with? Xen 3.4.0? What dom0
> kernel version?
>
> -- Pasi
>

Sorry for the threads thing.

root@xen1:/# more /etc/xen/xend-config.sxp | grep cpu
# In SMP system, dom0 will use dom0-cpus # of CPUS
# If dom0-cpus = 0, dom0 will take all cpus available
(dom0-cpus 1)

root@xen1:/# xm dmesg | grep Command
(XEN) Command line: console=com2 com2=115200,8n1

root@xen1:/# xm dmesg | grep VCPUs
(XEN) Dom0 has maximum 8 VCPUs

root@xen1:/# xm vcpu-list
Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0                             0     0     5   r--       9.2 any cpu
Domain-0                             0     1     -   --p       1.8 any cpu
Domain-0                             0     2     -   --p       1.7 any cpu
Domain-0                             0     3     -   --p       1.6 any cpu
Domain-0                             0     4     -   --p       1.4 any cpu
Domain-0                             0     5     -   --p       1.4 any cpu
Domain-0                             0     6     -   --p       1.5 any cpu
Domain-0                             0     7     -   --p       1.3 any cpu

root@xen1:/# xm create /etc/xen/dc3.conf
Using config file "/etc/xen/dc3.conf".
Started domain dc3 (id=1)

root@xen1:/# xm vcpu-list
Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0                             0     0     7   r--      36.5 any cpu
Domain-0                             0     1     -   --p       1.8 any cpu
Domain-0                             0     2     -   --p       1.7 any cpu
Domain-0                             0     3     -   --p       1.6 any cpu
Domain-0                             0     4     -   --p       1.4 any cpu
Domain-0                             0     5     -   --p       1.4 any cpu
Domain-0                             0     6     -   --p       1.5 any cpu
Domain-0                             0     7     -   --p       1.3 any cpu
dc3                                  1     0     0   -b-      15.2 0
dc3                                  1     1     1   -b-       6.8 1
dc3                                  1     2     2   -b-       7.5 2
dc3                                  1     3     3   -b-       8.0 3

After the HVM Windows domU shuts down, it stays in the ---s- state.

root@xen1:/# xm li
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0 24106     1     r-----      58.7
dc3                                          1  8192     4     ---s--      59.0

root@xen1:/# xm vcpu-list
Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
Domain-0                             0     0     4   r--      48.4 any cpu
...
Domain-0                             0     7     -   --p       1.3 any cpu
dc3                                  1     0     0   ---      20.0 0
dc3                                  1     1     1   ---      10.9 1
dc3                                  1     2     2   ---      15.2 2
dc3                                  1     3     3   ---      12.9 3

The problem goes away if I tell Xen to boot with options dom0_max_vcpus=1
dom0_vcpus_pin.

What's the difference between using the Xen boot options to limit vcpus
for dom0 and using /etc/xen/xend-config.sxp?

I am running Xen 3.4.1-rc6 version.


* Re: dom0-cpus problem with Xen 3.4.1-rc6
  2009-07-22 15:18     ` Nerijus Narmontas
@ 2009-07-22 15:21       ` Pasi Kärkkäinen
  2009-07-22 16:34         ` Nerijus Narmontas
  2009-07-23  9:39       ` [Xen-devel] dom0-cpus problem George Dunlap
  1 sibling, 1 reply; 25+ messages in thread
From: Pasi Kärkkäinen @ 2009-07-22 15:21 UTC (permalink / raw)
  To: Nerijus Narmontas; +Cc: xen-devel, xen-users

On Wed, Jul 22, 2009 at 06:18:37PM +0300, Nerijus Narmontas wrote:
> [snip: quoted text trimmed]
> 
> I am running Xen 3.4.1-rc6 version.

OK.

What dom0 kernel version are you running? 

-- Pasi


* Re: dom0-cpus problem with Xen 3.4.1-rc6
  2009-07-22 15:21       ` dom0-cpus problem with Xen 3.4.1-rc6 Pasi Kärkkäinen
@ 2009-07-22 16:34         ` Nerijus Narmontas
  2009-07-22 16:39           ` [Xen-devel] " Pasi Kärkkäinen
  0 siblings, 1 reply; 25+ messages in thread
From: Nerijus Narmontas @ 2009-07-22 16:34 UTC (permalink / raw)
  To: Pasi Kärkkäinen; +Cc: xen-devel, xen-users



On Wed, Jul 22, 2009 at 6:21 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:

> [snip: quoted text trimmed]
>
> OK.
>
> What dom0 kernel version are you running?
>
> -- Pasi
>

From Ubuntu hardy-backports repositories 2.6.24-24-xen.


* Re: [Xen-devel] dom0-cpus problem with Xen 3.4.1-rc6
  2009-07-22 16:34         ` Nerijus Narmontas
@ 2009-07-22 16:39           ` Pasi Kärkkäinen
  2009-07-22 16:42             ` Nerijus Narmontas
  0 siblings, 1 reply; 25+ messages in thread
From: Pasi Kärkkäinen @ 2009-07-22 16:39 UTC (permalink / raw)
  To: Nerijus Narmontas; +Cc: xen-devel, xen-users

On Wed, Jul 22, 2009 at 07:34:16PM +0300, Nerijus Narmontas wrote:
> On Wed, Jul 22, 2009 at 6:21 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
> 
> > [snip: quoted text trimmed]
> >
> > OK.
> >
> > What dom0 kernel version are you running?
> >
> > -- Pasi
> >
> 
> From Ubuntu hardy-backports repositories 2.6.24-24-xen.

Maybe the dom0 kernel is your problem.. I remember there was a bug in the
kernel that caused that kind of problem.

That Hardy dom0 kernel is known to have other bugs as well.

If possible, try running the latest linux-2.6.18-xen from xenbits.
Or some other dom0 kernel, and see if that fixes the problem.

-- Pasi


* Re: [Xen-devel] dom0-cpus problem with Xen 3.4.1-rc6
  2009-07-22 16:39           ` [Xen-devel] " Pasi Kärkkäinen
@ 2009-07-22 16:42             ` Nerijus Narmontas
  2009-07-22 17:01               ` Pasi Kärkkäinen
  0 siblings, 1 reply; 25+ messages in thread
From: Nerijus Narmontas @ 2009-07-22 16:42 UTC (permalink / raw)
  To: Pasi Kärkkäinen; +Cc: xen-devel, xen-users



On Wed, Jul 22, 2009 at 7:39 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:

> [snip: quoted text trimmed]
> > >
> > > OK.
> > >
> > > What dom0 kernel version are you running?
> > >
> > > -- Pasi
> > >
> >
> > From Ubuntu hardy-backports repositories 2.6.24-24-xen.
>
> Maybe the dom0 kernel is your problem.. I remember there was a bug in the
> kernel that caused that kind of problem.
>
> That Hardy dom0 kernel is known to have other bugs as well.
>
> If possible, try running the latest linux-2.6.18-xen from xenbits.
> Or some other dom0 kernel, and see if that fixes the problem.
>
> -- Pasi
>

Ok I will try to build the latest 2.6.18 kernel.
Can you tell me what's the difference between Xen boot
option dom0_max_vcpus=1 and (dom0-cpus 1) option
in /etc/xen/xend-config.sxp?


* Re: dom0-cpus problem with Xen 3.4.1-rc6
  2009-07-22 16:42             ` Nerijus Narmontas
@ 2009-07-22 17:01               ` Pasi Kärkkäinen
  2009-07-22 17:08                 ` [Xen-devel] " Pasi Kärkkäinen
                                   ` (2 more replies)
  0 siblings, 3 replies; 25+ messages in thread
From: Pasi Kärkkäinen @ 2009-07-22 17:01 UTC (permalink / raw)
  To: Nerijus Narmontas; +Cc: xen-devel, xen-users

On Wed, Jul 22, 2009 at 07:42:14PM +0300, Nerijus Narmontas wrote:
> On Wed, Jul 22, 2009 at 7:39 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
> 
> > [snip: quoted text trimmed]
> >
> > If possible, try running the latest linux-2.6.18-xen from xenbits.
> > Or some other dom0 kernel, and see if that fixes the problem.
> >
> > -- Pasi
> >
> 
> Ok I will try to build the latest 2.6.18 kernel.

hg clone http://xenbits.xen.org/linux-2.6.18-xen.hg

> Can you tell me what's the difference between Xen boot
> option dom0_max_vcpus=1 and (dom0-cpus 1) option
> in /etc/xen/xend-config.sxp?

If I haven't misunderstood, this dom0-cpus option in xend-config.sxp tells
which physical CPUs/cores the vcpus of dom0 will use..

i.e. if you limit dom0_max_vcpus=1, then you can use dom0-cpus to tell which
one of the 8 available cpus/cores dom0's 1 vcpu will run on.

So you can use that option to dedicate a core for dom0, and then use the
cpus= option for other domains to make them use other cores.. and this way
you'll be able to dedicate a core _only_ for dom0.

But yeah, I don't know why you're seeing problems with shutting down HVM
domains.. sounds like a bug, like I said earlier..

-- Pasi


* Re: [Xen-devel] dom0-cpus problem with Xen 3.4.1-rc6
  2009-07-22 17:01               ` Pasi Kärkkäinen
@ 2009-07-22 17:08                 ` Pasi Kärkkäinen
  2009-07-22 17:55                   ` Keir Fraser
  2009-07-22 17:15                 ` Keir Fraser
  2009-07-22 17:30                 ` dom0-cpus problem with Xen 3.4.1-rc6 / HVM domains don't die Pasi Kärkkäinen
  2 siblings, 1 reply; 25+ messages in thread
From: Pasi Kärkkäinen @ 2009-07-22 17:08 UTC (permalink / raw)
  To: Nerijus Narmontas; +Cc: xen-devel, xen-users

On Wed, Jul 22, 2009 at 08:01:02PM +0300, Pasi Kärkkäinen wrote:
> [snip: quoted text trimmed]
> > >
> > > If possible, try running the latest linux-2.6.18-xen from xenbits.
> > > Or some other dom0 kernel, and see if that fixes the problem.
> > >
> > > -- Pasi
> > >
> > 
> > Ok I will try to build the latest 2.6.18 kernel.
> 
> hg clone http://xenbits.xen.org/linux-2.6.18-xen.hg
> 
> > Can you tell me what's the difference between Xen boot
> > option dom0_max_vcpus=1 and (dom0-cpus 1) option
> > in /etc/xen/xend-config.sxp?
> 
> If I haven't misunderstood, this dom0-cpus option in xend-config.sxp tells
> which physical CPUs/cores the vcpus of dom0 will use..
>
> i.e. if you limit dom0_max_vcpus=1, then you can use dom0-cpus to tell which
> one of the 8 available cpus/cores dom0's 1 vcpu will run on.
>
> So you can use that option to dedicate a core for dom0, and then use the
> cpus= option for other domains to make them use other cores.. and this way
> you'll be able to dedicate a core _only_ for dom0.
> 

http://lists.xensource.com/archives/html/xen-users/2009-06/msg00037.html

Explained better there..

-- Pasi


* Re: dom0-cpus problem with Xen 3.4.1-rc6
  2009-07-22 17:01               ` Pasi Kärkkäinen
  2009-07-22 17:08                 ` [Xen-devel] " Pasi Kärkkäinen
@ 2009-07-22 17:15                 ` Keir Fraser
  2009-07-22 17:29                   ` [Xen-devel] " Pasi Kärkkäinen
  2009-07-22 17:30                 ` dom0-cpus problem with Xen 3.4.1-rc6 / HVM domains don't die Pasi Kärkkäinen
  2 siblings, 1 reply; 25+ messages in thread
From: Keir Fraser @ 2009-07-22 17:15 UTC (permalink / raw)
  To: Pasi Kärkkäinen, Nerijus Narmontas; +Cc: xen-devel, xen-users

On 22/07/2009 18:01, "Pasi Kärkkäinen" <pasik@iki.fi> wrote:

>> Can you tell me what's the difference between Xen boot
>> option dom0_max_vcpus=1 and (dom0-cpus 1) option
>> in /etc/xen/xend-config.sxp?
> 
> If I haven't misunderstood, this dom0-cpus option in xend-config.sxp tells
> which physical CPUs/cores the vcpus of dom0 will use..
> 
> i.e. if you limit dom0_max_vcpus=1, then you can use dom0-cpus to tell which
> one of the 8 available cpus/cores dom0's 1 vcpu will run on.

dom0-cpus does the same as dom0_max_vcpus -- it specifies number of VCPUs
which dom0 should run with. The difference is that dom0_max_vcpus=1 means
that is all that dom0 kernel will detect and boot with: you cannot
subsequently enable more. With (dom0-cpus 1), dom0 will boot with a vcpu for
every host cpu (by default) and then hot-unplug/offline all but one vcpu
when xend starts. The latter is obviously a more complex operation, but
could be reverted (i.e., you could online some of those vcpus at a later
time).
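
The hot-unplug described above goes through the guest kernel's normal
CPU hotplug path. A minimal sketch of toggling a dom0 vcpu that way from
userspace follows; the sysfs interface is standard Linux, and that this
is the mechanism xend drives for dom0-cpus is an assumption on my part:

#include <stdio.h>

/* Offline (0) or online (1) a vcpu via CPU hotplug: the reversible
 * operation, unlike the dom0_max_vcpus=1 boot cap, which cannot be
 * raised later. */
int set_vcpu_online(int cpu, int online)
{
    char path[64];
    FILE *f;

    snprintf(path, sizeof(path),
             "/sys/devices/system/cpu/cpu%d/online", cpu);
    f = fopen(path, "w");
    if (!f)
        return -1;                /* cpu0 is typically not unpluggable */
    fprintf(f, "%d\n", online ? 1 : 0);
    return fclose(f);
}

int main(void)
{
    return set_vcpu_online(1, 0); /* offline vcpu 1, as (dom0-cpus 1) would */
}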

 -- Keir


* Re: [Xen-devel] dom0-cpus problem with Xen 3.4.1-rc6
  2009-07-22 17:15                 ` Keir Fraser
@ 2009-07-22 17:29                   ` Pasi Kärkkäinen
  2009-07-22 17:46                     ` Keir Fraser
  0 siblings, 1 reply; 25+ messages in thread
From: Pasi Kärkkäinen @ 2009-07-22 17:29 UTC (permalink / raw)
  To: Keir Fraser; +Cc: xen-devel, xen-users, Nerijus Narmontas

On Wed, Jul 22, 2009 at 06:15:05PM +0100, Keir Fraser wrote:
> On 22/07/2009 18:01, "Pasi Kärkkäinen" <pasik@iki.fi> wrote:
> 
> >> Can you tell me what's the difference between the Xen boot
> >> option dom0_max_vcpus=1 and the (dom0-cpus 1) option
> >> in /etc/xen/xend-config.sxp?
> > 
> > If I haven't misunderstood, this dom0-cpus option in xend-config.sxp tells
> > which physical CPUs/cores the vcpus of dom0 will use..
> > 
> > i.e. if you limit dom0_max_vcpus=1, then you can use dom0-cpus to tell which
> > one of the 8 available cpus/cores dom0's 1 vcpu will run on.
> 
> dom0-cpus does the same as dom0_max_vcpus -- it specifies the number of VCPUs
> which dom0 should run with. The difference is that dom0_max_vcpus=1 means
> that is all the dom0 kernel will detect and boot with: you cannot
> subsequently enable more. With (dom0-cpus 1), dom0 will boot with a vcpu for
> every host cpu (by default) and then hot-unplug/offline all but one vcpu
> when xend starts. The latter is obviously a more complex operation, but
> could be reverted (i.e., you could online some of those vcpus at a later
> time).
> 

Hmm, so 'dom0-cpus' in xend-config.sxp doesn't limit which physical CPUs
the VCPUs of dom0 can run on?

Then many people have gotten that wrong.. :)

-- Pasi

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: dom0-cpus problem with Xen 3.4.1-rc6 / HVM domains don't die
  2009-07-22 17:01               ` Pasi Kärkkäinen
  2009-07-22 17:08                 ` [Xen-devel] " Pasi Kärkkäinen
  2009-07-22 17:15                 ` Keir Fraser
@ 2009-07-22 17:30                 ` Pasi Kärkkäinen
  2009-07-23 13:56                   ` Re: [Xen-devel] " Pasi Kärkkäinen
  2 siblings, 1 reply; 25+ messages in thread
From: Pasi Kärkkäinen @ 2009-07-22 17:30 UTC (permalink / raw)
  To: Nerijus Narmontas; +Cc: xen-devel, xen-users

On Wed, Jul 22, 2009 at 08:01:02PM +0300, Pasi Kärkkäinen wrote:
> 
> But yeah, I don't know why you're seeing problems with shutting down HVM
> domains.. sounds like a bug, like I said earlier..
> 

And I meant this bug:
http://lists.xensource.com/archives/html/xen-devel/2009-01/msg00004.html

"Domains don't die, they just stay in the 's' state until you 'xm destroy' them"

And a fix/patch to dom0 kernel here:
http://lists.xensource.com/archives/html/xen-devel/2009-01/msg00050.html

-- Pasi

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: dom0-cpus problem with Xen 3.4.1-rc6
  2009-07-22 17:29                   ` [Xen-devel] " Pasi Kärkkäinen
@ 2009-07-22 17:46                     ` Keir Fraser
  2009-07-22 18:03                       ` [Xen-devel] " Pasi Kärkkäinen
  0 siblings, 1 reply; 25+ messages in thread
From: Keir Fraser @ 2009-07-22 17:46 UTC (permalink / raw)
  To: Pasi Kärkkäinen; +Cc: xen-devel, xen-users, Nerijus Narmontas

On 22/07/2009 18:29, "Pasi Kärkkäinen" <pasik@iki.fi> wrote:

>> dom0-cpus does the same as dom0_max_vcpus -- it specifies the number of VCPUs
>> which dom0 should run with. The difference is that dom0_max_vcpus=1 means
>> that is all the dom0 kernel will detect and boot with: you cannot
>> subsequently enable more. With (dom0-cpus 1), dom0 will boot with a vcpu for
>> every host cpu (by default) and then hot-unplug/offline all but one vcpu
>> when xend starts. The latter is obviously a more complex operation, but
>> could be reverted (i.e., you could online some of those vcpus at a later
>> time).
> 
> Hmm, so 'dom0-cpus' in xend-config.sxp doesn't limit which physical CPUs
> the VCPUs of dom0 can run on?

No, there's no way to configure affinity in the xend config file. You'd have
to issue 'xm vcpu-pin' commands after xend is started.
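
For example, to pin dom0's single remaining vcpu to physical cpu 0 once
xend is up (the domain name and cpu numbers are only illustrative):

  xm vcpu-pin Domain-0 0 0
  xm vcpu-list Domain-0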

 -- Keir

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Xen-devel] dom0-cpus problem with Xen 3.4.1-rc6
  2009-07-22 17:08                 ` [Xen-devel] " Pasi Kärkkäinen
@ 2009-07-22 17:55                   ` Keir Fraser
  0 siblings, 0 replies; 25+ messages in thread
From: Keir Fraser @ 2009-07-22 17:55 UTC (permalink / raw)
  To: Pasi Kärkkäinen, Nerijus Narmontas; +Cc: xen-devel, xen-users

On 22/07/2009 18:08, "Pasi Kärkkäinen" <pasik@iki.fi> wrote:

>> So you can use that option to dedicate a core for dom0, and then use the
>> cpus= option for other domains to make them use other cores.. and this way
>> you'll be able to dedicate a core _only_ for dom0.
>> 
> 
> http://lists.xensource.com/archives/html/xen-users/2009-06/msg00037.html
> 
> Explained better there..

The above-cited posting is mostly correct. In particular cpus= in a guest
config does behave as you think, whereas (dom0-cpus 1) will cause dom0 to
enable only one vcpu for itself. However, it is not true that by default
each dom0 vcpu is pinned to its equivalent numbered physical cpu. To get
that behaviour you must either configure it via 'xm vcpu-pin' commands or
specify dom0_vcpus_pin as a Xen boot parameter.
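
For example (cpu numbers are only illustrative):

  # Xen boot line: one dom0 vcpu, pinned to physical cpu 0
  kernel /boot/xen.gz dom0_max_vcpus=1 dom0_vcpus_pin

  # in the domU config file, keep the guest off cpu 0:
  cpus = "1-7"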

 -- Keir

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Xen-devel] dom0-cpus problem with Xen 3.4.1-rc6
  2009-07-22 17:46                     ` Keir Fraser
@ 2009-07-22 18:03                       ` Pasi Kärkkäinen
  0 siblings, 0 replies; 25+ messages in thread
From: Pasi Kärkkäinen @ 2009-07-22 18:03 UTC (permalink / raw)
  To: Keir Fraser; +Cc: xen-devel, xen-users, Nerijus Narmontas

On Wed, Jul 22, 2009 at 06:46:00PM +0100, Keir Fraser wrote:
> On 22/07/2009 18:29, "Pasi Kärkkäinen" <pasik@iki.fi> wrote:
> 
> >> dom0-cpus does the same as dom0_max_vcpus -- it specifies the number of VCPUs
> >> which dom0 should run with. The difference is that dom0_max_vcpus=1 means
> >> that is all the dom0 kernel will detect and boot with: you cannot
> >> subsequently enable more. With (dom0-cpus 1), dom0 will boot with a vcpu for
> >> every host cpu (by default) and then hot-unplug/offline all but one vcpu
> >> when xend starts. The latter is obviously a more complex operation, but
> >> could be reverted (i.e., you could online some of those vcpus at a later
> >> time).
> > 
> > Hmm, so 'dom0-cpus' in xend-config.sxp doesn't limit which physical CPUs
> > the VCPUs of dom0 can run on?
> 
> No, there's no way to configure affinity in the xend config file. You'd have
> to issue 'xm vcpu-pin' commands after xend is started.
> 

Ok, thanks for clarifying that. I was already checking the xend-config.sxp
docs and found that it is indeed correctly described there.

-- Pasi

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Xen-devel] dom0-cpus problem
  2009-07-22 15:18     ` Nerijus Narmontas
  2009-07-22 15:21       ` dom0-cpus problem with Xen 3.4.1-rc6 Pasi Kärkkäinen
@ 2009-07-23  9:39       ` George Dunlap
  2009-07-23 10:03         ` Pasi Kärkkäinen
  1 sibling, 1 reply; 25+ messages in thread
From: George Dunlap @ 2009-07-23  9:39 UTC (permalink / raw)
  To: Nerijus Narmontas; +Cc: xen-devel, xen-users

I didn't see the original question, but the "problem" seems to be that
when using xend, xm vcpu-list still shows 8 vcpus for dom0?

The number of cpus for a domain is assigned at creation; in domain 0's
case, this is at boot, necessarily before xend runs.

I suspect what (dom0-cpus 1) does is tell xend to unplug all cpus
except one, by writing "0" into /sys/.../cpus/[1-7]/online.  This will
tell dom0 to take vcpus 1-7 offline, which will put them in a "paused"
state (as you can see from xm vcpu-list); but they're still registered
to dom0 in Xen, and still available to be brought online at any time.

Setting the boot parameter will change the number of vcpus assigned on
VM creation.
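
A rough sketch of what I mean (whether xend uses exactly this sysfs path is
my guess; on a mainline kernel cpu hotplug goes through
/sys/devices/system/cpu/cpuN/online):

  # offline dom0 vcpus 1-7, as (dom0-cpus 1) would
  for n in $(seq 1 7); do
      echo 0 > /sys/devices/system/cpu/cpu$n/online
  done

  # any of them can be brought back online later:
  echo 1 > /sys/devices/system/cpu/cpu1/online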

 -George

On Wed, Jul 22, 2009 at 4:18 PM, Nerijus Narmontas<n.narmontas@gmail.com> wrote:
> On Tue, Jul 21, 2009 at 2:01 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
>>
>> On Tue, Jul 21, 2009 at 01:53:25PM +0300, Nerijus Narmontas wrote:
>> > Hello,
>> > If I set (dom0-cpus 1) in /etc/xen/xend-config.sxp, after I gracefully
>> > shutdown domU, the domain stays in ---s- state.
>> >
>> > Is this fixed in 3.4.1-rc8?
>> >
>>
>> Hello.
>>
>> Please don't hijack threads - you replied to a thread about network
>> problems
>> and gplpv drivers. Always start a new thread for new subjects.
>>
>> What version are you seeing this behaviour with? Xen 3.4.0 ? What dom0
>> kernel version?
>>
>> -- Pasi
>
> Sorry for the threads thing.
> root@xen1:/# more /etc/xen/xend-config.sxp | grep cpu
> # In SMP system, dom0 will use dom0-cpus # of CPUS
> # If dom0-cpus = 0, dom0 will take all cpus available
> (dom0-cpus 1)
> root@xen1:/# xm dmesg | grep Command
> (XEN) Command line: console=com2 com2=115200,8n1
> root@xen1:/# xm dmesg | grep VCPUs
> (XEN) Dom0 has maximum 8 VCPUs
> root@xen1:/# xm vcpu-list
> Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
> Domain-0                             0     0     5   r--       9.2 any cpu
> Domain-0                             0     1     -   --p       1.8 any cpu
> Domain-0                             0     2     -   --p       1.7 any cpu
> Domain-0                             0     3     -   --p       1.6 any cpu
> Domain-0                             0     4     -   --p       1.4 any cpu
> Domain-0                             0     5     -   --p       1.4 any cpu
> Domain-0                             0     6     -   --p       1.5 any cpu
> Domain-0                             0     7     -   --p       1.3 any cpu
> root@xen1:/# xm create /etc/xen/dc3.conf
> Using config file "/etc/xen/dc3.conf".
> Started domain dc3 (id=1)
> root@xen1:/# xm vcpu-list
> Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
> Domain-0                             0     0     7   r--      36.5 any cpu
> Domain-0                             0     1     -   --p       1.8 any cpu
> Domain-0                             0     2     -   --p       1.7 any cpu
> Domain-0                             0     3     -   --p       1.6 any cpu
> Domain-0                             0     4     -   --p       1.4 any cpu
> Domain-0                             0     5     -   --p       1.4 any cpu
> Domain-0                             0     6     -   --p       1.5 any cpu
> Domain-0                             0     7     -   --p       1.3 any cpu
> dc3                                  1     0     0   -b-      15.2 0
> dc3                                  1     1     1   -b-       6.8 1
> dc3                                  1     2     2   -b-       7.5 2
> dc3                                  1     3     3   -b-       8.0 3
> After HVM Windows domU shutdown, it stays in ---s- state.
> root@xen1:/# xm li
> Name                                        ID   Mem VCPUs      State   Time(s)
> Domain-0                                     0 24106     1     r-----      58.7
> dc3                                          1  8192     4     ---s--      59.0
> root@xen1:/# xm vcpu-list
> Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
> Domain-0                             0     0     4   r--      48.4 any cpu
> ...
> Domain-0                             0     7     -   --p       1.3 any cpu
> dc3                                  1     0     0   ---      20.0 0
> dc3                                  1     1     1   ---      10.9 1
> dc3                                  1     2     2   ---      15.2 2
> dc3                                  1     3     3   ---      12.9 3
> The problem goes away if I tell Xen to boot with options dom0_max_vcpus=1
> dom0_vcpus_pin.
> What's the difference between limiting vcpus for dom0 via the Xen boot
> options and via /etc/xen/xend-config.sxp?
> I am running Xen 3.4.1-rc6.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [Xen-devel] dom0-cpus problem
  2009-07-23  9:39       ` [Xen-devel] dom0-cpus problem George Dunlap
@ 2009-07-23 10:03         ` Pasi Kärkkäinen
  0 siblings, 0 replies; 25+ messages in thread
From: Pasi Kärkkäinen @ 2009-07-23 10:03 UTC (permalink / raw)
  To: George Dunlap; +Cc: xen-devel, xen-users, Nerijus Narmontas

On Thu, Jul 23, 2009 at 10:39:27AM +0100, George Dunlap wrote:
> I didn't see the original question, but the "problem" seems to be that
> when using xend, xm vcpu-list still shows 8 vcpus for dom0?
> 
> The number of cpus for a domain is assigned at creation; in domain 0's
> case, this is at boot, necessarily before xend runs.
> 
> I suspect what (dom0-cpus 1) does is tell xend to unplug all cpus
> except one, by writing "0" into /sys/.../cpus/[1-7]/online.  This will
> tell dom0 to take vcpus 1-7 offline, which will put them in a "paused"
> state (as you can see from xm vcpu-list); but they're still registered
> to dom0 in Xen, and still available to be brought online at any time.
> 
> Setting the boot parameter will change the number of vcpus assigned on
> VM creation.
> 

Yep, thanks for explaining that.

Although the original problem was that when you specify (dom0-cpus 1) you
cannot stop HVM domains anymore - they just get stuck in the 's' state.

I believe it's because the user is running the Ubuntu 2.6.24 kernel in dom0,
which most probably has this bug:

http://lists.xensource.com/archives/html/xen-devel/2009-01/msg00004.html

"Domains don't die, they just stay in the 's' state until you 'xm destroy' them"

And a fix/patch to dom0 kernel here:
http://lists.xensource.com/archives/html/xen-devel/2009-01/msg00050.html

-- Pasi

>  -George
> 
> On Wed, Jul 22, 2009 at 4:18 PM, Nerijus Narmontas<n.narmontas@gmail.com> wrote:
> > On Tue, Jul 21, 2009 at 2:01 PM, Pasi Kärkkäinen <pasik@iki.fi> wrote:
> >>
> >> On Tue, Jul 21, 2009 at 01:53:25PM +0300, Nerijus Narmontas wrote:
> >> > Hello,
> >> > If I set (dom0-cpus 1) in /etc/xen/xend-config.sxp, after I gracefully
> >> > shutdown domU, the domain stays in ---s- state.
> >> >
> >> > Is this fixed in 3.4.1-rc8?
> >> >
> >>
> >> Hello.
> >>
> >> Please don't hijack threads - you replied to a thread about network
> >> problems
> >> and gplpv drivers. Always start a new thread for new subjects.
> >>
> >> What version are you seeing this behaviour with? Xen 3.4.0 ? What dom0
> >> kernel version?
> >>
> >> -- Pasi
> >
> > Sorry for the threads thing.
> > root@xen1:/# more /etc/xen/xend-config.sxp | grep cpu
> > # In SMP system, dom0 will use dom0-cpus # of CPUS
> > # If dom0-cpus = 0, dom0 will take all cpus available
> > (dom0-cpus 1)
> > root@xen1:/# xm dmesg | grep Command
> > (XEN) Command line: console=com2 com2=115200,8n1
> > root@xen1:/# xm dmesg | grep VCPUs
> > (XEN) Dom0 has maximum 8 VCPUs
> > root@xen1:/# xm vcpu-list
> > Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
> > Domain-0                             0     0     5   r--       9.2 any cpu
> > Domain-0                             0     1     -   --p       1.8 any cpu
> > Domain-0                             0     2     -   --p       1.7 any cpu
> > Domain-0                             0     3     -   --p       1.6 any cpu
> > Domain-0                             0     4     -   --p       1.4 any cpu
> > Domain-0                             0     5     -   --p       1.4 any cpu
> > Domain-0                             0     6     -   --p       1.5 any cpu
> > Domain-0                             0     7     -   --p       1.3 any cpu
> > root@xen1:/# xm create /etc/xen/dc3.conf
> > Using config file "/etc/xen/dc3.conf".
> > Started domain dc3 (id=1)
> > root@xen1:/# xm vcpu-list
> > Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
> > Domain-0                             0     0     7   r--      36.5 any cpu
> > Domain-0                             0     1     -   --p       1.8 any cpu
> > Domain-0                             0     2     -   --p       1.7 any cpu
> > Domain-0                             0     3     -   --p       1.6 any cpu
> > Domain-0                             0     4     -   --p       1.4 any cpu
> > Domain-0                             0     5     -   --p       1.4 any cpu
> > Domain-0                             0     6     -   --p       1.5 any cpu
> > Domain-0                             0     7     -   --p       1.3 any cpu
> > dc3                                  1     0     0   -b-      15.2 0
> > dc3                                  1     1     1   -b-       6.8 1
> > dc3                                  1     2     2   -b-       7.5 2
> > dc3                                  1     3     3   -b-       8.0 3
> > After HVM Windows domU shutdown, it stays in ---s- state.
> > root@xen1:/# xm li
> > Name                                        ID   Mem VCPUs      State   Time(s)
> > Domain-0                                     0 24106     1     r-----      58.7
> > dc3                                          1  8192     4     ---s--      59.0
> > root@xen1:/# xm vcpu-list
> > Name                                ID  VCPU   CPU State   Time(s) CPU Affinity
> > Domain-0                             0     0     4   r--      48.4 any cpu
> > ...
> > Domain-0                             0     7     -   --p       1.3 any cpu
> > dc3                                  1     0     0   ---      20.0 0
> > dc3                                  1     1     1   ---      10.9 1
> > dc3                                  1     2     2   ---      15.2 2
> > dc3                                  1     3     3   ---      12.9 3
> > The problem goes away if I tell Xen to boot with options dom0_max_vcpus=1
> > dom0_vcpus_pin.
> > What's the difference between limiting vcpus for dom0 via the Xen boot
> > options and via /etc/xen/xend-config.sxp?
> > I am running Xen 3.4.1-rc6.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: Re: [Xen-devel] dom0-cpus problem with Xen 3.4.1-rc6 / HVM domains don't die
  2009-07-22 17:30                 ` dom0-cpus problem with Xen 3.4.1-rc6 / HVM domains don't die Pasi Kärkkäinen
@ 2009-07-23 13:56                   ` Pasi Kärkkäinen
  0 siblings, 0 replies; 25+ messages in thread
From: Pasi Kärkkäinen @ 2009-07-23 13:56 UTC (permalink / raw)
  To: Nerijus Narmontas; +Cc: xen-devel, xen-users

On Wed, Jul 22, 2009 at 08:30:57PM +0300, Pasi Kärkkäinen wrote:
> On Wed, Jul 22, 2009 at 08:01:02PM +0300, Pasi Kärkkäinen wrote:
> > 
> > But yeah, I don't know why you're seeing problems with shutting down HVM
> > domains.. sounds like a bug, like I said earlier..
> > 
> 
> And I meant this bug:
> http://lists.xensource.com/archives/html/xen-devel/2009-01/msg00004.html
> 
> "Domains don't die, they just stay in the 's' state until you 'xm destroy' them"
> 
> And a fix/patch to dom0 kernel here:
> http://lists.xensource.com/archives/html/xen-devel/2009-01/msg00050.html
> 

And here's the fix/patch in linux-2.6.18-xen.hg:
http://xenbits.xen.org/linux-2.6.18-xen.hg?rev/79e82ae1bad0

-- Pasi

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: network misbehaviour with gplpv and 2.6.30
  2009-07-21 10:13     ` Paul Durrant
  2009-07-21 11:09       ` James Harper
@ 2009-07-29  9:48       ` Andrew Lyon
  1 sibling, 0 replies; 25+ messages in thread
From: Andrew Lyon @ 2009-07-29  9:48 UTC (permalink / raw)
  To: Paul Durrant; +Cc: James Harper, xen-devel

On Tue, Jul 21, 2009 at 11:13 AM, Paul Durrant<paul.durrant@citrix.com> wrote:
> James Harper wrote:
>>
>>> Are you saying that ring slot n
>>> has only NETRXF_extra_info and *not* NETRXF_more_data?
>>>
>>
>> Yes. From the debug I have received from Andrew Lyon, NETRXF_more_data
>> is _never_ set.
>>
>> From what Andrew tells me (and it's not unlikely that I misunderstood),
>> the packets in question come from a physical machine external to the
>> machine running xen. I can't quite understand how that could be as they
>> are 'large' packets (>1514 byte total packet length) which should only
>> be locally originated. Unless he's running with jumbo frames (are you
>> Andrew?).
>>
>
> It's not unusual for h/w drivers to support 'LRO', i.e. they re-assemble
> consecutive in-order TCP segments into a large packet before passing up the
> stack. I believe that these would manifest themselves as TSOs coming into
> the transmit side of netback, just as locally originated large packets
> would.
>
>> I've asked for some more debug info but he's in a different timezone to
>> me and probably isn't awake yet. I'm less and less inclined to think
>> that this is actually a problem with GPLPV and more a problem with
>> netback (or a physical network driver) in 2.6.30, but a tcpdump in Dom0,
>> HVM without GPLPV and maybe in a Linux DomU should tell us more.
>>
>
> Yes, a tcpdump of what's being passed into netback in dom0 should tell us
> what's happening.
>
>  Paul
>


I did more testing, including running various wireshark captures which
James looked at. The problem is not the gplpv drivers, as it also affects
the Linux pv netfront driver; it seems to be a dom0 problem. Packets
arrive with frame.len < 72 but ip.len > 72, which of course causes
terrible throughput in domU networking, and it also crashed the gplpv
drivers until James added a check for the condition (see
http://xenbits.xensource.com/ext/win-pvdrivers.hg?rev/0436238bcda5);
now it triggers a warning message instead, for example:

XenNet     XN_HDR_SIZE + ip4_length (2974) > total_length (54)
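
If anyone wants to spot these frames in a dom0 capture, the same
frame.len/ip.len condition can be used as a filter on a live capture
(assuming tshark is available in dom0; peth0 as in my setup):

  tshark -i peth0 -R "frame.len < 72 and ip.len > 72"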

Yesterday I noticed something quite interesting: if I switch off
receive checksum offloading on the dom0 nic (ethtool -K peth0 rx off),
the network performance in domU is much improved. Something is still
wrong, though, because some network performance tests are still very
slow, and a different warning message is triggered in the XenNet
driver:

Now the really strange thing is that if I re-enable rx checksum
offload (ethtool -K peth0 rx on), everything works perfectly:
networking throughput is the same as with 2.6.29, and no warning
messages are triggered in the XenNet driver.

The dom0 NIC is an Intel 82575EB. I have tried both the 1.3.16-k2
driver included in 2.6.30 and the 1.3.19.3 driver downloaded from
Intel's support site; I will try another NIC if I can find one.

I don't understand how toggling rx offload off and on can fix the
problem, but it does.
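
For the record, the exact toggle sequence (peth0 as in my setup; ethtool -k
just shows the current offload state):

  ethtool -k peth0          # show current offload settings
  ethtool -K peth0 rx off   # disable rx checksum offload
  ethtool -K peth0 rx on    # re-enable it; after this, throughput is normal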

Andy

^ permalink raw reply	[flat|nested] 25+ messages in thread

Thread overview: 25+ messages
2009-07-18  3:42 network misbehaviour with gplpv and 2.6.30 James Harper
2009-07-18 18:28 ` Andrew Lyon
2009-07-21  9:35 ` Paul Durrant
2009-07-21 10:05   ` James Harper
2009-07-21 10:13     ` Paul Durrant
2009-07-21 11:09       ` James Harper
2009-07-29  9:48       ` Andrew Lyon
2009-07-21 10:53 ` Nerijus Narmontas
2009-07-21 11:01   ` dom0-cpus problem Pasi Kärkkäinen
2009-07-22 15:18     ` Nerijus Narmontas
2009-07-22 15:21       ` dom0-cpus problem with Xen 3.4.1-rc6 Pasi Kärkkäinen
2009-07-22 16:34         ` Nerijus Narmontas
2009-07-22 16:39           ` [Xen-devel] " Pasi Kärkkäinen
2009-07-22 16:42             ` Nerijus Narmontas
2009-07-22 17:01               ` Pasi Kärkkäinen
2009-07-22 17:08                 ` [Xen-devel] " Pasi Kärkkäinen
2009-07-22 17:55                   ` Keir Fraser
2009-07-22 17:15                 ` Keir Fraser
2009-07-22 17:29                   ` [Xen-devel] " Pasi Kärkkäinen
2009-07-22 17:46                     ` Keir Fraser
2009-07-22 18:03                       ` [Xen-devel] " Pasi Kärkkäinen
2009-07-22 17:30                 ` dom0-cpus problem with Xen 3.4.1-rc6 / HVM domains don't die Pasi Kärkkäinen
2009-07-23 13:56                   ` Re: [Xen-devel] " Pasi Kärkkäinen
2009-07-23  9:39       ` [Xen-devel] dom0-cpus problem George Dunlap
2009-07-23 10:03         ` Pasi Kärkkäinen
