* Ceph and bonnie++
From: Smets, Jan (Jan) @ 2010-10-29 13:37 UTC (permalink / raw)
  To: ceph-devel

Hi

ceph version 0.23~rc (commit:a869b35abdab37bd4505f435bf0f7ab1860b28cc)

client0:/mnt/ceph# bonnie -s 40 -r 10 -u root -f
Using uid:0, gid:0.
Writing intelligently...done
Rewriting...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...Expected 16384 files but only got 0
Cleaning up test directory after error.
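
In case it helps to isolate things, the same sequential create-then-stat
pattern can be exercised without bonnie++. This is only a rough sketch
(directory and file names are arbitrary, not bonnie's exact scheme):

    cd /mnt/ceph && mkdir seqtest && cd seqtest
    # create 16384 empty files sequentially
    for i in $(seq 0 16383); do : > "f$i"; done
    # read them back; if readdir is the culprit, the count may
    # also come up short of 16384
    ls -1 | wc -l
    for i in $(seq 0 16383); do stat "f$i" > /dev/null; done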


Any suggestions? There was a thread about this some time ago: 

http://www.mail-archive.com/ceph-devel@lists.sourceforge.net/msg00170.html


Thanks!


* Re: Ceph and bonnie++
From: Gregory Farnum @ 2010-10-29 19:49 UTC (permalink / raw)
  To: Smets, Jan (Jan); +Cc: ceph-devel

On Fri, Oct 29, 2010 at 6:37 AM, Smets, Jan (Jan)
<jan.smets@alcatel-lucent.com> wrote:
> client0:/mnt/ceph# bonnie -s 40 -r 10 -u root -f
> Using uid:0, gid:0.
> Writing intelligently...done
> Rewriting...done
> Reading intelligently...done
> start 'em...done...done...done...
> Create files in sequential order...done.
> Stat files in sequential order...Expected 16384 files but only got 0
> Cleaning up test directory after error.
>
>
> Any suggestions? There was a thread about this some time ago:
Are you running this with one or many MDSes? It should be fine under a
single MDS.
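
(One quick way to check, assuming the ceph tool can reach a monitor; a
sketch, and the exact output format varies by version:

    ceph mds stat

which prints the mdsmap summary, i.e. how many MDSes are up, active,
and standby.)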

bonnie++ is one of the workloads we've had issues with on a multi-MDS
system, although I thought we had it working at this point. I'll run
our tests again now and see if I can reproduce it locally.
-Greg


* RE: Ceph and bonnie++
From: Smets, Jan (Jan) @ 2010-10-30 12:34 UTC (permalink / raw)
  To: Gregory Farnum; +Cc: ceph-devel

3 servers, each with 2 OSDs, 1 MON and 1 MDS.
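
Roughly this layout in ceph.conf (hostnames are placeholders; the real
file has more in it):

    [mon.0]
            host = server0
    [mds.server0]
            host = server0
    [osd.0]
            host = server0
    [osd.1]
            host = server0
    ; ...same pattern on server1 (mon.1, mds.server1, osd.2, osd.3)
    ; and server2 (mon.2, mds.server2, osd.4, osd.5)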

- Jan 

-----Original Message-----
From: gfarnum@gmail.com [mailto:gfarnum@gmail.com] On Behalf Of Gregory Farnum
Sent: Friday, 29 October 2010 21:50
To: Smets, Jan (Jan)
Cc: ceph-devel@vger.kernel.org
Subject: Re: Ceph and bonnie++

On Fri, Oct 29, 2010 at 6:37 AM, Smets, Jan (Jan) <jan.smets@alcatel-lucent.com> wrote:
> client0:/mnt/ceph# bonnie -s 40 -r 10 -u root -f
> Using uid:0, gid:0.
> Writing intelligently...done
> Rewriting...done
> Reading intelligently...done
> start 'em...done...done...done...
> Create files in sequential order...done.
> Stat files in sequential order...Expected 16384 files but only got 0 
> Cleaning up test directory after error.
>
>
> Any suggestions? There was a thread about this some time ago:
Are you running this with one or many MDSes? It should be fine under a single MDS.

bonnie++ is one of the workloads we've had issues with on a multi-MDS
system, although I thought we had it working at this point. I'll run our tests again now and see if I can reproduce it locally.
-Greg


* RE: Ceph and bonnie++
From: Sage Weil @ 2010-10-30 16:02 UTC (permalink / raw)
  To: Smets, Jan (Jan); +Cc: Gregory Farnum, ceph-devel

On Sat, 30 Oct 2010, Smets, Jan (Jan) wrote:
> 3 servers, each with 2 OSDs, 1 MON and 1 MDS.

Which version of the kernel client are you using?  It's possible this is
related to efa4c120, which loosened the locking around satisfying readdir
requests from the dcache.
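
(If you want to confirm your client tree actually contains it, a sketch,
assuming a checkout of the client kernel source:

    git show --stat efa4c120        # view the commit and the files it touches
    git branch --contains efa4c120  # list branches that include it

an abbreviated hash is fine as long as it's unambiguous.)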

sage



> 
> - Jan 
> 
> -----Original Message-----
> From: gfarnum@gmail.com [mailto:gfarnum@gmail.com] On Behalf Of Gregory Farnum
Sent: Friday, 29 October 2010 21:50
> To: Smets, Jan (Jan)
> Cc: ceph-devel@vger.kernel.org
> Subject: Re: Ceph and bonnie++
> 
> On Fri, Oct 29, 2010 at 6:37 AM, Smets, Jan (Jan) <jan.smets@alcatel-lucent.com> wrote:
> > client0:/mnt/ceph# bonnie -s 40 -r 10 -u root -f
> > Using uid:0, gid:0.
> > Writing intelligently...done
> > Rewriting...done
> > Reading intelligently...done
> > start 'em...done...done...done...
> > Create files in sequential order...done.
> > Stat files in sequential order...Expected 16384 files but only got 0 
> > Cleaning up test directory after error.
> >
> >
> > Any suggestions? There was a thread about this some time ago:
> Are you running this with one or many MDSes? It should be fine under a single MDS.
> 
> bonnie++ is one of the workloads we've had issues with on a multi-MDS
> system, although I thought we had it working at this point. I'll run our tests again now and see if I can reproduce it locally.
> -Greg


* RE: Ceph and bonnie++
From: Smets, Jan (Jan) @ 2010-10-30 18:06 UTC (permalink / raw)
  To: Sage Weil; +Cc: Gregory Farnum, ceph-devel

2.6.36+ for the servers (a git snapshot from some time last week)

2.6.36-final for the clients

I'll back out that commit and see if the problem goes away.
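
Roughly (just a sketch of the steps; the tree path is mine):

    cd ~/src/linux-2.6.36       # the client kernel tree
    git revert efa4c120         # back out the readdir/dcache change
    make fs/ceph/               # quick compile check of the ceph bits;
                                # then rebuild/install modules and reboot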

- Jan


-----Original Message-----
From: Sage Weil [mailto:sage@newdream.net] 
Sent: Saturday, 30 October 2010 18:02
To: Smets, Jan (Jan)
Cc: Gregory Farnum; ceph-devel@vger.kernel.org
Subject: RE: Ceph and bonnie++

On Sat, 30 Oct 2010, Smets, Jan (Jan) wrote:
> 3 servers, each with 2 OSDs, 1 MON and 1 MDS.

Which version of the kernel client are you using?  It's possible this is related to efa4c120, which loosened the locking around satisfying readdir requests from the dcache.

sage



> 
> - Jan
> 
> -----Original Message-----
> From: gfarnum@gmail.com [mailto:gfarnum@gmail.com] On Behalf Of Gregory Farnum
> Sent: Friday, 29 October 2010 21:50
> To: Smets, Jan (Jan)
> Cc: ceph-devel@vger.kernel.org
> Subject: Re: Ceph and bonnie++
> 
> On Fri, Oct 29, 2010 at 6:37 AM, Smets, Jan (Jan) <jan.smets@alcatel-lucent.com> wrote:
> > client0:/mnt/ceph# bonnie -s 40 -r 10 -u root -f
> > Using uid:0, gid:0.
> > Writing intelligently...done
> > Rewriting...done
> > Reading intelligently...done
> > start 'em...done...done...done...
> > Create files in sequential order...done.
> > Stat files in sequential order...Expected 16384 files but only got 0 
> > Cleaning up test directory after error.
> >
> >
> > Any suggestions? There was a thread about this some time ago:
> Are you running this with one or many MDSes? It should be fine under a single MDS.
> 
> bonnie++ is one of the workloads we've had issues with on a multi-MDS
> system, although I thought we had it working at this point. I'll run our tests again now and see if I can reproduce it locally.
> -Greg


Thread overview: 5 messages
2010-10-29 13:37 Ceph and bonnie++ Smets, Jan (Jan)
2010-10-29 19:49 ` Gregory Farnum
2010-10-30 12:34   ` Smets, Jan (Jan)
2010-10-30 16:02     ` Sage Weil
2010-10-30 18:06       ` Smets, Jan (Jan)
