* multiple BLK-MQ queues for Ceph's RADOS Block Device (RBD) and CephFS
@ 2020-07-13 18:40 Bobby
  2020-07-14 21:16 ` [Ceph-qa] " Ilya Dryomov
  0 siblings, 1 reply; 4+ messages in thread
From: Bobby @ 2020-07-13 18:40 UTC (permalink / raw)
  To: ceph-users, dev, ceph-devel-u79uwXL29TY76Z2rM5mHXA, ceph-qa-a8pt6IJUokc

Hi,

I have a question regarding support for multiple BLK-MQ queues in Ceph's
RADOS Block Device (RBD). The link below says that the driver has been
using the blk-mq interface for a while, but until now without multiple
queues; it now allocates a queue per CPU. There is also a change to not
hold onto caps that aren't actually needed. These improvements and more
are part of the Ceph changes for Linux 5.7, which should be released as
stable in early June.

https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.7-Ceph-Performance

My question is: Is it possible that through CephFS's FUSE (Filesystem
in Userspace) client, ceph-fuse, I can develop a multi-queue driver for
Ceph? Asking because this way I can avoid kernel space.
(https://docs.ceph.com/docs/nautilus/start/quick-cephfs/)

Looking forward to some help

BR
Bobby


* Re: [Ceph-qa] multiple BLK-MQ queues for Ceph's RADOS Block Device (RBD) and CephFS
  2020-07-13 18:40 multiple BLK-MQ queues for Ceph's RADOS Block Device (RBD) and CephFS Bobby
@ 2020-07-14 21:16 ` Ilya Dryomov
       [not found]   ` <CA+xD70Neac2hpzu-Tg7s+1NCDegwzKs-zdTk8DYTWZPjNaexaA@mail.gmail.com>
  0 siblings, 1 reply; 4+ messages in thread
From: Ilya Dryomov @ 2020-07-14 21:16 UTC (permalink / raw)
  To: Bobby; +Cc: dev, Ceph Development

On Mon, Jul 13, 2020 at 8:40 PM Bobby <italienisch1987@gmail.com> wrote:
>
>
> Hi,
>
> I have a question regarding support for multiple BLK-MQ queues in Ceph's RADOS Block Device (RBD). The link below says that the driver has been using the blk-mq interface for a while, but until now without multiple queues; it now allocates a queue per CPU. There is also a change to not hold onto caps that aren't actually needed. These improvements and more are part of the Ceph changes for Linux 5.7, which should be released as stable in early June.
>
> https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.7-Ceph-Performance
>
> My question is: Is it possible that through CephFS's FUSE (Filesystem in Userspace) client, ceph-fuse, I can develop a multi-queue driver for Ceph? Asking because this way I can avoid kernel space. (https://docs.ceph.com/docs/nautilus/start/quick-cephfs/)

[ trimming CCs to dev and ceph-devel ]

Hi Bobby,

I'm not sure what you mean by a "multi-queue driver for Ceph".
blk-mq is the block layer framework; it has nothing to do with
filesystems, whether local, sitting on top of a block device (such
as ext4 or XFS), or distributed, sitting on top of a network stack
(such as CephFS).
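
To illustrate what the framework looks like from a driver's point of
view, here is a minimal, hypothetical skeleton (the names my_queue_rq,
my_mq_ops, etc. and the queue depth are made up, and this is not krbd
code):

    #include <linux/module.h>
    #include <linux/blk-mq.h>
    #include <linux/numa.h>
    #include <linux/cpumask.h>

    /* Hypothetical request handler: a real driver would submit the
     * request to its backend here (krbd turns it into OSD requests). */
    static blk_status_t my_queue_rq(struct blk_mq_hw_ctx *hctx,
                                    const struct blk_mq_queue_data *bd)
    {
            blk_mq_start_request(bd->rq);
            /* ... hand bd->rq to the transport, then complete it ... */
            blk_mq_end_request(bd->rq, BLK_STS_OK);
            return BLK_STS_OK;
    }

    static const struct blk_mq_ops my_mq_ops = {
            .queue_rq = my_queue_rq,
    };

    static struct blk_mq_tag_set my_tag_set;

    static int my_driver_init(void)
    {
            my_tag_set.ops          = &my_mq_ops;
            my_tag_set.nr_hw_queues = num_online_cpus(); /* queue per CPU */
            my_tag_set.queue_depth  = 128;               /* arbitrary */
            my_tag_set.numa_node    = NUMA_NO_NODE;
            my_tag_set.flags        = BLK_MQ_F_SHOULD_MERGE;

            if (blk_mq_alloc_tag_set(&my_tag_set))
                    return -ENOMEM;
            /* blk_mq_init_queue(&my_tag_set) would then create the
             * request queue behind the block device. */
            return 0;
    }

The rbd change in 5.7 essentially amounts to raising nr_hw_queues from
1 to the number of CPUs; the framework and the queue_rq model stay the
same.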

Do you have a specific project in mind or are you just looking to
make ceph-fuse faster?

Thanks,

                Ilya


* Re: [Ceph-qa] multiple BLK-MQ queues for Ceph's RADOS Block Device (RBD) and CephFS
       [not found]   ` <CA+xD70Neac2hpzu-Tg7s+1NCDegwzKs-zdTk8DYTWZPjNaexaA@mail.gmail.com>
@ 2020-07-15 19:04     ` Ilya Dryomov
       [not found]       ` <CA+xD70MyfKgn5m3=JvgFfg+Ww=T8eJ2b9bogdbS4ogYAifNCTw@mail.gmail.com>
  0 siblings, 1 reply; 4+ messages in thread
From: Ilya Dryomov @ 2020-07-15 19:04 UTC (permalink / raw)
  To: Bobby; +Cc: dev, Ceph Development

On Wed, Jul 15, 2020 at 12:47 AM Bobby <italienisch1987@gmail.com> wrote:
>
>
>
> Hi Ilya,
>
> Thanks for the reply. It's basically both, i.e. I currently have a specific project and I am also looking to make ceph-fuse faster.
>
> But for now, let me ask the project-specific question. In the project I have to write a blk-mq kernel driver for the Ceph client machine. The Ceph client machine will transfer the data to an HBA or, let's say, any embedded device.

What is a "Ceph client machine"?

A Ceph client (or more specifically a RADOS client) speaks the RADOS
protocol and transfers data to OSD daemons.  It can't transfer data
directly to a physical device because something has to take care of
replication, ensure consistency, provide self-healing, etc.  This is
the job of the OSD.
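
To make that concrete, here is a minimal (untested) librados sketch of
a client writing an object; the pool "mypool" and object "myobject" are
placeholders and most error handling is omitted:

    #include <rados/librados.h>
    #include <string.h>

    int main(void)
    {
            rados_t cluster;
            rados_ioctx_t io;
            const char *data = "hello";

            /* Connect using the default ceph.conf and credentials. */
            if (rados_create(&cluster, NULL) < 0)
                    return 1;
            rados_conf_read_file(cluster, NULL);
            if (rados_connect(cluster) < 0)
                    return 1;

            /* Every write goes to the OSDs serving this pool; the
             * client never touches the physical devices directly. */
            rados_ioctx_create(cluster, "mypool", &io);
            rados_write(io, "myobject", data, strlen(data), 0);

            rados_ioctx_destroy(io);
            rados_shutdown(cluster);
            return 0;
    }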

>
> My hope is that there is an alternative, and that alternative is to not implement a blk-mq kernel driver but instead do the work in userspace. I am trying to avoid writing a blk-mq kernel driver and yet achieve the multi-queue implementation through userspace. Is it possible?
>
> Also, AFAIK, Ceph's block storage implementation uses a client module, and this client module has two implementations: librbd (user space) and krbd (kernel module). I have not gone deep into these client modules, but can librbd help me with this?

I guess I don't understand the goal of your project.  A multi-queue
implementation of what exactly?  A Ceph block device, a Ceph filesystem
or something else entirely?  It would help if you were more specific
because "a multi-queue driver for Ceph" is really vague.

Thanks,

                Ilya


* Re: multiple BLK-MQ queues for Ceph's RADOS Block Device (RBD) and CephFS
       [not found]       ` <CA+xD70MyfKgn5m3=JvgFfg+Ww=T8eJ2b9bogdbS4ogYAifNCTw@mail.gmail.com>
@ 2020-07-16 20:11         ` Jason Dillaman
  0 siblings, 0 replies; 4+ messages in thread
From: Jason Dillaman @ 2020-07-16 20:11 UTC (permalink / raw)
  To: Bobby; +Cc: Ilya Dryomov, dev, ceph-devel

On Thu, Jul 16, 2020 at 3:19 PM Bobby <italienisch1987@gmail.com> wrote:
>
> Hi,
>
> I completely agree with what you said regarding the Ceph client. This is exactly my understanding of a Ceph client.
>
> And regarding blk-mq, I meant for a block device. A multi-queue implementation of a block device.

krbd is a blk-mq implementation of a block device. So is the nbd block
device driver, which can be combined with rbd-nbd (or any other NBD
server) to utilize librbd in user space.
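
For instance, a minimal (untested) librbd sketch; the pool "rbd" and
image "myimage" are placeholders and error checking is omitted:

    #include <rados/librados.h>
    #include <rbd/librbd.h>
    #include <string.h>

    int main(void)
    {
            rados_t cluster;
            rados_ioctx_t io;
            rbd_image_t image;
            const char *data = "hello";

            rados_create(&cluster, NULL);
            rados_conf_read_file(cluster, NULL);
            rados_connect(cluster);
            rados_ioctx_create(cluster, "rbd", &io);

            /* Open an existing image and write to it entirely from
             * userspace; no kernel block device is involved. */
            rbd_open(io, "myimage", &image, NULL);
            rbd_write(image, 0, strlen(data), data);
            rbd_close(image);

            rados_ioctx_destroy(io);
            rados_shutdown(cluster);
            return 0;
    }

With rbd-nbd the same librbd I/O path can be exposed as a kernel block
device: "rbd-nbd map rbd/myimage" attaches the image to a /dev/nbdX
node while all the Ceph logic stays in userspace.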

> On Wednesday, July 15, 2020, Ilya Dryomov <idryomov@gmail.com> wrote:
> > On Wed, Jul 15, 2020 at 12:47 AM Bobby <italienisch1987@gmail.com> wrote:
> >>
> >>
> >>
> >> Hi Ilya,
> >>
> >> Thanks for the reply. It's basically both, i.e. I currently have a specific project and I am also looking to make ceph-fuse faster.
> >>
> >> But for now, let me ask the project-specific question. In the project I have to write a blk-mq kernel driver for the Ceph client machine. The Ceph client machine will transfer the data to an HBA or, let's say, any embedded device.
> >
> > What is a "Ceph client machine"?
> >
> > A Ceph client (or more specifically a RADOS client) speaks the RADOS
> > protocol and transfers data to OSD daemons.  It can't transfer data
> > directly to a physical device because something has to take care of
> > replication, ensure consistency, provide self-healing, etc.  This is
> > the job of the OSD.
> >
> >>
> >> My hope is that there is an alternative, and that alternative is to not implement a blk-mq kernel driver but instead do the work in userspace. I am trying to avoid writing a blk-mq kernel driver and yet achieve the multi-queue implementation through userspace. Is it possible?
> >>
> >> Also, AFAIK, Ceph's block storage implementation uses a client module, and this client module has two implementations: librbd (user space) and krbd (kernel module). I have not gone deep into these client modules, but can librbd help me with this?
> >
> > I guess I don't understand the goal of your project.  A multi-queue
> > implementation of what exactly?  A Ceph block device, a Ceph filesystem
> > or something else entirely?  It would help if you were more specific
> > because "a multi-queue driver for Ceph" is really vague.
> >
> > Thanks,
> >
> >                 Ilya



-- 
Jason

