* understand ceph packets on the wire
@ 2017-02-03 23:18 Ming Lin
  2017-02-04  2:02 ` LIU, Fei
  2017-02-06 11:18 ` kefu chai
  0 siblings, 2 replies; 5+ messages in thread
From: Ming Lin @ 2017-02-03 23:18 UTC (permalink / raw)
  To: Ceph Development

Hi,

With a one-monitor, one-OSD test setup (osd pool default size = 1), I
captured the packets with tshark.
pcap.out file: https://ufile.io/43c78
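
For reference, a capture of this sort can be taken with something like
the line below; the interface name and the port choices are assumptions
(Ceph monitors default to TCP 6789, OSDs to the 6800-7300 range), not
necessarily what was used here:

# capture mon + OSD traffic on the test interface
tshark -i eth0 -f "tcp port 6789 or tcp portrange 6800-7300" -w pcap.out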

The test I did was:
dd if=tmp.txt bs=512 count=1 of=/dev/rbd0 oflag=direct

About 350 packets were captured for just the simple "dd" command above.

If you open the file with Wireshark, you'll see that packet #41 is the data packet.
"192.168.122.1" is the OSD and "192.168.122.131" is the rbd client.

What are the other packets?
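
One way to start answering that from the capture itself is to filter
the conversation down by endpoint; a sketch, using the addresses above:

tshark -r pcap.out -Y "ip.addr == 192.168.122.131 && ip.addr == 192.168.122.1"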

Thanks.


* Re: understand ceph packets on the wire
  2017-02-03 23:18 understand ceph packets on the wire Ming Lin
@ 2017-02-04  2:02 ` LIU, Fei
  2017-02-05 23:36   ` Ming Lin
  2017-02-06 11:18 ` kefu chai
  1 sibling, 1 reply; 5+ messages in thread
From: LIU, Fei @ 2017-02-04  2:02 UTC (permalink / raw)
  To: Ming Lin, Ceph Development

Hi Ming,
   Very interesting. For some reason we cannot download the file. What kind of data is tshark able to capture?

  Regards,
  James

This email and its attachments contain confidential information from Alibaba Group, which is intended only for the person or entity whose address is listed above. Any use of the information contained herein in any way (including, but not limited to, total or partial disclosure, reproduction, or dissemination) by persons other than the intended recipient(s) is prohibited. If you receive this email in error, please notify the sender by phone or email immediately and delete it.

On 2/3/17, 3:18 PM, "Ming Lin" <ceph-devel-owner@vger.kernel.org on behalf of minggr@gmail.com> wrote:

    Hi,
    
    With a one-monitor, one-OSD test setup (osd pool default size = 1), I
    captured the packets with tshark.
    pcap.out file: https://ufile.io/43c78
    
    The test I did was:
    dd if=tmp.txt bs=512 count=1 of=/dev/rbd0 oflag=direct
    
    About 350 packets were captured for just the simple "dd" command above.
    
    If you open the file with Wireshark, you'll see that packet #41 is the data packet.
    "192.168.122.1" is the OSD and "192.168.122.131" is the rbd client.
    
    What are the other packets?
    
    Thanks.

* Re: understand ceph packets on the wire
  2017-02-04  2:02 ` LIU, Fei
@ 2017-02-05 23:36   ` Ming Lin
  0 siblings, 0 replies; 5+ messages in thread
From: Ming Lin @ 2017-02-05 23:36 UTC (permalink / raw)
  To: LIU, Fei; +Cc: Ceph Development

On Fri, Feb 3, 2017 at 6:02 PM, LIU, Fei <james.liu@alibaba-inc.com> wrote:
> Hi Ming,
>    Very interesting, for some reason , we can not download the file. Wondering what kind of data the tshark is able to capture?

Just like tcpdump, wireshark/tshark can capture all packets that go
through the network interface.
The interesting thing is that wireshark/tshark has a Ceph packet dissector:

https://github.com/wireshark/wireshark/blob/master/epan/dissectors/packet-ceph.c
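
So, assuming a build with that dissector enabled, something like the
following should print only the decoded Ceph messages from the capture
(a sketch, not tested against this particular file):

tshark -r pcap.out -Y ceph -O ceph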

>
>   Regards,
>   James


* Re: understand ceph packets on the wire
  2017-02-03 23:18 understand ceph packets on the wire Ming Lin
  2017-02-04  2:02 ` LIU, Fei
@ 2017-02-06 11:18 ` kefu chai
  2017-02-06 21:14   ` Ming Lin
  1 sibling, 1 reply; 5+ messages in thread
From: kefu chai @ 2017-02-06 11:18 UTC (permalink / raw)
  To: Ming Lin; +Cc: Ceph Development

On Sat, Feb 4, 2017 at 7:18 AM, Ming Lin <minggr@gmail.com> wrote:
> Hi,
>
> With a one-monitor, one-OSD test setup (osd pool default size = 1), I
> captured the packets with tshark.
> pcap.out file: https://ufile.io/43c78
>
> The test I did was:
> dd if=tmp.txt bs=512 count=1 of=/dev/rbd0 oflag=direct
>
> About 350 packets were captured for just the simple "dd" command above.
>
> If you open the file with Wireshark, you'll see that packet #41 is the data packet.
> "192.168.122.1" is the OSD and "192.168.122.131" is the rbd client.
>
> What are the other packets?

You can set "debug-ms=1" (and probably "log_file" to a writable path),
then check the log.
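
A sketch of what that might look like; the daemon name "osd.0" and the
exact option spellings here are assumptions based on common usage:

# raise messenger debugging on a running OSD
ceph tell osd.0 injectargs '--debug-ms 1'

# or persistently, via ceph.conf:
#   [global]
#   debug ms = 1
#   log file = /var/log/ceph/$name.log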

>
> Thanks.



-- 
Regards
Kefu Chai


* Re: understand ceph packets on the wire
  2017-02-06 11:18 ` kefu chai
@ 2017-02-06 21:14   ` Ming Lin
  0 siblings, 0 replies; 5+ messages in thread
From: Ming Lin @ 2017-02-06 21:14 UTC (permalink / raw)
  To: kefu chai; +Cc: Ceph Development

On Mon, Feb 6, 2017 at 3:18 AM, kefu chai <tchaikov@gmail.com> wrote:
> On Sat, Feb 4, 2017 at 7:18 AM, Ming Lin <minggr@gmail.com> wrote:
>> Hi,
>>
>> With a one-monitor, one-OSD test setup (osd pool default size = 1), I
>> captured the packets with tshark.
>> pcap.out file: https://ufile.io/43c78
>>
>> The test I did was:
>> dd if=tmp.txt bs=512 count=1 of=/dev/rbd0 oflag=direct
>>
>> About 350 packets were captured for just the simple "dd" command above.
>>
>> If you open the file with Wireshark, you'll see that packet #41 is the data packet.
>> "192.168.122.1" is the OSD and "192.168.122.131" is the rbd client.
>>
>> What are the other packets?
>
> you can set "debug-ms=1" and probably set "log_file" to a writable
> path, then check the log.

Got it. The other packets were actually from systemd-udevd doing reads
in the background.
I added a debug printout in the krbd driver. Here is the trace.

[   90.206486] CPU: 0 PID: 1382 Comm: systemd-udevd Tainted: G     OE   4.10.0-rc6+ #291

[   90.206488] Call Trace:
[   90.206493]  dump_stack+0x63/0x89
[   90.206497]  rbd_queue_rq+0x85/0x90 [rbd]
[   90.206499]  blk_mq_dispatch_rq_list+0xdb/0x1f0
[   90.206500]  blk_mq_process_rq_list+0x122/0x140
[   90.206501]  __blk_mq_run_hw_queue+0xae/0xc0
[   90.206502]  blk_mq_run_hw_queue+0x95/0xc0
[   90.206503]  blk_mq_insert_requests+0xdc/0x1c0
[   90.206504]  blk_mq_flush_plug_list+0x127/0x140
[   90.206506]  blk_flush_plug_list+0xc7/0x220
[   90.206507]  blk_finish_plug+0x2c/0x40
[   90.206510]  __do_page_cache_readahead+0x19c/0x240
[   90.206511]  force_page_cache_readahead+0xa2/0x100
[   90.206512]  page_cache_sync_readahead+0x3f/0x50
[   90.206514]  generic_file_read_iter+0x579/0x7c0
[   90.206517]  ? _copy_to_user+0x2d/0x40
[   90.206519]  blkdev_read_iter+0x37/0x40
[   90.206521]  __vfs_read+0xbd/0x110
[   90.206522]  vfs_read+0x8c/0x130
[   90.206524]  SyS_read+0x46/0xa0
[   90.206525]  ? SyS_lseek+0x87/0xb0
[   90.206528]  entry_SYSCALL_64_fastpath+0x1e/0xad
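
For completeness: a trace like the above can be produced by a one-line
debug hunk along these lines (the exact placement inside rbd_queue_rq()
is a guess from the trace, not the actual change):

--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ ... @@ static int rbd_queue_rq(struct blk_mq_hw_ctx *hctx,
+	/* print the call chain for every request krbd queues */
+	dump_stack();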


Thread overview: 5+ messages
2017-02-03 23:18 understand ceph packets on the wire Ming Lin
2017-02-04  2:02 ` LIU, Fei
2017-02-05 23:36   ` Ming Lin
2017-02-06 11:18 ` kefu chai
2017-02-06 21:14   ` Ming Lin
