From: Luse, Paul E <paul.e.luse@intel.com>
To: spdk@lists.01.org
Subject: [SPDK] Re: Could not write synchronously to blobfs
Date: Thu, 07 Jan 2021 22:21:01 +0000	[thread overview]
Message-ID: <BYAPR11MB3831AD8F5E442D2C6AB35BFDADAF0@BYAPR11MB3831.namprd11.prod.outlook.com> (raw)
In-Reply-To: 20210107220654.2833.35071@ml01.vlan13.01.org


Hi Toan,

Thanks for reaching out, and welcome to SPDK! I know it seems like a quick question, but I think we'd get to the bottom of this more quickly if you entered an issue/sighting at https://github.com/spdk/spdk/issues. That way we can get more config info and keep all the Q&A associated with your specific issue in one place.

Thanks!
Paul

From: toan.d.le3@gmail.com <toan.d.le3@gmail.com>
Date: Thursday, January 7, 2021 at 3:07 PM
To: spdk@lists.01.org <spdk@lists.01.org>
Subject: [SPDK] Could not write synchronously to blobfs
I am new to SPDK, and I am trying to use the synchronous write API (spdk_file_write). After creating a file successfully (spdk_fs_open_file) on an NVMe drive, writing to the file causes a crash; I have attached the call stack below.
The asynchronous blobfs functions seem to work fine on the same system.
Any suggestions would be much appreciated.
//Toan Le

Configuration:
OS: CentOS Linux release 7.9.2009
Kernel: 5.1.6-1.el7.elrepo.x86_64
SPDK: 20.07, DPDK 19.11.3
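
For reference, a minimal sketch of the synchronous write path being described, assuming `fs` was obtained from the usual blobfs load callback and that these calls run on a dedicated application pthread rather than on an SPDK reactor thread (the function name `write_hello` is illustrative; the SPDK calls themselves are the real 20.07 synchronous blobfs API):

```c
/* Minimal sketch of the synchronous blobfs write path (SPDK 20.07 API).
 * Assumes `fs` came from spdk_fs_load() and that this runs on a dedicated
 * application pthread, not on an SPDK reactor thread. */
#include "spdk/blobfs.h"

static void
write_hello(struct spdk_filesystem *fs)
{
	struct spdk_fs_thread_ctx *ctx;
	struct spdk_file *file;
	int rc;

	/* Per-thread context required by the synchronous blobfs calls. */
	ctx = spdk_fs_alloc_thread_ctx(fs);

	rc = spdk_fs_open_file(fs, ctx, "/test1.txt",
			       SPDK_BLOBFS_OPEN_CREATE, &file);
	if (rc == 0) {
		/* Synchronous write of 11 bytes at offset 0 (cf. frame #6). */
		rc = spdk_file_write(file, ctx, "hello world", 0, 11);
		spdk_file_close(file, ctx);
	}

	spdk_fs_free_thread_ctx(ctx);
}
```

Note also that frame #3 of the trace shows `spdk_mempool_get (mp=0x0)`, i.e. the blobfs cache buffer pool appears to be NULL at write time; one thing worth checking is whether the cache was configured (e.g. via `spdk_fs_set_cache_size()`) before the first synchronous write.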


Call stack
#0  rte_mempool_default_cache (mp=<optimized out>, mp=<optimized out>,
    lcore_id=<optimized out>) at /opt/dpdk/dpdk-19.11.3/build/include/rte_mempool.h:1260
No locals.
#1  rte_mempool_get_bulk (n=1, obj_table=0x7fa6728910b8, mp=0x0)
    at /opt/dpdk/dpdk-19.11.3/build/include/rte_mempool.h:1538
No locals.
#2  rte_mempool_get (obj_p=0x7fa6728910b8, mp=0x0)
    at /opt/dpdk/dpdk-19.11.3/build/include/rte_mempool.h:1565
No locals.
#3  spdk_mempool_get (mp=0x0) at env.c:264
        ele = <optimized out>
#4  0x0000000000425a37 in cache_insert_buffer (file=file@entry=0x7fa66cd40870, offset=0)
    at blobfs.c:2174
        buf = 0x7fa66cd3f470
        count = 0
        need_update = false
        __func__ = <error reading variable __func__ (Cannot access memory at address 0x5110c0)>
#5  0x0000000000427c78 in cache_append_buffer (file=0x7fa66cd40870) at blobfs.c:2210
        last = <optimized out>
#6  spdk_file_write (file=0x7fa66cd40870, ctx=0x7fa66cd404b0,
    payload=payload@entry=0x1262229, offset=offset@entry=0, length=11) at blobfs.c:2490
        channel = <optimized out>
        rem_length = <optimized out>
        copy = <optimized out>
        cluster_sz = <optimized out>
        cache_buffers_filled = 0
        cur_payload = <optimized out>
        last = <optimized out>
#7  0x000000000040a76b in write_file (cli_context=0x1262100) at blobfs_sync.c:314
        name = 0x1262338 "/test1.txt"
        file = 0x7fa66cd40870
        rc = <optimized out>
#8  load_blobfs_cb (cb_arg=0x1262100, fs=<optimized out>, fserrno=<optimized out>)
    at blobfs_sync.c:421
        fserrno = 0
        fs = <optimized out>
        cb_arg = 0x1262100
        cli_context = 0x1262100
#9  0x0000000000425290 in fs_load_done (ctx=0x1263740, bserrno=<optimized out>)
    at blobfs.c:724
        req = 0x1263740
        args = 0x1263740
        fs = <optimized out>
#10 0x000000000043378e in bs_request_set_complete (set=<optimized out>) at request.c:90
        cpl = {type = SPDK_BS_CPL_TYPE_BS_HANDLE, u = {bs_basic = {
              cb_fn = 0x427750 <load_cb>, cb_arg = 0x1263740}, bs_handle = {
              cb_fn = 0x427750 <load_cb>, cb_arg = 0x1263740, bs = 0x128dac0}, blob_basic = {
              cb_fn = 0x427750 <load_cb>, cb_arg = 0x1263740}, blobid = {
              cb_fn = 0x427750 <load_cb>, cb_arg = 0x1263740, blobid = 19454656},
            blob_handle = {cb_fn = 0x427750 <load_cb>, cb_arg = 0x1263740,
              blob = 0x128dac0}, nested_seq = {cb_fn = 0x427750 <load_cb>,
              cb_arg = 0x1263740, parent = 0x128dac0}}}
        bserrno = <optimized out>
#11 0x0000000000433a8c in bs_sequence_finish (seq=<optimized out>, bserrno=bserrno@entry=0)
    at request.c:256
No locals.
#12 0x0000000000431660 in bs_load_iter (arg=0x12a0200, blob=<optimized out>, bserrno=0)
    at blobstore.c:3519
        ctx = 0x12a0200
#13 0x000000000042f496 in bs_iter_cpl (cb_arg=0x7fa66cd3e8f0, _blob=<optimized out>,
    bserrno=<optimized out>) at blobstore.c:7076
        ctx = 0x7fa66cd3e8f0
        bs = 0x128dac0
#14 0x000000000042852a in bdev_blob_io_complete (bdev_io=0x2000072b7a80,
    success=<optimized out>, arg=<optimized out>) at blob_bdev.c:86
        cb_args = <optimized out>
        bserrno = <optimized out>
#15 0x000000000044dd47 in nvme_complete_request (qpair=0x200006c10ad8, cpl=0x200006c0d030,
    req=0x200015cab280, cb_arg=<optimized out>, cb_fn=<optimized out>)
    at nvme_internal.h:1048
        err_cpl = {cdw0 = 1921586144, rsvd1 = 32678, sqhd = 4896, sqid = 29321, cid = 32678,
          {status_raw = 0, status = {p = 0, sc = 0, sct = 0, rsvd2 = 0, m = 0, dnr = 0}}}
        cmd = 0x8020c
#16 nvme_pcie_qpair_complete_tracker (qpair=qpair@entry=0x200006c10ad8, tr=0x2000004fd000,
    cpl=cpl@entry=0x200006c0d030, print_on_error=print_on_error@entry=true)
    at nvme_pcie.c:1402
        req = 0x200015cab280
        retry = <optimized out>
        error = <optimized out>
        req_from_current_proc = true
#17 0x000000000044dfe8 in nvme_pcie_qpair_process_completions (qpair=0x200006c10ad8,
    max_completions=64) at nvme_pcie.c:2434
        tr = <optimized out>
        cpl = 0x200006c0d030
        next_cpl = <optimized out>
        num_completions = 0
        ctrlr = 0x200006c17840
        next_cq_head = <optimized out>
        next_phase = <optimized out>
        next_is_valid = false
        __func__ = <error reading variable __func__ (Cannot access memory at address 0x514fc0)>
#18 0x00000000004542dc in spdk_nvme_qpair_process_completions (
    qpair=qpair@entry=0x200006c10ad8, max_completions=max_completions@entry=0)
    at nvme_qpair.c:710
        ret = <optimized out>
        req = <optimized out>
        tmp = <optimized out>
        __func__ = <error reading variable __func__ (Cannot access memory at address 0x516100)>
#19 0x000000000044c2cc in nvme_pcie_poll_group_process_completions (tgroup=0x12a00b0,
    completions_per_qpair=0,
    disconnected_qpair_cb=0x40d240 <bdev_nvme_disconnected_qpair_cb>) at nvme_pcie.c:2526
        qpair = 0x200006c10ad8
        tmp_qpair = 0x0
        local_completions = <optimized out>
        total_completions = 0
#20 0x00000000004583f5 in nvme_transport_poll_group_process_completions (
    tgroup=tgroup@entry=0x12a00b0, completions_per_qpair=completions_per_qpair@entry=0,
    disconnected_qpair_cb=disconnected_qpair_cb@entry=0x40d240 <bdev_nvme_disconnected_qpair_cb>) at nvme_transport.c:499
        qpair = 0x40d240 <bdev_nvme_disconnected_qpair_cb>
        rc = <optimized out>
#21 0x0000000000462972 in spdk_nvme_poll_group_process_completions (group=<optimized out>,
    completions_per_qpair=completions_per_qpair@entry=0,
    disconnected_qpair_cb=disconnected_qpair_cb@entry=0x40d240 <bdev_nvme_disconnected_qpair_cb>) at nvme_poll_group.c:127
        tgroup = 0x12a00b0
        local_completions = <optimized out>
        error_reason = 0
        num_completions = 0
#22 0x000000000040d1d7 in bdev_nvme_poll (arg=0x129ff10) at bdev_nvme.c:248
        group = 0x129ff10
        num_completions = <optimized out>
#23 0x000000000048ceed in thread_poll (now=404406211862183, max_msgs=0, thread=0x1263350)
    at thread.c:602
        poller_rc = <optimized out>
        poller = 0x129ff50
        tmp = 0x0
        critical_msg = <optimized out>
        rc = 0
#24 spdk_thread_poll (thread=thread@entry=0x1263350, max_msgs=max_msgs@entry=0,
    now=404406211862183) at thread.c:689
No locals.
#25 0x000000000048ac18 in _reactor_run (reactor=0x7fa66cd3e7c0) at reactor.c:326
        thread = 0x1263350
        tmp = 0x0
        lw_thread = 0x1263678
        now = <optimized out>
        rc = <optimized out>
#26 reactor_run (arg=0x7fa66cd3e7c0) at reactor.c:382
        thread = <optimized out>
        lw_thread = <optimized out>
        tmp = <optimized out>
        thread_name = "reactor_0\000\030\004\000\000\000\000\300\356(\004\000\000\000\000\330\016\071\004\000\000\000"
        __func__ = <error reading variable __func__ (Cannot access memory at address 0x520c61)>
#27 0x000000000048b0d1 in spdk_reactors_start () at reactor.c:477
        reactor = <optimized out>
        tmp_cpumask = {str = '\000' <repeats 256 times>, cpus = '\000' <repeats 127 times>}
        i = 4294967295
        current_core = 0
        rc = <optimized out>
        thread_name = "\340o\352\004\000\000\000\000\370\217\372\004\000\000\000\000\020\260\005\000\000\000\000(\320\032\005\000\000\000"
        __func__ = <error reading variable __func__ (Cannot access memory at address 0x520c70)>
#28 0x00000000004894d0 in spdk_app_start (opts=0x7ffd8307fe70,
    start_fn=start_fn@entry=0x40a2e0 <initialize_spdk_ready>, arg1=arg1@entry=0x1262100)
    at app.c:710
        config = <optimized out>
        rc = <optimized out>
        tty = <optimized out>
        tmp_cpumask = {str = '\000' <repeats 256 times>,
          cpus = "\001", '\000' <repeats 126 times>}
        g_env_was_setup = false
        __func__ = <error reading variable __func__ (Cannot access memory at address 0x5209f2)>
#29 0x000000000040a2b6 in initialize_spdk (arg=0x1262100) at blobfs_sync.c:708
        cli_context = 0x1262100
        opts = <optimized out>
        rc = <optimized out>
#30 0x00007fa6713d0ea5 in start_thread () from /lib64/libpthread.so.0
No symbol table info available.
#31 0x00007fa6710f996d in clone () from /lib64/libc.so.6
_______________________________________________
SPDK mailing list -- spdk@lists.01.org
To unsubscribe send an email to spdk-leave@lists.01.org

Thread overview: 2+ messages
2021-01-07 22:21 Luse, Paul E [this message]
2021-01-11 20:07 [SPDK] Re: Could not write synchronously to blobfs toan.d.le3
