From: Sage Weil <sage@newdream.net>
To: Willem Jan Withagen <wjw@digiware.nl>
Cc: Ceph Development <ceph-devel@vger.kernel.org>
Subject: Re: OSD crashing in ~Job()
Date: Mon, 27 Feb 2017 14:54:11 +0000 (UTC) [thread overview]
Message-ID: <alpine.DEB.2.11.1702271452560.7782@piezo.novalocal> (raw)
In-Reply-To: <fc507396-656c-c2a5-8995-cf142ec01636@digiware.nl>
On Mon, 27 Feb 2017, Willem Jan Withagen wrote:
> Hi,
>
> Once every 10 runs, test-erasure-code.sh is terminated with a timeout
> during my FreeBSD tests.
>
> It trips an assert in:
> /usr/srcs/Ceph/work/ceph/src/osd/OSDMapMapping.h:31
> class ParallelPGMapper {
> public:
>   struct Job {
>     ....
>     virtual ~Job() {
>       assert(shards == 0);
>     }
>   };
> };
>
> Anybody have any suggestions on what this could be, and/or how to start
> debugging it?
I'm guessing this fixes it: https://github.com/ceph/ceph/pull/13574
sage
>
> --WjW
>
>
> 2017-02-27 14:15:28.034683 b134480 -1
> /usr/srcs/Ceph/work/ceph/src/osd/OSDMapMapping.h: In function 'virtual
> ParallelPGMapper::Job::~Job()' thread b134480 time 2017-02-27 14:15:27.975342
> /usr/srcs/Ceph/work/ceph/src/osd/OSDMapMapping.h: 31: FAILED
> assert(shards == 0)
>
> ceph version Development (no_version)
> 1: <ceph::__ceph_assert_fail(char const*, char const*, int, char
> const*)+0xb21> at /usr/srcs/Ceph/work/ceph/build/bin/ceph-mon
> 2: <ParallelPGMapper::Job::~Job(void)+0x50> at
> /usr/srcs/Ceph/work/ceph/build/bin/ceph-mon
> 3: <OSDMapMapping::MappingJob::~MappingJob(void)+0x15> at
> /usr/srcs/Ceph/work/ceph/build/bin/ceph-mon
> 4: <OSDMapMapping::MappingJob::~MappingJob(void)+0x19> at
> /usr/srcs/Ceph/work/ceph/build/bin/ceph-mon
> 5:
> <OSDMonitor::encode_pending(std::__1::shared_ptr<MonitorDBStore::Transaction>)+0xefd>
> at /usr/srcs/Ceph/work/ceph/build/bin/ceph-mon
> 6: <PaxosService::propose_pending(void)+0x91d> at
> /usr/srcs/Ceph/work/ceph/build/bin/ceph-mon
> 7:
> <PaxosService::dispatch(boost::intrusive_ptr<MonOpRequest>)::C_Propose::finish(int)+0x3a>
> at /usr/srcs/Ceph/work/ceph/build/bin/ceph-mon
> 8: <Context::complete(int)+0x22> at
> /usr/srcs/Ceph/work/ceph/build/bin/ceph-mon
> 9: <SafeTimer::timer_thread(void)+0x8d7> at
> /usr/srcs/Ceph/work/ceph/build/bin/ceph-mon
> 10: <SafeTimerThread::entry(void)+0x19> at
> /usr/srcs/Ceph/work/ceph/build/bin/ceph-mon
> 11: <Thread::entry_wrapper(void)+0xc6> at
> /usr/srcs/Ceph/work/ceph/build/bin/ceph-mon
> 12: <Thread::_entry_func(void*)+0x15> at
> /usr/srcs/Ceph/work/ceph/build/bin/ceph-mon
> NOTE: a copy of the executable, or `objdump -rdS <executable>` is
> needed to interpret this.
>
Thread overview: 4+ messages
2017-02-27 14:37 OSD crashing in ~Job() Willem Jan Withagen
2017-02-27 14:54 ` Sage Weil [this message]
2017-02-27 15:24 ` Gregory Farnum
2017-02-28 8:03 ` Willem Jan Withagen