* hammer tasks in http://tracker.ceph.com/projects/ceph-releases
@ 2015-03-22  8:54 Loic Dachary
  2015-03-22 16:16 ` Yuri Weinstein
  0 siblings, 1 reply; 10+ messages in thread
From: Loic Dachary @ 2015-03-22  8:54 UTC (permalink / raw)
  To: Sage Weil; +Cc: Ceph Development

Hi Sage,

You have created a few hammer-related tasks at http://tracker.ceph.com/projects/ceph-releases/issues . What did you have in mind?

Cheers

-- 
Loïc Dachary, Artisan Logiciel Libre


* Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
  2015-03-22  8:54 hammer tasks in http://tracker.ceph.com/projects/ceph-releases Loic Dachary
@ 2015-03-22 16:16 ` Yuri Weinstein
  2015-03-22 23:50   ` Loic Dachary
  2015-03-23  0:35   ` Loic Dachary
  0 siblings, 2 replies; 10+ messages in thread
From: Yuri Weinstein @ 2015-03-22 16:16 UTC (permalink / raw)
  To: Loic Dachary; +Cc: Sage Weil, Ceph Development

Loic, I think the idea was to take a more process-driven approach to releasing hammer, e.g. keep track of suites vs. results and open issues, so we can have a high-level view of the status at any time before the final cut day.

Do you have any suggestions or objections?

Thx
YuriW

----- Original Message -----
From: "Loic Dachary" <loic@dachary.org>
To: "Sage Weil" <sweil@redhat.com>
Cc: "Ceph Development" <ceph-devel@vger.kernel.org>
Sent: Sunday, March 22, 2015 1:54:06 AM
Subject: hammer tasks in http://tracker.ceph.com/projects/ceph-releases

Hi Sage,

You have created a few hammer related tasks at http://tracker.ceph.com/projects/ceph-releases/issues . What did you have in mind ?

Cheers

-- 
Loïc Dachary, Artisan Logiciel Libre

* Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
  2015-03-22 16:16 ` Yuri Weinstein
@ 2015-03-22 23:50   ` Loic Dachary
  2015-03-23  0:35   ` Loic Dachary
  1 sibling, 0 replies; 10+ messages in thread
From: Loic Dachary @ 2015-03-22 23:50 UTC (permalink / raw)
  To: Yuri Weinstein; +Cc: Ceph Development


On 22/03/2015 17:16, Yuri Weinstein wrote:
> Loic, I think the idea was to do more process driven approach for releasing hammer, e.g. keep track of suites vs. results and open issues, so we can have a high level view on status at any time before the final cut day.
> 
> Do you have any suggestions or objections?

That sounds interesting :-) How would that work, exactly?

Cheers
> 
> Thx
> YuriW
> 
> ----- Original Message -----
> From: "Loic Dachary" <loic@dachary.org>
> To: "Sage Weil" <sweil@redhat.com>
> Cc: "Ceph Development" <ceph-devel@vger.kernel.org>
> Sent: Sunday, March 22, 2015 1:54:06 AM
> Subject: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
> 
> Hi Sage,
> 
> You have created a few hammer related tasks at http://tracker.ceph.com/projects/ceph-releases/issues . What did you have in mind ?
> 
> Cheers
> 

-- 
Loïc Dachary, Artisan Logiciel Libre


* Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
  2015-03-22 16:16 ` Yuri Weinstein
  2015-03-22 23:50   ` Loic Dachary
@ 2015-03-23  0:35   ` Loic Dachary
  2015-03-23 15:09     ` Yuri Weinstein
  1 sibling, 1 reply; 10+ messages in thread
From: Loic Dachary @ 2015-03-23  0:35 UTC (permalink / raw)
  To: Yuri Weinstein; +Cc: Sage Weil, Ceph Development


On 22/03/2015 17:16, Yuri Weinstein wrote:
> Loic, I think the idea was to do more process driven approach for releasing hammer, e.g. keep track of suites vs. results and open issues, so we can have a high level view on status at any time before the final cut day.
> 
> Do you have any suggestions or objections?

Reading http://tracker.ceph.com/issues/11189, I see it covers one run plus a re-run of the failed tests, and it was resolved because everything passed. The title is "hammer: upgrade/giant-x". How will that go for the next run of upgrade/giant-x?

I use a Python snippet to display the errors in Redmine format (http://workbench.dachary.org/dachary/ceph-workbench/issues/2):

$ python ../fail.py teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps
** *'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/cls/test_cls_rgw.sh'*
*** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/parallel_run/{ec-rados-parallel.yaml rados_api.yaml rados_loadgenbig.yaml test_cache-pool-snaps.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814081
** *"2015-03-20 23:04:51.042345 mon.0 10.214.130.49:6789/0 3 : cluster [WRN] message from mon.1 was stamped 14400.248297s in the future, clocks not synchronized" in cluster log*
*** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814155
** *Could not reconnect to ubuntu@vpm169.front.sepia.ceph.com*
*** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/ec-rados-default.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814108
** *Could not reconnect to ubuntu@vpm166.front.sepia.ceph.com*
*** "upgrade:giant-x/stress-split-erasure-code/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814194
** *'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f -i a'*
*** "upgrade:giant-x/stress-split-erasure-code-x86_64/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=isa-k=2-m=1.yaml distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814197
** *timed out waiting for admin_socket to appear after osd.13 restart*
*** "upgrade:giant-x/stress-split/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/rhel_6.5.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814186

> 
> Thx
> YuriW
> 
> ----- Original Message -----
> From: "Loic Dachary" <loic@dachary.org>
> To: "Sage Weil" <sweil@redhat.com>
> Cc: "Ceph Development" <ceph-devel@vger.kernel.org>
> Sent: Sunday, March 22, 2015 1:54:06 AM
> Subject: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
> 
> Hi Sage,
> 
> You have created a few hammer related tasks at http://tracker.ceph.com/projects/ceph-releases/issues . What did you have in mind ?
> 
> Cheers
> 

-- 
Loïc Dachary, Artisan Logiciel Libre


* Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
  2015-03-23  0:35   ` Loic Dachary
@ 2015-03-23 15:09     ` Yuri Weinstein
  2015-03-23 15:40       ` Loic Dachary
  0 siblings, 1 reply; 10+ messages in thread
From: Yuri Weinstein @ 2015-03-23 15:09 UTC (permalink / raw)
  To: Loic Dachary; +Cc: Sage Weil, Ceph Development

"How will that go for the next run of upgrade/giant-x ?"

I was thinking that as soon as for example this suite passed, #11189 gets resolved as thus indicates that it's ready for for the hammer release cut. 


Thx
YuriW

----- Original Message -----
From: "Loic Dachary" <loic@dachary.org>
To: "Yuri Weinstein" <yweinste@redhat.com>
Cc: "Sage Weil" <sweil@redhat.com>, "Ceph Development" <ceph-devel@vger.kernel.org>
Sent: Sunday, March 22, 2015 5:35:19 PM
Subject: Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases



On 22/03/2015 17:16, Yuri Weinstein wrote:
> Loic, I think the idea was to do more process driven approach for releasing hammer, e.g. keep track of suites vs. results and open issues, so we can have a high level view on status at any time before the final cut day.
> 
> Do you have any suggestions or objections?

Reading http://tracker.ceph.com/issues/11189 I see it has one run, and a run of failed tests, and got resolved because all passed. The title is hammer: upgrade/giant-x. How will that go for the next run of upgrade/giant-x ?

I use a python snippet to display the errors in a redmine format (http://workbench.dachary.org/dachary/ceph-workbench/issues/2)

$ python ../fail.py teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps
** *'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/cls/test_cls_rgw.sh'*
*** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/parallel_run/{ec-rados-parallel.yaml rados_api.yaml rados_loadgenbig.yaml test_cache-pool-snaps.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814081
** *"2015-03-20 23:04:51.042345 mon.0 10.214.130.49:6789/0 3 : cluster [WRN] message from mon.1 was stamped 14400.248297s in the future, clocks not synchronized" in cluster log*
*** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814155
** *Could not reconnect to ubuntu@vpm169.front.sepia.ceph.com*
*** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/ec-rados-default.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814108
** *Could not reconnect to ubuntu@vpm166.front.sepia.ceph.com*
*** "upgrade:giant-x/stress-split-erasure-code/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814194
** *'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f -i a'*
*** "upgrade:giant-x/stress-split-erasure-code-x86_64/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=isa-k=2-m=1.yaml distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814197
** *timed out waiting for admin_socket to appear after osd.13 restart*
*** "upgrade:giant-x/stress-split/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/rhel_6.5.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814186

> 
> Thx
> YuriW
> 
> ----- Original Message -----
> From: "Loic Dachary" <loic@dachary.org>
> To: "Sage Weil" <sweil@redhat.com>
> Cc: "Ceph Development" <ceph-devel@vger.kernel.org>
> Sent: Sunday, March 22, 2015 1:54:06 AM
> Subject: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
> 
> Hi Sage,
> 
> You have created a few hammer related tasks at http://tracker.ceph.com/projects/ceph-releases/issues . What did you have in mind ?
> 
> Cheers
> 

-- 
Loïc Dachary, Artisan Logiciel Libre

* Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
  2015-03-23 15:09     ` Yuri Weinstein
@ 2015-03-23 15:40       ` Loic Dachary
  2015-03-23 15:44         ` Yuri Weinstein
  0 siblings, 1 reply; 10+ messages in thread
From: Loic Dachary @ 2015-03-23 15:40 UTC (permalink / raw)
  To: Yuri Weinstein; +Cc: Sage Weil, Ceph Development

Hi Yuri,

On 23/03/2015 16:09, Yuri Weinstein wrote:
> "How will that go for the next run of upgrade/giant-x ?"
> 
> I was thinking that as soon as for example this suite passed, #11189 gets resolved as thus indicates that it's ready for for the hammer release cut. 

If the following happens:

* hammer: upgrade/giant-x runs and passes
* a dozen more commits are added because problems are fixed
* hammer: upgrade/giant-x runs and passes

That leaves us with two issues with the same name but different update dates. So if I look at the "hammer: upgrade/giant-x" issues in chronological order, I have a complete history of the successive runs and can check the latest one to see how it went, or older ones if I need to dig into the history.

This is good :-)

After hammer is released, the same will presumably happen for point releases. Instead of naming them "hammer: upgrade/giant-x", which would be confusing, I guess we could name them "v0.94.1: upgrade/giant-x" instead.

Does that sound right?
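
(As an aside, here is a rough sketch of how that history could be pulled automatically, assuming the standard Redmine REST API is enabled for the ceph-releases project on the tracker; nothing in this thread confirms the exact setup.)

#!/usr/bin/env python3
# Sketch: list the successive runs of one suite for a release, newest first, by
# pulling the identically-named issues from the ceph-releases project. Relies on
# the standard Redmine REST API (/issues.json); whether it is enabled on
# tracker.ceph.com is an assumption.
import json
import urllib.request

TRACKER = 'http://tracker.ceph.com'

def runs(subject, project='ceph-releases'):
    url = (TRACKER + '/projects/' + project +
           '/issues.json?status_id=*&sort=updated_on:desc&limit=100')
    with urllib.request.urlopen(url) as response:
        issues = json.load(response)['issues']
    # one issue per run, all sharing the same title, e.g. "hammer: upgrade/giant-x"
    return [issue for issue in issues if issue['subject'] == subject]

for issue in runs('hammer: upgrade/giant-x'):
    print(issue['updated_on'], issue['status']['name'],
          TRACKER + '/issues/' + str(issue['id']))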

> 
> Thx
> YuriW
> 
> ----- Original Message -----
> From: "Loic Dachary" <loic@dachary.org>
> To: "Yuri Weinstein" <yweinste@redhat.com>
> Cc: "Sage Weil" <sweil@redhat.com>, "Ceph Development" <ceph-devel@vger.kernel.org>
> Sent: Sunday, March 22, 2015 5:35:19 PM
> Subject: Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
> 
> 
> 
> On 22/03/2015 17:16, Yuri Weinstein wrote:
>> Loic, I think the idea was to do more process driven approach for releasing hammer, e.g. keep track of suites vs. results and open issues, so we can have a high level view on status at any time before the final cut day.
>>
>> Do you have any suggestions or objections?
> 
> Reading http://tracker.ceph.com/issues/11189 I see it has one run, and a run of failed tests, and got resolved because all passed. The title is hammer: upgrade/giant-x. How will that go for the next run of upgrade/giant-x ?
> 
> I use a python snippet to display the errors in a redmine format (http://workbench.dachary.org/dachary/ceph-workbench/issues/2)
> 
> $ python ../fail.py teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps
> ** *'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/cls/test_cls_rgw.sh'*
> *** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/parallel_run/{ec-rados-parallel.yaml rados_api.yaml rados_loadgenbig.yaml test_cache-pool-snaps.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814081
> ** *"2015-03-20 23:04:51.042345 mon.0 10.214.130.49:6789/0 3 : cluster [WRN] message from mon.1 was stamped 14400.248297s in the future, clocks not synchronized" in cluster log*
> *** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814155
> ** *Could not reconnect to ubuntu@vpm169.front.sepia.ceph.com*
> *** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/ec-rados-default.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814108
> ** *Could not reconnect to ubuntu@vpm166.front.sepia.ceph.com*
> *** "upgrade:giant-x/stress-split-erasure-code/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814194
> ** *'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f -i a'*
> *** "upgrade:giant-x/stress-split-erasure-code-x86_64/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=isa-k=2-m=1.yaml distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814197
> ** *timed out waiting for admin_socket to appear after osd.13 restart*
> *** "upgrade:giant-x/stress-split/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/rhel_6.5.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814186
> 
>>
>> Thx
>> YuriW
>>
>> ----- Original Message -----
>> From: "Loic Dachary" <loic@dachary.org>
>> To: "Sage Weil" <sweil@redhat.com>
>> Cc: "Ceph Development" <ceph-devel@vger.kernel.org>
>> Sent: Sunday, March 22, 2015 1:54:06 AM
>> Subject: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
>>
>> Hi Sage,
>>
>> You have created a few hammer related tasks at http://tracker.ceph.com/projects/ceph-releases/issues . What did you have in mind ?
>>
>> Cheers
>>
> 

-- 
Loïc Dachary, Artisan Logiciel Libre


* Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
  2015-03-23 15:40       ` Loic Dachary
@ 2015-03-23 15:44         ` Yuri Weinstein
  2015-03-23 16:10           ` Loic Dachary
  0 siblings, 1 reply; 10+ messages in thread
From: Yuri Weinstein @ 2015-03-23 15:44 UTC (permalink / raw)
  To: Loic Dachary; +Cc: Sage Weil, Ceph Development



Thx
YuriW

----- Original Message -----
From: "Loic Dachary" <loic@dachary.org>
To: "Yuri Weinstein" <yweinste@redhat.com>
Cc: "Sage Weil" <sweil@redhat.com>, "Ceph Development" <ceph-devel@vger.kernel.org>
Sent: Monday, March 23, 2015 8:40:02 AM
Subject: Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases

Hi Yuri,

On 23/03/2015 16:09, Yuri Weinstein wrote:
> "How will that go for the next run of upgrade/giant-x ?"
> 
> I was thinking that as soon as for example this suite passed, #11189 gets resolved as thus indicates that it's ready for for the hammer release cut. 

If the following happens:

* hammer: upgrade/giant-x runs and passes
* a dozen more commits are added because problems are fixed
* hammer: upgrade/giant-x runs and passes

That leaves us with two issues with the same name but with different update dates. So if I look at the "hammer: upgrade/giant-x" issues in chronological order, I have a complete history of the successive runs and I can check the latest one to see how it went. Or older ones if I need to dig the history. 

This is good :-)

After hammer is released, the same will presumably happen for point releases. Instead of naming them "hammer: upgrade/giant-x" which would be confusing, I guess we could name them "v0.94.1: upgrade/giant-x" instead. 

Does that sound right ?
============
Yes, we can alternatively name the set of those tasks "hammer v0.94.1".

============
> 
> Thx
> YuriW
> 
> ----- Original Message -----
> From: "Loic Dachary" <loic@dachary.org>
> To: "Yuri Weinstein" <yweinste@redhat.com>
> Cc: "Sage Weil" <sweil@redhat.com>, "Ceph Development" <ceph-devel@vger.kernel.org>
> Sent: Sunday, March 22, 2015 5:35:19 PM
> Subject: Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
> 
> 
> 
> On 22/03/2015 17:16, Yuri Weinstein wrote:
>> Loic, I think the idea was to do more process driven approach for releasing hammer, e.g. keep track of suites vs. results and open issues, so we can have a high level view on status at any time before the final cut day.
>>
>> Do you have any suggestions or objections?
> 
> Reading http://tracker.ceph.com/issues/11189 I see it has one run, and a run of failed tests, and got resolved because all passed. The title is hammer: upgrade/giant-x. How will that go for the next run of upgrade/giant-x ?
> 
> I use a python snippet to display the errors in a redmine format (http://workbench.dachary.org/dachary/ceph-workbench/issues/2)
> 
> $ python ../fail.py teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps
> ** *'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/cls/test_cls_rgw.sh'*
> *** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/parallel_run/{ec-rados-parallel.yaml rados_api.yaml rados_loadgenbig.yaml test_cache-pool-snaps.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814081
> ** *"2015-03-20 23:04:51.042345 mon.0 10.214.130.49:6789/0 3 : cluster [WRN] message from mon.1 was stamped 14400.248297s in the future, clocks not synchronized" in cluster log*
> *** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814155
> ** *Could not reconnect to ubuntu@vpm169.front.sepia.ceph.com*
> *** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/ec-rados-default.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814108
> ** *Could not reconnect to ubuntu@vpm166.front.sepia.ceph.com*
> *** "upgrade:giant-x/stress-split-erasure-code/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814194
> ** *'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f -i a'*
> *** "upgrade:giant-x/stress-split-erasure-code-x86_64/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=isa-k=2-m=1.yaml distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814197
> ** *timed out waiting for admin_socket to appear after osd.13 restart*
> *** "upgrade:giant-x/stress-split/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/rhel_6.5.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814186
> 
>>
>> Thx
>> YuriW
>>
>> ----- Original Message -----
>> From: "Loic Dachary" <loic@dachary.org>
>> To: "Sage Weil" <sweil@redhat.com>
>> Cc: "Ceph Development" <ceph-devel@vger.kernel.org>
>> Sent: Sunday, March 22, 2015 1:54:06 AM
>> Subject: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
>>
>> Hi Sage,
>>
>> You have created a few hammer related tasks at http://tracker.ceph.com/projects/ceph-releases/issues . What did you have in mind ?
>>
>> Cheers
>>
> 

-- 
Loïc Dachary, Artisan Logiciel Libre

* Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
  2015-03-23 15:44         ` Yuri Weinstein
@ 2015-03-23 16:10           ` Loic Dachary
  2015-03-23 16:27             ` Yuri Weinstein
  0 siblings, 1 reply; 10+ messages in thread
From: Loic Dachary @ 2015-03-23 16:10 UTC (permalink / raw)
  To: Yuri Weinstein; +Cc: Sage Weil, Ceph Development


On 23/03/2015 16:44, Yuri Weinstein wrote:
> 
> 
> Thx
> YuriW
> 
> ----- Original Message -----
> From: "Loic Dachary" <loic@dachary.org>
> To: "Yuri Weinstein" <yweinste@redhat.com>
> Cc: "Sage Weil" <sweil@redhat.com>, "Ceph Development" <ceph-devel@vger.kernel.org>
> Sent: Monday, March 23, 2015 8:40:02 AM
> Subject: Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
> 
> Hi Yuri,
> 
> On 23/03/2015 16:09, Yuri Weinstein wrote:
>> "How will that go for the next run of upgrade/giant-x ?"
>>
>> I was thinking that as soon as for example this suite passed, #11189 gets resolved as thus indicates that it's ready for for the hammer release cut. 
> 
> If the following happens:
> 
> * hammer: upgrade/giant-x runs and passes
> * a dozen more commits are added because problems are fixed
> * hammer: upgrade/giant-x runs and passes
> 
> That leaves us with two issues with the same name but with different update dates. So if I look at the "hammer: upgrade/giant-x" issues in chronological order, I have a complete history of the successive runs and I can check the latest one to see how it went. Or older ones if I need to dig the history. 
> 
> This is good :-)
> 
> After hammer is released, the same will presumably happen for point releases. Instead of naming them "hammer: upgrade/giant-x" which would be confusing, I guess we could name them "v0.94.1: upgrade/giant-x" instead. 
> 
> Does that sound right ?
> ============
> Yes, we can alternatively name the set of those tasks as hammer v0.94.1

Great!

Would you like me to add a section at http://tracker.ceph.com/projects/ceph-releases/wiki/Wiki to summarize this conversation?

> 
> ============
>>
>> Thx
>> YuriW
>>
>> ----- Original Message -----
>> From: "Loic Dachary" <loic@dachary.org>
>> To: "Yuri Weinstein" <yweinste@redhat.com>
>> Cc: "Sage Weil" <sweil@redhat.com>, "Ceph Development" <ceph-devel@vger.kernel.org>
>> Sent: Sunday, March 22, 2015 5:35:19 PM
>> Subject: Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
>>
>>
>>
>> On 22/03/2015 17:16, Yuri Weinstein wrote:
>>> Loic, I think the idea was to do more process driven approach for releasing hammer, e.g. keep track of suites vs. results and open issues, so we can have a high level view on status at any time before the final cut day.
>>>
>>> Do you have any suggestions or objections?
>>
>> Reading http://tracker.ceph.com/issues/11189 I see it has one run, and a run of failed tests, and got resolved because all passed. The title is hammer: upgrade/giant-x. How will that go for the next run of upgrade/giant-x ?
>>
>> I use a python snippet to display the errors in a redmine format (http://workbench.dachary.org/dachary/ceph-workbench/issues/2)
>>
>> $ python ../fail.py teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps
>> ** *'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/cls/test_cls_rgw.sh'*
>> *** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/parallel_run/{ec-rados-parallel.yaml rados_api.yaml rados_loadgenbig.yaml test_cache-pool-snaps.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814081
>> ** *"2015-03-20 23:04:51.042345 mon.0 10.214.130.49:6789/0 3 : cluster [WRN] message from mon.1 was stamped 14400.248297s in the future, clocks not synchronized" in cluster log*
>> *** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814155
>> ** *Could not reconnect to ubuntu@vpm169.front.sepia.ceph.com*
>> *** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/ec-rados-default.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814108
>> ** *Could not reconnect to ubuntu@vpm166.front.sepia.ceph.com*
>> *** "upgrade:giant-x/stress-split-erasure-code/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814194
>> ** *'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f -i a'*
>> *** "upgrade:giant-x/stress-split-erasure-code-x86_64/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=isa-k=2-m=1.yaml distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814197
>> ** *timed out waiting for admin_socket to appear after osd.13 restart*
>> *** "upgrade:giant-x/stress-split/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/rhel_6.5.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814186
>>
>>>
>>> Thx
>>> YuriW
>>>
>>> ----- Original Message -----
>>> From: "Loic Dachary" <loic@dachary.org>
>>> To: "Sage Weil" <sweil@redhat.com>
>>> Cc: "Ceph Development" <ceph-devel@vger.kernel.org>
>>> Sent: Sunday, March 22, 2015 1:54:06 AM
>>> Subject: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
>>>
>>> Hi Sage,
>>>
>>> You have created a few hammer related tasks at http://tracker.ceph.com/projects/ceph-releases/issues . What did you have in mind ?
>>>
>>> Cheers
>>>
>>
> 

-- 
Loïc Dachary, Artisan Logiciel Libre


* Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
  2015-03-23 16:10           ` Loic Dachary
@ 2015-03-23 16:27             ` Yuri Weinstein
  2015-03-23 17:09               ` Loic Dachary
  0 siblings, 1 reply; 10+ messages in thread
From: Yuri Weinstein @ 2015-03-23 16:27 UTC (permalink / raw)
  To: Loic Dachary; +Cc: Sage Weil, Ceph Development

Loic, done, pls review and edit.

Thx
YuriW

----- Original Message -----
From: "Loic Dachary" <loic@dachary.org>
To: "Yuri Weinstein" <yweinste@redhat.com>
Cc: "Sage Weil" <sweil@redhat.com>, "Ceph Development" <ceph-devel@vger.kernel.org>
Sent: Monday, March 23, 2015 9:10:20 AM
Subject: Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases



On 23/03/2015 16:44, Yuri Weinstein wrote:
> 
> 
> Thx
> YuriW
> 
> ----- Original Message -----
> From: "Loic Dachary" <loic@dachary.org>
> To: "Yuri Weinstein" <yweinste@redhat.com>
> Cc: "Sage Weil" <sweil@redhat.com>, "Ceph Development" <ceph-devel@vger.kernel.org>
> Sent: Monday, March 23, 2015 8:40:02 AM
> Subject: Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
> 
> Hi Yuri,
> 
> On 23/03/2015 16:09, Yuri Weinstein wrote:
>> "How will that go for the next run of upgrade/giant-x ?"
>>
>> I was thinking that as soon as for example this suite passed, #11189 gets resolved as thus indicates that it's ready for for the hammer release cut. 
> 
> If the following happens:
> 
> * hammer: upgrade/giant-x runs and passes
> * a dozen more commits are added because problems are fixed
> * hammer: upgrade/giant-x runs and passes
> 
> That leaves us with two issues with the same name but with different update dates. So if I look at the "hammer: upgrade/giant-x" issues in chronological order, I have a complete history of the successive runs and I can check the latest one to see how it went. Or older ones if I need to dig the history. 
> 
> This is good :-)
> 
> After hammer is released, the same will presumably happen for point releases. Instead of naming them "hammer: upgrade/giant-x" which would be confusing, I guess we could name them "v0.94.1: upgrade/giant-x" instead. 
> 
> Does that sound right ?
> ============
> Yes, we can alternatively name the set of those tasks as hammer v0.94.1

Great !

Would you like me to add a section at http://tracker.ceph.com/projects/ceph-releases/wiki/Wiki to summarize this conversation ?

> 
> ============
>>
>> Thx
>> YuriW
>>
>> ----- Original Message -----
>> From: "Loic Dachary" <loic@dachary.org>
>> To: "Yuri Weinstein" <yweinste@redhat.com>
>> Cc: "Sage Weil" <sweil@redhat.com>, "Ceph Development" <ceph-devel@vger.kernel.org>
>> Sent: Sunday, March 22, 2015 5:35:19 PM
>> Subject: Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
>>
>>
>>
>> On 22/03/2015 17:16, Yuri Weinstein wrote:
>>> Loic, I think the idea was to do more process driven approach for releasing hammer, e.g. keep track of suites vs. results and open issues, so we can have a high level view on status at any time before the final cut day.
>>>
>>> Do you have any suggestions or objections?
>>
>> Reading http://tracker.ceph.com/issues/11189 I see it has one run, and a run of failed tests, and got resolved because all passed. The title is hammer: upgrade/giant-x. How will that go for the next run of upgrade/giant-x ?
>>
>> I use a python snippet to display the errors in a redmine format (http://workbench.dachary.org/dachary/ceph-workbench/issues/2)
>>
>> $ python ../fail.py teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps
>> ** *'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/cls/test_cls_rgw.sh'*
>> *** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/parallel_run/{ec-rados-parallel.yaml rados_api.yaml rados_loadgenbig.yaml test_cache-pool-snaps.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814081
>> ** *"2015-03-20 23:04:51.042345 mon.0 10.214.130.49:6789/0 3 : cluster [WRN] message from mon.1 was stamped 14400.248297s in the future, clocks not synchronized" in cluster log*
>> *** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814155
>> ** *Could not reconnect to ubuntu@vpm169.front.sepia.ceph.com*
>> *** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/ec-rados-default.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814108
>> ** *Could not reconnect to ubuntu@vpm166.front.sepia.ceph.com*
>> *** "upgrade:giant-x/stress-split-erasure-code/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814194
>> ** *'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f -i a'*
>> *** "upgrade:giant-x/stress-split-erasure-code-x86_64/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=isa-k=2-m=1.yaml distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814197
>> ** *timed out waiting for admin_socket to appear after osd.13 restart*
>> *** "upgrade:giant-x/stress-split/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/rhel_6.5.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814186
>>
>>>
>>> Thx
>>> YuriW
>>>
>>> ----- Original Message -----
>>> From: "Loic Dachary" <loic@dachary.org>
>>> To: "Sage Weil" <sweil@redhat.com>
>>> Cc: "Ceph Development" <ceph-devel@vger.kernel.org>
>>> Sent: Sunday, March 22, 2015 1:54:06 AM
>>> Subject: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
>>>
>>> Hi Sage,
>>>
>>> You have created a few hammer related tasks at http://tracker.ceph.com/projects/ceph-releases/issues . What did you have in mind ?
>>>
>>> Cheers
>>>
>>
> 

-- 
Loïc Dachary, Artisan Logiciel Libre


* Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
  2015-03-23 16:27             ` Yuri Weinstein
@ 2015-03-23 17:09               ` Loic Dachary
  0 siblings, 0 replies; 10+ messages in thread
From: Loic Dachary @ 2015-03-23 17:09 UTC (permalink / raw)
  To: Yuri Weinstein; +Cc: Ceph Development


On 23/03/2015 17:27, Yuri Weinstein wrote:
> Loic, done, pls review and edit.

Perfect. I did not realize it was organized to be viewed via

http://tracker.ceph.com/rb/master_backlog/ceph-releases

Very convenient.

> 
> Thx
> YuriW
> 
> ----- Original Message -----
> From: "Loic Dachary" <loic@dachary.org>
> To: "Yuri Weinstein" <yweinste@redhat.com>
> Cc: "Sage Weil" <sweil@redhat.com>, "Ceph Development" <ceph-devel@vger.kernel.org>
> Sent: Monday, March 23, 2015 9:10:20 AM
> Subject: Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
> 
> 
> 
> On 23/03/2015 16:44, Yuri Weinstein wrote:
>>
>>
>> Thx
>> YuriW
>>
>> ----- Original Message -----
>> From: "Loic Dachary" <loic@dachary.org>
>> To: "Yuri Weinstein" <yweinste@redhat.com>
>> Cc: "Sage Weil" <sweil@redhat.com>, "Ceph Development" <ceph-devel@vger.kernel.org>
>> Sent: Monday, March 23, 2015 8:40:02 AM
>> Subject: Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
>>
>> Hi Yuri,
>>
>> On 23/03/2015 16:09, Yuri Weinstein wrote:
>>> "How will that go for the next run of upgrade/giant-x ?"
>>>
>>> I was thinking that as soon as for example this suite passed, #11189 gets resolved as thus indicates that it's ready for for the hammer release cut. 
>>
>> If the following happens:
>>
>> * hammer: upgrade/giant-x runs and passes
>> * a dozen more commits are added because problems are fixed
>> * hammer: upgrade/giant-x runs and passes
>>
>> That leaves us with two issues with the same name but with different update dates. So if I look at the "hammer: upgrade/giant-x" issues in chronological order, I have a complete history of the successive runs and I can check the latest one to see how it went. Or older ones if I need to dig the history. 
>>
>> This is good :-)
>>
>> After hammer is released, the same will presumably happen for point releases. Instead of naming them "hammer: upgrade/giant-x" which would be confusing, I guess we could name them "v0.94.1: upgrade/giant-x" instead. 
>>
>> Does that sound right ?
>> ============
>> Yes, we can alternatively name the set of those tasks as hammer v0.94.1
> 
> Great !
> 
> Would you like me to add a section at http://tracker.ceph.com/projects/ceph-releases/wiki/Wiki to summarize this conversation ?
> 
>>
>> ============
>>>
>>> Thx
>>> YuriW
>>>
>>> ----- Original Message -----
>>> From: "Loic Dachary" <loic@dachary.org>
>>> To: "Yuri Weinstein" <yweinste@redhat.com>
>>> Cc: "Sage Weil" <sweil@redhat.com>, "Ceph Development" <ceph-devel@vger.kernel.org>
>>> Sent: Sunday, March 22, 2015 5:35:19 PM
>>> Subject: Re: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
>>>
>>>
>>>
>>> On 22/03/2015 17:16, Yuri Weinstein wrote:
>>>> Loic, I think the idea was to do more process driven approach for releasing hammer, e.g. keep track of suites vs. results and open issues, so we can have a high level view on status at any time before the final cut day.
>>>>
>>>> Do you have any suggestions or objections?
>>>
>>> Reading http://tracker.ceph.com/issues/11189 I see it has one run, and a run of failed tests, and got resolved because all passed. The title is hammer: upgrade/giant-x. How will that go for the next run of upgrade/giant-x ?
>>>
>>> I use a python snippet to display the errors in a redmine format (http://workbench.dachary.org/dachary/ceph-workbench/issues/2)
>>>
>>> $ python ../fail.py teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps
>>> ** *'mkdir -p -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && cd -- /home/ubuntu/cephtest/mnt.1/client.1/tmp && CEPH_CLI_TEST_DUP_COMMAND=1 CEPH_REF=giant TESTDIR="/home/ubuntu/cephtest" CEPH_ID="1" PATH=$PATH:/usr/sbin adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 3h /home/ubuntu/cephtest/workunit.client.1/cls/test_cls_rgw.sh'*
>>> *** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/parallel_run/{ec-rados-parallel.yaml rados_api.yaml rados_loadgenbig.yaml test_cache-pool-snaps.yaml test_rbd_api.yaml test_rbd_python.yaml} 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814081
>>> ** *"2015-03-20 23:04:51.042345 mon.0 10.214.130.49:6789/0 3 : cluster [WRN] message from mon.1 was stamped 14400.248297s in the future, clocks not synchronized" in cluster log*
>>> *** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/test_rbd_api.yaml 3-upgrade-sequence/upgrade-all.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/centos_6.5.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814155
>>> ** *Could not reconnect to ubuntu@vpm169.front.sepia.ceph.com*
>>> *** "upgrade:giant-x/parallel/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-workload/sequential_run/ec-rados-default.yaml 3-upgrade-sequence/upgrade-mon-osd-mds.yaml 4-final-workload/{rados-snaps-few-objects.yaml rados_loadgenmix.yaml rados_mon_thrash.yaml rbd_cls.yaml rbd_import_export.yaml rgw_swift.yaml} distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814108
>>> ** *Could not reconnect to ubuntu@vpm166.front.sepia.ceph.com*
>>> *** "upgrade:giant-x/stress-split-erasure-code/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=jerasure-k=3-m=1.yaml distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814194
>>> ** *'sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage daemon-helper kill ceph-mon -f -i a'*
>>> *** "upgrade:giant-x/stress-split-erasure-code-x86_64/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/ec-rados-default.yaml 6-next-mon/monb.yaml 8-next-mon/monc.yaml 9-workload/ec-rados-plugin=isa-k=2-m=1.yaml distros/rhel_7.0.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814197
>>> ** *timed out waiting for admin_socket to appear after osd.13 restart*
>>> *** "upgrade:giant-x/stress-split/{0-cluster/start.yaml 1-giant-install/giant.yaml 2-partial-upgrade/firsthalf.yaml 3-thrash/default.yaml 4-mon/mona.yaml 5-workload/{rbd-cls.yaml rbd-import-export.yaml readwrite.yaml snaps-few-objects.yaml} 6-next-mon/monb.yaml 7-workload/{radosbench.yaml rbd_api.yaml} 8-next-mon/monc.yaml 9-workload/{rbd-python.yaml rgw-swift.yaml snaps-many-objects.yaml} distros/rhel_6.5.yaml}":http://pulpito.ceph.com/teuthology-2015-03-20_17:05:02-upgrade:giant-x-hammer-distro-basic-vps/814186
>>>
>>>>
>>>> Thx
>>>> YuriW
>>>>
>>>> ----- Original Message -----
>>>> From: "Loic Dachary" <loic@dachary.org>
>>>> To: "Sage Weil" <sweil@redhat.com>
>>>> Cc: "Ceph Development" <ceph-devel@vger.kernel.org>
>>>> Sent: Sunday, March 22, 2015 1:54:06 AM
>>>> Subject: hammer tasks in http://tracker.ceph.com/projects/ceph-releases
>>>>
>>>> Hi Sage,
>>>>
>>>> You have created a few hammer related tasks at http://tracker.ceph.com/projects/ceph-releases/issues . What did you have in mind ?
>>>>
>>>> Cheers
>>>>
>>>
>>
> 

-- 
Loïc Dachary, Artisan Logiciel Libre


Thread overview: 10 messages
2015-03-22  8:54 hammer tasks in http://tracker.ceph.com/projects/ceph-releases Loic Dachary
2015-03-22 16:16 ` Yuri Weinstein
2015-03-22 23:50   ` Loic Dachary
2015-03-23  0:35   ` Loic Dachary
2015-03-23 15:09     ` Yuri Weinstein
2015-03-23 15:40       ` Loic Dachary
2015-03-23 15:44         ` Yuri Weinstein
2015-03-23 16:10           ` Loic Dachary
2015-03-23 16:27             ` Yuri Weinstein
2015-03-23 17:09               ` Loic Dachary
