* [Fuego] Adding new test case to fuego
@ 2019-08-02 10:10 Kumar Thangavel
  2019-08-03  0:12 ` Tim.Bird
  0 siblings, 1 reply; 6+ messages in thread
From: Kumar Thangavel @ 2019-08-02 10:10 UTC (permalink / raw)
  To: fuego


Hi All,


          I would like to contribute to the Fuego framework, so I am planning to add a test that checks whether file systems were mounted with the correct permissions and attributes.


Is this OK to start with, or could you please suggest any good test cases/test ideas to start working on?


Thanks,

Kumar.




* Re: [Fuego] Adding new test case to fuego
  2019-08-02 10:10 [Fuego] Adding new test case to fuego Kumar Thangavel
@ 2019-08-03  0:12 ` Tim.Bird
  2019-08-14  5:45   ` Kumar Thangavel
  0 siblings, 1 reply; 6+ messages in thread
From: Tim.Bird @ 2019-08-03  0:12 UTC (permalink / raw)
  To: thangavel.k, fuego



> -----Original Message-----
> From: Kumar Thangavel
> 
> Hi All,
> 
>           I would like to contribute to the Fuego framework, so I am planning to
> add a test that checks whether file systems were mounted with the correct
> permissions and attributes.
> 
> Is this OK to start with, or could you please suggest any good test cases/test
> ideas to start working on?

Thank you for wanting to contribute to Fuego.

Here is some feedback on your idea.

I think many people would like a simple test that verifies that file systems
were mounted correctly.  Something like this can easily be done using the
'mount' command, or by looking at mtab.

In order to make this test general-purpose, you will probably want to
allow the user to hand board-specific data to the test, reflecting
what their board is supposed to have mounted.  If the comparison
is done only against values hard-coded in the test, it will be hard for
others to use this test in their own scenario.

Also, you may want to consider whether you want to test all mounted
filesystems, or just the "real" ones (like those of type ext4, nfs, etc.
as opposed to pseudo filesystems of type tmpfs, cgroup, etc., or the
weird snap ones of type squashfs used by Ubuntu).

So you might define multiple test specs (variants) that let the user
choose whether to check only 'real' filesystems, all filesystems,
or filesystems of a particular type.
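
For instance, selecting only the 'real' filesystems for checking might look
something like this (just a sketch, not actual test code -- the type list is
only an example):

  # print mountpoint, device, type and options for "real" filesystem types only
  awk '$3 ~ /^(ext[234]|xfs|btrfs|vfat|nfs4?|ubifs|jffs2)$/ {print $2, $1, $3, $4}' /proc/mounts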

I have been thinking for a while about how to make it easy for people
to generalize tests for their own use.  I think that the local customization
of tests (with expected values for the local use case) is one of the big 
barriers to people sharing tests.

I've been thinking it would be a good idea to allow the user to provide
data about their expected values.  Also, I think it would be good to 
have a way to very easily update the expected values to ones that
match their configuration of Linux.

One thing I've considered is adding a spec to perform an "expected value update".
What this would do is take the current data from the system, and set the 
expected value for the test to that data.

For example, if your test had a spec "default" that ran the mount command and
compared the output with a text file containing the expected results for 'mount',
then you could easily detect whether there was a difference in the data.
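
In shell terms, the two specs might boil down to something like this (just a
sketch; the file names are only placeholders):

  # "default" spec: compare the current mount output with the expected data
  mount | sort > current_mounts.txt
  diff -u expected_mount_output.txt current_mounts.txt && echo "mounts match baseline"

  # "update" spec: make the current mount output the new expected data
  mount | sort > expected_mount_output.txt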

If your test had another spec "update" that ran the mount command and set the
expected results from the data that was returned, then the following flow would
allow a user with a different mounted-filesystem configuration to use your test:

1) you publish the test with the expected mount configuration for your board
2) another user runs the test and sees errors, because the mount configuration
for their board is different.
3) if the user verifies that their current mount configuration is actually OK, then
4) the user can run the test with the "update" spec, to save their mount configuration
data to the expected data file (saving it into, say, the /fuego-rw/boards/<board>/ directory)
5) the user can then use the test to verify the mount status of their board(s)
6) the user could potentially publish their expected results, to augment your test,
for other people to use with boards that have a similar configuration to theirs

Does that all make sense?

Please let me know your ideas for making a mounted filesystem verification test.
I'd be happy to discuss with you ideas for making it a nice, generic, reusable test,
and a nice addition to Fuego.
 -- Tim



* Re: [Fuego] Adding new test case to fuego
  2019-08-03  0:12 ` Tim.Bird
@ 2019-08-14  5:45   ` Kumar Thangavel
  2019-09-28  1:07     ` Tim.Bird
  0 siblings, 1 reply; 6+ messages in thread
From: Kumar Thangavel @ 2019-08-14  5:45 UTC (permalink / raw)
  To: Tim.Bird, fuego


Yes, nice idea. Thanks for your valuable feedback.


As per your suggestions, my test spec idea is as follows:


  1.  The test spec will get the expected mount configuration of the board from the user.
  2.  The user may not know all of the expected file systems, but will probably know some important ones. So the test would compare each expected file system against the list of mounted file systems (a rough sketch of this comparison is included after this list).
  3.  If an expected file system is not present in the mounted file system list, the test will display an error and fail.
  4.  If every expected file system is present in the mounted file system list, the test will pass and ask the user whether to save this configuration.
  5.  If the user would like to save their configuration, the spec will save it in the path you mentioned.
  6.  If the user does not want to save their configuration, nothing is saved.
  7.  The next time the user runs the test for the same board, it will take the expected file systems from that path and compare them with the mounted file systems.
  8.  The test will display the status of all expected file systems.
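
          Roughly, I imagine the comparison in steps 2 and 3 working something like this (only a sketch; the file name and locations are placeholders):

  # check each expected filesystem (mount point and type) against /proc/mounts
  while read mountpoint fstype ; do
      if grep -q " $mountpoint $fstype " /proc/mounts ; then
          echo "ok - $mountpoint ($fstype) is mounted"
      else
          echo "not ok - $mountpoint ($fstype) is missing"
      fi
  done < expected_mounts.txt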

          Could you please check these points and provide your suggestions.

           Also, to make things easy for users, I am thinking that instead of getting the mount configuration from the user, the test spec could provide a default or common set of file systems when the user enters a board name. I am not sure whether this would work for all boards. Any suggestions on this?

Thanks,
Kumar.

* Re: [Fuego] Adding new test case to fuego
  2019-08-14  5:45   ` Kumar Thangavel
@ 2019-09-28  1:07     ` Tim.Bird
  2019-09-30  5:40       ` Kumar Thangavel
  0 siblings, 1 reply; 6+ messages in thread
From: Tim.Bird @ 2019-09-28  1:07 UTC (permalink / raw)
  To: thangavel.k, fuego

Kumar,

Are you doing anything with this idea?

It's been a while, so I presume not, but I'll comment on the ideas below, just in case.

> -----Original Message-----
> From: Kumar Thangavel on Tuesday, August 13, 2019 7:45 PM
> 
> Yes, Nice Idea. Thanks for your valuable feedback.
> 
> As per your suggestions, my test spec idea will be like,
> 
> 1.    The test spec will get the expected mount configuration of the board
> from the user.
How?  I would suggest specifying these in a text file, one per line,
placed in the rw board directory for the board.  That is, for a board called
'min1', I would put the file in /fuego-rw/boards/min1/expected_mounts.txt
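
For instance, the contents could be as simple as one mount point and
filesystem type per line (values here are purely illustrative):

  /      ext4
  /boot  vfat
  /home  ext4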

For extra checking (at some point in the future) you could extend this and 
check additional contents of each filesystem, as follows:
For each filesystem, you could gather the filesystem information and
put it in its own file, as baseline data.  For example, I would put the
root data into a file called:
/fuego-rw/boards/min1/root_fs_data.txt
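
For example (just an illustration, not a required format), the baseline data
could be something as simple as a directory listing captured from the board:

  # using Fuego's 'cmd' helper to run the listing on the board
  cmd "ls -l /etc /bin" > /fuego-rw/boards/min1/root_fs_data.txt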

> 2.    The user may not know all of the expected file systems, but will
> probably know some important ones. So the test would compare each
> expected file system against the list of mounted file systems.
OK, for the actual test you would create something in 
fuego-core/tests/Functional.check_mounts
(creating fuego_test.sh, parser.py, and spec.json)

The spec.json file should include the following specs:
'default' - performs a normal test of the filesystems, outputting pass or fail for each one
'save_baseline' - collects the information about the mounts and sets the new
baseline data, by writing it into expected_mounts.txt

> 3.    If an expected file system is not present in the mounted file system
> list, the test will display an error and fail.
I would output the pass/fail results in TAP format.
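
That is, the log would contain one result line per filesystem, along the lines
of (names here are just examples):

  ok 1 mount check for / (ext4)
  ok 2 mount check for /boot (vfat)
  not ok 3 mount check for /home (ext4)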

> 4.    If every expected file system is present in the mounted file system
> list, the test will pass and ask the user whether to save this configuration.
If the data is as expected, then there should be no need to save anything.
I'm not sure I'm following this.

> 5.    If the user would like to save their configuration, the spec will save it
> in the path you mentioned.
See the 'save_baseline' spec above.

> 6.    If the user does not want to save their configuration, nothing is saved.
> 7.    The next time the user runs the test for the same board, it will take the
> expected file systems from that path and compare them with the mounted
> file systems.
Above, I've suggested a feature that examines not just the mounts, but the
actual filesystem contents as well.  But maybe it would be good to
start with processing the mounts only.

> 8.    The test will display the status of all expected file systems.
There should be one testcase per filesystem.

> 
>           Could you please check these points and provide your suggestions.
> 
> 
>            Also, to make things easy for users, I am thinking that instead of
> getting the mount configuration from the user, the test spec could provide a
> default or common set of file systems when the user enters a board name.
> I am not sure whether this would work for all boards. Any suggestions on this?

Yes, Fuego could store the 'expected values' for mounted filesystems
for some common boards in fuego-ro/boards/<board_name>/expected_mounts.txt,
and people could share these with each other.

I hope these suggestions are helpful.
 -- Tim


* Re: [Fuego] Adding new test case to fuego
  2019-09-28  1:07     ` Tim.Bird
@ 2019-09-30  5:40       ` Kumar Thangavel
  2019-09-30 21:38         ` Tim.Bird
  0 siblings, 1 reply; 6+ messages in thread
From: Kumar Thangavel @ 2019-09-30  5:40 UTC (permalink / raw)
  To: Tim.Bird, fuego


Thanks for your valuable suggestions.

I am working on this test case.  Coding is partially completed.


Thanks,
Kumar.

* Re: [Fuego] Adding new test case to fuego
  2019-09-30  5:40       ` Kumar Thangavel
@ 2019-09-30 21:38         ` Tim.Bird
  0 siblings, 0 replies; 6+ messages in thread
From: Tim.Bird @ 2019-09-30 21:38 UTC (permalink / raw)
  To: thangavel.k, fuego



> -----Original Message-----
> From: Kumar Thangavel 
> 
> Thanks for your valuable suggestions.
> 
> I am working on this test case.  Coding is partially completed.
> 

OK - I created a sample test, called Functional.filesystem_compare

See the patch for this below:

Note that it is in the 'development' branch of Fuego, in the fuego-core
repository on bitbucket
(see https://bitbucket.org/fuegotest/fuego-core/commits/5e879bcd190c6cedf509daa24faf2d6be7025e86?at=development ).

Subject: [PATCH] filesystem_compare: Add a test of the filesystem

This is an example "compare with baseline" test that I think will be
useful in the future.  Currently, this test checks the contents of the
board's /bin directory.  A more exhaustive test would check the contents
of the entire filesystem, and support passing in a list of
directories or filesystem mounts to check.

But this test shows the basic outline and techniques for implementing
a "compare with baseline" test.

Signed-off-by: Tim Bird <tim.bird@sony.com>
---
 .../fuego_test.sh                             | 88 +++++++++++++++++++
 tests/Functional.filesystem_compare/spec.json | 11 +++
 2 files changed, 99 insertions(+)
 create mode 100755 tests/Functional.filesystem_compare/fuego_test.sh
 create mode 100644 tests/Functional.filesystem_compare/spec.json

diff --git a/tests/Functional.filesystem_compare/fuego_test.sh b/tests/Functional.filesystem_compare/fuego_test.sh
new file mode 100755
index 0000000..791f77b
--- /dev/null
+++ b/tests/Functional.filesystem_compare/fuego_test.sh
@@ -0,0 +1,88 @@
+# Functional.filesystem_compare
+#  This test checks to see if the filesystem is different from
+#  a baseline snapshot taken some time in the past.
+#
+#  The purpose of this test is to do a high-level comparison of the 
+#  contents of a filesystem, to see if anything has changed since
+#  the last baseline snapshot was made.
+#
+# Usage:
+#  preparation:
+#    Create 2 jobs:
+#     $ ftc add-job -b <brd> -t Functional.filesystem_compare -s default
+#     $ ftc add-job -b <brd> -t Functional.filesystem_compare -s save_baseline
+#    Save baseline reports
+#      If the filesystem is in a known-good state that reflects "normal"
+#      status for a board,
+#      save the baseline data into a file by running
+#      the 'save_baseline' job:
+#     $ ftc build-job <brd>.save_baseline.Functional.filesystem_compare
+#      This will create baseline report files in /fuego-rw/boards/<brd>
+# periodic usage:
+#     Check that the overall reports still match, by running the default job:
+#     $ ftc build-job <board>.default.Functional.filesystem_compare
+#
+# To do for this test:
+# - omit timestamp field from run-failures
+# - suppress gathering current board state information
+# FIXTHIS - function override from fuego_test.sh didn't work
+#override-func ov_rootfs_state() {
+#    return 0
+#}
+
+function test_pre_check {
+    export board_rw_dir=$FUEGO_RW/boards/$NODE_NAME
+    export baseline_file=$board_rw_dir/$TESTDIR-baseline-data.txt
+
+    mkdir -p $board_rw_dir
+
+    # check for existence of baseline file
+
+    # but don't check if we're doing the save operation
+    if [ "$TESTSPEC" = "save_baseline" ] ; then
+        return 0
+    fi
+
+    if [ ! -f $baseline_file ] ; then
+        echo "Missing baseline results file: $baseline_file1"
+        echo "Maybe try running test with the 'save_baseline' spec?"
+        abort_job "Missing baseline results file"
+    fi
+}
+
+function test_run {
+    echo "Getting filesystem data"
+
+    DATA_FILE=$LOGDIR/filesystem-data.txt
+
+    # get the contents of /bin
+    cmd "ls -l /bin" >$DATA_FILE
+    log_this "echo \"Here is the current filesystem data:\""
+    log_this "cat $DATA_FILE"
+    echo "--------"
+
+    # if we're doing the "save_baseline" spec, save the data
+    if [ "$TESTSPEC" = "save_baseline" ] ; then
+        cp $DATA_FILE $baseline_file
+        log_this "echo \"baseline file: $baseline_file saved\""
+        log_this "echo \"ok 1 compare /bin contents and permissions baseline saved\""
+        return 0
+    fi
+
+    # check for differences from baseline
+    set +e
+    log_this "echo \"Checking for differences in filesystem data\""
+    log_this "diff -u $baseline_file $LOGDIR/filesystem-data.txt"
+    diff_rcode="$?"
+    log_this "echo ------------------------"
+    set -e
+
+    if [ $diff_rcode == "0" ] ; then
+        log_this "echo \"no changes found between current filesystem and baseline\""
+        log_this "echo \"ok 1 compare /bin contents and permissions\""
+    fi
+}
+
+function test_processing {
+    log_compare $TESTDIR 1 "^ok " "p"
+}
diff --git a/tests/Functional.filesystem_compare/spec.json b/tests/Functional.filesystem_compare/spec.json
new file mode 100644
index 0000000..aa34e16
--- /dev/null
+++ b/tests/Functional.filesystem_compare/spec.json
@@ -0,0 +1,11 @@
+{
+    "testName": "Functional.filesystem_compare",
+    "specs": {
+        "default": {
+        },
+        "save_baseline": {
+            "save_baseline": "true"
+        }
+    }
+}
+
-- 
2.17.1

