* [refpolicy] SELinux policy for Hadoop
From: Jean Khosalim @ 2012-02-08 19:29 UTC
  To: refpolicy

Hi all,

I built a Fedora 16 system and installed Cloudera's CDH3 (with Hadoop-0.20).
SELinux is enforcing and the policy in use is 'targeted'. I ran a simple
wordcount example and it worked. But I noticed that the Hadoop-related
processes are running as 'system_u:system_r:initrc_t:s0'; I was expecting
hadoop_t instead of initrc_t. I also noticed that there is no 'hadoop.pp' in
the /etc/selinux/targeted/modules/active/modules directory.

I ran 'yum update' on the system and forced an autorelabel on boot (adding
'enforcing=0 autorelabel' to the kernel line in grub). After the reboot, it
looks like nothing changed: the Hadoop-related processes still run as
'system_u:system_r:initrc_t:s0' and there is still no 'hadoop.pp' in the
/etc/selinux/targeted/modules/active/modules directory.

Then I downloaded the source rpm selinux-policy-3.10.0-75.fc16.src.rpm.
Looking at the source files, I noticed that modules_targeted.conf doesn't
list 'hadoop'. I modified the file to add 'hadoop' and ran 'rpmbuild -ba
./rpmbuild/SPECS/selinux-policy.spec', which generated a new set of RPMs. I
force-installed the newly built selinux-policy-3.10.0-75.fc16.noarch.rpm and
selinux-policy-targeted-3.10.0-75.fc16.noarch.rpm, then rebooted the system.
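
For reference, the rebuild amounted to roughly the following (treat this as
a sketch; the exact 'hadoop = module' line format in modules_targeted.conf
is from memory):

  rpm -ivh selinux-policy-3.10.0-75.fc16.src.rpm
  # enable the hadoop module in the targeted module list:
  echo 'hadoop = module' >> ./rpmbuild/SOURCES/modules_targeted.conf
  rpmbuild -ba ./rpmbuild/SPECS/selinux-policy.spec
  rpm -Uvh --force ./rpmbuild/RPMS/noarch/selinux-policy-3.10.0-75.fc16.noarch.rpm \
      ./rpmbuild/RPMS/noarch/selinux-policy-targeted-3.10.0-75.fc16.noarch.rpm
  reboot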

 

After the reboot, I now see that 'hadoop.pp' IS in the
/etc/selinux/targeted/modules/active/modules directory, and the
Hadoop-related processes are now running as
'system_u:system_r:unconfined_java_t:s0'. Is my expectation that the
Hadoop-related processes will run as 'hadoop_t' incorrect? Are there any
steps that I am missing?

 

Any help will be much appreciated. Thank you in advance.

Sincerely,

Jean Khosalim

* [refpolicy] SELinux policy for Hadoop
From: Christopher J. PeBenito @ 2012-02-08 19:46 UTC
  To: refpolicy

On 02/08/12 14:29, Jean Khosalim wrote:
> [original message quoted in full; trimmed]

Did you relabel after you updated the policy?
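
(e.g. by touching /.autorelabel and rebooting, or by running 'fixfiles
restore')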

-- 
Chris PeBenito
Tresys Technology, LLC
www.tresys.com | oss.tresys.com

* [refpolicy] SELinux policy for Hadoop
From: Jean Khosalim @ 2012-02-08 20:33 UTC
  To: refpolicy

Yes, I did.

Jean Khosalim

> -----Original Message-----
> From: Christopher J. PeBenito [mailto:cpebenito at tresys.com]
> Sent: Wednesday, February 08, 2012 11:46 AM
> Subject: Re: [refpolicy] SELinux policy for Hadoop
>
> [original message quoted in full; trimmed]
>
> Did you relabel after you updated the policy?
>
> --
> Chris PeBenito
> Tresys Technology, LLC
> www.tresys.com | oss.tresys.com

* [refpolicy] SELinux policy for Hadoop
From: Daniel J Walsh @ 2012-02-08 20:40 UTC
  To: refpolicy


On 02/08/2012 03:33 PM, Jean Khosalim wrote:
> Yes, I did.
> 
> Jean Khosalim
> 
>> [earlier thread quoted in full; trimmed]


What is the path to the daemon executables?  Are they labeled with a
hadoop*_exec_t type label?

* [refpolicy] SELinux policy for Hadoop
From: Jean Khosalim @ 2012-02-08 21:00 UTC
  To: refpolicy

The following are the labels:

In /etc/init.d directory:
system_u:object_r:hadoop_datanode_initrc_exec_t:s0 hadoop-0.20-datanode
system_u:object_r:hadoop_jobtracker_initrc_exec_t:s0 hadoop-0.20-jobtracker
system_u:object_r:hadoop_namenode_initrc_exec_t:s0 hadoop-0.20-namenode
system_u:object_r:hadoop_secondarynamenode_initrc_exec_t:s0 hadoop-0.20-secondarynamenode
system_u:object_r:hadoop_tasktracker_initrc_exec_t:s0 hadoop-0.20-tasktracker

In /usr/lib/hadoop-0.20/bin directory:
system_u:object_r:hadoop_exec_t:s0 hadoop
system_u:object_r:hadoop_exec_t:s0 hadoop-config.sh
system_u:object_r:hadoop_exec_t:s0 hadoop-daemon.sh
system_u:object_r:hadoop_exec_t:s0 hadoop-daemons.sh
system_u:object_r:hadoop_exec_t:s0 rcc
system_u:object_r:hadoop_exec_t:s0 slaves.sh
system_u:object_r:hadoop_exec_t:s0 start-all.sh
system_u:object_r:hadoop_exec_t:s0 start-balancer.sh
system_u:object_r:hadoop_exec_t:s0 start-dfs.sh
system_u:object_r:hadoop_exec_t:s0 start-mapred.sh
system_u:object_r:hadoop_exec_t:s0 stop-all.sh
system_u:object_r:hadoop_exec_t:s0 stop-balancer.sh
system_u:object_r:hadoop_exec_t:s0 stop-dfs.sh
system_u:object_r:hadoop_exec_t:s0 stop-mapred.sh
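
(For reference, these contexts can be listed with 'ls -Z', e.g.
'ls -Z /etc/init.d/hadoop-0.20-*' and 'ls -Z /usr/lib/hadoop-0.20/bin';
only the context and file name are shown above.)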


Jean Khosalim
Research Associate
Computer Science Department
Naval Postgraduate School
1411 Cunningham Rd, GE-231
Monterey, CA  93943
(831) 656-2222
jkhosali at nps.edu



> -----Original Message-----
> From: Daniel J Walsh [mailto:dwalsh at redhat.com]
> Sent: Wednesday, February 08, 2012 12:40 PM
> Subject: Re: [refpolicy] SELinux policy for Hadoop
>
> [earlier thread quoted in full; trimmed]
>
> What is the path to the daemon executables?  Are they labeled with a
> hadoop*_exec_t type label?

* [refpolicy] SELinux policy for Hadoop
From: Daniel J Walsh @ 2012-02-09 19:02 UTC
  To: refpolicy

On 02/08/2012 04:00 PM, Jean Khosalim wrote:
> [label listing quoted in full; trimmed]
>
> What is the path to the daemon executables?  Are they labeled with
> a hadoop*_exec_t type label?
> 
Ok then which hadoop process is running as initrc_t?

* [refpolicy] SELinux policy for Hadoop
From: Jean Khosalim @ 2012-02-09 19:30 UTC
  To: refpolicy

The following is the output of 'ps auxZ | grep java' (with a portion of
each ps line replaced with '.....' because the lines are too long):

----- Begin output of 'ps auxZ | grep java' ------

system_u:system_r:initrc_t:s0   root      1107  0.0  0.2   7808  2180 ?
S    10:44   0:00 su mapred -s /usr/java/jdk1.6.0_30/bin/java --
-Dproc_tasktracker ..... org.apache.hadoop.mapred.TaskTracker

system_u:system_r:initrc_t:s0   root      1109  0.0  0.2   7812  2188 ?
S    10:44   0:00 su mapred -s /usr/java/jdk1.6.0_30/bin/java --
-Dproc_jobtracker .....  org.apache.hadoop.mapred.JobTracker

system_u:system_r:initrc_t:s0   root      1111  0.0  0.2   7812  2188 ?
S    10:44   0:00 su hdfs -s /usr/java/jdk1.6.0_30/bin/java --
-Dproc_secondarynamenode .....
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode

system_u:system_r:initrc_t:s0   root      1113  0.0  0.2   7812  2192 ?
S    10:44   0:00 su hdfs -s /usr/java/jdk1.6.0_30/bin/java --
-Dproc_datanode .....  org.apache.hadoop.hdfs.server.datanode.DataNode

system_u:system_r:initrc_t:s0   root      1115  0.0  0.2   7812  2184 ?
S    10:44   0:00 su hdfs -s /usr/java/jdk1.6.0_30/bin/java --
-Dproc_namenode .....  org.apache.hadoop.hdfs.server.namenode.NameNode

system_u:system_r:unconfined_java_t:s0 mapred 1130 1.1  4.1 1197024 42552 ?
Sl   10:44   0:06 java -Dproc_jobtracker .....
org.apache.hadoop.mapred.JobTracker

system_u:system_r:unconfined_java_t:s0 hdfs 1131 1.1  6.3 1197864 64808 ?
Sl   10:44   0:05 java -Dproc_namenode .....
org.apache.hadoop.hdfs.server.namenode.NameNode

system_u:system_r:unconfined_java_t:s0 hdfs 1132 1.0  6.1 1191856 62752 ?
Sl   10:44   0:05 java -Dproc_secondarynamenode .....
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode

system_u:system_r:unconfined_java_t:s0 mapred 1133 1.3  4.1 1195780 42856 ?
Sl   10:44   0:07 java -Dproc_tasktracker .....
org.apache.hadoop.mapred.TaskTracker

system_u:system_r:unconfined_java_t:s0 hdfs 1134 1.1  4.1 1194756 42528 ?
Sl   10:44   0:05 java -Dproc_datanode .....
org.apache.hadoop.hdfs.server.datanode.DataNode

----- End output of 'ps auxZ | grep java' ------

Thanks,
Jean Khosalim

> -----Original Message-----
> From: Daniel J Walsh [mailto:dwalsh at redhat.com]
> Sent: Thursday, February 09, 2012 11:03 AM
> Subject: Re: [refpolicy] SELinux policy for Hadoop
>
> [earlier thread quoted in full; trimmed]
>
> Ok then which hadoop process is running as initrc_t?

* [refpolicy] SELinux policy for Hadoop
From: Daniel J Walsh @ 2012-02-09 21:59 UTC
  To: refpolicy


Ok, this looks like the init scripts are executing java directly rather
than going through a shell script.  SELinux relies on transition rules:
when a_t executes a file labeled b_exec_t, the process transitions to
b_t.  So we would have a rule saying

initrc_t -> hadoop_exec_t -> hadoop_t

but you are showing

initrc_t -> java_exec_t -> initrc_t

The way to make this work would be to have a shell script that would
execute the java for each different user, or to use runcon.
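
(In refpolicy terms the transition that should fire is roughly

  domtrans_pattern(initrc_t, hadoop_exec_t, hadoop_t)

i.e. the daemon has to be entered through a hadoop_exec_t-labeled file;
exec'ing the java binary directly never matches that rule.)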



On 02/09/2012 02:30 PM, Jean Khosalim wrote:
> [ps output and earlier thread quoted in full; trimmed]


* [refpolicy] SELinux policy for Hadoop
From: Jean Khosalim @ 2012-02-13 21:26 UTC
  To: refpolicy

Hi Daniel,

Thank you for responding. To try your suggestion, I did the following:
1. First stopped all the services:
   service hadoop-0.20-datanode stop
   service hadoop-0.20-namenode stop
   service hadoop-0.20-secondarynamenode stop
   service hadoop-0.20-jobtracker stop
   service hadoop-0.20-tasktracker stop
   (making sure all Hadoop processes were stopped and that ps no longer
   showed them)
2. Modified /usr/lib/hadoop-0.20/conf/hadoop-env.sh, by adding the following
lines:
   export HADOOP_DATANODE_USER=hdfs
   export HADOOP_NAMENODE_USER=hdfs
   export HADOOP_SECONDARYNAMENODE_USER=hdfs
   export HADOOP_JOBTRACKER_USER=mapred
   export HADOOP_TASKTRACKER_USER=mapred
3. Started the Hadoop processes manually:
   /usr/lib/hadoop-0.20/bin/start-all.sh

But the result of the ps output is still the same, i.e., the processes are
running with unconfined_java_t.

Is this what you meant by the "shell script that would execute the java for
each different user" method?

I am trying to figure out how to use runcon (what arguments to use).

Thanks,
Jean Khosalim


> -----Original Message-----
> From: Daniel J Walsh [mailto:dwalsh at redhat.com]
> Sent: Thursday, February 09, 2012 2:00 PM
> Subject: Re: [refpolicy] SELinux policy for Hadoop
>
> [earlier thread quoted in full; trimmed]
>
> The way to make this work would be to have a shell script that would
> execute the java for each different user, or to use runcon.

* [refpolicy] SELinux policy for Hadoop
From: Daniel J Walsh @ 2012-02-13 21:44 UTC
  To: refpolicy

On 02/13/2012 04:26 PM, Jean Khosalim wrote:
> [steps quoted in full; trimmed]

The problem is that hadoop-0.20-jobtracker is executing java --class
directly, so no transition happens.

If hadoop-0.20-jobtracker executed /usr/bin/hadoop-jobtracker, which
had the java --class invocation within it, then we could label
/usr/bin/hadoop-jobtracker hadoop_exec_t, and the transitions would
happen.
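
A minimal sketch of that wrapper (untested; the path and the elided
java arguments are placeholders):

  #!/bin/sh
  # hypothetical /usr/bin/hadoop-jobtracker
  exec /usr/java/jdk1.6.0_30/bin/java -Dproc_jobtracker ... \
      org.apache.hadoop.mapred.JobTracker

labeled with something like:

  semanage fcontext -a -t hadoop_exec_t /usr/bin/hadoop-jobtracker
  restorecon -v /usr/bin/hadoop-jobtracker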

Alternatively you could attempt

runcon -t hadoop_t -- java --class ...

* [refpolicy] SELinux policy for Hadoop
From: Jean Khosalim @ 2012-02-13 22:25 UTC
  To: refpolicy

I am using Cloudera CDH3 (I followed the instructions found at
https://ccp.cloudera.com/display/CDHDOC/CDH3+Installation to install it).

Using the above installation:

/etc/init.d/hadoop-0.20-jobtracker (labeled
system_u:object_r:hadoop_jobtracker_initrc_exec_t:s0) in its 'start()'
calls:

  daemon /usr/lib/hadoop-0.20/bin/hadoop-daemon.sh --config "/etc/hadoop-0.20/conf" start jobtracker $DAEMON_FLAGS

The script /usr/lib/hadoop-0.20/bin/hadoop-daemon.sh (labeled
system_u:object_r:hadoop_exec_t:s0) in turn calls:

  nice -n $HADOOP_NICENESS "$HADOOP_HOME"/bin/hadoop --config $HADOOP_CONF_DIR $command "$@" < /dev/null

Then the /usr/lib/hadoop-0.20/bin/hadoop script (labeled
system_u:object_r:hadoop_exec_t:s0) invokes java:

  nohup su $HADOOP_DAEMON_USER -s $JAVA -- -Dproc_$COMMAND_JAVA.....


If I try to run:

  runcon -t hadoop_t su hdfs -s /usr/java/jdk1.6.0_30/bin/java -- -Dproc_$COMMAND_JAVA.....

I got:

  runcon: invalid context: unconfined_u:unconfined_r:hadoop_t:s0-s0:c0.c1023: Invalid argument


Thanks,
Jean Khosalim



> -----Original Message-----
> From: Daniel J Walsh [mailto:dwalsh at redhat.com]
> Sent: Monday, February 13, 2012 1:44 PM
> Subject: Re: [refpolicy] SELinux policy for Hadoop
>
> [earlier thread quoted in full; trimmed]
>
> If hadoop-0.20-jobtracker executed /usr/bin/hadoop-jobtracker, which
> had the java --class invocation within it, then we could label
> /usr/bin/hadoop-jobtracker hadoop_exec_t, and the transitions would
> happen.
>
> Alternatively you could attempt
>
> runcon -t hadoop_t -- java --class ...

* [refpolicy] SELinux policy for Hadoop
From: Daniel J Walsh @ 2012-02-14 14:25 UTC
  To: refpolicy


On 02/13/2012 05:25 PM, Jean Khosalim wrote:
> I am using Cloudera CDH3 (I followed instructions found in 
> https://ccp.cloudera.com/display/CDHDOC/CDH3+Installation to
> install it).
> 
> Using the above installation: /etc/init.d/hadoop-0.20-jobtracker
> (labeled system_u:object_r:hadoop_jobtracker_initrc_exec_t:s0) Its
> 'start()' calls: daemon /usr/lib/hadoop-0.20/bin/hadoop-daemon.sh
> --config "/etc/hadoop-0.20/conf" start jobtracker $DAEMON_FLAGS
> 
> 
> The script /usr/lib/hadoop-0.20/bin/hadoop-daemon.sh (labeled 
> system_u:object_r:hadoop_exec_t:s0) in turn calls nice -n
> $HADOOP_NICENESS "$HADOOP_HOME"/bin/hadoop --config
> $HADOOP_CONF_DIR $command "$@" < /dev/null
> 
> 
> Then the /usr/lib/hadoop-0.20/bin/hadoop script (labeled
> system_u:object_r:hadoop_exec_t:s0) invokes java: nohup su
> $HADOOP_DAEMON_USER -s $JAVA -- -Dproc_$COMMAND_JAVA.....
> 
Ok what label does this run as?
> 
> If I try to run: runcon -t hadoop_t su hdfs -s
> /usr/java/jdk1.6.0_30/bin/java -- -Dproc_$COMMAND_JAVA..... I got
> runcon: invalid context:
> unconfined_u:unconfined_r:hadoop_t:s0-s0:c0.c1023: Invalid argument.
> 
Try

runcon system_u:system_r:hadoop_t:s0 su hdfs -s /usr/java/jdk1.6.0_30/bin/java --

> [remainder of quoted thread trimmed]


* [refpolicy] SELinux policy for Hadoop
From: Jean Khosalim @ 2012-02-14 16:24 UTC
  To: refpolicy

> > Then the /usr/lib/hadoop-0.20/bin/hadoop script (labeled
> > system_u:object_r:hadoop_exec_t:s0) invokes java: nohup su
> > $HADOOP_DAEMON_USER -s $JAVA -- -Dproc_$COMMAND_JAVA.....
> >
> Ok what label does this run as?
The 'su' processes seem to run as 'system_u:system_r:initrc_t:s0'.
The actual java processes run as 'system_u:system_r:unconfined_java_t:s0'

(The full 'ps auxZ | grep java' output is the same as in my earlier
message: the 'su' parent processes run as initrc_t and the java daemons
as unconfined_java_t.)

> >
> > If I try to run: runcon -t hadoop_t su hdfs -s
> > /usr/java/jdk1.6.0_30/bin/java -- -Dproc_$COMMAND_JAVA..... I got
> > runcon: invalid context:
> > unconfined_u:unconfined_r:hadoop_t:s0-s0:c0.c1023: Invalid argument.
> >
> Try
> 
> runcon system_u:system_r:hadoop_t:s0 su hdfs -s /usr/java/jdk1.6.0_30/bin/java --
I got the following error when I ran the above:

  runcon: invalid context: system_u:system_r:hadoop_t:s0: Invalid argument
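
(The context string itself is being rejected, presumably because hadoop_t
is not associated with the system_r role in the loaded policy; assuming
the setools package is installed, 'seinfo -t hadoop_t' and
'seinfo -r system_r -x' should show whether the type exists and which
types the role is associated with, though I have not verified this.)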


Thanks,
Jean Khosalim
