* auditd and redhat cluster
@ 2016-02-29 12:45 Maupertuis Philippe
  2016-03-01 13:25 ` Paul Moore
  0 siblings, 1 reply; 8+ messages in thread
From: Maupertuis Philippe @ 2016-02-29 12:45 UTC (permalink / raw)
  To: linux-audit


Hi list,
One cluster fenced the passive node around two hours after auditd was started.
We have found that iowait has increased since auditd was started and was unusually high.
Auditd wasn't generating many messages, and there was no noticeable additional activity on the disk where the audit and syslog files were written.
Besides watches, the only general rules were:
# creation
-a exit,always -F arch=b32 -S creat -S mkdir -S mknod -S link -S symlink -S mkdirat -S mknodat -S linkat -S symlinkat -F uid=root -F success=1 -k creation
-a exit,always -F arch=b64 -S creat -S mkdir -S mknod -S link -S symlink -S mkdirat -S mknodat -S linkat -S symlinkat -F uid=root -F success=1 -k creation
# deletion
-a exit,always -F arch=b32 -S rmdir -S unlink -S unlinkat -F uid=root -F success=1 -k deletion
-a exit,always -F arch=b64 -S rmdir -S unlink -S unlinkat -F uid=root -F success=1 -k deletion
After the reboot we deleted all the rules and didn't notice any extra iowait.

Could these rules be the cause of additional iowait even if they were not generating many events (around 20 in two hours)?
Is there any other auditd mechanism that could explain this phenomenon?

I would appreciate any hints.

Regards
Philippe






^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: auditd and redhat cluster
  2016-02-29 12:45 auditd and redhat cluster Maupertuis Philippe
@ 2016-03-01 13:25 ` Paul Moore
  2016-03-01 13:57   ` Maupertuis Philippe
  0 siblings, 1 reply; 8+ messages in thread
From: Paul Moore @ 2016-03-01 13:25 UTC (permalink / raw)
  To: Maupertuis Philippe; +Cc: linux-audit

On Mon, Feb 29, 2016 at 7:45 AM, Maupertuis Philippe
<philippe.maupertuis@worldline.com> wrote:
> Hi list,
>
> One cluster fenced the passive node around two hours after auditd was
> started.
>
> We have found that iowait has increased since auditd was started and was
> unusually high.
>
> Auditd wasn’t generating many messages, and there was no noticeable
> additional activity on the disk where the audit and syslog files were written.
>
> Besides watches, the only general rules were :
>
> # creation
> -a exit,always -F arch=b32 -S creat -S mkdir -S mknod -S link -S symlink -S
> mkdirat -S mknodat -S linkat -S symlinkat -F uid=root -F success=1 -k
> creation
> -a exit,always -F arch=b64 -S creat -S mkdir -S mknod -S link -S symlink -S
> mkdirat -S mknodat -S linkat -S symlinkat -F uid=root -F success=1 -k
> creation
>
> # deletion
> -a exit,always -F arch=b32 -S rmdir -S unlink -S unlinkat -F uid=root -F
> success=1 -k deletion
> -a exit,always -F arch=b64 -S rmdir -S unlink -S unlinkat -F uid=root -F
> success=1 -k deletion
>
> After the reboot we deleted all the rules and didn’t notice any extra iowait.
>
> Could these rules be the cause of additional iowait even if not generating
> many events (around 20 in two hours) ?
>
> Is there any other auditd mechanism  that could explain this phenomenon ?
>
> I would appreciate any hints.

Hi Philippe,

First, as this is a RH cluster product, I would suggest contacting RH
support with your question if you haven't already; this list is
primarily for upstream development and support.

If you are able to experiment with the system, or have a test
environment, I would suggest trying to narrow down the list of audit
rules/watches to see which rules/watches have the most effect on the
iowait times.  You've listed four rules, but you didn't list the
watches you have configured.  Also, what kernel version are you using?
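
For example, assuming the rules are kept in /etc/audit/audit.rules, something
along these lines lets you load them in batches and watch iowait between
batches (the batch size and temporary file name below are only placeholders):

auditctl -D                          # drop all currently loaded rules and watches
head -n 20 /etc/audit/audit.rules > /tmp/audit-test.rules
auditctl -R /tmp/audit-test.rules    # load only the first batch
iostat -x 5 12                       # watch %iowait for a minute before adding the next batch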

-- 
paul moore
www.paul-moore.com

--
Linux-audit mailing list
Linux-audit@redhat.com
https://www.redhat.com/mailman/listinfo/linux-audit

^ permalink raw reply	[flat|nested] 8+ messages in thread

* RE: auditd and redhat cluster
  2016-03-01 13:25 ` Paul Moore
@ 2016-03-01 13:57   ` Maupertuis Philippe
  2016-03-01 14:14     ` Steve Grubb
  0 siblings, 1 reply; 8+ messages in thread
From: Maupertuis Philippe @ 2016-03-01 13:57 UTC (permalink / raw)
  To: Paul Moore, linux-audit

The kernel is: 2.6.32-573.12.1.el6.x86_64
And the whole audit.rules file is:
-D
-i
-b 8192
-a exit,never -F arch=b32 -F dir=/tmp/
-a exit,never -F arch=b64 -F dir=/tmp/
-a exit,never -F arch=b32 -F dir=/dev/shm/
-a exit,never -F arch=b64 -F dir=/dev/shm/
-a exit,never -F arch=b32 -F dir=/var/lock/lvm/
-a exit,never -F arch=b64 -F dir=/var/lock/lvm/
-w /sbin/agetty -p x -k console_access
-w /sbin/mingetty -p x -k console_access
-w /var/log/audit/ -k audit_logs
-w /var/log/secure -k audit_logs
-a exit,always -F arch=b32 -S creat -S mkdir -S mknod -S link -S symlink -S mkdirat -S mknodat -S linkat -S symlinkat -F uid=root -F success=1 -k creation
-a exit,always -F arch=b64 -S creat -S mkdir -S mknod -S link -S symlink -S mkdirat -S mknodat -S linkat -S symlinkat -F uid=root -F success=1 -k creation
-a exit,always -F arch=b32 -S rmdir -S unlink -S unlinkat -F uid=root -F success=1 -k deletion
-a exit,always -F arch=b64 -S rmdir -S unlink -S unlinkat -F uid=root -F success=1 -k deletion
-a always,exit -F arch=b32 -S adjtimex -S settimeofday -S stime -k time_change
-a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time_change
-w /etc/localtime -p wa -k time_change
-w /etc/group -p wa -k identity
-w /etc/passwd -p wa -k identity
-w /etc/gshadow -p wa -k identity
-w /etc/shadow -p wa -k identity
-w /etc/security/opasswd -p wa -k identity
-w /etc/cron.allow -p wa -k system_files
-w /etc/ntp.conf -p wa -k system_files
-w /etc/ssh/sshd_config -p wa -k system_files
-w /etc/hosts -p wa -k system_files
-w /etc/resolv.conf -p wa -k system_files
-w /etc/audit.rules -p wa -k system_files
-w /etc/auditd.conf -p wa -k system_files
-w /etc/rsyslog.conf -p wa -k system_files
-a exit,always -F arch=b32 -S sethostname -k system_locale
-a exit,always -F arch=b64 -S sethostname -k system_locale
-w /etc/issue -p wa -k system_locale
-w /etc/issue.net -p wa -k system_locale
-w /etc/hosts -p wa -k system_locale
-w /etc/sysconfig/network -p wa -k system_locale
-w /etc/sudoers -p wa -k actions
-w /root/.ssh/authorized_keys -p wa -k ssh_files
-w /home/admnet/.ssh/authorized_keys -p wa -k ssh_files
-w /home/system/.ssh/authorized_keys -p war -k ssh_files
-w /home/oper/.ssh/authorized_keys -p wa -k ssh_files
-w /home/sprod/.ssh/authorized_keys -p wa -k ssh_files
-w /home/www/.ssh/authorized_keys -p wa -k ssh_files
-w /home/integ/.ssh/authorized_keys -p wa -k ssh_files
-w /home/stat/.ssh/authorized_keys -p wa -k ssh_files
-w /home/reference/.ssh/authorized_keys -p wa -k ssh_files
-w /bin/chown -p x -k system_commands
-w /usr/local/sbin/tcpdump -p x -k system_commands
-w /usr/bin/passwd -p x -k system_commands
-w /usr/sbin/useradd -p x -k system_commands
-w /usr/sbin/usermod -p x -k system_commands
-w /bin/chgrp -p x -k system_commands
-w /sbin/route -p x -k system_commands
-w /sbin/shutdown -p x -k system_commands
-w /sbin/reboot -p x -k system_commands
-w /sbin/sysctl -p x -k system_commands
-w /sbin/ifconfig -p x -k system_commands
-w /usr/sbin/visudo -p x -k system_commands
-w /usr/bin/crontab -p x -k system_commands
-w /bin/chmod -p x -k system_commands
-w /bin/su -p x -k system_commands
-w /bin/env -p x -k system_commands
-w /sbin/auditctl -p x -k system_commands
-w /bin/mount -p x -k system_commands
-w /bin/umount -p x -k system_commands
-w /bin/ping6 -p x -k system_commands
-w /bin/ping -p x -k system_commands
-w /sbin/pam_timestamp_check -p x -k system_commands
-w /sbin/netreport -p x -k system_commands
-w /sbin/unix_chkpwd -p x -k system_commands
-w /sbin/mount.nfs -p x -k system_commands
-w /sbin/rmmod -p x -k modules
-w /sbin/modprobe -p x -k modules
-a exit,always -F arch=b64 -S init_module -S delete_module -k modules
-a exit,always -F arch=b32 -S init_module -S delete_module -k modules
-a exit,always -F arch=b64 -S open -S openat -F exit=-EPERM -k rights
-a exit,always -F arch=b32 -S open -S openat -F exit=-EPERM -k rights
-a exit,always -F arch=b64 -S ptrace -k info_scan
-a exit,always -F arch=b32 -S ptrace -k info_scan

During the hour preceding the fence we got these events from the passive node:
Key Summary Report
===========================
total  key
===========================
891  system_commands (ping)

And on the active node:

Key Summary Report
===========================
total  key
===========================
1330  system_commands
286  deletion
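
A summary in this form can be generated with aureport, e.g. for a given window
(the timestamps below are only placeholders, and the date format follows the
system locale):

aureport -k --summary -ts 02/29/2016 10:00:00 -te 02/29/2016 12:00:00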

I am going to follow your advice and open a call with Red Hat.
Anyway, I am interested in knowing whether auditd has been reported to cause trouble even when it is not generating many events.

Regards
Philippe



--
Linux-audit mailing list
Linux-audit@redhat.com
https://www.redhat.com/mailman/listinfo/linux-audit

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: auditd and redhat cluster
  2016-03-01 13:57   ` Maupertuis Philippe
@ 2016-03-01 14:14     ` Steve Grubb
  2016-03-01 21:25       ` Burn Alting
  0 siblings, 1 reply; 8+ messages in thread
From: Steve Grubb @ 2016-03-01 14:14 UTC (permalink / raw)
  To: linux-audit; +Cc: Maupertuis Philippe

On Tuesday, March 01, 2016 02:57:45 PM Maupertuis Philippe wrote:
> The kernel is: 2.6.32-573.12.1.el6.x86_64
> And the whole audit.rules file is:

<snip>

> During the hour preceding the fence we got these events from the passive node:
> Key Summary Report
> ===========================
> total  key
> ===========================
> 891  system_commands (ping)
> 
> And on the active node:
> 
> Key Summary Report
> ===========================
> total  key
> ===========================
> 1330  system_commands
> 286  deletion
> 
> I am going to follow your advice and open a call with Red Hat.
> Anyway, I am interested in knowing whether auditd has been reported to cause
> trouble even when it is not generating many events.

Those numbers work out to 27 events per minute. That's not really a lot of
events. To see whether it's the rules or auditd causing the iowait, you might set
the logging format to NOLOG. This will discard events rather than log them. If
you still have iowait, it's something to do with the rules. If that clears it
up, then auditd might be the source. Either way, put the format back to RAW afterwards.

I did some benchmarking of auditd over the holidays and posted some results 
here:

https://www.redhat.com/archives/linux-audit/2015-December/msg00061.html

I'd recommend:

flush = incremental
freq = 100

for a modest performance improvement.
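
A minimal auditd.conf sketch for that test, assuming everything else stays at
your current values:

# /etc/audit/auditd.conf  (temporary, for the test only)
log_format = NOLOG
flush = incremental
freq = 100

Restart the daemon with 'service auditd restart' and watch iowait again;
remember to set log_format = RAW afterwards, otherwise nothing is written to
the audit log.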

-Steve


--
Linux-audit mailing list
Linux-audit@redhat.com
https://www.redhat.com/mailman/listinfo/linux-audit

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: auditd and redhat cluster
  2016-03-01 14:14     ` Steve Grubb
@ 2016-03-01 21:25       ` Burn Alting
  2016-03-01 21:53         ` Paul Moore
  2016-03-09  9:44         ` Maupertuis Philippe
  0 siblings, 2 replies; 8+ messages in thread
From: Burn Alting @ 2016-03-01 21:25 UTC (permalink / raw)
  To: Steve Grubb; +Cc: Maupertuis Philippe, linux-audit

Philippe,

What does a perf top show?

Do you see get_task_cred or audit_filter_rules as high consumers? If
they are high, then try turning off the monitoring of the /tmp, /dev/shm
and /var/lock/lvm trees or, if appropriate, switch to monitoring via a
path directive if you don't need to monitor the entire tree.
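
For example, something like this is usually enough to see where the kernel
time is going (the 30 second window is arbitrary):

perf top                          # live view of the hottest symbols
perf record -a -g -- sleep 30     # or record system-wide for ~30 seconds
perf report --stdio | grep -E 'get_task_cred|audit_filter_rules'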


Steve, Paul,

I have yet to put together a bug report, or to research whether the
problem exists upstream, but I have discovered that recursive directory rules
can be expensive in the kernel. With the rules below, a system running
rabbitmq can see get_task_cred and audit_filter_rules above 10% each.

-w /etc/pam.d -p wa -k PAM_Mods
-w /boot -k BOOT_Mods
-w /boot/grub/grub.conf -p war -k BOOT_Mods
-w /etc/security -p wa -k Security_Mods
-w /etc/sysconfig -p wa -k Sysconfig_Mods
-w /etc/ld.so.conf.d -p wa -k Library_Mods
-w /etc/inittab -p wa -k StartUp_Mods
-w /etc/rc.d -p wa -k StartUp_Mods

Regards



--
Linux-audit mailing list
Linux-audit@redhat.com
https://www.redhat.com/mailman/listinfo/linux-audit

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: auditd and redhat cluster
  2016-03-01 21:25       ` Burn Alting
@ 2016-03-01 21:53         ` Paul Moore
  2016-03-02  9:16           ` Burn Alting
  2016-03-09  9:44         ` Maupertuis Philippe
  1 sibling, 1 reply; 8+ messages in thread
From: Paul Moore @ 2016-03-01 21:53 UTC (permalink / raw)
  To: burn; +Cc: rgb, Maupertuis Philippe, linux-audit

On Tue, Mar 1, 2016 at 4:25 PM, Burn Alting <burn@swtf.dyndns.org> wrote:
> Steve, Paul,
>
> I have yet to put together a bug report, or to research whether the
> problem exists upstream, but I have discovered that recursive directory
> rules can be expensive in the kernel. With the rules below, a system
> running rabbitmq can see get_task_cred and audit_filter_rules above 10% each.
>
> -w /etc/pam.d -p wa -k PAM_Mods
> -w /boot -k BOOT_Mods
> -w /boot/grub/grub.conf -p war -k BOOT_Mods
> -w /etc/security -p wa -k Security_Mods
> -w /etc/sysconfig -p wa -k Sysconfig_Mods
> -w /etc/ld.so.conf.d -p wa -k Library_Mods
> -w /etc/inittab -p wa -k StartUp_Mods
> -w /etc/rc.d -p wa -k StartUp_Mods

Some of the work that Richard did with fsnotify for audit-by-exec
could be used to help make filesystem watches much more efficient,
especially in the case where you are watching a lot of files in a common
directory.

-- 
paul moore
www.paul-moore.com

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: auditd and redhat cluster
  2016-03-01 21:53         ` Paul Moore
@ 2016-03-02  9:16           ` Burn Alting
  0 siblings, 0 replies; 8+ messages in thread
From: Burn Alting @ 2016-03-02  9:16 UTC (permalink / raw)
  To: Paul Moore; +Cc: rgb, Maupertuis Philippe, linux-audit

On Tue, 2016-03-01 at 16:53 -0500, Paul Moore wrote:
> On Tue, Mar 1, 2016 at 4:25 PM, Burn Alting <burn@swtf.dyndns.org> wrote:
> > Steve, Paul,
> >
> > I have yet to put together a bug report, or to research whether the
> > problem exists upstream, but I have discovered that recursive directory
> > rules can be expensive in the kernel. With the rules below, a system
> > running rabbitmq can see get_task_cred and audit_filter_rules above 10% each.
> >
> > -w /etc/pam.d -p wa -k PAM_Mods
> > -w /boot -k BOOT_Mods
> > -w /boot/grub/grub.conf -p war -k BOOT_Mods
> > -w /etc/security -p wa -k Security_Mods
> > -w /etc/sysconfig -p wa -k Sysconfig_Mods
> > -w /etc/ld.so.conf.d -p wa -k Library_Mods
> > -w /etc/inittab -p wa -k StartUp_Mods
> > -w /etc/rc.d -p wa -k StartUp_Mods
> 
> Some of the work that Richard did with fsnotify for audit-by-exec
> could be used to help make filesystem watches much more efficient,
> especially in the case where you are watching a lot of files in a common
> directory.

Interestingly, if we convert all of the above into possibly hundreds of
specific file watches (one for each file in the trees at a given time), the
system no longer takes a hit.
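
A rough way to generate such a per-file list for one of the trees above,
purely as a sketch (the output file name is a placeholder; the path and key
are taken from my rules):

# one watch per regular file currently present under the tree
find /etc/sysconfig -type f -printf '-w %p -p wa -k Sysconfig_Mods\n' > /tmp/sysconfig-watches.rules
auditctl -R /tmp/sysconfig-watches.rules

The obvious trade-off is that files created after the rules are loaded are not
covered, whereas the recursive -w form does pick them up.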

Again, as soon as I can, I will produce a test configuration.

I will be interested in Philippe's results, if he is able to test my
suggestion.

Rgds

^ permalink raw reply	[flat|nested] 8+ messages in thread

* RE: auditd and redhat cluster
  2016-03-01 21:25       ` Burn Alting
  2016-03-01 21:53         ` Paul Moore
@ 2016-03-09  9:44         ` Maupertuis Philippe
  1 sibling, 0 replies; 8+ messages in thread
From: Maupertuis Philippe @ 2016-03-09  9:44 UTC (permalink / raw)
  To: burn, Steve Grubb; +Cc: linux-audit

Sorry for the delayed answer.
We restarted auditd on both nodes of the cluster, taking snapshots with perf top several times before and after.
The auditd processes were higher on the passive node (15% combined), but that is probably a sampling effect since the server was mostly idle.
On the active node, audit_filter_rules was around 1% and get_task_cred around 0.8%.
I will try to replicate the settings in a test environment to have more leeway in playing with the rules.
Regards
Philippe

--
Linux-audit mailing list
Linux-audit@redhat.com
https://www.redhat.com/mailman/listinfo/linux-audit

^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2016-03-09  9:44 UTC | newest]

Thread overview: 8+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-02-29 12:45 auditd and redhat cluster Maupertuis Philippe
2016-03-01 13:25 ` Paul Moore
2016-03-01 13:57   ` Maupertuis Philippe
2016-03-01 14:14     ` Steve Grubb
2016-03-01 21:25       ` Burn Alting
2016-03-01 21:53         ` Paul Moore
2016-03-02  9:16           ` Burn Alting
2016-03-09  9:44         ` Maupertuis Philippe
