MCollective Cheat Sheet

# Print all nodes with filemgr plugin installed
mco find --with-agent filemgr

# Print all nodes where the last puppet run took more than 50 seconds
mco find -S "resource().total_time>50"

# Complex hostname filters
mco find -S "hostname=/(?i:hostname77[4-9])/"
mco find -S "hostname=/(?i:hostname77+)/"
mco find -S "hostname=/(?i:hostname(72|79|85|84))/"
mco find -S '!hostname=/(?i:hostname[12]7*)/'
mco find -S "not hostname=/(?i-mx:hostname[12]dev7*)/"
mco find -S "hostname=/(?i:hostname(\d))/"
mco find -S "hostname=/(?i:hostname{3,})/"
mco find -S "hostname=/(?i-mx:hostname([2-5]|0|7))/"

# Find hosts by md5sum of /etc/hosts
mco find -S "fstat('/etc/hosts').md5=/462528a8bfcd7/"
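To get a checksum fragment to match against, you can compute the md5 of the file locally first. A quick sketch (taking the first 13 hex characters, the same length as the fragment in the filter above):

```shell
# Print the first 13 characters of the md5 of /etc/hosts,
# suitable for use as the regex fragment in the -S filter
md5sum /etc/hosts | awk '{print substr($1, 1, 13)}'
```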

# Get status of a file on matching nodes; can also use touch or remove instead of status
mco filemgr -f /etc/hosts status -S "hostname=hostname1" -d

# Get a directory listing (here of /tmp)
mco rpc filemgr status file=/tmp dirlist=true

# Get status of the bis service on matching nodes; can also use stop or start instead of status
mco rpc service status service=bis -S "hostname=hostname1"

# Restart the httpd service in batches of two, pausing 10 seconds between batches, in dev
mco rpc service restart service=httpd --batch 2 --batch-sleep 10 -S "environment=dev"

# Get status of the puppet agent on matching nodes
mco puppet status -S "hostname=hostname1"

# Run the puppet agent in noop mode, limited to the mcollective tag, on matching nodes
mco puppet runonce --noop --tags mcollective -S "hostname=hostname1"

# Run the puppet agent in noop mode with the mcollective tag, 5 nodes at a time, in dev. Use this when running the puppet agent on more than 10 nodes at once.
mco puppet runall 5 --noop --tags mcollective -S "environment=dev"

# Get last run summary
mco rpc puppet last_run_summary -S "hostname=hostname1"

# Get last run summary including the last log details
mco rpc puppet last_run_summary parse_log=true -S "hostname=hostname1"

# List yum check updates, 20 nodes at a time
mco rpc package checkupdates --batch 20 -S "environment=dev" | egrep -v ":"

# Get service info, including chkconfig settings
mco rpc puppetral find type=service title='ntpd' -S "environment=dev and /ntp/"

# Retrieve os_family for all dev nodes; use -v for per-node details
mco facts os_family -S "environment=dev"


Moving Cron Jobs into Hiera

I recently received a request from one of our customers to add a few cron jobs to help them during a migration. Upon looking at our 'crontab' module, I noticed a separate manifest for each node, with the class name being the node's hostname. *sigh* To make matters worse, our hostnames are mixed case, and since class names have to be lowercase I couldn't even temporarily use the current method.

The solution was to move this information into Hiera: that way the code would be the same for all hosts, but the actual cron entries could be included at the node, role, or environment level. There does seem to be a trade-off between readability and flexibility, however. On the one hand it's incredibly flexible, but someone just looking at the manifest won't know whether an entry applies at the node, role, or environment level without some searching. So we've basically decided that if a job would require a conditional statement to make it work, it's better off in Hiera, while jobs for all hosts can be left in separate manifests and included in init.pp.

init.pp:

class crontab (
  # Default to an empty hash so create_resources still gets a valid
  # argument on nodes with no entries defined in Hiera
  $crontab_entries = hiera('crontab_entries', {})
) {
  create_resources(cron, $crontab_entries)
}

 

YAML file:

crontab_entries:
  script_cron:
    ensure: 'present'
    command: '/home/foo/script.sh &>/dev/null'
    user: 'root'
    minute: '0'
    hour: '*/3'
  script1_cron:
    ensure: 'present'
    command: '/home/foo/script1.sh &>/dev/null'
    user: 'root'
    minute: '0'
    hour: '*/1'
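For reference, create_resources turns each key of that hash into a cron resource, so the script_cron entry above is equivalent to declaring the following by hand (a sketch of the expansion, not extra code you need to write):

```
cron { 'script_cron':
  ensure  => present,
  command => '/home/foo/script.sh &>/dev/null',
  user    => 'root',
  minute  => '0',
  hour    => '*/3',
}
```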

 

 

Refreshing Facts in MCollective

By default, Puppet Enterprise uses the pe-mcollective-metadata cron job to refresh Facter information every 15 minutes. This is usually sufficient for the default facts, but as we started adding custom facts for things like application versions, we noticed during upgrades or patching that we'd have to wait up to 15 minutes for our facts to be updated.

We ended up disabling the default cron job and creating a custom one that runs every minute. Before doing this, it's a good idea to look at how long each fact takes to run and factor (pun intended) in the additional CPU and memory usage from running it more frequently.


Facter has a built-in timing flag:


facter -t

A quick one-liner to display facts taking the longest, in milliseconds:


facter -t -p 2>&1 >/dev/null | awk '{print $NF "\t" $1}' | sort -g && echo -e "\x1b[39;49;00m"
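Note the redirection order in that one-liner: `2>&1 >/dev/null` first duplicates stderr onto the pipe, and only then discards stdout, so Facter's timing output (written to stderr) is what reaches awk. A minimal demonstration of the trick:

```shell
# stderr is pointed at stdout's *current* target (the pipe/terminal),
# then stdout alone is discarded -- only "err" survives
{ echo out; echo err >&2; } 2>&1 >/dev/null
```

Swapping the order to `>/dev/null 2>&1` would silence both streams.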


The default Puppet Enterprise cron job needs to be disabled; we put this in our common.yaml in Hiera:


puppet_enterprise::profile::mcollective::agent::manage_metadata_cron: false


We created a separate class just for this cron:

class crontab::mcollective_metadata {

  # Disable the default mcollective cron job
  cron { 'pe-mcollective-metadata':
    ensure => absent,
  }

  # Control the frequency of the mcollective facter updates;
  # leaving minute unmanaged (absent) makes the job run every minute
  cron { 'pe-mcollective-metadata-custom':
    ensure  => present,
    command => '/opt/puppet/sbin/refresh-mcollective-metadata 2>&1 >>/var/log/pe-mcollective/mcollective-metadata-cron.log',
    user    => 'root',
    minute  => absent,
  }
}

Run puppet agent and apply the custom mcollective cron:

[root@hostname ~]# crontab -l
# HEADER: This file was autogenerated at 2016-03-23 17:43:02 -0700 by puppet.
# HEADER: While it can still be managed manually, it is definitely not recommended.
# HEADER: Note particularly that the comments starting with 'Puppet Name' should
# HEADER: not be deleted, as doing so could cause duplicate cron jobs.
# Puppet Name: pe-mcollective-metadata-custom
* * * * * /opt/puppet/sbin/refresh-mcollective-metadata 2>&1 >>/var/log/pe-mcollective/mcollective-metadata-cron.log