# How to generate a facts.yaml file using Puppet only once a day versus every 30 minutes?

I am referencing the following code mentioned in https://docs.puppetlabs.com/mcollecti...

```puppet
# /etc/puppet/manifests/site.pp
file { '/etc/mcollective/facts.yaml':
  owner    => 'root',
  group    => 'root',
  mode     => '0400',
  loglevel => debug, # reduce noise in Puppet reports
  # exclude rapidly changing facts
  content  => inline_template("<%= scope.to_hash.reject { |k,v| k.to_s =~ /(uptime_seconds|timestamp|free)/ }.to_yaml %>"),
}
```


Trying to apply this to Windows clients, I had to modify it this way:

```puppet
file { 'c:/mcollective/etc/facts.yaml':
  loglevel => debug,
  # exclude rapidly changing facts
  content  => inline_template("<%= scope.to_hash.reject { |k,v| k.to_s =~ /(uptime_seconds|timestamp|free)/ }.to_yaml %>"),
}
```


Now this works fine for generating the file; however, it registers a 'change' on the system every time Puppet runs. Is there a way to have this applied once every day, or on some other interval? We only need the file updated for MCollective reporting when something changes, and even with the exclusions it shouldn't need to update unless there is a major change.


What about the `schedule` resource and metaparameter? This _prevents_ a resource from being applied more often than the schedule specifies.

(2015-04-11 08:53:16 -0600)
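To expand on that comment: Puppet's `schedule` metaparameter limits how often a resource is applied. A minimal sketch (using Puppet's built-in `daily` schedule; not tested against the original poster's setup):

```puppet
# The built-in 'daily' schedule allows the resource to be applied
# at most once per day; a custom window could also be declared
# with a schedule resource if finer control is needed.
file { '/etc/mcollective/facts.yaml':
  owner    => 'root',
  group    => 'root',
  mode     => '0400',
  schedule => 'daily', # apply this resource at most once per day
  content  => inline_template("<%= scope.to_hash.reject { |k,v| k.to_s =~ /(uptime_seconds|timestamp|free)/ }.to_yaml %>"),
}
```

Note that a schedule only restricts *when* the resource can be applied; on the runs where it is applied, a changed template still reports a change.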


I'm no Ruby expert, so I'm not 100% sure why the version of the inline_template included below works while the one in the documentation doesn't. However, I have tested this version, and it seems to work on my PE 3.7 system.

```puppet
# /etc/puppet/manifests/site.pp
file { '/etc/mcollective/facts.yaml':
  owner    => 'root',
  group    => 'root',
  mode     => '0400',
  loglevel => debug, # reduce noise in Puppet reports
  content  => inline_template('<%= scope.to_hash.reject { |k,v| !( k.is_a?(String) && v.is_a?(String) && k !~ /(uptime|free|timestamp)/ ) }.to_yaml %>'),
}


I cribbed the code from http://www.puppetcookbook.com/posts/see-all-client-variables.html and added the filtering to that.
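To see what that `reject` does outside Puppet, here is a standalone Ruby sketch (the sample hash is invented; a real `scope.to_hash` is much larger): only pairs whose key *and* value are plain Strings survive, and anything matching the volatile-fact regex is dropped.

```ruby
require 'yaml'

# Illustrative stand-in for scope.to_hash; real scopes also contain
# non-String keys and values, which this filter strips out.
scope_hash = {
  'osfamily'       => 'RedHat',
  'uptime_seconds' => '123456',          # volatile: matches /uptime/
  'memoryfree'     => '1.2 GB',          # volatile: matches /free/
  :caller_id       => 'site',            # non-String key: rejected
  'partitions'     => { 'sda1' => {} },  # non-String value: rejected
}

stable = scope_hash.reject do |k, v|
  !(k.is_a?(String) && v.is_a?(String) && k !~ /(uptime|free|timestamp)/)
end

puts stable.to_yaml
```

Rejecting non-String values is plausibly why this version behaves better: structured or object-valued entries can serialize differently from run to run, which would register as a file change.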


Hi, I want to reject the "7-Zip" software fact from the above inline_template, as it starts with a digit (7), which causes a problem when executing this template. Can someone help me avoid this? I am new to Ruby.

(2016-06-25 14:34:16 -0600)
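One possible approach to that question (a sketch with an invented fact hash, not the original poster's solution) is to additionally reject any key that begins with a digit:

```ruby
facts = {
  'osfamily' => 'RedHat',
  '7-Zip'    => '9.20',  # key starting with a digit
}

# Reject volatile facts, plus any key beginning with a digit
# (\A anchors the match to the start of the string).
filtered = facts.reject do |k, v|
  k.to_s =~ /(uptime_seconds|timestamp|free)/ || k.to_s =~ /\A\d/
end
```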

I use Puppet 3.8 with Ruby 1.8.7 on a Linux box running CentOS 6.

I wrote my facts.yaml.erb:

```erb
<%=
# Remove dynamic facts.
# The original list of items to remove:
# h_facts = scope.to_hash.reject{|k,v| k.to_s =~ /^(uptime.*|rubysitedir|_timestamp|memoryfree.*|swapfree.*|title|name|caller_module_name|module_name)$/}
# This is my own list of ignored items:
h_facts = scope.to_hash.reject{|k,v| k.to_s =~ /^(uptime.*|rubysitedir|_timestamp|memoryfree.*|swapfree.*|title|name|caller_module_name|module_name|ec2_.*|mc_Packages|sshfp_dsa|sshfp_rsa|sshrsakey|system_uptime)$/}

# Recursively emit the hash as YAML with keys in sorted order,
# so the output is deterministic from run to run.
def hash_to_yaml_sort(h_facts, wc = 2)
  output = ''
  if wc == 2 then
    output = "---\n"
  end

  # Line-leading whitespace for this nesting level.
  lg = ' ' * wc

  h_facts.keys.sort.map do |k|
    if h_facts[k].is_a?(Hash) then
      output += "#{lg}#{k}: \n" + hash_to_yaml_sort(h_facts[k], wc + 2)
    elsif h_facts[k].is_a?(Array) then
      output += "#{lg}#{k}: \n" + h_facts[k].sort.map{|x| "#{lg}  - #{x}\n"}.join
    elsif h_facts[k].is_a?(String) then
      output += "#{lg}#{k}: \"#{h_facts[k]}\"\n"
    else
      output += "#{lg}#{k}: #{h_facts[k]}\n"
    end
  end
  return output
end

# Render the filtered facts as YAML.
hash_to_yaml_sort(h_facts)
%>
```


You can try it; the output never changes.
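The reason the output is stable is the key sorting. Extracted from the template into plain Ruby (with a made-up fact hash), two hashes holding the same pairs in different orders serialize to byte-identical YAML:

```ruby
# Same serializer as in the ERB template above, lifted out for testing.
def hash_to_yaml_sort(h_facts, wc = 2)
  output = ''
  output = "---\n" if wc == 2
  lg = ' ' * wc  # indentation for this nesting level
  h_facts.keys.sort.each do |k|
    if h_facts[k].is_a?(Hash)
      output += "#{lg}#{k}: \n" + hash_to_yaml_sort(h_facts[k], wc + 2)
    elsif h_facts[k].is_a?(Array)
      output += "#{lg}#{k}: \n" + h_facts[k].sort.map { |x| "#{lg}  - #{x}\n" }.join
    elsif h_facts[k].is_a?(String)
      output += "#{lg}#{k}: \"#{h_facts[k]}\"\n"
    else
      output += "#{lg}#{k}: #{h_facts[k]}\n"
    end
  end
  output
end

# Same pairs, different insertion order -> identical output.
a = { 'b' => '2', 'a' => '1', 'list' => ['y', 'x'] }
b = { 'list' => ['x', 'y'], 'a' => '1', 'b' => '2' }
puts hash_to_yaml_sort(a) == hash_to_yaml_sort(b)  # true
```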

One more thing: the ec2_* facts confuse me. Sometimes they exist, sometimes they don't, and I don't know why, so I ignore all of them.


The one in the documentation works fine for me. I found my solution by adding additional filters, in this case `uptime_hours|uptime|id`, since those change more often than I need to worry about, fact-wise, for MCollective.

Looking at the facts.yaml file, those were the only other entries likely to regenerate facts.yaml unnecessarily in the given time period. I don't need the file to tell me who is logged in or the uptime when I can get that another way that doesn't register a constant change on the nodes.
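As a standalone Ruby sketch of that extended filter (the sample facts are invented), note one caveat: because the regex is unanchored, the `id` alternative also drops any fact whose name merely *contains* "id":

```ruby
facts = {
  'osfamily'     => 'RedHat',           # kept
  'uptime_hours' => '42',               # dropped: matches /uptime/
  'id'           => 'root',             # dropped: matches /id/
  'timestamp'    => 'Sat Apr 11 2015',  # dropped: matches /timestamp/
}

# Original exclusions plus the answerer's additions.
filtered = facts.reject do |k, v|
  k.to_s =~ /(uptime_seconds|timestamp|free|uptime_hours|uptime|id)/
end
```

Anchoring the pattern (e.g. `/^(…)$/`, as in the ERB answer above) avoids accidentally excluding unrelated facts.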
