
Linux patch management via Puppet.

asked 2014-05-16 10:22:55 -0600 by bbull

I am new to Puppet and am interested in using Puppet for Linux patch management. Can you share your experiences with Linux patch management via Puppet and/or provide links to Puppet documentation referring to patch management via Puppet?


3 Answers


answered 2014-05-16 11:21:17 -0600 by Ancillas

Puppet works well for managing a finite list of packages, but it is not the right tool for ensuring that monthly security patches are properly installed.

The reasons why Puppet is not the right tool for patch management are:

  • Puppet will not track or audit what patches are applied
  • Puppet is designed to enforce the state that you declare. If you run an apt-get upgrade (or the equivalent) via Puppet in your test environment, and the community releases a new patch before you upgrade production, then Puppet will effectively apply a different state in production. You will have no visibility into that state difference via Puppet.

A system that might work (I've never done it this way), would be to create your own mirrors of the package repositories, and control when you update those mirrors. You could then have Puppet update packages on a rough schedule with an Exec resource.
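That scheduled-update idea can be sketched as a Puppet manifest. This is only an illustration of the approach, not something from the answer; the resource names and the maintenance window are my own assumptions.

```puppet
# Restrict the update run to a weekly maintenance window.
schedule { 'patch_window':
  weekday => 'Saturday',
  range   => '02:00 - 04:00',
  repeat  => 1,
}

# Pull whatever the internal mirror currently offers. Because the
# mirror is frozen between refreshes, this is repeatable across runs.
exec { 'yum_update':
  command  => '/usr/bin/yum -y update',
  schedule => 'patch_window',
  timeout  => 1800,
}
```

The key point is that the mirror, not Puppet, decides which package versions are available; the Exec just applies them.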

With that type of setup, you'd control which packages are available via your mirroring setup, and then Puppet would simply apply whatever was available.



I've looked at Puppet from every angle to see how it could manage patches, and it doesn't do the job the way you want it to, so you really need another tool. I recommend Pulp for mirroring/freezing repositories. Only RHEL/CentOS/Fedora is supported currently, though, I believe.

banjer ( 2014-05-19 08:33:44 -0600 )

answered 2014-05-18 14:11:11 -0600 by Azul

We use a similar approach to what is mentioned here; I have used it with Puppet, Ansible, and Chef. You deploy one server that contains a copy of the upstream package repositories you use (yum, apt, Python modules, rubygems). This server runs a web server to serve those packages from a snapshot (which can be daily, or created ad hoc).

The approach I use in my CI/CD pipeline: when I commit code, my Jenkins box creates a new snapshot on my 'package repository server' using hardlinks. This new snapshot is then used across my pipeline, first on the dev boxes, then in QA, then in PRD. As soon as PRD is deployed, I update the upstream repositories on the 'package repository server'; the next time code is committed, the boxes are deployed in the same fashion (dev -> qa -> prd) using a new snapshot that contains newer packages than the previous run.
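The hardlink snapshot step can be sketched in shell. The directory layout and names here are hypothetical (the answer doesn't give them); the point is that `cp -al` duplicates only directory entries, so the snapshot is near-instant and costs almost no disk.

```shell
#!/bin/sh
set -e
# Hypothetical mirror layout: $REPO_ROOT/current holds the live mirror,
# $REPO_ROOT/snapshot/<date> holds frozen views served over HTTP.
REPO_ROOT="$(mktemp -d)"   # stand-in for a real path like /srv/repo
mkdir -p "$REPO_ROOT/current"
echo 'fake package' > "$REPO_ROOT/current/example-1.0.rpm"

# Hardlink copy: the package files themselves are not duplicated.
SNAP="$REPO_ROOT/snapshot/$(date +%Y-%m-%d)"
mkdir -p "$REPO_ROOT/snapshot"
cp -al "$REPO_ROOT/current" "$SNAP"
echo "snapshot created at $SNAP"
```

Each pipeline run then serves one such dated directory, so dev, QA, and PRD all see exactly the same package versions.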

I manage the updates on my boxes by updating the URLs the boxes use to retrieve those packages (yum repo files, for example) to point at something like http://repo/snapshot/date. This can be done with Puppet, Chef, or Ansible, your pick. Then when yum update or apt-get upgrade is run, the boxes are updated to the latest packages available in that snapshot.
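Done with Puppet, pinning clients to a snapshot could look like the following. The repo name, URL scheme, and the way the date is passed in are illustrative assumptions, not details from the answer.

```puppet
# The snapshot date would be promoted through dev -> qa -> prd,
# e.g. set per environment in Hiera or by the ENC.
$snapshot_date = '2014-05-18'

# Every client's yum config points at one frozen snapshot of the mirror.
yumrepo { 'internal-base':
  descr    => 'Internal mirror, frozen snapshot',
  baseurl  => "http://repo/snapshot/${snapshot_date}/base",
  enabled  => 1,
  gpgcheck => 1,
}
```

Bumping `$snapshot_date` is then the only change needed to roll a new package set through the pipeline.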



How do you manage your pipeline? Do you have automated tests, and other manual roles in Jenkins?

Walid Shaari ( 2014-11-24 22:32:22 -0600 )

Yes, I use smoke tests to check that the application does what it should, as well as infrastructure checks using serverspec. These verify that the servers are configured according to expectations, and also check that monitoring and restarting of services work as they should.

Azul ( 2014-11-25 05:14:59 -0600 )

answered 2014-05-20 09:59:34 -0600 by tyknee

Somewhat similar to the other responses (but different enough to warrant its own answer): create your own yum repo (or your Linux flavor's equivalent). Use whatever tools to control what's in your repo (reposync, Pulp, etc.), use Puppet to configure the repo on your clients, and then manage the cron job that runs the update. Assuming you have some sort of Puppet reporting (Dashboard/Foreman/PuppetDB), you can verify the results of the patching and have something close to an audit log. Not perfect, but it can do the job. We use Foreman as an ENC for Puppet, and we also run a script that queries our configuration database to get patch times.
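A rough sketch of that repo-plus-cron pattern in Puppet follows. The repo name, baseurl, and cron timing are my assumptions; the answer only describes the shape of the setup.

```puppet
# Point clients at the controlled internal repo. Single quotes keep
# $releasever/$basearch literal so yum, not Puppet, expands them.
yumrepo { 'internal':
  descr   => 'Internal patch repo',
  baseurl => 'http://repo.example.com/rhel/$releasever/$basearch',
  enabled => 1,
}

# Let a Puppet-managed cron job, not the Puppet run itself,
# apply the updates (here: Saturdays at 03:00).
cron { 'yum_update':
  command => '/usr/bin/yum -y update',
  user    => 'root',
  minute  => 0,
  hour    => 3,
  weekday => 6,
}
```

Puppet reporting then shows whether both resources converged, which is what gives you the rough audit trail the answer mentions.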

I'll admit this is all very RHEL/YUM centric, but I assume you could do something similar with apt?



Seen: 15,938 times

Last updated: May 20 '14