
How to atomically update Puppet configuration?

asked 2016-02-01 17:40:14 -0500

Tim Landscheidt

I have a puppetmaster (3.4.3) in a development environment fetching its configuration from a clone of a Git repository. Every ten minutes, the clone is updated by (essentially) git pull.

From time to time, while Git is doing its thing, the directory does not contain a valid Puppet configuration because files are replaced, swapped around, etc. If a Puppet client connects to the master during that time frame, the results are undefined; for example, manifests or files cannot be found. (In theory, the worst case would probably be the puppetmaster not seeing, and thus not evaluating, a Hiera file and using class defaults instead.) If there are no further updates from Git, the puppetmaster will find a valid Puppet configuration again, so the problems are transient.

So I want to "lock" the puppetmaster out of compiling a catalog while a Git update is in progress, so that updates are atomic. I have read "Git Workflow and Puppet Environments", but for existing branches that does (essentially) a git pull as well. I do not want to create a new environment for each new commit, because that's probably more work than the transient problems are worth, and I'd like to avoid shutting down the puppetmaster before a Git update and restarting it afterwards, because that feels like overkill as well.

Is there an easy way to atomically update a Puppet configuration?


4 Answers


answered 2016-03-28 14:31:30 -0500

cprice404

For details on how we handle this in Puppet Enterprise, see the blog post File sync: more predictable Puppet masters, at scale.

I also wrote a blog post that touches on some ideas of how we might make this kind of functionality available in OSS: Ready, set, deploy… your Puppet code!. If you're interested in seeing something like this in OSS, feel free to vote for or add comments to this Jira ticket:

We're in the early stages of thinking about the best UX for this, but if folks were to weigh in on the ticket, that'd help our product team gauge how much interest there was in the community.


answered 2016-02-06 13:00:22 -0500

IanD

I'm assuming it'd be better for the catalog to fail while you're in an inconsistent state than for an incorrect catalog to be created? If that's okay, then why not use a lockfile on the puppetmaster side? So your git pull would actually be:

  1. touch lockfile
  2. git pull
  3. rm lockfile

Then just create a class that all nodes have, roughly:

  if file_exists('/my/lockfile') {
    fail('Failing, catalog potentially in inconsistent state.')
  }

Since that creates a failure window every ~10 minutes (even with no changes to pull), the workflow should change to check whether your local branch is behind its remote before doing the lockfile/pull.
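Put together, the cron job might look something like this sketch (the checkout and lockfile paths are assumptions; the behind-the-remote check keeps an unchanged repo from ever opening a failure window):

```shell
#!/bin/sh
# Update the Puppet code checkout, holding the lockfile only while the
# working tree is actually being rewritten. $1 = checkout, $2 = lockfile.
update_repo() {
    repo=$1 lockfile=$2
    git -C "$repo" fetch origin || return 1

    # Skip the lockfile entirely when there is nothing to pull.
    if [ "$(git -C "$repo" rev-parse HEAD)" != \
         "$(git -C "$repo" rev-parse '@{u}')" ]; then
        touch "$lockfile"
        git -C "$repo" merge --ff-only '@{u}'
        rm -f "$lockfile"
    fi
}
```

Note the window still exists while the merge runs; the lockfile only makes the failure explicit instead of undefined.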



Actually, I'm not sure without testing that file_exists works on the master side. You could create the lockfile in a custom module's files directory and use the `puppet:///...` source syntax to check for it. Again, better to fail and make no changes than to potentially make incorrect ones.

IanD ( 2016-02-06 13:05:03 -0500 )

answered 2016-02-07 15:56:12 -0500

DarylW

updated 2016-02-07 16:00:17 -0500

Another potential solution: make your environment folder a symlink to the repo. When you are doing your pull, delete the link; when you have finished, recreate it.

That would cause compilation to fail with 'unable to find environment yourenv', but it would at least be consistent.

  • EDIT - For a solution that prints a useful message: you could switch the symlink to an environment containing a single manifest with fail('Puppet Master Environment currently being updated'), and switch to and from that atomically.

Another option is to have a pair of cloned repositories: update the one you aren't linked to, then flip the link once the update has finished. Just make sure your git pull and link flip-flop correctly know which state they're in and which state to switch to.
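The flip itself can be made atomic by building the new symlink aside and renaming it over the old one, which is a single rename(2). A minimal sketch (`mv -T` is GNU coreutils; the paths in any real deployment are up to you):

```shell
#!/bin/sh
# atomic_flip TARGET LINK: repoint LINK at TARGET in one rename(2),
# so the environment directory is never missing or half-updated.
atomic_flip() {
    target=$1 link=$2
    ln -sfn "$target" "$link.tmp"   # build the new symlink aside
    mv -T "$link.tmp" "$link"       # atomically replace the old one
}
```

Typical use: pull into whichever clone the link does not currently point to (found via readlink), then call atomic_flip with that clone as the target.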



That still won't work, because the Puppet master could be halfway through reading the files for a catalog compile when the environment directory starts updating. For this to work, the Puppet master needs to tell the code update process to wait until it has finished reading.

Alex Harvey ( 2016-02-07 19:48:57 -0500 )

Would it be OK if you had a pair of repos, updated one, and then switched to the other, or do you think it holds onto the actual file handles? The only other option: service puppetmaster stop, update, service puppetmaster start.

DarylW ( 2016-02-07 21:29:40 -0500 )

Well, when compiling a catalog, the puppet master reads in hundreds of files in its modulepath and manifestdir. To avoid a race condition, BOTH the code update and puppet master processes would need to share some kind of lock on the code directories.

Alex Harvey ( 2016-02-07 23:19:30 -0500 )
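The shared lock described here can be sketched with flock(1): the updater takes an exclusive lock, and each reader takes a shared lock, so readers never overlap an update but can overlap each other. This is purely hypothetical; stock Puppet offers no hook to wrap catalog compiles this way, and the lock path is an assumption:

```shell
#!/bin/sh
# Coordinate the code updater and code readers via flock(1).
# -x: exclusive lock for the updater, waits for all readers to finish.
# -s: shared lock for a reader, waits only while an update is in flight.
LOCK="${LOCK:-/var/lock/puppet-code.lock}"

with_update_lock() { flock -x "$LOCK" -c "$1"; }  # e.g. around the git pull
with_read_lock()   { flock -s "$LOCK" -c "$1"; }  # e.g. around a compile
```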

answered 2016-02-05 06:00:47 -0500

updated 2016-03-27 06:54:48 -0500

update #2

I can see that this problem is solved in Puppet Enterprise with Code Manager.

When you push code to your control repo, a webhook triggers Code Manager to pull that new code into a staging code directory (/etc/puppetlabs/code-staging). File sync then picks up those changes, pauses Puppet Server to avoid conflicts, and then syncs the new code to the live code directories on your Puppet masters.

That doesn't much help us open source users of course. :)

update #1

As pointed out below, this is incorrect. I thought of a Git checkout as moving a single reference, and therefore an atomic operation; but of course, under the hood, it's moving files around.

That post is from 2011, which is like the 1970s in Puppet years. :)

You should be able to make the updates atomic by using something like a Gitflow workflow, where you add tags to your releases before putting them on Puppet masters.

So instead of just a git pull:

  $ git branch
  * release_1.0
  $ git fetch --all
  $ git checkout release_2.0

And the git checkout operation is atomic.



No, the checkout is not atomic: when Git is working through ~4000 files, sometimes with rebases as part of the pull, there will always be temporary inconsistencies. But even if the checkout were atomic, `puppetmaster` could have read half the files before the checkout and half after it.

Tim Landscheidt ( 2016-02-06 00:36:19 -0500 )

Yeah, sorry. I see your point. I'll be interested to learn the answer to this one. Strangely enough, I've never actually seen this issue.

Alex Harvey ( 2016-02-06 03:34:05 -0500 )
