How do I use puppet apply for a whole code tree?

asked 2018-02-27 14:54:24 -0500 by vrmerlin

I'm trying to figure out the best "development loop" for developing Puppet code by my IT team. What I'd like is to be able to have a code tree in a git repository, and then clone it under my user home directory. Then, I'd be able to run "puppet apply" on my local Linux machine, and have it make changes to my local development system. When I like the changes I've made, I'd commit the code, and the production systems would update. The code tree in my git repo would basically be what would go into /etc/puppetlabs/code in our production systems. In the actual production systems, I'd use r10k to stay in sync with the git repo.
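For reference, the r10k configuration I have in mind on the production side would look something like this (the repo URL and cache path are just illustrative):

```yaml
# /etc/puppetlabs/r10k/r10k.yaml -- illustrative values
cachedir: /var/cache/r10k
sources:
  main:
    # control repo whose branches become Puppet environments
    remote: git@git.example.com:it/puppet-control.git
    basedir: /etc/puppetlabs/code/environments
```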

First, am I taking the right approach?

Second, what parameters do I pass to "puppet apply" to reference an arbitrary code directory as if it were /etc/puppetlabs/code? That way I'd be able to specify the environment, see all the appropriate modules, etc.
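Something along these lines is what I'm imagining (paths are just examples of where I'd clone the repo):

```shell
# Point puppet's codedir at the local clone instead of /etc/puppetlabs/code,
# then pick an environment out of it, just as the agent would.
puppet apply --codedir ~/puppet-code --environment development \
    ~/puppet-code/environments/development/manifests/site.pp
```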

Thanks, John


3 Answers


answered 2018-03-04 02:30:02 -0500 by Hypnoz

I have been working on developing this same testing strategy. My goal was to start a Docker container with my local Puppet git repo mounted inside it, run puppet apply, and have the container build exactly as if it were a real server. Then I can modify the git code and test again, and only commit and push once I've worked out the bugs inside the Docker containers.

In all the scripts and commands below I replaced my home directory path with ~; substitute your actual home directory path if ~ doesn't expand in your context.

First I started by creating a script that will build the machine quickly for me:

#!/bin/bash
# Prompt for the container parameters.
read -rp "container name: " name
read -rp "hostname: " hostname
read -rp "centos version (5,6,7): " version

# Each "mount" file holds a single line of docker -v options.
globaloptions=$(head -n1 ~/git/docker/puppetdocker/files/global/mount)
options=$(head -n1 ~/git/docker/puppetdocker/files/centos${version}/mount)
read -rp "options: " extraoptions

# The option variables are deliberately left unquoted so each -v flag
# splits into its own argument.
docker run -itd $globaloptions $options $extraoptions --name "$name" --hostname "$hostname" centos:centos${version} /bin/bash

# Pass the host terminal size into the container so vi/less render correctly.
docker exec -it "$name" /bin/bash -c "export COLUMNS=$(tput cols); export LINES=$(tput lines); exec bash"

In case you're wondering about the "tput cols, tput lines" part: docker sets the terminal rows/columns oddly, so programs like vi or less won't render correctly inside the container. This is a hack to pass the correct values in.

I also created a script that will delete all my docker containers quickly:

#!/bin/bash
# Stop and remove every container whose "docker ps -a" line mentions puppet.
ids=$(docker ps -a | grep puppet | grep -v CONTAINER | awk '{print $1}')
docker stop $ids
docker rm $ids

In the "run" script you can see I'm reading two "mount" files. A generic "global/mount" and a mount specific to the centos version. Both of these just mount useful directories from my local laptop into the docker container to save me setup time.

.../global/mount

-v ~/git/docker/puppetdocker/files/global/yum.repos.d:/etc/yum.repos.d -v ~/git/puppet:/root/puppet -v ~/git/docker/puppetdocker/files/global/home/.bashrc:/root/.bashrc -v ~/git/docker/puppetdocker/files/global/home/.bash_aliases:/root/.bash_aliases -v ~/git/docker/puppetdocker/files/global/home/.vimrc:/root/.vimrc -v ~/git/docker/puppetdocker/files/global/hiera.yaml:/root/hiera.yaml

.../files/centos7/mount

-v ~/git/docker/puppetdocker/files/centos7/rpm-gpg:/etc/pki/rpm-gpg

You can see I needed the custom centos mount for the gpg keys used for those repos. I guess if they had unique names you could put all the gpg keys for every OS into one folder and add it to the global mount file.

So you can see I made the following directories with files that are useful inside the docker container:

~/git/docker/puppetdocker$ find . -type d
./files
./files/centos5
./files/centos5/rpm-gpg
./files/centos6
./files/centos6/rpm-gpg
./files/centos7
./files/centos7/rpm-gpg
./files/global
./files/global/home
./files/global/ssh
./files/global ...

Comments

One thing to look out for: starting services inside docker containers usually requires special handling, particularly on systemd-based systems (running in privileged mode, mounting different /dev and /proc locations in the container, etc.).
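A minimal sketch of the usual systemd workaround — this assumes a systemd-enabled base image (centos/systemd is one commonly used; check what's current for your OS):

```shell
# Run init as PID 1 with the cgroup filesystem mounted read-only,
# so systemd can manage services inside the container.
docker run -d --name puppet-c7 --privileged \
    -v /sys/fs/cgroup:/sys/fs/cgroup:ro \
    centos/systemd /usr/sbin/init
```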

DarylW ( 2018-03-05 09:33:26 -0500 )

answered 2018-02-28 12:19:56 -0500 by vrmerlin

Ok, thanks for the suggestions!


answered 2018-02-28 11:22:19 -0500 by DarylW

The typical test loop for Puppet code is to develop locally using rspec-puppet, writing unit tests to verify catalog compilation. Then you push your code out to an integration-test environment and run some (hopefully automated) tests against the nodes there.
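With the PDK installed, that unit-test loop looks roughly like this (module and class names are placeholders):

```shell
pdk new module mymodule   # scaffold a module with testing already wired up
cd mymodule
pdk new class myclass     # generates the class plus a matching rspec-puppet spec
pdk test unit             # compiles the catalog and runs the unit tests
```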

All of the tools you need can be found in the Puppet Development Kit ( https://puppet.com/download-puppet-de... )

What you are describing is more of an 'integration test', and I used to work with a similar flow, except I never did it on my local workstation. I would create a VM (EC2 instance, Vagrant/VirtualBox) or a docker host and test my changes there. In those cases, I still pushed my changes out to a specific environment (based on my username) on my puppet master, and ran puppet agent -t --environment myusername. This let me work on the same base image as we use in production, and when I was done with the initial incremental prototyping, I would blow the VM away and try again on a fresh one, to make sure no manual changes were lying around on that node allowing it to work.
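Sketching that per-user-environment loop (the branch name and commands are illustrative, and the deploy step assumes r10k is configured on the master to map branches to environments):

```shell
# on your workstation
git checkout -b myusername
git commit -am "experiment with a change"
git push origin myusername

# on the puppet master (or triggered by a webhook)
r10k deploy environment myusername --puppetfile

# on the throwaway test VM
puppet agent -t --environment myusername
```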

For a proper integration-test flow, the recommended approach for Puppet is to use beaker and beaker-rspec. Beaker is the tool that spins up Docker containers / VMs, and beaker-rspec lets you execute tests against them: you would write rspec tests to verify that your nodes end up in the correct state.

For an alternative integration-test environment, there is an example of using Test Kitchen (from Chef) to deploy your puppet code, via https://github.com/scoopex/puppet-kit... . In that case, you would write serverspec or inspec tests to verify that your system ends up in the desired state.


Stats

Seen: 62 times

Last updated: Mar 04