We use an approach similar to the one mentioned here; I have used it with Puppet, Ansible, and Chef.
You deploy one server that holds a mirror of the upstream package repositories you use (yum, apt, Python modules, rubygems). This server runs a web server that serves those packages from a snapshot (created daily or ad hoc).
The approach I use in my CI/CD pipeline: when I commit code, my Jenkins box creates a new snapshot on the 'package repository server' using hardlinks. That snapshot is then used across the whole pipeline: first on the dev boxes, then in QA, then in PRD.
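The hardlink snapshot step can be sketched in a few lines of shell. The paths and package names below are illustrative assumptions, not the actual layout; `cp -al` is the key part, since it copies the directory tree but hardlinks the files, so each snapshot costs almost no extra disk space:

```shell
#!/bin/sh
set -e

# Illustrative layout: /tmp/repo-demo stands in for the mirror root.
REPO=/tmp/repo-demo
rm -rf "$REPO"
mkdir -p "$REPO/upstream"
echo "pkg-1.0" > "$REPO/upstream/pkg-1.0.rpm"   # fake package for the demo

SNAP="$REPO/snapshots/$(date +%Y-%m-%d)"
mkdir -p "$(dirname "$SNAP")"

# cp -al: archive copy, but hardlink files instead of duplicating data.
cp -al "$REPO/upstream" "$SNAP"

# Both names now point at the same inode; link count should be 2.
stat -c %h "$SNAP/pkg-1.0.rpm"
```

Because snapshots share inodes with the mirror, you can keep many dated snapshots around and delete old ones independently; a file's data is only freed once its last link is removed.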
As soon as PRD is deployed, I update the upstream mirrors on the 'package repository server'; the next time code is committed, the boxes are deployed in the same fashion (dev -> qa -> prd) using a new snapshot that contains newer packages than the previous run.
I manage updates on my boxes by updating the URLs they use to retrieve packages (yum repo files, for example) to point at something like http://repo/snapshot/date.
This can be done with Puppet, Chef, or Ansible, your pick.
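As a rough sketch, the yum repo file that a config-management tool would template might look like the following. The repo id, server URL, and snapshot date are all assumptions; the sketch writes to /tmp so it can run unprivileged, whereas the real file would live in /etc/yum.repos.d/:

```shell
#!/bin/sh
set -e

# Assumed snapshot date; in practice the CI pipeline or the config-management
# tool would substitute the snapshot it wants this box pinned to.
SNAPSHOT=2024-01-15

# Real target would be /etc/yum.repos.d/internal.repo (needs root).
cat > /tmp/internal.repo <<EOF
[internal]
name=Internal mirror, snapshot $SNAPSHOT
baseurl=http://repo/snapshot/$SNAPSHOT/
gpgcheck=1
enabled=1
EOF

grep baseurl /tmp/internal.repo
```

Rolling a box forward to a newer snapshot is then just rewriting this one file and re-running the package manager; rolling back is rewriting it with an older date.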
Then when yum update or apt-get upgrade is run, the boxes pick up the latest packages available in that snapshot.