
What is the best way to add files from S3? Type/provider, puppet code, or expand file resource?

asked 2015-02-02 05:28:13 -0500

Simple premise, difficult execution to figure out... or at least that's how I feel.

I want to copy files down from S3, and I'd also like to be able to manage properties on those files such as mode, user and group. I've conceived of 3 approaches, and am not sure which is the most reasonable.

  1. Create a "type" in Puppet code and stage it as a component module: use an exec resource to copy the file down from S3, then a file resource to manage the file's properties. The potential problem is reuse of $title, so maybe the file has source => /path/to/s3file, but then I'm stuck with two copies of the file in order to stay idempotent via a creates => on the exec.
  2. Create a type and provider pair in Ruby. This sounded like a good idea until I realized how capable/complex the native file type and posix provider are. I could heavily plagiarize them to replicate the behavior, but it seems unwieldy. Bonus points for a recursive param that chooses between s3 cp and s3 sync.
  3. Extend the source param in Puppet's native file type/provider to understand s3:// URIs. This sounds the most appealing, were it not for having to host an altered copy of a core type; obviously I'd be worried about maintaining that enhancement against upstream changes.
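For reference, here's roughly what I had in mind for option 1, as a defined type (hypothetical names throughout; assumes the awscli is installed at /usr/bin/aws and the node already has S3 credentials, e.g. via an instance profile):

```puppet
# Hypothetical defined type wrapping `aws s3 cp` plus a file resource.
# Using a distinct exec title avoids the $title-reuse problem.
define s3file::fetch (
  String  $source,               # e.g. s3://bucket/key
  String  $path      = $title,   # local destination
  String  $owner     = 'root',
  String  $group     = 'root',
  String  $mode      = '0644',
  Boolean $recursive = false,    # choose `s3 sync` over `s3 cp`
) {
  $subcmd = $recursive ? { true => 'sync', default => 'cp' }

  exec { "s3 fetch ${path}":
    command => "/usr/bin/aws s3 ${subcmd} ${source} ${path}",
    creates => $path,            # idempotent: only runs until the file exists
  }

  file { $path:
    ensure  => present,          # no source/content: metadata only
    owner   => $owner,
    group   => $group,
    mode    => $mode,
    require => Exec["s3 fetch ${path}"],
  }
}
```

Note the weakness I alluded to: creates => only guards on existence, so a changed object in S3 would never be re-fetched without something smarter (e.g. comparing ETags in an unless =>).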

Any thoughts? Is there a better approach? Am I completely missing a nice way to fold existing type/provider data into new type/providers?


1 answer


answered 2015-02-03 09:17:52 -0500

llowder

updated 2015-02-03 10:25:31 -0500

The best thing to do would be to use one of the existing modules, such as this one: https://forge.puppetlabs.com/branan/s3file

That way you don't take on the burden of creating it all over again and maintaining it yourself.


Comments

So, for efficiency reasons, it is desirable to leverage the awscli to retrieve files from S3: in general, if you're pulling data from S3 inside an AWS environment, the s3 cp and s3 sync commands can fetch files by segment in parallel threads.

pwattstbd ( 2015-02-03 09:37:48 -0500 )

Updated the answer to reference an S3-specific module, but the general idea remains the same: use a module from the Forge.

llowder ( 2015-02-03 10:26:14 -0500 )

That module happens to use curl as its download mechanism, so my response remains the same. Really this is more an approach question about how one would best extend Puppet's capabilities, and less one about how to get a file from S3...

pwattstbd ( 2015-02-03 10:44:50 -0500 )

Is it possible to extend a native type/provider? For instance, I have a WIP here: https://github.com/Cinderhaze/puppet-s3 but it would be silly to reimplement all of the file bits. Can I write just the S3-specific parts and 'extend' or inherit from the base file provider?
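Custom providers can name a parent class, so in principle something like the Ruby sketch below would let a custom s3file type reuse the stock posix file provider for mode/owner/group. This is untested and hedged: the cross-type parent reference is the part I'm least sure Puppet supports cleanly, and the s3file type, its :source/:path params, and the :awscli provider name are all made up for illustration.

```ruby
require 'puppet'

# Hypothetical provider for a custom s3file type, inheriting from the
# stock posix file provider so metadata management comes for free.
Puppet::Type.type(:s3file).provide(
  :awscli,
  parent: Puppet::Type.type(:file).provider(:posix)
) do
  desc 'Fetches content from S3 via the awscli; delegates metadata to posix.'

  commands aws: 'aws'   # fails fast if the awscli binary is missing

  def create
    # Pull the object down, then let the parent provider apply
    # mode/owner/group from the resource declaration.
    aws('s3', 'cp', resource[:source], resource[:path])
    super
  end
end
```

If the cross-type parent trick doesn't work, the fallback is the documented same-type pattern (parent: :posix on a provider of your own type) plus copying only the metadata-handling methods you actually need.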

DarylW ( 2016-03-22 18:14:21 -0500 )

