
Easy? Connection Pooling in PuppetDB

asked 2018-04-22 22:51:07 -0500 by Illydth

Product: Community Puppet 4, PuppetDB 4.4.0-1 (RPM installation from the puppetlabs-pc1 repository).

The long and short of it: PuppetDB is flooding my Postgres server with connections. I've raised the connection limit to 500 and it's still exhausting all of them.

Is this governed by a connection pool in PuppetDB's settings? I know about the following:

# Database connection info
maximum-pool-size = 50   # maximum number of open connections in this pool
conn-max-age = 5         # minutes an idle connection may sit in the pool before being closed
conn-lifetime = 30       # maximum lifetime, in minutes, of any pooled connection

But I don't understand how to calculate the proper number of connections a PuppetDB/Postgres pairing should be set to for PuppetDB to function properly. I get that it "depends on the number of nodes," but if someone could give me a rule of thumb for calculating it, I would appreciate it.
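
As a rough illustration of the arithmetic involved (the figures below are illustrative assumptions, not official guidance), what a pool has to absorb is concurrent activity, which is driven by check-in rate rather than raw node count:

# Back-of-the-envelope sizing (illustrative assumptions only):
#   600 nodes / 30-minute run interval  =>  ~20 check-ins per minute
#   each check-in produces a few short-lived PuppetDB commands and queries,
#   so concurrency stays low, and a small pool (single digits for writes,
#   low tens for reads) is typically sufficient at this scale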

--Doug

Longer: I have several Puppet masters on the network, each running PuppetDB and sending its writes to a single centralized Postgres database. (I'm happy to talk architecture if this is a bad way of doing it, but I'd prefer to get everything working FIRST before discussing why it isn't the right way.)

Each master has a pool size set in its local config. Some PuppetDB instances will be handling hundreds of Puppet clients, some tens of clients (depending on the master's location).

So one server may be set with a pool size of 50, another with a pool size of 10.

First question: is there a good rule of thumb for how many connections to allow in the pool, based on the number of clients (each checking in at the default 30-minute interval) that will connect to that master?

Second, are PuppetDB's pooled connections one-to-one with Postgres connections? I.e., if my maximum-pool-size is 10 on each of 8 Puppet masters, should I only need 80 Postgres connections for Puppet, or is there some other factor?
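
One factor worth noting here (per the PuppetDB documentation): each PuppetDB instance actually maintains two pools, a write pool configured in [database] and a read pool configured in [read-database], with reads falling back to the [database] settings when the read section is omitted. So the per-master footprint is the sum of both pools. A minimal sketch with placeholder numbers, not recommendations:

[database]
# write pool: used when storing catalogs, facts, and reports
maximum-pool-size = 10

[read-database]
# read pool: used for queries; if this section is omitted entirely,
# reads are served through the [database] connection settings
maximum-pool-size = 10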

HELP! This is driving me nuts. PuppetDB check-ins are coming through, but I'm also having regular problems on the Postgres side (out of connections, only admin users can log in, etc.).

Any advice, information, or useful words would be most appreciated.


1 Answer


answered 2018-04-25 15:34:20 -0500 by reidmv

This is not a rule of thumb, but it is a configuration used successfully on a very large Puppet installation.

The architecture in use consists of a primary Puppet master and more than ten compile masters. Each compile master supports more than 2,000 nodes and runs both puppetserver and a co-located PuppetDB instance.

These small, co-located PuppetDB instances are each configured with the following:

puppet_enterprise::puppetdb::command_processing_threads: 2
puppet_enterprise::puppetdb::write_maximum_pool_size: 4
puppet_enterprise::puppetdb::read_maximum_pool_size: 10

This translates to:

[command-processing]
threads = 2

[database]
# write pool
maximum-pool-size = 4

[read-database]
# read pool
maximum-pool-size = 10

That is, one (1) compile master supporting ~2,000 nodes consumes at most ~14 PostgreSQL connections (4 from the write pool plus 10 from the read pool).
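
Extending that arithmetic to the Postgres side gives a rough capacity budget. A sketch assuming ten identical compile masters (adjust the counts to your fleet; max_connections and superuser_reserved_connections are standard postgresql.conf settings, not PuppetDB ones):

# postgresql.conf -- hypothetical sizing for 10 compile masters,
# each consuming up to 14 connections (4 write + 10 read) as above:
#   10 x 14 = 140 PuppetDB connections, plus headroom
max_connections = 160
superuser_reserved_connections = 5   # reserve slots so admins can still log in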

These numbers could probably be tuned down even further if needed. One way to explore that is to watch PuppetDB's metrics, paying attention to whether the command queue stays small or grows over time. The puppetlabs/puppet_metrics_collector module can help collect this information, and more, all of which can be used to observe and inform tuning of the various Puppet services.
