Replace Catalog/Facts Command Submitted to PuppetDB Very Slow

asked 2015-08-18 02:07:00 -0500 by zsl, updated 2015-08-18 08:29:05 -0500

It takes the puppet master over 20 seconds to submit the 'replace facts' and 'replace catalog' commands to PuppetDB, which runs on another server in the same internal private network as the puppet master. I also notice that it creates a new SSL connection every time, for every node.

    ...
Aug 18 14:04:12 ppmaster0 puppet-master[31664]: Did not match path ("/production/catalog/testnode1")
Aug 18 14:04:12 ppmaster0 puppet-master[31664]: Evaluating match for Route /.*/
Aug 18 14:04:12 ppmaster0 puppet-master[31664]: Evicting cache entry for environment 'production'
Aug 18 14:04:12 ppmaster0 puppet-master[31664]: Caching environment 'production' (ttl = 0 sec)
Aug 18 14:04:12 ppmaster0 puppet-master[31664]: TrustedInformation expected a certificate, but none was given.
Aug 18 14:04:12 ppmaster0 puppet-master[31664]: Creating new connection for https://puppetdb:8081

Aug 18 14:04:33 ppmaster0 puppet-master[31664]: 'replace facts' command for testnode1 submitted to PuppetDB with UUID a57e2001-1953-42f9-bef7-4a09857f490a
Aug 18 14:05:33 ppmaster0 puppet-master[31664]: Using cached facts for testnode1
Aug 18 14:05:33 ppmaster0 puppet-master[31664]: Caching node for testnode1
    ...

The network between the puppet master and PuppetDB looks fine:

root@ppmaster0:/var/log/puppetserver# ping puppetdb
PING puppetdb (10.0.0.21) 56(84) bytes of data.
64 bytes from puppetdb (10.0.0.21): icmp_seq=1 ttl=64 time=0.302 ms
64 bytes from puppetdb (10.0.0.21): icmp_seq=2 ttl=64 time=0.241 ms
64 bytes from puppetdb (10.0.0.21): icmp_seq=3 ttl=64 time=0.216 ms
64 bytes from puppetdb (10.0.0.21): icmp_seq=4 ttl=64 time=0.254 ms
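
(An ICMP round trip only exercises the raw network; the HTTPS path to PuppetDB can be timed directly as well. A minimal sketch with curl, assuming the default Puppet 3.x ssldir and a master certname of ppmaster0 (substitute your own certname); the /v3/version path is only there so the server has something to answer and may differ by PuppetDB version:)

    curl -sS -o /dev/null \
      --cacert /var/lib/puppet/ssl/certs/ca.pem \
      --cert   /var/lib/puppet/ssl/certs/ppmaster0.pem \
      --key    /var/lib/puppet/ssl/private_keys/ppmaster0.pem \
      -w 'tcp connect: %{time_connect}s  tls done: %{time_appconnect}s  total: %{time_total}s\n' \
      https://puppetdb:8081/v3/version

A large gap between time_appconnect and time_connect points at the TLS handshake; a large gap between time_total and time_appconnect points at PuppetDB's handling of the request.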

Is this normal? How can I optimize this, please?

Environment:

  1. Puppet Master 3.8.2
  2. PuppetDB 2.3.6
  3. Apache 2.7.4
  4. puppetmaster-passenger 3.8.2

Update: profiling results are below. The problem is that the command submitted to PuppetDB takes too long. The PuppetDB server was installed via the official puppetdb module, with PostgreSQL as the backend. It's a VM with 8 CPUs and 16 GB RAM, which I think is enough for the 200+ nodes in my environment. In fact the CPU is more than 90% idle most of the time.

2015-08-18 20:52:35 +0800 Puppet (debug): PROFILE [17575380] 1.5.2.2 PuppetDB: Submit command HTTP post: took 10.1434 seconds
2015-08-18 20:52:35 +0800 Puppet (debug): PROFILE [17575380] 1.5.2 PuppetDB: Submitted command 'replace catalog' version '5': took 10.1664 seconds
2015-08-18 20:52:35 +0800 Puppet (debug): PROFILE [17575380] 1.5 PuppetDB: catalog#save: took 10.2486 seconds
2015-08-18 20:52:35 +0800 Puppet (debug): PROFILE [17575380] 1.6 Filtered result for catalog testnode1: took 0.0436 seconds
2015-08-18 20:52:35 +0800 Puppet (debug): PROFILE [17575380] 1.7 Rendered result in Puppet::Network::Format[pson]: took 0.0527 seconds
2015-08-18 20:52:35 +0800 Puppet (debug): PROFILE [17575380] 1.8 Sent response: took 0.0001 seconds
2015-08-18 20:52:35 +0800 Puppet (debug): PROFILE [17575380] 1 Processed request POST /production/catalog/testnode1: took 24.8045 seconds

Comments

It's not normal for submitting a command to take 20 seconds on its own. How did you prove this was SSL specifically? Commands are asynchronous, so it might be that there is a backlog of commands to be processed, or that processing itself is slow. Enabling master profiling is one way to prove or disprove this.

ken ( 2015-08-18 05:57:35 -0500 )
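
(One quick way to see whether commands are piling up, as ken suggests, is PuppetDB's performance dashboard. A minimal reachability check, assuming the default non-SSL port 8080 and that it is not bound to localhost only:)

    # If this returns 200, open the URL in a browser and watch the command
    # queue depth while agents check in.
    curl -s -o /dev/null -w '%{http_code}\n' http://puppetdb:8080/dashboard/index.html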

See here: https://puppetlabs.com/blog/tune-puppet-performance-profiler for details on how to enable profiling in the master.

ken ( 2015-08-18 05:58:52 -0500 )
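
(As a sketch of that suggestion: profiling can be requested from a single test agent, and the timings show up in the master's debug log. The log location below is an assumption; this setup appears to log via syslog.)

    # Request one profiled run from an agent (Puppet >= 3.2)
    puppet agent --test --profile

    # Then look for PROFILE lines wherever the master logs
    grep PROFILE /var/log/syslog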

@ken Thanks for your comments. Yes, I noticed later that it was the replace command itself that takes too long to complete. I'll post the profiling results.

zsl ( 2015-08-18 07:55:57 -0500 )

@ken I updated the question. Could you please take a look? Where can I dig further into the slow command processing? Thanks!

zsl ( 2015-08-18 08:14:54 -0500 )

1 Answer


answered 2015-08-18 09:14:25 -0500 by zsl, updated 2015-08-18 10:49:43 -0500

OK, it turns out the root cause was the DNS settings on my PuppetDB server... How could I ever have imagined that?

The DNS server configured in /etc/resolv.conf on my PuppetDB host is not stable. I guess that every time the puppet master tries to establish an SSL connection, the PuppetDB server attempts a reverse DNS lookup on the incoming IP address, which makes the connections slow.
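
(That theory is easy to test from the PuppetDB host itself. A quick sketch; the master IP 10.0.0.20 below is an assumption, use the real one:)

    # Time a reverse lookup of the puppet master's IP through the system resolver
    time getent hosts 10.0.0.20

    # Or query the configured nameserver directly, with a short timeout
    time dig +time=2 -x 10.0.0.20

If these hang for several seconds before returning or timing out, the resolver in /etc/resolv.conf is the bottleneck.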

I deleted the DNS setting. The processing is very fast now.

2015-08-18 21:54:10 +0800 Puppet (debug): PROFILE [22120800] 1.4.2.2 PuppetDB: Submit command HTTP post: took 0.1160 seconds
2015-08-18 21:54:10 +0800 Puppet (debug): PROFILE [22120800] 1.4.2 PuppetDB: Submitted command 'replace catalog' version '5': took 0.1219 seconds
2015-08-18 21:54:10 +0800 Puppet (debug): PROFILE [22120800] 1.4 PuppetDB: catalog#save: took 0.1336 seconds
2015-08-18 21:54:10 +0800 Puppet (debug): PROFILE [22120800] 1.5 Filtered result for catalog testnode1: took 0.0478 seconds
2015-08-18 21:54:10 +0800 Puppet (debug): PROFILE [22120800] 1.6 Rendered result in Puppet::Network::Format[pson]: took 0.0056 seconds
2015-08-18 21:54:10 +0800 Puppet (debug): PROFILE [22120800] 1.7 Sent response: took 0.0001 seconds
2015-08-18 21:54:10 +0800 Puppet (debug): PROFILE [22120800] 1 Processed request POST /devel/catalog/testnode1: took 0.6394 seconds
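
(A follow-up thought, with assumed names and IP: instead of removing DNS from the PuppetDB host entirely, the puppet master can be pinned in /etc/hosts there, so lookups of it never depend on the flaky resolver.)

    # On the PuppetDB host; the IP and hostnames below are assumptions
    echo '10.0.0.20  ppmaster0.example.com ppmaster0' >> /etc/hosts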

Comments

Hehehe ... nice one :-). I guess it would be great to see DNS resolution profiled separately from submission somehow; it would probably make people more 'aware' of this kind of thing.

ken ( 2015-08-18 10:17:26 -0500 )
