puppetserver.log exhausts the memory with a repeated error

asked 2015-01-28 08:10:08 -0600 by miteshdesai1989

updated 2015-01-28 17:30:12 -0600 by Stefan

Hi Team,

There seems to be some problem with puppetserver.log, as it is exhausting the memory of the host it is on with a repeated error: Too many open files
        at Method) ~[na:1.7.0_71-icedtea]
        at ~[na:1.7.0_71-icedtea]
        at org.eclipse.jetty.server.ServerConnector.accept( ~[puppet-server-release.jar:na]
        at org.eclipse.jetty.server.AbstractConnector$ ~[puppet-server-release.jar:na]
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob( [puppet-server-release.jar:na]
        at org.eclipse.jetty.util.thread.QueuedThreadPool$ [puppet-server-release.jar:na]
        at [na:1.7.0_71-icedtea]
2015-01-28 04:47:14,909 WARN  [o.e.j.s.ServerConnector]

Also, there are issues with the MCollective and ActiveMQ components; excerpts of the repeated log entries are:

MCollective Log:

E, [2015-01-28T02:39:18.650868 #3444] ERROR -- : activemq.rb:148:in `on_ssl_connectfail' SSL session creation with stomp+ssl://mcollective@hostname:61613 failed: Connection refused - connect(2)
I, [2015-01-28T02:39:18.650988 #3444]  INFO -- : activemq.rb:128:in `on_connectfail' TCP Connection to stomp+ssl://mcollective@hostname:61613 failed on attempt 2
E, [2015-01-28T02:46:49.240851 #3444] ERROR -- : activemq.rb:153:in `on_hbread_fail' Heartbeat read failed from 'stomp+ssl://mcollective@hostname:61613': {"ticker_interval"=>119.5, "read_fail_count"=>0, "lock_fail"=>true, "lock_fail_count"=>1}

ActiveMQ log:

2014-12-26 02:56:30,889 | WARN  | Transport Connection to: tcp://IP:50795 failed: Received fatal alert: unknown_ca | | ActiveMQ Transport: ssl:///IP:50795

Please assist in finding a solution to this issue.



Are you using the latest Puppet Server release for your master - 1.0.2 for Open Source or 3.7.1 for PE? A number of file descriptor leak fixes went into the more recent releases. One trigger for the leak on older builds was Puppet Server being unable to connect to PuppetDB to send reports.

camlow325 ( 2015-01-29 10:48:12 -0600 )

1 Answer


answered 2015-01-28 17:39:04 -0600 by Stefan

"Too many open files" does not point to high memory consumption; it only means the process has exceeded its limit on open file descriptors. Since this limit also covers network connections, there might be a connection leak. Can you please check the following:

# strings /proc/<PID_OF_PUPPETSERVER_PROCESS>/limits
# ls /proc/<PID_OF_PUPPETSERVER_PROCESS>/fd/ | wc -l

If both values are fairly close, you may be able to check with lsof -p <PID_OF_PUPPETSERVER_PROCESS> and netstat what actually makes you hit that limit.

If you share the output of the mentioned commands we might be able to help, too.
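The two checks above can be combined into a small script. This is only a hedged sketch: the PID argument handling, the 10% warning threshold, and running it against a puppetserver process are illustrative assumptions, not part of the original answer.

```shell
#!/bin/sh
# Sketch: compare a process's open-descriptor count against its soft
# "Max open files" limit. Defaults to this shell's own PID ($$) so it
# can be run anywhere; pass the puppetserver PID as the first argument.
PID=${1:-$$}

# Soft limit (4th field of the "Max open files" row in /proc/<pid>/limits)
soft_limit=$(awk '/Max open files/ {print $4}' "/proc/$PID/limits")

# Current descriptor count (plain ls, so no "total" header inflates it)
open_fds=$(ls "/proc/$PID/fd" | wc -l)

echo "open files: $open_fds of $soft_limit"

# Flag when usage is within 10% of the limit (illustrative threshold)
if [ "$open_fds" -ge $((soft_limit * 9 / 10)) ]; then
    echo "close to the limit - inspect with: lsof -p $PID"
fi
```

If the two numbers are close, `lsof -p <PID>` will then show whether regular files or sockets dominate the open descriptors.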

