I'm running the Java client on Windows. After catching CONNECTION_LOST (caused by exceeding the idle time, being kicked, or simply calling disconnect), CPU usage goes to 100%.
How can I avoid the excessive CPU usage?
How can I restore the connection correctly?
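As an aside, the usual way to restore a connection without burning CPU is to wait between attempts rather than retrying in a tight loop from the disconnection handler. The sketch below is not the SFS2X API; the class and method names are made up for illustration, and only the backoff arithmetic is the point.

```java
// Sketch of a reconnect policy with exponential backoff. The names here are
// hypothetical (not part of any SFS2X API); the point is that sleeping an
// increasing amount between attempts keeps a disconnect handler from spinning.
public class ReconnectPolicy {
    private final long baseDelayMs;
    private final long maxDelayMs;

    public ReconnectPolicy(long baseDelayMs, long maxDelayMs) {
        this.baseDelayMs = baseDelayMs;
        this.maxDelayMs = maxDelayMs;
    }

    // Delay before the given attempt (0-based): base * 2^attempt, capped.
    public long delayFor(int attempt) {
        long delay = baseDelayMs << Math.min(attempt, 30); // clamp shift to avoid overflow
        return Math.min(delay, maxDelayMs);
    }

    public static void main(String[] args) {
        ReconnectPolicy policy = new ReconnectPolicy(500, 8000);
        for (int attempt = 0; attempt < 5; attempt++) {
            System.out.println("attempt " + attempt + " waits " + policy.delayFor(attempt) + "ms");
            // In a real handler: Thread.sleep(policy.delayFor(attempt)); then retry connect(...)
        }
    }
}
```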
CPU at 100%
Re: CPU at 100%
Are you sure it is the Java process causing the spike?
What code runs in the disconnection handler?
Also, are you using the latest API available? --> http://www.smartfoxserver.com/download/sfs2x#p=updates
Re: CPU at 100%
I'm experiencing this also.
When the CONNECTION_LOST event happens, after a timeout for example, the concurrent garbage collector goes crazy, spewing out messages like the ones below.
Code:
D/dalvikvm(30128): GC_CONCURRENT freed 9776K, 64% free 6551K/17859K, paused 15ms+5ms, total 79ms
D/dalvikvm(30128): WAIT_FOR_CONCURRENT_GC blocked 19ms
D/dalvikvm(30128): GC_CONCURRENT freed 1492K, 64% free 6528K/17859K, paused 17ms+16ms, total 78ms
D/dalvikvm(30128): WAIT_FOR_CONCURRENT_GC blocked 8ms
D/dalvikvm(30128): GC_CONCURRENT freed 1469K, 64% free 6520K/17859K, paused 14ms+3ms, total 70ms
D/dalvikvm(30128): WAIT_FOR_CONCURRENT_GC blocked 13ms
D/dalvikvm(30128): GC_CONCURRENT freed 1452K, 64% free 6527K/17859K, paused 15ms+17ms, total 82ms
and on and on, causing the CPU usage to ramp up.
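One way to tell whether those cycles are reclaiming anything or just churning is to pull the "freed" and "% free" figures out of the log lines and watch whether they change between cycles. A small parser sketch (the log line format is assumed to be the standard dalvik GC_CONCURRENT shape quoted above):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: extract the "freed NK" and "NN% free" figures from dalvik
// GC_CONCURRENT log lines, to see whether heap occupancy actually changes
// between cycles or the collector is just churning at a fixed level.
public class GcLogParser {
    private static final Pattern GC_LINE =
        Pattern.compile("GC_CONCURRENT freed (\\d+)K.*?(\\d+)% free");

    // Returns {freedKb, percentFree}, or null if the line doesn't match.
    public static long[] parse(String line) {
        Matcher m = GC_LINE.matcher(line);
        if (!m.find()) return null;
        return new long[] { Long.parseLong(m.group(1)), Long.parseLong(m.group(2)) };
    }

    public static void main(String[] args) {
        String line = "D/dalvikvm(30128): GC_CONCURRENT freed 1492K, 64% free 6528K/17859K, paused 17ms+16ms, total 78ms";
        long[] stats = parse(line);
        System.out.println("freed " + stats[0] + "K, " + stats[1] + "% free");
    }
}
```

A steady "% free" across hundreds of cycles, as in the log above, suggests the same amount of short-lived garbage is being produced over and over.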
In the client handler there is no processing going on other than forwarding the event to the UI, and nothing is being done there either.
Looking at the threads that are active, one has a huge uptime that continues to climb, which I think corresponds to this garbage collection.
It is named "New I/O client worker #8-2".
A snapshot of the classes and methods listed under the thread is below.
I'm using the latest Android client API from May.
Code:
java.util.concurrent.locks.AbstractQueuedSynchronizer compareAndSetState
java.util.concurrent.locks.ReentrantLock$NonfairSync lock
java.util.concurrent.locks.ReentrantLock lock
java.util.concurrent.ThreadPoolExecutor shutdownNow
org.jboss.netty.util.internal.ExecutorUtil terminate
org.jboss.netty.channel.socket.nio.NioClientSocketChannelFactory releaseExternalResources
org.jboss.netty.bootstrap.Bootstrap releaseExternalResources
sfs2x.client.bitswarm.bbox.BBClient handleConnectionLost
sfs2x.client.bitswarm.bbox.BBClient handleConnectionLost
sfs2x.client.bitswarm.bbox.BBClient onHttpResponse
sfs2x.client.bitswarm.bbox.BBClient access$300
sfs2x.client.bitswarm.bbox.BBClient$HttpResponseHandler messageReceived
org.jboss.netty.channel.SimpleChannelUpstreamHandler handleUpstream
org.jboss.netty.channel.DefaultChannelPipeline sendUpstream
org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext sendUpstream
org.jboss.netty.channel.Channels fireMessageReceived
org.jboss.netty.handler.codec.http.HttpChunkAggregator messageReceived
org.jboss.netty.channel.SimpleChannelUpstreamHandler handleUpstream
org.jboss.netty.channel.DefaultChannelPipeline sendUpstream
org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext sendUpstream
org.jboss.netty.handler.codec.http.HttpContentDecoder messageReceived
org.jboss.netty.channel.SimpleChannelUpstreamHandler handleUpstream
org.jboss.netty.channel.DefaultChannelPipeline sendUpstream
org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext sendUpstream
org.jboss.netty.channel.Channels fireMessageReceived
org.jboss.netty.handler.codec.replay.ReplayingDecoder unfoldAndfireMessageReceived
org.jboss.netty.handler.codec.replay.ReplayingDecoder callDecode
org.jboss.netty.handler.codec.replay.ReplayingDecoder cleanup
org.jboss.netty.handler.codec.replay.ReplayingDecoder channelDisconnected
org.jboss.netty.channel.SimpleChannelUpstreamHandler handleUpstream
org.jboss.netty.handler.codec.http.HttpClientCodec handleUpstream
org.jboss.netty.channel.DefaultChannelPipeline sendUpstream
org.jboss.netty.channel.DefaultChannelPipeline sendUpstream
org.jboss.netty.channel.Channels fireChannelDisconnected
org.jboss.netty.channel.socket.nio.NioWorker close
org.jboss.netty.channel.socket.nio.NioWorker read
org.jboss.netty.channel.socket.nio.NioWorker processSelectedKeys
org.jboss.netty.channel.socket.nio.NioWorker run
org.jboss.netty.util.ThreadRenamingRunnable run
org.jboss.netty.util.internal.IoWorkerRunnable run
java.util.concurrent.ThreadPoolExecutor runWorker
java.util.concurrent.ThreadPoolExecutor$Worker run
java.lang.Thread run
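One plausible reading of the trace above: `releaseExternalResources()` ends up in `ThreadPoolExecutor.shutdownNow` while running on the `NioWorker` thread itself, i.e. the pool is being asked to shut down from a thread it owns, which Netty's documentation warns against. For contrast, here is a sketch of the standard two-phase executor shutdown using only `java.util.concurrent` (nothing SFS2X- or Netty-specific), run from a thread outside the pool:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of the standard two-phase executor shutdown, run from a thread that
// is NOT owned by the pool being stopped. shutdown() stops new submissions,
// awaitTermination() gives running tasks time to finish, shutdownNow()
// interrupts whatever is left.
public class CleanShutdown {
    public static boolean stop(ExecutorService pool, long timeoutSeconds)
            throws InterruptedException {
        pool.shutdown(); // no new tasks accepted
        if (!pool.awaitTermination(timeoutSeconds, TimeUnit.SECONDS)) {
            pool.shutdownNow(); // interrupt straggling tasks
            return pool.awaitTermination(timeoutSeconds, TimeUnit.SECONDS);
        }
        return true;
    }

    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> System.out.println("task ran"));
        System.out.println("terminated cleanly: " + stop(pool, 5));
    }
}
```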
Re: CPU at 100%
Hi,
My previous questions are still valid:
Also, are you using the latest API available? --> http://www.smartfoxserver.com/download/sfs2x#p=updates
The fact that the GC runs is not bad per se. It depends on whether it keeps running indefinitely or if it's just running for a while, removing unused data from memory. Can you explain?
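The "indefinite vs. transient" question can be answered with numbers by sampling the cumulative GC counters over idle windows. The sketch below uses the standard JMX beans of a desktop JVM; this is an assumption about the environment, since Dalvik doesn't expose these beans, so on Android you'd watch logcat instead.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Sketch: sample the JVM's cumulative GC counts via the standard JMX beans
// to distinguish a transient cleanup from a collector that never settles.
// (Desktop-JVM API only; Dalvik does not provide these beans.)
public class GcSampler {
    public static long totalCollections() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long count = gc.getCollectionCount();
            if (count >= 0) total += count; // -1 means "unavailable" for a collector
        }
        return total;
    }

    public static void main(String[] args) throws InterruptedException {
        long before = totalCollections();
        System.gc(); // request a collection so the counter is likely to move
        Thread.sleep(200);
        long delta = totalCollections() - before;
        System.out.println("collections in window: " + delta);
        // A delta that keeps climbing across many idle windows = GC never settles.
    }
}
```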
Re: CPU at 100%
Hi,
The GC behaviour persists for a long time; I always had to kill the app to get it to stop. The thread I referred to above stays alive, using the majority of the CPU time.
I checked the API and I am using the most recent version, and, as I said, the disconnection handler does nothing.
There are no other threads running that are getting nearly as much time, or being allocated time at the same rate, as the one above.
What other info might be helpful?
Re: CPU at 100%
Thanks, I think this is all we need.
I think Thomas Lund, our Java API guy, is already looking into it. We'll get back to you asap.
Stay tuned
Re: CPU at 100%
Thanks Lapo.
Do you guys plan on releasing the source of the client API for Android, or is that something you don't do?
Re: CPU at 100%
Hi,
Can you confirm that this is a bug?
Have you guys been able to reproduce it?
It's an app killer at the moment: connecting to and disconnecting from zones leaves these orphaned threads that, it seems, just can't be GC'ed.
Re: CPU at 100%
Any news?
Re: CPU at 100%
Hi, we apologize, lots of things going on at the same time. The report is being investigated.
We'll get back to you next week.
Re: CPU at 100%
We are working on an update that solves these problems, it's not out yet, but we can provide it as a pre-release while we're adding a couple more things. It already solves the 100% CPU issue and the threads not shutting down properly upon disconnection.
Feel free to drop us an email using the Support > Contact Us link from the website.
Thanks
Re: CPU at 100%
Thanks Lapo, great news!
I sent off the email asking to get access to the release.