JBoss thread leaking JMS problem

(Non-technical readers of my blog, you’ll want to glaze past this entry. This is a case where I encountered a problem, scoured the net for any clues and, finding none, slogged on forever until finally solving it. I’m writing this entry because I’ve found that the last couple of times I’ve solved an obscure computer problem, if I posted my solution in my blog then the entry would accumulate lots of comments saying “Thanks so much. I was really stuck on this.”)

So, to state the problem we encountered…

We’ve got a JBoss application that makes pretty heavy use of MDBs (message-driven beans) to manage tasks. Our JBoss server talks to another proprietary server (mostly via HTTP), and the status of many tasks depends on whether that other server reports it is ready or finished or whatever.

Essentially, we have a fair number of MDBs that kick off other MDBs using JMS messages.
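
To make that concrete, here’s a stripped-down sketch of the pattern (not our actual code): an MDB that, when it receives a task message, kicks off the next task by sending a JMS message to another queue. The bean class, the "queue/NextTaskQueue" destination and the "ConnectionFactory" lookup are all placeholder names, and notice there are no close() calls; this is the leaky shape we’ll come back to below.

```java
import javax.ejb.MessageDrivenBean;
import javax.ejb.MessageDrivenContext;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.naming.InitialContext;

// Placeholder MDB: on receiving a task message, it kicks off the next task
// by sending a JMS message to another queue. JNDI names are made up.
public class TaskDispatcherBean implements MessageDrivenBean, MessageListener {

    private MessageDrivenContext ctx;

    public void setMessageDrivenContext(MessageDrivenContext ctx) { this.ctx = ctx; }
    public void ejbCreate() {}
    public void ejbRemove() {}

    public void onMessage(Message msg) {
        try {
            InitialContext ic = new InitialContext();
            QueueConnectionFactory factory =
                    (QueueConnectionFactory) ic.lookup("ConnectionFactory");
            Queue nextQueue = (Queue) ic.lookup("queue/NextTaskQueue");

            QueueConnection conn = factory.createQueueConnection();
            QueueSession session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = session.createSender(nextQueue);
            sender.send(session.createTextMessage("start next task"));
            // Note: no close() calls -- this is the leak described below.
        } catch (Exception e) {
            throw new RuntimeException("Failed to dispatch next task", e);
        }
    }
}
```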

The problem was that we discovered the JBoss server was “accumulating” hundreds, if not thousands, of threads until eventually the server would die from resource starvation (an OutOfMemoryError, in this case).

We looked at the stack traces of these threads to get an idea of what was going on. This became possible as of JBoss 4.0.2 via the jmx-console and the ServerInfo JMX MBean. (Note: you have to be running JBoss under a Java 1.5 JVM. That doesn’t mean your code has to be compiled for Java 1.5, as ours isn’t.)
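
If you’d rather get the same view from inside your own code, Java 1.5’s Thread.getAllStackTraces() exposes essentially the same information the ServerInfo MBean shows. A rough, throwaway sketch (which itself does need to be compiled as 1.5 code):

```java
import java.util.Map;

// Quick-and-dirty equivalent of the ServerInfo thread dump: print every live
// thread's name, state and stack trace. Requires a Java 1.5 JVM.
public class ThreadDumper {
    public static void dumpAllThreads() {
        Map<Thread, StackTraceElement[]> traces = Thread.getAllStackTraces();
        System.out.println("Live threads: " + traces.size());
        for (Map.Entry<Thread, StackTraceElement[]> entry : traces.entrySet()) {
            Thread t = entry.getKey();
            System.out.println("\n\"" + t.getName() + "\" (state: " + t.getState() + ")");
            for (StackTraceElement frame : entry.getValue()) {
                System.out.println("    at " + frame);
            }
        }
    }
}
```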

What we found was hundreds of pairs of threads running UIL2.SocketManager.ReadTask and UIL2.SocketManager.WriteTask, where the ReadTasks were blocked in java.net.SocketInputStream.socketRead0 and the WriteTasks were waiting in java.lang.Object.wait.

So here’s the magic clue I’m going to give that I sure wish someone else had posted on the web earlier. This behavior happens if your JBoss code doesn’t close its QueueSession and QueueConnection objects. Actually, it may be one or the other; I found that our code hadn’t called close() on either, and the garbage collector obviously wasn’t closing them for us.
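
In concrete terms, the fix was making sure close() always runs, however the send turns out. Here’s a minimal sketch of the corrected send path, reusing the same placeholder JNDI names as the earlier sketch; it’s not our production code, just the shape of the fix:

```java
import javax.jms.JMSException;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.naming.InitialContext;

// Corrected send path: the QueueSession and QueueConnection are always closed
// in a finally block, so JBossMQ can tear down its UIL2 ReadTask/WriteTask
// threads instead of leaving them behind.
public class TaskSender {

    public void sendNextTask(String body) throws Exception {
        InitialContext ic = new InitialContext();
        QueueConnectionFactory factory =
                (QueueConnectionFactory) ic.lookup("ConnectionFactory"); // placeholder JNDI name
        Queue queue = (Queue) ic.lookup("queue/NextTaskQueue");          // placeholder queue

        QueueConnection conn = null;
        QueueSession session = null;
        try {
            conn = factory.createQueueConnection();
            session = conn.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueSender sender = session.createSender(queue);
            sender.send(session.createTextMessage(body));
        } finally {
            // close() itself throws JMSException, so guard each call separately
            if (session != null) {
                try { session.close(); } catch (JMSException ignored) {}
            }
            if (conn != null) {
                try { conn.close(); } catch (JMSException ignored) {}
            }
        }
    }
}
```

Per the JMS spec, closing the connection also closes its sessions and senders, so conn.close() alone may be enough (as noted above, it may be one or the other), but closing both explicitly is cheap insurance.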

All these zombie threads had apparently been created by JBoss’s JMS/MQ system to handle reading messages from a JMS sender, and they never knew to terminate themselves, even after we’d finished consuming the messages on the server side.

Hope this helps anyone else in a similar situation! Write a comment if it has.

Author: Murray Todd Williams

I live in Austin, Texas. I'm enthusiastic about food, wine, programming (especially Scala), tennis and politics. I've worked for Accenture for over 12 years, where I'm a Product Manager for a suite of analytical business applications.

14 thoughts on “JBoss thread leaking JMS problem”

  1. I was experiencing exactly the same thing. We were rapidly developing a product prototype to determine whether J2EE is a suitable technology for the application which we are seeking to develop. The prototype has a message-driven architecture, with 1000’s of messages being sent as we performed our tests.

    Sure enough, we had the same problem – JBoss would accumulate ~5000 threads then die. We are all relative J2EE novices, so it was not until I found this blog post that a solution was found.

    Many thanks Murray.

    Colin E.

  2. Hi Murray, thanks for this tip. You’ll never know how much you helped me; you saved my ass with my boss and the client we’re working with. I’d already been thinking about switching servers if I couldn’t resolve this kind of problem. Thanks a lot and God bless.

  3. Hi people,
    I’m facing the same problem; however, we don’t use messaging services or MDBs… is there any possibility it’s related to Hibernate sessions?

  4. Thanks a lot! I was experiencing the same problem. I spent hours searching for the solution.

  5. Thanks for your post. I am in the very situation you describe, desperately browsing the web to find out why my JBoss doesn’t work as it should. It turns out I didn’t close the JMS session either 🙂

  6. Thank you thank you thank you! This really saved me a bunch of time.

    I was opening the connection and reusing it but, like you found, those threads never died and eventually would smash my server. That’s a funny little joke the folks at JBoss played on us.

  7. First of all a BIG thanks! I spent nearly a whole day trying to figure out why we were running out of stack memory! Thanks for sharing your solution.

  8. Hi,
    Thanks for the article. This gives me an insight regarding the error.
    One question though. Have you come across a similar thing for webservices? I am using JAX-WS webservices implemented via EJB and find that many of the threads are waiting at “java.net.SocketInputStream.socketRead0”. Since we don’t close connections here, I am wondering if JBoss has to do it for me…

  9. Funny that you posted this 5 years ago, and it solved my problem. I had one call closing the connections in a finally block, but another call closed them in a catch block, so it basically never closed them. We had a new client that had a lot of weird stuff going on, so I was looking in all the wrong places. This code hasn’t changed in a couple of years. Turns out we just never had the volume to trigger the issue.

    I see you’re in NY. If you wind up at a Java meetup, hit me up and I’ll buy you a beer.

    -Kevin
