Okay, so I’ve been messing around with this “catalina leaks” thing and let me tell you, it’s been a wild ride. I wanted to share my experience, just in case anyone else is banging their head against the wall with this.

It all started when I noticed my Tomcat server acting all wonky. I mean, it was using way more memory than it should have. Like, seriously, it was hogging resources like there’s no tomorrow. I started digging around, you know, checking logs, monitoring stuff, the usual troubleshooting dance.
And then I saw it – the dreaded “SEVERE: The web application [XXX] appears to have started a thread named [XXX] but has failed to stop it. This is very likely to create a memory leak.” message in my catalina.out log. Ugh, memory leaks, my old nemesis.
The Hunt Begins
So, I rolled up my sleeves and started hunting for the culprit. I knew it had something to do with threads not being stopped properly when my web app was getting redeployed or stopped. This is when the real fun started.
- First, I started taking thread dumps with jstack, which let me see what each thread was up to and helped me narrow down the suspect threads (there’s a small programmatic version of that right after this list).
- Next, I found out that using a profiler like JProfiler or YourKit is super helpful. I attached a profiler and watched what happened during the redeployment.
- Then I realized that some threads were coming from external libraries I was using. Those were tricky, because I couldn’t really control what went on inside them.
- After that, I started looking at my own code, especially the parts that used custom threads or thread pools.
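
By the way, if you want a quick cross-check of what jstack shows without leaving the JVM, the standard ThreadMXBean API dumps the same kind of information. This is just a minimal sketch using the plain JDK management classes, nothing specific to my app:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumpSnippet {
    public static void main(String[] args) {
        // Same data jstack prints, but gathered from inside the JVM.
        ThreadMXBean threadMXBean = ManagementFactory.getThreadMXBean();
        ThreadInfo[] infos = threadMXBean.dumpAllThreads(true, true);

        for (ThreadInfo info : infos) {
            // Suspect threads are usually the ones with non-Tomcat names
            // that are still alive after an app has been undeployed.
            System.out.printf("%s (%s)%n", info.getThreadName(), info.getThreadState());
            for (StackTraceElement frame : info.getStackTrace()) {
                System.out.println("    at " + frame);
            }
        }
    }
}
```

The handy part is that you can put something like this behind a debug endpoint and compare the output before and after a redeploy.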
I went through my code with a fine-tooth comb, checking for any place where I might have forgotten to shut down a thread properly. You know how it is, you start a thread and then you get distracted by something else and poof, it’s left hanging in the background.
The “Aha!” Moment
Finally, after a lot of trial and error, I found a few spots that were causing the leaks. I had a couple of custom threads that weren’t being terminated correctly in my ServletContextListener’s contextDestroyed method. Classic mistake, I know. But don’t throw stones, we’ve all been there.
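
For what it’s worth, the fix itself is boring: whatever you start in contextInitialized, stop it in contextDestroyed. Here’s a minimal sketch of the pattern I mean (the listener name and the thread pool are placeholders, not my actual code; the imports are the javax.servlet ones, so swap in jakarta.servlet on newer Tomcats):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

@WebListener
public class WorkerLifecycleListener implements ServletContextListener {

    private ExecutorService workerPool;

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // Whatever background work the app needs; a pool instead of raw
        // threads, so there is a single handle to shut down later.
        workerPool = Executors.newFixedThreadPool(4);
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // The part I had forgotten: stop the threads on undeploy, otherwise
        // Tomcat logs the "appears to have started a thread" warning.
        workerPool.shutdown();
        try {
            if (!workerPool.awaitTermination(10, TimeUnit.SECONDS)) {
                workerPool.shutdownNow();
            }
        } catch (InterruptedException e) {
            workerPool.shutdownNow();
            Thread.currentThread().interrupt();
        }
    }
}
```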

I also had a third-party library that was notorious for creating these kinds of leaks. The solution there was a bit hacky: I had to call some internal cleanup methods via reflection to force it to release its resources. Not ideal, but hey, it worked. I also updated the libraries to the latest versions, hoping they included some leak fixes.
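
In case you’re wondering what the reflection hack looked like, it was roughly this shape. I’m not naming the library, and LeakyLibrary / releaseResources below are made-up stand-ins for whatever internal class and cleanup method your particular leaky dependency exposes:

```java
import java.lang.reflect.Method;

public final class ThirdPartyCleanup {

    private ThirdPartyCleanup() {
    }

    // Call this from contextDestroyed, alongside your own thread shutdown.
    public static void forceRelease() {
        try {
            // Hypothetical class and method names; substitute whatever
            // internal cleanup hook your leaking library actually has.
            Class<?> internal = Class.forName("com.example.leaky.LeakyLibrary");
            Method cleanup = internal.getDeclaredMethod("releaseResources");
            cleanup.setAccessible(true);
            cleanup.invoke(null); // assuming a static cleanup method
        } catch (ReflectiveOperationException e) {
            // Best effort only; if the internals changed, just log and move on.
            System.err.println("Could not run third-party cleanup: " + e);
        }
    }
}
```

Treat this as a last resort, though: it breaks the moment the library renames its internals, which is exactly why upgrading to a fixed version is the better long-term answer.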
After deploying the fixes, I kept a close eye on the logs and memory usage. And guess what? The memory leaks were gone! The server was running smoothly, and I could finally sleep at night. It took a while, but I got there in the end. It’s not that hard, really: you just need to be patient, and a little bit lucky.