
Memory consumption issues of a Java program


I have a Java program that runs on my Ubuntu 10.04 machine and, without any user interaction, repeatedly queries a MySQL database and then creates image and text files from the data read from the DB. It makes tens of thousands of queries and creates tens of thousands of files.



After some hours of running, the available memory on my machine including swap space is totally used up. I haven't started other programs and the processes running in the background don't consume much memory and don't really grow in consumption.



To find out what is allocating so much memory, I wanted to analyse a heap dump, so I started the process with -Xms64m -Xmx128m -XX:+HeapDumpOnOutOfMemoryError.



To my surprise, the situation was the same as before: after some hours the program had used up all of the swap, which is way beyond the given max of 128m.



Another run, debugged with VisualVM, showed that the heap allocation never goes beyond the max of 128m - when the allocated memory approaches the max, a big part of it is released again (I assume by the garbage collector).



So a steadily growing heap cannot be the problem.
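
To cross-check this from inside the process, I could also log heap and non-heap usage periodically via the standard MemoryMXBean. A minimal sketch (it only covers memory the JVM itself accounts for, so memory allocated by native code would not show up here either):

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryMXBean;

    // Logs what the JVM itself accounts for: the heap plus the "non-heap" areas
    // (permgen, code cache). Memory allocated by native code is not included,
    // which is why these numbers can stay small while the OS-level process grows.
    public final class MemoryLogger {
        public static void log() {
            MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
            System.out.println("heap used: " + mem.getHeapMemoryUsage().getUsed()
                    + " bytes, non-heap used: " + mem.getNonHeapMemoryUsage().getUsed()
                    + " bytes");
        }
    }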



When the memory is all used up:



free shows the following:




             total       used       free     shared    buffers     cached
Mem:       2060180    2004860      55320          0        848    1042908
-/+ buffers/cache:      961104    1099076
Swap:      3227640    3227640          0



top shows the following:




USER       VIRT   RES   SHR   COMMAND
[my_id]    504m   171m  4520  java
[my_id]    371m   162m  4368  java



(these are by far the two "biggest" processes and the only Java processes running)



My first question is:



  • How can I find out at the OS level (e.g. with command-line tools) what is allocating so much memory? top/htop hasn't helped me. In case many, many tiny processes of the same type are eating up the memory: is there a way to intelligently sum up similar processes? (I know this is probably off-topic, as it is a Linux/Ubuntu question, but my main problem may still be Java-related.)



My old questions were:



  • Why isn't the memory consumption of my program given in the top output?

  • How can I find out what is allocating so much memory?

  • If the heap isn't the problem, is the only "allocating factor" the stack? (the stack shouldn't be a problem as there is no deep "method call depth")

  • What about external resources such as DB connections?



Comments

  1. If your Java process is indeed the one taking the memory and there is nothing suspicious in VisualVM or the memory dump, then it must be somewhere in native code - either in the JVM or in one of the libraries you're using. At the JVM level it could be, for example, NIO or memory-mapped files. If some of your libraries use native calls, or you're using a non-type-4 JDBC driver for your database, then the leak could be there.

    Some suggestions:


    There are some details on how to find memory leaks in native code here. Good read also.
    As usual, make sure you're properly closing all resources (files, streams, connections, threads etc.). Most of these call a native implementation at some point, so the memory consumed might not be directly visible in the JVM (see the sketch after this list).
    Check the resources consumed at the OS level - number of open files, file descriptors, network connections etc.
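
    For example, closing the JDBC objects and the output streams in finally blocks so they are released even when an exception occurs mid-task (a minimal sketch; the driver URL, query and file name are placeholders):

        import java.io.FileOutputStream;
        import java.io.IOException;
        import java.io.OutputStream;
        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Statement;

        public final class ResourceHandling {

            // Runs one task: query, then write one file. Every resource is closed
            // in a finally block so it is released even if an exception occurs.
            public static void runOnce(String jdbcUrl, String query, String outFile)
                    throws SQLException, IOException {
                Connection con = DriverManager.getConnection(jdbcUrl);
                try {
                    Statement st = con.createStatement();
                    try {
                        ResultSet rs = st.executeQuery(query);
                        try {
                            OutputStream out = new FileOutputStream(outFile);
                            try {
                                while (rs.next()) {
                                    out.write(rs.getString(1).getBytes("UTF-8"));
                                }
                            } finally {
                                out.close();
                            }
                        } finally {
                            rs.close();
                        }
                    } finally {
                        st.close();
                    }
                } finally {
                    con.close();
                }
            }
        }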

  2. @maximdim's answer is great general advice for this kind of situation. What is likely happening here is that a very small Java object is being retained that causes a larger amount of native (OS-level) memory to hang around. This native memory is not accounted for in the Java heap. The Java object is likely so small that you will hit your system memory limit well before the Java object retention would overwhelm the heap.

    So the trick for finding this is to use successive heap dumps, far enough apart that you have noticed memory growth for the whole process, but not so far apart that a ton of work has gone on. What you are looking for are Java object counts in the heap that keep increasing and have native memory attached.
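
    One way to take those successive dumps on demand (rather than waiting for an OutOfMemoryError) is jmap, or from inside the program via the HotSpotDiagnosticMXBean - a minimal sketch, assuming a Sun/Oracle JVM where this com.sun.management bean is available:

        import java.lang.management.ManagementFactory;

        import com.sun.management.HotSpotDiagnosticMXBean;

        // Dumps the current JVM's heap to the given .hprof file; call it at
        // intervals and compare the object counts between the dumps.
        public final class HeapDumper {
            public static void dump(String file) throws Exception {
                HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                        ManagementFactory.getPlatformMBeanServer(),
                        "com.sun.management:type=HotSpotDiagnostic",
                        HotSpotDiagnosticMXBean.class);
                bean.dumpHeap(file, true); // true = dump only live (reachable) objects
            }
        }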

    These could be file handles, sockets, db connections, or image handles just to name a few that are likely directly applicable for you.

    On rarer occasions, there is a native resource that is leaked by the Java implementation, even after the Java object is garbage collected. I once ran into a WinCE 5 bug where 4 KB were leaked with each socket close. So there was no Java object growth, but there was process memory usage growth. In these cases, it is helpful to make some counters and keep track of Java allocations of objects with native memory vs. the actual growth. Then, over a short enough window, you can look for any correlations and use these to make smaller test cases.
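
    A rough illustration of such counters (the resource names here are hypothetical): increment on open, decrement on close, and log the values next to the process's RES size from top to look for a correlation:

        import java.util.concurrent.atomic.AtomicLong;

        // Hypothetical counters for resources that are backed by native memory.
        // If a counter keeps climbing together with the process size, that
        // resource is a good suspect for the leak.
        public final class NativeResourceCounters {
            public static final AtomicLong OPEN_CONNECTIONS = new AtomicLong();
            public static final AtomicLong OPEN_STREAMS = new AtomicLong();

            public static void log() {
                System.out.println("open connections: " + OPEN_CONNECTIONS.get()
                        + ", open streams: " + OPEN_STREAMS.get());
            }
        }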

    One other hint: make sure all your close operations are in finally blocks, just in case an exception pops you out of your normal control flow. This has been known to cause this sort of problem as well.

  3. Hmm... use ipcs to check that shared memory segments aren't left open. Check the open file descriptors of your JVM (/proc/<jvm process id>/fd/*). In top, type f p F p to add the swap column and sort the task list by used swap.

    That's all I can come up with for now, hope it helps at least a bit.

  4. As @maximdim and @JamesBranigan point out, the likely culprit is some native interaction from your code. But as you haven't been able to track down exactly where the problematic interaction is with the available tools, why don't you try a brute-force approach?

    You've outlined a two-part process: query MySQL and write files. Either one of those things could be excluded from the process as a test. Test one: eliminate the query and hard-code the content that would have been returned. Test two: do the query, but don't bother writing the files. Do you still have leaks?
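
    A minimal sketch of how those two tests could be wired up, assuming the task is refactored so each phase can be switched off (the class name, flags and placeholder methods are hypothetical, not your real code):

        // Run with doQuery=false for test one (hard-coded data, no MySQL) or with
        // doWrite=false for test two (query only, no files), then watch whether
        // the process memory still grows.
        public class LeakIsolationTest {

            public static void main(String[] args) {
                boolean doQuery = args.length < 1 || Boolean.parseBoolean(args[0]);
                boolean doWrite = args.length < 2 || Boolean.parseBoolean(args[1]);

                for (int i = 0; i < 10000; i++) {
                    String data = doQuery ? queryDatabase(i) : "hard-coded content";
                    if (doWrite) {
                        writeFiles(i, data);
                    }
                }
            }

            private static String queryDatabase(int i) {
                return "row " + i; // placeholder for the real MySQL query
            }

            private static void writeFiles(int i, String data) {
                // placeholder for the real img/txt file creation
                System.out.println("would write files for task " + i + ": " + data);
            }
        }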

    There may be other testable cases as well, depending on what else your application does.

  5. Your file system caching is probably causing this: the file system cache will eat up all available memory when doing a large amount of IO. Your system's performance should not be adversely affected by this behaviour, as the kernel will immediately release file system cache when memory is requested by a process.

  6. As there was no activity after the day I asked the question (until March 23) and as I still couldn't find the cause of the memory consumption, I "solved" the problem pragmatically.

    The program causing the problem is basically a repetition of a "task" (i.e. querying the DB and then creating files). It is relatively easy to parameterize the program so that only a certain subset of the tasks is executed, not all of them.

    So now I repeatedly run my program from a shell script, each process executing only a set of tasks (parameterized through arguments). In the end, all tasks are executed, but since a single process only handles a subset of the tasks, there are no memory issues any more.
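
    A sketch of what that parameterization looks like (the class and method names here are just placeholders for my real code); the shell script then calls the program once per slice, e.g. with the arguments 0 1000, then 1000 2000, and so on:

        // Each JVM invocation processes only the task indices [from, to); whatever
        // native memory the process accumulates is reclaimed by the OS when it
        // exits, and the next invocation starts fresh.
        public class BatchRunner {

            public static void main(String[] args) {
                int from = Integer.parseInt(args[0]);
                int to = Integer.parseInt(args[1]);
                for (int task = from; task < to; task++) {
                    runTask(task);
                }
            }

            private static void runTask(int task) {
                // placeholder for the real work: query the DB, then write the files
                System.out.println("running task " + task);
            }
        }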

    For me that is a sufficient solution. If you have a similar problem and your program has a batch-like execution structure, this may be a pragmatic approach.

    When I find the time I will look into the new suggestions hopefully identifying the root cause (thanks for the help!).

  7. Are you creating separate threads to run your "tasks"? The memory used to create threads is separate from the Java heap.

    This means that even if you specify -Xmx128m the memory used by the Java process could be much higher, depending on how many threads you're using and the thread stack size (each thread gets a stack allocated, of size specified by -Xss).

    Example from work recently: we had a Java heap of 4 GB (-Xmx4G), but the OS process was consuming upwards of 6 GB, also using up the swap space. When I checked the process status with cat /proc/<PID>/status I noticed we had 11,000 threads running. Since we had -Xss256K set, this is easily explained: 10,000 threads means 2.5 GB.
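
    A quick way to check whether that is happening here is to log the live thread count from inside the running program (a minimal sketch using the standard ThreadMXBean); the count times the configured -Xss value gives a rough lower bound on the stack memory used outside the heap:

        import java.lang.management.ManagementFactory;
        import java.lang.management.ThreadMXBean;

        // Call this periodically from the program under suspicion.
        public final class ThreadCountCheck {
            public static void log() {
                ThreadMXBean threads = ManagementFactory.getThreadMXBean();
                System.out.println("live threads: " + threads.getThreadCount()
                        + ", peak: " + threads.getPeakThreadCount());
            }
        }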

  8. You say you are creating image files - are you creating image objects? If so, are you calling dispose() on these objects when you are done?

    If I remember rightly, Java AWT image objects allocate native resources that must be disposed explicitly.
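
    If the files are produced via the java.awt image APIs, the cleanup could look roughly like this (a sketch under that assumption: Graphics2D.dispose() releases the native graphics context, and Image.flush() releases cached/native image data):

        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;
        import java.io.File;
        import java.io.IOException;
        import javax.imageio.ImageIO;

        public final class ImageFileWriter {
            public static void writeImage(File target) throws IOException {
                BufferedImage img = new BufferedImage(200, 100, BufferedImage.TYPE_INT_RGB);
                Graphics2D g = img.createGraphics();
                try {
                    g.drawString("hello", 10, 50); // stand-in for the real drawing code
                } finally {
                    g.dispose();                   // release the native graphics context
                }
                ImageIO.write(img, "png", target);
                img.flush();                       // release cached/native image data
            }
        }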

