
Why reading a volatile and writing to a field member is not scalable in Java?



Observe the following program written in Java (a complete runnable version follows; the important part is in the smaller snippet a little further below):







import java.util.ArrayList;

/** A not easy to explain benchmark. */
class MultiVolatileJavaExperiment {

    public static void main(String[] args) {
        (new MultiVolatileJavaExperiment()).mainMethod(args);
    }

    int size = Integer.parseInt(System.getProperty("size"));
    int par = Integer.parseInt(System.getProperty("par"));

    public void mainMethod(String[] args) {
        int times = 0;
        if (args.length == 0) times = 1;
        else times = Integer.parseInt(args[0]);
        ArrayList<Long> measurements = new ArrayList<Long>();

        for (int i = 0; i < times; i++) {
            long start = System.currentTimeMillis();
            run();
            long end = System.currentTimeMillis();

            long time = (end - start);
            System.out.println(i + ") Running time: " + time + " ms");
            measurements.add(time);
        }

        System.out.println(">>>");
        System.out.println(">>> All running times: " + measurements);
        System.out.println(">>>");
    }

    public void run() {
        int sz = size / par;
        ArrayList<Thread> threads = new ArrayList<Thread>();

        for (int i = 0; i < par; i++) {
            threads.add(new Reader(sz));
            threads.get(i).start();
        }
        for (int i = 0; i < par; i++) {
            try {
                threads.get(i).join();
            } catch (Exception e) {}
        }
    }

    final class Foo {
        int x = 0;
    }

    final class Reader extends Thread {
        volatile Foo vfoo = new Foo();
        Foo bar = null;
        int sz;

        public Reader(int _sz) {
            sz = _sz;
        }

        public void run() {
            int i = 0;
            while (i < sz) {
                vfoo.x = 1;
                // with the following line commented
                // the scalability is almost linear
                bar = vfoo; // <- makes benchmark 2x slower for 2 processors - why?
                i++;
            }
        }
    }

}







Explanation : The program is actually very simple. It loads the integers size and par from the system properties (passed to the JVM with the -D flag) - these are the total number of loop iterations and the number of threads to use. It then parses the first command-line argument, which says how many times to repeat the benchmark (we want to be sure that the JIT has done its work and to get more reliable measurements).





The run method is called in each repetition. It simply starts par threads, each of which executes a loop with size / par iterations. The thread body is defined in the Reader class. Each loop iteration reads the volatile member vfoo and assigns 1 to its field x. After that, vfoo is read once again and assigned to the non-volatile field bar.





Notice that the program spends most of its time executing the loop body, so the run method of the Reader thread is the focus of this benchmark:







final class Reader extends Thread {
    volatile Foo vfoo = new Foo();
    Foo bar = null;
    int sz;

    public Reader(int _sz) {
        sz = _sz;
    }

    public void run() {
        int i = 0;
        while (i < sz) {
            vfoo.x = 1;
            // with the following line commented
            // the scalability is almost linear
            bar = vfoo; // <- makes benchmark 2x slower for 2 processors - why?
            i++;
        }
    }
}







Observations : Running java -Xmx512m -Xms512m -server -Dsize=500000000 -Dpar=1 MultiVolatileJavaExperiment 10 on an







Ubuntu Server 10.04.3 LTS

8 core Intel(R) Xeon(R) CPU X5355 @2.66GHz

~20GB ram

java version "1.6.0_26"

Java(TM) SE Runtime Environment (build 1.6.0_26-b03)

Java HotSpot(TM) 64-Bit Server VM (build 20.1-b02, mixed mode)







I get the following times:







>>> All running times: [821, 750, 1011, 750, 758, 755, 1219, 751, 751, 1012]







Now, setting -Dpar=2 , I get:







>>> All running times: [1618, 380, 1476, 1245, 1390, 1391, 1445, 1393, 1511, 1508]







Apparently, this doesn't scale for some reason - I would have expected the second output to be twice as fast (although it does seem to be in one of the early iterations - 380 ms).





Interestingly, commenting out the line bar = vfoo (which is not even supposed to be a volatile write) yields the following times for -Dpar set to 1, 2, 4 and 8:







>>> All running times: [762, 563, 563, 563, 563, 563, 570, 566, 563, 563]

>>> All running times: [387, 287, 285, 284, 283, 281, 282, 282, 281, 282]

>>> All running times: [204, 146, 143, 142, 141, 141, 141, 141, 141, 141]

>>> All running times: [120, 78, 74, 74, 81, 75, 73, 73, 72, 71]







It scales perfectly.





Analysis : First of all, there are no garbage collection cycles occurring here (I've added -verbose:gc as well to check this).





I get similar results on my iMac.





Each thread writes to its own field, and the Foo instances belonging to different threads don't appear to end up in the same cache lines - adding more members to Foo to increase its size doesn't change the measurements. Each thread object instance also has more than enough fields to fill up an L1 cache line. So this probably isn't a memory issue.
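For reference, the size-increasing experiment mentioned above looked roughly like this - the number and type of padding fields is arbitrary, the only point is to push a Foo instance past a typical 64-byte cache line:

final class Foo {
    int x = 0;
    // padding only - enough to push the object size past one cache line
    long p1, p2, p3, p4, p5, p6, p7, p8;
}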





My next thought was that the JIT might be doing something weird, because the early iterations usually do scale as expected in the uncommented version, so I checked this out by printing the assembly (see this post on how to do that).







java -Xmx512m -Xms512m -server -XX:CompileCommand=print,*Reader.run -Dsize=500000000 -Dpar=1 MultiVolatileJavaExperiment 10







and I get these two outputs for the two versions of the JIT-compiled run method in Reader. The version with bar = vfoo commented out (properly scalable):







[Verified Entry Point]
0xf36c9fac: mov %eax,-0x3000(%esp)
0xf36c9fb3: push %ebp
0xf36c9fb4: sub $0x8,%esp
0xf36c9fba: mov 0x68(%ecx),%ebx
0xf36c9fbd: test %ebx,%ebx
0xf36c9fbf: jle 0xf36c9fec
0xf36c9fc1: xor %ebx,%ebx
0xf36c9fc3: nopw 0x0(%eax,%eax,1)
0xf36c9fcc: xchg %ax,%ax
0xf36c9fd0: mov 0x6c(%ecx),%ebp
0xf36c9fd3: test %ebp,%ebp
0xf36c9fd5: je 0xf36c9ff7
0xf36c9fd7: movl $0x1,0x8(%ebp)

---------------------------------------------

0xf36c9fde: mov 0x68(%ecx),%ebp
0xf36c9fe1: inc %ebx ; OopMap{ecx=Oop off=66}
                     ;*goto
                     ; - org.scalapool.bench.MultiVolatileJavaExperiment$Reader::run@21 (line 83)

---------------------------------------------

0xf36c9fe2: test %edi,0xf7725000 ; {poll}
0xf36c9fe8: cmp %ebp,%ebx
0xf36c9fea: jl 0xf36c9fd0
0xf36c9fec: add $0x8,%esp
0xf36c9fef: pop %ebp
0xf36c9ff0: test %eax,0xf7725000 ; {poll_return}
0xf36c9ff6: ret
0xf36c9ff7: mov $0xfffffff6,%ecx
0xf36c9ffc: xchg %ax,%ax
0xf36c9fff: call 0xf36a56a0 ; OopMap{off=100}
                            ;*putfield x
                            ; - org.scalapool.bench.MultiVolatileJavaExperiment$Reader::run@15 (line 79)
                            ; {runtime_call}
0xf36ca004: call 0xf6f877a0 ; {runtime_call}







The version with bar = vfoo left in (non-scalable, slower):







[Verified Entry Point]
0xf3771aac: mov %eax,-0x3000(%esp)
0xf3771ab3: push %ebp
0xf3771ab4: sub $0x8,%esp
0xf3771aba: mov 0x68(%ecx),%ebx
0xf3771abd: test %ebx,%ebx
0xf3771abf: jle 0xf3771afe
0xf3771ac1: xor %ebx,%ebx
0xf3771ac3: nopw 0x0(%eax,%eax,1)
0xf3771acc: xchg %ax,%ax
0xf3771ad0: mov 0x6c(%ecx),%ebp
0xf3771ad3: test %ebp,%ebp
0xf3771ad5: je 0xf3771b09
0xf3771ad7: movl $0x1,0x8(%ebp)

---------------------------------------------

0xf3771ade: mov 0x6c(%ecx),%ebp
0xf3771ae1: mov %ebp,0x70(%ecx)
0xf3771ae4: mov 0x68(%ecx),%edi
0xf3771ae7: inc %ebx
0xf3771ae8: mov %ecx,%eax
0xf3771aea: shr $0x9,%eax
0xf3771aed: movb $0x0,-0x3113c300(%eax) ; OopMap{ecx=Oop off=84}
                                        ;*goto
                                        ; - org.scalapool.bench.MultiVolatileJavaExperiment$Reader::run@29 (line 83)

---------------------------------------------

0xf3771af4: test %edi,0xf77ce000 ; {poll}
0xf3771afa: cmp %edi,%ebx
0xf3771afc: jl 0xf3771ad0
0xf3771afe: add $0x8,%esp
0xf3771b01: pop %ebp
0xf3771b02: test %eax,0xf77ce000 ; {poll_return}
0xf3771b08: ret
0xf3771b09: mov $0xfffffff6,%ecx
0xf3771b0e: nop
0xf3771b0f: call 0xf374e6a0 ; OopMap{off=116}
                            ;*putfield x
                            ; - org.scalapool.bench.MultiVolatileJavaExperiment$Reader::run@15 (line 79)
                            ; {runtime_call}
0xf3771b14: call 0xf70307a0 ; {runtime_call}







The differences between the two versions lie between the --------- markers. I expected to find synchronization instructions in the assembly that might account for the performance issue - while a few extra shift, mov and inc instructions might affect absolute performance numbers, I don't see how they could affect scalability.





So I suspect that this is some sort of memory issue related to storing to a field in the object. On the other hand, I'm also inclined to believe that the JIT does something funny, because in one iteration the measured time is twice as fast, as it should be.
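To rule the JIT out completely (at the cost of much higher absolute times), the benchmark could also be run in the interpreter with a smaller size, for example:

java -Xint -Xmx512m -Xms512m -Dsize=50000000 -Dpar=2 MultiVolatileJavaExperiment 10

If the gap between -Dpar=1 and -Dpar=2 persists there as well, the compiler is probably not the culprit.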





Can anyone explain what is going on here? Please be precise and include references that support your claims.





Thank you!





EDIT:





Here's the bytecode for the fast (scalable) version:







public void run();
  LineNumberTable:
   line 77: 0
   line 78: 2
   line 79: 10
   line 83: 18
   line 85: 24

  Code:
   Stack=2, Locals=2, Args_size=1
   0:  iconst_0
   1:  istore_1
   2:  iload_1
   3:  aload_0
   4:  getfield #7; //Field sz:I
   7:  if_icmpge 24
   10: aload_0
   11: getfield #5; //Field vfoo:Lorg/scalapool/bench/MultiVolatileJavaExperiment$Foo;
   14: iconst_1
   15: putfield #8; //Field org/scalapool/bench/MultiVolatileJavaExperiment$Foo.x:I
   18: iinc 1, 1
   21: goto 2
   24: return

  LineNumberTable:
   line 77: 0
   line 78: 2
   line 79: 10
   line 83: 18
   line 85: 24

  StackMapTable: number_of_entries = 2
   frame_type = 252 /* append */
     offset_delta = 2
     locals = [ int ]
   frame_type = 21 /* same */







The slow (non-scalable) version with bar = vfoo :







public void run();
  LineNumberTable:
   line 77: 0
   line 78: 2
   line 79: 10
   line 82: 18
   line 83: 26
   line 85: 32

  Code:
   Stack=2, Locals=2, Args_size=1
   0:  iconst_0
   1:  istore_1
   2:  iload_1
   3:  aload_0
   4:  getfield #7; //Field sz:I
   7:  if_icmpge 32
   10: aload_0
   11: getfield #5; //Field vfoo:Lorg/scalapool/bench/MultiVolatileJavaExperiment$Foo;
   14: iconst_1
   15: putfield #8; //Field org/scalapool/bench/MultiVolatileJavaExperiment$Foo.x:I
   18: aload_0
   19: aload_0
   20: getfield #5; //Field vfoo:Lorg/scalapool/bench/MultiVolatileJavaExperiment$Foo;
   23: putfield #6; //Field bar:Lorg/scalapool/bench/MultiVolatileJavaExperiment$Foo;
   26: iinc 1, 1
   29: goto 2
   32: return

  LineNumberTable:
   line 77: 0
   line 78: 2
   line 79: 10
   line 82: 18
   line 83: 26
   line 85: 32

  StackMapTable: number_of_entries = 2
   frame_type = 252 /* append */
     offset_delta = 2
     locals = [ int ]
   frame_type = 29 /* same */







The more I experiment with this, the more it seems that this has nothing to do with volatiles at all - it has something to do with writing to object fields. My hunch is that this is somehow a memory contention issue - something with caches and false sharing, although there is no explicit synchronization at all.
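For the record, here is a minimal, self-contained sketch (separate from the benchmark; class and field names are made up) of the kind of false sharing I have in mind: two threads write to adjacent fields of the same object, so both writes hit the same cache line and the line ping-pongs between cores. Padding the fields apart, or putting them in separately allocated objects, makes the same loops run essentially independently.

final class FalseSharingSketch {
    // Two counters that almost certainly end up in the same cache line.
    static final class Adjacent {
        volatile long a; // written only by t1
        volatile long b; // written only by t2
    }

    public static void main(String[] args) throws InterruptedException {
        final Adjacent shared = new Adjacent();
        final long iterations = 100000000L;
        Thread t1 = new Thread() {
            public void run() { for (long i = 0; i < iterations; i++) shared.a = i; }
        };
        Thread t2 = new Thread() {
            public void run() { for (long i = 0; i < iterations; i++) shared.b = i; }
        };
        long start = System.currentTimeMillis();
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("adjacent volatile fields: "
                + (System.currentTimeMillis() - start) + " ms");
    }
}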





EDIT 2:





Interestingly, changing the program like this:







final class Holder {
    public Foo bar = null;
}

final class Reader extends Thread {
    volatile Foo vfoo = new Foo();
    Holder holder = null;
    int sz;

    public Reader(int _sz) {
        sz = _sz;
    }

    public void run() {
        int i = 0;
        holder = new Holder();
        while (i < sz) {
            vfoo.x = 1;
            holder.bar = vfoo;
            i++;
        }
    }
}







resolves the scaling issue. Apparently, the Holder object gets created after the thread is started and is probably allocated in a different region of memory; that object is then modified concurrently, as opposed to the field bar in the thread object itself, which is somehow "close" in memory across the different thread instances.
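If bar really were sharing a cache line with something "hot" in a neighbouring object, then padding it inside Reader itself should have a similar effect to the Holder workaround. A sketch of that experiment (untested; the padding field names and counts are arbitrary, and HotSpot may reorder fields of different types, so the padding is not guaranteed to stay adjacent to bar):

final class Reader extends Thread {
    volatile Foo vfoo = new Foo();
    long q1, q2, q3, q4, q5, q6, q7, q8; // padding before bar
    Foo bar = null;
    long r1, r2, r3, r4, r5, r6, r7, r8; // padding after bar
    int sz;

    public Reader(int _sz) {
        sz = _sz;
    }

    public void run() {
        int i = 0;
        while (i < sz) {
            vfoo.x = 1;
            bar = vfoo;
            i++;
        }
    }
}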


Comments

  1. This is what I think is happening (keep in mind I'm not familiar with HotSpot):

    0xf36c9fd0: mov 0x6c(%ecx),%ebp ; vfoo
    0xf36c9fd3: test %ebp,%ebp ; vfoo is null?
    0xf36c9fd5: je 0xf36c9ff7 ; throw NullPointerException (I guess)
    0xf36c9fd7: movl $0x1,0x8(%ebp) ; vfoo.x = 1
    0xf36c9fde: mov 0x68(%ecx),%ebp ; sz
    0xf36c9fe1: inc %ebx ; i++
    0xf36c9fe2: test %edi,0xf7725000 ; safepoint on end of loop
    0xf36c9fe8: cmp %ebp,%ebx ; i < sz?
    0xf36c9fea: jl 0xf36c9fd0


    0xf3771ad0: mov 0x6c(%ecx),%ebp ; vfoo
    0xf3771ad3: test %ebp,%ebp ; vfoo is null?
    0xf3771ad5: je 0xf3771b09 ; throw NullPointerException (I guess)
    0xf3771ad7: movl $0x1,0x8(%ebp) ; vfoo.x = 1
    0xf3771ade: mov 0x6c(%ecx),%ebp ; \
    0xf3771ae1: mov %ebp,0x70(%ecx) ; / bar = vfoo
    0xf3771ae4: mov 0x68(%ecx),%edi ; sz
    0xf3771ae7: inc %ebx ; i++
    0xf3771ae8: mov %ecx,%eax ;
    0xf3771aea: shr $0x9,%eax ; ??? \ Probably replaced later
    0xf3771aed: movb $0x0,-0x3113c300(%eax) ; ??? / by some barrier code?
    0xf3771af4: test %edi,0xf77ce000 ; safepoint
    0xf3771afa: cmp %edi,%ebx ; i < sz ?
    0xf3771afc: jl 0xf3771ad0 ;


    The reason I think the above code stands in for a barrier is that when taking the NullPointerException path, the scalable version has an XCHG, which acts as a barrier, while the non-scalable version has a NOP there.

    The rationale would be that there needs to be a happens-before ordering between the initial load of vfoo and joining the thread. In the volatile case, the barrier would be inside the loop, so it wouldn't need to be elsewhere. What I don't understand is why XCHG isn't used inside the loop. Maybe runtime detection of MFENCE support?

  2. You are not actually writing to a volatile field, so the volatile field can be cached in each thread.

    Using volatile prevents some compiler optimisations, and in a micro-benchmark you can see a large relative difference.

    In the example above, the commented-out version is longer because it has the loop unrolled to place two iterations in one actual loop. This can almost double performance.

    When using volatile you can see there is no loop unrolling.

    BTW: You can remove a lot of the code in your example to make it easier to read. ;)

  3. Edit: This answer did not stand up to testing.

    I have no way to test this right now (no multicore CPU in this machine), but here is a theory: The Foo instances might not be in the same cache lines, but perhaps the Reader instances are.

    This means the slowdown could be explained by the write to bar rather than the read of vfoo, because writing to bar would invalidate that cache line for the other core and cause lots of copying between caches. Commenting out the write to bar (which is the only write to a field of Reader in the loop) stops the slowdown, which is consistent with this explanation.

    Edit: According to this article, the memory layout of objects is such that the bar reference would be the last field in the layout of the Reader object. This means it is likely to land in the same cache line as the next object on the heap. Since I am not sure about the order in which new objects are allocated on the heap, I suggested in the comment below padding both "hot" object types with references, which should be effective in separating the objects (at least I hope it will, but it depends on how fields of the same type are sorted in memory). A sketch of how to inspect the actual field offsets follows after these comments.

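Following up on the layout argument in the last comment: one way to see where bar actually ends up inside Reader is to print the field offsets with sun.misc.Unsafe. This is only a diagnostic sketch - FieldOffsets is a made-up helper, it relies on the unsupported sun.misc.Unsafe API, and it assumes it is compiled alongside the benchmark (adjust the class name if the benchmark lives in a package):

import java.lang.reflect.Field;
import sun.misc.Unsafe;

class FieldOffsets {
    public static void main(String[] args) throws Exception {
        // Grab the Unsafe instance reflectively; it is not meant for application code.
        Field theUnsafe = Unsafe.class.getDeclaredField("theUnsafe");
        theUnsafe.setAccessible(true);
        Unsafe unsafe = (Unsafe) theUnsafe.get(null);

        Class<?> reader = Class.forName("MultiVolatileJavaExperiment$Reader");
        for (String name : new String[] { "vfoo", "bar", "sz" }) {
            System.out.println(name + " offset in Reader: "
                    + unsafe.objectFieldOffset(reader.getDeclaredField(name)));
        }
    }
}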
