From Java Concurrency in Practice

Threads share the memory address space of their owning process: all the threads within a process have access to the same variables and allocate objects from the same heap.

Also

Declaring a variable as volatile means that threads must not cache it; in other words, they should not trust any locally cached value of the variable and must read it directly from main memory.

My question is

Say there is a non-volatile instance variable 'a' that is modified by a thread. Won't the modified value of 'a' be written to the heap? If it is written to the heap, another thread reading that instance variable would automatically see the updated value, since threads share instance variables through the heap. So how is the behavior of a volatile variable any different?

1 Answer

The difference is that a volatile variable is forced to be flushed from all caches before it is read, so every read comes from main memory.

A non-volatile variable can be cached as many times as desired, in any thread.

Essentially

  • Every time you read a volatile variable, it has the value of the most recent write to it from any thread.

  • Every time you read a non-volatile variable, it has the value of the most recent write to it from this thread; it may or may not reflect values that other threads have written.

In the specific case that is the most common cause of issues, it is quite possible for one thread to write a value to a variable and for a second thread to never see the new value.
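A minimal sketch of this guarantee (the class and method names here are illustrative, not from any library): a worker thread spins until a flag is cleared by another thread. Because the flag is volatile, the worker is guaranteed to see the write promptly; if it were non-volatile, the JIT could legally hoist the read out of the loop and the worker might spin forever.

```java
public class VolatileDemo {
    // volatile: every read sees the most recent write from any thread
    private static volatile boolean running = true;

    // Returns true if the worker thread observed the flag change and exited.
    static boolean stopsPromptly() throws InterruptedException {
        running = true;
        Thread worker = new Thread(() -> {
            while (running) {
                // busy-wait until another thread clears the flag
            }
        });
        worker.start();

        Thread.sleep(100);   // let the worker start spinning
        running = false;     // volatile write: visible to the worker's next read
        worker.join(1000);   // with volatile, the worker terminates well within this
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("worker stopped: " + stopsPromptly());
        // prints "worker stopped: true"
    }
}
```

If you remove the `volatile` keyword, the loop may or may not terminate depending on the JVM and optimization level, which is exactly the "cannot assume either way" point made in the comments below.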

7 Comments

So do threads while reading writing cache even the instance variables?
@underdog - they can. They don't actually have to, and often they don't, but the critical point is that you cannot assume either way.
@OldCurmudgeon can you explain "they don't actually have to"?
@Prakash - A VM writer need not do any caching at all, but if they do, they are not allowed to cache volatile variables. Well, actually they are, but all access to them must flush any cache of them first.
@Prakash It's because of the underlying hardware architecture. When multiple processors access a shared memory system, each access takes an order of magnitude longer than when a processor accesses its own private cache. The processors cooperate to keep the shared memory up-to-date with their caches as time permits or, when the program executes special synchronization instructions (e.g., when accessing a volatile field). The less often a program needs to synchronize caches, the better it will perform.