Volatile
Marking a member variable as volatile ensures that whenever the variable is accessed, its value is read from main memory rather than from the cache associated with the accessing thread.
Consider two threads, T1 and T2, operating on the same object: all of that object's member variables are shared between T1 and T2. Each thread may cache the values of these variables to improve performance (hitting the cache is faster than accessing main memory).
If T2 then changes the value of a shared variable xyz, that change may not be reflected in T1's cache. T1 would keep reading the stale value and so end up with an inconsistent view of xyz.
Marking the variable as volatile ensures that threads always read the shared variable from main memory rather than from their caches, avoiding such stale reads.
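A minimal sketch of this visibility guarantee (class and field names are illustrative, not from the original): one thread spins on a volatile flag while another thread clears it. Without volatile, the spinning thread might never observe the update.

```java
// Sketch: a volatile flag lets one thread signal another to stop.
public class VolatileFlag {
    private volatile boolean running = true;

    public void stop() { running = false; }

    public boolean isRunning() { return running; }

    public static void main(String[] args) throws InterruptedException {
        VolatileFlag flag = new VolatileFlag();
        Thread worker = new Thread(() -> {
            while (flag.isRunning()) {
                // busy work; each iteration re-reads 'running' from main memory
            }
        });
        worker.start();
        Thread.sleep(100);   // let the worker spin briefly
        flag.stop();         // visible to the worker because the field is volatile
        worker.join(1000);   // worker exits promptly once it sees the update
        System.out.println("worker alive: " + worker.isAlive());
    }
}
```

If the field were a plain (non-volatile) boolean, the JIT compiler would be free to hoist the read out of the loop and the worker could spin forever.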
However, volatile does not make compound operations atomic. If a member variable xyz marked as volatile is updated with a multi-step operation such as xyz++, the result can still be inconsistent. The reason is that the increment is really three steps (read, add, write) rather than a single step. So if T1 is executing xyz++ while T2 reads xyz, T2 may observe an intermediate value, and two concurrent increments can even lose an update.
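A sketch of this pitfall and one common fix (the class and method names are illustrative): the volatile counter's ++ is still a read-modify-write race, while wrapping the same operation in synchronized makes it atomic.

```java
public class VolatileCounter {
    private volatile int count = 0;

    // Not atomic: read, add, write are three separate steps
    // even though 'count' is volatile.
    public void unsafeIncrement() { count++; }

    // One common fix: make the read-modify-write mutually exclusive.
    public synchronized void safeIncrement() { count++; }

    public int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        VolatileCounter c = new VolatileCounter();
        Runnable task = () -> { for (int i = 0; i < 100_000; i++) c.safeIncrement(); };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        // With safeIncrement this is always 200000; with unsafeIncrement
        // it is frequently less, because concurrent increments get lost.
        System.out.println(c.get());
    }
}
```

java.util.concurrent.atomic.AtomicInteger is another standard way to get an atomic increment without a lock.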
A related concern applies to 64-bit types on 32-bit platforms. A non-volatile long or double occupies 64 bits, and the JVM is permitted to perform the write as two separate 32-bit writes. So if T1 updates a shared long xyz and T2 reads it mid-update, T2 can observe a torn, inconsistent value. Note, however, that the Java Language Specification (§17.7) guarantees that reads and writes of volatile long and double fields are atomic; marking such 64-bit variables volatile is precisely how this word-tearing is prevented.
Given the caveats above, volatile should be used with caution and only where it fits the use case.
Synchronized keyword
The synchronized keyword serves a different purpose in a concurrent environment than volatile. synchronized ensures mutual exclusion on a piece of code: only a single thread at a time may execute the code wrapped in a synchronized block or method.
In other words, synchronized acts like a guarded block, allowing only a single thread through at a time to prevent race conditions.
Synchronization works by acquiring a lock on an object's monitor. The threads accessing the guarded block either operate on this object directly or share it (for example, as a member variable).
Synchronization is re-entrant: once a thread has acquired an object's monitor and entered a synchronized block, it can enter further synchronized blocks guarded by the same monitor without blocking on itself.
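Re-entrancy can be sketched as follows (a hypothetical class; both methods lock the same object's monitor): the call from outer() into inner() re-acquires a monitor the thread already holds, so it does not deadlock.

```java
public class ReentrantDemo {
    public synchronized int outer() {
        // The thread already holds this object's monitor here;
        // calling inner() re-acquires it without deadlocking.
        return inner() + 1;
    }

    public synchronized int inner() {
        return 41;
    }

    public static void main(String[] args) {
        System.out.println(new ReentrantDemo().outer()); // prints 42
    }
}
```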
The synchronized keyword can acquire either the class monitor or an object monitor.
The class monitor is acquired when synchronized is used on static methods or static blocks. With this form of locking, only one thread at a time within that JVM instance can execute the guarded block, irrespective of which object the thread is operating on.
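A small sketch of class-level locking (the class and counter are illustrative): both methods are static synchronized, so they lock on ClassMonitorDemo.class, and at most one thread in the JVM can be inside either of them at a time.

```java
public class ClassMonitorDemo {
    private static int instanceCount = 0;

    // Acquires the monitor of ClassMonitorDemo.class: only one thread in the
    // whole JVM can run this method at a time, no matter which instance
    // (or no instance at all) the call came from.
    public static synchronized void register() {
        instanceCount++;
    }

    public static synchronized int getCount() {
        return instanceCount;
    }
}
```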
An object monitor is acquired when synchronized is used on non-static methods, or on a piece of code within a method, referred to as a synchronized block. One point to keep in mind: if there is no guarantee that the threads accessing the guarded block all operate on the same object, avoid using the keyword this to acquire the monitor. Instead, use an object that is shared among all the threads.
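This advice can be sketched as follows (Account and its field names are hypothetical): instead of synchronizing on this, the synchronized block acquires the monitor of a lock object that every thread shares.

```java
public class Account {
    // A lock object shared among all threads (and possibly all instances),
    // used instead of synchronizing on 'this'.
    private final Object sharedLock;
    private int balance = 0;

    public Account(Object sharedLock) {
        this.sharedLock = sharedLock;
    }

    public void deposit(int amount) {
        synchronized (sharedLock) {  // guard only the racy lines
            balance += amount;
        }
    }

    public int getBalance() {
        synchronized (sharedLock) {
            return balance;
        }
    }
}
```

Because the monitor belongs to the shared lock object rather than to any particular Account, threads spawned against different objects still exclude one another.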
It is usually better to synchronize only the lines of code that can actually produce a race condition, rather than marking the whole method as synchronized: the more code is synchronized, the longer threads wait.
Note: the synchronized and volatile keywords establish happens-before relationships, which are the crux of correct multi-threading.
Lock Interface
Lock implementations provide more extensive locking operations than can be obtained using synchronized methods and statements. They allow more flexible structuring, may have quite different properties, and may support multiple associated Condition objects.

Key differences between Lock and the synchronized keyword:
- A synchronized block must be fully contained within a single method, whereas the Lock API's lock() and unlock() calls can live in separate methods
- A synchronized block doesn't support fairness: any thread can acquire the lock once it is released, and no preference can be specified. With the Lock API we can request fairness via the fairness property, which ensures the longest-waiting thread is granted the lock
- A thread blocks if it can't enter a synchronized block. The Lock API provides the tryLock() method: the thread acquires the lock only if it is available and not held by another thread, reducing the time threads spend blocked waiting for the lock
- A thread in the "waiting" state to enter a synchronized block cannot be interrupted. The Lock API provides lockInterruptibly(), which allows the thread to be interrupted while it waits for the lock
The JDK provides Lock implementations such as ReentrantLock and ReentrantReadWriteLock.
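The points above can be sketched with ReentrantLock (the class and method names are illustrative): a fair lock, the lock()/try/finally idiom, and a non-blocking tryLock() path.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    // Passing true requests the fairness property: the longest-waiting
    // thread is granted the lock first.
    private final Lock lock = new ReentrantLock(true);
    private int value = 0;

    public void increment() {
        lock.lock();           // unlike synchronized, lock() and unlock()
        try {                  // could even live in separate methods
            value++;
        } finally {
            lock.unlock();     // always release in finally to avoid leaking the lock
        }
    }

    public boolean tryIncrement() {
        if (lock.tryLock()) {  // non-blocking: false if another thread holds the lock
            try {
                value++;
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false;          // caller can do something else instead of blocking
    }

    public int get() { return value; }
}
```

The try/finally around the critical section is the standard idiom: without it, an exception thrown while holding the lock would leave it permanently held.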