Sharing limited resources
You can think of a single-threaded program as one lonely entity moving around through your problem space and doing one thing at a time. Because there's only one entity, you never have to think about the problem of two entities trying to use the same resource at the same time, problems like two people trying to park in the same space, walk through a door at the same time, or even talk at the same time.
With multithreading, things aren't lonely anymore, but you now have the possibility of two or more threads trying to use the same limited resource at once. Colliding over a resource must be prevented, or else you'll have two threads trying to access the same bank account at the same time, print to the same printer, adjust the same valve, and so on.
Improperly accessing resources
Consider the following example in which the class "guarantees" that it will always deliver an even number when you call getValue( ). However, there's a second thread named "Watcher" that is constantly calling getValue( ) and checking to see if this value is truly even. This seems like a needless activity, since it appears obvious by looking at the code that the value will indeed be even. But that's where the surprise comes in. Here's the first version of the program:
//: c13:AlwaysEven.java
// Demonstrating thread collision over resources by
// reading an object in an unstable intermediate state.

public class AlwaysEven {
  private int i;
  public void next() { i++; i++; }
  public int getValue() { return i; }
  public static void main(String[] args) {
    final AlwaysEven ae = new AlwaysEven();
    new Thread("Watcher") {
      public void run() {
        while(true) {
          int val = ae.getValue();
          if(val % 2 != 0) {
            System.out.println(val);
            System.exit(0);
          }
        }
      }
    }.start();
    while(true)
      ae.next();
  }
} ///:~
In main( ), an AlwaysEven object is created; it must be final because it is accessed inside the anonymous inner class that is defined as a Thread. If the value read by that thread is not even, it prints the value (as proof that it has caught the object in an unstable state) and then exits the program.
This example shows a fundamental problem with using threads. You never know when a thread might be run. Imagine sitting at a table with a fork, about to spear the last piece of food on your plate, and as your fork reaches for it, the food suddenly vanishes (because your thread was suspended and another thread came in and stole the food). That's the problem that you're dealing with when writing concurrent programs.
Sometimes you don't care if a resource is being accessed at the same time you're trying to use it (the food is on some other plate). But for multithreading to work, you need some way to prevent two threads from accessing the same resource, at least during critical periods.
Preventing this kind of collision is simply a matter of putting a lock on a resource when one thread is using it. The first thread that accesses a resource locks it, and then the other threads cannot access that resource until it is unlocked, at which time another thread locks and uses it, etc. If the front seat of the car is the limited resource, the child who shouts "Dibs!" asserts the lock.
A resource testing framework
Before going on, let's try to simplify things a bit by creating a little framework for performing tests on these types of threading examples. We can accomplish this by separating out the common code that might appear across multiple examples. First, note that the "watcher" thread is actually watching for a violated invariant in a particular object. That is, the object is supposed to preserve rules about its internal state, and if you can see the object from outside in an invalid intermediate state, then the invariant has been violated from the standpoint of the client (this is not to say that the object can never exist in the invalid intermediate state, just that it should not be visible to the client in such a state). Thus, we want to be able to detect that the invariant is violated, and also know what the violation value is. To get both of these values from one method call, we combine them in a tagging interface that exists only to provide a meaningful name in the code:
//: c13:InvariantState.java
// Messenger carrying invariant data
public interface InvariantState {} ///:~
In this scheme, the information about success or failure is encoded in the class name and type to make the result more readable. The class indicating success is:
//: c13:InvariantOK.java
// Indicates that the invariant test succeeded
public class InvariantOK implements InvariantState {} ///:~
To indicate failure, the InvariantFailure object will carry an object with information about what caused the failure, typically so that it can be displayed:
//: c13:InvariantFailure.java
// Indicates that the invariant test failed
public class InvariantFailure implements InvariantState {
  public Object value;
  public InvariantFailure(Object value) {
    this.value = value;
  }
} ///:~
Now we can define an interface that must be implemented by any class that wishes to have its invariance tested:
//: c13:Invariant.java
public interface Invariant {
  InvariantState invariant();
} ///:~
Before creating the generic "watcher" thread, note that some of the examples in this chapter will not behave as expected on all platforms. Many of the examples here attempt to show violations of single-threaded behavior when multiple threads are present, and this may not always happen.2 Alternatively, an example may attempt to show that the violation does not occur by attempting (and failing) to demonstrate the violation. In these cases, we'll need a way to stop the program after a few seconds. The following class does this by subclassing the standard library Timer class:
//: c13:Timeout.java
// Set a time limit on the execution of a program
import java.util.*;

public class Timeout extends Timer {
  public Timeout(int delay, final String msg) {
    super(true); // Daemon thread
    schedule(new TimerTask() {
      public void run() {
        System.out.println(msg);
        System.exit(0);
      }
    }, delay);
  }
} ///:~
The delay is in milliseconds, and the message will be printed if the timeout expires. Note that by calling super(true), this is created as a daemon thread so that if your program completes in some other way, this thread will not prevent it from exiting. The Timer.schedule( ) method is given a TimerTask subclass (created here as an anonymous inner class) whose run( ) is executed after the delay given by the second schedule( ) argument. Using Timer is generally simpler and clearer than writing the code directly with an explicit sleep( ). In addition, Timer is designed to scale to large numbers of concurrently scheduled tasks (in the thousands), so it can be a very useful tool.
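For example, a small hypothetical test program (not one of the chapter's listings) could use the Timeout class above to shut itself down after two seconds:

public class TimeoutDemo {
  public static void main(String[] args) {
    // Print the message and exit via System.exit() after 2 seconds:
    new Timeout(2000, "Stopped by Timeout");
    while(true)
      ; // Endless "work" that would otherwise never finish
  }
}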
Now we can use the Invariant interface and the Timeout class in the InvariantWatcher thread:
//: c13:InvariantWatcher.java
// Repeatedly checks to ensure invariant is not violated
public class InvariantWatcher extends Thread {
  private Invariant invariant;
  public InvariantWatcher(Invariant invariant) {
    this.invariant = invariant;
    setDaemon(true);
    start();
  }
  // Stop everything after awhile:
  public InvariantWatcher(Invariant invariant, final int timeOut) {
    this(invariant);
    new Timeout(timeOut, "Timed out without violating invariant");
  }
  public void run() {
    while(true) {
      InvariantState state = invariant.invariant();
      if(state instanceof InvariantFailure) {
        System.out.println("Invariant violated: " +
          ((InvariantFailure)state).value);
        System.exit(0);
      }
    }
  }
} ///:~
The constructor captures a reference to the Invariant object to be tested, and starts the thread. The second constructor calls the first constructor, then creates a Timeout that stops everything after a desired delay; this is used in situations where the program may not exit by violating an invariant. In run( ), the current InvariantState is captured and tested, and if it fails, the value is printed. Note that we cannot throw an exception inside this thread, because that would only terminate the thread, not the program.
Now AlwaysEven.java can be rewritten using the framework:
//: c13:EvenGenerator.java
// AlwaysEven.java using the invariance tester
public class EvenGenerator implements Invariant {
  private int i;
  public void next() { i++; i++; }
  public int getValue() { return i; }
  public InvariantState invariant() {
    int val = i; // Capture it in case it changes
    if(val % 2 == 0)
      return new InvariantOK();
    else
      return new InvariantFailure(new Integer(val));
  }
  public static void main(String[] args) {
    EvenGenerator gen = new EvenGenerator();
    new InvariantWatcher(gen);
    while(true)
      gen.next();
  }
} ///:~
When defining the invariant( ) method, you must capture all the values of interest into local variables. This way, you can return the actual value you have tested, not one that may have been changed (by another thread) in the meantime.
In this case, the problem is not that the object goes through a state that violates invariance, but that methods can be called by threads while the object is in that intermediate unstable state.
Colliding over resources
The worst thing that happens with EvenGenerator is that a client thread might see it in an unstable intermediate state. The object's internal consistency is maintained, however, and it eventually becomes visible in a good state. But if two threads are actually modifying an object, the contention over shared resources is much worse, because the object can be put into an incorrect state.
Consider the simple concept of a semaphore, which is a flag object used for communication between threads. If the semaphore's value is zero, then whatever it is monitoring is available, but if the value is nonzero, then the monitored entity is unavailable, and the thread must wait for it. When it's available, the thread increments the semaphore and then goes ahead and uses the monitored entity. Because incrementing and decrementing are atomic operations (that is, they cannot be interrupted), the semaphore keeps two threads from using the same entity at the same time.
If the semaphore is going to properly guard the entity that it is monitoring, then it must never get into an unstable state. Here's a simple version of the semaphore idea:
//: c13:Semaphore.java
// A simple threading flag
public class Semaphore implements Invariant {
  private volatile int semaphore = 0;
  public boolean available() { return semaphore == 0; }
  public void acquire() { ++semaphore; }
  public void release() { --semaphore; }
  public InvariantState invariant() {
    int val = semaphore;
    if(val == 0 || val == 1)
      return new InvariantOK();
    else
      return new InvariantFailure(new Integer(val));
  }
} ///:~
The core part of the class is straightforward, consisting of available( ), acquire( ), and release( ). Since a thread should check for availability before acquiring, the value of semaphore should never be other than one or zero, and this is tested by invariant( ).
But look what happens when Semaphore is tested for thread consistency:
//: c13:SemaphoreTester.java
// Colliding over shared resources
public class SemaphoreTester extends Thread {
  private volatile Semaphore semaphore;
  public SemaphoreTester(Semaphore semaphore) {
    this.semaphore = semaphore;
    setDaemon(true);
    start();
  }
  public void run() {
    while(true)
      if(semaphore.available()) {
        yield(); // Makes it fail faster
        semaphore.acquire();
        yield();
        semaphore.release();
        yield();
      }
  }
  public static void main(String[] args) throws Exception {
    Semaphore sem = new Semaphore();
    new SemaphoreTester(sem);
    new SemaphoreTester(sem);
    new InvariantWatcher(sem).join();
  }
} ///:~
The SemaphoreTester creates a thread that continuously tests to see if a Semaphore object is available, and if so acquires and releases it. Note that the semaphore field is volatile to make sure that the compiler doesn't optimize away any reads of that value.
In main( ), two SemaphoreTester threads are created, and you'll see that in short order the invariant is violated. This happens because one thread might get a true result from calling available( ), but by the time that thread calls acquire( ), the other thread may have already called acquire( ) and incremented the semaphore field. The InvariantWatcher may see the field with too high a value, or possibly see it after both threads have called release( ) and decremented it to a negative value. Note that InvariantWatcher join( )s with the main thread to keep the program running until there is a failure.
On my machine, I discovered that the inclusion of yield( ) caused failure to occur much faster, but this will vary with operating systems and JVM implementations. You should experiment with taking the yield( ) statements out; the failure might take a very long time to occur, which demonstrates how difficult it can be to detect a flaw in your program when you're writing multithreaded code.
This example emphasizes the risk of concurrent programming: if a class this simple can produce problems, you can never trust any assumptions about concurrency.
Resolving shared resource contention
To solve the problem of thread collision, virtually all multithreading schemes serialize access to shared resources. This means that only one thread at a time is allowed to access the shared resource. This is ordinarily accomplished by putting a locked clause around a piece of code so that only one thread at a time may pass through that piece of code. Because this locked clause produces mutual exclusion, a common name for such a mechanism is mutex.
Consider the bathroom in your house; multiple people (threads) may each want to have exclusive use of the bathroom (the shared resource). To access the bathroom, a person knocks on the door to see if it's available. If so, they enter and lock the door. Any other thread that wants to use the bathroom is "blocked" from using it, so that thread waits at the door until the bathroom is available.
The analogy breaks down a bit when the bathroom is released and it comes time to give access to another thread. There isn't actually a line of people and we don't know for sure who gets the bathroom next, because the thread scheduler isn't deterministic that way. Instead, it's as if there is a group of blocked threads milling about in front of the bathroom, and when the thread that has locked the bathroom unlocks it and emerges, the one that happens to be nearest the door at the moment goes in. As noted earlier, suggestions can be made to the thread scheduler via yield( ) and setPriority( ), but these suggestions may not have much of an effect depending on your platform and JVM implementation.
Java has built-in support to prevent collisions over resources in the form of the synchronized keyword. This works much like the Semaphore class was supposed to: When a thread wishes to execute a piece of code guarded by the synchronized keyword, it checks to see if the semaphore is available, then acquires it, executes the code, and releases it. However, synchronized is built into the language, so it's guaranteed to always work, unlike the Semaphore class.
The shared resource is typically just a piece of memory in the form of an object, but may also be a file, I/O port, or something like a printer. To control access to a shared resource, you first put it inside an object. Then any method that accesses that resource can be made synchronized. This means that if a thread is inside one of the synchronized methods, all other threads are blocked from entering any of the synchronized methods of the class until the first thread returns from its call.
Since you typically make the data elements of a class private and access that memory only through methods, you can prevent collisions by making methods synchronized. Here is how you declare synchronized methods:
synchronized void f() { /* ... */ }
synchronized void g() { /* ... */ }
Each object contains a single lock (also referred to as a monitor) that is automatically part of the object (you don't have to write any special code). When you call any synchronized method, that object is locked and no other synchronized method of that object can be called until the first one finishes and releases the lock. In the preceding example, if f( ) is called for an object, g( ) cannot be called for the same object until f( ) is completed and releases the lock. Thus, there is a single lock that is shared by all the synchronized methods of a particular object, and this lock prevents common memory from being written by more than one thread at a time.
One thread may acquire an object's lock multiple times. This happens if one method calls a second method on the same object, which in turn calls another method on the same object, etc. The JVM keeps track of the number of times the object has been locked. If the object is unlocked, it has a count of zero. As a thread acquires the lock for the first time, the count goes to one. Each time the thread acquires a lock on the same object, the count is incremented. Naturally, multiple lock acquisition is only allowed for the thread that acquired the lock in the first place. Each time the thread leaves a synchronized method, the count is decremented, until the count goes to zero, releasing the lock entirely for use by other threads.
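As a rough sketch of this reentrancy (the class name here is hypothetical, not one of the chapter's listings), a synchronized method can call another synchronized method on the same object without blocking itself, because the thread already holds the object's lock:

public class Reentrant {
  public synchronized void outer() {
    inner(); // The lock count goes from one to two; no deadlock
  }
  public synchronized void inner() {
    System.out.println("Lock acquired a second time by the same thread");
  }
  public static void main(String[] args) {
    new Reentrant().outer(); // Completes normally
  }
}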
There's also a single lock per class (as part of the Class object for the class), so that synchronized static methods can lock each other out from simultaneous access of static data on a class-wide basis.
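To sketch that idea (again with a hypothetical class), synchronized static methods share the lock on the Class object, so only one thread at a time can be inside any of them:

public class StaticCounter {
  private static int count = 0;
  // Both methods lock StaticCounter.class, so the increment and
  // the read cannot interleave with each other:
  public static synchronized void increment() { count++; }
  public static synchronized int get() { return count; }
}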
Synchronizing the EvenGenerator
By adding synchronized to EvenGenerator.java, we can prevent the undesirable thread access:
//: c13:SynchronizedEvenGenerator.java
// Using "synchronized" to prevent thread collisions
public class SynchronizedEvenGenerator implements Invariant {
  private int i;
  public synchronized void next() { i++; i++; }
  public synchronized int getValue() { return i; }
  // Not synchronized so it can run at
  // any time and thus be a genuine test:
  public InvariantState invariant() {
    int val = getValue();
    if(val % 2 == 0)
      return new InvariantOK();
    else
      return new InvariantFailure(new Integer(val));
  }
  public static void main(String[] args) {
    SynchronizedEvenGenerator gen =
      new SynchronizedEvenGenerator();
    new InvariantWatcher(gen, 4000); // 4-second timeout
    while(true)
      gen.next();
  }
} ///:~
You'll notice that both next( ) and getValue( ) are synchronized. If you synchronize only one of the methods, then the other is free to ignore the object lock and can be called with impunity. This is an important point: Every method that accesses a critical shared resource must be synchronized or it won't work right. On the other hand, invariant( ) is not synchronized because it is doing the testing, and we want it to be called at any time so that it produces a true test of the object.
Atomic operations
A common piece of lore often repeated in Java threading discussions is that "atomic operations do not need to be synchronized." An atomic operation is one that cannot be interrupted by the thread scheduler; if the operation begins, then it will run to completion before the possibility of a context switch (switching execution to another thread).
The atomic operations commonly mentioned in this lore include simple assignment and returning a value when the variable in question is a primitive type that is not a long or a double. The latter types are excluded because they are larger than the rest of the types, and the JVM is thus not required to perform reads and assignments as single atomic operations (a JVM may choose to do so anyway, but there's no guarantee). However, you do get atomicity if you use the volatile keyword with long or double.
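As a minimal sketch of that last point (a hypothetical class, relying only on the guarantee just described), declaring a long as volatile makes each individual read and write atomic, although compound operations such as increment still are not:

public class VolatileLong {
  private volatile long value; // Single reads and writes are atomic
  public void set(long v) { value = v; } // One atomic write
  public long get() { return value; }    // One atomic read
  // Note that value++ would still not be atomic; it is a read
  // followed by a separate write.
}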
If you were to blindly apply the idea of atomicity to SynchronizedEvenGenerator.java, you would notice that
public synchronized int getValue() { return i; }
fits the description. But try removing synchronized, and the test will fail, because even though return i is indeed an atomic operation, removing synchronized allows the value to be read while the object is in an unstable intermediate state. You must genuinely understand what you're doing before you try to apply optimizations like this. There are no easily applicable rules that work.
As a second example, consider something even simpler: a class that produces serial numbers.3 Each time nextSerialNumber( ) is called, it must return a unique value to the caller:
//: c13:SerialNumberGenerator.java
public class SerialNumberGenerator {
  private static volatile int serialNumber = 0;
  public static int nextSerialNumber() {
    return serialNumber++;
  }
} ///:~
SerialNumberGenerator is about as simple a class as you can imagine, and if you're coming from C++ or some other low-level background, you would expect the increment to be an atomic operation, because increment is usually implemented as a microprocessor instruction. However, in the JVM an increment is not atomic and involves both a read and a write, so there's room for threading problems even in such a simple operation.
The serialNumber field is volatile because it is possible for each thread to have a local stack and maintain copies of some variables there. If you define a variable as volatile, it tells the compiler not to do any optimizations that would remove reads and writes that keep the field in exact synchronization with the local data in the threads.
To test this, we need a set that doesn't run out of memory, in case it takes a long time to detect a problem. The CircularSet shown here reuses the memory used to store ints, with the assumption that by the time you wrap around, the possibility of a collision with the overwritten values is minimal. The add( ) and contains( ) methods are synchronized to prevent thread collisions:
//: c13:SerialNumberChecker.java
// Operations that may seem safe are not,
// when threads are present.

// Reuses storage so we don't run out of memory:
class CircularSet {
  private int[] array;
  private int len;
  private int index = 0;
  public CircularSet(int size) {
    array = new int[size];
    len = size;
    // Initialize to a value not produced
    // by the SerialNumberGenerator:
    for(int i = 0; i < size; i++)
      array[i] = -1;
  }
  public synchronized void add(int i) {
    array[index] = i;
    // Wrap index and write over old elements:
    index = ++index % len;
  }
  public synchronized boolean contains(int val) {
    for(int i = 0; i < len; i++)
      if(array[i] == val) return true;
    return false;
  }
}

public class SerialNumberChecker {
  private static CircularSet serials = new CircularSet(1000);
  static class SerialChecker extends Thread {
    SerialChecker() { start(); }
    public void run() {
      while(true) {
        int serial =
          SerialNumberGenerator.nextSerialNumber();
        if(serials.contains(serial)) {
          System.out.println("Duplicate: " + serial);
          System.exit(0);
        }
        serials.add(serial);
      }
    }
  }
  public static void main(String[] args) {
    for(int i = 0; i < 10; i++)
      new SerialChecker();
    // Stop after 4 seconds:
    new Timeout(4000, "No duplicates detected");
  }
} ///:~
SerialNumberChecker contains a static CircularSet that contains all the serial numbers that have been extracted, and a nested Thread that gets serial numbers and ensures that they are unique. By creating multiple threads to contend over serial numbers, you'll discover that the threads get a duplicate serial number reasonably soon (note that this program may not indicate a collision on your machine, but it has successfully detected collisions on a multiprocessor machine). To solve the problem, add the synchronized keyword to nextSerialNumber( ).
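A sketch of that fix (written here as a separate, hypothetically named class rather than a modification of the original file) simply wraps the read-increment-write sequence in the class lock:

public class SynchronizedSerialNumberGenerator {
  private static volatile int serialNumber = 0;
  // The read and the write now happen as one locked step, so two
  // threads can no longer fetch the same value:
  public static synchronized int nextSerialNumber() {
    return serialNumber++;
  }
}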
The atomic operations that are supposed to be safe are reading and assignment of primitives. However, as seen in EvenGenerator.java, it's still easily possible to use an atomic operation that accesses your object while it's in an unstable intermediate state, so you cannot make any assumptions.
On top of this, the atomic operations are not guaranteed to work with long and double (although some JVM implementations do guarantee atomicity for long and double operations, you won't be writing portable code if you depend on this).
It's safest to use the following guidelines:
If you need to synchronize one method in a class, synchronize all of them. It's often difficult to tell for sure if a method will be negatively affected if you leave synchronization out.
Be extremely careful when removing synchronization from methods. The typical reason to do this is for performance, but in JDK 1.3 and 1.4 the overhead of synchronized has been greatly reduced. In addition, you should only do this after using a profiler to determine that synchronized is indeed the bottleneck.
Fixing Semaphore
Now consider Semaphore.java. It would seem that we should be able to repair this by synchronizing the three class methods, like this:
//: c13:SynchronizedSemaphore.java
// Colliding over shared resources
public class SynchronizedSemaphore extends Semaphore {
  private volatile int semaphore = 0;
  public synchronized boolean available() {
    return semaphore == 0;
  }
  public synchronized void acquire() { ++semaphore; }
  public synchronized void release() { --semaphore; }
  public InvariantState invariant() {
    int val = semaphore;
    if(val == 0 || val == 1)
      return new InvariantOK();
    else
      return new InvariantFailure(new Integer(val));
  }
  public static void main(String[] args) throws Exception {
    SynchronizedSemaphore sem = new SynchronizedSemaphore();
    new SemaphoreTester(sem);
    new SemaphoreTester(sem);
    new InvariantWatcher(sem).join();
  }
} ///:~
This looks rather odd at first: SynchronizedSemaphore is inherited from Semaphore, and yet all the overridden methods are synchronized while the base-class versions aren't. Java doesn't allow you to change the method signature during overriding, and yet it doesn't complain about this. That's because the synchronized keyword is not part of the method signature, so you can add it in and it doesn't limit overriding.
The reason for inheriting from Semaphore is to reuse the SemaphoreTester class. When you run the program you'll see that it still causes an InvariantFailure.
Why does this fail? By the time a thread detects that the Semaphore is available because available( ) returns true, it has released the lock on the object. Another thread can dash in and increment the semaphore value before the first thread does. The first thread still assumes the Semaphore object is available and so goes ahead and blindly enters the acquire( ) method, putting the object into an unstable state. This is just one more lesson about rule zero of concurrent programming: Never make any assumptions.
The only solution to this problem is to make the test for availability and the acquisition a single atomic operation, which is exactly what the synchronized keyword provides in conjunction with the lock on an object. That is, Java's object lock and the synchronized keyword are a built-in semaphore mechanism, so you don't need to create your own.
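Here is a minimal sketch of that idea (a hypothetical class, not the chapter's Semaphore): because the test and the state change live inside a single synchronized method, no other thread can slip in between them:

public class AtomicSemaphore {
  private int semaphore = 0;
  // Test and acquire happen under the same lock, as one atomic step:
  public synchronized boolean tryAcquire() {
    if(semaphore == 0) {
      ++semaphore;
      return true;
    }
    return false; // Unavailable; the caller must try again later
  }
  public synchronized void release() { --semaphore; }
}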
Critical sections
Sometimes, you only want to prevent multiple thread access to part of the code inside a method instead of the entire method. The section of code you want to isolate this way is called a critical section and is also created using the synchronized keyword. Here, synchronized is used to specify the object whose lock is being used to synchronize the enclosed code:
synchronized(syncObject) {
  // This code can be accessed
  // by only one thread at a time
}
This is also called a synchronized block; before it can be entered, the lock must be acquired on syncObject. If some other thread already has this lock, then the critical section cannot be entered until the lock is given up.
The following example compares both approaches to synchronization by showing how the time available for other threads to access an object is significantly increased by using a synchronized block instead of synchronizing an entire method. In addition, it shows how an unprotected class can be used in a multithreaded situation if it is controlled and protected by another class:
//: c13:CriticalSection.java
// Synchronizing blocks instead of entire methods. Also
// demonstrates protection of a non-thread-safe class
// with a thread-safe one.
import java.util.*;

class Pair { // Not thread-safe
  private int x, y;
  public Pair(int x, int y) {
    this.x = x;
    this.y = y;
  }
  public Pair() { this(0, 0); }
  public int getX() { return x; }
  public int getY() { return y; }
  public void incrementX() { x++; }
  public void incrementY() { y++; }
  public String toString() {
    return "x: " + x + ", y: " + y;
  }
  public class PairValuesNotEqualException extends RuntimeException {
    public PairValuesNotEqualException() {
      super("Pair values not equal: " + Pair.this);
    }
  }
  // Arbitrary invariant -- both variables must be equal:
  public void checkState() {
    if(x != y)
      throw new PairValuesNotEqualException();
  }
}

// Protect a Pair inside a thread-safe class:
abstract class PairManager {
  protected Pair p = new Pair();
  private List storage = new ArrayList();
  public synchronized Pair getPair() {
    // Make a copy to keep the original safe:
    return new Pair(p.getX(), p.getY());
  }
  protected void store() { storage.add(getPair()); }
  // A "template method":
  public abstract void doTask();
}

// Synchronize the entire method:
class PairManager1 extends PairManager {
  public synchronized void doTask() {
    p.incrementX();
    p.incrementY();
    store();
  }
}

// Use a critical section:
class PairManager2 extends PairManager {
  public void doTask() {
    synchronized(this) {
      p.incrementX();
      p.incrementY();
    }
    store();
  }
}

class PairManipulator extends Thread {
  private PairManager pm;
  private int checkCounter = 0;
  private class PairChecker extends Thread {
    PairChecker() { start(); }
    public void run() {
      while(true) {
        checkCounter++;
        pm.getPair().checkState();
      }
    }
  }
  public PairManipulator(PairManager pm) {
    this.pm = pm;
    start();
    new PairChecker();
  }
  public void run() {
    while(true) {
      pm.doTask();
    }
  }
  public String toString() {
    return "Pair: " + pm.getPair() +
      " checkCounter = " + checkCounter;
  }
}

public class CriticalSection {
  public static void main(String[] args) {
    // Test the two different approaches:
    final PairManipulator
      pm1 = new PairManipulator(new PairManager1()),
      pm2 = new PairManipulator(new PairManager2());
    new Timer(true).schedule(new TimerTask() {
      public void run() {
        System.out.println("pm1: " + pm1);
        System.out.println("pm2: " + pm2);
        System.exit(0);
      }
    }, 500); // run() after 500 milliseconds
  }
} ///:~
As noted, Pair is not thread-safe because its invariant (admittedly arbitrary) requires that both variables maintain the same values. In addition, as seen earlier in this chapter, the increment operations are not thread-safe, and because none of the methods are synchronized, you can't trust a Pair object to stay uncorrupted in a threaded program.
The PairManager class holds a Pair object and controls all access to it. Note that the only public methods are getPair( ), which is synchronized, and the abstract doTask( ). Synchronization for this method will be handled when it is implemented.
The structure of PairManager, where some of the functionality is implemented in the base class with one or more abstract methods defined in derived classes, is called a Template Method in Design Patterns parlance.4 Design patterns allow you to encapsulate change in your code; here, the part that is changing is the template method doTask( ). In PairManager1 the entire doTask( ) is synchronized, but in PairManager2 only part of doTask( ) is synchronized by using a synchronized block. Note that the synchronized keyword is not part of the method signature and thus may be added during overriding.
PairManager2 is observing, in effect, that store( ) is a protected method and thus is not available to the general client, but only to subclasses. Thus, it doesn't necessarily need to be guarded inside a synchronized method, and is instead placed outside of the synchronized block.
A synchronized block must be given an object to synchronize upon, and usually the most sensible object to use is just the current object that the method is being called for: synchronized(this), which is the approach taken in PairManager2. That way, when the lock is acquired for the synchronized block, other synchronized methods in the object cannot be called. So the effect is that of simply reducing the scope of synchronization.
Sometimes this isn't what you want, in which case you can create a separate object and synchronize on that. The following example demonstrates that two threads can enter an object when the methods in that object synchronize on different locks:
//: c13:SyncObject.java
// Synchronizing on another object
import com.bruceeckel.simpletest.*;

class DualSynch {
  private Object syncObject = new Object();
  public synchronized void f() {
    System.out.println("Inside f()");
    // Doesn't release lock:
    try {
      Thread.sleep(500);
    } catch(InterruptedException e) {
      throw new RuntimeException(e);
    }
    System.out.println("Leaving f()");
  }
  public void g() {
    synchronized(syncObject) {
      System.out.println("Inside g()");
      try {
        Thread.sleep(500);
      } catch(InterruptedException e) {
        throw new RuntimeException(e);
      }
      System.out.println("Leaving g()");
    }
  }
}

public class SyncObject {
  private static Test monitor = new Test();
  public static void main(String[] args) {
    final DualSynch ds = new DualSynch();
    new Thread() {
      public void run() { ds.f(); }
    }.start();
    ds.g();
    monitor.expect(new String[] {
      "Inside g()",
      "Inside f()",
      "Leaving g()",
      "Leaving f()"
    }, Test.WAIT + Test.IGNORE_ORDER);
  }
} ///:~
The DualSynch method f( ) synchronizes on this (by synchronizing the entire method), and g( ) has a synchronized block that synchronizes on syncObject. Thus, the two synchronizations are independent. This is demonstrated in main( ) by creating a Thread that calls f( ). The main( ) thread is used to call g( ). You can see from the output that both methods are running at the same time, so neither one is blocked by the synchronization of the other.
Returning to CriticalSection.java, PairManipulator is created to test the two different types of PairManager by running doTask( ) in one thread and an instance of the inner class PairChecker in the other. To trace how often it is able to run the test, PairChecker increments checkCounter every time it is successful. In main( ), two PairManipulator objects are created and allowed to run for a while. When the Timer runs out, it executes its run( ) method, which displays the results of each PairManipulator and exits. When you run the program, you should see something like this:
pm1: Pair: x: 58892, y: 58892 checkCounter = 44974
pm2: Pair: x: 73153, y: 73153 checkCounter = 100535
Although you will probably see a lot of variation from one run to the next, in general you will see that PairManager1.doTask( ) does not allow the PairChecker nearly as much access as PairManager2.doTask( ), which has the synchronized block and thus provides more unlocked time. This is typically the reason that you want to use a synchronized block instead of synchronizing the whole method: to allow other threads more access (as long as it is safe to do so).
Of course, all synchronization depends on programmer diligence: Every piece of code that can access a shared resource must be wrapped in an appropriate synchronized block.