Analyzing Program Performance With Sun WorkShop

Lock Inversions

A function is said to invert a lock if the lock is already held when the function is called, and the function releases (and typically later reacquires) that lock, as in:


void foo() {
	pthread_mutex_unlock(&mtx);	/* release lock held by caller */
	...
	pthread_mutex_lock(&mtx);	/* reacquire before returning */
}

Lock inversions are a potential source of insidious race conditions, since observations made under the protection of a lock may be invalidated by the inversion. In the following example, if foo() inverts mtx, then upon its return zort_list may be NULL (another thread may have emptied the list while the lock was dropped):


ZORT* zort_list;
	/* VARIABLES PROTECTED BY mtx: zort_list */

void f() {
	pthread_mutex_lock(&mtx);
	if (zort_list == NULL) {	/* trying to be careful here */
		pthread_mutex_unlock(&mtx);
		return;
	}
	foo();
	zort_list->count++;	/* but zort_list may be NULL here!! */
	pthread_mutex_unlock(&mtx);
}
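
One way to defend against an inversion is to re-check the invariant after any call that may have dropped the lock. The following is a minimal, self-contained sketch of that pattern (the ZORT type, foo(), and the single-threaded main() driver are illustrative assumptions, not part of lock_lint):


#include <assert.h>
#include <pthread.h>
#include <stdio.h>

typedef struct zort { int count; } ZORT;

static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static ZORT *zort_list;		/* VARIABLES PROTECTED BY mtx: zort_list */

/* foo() inverts mtx: another thread may run while the lock is dropped */
static void foo(void) {
	pthread_mutex_unlock(&mtx);
	/* ... window in which another thread may empty zort_list ... */
	pthread_mutex_lock(&mtx);
}

void f(void) {
	pthread_mutex_lock(&mtx);
	if (zort_list == NULL) {
		pthread_mutex_unlock(&mtx);
		return;
	}
	foo();
	/* foo() inverted mtx, so re-check the invariant before using it */
	if (zort_list != NULL)
		zort_list->count++;
	pthread_mutex_unlock(&mtx);
}

int main(void) {
	ZORT z = { 0 };
	zort_list = &z;
	f();
	assert(z.count == 1);
	printf("count = %d\n", z.count);	/* prints: count = 1 */
	return 0;
}


Re-checking is only safe if the code can tolerate the observation changing; when it cannot, the better fix is to restructure so the lock is never dropped mid-critical-section.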

Lock inversions may be found using the commands:


% lock_lint funcs [directly] inverting lock ...
% lock_lint locks [directly] inverted by func ...

An interesting question to ask is "Which functions acquire locks that then get inverted by calls they make?" That is, which functions are in danger of acting on stale data? The following (Bourne shell) code answers this question:


$ LOCKS=`lock_lint locks`
$ lock_lint funcs calling `lock_lint funcs inverting $LOCKS`

The following gives similar output, separated by lock:


for lock in `lock_lint locks`
do
	echo "functions endangered by inversions of lock $lock"
	lock_lint funcs calling `lock_lint funcs inverting $lock`
done