STREAMS Programming Guide

Chapter 12 MultiThreaded STREAMS

This chapter describes how to multithread a STREAMS driver or module. It covers the necessary conversion topics so that new and existing STREAMS modules and drivers run in the multithreaded kernel. It describes STREAMS-specific multithreading issues and techniques. Refer also to Writing Device Drivers.

MultiThreaded (MT) STREAMS Overview

The SunOS 5 operating system is fully multithreaded, able to make effective use of the available parallelism of a symmetric shared-memory multiprocessor computer. All kernel subsystems are multithreaded: scheduler, virtual memory, file systems, block/character/STREAMS I/O, networking protocols, and device drivers.

MT STREAMS requires you to use some new concepts and terminology. These concepts apply not only to STREAMS drivers, but to all device drivers in the SunOS 5 system. For a more complete description of these terms, see Writing Device Drivers. Additionally, see Chapter 1, Overview of STREAMS of this guide for definitions and Chapter 8, Messages - Kernel Level for elements of MT drivers.

You need to understand the following terms and ideas.

Thread

Sequence of instructions executed within context of a process

Lock

Mechanism to restrict access to data structures

Single Threaded

Restricting access to a single thread

Multithreaded

Allowing two or more threads access

Multiprocessing

Two or more CPUs concurrently executing the OS

Concurrency

Simultaneous execution

Preemption

Suspending execution for the next thread to run

Monitor

Portion of code that is single threaded

Mutual Exclusion

Exclusive access to a data element by a single thread at one time

Condition Variables

Kernel event synchronization primitives

Counting Semaphores

Memory based synchronization mechanism

Readers/Writer Locks

Data lock allowing one writer or many readers at one time

Callback

On specific event, call module function

MT STREAMS Framework

The STREAMS framework consists of the Stream head, STREAMS utility routines, and documented STREAMS data structures. The STREAMS framework allows multiple kernel threads to concurrently enter and execute within each module. Multiple threads can be actively executing in the open, close, put, and service procedures of each queue within the system.

The first goal of the SunOS 5 system is to preserve the interface and flavor of STREAMS and to shield module code as much as possible from the impact of migrating to the multithreaded kernel. Most of the locking is hidden from the programmer and performed by the STREAMS kernel framework. As long as module code uses the standard, documented programmatic interfaces to shared kernel data structures (such as queue_t, mblk_t, and dblk_t), it does not have to explicitly lock these framework data structures.

The second goal is to make it simple to write MT SAFE modules. The framework accomplishes this by providing the MT STREAMS perimeter mechanisms for controlling and restricting the concurrency in a STREAMS module. See the section "MT SAFE Modules".

The DDI/DKI entry points (open, close, put, and service procedures) plus certain callback procedures (scheduled with qtimeout, qbufcall, or qwriter) are synchronous entry points. All other entry points into a module are asynchronous. Examples of the latter are hardware interrupt routines, timeout, bufcall, and esballoc callback routines.

STREAMS Framework Integrity

The STREAMS framework guarantees the integrity of the STREAMS data structures, such as queue_t, mblk_t, and dblk_t. This assumes that a module conforms to the DDI/DKI and does not directly access global operating system data structures or facilities not described within the Driver-Kernel Interface.

The q_next and q_ptr fields of the queue_t structure are not modified by the system while a thread is actively executing within a synchronous entry point. The q_next field of the queue_t structure can change while a thread is executing within an asynchronous entry point.

As in previous Solaris system releases, a module must not call another module's put or service procedures directly. The DDI/DKI routines putnext(9F), put(9F), and others in Section 9F must be used to pass a message to another queue. Calling another module's routines directly circumvents the design of the MT STREAMS framework and can yield unknown results.

When making your module MT SAFE, the integrity of private module data structures must be ensured by the module itself. Knowing what the framework supports is critical in deciding what you must provide. The integrity of private module data structures can be maintained by either using the MT STREAMS perimeters to control the concurrency in the module, by using module private locks, or by a combination of the two.

Message Ordering

The STREAMS framework guarantees the ordering of messages along a stream if all the modules in the stream preserve message ordering internally. This ordering guarantee only applies to messages that are sent along the same stream and produced by the same source.

The STREAMS framework does not guarantee that a message has been seen by the next put procedure when putnext(9F) or qreply(9F) returns.

MT Configurations

A module or a driver can be either MT SAFE or MT UNSAFE. Beginning with the release of the Solaris 7 system, no MT UNSAFE module or driver will be supported.

MT SAFE Modules

For MT SAFE mode, use MT STREAMS perimeters to restrict the concurrency in a module or driver to:

It is easiest to initially configure your module to be per-module single threaded, and then increase the level of concurrency as needed. "Sample Multithreaded Device Driver" provides a complete example of using a per-module perimeter, and "Sample Multithreaded Module with Outer Perimeter" provides a complete example with a higher level of concurrency.

MT SAFE modules can use different MT STREAMS perimeters to restrict the concurrency in the module to a concurrency that is natural given the data structures that the module contains, thereby removing the need for module private locks. A module that requires unrestricted concurrency can be configured to have no perimeters. Such modules have to use explicit locking primitives to protect their data structures. While such modules can exploit the maximum level of concurrency allowed by the underlying hardware platform, they are more complex to develop and support. See "MT SAFE Modules Using Explicit Locks".

Independent of the perimeters, there will be at most one thread allowed within any given queue's service procedure.

MT UNSAFE Modules

MT UNSAFE mode for STREAMS modules was supported temporarily as an aid in porting SVR4 modules. MT UNSAFE modules are no longer supported.

Preparing to Port

When modifying a STREAMS driver to take advantage of the multithreaded kernel, a level of MT safety is selected according to:

Note that much of the effort in conversion is simply determining the appropriate degree of data sharing and the corresponding granularity of locking. The actual time spent configuring perimeters and/or installing locks should be much smaller than the time spent in analysis.

To port your module, you must understand the data structures used within your module, as well as the accesses to those data structures. It is your responsibility to fully understand the relationship between all portions of the module and private data within that module, and to use the MT STREAMS perimeters (or the synchronization primitives available) to maintain the integrity of these private data structures.

You must explicitly restrict access to private module data structures as appropriate to ensure their integrity. Use the MT STREAMS perimeters to restrict the concurrency in the module so that the parts of the module that modify private data are single threaded with respect to the parts that read the same data. As an alternative to perimeters, you can use the available synchronization primitives (mutexes, condition variables, readers/writer locks, semaphores) to explicitly restrict access to module private data as appropriate for the operations on that data.

The first step in multithreading a module or driver is to analyze the module, breaking the entire module up into a list of individual operations and the private data structures referenced in each operation. Part of this first step is deciding upon a level of concurrency for the module. Ask yourself which of these operations can be multithreaded and which must be single threaded. Try to find a level of concurrency that is "natural" for the module and that matches one of the available perimeters (or alternatively, requires the minimal number of locks) and that has a simple and straightforward implementation. Avoid additional complexity.

It is common to overdo multithreading, which results in a low-performance module.

Typical questions to ask are:

Examples of natural levels of concurrency are:

Porting to the SunOS 5 System

When porting a STREAMS module or driver from the SunOS 4 system to the SunOS 5 system, the module should be examined with respect to the following areas:

For portability and correct operation, each module must adhere to the SunOS DDI/DKI. Several facilities available in previous releases of the SunOS system have changed and can take different arguments, or produce different side effects, or no longer exist in the SunOS 5 system. The module writer should carefully review the module with respect to the DDI/DKI.

Each module that accesses underlying Sun-specific features included in the SunOS 5 system should conform to the Device Driver Interface. The SunOS 5 DDI defines the interface used by the device driver to register device hardware interrupts, access device node properties, map device slave memory, and establish and synchronize memory mappings for DVMA (Direct Virtual Memory Access). These areas are primarily applicable to hardware device drivers. Refer to the Device Driver Interface Specification in Writing Device Drivers for details on the SunOS 5 DDI and DVMA.

The kernel networking subsystem in the SunOS 5 system is STREAMS based. Datalink drivers that used the ifnet interface in the SunOS 4 system must be converted to use DLPI for the SunOS 5 system. Refer to the Data Link Provider Interface, Revision 2 specification.

After reviewing the module for conformance to the SunOS 5 DKI and DDI specifications, you should be able to consider the impact of multithreading on the module.

MT SAFE Modules

Your MT SAFE modules should use perimeters and avoid using module private locks. Should you opt to use module private locks, you need to read "MT SAFE Modules Using Explicit Locks" along with this section.

MT STREAMS Perimeters

For the purpose of controlling and restricting the concurrency for the synchronous entry points, the STREAMS framework defines two MT perimeters. The STREAMS framework provides the concepts of inner and outer perimeters. A module can be configured either to have no perimeters, to have only an inner or an outer perimeter, or to have both an inner and outer perimeter. For inner perimeters there are different scope perimeters to choose from. Unrestricted concurrency can be obtained by configuring no perimeters.

Figure 12-1 and Figure 12-2 are examples of inner perimeters. Figure 12-3 shows multiple inner perimeters inside an outer perimeter.

Figure 12-1 Inner Perimeter Spanning a Pair of Queues (D_MTQPAIR)


Both the inner and outer perimeters act as readers/writer locks allowing multiple readers or a single writer. Thus, each perimeter can be entered in two modes: shared (reader) or exclusive (writer). By default, all synchronous entry points enter the inner perimeter exclusively and the outer perimeter shared.

The inner and outer perimeters are entered when one of the synchronous entry points is called. The perimeters are retained until the call returns from the entry point. Thus, for example, the thread does not leave the perimeter of one module when it calls putnext to enter another module.

Figure 12-2 Inner Perimeter Spanning All Queues in a Module (D_MTPERMOD)


When a thread is inside a perimeter and it calls putnext(9F) (or putnextctl1(9F)), the thread can "loop around" through other STREAMS modules and try to reenter a put procedure inside the original perimeter. If this reentry conflicts with the earlier entry (for example if the first entry has exclusive access at the inner perimeter), the STREAMS framework defers the reentry while preserving the order of the messages attempting to enter the perimeter. Thus, putnext(9F) returns without the message having been passed to the put procedure and the framework passes the message to the put procedure when it is possible to enter the perimeters.

The optional outer perimeter, which spans all queues in a module, is illustrated in Figure 12-3.

Figure 12-3 Outer Perimeter Spanning All Queues With Inner Perimeters Spanning Each Pair (D_MTOUTPERIM Combined With D_MTQPAIR)


Perimeter Options

Several flags are used to specify the perimeters. These flags fall into three categories:

The inner perimeter is controlled by these mutually exclusive flags:

The presence of the outer perimeter is configured using:

Recall that by default all synchronous entry points enter the inner perimeter exclusively and enter the outer perimeter shared. This behavior can be modified in two ways:

MT Configuration

To configure the driver as being MT SAFE, the cb_ops(9S) and dev_ops(9S) data structures must be initialized. This code must be in the header section of your module. For more information, see Example 12-1, and dev_ops(9S).

The driver is configured to be MT SAFE by setting the cb_flag field to D_MP. Any MT STREAMS perimeters are also configured by setting flags in the cb_flag field (see mt-streams(9F)). The corresponding configuration for a module is done using the f_flag field in fmodsw(9S).

qprocson(9F)/qprocsoff(9F)

The routines qprocson(9F) and qprocsoff(9F) respectively enable and disable the put and service procedures of a queue pair. Before qprocson(9F) is called, and after qprocsoff(9F) is called, the module's put and service procedures are disabled; messages flow around the module as if it were not present in the Stream.

qprocson(9F) must be called by the first open of a module, but only after allocation and initialization of any module resources on which the put and service procedures depend. The qprocsoff routine must be called by the close routine of the module before deallocating any resources on which the put and service procedures depend.

To avoid deadlocks, modules must not hold private locks across the calls to qprocson(9F) or qprocsoff(9F).
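The required ordering can be sketched as follows. This is a hypothetical userland sketch: the queue_t definition and the qprocson_stub() and qprocsoff_stub() functions stand in for the real <sys/stream.h> structures and the qprocson(9F)/qprocsoff(9F) routines, and malloc stands in for kmem_alloc(9F).

```c
#include <errno.h>
#include <stdlib.h>

struct xxstr { int ready; };                      /* stand-in for module private data */
typedef struct queue { void *q_ptr; int procs_on; } queue_t;

static void qprocson_stub(queue_t *q)  { q->procs_on = 1; }  /* stands in for qprocson(9F) */
static void qprocsoff_stub(queue_t *q) { q->procs_on = 0; }  /* stands in for qprocsoff(9F) */

int
xxopen_sketch(queue_t *rq)
{
	struct xxstr *xxp = malloc(sizeof (struct xxstr)); /* kmem_alloc(9F) in a driver */
	if (xxp == NULL)
		return (ENOMEM);
	xxp->ready = 1;          /* complete all initialization first... */
	rq->q_ptr = xxp;
	qprocson_stub(rq);       /* ...then enable the put and service procedures */
	return (0);
}

int
xxclose_sketch(queue_t *rq)
{
	qprocsoff_stub(rq);      /* disable the put and service procedures first... */
	free(rq->q_ptr);         /* ...then it is safe to free the private data */
	rq->q_ptr = NULL;
	return (0);
}
```

The point of the ordering is that once qprocson(9F) returns, other threads can immediately enter the put and service procedures, so every resource those procedures depend on must already exist; the close side is the mirror image.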

qtimeout(9F)/qunbufcall(9F)

The timeout(9F) and bufcall(9F) callbacks are asynchronous. For a module using MT STREAMS perimeters, the timeout(9F) and bufcall(9F) callback functions execute outside the scope of the perimeters. This makes it complex for the callbacks to synchronize with the rest of the module.

To make timeout(9F) and bufcall(9F) functionality easier to use for modules with perimeters, there are additional interfaces that use synchronous callbacks. These routines are qtimeout(9F), quntimeout(9F), qbufcall(9F), and qunbufcall(9F). When using these routines, the callback functions are executed inside the perimeters, hence with the same concurrency restrictions as the put and service procedures.

qwriter(9F)

Modules can use the qwriter(9F) function to upgrade from shared to exclusive access at a perimeter. For example, a module with an outer perimeter can use qwriter(9F) in the put procedure to upgrade to exclusive access at the outer perimeter. A module where the put procedure runs with shared access at the inner perimeter (D_MTPUTSHARED) can use qwriter(9F) in the put procedure to upgrade to exclusive access at the inner perimeter.


Note -

Note that qwriter(9F) cannot be used in the open or close procedures. If a module needs exclusive access at the outer perimeter in the open and/or close procedures, it has to specify that the outer perimeter should always be entered exclusively for open and close (using D_MTOCEXCL).


The STREAMS framework guarantees that all deferred qwriter(9F) callbacks associated with a queue have executed before the module's close routine is called for that queue.

For an example of a driver using qwriter(9F) see Example 12-2.
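The hand-off from shared to exclusive access can be sketched as below. This is a hypothetical userland sketch: the queue_t and mblk_t definitions, the PERIM_OUTER constant, and qwriter_stub() stand in for the real <sys/stream.h> types and qwriter(9F); the stub invokes the callback directly, whereas the real framework may defer it until exclusive access can be granted.

```c
typedef struct queue { void *q_ptr; } queue_t;
typedef struct msgb  { int b_flag; } mblk_t;
#define PERIM_OUTER 1    /* stands in for the real PERIM_OUTER flag */

static int xx_global;    /* module global data, normally guarded by the outer perimeter */

/* Stands in for qwriter(9F): arrange for func() to run with exclusive
 * access at the requested perimeter. */
static void
qwriter_stub(queue_t *q, mblk_t *mp, void (*func)(queue_t *, mblk_t *), int perim)
{
	(void) perim;
	func(q, mp);
}

/* Runs with exclusive access; safe to modify module global data here. */
static void
xx_set_global(queue_t *q, mblk_t *mp)
{
	(void) q; (void) mp;
	xx_global++;
}

int
xxwput_sketch(queue_t *wq, mblk_t *mp)
{
	/* The put procedure holds only shared access at the outer perimeter,
	 * so it defers the global update to a writer callback. */
	qwriter_stub(wq, mp, xx_set_global, PERIM_OUTER);
	return (0);
}
```

Because the real qwriter(9F) may defer the callback, the put procedure must not assume the update has happened by the time qwriter(9F) returns.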

qwait(9F)

A module that uses perimeters and must wait in its open or close procedure for a message from another STREAMS module has to wait outside the perimeters; otherwise, the message would never be allowed to enter its put and service procedures. This is accomplished by using the qwait(9F) interface. See qwriter(9F) man page for an example.

Asynchronous Callbacks

Interrupt handlers and other asynchronous callback functions require special care by the module writer, since they can execute asynchronously to threads executing within the module open, close, put, and service procedures.

For modules using perimeters, use qtimeout(9F) and qbufcall(9F) instead of timeout(9F) and bufcall(9F). The qtimeout and qbufcall callbacks are synchronous and consequently introduce no special synchronization requirements.

Since a thread can enter the module at any time, you must ensure that the asynchronous callback function acquires the proper private locks before accessing private module data structures, and releases these locks before returning. You must cancel any outstanding registered callback routines before the data structures on which the callback routines depend are deallocated and the module closed.
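The lock discipline for an asynchronous callback can be sketched as below. This is a hypothetical userland sketch: pthread_mutex_t stands in for a kernel kmutex_t, and the callback represents any routine, such as a timeout(9F), bufcall(9F), or interrupt handler, that runs asynchronously to threads in the module's put and service procedures.

```c
#include <pthread.h>

static pthread_mutex_t xx_lock = PTHREAD_MUTEX_INITIALIZER;
static int xx_private_count;          /* module private data */

void
xx_async_callback(void *arg)
{
	(void) arg;
	pthread_mutex_lock(&xx_lock);     /* mutex_enter() in a real module */
	xx_private_count++;               /* touch private data only under the lock */
	pthread_mutex_unlock(&xx_lock);   /* mutex_exit() before returning */
}
```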

Close Race Conditions

Since the callback functions are by nature asynchronous, they can be executing or about to execute at the time the module close routine is called. You must cancel all outstanding callback and interrupt conditions before deallocating those data structures or returning from the close routine.

The callback functions scheduled with timeout(9F) and bufcall(9F) are guaranteed to have been canceled by the time untimeout(9F) and unbufcall(9F) return. The same is true for qtimeout(9F) and qbufcall(9F) by the time quntimeout(9F) and qunbufcall(9F) return. You must also take responsibility for other asynchronous routines, including esballoc(9F) callbacks and hardware and software interrupts.

Module Unloading and esballoc(9F)

The STREAMS framework prevents a module or driver text from being unloaded while there are open instances of the module or driver. If a module does not cancel all callbacks in the last close routine, it has to refuse to be unloaded.

This is an issue mainly for modules and drivers using esballoc since esballoc callbacks cannot be canceled. Thus modules and drivers using esballoc have to be prepared to handle calls to the esballoc callback free function after the last instance of the module or driver has been closed.

Modules and drivers can maintain a count of outstanding callbacks. They can refuse to be unloaded by having their _fini(9E) routine return EBUSY if there are outstanding callbacks.
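The counting approach can be sketched as follows. This is a hypothetical userland sketch: xx_fini_sketch() stands in for the driver's _fini(9E) routine, and mod_remove_stub() stands in for mod_remove(9F) on the module's modlinkage; the counter name is illustrative.

```c
#include <errno.h>

static int xx_outstanding;  /* incremented at esballoc(9F), decremented in the free function */

static int
mod_remove_stub(void)
{
	return (0);      /* stands in for mod_remove(&modlinkage) */
}

int
xx_fini_sketch(void)
{
	if (xx_outstanding > 0)
		return (EBUSY);   /* esballoc callbacks still pending: refuse to unload */
	return (mod_remove_stub());
}
```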

Use of q_next

The q_next field in the queue_t structure can be referenced in open, close, put, and service procedures as well as the synchronous callback procedures (scheduled with qtimeout(9F), qbufcall(9F), and qwriter(9F)). However, the value in the q_next field should not be trusted. It is relevant to the STREAMS framework, but may not be relevant to a specific module.

All other module code, such as interrupt routines and timeout(9F) and esballoc(9F) callback routines, must not dereference q_next. Those routines have to use the "next" version of each function; for instance, use canputnext(9F) instead of dereferencing q_next and calling canput(9F).

MT SAFE Modules Using Explicit Locks

Although this approach is generally less robust than relying on perimeters, you can use explicit locks either instead of perimeters or to augment the concurrency restrictions provided by the perimeters.


Caution - Caution -

Explicit locks cannot be used to preserve message ordering in a module because of the risk of reentering the module. Use MT STREAMS perimeters to preserve message ordering.


All four types of kernel synchronization primitives are available to the module writer: mutexes, readers/writer locks, semaphores, and condition variables. Since cv_wait implies a context switch, it can only be called from the module's open and close procedures, which are executed with valid process context. You must use the synchronization primitives to protect accesses and ensure the integrity of private module data structures.

Constraints When Using Locks

When adding locks in a module, it is important to observe these constraints:

The first restriction makes it hard to use module private locks to preserve message ordering. MT STREAMS perimeters is the preferred mechanism to preserve message ordering.

Preserving Message Ordering

Module private locks cannot be used to preserve message ordering, since they cannot be held across calls to putnext(9F) and the other routines that pass messages to other modules. The alternatives for preserving message ordering are:

Use of perimeters is preferred, since there is a performance penalty for using service procedures.

Sample Multithreaded Device Driver

Example 12-1 is a sample multithreaded, loadable STREAMS pseudo-driver. The driver's MT design is the simplest possible, based on a per-module inner perimeter: only one thread can execute in the driver at any time. In addition, a qtimeout(9F) synchronous callback routine is used; the driver cancels an outstanding qtimeout(9F) by calling quntimeout(9F) in its close routine. See "Close Race Conditions".


Example 12-1 Sample Multithreaded, Loadable, STREAMS Pseudo-Driver

/*
 * Example SunOS 5 multithreaded STREAMS pseudo device driver.
 * Using a D_MTPERMOD inner perimeter.
 */

#include				<sys/types.h>
#include				<sys/errno.h>
#include				<sys/stropts.h>
#include				<sys/stream.h>
#include				<sys/strlog.h>
#include				<sys/cmn_err.h>
#include				<sys/modctl.h>
#include				<sys/kmem.h>
#include				<sys/conf.h>
#include				<sys/ksynch.h>
#include				<sys/stat.h>
#include				<sys/ddi.h>
#include				<sys/sunddi.h>

/*
 * Function prototypes.
 */
static			int xxidentify(dev_info_t *);
static			int xxattach(dev_info_t *, ddi_attach_cmd_t);
static			int xxdetach(dev_info_t *, ddi_detach_cmd_t);
static			int xxgetinfo(dev_info_t *,ddi_info_cmd_t,void *,void**);
static			int xxopen(queue_t *, dev_t *, int, int, cred_t *);
static			int xxclose(queue_t *, int, cred_t *);
static			int xxwput(queue_t *, mblk_t *);
static			int xxwsrv(queue_t *);
static 			void xxtick(caddr_t);

/*
 * Streams Declarations
 */
static struct module_info xxm_info = {
   99,            /* mi_idnum */
   "xx",        /* mi_idname */
   0,             /* mi_minpsz */
   INFPSZ,        /* mi_maxpsz */
   0,             /* mi_hiwat */
   0              /* mi_lowat */
};

static struct qinit xxrinit = {
		NULL,           /* qi_putp */
		NULL,           /* qi_srvp */
		xxopen,         /* qi_qopen */
		xxclose,        /* qi_qclose */
		NULL,           /* qi_qadmin */
		&xxm_info,      /* qi_minfo */
		NULL            /* qi_mstat */
};

static struct qinit xxwinit = {
		xxwput,         /* qi_putp */
		xxwsrv,         /* qi_srvp */
		NULL,           /* qi_qopen */
		NULL,           /* qi_qclose */
		NULL,           /* qi_qadmin */
		&xxm_info,      /* qi_minfo */
		NULL            /* qi_mstat */
};

static struct streamtab xxstrtab = {
		&xxrinit,       /* st_rdinit */
		&xxwinit,       /* st_wrinit */
		NULL,           /* st_muxrinit */
		NULL            /* st_muxwrinit */
};

/*
 * define the xx_ops structure.
 */

static 				struct cb_ops cb_xx_ops = {
		nodev,            /* cb_open */
		nodev,            /* cb_close */
		nodev,            /* cb_strategy */
		nodev,            /* cb_print */
		nodev,            /* cb_dump */
		nodev,            /* cb_read */
		nodev,            /* cb_write */
		nodev,            /* cb_ioctl */
		nodev,            /* cb_devmap */
		nodev,            /* cb_mmap */
		nodev,            /* cb_segmap */
		nochpoll,         /* cb_chpoll */
		ddi_prop_op,      /* cb_prop_op */
		&xxstrtab,        /* cb_stream */
		(D_NEW|D_MP|D_MTPERMOD) /* cb_flag */
};

static struct dev_ops xx_ops = {
		DEVO_REV,         /* devo_rev */
		0,                /* devo_refcnt */
		xxgetinfo,        /* devo_getinfo */
		xxidentify,       /* devo_identify */
		nodev,            /* devo_probe */
		xxattach,         /* devo_attach */
		xxdetach,         /* devo_detach */
		nodev,            /* devo_reset */
		&cb_xx_ops,       /* devo_cb_ops */
		(struct bus_ops *)NULL /* devo_bus_ops */
};


/*
 * Module linkage information for the kernel.
 */
static struct modldrv modldrv = {
		&mod_driverops,   /* Type of module. This one is a driver */
		"xx",           /* Driver name */
		&xx_ops,          /* driver ops */
};

static struct modlinkage modlinkage = {
		MODREV_1,
		&modldrv,
		NULL
};

/*
 * Driver private data structure. One is allocated per Stream.
 */
struct xxstr {
		struct		xxstr *xx_next;	/* pointer to next in list */
		queue_t		*xx_rq;				/* read side queue pointer */
		minor_t		xx_minor;			/* minor device # (for clone) */
		int			xx_timeoutid;		/* id returned from timeout() */
};

/*
 * Linked list of opened Stream xxstr structures.
 * No need for locks protecting it since the whole module is
 * single threaded using the D_MTPERMOD perimeter.
 */
static struct xxstr						*xxup = NULL;


/*
 * Module Config entry points
 */

int
_init(void)
{
		return (mod_install(&modlinkage));
}

int
_fini(void)
{
		return (mod_remove(&modlinkage));
}

int
_info(struct modinfo *modinfop)
{
		return (mod_info(&modlinkage, modinfop));
}

/*
 * Auto Configuration entry points
 */

/* Identify device. */
static int
xxidentify(dev_info_t *dip)
{
		if (strcmp(ddi_get_name(dip), "xx") == 0)
			return (DDI_IDENTIFIED);
		else
			return (DDI_NOT_IDENTIFIED);
}

/* Attach device. */
static int
xxattach(dev_info_t *dip, ddi_attach_cmd_t cmd)
{
		/* This creates the device node. */
		if (ddi_create_minor_node(dip, "xx", S_IFCHR, ddi_get_instance(dip), 
				DDI_PSEUDO, CLONE_DEV) == DDI_FAILURE) {
			return (DDI_FAILURE);
		}
		ddi_report_dev(dip);
		return (DDI_SUCCESS);
}

/* Detach device. */
static int
xxdetach(dev_info_t *dip, ddi_detach_cmd_t cmd)
{
		ddi_remove_minor_node(dip, NULL);
		return (DDI_SUCCESS);
}

/* ARGSUSED */
static int
xxgetinfo(dev_info_t *dip, ddi_info_cmd_t infocmd, void *arg,	void **resultp)
{
		dev_t dev = (dev_t) arg;
		int instance, ret = DDI_FAILURE;

		devstate_t *sp;
		state *statep;
		instance = getminor(dev);

		switch (infocmd) {
			case DDI_INFO_DEVT2DEVINFO:
				if ((sp = ddi_get_soft_state(statep, 
						getminor((dev_t) arg))) != NULL) {
					*resultp = sp->devi;
					ret = DDI_SUCCESS;
				} else
					*resultp = NULL;
				break;

			case DDI_INFO_DEVT2INSTANCE:
				*resultp = (void *)instance;
				ret = DDI_SUCCESS;
				break;

			default:
				break;
		}
		return (ret);
}

static int
xxopen(rq, devp, flag, sflag, credp)
		queue_t			*rq;
		dev_t				*devp;
		int				flag;
		int				sflag;
		cred_t			*credp;
{
		struct xxstr *xxp;
		struct xxstr **prevxxp;
		minor_t 			minordev;

		/* If this Stream already open - we're done. */
		if (rq->q_ptr)
			return (0);

		/* Determine minor device number. */
		prevxxp = & xxup;
		if (sflag == CLONEOPEN) {
			minordev = 0;
			while ((xxp = *prevxxp) != NULL) {
				if (minordev < xxp->xx_minor)
					break;
				minordev++;
				prevxxp = &xxp->xx_next;
			}
			*devp = makedevice(getmajor(*devp), minordev);
		} else
			minordev = getminor(*devp);

		/* Allocate our private per-Stream data structure. */
		if ((xxp = kmem_alloc(sizeof (struct xxstr), KM_SLEEP)) == NULL)
			return (ENOMEM);

		/* Point q_ptr at it. */
		rq->q_ptr = WR(rq)->q_ptr = (char *) xxp;

		/* Initialize it. */
		xxp->xx_minor = minordev;
		xxp->xx_timeoutid = 0;
		xxp->xx_rq = rq;

		/* Link new entry into the list of active entries. */
		xxp->xx_next = *prevxxp;
		*prevxxp = xxp;

		/* Enable xxput() and xxsrv() procedures on this queue. */
		qprocson(rq);

		return (0);
}

static int
xxclose(rq, flag, credp)
		queue_t			*rq;
		int				flag;
		cred_t			*credp;

{
		struct		xxstr		*xxp;
		struct		xxstr		**prevxxp;

		/* Disable xxput() and xxsrv() procedures on this queue. */
		qprocsoff(rq);
		/* Cancel any pending timeout. */
		 xxp = (struct xxstr *) rq->q_ptr;
		 if (xxp->xx_timeoutid != 0) {
	 		 (void) quntimeout(rq, xxp->xx_timeoutid);
	 		 xxp->xx_timeoutid = 0;
		 }
		/* Unlink per-Stream entry from the active list and free it. */
		for (prevxxp = &xxup; (xxp = *prevxxp) != NULL; prevxxp = &xxp->xx_next)
			if (xxp == (struct xxstr *) rq->q_ptr)
				break;
		*prevxxp = xxp->xx_next;
		kmem_free (xxp, sizeof (struct xxstr));

		rq->q_ptr = WR(rq)->q_ptr = NULL;

		return (0);
}

static int
xxwput(wq, mp)
		queue_t		*wq;
		mblk_t		*mp;
{
		struct xxstr	*xxp = (struct xxstr *)wq->q_ptr;

		/* do stuff here */
		freemsg(mp);
		mp = NULL;

		if (mp != NULL)
			putnext(wq, mp);
}

static int
xxwsrv(wq)
		queue_t		*wq;
{
		mblk_t		*mp;
		struct xxstr	*xxp;

		xxp = (struct xxstr *) wq->q_ptr;

		while ((mp = getq(wq)) != NULL) {
			/* do stuff here */
			freemsg(mp);

			/* for example, start a timeout */
			if (xxp->xx_timeoutid != 0) {
				/* cancel running timeout */
				(void) quntimeout(wq, xxp->xx_timeoutid);
			}
			xxp->xx_timeoutid = qtimeout(wq, xxtick, (char *)xxp, 10);
		}
}

static void
xxtick(arg)
		caddr_t arg;
{
		struct xxstr *xxp = (struct xxstr *)arg;

		xxp->xx_timeoutid = 0;      /* timeout has run */
		/* do stuff */

}

Sample Multithreaded Module with Outer Perimeter

Example 12-2 is a sample multithreaded, loadable STREAMS module. The module MT design is relatively simple: a per-queue-pair inner perimeter plus an outer perimeter. The inner perimeter protects the per-instance data structures (accessed through the q_ptr field), and the module global data is protected by the outer perimeter. The outer perimeter is configured so that the open and close routines have exclusive access to it; this is necessary because they both modify the global linked list of instances. Other routines that modify global data run as qwriter(9F) callbacks, giving them exclusive access to the whole module.


Example 12-2 Multithread Module with Outer Perimeter

/*
 * Example SunOS 5 multi-threaded STREAMS module.
 * Using a per-queue-pair inner perimeter plus an outer perimeter.
 */

#include				<sys/types.h>
#include				<sys/errno.h>
#include				<sys/stropts.h>
#include				<sys/stream.h>
#include				<sys/strlog.h>
#include				<sys/cmn_err.h>
#include				<sys/kmem.h>
#include				<sys/conf.h>
#include				<sys/ksynch.h>
#include				<sys/modctl.h>
#include				<sys/stat.h>
#include				<sys/ddi.h>
#include				<sys/sunddi.h>

/*
 * Function prototypes.
 */
static			int xxopen(queue_t *, dev_t *, int, int, cred_t *);
static			int xxclose(queue_t *, int, cred_t *);
static			int xxwput(queue_t *, mblk_t *);
static			int xxwsrv(queue_t *);
static			void xxwput_ioctl(queue_t *, mblk_t *);
static			int xxrput(queue_t *, mblk_t *);
static 			void xxtick(caddr_t);

/*
 * Streams Declarations
 */
static struct module_info xxm_info = {
   99,            /* mi_idnum */
   "xx",        /* mi_idname */
   0,             /* mi_minpsz */
   INFPSZ,        /* mi_maxpsz */
   0,             /* mi_hiwat */
   0              /* mi_lowat */
};
/*
 * Define the read side qinit structure
 */
static struct qinit xxrinit = {
		xxrput,         /* qi_putp */
		NULL,           /* qi_srvp */
		xxopen,         /* qi_qopen */
		xxclose,        /* qi_qclose */
		NULL,           /* qi_qadmin */
		&xxm_info,      /* qi_minfo */
		NULL            /* qi_mstat */
};
/*
 * Define the write side qinit structure
 */
static struct qinit xxwinit = {
		xxwput,         /* qi_putp */
		xxwsrv,         /* qi_srvp */
		NULL,           /* qi_qopen */
		NULL,           /* qi_qclose */
		NULL,           /* qi_qadmin */
		&xxm_info,      /* qi_minfo */
		NULL            /* qi_mstat */
};

static struct streamtab xxstrtab = {
		&xxrinit,       /* st_rdinit */
		&xxwinit,       /* st_wrinit */
		NULL,           /* st_muxrinit */
		NULL            /* st_muxwrinit */
};

/*
 * define the fmodsw structure.
 */

static struct fmodsw xx_fsw = {
		"xx",         /* f_name */
		&xxstrtab,      /* f_str */
		(D_NEW|D_MP|D_MTQPAIR|D_MTOUTPERIM|D_MTOCEXCL) /* f_flag */
};

/*
 * Module linkage information for the kernel.
 */
static struct modlstrmod modlstrmod = {
		&mod_strmodops,	/* Type of module; a STREAMS module */
		"xx module",		/* Module name */
		&xx_fsw,				/* fmodsw */
};

static struct modlinkage modlinkage = {
		MODREV_1,
		&modlstrmod,
		NULL
};

/*
 * Module private data structure. One is allocated per Stream.
 */
struct xxstr {
		struct		xxstr *xx_next;	/* pointer to next in list */
		queue_t		*xx_rq;				/* read side queue pointer */
		int			xx_timeoutid;		/* id returned from timeout() */
};

/*
 * Linked list of opened Stream xxstr structures and other module
 * global data. Protected by the outer perimeter.
 */
static struct xxstr						*xxup = NULL;
static int some_module_global_data;


/*
 * Module Config entry points
 */
int
_init(void)
{
		return (mod_install(&modlinkage));
}
int
_fini(void)
{
		return (mod_remove(&modlinkage));
}
int
_info(struct modinfo *modinfop)
{
		return (mod_info(&modlinkage, modinfop));
}


static int
xxopen(queue_t *rq,dev_t *devp,int flag,int sflag, cred_t *credp)
{
		struct xxstr *xxp;
		/* If this Stream already open - we're done. */
		if (rq->q_ptr)
			return (0);
		/* We must be a module */
		if (sflag != MODOPEN)
			return (EINVAL);

		/*
		 * The perimeter flag D_MTOCEXCL implies that the open and
		 * close routines have exclusive access to the module global
		 * data structures.
		 *
		 * Allocate our private per-Stream data structure.
		 */
		xxp = kmem_alloc(sizeof (struct xxstr), KM_SLEEP);

		/* Point q_ptr at it. */
		rq->q_ptr = WR(rq)->q_ptr = (char *) xxp;

		/* Initialize it. */
		xxp->xx_rq = rq;
		xxp->xx_timeoutid = 0;

		/* Link new entry into the list of active entries. */
		xxp->xx_next = xxup;
		xxup = xxp;

		/* Enable xxput() and xxsrv() procedures on this queue. */
		qprocson(rq);
		/* Return success */
		return (0);
}

static int
xxclose(queue_t *rq, int flag, cred_t *credp)
{
		struct			xxstr				*xxp;
		struct			xxstr				**prevxxp;

		/* Disable xxput() and xxsrv() procedures on this queue. */
		qprocsoff(rq);
		/* Cancel any pending timeout. */
		xxp = (struct xxstr *)rq->q_ptr;
	 	if (xxp->xx_timeoutid != 0) {
	 		(void) quntimeout(WR(rq), xxp->xx_timeoutid);
	 	 	xxp->xx_timeoutid = 0;
	 	}
		/*
		 * D_MTOCEXCL implies that the open and close routines have
		 * exclusive access to the module global data structures.
		 *
		 * Unlink per-Stream entry from the active list and free it.
		 */
		for (prevxxp = &xxup; (xxp = *prevxxp) != NULL; prevxxp = &xxp->xx_next) {
			if (xxp == (struct xxstr *) rq->q_ptr)
				break;
		}
		*prevxxp = xxp->xx_next;
		kmem_free (xxp, sizeof (struct xxstr));
		rq->q_ptr = WR(rq)->q_ptr = NULL;
		return (0);
}

static int
xxrput(queue_t *rq, mblk_t *mp)
{
		struct xxstr	*xxp = (struct xxstr *)rq->q_ptr;

		/*
		 * Do stuff here. Can read "some_module_global_data" since we
		 * have shared access at the outer perimeter.
		 */
		putnext(rq, mp);
		return (0);
}

/* qwriter callback function for handling M_IOCTL messages */
static void
xxwput_ioctl(queue_t *wq, mblk_t *mp)
{
		struct xxstr				*xxp = (struct xxstr *)wq->q_ptr;

		/*
		 * Do stuff here. Can modify "some_module_global_data" since
		 * we have exclusive access at the outer perimeter.
		 */
		mp->b_datap->db_type = M_IOCNAK;
		qreply(wq, mp);
}

static int
xxwput(queue_t *wq, mblk_t *mp)
{
		struct xxstr				*xxp = (struct xxstr *)wq->q_ptr;

		if (mp->b_datap->db_type == M_IOCTL) {
			/* M_IOCTL will modify the module global data */
			qwriter(wq, mp, xxwput_ioctl, PERIM_OUTER);
			return (0);
		}
		/*
		 * Do stuff here. Can read "some_module_global_data" since
		 * we have shared access at the outer perimeter.
		 */
		putnext(wq, mp);
		return (0);
}

static int
xxwsrv(queue_t *wq)
{
		mblk_t			*mp;
		struct xxstr	*xxp = (struct xxstr *)wq->q_ptr;

		while ((mp = getq(wq)) != NULL) {
			/*
			 * Do stuff here. Can read "some_module_global_data" since
			 * we have shared access at the outer perimeter.
			 */
			freemsg(mp);

			/* For example, start a timeout. */
			if (xxp->xx_timeoutid != 0) {
				/* cancel running timeout */
				(void) quntimeout(wq, xxp->xx_timeoutid);
			}
			xxp->xx_timeoutid = qtimeout(wq, xxtick, (char *)xxp, 10);
		}
		return (0);
}

static void
xxtick(caddr_t arg)
{
		struct xxstr *xxp = (struct xxstr *)arg;

		xxp->xx_timeoutid = 0;      /* timeout has run */
		/*
		 * Do stuff here. Can read "some_module_global_data" since we
		 * have shared access at the outer perimeter.
		 */
}