

Appendix C

The dmake Utility

This appendix describes how the distributed make (dmake) utility distributes builds over several hosts so that programs are built concurrently over a number of workstations or multiple CPUs. See also the dmake(1) man page.

Basic Concepts

Distributed make (dmake) is a superset of the make utility and allows you to concurrently distribute the process of building large projects, consisting of many programs, over a number of workstations and, in the case of multiprocessor systems, over multiple CPUs.

You execute dmake on a dmake host and distribute jobs to build servers. You can also distribute jobs to the dmake host, in which case it is also considered to be a build server. The dmake utility determines from your makefiles which targets can be built concurrently and distributes jobs accordingly. From the dmake host you can control which build servers are used and how many dmake jobs are allotted to each build server. The number of dmake jobs that can run on a given build server can also be limited on that server.

The distribution of dmake jobs is controlled in two ways:

  1. A dmake user on a dmake host can specify the machines to use as build servers and the number of jobs to distribute to each build server.

  2. The owner of a build server (a user who can alter the /etc/opt/SPROdmake/dmake.conf build server configuration file) can control the maximum total number of dmake jobs that can be distributed to that build server.


    Note – If you access dmake from the Building window, see the online help for information about specifying your build servers and jobs. If you access dmake from the command line, see the dmake(1) man page.

To understand dmake, you should know about the dmake host, the build server, and the impact of the dmake utility on makefiles, each of which is described in the following sections.

The dmake Host

The dmake host is defined as the machine on which the dmake command is initially issued. The dmake utility searches for a runtime configuration file to determine where to distribute jobs. Generally, this file is in your home directory on the dmake host and is named .dmakerc. The dmake utility searches for the runtime configuration file in the following locations, in this order:

  1. The path name you specify on the command line using the -c option

  2. The path name you specify using the DMAKE_RCFILE makefile macro

  3. The path name you specify using the DMAKE_RCFILE environment variable

  4. $(HOME)/.dmakerc

If a runtime configuration file is not found, the dmake utility distributes two jobs to the dmake host.
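
For example, a makefile can point dmake at a shared runtime configuration file by setting the DMAKE_RCFILE macro described above; the path shown here is purely illustrative:

# Illustrative path; substitute the location of your own runtime
# configuration file.
DMAKE_RCFILE= /net/builds/config/dmakerc.lab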

The runtime configuration file allows you to specify a list of build servers and the number of jobs you want distributed to each build server. CODE EXAMPLE C-1 is an example of a .dmakerc file.

CODE EXAMPLE C-1 .dmakerc File

# My machine. This entry causes dmake to distribute to it.
falcon          { jobs = 1 }
hawk
eagle           { jobs = 3 }
# Manager's machine. She's usually at meetings.
heron           { jobs = 4 }
avocet


The entries falcon, hawk, eagle, heron, and avocet are the listed build servers. You can specify the number of jobs you want distributed to each build server; the default is two jobs. Any line that begins with the # character is interpreted as a comment. In the example above, the list of build servers includes falcon, which is also the dmake host. The dmake host can be specified as a build server, but if you do not include it in the runtime configuration file, no dmake jobs are distributed to it.

You can also construct groups of build servers in the runtime configuration file. The dmake utility provides you with the flexibility of easily switching between different groups of build servers as circumstances warrant. For instance, you may define groups of build servers for builds under different operating systems, or you may define groups of build servers that have special software installed on them.

CODE EXAMPLE C-2 shows a .dmakerc file that contains groups of build servers.

CODE EXAMPLE C-2 .dmakerc File With Groups of Build Servers

earth           { jobs = 2 }
mars            { jobs = 3 }

group lab1 {
        host falcon     { jobs = 3 }
        host hawk
        host eagle      { jobs = 3 }
}

group lab2 {
        host heron
        host avocet     { jobs = 3 }
        host stilt      { jobs = 2 }
}

group labs {
        group lab1
        group lab2
}

group sunos5.x {
        group labs
        host jupiter
        host venus      { jobs = 2 }
        host pluto      { jobs = 3 }
}


Formal groups are specified by the group keyword and lists of their members are delimited by braces ({}). Build servers that are members of groups are specified by the optional host keyword. Groups can be members of other groups. Individual build servers can be listed in runtime configuration files that also contain groups of build servers; in this case, dmake treats these build servers as members of the unnamed group.

In order of precedence, the dmake utility distributes jobs to the following:

  1. The formal group specified on the command line as an argument to the -g option

  2. The formal group specified by the DMAKE_GROUP makefile macro (see the example following this list)

  3. The formal group specified by the DMAKE_GROUP environment variable

  4. The first group specified in the runtime configuration file
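
For example, to select the lab2 group from CODE EXAMPLE C-2 for a particular build, you could set the DMAKE_GROUP makefile macro; this sketch assumes the group names shown in that example:

# Distribute jobs to the build servers in the lab2 group.
DMAKE_GROUP= lab2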

The dmake utility allows you to specify a different execution path for each build server. By default dmake looks for the dmake support binaries on the build server in the same logical path as on the dmake host. You can specify alternate paths for build servers as a host attribute in the .dmakerc file. For example:

CODE EXAMPLE C-3 .dmakerc File With Alternate Paths for Build Servers

group lab1 {
        host falcon     { jobs = 10, path = "/set/dist/sparc-S2/bin" }
        host hawk       { path = "/opt/SUNWspro/bin" }
}


You can use double quotation marks to enclose the names of groups and hosts in the .dmakerc file, which gives you more flexibility in the characters you can use in those names. Digits are allowed as well as alphabetic characters; names that start with digits should be enclosed in double quotation marks. For example:

CODE EXAMPLE C-4 .dmakerc File With Special Characters

group "123_lab" {
	 	 	 host "456_hawk"	 { path = "/opt/SUNWspro/bin"                  }
}


The Build Server

Each build server that is to participate in a distributed build must have a file called /etc/opt/SPROdmake/dmake.conf. This file is the build server configuration file and specifies the maximum total number of dmake jobs that can be distributed to that particular build server by all dmake users. In addition, it might specify the nice priority under which all dmake jobs should run.


Note – If the /etc/opt/SPROdmake/dmake.conf file does not exist on a build server, no dmake jobs will be allowed to run on that server.

CODE EXAMPLE C-5 is an example of an /etc/opt/SPROdmake/dmake.conf file. This file sets the maximum number of dmake jobs permitted to run on the build server (from all dmake users) to eight (8) and sets the nice priority under which those jobs run to 5.

CODE EXAMPLE C-5 dmake.conf File

max_jobs: 8
nice_prio: 5


You can use a machine as a build server if it meets the following requirements:

Impact of the dmake Utility on Makefiles

To run a distributed make, use the dmake executable in place of the standard make utility. You should understand the Solaris make utility before you use dmake. To read more about the make utility, see the Solaris Programming Utilities Guide and the make(1) man page. If you already use the make utility, the transition to dmake requires little or no alteration to your makefiles.

The methods and examples shown in this section present the kinds of problems that lend themselves to being solved with dmake. This section does not suggest that any one approach or example is the best.

As procedures become more complicated, so do the makefiles that implement them. The examples in this section illustrate common code-development predicaments and some straightforward methods to simplify them using dmake.

If you use a makefile template from the outset of your project, custom makefiles that evolve from the makefile templates will be more familiar, easier to understand, easier to integrate, easier to maintain, and easier to reuse.

Concurrent Building of Targets

Large software projects typically consist of multiple independent modules that can be built concurrently. The dmake utility supports concurrent processing of targets on multiple machines over a network. This concurrency can markedly reduce the time required to build a large project.

When given a target to build, dmake checks the dependencies associated with that target, and builds those that are out of date. Building those dependencies may, in turn, entail building some of their dependencies. When distributing jobs, dmake starts every target that it can. As these targets complete, dmake starts other targets. Nested invocations of dmake are not run concurrently by default, but this can be changed (see Parallelism for more information).

Since dmake builds multiple targets concurrently, the output of each build is produced simultaneously. To avoid intermixing the output of various commands, dmake collects output from each build separately. The dmake utility displays the commands before they are executed. If an executed command generates any output, warnings, or errors, dmake displays the entire output for that command. Since commands started later might finish earlier, this output might be displayed in an unexpected order.

Limitations on Makefiles

Concurrent building of multiple targets places some restrictions on makefiles. Makefiles that depend on the implicit ordering of dependencies might fail when built concurrently. Targets in makefiles that modify the same files may fail if those files are modified concurrently by two different targets. Some examples of possible problems are discussed in this section.

Dependency Lists

When building targets concurrently, it is important that dependency lists be accurate. For example, if two executables use the same object file but only one of them specifies the dependency, the build may cause errors when done concurrently. Consider the following makefile fragment:

all: prog1 prog2 
prog1: prog1.o aux.o 
	 $(LINK.c) prog1.o aux.o -o prog1 
prog2: prog2.o 
	 $(LINK.c) prog2.o aux.o -o prog2 

When built serially, the target aux.o is built as a dependent of prog1 and is therefore up-to-date for the build of prog2. When built in parallel, the link of prog2 can begin before aux.o is built, and that link is therefore incorrect. The .KEEP_STATE feature of make detects some dependencies, but not the one shown above.
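
For example, one straightforward fix is to list aux.o as an explicit dependency of prog2 as well, so that either link can start only after aux.o is built (command lines are indented with a tab, as make requires):

all: prog1 prog2
prog1: prog1.o aux.o
	$(LINK.c) prog1.o aux.o -o prog1
prog2: prog2.o aux.o
	$(LINK.c) prog2.o aux.o -o prog2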

Explicit Ordering of Dependency Lists

Other examples of implicit ordering dependencies are more difficult to fix. For example, if all of the headers for a system must be constructed before anything else is built, then everything must be dependent on this construction. This causes the makefile to be more complex and increases the potential for error when new targets are added to the makefile. The user can specify the special target .WAIT in a makefile to indicate this implicit ordering of dependents. When dmake encounters the .WAIT target in a dependency list, it finishes processing all prior dependents before proceeding with the following dependents. More than one .WAIT target can be used in a dependency list. The following example shows how to use .WAIT to indicate that the headers must be constructed before anything else.

all: hdrs .WAIT libs functions 

You can add an empty rule for the .WAIT target to the makefile so that the makefile is compatible with the make utility.
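
For example, a minimal empty rule is a line consisting of the target and a colon, with no dependents or commands:

# Empty rule so that make, which does not recognize .WAIT as special,
# treats it as an ordinary, trivially up-to-date target.
.WAIT: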

Concurrent File Modification

You must make sure that targets built concurrently do not attempt to modify the same files at the same time. This can happen in a variety of ways. If a new suffix rule is defined that must use a temporary file, the temporary file name must be different for each target. You can accomplish this by using the dynamic macros $@ or $*. For example, a .c.o rule that performs some modification of the .c file before compiling it might be defined as:

.c.o:
    awk -f modify.awk $*.c > $*.mod.c
    $(COMPILE.c) $*.mod.c -o $*.o
    $(RM) $*.mod.c

Concurrent Library Update

Another potential concurrency problem arises with the default rule for creating libraries, which also modifies a fixed file: the library itself. The following inappropriate .c.a rule causes dmake to build each object file and then immediately archive it. When dmake archives two object files in parallel, the concurrent updates corrupt the archive file.

.c.a:
    $(COMPILE.c) -o $% $<
    $(AR) $(ARFLAGS) $@ $%
    $(RM) $%

A better method is to build each object file and then archive all the object files after completion of the builds. An appropriate suffix rule and the corresponding library rule are:

.c.a:
    $(COMPILE.c) -o $% $<
 
lib.a: lib.a($(OBJECTS))
    $(AR) $(ARFLAGS) $@ $(OBJECTS)
    $(RM) $(OBJECTS)

Multiple Targets

Another form of concurrent file update occurs when the same rule is defined for multiple targets. An example is a yacc(1) invocation that builds both a parser and a header for use with lex(1). When a rule builds several target files, it is important to specify them as a group using the + notation, especially in the case of a parallel build.

y.tab.c y.tab.h: parser.y 
    $(YACC.y) parser.y

This rule is actually equivalent to the two rules:

y.tab.c: parser.y
	 $(YACC.y) parser.y
y.tab.h: parser.y
	 $(YACC.y) parser.y

The serial version of make builds the first rule to produce y.tab.c and then determines that y.tab.h is up-to-date and need not be built. When building in parallel, dmake checks y.tab.h before yacc has finished building y.tab.c, notices that y.tab.h needs to be built, and starts another yacc invocation in parallel with the first one. Since both yacc invocations write to the same files (y.tab.c and y.tab.h), these files are apt to be corrupted and incorrect. The correct rule uses the + construct to indicate that both targets are built simultaneously by the same rule. For example:

y.tab.c + y.tab.h: parser.y
	 $(YACC.y) parser.y

Parallelism

Sometimes file collisions cannot be avoided in a makefile. An example is xstr(1), which extracts strings from a C program to implement shared strings. The xstr command writes the modified C program to the fixed file x.c and appends the strings to the fixed file strings. Since xstr must be run over each C file, the following new .c.o rule is commonly defined:

.c.o:
	 $(CC) $(CPPFLAGS) -E $*.c | xstr -c - 
	 $(CC) $(CFLAGS) $(TARGET_ARCH) -c x.c
	 mv x.o $*.o

The dmake utility cannot concurrently build targets using this rule since the build of each target writes to the same x.c and strings files. Nor is it possible to change the files used. You can use the special target .NO_PARALLEL: to tell dmake not to build these targets concurrently. For example, if the objects being built using the .c.o rule were defined by the OBJECTS macro, the following entry would force dmake to build those targets serially:

.NO_PARALLEL: $(OBJECTS)

If most of the objects must be built serially, it is easier and safer to force all objects to default to serial processing by including the .NO_PARALLEL: target without any dependents. Any targets that can be built in parallel can be listed as dependencies of the .PARALLEL: target:

.NO_PARALLEL:
.PARALLEL: $(LIB_OBJECT)

When dmake encounters a target that invokes another dmake command, it builds that target serially, rather than concurrently. This prevents problems where two different dmake invocations attempt to build the same targets in the same directory. Such a problem might occur when two different programs are built concurrently, and each must access the same library. The only way for each dmake invocation to be sure that the library is up-to-date is for each to invoke dmake recursively to build that library. The dmake utility recognizes a nested invocation only when the $(MAKE) macro is used in the command line.

If you nest commands that you know will not collide, you can force them to be done in parallel by using the .PARALLEL: construct.
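
For example, the following sketch marks two nested dmake invocations as safe to run concurrently; the directory and library names are illustrative, and dmake recognizes the nested invocations because the command lines use the $(MAKE) macro:

# libfoo and libbar each run a nested dmake in a separate
# subdirectory, so their builds cannot collide.
.PARALLEL: libfoo libbar

libfoo:
	cd foo; $(MAKE) libfoo.a

libbar:
	cd bar; $(MAKE) libbar.a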

When a makefile contains many nested commands that run concurrently, the load-balancing algorithm may force too many builds to be assigned to the local machine. This may cause high loads and possibly other problems, such as running out of swap space. If such problems occur, allow the nested commands to run serially.

