The biggest problem with microkernels is that they are very difficult
to test and debug. Anyone who has tried C or C++ programming with more
than five or six threads in a single process doesn't need much
reminding of the race conditions and deadlocks that come with trying
to run a thread for every "process". A typical Linux machine can be
running hundreds of concurrent processes.
Linux and FreeBSD both take approaches that provide good compromises
between the massive monolithic kernels of systems like AT&T System III,
VMS, and Windows NT 3.1 through 4.0, and the microkernels of Mach,
QNX, and Minix.
Linux and BSD both evolved as products of enhanced memory management,
including demand-paged virtual memory. The 4.x BSD kernel was
originally designed to take advantage of the new demand-paged virtual
memory of the VAX-11/750 and 11/780, which could map relatively small
"pages" of physical memory, often less than 1 kilobyte each, into a
virtual address space of thousands of pages. When physical memory was
full, less frequently used pages could be "paged" out to the hard
drive.
As a side effect, BSD developers found ways to handle common
operating-system functions, such as interprocess communication, using
this same demand-paged virtual memory. Pages could be filled by one
process, then simply mapped into another process, which could read
them. Pages could also be "copied" by mapping the same physical page
to two locations in the MMU, which meant that commonly used routines
could be "shared" among multiple processes. To prevent one process
from corrupting another, memory could be marked read-only; when an
application attempted to write to that memory, the operating system
could allocate a fresh page and give the writing process its own
private copy, the technique now known as copy-on-write.
This wasn't totally unique or revolutionary. IBM had been doing this
in its OS/360 and OS/370 operating systems, and DEC had been doing
this with VMS (which DEC tried to bundle with the VAX).
Linus was also intrigued by the possibilities of the memory management
of the 80386 processor. He found a number of creative ways to
accelerate context switching and interprocess communication. Again,
not particularly revolutionary, but quite advanced considering that
the dominant operating system at the time he introduced Linux was
Windows 3.0; Windows 3.1 had only just come out.
Still, Tanenbaum's concept of using multiple bulkheads and
compartments to reduce the risk of sinking was not lost on Linus.
Linux also has numerous levels of control and containment,
restricting the amount of damage a virus or rogue process can do.
Ironically, the microkernel vs. monolithic-kernel debate is almost as
old as the RISC vs. CISC debate, and in both cases what has evolved
are hybrids providing many of the best features of both. A SPARC or
PowerPC is a Reduced Instruction Set Computer (RISC), but very large
L1, L2, and even L3 caches make it possible to keep entire libraries
in the fastest memory on the chip, giving it CISC-like capabilities.
Intel and AMD, on the other hand, have complex instructions, but the
microcode is implemented more like "inline macros" being fed into a
RISC core, allowing better management of both instruction and data
caches.
The same is true of Linux and BSD. They provide many of the best
features of a microkernel while retaining the performance and
flexibility of a more robust monolithic kernel. They provide a kernel
with good interfaces to standard drivers and resources, and at the
same time support "plug-ins" or "modules" that can be loaded on
demand, making it easier to support the millions of possible hardware
permutations in modern Intel/AMD-based PCs.