The Coming of Windows NT

© Mercury Communications Ltd - April 1993



You either love UNIX or hate it; in the world of operating systems there has been little middle ground. Whatever else, no one can be indifferent to UNIX, which has been at the heart of the movement towards open computing throughout the 1980s. Ironically, many of the original benefits offered by UNIX are now proving to be a stumbling block, and it could very well be that an upstart in the form of Microsoft's Windows New Technology (Windows-NT) will take pole position in the commercial operating system market of the late 1990s. If you feel that in 1993 a commitment to UNIX is a safe decision, along the lines of the oft-quoted "nobody ever got fired for buying IBM", then read on.

What is an Operating System (OS)?

All general-purpose computers have an operating system (OS), whether they be small desktop systems, workstations, minicomputers, or mainframes. The first real OS for microprocessor-based PCs was CP/M, developed in the late 1970s by Gary Kildall of Digital Research. This reigned supreme until ousted by PC-DOS, developed by Microsoft under contract to IBM in the early 1980s.

Figure 1 - The Role of the Operating System

An operating system acts as a software buffer between application programs and the underlying hardware of the computer (Figure 1). The OS provides a standard by which applications can access hardware resources within the computer. This interface is generally called the Applications Programming Interface (API). A typical OS call might be to write a block of data to disk. In this case, the application would put the data into a defined part of RAM or into a register and request that it be written to disk. The OS would then take over, undertaking all the 'low level' code necessary to achieve the write. In part, an OS can be considered as a library of low-level subroutines that control all hardware aspects of the computer. In a single-processor computer, the OS also shares the central processor unit (CPU) between users using a time-slice principle. With a multi-processor computer, concurrent execution of programs is achieved through the OS sharing the application between a number of parallel processors.
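To make the write-to-disk example concrete, here is a minimal sketch in C of an application handing a block of data to the OS through an API call. The POSIX calls open(), write() and close() are used purely as a familiar stand-in for whichever interface a particular OS exposes; the file name is invented for illustration.

```c
/* Minimal sketch: an application hands a block of data to the OS and asks
 * for it to be written to disk.  open(), write() and close() stand in for
 * whatever API a given OS actually exposes; all the 'low level' work
 * (device drivers, disk geometry, caching) is hidden behind them. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    const char block[] = "one block of application data";

    /* Ask the OS to open (or create) a file; the OS returns a handle. */
    int fd = open("example.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    /* The application never touches the disk itself: it passes a buffer
     * and a length, and the OS performs the transfer on its behalf. */
    if (write(fd, block, strlen(block)) < 0) {
        perror("write");
        close(fd);
        return EXIT_FAILURE;
    }

    close(fd);   /* Release the handle back to the OS. */
    return EXIT_SUCCESS;
}
```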

The History of UNIX

It is now difficult to remember the world of computers in the 1970s, which was very different from today's. The market was dominated by a few major manufacturers who believed that providing proprietary hardware and software products was a good marketing policy to pursue. Operating systems like IBM's MVS, DEC's VMS, ICL's VME and HP's MPE were all highly proprietary. This made the APIs highly vendor specific, turning the job of supporting an application program on a number of machines into a task not to be undertaken lightly. Even worse, major differences existed between high-level language compilers such as FORTRAN, making recompilation on another manufacturer's machine a non-trivial task. Prices of computer equipment, ancillary equipment, and services were held high, making the gross profit margins of the mainframe computer manufacturers excessive by today's standards. Everyone was happy, except for the customer.

Most mainframe machines were physically large because there was little integration of their main components. The processor unit extended to many large circuit boards, and memory boards, based on 4Kbit RAMs, filled complete cabinets. Using bipolar technology, currents of 200 to 300 amps at 5 volts were needed to power the units. The high prices were generally justified by the high cost of developing the hardware and software of new machines from scratch. The arrival of the microprocessor in the late 1970s changed all that. With these computers-on-a-chip, it was possible to achieve a level of processing performance in a single integrated circuit (IC) that previously filled a large cabinet.

In the early 1980s, it was soon realised that low-cost computers could be put together and marketed at cost/performance ratios previously unheard of in the mainframe- and minicomputer-dominated world. A pack of lean and mean companies appeared that focused exclusively on this potential new market. They included Sun, NCR (with the Tower), Apollo, and Pyramid. Interestingly, the microprocessor chosen was Motorola's 68000, which went on to better things when Apple announced their secret Lisa product in 1981. The 8088/86-based IBM PC was yet to arrive. There were many new applications to which the horsepower of these new desktop workstations could be harnessed, dominated by computer-aided design (CAD) applications in the mechanical, architectural and electronic industries. An early question was: which operating system was to be used by these new ventures?

As a response to the stifling world of large business, UNIX was born out of the concept of development by community rather than committee. Following the earliest UNIX, developed by Ken Thompson of Bell Laboratories, evolution was taken up by a large unaffiliated group: students. In the early 1980s the first shrink-wrapped UNIX came out of the University of California, Berkeley, known as 4.0BSD (Berkeley Software Distribution). Talk in those days was of a lot of 'neat stuff' put together for 'fun'. BSD technology became the foundation of several successful commercial operating systems, including SunOS and NeXT Computer's NeXTstep.

Concepts such as networking, full-screen editors, fast file systems and many others were born in Berkeley, and many vendors opted for Berkeley's work rather than that of AT&T. Before long, UNIX code had found its way into most universities around the world, and student and corporate engineers worked side by side to develop a plethora of system applications that enhanced basic UNIX. In parallel with this activity, AT&T was imposing what were seen as high licensing fees and increasingly restrictive terms, forcing commercial implementors to look for alternatives. This was how the Open Software Foundation (OSF) was born in 1988. The aim of this group, which included HP, DEC, and Apollo, was to get out from under AT&T's thumb by creating a UNIX-like operating system that did not attract the expensive AT&T licensing fees. OSF's first product, Motif, was quickly taken up as the first commercial toolkit for creating windows-based Graphical User Interfaces (GUIs). OSF followed with OSF/1, an operating system built around Carnegie Mellon University's Mach kernel (i.e. the core of the operating system). The tactic was successful; shortly after OSF was formed, AT&T turned around completely and revamped its licensing policies.

UNIX commercialisation had begun. Instead of getting UNIX from AT&T or Berkeley, consumers began getting it from third-party packagers such as Sun, Microsoft, and the Santa Cruz Operation (SCO). Microsoft's UNIX was called XENIX. UNIX, in one form or another, could now be run on PCs, workstations, minicomputers and even mainframes. Although originally the principal benefit of UNIX was its openness and availability on a wide range of platforms, this fast became its major problem and, as we shall see later, could lead to its demise.

As each new vendor jumped on the UNIX bandwagon in an attempt to stop sliding market share, the UNIX code size ballooned, it got slower, and the sheer volume of code and contributors fostered bugs and incompatibilities. UNIX was out of control, which played into the hands of the proprietary operating system competitors such as IBM. If early commercial adopters of UNIX had focused more on reliability, ease of use, and cross-platform compatibility, rather than heaping on features that would position them against other open-system UNIX competitors, none of the proprietary operating systems such as VME, VMS, or MVS would have had any reason to exist. The exciting openness of the early days of UNIX gave way to separatism and inter-vendor bickering. By the late 1980s this misplaced competitiveness had resulted in over twenty-five different, and incompatible, variants of UNIX being in use, the top six being SunOS, SCO XENIX and V/386, HP-UX, DEC's Ultrix, AT&T's System V release 4.0, and IBM's AIX. Any semblance of the original goals of UNIX, such as open systems and an operating system that would support applications across a range of platforms, seemingly disappeared. A cynic would say that the concept of open systems had been hijacked by corporate management and marketing staff as the new sales ploy to grab market share from the old players.

Figure 2 - UNIX Hardware Costs 90 vs. 91 vs. 92

A few days before Christmas 1992, Novell, the major US networking company, signed a letter of intent to purchase UNIX System Laboratories (USL), the company set up by AT&T to license UNIX. It is remarkable, given the history and origins of UNIX, that this announcement has resulted in a dearth of comment and has almost been ignored by the commercial vendors of UNIX operating systems. It could be conjectured that this is because nobody cares now that most computer vendors have their own proprietary versions of UNIX and are in control of their own destiny. Or could it be that the world has decided that NT is the operating system of the future and has thrown in the towel?

Despite its problems, UNIX is not dead. UNIX remains the only operating system available today that potentially offers multitasking, graphics, and cross-platform compatibility. Dataquest estimates that in 1991 UNIX sales (systems and OSs) totalled 1.2 million units and were valued at $18.2 billion in revenue. By 1996 this could grow to $44.7 billion on unit sales of 4.1 million (Figure 2).

UNIX Bits and Pieces

UNIX originated in the world of dumb terminals, where communication with computers took place via ASCII text on visual display units (VDUs). The move towards graphical user interfaces (GUIs) was welcomed by all, but applying the GUI concept to UNIX was problematic. As with Windows on DOS, the GUI was not an integral part of UNIX but was offered as a thin user-interface layer. This works well until a GUI application program crashes, at which point control is passed back to a full-screen character-based interface and the normal 'hieroglyphic' UNIX command line. Although it hasn't sold well, the NeXT machine has shown that it is possible to rewrite UNIX with fully integrated graphics in the form of NeXTstep. Here, every single command and message takes place graphically.

PCs and UNIX have not been a good mix. PC versions of UNIX have been plagued by bugs, incompatibilities, and a lack of standards. Getting UNIX to work on a PC has been a black art because it was so fussy: you had to have the right processor, the right disk, the right bus and the right display card. Whereas DOS and Windows run on an out-of-the-box PC, UNIX never did. Indeed, many applications developers needed a separate version for each UNIX flavour. Because of this, UNIX never made good inroads on the desktop.

The Advent of OS/2 and NT

In the late 1980s IBM and Microsoft were partners in the PC operating system market, based on Microsoft's development of PC-DOS and Windows. Both companies needed a multi-user, multi-tasking operating system for use on server platforms, and a joint proposal resulted in OS/2. After several years of co-operation, major disagreements between IBM and Microsoft over personalities, politics, and technical and strategic issues centred on Microsoft's release of Windows v1.0 and 2.0. IBM and Microsoft formally split, with Microsoft abandoning its OS/2 efforts. IBM ploughed ahead with OS/2 independently and launched a rushed release, as it turned out rather prematurely, in 1989. OS/2 was initially 'riddled' with technical problems, but with the advent of v2.0 in 1992 it has gone on to gain significant market share in the server market. Microsoft, on the other hand, formalised Windows NT and put all of its efforts into this area, ready for a much-delayed launch in mid-1993.

Since OS/2's release and its recent first major upgrade, v2.0, it has not had a major impact on the PC world, caused in part by IBM's ever-decreasing influence on the PC marketplace; concerns about the robustness of the product as a network server; and, most importantly, by being overshadowed by the imminent release of Microsoft's NT product.

NT Close Up

Microsoft's NT operating system first saw the light of day in November 1989 running on a 'hot' new reduced instruction set computer (RISC) chip from Intel, the i860. The user interface was OS/2. By 1992 Microsoft's allegiance had switched to a combination of the MIPS R4000, Intel's 80x86 architecture (including its new Pentium [586], due for release late this year), and DEC's 64-bit Alpha.

The micro kernel of NT adapts readily to any microprocessor, whether RISC or complex instruction set computer (CISC), e.g. Pentium or 68040, and is capable of hosting certain alien OS applications. Do not be misled by the term micro kernel: while the actual kernel is only 60 kbytes, the full system requires 16 Mbytes of RAM (as of the July 1992 beta version). Although this is likely to be reduced by the time it ships, NT's resource appetite puts it well beyond what is standard for the corporate desktop.

NT's Basic Building Blocks

Figure 3 shows the core structure of Windows NT. No matter whether NT finds itself running on a single processor, a multi-processor, a CISC or a RISC, the NT kernel always sees the same view of the underlying hardware thanks to the hardware abstraction layer (HAL). There is a standard HAL for 386/486 AT-bus single processor systems and one for the R4000 single processor system. NCR has written two: one for its four-processor 3450 and one for its eight-processor 3550. Compaq has written one for its dual-processor SystemPro, and Wyse for its three-processor 7000i. And, most interestingly, DEC has announced an £8,000 desktop NT machine based on its new powerful Alpha RISC chip.

The NT kernel manages memory, basic I/O, security, disk access, and the subsystems that emulate the OS/2, Windows 3, MS-DOS and POSIX APIs, as well as the applications that use those APIs. The kernel also manages such issues as context switching (time-sharing between each user's applications), exception handling (the management of error conditions), interrupt handling (interrupts are the way high-priority system hardware components, such as real-time clocks and hard disks, tell the OS that they have data ready for processing), and multiprocessor synchronisation.

On top of the NT kernel is layered a group of operating system emulators, so that it is possible to run applications developed for other OSs with minimal modification. It is interesting to note that, unlike OS/2, a DOS emulator is not envisaged. This could be considered risky, but it is very likely that if NT is successful most DOS software houses will put in the effort to rewrite the necessary code to work with NT.

Some of the key features of NT are:

True 32-bit operating system

Like OS/2, Windows NT is a full 32-bit operating system, unlike the current combination of DOS and Windows, which is 16-bit.

Portability

PC buyers in medium-sized companies looking for a departmental server can look seriously at machines that are not based on Intel processors. Before OS/2 and Windows NT, a move away from Intel meant a move away from the established software standards in their companies: MS-DOS and Windows 3.x. More often than not this necessitated a move to UNIX; with NT, this is no longer required. If NT succeeds, the barrier between workstations and PCs may disappear (this was the original, but largely unfulfilled, promise of OS/2).

Figure 3 - Windows NT Architecture

Compatible with Windows 3.x

Windows NT is fully compatible with Windows 3.1 and with WIN32, a hybrid of OS/2 and Windows that supports 32-bit applications. Because WIN32 supports the Windows 3.1 API and other Windows libraries such as OLE, DDE, TrueType fonts, and the multimedia extensions, Windows 3.1 applications will readily port to NT with little or no modification. This leads to full compatibility between client and server platforms and applications, rather than the disjointed environment caused by running DOS and Windows on the client platform and UNIX on the server. It should be remembered, though, that Windows compatibility is not necessary on a server.

Symmetric multiprocessing (SMP)

The big strength of NT is that it is fully optimised for use in a multi-processor environment. A symmetric multiprocessing (SMP) system requires that all processors have identical instruction sets, memory management, and access to devices and peripherals. Further, all the processors should have the ability to interrupt other processors and be interruptible. Given such an environment, NT can efficiently carve up an application to be executed concurrently. Technically, the kernel maintains data structures such as a queue of threads that are ready to run and a matrix that describes running threads and their priorities. In an n-processor SMP system, NT guarantees that the n highest-priority threads will be running.

NT's scheduling is event driven. When an interesting event happens, such as a key being pressed, the key-processing thread's priority is temporarily increased. When the event has been processed, the priority drops back down again. The kernel can give the thread to any processor, but it favours the one it last executed on, in case the processor's cache memory still contains relevant data. During quiet times, the kernel creates artificial events to keep things going.
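The following toy sketch in C illustrates the kind of event-driven, priority-based dispatching described above: a ready queue of threads, a temporary priority boost when a key is pressed, and a preference for the processor whose cache is still warm. The structures, priorities and thread names are invented for illustration and are not NT's actual internals.

```c
/* Toy sketch of event-driven, priority-based dispatching.  All numbers
 * and names are illustrative only. */
#include <stdio.h>

#define NTHREADS 4

struct thread {
    const char *name;
    int base_priority;   /* long-term priority                         */
    int boost;           /* temporary boost after an interesting event */
    int last_cpu;        /* processor it last ran on (cache affinity)  */
};

static int effective(const struct thread *t) { return t->base_priority + t->boost; }

/* Pick the ready thread with the highest effective priority. */
static int pick_next(const struct thread t[], int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (effective(&t[i]) > effective(&t[best]))
            best = i;
    return best;
}

int main(void)
{
    struct thread ready[NTHREADS] = {
        { "background",  4, 0, 0 },
        { "spreadsheet", 8, 0, 1 },
        { "keyboard",    6, 0, 0 },
        { "idle",        1, 0, 1 },
    };

    /* A key is pressed: boost the key-processing thread temporarily. */
    ready[2].boost = 5;

    int next = pick_next(ready, NTHREADS);
    printf("dispatch '%s' (prefer CPU %d for warm cache)\n",
           ready[next].name, ready[next].last_cpu);

    /* Event handled: the boost decays and scheduling returns to normal. */
    ready[2].boost = 0;
    return 0;
}
```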

User/Supervisor Modes

The hardware abstraction layer and kernel run in supervisor mode (as with OS/2). The operating system emulation subsystems run in user mode. The Win32, OS/2 and POSIX emulators each have their own private, protected address space. This means that a user is limited to interaction with their own code and cannot modify the core operating system; only someone with supervisor capability is able to do that. This is not only key to maintaining security between users but also prevents a rogue application from bringing the whole system down at once, a major issue in the current Windows environment.

File Systems and Device Drivers

NT supports three file systems: the file allocation table (FAT) as used in DOS, the high performance file system (HPFS) as used with OS/2, and the new NT file system (NTFS). Why introduce yet another file system? HPFS and NTFS are closely related in many ways; for example, both support long 256-character file names, but NTFS adds some key strategic features. The base product supports disk striping. Striping is the transparent writing of data spread across several disks and can improve disk performance. In itself this does not improve resilience, but it does allow the joining of two physically different disks into a single logical drive. A user will still need to use LAN Manager NT to get fault-tolerant facilities such as mirroring (writing to two disks at the same time) or redundant array of inexpensive disks (RAID) techniques.
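As a rough illustration of the striping idea, the sketch below (in C, with invented disk and block numbers) shows how consecutive logical blocks of a single logical drive can be spread round-robin across two physical disks, which is what lets several spindles service one large transfer.

```c
/* Sketch of how striping joins several physical disks into one logical
 * drive: consecutive logical blocks are spread round-robin across the
 * disks.  Disk and block numbers are illustrative only. */
#include <stdio.h>

#define NUM_DISKS 2

struct location { int disk; long block; };

static struct location map_logical_block(long logical_block)
{
    struct location loc;
    loc.disk  = (int)(logical_block % NUM_DISKS);   /* which physical disk    */
    loc.block = logical_block / NUM_DISKS;          /* block within that disk */
    return loc;
}

int main(void)
{
    for (long lb = 0; lb < 6; lb++) {
        struct location loc = map_logical_block(lb);
        printf("logical block %ld -> disk %d, physical block %ld\n",
               lb, loc.disk, loc.block);
    }
    return 0;
}
```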

The most notable feature of the file system is the addition of built-in recoverability. NT logs all operations that affect the structure of a disk and stores them in the master file table. The log file is circular, and NTFS checks it periodically to monitor the number of operations stored. In the event of a crash, it replays the log automatically to restore the file directory structure to the instant just before the crash. Because of the journal file, NTFS can recover from a crash in seconds rather than the minutes typical of UNIX. NTFS stores filenames and data on disk in UNICODE, the new 16-bit international character set that is replacing ASCII. This means that the problems encountered when using non-English or mathematical symbols are, at long last, over. Finally, NTFS files can be made secure by attaching security descriptors to files and directories.
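The following toy C sketch illustrates the recovery principle rather than NTFS's actual on-disk format: metadata operations are first recorded in a log, and after a crash the log is replayed to rebuild the directory structure. The record layout and operations are invented for illustration.

```c
/* Toy illustration of log-based recovery: every operation that changes
 * the directory structure is first recorded in a log; after a crash the
 * log is replayed to bring the structure back to its pre-crash state. */
#include <stdio.h>

enum op { OP_CREATE, OP_RENAME, OP_DELETE };

struct log_record {
    enum op     operation;
    const char *name;
};

/* A tiny, fixed "log" standing in for a circular log file on disk. */
static const struct log_record log_file[] = {
    { OP_CREATE, "report.doc" },
    { OP_RENAME, "report.doc -> report-final.doc" },
    { OP_DELETE, "scratch.tmp" },
};

static void replay(const struct log_record *log, int n)
{
    static const char *names[] = { "create", "rename", "delete" };
    for (int i = 0; i < n; i++)
        printf("redo %s: %s\n", names[log[i].operation], log[i].name);
}

int main(void)
{
    /* After a crash: re-apply the logged metadata operations. */
    replay(log_file, (int)(sizeof log_file / sizeof log_file[0]));
    return 0;
}
```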

C2 security

When NT ships it will be C2-certifiable by the National Computer Security Center (NCSC) in the USA. A C2-secure system provides discretionary access control: the owner of a particular software 'object' dictates how, and by which other users of a network, it may be accessed. A future version of NT may support B-level security, or mandatory access control, which means that objects must carry sensitivity labels that govern access control on a system-wide basis.
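As a rough sketch of discretionary access control, the C fragment below models an object whose owner keeps a list of users allowed to access it, with the system checking that list before granting a request. The structures and user names are invented for illustration and bear no relation to NT's actual security descriptors.

```c
/* Toy sketch of discretionary access control: the owner of an object
 * grants access to named users, and the system checks that list. */
#include <stdio.h>
#include <string.h>

#define MAX_ENTRIES 4

struct object {
    const char *name;
    const char *owner;
    const char *allowed[MAX_ENTRIES];  /* users granted access by the owner */
};

static int access_allowed(const struct object *obj, const char *user)
{
    if (strcmp(user, obj->owner) == 0)
        return 1;                       /* owners always reach their objects */
    for (int i = 0; i < MAX_ENTRIES && obj->allowed[i]; i++)
        if (strcmp(user, obj->allowed[i]) == 0)
            return 1;
    return 0;
}

int main(void)
{
    struct object payroll = { "payroll.dat", "alice", { "bob", NULL } };

    printf("bob:   %s\n", access_allowed(&payroll, "bob")   ? "granted" : "denied");
    printf("carol: %s\n", access_allowed(&payroll, "carol") ? "granted" : "denied");
    return 0;
}
```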

To simplify the use of security in client-server applications, NT introduces the concept of impersonation. When a client and server talk, the server can temporarily assume the identity of the client so that it can evaluate a request for access relative to that client's rights. Once the request has been dealt with, the server reverts to its own identity.
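A hedged sketch of how a pipe server might use impersonation is shown below, using the Win32 calls ImpersonateNamedPipeClient() and RevertToSelf(). The pipe name and control flow are illustrative, and error handling is trimmed to the essentials.

```c
/* Sketch of impersonation over a named pipe: the server briefly adopts
 * the client's identity so that access checks reflect the client's
 * rights, then reverts to itself. */
#include <windows.h>

int main(void)
{
    HANDLE pipe = CreateNamedPipeA("\\\\.\\pipe\\example",
                                   PIPE_ACCESS_DUPLEX,
                                   PIPE_TYPE_MESSAGE | PIPE_WAIT,
                                   1, 512, 512, 0, NULL);
    if (pipe == INVALID_HANDLE_VALUE)
        return 1;

    /* Wait for a client to connect to the pipe. */
    if (ConnectNamedPipe(pipe, NULL)) {
        /* Temporarily take on the client's identity: any access checks
         * made now are evaluated against the client's rights, not the
         * server's. */
        if (ImpersonateNamedPipeClient(pipe)) {
            /* ... open files, check permissions, etc., as the client ... */

            RevertToSelf();   /* Return to the server's own identity. */
        }
        DisconnectNamedPipe(pipe);
    }
    CloseHandle(pipe);
    return 0;
}
```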

Networking

NT bundles a peer-to-peer LAN manager, as already seen in the Windows for Workgroups product. It is not appropriate to go into too much depth in this report, but NT also supports the Remote Procedure Call (RPC) protocol of the Open Software Foundation's (OSF) Distributed Computing Environment (DCE). RPC allows many of NT's utilities to be operated remotely. For example, Print Manager handles true remote printing: there is no need to install a printer driver locally because you can execute a print run on a remote machine using its printer driver. You can point NT's performance monitor, process monitor and other utilities at either the local machine or a remote one.

UNIX vs. Windows NT



UNIX = NT

Long file names
Multitasking
Shared libraries
Integral peer networking

NT better than UNIX

Symmetric multiprocessing. NeXTstep is threaded but lacks SMP; Sun's Solaris claims to have threaded SMP.
Easy to write layered device drivers; UNIX drivers are monolithic and difficult to write and maintain.
Well-defined inter-application communication (OLE)
Well-integrated TrueType scalable fonts
Source-code compatible with Windows 3.x

UNIX better than NT

Multi-user capabilities
Distributable graphical user interface (GUI) now
Well-understood network services: TCP/IP and RPC
Well-standardised e-mail
Available now!


The Birth of COSE

A few weeks before going to press with this issue of Technology Watch, a new fight-back initiative by UNIX vendors was announced in the form of the Common Open Software Environment (COSE). Clearly triggered by the imminence of Windows-NT, IBM, Hewlett-Packard, the Santa Cruz Operation (SCO), Sun, Univel and USL (now owned by Novell) have got together with the aim of delivering a common operating environment for their different UNIXes. The announcement also validates (to the author anyway) the view that the UNIX industry recognises that it is still fractured and has so far failed to deliver products in the true spirit of open systems. Would HP be in COSE if its RISC technology had put it in a leadership position? Would IBM support the initiative if it had not plunged into the red last year? Would Sun have joined if Solaris had been a major success? Probably not, but their joining together will, if they can actually co-operate, form a credible competitor to Windows-NT.

COSE does not mean that HP-UX, AIX, SCO Open Desktop, SunSoft Solaris, Univel UnixWare, and USL UNIX SVR4 will finally become a single UNIX. What it does mean is that users will experience the same 'look and feel' irrespective of which variety of UNIX they use. This ambition will involve the development of specifications in six key areas: a common desktop environment (GUI), networking, distributed object technology, systems management, graphics, and multimedia.

X.Desktop

COSE will not be developing products from scratch. As an example of what COSE will be getting up to, let us look at one area. The key component of COSE will be a common graphical user interface (GUI), otherwise known as a desktop. As discussed earlier, the GUI, in the form of Motif, formed a key part of OSF's work in the late 1980s.

Figure 4 - The UNIX COSE Initiative

An English company called IXI Ltd, of Cambridge, sells a product called X.Desktop, which is the nearest thing the UNIX world has to a standard desktop. X.Desktop is a desktop manager plus supporting programs that together form what the user sees and feels. It has been highly successful, selling 250,000 copies world-wide. In February 1993, IXI was bought by the Santa Cruz Operation, thus carrying on the UNIX story of rationalisation and consolidation.

X.Desktop, however, is not a single product. Its main role in life has been to provide a common look and feel to a diverse number of UNIX implementations. Thus X.Desktop itself comes in a number of versions, each one tailored to the platform it is intended for. IXI also created a good business out of customising the product to individual customers' whims.

All this work done by IXI will be taken over by COSE. The COSE desktop will, it can be assumed, incorporate parts of X.Desktop combined with elements from HP, IBM, Sun, and other members of the group. Unlike Motif and X.Desktop, this time there will be no variation in implementation from supplier to supplier. Specifications agreed by COSE will be submitted for ratification by X/Open, which will also develop test, verification, and performance suites.

All this does sound like the early days of OSF; maybe this time, with Windows-NT at its heels, the UNIX industry can pull it off and produce a common UNIX interface, albeit one based on several manufacturers' incompatible UNIX kernels. Preliminary specifications are due to be released at the end of June, and the first release of product is expected in the first half of 1994.

UNIX R.I.P.?

The UNIX market can be divided into two segments: commercial and technical. In 1993, technical is by far the larger, but commercial is growing fast. It is likely that gurus in the technical markets will stay with UNIX to the bitter end, but with the impetus of many millions of Windows users behind NT, it will make strong inroads in the commercial application arena. Yes, at first NT will perhaps have key weaknesses in the areas of performance and reliability, which will slow its penetration into the server market, where greater reliability is required than on workstations. Yes, NT will initially lack the strong network support of UNIX. Yes, NT can be knocked because of its 'PC' connections. But in a recent US survey, more than 60% of UNIX workstation users said they were evaluating NT, as opposed to only 45% for Solaris, the most popular of the UNIX variants.

NT calls for a powerful PC, demanding 16 Mbytes of RAM, 100 Mbytes of disk and a CD-ROM. This level of resource is not unusual for power users of PCs but is a little daunting for the average small desktop or client machine. Microsoft has already announced the Chicago project, a cut-down version of NT aimed at the desktop. Although DOS 6.0 has only just been announced, it could very well be that DOS 7.0 will be seen in the form of a mini-NT, finally signalling the end of DOS as we know it.

Although accurate forecasting is impossible without access to people with the power of prescience, it is likely that Microsoft has the strength, derived from its dominant position in the desktop market, to take on UNIX head-on in the commercial arena and achieve double-digit market penetration within five years. Have the UNIX vendors left it too late with COSE? Time will tell, but have you ordered your beta version of Windows NT for your PC yet?

Mercury and Cable & Wireless acknowledge all of the trademarks of the companies mentioned in this article.
