A Piece of the Action – The Multi-User Sig/Net

Terry Lang investigates the benefits – and drawbacks – of a shared access system from Shelton Instruments.

Shelton001

Front view of a multi-user system with hub and satellites stacked together.

In building their phenomenal success, microcomputers have had the advantage of needing to provide an operating system which supports only a single user. This has enabled them to avoid much of the dead weight which encumbers mainframe systems. However, there has always been a need for micro systems to support a small number of simultaneous users – for example in neighbouring offices in a small business. (Such users will always need to share access to common data for business purposes. Sometimes users choose to share peripherals – eg, hard disks or printers – simply to save money, but the economic reasons for this latter type of sharing are likely to weaken as the technology continues to develop.)

Even in a shared microcomputer system, it has generally been economic to provide a separate processor for each user, and thus the spirit of simplicity in the operating system can be maintained. Nonetheless, the administration of the shared data does impose an additional challenge, and it is always interesting to see how this challenge is met.

In this article I will be looking at the way this is tackled by the Sig/net system produced by Shelton Instruments Ltd in North London. During a previous incarnation I was responsible for buying a large number of single-user Sig/net systems, which met all my expectations at that time, and I was keen to see how the multi-user combination would be carried through.

Hardware

Shelton002

Rear view of multi-user system showing ribbon bus cable and terminal and printer ports.

The original single-user Sig/net is itself based on a ribbon-cable bus which connects together the internal components: a Z80 processor and memory board, a disk controller board, and communications boards (serial and/or parallel). In developing a multi-user system it was therefore a natural step to extend the bus cable and simply chain on other systems, each supporting a single user by means of its own processor and memory board. This is illustrated in Figure 1.

Figure_001

Fig. 1. Modules making up the ‘hub’ and user satellite processors on a multi-user system.

The central or ‘hub’ system with one floppy disk and one hard disk fits in a case of its own. The satellite user systems fit three to a case, and these cases are designed to stack neatly with the ‘hub’ as shown. As many satellite cases as may be needed can be chained on via the bus cable. (I understand a 14-user system is the largest installed so far.)

The basic component boards, with the exception of the new bus connector, are all those which have proved very reliable in the original single-user system. (Since the company has a considerable background in process control, reliability should be something it appreciates.) To my mind the cases do run rather hot, but I am told this has not caused problems.

The bus cable runs at a maximum speed somewhat below 1 MHz, not particularly fast but adequate for the purpose, as I shall discuss below. More significantly, it has a maximum length of only a few feet. This is sufficient for stacking the cases as illustrated in the photographs, but does mean that all the processors and disks have to be sited in the same room. Of course the user terminals are connected via standard RS232 serial communications ports, and can thus be located wherever required (using line drivers or modems for the longer distances).

Alternatively, it is also possible to connect a complete satellite to the hub via an RS232 link. This would enable a satellite with its own floppy disk to be placed alongside a user and distant from the hub hardware, but it would mean that access to the files on the hub would be correspondingly slower.

Both the hub and the user satellites use Z80A processors running at 4 MHz. For the purposes of the standard PCW Benchmark programs, which are entirely processor-bound and make no reference to disks, it did not matter that a multi-user system was involved, since each Benchmark program ran in its own satellite processor plus RAM board, unaffected by the rest of the system. The Benchmark times, with the programs written in Microsoft Interpretive Basic, are given in the Benchmarks box at the end of this article.

These times are as good as one would expect from an equivalent single-user system and illustrate the benefits (or perhaps one should say the lack of drawbacks) of this kind of multi-user sharing. (Of course, where user satellites share access to the common hub filestore, then the user programs will slow each other down – this is discussed in detail below.)
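As a reminder of what these tests involve, Benchmark 3, for example, is essentially a tight arithmetic loop executed 1000 times. My own rough reconstruction in Microsoft Basic (from memory, not the published PCW listing) runs along these lines:

100 K=0 : REM Benchmark 3: dummy arithmetic inside a 1000-pass loop
110 K=K+1
120 A=K/K*K+K-K
130 IF K<1000 THEN 110
140 END

It is worth keeping this loop in mind when the same Benchmark reappears later, coded in the McNOS terminal command language (Figure 2).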

The one-off end-user prices for multi-user and single-user Sig/net systems are given below. These represent very reasonable value for money. Much of the system is of British manufacture or assembly, which should help price stability. It should be emphasised that in addition to the prices quoted you would require an additional terminal for each user. (Integral screens and keyboards are of course not appropriate to this configuration of centralised hardware. This does permit a range of terminal choice according to need.)

An important feature is the ease with which a single-user system can be upgraded to multi-user. The old single-user system simply becomes the hub, with one of the floppy disk drives exchanged for a hard disk. Multi-user satellites are then added as required. If you find a dealer who will give you a reasonable trade-in on the exchanged floppy, then the upgraded system should cost you the same as if you went multi-user straight from the start – a cost-effective upgrade path. Since a satellite case and power supply can be shared between three users, it is most cost-effective to add three users at a time, for a cost of £622 per user (plus terminals, of course).

For those who need such things, other peripheral hardware is also available – eg, graphics drivers, A/D converters, industrial I/O, S100 bus adaptor.

Shelton003

Inside view of case with three user satellite processors and common power supply.

Multi-user software

Thus in the Sig/net multi-user configuration we can see hardware which is a simple extension of a single-user system. However, the software extension is not quite so straightforward when moving from a single-user to a multi-user operating system. The need for such a system of course became apparent some considerable time ago. Unfortunately, the first attempts by Digital Research to extend CP/M in this direction ran into a number of difficulties. Therefore Shelton was obliged to look elsewhere, and eventually obtained the McNOS (Micro Network Operating System) system from its originators in the USA. McNOS aims to provide a file store and printer spooling system in the hub processor, plus a CP/M-like environment for each satellite user, and the necessary communications software to link them together. As others who have followed the same route have found, a lot depends on exactly what you mean by ‘CP/M-like’. While a well-behaved program may just use CP/M by calling on it in the approved fashion for any functions it needs to have carried out, many other programs also call upon the internal subroutines of CP/M or utilise direct access to its internal data tables.

Indeed, in the early days of CP/M, many programs were forced to employ such dodges in order to work at all. (One well-known package reportedly follows each call to write to a file with a ‘close’ call in order to force the writing of any partially filled buffers; though the file is thus repeatedly closed and never subsequently re-opened, earlier versions of CP/M would still allow the following ‘writes’ to take place.) Any strict implementation of CP/M is sure to stop such programs running. With additional work by Shelton, these problems were eventually overcome by relaxing the conditions of the CP/M-like environment to permit such dodges to be employed.

In the single-user versions of CP/M such dodges did little harm since, if the worst came to the worst, the user would only upset his own program. In a multi-user situation, however, it must be realised that such dodges, if incorrectly employed by a user program, can upset other users as well. This has to be accepted as the price of making sure that the whole wealth of existing CP/M software will continue to run in the multi-user environment.

Before looking at how disks and files can be shared between several users, I thought I should first check how much delay is introduced into file accesses for a single user with a file which is no longer on his own satellite system, but which is now accessed on the hub through McNOS over the connecting lines. For this purpose I constructed a file of fixed-length records, and wrote a simple Basic program which read and then rewrote each record. Records were taken alternately from either end of the file, stepping up from the bottom of the file and down from the top until the two met in the middle, thus ensuring a reasonable spread of disk head movement. To provide a norm for my measurements, I first ran this program in a true single-user standalone CP/M Sig/net system with floppy disks, and obtained an execution time of 257 seconds. Next I transferred the floppy disk to the hub of the multi-user system and re-ran the program from a satellite. The first thing I noted (cynic that I am) was that the program still ran, and that the floppy format was indeed the same under McNOS as under CP/M. Would you now care to guess the execution time running over the network? In fact it was 53 seconds, a reduction of almost 80%! The reason for this of course (and it may be ‘of course’ now, but I confess I didn’t expect it at the time) is that much of the 64K RAM in the hub system can be devoted to file store buffering, thus minimising the number of physical transfers actually needed. (If other users had been running at the same time, they would have taken their own share of these buffers. Where there is competition, McNOS sensibly arranges to keep in its buffers that information which has been most recently accessed.)

Sharing a hard disk

So much for a single user accessing one file over the McNOS network. As the next step, I looked at the facilities for several users to access different files on one hard disk. McNOS provides for separate users to be identified by distinct system ‘user names’, and each user name is protected by its own password. All files remain private to their owner unless explicitly made public via the appropriate command.

Each user name is provided with both a main directory and up to 16 sub-directories (just as if the user had 16 separate floppy disk drives), identified by the letters A to P. Thus instead of the traditional CP/M prompt of the form

A>

where A identifies the logged disk drive, in McNOS this becomes

A.C>

where A identifies the hard disk drive and C the default sub-directory for this user. Whenever the user creates a new file, space for this is taken from wherever it can be found on the drive. Some multi-user systems divide the hard disk up in advance, so that each user has a fixed allocation. Whilst this protects other users against an ill-mannered user grabbing more than his share of space, it also means that space allocation has to be fixed in advance. In a well-ordered community, the McNOS approach is much more flexible.

To measure the effect of sharing the one disk, I repeated my Benchmark, with a different file on the hard disk for each of two users. When I ran the program for just one user alone, the execution time was 33 seconds; when I did the same for the second user alone, the time was 54 seconds. This very large difference was due to the different positions of the two files on the disk, thus requiring different amounts of head movement. (This is one of the bugbears for would-be designers of benchmarks for disk systems!)
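The Benchmark referred to here is the file read/rewrite program described in the previous section; for concreteness, its access pattern runs roughly as follows in Microsoft Basic (the file name, record length and record count below are my own illustrative choices rather than those of the original test program):

10 REM File-access Benchmark pattern: read and then rewrite each record,
20 REM taking records alternately from either end of the file.
30 N=100 : REM number of records - an illustrative figure only
40 OPEN "R",#1,"TEST.DAT",128 : REM hypothetical test file of 128-byte records
50 FIELD #1,128 AS R$
60 B=1 : T=N
70 IF B>T THEN 130
80 GET #1,B : PUT #1,B : REM read the bottom record, then rewrite it unchanged
90 IF B=T THEN 130
100 GET #1,T : PUT #1,T : REM read the top record, then rewrite it
110 B=B+1 : T=T-1
120 GOTO 70
130 CLOSE #1
140 PRINT "DONE"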

Then to measure the effects of sharing, I set the second user program to loop continuously and timed the program for the first user. With this sharing, the execution time increased from 33 seconds to 205 seconds. This increase is explained partly by the competition for buffer space in the hub, but I suspect largely by the greatly increased disk head movement as the head moved constantly between the two files. This is inevitable for physical reasons under any operating system. Sharing access to one disk is going to have a big impact if a number of file-intensive activities are run at the same time; but this should not be a problem for programs where disk access is only occasional (eg for occasional interactive enquiries).

Sharing a file

However, as I indicated at the beginning of this article, the real reason for a multi-user system is often to provide different users with shared access not just to the same disk, but to the same file at the same time (eg, for stock enquiry and sales order entry from several terminals). But if one program is going to read a record, alter the contents, and finally rewrite that record, then that whole updating process must be indivisible. (If a second program were to read the same record and rewrite its own new data at the same time, the two updates would interfere with each other and one could be lost.) To overcome this problem of synchronisation, a ‘locking’ mechanism (sometimes called a ‘semaphore’) is required, whereby a process carrying out an update can ‘lock’ the record until the update is complete, and whereby any other process accessing that same record at the same time is automatically held up until the lock is released.

On a mainframe database system it is generally possible to apply a lock to any record in this way. However, this can be rather complex (for example if two adjacent records share the same physical disk sector, then it is also important not to allow two programs to buffer two copies of that same sector at the same time).

In keeping with the spirit of micro systems, McNOS implements a simpler compromise mechanism, by providing one central pool of ‘locks’ stored as 128 bytes in the hub. A user program can set a lock simply by writing to the appropriate byte, and release it again by clearing that byte. It is up to programs which wish to share access to the same data to agree on which locks they are to use and when they are to use them. In general the programs will, by agreement, associate a lock byte with a whole file rather than with an individual record, as this avoids the problem of two adjacent records sharing the same buffer. It also avoids the problem of the restricted number of locks (even if a bit rather than a byte is treated as a lock, this still only provides 1024 locks).

McNOS maintains the lock record on the hub as if it were a file (of just one record) called LOCKSTAT.SYS, though this ‘file’ is in fact stored in RAM and never written to disk. A user program which wishes to set a lock simply generates a request to read this record. If the record is returned with byte 0 set to non-zero, this indicates that some other process is itself busy setting a lock: the program must then wait and try again later. When the record is returned with byte 0 set to zero, the program may examine the bytes (or bits) it wishes to set and, if it is clear to proceed, set them and rewrite the record. (The reverse process must be followed later to clear the bytes and hence release the locks.)
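To make the sequence concrete, here is a sketch of how a program might drive it from Microsoft Basic, on the assumption that LOCKSTAT.SYS can be opened from a satellite as an ordinary one-record random-access file. The particular lock byte used, the subroutine line numbers and the simple retry loop are my own illustrative choices, and McNOS’s byte-0 interlock is assumed to make the read-examine-rewrite sequence on the lock record itself safe:

1000 REM Subroutine: set our agreed lock byte in LOCKSTAT.SYS.
1010 OPEN "R",#2,"LOCKSTAT.SYS",128
1020 FIELD #2,1 AS F$,4 AS D$,1 AS K$,122 AS X$ : REM K$ is the agreed lock byte
1030 GET #2,1
1040 IF ASC(F$)<>0 THEN 1030 : REM byte 0 non-zero: another process is busy, so try again
1050 IF ASC(K$)<>0 THEN 1030 : REM lock already held by someone else, so wait
1060 LSET K$=CHR$(1) : PUT #2,1 : REM claim the lock and rewrite the record
1070 CLOSE #2 : RETURN
1100 REM Subroutine: release the lock again.
1110 OPEN "R",#2,"LOCKSTAT.SYS",128
1120 FIELD #2,1 AS F$,4 AS D$,1 AS K$,122 AS X$
1130 GET #2,1
1140 IF ASC(F$)<>0 THEN 1130
1150 LSET K$=CHR$(0) : PUT #2,1 : REM clear our byte and rewrite the record
1160 CLOSE #2 : RETURN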

To measure the impact of this locking mechanism, I next changed the Benchmark program for the first user so that it shared exactly the same data file as the second user. McNOS provides a particularly convenient way of doing this, for it is possible to create in one directory entry a pointer not simply to a file, but rather to another file entry in another directory. Thus all I needed to do was to change the directory entry for the first user so that the original file name now pointed to the data file of the second user. Running the Benchmark for either user alone now took 54 seconds (ie, I was using the ‘slower’ of the two data files as far as disk head movements were concerned). I then changed the Benchmark program itself for the two users, so that each read/write pair was bracketed by a lock and an unlock operation as would be required for sharing the file. Now running the Benchmark for either user alone took 106 seconds – a measure of the overheads of using the locking mechanism.
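In outline, each read/write pair in the modified Benchmark then becomes something like the following, calling the hypothetical lock subroutines sketched above:

200 GOSUB 1000 : REM set the agreed file lock first
210 GET #1,R : PUT #1,R : REM the protected read and rewrite of record R
220 GOSUB 1100 : REM release the lock again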

Finally I ran the programs for the two users simultaneously. This meant that the overheads of the locking mechanism, of buffer sharing in the hub and of competing head movements were now all included, resulting in a total execution time of 262 seconds. All of which simply shows that the sharing of data in this way consumes resources (as usual, you do not get ‘owt for nowt).

Another important resource is of course software. Just because the operating system provides a locking mechanism does not mean that you can take any CP/M program, run it from two terminals, and neatly share simultaneous data access. This will happen only if the program is explicitly written in the first place to use the locking mechanism. At least two general data management packages are already available which use the McNOS locking mechanism: ‘Superfile’ from SouthData of London (reviewed in PCW January 1983), and ‘aDMS’ from Advanced Systems of Stockport (PCW review shortly).

Shelton004

Processor/memory card, serial communications card and bus interface to support a single user.

The terminal command language

In the beginning were mainframes, which ran programs in batch mode. Because the user could not direct his program from a terminal but had to think ahead for every likely eventuality, the operating system provided a ‘Job Control Language’ to help in directing the compiling, loading and executing of programs. Some Job Control Languages were so elaborate that they could even be used to solve differential equations (or so rumour had it). Then came the micros and operating systems like CP/M, with very simple commands which could be used from terminals. This command structure could hardly be dignified with the title ‘language’ (even though SUBMIT and XSUB do give the possibility of issuing several commands at once). There does seem to be a need for a more comprehensive job control language, even on micros, for tailoring packages and giving the user turnkey systems. (Sometimes this is done through a specially written program, or via a general-purpose ‘front-end’ package which sits on top of CP/M.)

McNOS tackles this situation by providing its own job control language, complete with variables, arithmetic and assignment statements, conditional expressions, and subroutines. All this is of very great power, but at the cost of considerable overheads in processing time. To test this out, in a pale imitation of those who solved differential equations with the job control language on mainframes, I coded one of the PCW Benchmarks in the McNOS command language. This ‘program’ is shown in Figure 2. I estimate (since I didn’t feel inclined to wait for the whole 1000 iterations to finish) that this program would have taken over 14,000 seconds to complete (compared with 9.6 seconds in Basic)! Time may not be so critical in more typical job control situations, but it must be possible to do better than this. However, you do not have to use this power if you do not need it. It is perfectly possible to stick to a very small subset of the simple commands, which then makes the system very like CP/M. Unfortunately, of course, it cannot be exactly like CP/M, because it is necessary to maintain a unified underlying syntax capable of supporting the larger language too. As a fairly experienced user of CP/M I must say I had no difficulties with the differences, though they would prevent a novice user from working with a standard CP/M primer as a guide. (I have heard it said that at least one user was so impressed by the McNOS command language that he asked to have it implemented on his single-user CP/M systems as well.)

Figure_002

Fig. 2. Coding of PCW Benchmark Program 3 in McNOS Terminal Command Language.

Future developments

A user who is just starting on a microcomputer development which requires only one system now, but which could expand to become multi-user later, could well choose a Sig/net system for its development potential. If Shelton maintains its record in exploiting its technical expertise, then further developments can be expected to be on the way. I understand that one of these is the provision of a local area network facility based upon the Datapoint ARCNET approach. This will be used instead of the current ribbon bus to provide high-speed communication over much longer distances, and thus permit the siting of user satellite systems away from the central hub. I must point out, however, that this is not yet an available product – or, as Guy Kewney so aptly put it in this same magazine, ‘the future is not now…’

Conclusions

The Shelton Sig/net system is based on good hardware and provides good value for money. The system provides a convenient, cost-effective growth path for the user who wants to start small but expects to expand to a multi-user system later. The McNOS multi-user operating system provides convenient facilities for users who wish to share data between a number of terminals and a number of CP/M programs, provided this can be done on a scheduled basis (ie, no file being used in update mode by more than one user at a time). It is also possible to share simultaneous update access to the same data files with programs written specifically to take advantage of the McNOS ‘locking’ mechanism. The powerful McNOS terminal command language would be useful in some circumstances, but can be slow in execution.

Benchmarks (times in seconds)
BM1 1.1
BM2 3.4
BM3 9.6
BM4 9.3
BM5 10.0
BM6 18.1
BM7 28.9
BM8* 51.3
Average 16.5
*Full 1,000 cycles  
For a full explanation of Benchmark timings, see PCW November 1982

 

Prices – Multi-User
Hub filestore (1 x 400K floppy) with:
Hard disk, 5.25Mb (formatted) £2,695
Hard disk, 10.5Mb (formatted) £2,954
Hard disk, 15.75Mb (formatted) £3,195
Hard disk, 21Mb (formatted) £3,500
Satellite case:
1-user (Z80A, 64K RAM, 1 x RS232) £1,100
2-user £1,550
3-user £1,865
Prices – Single User
Z80A, 64K RAM, 2 x RS232, with:
Floppies 2 x 200K £1,390
Floppies 2 x 400K £1,690
Floppies 2 x 800K £1,890

First published in Personal Computer World magazine, April 1983
