Apple Lisa

Lisa_001

Each month one of Microcomputer Printout’s team of experts gives a vast amount of free publicity to a product they happen to like. Julian Allason opted for Apple’s new LISA – because, quite simply, it works the way you do.

Soft Soap

Julian Allason is critical of LISA's software.

“A soft answer turneth away wrath” says the Bible’s book of Proverbs. And answers to the microcomputer problem come no softer than LISA. Indeed, so friendly is she that the doyen of micro dealers, Mike Sterland, professed himself worried that existing Apple clients might be jealous at the thought of mere newcomers enjoying her sundry charms.

There can be little doubt that what I suppose we must call the LISA operating system – although it is so transparent as to be invisible – is superbly executed. After a few minutes one is merrily scuttling the mouse across the table top; selecting here; opening there; consigning files to the waste basket, and drawing the prettiest of pictures. To the experienced hacker, the sheer joy of being able to see what files are open, what jobs remain to be done, whose birthday is impending, is little short of a revelation. Beginners soon take it for granted – which is perhaps the highest accolade of all.

Lisa_002

The movable mouse with its single ‘select’ button is used to point to a function on the screen

Lisa_003

But is the applications software (although again this is a term Apple wouldn’t dream of using) as good? I fear not.

Like the curate's egg, it is good in parts. The authoritative Rosen Electronics Letter describes them, with the exception of LISADraw and LISAProject, as 'pedestrian versions of standard functions that have been better done elsewhere'. Quite so. The point, however, is the degree of integration between them. Unhappily, even this is not as comprehensive as it might have been. The LISAWrite word processor, for example, does not allow you to insert information from the other applications, except by adding an extra page to the document you are working from.

If you are the sort of writer who needs fancy functions like footnote management and indexing, you would be frankly better off with Wordstar (and it is just a matter of time before that appears on LISA). On the other hand LISAWrite gives you lots of type faces and sizes to play with. And you can print them all out exactly as they appear on the screen, using both Apple’s new dot matrix printer, and, amazingly, the daisywheel printer. Most of the usual functions for manipulating text are there, and in my humble judgement, the program is more than adequate.

Much the same judgement must apply to LISACalc, which would be a fairly run-of-the-mill first generation spreadsheet if it didn’t offer variable column widths and one or two other goodies. One criticism levelled at it is the absence of multi-sheet consolidation, a feature which might have been expected to appeal to the corporate users who supposedly constitute LISA’s target market.

LISAGraph offers the usual types of business graphs in four different sizes. Thanks to the 720×364 dot resolution of the 12” screen, they look a lot better than on most other micros. Up to seven different sets of data can be held, and converted into graphic form.

This data can either be keyed in directly as a set of values, or moused over from the Calc program. The point that caught the eye of almost all those lucky enough to have had a sneak preview of the system was the way in which the graphs change automatically following any amendment to the data. Clever stuff!

LISATerminal is optional and does what it sounds like: it emulates DEC VT100 and VT52 terminals over a modem link. IBM 3270 emulation is likely to be included by the time LISA goes on sale here sometime this summer (is September still the summer? “It's real hot out here in September,” says my chum in Cupertino with a wink).

And now the exciting bits. LISADraw is astonishing. If you’re drunk it will even straighten out your lines. Combine it with LISAGraph or LISAProject and the results get to look very professional indeed. The first time I saw LISA the demonstrator, Apple’s Brian Reynolds, created first one, then a whole series of drawings of LISA just by selecting from lines, shapes and shading with the mouse. And, Apple II users please note, text can go anywhere on the screen.

LISAProject is for critical path analysts. I’m no expert on project management, but even I could understand the schedules when they were displayed in graphical form, showing the critical paths amongst tasks, represented by boxes containing the details of the resources required, and milestones. In true calc fashion the critical path can be changed to take into account changes in resources – more Irishmen hired, a compressor stolen, for example – or unexpected delays. Once the output has been tarted up using LISADraw, the results are well up to management consultancy standards.

The last application is LISAList which is really a sort of database for dumbos. I’m not sure why it’s been billed as a list management package as several of the more standard mailing list functions seem to be missing; ditto a proper report generator.

Apple would probably argue that LISAList is intended for general use rather than high-powered mailing or database management. Packages dedicated to precisely these applications may be expected sometime in the future. Quite when, however, remains a bit of a mystery. As I write, more than a month after the launch, the LISA development toolkit has yet to appear, and the latest word is that it is unlikely to arrive before June. Without it third-party software houses are going to have difficulty writing any applications programs that exploit LISA's true capabilities. Without those programs LISA could turn into a seven-month wonder.

The computer supports the Pascal, BASIC and COBOL languages, so the problems are hardly insoluble. The onus must also be on Apple to get out and sell LISA in quantity. These self-same software houses subscribe to a strictly commercial code, Commandment 1 of which states: ‘Thou Shalt Only Convert Software for Machines with a Large User Base’.

So different and so special is LISA that it can truly be said to have a user base of zero.

But perhaps not for long. I, for one, have placed my order.

Lisa – An Expensive Lady?

In counterpoint to the otherwise noisy proceedings at LISA’s launch was the silence that greeted the announcement of the price – the sterling equivalent of $10,000 plus travelling expenses.

With the pound sick, and the gnomes tremulous, that translates to something like £7,500 – a lot of anyone’s money for what is still essentially a personal computer. Have Apple blown LISA’s chances then?

Some of the more cynical dealers thought not. “No one knows better what the market will bear than Keith Hall,” remarked one computer retailer, who had known the rugger-playing Sales Director in his incarnation as Commodore’s marketing mafioso. The existence of a market at that sort of price level is certainly not in doubt. Xerox have demonstrated that by selling every 8010 work station – the only piece of hardware remotely comparable to LISA – at over £11,000 each.

The other conclusion reached by the trade, after the customary head scratching, was that when LISA does arrive it could be in short supply. Indeed Apple have already indicated their intention of restricting LISA dealerships to a select few. The official explanation is that only the most experienced business systems houses would be able to do justice to the new baby. Quite how this squares with Apple’s claim (probably justifiable) that LISA is so easy to use that it can be learned in twenty minutes, is anyone’s guess.

Rumour, that oft ill-informed lady, has it that the original UK target price was £6,500; that was before the gnomes weighed in and sterling tumbled. There seems also to have been genuine disagreement on price within Apple. Sources close to the company's Cupertino headquarters talk of two distinct schools of thought, one favouring a ‘low’ price around the $8,000 mark with a view to maximising the company's advantage in being first. A second group is said to have canvassed a $12,000 price tag on the basis that this would generate the optimum revenue, given the inevitable supply problems during the first year.

In the event, Apple’s chief executive, Mike Markkula, seemed to have split the difference, conscious perhaps that LISA’s market lead had been whittled down by successive software delays.

The unknown factor in the LISA price equation is Macintosh, LISA’s little brother. The conundrum now entertaining Cupertino’s corporate types is this: how cheaply can we make Little Mac?

Like LISA, Macintosh is based on the Motorola 68000 16-bit microprocessor. Like LISA, it should run much the same software. But will it? Like Topsy, LISA's software just grew and grew and now occupies more than two megabytes of memory in all. Any possibility of marketing a floppy-only version of LISA went out of the window more than a year ago; hence the presence of the separately boxed Profile five-megabyte hard disk. Exactly the same problem now arises with Little Mac.

One theory now current amongst Apple watchers proposes $10,000 as an artificially high price for LISA, simply in order to maintain market separation from Macintosh. All this speculation – for speculation it largely is – is based on the assumption that LISA is overpriced. But is it? Try as one may, it is hard to put together a 68000-based system with Hi Res graphics, a megabyte of RAM, five megabytes of Winchester storage and half a dozen or so applications packages and still find oneself with much change from £8,000. And what price user friendliness?

LISA may not be within reach of everyone’s pocket, but it certainly looks like good value to me.

Which side of the Blanket?

Julian Allason examines Lisa’s parentage…

Lisa_004

The Xerox Star was the first workstation to employ the multiple-window technique. More recently VisiCorp announced VisiOn for the IBM PC.

Frowns outweighed smiles as microcomputer folk reacted to the launch of Apple’s LISA computer last month.

The most maniacal grin adorned the visage of Apple’s rugger playing Marketing Director, Keith Hall, as he exhorted his dealers into orgasms of excitement at the prospect of selling the wonder micro.

The details of LISA, which will not have come as a very great surprise to readers of this organ, brought a furrow to the brows of competitors. “Now everyone will want integrated software,” moaned one small British microcomputer manufacturer. “Look how long that took Apple to develop – and we don’t have a fraction of their resources.”

Ecstasy was also less than unanimous amongst dealers. “Apple have wrecked the market. I’ve already had two of my best customers call to put a freeze on further orders. The worst part is that Apple won’t even be able to deliver LISAs for six months and then not in any quantity,” complained one member of the Computer Retailers’ Association.

Wry smiles were the order of the day in Uxbridge, headquarters of Xerox, makers of LISA's only competitor, the 8010 workstation, otherwise known as the Star. As noted elsewhere on these pages, LISA owes much of its heritage to work done at Xerox's Palo Alto Research Center – work that culminated in the Alto prototype user-friendly computer. From the Alto – so far the only personal computer to have achieved true cult status – sprang the aforementioned Star.

When industry pundits take a step back from the trees to inspect the wood, they will notice something very odd. Working from the same starting point, Apple reached a very different – one is tempted to say the opposite – conclusion from Xerox. For the Star is viewed by Xerox as a workstation for their Ethernet local area network. Apple, on the other hand, are adamant that LISA is a one-man machine, a personal computer that will adorn the desk of professional managers.

It is a curious conflict and one is tempted to wonder whether both companies can be right.

In truth not even Apple are convinced that they enjoy a monopoly of wisdom. As one senior manager remarked, after looking round to ensure that we were not being overheard, “In scientific circles the very best rows start with opposing conclusions being drawn from the same data…”

But it may not even be a two-sided argument, because VisiCorp, whose VisiOn operating environment has received the rough edge of Apple's corporate tongue, think they are dealing with a very different sort of animal. If one could reconstruct the chain of evolution of the concepts first developed at Xerox PARC and Stanford University, it might go something like this: the Alto user-friendly personal computer becomes the Star workstation, a single component in a network of stations sharing printing and file storage resources, but whose principal function is to exchange information.

As a personal computer company, Apple find other aspects of the Alto more sympathetic. The use of multiple screen windows, the mouse as a pointer to them, and of icons (small graphic symbols) to indicate the status of the work in hand all appeal. The network emphasis less so. Apple see the integration of the most popular office applications as a means of closing the gap between computers and office functions as they are normally (i.e. manually) carried out.

At the bottom of the chain, or at least as far down as we can see for the nonce, is VisiCorp. In their world view the PARC concept is primarily a means of making applications programs more user friendly. Not surprisingly, the first programs to receive the VisiOn treatment will be VisiCalc, VisiWord, VisiPlot, VisiTrend, Visi etc. And lest anyone deprecate that, your correspondent would like to add that he was enormously impressed the first time he sat down with VisiOn. Moreover, the system has received the imprimatur of mega-mini-computer-maker Digital Equipment Corporation. In the computer world this is the equivalent of not just a feather in the cap for VisiCorp, but a whole bird in their bonnet.

Whether future microbiological expeditions down this particular evolutionary trail will be warranted remains to be seen. Certainly there are some interesting growths under culture in the labs of Microsoft and Digital Research. Our microscopes will be trained in their direction over the coming months…

First published in Microcomputer Printout magazine, April 1983


Profile on Apple

Apple_001

Peter Cobb – Apple UK General Manager – “Ultimately my job is to earn dollars for the US shareholders. There are all sorts of wrinkles to this thing: where are these things bought, how long forward, managing exchange exposures, there’s a whole sophisticated exercise going on designed to avoid the consumer having to cough up simply because the exchange stays low. That’s not good business practice in my view.”

By Martin Hayman

History, it is said, repeats itself. Subscribers to the teleological view could do worse than search the annals of the computer business if they seek evidence of this theory. The backwards and forwards surge of capital is perhaps an unlikely place to look for patterns, but really the movement is ever onwards: the next wave is always the biggest. Right now Apple is the next wave.

Apple has just had its most successful year ever, with sales of $580m. Its sales topped $200m for the last quarter of last year alone. It no longer talks about ‘if’ but ‘when’ it will reach the Fortune 500 – the index of the biggest-grossing companies in the US. If it does, it will be the youngest company ever to do so. It talks of spending $50m every year on research and development. This is a staggering achievement for a company which, as just about everyone must have heard by now, was started in 1976 by two young men, Steve Jobs and Steve Wozniak, working out of a garage, who raised the launch capital to build an order of 50 from the sale of a pre-owned Volkswagen van and a programmable calculator. The computer prototype took six months to design and a mere 40 hours to build.

The company’s success is the more surprising because it relies largely on that one ageing machine, the Apple II, in an industry where technical novelty appears to be paramount. Though according to Adam Osborne it isn’t. He ascribes Apple’s success not to the ingenuity of the product or indeed the dynamism of its youthful progenitors, but to the solid understanding of Apple’s backer, marketing chief and eminence grise, Mike Markkula, of the simple market expedients of outlets (lots, and one in your neighbourhood); service; and support.

“Markkula was the only one in the  business in 1976/7 who understood that simple list,” reckons the Big O. Heard it before? Right: Osborne describing his own operation. But before that was another wave…

Fortune 500

Long before there were micros, there were minis. And there was a firm known as Data General, who set the cat among the pigeons by playing rough and tough. They started in 1968, and it took them a decade to get into the Fortune 500. DG put Digital Equipment Corp's nose out of joint something rotten, but then, in their turn, doubtless DEC – world's No.2 in computers – cost IBM more than a fleabite, even when mainframes ruled the roost and a minicomputer was something you put in a small room rather than a big one. And as for IBM, long before transistors and the like, when the acme of business software was the stack of Hollerith punched cards, and salesmen travelled on trains, you may be damn sure that Thomas J. Watson and his team put somebody else's nose out of joint. Then, they were the next wave; now they're in the Fortune 500, and pretty near the top of it too.

Undoubtedly Mike Markkula is one of Apple's biggest assets. It's debatable whether the two Steves would have got far with their garage computer without enlisting his experience on their side. As a former marketing chief in two not exactly unknown semiconductor firms, Intel Corporation and Fairchild Semiconductor, he had already made a pile and was reputedly a dollar millionaire. He was able to introduce the Apple boys to sources of venture capital without which Apple would merely have shrivelled: firms with resonant names like Venrock Associates, Arthur Rock & Associates and Capital Management Corporation; plus, for good measure, he put in some of his own.

Markkula certainly must have understood the nature of the marketplace, volatile as it is; there is a consciousness among Apple people of their customers ‘out there’ (a favourite phrase) and the sheer availability of the kit must, in the early days before the turn of the decade, have been a strong enough argument. Because of its simple, modular construction, just about anyone could configure the system with their own boards, and soon a whole sub-industry of add-ons was going for the ambitious punter’s cash: plotters, graphics tablets, communications interfaces (one polytechnic hobbyist relates using a high-speed communications card to interface his Apple with a Prime 550 and he was by no means alone), digital music synthesisers, Z-80 softcards if you insist on CP/M, the usual add-on memory boards and A-D converters for instrument control; the Apple was even the first microcomputer to be approved for connection to British Telecom’s network, and you may imagine how arduous it was to make that stick.

Serious tool

And not only kit: Apple seemed to get the best software releases soonest, with the undoubted clincher being Personal Software’s VisiCalc, which arrived in this country in early 1980. This renowned piece of software had been adapted from mainframe use by Dan Bricklin and Bob Frankston, curiously working out of Massachusetts rather than California, and soon to become the world’s hottest-selling piece of software.

One reviewer of the time wrote, “We were unable to find any bugs in the program or to crash the system”. Given that it would run in a mere 32K with a couple of disks, and cost just £95, it made the Apple look like a potentially serious business tool rather than an obscure hobbyist's plaything.

Apple_002

Peter Cobb – Apple UK General Manager

For the fact is that in the US the Apple was seen principally as a ‘home’ computer. Called on to describe the difference between a hobbyist and a home user, Apple UK General Manager, Peter Cobb, responds drily, “about a year”. VisiCalc further elided the distinction between the home user and the business-person and, at least in the UK, Apple rapidly became the first cheap business microcomputer: it could be  used for ‘serious’ office-type work without the user really needing to bend his brain with concepts of computing for which he had no time and which certainly held no charms for him.

Cobb, looking very like Denis Healey, says bluntly: “The great Mr. Prospect now is a perfectly straight-forward businessman like me who doesn’t want to play technical games with the machines, doesn’t particularly want to know how it works, but just wants it to do a job”. In this respect he makes a distinction between the user, typically one such as himself, and the ingenious insiders who saw the retail potential of micros – people like the Brewer Brothers.

Crock of Gold

Apple_003

The Apple III and newly-announced IIe are supported by a new range of both floppy and hard disk systems

The Brewer Brothers’ story has a ‘room at the top’ feel. As the first distributors of Apple products they were living proof that there was a crock of gold to be made in micros. Theirs is not a rags to riches story, but there is something of the fairy tale about the way they took the business by the scruff of the neck. Their sell-out price to Apple, when the US company decided it needed to control the burgeoning UK market, has never been disclosed but it was undoubtedly worth several million – far more than any comparable business might have been expected to produce in such a short time.

Their business was already well established when the word went out that a new firm in California had a product intended as a low-cost hobbyist’s computer, but which might have some use in business. In fact the Brothers Brewer had been supplying items to the computer trade since 1964 – mostly furniture and supplies, by mail order.

Curiously, though, the first computers which they started to import from across the water were Commodore PETs which, Stephen Brewer says, they bought for around £500 and resold at around £750, yielding a margin of ‘around 30% off retail’ (sic).

Data Efficiency, as the Hemel Hempstead-based firm was called, was not the only dealer to want in on micros. It was in competition with Keen and Personal Computers to pick the winner. DE ordered 60 ITT 2020s, which were made to Apple's design under licence by ITT Consumer Products, who had approached Apple as early as mid-1977 and been granted a Europe-only agreement. Some arrived; some worked. “It was not a particularly auspicious start,” says Brewer, who was looking to feed a newly set-up chain of 10 dealers in north-west London and the West Country. ITT's partnership with Apple ended after a copyright lawsuit about the design of the disk operating system was settled out of court.

A trip to NCC in 1977 had yielded some contracts for distributorships of printers, monitors and boards, but it was not until two years later, also at NCC, that the Brewer Brothers made their big connection: Andre Souson of Eurapple, then the sole European control centre for Apple, appointed Microsense as the UK distributor for Apple. It was a coup which failed to please Data Efficiency's rivals: “Personal and Keen went up the wall,” recalls Brewer. From then on Microsense, which had been formed as a splinter company from DE to market Apples, went from strength to strength.

Freddie Laker

Stephen Brewer was Marketing Director, while his elder brother, Mike, was Managing Director. To Stephen fell the lot of organising the dealers, and marketing and advertising the product. One of his wheezes was to hook Freddie Laker in to promote the computer (Sir Freddie was then flying high as a sort of popular hero), though there is some doubt as to whether he ever actually used the Apple installed behind his desk.

Brewer is aware that Microsense was perhaps not too, uh, popular among some of the people who were buying from him but contends that it was necessary to be tough. As Peter Cobb remarks, many of the people who took on micro dealerships in those early days quickly found that it was not, perhaps, the right business for them.

Some were just enthusiasts operating out of their front rooms, and had very little idea of business practice. Brewer decided to use the contentious technique of credit factoring – that is to say, he sold his debts to a collection agency who would invoice the dealers and deal with other routine debt-collection. But the beauty of credit factoring, from Microsense's point of view, was that the collection agency would investigate the credit-worthiness of would-be dealers and assign each one a credit ceiling. This spared Microsense the headache of attempting to assess the story of anyone who came banging at their door asking for stock on credit.

Microsense itself was among the entrepreneurial merchandisers who made good. In turn the Brewer Brothers reported to Eurapple, an independent organisation which was bought out and run by Apple's own-employee ‘commando’, of which Peter Cobb was one of the first members. Cobb, formerly financial controller or, as he cheerfully puts it, ‘chief bean-counter’ for Intel in Brussels, followed a little later by Keith Hall, recruited from Commodore to take charge of sales, marked Apple's tightening grip on world ‘local’ markets.

World-wide Marketing

Eurapple handled the marketing and re-engineering, if needed, of all Apple computers sold outside the US with the exception of Japan. In late 1979, in an interview with Yorkshire Apple dealer, David Hebditch of Microtrend, Eurapple chief Andre Souson claimed that he was about to start up in Japan, showing Apple’s determination from comparatively early – Eurapple was set up as a world-wide marketing operation in June 1977 – to expand and compete with Commodore, who had excellent worldwide distribution for its PET, sold alongside its range of successful calculators, and Radio Shack, whose TRS-80 sold through that company’s coast-to-coast chain of electrical hardware retailers.

In fact Apple even alluded to its competitors Commodore and Radio Shack in its prospectus for the first public sale of shares in 1980, in which it admitted that ‘the company might be at a competitive disadvantage because it purchases integrated circuits and other components from outside vendors, while certain of its competitors manufacture such parts’.

It owned modestly that it might have to expand its distribution channels, or establish additional marketing arrangements such as a direct sales force. Well, Apple shifted 4.5 million of its 52.4 million outstanding shares for the right price in December 1980 and a further 2.3 million in May 1981, and they were in business.

Andre Souson, when asked in autumn 1979 about the definition of, and prospects for, a home computer, replied on behalf of the company (for which he was at the time entitled to speak, since he worked closely with Apple Corp) that the day of the home computer had not arrived and that he had seen no evidence that it would. Rightly, he distinguished the ‘personal’ from the ‘home’ computer and remarked that what makes a personal computer personal is that one person uses it. He also showed that Apple grasped the nettle of service and back-up early, and sought to implement a policy of 24-hour turnaround to the end user anywhere in the world.

Over-pricing

It is intriguing to study the prices of mid-1979. Then, the price to the UK customer of a 16K Apple II was £750 (current price of the 48K Apple II Plus, £675), that is, around $1600 at the prevailing exchange rate, compared with a US price of $1200. Import duties for manufactured computers, then as now, stood at 16% (working from end-user price in the UK, nearly $300) and the PAL or SECAM conversion cost was around $80. Then, as now, Apple had to defend its products against accusations of over-pricing on export markets, but it is a fair indication of how well Apple has contained its costs that the stated price to the UK customer is little different now. But then neither, I daresay, is that of the Commodore PET, which Souson identified as the principal competitor to Apple, and for whom he had previously worked as chief calculator design engineer.
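It is worth pausing on that arithmetic. Here is a rough sketch in Python of how the figures hang together (the exchange rate is my assumption, based on mid-1979 levels; every other number is the article's own):

    # Reconstructing the mid-1979 price comparison quoted above.
    us_price = 1200                      # US list price of a 16K Apple II, $
    uk_price_gbp = 750                   # UK list price, GBP
    rate = 2.13                          # assumed dollars per pound, mid-1979
    uk_price_usd = uk_price_gbp * rate   # ~$1,600, as quoted
    duty = 0.16 * uk_price_usd           # 16% duty on the UK end-user price
    pal_cost = 80                        # PAL/SECAM video conversion, $
    landed = us_price + duty + pal_cost  # ~$1,536 of the ~$1,600 accounted for
    print(round(uk_price_usd), round(duty), round(landed))  # 1598 256 1536

On those assumptions, duty and video conversion account for most of the gap between the US and UK prices – which is rather the point about Apple containing its costs.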

Apple_004

LISA – an important step in re-establishing Apple’s credibility as an innovator.

Apple still has to defend the price of its product: at the Barbican launch of the 1983 model year range, some were disappointed that Apple had not taken the opportunity to cut the price of its easy-build Apple IIe, despite the fall of the pound against the dollar. This is tricky, because Apple UK buys its computers at dollar prices from the Cork factory in Eire. “Ultimately my job is to earn dollars for the US shareholders,” says Cobb candidly. “There are all sorts of wrinkles to this thing: where are the things bought, how long forward, managing exchange exposures, there's a whole sophisticated exercise going on designed to avoid the consumer having to cough up simply because the exchange rate is low. That's not good business practice in my view.”

Looking at Apple’s technical strategy, Souson let slip some intriguing speculations in 1979, among them the assertion that “Pascal is the language that all our future machines are going to support primarily… It is the sort of language that a lot of people believe is going to be the basis of all the languages of the future”. Whether or not this is a Good Thing, if indeed it was an a priori decision in 1979, is debatable. Vile rumour has it that the Microsoft BASIC in Lisa actually runs slower than that in the VIC-20, because it has to be interpreted into p-code and thence into its native code.

Furthermore, Souson asserted, “The real question is, do we want to build a machine with a register architecture or not? And I think the answer is no.” He then alluded darkly to a machine that was 5% completed but would be ready for delivery in late 1980. “I think it will be a very nice machine for the user.” And that’s the point: most of the people who use Lisa will not be interested in running BASIC.

Smalltalk

As it turned out, this revolutionary product appears to have been Lisa. Now we know how it came to take two person-centuries of research and development to get it out on the market. It seems surprising that the ground-work for the astonishing Lisa was then already in progress; that Steven Jobs and software engineer John Couch had already toured Xerox in Palo Alto to look at Smalltalk and were ready to recruit their tour guide (followed later by another 15). Was the design for Lisa laid down that long ago? Souson says the architecture for a ‘totally innovative’ machine was ready in autumn 1979. Maybe he'd already seen it at work in Palo Alto.

This long gestation for the new model is reassuring. If a great many people have hammered away at it for a couple of years before the customer gets his hands on the product, it is likely to have shaped up. This point is still being made about Apple’s bread and butter computer, the Apple II which is sometimes referred to as outmoded. So, too, was the Volkswagen Beetle. And the Apple II, like the Volkswagen, is subject to continual improvement to its subcutaneous performance: the new IIe is the thirteenth revision to the garage computer, and now it is a different and more powerful machine, which nevertheless remains capable of running programs developed ages ago.

People do not like to junk major time investment in intellectual tools, and Apple still understands well that the individual favours continuity. It is becoming more generally recognised in the corporate environment, too: when the US Defense Department proposed to buy a hundred or so new mainframes, it required the contestants, Sperry-Univac and Burroughs, to enter the lists in a computing tournament to adapt the Department's software to run on the machines they were pitching to sell. But that's an altogether different story…

Ill Feeling

For all that the Apple II has scarcely shown any signs of flagging in the cheap 8-bit personal computer market, it is just as well that Apple has the Revolution slogan (for Lisa) to add to Evolution: anyone relying on the Apple III for a much more flattering interface with his computer would have been out of luck. The Apple III has been troublesome for the company and in the UK at least led to ill-feeling among dealers who thought that a two-tier operation was coming into force, with only some of the existing Apple II stockists being permitted to handle the III. In the event they might have been relieved – because on its launch two years ago Apple III was something of a turkey, and 14,000 were recalled, for what Newsweek delicately describes as ‘retooling’.

How revolutionary is Apple’s strategy? Will Lisa be so easy to learn and use that everyone who deserves one will have a clear desk-top? As an onlooker one can only applaud Apple’s determination to improve the computer’s model of the human brain engaged in so-called ‘mindwork’. The intellectual tools used by the human brain for this sort of work are sophisticated, so any computer which comes nearer to an extension of the human brain, in the same way as a hammer and chisel, a quill – or even, dare I say, a typewriter – is good news. Especially when it costs as little as $12,000 – or is it $9,995 (the latter figure is Newsweek’s).

Truth to tell, an office worker might feel a bit of a Charlie pushing a streamlined dinky toy around the desktop and peering into a screen displaying ‘icons’ of the familiar equipment now banished from the office – the filing cabinets and folders, the wastepaper bin and calculator, and the ready-reckoner. Secretly he might prefer to invite one of the girls to go and retrieve such-and-such a file, but hell, that’s progress.

Clerks of a century ago, used to pens and ledgers, undoubtedly thought the office typewriter a bizarre mode to employ, so who’s to deny the mouse? It sounds better than sitting at your office computer talking to it in precisely modulated tones, as Texas Instruments seems to be inviting us to do with its new voice input Professional Computer. That invokes an altogether different muse, a new Thespian slant to computer salesmanship.

Volkswagen

Lisa had better work. Volkswagen came back from the dead when the Beetle finally waned and was banished to local assembly sites, and it took them time to find the right follow-up, but they did. Although Apple Corp's performance is impressive, it has to make sure that Lisa sells well to recoup costs. Its market share in the US has declined from 29% to 24% since the introduction of the IBM PC in August 1981, and the PC will be able to run VisiOn rather more cheaply than the Lisa package. Certainly IBM are gunning for Apple, who must be kicking themselves for not using the name Personal Computer; after all, say Apple, “We invented the Personal Computer” – one of the big sales slogans in dealer motivation pep-talks. Stewart Lakey of Personal Computers, London, reckons that even if he had a hundred Lisas in stock right now, it would take him more than a year to sell them: as well as being an Apple dealer, Personal handle both DEC and IBM.

Definitely on the stocks for the future, and enjoying the wholehearted attentions of Steve Jobs as project leader, is the economy Lisa, which may well also be based on the Motorola 68000 and is aimed to sell at around $2,000. Why MacKintosh? I hope it’s not an acrostic, but is it anything to do with outgoing Apple President Michael Scott who, it is reported, refused to let young Jobs run the Lisa team because he was too inexperienced? Come what may, Jobs will have to keep his nerve, because IBM is said to be ready, with its own ‘Popcorn’ executive workstation aimed to compete with Lisa, and a 16-bit ‘Peanut’ machine designed to undersell even the Apple IIe. But none of this blue sky has been seen yet. It will be interesting to see whether, as some people predicted, the era of the garage microcomputer is over, now that the punks have shown the big boys how the market for personal computers works. Apple should be in the Fortune 500 this year, and that’s good going in six years.

First published in Microcomputer Printout magazine, April 1983

Filing the Fillings

Fillings

Michael H Rich describes how micros can improve dental health

Using any kind of computer in a dental practice neatly divides itself into two compartments: use in the office, which is comparable to using a micro in any small business, and use for clinical records. This latter use involves a far wider concept than ‘ordinary’ business use as the software is highly specialised and, as will be described below, needs the use of combined graphics and text on the screen to be fully effective.

Before the advent of the microcomputer there was very little hard/software available for the dentist to be able to introduce computerisation into a dental practice.

What there was was in the nature of a large ‘mini’, complete with the necessity for an air-conditioned ‘cubicle’ for the CPU which used fixed/removable hard disk cartridges. This, of course, allowed a multiuser facility but in the context of a small dental practice was far too expensive to be cost-effective.

Minicomputers are still available for dental practices; they are smaller in size as well as being slightly cheaper in price, and the suites of software with these systems do a reasonable job of helping the dentist to run his practice. The argument about being cost-effective still applies and thus they are for the larger practice only.

The micros of the Apple/PET/Tandy variety (and this list is by no means exhaustive) have, of course, opened up the world of computerisation for the small business, and it should be realised that a dental practice is precisely that. Many of the available software packages for running such a business can be applied to a dental practice. The management of accounts can be dealt with in a standard manner, as can stock control; although a practice employing half a dozen people hardly needs payroll software!

What distinguishes the dental practice from a small business is the clinical aspect of treating patients and the paperwork that this generates. When examining patients a dentist records the clinical information derived from the teeth in a form consisting of various shapes to designate types of cavity, fillings present, teeth to be extracted, dentures present and a variety of other conditions. This pictorial representation of a mouth is easy to scan and assess and is an internationally standard method. To record this information in written form, although suitable for a standard database software package using routine file handling procedures, would be very long-winded and would mean abandoning the standard procedures used.

There is software available for use on micros which does do this graphic charting of the clinical conditions in a mouth and this is allied with space to write clinical notes of treatment to be done, or which has been done. This is often conjoined with a suite of programs which will price the work done, whether under the NHS or privately, and will produce bills for patients and carry out the usual reconciliation with payments, aged debt analysis and so on. The software will often include a facility for routine recall of patients at a standard time interval and this raises the other major aspect of the application of computerisation of a dental practice – the appointment book.

It is necessary to realise that anything other than the appointment book in a dental practice is capable of being replaced or renewed in the event of a complete disaster, eg, a fire. To take an extreme example, if the premises are totally destroyed one can set up a tent with a telephone line outside the front door and with a list of patients due one can reconstruct records and re-schedule appointments until the premises are fully functional again. Without this book a dentist might as well go home. Consequently a dentist has to consider very carefully whether to commit this vital aspect of his/her practice to an electronic form which may be subject to the vagaries of an irregular power supply, corruption of storage media and the sundry other faults which can occur. To back up one’s records every time a fresh appointment is made or one deleted from the ‘book’ would be counterproductive in terms of time even though it is essential if the possibility of either missing a vacant time slot or double-booking is to be avoided. An actual appointment book can be kept in a fire-proof safe for peace of mind.

In addition to this, the software available at present for this function will only display, at best, one day per VDU screen (some only half a day) per dentist. A good receptionist can keep a visual image in mind of the blank spaces in an actual book and can turn a page to ‘bring up’ a whole week at a time much quicker than any software can on a screen.

To go back to the function of computerisation of clinical records, one has to realise that for this to be fully effective there has to be a terminal and screen in each surgery with central mass storage as well as a terminal, etc, at the front desk. This again raises the question of cost: even using micros for only two surgeries and reception on this basis with, say, 10Mb storage will put the cost towards the five-figure mark, which becomes very expensive in the context of a small dental practice. The actual storage figures for dental records with chartings for each patient may be in the range of 500-700 bytes per patient per course of treatment and this multiplied by approximately 3000 patients per dentist gives some idea of the basic storage needed to keep clinical records. Details of treatment have to be kept for at least two years after completing a course of treatment and this, allied with all the other office functions needed, suggests that the 10Mb mentioned above could be a conservative estimate for a practice containing three or more dentists.
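A quick back-of-the-envelope check on those storage figures, sketched in Python (all the quantities are the article's own estimates):

    # Clinical record storage, using the estimates quoted above.
    patients = 3000                       # per dentist
    dentists = 3                          # a three-or-more-dentist practice
    for bytes_per_course in (500, 700):   # per patient per course of treatment
        per_dentist = bytes_per_course * patients
        practice = per_dentist * dentists
        print(per_dentist / 1e6, practice / 1e6, "MB")
    # Prints 1.5/4.5 MB and 2.1/6.3 MB. Add two years' retention plus the
    # accounts, stock control and recall files, and the 10Mb mentioned
    # above does indeed look conservative.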

The other main problem concerning dentists at the present time is the possible computerisation of the NHS claim form FP17. This is a complex form which has to be filled in accurately so the dentist can be paid by the NHS. It contains details of the patient: name, address, clinical charting grid, a minimum of seven different dates to be filled in and various other details. Software has been written to cope with this so it can be printed out after the data has been put in from the handwritten clinical notes. The problem with this is that the slightest change in the format of the grids, etc, on the FP17 would mean rewriting this software. A suggestion has been made that the central collating body for these forms could use ‘light pens’ to read any printed codes produced by any printer, enabling a dentist to use whatever internal record system is desired. This problem still has to be resolved and will depend on whatever change in method of remuneration of dentists may be applied in the future.

The only other main office function for which a computer is often used, and not yet mentioned in connection with a dental practice, is word processing. This is not generally a great necessity in a dental practice. Recalling patients every six months is often a feature of a dental software package and would incorporate a print-out (hard copy) format.

In summation, one can state that the small system with a couple of disk drives, screen and printer (not necessarily of letter quality) with a good database software package at about £3000 is a viable proposition for even the single-handed practitioner. The limitation of use to office procedures only is still worthwhile, even solely on the basis of eliminating lots of pieces of paper. Clinical records require considerable mass storage, sophisticated software and even provision in the actual surgeries to accommodate the extra terminals needed.

First published in Personal Computer World, April 1983

Multiplan

Mike Liardet looks at Multiplan – Microsoft’s entry to the spreadsheet fray.

After releasing the Apple version of Visicalc about three years ago, Visicorp enjoyed at least 18 months completely unchallenged in the market for what has now become known as spreadsheet software. But in the last year and a half there has been a steady stream of Visicalc rivals arriving on the scene and, naturally, some of the established companies have been getting involved in this growth area.

Probably the best known of all the micro software companies, Microsoft has a pedigree going right back to those prehistoric days of ‘core-store’, paper-tape and teletypes – 1975 in fact, when the first of a million microcomputer systems was equipped with a Microsoft Basic interpreter. Now Microsoft has launched its own spreadsheet system: Multiplan. Will Multiplan further enhance Microsoft's reputation for excellence? Will it be another Ford Edsel? (You should get this point if you have heard of a Ford Edsel and you definitely will if you haven't!)

The first thing that strikes you when confronted with a copy of Multiplan is the packaging: Microsoft has obviously invested a lot of effort (and money as well, I am sure) in presenting its ‘new baby’ to maximum advantage. A heavy-duty transparent plastic case holds a substantial ring-bound manual, system disks, various leaflets and a few pieces of carefully positioned cardboard mouldings – simply there to mask out awkward gaps and present an uncluttered appearance through the transparent box. Readers who are concerned by such a flagrant wastage of the world’s resources on a mere piece of marketing-hype will doubtless be relieved to learn that you need not throw the box away after purchase – it readily converts into a sweet little bookstand to support your manual!

Anyway, underneath the packaging we eventually find the disks – my review copy was for the Apple II (DOS 3.3), but Multiplan is also available for the Apple III, CP/M systems and, of course, Microsoft's MS-DOS. All versions are evidently functionally identical, with just a few pages at the start of the manual outlining any minor differences, so non-Apple owners should still bear with me! (I also had the opportunity to take a quick look at the MS-DOS version on a Sirius, so have made occasional references to this, too. In particular, I have included benchmark results for the Sirius version, specifically to check out Multiplan's performance with a new-generation (8088) processor and all that extra memory capacity.)

Getting started

Getting started proved fairly easy – the ‘First Time’ instructions were not on page 1, where I like to see them, but a little bit of page-thumbing soon tracked them down. A bit of disk copying, data disk initialisation, and two or three minutes later I was faced with a reassuringly familiar display of a spreadsheet. The only hold-up in all this was to have a good chuckle at the latest piece of computer jargon, encountered in the instructions for setting up the system for the optional (on the Apple) 80-column display mode: ‘Recable’ – to exchange the 40-column video cable connection with the 80-column one!

The initial display is of the top left hand corner of the spreadsheet, showing seven spreadsheet columns and 20 rows, all completely blank. The remainder of the display is devoted to helpful prompts: the names of twenty different ‘commands’, a ‘what to do now’ message and status information, such as percentage of storage space remaining, current cursor position, etc. Both rows and columns are identified by numbers, unlike many systems which use the alphabet for column headings. The repercussions of this are fairly great, since whereas ‘Q99’ is unambiguously a reference to a specified cell, ‘1799’ clearly is not. Multiplan provides several alternatives for identifying cells, but the simplest is that they be written as ‘RyCx’ – eg, ‘R17C99’ – a little bit longer than ‘Q99’!
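To see the ambiguity concretely, here is a toy illustration in Python (the parsing helper is hypothetical, not anything Multiplan itself provides):

    import re

    # With lettered columns, 'Q99' splits unambiguously into column Q, row 99.
    # With numbered columns, '1799' could be row 1/col 799, row 17/col 99,
    # row 179/col 9... hence Multiplan's explicit 'RyCx' form.
    def parse_rycx(ref):
        """Parse an 'RyCx' reference, eg 'R17C99' -> (row 17, column 99)."""
        m = re.fullmatch(r"R(\d+)C(\d+)", ref)
        if m is None:
            raise ValueError("not an RyCx reference: " + ref)
        return int(m.group(1)), int(m.group(2))

    print(parse_rycx("R17C99"))   # (17, 99) - unambiguous, if long-winded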

Moving around

Moving the cursor around the spreadsheet is very simple – single control-key hits (ie. simultaneously pressing ‘Control’ and one other key) move the cursor left, right, up and down, with the VDU screen window being ‘pulled along’ by the cursor if an attempt is made to move to a cell off the edge of the screen. Sensibly, the keys that achieve this movement are arranged in a diamond (on the Sirius the arrow keys are used) – easy to remember and easy to touch-type when you are looking at the screen. Further investigation reveals that there are also control-key hits to ‘home’ the cursor to the top left hand cell and to the bottom-right, and a ‘Go-to’ command where destination coordinates can be typed in, as well as a rapid scrolling facility where the cursor is moved several cells at one go.

Also of particular interest is a very powerful split-screen facility. The screen can be subdivided into display areas (called ‘windows’ in the manual), each displaying different parts of the spreadsheet, and the cursor can be quickly ‘jumped’ from one to the next. There are many possible uses for this: locking row and column headings for continual display, quick movement between different parts of the spreadsheet, and keeping totals or whatever continually in view when other parts of the spreadsheet are being modified. Moreover each window can be displayed with a nice surrounding border, and can also be ‘linked’ to another window so that columns or rows in both always line up correctly. If all this sounds a little confusing to the newcomer, then take heart. You can completely ignore the facility at first, but once you are ready for it, the chances are that however you want to lay out your display, Multiplan will accommodate you.

Entering data

As with most spreadsheet systems, the ‘bread and butter’ activity centres on entering or changing numbers, titles and formulae. To achieve this, simply move the cursor to the cell to be changed and start typing whatever is required there. The only thing to watch out for is that text entry must be preceded by selecting ‘Alpha’ mode (simply press ‘A’ before typing the text) otherwise the chances are Multiplan will assume you are entering a command – occasionally disastrous. For example, a sensible abbreviation for Total-Costs-Yacht could be ‘TCY’. Enter this without pressing ‘A’ and Multiplan does a ‘Transfer-Clear-Yes’ wiping out the entire spreadsheet! Don’t believe it could happen? A PCW editor (I’ll spare his blushes) did it! Well, it probably wasn’t a yacht, but a yo-yo or a yard-of-ale or something…

The formulae themselves can be built up using a wide range of maths and other functions, including trig, standard deviation, string concatenation, logical and table look-up, etc. The notation used is the classic keyboard version of school maths notation, easily learned by anyone not already familiar with it. As we have already mentioned, formula references to cells require the ‘RyCx’ notation – eg, the formula to add the first 2 cells on the first row could be written as ‘R1C1 + R1C2’. However, there is a little trap lurking for experienced spreadsheet users – the replication facility does no formula adjustment whatsoever. Thus, if the above formula was located at R1C3, and then copied to the 99 cells below, each and every copy would be ‘R1C1 + R1C2’ and the expected Column 3 = Column 1 + Column 2 would not be achieved. It turns out that the original formula, quite correct if no replication is envisaged, should be ‘RC[-2] + RC[-1]’, meaning ‘add the cell in the current row two columns back to the one in the current row one column back’. Now, wherever this formula is located, it will add together the two previous values on the row, and in particular, if replicated right down column 3 it will do the column sum correctly.
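The trap is easier to see in miniature. Here is a small simulation in Python of the copy semantics just described (illustrative only; the rc helper merely mimics Multiplan's RC[] offsets):

    # A tiny sheet held as a dict keyed by (row, column).
    sheet = {(1, 1): 10, (1, 2): 20,
             (2, 1): 30, (2, 2): 40}

    def rc(row, col, row_off, col_off):
        """Resolve a relative reference such as RC[-2] from the current cell."""
        return sheet[(row + row_off, col + col_off)]

    # 'RC[-2] + RC[-1]' copied down column 3: the text of the formula is
    # copied verbatim, but the offsets make every copy add the two cells
    # to its own left - so each row's sum comes out correctly.
    for row in (1, 2):
        sheet[(row, 3)] = rc(row, 3, 0, -2) + rc(row, 3, 0, -1)

    print(sheet[(1, 3)], sheet[(2, 3)])   # 30 70
    # Copying the absolute 'R1C1 + R1C2' verbatim instead would leave 30 in
    # every cell of column 3 - the trap described above.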

If typing ‘RC[-2] + RC[-1]’ seems like a bit of a fingerful (tactile equivalent of mouthful) then Multiplan to the rescue! Instead of working out ‘RC[-2]’, etc, simply use cursor moves in mid-formula entry and Multiplan will type in the formula for you. In the above example only the ‘+’ need be entered from the keyboard, the rest of the formula being built up by using the cursor to point to the cells to be referenced.

It is also possible to refer to cells by their row or column name and thus build up formulae like ‘profit = sales – costs’. Since (a) this is immediately comprehensible and (b) always replicates correctly, the extra typing involved is well worth it!

In conclusion, I must say that I did not greatly like Multiplan's methodology for referencing cells. It should be noted that cell references occur not only in formulae, but are also required by the majority of commands (see below), so a major part of one's time at the keyboard is spent using them. In fairness I must point out (a) that my previous spreadsheet experience has been with the Visicalc style of cell-reference and (b) that Multiplan compensates for this minor irritation with some other excellent features and facilities.

Commands

Thus far, we have looked at Multiplan’s basic essential facilities, but of course there are many other, typically more peripheral (in both senses!), functions needed to provide a comprehensive spreadsheet system. These extra functions are provided for by Multiplan commands, and invoked by selection from a command-menu.

Actually, in passing, we have already touched upon four commands provided by Multiplan – ‘Go-to’ cell, ‘Alpha’ for entering text, ‘Copy’ for replicating cells, and ‘Window’ for the split-screen facility. There are in fact 20 in all, each starting with a different letter of the alphabet, and all permanently displayed at the bottom of the screen. Bearing in mind that there were only six letters of the alphabet to spare, the implementers have done a pretty good job of choosing 20 sensible names – probably the worst one is ‘Alpha’ (it couldn’t be ‘Text’ because that clashes with ‘Transfer’ and ‘Transfer’ couldn’t be ‘File’, ‘Storage’ or ‘Disk’ because F, S and D are in use, etc).

Anyway, in the unlikely event that a command’s meaning is unknown, or in the more probable event that the precise method of usage is unclear, there is an excellent ‘Help’ facility available. Basically the list of command names has its own cursor, which can be shifted along by pushing the space bar. Commands can be selected by moving the command-cursor then pushing ‘Return’ (or by just typing the command’s first letter – much quicker). However, if ‘?’ is hit instead of ‘Return’ the spreadsheet screen is replaced with a ‘help’ screen for the currently indicated command. Moreover the information is not just a few cryptic instructions, but a fairly comprehensive run-down which in some instances extends to several pages. By the way, all the help-screen information is read from disk when needed, and does not affect the precious memory allocation for the spreadsheet itself.

To get some idea of the command facilities available, here is a quick rundown of all 20:

  • Alpha – Enables text to be entered at the current cursor position.
  • Blank – Blanks out one or more cells. Contents are blanked out, but the display format assigned to the cell is unchanged. Not the same as Delete since, in particular, the following rows or columns are not shifted.
  • Copy – Copies cells from one place to another (ie, replication). Relative-copy is not possible (see text above) – you must do an absolute copy of a relative formula!
  • Delete – Deletes a row or column of cells, moving all subsequent rows/columns back by one.
  • Edit – Instead of correcting a long formula by retyping it from scratch, this command can be used to apply the changes quickly.
  • Format – Numerous different display formats are possible: different column widths, centre, left, right justify, scientific, integer, financial, primitive bar graph, and more besides! As an extra convenience, a default format can be specified, assigning the format you most expect to use to all cells not explicitly reformatted to something else.
  • Goto – Goes to the cell specified by its name or coordinates.
  • Help – Gives general help information not covered by the help-screens for each specific command.
  • Insert – Inserts a blank row or column, moving all subsequent rows/columns along by one.
  • Lock – Locks or unlocks specified cells. Can permanently lock all formulae – useful for turnkey systems.
  • Move – Moves a row or column to between two other rows/columns.
  • Name – Enables a cell or group of cells to be given a user-supplied name. This name can be used in formulae, and also by the ‘Goto’ command. It saves confusion if the name here is the same as the visible title.
  • Options – Used to set basic operational features, eg, switch off auto-recalculation or audible error beeps. The former is very useful when the spreadsheet is getting fairly full and every change takes several seconds – not to be registered on the screen, but for its effects to permeate through the system. The latter is absolutely priceless if you work at home and your family ‘can't stand that incessant cheeping’ (to quote my good lady).
  • Print – Can print to printer or disk file. Option to print the formulae as well as the calculated values. This is useful for documenting or debugging the model. It's also possible to print selected areas.
  • Quit – Finish – back to the resident operating system (eg, CP/M, MS-DOS, etc).
  • Sort – Sorts calculated or entered numbers or text by suitably shuffling rows.
  • Transfer – Load, save, delete and other disk file operations. Of particular note: Multiplan can read Visicalc data files, or read/write files in a well-documented external interchange format, as well as using its own internal disk format. As it can also print to disk, it is extremely versatile in its file-handling.
  • Value – Can optionally be used for entering formulae or numbers.
  • Window – Split screen facility.
  • Xternal – Used to read in the answers calculated by one spreadsheet as raw input data for another. Can be used for ‘consolidation’.

Documentation

The documentation is comprehensive, clear and well-written. The bulk of it is in a stout ring-bound manual (minor niggle – the rings are not circular and tend to snag the pages when you are turning them quickly). It has obviously been put together with the sort of thoroughness we would expect from Microsoft, right from the Contents page at the front to the Index at the back. The basic material provided is:

  • System-specific instructions. How to create your working disks under your particular operating system.
  • Tutorial. Organised as seven lessons, it gives you key-by-key instructions, starting with simple cursor moves in lesson one through to multiple worksheets at the end. Well illustrated.
  • Reference section. In alphabetical order, everything you need to know about the commands, key-strokes and formula-functions. Also includes a list of all system messages, together with advice on what to do when you encounter them.
  • Appendices. Extra helpful information, including a glossary and notes for Visicalc experts – a nice touch!
  • Quick Reference Guide. A separate pocket book (16 pages), being a condensation of the reference section in the main manual.
  • Help Screens. Comprehensive instructions on-screen for every command and a few of the other facilities.
With this breadth of documentation, there should be something to please all levels of user. Complete beginners can try the tutorial; experts will probably just use the quick reference guide or help-screens; and everyone can make good use of the comprehensive index.

Sirius slip-up

Having given the Apple version a thorough work-over, I arranged a joyride on somebody else’s Sirius. The article was nearly complete – I just needed to pencil in the Sirius Benchmark times and then off to Mustique for yet another three weeks.

First problem: Sirius version of Multiplan manual temporarily mislaid. Well, I should know the system well enough by now. So, in preparation for Benchmark 1, I quickly set up the first 12 columns by 200 rows of the spreadsheet. (Readers familiar with the benchtests will know that this results in a display of 1..12 in the first row, 13..24 in the second, etc.)

Next I needed to set up column 13, each cell in it being the sum of the previous 12 in the row. Easy! Just use the row-sum function in column 13 of row 1, and then copy it down to all cells below it. Unfortunately I couldn't remember the correct syntax for using it. Anyway, after experimentation I found that 'SUM(C1:C12)' at least did not give a formula error message, but it did seem to be displaying the wrong answer. Okay – time to copy it. Well, much disk-whirring and clanking, then watch the calculation count-down on the VDU display. 45 minutes later I'm still waiting; the disk is still whirring and clanking; the countdown's still not finished. I'm frightened to switch off in case I corrupt the disk (it's not mine, anyway) and I can't stop it at the keyboard. In the end it took about 50 frustrating minutes.

So, what went wrong? Well, basically a minor slip-up in my use of the SUM formula. I eventually got it right (by using a help-screen, what else?): ‘SUM(RC[-12]:RC[-1])’ and the whole test was over in under a minute. The formula I had originally used did not add the row up, but calculated the whole 12 x 200 array of numbers, and of course this formula was then copied 200 times down the column – a bit of a hefty number-crunch!

Anyway, the moral of this story is: make a good effort to learn Multiplan’s cell referencing – it could save you a long wait!
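To see why the two formulae behave so differently, here is a minimal sketch – in Python rather than Multiplan, with an illustrative 200 x 12 grid standing in for the spreadsheet – of what each style of reference denotes:

```python
# A minimal sketch (Python, not Multiplan) of the two styles of reference.
# grid[r][c] holds the benchmark's test pattern: 1..12 in row one,
# 13..24 in row two, and so on down 200 rows.
grid = [[row * 12 + col + 1 for col in range(12)] for row in range(200)]

# 'SUM(C1:C12)' names whole columns 1 to 12, so every copy of the formula
# re-adds the entire 12 x 200 array of numbers.
def sum_whole_columns(grid):
    return sum(cell for row in grid for cell in row)

# 'SUM(RC[-12]:RC[-1])' is relative: "the 12 cells to the left in this row",
# so each copy down column 13 touches only its own row.
def sum_row_relative(grid, r):
    return sum(grid[r][:12])

print(sum_row_relative(grid, 0))   # 78 – the intended row-sum of 1..12
print(sum_whole_columns(grid))     # 2881200 – the whole-array total
```

Copied 200 times down the column, the first form performs that whole-array addition 200 times over – hence the 50-minute wait.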

Conclusion

We have taken a fairly fast swoop right through the major facilities and features of Multiplan; so fast that some very valuable features, not generally available in mere state-of-the-art spreadsheet systems, may have gone unnoticed. Just for the record, Multiplan gives you:

  • A 'Sort' command. If you need to sort columns of figures or text, it is simply impossible to do so without one.
  • Multiple worksheets. Results from one worksheet can be communicated to another, useful for consolidation.
  • Multiple split-screens. Very flexible facility to design VDU screen display of spreadsheet.
  • Flexible file handling. In particular data interchange with other software is feasible, and Visicalc data files can be read (but not written! – no doubt Microsoft doesn’t want to encourage users to migrate that way!).
  • Available on 16-bit microprocessor (8088/6). The new 16-bit processors can handle a lot more memory, and spreadsheet systems which have been properly installed on them can use this extra memory for setting up bigger spreadsheets (see Benchmarks).
  • Comprehensive help-screens. In addition to these, Multiplan also provides more mundane, but by no means universally available, facilities – such as cell references by name, formula protection, formula printout, print to disk and formula editing.

Certainly Multiplan has a lot of facilities to offer, but what is it like to use? Well, some minor complaints here: the row/column numbering scheme increases the amount of typing for formulae; you have to consider replication consequences when you enter a formula, rather than when you do the replication; and you have to choose the 'Alpha' command before you enter text (okay, it's only one extra character, but most other spreadsheet systems don't do it this way). To balance these minor grumbles there are comprehensive error messages, and understandable prompts for all input.

So finally, my advice to spreadsheetless owners of Apples, CP/M or MS-DOS systems, or to anyone looking for an upgrade: put it near the top of your list!

Benchmarks and other measurements

These tests were run on an Apple II system with 64k of RAM (which is in fact mandatory) and an 80-column display card (which is optional). Available space for the spreadsheet itself amounted to 21k. Figures are also included for the Sirius (with 128k of RAM, and theoretically extendable to 800k+), running MS-DOS and allowing greater storage space for the spreadsheet. Where the Sirius figures are different they are appended in parentheses after the Apple figures.

Incidentally, a Sirius retails for around £2500, and the nearest equivalent Apple system (but with lower disk capacity, half the RAM, 8-bit processor) would be around £1750.

  • Spreadsheet size: 63 columns wide by 255 rows.
  • Numeric precision: 14 digits.
  • Max column width: 32 characters.

The benchmark tests are described in ‘Which Spreadsheet’, PCW Feb 1983.

Benchmark 1: (a) max rows accommodated: 95 (235); (b) recalculation time: 60 (55) seconds – ie, 1.5 (4) rows per second; (c) vertical scrolling: 6 (6) rows per second; horizontal scrolling: 4 (4) columns per second.

Benchmark 2: max rows of text accommodated: 190 (Sirius not tested).

Benchmark 3: max rows of numbers accommodated: 190 (Sirius not tested).

Price: Around £150.

Checklist

Documentation: 400+ pages, contents, tutorial, reference, index, quick reference and help-screens. Well-illustrated. Excellent.

User-friendliness: Consistent and easy to use — cell-referencing can be a little tricky!

Error-handling: 20+ error messages. Erroneous calculations (eg, zero-divides) displayed as special error values.

Facilities: Arithmetic and other functions: +, -, *, /, %, string operations, logic, descriptive statistics, trig, logs, look-up and more besides!

Configuration: version tested easily configured for different types of Apple screen.

Graphics: a let-down compared with the other facilities!

Interface to other software: specifically can read Visicalc files, and print to disk. Can also be interfaced to other software using data interchange format (requires programming skills to do this).

Spreadsheet overlays: yes – can do consolidation or merge information into existing spreadsheet.

Turnkey: Apple version is turnkey with all disk formatting, copying, etc, achievable without recourse to Apple DOS.

Insertion, deletion and replication: yes.

Display flexibility: just about everything you could possibly want. Excellent.

Protected cells: yes.

Formula printout: yes.

Formula editing: yes.

Automatic/manual recalculation: yes.

Out of memory: memory left permanently displayed. Recovers correctly when it runs out of memory.

Long jumps: can jump directly to any specified cell.

Sorts, searching and logic: yes.

First published in Personal Computer World magazine, April 1983

Data Management to the Rescue?

Kathy Lang checks out a flexible new CP/M package.

Regular readers will know that many of the packages I've reviewed in this series have particular strengths that make them well suited to certain areas of data management. This month's offering, a British package called Rescue, which comes from Microcomputer Business Systems and runs under CP/M, is a general-purpose, menu-driven data management package which has much in common with others in this field. But it has unusually flexible provision for different types of information, and its data validation is among the best I've seen.

Rescue comes in three parts: the first deals with configuring the system for your computer, and is not needed again unless you make major changes. The second part covers the creation and amendment of the data files, and of the screen and report formats, while the third permits record amendment and display. This separation makes it easy to set up a system in which most users have access to the information in the files, but cannot change the format of those files or interfere with any provision for protecting parts of the data for security reasons.

Data is stored in fixed-length records in Rescue, but some ingenious methods are used to keep data storage to a minimum – I’ll say more about that later. Once you’ve set up a record format, you can still add fields to the end of the records, but you can’t change the sizes of existing fields unless you’ve made provision for that in advance. (MBS is apparently about to release an option to permit more radical changes to existing files, but it isn’t available yet). You can access the records in two ways. Individual fields may be used as keys, and any one of them used to access a particular record for display and/or editing. You can also select subsets of the data by setting up a set of selection rules, which are used to extract a set of records for browsing on the screen or for printing. You can set up as many screen and report definitions as you please for any set of data; these definitions need describe only a few fields in a record if necessary, and any or all of these descriptions may be password protected.

Rescue is used through menus, but users can set up their own menus through quite simple procedures. Thus you can set up a series of operations to be activated by one menu option. You can’t at present access one file from another, so that the current version of Rescue does not have true database capabilities.

Constraints

Figure 1 shows the major constraints imposed by Rescue. The maximum record size of 1024 is the same as several others I’ve reviewed, but Rescue’s dictionary capability makes it more economical of data storage than many.

Some people will find the limitation of 60 characters in a field more serious. I haven’t included in the figure a full list of the field types allowed, as it is very lengthy. Virtually any kind of data format can be expressed with one of the field types provided. I’ll say more about them in the next section.

File creation

The process of file creation is shown in Figure 2, which is a ‘road map’ of all the menus associated with the data definition part of Rescue.

The first stage in file creation involves setting up a data description file, specifying the basic format of each record and the keys it will have. At this stage you must assign a data type to each field. There are four main groups of data: alphanumeric, numeric, date, and dictionary. There are several forms of data type in each group; for instance, character data may be just that and contain any valid ASCII character, or it may be alphanumeric, in which case it may only contain letters or digits and any attempt to enter invalid data will be rejected by the system. There is quite a variety of numeric fields, too, including money (sterling). You can specify that a field is to conform to a mask, to ensure that such items as account references, which often have prescribed formats, are entered in a valid form.

Probably the most unusual type of data is the dictionary field, which permits the person entering data to include only certain values. There are two kinds of dictionary field; a short form, which permits up to 29 characters in total to be used for each field, and a long form, which allows up to 255 entries, each of up to 60 characters. The latter are shared among all the fields in the file, so supposing one has a series of questions each with the same range of answers – for example, answers ranging from Poor to Excellent in a market research survey – you only need one dictionary entry for all the fields to refer to. Each response takes up only one character in the record in the data file for either type of dictionary, so the method is really a way of combining coding with captions for codes.
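The principle is easy to sketch. The following Python fragment – with hypothetical names, not Rescue's real file format – shows how one shared dictionary of captions lets each response occupy a single byte in the record:

```python
# Sketch of the long-form dictionary principle (hypothetical names, not
# Rescue's actual storage format). One shared dictionary of up to 255
# captions; each record stores a single byte that indexes into it.
RATINGS = ["Poor", "Fair", "Average", "Good", "Excellent"]

def encode(caption):
    """Store one byte per response: the caption's 1-based dictionary index."""
    return RATINGS.index(caption) + 1   # raises ValueError for invalid values

def decode(code):
    return RATINGS[code - 1]

# Three survey questions sharing one dictionary cost 3 bytes, not 3 x 60.
record = bytes(encode(answer) for answer in ("Good", "Poor", "Excellent"))
print(list(record), [decode(b) for b in record])
```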

Every field within the record must also fall into one of four entry categories: mandatory (ie, the field must always have a value), optional (the field may be empty), calculated or display-only. Calculated fields are derived from calculations on constants or on other fields in the same record. Display-only fields are provided so that for certain modes of access fields can be shown but not altered – account numbers might for instance be protected in this way. Any field in a record may also be linked to others in a number of ways.

Direct linkage provides for situations where some fields only have values if another field – said to be the controlling field – has a certain value. For instance, details about a property might say whether the property were freehold or leasehold, but only if it were leasehold would it be sensible to ask for the life of the lease and the annual charge. This approach can also be used to deal with records containing lists of information; you might want to store the names of all a person's children, where some people might have as many as six, without asking six questions about childless people. Most packages expect you at least to hit one key for each question when entering data from the keyboard, but with the Rescue approach entry can be more finely tuned to stop prompting for answers if they are not needed.

During file definition you must also specify the fields which are to be used as keys. Rescue treats the key field which is physically nearest to the beginning of the record as the main key, in that you have to ask specifically for other keys when you come to access the file; so it can save a little time to think about what order to store fields in the record. Up to 10 fields may be defined as key fields. Keys may be either unique or duplicate, and Rescue checks when supposedly unique key values are entered. All the key fields are referenced from a single index, which is automatically kept up to date when data is added or amended.
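The distinction between unique and duplicate keys can be illustrated with a small sketch – illustrative only, and not Rescue's actual index structure – including the 'case-blind' matching described later:

```python
# Sketch: a unique key rejects a second record with the same value, while a
# duplicate key simply accumulates record numbers under one key value.
from collections import defaultdict

unique_index = {}                     # key value -> record number
duplicate_index = defaultdict(list)   # key value -> list of record numbers

def add_record(recno, account, surname):
    if account in unique_index:
        raise ValueError(f"duplicate value for unique key: {account}")
    unique_index[account] = recno
    duplicate_index[surname.lower()].append(recno)  # 'case-blind' matching

add_record(1, "AC100", "Smith")
add_record(2, "AC101", "smith")
print(duplicate_index["smith"])   # [1, 2] – both Smiths found under one key
```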

The next step is to define screen and print formats for the records; you can have as many of these as you wish, and each may describe only parts of the record – for instance, to prevent confidential information being seen by everyone. Next, you tell Rescue to set up an empty data file and structure the index file, and finally you construct any custom-defined menus you will need. If you do specify more than one screen or report definition, then you will have to do some customisation of the menus in order to use the alternative formats, but this is quite a straightforward process.

Input and editing

The provisions for data validation given by the dictionary facilities, by the variety of data types and by the range checking which can also be set up at file definition time are extremely powerful – it's always possible to get the data wrong in a logical sense, but Rescue makes it quite hard to get it wrong in any other sense. That said, I did find the mechanics of correcting data a bit clumsy: if you've made a mistake and go back to edit a record, you can say where in the record you want the editing to begin, but from there you must work sequentially through – you can't work back up the screen either when entering or editing data. Since the program requires you to have a terminal which can move the cursor left and right, it seems a bit strange not to utilise cursor movement up as well, since no terminal is likely to have horizontal movement but not vertical…

When you retrieve records for amendment, you do so by specifying a particular key value; you can specify the use of any key, but you have to get the value of the first four or five characters exactly right (except that Rescue is ‘case-blind’ in this situation, so it will for instance match Smith and smith). Even when matching exactly on a key value you may retrieve more than one record, as duplicate keys are allowed. But searching for field values within ranges is only possible when you want to look at records, not when you want to change them.

Screen display

I said that you can have several definitions for a single file, so that records can be displayed on the screen in different ways for different users or applications. These screen definitions can be created by copying existing definitions and amending them, but I couldn’t find a way to see what definitions I already had except by going out to CP/M and using the Directory command. Screen layout is specified by giving row and column coordinates for each field you want to display, which I found much more difficult to use than the ‘paint-a-screen’ approach which has become fairly common. The coordinate approach also makes it more difficult to amend the layout, though Rescue does have one provision to make this a little easier by letting you specify a re-ordering of the display without changing the absolute coordinates.

The screen layouts are set up in the ‘definition’ part of Rescue. However, they are invoked from the main part of Rescue, through executing one of the options in the menus shown in Figure 3. Display can be of records specified either by matching one key, or by selection using the selection and extraction procedure which is described later.

Reporting

Rescue uses the same mechanism for printed reports as for screen display, so both are strictly record-based. The only provision for aggregated information is totalling of numeric fields. It is possible to force page-breaks when values of particular fields change, but subtotalling is not provided. There is, however, a very flexible facility to interface with Wordstar and Mail-Merge, so it is easy to use them in combination with Rescue to write circular letters and concoct sets of standard paragraphs.

Selection

Rescue provides the ability to select parts of the data file for browsing, printing or further selection. The main method of doing this is to set up a set of selection rules in a file, and then to apply these to the data file to produce another file containing the selected records. The selection rules are very flexible: you have all the usual comparison operators (less than/greater than/equal to/not equal to) and data values can be compared with constants or with the values of other fields in the same record. Rules can be combined to provide ANDing and ORing within and between fields, and these combination facilities together with the NOT operator make it possible to select virtually any combination of values you could need. However, personally I don’t like the need to set up rules in a file, as it is rather cumbersome in practice; if you are using the standard facilities menus you must go to the ‘Maintain Rules’ menu (at the third level of menus), create the rules, then go back to the first level of menus and down to the third level ‘Extract and Sort’ menu to actually extract the records you need. Finally (from the same Extract menu) you can display or print the records that have been found. This provides a sharp contrast to the command language approach, in which one command will extract your records and a second at the same level will display them. However, you could tune the menus in Rescue to avoid some of this ponderousness, so it’s better in that sense than menu systems which you can’t adapt.
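The flavour of these combinable rules can be sketched as follows – in Python, with a hypothetical in-memory representation; Rescue itself, as noted, keeps its rules in a file:

```python
# Sketch of combinable selection rules (hypothetical representation).
from operator import eq, gt

def rule(field, op, value):
    return lambda rec: op(rec[field], value)

def AND(*rules): return lambda rec: all(r(rec) for r in rules)
def OR(*rules):  return lambda rec: any(r(rec) for r in rules)
def NOT(r):      return lambda rec: not r(rec)

# "balance over 500 AND (region is North OR region is West)"
select = AND(rule("balance", gt, 500),
             OR(rule("region", eq, "North"), rule("region", eq, "West")))

records = [{"balance": 900, "region": "North"},
           {"balance": 200, "region": "West"}]
print([rec for rec in records if select(rec)])   # the extracted subset
```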

While actually comparing fields, upper and lower case letters are regarded as equivalent. You can use wild codes: ? will match any one character, * will match one or more characters. For dictionary fields, the order for comparison purposes is the order in the dictionary, so if you have a set of answers with Poor as the first and Excellent as the last, Poor will be regarded as 'less than' Excellent even though P comes after E in the alphabet. This is usually what you want, and with much coded data it would be a very valuable feature.

Sorting

Rescue can sort a data file on up to five fields in one operation; the process is similar to selection, and you can also combine selection and sorting to give a sorted extract file. Sorting is in either ascending or descending order; as with selection, dictionary fields sort in their dictionary order (Poor before Excellent) rather than in alphabetical or numeric order. In addition, ordinary character fields can be given a sort value which is different from their simple alphabetical order. This could be particularly useful where you had fields such as book titles, which often have prefix words such as A or The that you want to ignore for sorting purposes but wish to include as part of the field for printing. (In most packages these prefix words must occupy a separate field, which will be empty for titles without a prefix word.)
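The 'sort value' idea is easily illustrated. Here is a small Python sketch (the prefix list and titles are invented for the example) of sorting on a derived value while keeping the full field for printing:

```python
# Sketch: ignore prefix words such as 'A' or 'The' when sorting titles,
# but keep them as part of the field for printing.
PREFIXES = ("A ", "An ", "The ")

def sort_value(title):
    for p in PREFIXES:
        if title.startswith(p):
            return title[len(p):]
    return title

titles = ["The Hobbit", "A Clockwork Orange", "Brighton Rock"]
print(sorted(titles, key=sort_value))
# ['Brighton Rock', 'A Clockwork Orange', 'The Hobbit']
```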

Calculations

The calculation facilities in Rescue are quite powerful in the input phase, and practically non-existent after that. When you set up a data definition file, you can specify that a field is to be calculated from constants, or from combinations of other fields (including dictionary fields) in the same record. All the usual arithmetic operators are available. After input the only calculation you can request is totalling on printed reports; this is activated by requesting totalling of a field when a description file is set up. Up to 10 fields in any one description file may be set to be totalled.

Security

Protection in Rescue is of two kinds. It is possible to take the programs used in the Define stage off the run-time disk, so that the ordinary user can use file definitions and screen and report formats, but not amend them. At a more detailed level, password protection can be provided for particular data files, for individual description files (so that a user can be given access only to part of the data in a file) or for particular menu items in custom built menus (so that some users may have access to some functions but not others, while other users have greater facilities, but all within one menu). This is a flexible and powerful scheme, and should provide for most needs.

Stability and reliability

I didn’t have any problems over reliability with my use of Rescue. As to stability, new versions of Rescue, which are ‘cost options’, are intended to be compatible with existing versions. New features in the pipeline include a version for MS-DOS and a multi-user version.

Tailoring

As usual, the first task is to tailor Rescue for your particular terminal. This appeared quite straightforward (although, as is the common bad practice, you can't be sure the tailoring has worked until you actually run the main Rescue suite). However, I had one misunderstanding which I never managed to sort out; this resulted in repeated prompts being printed on the same line as the error messages, which were thereby overlaid so that I couldn't read the error message. I wasn't able to discover whether this was an error in the software, the documentation, my interpretation of them, or my Sirius manual, but it hasn't happened to me before. While tailoring for the terminal, you can tell Rescue about cursor movement left and right but not about which keys move the cursor up and down, so much potential editing flexibility is lost.

Once into Rescue, the main tailoring facility is the ability to set up sequences of activities on custom-defined menus. This gets round some of the inflexibilities associated with menu-driven systems, and I found the approach quite easy to use.

Relations with outside

Rescue can write files in standard ASCII characters, using the ‘comma delimited’ format required by many other packages including specifically Wordstar’s Mail-Merge option. Thus you can set up files of information which you want included in circular letters or standard paragraphs, and then fire them off to Wordstar or another similar package.
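As a sketch of the general format (the field names and quoting convention here are illustrative, not Rescue's exact output), a comma-delimited extract looks like this:

```python
# Sketch of 'comma delimited' output: field values quoted and separated by
# commas, one record per line – the form MailMerge-style packages read.
import csv

records = [{"name": "J Smith", "town": "Leeds"},
           {"name": "K Lang",  "town": "London"}]

with open("extract.txt", "w", newline="") as f:
    writer = csv.writer(f, quoting=csv.QUOTE_ALL)
    for rec in records:
        writer.writerow([rec["name"], rec["town"]])
# extract.txt now holds lines such as: "J Smith","Leeds"
```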

Within Rescue you can include on a menu the ability to run another program, so it would be possible to tailor a menu to carry out a selection/printing sequence of this kind, called by Rescue 'record processing', without the user having to go back to CP/M. You can't at the moment read external files of ASCII records into Rescue, though a menu option to do this is already shown, and I'm told it will be implemented in the very near future.

User image: software

Once again, your overall reaction to Rescue will be governed by whether you like menu-driven packages or not. I found the ability to tailor menus to provide facilities oriented to particular requirements a big help in mitigating the inflexibilities of menus. However, most users are likely to follow the well-established principle of 'satisficing' (a word coined by Herbert Simon, the psycho-economist, to describe the tendency to accept adequate or satisfactory results rather than go for the best possible) and only set up extra menus when they absolutely have to, for instance to access alternative screen layouts. So I suspect that mostly people will use the rather cumbersome standard menu facilities. I also had a rather mixed reaction to the complete separation of description of, and access to, the data files. Within an organisation which has a database administrator (who might simply be the boss in a small business) this could be a useful separation for security reasons, but it would be less helpful where the same person organises the data files and puts information into them – perhaps in a small office or one-person business.

Within the package itself, I as usual found some goodies and some nasties. The progress through the menus was orderly and logical and was made straightforward by the provision of the two ‘road maps’ which I show as Figures 2 and 3. The process of prompting was easy to understand. It would have been even easier if, when a question has a default response, this was displayed before the question is posed – in many cases the default is not shown even after you’ve accepted it unless you go back and edit the record concerned. Allowing the use of identifiable abbreviations, both for field names and for data values, is sensible.

I didn’t like the use of row and column coordinates when formatting screen displays and printed reports, especially as there is no default format so you always have to supply one. The ‘paint-a-screen’ approach is much easier in general than coordinate specification and if this is not supplied then there should at least be a default format with records displayed one field per line starting at the left of the screen or paper. I also found the inability to move back within a record when editing a real nuisance.

Documentation

The manual is basically a reference document, but written in so much detail that it could be used to teach yourself about the package if you were reasonably familiar with data management terminology. However, the amount of detail makes it rather difficult to find your way around. Two goodies help a little in this: the use of emphasis within the text to call the reader's attention to the most important parts of each section, and the printing of chapter headings right-aligned on each page (a real help to browsing at a general level). But the chapter names didn't always make it easy to guess where a particular feature would be described, and since there was neither a detailed table of contents for each chapter nor an index, it was very hard to get from 'now I've seen something about that feature somewhere' to the exact part of the manual in question. Part of the remedy is close at hand: if the 'road maps' (which perform most of the functions of a reference card) were annotated with the numbers of the sections documenting each menu item, readers would find it very much easier to locate the particular piece of information they need fast. (As this article went to press, MBS issued an index for the manual, which should help.)

The other problem I had was that while each feature is documented in detail with examples of the particular feature, there are no examples of the display or use of groups of features. For instance, all the features of data entry are described in turn, but there is no figure showing how data definitions are displayed on the screen. Nothing bolsters a user’s confidence like some complete examples shown in real screen pictures!

I can’t resist ending this section by awarding MBS second prize so far in this year’s contest for manual typo errors, with ‘Data Validification’.

Costs and overheads

Rescue costs £295, and is available from MBS. To be realistic, you would need a disk system with the regular double-sided, double-density capacity of 370 Kbytes per drive on a two-drive floppy disk system, to enable you to have all the Rescue software on one disk drive and use the other for data. I found the system very slow in loading individual program modules, which seemed to happen whenever I changed from one sub-menu to another. I was told that this was specific to the Sirius-Z80 card method of disk access, but I haven't noticed the problem with other packages I've used. The times for actually running the Benchtests are shown in Figure 4. (Details of the tests were given in PCW December 1982.)

Conclusions

Rescue provides data management facilities through individual files. Data description facilities are very powerful. Rescue provides a variety of data types and validation features more extensive than any I have found before. These features also help to make Rescue much more economical on data storage than is usual in programs which use fixed length records. You can select and sort the data to provide pretty well any required subset but the process is rather cumbersome. Screen and report formats can be varied according to the needs of particular users, which makes it straightforward to protect particular data items; you can also permit users access only to certain Rescue features. Screen and report formats are described in a rather rigid way, and there are no default formats for easy initial use.

On the other hand, the ability to send data to Wordstar's Mail-Merge option, and to run it from within Rescue, could be very valuable in some environments. Apart from the calculation features on data entry, the only calculating power within the package is the ability to total particular fields. The system is menu-driven, which can be ponderous in use, but you can if you wish design your own menus to mitigate this disadvantage to some extent. Rescue is in the main a single-file system – you cannot reference one file through data values in another. Provided this limitation is not a problem, you would find Rescue worth investigating, particularly if the variety of data types and the extensive data validation would be beneficial in your application.

Fig.1. Constraints  
Max no. files in one menu structure 20
Max file size CP/M limit or disk size, whichever is smaller
Max no. records 32760
Max size record 1024 characters (but good data compression methods)
Max no. fields 100
Max field size 60 characters, 14 digits
Max no. keyfields 10
Field types See text – several varieties of character, numeric, date (day/month/year), monetary (sterling), dictionary

Figure_002

Fig.2. ‘Roadmap’ of menus

Figure_003

Fig.3. Menu options

Fig.4. Benchmark times
BM1 Time to add 1 new field to each of 1000 records Setup time
BM2 Time to add 50 records interactively Scrolling time
BM3 Time to add 50 records “in a batch” NA
BM4 Time to access 50 records from 1000 sequentially on 25-character field 1 min 20 secs
BM5 Time to access 50 records from 1000 by index on 25-character field NA* (1-3 secs)
BM6 Time to index 1000 records on 25-character field 12 mins
BM7 Time to sort 1000 records on 5-character field 4 mins 10 secs
BM8 Time to calculate on 1 field per record and store result in record NA
BM9 Time to total 3 fields over 1000 records NA yet
BM10 Time to import a file of 1000 records NA yet
Note: NA=Not available. NA*=Not available as tested – key must match exactly.

First published in Personal Computer World magazine, April 1983

A Piece of the Action – The Multi-User Sig/Net

Terry Lang investigates the benefits – and drawbacks – of a shared access system from Shelton Instruments.

Shelton001

Front view of a multi-user system with hub and satellites stacked together.

In building their phenomenal success, microcomputers have had the advantage of needing only to provide an operating system which supports just a single user. This has enabled them to avoid much of the dead weight which encumbers mainframe systems. However, there has always been a need for micro systems to support a small number of simultaneous users – for example in neighbouring offices in a small business. (Such users will always need to share access to common data for business purposes. Sometimes users choose to share peripherals – eg, hard disks or printers – simply to save money, but the economic reasons for this latter type of sharing are likely to weaken as the technology continues to develop.)

Even in a shared microcomputer system, it has generally been economic to provide a separate processor for each user, and thus the spirit of simplicity in the operating system can be maintained. Nonetheless, the administration of the shared data does impose an additional challenge, and it is always interesting to see how this challenge is met.

In this article I will be looking at the way this is tackled by the Sig/net system produced by Shelton Instruments Ltd in North London. During a previous incarnation I was responsible for buying a large number of single-user Sig/net systems, which met all my expectations at that time, and I was keen to see how the multi-user combination would be carried through.

Hardware

Shelton002

Rear view of multi-user system showing ribbon bus cable and terminal and printer ports.

The original single-user Sig/net is itself based on a ribbon-cable bus which connects together the internal components of Z80 processor and memory board, disk controller board, and communications boards (serial and/or parallel). In developing a multi-user system it was therefore a natural step to extend the bus cable to simply chain on other systems, each supporting a single user by means of a processor and memory board. This is illustrated in Figure 1.

Figure_001

Fig. 1. Modules making up the ‘hub’ and user satellite processors on a multi-user system.

The central or ‘hub’ system with one floppy disk and one hard disk fits in a case of its own. The satellite user systems fit three to a case, and these cases are designed to stack neatly with the ‘hub’ as shown. As many satellite cases as may be needed can be chained on via the bus cable. (I understand a 14-user system is the largest installed so far.)

The basic component boards, with the exception of the new ribbon bus connector, are all those which have proved very reliable in the original single-user system. (Since the company has a considerable background in process control, reliability should be something it appreciates.) To my mind the cases do run rather hot, but I am told this has not caused problems.

The bus cable runs at a maximum speed somewhat below 1 MHz, not particularly fast but adequate for the purpose, as I shall discuss below. More significantly, it has a maximum length of only a few feet. This is sufficient for stacking the cases as illustrated in the photographs, but does mean that all the processors and disks have to be sited in the same room. Of course the user terminals are connected via standard RS232 serial communications ports, and can thus be located wherever required (using line drivers or modems for the longer distances).

Alternatively, it is also possible to connect a complete satellite to the hub via an RS232 link. This would enable a satellite with its own floppy disk to be placed alongside a user and distant from the hub hardware, but it would mean that access to the files on the hub would be correspondingly slower.

Both the hub and the user satellites use Z80A processors running at 4 MHz. For the purposes of the standard PCW Benchmark programs, which are entirely processor-bound and make no reference at all to disks, it didn't matter at all that a multi-user system was involved, since each Benchmark program ran in its own satellite processor plus RAM board, unaffected by the rest of the system. The Benchmark times, with the programs written in Microsoft Interpretive Basic, are given in the Benchmarks panel accompanying this article.

These times are as good as one would expect from an equivalent single-user system and illustrate the benefits (or perhaps one should say the lack of drawbacks) of this kind of multi-user sharing. (Of course, where user satellites share access to the common hub filestore, then the user programs will slow each other down – this is discussed in detail below.)

The one-off end-user prices for multi-user and single-user Sig/net systems are given below. These represent very reasonable value for money. Much of the system is of British manufacture or assembly, which should help price stability. It should be emphasised that in addition to the prices quoted you would require an additional terminal for each user. (Integral screens and keyboards are of course not appropriate to this configuration of centralised hardware; this does permit a range of terminal choice according to need.)

An important feature is the ease with which a single-user system can be upgraded to multi-user. The old single-user system simply becomes the hub, with one of the floppy disk drives exchanged for a hard disk. Multi-user satellites are then added as required. If you find a dealer who will give you a reasonable trade-in on the exchanged floppy, then the upgraded system should cost you the same as if you had gone multi-user straight from the start – a cost-effective upgrade path. Since a satellite case and power supply can be shared between three users, it is most cost-effective to add three users at a time, for a cost of £622 per user (plus terminals, of course).

For those who need such things, other peripheral hardware is also available – eg, graphics drivers, A/D converters, industrial I/O, S100 bus adaptor.

Shelton003

Inside view of case with three user satellite processors and common power supply.

Sharing a hard disk

So much for a single user accessing one file over the McNOS network. As the next step, I looked at the facilities for several users to access different files on one hard disk. McNOS provides for separate users to be identified by distinct system ‘user names’, and each user name is protected by its own password. All files remain private to their owner unless explicitly made public via the appropriate command.

Each user name is provided with a main directory and up to 16 sub-directories (just as if the user had 16 separate floppy disk drives), identified by the letters A to P. Thus instead of the traditional CP/M prompt of the form

A>

where A identifies the logged disk drive, in McNOS this becomes

A.C>

where A identifies the hard disk drive and C the default sub-directory for this user. Whenever the user creates a new file, space for it is taken from wherever it can be found on the drive. Some multi-user systems divide the hard disk up in advance, so that each user has a fixed allocation; but whilst this protects other users against an ill-mannered user grabbing more than his share of space, it also means that space allocation has to be fixed in advance. In a well-ordered community, the McNOS approach is much more flexible.

To measure the effect of sharing the one disk, I repeated my Benchmark with a different file on the hard disk for each of two users. When I ran the program for just one user alone, the execution time was 33 seconds; when I did the same for the second user alone, the time was 54 seconds. This very large difference was due to the different positions of the two files on the disk, thus requiring different amounts of head movement. (This is one of the bugbears for would-be designers of benchmarks for disk systems!)

Then, to measure the effects of sharing, I set the second user's program to loop continuously and timed the program for the first user. With this sharing, the execution time increased from 33 seconds to 205 seconds. This increase is explained partly by the competition for buffer space in the hub, but I suspect largely by the greatly increased disk head movement as the head moved constantly between the two files. This is inevitable for physical reasons under any operating system. Sharing access to one disk is going to have a big impact if a number of file-intensive activities are run at the same time; but this should not be a problem for programs where disk access is only occasional (eg, for occasional interactive enquiries).

Sharing a file

However, as I indicated at the beginning of this article, the real reason for a multi-user system is often to provide different users with shared access not just to the same disk, but to the same file at the same time (eg, for stock enquiry and sales order entry from several terminals). But if one program is going to read a record, alter the contents, and finally rewrite that record, then that whole updating process must be indivisible. (For if a second program read the same record and tried to rewrite its own new data at the same time, the two processes would interfere with each other.) To overcome this problem of synchronisation, a 'locking' mechanism (sometimes called a 'semaphore') is required, whereby a process carrying out an update can 'lock' the record until the update is complete, and whereby any other process accessing that same record at the same time is automatically held up until the lock is released.

On a mainframe database system it is generally possible to apply a lock to any record in this way. However, this can be rather complex (for example if two adjacent records share the same physical disk sector, then it is also important not to allow two programs to buffer two copies of that same sector at the same time).

In keeping with the spirit of micro systems, McNOS implements a simpler compromise mechanism, by providing one central pool of 'locks' stored as 128 bytes in the hub. A user program can set a lock simply by writing to the appropriate byte, and release it again by clearing that byte. It is up to programs which wish to share access to the same data to agree on which locks they are to use and when they are to use them. In general the programs will by agreement associate a lock byte with a whole file rather than with an individual record, as this avoids the problem of two adjacent records sharing the same buffer. It also avoids the problem of the restricted number of locks (even if a bit rather than a byte is treated as a lock, this still only provides 1024 locks).

McNOS maintains the lock record on the hub as if it were a file (of just one record) called LOCKSTAT.SYS, though this 'file' is in fact stored in RAM and never written to disk. A user program which wishes to set a lock simply generates a request to read this record. If the record is returned with byte zero set to non-zero, this indicates that some other process is itself busy setting a lock: the program must then wait and try again later. When the record is returned with byte zero set to zero, the program may examine the bytes (or bits) it wishes to set and, if it is clear to proceed, set them and rewrite the record. (The reverse process must be followed later to clear the bytes and hence release the locks.)
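As a sketch of how a program would use this, the following Python fragment mimics the sequence just described; read_lockstat and write_lockstat are hypothetical stand-ins for the McNOS record read and rewrite requests:

```python
import time

# Hypothetical stand-ins: read_lockstat() returns the 128-byte LOCKSTAT.SYS
# record; write_lockstat(rec) rewrites it. Byte 0 is the system's own busy
# flag, which serialises the read-modify-write window in the real system.

def acquire(lock_byte, read_lockstat, write_lockstat):
    while True:
        rec = bytearray(read_lockstat())
        if rec[0] != 0:             # another process is busy setting a lock:
            time.sleep(0.05)        # wait and try again later
            continue
        if rec[lock_byte] != 0:     # our chosen lock is already held
            time.sleep(0.05)
            continue
        rec[lock_byte] = 1          # claim the lock...
        write_lockstat(bytes(rec))  # ...and rewrite the whole record
        return

def release(lock_byte, read_lockstat, write_lockstat):
    rec = bytearray(read_lockstat())
    rec[lock_byte] = 0              # clear the byte to release the lock
    write_lockstat(bytes(rec))
```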

To measure the impact of this locking mechanism, I next changed the Benchmark program for the first user so that it shared exactly the same data file as the second user. McNOS provides a particularly convenient way of doing this, for it is possible to create in one directory entry a pointer not simply to a file, but rather to another file entry in another directory. Thus all I needed to do was to change the directory entry for the first user so that the original file name now pointed to the data file of the second user. Running the Benchmark for either user alone now took 54 seconds (ie, I was using the ‘slower’ of the two data files as far as disk head movements were concerned). I then changed the Benchmark program itself for the two users, so that each read/write pair was bracketed by a lock and an unlock operation as would be required for sharing the file. Now running the Benchmark for either user alone took 106 seconds – a measure of the overheads of using the locking mechanism.

Finally I ran the programs for the two users simultaneously. This meant that the overheads of the locking mechanism, of buffer sharing in the hub and of competing head movements were now all included, resulting in a total execution time of 262 seconds. All of which simply shows that the sharing of data in this way consumes resources (as usual, you do not get 'owt for nowt).

Another important resource is of course software. Just because the operating system provides a locking mechanism does not mean that you can take any CP/M program, run it from two terminals, and neatly share simultaneous data access. This will happen only if the program is explicitly written in the first place to use the locking mechanism. At least two general data management packages are already available which use the McNOS locking mechanism: 'Superfile' from SouthData of London (reviewed in PCW January 1983), and 'aDMS' from Advanced Systems of Stockport (PCW review shortly).

Multi-user software

Thus in the Sig/net multi-user configuration we can see hardware which is a simple extension of a single-user system. However, the software extension is not quite so straightforward when moving from a single-user to a multi-user operating system. The need for such a system of course became apparent some considerable time ago. Unfortunately, the first attempts by Digital Research to extend CP/M in this direction ran into a number of difficulties. Therefore Shelton was obliged to look elsewhere, and eventually obtained the McNOS (Micro Network Operating System) system from its originators in the USA. McNOS aims to provide a file store and printer spooling system in the hub processor, plus a CP/M-like environment for each satellite user, and the necessary communications software to link them together. As others have found who have followed the same route, a lot depends on exactly what you mean by 'CP/M-like'. While a well-behaved program may just use CP/M by calling on it in the approved fashion for any functions it needs carried out, many other programs also call upon the internal subroutines of CP/M or utilise direct access to its internal data tables.

Indeed, in the early days of CP/M, many programs were forced to employ such dodges in order to work at all. (One well-known package reportedly follows each call to write to a file by a ‘close’ call in order to force the writing of any partially filled buffers; though the file is thus repeatedly closed and never subsequently re-opened, earlier versions of CP/M would still allow the following ‘writes’ to take place.) For such programs any strict implementation of CP/M is sure to stop them running. With additional work by Shelton, these problems were eventually overcome by relaxing the conditions of the CP/M-like environment to permit such dodges to be employed.

In the single-user versions of CP/M such dodges did little harm since, if the worst came to the worst, the user would only upset his own program. In a multi-user situation, however, it must be realised that such dodges, if incorrectly employed by a user program, can upset other users as well. This has to be accepted as the price of making sure that the whole wealth of existing CP/M software will continue to run in the multi-user environment.

Before looking at how disks and files can be shared between several users, I thought I should first check how much delay is introduced into file accesses for a single user with a file which is no longer on his own satellite system, but which is now accessed on the hub through McNOS over the connecting lines. For this purpose I constructed a file of fixed-length records, and wrote a simple Basic program which read and then rewrote each record. Records were taken alternately from either end of the file, stepping up from the bottom of the file and down from the top until the two met in the middle, thus ensuring a reasonable spread of disk head movement. To provide a norm for my measurements, I first ran this program in a true single-user standalone CP/M Sig/net system with floppy disks, and obtained an execution time of 257 seconds. Next I transferred the floppy disk to the hub of the multi-user system and re-ran the program from a satellite. The first thing I noted (cynic that I am) was that the program still ran, and that the floppy format was indeed the same under McNOS as CP/M. Would you now care to guess the execution time running over the network? In fact it was 53 seconds, a reduction of almost 80%! The reason for this of course (and it may be 'of course' now, but I confess I didn't expect it at the time) is that much of the 64K RAM in the hub system can be devoted to file store buffering, thus minimising the number of physical transfers actually needed. (If other users had been running at the same time, they would have taken their own share of these buffers. Where there is competition, McNOS sensibly arranges to keep in its buffers that information which has been most recently accessed.)
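For the record, the access pattern of that test program can be sketched as follows – in Python standing in for the Basic original, with the record length and count assumed rather than those I actually used:

```python
# Sketch of the test's access pattern: fixed-length records read and
# rewritten alternately from either end of the file, meeting in the middle.
RECLEN, NRECS = 128, 200   # assumed sizes, for illustration only

def exercise(path):
    with open(path, "r+b") as f:
        low, high = 0, NRECS - 1
        while low <= high:
            # take one record from each end per pass
            for n in ((low, high) if low != high else (low,)):
                f.seek(n * RECLEN)
                rec = f.read(RECLEN)   # read the record...
                f.seek(n * RECLEN)
                f.write(rec)           # ...then rewrite it in place
            low, high = low + 1, high - 1
```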

Shelton004

Processor/memory card, serial communications card and bus interface to support a single user.

The terminal command language

In the beginning were mainframes, which ran programs in batch mode. Because the user could not direct his program from a terminal but had to think ahead for every likely eventuality, the operating system provided a 'Job Control Language' to help in directing the compiling, loading and executing of programs. Some Job Control Languages were so elaborate that they could even be used to solve differential equations (or so rumour had it). Then came the micros and operating systems like CP/M, with very simple commands which could be used from terminals. This command structure could hardly be dignified with the title 'language' (even though SUBMIT and XSUB do give the possibility of issuing several commands at once). There does seem a need for a more comprehensive job control language, even on micros, for tailoring packages and giving the user turnkey systems. (Sometimes this is done through a specially written program, or via a general-purpose 'front-end' package which sits on top of CP/M.)

McNOS tackles this situation by providing its own job control language, complete with variables, arithmetic and assignment statements, conditional expressions, and subroutines. All this is of very great power, but at the cost of considerable overheads in processing time. To test this out, in a pale imitation of those who solved differential equations with the job control language on mainframes, I coded one of the PCW Benchmarks in the McNOS command language. This 'program' is shown in Figure 2. I estimate (since I didn't feel inclined to wait for the whole 1000 iterations to finish) that this program would have taken over 14,000 seconds to complete (compared with 9.6 seconds in Basic)! Time may not be so critical in more typical job control situations, but it must be possible to do better than this. However, you do not have to use the full language if you don't need it. It is perfectly possible to stick to a very small subset of the simple commands, which then makes the system very like CP/M. Unfortunately, of course, it cannot be exactly like CP/M, because it is necessary to maintain a unified underlying syntax capable of supporting the larger language too. As a fairly experienced user of CP/M I must say I had no difficulties with the differences, though they would prevent a novice user from working with a standard CP/M primer as a guide. (I have heard it said that at least one user was so impressed by the McNOS command language that he asked to have it implemented on his single-user CP/M systems as well.)

Figure_002

Fig.2. Coding of PCW Benchmark Program 3 in McNOS Terminal Command Language.

Future developments

A user who is just starting on a microcomputer development which requires only one system now, but which could expand to become multi-user later, could well choose a Sig/net system for its development potential. If Shelton maintains its record in exploiting its technical expertise, then other developments would be expected to be on the way. I understand that one of these developments is the provision of a local area network facility based upon the Datapoint ARCNET approach. This will be used instead of the current ribbon bus to provide high-speed communication over much longer distances, and thus permit the siting of user satellite systems away from the central hub. I must point out, however, that this is not yet an available product – or, as Guy Kewney so aptly put it in this same magazine, 'the future is not now…'

Conclusions

The Shelton Sig/net system is based on good hardware and provides good value for money. The system provides a convenient cost effective growth-path for the user who wants to start small but expects to expand to a multi-user system later. The McNOS multi-user operating system provides convenient facilities for users who wish to share data between a number of terminals and a number of CP/M programs, provided this can be done on a scheduled basis (ie, no file being used in update mode by more than one user at a time). It is also possible to share simultaneous update access to the same data files with programs written specifically to take advantage of the McNOS ‘locking’ mechanism. The powerful McNOS terminal command language would be useful in some circumstances, but can be slow to use.

Benchmarks  
BM1 1.1
BM2 3.4
BM3 9.6
BM4 9.3
BM5 10.0
BM6 18.1
BM7 28.9
BM8* 51.3
Average 16.5
*Full 1,000 cycles  
For a full explanation of Benchmark timings, see PCW November 1982

 

Prices – Multi-User  
Hub filestore, 1 x 400K floppy  
Hard disk, 5.25Mb (formatted) £2,695
Hard disk, 10.5Mb (formatted) £2,954
Hard disk, 15.75Mb (formatted) £3,195
Hard disk, 21Mb (formatted) £3,500
Satellite case  
1-user (Z80A, 64K RAM, 1 x RS232) £1,100
2-user £1,550
3-user £1,865
Single User  
Z80A, 64K RAM, 2 x RS232  
Floppies 2 x 200K £1,390
Floppies 2 x 400K £1,690
Floppies 2 x 800K £1,890

First published in Personal Computer World magazine, April 1983

Microtan 65 Review

Microtan_65_001

Just another 6502 system? We think not. Microtan’s expandability is almost second to none and it could be a winner.

By Henry Budgett

The ideal system in most people’s minds is one that is as cheap as possible, provides the most facilities, is expandable to the limits of its design and can be obtained piece by piece as the money is saved. Up to now several systems have sought to achieve these varied aims, with decidedly mixed results.

The machine reviewed here is another contender in this field and certainly seems set for success where others have failed. Based on the 6502 CPU, the same chip as used in the PET and Apple among others, it has several very interesting items to offer.

Concept Of A System

The usefulness of a computer on a board is limited; that of a system with attendant peripherals is much greater. The ideal balance is struck when the single board can become part of a system and thus fulfil the needs of both markets.

Microtan has been designed in this way: the complete system was planned first and is offered board by board. This review covers only the basic board, but mention will be made of the expansion boards available to construct the full system. Table 1 details the various stages available and their configurations.

Assembly Or Assembled?

I built the Microtan from a kit, something I believe is worth doing as you not only save money but also gain an insight into how the hardware is strung together.

Presentation is superb, and to anyone competent with a soldering iron this should represent no more than an evening’s work. Please note that the PCB is double sided and through-hole plated, so you must use a fine tip on the iron and fine solder, otherwise you will have problems.

The manual that is supplied covers all the areas needed to construct the kit and get it up and running, as well as other areas that I will mention later. The only serious omission is the lack of a circuit diagram, but I gather this is being rectified. You will need a power supply: the 5 volt supply that we published in CT serves admirably, or you can buy one from Tangerine.

The true test of any kit is whether it works. Mine did, in its basic format and with the graphics option in place, but it died when I tried to add the lower case option. Immediate thoughts of dead ROMs proved incorrect; the eventual culprit was a tri-state device that was permanently tri-stated. Quick work by Tangerine meant that I was back on the screen before the postman had called twice.

Micro Monitor

The old question of “How much can you fit into a pint pot?” rears its ugly head with Tanbug, the 1K monitor supplied as standard. The answer in this case is “Enough!”. At this stage you have a system that can only deal with machine code, and a glance at Table 2 will show that there is only one possible omission from the monitor, that of cassette handling. Well, you don’t have a cassette interface yet, so what are you worrying about? If you are going to expand to Tanex, which has that necessary interface, you get the routine for handling named files, which you can either load up yourself or get in an EPROM that plugs into a socket and is called through Tanbug. The cassette handling runs at a choice of 300 or 4800 baud, so you can’t even boil the kettle, let alone drink a cup of coffee, while loading programs.
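
To put those figures in perspective, a little back-of-an-envelope arithmetic (my own estimate, assuming roughly ten bits per byte on tape once start and stop bits are counted) shows why the kettle stays cold:

10 REM ROUGH TAPE LOADING TIMES - MY OWN ARITHMETIC, ASSUMING 10 BITS/BYTE
20 B=8192 : REM SIZE OF AN 8K PROGRAM IN BYTES
30 PRINT "AT 300 BAUD: ";B*10/300;" SECONDS"
40 PRINT "AT 4800 BAUD: ";B*10/4800;" SECONDS"
50 END

An 8K program takes around four and a half minutes at 300 baud, but barely seventeen seconds at 4800.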

What does Tanbug have that sets it apart from other monitors? Two things really: you get a full listing of the firmware with notes and explanations, and you don’t get any bugs; at least I haven’t found any yet. It does all that the Microtan user will require, and if you ever get big enough to warrant it there is a bigger version called XBUG lurking in a dark corner.

Manual Means Handy?

The little orange-covered book that is supplied is worthy of a mention in its own right. OK, it’s not perfect, but it is detailed and concise. One or two errors have escaped correction, but nothing that will cause programs to crash or other damage. The book fits into a ring binder and will be joined by the manuals for Tanex and the other family members, a neat concept in its own way.

For a change the manual is logical: it explains the concept of the board, then the system, then the details of the 6502 with the complete instruction set, followed by a very detailed chapter on the monitor and its uses, complete with the listing, and finally a couple of games. It is essential to read the whole thing through from cover to cover before starting to play; the unit is complex and should be understood before anything is attempted.

What You Get

Once the board is built and the manual read you are ready to go. All you need now is a black and white TV and a 5 volt power supply good for about 1 amp. Connect up, turn on and hit reset. The screen fills with a pretty pattern with the word TANBUG at the bottom, followed by a prompt character. At this point we find the only serious problem with the system: you have a ? on the screen but are told you should have a square blob. Have you blown it up? No, you simply haven’t got the lower case option. This is explained in the manual, but so unclearly that it has caused much confusion and alarm both in the office and outside.

So, you have a working system. Machine code programmers can now go and have a ball; the rest of us start learning. If you are a dedicated BASIC person, Tanex is a must: throw away the Hex keypad (superb though it is), plug in the full ASCII keyboard (the system works out which you are using) and let yourself go with a 10K Microsoft BASIC.

Points worthy of note, and praise, are the rock-steady display on your TV (the VDU RAM is only accessed when the system RAM is not, so you don’t get the usual flicker), the excellent Hex keypad and the almost unbelievable packing density. Because the system is based around the 6502, comparisons with the Acorn, reviewed in August 79, are almost inevitable. With Microtan you get a proper VDU as opposed to an LED display, a decent keypad that is separate and a slightly more powerful monitor, but you do lose the cassette (at this stage).

The Guts Of The Matter

Figure_001

Fig.1. Microtan’s architecture, all on one board too!

Because of a lack of space on the board, certain apparently ignored features are implemented at other points in the system. Figure 1 shows the architecture of the board; the keyboard interface is intelligent in that it detects which type of keyboard is being used. The memory map of the system (see Fig.2) appears rather limited; full decoding is done on the Tanex board and gives the map shown in Figure 3. This is not the disadvantage it might appear to be, as it allows the contents of RAM on Microtan to be protected against DMA, as are the I/O ports and the ROM area. Whilst on the subject of I/O, it is worth noting that you get a 1K area addressable as I/O; this compares with a maximum of 256 ports on devices using the Z80 or 8080.
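
One pleasant consequence of memory-mapped I/O is that, once Basic is in place, the ports can be driven with ordinary PEEKs and POKEs; no special port instructions are needed, as they would be on a Z80. A sketch only, with a made-up address standing in for a real port:

10 REM MEMORY-MAPPED I/O FROM BASIC - THE ADDRESS HERE IS HYPOTHETICAL
20 P=49152 : REM PRETEND AN 8-BIT OUTPUT PORT LIVES AT THIS LOCATION
30 POKE P,255 : REM DRIVE ALL EIGHT LINES HIGH
40 PRINT PEEK(P) : REM AND READ THE PORT STRAIGHT BACK
50 END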

Figure_002

Fig.2. The simple memory map produced by Microtan

Figure_003

Fig.3. Once you’ve added Tanex you get a proper memory map, which is fairly impressive.

The system expansion is shown in Figure 4; full details of the bus structure are given, along with notes for DIY people. The address bus buffer chips are supplied as part of the Tanex unit, so don’t worry about the empty sockets.

Figure_004

Fig.4. How you expand through Tanbus

On Board Options

Despite the fact that the basic Microtan packs in a 6502, keyboard interface, VDU, 1K of RAM and 1K of monitor ROM, there is more to come! As seen from the previously mentioned Fig.4, the address bus buffers fit on, but that’s not the end. You can have lower case alphabetics, not essential at this stage, and pixel-type graphics, sometimes called “high resolution” but really made up of little squares rather than dots. Tangerine are quite honest about them and call them “Chunky”, which is a very apt description; they are good enough for Teletext simulations and games etc.

Because of the ingenious VDU design it is quite possible to run a program that actually resides in the screen memory without bombing everything. Try that on your system!

Expanding Horizons

Glancing back at Table 1 you can see the basis of your system emerging; the rest is coming shortly and completes the story. Tanram will be the next board to go on sale, offering 40K of memory on a single board with the capability of bank selection, so RAM freaks can have the odd megabyte or six if they want. You may have realised that the memory map is now full, see Fig.3. Next on the stocks is Tandisc, offering you the floppies that you dream of; up to four double density units are planned.

Microtan_65_004

The Microtan board installed in the mini-rack

Microtan_65_003

Tanex fits on top in the mini-rack system, along with the power supply and Hex keyboard.

Housing all this exotic hardware need not be a problem either: the case that Tangerine supply will hold Microtan and Tanex complete with power supply. The other style you could use is a card frame; I am building my system in a Vero unit, assembled from System KM4C parts, which also offer such goodies as front panels and modules. Alternatively you could design your system to fit into a VDU case and have a self-contained unit; it’s up to you.

Microtan_65_002

The author’s system growing inside a Vero rack.

Summing Up

Microtan, and its attendant extras, offers the first-time buyer a low cost entry point into computing. Taking a boxed two-board system with all the options, power supply and keyboard, you have a more powerful unit than a PET: it has more I/O capability, and at £350 it is a lot cheaper!

The product appears to have been launched with a great deal of thought and planning, in itself a change from some rivals, and seems to have found a niche in the market almost overnight. The only thing it hasn’t got is a “second generation” CPU such as a Z80 or 6809, but that doesn’t seem to be too much of a handicap; the dedicated machine code programmers among you might disagree, but no-one else has!

Table 1. The various system configurations for Microtan
Board Microtan 65
Features 6502, 1K RAM, 1K ROM, 6 I/O ports
Options Pixel graphics, lower case alphas, address bus buffers
Need to run TV, Hex keypad, 5V PSU @ 1A

 

Board Tanex
Features 1K RAM, 16 parallel I/O, TTL serial I/O, cassette I/O, 2 by 16 bit counter timers, full memory map, data bus buffers
Options 6K RAM, 4K ROM, 10K Microsoft BASIC, double above I/O plus RS232/20 mA serial with full modem control

 

Board Tanram
Features 40K mixed static and dynamic RAM

 

Board Tandisc
Features Control of four drives
Extras Motherboard, case, power supply, Hex keypad, ASCII keyboard

 

Table 2. The available monitor commands in Tanbug
Monitor Command Function
M(add)(term) Modify memory locations, terminator type allows step through, cancel or jump out.
L(add),(numb)(term) Lists the contents of specified memory locations in tabular form.
G(add)(term) Sets internal registers and executes program at address given. NB cursor disappears.
R Sets memory modify command to register mode. Allows the 6502’s internal registers to be altered.
S Sets single step mode, see P & N
N Resets to normal mode from single step
P Causes monitor to execute next instruction, can be set to execute n instructions. Gives display of all registers and returns to monitor.
B(add),(numb)(term) Sets breakpoint at specified address; up to eight are allowed. All registers are displayed and the P command may be used to continue.
O(branch add)(dest add)(term) Calculates offsets between specified addresses for use in branch arguments.
C(start add)(end add)(start add dest)(term) Copies memory locations and blocks. NB (term) can be CR, LF or SP.
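
The O command deserves a word of explanation for newcomers. A 6502 branch instruction holds not an address but a signed one-byte offset, measured from the instruction that follows the branch. The arithmetic (this is standard 6502 behaviour, not anything peculiar to Tanbug, and the addresses below are examples only) goes like this:

10 REM 6502 BRANCH OFFSET CALCULATION - EXAMPLE ADDRESSES ONLY
20 B=512 : REM ADDRESS OF THE BRANCH OPCODE
30 D=522 : REM ADDRESS TO BRANCH TO
40 F=D-(B+2) : REM OFFSET IS RELATIVE TO THE NEXT INSTRUCTION
50 IF F<0 THEN F=F+256 : REM TWO'S COMPLEMENT FOR BACKWARD BRANCHES
60 PRINT "OFFSET BYTE =";F
70 END

Tanbug’s O command does this sum for you, which saves a good deal of finger-counting when hand-assembling.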

 First published in Computing Today magazine, June 1980