In part 1, we talked about the basics of how GEM VDI works, and how VDI function calls get passed from an application to a device driver.

This time around, we’ll talk about the printer driver kit that Atari sent out to selected developers. Printers were by far the most commonly supported device, and also had perhaps the greatest variety in technology.

Before we talk about the printer driver kit, let’s take a look at the state of technology for printers back in the mid-80’s.

The Printer Market At The Dawn Of Time (the mid 80’s)

Today, you can walk into a store and for $200 or so, maybe less, buy a fairly decent color laser printer that prints 10 pages a minute or more at resolutions upwards of 1200 dpi. To those of us who survived using computers in the mid-’80s, that is simply insane. You get so much more bang for the buck from printers these days that some people buy new printers instead of replacing toner cartridges.

In the mid-80’s, the printer market was much different than it is now. Aside from the differences in technology, printers were relatively much more expensive.  A basic 9-pin printer in 1985 would cost you $250 or $300. That’d be like $500-$600 today. You could literally buy a half-dozen cheap laser printers today for what it cost for a good 9-pin dot matrix printer back then. A good 24-pin printer would set you back $500 or more.

Laser printers, in 1985, were about $2500 for a basic Hewlett-Packard LaserJet or similar model. Apple introduced the LaserWriter in March with a price tag of almost $7000. Fortunately, more and more manufacturers were entering the market, and prices were starting to drop. I paid about $1300 for my first laser printer in late ’86, and that was as cheap as they came back then. It was compatible with PCL 2 (Printer Command Language, version 2), which meant that most drivers for the HP LaserJet would work with it.

Today, the typical printer found in most people’s homes is an inkjet printer. That kind of printer wasn’t really a mainstream thing yet in 1985. The first truly popular model would be the HP DeskJet in 1988.

Graphics Printing Was SLOW!

Today, most printer output, other than specialty devices like receipt printers, is done using bitmapped graphics. The printer driver on your computer builds an image in the computer’s memory, and then when the page is complete, sends it to the printer. This gives the application and printer driver nearly complete control over every pixel that is printed.

However, in 1985, sending everything to the printer as one or more large bitmaps didn’t work so well, for a couple of reasons. First was the fact that sending data from the computer to the printer was fairly slow. Most printers connected to the computer via a Centronics-style parallel data port, which typically used the system’s CPU to handshake the transfer of data. Typical transfer speeds were rarely more than a couple of dozen kilobytes per second, even though the hardware was theoretically capable of much faster speeds.

Even though the data connection was fairly slow, the main bottleneck in most cases was the printer’s ability to receive the data and output it. Most printers had no more than a couple of kilobytes of buffer space to receive data, generally no more than about one pass of the print head when doing graphics. It was the speed of the print head moving back-and-forth across the page that was the ultimate bottleneck.

A popular add-on in those days was a print buffer, basically a little box filled with RAM that connected in-between the printer and the computer. This device would accept data from the computer as fast as the computer could send it, store it in its internal RAM, and then feed the data out the other end as fast as the printer could accept it. Assuming it had enough RAM to hold the entire print job, it freed up the computer to do other things.

But even with a print buffer, if you had an impact dot-matrix printer and wanted to produce graphics output, you simply had to get used to it taking a while to print. For those with bigger budgets, there were other options. Laser printer manufacturers started to make smarter printers that were capable of generating graphics in their own local memory buffers. This was generally done using what we call a Page Description Language, or PDL.

Page Description Languages

With a PDL, instead of sending a bitmap of a circle, you would send a series of commands that would tell the printer where on the page to draw it, what line thickness to use, how big it should be, the fill pattern for the interior, etc. This might only take a couple dozen or perhaps a few hundred bytes, rather than several hundred kilobytes.

One of the most capable and popular PDLs was PostScript, which was introduced to the world with the release of the Apple LaserWriter printer. PostScript was actually a programming language, so you could define a fairly complex bit of output and then use it as a subroutine over and over, varying things like the scale factor, rotation, and so forth. PostScript also popularized the concept of scalable outline fonts.

The downside to PostScript or other PDLs was that the printer needed a beefy processor and lots of RAM, making the printer fairly expensive. Often more expensive than the computer you used to generate the page being printed. The Apple LaserWriter actually had a faster version of the same Motorola 68000 processor used in early models of the Mac, and quite a bit more memory than they had.

The other downside was that even if you were printing a couple of dozen pages every day, the printer was actually sitting idle most of the time, meaning that extra processing power and RAM wasn’t really being fully utilized.

Graphics Output On A Budget

Back in the 8-bit days and early PC days, most people didn’t have thousands of dollars to drop on a laser printer. If you had a basic 9-pin dot matrix printer, it had relatively primitive graphics and it was fairly slow to output a page using graphics mode. Most of the time you made a printout of something text-oriented, it used the printer’s built-in text capabilities. The basic printing modes were fast but low-quality; more and more printers introduced a “near letter quality” mode which was somewhat slower, but still much faster than doing graphics output.

However, the whole situation with printers was on the cusp of a paradigm shift. RAM was getting cheaper by the day. Computers were getting faster. The quality of graphics printing was improving. And, perhaps more than anything, the release of the Apple Macintosh computer in 1984 had whetted the market’s interest in the flexibility of bitmapped graphics output, and the subsequent release of Microsoft Windows and GEM with similar capabilities had added fuel to the fire.

Being able to combine text and graphics side by side was the new target, even for people with basic 9-pin dot matrix printers, and even though it was often orders of magnitude slower than basic text output, people were willing to wait. And for higher-quality output, they were willing to wait a bit longer.

Printer Drivers In The Wild West

Today, when you buy a printer, you get a driver for Windows, maybe one for Mac OS X. I would imagine Linux users recompile the kernel or something to get things going there.  (Kidding!)  And once you install that driver on your computer, that’s pretty much all you need to worry about. You tell an application to print, and it does.

By comparison, back when the ST first came out, printing was the wild wild west, and getting your printer to produce output could make you feel like you were in an old-fashioned gunfight. Before GUI-based operating systems became popular, every single program required its own printer driver.

And then we have the fact that there were about fourteen billion different ways of outputting graphics to a printer. Even within the product line of a single manufacturer, you’d find compatibility issues between devices that had more or less the same functionality as far as graphics output went. Even with the same printer, two different programs might have different ways of producing what appeared to be the same exact result.

Back in those days, most dot-matrix printer manufacturers followed the standards set by Epson. For example, when Star Micronics came out with their Gemini 10X 9-pin dot matrix printer, it used most of the same printer codes as the Epson FX and MX printers. Likewise with many other manufacturers. Overall, there was often something like 95% compatibility between one device and another.

The problem was, most of the efforts towards compatibility were oriented around text output, not graphics. That is, the same code would engage bold printing on most printers, but the code for “Advance the paper 1/144th inch” used for graphics printing might be different from one printer to the next.  This was further complicated by the fact that printers sometimes differed somewhat in capability. One printer might be able to advance the paper 1/144″ at a time, while another could do 1/216″.

The one good thing was that in most cases it was possible for users to create their own driver, or more accurately, a printer definition file. For most programs, this was nothing more than a text file containing a list of the printer command codes required by the program. In some cases it was a small binary file created by a separate utility program that let you enter the codes into a form on screen.
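
To give you an idea, here’s roughly what one of those printer definitions amounted to, sketched in C. The structure and field names are my own invention, not from any particular program, but the example values are standard Epson ESC/P codes that most compatibles understood:

/* Hypothetical shape of a per-program printer definition.  A user
   with a not-quite-compatible printer would swap in the codes from
   their own printer manual. */
struct printer_def {
    const char *bold_on;        /* ESC "E": emphasized printing on  */
    const char *bold_off;       /* ESC "F": emphasized printing off */
    const char *gfx_enter;      /* ESC "L": 120-dpi bit-image mode  */
    const char *gfx_line_feed;  /* ESC "3" n: set paper advance to n/216 inch */
};

static const struct printer_def epson_fx = {
    "\x1B" "E",
    "\x1B" "F",
    "\x1B" "L",
    "\x1B" "3" "\x18"    /* advance 24/216 inch between graphics passes */
};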

The Transition To OS-Based Printing

The main reason every DOS application (or Atari 8-bit program, or Commodore 64 program, etc.) had its own proprietary printing solution was, of course, the fact that the operating system did not offer any alternative. It facilitated the output of raw data to the printer, but otherwise provided no management of the printing process.

That started to change for desktop computer users in 1984, when Apple introduced the Macintosh. The Mac’s OS provided developers with the means to create printer output using the same QuickDraw library calls that they used to create screen output. And it could manage print jobs and take care of all the nitty-gritty details like what printer codes were required for specific printer functions. Furthermore, using that OS-based printing wasn’t simply an option. If you wanted to print, you had to go through the system. Sending data directly to a printer was a big no-no.

One significant issue with the whole transition to OS-based printing was the fact that printer drivers were significantly more complex. It generally wasn’t possible, or at least not practical, for users to create their own.

Apple addressed the potentially murky driver situation by simply not supporting third party printers. They had two output devices in those early years, the ImageWriter 9-pin dot-matrix printer, and then the LaserWriter. It would be a couple of years before third party printing solutions got any traction on Macintosh.

When Microsoft Windows came out a short time later, it addressed the question of printing in largely the same way as the Macintosh, except that it supported a variety of third-party printer devices. 

When the Atari ST came out, the printing situation with GEM should theoretically have been similar to the Mac and Windows, except for two little things.

First was the minor tripping point that the part of GEM responsible for printing (GDOS) wasn’t included with the machine at first. What was included were BIOS and GEMDOS functions for outputting raw data to the printer. As a result, application programmers ended up using their own proprietary solutions.

Second was the fact that even after GDOS was released, there were only a few printer drivers included. And Atari didn’t seem to be in any big rush to get more out the door. As a result, application developers were slow to embrace GEM-based printing.

GDOS Printing On The Atari

As far as I know, the first commercial product to ship with GDOS support included was Easy Draw from Migraph at the start of 1986, about six months after the ST was released, and about two months after Atari started shipping machines with the TOS operating system in ROM rather than loaded from disk.

Migraph included pretty much exactly what Atari had given them as a redistributable setup: the GDOS.PRG file which installed the GEM VDI functionality missing from the ROM, the OUTPUT program for printing GEM metafiles, and a set of GEM device drivers and matching bitmapped fonts. The device drivers included a GEM Metafile driver and printer drivers for Epson FX 9-pin dot-matrix printers and Epson LQ 24-pin dot-matrix printers.

Compared to most other programs, however, Easy-Draw shipped with a significant drawback. This was not Migraph’s fault in any way; it was a GEM issue, not an Easy-Draw issue. So what was the problem? Well, basically it comes down to device support. The GDOS printer drivers supplied by Atari simply didn’t work with a lot of printers. They targeted the most popular brands and models, but if you had something else, you had to take your chances regarding compatibility. This was a major problem for users, not to mention something of a surprise.

If there’s any aspect of GEM’s design or implementation where the blame for something wrong can be pointed at Atari rather than Digital Research, it’s got to be the poor selection of printer drivers.

With a word processor like First Word, if your printer wasn’t supported by a driver out of the box, chances were pretty good you’d be able to take your printer manual and figure out how to modify one of the existing drivers to work. Or, maybe you’d pass the ball to a more tech-savvy friend and they’d figure it out for you, but one way or the other, you probably weren’t stuck without a way to print. Not so with Easy-Draw, or any other program that relied on GDOS for output. GDOS printer drivers weren’t simply a collection of printer codes required for specific functions. If there was no driver for your printer, and chances of that were pretty good, you couldn’t print. Period.

The GDOS Printer Driver Kit

When I was at Neocept (aka “Neotron Engineering”) and our WordUp! v1.0 word processor shipped, we included basically the same GDOS redistributable files that Migraph had included with Easy-Draw, except for the OUTPUT program, which we didn’t need because WordUp! did its own output directly to the printer device. It wasn’t long before we started getting a lot of requests from users who had printers that weren’t supported, or which were capable of better results with a more customized driver.

We asked Atari repeatedly for the information necessary to create our own drivers. I dunno if they simply eventually got tired of our incessant begging, or if they thought it was a way to get someone else to do the work of creating more drivers, but eventually we got a floppy disk in the mail with a hand-printed label that read “GDOS Printer Driver Kit” that had the source code and library files we needed.

There weren’t really a lot of files on that floppy disk, so I’ll go ahead and list some of them here:

  • FX80DEP.S
  • FX80DATA.S
  • LQ800DAT.S
  • LQ800DEP.S
  • STYLES.C
  • INDEP.LIB
  • DO.BAT

That might not be 100% accurate as I’m going from memory, but it’s close enough. I think there might have been “DEP” and “DATA” files for the Atari SMM804 printer as well, but it’s possible those were added later.

The “*DEP” files were the device-dependent code for a specific device. Basically there was a version for 9-pin printers and one for 24-pin printers. There were also some constants unique to individual printers that arguably should have been in the “*DATA” files instead.

The “*DATA” files were the related data, things like printer codes and resolution-based constants.

“INDEP.LIB” was the linkable library for what amounted to a GEM VDI bitmap driver.

The STYLES.C file contained definitions for the basic pre-defined VDI line styles and fill styles.

The DO.BAT file was a batch file that did the build.

Figuring It Out

There were no instructions or documentation of any kind. That may have been why Atari was originally reluctant to send anything out. It took a little experimenting but eventually I figured out what was what. The idea here was that the bulk of the code, the routines that actually created a page from the VDI commands sent to the driver, was in the INDEP.LIB library. The actual output routine that would take the resulting bitmap and send it to the printer was in the *DEP file. By altering that routine and placing the other information specific to an individual printer into the DEP and DATA files, you customized the library’s operation as needed for a specific printer.

The *DATA file would contain things like the device resolution, the printer codes required to output graphics data, and so forth. This included the various bits of information returned by the VDI’s Open Workstation or Extended Inquire functions.
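
Reconstructing from memory, the contents of a *DATA file for an FX80-class printer boiled down to something like the following. The real files were assembly source, and these names are mine, not the kit’s; I’m just expressing the gist in C:

/* Roughly the sort of constants a 9-pin *DATA file provided. */
#define X_RES          120    /* horizontal dots per inch            */
#define Y_RES          144    /* vertical dots per inch              */
#define PAGE_W         960    /* printable width in pixels  (8.0")   */
#define PAGE_H        1526    /* printable height in pixels (10.6")  */
#define PINS_PER_PASS    8    /* scanlines sent per print-head pass  */

static const char gfx_enter[] = { 0x1B, 'L' };      /* ESC L: 120-dpi bit image      */
static const char paper_adv[] = { 0x1B, '3', 24 };  /* ESC 3 n: paper advance n/216" */
static const char form_feed[] = { 0x0C };           /* eject the page                */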

The first drivers I created were relatively simple variations on the existing drivers, but fortunately that’s mainly what was needed. There were a ton of 9-pin dot-matrix printers in those days, and while many of them worked fine with the FX80 driver, some were ever so slightly different. Like literally changing one or two printer codes would make it work. The situation was a little better with the 24-pin printers but again there were a few that needed some changes.

The first significant change we made was probably when I created a 360 DPI driver for the NEC P-series 24-pin printers. These were compatible with the Epson printers at 180 DPI, but offered a higher-resolution mode that the Epson did not. I’ll admit I had a personal stake here, as I’d bought a nice wide-carriage NEC P7 printer that I wanted to use with the Atari. That thing was slower than crap but oh, gosh was the output good looking. At the time, for a dot-matrix impact printer, that is.

One thing that was confusing at first was that the startup code for the drivers was actually contained in the library. The code in the *DEP.S files was called as subroutines from the v_opnwk and v_updwk functions.

Anatomy Of A GDOS Printer Driver, Circa 1986

The INDEP.LIB library (or COLOR.LIB for color devices) contained the vast bulk of the driver code, including everything necessary to handle all of the VDI functions supported by the device. It would spool VDI commands until the v_updwk function was called. That was the call which triggered the actual output. At that point, it would create a GEM standard raster format bitmap and render all of the VDI commands which had been spooled up since the open workstation, or previous update workstation.

In order to conserve memory, the printer drivers were designed to output the page in slices. A “slice” was basically a subsection of the overall page that extended the entire width, but only a fraction of the height. The minimum slice size was typically set to whatever number of lines of graphics data you could send to the printer at once. For example, with a 9-pin printer, the minimum “slice height” would be 8 scanlines tall. If the horizontal width of the page was 960 pixels (120 dots per inch), then the minimum slice size would be 960 pixels across by 8 pixels tall. The maximum slice height could be the entire page height, if enough memory was available to the driver.

The driver would allocate a buffer for a slice, then render all of the VDI commands with the clipping set to the rectangle represented by that slice. Then it would call the PRT_OUT function. This was a bit of code in the DEP.S file that would output whatever was in the slice buffer to the printer, using whatever printer codes and other information were defined by the DATA.S file. After a slice was output to the printer, the library would clear the buffer and repeat the whole process for the next slice down the page. For example, the first slice might output scanlines 0-95, then the next slice would do scanlines 96-191, and so forth until it had worked its way all the way down to the bottom of the page.
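
In code form, the overall shape of that loop was something like the sketch below. This is my own reconstruction in C, not code from the kit; aside from PRT_OUT, which really was the hook in the *DEP.S file, the names are hypothetical stand-ins:

#include <stdlib.h>
#include <string.h>

extern void PRT_OUT(unsigned char *slice, int width_px, int height_px); /* in *DEP.S */
extern void set_clip(int x1, int y1, int x2, int y2);                   /* stand-in  */
extern void replay_spooled_vdi_commands(unsigned char *dest);           /* stand-in  */
extern void send_form_feed(void);                                       /* in *DEP.S */

/* Simplified shape of the v_updwk slice loop (monochrome, 1 bit/pixel). */
void update_workstation(int page_w, int page_h, int slice_h)
{
    size_t row_bytes = (size_t)page_w / 8;
    unsigned char *buf = malloc(row_bytes * slice_h);
    int y;

    for (y = 0; y < page_h; y += slice_h) {
        int h = (y + slice_h <= page_h) ? slice_h : page_h - y;

        memset(buf, 0, row_bytes * h);           /* clear the slice buffer         */
        set_clip(0, y, page_w - 1, y + h - 1);   /* restrict drawing to the slice  */
        replay_spooled_vdi_commands(buf);        /* re-render the whole page       */
        PRT_OUT(buf, page_w, h);                 /* emit graphics codes to printer */
    }
    send_form_feed();                            /* advance to the next page       */
    free(buf);
}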

Once it got to the bottom of the last slice, the code in DEP.S would send a form feed code to the printer to advance the paper to the start of the next page.

This may all sound inefficient, since it had to render all of the VDI commands for the page over and over again, but the real bottleneck was sending the data to the printer, so it didn’t really matter.

A Semi-Universal Printer Driver

Something I always kind of wanted to do, but never got around to, was creating a reasonably universal GDOS printer driver that stored printer codes and other parameters in an external configuration file that could be edited by the user. Or, perhaps, stored within the driver but with a utility program that could edit the data.

You see, the main part of the library didn’t have any clue whether the printer was 9-pin, 24-pin, or whatever. So there was no reason it shouldn’t have been possible to create an output routine that could output to any kind of printer.

In hindsight, that probably should have been the goal as soon as I had a good handle on how the driver code worked.

Next Time

Next time we’ll jump right into creating our driver shell.

I had originally planned to cover some things about GEM AES in this installment, but I’ve seen a variety of questions and comments lately about a couple of VDI-related things that I’d like to address first.

I’m also gonna cut down a bit on the exposition at the beginning that I’ve included in previous posts in this series.  If you don’t know, at least generally, what GEM is, or what VDI is, then why are you reading this? If you need to, go look at earlier posts in the series and then come back to this one.

A disclaimer. Much of what I’m going to discuss here pertains to developments at Atari that occurred long before my employment began. Some of the information was obtained while I was a third-party developer, and some from having conversations with guys who worked in the TOS group at Atari in the years before I started working there. Finally, quite a lot of it comes from conversations with coworkers after I started working there as the ST developer support guy.

GDOS In ROM

One of the biggest questions people have about GDOS, to this day, is why didn’t Atari put it into the TOS ROM? Even if it wasn’t ready for v1.0, why not v1.02 or v1.04? Why not 2.0x or 3.0x when the Mega STE and TT030 came out?

Ok, here’s a LITTLE exposition.  GDOS was the part of GEM VDI which was responsible for reading device drivers into memory on request and hooking them into the system so that VDI requests went to the right place.  It also handled loading fonts into memory and hooking them in so that they were available to be used by GEM VDI.

The PC version of GEM had GDOS in its own executable file, loaded separately from the rest of GEM. Since GEM was never in ROM at all on the PC, this largely escaped attention, but the Atari simply followed this example.

You can download the PC GEM/3 source code here if you’re interested in having a look. The “GDOS (GEMVDI.EXE) source” file contains what corresponds to GDOS.PRG on the Atari.  The “Screen Drivers” file corresponds to the Atari’s ROM portion of VDI, which was really just the screen device driver plus a basic trap handler that routed VDI commands to it.

Beyond following the PC’s example, perhaps the main reason GDOS was not included in ROM on the Atari was that there wasn’t room.

When the ST was designed, it included a ROM space of 192 kilobytes. The problem was, the total size of early versions of TOS came in at a little over 200 kb, not even including GDOS, so it simply didn’t fit within the available ROM space. Instead of having everything in ROM, the operating system was loaded into RAM from disk. The first problem that needed solving, then, was getting everything else squeezed down to 192k. This took another few months after the ST was put on the market, but finally, at the end of 1985, Atari started including TOS 1.0 ROMs in new 520ST shipments, and made ROM chips available to existing owners.

But the TOS 1.0 ROMs still didn’t include GDOS. It remained a separate executable file of about 9 kb that you loaded by placing it into your boot disk’s AUTO folder. Not that big, in the overall scheme of things, but big enough that there was no room in ROM even if they had wanted to include it.

OK, so no room in the early 192kb ROM. Later machines had bigger ROM space, so why didn’t it make its way into those? Well, at about the same time that new machines like the Mega STE and TT030 came out with bigger ROM spaces, Atari was also working on FSMGDOS, which included an outline font scaler as well as new VDI functions for things like drawing Bézier curves. FSMGDOS was too big to fit even in the larger 256kb ROM of the Mega STE.

It might have fit in the TT030’s 512kb space, but by that point, most serious users had hard drives and plenty of RAM. I don’t remember the idea ever even coming up. Plus, realistically, it was too volatile. There was a new version every week or so for quite a while, and it simply wouldn’t have made sense to put it into ROM. And before the TT030 shipped in significant quantities, FSMGDOS was pulled in favor of SpeedoGDOS.

Why Didn’t Atari Make The ST’s ROM Space Bigger?

I’m not really an expert on the history of ROM chips, but from what I recall from those days, I’m reasonably sure that 32kb was the biggest ROM chip available at the time the ST was first designed. Or possibly larger capacities were available, but only at significantly greater cost, and maybe in physically larger packages. Either way, larger chips either weren’t available or weren’t practical.

Realistically, the only way that Atari could have made the ROM space bigger than 192kb would have been to put more sockets onto the motherboard. Two more sockets would have bumped the capacity up to 256kb, but they also would have required another few square inches of space on the motherboard, which was already pretty much jam-packed. Look at the picture of the 520ST motherboard below. Aside from the area at the top center, which was reserved for the RF modulator (not installed in this particular example), there was simply nowhere you could possibly put two more ROM sockets.

[Photo: 520ST motherboard]

The other thing to consider is that the basic design of the motherboard was done long before TOS needed to be finalized. When they decided to include six ROM sockets, they may very well have thought they were being generous. It’s very likely nobody ever even considered the possibility that 192kb wouldn’t be enough space.

Why Didn’t Atari Put As Much As Possible Into ROM & Disk Load The Rest?

This refers, of course, to the fact that the early 520ST shipments didn’t have TOS in ROM. Instead, you loaded it from an included floppy disk into RAM, meaning it took up 200K (~40%) of your available memory. So if the problem was that everything didn’t fit, why didn’t Atari put as much as possible into ROM and only soft-load what didn’t fit?

The answer is, they did put some stuff in ROM right from the beginning.  The early 520ST that loaded TOS from disk had two ROM chips instead of six, with the other four sockets left empty.  That means there was as much as 64kb of code in ROM already.

There are essentially 7 components to TOS that ultimately had to fit into the ROM:

  • Bootstrap code – the code that gets the system up and running at power-on, and tries to read a boot sector from floppy disk or hard disk.
  • XBIOS – A variety of low-level functions for accessing hardware resources.
  • BIOS – Low-level functions for accessing hardware devices (e.g. serial ports, printer port, TTY screen driver, disk drives)
  • GEMDOS – Disk operating system & buffered device access
  • GEM VDI – Graphics library
  • GEM AES – Application Environment Services
  • GEM Desktop – Shell application

The preliminary ROMs that were shipped in early machines included the first four items in this list, albeit perhaps not in a finalized form. If you remember the early pre-ROM days, the disk loaded version of TOS was based on having a file named TOS.IMG on your boot disk.  There was nothing else special about the disk. It wasn’t specially formatted or anything.

If you think about what was necessary to read that disk at all, you’ll realize that some version of GEMDOS had to be in ROM, or else the machine wouldn’t have been able to read the disk’s directory, find the TOS.IMG file, and load it. In order for GEMDOS to work, that means a pretty good chunk of the BIOS had to be there.  And that means that certain XBIOS functions had to be there.  And of course, if the bootstrap code wasn’t in place, then the whole system would have been a paperweight when you turned the power on.

So if some of this stuff was in ROM already, then why was TOS.IMG around 200kb in size? Clearly, the TOS.IMG file included new versions of all of the TOS components, not just the GEM stuff.  The main answer to that is, the versions of the components that were in the 64kb ROM were neither complete nor finalized.  They really only included what was necessary to read the TOS.IMG file into RAM and get it started.

I’ve been saying I’d revisit the idea of GEM design flaws since the first installment of this series, and now I’ve finally gotten around to it.  This time around we’re gonna discuss design flaws in GEM VDI.  We’ll save the AES for another time.

For those who are coming late to the party, a reminder: GEM (Graphics Environment Manager) from Digital Research (DRI) was a first generation graphics user interface (GUI) environment. First developed for the PC, it was also used by the Atari ST computers as the backbone of the TOS operating system. The VDI (Virtual Device Interface) was the graphics library portion of GEM, and it grew out of an earlier DRI product called GSX. GSX was their implementation of GKS (the Graphical Kernel System), a published standard for a basic computer graphics library created in the late ’70s. In modern terms, GSX was a combination of graphics library and hardware drivers for the popular video cards and printers of the day.

There are certain shortcomings of GEM that reflect the hardware for which it was designed. Some of these aren’t really design flaws, however. For example, integer values used by GEM are 16-bit, mainly because the 8086 processor used by the IBM PC was a 16-bit processor.  It’s arguably unfortunate that GEM didn’t adopt the use of 32-bit values but this would undoubtedly have introduced a performance hit on the PC.

In many ways, GEM is more optimized for the PC and Intel processors than for the Atari and Motorola processors. Since it originated on the PC that’s not really surprising, but it’s unfortunate for those of us who were on the Atari side. After all, beyond early versions of the popular desktop publishing software Ventura Publisher and a few other apps, not much ever really happened with GEM on the PC.

Aside from being more optimized for Intel processors, GEM on the Atari was arguably held back in some ways because in the beginning, there was a certain desire to keep the PC and Atari versions of GEM more or less in sync in terms of functionality and operation.  No doubt the idea was that developers would make versions of their applications for both platforms.  However, in reality that didn’t end up happening.  Only a handful of applications ever crossed over from one platform to the other.

GEM VDI Design Flaws

Many of the design flaws in VDI stem from two simple things. First, the whole concept of a graphics-based interface, using the same basic methodology to output to the screen and to other devices, was very new, and developers were still trying to figure the whole idea out. Quite a lot of the problem was simply that there was no proven, real-world example to follow. They were literally making it up as they went along. Second, VDI inherited much of its design from GSX; if VDI had been designed from scratch, some things might have been different.

Perhaps the biggest design flaw with VDI, or more accurately, the biggest collection of related flaws, was that device abstraction wasn’t really handled correctly in some important ways. Some of that probably comes straight from the VDI’s origins in GSX.

VDI provided the programmer with a “virtual” output device and a collection of library functions for drawing graphics primitives to it. The programmer, for the most part, didn’t need to worry about the specifics of how to do things like manipulate memory in a screen buffer, or keep track of what printer codes were used by an Epson FX-80 dot-matrix printer versus those needed for an HP LaserJet. They just had to send commands to VDI and it would take care of those details.

Sounds good, yes?  In theory,  yes, but in practice it wasn’t executed very well in some ways.

A Palette-Based Abstract Virtual Device

I talked about this before in the first segment of this series, but I want to revisit it in a broader scope.

The first problem is that the device abstraction model is limited in scope to those output devices which were in common use as of about 1983-1984 or so. Specifically, the abstraction is largely wrapped around the basic functionality of the palette-based PC video cards of the day (think EGA, or later VGA), with limited consideration given to other kinds of device.

At that time, almost all video cards for PC computers used palette-based graphics, typically with up to 16 colors (4 bits/pixel) in the higher resolution modes and maybe up to 256 colors (8 bits/pixel) in the lower resolution modes. As a result, VDI is almost completely wrapped around the idea of palette-based graphics.

Palette-based graphics is when the value of each pixel in a bitmap represents an index into a color palette table, rather than directly containing the color value. To change a color from red to blue, you would change the entry in the color palette table, and this would in turn change all pixels drawn with that palette index to the new color. There was no way to change a color palette entry without affecting the pixels that had already been drawn.
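
In VDI terms, that looked like the snippet below. vs_color() is the real VDI call, and the 0-1000 range per color channel is how VDI specifies color intensities; the function wrapper and header name are just for illustration:

#include <vdi.h>   /* GEM VDI bindings; the exact header name varied by compiler */

void red_to_blue(short handle)   /* handle: an already-open workstation */
{
    short red[3]  = { 1000, 0, 0 };   /* each channel is scaled 0-1000 */
    short blue[3] = { 0, 0, 1000 };

    vs_color(handle, 2, red);    /* palette index 2 is now red */
    /* ... draw some objects using color index 2 ... */
    vs_color(handle, 2, blue);   /* every index-2 pixel turns blue at once */
}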

While palette-based video cards were the mainstream back in those days, they weren’t the only option.  If you had access to large piles of cash, you could get what was known as a “framebuffer” card which offered 24-bit color and up to 16 million possible colors instead of a maximum of 256. These “truecolor” devices were still quite expensive, and therefore uncommon, when GEM was created.

A less-expensive version of “truecolor” was known as “high-color”, which used 16 bits per pixel instead of 24. Such devices might use 5-bits each for red, green, and blue, offering a total of 32768 colors at once.  Others used 6 bits for green, offering a total of 65536 colors.

Unfortunately, neither “truecolor” nor “high-color” seems to have received much consideration when GEM’s device abstraction model was created. When the Atari Falcon030 came out, featuring 16-bit high-color video modes, Atari’s main VDI programmer, Slavic Losben, did his best to make everything work. However, in the end there were still a variety of places where applications ended up having to use special-case code to work correctly and take full advantage. That would not have been necessary if VDI’s original design had considered true color in the abstraction model.

There are other types of device like plotters or color dot-matrix printers which don’t quite fit into the VDI device abstraction model.  A plotter may have multiple pens with different colors, but the colors are fixed, not changeable. Likewise, a color printer back in those days typically had a ribbon with 3 fixed colors plus black.  Getting a reasonable color image out of such a printer was possible but required special handling to get the best results.

My guess is that the older GSX library was ultimately changed very little when it was made into the new VDI.  The device abstraction model inherited by VDI was already a couple of years old at that point and it was based around relatively primitive graphics hardware.

Device Units Are (Were) Pixels

One thing that a virtualized device has to deal with is the fact that different hardware devices have different pixel densities, different pixel shapes, and different overall resolutions.

For example, the Atari ST’s monochrome display had a pixel density of 90 pixels per inch.  That is, a line 90 pixels long on screen theoretically represented a distance of 1 inch.  (In practice this varied depending on the individual monitor and how it was adjusted.) The overall screen dimensions were 640 x 400, which theoretically represented 7.11″ x 4.44″.

By comparison, an Epson FX-80 dot matrix printer had a pixel density of 120 DPI horizontally x 144 DPI vertically, with a printable area that measured 960 x 1526 pixels, or 8.0″ x 10.6″ on an 8.5″ x 11.0″ sheet of paper.

A typical 24-pin printer like the NEC P-6 had a pixel density of up to 360 DPI with a printable area of 2880 x 3816 pixels covering 8.0″ x 10.6″ on an 8.5″ x 11.0″ sheet of paper.

GEM VDI dealt with these differences by ignoring them almost completely, except for giving your program a few pieces of information so it could figure out things for itself.

There was a rather bizarrely useless option to use “Normalized Device Coordinates” (NDC) which took the device’s output area and applied the range of 0-32767 to each axis. Now, a virtualized coordinate system can be a very useful feature if it’s done right, but the NDC wasn’t done right at all. To name a few of the many issues:

  • It didn’t work with the ROM-based screen device driver.  Theoretically it could work with the screen if a RAM-loaded driver that was aware of the NDC system was used, but in practice this never occurred.  Maybe this was an example of something that had been done on the PC side of things that never made its way to the Atari.
  • NDC paid no attention to the aspect ratio of the output area.  It always applied the full range of 0-32767 to each axis.
  • The vertical axis went from 32767 at the top to 0 at the bottom, reversing the usual coordinate system used by everything else, and there were no options to change this.
  • The coordinate range used the entire positive half of the available range of a 16-bit integer.  So it was impossible, for example, to specify objects that required coordinates that lay past the right-hand edge, because the X-axis coordinate couldn’t be larger than 32767.
  • Likewise for the top edge and the Y-axis.

You couldn’t, for example, have an arc, circle, or ellipse with a center point that was past the top or right edges.

I can only imagine that the NDC was another thing VDI inherited from GSX, but I have never been able to figure out a situation in which it would have been useful.

Related to the lack of a useful virtualized coordinate system was the fact that, with the exception of being able to specify text size in terms of points, GEM VDI offered no means of specifying sizes or positions using anything but pixels.

Suppose you want to draw a box that is 2″ wide by 1.5″ tall using a line thickness of 0.10″, with the top left corner positioned at 4.25″ from the left side and 1″ down from the top. You would have to translate each of those values into the correct number of pixels before issuing your VDI commands. Meaning, it was up to the program to figure out how many pixels equaled 2 inches, or a line thickness of 0.10 inch.  This is a fair amount of extra work when you have to do it for everything you draw.
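
In practice, every application ended up carrying around a helper along these lines. This is my own sketch, not VDI code, and note that the naive version shown here walks straight into the rounding trap described next:

/* Convert a physical measurement into device pixels using the pixel
   size in microns reported by VDI at Open Workstation time.
   There are 25400 microns to an inch. */
int inches_to_pixels(double inches, int pixel_microns)
{
    return (int)(inches * 25400.0 / (double)pixel_microns + 0.5);
}

/* e.g. inches_to_pixels(2.0, 282) for a 2" box on the 90 DPI mono screen */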

And to complicate matters, it turns out VDI was lying to you about some of the numbers.

Do The Math, Plus, You Know, That Extra Stuff

Doing the math to translate inches (or whatever other measurement) into pixels wasn’t quite all you needed to do to ensure correct output.  You also needed to know how to figure out when and how VDI was lying to you.

When you open a device workstation, you get back a variety of bits of information that tell you about the device.  This includes the overall size of the output area in pixels, like 640 x 400 for the Atari ST monochrome display, as well as the pixel size in microns.
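
For reference, here’s roughly how a program got at that information. v_opnwk() and the work_out fields shown are the real VDI interface; the device ID numbering put printer drivers at 21 and up in ASSIGN.SYS:

#include <vdi.h>   /* GEM VDI bindings; the exact header name varied by compiler */

void open_printer_workstation(void)
{
    short work_in[11], work_out[57], handle;
    short i;

    for (i = 0; i < 10; i++)
        work_in[i] = 1;     /* default line type, color, fill style, etc. */
    work_in[0]  = 21;       /* device ID 21: first printer in ASSIGN.SYS  */
    work_in[10] = 2;        /* use raster (pixel) coordinates, not NDC    */

    v_opnwk(work_in, &handle, work_out);

    /* work_out[0] = width-1 in pixels,    work_out[1] = height-1 in pixels,
       work_out[3] = pixel width, microns, work_out[4] = pixel height, microns */
}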

Returning the pixel size in microns is another design flaw.  There’s simply too much round-off error involved.

A micron is a thousandth of a millimeter, so there are 25400 microns to an inch. Unfortunately, this value was returned as a 16-bit integer, and many device pixel sizes don’t translate into that without round-off error. For example, 90 DPI translates to 282.2222222 microns, returned by VDI as just 282, while 360 DPI translates to 70.5555555 microns, returned as just 70.

This means that a program has to be aware that when VDI says that a particular device has pixels that are 70 microns, it really means they’re 70.5555555 microns, and likewise for the other devices and their pixel sizes.

If you don’t think that the difference is big enough to be important, then consider a vertical line drawn on a 24-pin 360 DPI printer that is intended to be positioned at 7.5 inches from the left side of a sheet of paper.  To figure out where to draw the line, you’ve got to translate 7.5 inches into the right number of pixels for the device.

If you base your calculations on the rounded value of 70 microns returned by VDI, you’ll draw the line at column 2721:

7.5 inches x 25400 microns per inch = 190500 microns
190500 microns / 70 microns per pixel = 2721 pixels

If your application is aware that the 70 microns returned by VDI really means 70.5555555 microns, then the second part of the above calculation works out to:

190500 microns / 70.5555555 microns per pixel = 2700 pixels

Now we’re at column 2700, which is exactly where 7.5 inches lands on a 360 DPI device (7.5 x 360 = 2700). That’s a difference of 21 pixels in where the line gets positioned. That’s almost 1/16″ at 360 DPI and it is definitely noticeable.

When I worked on the WordUp word processor and Fontz font editor during my time at Neocept, we used a translation table to convert the values returned by VDI for the pixel size into the actual, un-rounded off values.
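
Conceptually, the table looked like this. I’m reconstructing from memory rather than quoting the WordUp source, so treat the entries as illustrative:

/* Map the rounded micron value VDI reports to the true resolution. */
struct micron_fix { short reported; double true_dpi; };

static const struct micron_fix fixes[] = {
    { 282,  90.0 },   /* mono screen:      25400/90  = 282.22... */
    { 211, 120.0 },   /* 9-pin horizontal: 25400/120 = 211.66... */
    { 176, 144.0 },   /* 9-pin vertical:   25400/144 = 176.38... */
    { 141, 180.0 },   /* 24-pin, 180 DPI:  25400/180 = 141.11... */
    {  70, 360.0 },   /* 24-pin, 360 DPI:  25400/360 =  70.55... */
};

double true_dpi_for(short reported_microns)
{
    int i;
    for (i = 0; i < (int)(sizeof fixes / sizeof fixes[0]); i++)
        if (fixes[i].reported == reported_microns)
            return fixes[i].true_dpi;
    return 25400.0 / reported_microns;   /* fall back to the naive value */
}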

This issue could have easily been avoided if GEM had used floating point values or perhaps 16.16 fixed point values, as either would have provided sufficient precision to eliminate significant errors from round-off.

Some Devices Are More Variable Than Others

The other big problem with the VDI abstraction model was that it ignored or minimized differences between devices where it shouldn’t have done so.  For example, while a particular display screen mode is always going to be a certain number of pixels wide or tall, a device like a printer may be capable of using different paper sizes, different paper trays, and even different pixel densities.

VDI essentially ignored these things.

For example, at Neocept we wanted WordUp to be able to print on envelopes, or maybe use legal-sized paper, not just letter-sized, which is what most GEM printer drivers were set up to do.

The right way to do things would have been to allow an application some method of specifying the desired paper size and other options when you opened a printer workstation.  But… no.  With VDI, you get the page size you get. Be happy with it.

There were some workarounds for these limitations, like a desk accessory which could be used to change the printer driver configuration before you started a print job, but this only worked with specific, matched drivers, and you couldn’t change some parameters, like output resolution, because the bitmapped fonts were built for specific resolutions.

We figured out a way to get it done for WordUp, but it required us to twiddle around with the printer drivers in ways that weren’t really by the book.  It shouldn’t have been necessary.

Fonts Don’t Specify What Resolution / Device They’re Intended For

Writing the previous paragraph reminded me of a VDI design flaw that I’d not thought about in years.

Despite the absolute necessity for a GEM bitmapped font to be designed for a specific device resolution, the font header contains no information about the resolution for which the font is intended.

The font header contains no information to indicate the aspect ratio, either. There’s no way to tell if the font is designed around the idea of square pixels (e.g. 90 DPI monochrome screen fonts) or rectangular pixels (e.g. 120h x 144v DPI 9-pin printer fonts, or 90h x 45v medium-res Atari screen fonts).

Instead of placing such information into the font header, it was expected that the filename would be encoded in such a way as to indicate this information. Keep in mind we’re talking about an old-fashioned 8.3 filename, which was expected to be used like this:

yyxxxxzz.fnt

The yy portion indicates the device type. For example, “FX” would mean Epson FX 9-pin printers at 120 x 144 resolution, but also dozens of other printers which supported the same graphics codes. The xxxx portion indicates the typeface. The zz portion indicates the size in points. Good luck if you had a font bigger than 99 points.
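
Pulling that metadata back out was trivial string slicing, which tells you just how little information there was to work with. The filename here is a made-up example:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Decode a GDOS font base name like "FXSWIS10" (hypothetical:
   Epson FX-class device, "SWIS" typeface, 10 point). */
void parse_font_name(const char *base)
{
    char device[3] = { 0 };
    char face[5]   = { 0 };
    int  points;

    memcpy(device, base, 2);     /* yy: device class                    */
    memcpy(face, base + 2, 4);   /* xxxx: typeface code                 */
    points = atoi(base + 6);     /* zz: point size (two digits, max 99) */

    printf("device %s, typeface %s, %d points\n", device, face, points);
}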

Not including this information in the font header is a huge, huge flaw, and it’s one that made bitmapped fonts much harder to live with.

Consider: if the font header had specified the target device resolution, then GEM could easily have been written to use any font with the correct aspect ratio for a device, adjusting the apparent point size as needed.

An 18 point font for a 180 dpi device could be used as a 36 point font for a 90 dpi device, for example. People did this sort of thing manually, but it could all have been done automatically had the required information been included in the font header.

What Fonts Are Installed?

Until 1990 when FSMGDOS came out (briefly) and was subsequently replaced by SpeedoGDOS, GEM on Atari used only bitmapped fonts.  Bitmapped fonts are fine when they’re output at their intended size, but generally don’t look very good when they’re scaled to other sizes.

Bitmapped font scaling could have been improved to a certain degree by using a filtered scaling routine, but to be honest that’s probably not a reasonable expectation for the horsepower of the hardware back then.

For commonly used fonts, it was typical for several sizes to be available, ranging from 8 point to 36 point. To avoid bad-looking output, many applications limited the user to font sizes that had a corresponding bitmap. The problem was, there wasn’t really any direct means of inquiring what sizes of bitmapped fonts were installed.

Instead, you had to do a loop calling the vst_point() function for each font. This function would look for the largest installed size that was less than or equal to the requested size. So if you asked it for 128 points and 36 points was the biggest available, it would tell you that it had selected 36 points. A program would start with a relatively large number, see what it got back and save it, then loop back and try the next lower number. In this way, it would find out that there were sizes of 36, 28, 24, 18, 14, 12, and 8 points, for example. Then the application could limit the selection of sizes to those it found.
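
Here’s that dance in code form. vst_point() and its argument list are the real VDI binding; the wrap-around check reflects my recollection that asking for a size smaller than anything installed just handed you the smallest size back again:

#include <vdi.h>   /* GEM VDI bindings; the exact header name varied by compiler */

#define MAX_SIZES 32

/* Discover the installed sizes for the currently selected font
   (assumes the font was already chosen with vst_font()). */
short find_sizes(short handle, short sizes[MAX_SIZES])
{
    short count = 0, ask = 999;
    short got, cw, ch, cellw, cellh;

    while (ask >= 1 && count < MAX_SIZES) {
        got = vst_point(handle, ask, &cw, &ch, &cellw, &cellh);
        if (count > 0 && got >= sizes[count - 1])
            break;                /* wrapped around: no smaller size exists */
        sizes[count++] = got;     /* e.g. 36, 28, 24, 18, 14, 12, 8         */
        ask = got - 1;            /* now ask for the next size down         */
    }
    return count;
}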

When FSMGDOS came out, a new function, vst_arbpt(), allowed programs that were aware of the new font scaler to select arbitrary sizes, but older programs using the vst_point() loop to get information about installed sizes got bit in the ass. Each call to vst_point() resulted in the font scaler saying “yes, that size is available!” Furthermore, it would cause the font scaler to do things in preparation for outputting text at the requested size. Essentially, a call to vst_point() with FSMGDOS took a lot longer than it had with the older bitmap-only GDOS. The end result was that when a program looped through all sizes from 1 to 128 points, or something like that, it basically froze up for a while because the process took much, much longer than it had when bitmapped fonts were being used.

That’s All For Now

More to come in part 6… playing soon at a theatre near you.
