In part 1, we talked about the basics of how GEM VDI works and how that applies to the concept of VDI functions being passed from an application to a device driver.

This time around, we’ll talk about the printer driver kit that Atari sent out to selected developers. Printers were by far the most commonly supported device, and also had perhaps the greatest variety in technology.

Before we talk about the printer driver kit, let’s take a look at the state of technology for printers back in the mid-80’s.

The Printer Market At The Dawn Of Time (the mid 80’s)

Today, you can walk into a store and for $200 or so, maybe less, you can buy a fairly decent color laser printer that prints 10 pages a minute or more at resolutions upwards of 1200 dpi. To those of us who survived using computers in the mid-’80s, that is simply insane. You get so much more bang for the buck from printers these days that some people buy new printers instead of replacing toner cartridges.

In the mid-80’s, the printer market was much different than it is now. Aside from the differences in technology, printers were relatively much more expensive.  A basic 9-pin printer in 1985 would cost you $250 or $300. That’d be like $500-$600 today. You could literally buy a half-dozen cheap laser printers today for what it cost for a good 9-pin dot matrix printer back then. A good 24-pin printer would set you back $500 or more.

Laser printers, in 1985, were about $2500 for a basic Hewlett Packard LaserJet or similar model. Apple introduced the LaserWriter in March with a price tag of almost $7000. Fortunately, more and more manufacturers were entering the market, and prices were starting to drop. I paid about $1300 for my first laser printer in late ’86, and that was as cheap as they came back then. It was compatible with PCL 2 (Printer Command Language version 2) which meant that most drivers for the HP LaserJet would work with it.

Today, the typical printer found in most people’s homes is an inkjet dot-matrix printer. That kind of printer wasn’t really a mainstream thing yet in 1985. The first truly popular model would be the HP DeskJet in 1988.

Graphics Printing Was SLOW!

Today, most printer output, other than specialty devices like receipt printers, is done using bitmapped graphics. The printer driver on your computer builds an image in the computer’s memory, and then when the page is complete, sends it to the printer. This gives the application and printer driver nearly complete control over every pixel that is printed.

However, in 1985, sending everything to the printer as one or more large bitmaps didn’t work so well, for a couple of reasons. First was the fact that sending data from the computer to the printer was fairly slow. Most printers connected to the computer via a Centronics-style parallel data port, which typically used the system’s CPU to handshake the transfer of data. Typical transfer speeds were rarely more than a couple of dozen kilobytes per second, even though the hardware was theoretically capable of much faster speeds.

Even though the data connection was fairly slow, the main bottleneck in most cases was the printer’s ability to receive the data and output it. Most printers had no more than a couple of kilobytes of buffer space to receive data, generally no more than about one pass of the print head when doing graphics. It was the speed of the print head moving back-and-forth across the page that was the ultimate bottleneck.

A popular add-on in those days was a print buffer, basically a little box filled with RAM that connected in-between the printer and the computer. This device would accept data from the computer as fast as the computer could send it, and store it in its internal RAM buffer. Then it would feed the data out the other end as fast as the printer could accept it. The print buffer could accept data from the computer more quickly than the printer could, and assuming it had enough RAM to hold the entire print job, it would free up the computer to do other things.

But even with a print buffer, if you had an impact dot-matrix printer and wanted to produce graphics output, you simply had to get used to it taking a while to print. For those with bigger budgets, there were other options. Laser printer manufacturers started to make smarter printers that were capable of generating graphics in their own local memory buffers. This was generally done using what we call a Page Description Language, or PDL.

Page Description Languages

With a PDL, instead of sending a bitmap of a circle, you would send a series of commands that would tell the printer where on the page to draw it, what line thickness to use, how big it should be, the fill pattern for the interior, etc. This might only take a couple dozen or perhaps a few hundred bytes, rather than several hundred kilobytes.
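To put rough numbers on that claim, here is a small C sketch of my own (not from any driver kit) comparing the size of a full-page, 1-bit raster against a short PDL command string:

```c
#include <assert.h>
#include <string.h>

/* Bytes needed for a full-page, 1-bit-per-pixel raster at a given
   resolution -- what a "dumb" bitmap print job has to ship. */
long raster_bytes(double width_in, double height_in, int dpi)
{
    long w = (long)(width_in * dpi);  /* page width in pixels  */
    long h = (long)(height_in * dpi); /* page height in pixels */
    return w * h / 8;                 /* 8 pixels per byte     */
}
```

For a 300 dpi letter-size page, `raster_bytes(8.5, 11.0, 300)` works out to 1,051,875 bytes, roughly a megabyte, while a PostScript-flavored circle command like `"300 400 150 0 360 arc stroke"` is under 30 bytes. At lower dot-matrix resolutions the bitmap shrinks into the “several hundred kilobytes” range.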

One of the most capable and popular PDLs was PostScript, which was introduced to the world with the release of the Apple LaserWriter printer. PostScript was actually a programming language, so you could define a fairly complex bit of output and then use it as a subroutine over and over, varying things like the scale factor, rotation, and so forth. PostScript also popularized the concept of using outline scalable fonts.

The downside to PostScript or other PDLs was that the printer needed a beefy processor and lots of RAM, making the printer fairly expensive, often more expensive than the computer you used to generate the page being printed. The Apple LaserWriter actually had a faster version of the Motorola 68000 processor and more memory than early models of the Mac computer.

The other downside was that even if you’re printing a couple of dozen pages every day, the printer is actually sitting idle most of the time. Meaning that extra processing power and RAM isn’t really fully utilized.

Graphics Output On A Budget

Back in the 8-bit days and early PC days, most people didn’t have thousands of dollars to drop on a laser printer. If you had a basic 9-pin dot matrix printer, it had relatively primitive graphics and it was fairly slow to output a page using graphics mode. Most of the time you made a printout of something text-oriented, it used the printer’s built-in text capabilities. Basic printing modes were fast but low-quality; more and more printers introduced a “letter quality” mode which was somewhat slower, but still much faster than doing graphics output.

However, the whole situation with printers was on the cusp of a paradigm shift. RAM was getting cheaper by the day. Computers were getting faster. The quality of graphics printing was improving. And, perhaps more than anything, the release of the Apple Macintosh computer in 1984 had whetted the market’s interest in the flexibility of bitmapped graphics output, and the subsequent release of Microsoft Windows and GEM with similar capabilities had added fuel to the fire.

Being able to combine text and graphics side by side was the new target, even for people with basic 9-pin dot matrix printers, and even though it was often orders of magnitude slower than basic text output, people were willing to wait. And for higher-quality output, they were willing to wait a bit longer.

Printer Drivers In The Wild West

Today, when you buy a printer, you get a driver for Windows, maybe one for Mac OS X. I would imagine Linux users recompile the kernel or something to get things going there.  (Kidding!)  And once you install that driver on your computer, that’s pretty much all you need to worry about. You tell an application to print, and it does.

By comparison, back when the ST first came out, printing was the wild wild west, and getting your printer to produce output could make you feel like you were in an old-fashioned gunfight. Before GUI-based operating systems became popular, every single program required its own printer driver.

And then we have the fact that there were about fourteen billion different ways of outputting graphics to a printer. Even within the product line of a single manufacturer, you’d find compatibility issues between devices that had more or less the same functionality as far as graphics output went. Even with the same printer, two different programs might have different ways of producing what appeared to be the same exact result.

Back in those days, most dot-matrix printer manufacturers followed the standards set by Epson. For example, when Star Micronics came out with their Gemini 10x 9-pin dot matrix printer, it used most of the same printer codes as the Epson FX and MX printers. Likewise with many other manufacturers. Overall, one device was often 95% or so compatible with another.

The problem was, most of the efforts towards compatibility were oriented around text output, not graphics. That is, the same code would engage bold printing on most printers, but the code for “Advance the paper 1/144th inch” used for graphics printing might be different from one printer to the next.  This was further complicated by the fact that printers sometimes differed somewhat in capability. One printer might be able to advance the paper 1/144″ at a time, while another could do 1/216″.

The one good thing was that in most cases it was possible for users to create their own driver, or more accurately, a printer definition file. For most programs, this was nothing more than a text file containing a list of the printer command codes required by the program. In some cases it was a small binary file created by a separate utility program that let you enter the codes into a form on screen.

The Transition To OS-Based Printing

The main reason every DOS application (or Atari 8-bit program, or Commodore 64 program, etc.) had its own proprietary printing solution was, of course, the fact that the operating system did not offer any alternative. It facilitated the output of raw data to the printer, but otherwise provided no management of the printing process.

That started to change for desktop computer users in 1984, when Apple introduced the Macintosh. The Mac’s OS provided developers with the means to create printer output using the same QuickDraw library calls that they used to create screen output. And it could manage print jobs and take care of all the nitty-gritty details like what printer codes were required for specific printer functions. Furthermore, using that OS-based printing wasn’t simply an option. If you wanted to print, you had to go through the system. Sending data directly to a printer was a big no-no.

One significant issue with the whole transition to OS-based printing was the fact that printer drivers were significantly more complex. It generally wasn’t possible, or at least not practical, for users to create their own.

Apple addressed the potentially murky driver situation by simply not supporting third party printers. They had two output devices in those early years, the ImageWriter 9-pin dot-matrix printer, and then the LaserWriter. It would be a couple of years before third party printing solutions got any traction on Macintosh.

When Microsoft Windows came out a short time later, it addressed the question of printing in largely the same way as the Macintosh, except that it supported a variety of third-party printer devices. 

When the Atari ST came out, the situation regarding printing with GEM should have been theoretically similar to the Mac and Windows, except for two little things.

First was the minor tripping point that the part of GEM responsible for printing (GDOS) wasn’t included with the machine at first. What was included was a set of BIOS and GEMDOS functions for outputting raw data to the printer. As a result, application programmers ended up using their own proprietary solutions.

Second was the fact that even after GDOS was released, there were only a few printer drivers included. And Atari didn’t seem to be in any big rush to get more out the door. As a result, application developers were slow to embrace GEM-based printing.

GDOS Printing On The Atari

As far as I know, the first commercial product to ship with GDOS support included was Easy Draw from Migraph at the start of 1986, about six months after the ST was released, and about two months after Atari started shipping machines with the TOS operating system in ROM rather than loading it from disk.

Migraph included pretty much exactly what Atari had given them as a redistributable setup: the GDOS.PRG file which installed the GEM VDI functionality missing from the ROM, the OUTPUT program for printing GEM metafiles, and a set of GEM device drivers and matching bitmapped fonts. The device drivers included a GEM Metafile driver and printer drivers for Epson FX 9-pin dot-matrix printers and Epson LQ 24-pin dot-matrix printers.

Compared to most other programs, this situation had a significant drawback. This was not Migraph’s fault in any way. It was a GEM issue, not an Easy-Draw issue. So what was the problem? Well, basically it comes down to device support. The GDOS printer drivers supplied by Atari simply didn’t work with a lot of printers. They targeted the most popular brand and models, but if you had something else, you had to take your chances regarding compatibility. This was a major problem for users, not to mention something of a surprise.

If there’s any aspect of GEM’s design or implementation where the blame for something wrong can be pointed at Atari rather than Digital Research, it’s got to be the poor selection of printer drivers.

With a word processor like First Word, if your printer wasn’t supported by a driver out of the box, chances were pretty good you’d be able to take your printer manual and figure out how to modify one of the existing drivers to work. Or, maybe you’d pass the ball to a more tech-savvy friend and they’d figure it out for you, but one way or the other, you probably weren’t stuck without a way to print. Not so with Easy-Draw, or any other program that relied on GDOS for output. GDOS printer drivers weren’t simply a collection of printer codes required for specific functions. If there was no driver for your printer, and chances of that were pretty good, you couldn’t print. Period.

The GDOS Printer Driver Kit

When I was at Neocept (aka “Neotron Engineering“) and our WordUp! v1.0 word processor shipped, we included basically the same GDOS redistributable files that Migraph had included with Easy-Draw, except for the OUTPUT program which we didn’t need because WordUp! did its own output directly to the printer device. It wasn’t long before we started getting a lot of requests from users who had printers that weren’t supported, or which were capable of better results with a more customized driver.

We asked Atari repeatedly for the information necessary to create our own drivers. I dunno if they simply eventually got tired of our incessant begging, or if they thought it was a way to get someone else to do the work of creating more drivers, but eventually we got a floppy disk in the mail with a hand-printed label that read “GDOS Printer Driver Kit” that had the source code and library files we needed.

There weren’t really a lot of files on that floppy disk, so I’ll go ahead and list some of them here:

  • FX80DEP.S
  • FX80DATA.S
  • LQ800DAT.S
  • LQ800DEP.S
  • DO.BAT

That might not be 100% accurate as I’m going from memory, but it’s close enough. I think there might have been “DEP” and “DATA” files for the Atari SMM804 printer as well, but it’s possible those were added later.

The “*DEP” files were the device-dependent code for a specific device. Basically there was a version for 9-pin printers and one for 24-pin printers. There were also some constants unique to individual printers that arguably should have lived in the DATA files instead.

The “*DATA” files were the related data, things like printer codes and resolution-based constants.

The “INDEP.LIB” file was the linkable library for what amounted to a GEM VDI bitmap driver.

The STYLES.C file contained definitions for the basic pre-defined VDI line styles and fill styles.

The DO.BAT file was a batch file that did the build.

Figuring It Out

There were no instructions or documentation of any kind. That may have been why Atari was originally reluctant to send anything out. It took a little experimenting but eventually I figured out what was what. The idea here was that the bulk of the code, the routines that actually created a page from the VDI commands sent to the driver, was in the INDEP.LIB library. The actual output routine that would take the resulting bitmap and send it to the printer was in the *DEP file. By altering that routine and placing the other information specific to an individual printer into the DEP and DATA files, you customized the library’s operation as needed for a specific printer.

The “*DATA” file would contain things like the device resolution, the printer codes required to output graphics data, and so forth. This included the various bits of information returned by the VDI’s Open Workstation or Extended Inquire functions.
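To make that concrete, here is a hypothetical C rendering of the kind of information a DATA file carried. The real files were assembly tables; the struct and field names here are invented, and the Epson-style escape codes are from memory, so treat them as illustrative rather than authoritative:

```c
#include <assert.h>

/* Hypothetical layout; the real *DATA files were assembly tables. */
typedef struct {
    int width_px, height_px;   /* printable area in device pixels      */
    int dpi_x, dpi_y;          /* resolution reported at Open Workstation */
    const char *gfx_start;     /* enter bit-image graphics mode        */
    const char *line_advance;  /* advance paper by one head pass       */
    const char *form_feed;     /* eject the page                       */
} PrinterData;

/* Roughly what an FX-80-class 9-pin entry might hold: 8 in x 120 dpi
   wide, 11 in x 72 dpi tall, with ESC/P-style codes. */
static const PrinterData fx80ish = {
    960, 792,
    120, 72,
    "\033L",      /* ESC L: 120-dpi bit-image mode          */
    "\033J\030",  /* ESC J 24: advance 24/216 in = 8/72 in,
                     i.e. one 8-pin pass of the print head  */
    "\014"        /* form feed                              */
};
```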

The first drivers I created were relatively simple variations on the existing drivers, but fortunately that’s mainly what was needed. There were a ton of 9-pin dot-matrix printers in those days, and while many of them worked fine with the FX80 driver, some were ever so slightly different. Like literally changing one or two printer codes would make it work. The situation was a little better with the 24-pin printers but again there were a few that needed some changes.

The first significant change we made was probably when I created a 360 DPI driver for the NEC P-series 24-pin printers. These were compatible with the Epson printers at 180 DPI, but offered a higher-resolution mode that the Epson did not. I’ll admit I had a personal stake here, as I’d bought a nice wide-carriage NEC P7 printer that I wanted to use with the Atari. That thing was slower than crap but oh, gosh was the output good looking. At the time, for a dot-matrix impact printer, that is.

One thing that was confusing at first was that the startup code for the drivers was actually contained in the library. The code in the “*DEP.S” files was called as subroutines from the v_opnwk and v_updwk functions.

Anatomy Of A GDOS Printer Driver, Circa 1986

The INDEP.LIB library (or COLOR.LIB for color devices) contained the vast bulk of the driver code. It contained all of the functions necessary to handle all of the VDI functions supported by the device. It would spool VDI commands until the v_updwk function was called. That was the call which triggered the actual output. At that point, it would create a GEM standard raster format bitmap and render all of the VDI commands which had been spooled up since the open workstation, or previous update workstation.

In order to conserve memory, the printer drivers were designed to output the page in slices. A “slice” was basically a subsection of the overall page that extended the entire width, but only a fraction of the height. The minimum slice size was typically set to whatever number of lines of graphics data you could send to the printer at once. For example, with a 9-pin printer, the minimum “slice height” would be 8 scanlines tall. If the horizontal width of the page was 960 pixels (120 dots per inch), then the minimum slice size would be 960 pixels across by 8 pixels tall. The maximum slice height could be the entire page height, if enough memory was available to the driver.

The driver would allocate a buffer for a slice, then render all of the VDI commands with the clipping set to the rectangle represented by that slice. Then it would call the PRT_OUT function. This was a bit of code in the DEP.S file that would output whatever was in the slice buffer to the printer, using whatever printer codes and other information were defined by the DATA.S file. After a slice was output to the printer, the library would clear the buffer and repeat the whole process for the next slice down the page. For example, the first slice might output scanlines 0-95, then the next slice would do scanlines 96-191, and so forth until it had worked its way all the way down to the bottom of the page.

Once it got to the bottom of the last slice, the code in DEP.S would send a form feed code to the printer to advance the paper to the start of the next page.

This all may sound inefficient, since it had to render all of the VDI commands for the page over and over again, but the bottleneck here was sending the data to the printer so that didn’t really matter.
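The loop described above can be sketched in C. The names here (render_slice, prt_out, update_workstation) are placeholders of my own, not the actual symbols from the kit:

```c
#include <assert.h>
#include <string.h>

#define PAGE_W   960   /* page width in pixels: 8 in x 120 dpi       */
#define PAGE_H  1920   /* page height in pixels (example value)      */
#define SLICE_H   96   /* slice height: a multiple of the 8-pin pass */

static unsigned char slice_buf[(PAGE_W / 8) * SLICE_H];
static int passes;     /* slices sent so far, for illustration only  */

static void render_slice(int top, int lines)
{
    /* Here INDEP.LIB would replay all spooled VDI commands,
       clipped to the rectangle of this slice. */
    (void)top; (void)lines;
}

static void prt_out(const unsigned char *buf, int lines)
{
    /* Here the *DEP.S code would emit graphics codes, the slice
       data, and line-advance codes from the *DATA.S tables. */
    (void)buf; (void)lines;
    passes++;
}

int update_workstation(void)   /* what v_updwk ultimately triggers */
{
    int top;
    passes = 0;
    for (top = 0; top < PAGE_H; top += SLICE_H) {
        int lines = (PAGE_H - top < SLICE_H) ? PAGE_H - top : SLICE_H;
        memset(slice_buf, 0, sizeof slice_buf); /* clear the buffer */
        render_slice(top, lines);               /* clip and render  */
        prt_out(slice_buf, lines);              /* ship this slice  */
    }
    /* the DEP.S code would send a form feed here */
    return passes;
}
```

With these example dimensions, a full page goes out in 20 passes of 96 scanlines each.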

A Semi-Universal Printer Driver

Something I always kind of wanted to do, but never got around to, was creating a reasonably universal GDOS printer driver that stored printer codes and other parameters in an external configuration file that could be edited by the user. Or, perhaps, stored within the driver but with a utility program that could edit the data.

You see, the main part of the library didn’t have any clue if the printer was 9-pin, 24-pin, or whatever. So there’s no reason it shouldn’t have been possible to create an output routine that would output to any kind of printer.

In hindsight, that probably should have been the goal as soon as I had a good handle on how the driver code worked.

Next Time

Next time we’ll jump right into creating our driver shell.

Related Articles

I recently got an interesting email from an Atari fan named Wong CK. Wong has been programming the Atari ST computers as a hobby, and was interested in creating a GEM printer driver that would create PDF files. He had read some of my articles here and after not finding a lot of information elsewhere online, was hoping that I might be able to provide some useful information and advice.

Here’s a portion of Wong’s original message:

One of the software that I wanted to do is a GDOS printer driver to create a PDF file. On the web I found some PDF source code and so my idea was just to map the PDF library code to each VDI functions. I also researched on how to make a Atari GDOS printer driver but there was very little information. I found the now public released GEM 3 GDOS printer drivers as well as CPM GSX printer driver source codes, but I have not figured out what it needs to be done, confused futher by the assembly codes and the x86 codes as I program in C language. This is the stumbling block and I have been stuck at this stage for nearly 2 year plus. Even the guys over at do not know (or they are not telling).

I thought that a PDF driver was an interesting idea, and Wong’s request kind of overlapped a long-unresolved ambition of my own regarding GEM drivers. I replied to Wong, telling him that…

Well, actually, that’s sort of the point of the article so let’s just jump in.

I’m expecting this to be a four-part series, as outlined below. The good news is that I’ve already got parts 2 and 3 mostly done, so there hopefully shouldn’t be a huge delay between installments.

  • Part 1 – Overview of How GEM Works & How Device Drivers Are Called
  • Part 2 – The GDOS Printer Driver Kit
  • Part 3 – Creating A Basic GEM Device Driver Shell
  • Part 4 – Sample Device Driver

Beyond part 3 I don’t have it completely mapped out yet, so that could get expanded a bit when the time comes.

Back To The Beginning

We’re going to start by going back to the beginning and talking about some of the basic fundamentals about GEM VDI.  First, let’s recognize that there are two targets for an application’s VDI requests, VDI itself, and the device driver (for whatever device is involved). This idea ties into the original GEM VDI documentation from Digital Research.  On page 1-2, you’ll find this tidbit (sic):

GEM VDI is composed of two components:

* Graphics Device Operating System (GDOS)
* device drivers and face files

When you open a VDI workstation, you’re asking GDOS to do something.  It has to figure out what device driver is required for the request. For some devices like the screen, the driver may be in ROM, for others it might have to load it from disk.  Then it has to wait for the result of the “open workstation” request so it knows if it should unload the driver or not.

On the other hand, when you draw a circle, you’re not really asking VDI to do it. Really, you’re asking the device driver that’s in charge of the specified workstation to do it. In the latter case, VDI is responsible for routing the request to the correct device driver, but doesn’t otherwise involve itself in the drawing of the circle, because VDI knows nothing about what’s required to draw a circle on a particular device. That’s what the device driver is for.

Atari ST users have typically referred to GDOS as though it was some sort of bolted-on extra piece of GEM VDI that you didn’t need unless you wanted to use loadable fonts or device drivers for things like printers. There’s a grain of truth in there, but it’s also somewhat misleading, because what Atari users call “GDOS” actually is GEM VDI. The term “GDOS” is supposed to refer to everything that’s not a font or device driver, but that idea got corrupted on the Atari side of things for some reason. We used to say that the TOS ROM didn’t include GDOS. Maybe it would have been more accurate to say it didn’t include VDI.

The majority of the code in the Atari’s TOS ROM that everybody has traditionally referred to as “the VDI” is actually a device driver for the screen.  But the “GDOS” aka the rest of VDI is missing. The TOS ROM includes just a tiny piece of code, a mini-VDI you might call it, that catches the GEM system trap and passes through VDI commands to the screen driver.  It doesn’t know anything about other devices or drivers, doesn’t know how to load fonts, or do anything else. In fact, the assembly language source file for it is under 150 lines long.

How Does A VDI Request Get From Application To Driver?

GEM uses a “parameter block” to pass information back and forth between the application and the VDI.  This is a table of pointers to five arrays which contain the input parameters, and which receive the output parameters.  They are:

  • CONTROL
  • INTIN
  • PTSIN
  • INTOUT
  • PTSOUT

Each of these arrays consists of 16-bit values.  The CONTROL array is used for both input and output.  On input, it tells GEM what function is being requested, and how much information is contained in the INTIN and PTSIN arrays. When the function is done, it tells the application how much information was returned in the INTOUT and PTSOUT arrays.

The “PTS*” arrays are used to pass pixel coordinate values. These are always done in pairs. That is, there’s an x-axis value and a y-axis value. The CONTROL array specifies how many coordinate pairs are passed back and forth.

The “INT*” arrays are used to pass integer values back and forth.  The CONTROL array specifies how many values are in INTIN or INTOUT.
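In C terms, the parameter block is just five pointers to 16-bit arrays. This struct is a sketch based on the description above (later C bindings commonly called it VDIPB), with a small hypothetical helper showing how a binding might stage a request:

```c
#include <assert.h>

typedef short WORD;   /* 16-bit value */

typedef struct {
    WORD *control;    /* [0]=opcode, [1]=# coord pairs in, [3]=# ints in,
                         [2]=# pairs out, [4]=# ints out, [6]=handle */
    WORD *intin;
    WORD *ptsin;
    WORD *intout;
    WORD *ptsout;
} VDIPB;

static WORD control[20], intin[128], ptsin[128], intout[128], ptsout[128];
static VDIPB pb = { control, intin, ptsin, intout, ptsout };

/* Illustrative helper (my own, not a real binding function): fill in
   the CONTROL entries the way a binding would before trapping. */
WORD stage_request(WORD opcode, WORD handle, WORD nptsin, WORD nintin)
{
    control[0] = opcode;
    control[1] = nptsin;
    control[3] = nintin;
    control[6] = handle;
    return control[0];
}
```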

To call VDI, an application puts the required input parameters into the CONTROL, INTIN, and PTSIN arrays, then it loads the address of the GEM parameter block into register d1 of the 680x0 CPU, and the magic number $73 into register d0.  Finally, it calls trap #2.

Wondering what “trap #2” means? For you new kids who haven’t ever written assembly code, or accessed things at that low-level, most microprocessors since the 16-bit days have implemented the concept of a system trap.  This is a special processor instruction that causes the processor to jump to whatever code is pointed to by a specific, pre-defined pointer in memory.  It’s sort of an “interrupt on demand” and it allows system programmers to install bits of code that can be called without the calling program knowing where it resides in memory, as would otherwise need to be the case.

Here’s a bit of assembly code that demonstrates the definition of the parameter arrays, the parameter block, and the actual system trap call.  This assumes the arrays have been loaded with the correct parameters:

    .bss        ; Block storage segment, aka uninitialized data space
_control:       ; Adding underscore makes symbol accessible from C
    ds.w 20     ; Reserve space for 20 elements
_intin:
    ds.w 128    ; 128 elements
_ptsin:
    ds.w 128
_intout:
    ds.w 128
_ptsout:
    ds.w 128

    .data       ; initialized data
_VDIParams:     ; the parameter block: five pointers to the arrays
    dc.l _control
    dc.l _intin
    dc.l _ptsin
    dc.l _intout
    dc.l _ptsout

    .text       ; code segment
    move.l #_VDIParams,d1
    move.l #$73,d0
    trap   #2

That’s likely the first 68000 assembly code I’ve written in probably at least 15 years, maybe more… excuse me while I catch my breath…

The high-level language binding for a call like vsf_style might look like this:

WORD vsf_style( WORD wsHandle, WORD newStyle )
{
    control[0] = 24;       /* VDI opcode for this function */
    control[1] = 0;        /* # of coordinate pairs in PTSIN array */
    control[3] = 1;        /* # of values in INTIN array */
    control[6] = wsHandle; /* Workstation handle */
    intin[0] = newStyle;   /* Requested fill style */
    vdi();                 /* the binding's stub that performs trap #2 */
    return intout[0];      /* return style actually set */
}

These are just examples, of course. Most of these details are generally managed by the function bindings that came with your C compiler (or whatever other language) so that most programmers creating GEM applications don’t have to worry about it, but it’s important for those of us who are doing system stuff like creating device drivers from scratch. We need to make sure the underlying concept is clear here because it ties into the big secret.

What’s the big secret?

The Big Secret

Here’s the big secret of GEM VDI. A secret that wasn’t really a secret, but which, judging by the questions that people still ask to this day, very few people properly understood.

A device driver’s point of entry, where it first starts executing code, is what sits on the other side of the trap #2 call.  Register d0 contains the VDI magic number, and register d1 contains a pointer to the parameter block.  So at that point it’s up to the driver to take that information and do something meaningful with it.

It’s that simple.

How GEM Calls The Device Driver

Oh, technically, the driver isn’t EXACTLY on the other side. The system trap call doesn’t actually point directly into the driver. That would be stupid. But from the driver’s point of view, it looks pretty much like that.

When the system trap is made, VDI/GDOS will first verify that it’s an API call by checking for the magic number in register d0.  If the magic number is found, VDI grabs the address of the parameter block from register d1.  The first entry is a pointer to the CONTROL array, where it grabs the workstation handle and the opcode of the function being requested.

Next, it looks at the function opcode to figure out if the request should be routed to the driver, or handled by GDOS.  Something like v_opnwk (Open Workstation) would be handled by GDOS, while v_pline (Draw Poly Line) would be handled by the driver.

For functions that need to be handled by the driver, GDOS first has to figure out which workstation and driver should receive the command.  GDOS maintains a table of information for each open workstation, including the entry point for the driver. It searches that table until it finds a matching workstation handle.  Then it simply grabs the driver’s entry point, and jumps into the driver.  Something like this:

;; Note that I'm not including important things like
;; saving and restoring registers in this sample code

    cmp.w  #$73,d0      ; d0 have magic number?
    beq.s  .VDIcall     ; yes, so it's a VDI call
    rte                 ; no: return from system trap

.VDIcall:
    move.l d1,a0        ; Get address of parameter block
    move.l (a0),a0      ; Get first entry in parameter block
    move.w (a0),d2      ; Get control[0] into register d2

;; At this point, we need to determine if the requested 
;; operation is a VDI/GDOS thing like opening a workstation 
;; or a device driver thing like drawing something.
;; That's too much code to include here, so just assume 
;; this comment does that and then jumps to the label 
;; below if it's a driver thing, and exits otherwise.

    move.w 12(a0),d2      ; Get workstation handle from 
                          ; control[6] into register d2

;; OK, now VDI would search its internal data to find 
;; the workstation and device driver associated with the
;; workstation handle passed in. Again, too much code, so
;; let's just assume that we found the information and that
;; the driver entry point is now contained in register a0.

    jsr    (a0)           ; Jump to driver entry point
    rte                   ; Return back to application

Once it gets control, the driver is expected to do whatever is called for by the specific function opcode, and return whatever data is appropriate.

The big secret here is that VDI doesn’t really have any big secrets. The VDI manual pretty much tells you exactly what GDOS does and what’s expected of a driver. It’s actually pretty mundane stuff when you get down to it.

In The Next Installment

We’ll discuss the GDOS Printer Driver Kit that Atari sent out to some developers.  We’ll go over how one used it to create new drivers and why it’s not really that suitable as a general-purpose driver kit.

After part 7 of this series came out, I got some interesting feedback, and one question in particular stood out. Milan Kovac asked how MiNT handled things differently regarding applications waiting for evnt_multi() to return.

To clarify, he’s referring to MultiTOS, of which MiNT was the core, and how GEM AES behaved differently in that environment.

That question was sort of out of the scope of the original topic, but it got me thinking and I realized it sort of touched on a few other issues with AES we hadn’t talked about yet.  So here we go, and Milan, if you read through the whole thing your question gets answered eventually.

On a side note… when I write these articles, I often have the GEM source code open in a window in the background so that I can make sure I’m not remembering something incorrectly. Once again I’ve noticed how the original GEM source code is very terse and poorly commented. Function names are generally no more than 6 or 7 characters long, even with an underscore taking up a spot somewhere. Names of variables or structure elements are about the same. For example:

EVSPEC mwait(mask)
EVSPEC mask;
{
    rlr->p_evwait = mask;
    if ( !(mask & rlr->p_evflg) )
    {
        rlr->p_stat |= WAITIN;
        dsptch();
    }
    return( rlr->p_evflg );
}

Of course, that’s not really that unusual for code written back in those days.  But gosh, it often seems like the GEM source code takes things to extremes. Someone really ought to dump this stuff into a modern IDE and refactor the source code to give things meaningful names.

What is MiNT and MultiTOS?

Just in case anybody doesn’t know, MiNT is a multitasking kernel created by Eric Smith while he was a university student. He was trying to port over some GNU libraries and utility software to the Atari ST computers, and the problem was that the TOS operating system on the Atari was lacking certain functionality required by the code.  At first, he modified the individual GNU programs and libraries as needed, but eventually decided that instead of changing the libraries and programs, it’d be easier overall to create an extension to TOS to add the required functions. MiNT was the result.

Originally the name stood for MiNT Is Not TOS.  It basically hooked into the BIOS and GEMDOS and provided the ability to do preemptive multitasking, among other things.

MiNT caught the attention of the programmers in the TOS development group at Atari Corp., including Allan Pratt, the programmer who maintained the GEMDOS portion of TOS. He was impressed with MiNT and eventually started talking with Smith about incorporating it into a new, preemptive multitasking version of TOS. Smith was later hired by Atari in 1992, and in 1993, after a lot more work on everything, MultiTOS was released.

Now the name stood for MiNT Is Now TOS.

Unfortunately, the release of MultiTOS came only shortly before Atari decided to focus all of its development efforts on the new Jaguar game console, effectively ending the active life cycle of the ST computer series.  But that’s a story for another day.

Multitasking 101

Let’s cover a couple of basic concepts regarding multitasking that we haven’t talked about before, or briefly refresh your memory if we have.

There are two main types of multitasking, cooperative and preemptive.  Task switching is what we call it when the system stops one program’s execution and starts another one. Task switching back and forth quickly enough makes it look like all the programs are actually running at the same time. And in fact, on a modern computer with multiple processor cores, your programs probably are actually running at the same time.

Vanilla GEM features cooperative multitasking. “Cooperative” means that the system doesn’t automatically switch from one program to the next. Programs have to cooperate by doing some specific operation before task switching occurs. In vanilla GEM, that specific operation is making a call to the AES event library. Every time a GEM application calls an event library function, it may end up waiting for the event to occur, and it may end up waiting for other applications as well.

MiNT features the other flavor, preemptive multitasking.  The main thing that’s different about preemptive multitasking is that the program doesn’t have to do anything special to facilitate task switching.  It can happen at any time, regardless of what the program is doing at the moment. (There are exceptions to that which we’ll ignore for now.)

Under a preemptive multitasking system, each program, also known as a process, has at least one thread, which is what we call a distinct piece of code being executed.  A process may own more than one thread, but it always has at least one.

Preemptive multitasking systems typically operate using a timer-based interrupt.  Each thread is given a certain maximum amount of time to execute before the system stops it and gives control to another thread. Each chance that a thread gets to run is called a “time slice“.

There are a couple of things that can happen to make a thread end its time slice early, or to make the system skip a turn for a particular thread.  For example, a thread can voluntarily end its time slice early.  There are a variety of reasons it might do this, but waiting for an asynchronous task to finish is a common example. Threads can also choose to sleep and wait for a certain amount of time to pass.  This is a bit different from simply ending a time slice, as it also means that the system will skip past that thread for future time slices, until the requested duration has passed.

It’s also possible for a thread to be blocked waiting for a semaphore or MUTEX (mutual exclusion) object.  These are software mechanisms that are used to allow a thread to wait for a certain condition, or to control access to something in the system that can only be safely accessed by one thread at a time.  A good example would be something trying to send data to the printer port.  If you had several programs trying to send output to the printer at the same time, the result would be a lot of wasted time and paper.

The idea of a MUTEX is that a process has to ask for exclusive access to such items before it can use them.  Upon receiving such a request, the system will then do one of the following:

  • Grant access if the requested item is currently available. The item now belongs to the requesting process until it releases it or until that process ends.
  • If the item is already in use by another process, then the system will block the thread until the item is released.  In some cases, you can optionally have the system return an error indicating that the item is not currently available.

All this means that threads don’t always execute in the same order and frequency. It’s not always A-B-C-A-B-C, etc. It actually gets even more complicated when you consider things like thread priority settings, but that’s another left turn down this tangential road so let’s not.

There are many other important aspects to multitasking, but they are pretty much beyond the scope of this article.

GEM Applications Under Vanilla GEM

Under vanilla GEM, the core of the cooperative multitasking system was contained in the AES event library.  Whenever program “A” calls an event library function like evnt_multi, and there’s no event of the requested type in the queue waiting to be processed, the event library calls a dispatcher function that checks to see if events are waiting for any other GEM applications, and if so, performs a task switch.

That is, incidentally, the purpose of the mwait function shown above as an example of the GEM source code. As simple as it is, that function is where GEM makes the decision to pass control back to the same program, or task switch to another.  It’s called by each of the various public functions of the AES event library, like evnt_multi or evnt_mouse and so forth.

The mask parameter indicates the types of events the application is requesting, and this function compares that against the events that are available.  If nothing is available, it calls the dsptch function, which is responsible for vanilla GEM AES’s cooperative task switching.

If the dsptch function found events waiting for program “B”, which by definition in vanilla GEM would currently be waiting for an event library function to return, then it would perform a task switch to that application so the events could be processed. Eventually, program “B” would make another call to an event library function, and maybe this time program “A” gets control back, or perhaps program “C” is called instead, depending on what’s waiting in the queue of unprocessed events. In this way, all of the applications currently loaded into the system would get a chance to process their events and interact with the user.

This sort of task switching is essentially the same general process that’s used by preemptive multitasking systems, except that it relies on programs making calls to the AES event library. Note that non-GEM applications couldn’t be included in this setup, since they don’t make calls to the event library. Whenever you ran a non-GEM application, it essentially blocked GEM applications until it exited.

GEM Applications Under MultiTOS (MiNT)

A well-designed GEM application that handles events properly and doesn’t try to draw to parts of the screen that it doesn’t “own” should work fine under MultiTOS.  In fact, programs which occasionally need to suspend event processing while doing something else will arguably work better under MultiTOS, since they will no longer freeze up the whole system.  The program’s own UI will be blocked until it starts making event library calls again, but other programs will continue to operate normally.

But as to how it works…

Quite a lot about GEM AES was changed for MultiTOS, but we’re only going to talk about certain things here.

Under MultiTOS, the MiNT kernel is now responsible for handling task switching between applications, rather than the AES event library.  Each application has at least one thread, including non-GEM applications.  Additionally, the AES maintains its own process that corresponds to the “original” single process in vanilla GEM, which is responsible for managing the user’s interaction with UI elements like the menu bar, window frames, etc.

So, if the event library is no longer doing its own task switching, what happens if program “A” calls the event library to request an event, and the desired event is not available?

Instead of doing its own task switch, AES will tell MiNT “I’m done for now” for the current thread’s time slice, prompting MiNT to perform a task switch.  The AES code is actually shorter and simpler than under vanilla GEM.

On the next time slice for program “A”, the first thing it will do is check again to see if the desired event is available. If not, then it will once again release the time slice. This will repeat until the event becomes available. Thus, programs which are waiting for events use very little CPU time; just enough to see if there are events pending.


We talked about MUTEX items earlier. While it doesn’t use that terminology, GEM AES has always had something that acts as a MUTEX, and it’s something all GEM programmers should know about.  When an application does a window update, the process is wrapped with calls to wind_update. This blocks any other application from starting a window update while one is already happening, providing exclusive access to the screen to a single application at a time.

To accomplish this, the original vanilla GEM code for wind_update ties into the event library.  It adds a special “mutex released” item to the list of requested events so that ending the update has to occur before another application can be called.

Under vanilla GEM, the wind_update function didn’t actually check to see if an application had locked down the screen.  It relied solely on the mutex event to block other applications from being able to do anything, since they wouldn’t be running until AES had events for them to process. However, under MultiTOS, another application might not have been waiting for an event to occur.  In that case, the application will keep on doing whatever it was doing. Unfortunately, this could eventually include a window refresh, so under MultiTOS, the wind_update call gets significantly more complicated than it was under vanilla.

Don’t Cross The Beams!  Whoops!

Finally, we’ve come around to another flaw in vanilla GEM AES.  From day one, GEM was supposed to be a multitasking system, but other than using the wind_update function to manage, somewhat imperfectly, screen output, it didn’t include any sort of a general purpose MUTEX or semaphore library so that applications could avoid stepping on each other when they all wanted to use the same resources at the same time.

It always amazed me that this was never revealed to be the big problem it had the potential to be.  I guess users were really just so used to interacting with just one program at a time that it rarely came up. But consider how many things in the system could fail if more than one application wanted access.  Just to name a few:

  • Serial ports.  What if you had a FAX program and a telecommunications terminal program going at the same time?  One as the main application, the other as a desk accessory?  Until the Mega STE and TT030 machines, there was only one serial port so this would have definitely resulted in a conflict as both programs tried to access the same port and modem at the same time.
  • Printer port.  Two programs trying to print something at the same time could step on each other unless both were doing it through GDOS.  Under GDOS,  once the printer workstation is opened by one application, any attempts by other applications to open it will fail.  Unfortunately, the application won’t have any idea why it failed because VDI doesn’t return error codes.
  • MIDI ports.  Basically the same problem as the serial ports, except with different kinds of program.
  • Sound.  Sound on the ST computers was mainly done by banging on the sound chip, either directly on the hardware registers, or via the XBIOS DoSound call. Either way, two programs trying to do this at the same time would produce some interesting results that would hurt your ears.

Basically, when it came to these items and other similar system resources, AES relied on the idea that a program would start using the item and finish with it between one event library call and the next, when no other programs could be called and start trying to use the same resource.

That sounds pretty risky, but actually it more or less worked most of the time.

MultiTOS and Mutex

When MultiTOS came out, MiNT added the basic capability needed to create mutex objects, but except for defining a couple of specific hardware resources like the SCSI or ACSI ports, which were used by the system itself, there were no preset definitions for anything that applications could rely on.

Now that it had the low-level functionality to do the job, you would think that someone would have added some functions to GEM AES to do basic application-level resource management.

You’d think that, but you’d be wrong.  AES continued to ignore the problem under MultiTOS.


Don’t get me wrong… I loved MultiTOS when it finally got to be more or less stable, and I used it on a daily basis long before it even got to that point.

Of course, at the time, my machine at the office was a TT030 with maxed-out RAM, and a big 320mb hard drive.  And it was reasonably usable on the Falcon030 too.  What about on the older machines running at 8MHz?  Even with the max 4mb of RAM, I avoided ever really using that setup. So I really couldn’t tell you how badly it sucked.  I was just pretty sure it did.

And by the way, 320mb was big for a hard drive back then.  Honest.  But even so, even with a relatively nice system like what I was using, we all knew how easy it really was to do something that just plain wouldn’t work.

Maybe if Atari had kept going with development on the ST series, some of those issues would have gotten fixed.  We weren’t unaware of them, in many cases, but there was only so much we could do with the manpower and time available.  And then, of course, the Jaguar came along and we all shifted gears to focus on it.

It’s really kind of ironic, because the last two or three years worth of TOS development had seen far more improvements and new functionality added to the system than the previous six years had.