I had originally planned to cover some things about GEM AES in this installment, but I’ve recently seen a variety of questions and comments about a couple of VDI-related topics that I’d like to address instead.

I’m also gonna cut down a bit on the exposition at the beginning that I’ve included in previous posts in this series.  If you don’t know, at least generally, what GEM is, or what VDI is, then why are you reading this? If you need to, go look at earlier posts in the series and then come back to this one.

A disclaimer. Much of what I’m going to discuss here pertains to developments at Atari that occurred long before my employment began. Some of the information was obtained while I was a third-party developer, and some from having conversations with guys who worked in the TOS group at Atari in the years before I started working there. Finally, quite a lot of it comes from conversations with coworkers after I started working there as the ST developer support guy.

GDOS In ROM.

One of the biggest questions people have about GDOS, to this day, is why didn’t Atari put it into the TOS ROM?  Even if it wasn’t ready for v1.0, why not v1.02 or v1.04?  Why not 2.0x or 3.0x when the Mega STE and TT030 came out?

Ok, here’s a LITTLE exposition.  GDOS was the part of GEM VDI which was responsible for reading device drivers into memory on request and hooking them into the system so that VDI requests went to the right place.  It also handled loading fonts into memory and hooking them in so that they were available to be used by GEM VDI.

The PC version of GEM had GDOS in its own separate executable file, loaded separately from the rest of GEM. Since GEM was never in ROM at all on the PC, this largely escaped attention; the Atari simply followed that example.

You can download the PC GEM/3 source code here if you’re interested in having a look. The “GDOS (GEMVDI.EXE) source” file contains what corresponds to GDOS.PRG on the Atari.  The “Screen Drivers” file corresponds to the Atari’s ROM portion of VDI, which was really just the screen device driver plus a basic trap handler that routed VDI commands to it.

Beyond following the PC’s example, perhaps the main reason GDOS was not included in ROM on the Atari was that there wasn’t room.

When the ST was designed, it included a ROM space of 192 kb. The problem was, the total size of early versions of TOS came in at a little over 200 kb, not even including GDOS, so it simply didn’t fit within the available ROM space. Instead of having everything in ROM, the operating system was loaded into RAM from disk.  So the first problem that needed solving was squeezing everything else down to 192 kb.  That took another few months after the ST was put on the market, but finally, at the end of 1985, Atari started including TOS 1.0 ROMs in new 520ST shipments, and made ROM chips available to existing owners.

But the TOS 1.0 ROMs still didn’t include GDOS.  It remained a separate executable file of about 9 kb that you loaded by placing it into your boot disk’s AUTO folder. Not that big, in the overall scheme of things, but big enough that there was no room in ROM even if they had wanted to include it.

OK, so no room in the early 192kb ROM.  Later machines had bigger ROM space, so why didn’t it make its way into those?  Well, at about the same time new machines like the Mega STE and TT030 came out with bigger ROM spaces, Atari was also working on FSMGDOS, which included an outline font scaler as well as new VDI functions for things like drawing bezier curves.  FSMGDOS was too big to fit even with the larger 256kb ROM of the Mega STE.

It might have fit in the TT030’s 512kb space, but by that point, most serious users had hard drives and plenty of RAM. I don’t remember the idea ever even coming up.  Plus, realistically, it was too volatile.  There was a new version every week or so for quite a while and it simply wouldn’t have made sense to put it into ROM. And before the TT030 shipped in significant quantities, FSMGDOS was pulled in favor of SpeedoGDOS.

Why Didn’t Atari Make The ST’s ROM Space Bigger?

I’m not really an expert on the history of ROM chips, but from what I recall of those days, I’m reasonably sure that 32kb was the biggest ROM chip available at the time the ST was first designed.  Or possibly larger capacities were available, but only at significantly greater cost, and maybe in physically larger packages.  Either way, larger chips either weren’t available or weren’t practical.

Realistically, the only way that Atari could have made the ROM space bigger than 192kb would have been to put more sockets onto the motherboard.  Two more sockets would have bumped the capacity up to 256kb, but it also would have required another few square inches of space on the motherboard, which was already pretty much jam-packed.  Look at the picture of the 520ST motherboard below.  Aside from the area at the top center, which was reserved for the RF modulator (not installed in this particular example), there was simply nowhere you could possibly put two more ROM sockets.

[Photo: 520ST motherboard]

The other thing to consider was that the basic design of the motherboard was done long before TOS needed to be finalized.  When they decided to include six ROM sockets they may very well have thought they were being generous.  It’s very likely nobody ever even considered the possibility that 192kb wouldn’t be enough space.

Why Didn’t Atari Put As Much As Possible Into ROM & Disk Load The Rest?

This refers, of course, to the fact that the early 520ST shipments didn’t have TOS in ROM.  Instead, you loaded it from an included floppy disk into RAM, meaning it took up about 200 kb (roughly 40%) of your available memory.  So if the problem was that everything didn’t fit, why didn’t Atari put as much as possible into ROM and only soft-load what didn’t fit?

The answer is, they did put some stuff in ROM right from the beginning.  The early 520ST that loaded TOS from disk had two ROM chips instead of six, with the other four sockets left empty.  That means there was as much as 64kb of code in ROM already.

There are essentially 7 components to TOS that ultimately had to fit into the ROM:

  • Bootstrap code – the code that gets the system up and running at power-on, and tries to read a boot sector from floppy disk or hard disk.
  • XBIOS – A variety of low-level functions for accessing hardware resources.
  • BIOS – Low-level functions for accessing hardware devices (i.e. serial ports, printer port, TTY screen driver, disk drives)
  • GEMDOS – Disk operating system & buffered device access
  • GEM VDI – Graphics library
  • GEM AES – Application Environment Services
  • GEM Desktop – Shell application

The preliminary ROMs that shipped in early machines included the first four items in this list, albeit perhaps not in finalized form. If you remember the early pre-ROM days, the disk-loaded version of TOS was based on having a file named TOS.IMG on your boot disk.  There was nothing else special about the disk. It wasn’t specially formatted or anything.

If you think about what was necessary to read that disk at all, you’ll realize that some version of GEMDOS had to be in ROM, or else the machine wouldn’t have been able to read the disk’s directory, find the TOS.IMG file, and load it. In order for GEMDOS to work, that means a pretty good chunk of the BIOS had to be there.  And that means that certain XBIOS functions had to be there.  And of course, if the bootstrap code wasn’t in place, then the whole system would have been a paperweight when you turned the power on.

So if some of this stuff was in ROM already, then why was TOS.IMG around 200kb in size? Clearly, the TOS.IMG file included new versions of all of the TOS components, not just the GEM stuff.  The main answer to that is, the versions of the components that were in the 64kb ROM were neither complete nor finalized.  They really only included what was necessary to read the TOS.IMG file into RAM and get it started.

Sorry it’s taken me so long to come back to this subject.  I had a lot of good feedback from part 1, and I’d originally mentioned talking more about true color video modes and outline fonts, but since then I’ve had a lot more ideas pop up and I wasn’t sure which way to go until now.

In early 1987, I became the “Chief Software Engineer” for a small company named Neotron Engineering.  Small in this case meaning that I was the third employee.  I was in the final stages of completing my GEM font editor program (Fontz!) and had seen an ad in one of the Atari-related magazines for a new word processor called WordUp that Neotron was releasing soon.

WordUp looked like it might be the first product to fulfill the promise of GEM’s ability to create multi-page documents that mixed graphics and text with different typefaces and styles.  At that point, it was a year and a half since the Atari ST series had come out and only a handful of programs had touched on those promised capabilities.  In retrospect, one can see that the reasons for that promise not being fulfilled weren’t completely as they seemed at the time, but back then the main thing everybody pointed their finger at was GDOS.

GDOS

When Atari first announced the Atari ST, the promise of GEM included the idea that you could use a GUI-based word processor that would allow you to format your document with a variety of different typefaces and styles, include graphics together with text, etc.  In fact, the main hype about the machine in those early days was that it promised the power of the just-released Apple Macintosh at a lower price.  However, when the machine shipped in July 1985, the GEM VDI graphics engine was missing a key component necessary to make that possible: GDOS.

GDOS stands for Graphics Device Operating System but to be honest it’s really a lot less complicated than it sounds.  GDOS did basically three things.

  • Read a configuration file (ASSIGN.SYS) at system boot time which specified the names of loadable device drivers and the names of the font files that went along with them.
  • Upon demand, install a loadable device driver into memory and patch it into the GEM VDI dispatcher.
  • Upon demand, load the bitmapped fonts that were specified for a given driver into memory.

That’s all it did.  It really should have been part of the regular GEM VDI that was (eventually) put into ROM, but it didn’t make it into the original release.  Even when I worked at Atari I’d get different answers depending on who and when I asked… was it bugs (early versions had a lot of memory leaks), or was it a lack of licensed bitmapped fonts, or what?

I’m sure part of the problem was the logistics of fitting everything into the system.  The early Atari ST models shipped without a hard drive of any sort, and had the operating system loaded from disk instead of in ROM.  That meant a standard 520ST had somewhat less than 300kb of RAM available after booting, before any application was loaded.

That still seemed like a lot of memory to those of us freshly upgraded from 8-bit computers where 64kb was the typical maximum.  But a printer driver would take up a minimum of about 70kb before loading any fonts.  Even a minimal set of bitmapped fonts for a printer device could run another 100kb or more, leaving minimal room for the application itself.

Keep in mind that this was in the old days before virtual memory was a standard feature on computers.  The Atari development system did include a linker that theoretically allowed for overlays (program segments which shared the same memory blocks and which were loaded on demand as needed), but the feature was never really more than briefly alluded to in the documentation.  I remember thinking it seemed like the required runtime support was missing.

At some point the decision was made that GDOS would be provided as a separately loaded program that you would place in your boot disk’s AUTO folder (containing programs to be run at boot time).  This would allow it to be included with programs that needed the features it provided, while not consuming memory or other system resources otherwise.

The first program that shipped with GDOS was Easy-Draw, from Migraph Software.  At least I think it was the first one.  It was a vector graphics drawing program which allowed the user to create drawings from combinations of the various graphics primitives supported by GEM VDI.  (This did NOT include bezier-curves or splines until many years later.)

The way Easy-Draw got around the memory issue was to use one program for creating the drawings and another for printing them.  When you wanted to print something, it would save your drawing as a GEM Metafile.  This was basically a file consisting of a list of GEM VDI commands that, when executed in sequence, would reproduce the drawing.  The only thing the program needed to do was scale the coordinates in the metafile to the resolution of the output device.  Once the metafile was created, then the Easy-Draw program would unload the screen fonts, freeing memory, and then terminate.  As the final step of the process, it would ask GEM to run the OUTPUT program.
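
If you’re curious what driving the metafile device actually looked like from an application, here’s a minimal sketch.  I’m assuming GDOS is loaded, ASSIGN.SYS lists the metafile driver under its conventional device ID of 31, and the usual C bindings are linked in (the header name varied by compiler); the filename is just an example.

short work_in[11], work_out[57];

void save_metafile( void )
{
    short handle, i;

    for( i = 1; i < 10; i++ )
        work_in[i] = 1;             /* default attributes */
    work_in[10] = 2;                /* use raster coordinates */
    work_in[0] = 31;                /* conventional device ID for the metafile driver */

    v_opnwk( work_in, &handle, work_out );
    vm_filename( handle, "DRAWING.GEM" );   /* name the output file */

    /* From here on, VDI calls made on this handle are recorded, not drawn. */
    v_circle( handle, 300, 200, 100 );

    v_clswk( handle );              /* close the workstation; the metafile is complete */
}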

OUTPUT was actually a part of the original GEM setup from Digital Research, where the memory situation on 640K-limited PCs wasn’t much different than that on the Atari.  As such, it was included in the GDOS distribution from Atari.  GEM even had specific API calls for launching the OUTPUT program.

When you exited from OUTPUT, it would ask GEM to run the program that had started it, in this case Easy-Draw, so that it looked like you were more or less going from the “design” module to the “output” module as seamlessly as might be possible under the circumstances.  In those early days it meant a fair amount of swapping floppy disks around, but ultimately it worked.

Easy-Draw shipped with GDOS disks containing the printer drivers and bitmapped fonts that Atari provided.  There was a printer driver for the Epson FX series dot-matrix impact printer, another for the Star Micronics printers, and three sets of bitmapped fonts, two for different screen modes, and one more shared by both printer drivers.

The three typefaces included were Typewriter (basically a fixed-character-width font like Courier), Swiss (a sans-serif font similar to Helvetica or Arial), and Dutch (a serif font like Times Roman).  Typewriter was included in one size per device, and the others in five or six sizes ranging from 9 point to 36 point.  Those fonts were high quality as long as they were output at the right size, but they didn’t provide much variety.

Enter Fontz!

In mid 1987 I purchased an Apple Macintosh II computer.  This was the first model with expansion slots and no built-in monitor.  I had been using the Magic Sac emulator on the Atari ST since the previous summer, but I found that when I started programming with the Lightspeed C (later Think C) package, the debugger didn’t work so well under emulation.  Not really surprising since the emulator had to hook into low-level stuff to make things work on the Atari, and the debugger had to hook into much of the same stuff to do its thing.  Anyway, I really wanted to learn how to program for the Mac platform, and I also wanted to experiment with high-res color graphics.

To be honest, another big reason I got the Mac was the unfulfilled promise of the Atari ST.  However, the Mac didn’t quite take over as my main machine, and I continued to work regularly with the Atari.  The Mac was already heavily supported by many large software publishers, so I saw the Atari platform as a better place for a single developer just starting out to make his mark.

At some point, I wrote a little utility program that would read bitmapped fonts from the Atari 8-bit computer and create a GEM-format bitmapped font from it.  Then I started playing around with converting other bitmapped font formats.  I had font collections on the Mac with dozens of bitmapped fonts in a variety of sizes, so I dug down into the Inside Macintosh book (the programmer’s bible for the Mac in those days) and before long I had figured out how to extract the data and create a GEM font.

Eventually I pulled these various individual conversion utilities together and a friend suggested it would be nice to be able to edit the actual bitmaps.  Before too long, I’d created the Fontz! bitmap font editor, which could import a variety of different bitmap formats, scale fonts to different sizes for different devices, and more.  It had what amounted to a paint program built-in for manipulating the character bitmaps.

This brings us back to the beginning of the story… I was getting a lot of interest from various publishers about Fontz! when I saw the advertisement for WordUp.  After I met with Neotron, they offered to buy the publishing rights to the program and offered me a position as a software engineer.

GDOS Printer Drivers

After I started at Neotron Engineering (later renamed Neocept), my first task was to finish off Fontz! so we could ship it, and then I started working on various features for WordUp. Once we finally shipped, we included the same basic GDOS package as everybody else.  The main difference was that we didn’t use the OUTPUT program for printing, but rather printed directly from the main program.  This used more memory, so we specified that the minimum memory requirement for WordUp was 1mb instead of the usual 512kb.  This was early 1988, and at that point there were all sorts of memory upgrades available for 512kb machines, and newer machines like the 1040ST and Mega ST had more memory in the first place. The program actually worked on a 512kb machine, but you often needed to limit the number of fonts to make everything fit, so we decided just to say that 1mb was required.

One of the biggest problems we had with the initial release was that new printers were coming on the market every time you turned around, and many of them required a different printer driver than either of the ones Atari provided.  So we were constantly getting requests from users for drivers.

For whatever reason, Atari was really slow to come out with new drivers, so initially we had no way to respond to those requests.  Eventually we were able to convince Leonard Tramiel, VP of Software Development at Atari, to release the GDOS Driver Kit to us.  I’m not sure if we were the first developer to get this from Atari, but I don’t think more than one or two other developers ever released a driver.  I think part of the reason was that some of what we ultimately did with it went way beyond what Leonard was expecting us to do.

The GDOS Driver Kit was basically the source code and a linkable library for the existing printer drivers.  The linkable library was, for all intents and purposes, a version of GEM VDI which rendered into a bitmap buffer.  This would be combined with a chunk of code, customized for each printer, that did nothing more than output a buffer using whatever custom codes the printer required.   There was printer-specific code provided for the Epson FX series 9-pin dot matrix impact printers, and for the Epson LQ series 24-pin dot matrix impact printers.  I’m thinking there may also have been a third example for Atari’s own SMM804 9-pin dot matrix printer, but without digging up those old floppies I’m just not sure if that was there originally, or if it came later.

In order to minimize memory usage, the driver would allocate a buffer that was only a fraction of the size needed for the overall page, represented as a horizontal strip across the overall raster area.  This strip would extend across the entire width of the raster, but might only be 8 lines tall even if the overall printer raster was 3000 lines.  The driver would rasterize all of the GEM VDI primitives making up the page and clip the output to the current strip.  Then the printer-specific code you created would output to the printer using whatever custom codes were needed.  Then it would move down to the next strip and repeat the process.  This would happen over and over until it reached the bottom of the page.
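
In rough pseudo-C, the driver’s main loop had the shape shown below.  This is just a sketch of the general idea, not the Driver Kit’s actual interface; render_strip() and send_strip_to_printer() are made-up names standing in for the library’s rendering entry point and your printer-specific output code.

#define PAGE_WIDTH_BYTES  (960 / 8)    /* e.g. 960 dots across at 120 dpi */
#define PAGE_LINES        1260         /* total raster height in lines */
#define STRIP_LINES       8            /* height of one strip */

static unsigned char strip[ PAGE_WIDTH_BYTES * STRIP_LINES ];

extern void render_strip( unsigned char *buf, int top, int lines );              /* hypothetical */
extern void send_strip_to_printer( unsigned char *buf, int width, int lines );   /* hypothetical */

void print_page( void )
{
    int top;

    for( top = 0; top < PAGE_LINES; top += STRIP_LINES )
    {
        /* Re-render the page's VDI primitives, clipped to this strip. */
        render_strip( strip, top, STRIP_LINES );

        /* Printer-specific part: wrap the strip in whatever escape codes
           this printer wants and push it out the parallel port. */
        send_strip_to_printer( strip, PAGE_WIDTH_BYTES, STRIP_LINES );
    }
}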

This may sound inefficient in some respects, since the primitives might be rendered dozens or even hundreds of times in the process.  However, keep in mind that sending data to a printer over a standard parallel port was not a very speedy process back then.  Compared to the amount of time required to actually send the data to the printer, not to mention the time it took most impact dot matrix printers to actually print that data, the rendering time was usually not a major factor.

Breaking the raster down into strips allowed the printer driver to use much less memory than rendering the entire page at once would have required.   The minimum resolution of the printer drivers we were using was 120 x 144 dpi, or 960 x 1260 dots for a letter-sized page.  For regular monochrome output, creating the entire raster at once would have required about 180kb of RAM.  Remember what I said earlier about how much RAM was available and you’ll see why minimizing RAM usage was important.

Back in those days, you might have five different manufacturers making printers with basically the same capabilities that, for whatever reason, used different escape codes for the same functions.  More likely, you had 80% compatibility from one device to the next, but one or two codes here and there would be different.  The first few printer drivers I created were fairly easy to do, as they involved nothing more than changing the printer escape codes that were used.  This was, I think, all that Leonard Tramiel at Atari really expected us to do, even though no restrictions were ever placed upon us.

However, before long we started looking at printers that required more work.  The Epson JX-80 was a color version of their popular FX-80 printer.  Another popular inexpensive color printer was the Okimate, a thermal wax transfer printer from Okidata.  HP’s new DeskJet inkjet printer had just hit the market.  The original HP LaserJet was quite popular among higher-end users.

When I first started working on the drivers that were more complex, or which at least needed changes beyond what escape codes were output, I needed to figure out a way to debug them when things didn’t work right.  This was not straightforward, because of the way printer drivers were loaded into the system and executed.  Eventually I added code to the printer driver that did the equivalent of standing up and shouting “here I am!” when the system loaded it into memory.  This allowed me to tell the debugger where it was in memory, load debugging symbols, etc.  Along the way, I learned a ton of important facts and trivia about GEM VDI and the way it worked internally.

Optimizing The Output

Remember a moment ago when I said that sending the data to the printer was a significant bottleneck?  Well, the basic printer driver code supplied by Atari was set up to output complete strips of data to the printer no matter what the strip actually contained.  That meant we output the same amount of data whether the strip was full of information or completely empty.  That’s not very efficient.  We noticed that some printers had the ability to skip over empty space and start printing graphics data in the middle of the line instead of always starting at the beginning, so we started working on optimized drivers that would take advantage of that.  This could cut the time it took to print a page tremendously.  I think we did one test where the unoptimized driver took about 6 minutes to output a page, while the optimized version took something like 30 seconds.  That’s roughly a twelvefold improvement, and it wasn’t even the theoretical maximum.  Of course, most pages people printed fell somewhere in the middle.
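
The core of that optimization is almost embarrassingly simple: before sending a raster line, find the first and last non-blank bytes and only transmit that range, using the printer’s ability to start graphics part-way across the line.  Something along these lines (a sketch, not our actual driver code):

#include <stddef.h>

/* Find the [first, last) range of non-blank bytes in a raster line.
   If first == last the line is completely empty and can be skipped. */
void trim_line( const unsigned char *line, size_t len, size_t *first, size_t *last )
{
    size_t lo = 0, hi = len;

    while( lo < hi && line[lo] == 0 )        /* skip leading blank bytes */
        lo++;
    while( hi > lo && line[hi - 1] == 0 )    /* skip trailing blank bytes */
        hi--;

    *first = lo;
    *last  = hi;
}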

Somewhere along the line, Migraph software had come out with a GDOS driver for the HP LaserJet laser printer.  This also worked with the HP DeskJet printer, which was a very popular printer with Atari users as it produced near-laser quality at a much lower price.  However, it was slow, mostly because of the amount of time it took to send over the 1mb+ of data that it took for each page, so Migraph supplied two versions, one for fast output at 150 dpi and another for high-quality 300 dpi.

Our WordUp word processor worked with the Migraph driver,  but we weren’t happy with the speed so we decided to do our own driver.  Our original optimization technique that I described earlier did not apply to the DeskJet, but we discovered that it supported the concept of sending Run-Length Encoded (RLE) graphics data. This would allow you to send just a few bytes of data whenever there was a long string of repeating values, as would be the case whenever there was an area that was all black, or all white, or which repeated some pattern over and over.
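
Run-length encoding is about as simple as compression gets.  The sketch below shows the general count-byte-plus-data-byte idea; it is not the actual TurboJet encoder, and you’d want to check the DeskJet’s PCL documentation for the exact on-the-wire format before reusing it.

#include <stddef.h>

/* Encode one raster line as (repeat count - 1, value) pairs.
   Returns the number of encoded bytes written to 'out'. */
size_t rle_encode( const unsigned char *in, size_t len, unsigned char *out )
{
    size_t n = 0;

    while( len > 0 )
    {
        unsigned char value = *in;
        size_t run = 1;

        while( run < len && run < 256 && in[run] == value )    /* measure the run */
            run++;

        out[n++] = (unsigned char)(run - 1);    /* repeat count byte */
        out[n++] = value;                       /* the byte to repeat */

        in  += run;
        len -= run;
    }
    return n;
}

A run of up to 256 identical bytes, like a stretch of blank paper or a solid black bar, collapses to just two bytes on the wire.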

My boss hadn’t really gotten involved in the printer drivers too much up to that point, but he got interested in this and worked on the encoding algorithm for the data while I did the rest of it.  Ultimately we created a driver that was 200-300% faster on average than the Migraph DeskJet driver.  This was released along with a full set of fonts as the TurboJet printer driver package.

G+Plus From Codehead Software

As I said earlier, in the early days, GDOS had a reputation for being buggy and slowing down your system.  As far as I could tell, the “buggy” part was mainly a reference to memory leaks that could occur as drivers and fonts were loaded and unloaded.   It was something where you might easily never see any problem.  But as far as slowing your system down goes, this was a bit more commonplace and easy to measure.  It wasn’t a huge difference, but it was there.  The cause was simply that there was extra code being executed each time a GEM VDI call was made, and that code wasn’t really especially optimized.

Codehead Software was a company that specialized in publishing utility software for the Atari ST.  One of their early products was G+Plus which was advertised as a GDOS replacement.  They claimed to have used optimized assembly language to eliminate most of the system slowdown. They also had other improvements to the way regular GDOS worked, like the ability to change the list of installed fonts for a device without rebooting your system.

I first got involved with G+Plus because the very early version had some compatibility issues with our WordUp program.  It initially wasn’t clear if it was a bug with G+Plus, or something WordUp was doing, and so I ended up working with Codehead’s Charles Johnson to figure out what was going on and fix it.  I don’t remember exactly what the problem was now, but it didn’t take us long to figure out.

That all happened before we had gotten the GDOS Driver kit and I had acquired the knowledge about the internal workings of GEM VDI that came from working with it.  Later, as I learned more about how GDOS and GEM VDI worked internally from the printer driver stuff, a variety of questions occurred to me regarding how G+Plus did certain things.  Eventually, I ran into Charles again at a regional Atari User Group show and asked him those questions.  To my surprise, he was largely unable to answer them.  In fact, he didn’t seem to know some of the very basic low-level details about how GEM VDI worked internally.

I couldn’t understand how someone could write a new version of GDOS without such knowledge.  I came to the conclusion that Codehead hadn’t actually written their own version of GDOS from scratch, but rather that they had taken the original version from Atari and disassembled it.  That was easily done, and the program file was only about 8-9 kb.  Once they had a disassembled version, it would be possible to hand-tune the original code generated by Atari’s C compiler to eliminate the various inefficiencies that contributed to the system slowdown. Along the way, they also figured out how to do a couple of things like alter the bitmapped fonts configuration on the fly.

While I was initially disappointed to conclude that they hadn’t written their own version from scratch, I later came to appreciate the fact that they managed to introduce as many new features as they did while maintaining compatibility.  It was quite impressive.

The Atari Gets Vector Fonts

As 1990 drew to a close, the big news was the impending arrival of FSMGDOS.  “FSM” stood for “Font Scaling Module” and basically that meant outline fonts. Outline, or vector-based, fonts were nothing new.  Several desktop publishing applications for the Atari computers used their own proprietary flavor of outline fonts, including Calamus and Pagestream.  However, there was as yet no system-wide solution.

FSMGDOS was a new version of GDOS that took the regular features of GDOS and added the ability to generate character bitmaps as needed from scalable outline fonts.   It also added some new GEM VDI drawing primitives for bezier curves and splines.  All while maintaining backwards compatibility (mostly) with the original GDOS and programs that used it.

The biggest compatibility problem with FSMGDOS was the fact that existing applications using GDOS with bitmapped fonts would typically determine which specific sizes were available and limit the user options accordingly.  But with FSMGDOS, this didn’t work quite right, since any size requested would resolve back to “available”.  However, this was a relatively minor problem overall.
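
To make that concrete, a bitmapped-font application might probe for available sizes with a loop like the one below (a sketch; real programs typically combined this with vqt_name() to walk the installed font list first).  Under FSMGDOS, the size you ask for is always the size you get, so the test stops telling you anything.

short available[7];

void probe_sizes( short vdi_handle )
{
    short cw, ch, cellw, cellh;
    short sizes[7] = { 9, 10, 12, 14, 18, 24, 36 };
    int i;

    for( i = 0; i < 7; i++ )
    {
        /* vst_point() returns the point size actually selected. */
        short got = vst_point( vdi_handle, sizes[i], &cw, &ch, &cellw, &cellh );

        /* With bitmapped fonts this is only true if the exact size exists;
           with FSMGDOS it's always true, since any size can be generated. */
        available[i] = ( got == sizes[i] );
    }
}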

At Neocept we got an early version of FSMGDOS and started making changes to accommodate it, but Neocept was not doing well and essentially dissolved before that job was really finished.

A couple of months later, I moved to the SF Bay Area to take a position at Atari as the developer support engineer for the ST platform.  When I arrived at Atari, FSMGDOS was just about ready to go in many ways, but one of the big problems was that it used a proprietary type of outline font that was not in common usage.  The availability of a variety of typefaces was a big question mark.  The big player in outline fonts was Adobe’s PostScript Type 1 format, which had been around for several years and which was supported on Windows and Mac via the ubiquitous Adobe Type Manager software. TrueType was just making its debut, but it had backing from both Apple and Microsoft.

We tested the waters briefly by allowing the original version of FSMGDOS to ship with a new version of WordFlair from Goldleaf Publishing, but at the same time, we started looking into an alternate font scaling library from Bitstream.  Since Bitstream was (and continues to be) one of the world’s foremost type foundries, all the lingering questions about font availability faded away.  The big question came down to performance, as the Speedo format required a bit more math processing to render, and floating-point support was software-based on all Atari machines except for the new TT030, which could take an optional floating point chip.

If I remember correctly, the rendering speed differences turned out to be less of an issue than originally anticipated, and combined with improved bitmap caching routines, the Bitstream Speedo scaler and fonts ultimately won out.  Thus was born “SpeedoGDOS”, the “Speedo” version of “GDOS”.

In Conclusion

Part 3 of this series is forming in the back of my brain even now… stay tuned!

Way back in 1985, I started my “professional” career as a software guy as a developer for the brand new Atari ST computer.  After a few years as a 3rd party developer, I was hired by Atari to provide developer support to ST developers in the USA. 

Part of what made me a good choice for that role was that I had a really good in-depth understanding of GEM.   For example, when I worked on the WordUp word processor for Neocept, I wrote more than a dozen GDOS printer drivers for various printers, including color, that Atari’s drivers didn’t support.  Quite a lot of that information is still burned deep into my brain, even though it’s been many years since I actually wrote any code for the Atari.

These days, when something reminds me of GEM for some reason, the main things that come to mind are the various problems, glitches, and workarounds for various things.  This article is going to be mainly about the various design flaws in GEM, their workarounds, and how they impacted development.

GEM – The Origins

In the mid 80’s, just as computers were starting to break out of their character-based screens into more graphically oriented environments, Digital Research came out with GEM, or the Graphics Environment Manager.  The idea was to offer a graphic-based environment for applications that could compete with the brand new Macintosh computer, and Microsoft’s new Windows product.

GEM started life in the late 70’s and early 80’s as the GSX graphics library.  This was a library that could run on different platforms and provide a common API for applications to use, regardless of the underlying graphics hardware.  This was a pretty big deal at the time, since the standard for graphics programming was to write directly to the video card’s registers.  And since every video card did things a little differently, it often meant that a given application would only support one or two specific video cards.  The GSX library would later become the basis of the VDI portion of GEM, responsible for graphics device management and rendering.

GEM was basically a marriage of two separate APIs.  The VDI (Virtual Device Interface) was responsible for all interaction with graphics devices of any sort, while the AES (Application Environment Services) was responsible for creating and managing windows, menu bars, dialog boxes, and all the other basic GUI components that an application might use.

GEM was first demoed running on the IBM PC with an 8086 processor, running on top of MS-DOS.  However, various references in the documentation to the Motorola 68000 processor, and to integration with their own CP/M-68K operating system as the host, make it clear that DR intended GEM to be available for multiple processors at a relatively early stage of development.

Ironically, the PC version of GEM never really took off.  Other than being bundled as a runtime for Ventura Publisher, there were never any major applications written for the PC version.  Overall, it was the Atari ST series where GEM found its real home.

Overview of GEM VDI

In case you never programmed anything for GEM VDI, let me give you a brief overview of how it worked.  The first thing you do in order to use a device is open a workstation.  This returns a variety of information about the device’s capabilities.  Another API call, available once the workstation has been opened, gives you additional information about the device’s capabilities.  Once you have an open workstation, you can execute the appropriate VDI calls to draw graphics onto the device’s raster area.

Most devices aren’t meant to be shared so you can only have one workstation open at a time.  However, in order to support multitasking with multiple GEM applications and desk accessories running together, you need to be able to share the display.  Therefore, the VDI supports the notion of opening a “virtual” workstation which is basically a context for the underlying physical workstation. 
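
In C, the standard open-a-virtual-screen-workstation dance looked roughly like the sketch below.  I’m assuming the usual Atari C bindings are linked in and that appl_init() has already been called; the only line that should raise an eyebrow is the one that sets work_in[0], which we’ll get to shortly.

#include <osbind.h>     /* for Getrez() */

short work_in[11], work_out[57];
short vdi_handle;

void open_screen_vwk( void )
{
    short dummy, i;

    /* Handle of the physical screen workstation, already opened by the AES. */
    vdi_handle = graf_handle( &dummy, &dummy, &dummy, &dummy );

    for( i = 1; i < 10; i++ )
        work_in[i] = 1;             /* default line/marker/fill/text attributes */
    work_in[10] = 2;                /* use raster (pixel) coordinates */
    work_in[0]  = Getrez() + 2;     /* the screen device ID; more on this below */

    v_opnvwk( work_in, &vdi_handle, work_out );

    /* vdi_handle now refers to our own virtual workstation. */
}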

GEM VDI Design Issues

The VDI has a number of huge design flaws that are easily recognized today.  I’m generally not talking about missing features, either.  I’m sure we could come up with a long list of things that might have been added to the VDI given enough time and resources.  I’m talking about flaws in the intended functionality.  Many of these issues were common cause for complaint from day one.

Also, let me be clear about this: when I suggest some fix to one of these flaws, I’m not saying someone should find the sources and do it now.  I’m saying it should have been done back in 1983 or 1984 when Digital Research was creating GEM in the first place.  Any of these flaws should have been noticeable at the time…  most of them are simply a matter of short-sightedness.

No Device Enumeration

Until the release of FSMGDOS in 1991, 6 years after the ST’s initial release, there was no mechanism for an application to find out what GEM devices were available, other than going through the process of attempting to open each possible device number and seeing what happened.  This was slow and inefficient, but the real problem underneath it all is a bit more subtle.  Even once FSMGDOS hit the scene, the new vqt_devinfo() function still required you to test every possible device ID.

The fix here would have been simple.  There should have been a VDI call that enumerated available devices.  Something like this:

typedef struct
{
/* defined in VDI.H - various bits of device info */
} VDIDeviceInfo;

VDIDeviceInfo deviceinfo[100];
int numdevices = 0;
int dev_id = 0;

while( (dev_id = vq_device( dev_id, &deviceinfo[numdevices] )) != 0 )
    numdevices++;

The idea here is that the vq_device() function would return information about the next available device with an ID higher than the dev_id parameter passed into it.   So if you pass in zero, it gives you info on the lowest-numbered device (normally the screen, device #1) and returns that device’s ID as the result.  When it returns zero, you’ve reached the end of the list.

Device ID Assignments

Related to the basic problem of device enumeration is the way device IDs were handled overall.  GEM graphics devices were managed via a configuration text file named ASSIGN.SYS that lived in the root directory of your boot volume.  The file would look something like this:

PATH=C:\SYS\GDOS
01 screen.sys
scrfont1.fnt
21 slm.sys
font1.fnt
font2.fnt
font3.fnt

The first line specifies the path where device driver files and device-specific bitmapped fonts were located.  The rest of the file specifies the available devices and the fonts that go with them.  For example, device 21 is the “slm.sys” driver, and “font1.fnt”, “font2.fnt” and “font3.fnt” are bitmapped font files for that device.

The device id number is not completely arbitrary.  There are different ranges of values for different device types.  For example, devices 1-10 were considered to be screen devices, 11-20 were considered to be pen plotter devices, 21-30 were printer devices, and so forth.  Oddly complicating things in a few places is Digital Research’s decision to mix input devices like touch tablets together with output devices like screens and printers.

The way device IDs worked was mainly a contributing factor in other situations, rather than a problem in its own right.  For example, because there was no easy way to enumerate available devices, many applications simply made the assumption that the printer was always going to be device 21 and that the metafile driver was device 31.  And in most cases, that’s all they would support.

The bigger problem, however, was that while the device ID assignments were mostly arbitrary, they were anything but arbitrary for the display screen.

Getting The Screen Device ID

Remember earlier when I explained how applications would open a “virtual” workstation for the screen?  Well, in order to do that, you have to know the handle of the physical workstation.  That’s something you get from the GEM AES function graf_handle().  One would think, since the physical workstation is already open, that you shouldn’t need to tell VDI the device ID, right?  Wrong.  Even though the physical workstation for the screen device is already opened by the GEM AES, you still need to pass the device ID number as one of the parameters when you open a virtual workstation.  So how do you get the device ID for the screen device that’s already open?  Well, there really isn’t a good answer to that question, and therein lies the chocolaty center of this gooey mess. 

On the Atari, the recommended method was to call the XBIOS function Getrez() and add 2 to the returned value.  The first problem with this idea is that there is no direct correlation between that value and anything like the screen resolution or the number of colors available.   And even if there were some correlation, there are far more possible screen modes than you can fit in the device ID range of 1-10.

Furthermore, this method only really worked for the video modes supported by the built-in hardware.  Add-on video cards not only needed a driver, they also needed to install a patch to make Getrez() return the desired value when other video modes were used.

This pissed me off then, in large part because developers didn’t universally follow the recommended method, and their code broke when Atari or third parties introduced new hardware.  In fact, the very first article that I wrote for the ATARI.RSC Developer newsletter after I started at Atari was about this very subject.

Looking back, the thing that pisses me off the most about this is the fact that I can think of at least three really easy fixes.  Any one of them would have avoided the situation, but all three are things that probably should have been part of GEM from day one.

The first, and most obvious, is that opening a virtual workstation shouldn’t require a device ID as part of the input.  The VDI should be able to figure it out from the physical workstation handle.  Seriously… what’s the point?  The device is already open!

Another option would have been adding a single line of code to the GEM AES function graf_handle() to make it also return the device ID number, rather than just the handle of the physical workstation.  If you’re going to insist on passing it as a parameter to open a virtual workstation, this is what makes sense.  After all, this function’s whole purpose is to provide you with information about the physical workstation!

Lastly, and independent of the other two ideas, there probably should have been a VDI function that would accept a workstation handle as a parameter and return information about the corresponding physical workstation, including the device ID.  This arguably comes under the heading of “new” features, but I prefer to think that it’s an essential yet “missing” feature.

Palette-Based Graphics

Perhaps the biggest flaws in GEM VDI stem from the fact that the VDI is wrapped around the idea of a palette-based raster area.  This is where each “pixel” of the raster is an index into a table containing the actual color values that are shown.  Moreover, it’s not even a generic bit-packed raster.  The native bitmap format understood by GEM VDI is actually the same multiple-bitplane format that most VGA video cards used.
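
You can see the assumption baked right into the VDI’s own raster descriptor, the MFDB (Memory Form Definition Block) used by vro_cpyfm() and the other raster functions.  Everything is described in terms of a plane count; there’s no notion anywhere of a pixel being a direct color value.  (The field comments are mine; the layout is the standard one.)

typedef struct
{
    void  *fd_addr;      /* address of the raster data (NULL means the screen) */
    short  fd_w;         /* width in pixels */
    short  fd_h;         /* height in pixels */
    short  fd_wdwidth;   /* width of one line, in 16-bit words */
    short  fd_stand;     /* 0 = device-specific layout, 1 = "standard" form */
    short  fd_nplanes;   /* number of bit planes */
    short  fd_r1, fd_r2, fd_r3;   /* reserved */
} MFDB;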

Considering that the goal of the VDI was to create an abstract, virtual graphics device that could be mirrored onto an arbitrary actual piece of hardware, this is hard to forgive.

At the very least, the VDI should have acknowledged the idea of raster formats where the pixel value directly represents the color being displayed.  I’ve often wondered if this failure represents short-sightedness or a lack of development resources.

One might make the argument that “true color” video cards were still a few years away from common usage, and that’s undoubtedly part of the original thinking, but the problem is that this affects more than just the display screen.  Many other devices don’t use palette-based graphics.  For example, most color printers that were available back then had a selection of fixed, unchangeable colors.

Inefficient Device Attribute Management

Quite a lot of the VDI library consists of functions to set attributes like line thickness, line color, pattern, fill style, fill color, etc.  There’s an equally impressive list of functions whose purpose is to retrieve the current state of these attributes.

For the most part, these attributes are set one at a time.  That is, to set up the attributes for drawing a red box with a green hatched fill pattern, you have to do the following:

vsl_type( screenhandle, 1 );        // set solid line style (1 = solid)
vsl_width( screenhandle, 3 );       // set line thickness of 3 pixels
vsl_color( screenhandle, linecolor );
vsf_color( screenhandle, fillcolor );
vsf_interior( screenhandle, 3 );    // interior style 3 = hatched fill
vsf_style( screenhandle, 3 );       // which hatch pattern to use

By the way, we’re making the assumption here that the linecolor and fillcolor variables have already been set to values that represent red and green colors in the current palette.  That’s not necessarily a trivial assumption but let’s keep this example modest.

At first glance you might say, “Well, six lines of code… I see how it could be improved, but that’s really not that terrible.”

It really is… if you know how GEM VDI calls work, you’ll recognize how it’s horribly, horribly bad in a way that makes you want to kill small animals if you think about it too much.  Each one of those functions is ultimately doing nothing more than storing a single 16-bit value into a table, but there’s so much overhead involved in making even a simple VDI function call that it takes a few hundred cycles of processor time for each of these calls.

First, the C compiler has to push the parameters onto the stack and call the function binding.  The function binding reads the parameters off the stack and saves them into the GEM VDI parameter arrays.  Then it loads up the address of the parameter block and executes the 68000 processor’s trap #2 instruction.  This involves a context switch from user mode to supervisor mode, meaning that the processor’s registers and flags have to be saved on entry and restored on exit.  From there, GEM picks up the parameters, grabs the appropriate function pointer out of a table, and passes control to that function.  At that point, the very, very special 16-bit value we cared about in the first place is lovingly deposited into the appropriate location within the table that the VDI has allocated for that particular workstation handle.  Then the function exits and starts making its way back up to your code. Along the way, there is much saving and restoring of 32-bit registers.  Those are uncached reads and writes on most ST systems, by the way.
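
To make that concrete, here’s roughly what a binding like vsl_color() boils down to.  This is a sketch, not Atari’s actual binding source; the array sizes and the vdi_trap() helper (which would load 0x73 into d0 and the parameter block address into d1, then execute trap #2) are stand-ins for whatever your compiler’s library actually provided.

short contrl[12], intin[128], ptsin[128], intout[128], ptsout[128];

typedef struct
{
    short *contrl, *intin, *ptsin, *intout, *ptsout;
} VDIPB;

static VDIPB vdi_pb = { contrl, intin, ptsin, intout, ptsout };

extern void vdi_trap( VDIPB *pb );   /* hypothetical: d0 = 0x73, d1 = pb, trap #2 */

short my_vsl_color( short handle, short color )
{
    contrl[0] = 17;       /* VDI opcode for vsl_color */
    contrl[1] = 0;        /* number of vertices in ptsin */
    contrl[3] = 1;        /* number of ints in intin */
    contrl[6] = handle;   /* workstation handle */
    intin[0]  = color;

    vdi_trap( &vdi_pb );  /* the expensive part: into supervisor mode and back */

    return intout[0];     /* the color index actually selected */
}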

The bottom line is that for things like this, GEM was simply horribly inefficient. The really bizarre part is that this could have been quite easily avoided.

The way that 68000-based programs make GEM VDI calls is to load a magic code into the 68000’s d0 register, and the address of the VDI parameter block in the 68000’s d1 register, and then make a trap #2 call.  The parameter block is simply a list of pointers to the 5 arrays that GEM VDI uses to pass information back and forth with the application.  My idea is simply to add another pointer to the VDI parameter block, pointing to a structure that maintains all of the current drawing attributes of the workstation, including the handle and the device ID.

Suppose that opening a physical workstation (for device #21 in this example) looked something like this:

int v_opnwk( int devID, VDIWorkstation *dev, VDIContext *context );

VDIWorkstation printerDevice;
int handle = v_opnwk( 21, &printerDevice, v_getcontext(0L) );

Opening a virtual workstation is similar, except that we specify the handle for the open physical workstation instead of the device ID:

int v_opnvwk( int physHandle, VDIWorkstation *dev, VDIContext *context );

VDIWorkstation screenDevice;
int handle = v_opnvwk( phys_handle, &screenDevice, v_getcontext(0L) ); 

Thereafter, VDI calls look much the same, except that instead of passing the handle of your workstation as a parameter, you pass a pointer to the desired VDIWorkstation structure:

v_ellipse( &screenDevice, x, y, xrad, yrad );

instead of:

v_ellipse( handle, x, y, xrad, yrad );

The VDIWorkstation structure would look something like this:

typedef struct {
         VDIContext *ws;
         int *control;
         int *intin;
         int *ptsin;
         int *intout;
         int *ptsout;
} VDIWorkstation;

typedef struct {
         int contextSize;
         int handle;
         int deviceID;
         int lineType;
         int lineWidth;
         int lineColor;
     /* other various attribute fields listed here */
} VDIContext;

The heavy lifting is really done by the addition of the VDIContext structure. The first field would be a size field, so the structure could be extended as needed.  And a new function called v_getcontext() would be used to allocate and initialize a context structure that resides in the application’s memory space.

With this setup, you would be able to change simple things like drawing attributes by direct manipulation of that context structure.  Let’s return to the example of setting up the attributes to draw a red rectangle with green hatch fill pattern.  Instead of the lines of code we saw earlier, we could instead have something like this:

screenDevice.ws->lineType = 1;  // set solid line style (1 = solid)
screenDevice.ws->lineWidth = 3;  // set line thickness of 3 pixels
screenDevice.ws->lineColor = linecolor;
screenDevice.ws->fillColor = fillcolor;
screenDevice.ws->fillInterior = 3;
screenDevice.ws->fillStyle = 3;

This requires no function calls, no 68000 trap #2 call, no pushing or popping a ton of registers onto and off of the stack.  This entire block of code would take fewer cycles than just one line of code from the first example, by a pretty big margin.

The one thing that this does impact is the creation of metafiles, since attribute setting would no longer generate entries in the output file.  But that is easily solved by creating a new function, let’s call it vm_updatecontext(), which would simply take all the parameters from the context structure and output them to the metafile all at once.
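
In use, that might look something like this (entirely hypothetical, of course, since none of this API exists):

metaDevice.ws->lineColor = linecolor;
metaDevice.ws->fillColor = fillcolor;
vm_updatecontext( &metaDevice );    /* write the accumulated attributes to the metafile */
v_ellipse( &metaDevice, x, y, xrad, yrad );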

These are relatively simple changes from an implementation standpoint, but they would have had a significant impact on the performance of GEM on the 68000, and I suspect the difference would be comparable on the 808x processors as well.

More coming in part 2

In part 2 of this, written whenever I get around to it, I’ll talk more about the VDI including more stuff about true color support, and outline font support — too little, too late?