I’ve been saying I’d revisit the idea of GEM design flaws since the first installment of this series, and now I’ve finally gotten around to it. This time around we’re gonna discuss design flaws in GEM VDI. We’ll save the AES for another time.
For those who are coming late to the party, a reminder: GEM (Graphics Environment Manager) from Digital Research (DRI) was a first-generation graphical user interface (GUI) environment used by the Atari ST computers as the backbone of the TOS operating system. The VDI (Virtual Device Interface) was the graphics library portion of GEM, and it grew out of an earlier DRI product called GSX. That was DRI’s implementation of GKS (the Graphical Kernel System), a published standard for a basic computer graphics library created in the late 70’s. In modern terms, GSX was a combination of graphics library and hardware drivers for the popular video cards and printers of the day.
There are certain shortcomings of GEM that reflect the hardware for which it was designed. Some of these aren’t really design flaws, however. For example, integer values used by GEM are 16-bit, mainly because the 8086 processor used by the IBM PC was a 16-bit processor. It’s arguably unfortunate that GEM didn’t adopt the use of 32-bit values but this would undoubtedly have introduced a performance hit on the PC.
In many ways, GEM is more optimized for the PC and Intel processors than for the Atari and Motorola processors. Since it originated on the PC that’s not really surprising, but it’s unfortunate for those of us who were on the Atari side. After all, beyond earlier versions of the popular desktop publishing software Ventura Publisher and a few other apps, not much ever really happened with GEM on the PC.
Aside from being more optimized for Intel processors, GEM on the Atari was arguably held back in some ways because in the beginning, there was a certain desire to keep the PC and Atari versions of GEM more or less in sync in terms of functionality and operation. No doubt the idea was that developers would make versions of their applications for both platforms. However, in reality that didn’t end up happening. Only a handful of applications ever crossed over from one platform to the other.
GEM VDI Design Flaws
Many of the design flaws in VDI stem from two simple things. First, the whole concept of a graphics-based interface, using the same basic methodology to output to the screen and other devices, was very new, and developers were still trying to figure the whole idea out. Second, there was no proven, real-world example in the field to follow. They were literally making it up as they went along.
Perhaps the biggest design flaw with VDI, or more accurately, collection of related flaws, was that device abstraction wasn’t really handled correctly in some important ways. Some of that probably comes from the VDI’s origins in GSX. If VDI had been designed from scratch, some things might have been different.
VDI provided the programmer with a “virtual” output device and a collection of library functions for drawing graphics primitives to it. The programmer, for the most part, didn’t need to worry about the specifics of how to do things like manipulate memory in a screen buffer, or keep track of what printer codes were used by an Epson FX-80 dot-matrix printer versus those needed for an HP LaserJet. They just had to send commands to VDI and it would take care of those details.
Sounds good, yes? In theory, yes, but in practice it wasn’t executed very well in some ways.
A Palette-Based Abstract Virtual Device
I talked about this before in the first segment of this series, but I want to revisit it in a broader scope.
The first problem is that the device abstraction model is limited in scope to those output devices which were in common use as of about 1983-1984 or so. Specifically, the abstraction is largely wrapped around the basic functionality of the palette-based PC video cards of that era, with limited consideration given to other kinds of devices.
At that time, almost all video cards for PC computers used palette-based graphics, typically with up to 16 colors (4 bits/pixel) in the higher resolution modes and maybe up to 256 colors (8 bits/pixel) in the lower resolution modes. As a result, VDI is almost completely wrapped around the idea of palette-based graphics.
Palette-based graphics is when the value of each pixel in a bitmap represents an index into a color palette table, rather than directly containing the color value. To change a color from red to blue, you would change the entry in the color palette table, and this would in turn change all pixels drawn with that palette index to the new color. There was no way to change a color palette entry without affecting the pixels that had already been drawn.
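The mechanics can be sketched in a few lines. This is a toy model for illustration, not the VDI API; all of the names here are made up:

```python
# Toy model of palette-indexed graphics (not the VDI API).
palette = [(255, 255, 255), (255, 0, 0)]   # index -> RGB color table

# Each pixel in the bitmap stores a palette index, not a color value.
bitmap = [[1, 1, 0],
          [0, 1, 0]]

def rendered(bitmap, palette):
    """Resolve each pixel's index through the palette, as the video hardware does."""
    return [[palette[i] for i in row] for row in bitmap]

before = rendered(bitmap, palette)   # pixels with index 1 come out red
palette[1] = (0, 0, 255)             # change that palette entry to blue...
after = rendered(bitmap, palette)    # ...and every already-drawn index-1 pixel changes too
```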
While palette-based video cards were the mainstream back in those days, they weren’t the only option. If you had access to large piles of cash, you could get what was known as a “framebuffer” card which offered 24-bit color and up to 16 million possible colors instead of a maximum of 256. These “truecolor” devices were still quite expensive, and therefore uncommon, when GEM was created.
A less-expensive version of “truecolor” was known as “high-color”, which used 16 bits per pixel instead of 24. Such devices might use 5-bits each for red, green, and blue, offering a total of 32768 colors at once. Others used 6 bits for green, offering a total of 65536 colors.
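The two high-color layouts differ only in how the bits are packed into the pixel value. Roughly (a sketch; these helper names are mine):

```python
def pack_rgb555(r, g, b):
    """Pack 5-bit red/green/blue components (0-31 each) into one 15-bit pixel value."""
    return (r << 10) | (g << 5) | b

def pack_rgb565(r, g, b):
    """Same idea with 6 bits for green (0-63), using all 16 bits of the pixel."""
    return (r << 11) | (g << 5) | b
```

Note that 2^15 = 32768 possible values for the 5-5-5 layout and 2^16 = 65536 for 5-6-5, matching the color counts above.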
Unfortunately, neither “truecolor” nor “high-color” seems to have received much consideration when GEM’s device abstraction model was created. When the Atari Falcon030 came out, featuring 16-bit high-color video modes, Atari’s main VDI programmer, Slavik Losben, did his best to make everything work. However, in the end there were still a variety of places where applications had to use special-case code to work correctly and take full advantage. That would not have been necessary if VDI’s original design had considered true color in the abstraction model.
There are other types of device like plotters or color dot-matrix printers which don’t quite fit into the VDI device abstraction model. A plotter may have multiple pens with different colors, but the colors are fixed, not changeable. Likewise, a color printer back in those days typically had a ribbon with 3 fixed colors plus black. Getting a reasonable color image out of such a printer was possible but required special handling to get the best results.
My guess is that the older GSX library was ultimately changed very little when it was made into the new VDI. The device abstraction model inherited by VDI was already a couple of years old at that point and it was based around relatively primitive graphics hardware.
Device Units Are (Were) Pixels
One thing that a virtualized device has to deal with is the fact that different hardware devices have different pixel densities, different pixel shapes, and different overall resolutions.
For example, the Atari ST’s monochrome display had a pixel density of 90 pixels per inch. That is, a line 90 pixels long on screen theoretically represented a distance of 1 inch. (In practice this varied depending on the individual monitor and how it was adjusted.) The overall screen dimensions were 640 x 400, which theoretically represented 7.11″ x 4.44″.
By comparison, an Epson FX-80 dot-matrix printer had a pixel density of 120 DPI horizontally x 144 DPI vertically, with a printable area that measured 960 x 1526 pixels, or 8.0″ x 10.6″ on an 8.5″ x 11.0″ sheet of paper.
A typical 24-pin printer like the NEC P-6 had a pixel density of up to 360 DPI with a printable area of 2880 x 3816 pixels covering 8.0″ x 10.6″ on an 8.5″ x 11.0″ sheet of paper.
GEM VDI dealt with these differences by ignoring them almost completely, except for giving your program a few pieces of information so it could figure out things for itself.
There was a rather bizarrely useless option to use “Normalized Device Coordinates” (NDC) which took the device’s output area and applied the range of 0-32767 to each axis. Now, a virtualized coordinate system can be a very useful feature if it’s done right, but the NDC wasn’t done right at all. To name a few of the many issues:
- It didn’t work with the ROM-based screen device driver. Theoretically it could work with the screen if a RAM-loaded driver which was aware of the NDC system was used, but in practice this never occurred. Maybe this was an example of something that had been done on the PC side of things that never made its way to the Atari.
- NDC paid no attention to the aspect ratio of the output area. It always applied the full range of 0-32767 to each axis.
- The coordinate range used the entire positive half of the available range of a 16-bit integer. So it was impossible, for example, to specify objects that required coordinates that lay past the right-hand edge, because the X-axis coordinate couldn’t be larger than 32767.
- The vertical axis went from 32767 at the top to 0 at the bottom, reversing the usual coordinate system used by everything else, and there were no options to change this.
I can only imagine that the NDC was another thing VDI inherited from GSX, but I have never been able to figure out a situation in which it would have been useful.
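For what it’s worth, the mapping behaved roughly like this. This is my reconstruction from the behavior described above, not DRI’s code:

```python
NDC_MAX = 32767   # both axes always span 0..32767, regardless of device shape

def ndc_to_pixels(x, y, dev_w, dev_h):
    """Map an NDC point to device pixels. Each axis is scaled independently,
    so aspect ratio is not preserved, and the NDC y axis runs bottom-up."""
    px = x * (dev_w - 1) // NDC_MAX
    py = (NDC_MAX - y) * (dev_h - 1) // NDC_MAX
    return px, py

# On a 640 x 400 screen, a region that is "square" in NDC units (say 16384
# units per side) comes out as roughly 320 x 200 pixels -- visibly distorted.
```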
Related to the lack of a useful virtualized coordinate system was the fact that, with the exception of being able to specify text size in terms of points, GEM VDI offered no means of specifying sizes or positions using anything but pixels.
Suppose you want to draw a box that is 2″ wide by 1.5″ tall using a line thickness of 0.10″, with the top left corner positioned at 4.25″ from the left side and 1″ down from the top. You would have to translate each of those values into the correct number of pixels before issuing your VDI commands. Meaning, it was up to the program to figure out how many pixels equaled 2 inches, or a line thickness of 0.10 inch. This is a fair amount of extra work when you have to do it for everything you draw.
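In practice, every application ended up carrying around conversion helpers along these lines (a sketch; the function name is mine):

```python
MICRONS_PER_INCH = 25400

def inches_to_pixels(inches, pixel_microns):
    """Convert a distance in inches to device pixels, given the pixel pitch
    in microns (the figure reported when a VDI workstation is opened)."""
    return round(inches * MICRONS_PER_INCH / pixel_microns)

# The 2" x 1.5" box from the text, on a 90 DPI monochrome screen:
box_w = inches_to_pixels(2.0, 25400 / 90)             # 180 pixels
box_h = inches_to_pixels(1.5, 25400 / 90)             # 135 pixels
line_thickness = inches_to_pixels(0.10, 25400 / 90)   # 9 pixels
```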
And to complicate matters, it turns out VDI was lying to you about some of the numbers.
Do The Math, Plus, You Know, That Extra Stuff
Doing the math to translate inches (or whatever other measurement) into pixels wasn’t quite all you needed to do to ensure correct output. You also needed to know how to figure out when and how VDI was lying to you.
When you open a device workstation, you get back a variety of bits of information that tell you about the device. This includes the overall size of the output area in pixels, like 640 x 400 for the Atari ST monochrome display, as well as the pixel size in microns.
Returning the pixel size in microns is another design flaw. There’s simply too much round-off error involved.
A micron is a thousandth of a millimeter, so there are 25400 microns to an inch. Unfortunately, this value was returned as a 16-bit integer and many device pixel sizes don’t translate well into that without round-off error. For example, 90 DPI translates to 282.2222222 microns, returned by VDI as just 282, while 360 DPI translates to 70.5555555 microns returned as just 70.
This means that a program has to be aware that when VDI says that a particular device has pixels that are 70 microns, it really means they’re 70.5555555 microns, and likewise for the other devices and their pixel sizes.
If you don’t think that the difference is big enough to be important, then consider a vertical line drawn on a 24-pin 360 DPI printer, intended to be positioned 7.5 inches from the left side of a sheet of paper. To figure out where to draw the line, you’ve got to translate 7.5 inches into the right number of pixels for the device.
If you base your calculations on the value of 70 microns returned by VDI, you’ll draw the line at column 2721:

7.5 inches x 25400 microns per inch = 190500 microns

190500 microns / 70 microns per pixel = 2721 pixels

If your application is aware that 70 microns returned by VDI really means 70.5555555 microns, then the second part of the above calculation works out to:

190500 microns / 70.5555555 microns per pixel = 2700 pixels

Now we’re at column 2700, which is where 7.5 inches actually lands at 360 DPI (7.5 x 360 = 2700). That’s a difference of 21 pixels in where the line gets positioned. That’s almost 1/16″ at 360 DPI and it is definitely noticeable.
When I worked on the WordUp word processor and Fontz font editor during my time at Neocept, we used a translation table to convert the values returned by VDI for the pixel size into the actual, un-rounded values.
This issue could have easily been avoided if GEM had used floating point values or perhaps 16.16 fixed point values, as either would have provided sufficient precision to eliminate significant errors from round-off.
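The translation-table workaround can be sketched like so. The entries here are illustrative; the real table covered the devices WordUp supported:

```python
MICRONS_PER_INCH = 25400

# Map the truncated micron values VDI reports back to the true pixel pitch.
TRUE_PITCH = {
    282: MICRONS_PER_INCH / 90,    # 282.22... microns (90 DPI screen)
    211: MICRONS_PER_INCH / 120,   # 211.66... (120 DPI)
    176: MICRONS_PER_INCH / 144,   # 176.38... (144 DPI)
    70:  MICRONS_PER_INCH / 360,   # 70.55...  (360 DPI)
}

def true_pixel_microns(reported):
    """Correct a VDI-reported pixel size; fall back to the raw value if unknown."""
    return TRUE_PITCH.get(reported, reported)

# The worked example from the text: 7.5 inches on a 360 DPI printer.
microns = 7.5 * MICRONS_PER_INCH                  # 190500
naive = round(microns / 70)                       # 2721 -- 21 columns off
fixed = round(microns / true_pixel_microns(70))   # 2700 -- where 7.5" really is
```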
Some Devices Are More Variable Than Others
The other big problem with the VDI abstraction model was that it ignored or minimized differences between devices where it shouldn’t have done so. For example, while a particular display screen mode is always going to be a certain number of pixels wide or tall, a device like a printer may be capable of using different paper sizes, different paper trays, and even different pixel densities.
VDI essentially ignored these things.
For example, at Neocept we wanted WordUp to be able to print on envelopes, or maybe use legal-sized paper, not just letter-sized, which is what most GEM printer drivers were set up to do.
The right way to do things would have been to give an application some method of specifying the desired paper size and other options when opening a printer workstation. But… no. With VDI, you get the page size you get. Be happy with it.
One workaround for these limitations was a desk accessory which could be used to change the printer driver configuration before you started a print job, but this only worked with specific, matched drivers, and you couldn’t change some parameters, like output resolution, since the bitmapped fonts were configured for specific resolutions.
We figured out a way to get it done for WordUp, but it required us to twiddle around with the printer drivers in ways that weren’t really by the book. It shouldn’t have been necessary.
Fonts Don’t Specify What Resolution / Device They’re Intended For
Writing the previous paragraph reminded me of a VDI design flaw that I’d not thought about in years.
Despite the absolute necessity for a GEM bitmapped font to be designed for a specific device resolution, the font header contains no information about the resolution for which the font is intended.
The font header contains no information to indicate the aspect ratio. There’s no way to tell if the font is designed around the idea of square pixels (i.e. 90dpi monochrome screen fonts) or rectangular pixels (i.e. 120h x 144v DPI 9-pin printer fonts, or 90h x 45v medium-res Atari screen mode fonts).
Instead of placing such information into the font header, it was expected that the filename would be encoded in such a way as to indicate it. Keep in mind we’re talking about an old-fashioned 8.3 filename, divided into encoded portions that were expected to be used like this:

The yy portion indicates the device type. For example, “FX” would mean Epson FX-compatible 9-pin printers at 120 x 144 resolution, but also dozens of other printers which supported the same graphics codes. The xxxx portion indicates the typeface. The zz portion indicates the size in points. Good luck if you had a font bigger than 99 points.
Not including this information in the font header is a huge, huge flaw, and one that makes bitmapped fonts that much harder to live with.
Consider: if the font header had specified the target device resolution, then GEM could easily have been written to use any font with the correct aspect ratio for a device, adjusting the apparent point size as needed.
That is, an 18 point font for a 180 dpi device could be used as a 36 point font for a 90 dpi device, for example. People did this sort of thing manually, but it could have all been done automatically had the required information been included in the font header.
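The arithmetic GEM could have done automatically is trivial. A sketch, assuming square pixels and a resolution field that the font header never actually had:

```python
def apparent_point_size(design_points, design_dpi, target_dpi):
    """Point size a bitmapped font effectively renders at when reused on a
    device with a different pixel density than it was designed for."""
    return design_points * design_dpi / target_dpi

# An 18 point font designed for 180 DPI renders as 36 point at 90 DPI.
```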
What Fonts Are Installed?
Until 1990 when FSMGDOS came out (briefly) and was subsequently replaced by SpeedoGDOS, GEM on Atari used only bitmapped fonts. Bitmapped fonts are fine when they’re output at their intended size, but generally don’t look very good when they’re scaled to other sizes.
Bitmapped font scaling could have been improved to a certain degree by using a filtered scaling routine, but to be honest that’s probably not a reasonable expectation for the horsepower of the hardware back then.
For commonly used fonts, it was typical for several sizes to be available, ranging from 8 point to 36 point. To avoid bad-looking output, many applications limited the user to font sizes that had a corresponding bitmap. The problem was, there wasn’t really any direct means of inquiring what sizes of bitmapped fonts were installed.
Instead, you had to call the vst_point() function in a loop, once per size. This function would select the largest installed size that was less than or equal to the requested size. So if you asked for 128 points and 36 points was the biggest available, it would tell you that it had selected 36 points. A program would start with a relatively large number, see what it got back and save it, then loop back and request one point less than that. In this way, it would discover that there were sizes of 36, 28, 24, 18, 14, 12, and 8 points, for example. Then the application could limit the selection of sizes to those it found.
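In sketch form, with vst_point() stubbed out (the real call is a C binding that also returns character metrics; this model keeps only the size-selection behavior, and the installed sizes are illustrative):

```python
INSTALLED_SIZES = [8, 12, 14, 18, 24, 28, 36]   # illustrative bitmap font sizes

def vst_point(requested):
    """Stub of VDI's behavior: select the largest installed size <= the request
    (falling back to the smallest installed size if nothing qualifies)."""
    fits = [s for s in INSTALLED_SIZES if s <= requested]
    return max(fits) if fits else min(INSTALLED_SIZES)

def enumerate_sizes(start=128):
    """The loop applications used: request a big size, record what came back,
    then request one point less than that, until the smallest size is reached."""
    found = []
    got = vst_point(start)
    while True:
        found.append(got)
        if got <= min(INSTALLED_SIZES):
            break
        got = vst_point(got - 1)
    return found
```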
When FSMGDOS came out, a new function, vst_arbpt(), gave programs that were aware of the new font scaler the option to select arbitrary sizes. But older programs using the vst_point() loop to discover installed sizes got bit in the ass. Each call to vst_point() now resulted in the font scaler saying “yes, that size is available!” Furthermore, each call caused the font scaler to do work in preparation for outputting text at the requested size, so a call to vst_point() under FSMGDOS took a lot longer than it had with the older bitmap-only GDOS. The end result was that when a program looped through all sizes from 1 to 128 points, or something like that, it basically froze up for a while, because the process took much, much longer than it had when bitmapped fonts were being used.
That’s All For Now
More to come in part 6… playing soon at a theatre near you.